Monthly Archives: July 2020

Everyone’s talking about testing!

I can’t remember a time in my life when “testing” has been such a hot topic of coverage in the media. It feels like every news item leads with some mention of the number of tests conducted to detect the COVID-19 coronavirus, whether it be locally or further afield. This level of coverage of the topic of testing even exceeds that during Y2K (according to my memory at least), albeit in a very different context.

It was interesting to see the reaction when the President of the United States said that the US case numbers are high because so many tests have been conducted – and that a reduction in testing might be in order. This led Ben Simo to tweet on this idea in the context of software testing:

Stop testing your software! Bugs are the result of testing. Bugs that don’t kill users dead instantly aren’t really bugs. No testing is the key to zero defect software! If you don’t see it, it doesn’t exist. If you don’t count it, it doesn’t matter. No testing!

I felt similarly when I read some of the coverage of the worldwide testing efforts during the pandemic. “Testing” for COVID-19 is revealing valuable information and informing public health responses to the differing situations in which we find ourselves in different parts of the world right now. (In this context, “testing” is really “checking” as it results in an algorithmically-determinable “pass” or “fail” result.)
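As a software aside, that distinction between testing and checking can be sketched in a few lines of Python (the function and names here are purely illustrative, not from any real framework):

```python
# A minimal sketch of a "check": an observation reduced to an
# algorithmically-determinable pass/fail verdict.

def check(observed, expected) -> str:
    """A check applies a fixed decision rule and returns a verdict."""
    return "pass" if observed == expected else "fail"

# The verdict is binary; deciding which checks are worth running and
# what a "fail" actually means for risk remains a human testing activity.
print(check(2 + 2, 4))        # "pass"
print(check("v1.0", "v1.1"))  # "fail"
```

The check itself is mechanical; the testing lies in choosing it and interpreting its result.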

When we test software, we reveal information about it and some of that information might not be to the liking of some of our stakeholders. A subset of that information will be made up of the bugs we find. In testing “more”, we will likely unearth more bugs as we explore for different kinds of problems, while “more” in the COVID-19 sense means performing the same test but on more people (or more frequently on the same people).

We should remain mindful of when we’ve done “enough” testing. If we are genuinely not discovering any new valuable information, then we might decide to stop testing and move on to something else. Our findings from any test, though, represent our experience only at a point in time – a code change tomorrow could cause problems we didn’t see in our testing today and an unlucky person could acquire COVID-19 the day after a test giving them the all clear.

There is a balance to be struck in terms of what constitutes “enough” testing, be that in the context of COVID-19 or software. There comes a point where the cost of discovering new information from testing outweighs the value of that information. We could choose not to test at all, but this is risky as we then have no information to help us understand changing risks. We could try to test everyone every day for COVID-19, but this would be hugely expensive and completely overwhelm our testing capacity – and would be overkill given what we already understand about its risks.

Many of us are testing products in which bugs are potentially irritating for our users, but not life and death issues if they go undetected before release. The context is clearly very different in the case of detecting those infected by COVID-19.

As levels of COVID-19 testing coverage have increased, the risk of acquiring the virus has become better understood. By understanding the risks, different mitigation strategies have been employed such as (so-called) social distancing, progressively more stringent “lockdowns”, and mandatory mask wearing. These strategies are influenced by risk analysis derived from the results of the testing effort. This is exactly what we do in software testing too: testing provides us with information about risks and threats to value.

It’s also interesting to observe how decisions are being made by bearing in mind a broader context, not just taking into account the testing results in a particular area or country. Data from all across the world is being collated and research studies are being published & referenced to build up the bigger picture. Even anecdotes are proving to be useful inputs. This is the situation we find ourselves in as software testers too: the software in front of us is a small part of the picture and it’s one of the key tenets of context-driven testing that we are deliberate in our efforts to explore the context and not just look at the software in isolation. In this sense, anecdotes and stories – as perhaps less formal sources of information – are incredibly valuable in helping to more fully understand the context.

Test reporting continues to be a topic of great debate in our industry, with some preferring lightweight visual styles of report and others producing lengthy and wordy documents. The reporting of COVID-19 case numbers continues to be frequent and newsworthy, as people look to form a picture of the situation in their locale. Some of this media reporting is very lightweight in the form of just new daily case and fatality numbers, while some is much deeper and allows the consumer to slice and dice worldwide data. Charts seem to be the reporting style of choice, sometimes with misleading axes that either exaggerate or downplay the extent of the problem depending on the slant of the publisher.

Different people react to the same virus infection reports in quite different ways, based on their own judgement, biases and influences. We see the same issue with software test reporting, especially when such reporting is purely based around quantitative measures (such as test case counts, pass/fail ratios, etc.). The use of storytelling as a means of reporting is nothing new in the media and I’d argue we would be well served in software testing to tell a story about our testing when we’re asked to report on what we did (see Michael Bolton’s blog for an example of how to tell a three-part testing story – a story about the product and its status, a story about how the testing was done, and a story about the quality of the testing work).
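To make that concern about purely quantitative reporting concrete, here’s a tiny, purely illustrative sketch: two test runs with identical counts can carry very different risk, and the metric alone cannot tell them apart.

```python
# Hypothetical data: two runs with identical pass/fail counts but
# very different meaning for the product's stakeholders.
run_a = {"passed": 98, "failed": 2, "note": "failures in cosmetic UI checks"}
run_b = {"passed": 98, "failed": 2, "note": "failures in payment processing"}

def pass_ratio(run: dict) -> float:
    """Reduce a run to the kind of single number often put in reports."""
    return run["passed"] / (run["passed"] + run["failed"])

print(pass_ratio(run_a))                       # 0.98
print(pass_ratio(run_a) == pass_ratio(run_b))  # True: identical numbers, different stories
```

The numbers are the same; only a story about *what* failed and *why it matters* distinguishes the two runs.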

While I don’t normally focus on counting tests and their results, I’ll be happy to see more COVID-19 tests taking place and fewer new daily positive results in both my area of Australia and the world more generally. Stay safe.

(Many thanks to Paul Seaman for his review of this post, his sage feedback has made for a much better post than it otherwise would have been.)

Going meta: a blog post about writing a blog post

My recent experiences of authoring blog posts in WordPress have been less enjoyable than usual thanks to the use of their latest “block editor”, leading me to ask on Twitter:

WordPress seems to update their post editor very frequently so I just about learn the quirks of one when it is superseded by another.

This post will serve as my (long) answer to WordPress’s reply. I’m going to spend the next 45 minutes running an exploratory testing session, creating a blog post and noting issues as I come across them while using the block editor.

Session from Tuesday 21st July 2020, on Windows 10 laptop (using keyboard and mouse controls only) using Chrome browser

4:10pm I’m starting my session by writing a very basic block of unformatted text. I note that when I move my mouse, a small toolbar appears which covers the end of the previous block (this could be an issue when in the flow of writing). The toolbar disappears as soon as I type and reappears on every mouse movement. The content of the toolbar seems very limited, maybe to just the most used formatting features (most used by the whole WordPress community or most used by me)? At least each icon in the toolbar has a tooltip. There’s a very odd control that only appears when hovering over the leftmost icon (to change block type or style) which appears to facilitate moving the whole block up or down in the post. I wonder why the toolbar is so narrow, since there is plenty of room to add more icons to allow easier discovery of available options here. I’ve been distracted by the toolbar but now resume my mission to complete a basic paragraph of text.

OK, so hitting Enter gives me a new paragraph block, that makes sense. Let’s get more creative now, how about changing the colour of some text? The toolbar doesn’t appear to have a colour picker, oh, it’s tucked away under “More rich text controls”. I’ve typed some text, highlighted it and then selected a custom colour. That worked OK once I found the colour picker. The colour picker control seems to stay in the toolbar after using it – or does it? I’ll try it again but lo, it’s back under the hidden controls again. There’s probably a deliberate choice of behaviour here, but I’ll choose not to investigate it right now.

I’m trying to select some text across blocks using Shift+Arrow keys but that doesn’t work as I’d expect, being inconsistent with other text selection using this keyboard combination in other text processing applications. (Ctrl+Shift+Arrow keys suffer the same fate.) Shift+Page Up/Down only selects within the current block, again not what I’d expect.

4:30pm After adding this new block (just by pressing Enter from the previous one), I’m intrigued by the array of block types to choose from when pressing the “+” button which appears in seemingly different spots below here (and I just spotted another “+” icon on the very top toolbar of the page and it looks like it does the same thing). There are many block types, so many that a search feature is provided (a testing rabbit hole I’ll choose not to go down at the moment). Some of the block types have names which indicate they require payment to use and the available block types are categorized (e.g. Text, Media, etc.) I decide to try a few of the different block types.

Adding a “quote” block now, which offers two areas, one for the quote and one for the citation. It appears that the citation cannot be removed and so more space is left below the quote text than I’d like (but maybe it doesn’t render the empty space when published?).

A test quote without citation

Moving on to adding a list and this works as I’d expected, offering a choice between bulleted and numbered with indentation (maybe there’s a limit on nesting here, but not investigated).

  • First item of my list
  • Next item of my list
    • Indented!

Even though I’ve been using this editor for my last few blog posts, I still tend to forget that auto-save is no longer a thing and I just happened to notice the “Save Draft” in the top right corner of the page, so let’s save.

In reality, my blog posts are mainly paragraphs of text with an occasional quote and image so exploring more block types doesn’t seem worth the effort. But looking at images feels like a path worth following.

Copying an image on the clipboard seems to work OK, though immediately puts cursor focus into the caption so I started typing my next bunch of paragraph text incorrectly as the image caption.

Options in the toolbar for the image make sense and I tried adding an image from a file with similar results (deleted from the post before publishing). Adding images into a post is straightforward and it’s good to see copying in directly from the clipboard working well as there have been issues with doing so in previous incarnations of the editor.

4:45pm Returning to simply writing text, I often add hyperlinks from my posts so let’s try that next. Ctrl+K is my usual “go to” for hyperlinks (from good ol’ Word) and it pops up a small edit window to add the URL and Enter adds it in: http://www.google.com Selecting some text and using the same shortcut does the same thing, allowing the text and the URL to be different. The hyperlinking experience is fine (and I note after adding the two hyperlinks here that there’s a “Link” icon in the toolbar also).

I remember to save my draft. As I resume typing, the toolbar catches my eye again and I check out “More options” under the ellipsis icon. I notice there are two very similar options, “Copy” and “Duplicate”, so I’ll try those. Selecting “Copy” changes the option to “Copied!” and pasting into Notepad shows the text of this block with some markup. I note that “Copied!” has now changed back to “Copy”. Selecting “Duplicate” immediately copies the content of this block right underneath (deleted for brevity), I’m not sure what the use case would be for doing that over and above the existing standard copy functionality. OK, I’ve just realised that I’ve been distracted by the toolbar yet again.

I just added this block via a “hidden” control, I’m not sure why products persist with undiscoverable features like this. Hovering just below an existing block halfway across the block reveals the “+” icon to add a block (though it often seems to get ‘blocked’ by, you’ve guessed it, that toolbar again).

My time is just about up. As I review my short session to create this blog post, I think it’s the appearing/disappearing toolbar that frustrates me the most during authoring of posts. I almost never use it (e.g. I always use keyboard shortcuts to bold and italicize text, and add hyperlinks) and, when I do, the option I’m after is usually tucked away.

Thanks to WordPress for responding to my tweet (and providing what is still generally a great free platform for blogging!) and for giving me a good excuse to test, learn and document a session!

ER of attending my first virtual testing conference, Tribal Qonf (27 & 28 June 2020)

I spotted some promotion on Twitter for a new testing conference in India, Tribal Qonf, and the virtual nature of it (thanks to COVID-19) plus the impressive speaker line-up (including James Bach and Michael Bolton) spurred my interest. Looking into it further, the pricing was incredibly low so I decided to register for it (for around AU$30 at the time I registered).

Although the weekend scheduling of the conference and Indian timezone wasn’t ideal, the conference promised to provide all content via recordings so I didn’t tune into any of the presentations “live”, waiting instead the ten days or so for recordings to be made available. I then watched most of the presentations from the two-day event over a period of a few days.

The first presentation I watched was the opening talk from day 1 by James Bach, titled “Weaving Testing: Thread by Thread”. This was a fascinating talk and it was great to see such a detailed analysis of what actually happens during good testing by skilled practitioners, especially compared to the mythology we’ve generally been conditioned with about what makes for ‘proper’ testing.

Next up, I opted for Pradeep Soundararajan‘s talk on “The Business Value of Testing”. I’ve unfortunately never managed to catch Pradeep presenting in person, but this virtual presentation displayed the passion I expected from him. It was also engaging and refreshingly honest about the challenges we face in terms of recognizing how different stakeholders view the “value” of what we provide as testers.

My next choice was “Adopting a simplified Risk-Based Testing Approach” by Nishi Grover Garg, in which she outlined the basics of the approach, very much in the style of practitioners like Rex Black. The approach was presented very clearly here and I liked the way Nishi contextualized the risk-based testing approach to her startup environment.

A nicely-crafted story came next thanks to Ajay Balamurugadas and his talk “Lessons from 14 Years of Software Testing Career”. He detailed his learnings from each of his testing jobs and offered practical suggestions for areas to focus on at different levels of experience in testing. This presentation reminded me very much of my “A Day In The Life Of A Test Architect” talk which I gave at STARWest in 2016 and again at CAST in 2017.

Rounding out the talks for day 1, I somewhat hesitantly tuned into the ‘expert panel’ on “Testing after 2020”. I’ve become a little jaded about panel sessions but I really enjoyed this one featuring Aprajita Mathur, Ashok Thiruvengadam, Rahul Verma and Pradeep Soundararajan. The panelists’ responses to the various questions were refreshingly down to earth and practical. I was particularly pleased to see the considered, reasonable and sensible discussions around AI/ML in testing, providing welcome relief from the usual Kool Aid drinkers around these topics in the industry at the moment. A shout out to Lalit Bhamare too for his skillful moderation of this panel session which was a significant factor in its success for me.

I kicked off my “day 2” viewing with the first talk from that day, viz. Ashok Thiruvengadam with “Be in a Flow. Test Brilliantly”. This was something a little different in terms of topic for a testing conference (which is always good to see), focusing on introducing the idea of “flow”. I was reminded of the importance of uninterrupted sessions when performing exploratory testing while listening to this talk.

Next, I opted for Mike Talks with “The Hard Lessons Learned in Test Automation”, in which he shared some interesting stories of lessons learned resulting from his chats with testers over coffee in his home city of Wellington (New Zealand). It was unsurprising to me that his chats resulted in a few very common themes, all of which were familiar territory from my various conversations about automation with testers from all over the world over the last twenty-odd years. It seems we have a long way to go in terms of learning these hard lessons, despite them being covered ad nauseam in blogs, articles, books and conference talks.

My next choice was “A Quick Recipe for Test Strategy” from Brijesh Deb and I immediately liked his take on the topic. He defined a test strategy simply as a “set of ideas that guide test design” and made it clear that we shouldn’t conflate this with a hefty “one size fits all” document of some sort. I also liked his focus on driving test strategy by asking questions, with not just a shout out to James Bach‘s Heuristic Test Strategy Model but also an example of using it in practice.

The penultimate talk I watched was “Who Are Your Stakeholders?” with Anna Royzman. We often hear the term “stakeholders” used in testing (and software development more generally) but rarely do we seem to agree on what this term means in the context of our projects. Anna gave a good introduction on how to identify different types of stakeholders and what kinds of information these different stakeholders might be looking for.

I concluded my binge watching of the conference talks with the closing session from the event, in the shape of a “Fireside Chat with Michael Bolton” with questions coming from Ajay Balamurugadas. I loved Michael’s answer to Ajay’s question “what has changed in testing from 1994 to 2020?”: “not enough!” This was a fun fifty-minute session and a perfect way to wrap up the conference.

Obviously, “attending” a virtual conference is a completely different experience to an in-person event. I chose not to watch all of the presentation recordings but did watch most of them and the quality was high. I didn’t watch the recordings back-to-back either, rather spreading out my viewing across a few days alongside my usual work commitments. I also didn’t contribute to the conference’s Slack channels as the event had been over for two weeks or so by the time I got to the recordings.

I personally missed the in-person aspects that make traditional conferences so valuable, but it might not be the case that we have to choose one over the other as we move forward. I wonder if we’re entering a new era for conferences, driven by changes forced upon us by COVID-19. There are enormous accessibility benefits of the virtual model, thanks to lower pricing and the removal of the need to travel and spend time away from home & family. Such virtual events also open up opportunities for new voices who might be unable or unwilling to travel to a “normal” event, or are too uncomfortable to address a physical audience.

The selection of topics on offer during this event was good and the talks were of a high standard. It appeared to be well organized too, so thanks to Lalit and the Test Tribe crew for putting on a worthwhile testing event during these difficult times! I enjoyed the experience of this virtual conference and I am now considering attending other virtual testing conferences through 2020 before – maybe! – more normal service resumes in 2021…