
Common search engine questions about testing #7: “Is software testing a good career?”

This is the seventh of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Is software testing a good career?” (and the related questions, “How is software testing a career?” and “Why choose software testing as a career?”).

Reflecting first on my own experience, software testing ended up being an awesome career. I didn’t set out to become a career software tester, though. After a few years as a developer in the UK, I moved to Australia and started looking for work in the IT industry. Within a couple of weeks of arriving in the country, I landed an interview at Quest Software (then in the Eastern suburbs of Melbourne) for a technical writer position. After interviewing for that position, they mentioned that the “QA Manager” was also looking for people and asked whether I’d be interested in chatting with her too. Long story short, I didn’t land the technical writing job but was offered a “Senior Tester” position – and I accepted it without hesitation! I was simply happy to have secured my first job in a new country, never intending it to be a long-term proposition with Quest or the start of a new career in the field of software testing. As it turned out, I stayed with Quest for 21 years in testing/quality-related roles from start to finish!

So, there was some luck involved in finding a good company to work for and a job that I found interesting. I’m not sure I’d have stayed in testing, though, had it not been for the revelation that was attending Rapid Software Testing with Michael Bolton in 2007 – that course gave me the motivation to treat software testing seriously as a long-term career prospect and also marked the time, in my opinion, when I started to add much more value to Quest. The company appreciated the value that good testers were adding to their development teams and I was fortunate to mentor, train, coach and work alongside some great testers, not only in Australia but all over the world. Looking back on my Quest journey, I think it was the clear demonstration of value from testing that led to more and more opportunities for me (and other testers), as predicted by Steve Martin when he said “be so good they can’t ignore you”!

The landscape has changed considerably in the testing industry over the last twenty years, of course. It has to be acknowledged that it’s becoming very difficult to secure testing roles in which you can expect to perform exploratory testing as the mainstay of your working day (and especially so in higher-cost locations). I’ve rarely seen an advertisement for such a role in Australia in the last few years, with most employers now also demanding some “automated testing” skills as part of the job. I’m not so sure, though, whether the post-employment reality is that nearly all testers now perform a mix of testing (be it scripted, exploratory or a combination of both) and automation development. If you want to become an excellent (exploratory) tester without having some coding knowledge/experience, then I think there are still some limited opportunities out there, but seeking them out will most likely require you to be in the network of people in similar positions in companies that understand the value that testing of this kind can bring.

Making the effort to learn some coding skills is likely to be beneficial in terms of getting your resume over the line. I’d recommend not worrying too much about which language(s)/framework(s) you choose to learn, but rather focusing on the fundamentals of good programming. I would also suggest building an understanding of the “why” and “what” in terms of automation (over the “how”, i.e. which language and framework to leverage in a particular context) as this understanding will allow you to quickly add value and not be so vulnerable to the inevitable changes in language and framework preferences over time.

I think customers of the software we build expect that the software has undergone some critical evaluation by humans before they acquire it, so it both intrigues and concerns me that so many big tech companies publicly express their lack of “testers” as some kind of badge of honour. I simply don’t understand why this is seen as a good thing and it seems to me that this trend is likely to come full (or full-ish) circle at some point when the downsides of removing specialists in testing from the development, release and deployment process outweigh the perceived benefits (not that I’m sure what these are, apart from reduced headcount and cost?).

I still believe that software testing is a good career choice. It can be intellectually challenging, varied and stimulating in the right organization. It’s certainly not getting any easier to secure roles in which you’ll spend all of your time performing exploratory testing, though, so broadening your arsenal to include some coding skills and building a good understanding of why and what makes sense to automate are likely to help you along the way to gaining meaningful employment in this industry.

You can find the first six parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my erstwhile review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #6: “Is software testing easy?”

This is the sixth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Is software testing easy?” (and the related question, “Why is software testing so hard?”).

There exists a perception that “anyone can test” and, since testing is really just “playing with the software”, it’s therefore easy. By contrast, it seems that programming is universally viewed as being difficult. This reasoning leads people to believe that a good place to start their career in IT is as a tester, with a view to moving “up” to the more hallowed ranks of developers.

My experience suggests that many people have no issue with trying to tell testers how to do their job, in a way that those same people wouldn’t dream of doing to developers. This generally seems to be based on some past experience in their career when they considered themselves a tester, even if that experience is now significantly outdated and they never engaged in any serious work to advance themselves as testers. Such interactions are a red flag that many in the IT industry view testing as the easy part of the software development process.

The perception that testing is easy is also not helped by the existence and prevalence of the simple, easy-to-achieve ISTQB Foundation certification. The certification has been achieved by several hundred thousand people worldwide (the ISTQB had issued 721,000+ certifications as of May 2020, with the vast majority of those likely to be at Foundation level), so it’s clearly not difficult to obtain (even without study) and it has flooded the market with “testers” who have little but this certification behind them.

Thanks to Michael Bolton (via this very recent tweet) for identifying another reason why this perception exists. “Testing” is often conflated with “finding bugs” and we all know how easy it is to find bugs in the software we use every day:

There’s a reason that many people think testing is easy, due to an asymmetry. No one ever fired up a computer and stumbled into creating a slick UI or a sophisticated algorithm, but people stumble into bugs every day. Finding bugs is easy, they think. So testing must be easy.

Another unfortunate side effect of the idea that testing is easy is that testers are viewed as fungible, i.e. any tester can simply be replaced by another one since there’s not much skill required to perform the role. The move to outsource testing capability to lower cost locations then becomes an attractive proposition. I’m not going to discuss test outsourcing and offshoring in any depth here, but I’ve seen a lot of great, high value testers around the world lose their jobs due to this process of offshoring based on the misplaced notion of fungibility of testing resources.

Enough about the obvious downsides of mistakenly viewing testing as easy! I don’t believe good software testing is at all easy and hopefully my reasons for saying this will help you to counter any claims that testing (at least, testing as I talk about it) is easy work and can be performed equally well by anyone.

As good testers, we are tasked with evaluating a product by learning about it through exploration, experimentation, observation and inference. This requires us to adopt a curious, imaginative and critical-thinking mindset, while we constantly make decisions about what’s interesting to investigate further and evaluate the opportunity cost of doing so. We look for inconsistencies against descriptions of the product, claims about it and within the product itself. These are not easy things to do.

We study the product and build models of it to help us make conjectures and design useful experiments. We perform risk analysis, taking into account many different factors to generate a wealth of test ideas. This modelling and risk analysis work is far from easy.

We ask questions and provide information to help our stakeholders understand the product we’ve built so that they can decide if it’s the product they wanted. We identify important problems and inform our stakeholders about them – and this is information they sometimes don’t want to hear. Revealing problems (or what might be problems) in an environment generally focused on proving we built the right thing is not easy and requires emotional intelligence & great communication skills.

We choose, configure and use tools to help us with our work and to question the product in ways we’re incapable of (or inept at) as humans without the assistance of tools. We might also write some code (e.g. code developed specifically for the purpose of exercising other code or implementing algorithmic decision rules against specific observations of the product, “checks”), as well as working closely with developers to help them improve their own test code. Using tooling and test code appropriately is not easy.

(You might want to check out Michael Bolton’s Testing Rap, from which some of the above was inspired, as a fun way to remind people about all the awesome things human testers actually do!)

This heady mix of aspects of art, science, sociology, psychology and more – requiring skills in technology, communication, experiment design, modelling, risk analysis, tooling and more – makes it clear to me why good software testing is hard to do.

In wrapping up, I don’t believe that good software testing is easy. Good testing is challenging to do well, in part due to the broad reach of subject areas it touches on and also the range of different skills required – but this is actually good news. The challenging nature of testing enables a varied and intellectually stimulating job and the skills to do it well can be learned.

It’s not easy, but most worthwhile things in life aren’t!

You can find the first five parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my dedicated review team (Paul Seaman and Ky) for their helpful feedback on this post. Paul’s blog, Not Everybody Can Test, is worth a read in relation to the subject matter of this post.

Common search engine questions about testing #5: “Can you automate software testing?”

This is the fifth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

As I reach the halfway point in this series, I come to the question “Can you automate software testing?” (and the related question, “How can software test automation be done?”).

If you spend any time on Twitter and LinkedIn following threads around testing, this question of whether testing can be automated crops up with monotonous regularity and often seems to result in very heated discussion, with strong opinions from both the “yes” and “no” camps.

As a reminder (from part one of this blog series), my preferred definition of testing comes from Michael Bolton and James Bach, viz.

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.

Looking at this definition, testing is clearly a deeply human activity since skills such as learning, exploring, questioning and inferring are not generally those well modelled by machines (even with AI/ML). Humans may or may not be assisted by tools or automated means while exercising these skills, but that doesn’t mean that the performance of testing is itself “automated”.

The distinction drawn between “testing” and “checking” made by James Bach and Michael Bolton has been incredibly helpful for me when talking about automation and countering the idea that testing can be automated (much more so than “validation” and “verification” in my experience). As a refresher, their definition of checking is:

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

As Michael says, “We might choose to automate the exercise of some functions in a program, and then automatically compare the output of that program with some value obtained by another program or process. I’d call that a check.” Checking is a valuable component of our overall testing effort and, by this definition, lends itself to be automated. But the binary evaluations resulting from the execution of such checks form only a small part of the testing story and there are many aspects of product quality that are not amenable to such black and white evaluation.
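
To make the distinction concrete, here is a minimal sketch of a check in this sense, written in Python. The product function, the chosen input and the expected value are hypothetical examples of my own – just one of many ways such a check could be expressed:

    def apply_discount(price, percent):
        """A stand-in for some real product code under test (hypothetical)."""
        return round(price * (1 - percent / 100), 2)

    def test_ten_percent_discount_is_applied():
        # The algorithmic decision rule: compare a specific observation of the
        # product (the function's output) against an expected value. A machine
        # can run this and report pass/fail; deciding that this rule matters,
        # choosing the inputs and interpreting a failure remain human work.
        assert apply_discount(100.00, 10) == 90.00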

Thinking about checks, there’s a lot that goes into them apart from the actual execution (by a machine or otherwise): someone decided we needed a check (risk analysis), someone designed the check, someone implemented the check (coding), someone decided what to observe and how to observe it, and someone evaluated the results from executing the check. These aspects of the check are testing activities and, importantly, they’re not the aspects that can be given over to a machine, i.e. be automated. There is significant testing skill required in the design, implementation and analysis of the check and its results; the execution (the automated bit) is really the easy part.

To quote Michael again:

A machine producing a bit is not doing the testing; the machine, by performing checks, is accelerating and extending our capacity to perform some action that happens as part of the testing that we humans do. The machinery is invaluable, but it’s important not to be dazzled by it. Instead, pay attention to the purpose that it helps us to fulfill, and to developing the skills required to use tools wisely and effectively.

We also need to be mindful to not conflate automation in testing with “automated checking”. There are many other ways that automation can help us, extending human abilities and enabling testing that humans cannot practically perform. Some examples of applications of automation include test data generation, test environment creation & configuration, software installation & configuration, monitoring & logging, simulating large user loads, repeating actions en masse, etc.
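
As one small illustration of automation assisting rather than replacing testing, here is a sketch of generating varied test data that a human tester might then use while exploring a product. The record shape and field names are entirely hypothetical; the point is that nothing here evaluates the product, it simply extends the tester’s reach:

    import random
    import string
    import uuid

    def random_customer():
        # Hypothetical record shape; deliberately include awkward values so the
        # tester has interesting data to explore with, not just "happy path" data.
        return {
            "id": str(uuid.uuid4()),
            "name": "".join(random.choices(string.ascii_letters + " '-", k=random.randint(1, 40))),
            "email": f"{uuid.uuid4().hex[:8]}@example.com",
            "credit_limit": random.choice([0, 1, 999_999, -1, None]),
        }

    if __name__ == "__main__":
        for _ in range(5):
            print(random_customer())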

If we make the mistake of allowing ourselves to believe that “automated testing” exists, then we can all too easily fall into the trap of narrowing our thinking about testing to just automated checking, with a resulting focus on the development and execution of more and more automated checks. I’ve seen this problem many times across different teams in different geographies, especially so in terms of regression testing.

I think we are well served to eliminate “automated testing” from our vocabulary, instead talking about “automation in testing” and the valuable role automation can play in both testing & checking. The continued propaganda around “automated testing” as a thing, though, makes this job much harder than it sounds. You don’t have to look too hard to find examples of test tool vendors using this term and making all sorts of bold claims about their “automated testing solutions”. It’s no wonder that so many testers remain confused in answering the question about whether testing can be automated when a quick Google search got me to some of these gems within the top few results: What is automated testing? (SmartBear), Automated software testing (Atlassian) and Test Automation vs. Automated Testing: The Difference Matters (Tricentis).

I’ve only really scratched the surface of this big topic in this blog, but it should be obvious by now that I don’t believe you can automate software testing. There is often value to be gained by automating checks and leveraging automation to assist and extend humans in their testing efforts, but the real testing lies with the humans – and always will.

Some recommended reading related to this question:

  • The Testing and Checking Refined article by James Bach and Michael Bolton, in which the distinction between testing and checking is discussed in depth, as well as the difference between checks performed by humans and those by machines.
  • The Automation in Testing (AiT) site by Richard Bradshaw and Mark Winteringham, their six principles of AiT make a lot of sense to me.
  • Bas Dijkstra’s blog

You can find the first four parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.

2020 in review

It’s time to wrap up my blogging for the year again, after a quite remarkable 2020!

I published 22 blog posts during the year, a significant increase in output compared to the last few years (largely enabled by the change in my employment situation, but more on that later). My blog attracted about 50% more views than in 2019 and I’m very grateful for the amplification of my blog posts via their regular inclusion in lists such as 5Blogs, Testing Curator’s Testing Bits and Software Testing Weekly. November 2020 saw my blog receiving twice as many views as any other month since I started blogging back in 2014, mainly due to the popularity of my critique of two industry reports during that month.

I closed out the year with about 1,100 followers on Twitter, up around 10% over the year – this surprises me given the larger number of tweets around veganism I’ve posted during the year, often a cause of unfollowing!

COVID-19

It wouldn’t be a 2020 review blog without some mention of COVID-19, but I’m not going to dwell too much on it here. I count myself lucky in so many ways to have escaped significant impact from the pandemic. Living in regional Australia meant restrictions were never really too onerous (at least compared to metropolitan Melbourne), while I could continue working from home (until my COVID-unrelated retrenchment).

The only major inconvenience caused by the pandemic was somewhat self-inflicted when we made the unwise decision to travel to the UK in mid-March, arriving there just as restrictions kicked in. It was a stressful and expensive time finding a way back to Australia, but I’m very glad we escaped when we did to ride out the pandemic for the rest of the year at home in Australia. (I blogged about these interesting international travels here and here.)

The end of an era

My 21-year stint at Quest Software came to an end in August. It was an amazing journey with the company, the only job I’ve had since moving to Australia back in 1999! I consider myself lucky to have had such a great environment in which to learn and develop my passion for testing. Of course, the closing out of this chapter of my professional life took a while to adjust to but I’ve spent the time since then focusing on decompressing, helping ex-colleagues in their search for new opportunities, looking to new ventures (see below) and staying connected with the testing community – while also enjoying the freedoms that come with not working full-time in a high pressure corporate role.

Conferences and meetups

I started the year with plans to only attend one conference – in the shape of CAST in Austin – but 2020 had other ideas of course! While in-person conferences and meetups all disappeared from our radars, it was great to see the innovation and creativity that flowed from adversity – with existing conferences finding ways to provide virtual offerings, meetups going online and new conferences springing up to make the most of the benefits of virtual events.

Virtual events have certainly opened up opportunities for new people in our community to attend and present. With virtual conferences generally being very affordable compared to in-person events (with lower registration costs and no travel & accommodation expenses), it’s been good to see different names on attendee lists and to see the excitement and passion expressed by first-time conference attendees after these events. Similarly, there have been a lot of new faces on conference programmes, with the opportunity to present now being open to many more people due to the removal of barriers such as travelling and in-person public speaking. It feels like this new model has increased diversity in both attendees and presenters, so that is at least one positive out of the pandemic. I wonder what the conference landscape will look like in the future as a result of what organisers have learned during 2020. While there’s no doubt in my mind that we lose a lot of the benefits of a conference by not being physically present in the same place, there are also clear benefits and I can imagine a hybrid conference world emerging – I’m excited to see what develops in this area.

I only attended one meetup during the year, the DDD Melbourne By Night event in September during which I also presented a short talk, Testing Is Not Dead, to a largely developer audience. It was fun to present to a non-testing audience and my talk seemed to go down well. (I’m always open to sharing my thoughts around testing at meetups, so please let me know if you’re looking for a talk for your meetup.)

In terms of conferences, I participated in three events during the year. First up, I attended the new Tribal Qonf organised by The Test Tribe and this was my first experience of attending a virtual conference. The registration was ridiculously cheap for the great range of quality presenters on offer over the two-day conference and I enjoyed catching up on the talks via recordings (since the “live” timing didn’t really work for Australia).

In November, I presented a two-minute talk for the “Community Strikes The Soapbox” part of EuroSTAR 2020 Online. I was in my element talking about “Challenging The Status Quo” and you can see my presentation here.

Later in November, I was one of the speakers invited to participate in the inaugural TestFlix conference, again organised by The Test Tribe. This was a big event with over one hundred speakers, all giving talks of around eight minutes in length, with free registration. My talk was Testing Is (Still) Not Dead and I also watched a large number of the other presentations thanks to recordings posted after the live “binge” event.

The start of a new era

Starting a testing consultancy business

Following my unexpected departure from Quest, I decided that twenty-five years of full-time corporate employment was enough for me and so, on 21st October, I launched my testing consultancy business, Dr Lee Consulting. I’m looking forward to helping different organisations to improve their testing and quality practices, with a solid foundation of context-driven testing principles. While paid engagements are proving elusive so far, I’m confident that my approach, skills and experience will find a home with the right organisations in the months and years ahead.

Publishing a testing book

As I hinted in my 2019 review post at this time last year, a project I’ve been working on for a while, both in terms of concept and content, finally came to fruition in 2020. I published my first testing book, An Exploration of Testers, on 7th October. The book contains contributions from different testers and a second edition is in the works as more contributions come in. All proceeds from sales of the book will go back into the testing community and I plan to announce how the first tranche of proceeds will be used early in 2021.

Volunteering for the UK Vegan Society

When I saw a call for new volunteers to help out the UK’s Vegan Society, I took the opportunity to offer some of my time and, despite the obvious timezone challenges, I’m now assisting the organisation (as one of their first overseas volunteers) with proofreading of internal and external communications. This is a different role in a different environment and I’m really enjoying working with them as a way to be more active in the vegan community.

Thanks to my readers here and my followers on other platforms. I wish you all a Happy New Year and hope you enjoy my posts to come through 2021.

I’ll be continuing my ten-part blog series answering common questions around software testing (the first four parts of which are already live) but, please remember, I’m more than happy to take content suggestions so let me know if there are any topics you particularly want me to express opinions on.

Common search engine questions about testing #4: “How is software testing done?”

This is the fourth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I ponder the question “How is software testing done?” (and the related questions, “What are software testing methodologies?”, “What is the software testing life cycle?” and “What is the software testing process?”).

There are many different ways in which software testing is performed, by different people in different organizations with different ideas about what constitutes “good testing”. Don’t be fooled into believing there is “one way” to do testing! There is certainly no single, approved, credible and official way to perform testing – and this is actually a good thing, in my opinion.

So, the question should perhaps be “How might software testing be done?” and, in answering this question, the idea of context is paramount. James Bach defines “context” (in Context-Driven Methodology) as follows:

When I say “context” I mean the totality of a situation that influences the success or failure of an enterprise.

(and Dictionary.com similarly offers “the set of circumstances or facts that surround a particular event, situation, etc.”) The first principle of Context-Driven Testing says “The value of any practice depends on its context.” The way you would approach the testing of a medical device (where a defect could result in loss of life) is likely quite different to how you would test a website for a local business, for example. The context is different – and the differences are important.

While there may be books or certifications that propose a “testing process” or methodology, you should consider the context of your particular situation to assess whether any of these processes or methodologies have valuable elements to leverage. Remember that testing requires a broad variety of different skills and activities: working with other people, formulating hypotheses, creating & changing strategies, critical thinking & evaluation, finding the right people when you need help, assessing what all this might mean for risk and then finding ways to relate this information in compelling and credible ways. What we need is a way of thinking about testing that is flexible enough to cover such a range of skills and activities across many different contexts.

The following from context-driven-testing.com puts it well, I think:

Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.

Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.

Bearing the above in mind, the only software testing methodology that I feel comfortable to recommend is Rapid Software Testing (RST) developed by James Bach and Michael Bolton. RST isn’t a prescriptive process but rather a way to understand testing with a focus on context and people:

[RST] is a responsible approach to software testing, centered around people who do testing and people who need it done. It is a methodology (in the sense of “a system of methods”) that embraces tools (aka “automation”) but emphasizes the role of skilled technical personnel who guide and drive the process.

Rather than being a set of templates and rules, RST is a mindset and a skill set. It is a way to understand testing; it is a set of things a tester knows how to do; and it includes approaches to effective leadership in testing.

https://rapid-software-testing.com/about-rapid-software-testing/

RST is therefore quite different from some of the prevalent processes/methodologies that you might come across in searching for resources to answer the question of how testing is done, such as ISTQB and TMap. These systems are often referred to as “factory-style testing” and an excellent summary of how RST differs from these can be found at https://www.satisfice.com/download/how-rst-is-different-from-factory-style-testing

Given how different your context and testing mission is likely to be on different projects in different organizations at different times for different customers, the way “testing is done” necessarily needs to be flexible and adaptable enough to respect these very different situations. Any formal process or methodology that seeks to prescribe how to test is likely to be sub-optimal in your particular context, so I suggest adopting something like the mindset proposed by RST and adapting your approach to testing to suit your context.

You can find the first three parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my patient and dependable review team (Paul Seaman and Ky) for their helpful feedback on this post.

“Calling Bullsh*t” (Carl T. Bergstrom and Jevin D. West)

It was thanks to a recommendation from Michael Bolton that I came across the book Calling Bullsh*t by Carl T. Bergstrom and Jevin D. West. While it’s not a book specifically about software testing, there are some excellent takeaways for testers as I’ll point to in the following review of the book. This book is a must read for software testers in my opinion.

The authors’ definition of bullshit (BS) is important to note before digging into the content (appearing on page 40):

Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade or impress an audience by distracting, overwhelming, or intimidating them with a blatant disregard for truth, logical coherence, or what information is actually being conveyed.

I was amazed to read that the authors already run a course at a US university on the same topic as this book:

We have devoted our careers to teaching students how to think logically and quantitatively about data. This book emerged from a course we teach at the University of Washington, also titled “Calling Bullshit”. We hope it will show you that you do not need to be a professional statistician or econometrician or data scientist to think critically about quantitative arguments, nor do you need extensive data sets and weeks of effort to see through bullshit. It is often sufficient to apply basic logical reasoning to a problem and, where needed, augment that with information readily discovered via search engine.

The rise of the internet and particularly social media are noted as ways that BS has proliferated in more recent times, spreading both misinformation (claims that are false but not deliberately designed to deceive) and disinformation (deliberate falsehoods).

…the algorithms driving social media content are bullshitters. They don’t care about the messages they carry. They just want our attention and will tell us whatever works to capture it.

Bullshit spreads more easily in a massively networked, click-driven social media world than in any previous social environment. We have to be alert for bullshit in everything we read.

As testers, we tend to have a critical thinking mindset and are hopefully alert to stuff that just doesn’t seem right, whether that’s the way a feature works in a product or a claim made about some software. It seems to me that testers should naturally be good spotters of BS more generally and this book provides a lot of great tips both for spotting BS and learning how to credibly refute it.

Looking at black boxes (e.g. statistical procedures or data science algorithms), the authors make the crucial point that understanding the inner workings of the black box is not required in order to spot problems:

The central theme of this book is that you usually don’t have to open the analytic black box in order to call bullshit on the claims that come out of it. Any black box used to generate bullshit has to take in data and spit results out.

Most often, bullshit arises either because there are biases in the data that get fed into the black box, or because there are obvious problems with the results that come out. Occasionally the technical details of the black box matter, but in our experience such cases are uncommon. This is fortunate, because you don’t need a lot of technical expertise to spot problems with the data or results. You just need to think clearly and practice spotting the sort of thing that can go wrong.

The first big topic of consideration looks at associations, correlations and causes and spotting claims that confuse one for the other. The authors provide excellent examples in this chapter of the book and a common instance of this confusion in the testing arena is covered by Theresa Neate‘s blog post, Testing and Quality: Correlation does not equal Causation. (I’ve also noted the confusion between correlation and causality very frequently when looking at big ag-funded “studies” used as ammunition against veganism.)

The chapter titled “Numbers and Nonsense” covers the various ways in which numbers are used in misleading and confusing ways. The authors make the valid point that:

…although numbers may seem to be pure facts that exist independently from any human judgment, they are heavily laden with context and shaped by decisions – from how they are calculated to the units in which they are expressed.

It is all too common in the testing industry for people to hang numbers on things that make little or no sense to look at quantitatively; counting “test cases” comes to mind. The book covers various ways in which numbers turn into nonsense, including summary statistics, percentages and percentage points. Goodhart’s Law is mentioned (as rephrased by Marilyn Strathern):

When a measure becomes a target, it ceases to be a good measure

I’m sure many of us are familiar with this law in action when we’re forced into “metrics programmes” around testing for which gaming becomes the focus rather than the improvement our organizations were looking for. The authors introduce the idea of mathiness here: “mathiness refers to formulas and expressions that may look and feel like math – even as they disregard the logical coherence and formal rigour of actual mathematics” and testing is not immune from mathiness either, e.g. “Tested = Checked + Explored” is commonly quoted from Elisabeth Hendrickson‘s (excellent) Explore It! book. Another concept that will be very familiar to testers (and others in the IT industry) is zombie statistics, viz.

…numbers that are cited badly out of context, are sorely outdated, or were entirely made up in the first place – but they are quoted so often that they simply won’t die.

There are many examples of such zombie statistics in our industry. Boehm’s so-called cost of change curve (which claims that the cost of changes made later in the development cycle is orders of magnitude higher than that of changes made earlier) is a prime example and is one of the cases covered beautifully in Laurent Bossavit’s excellent book, The Leprechauns of Software Engineering.

The next statistical concept introduced in the book is selection bias and I was less familiar with this concept (at least under this name):

Selection bias arises when the individuals that you sample for your study differ systematically from the population of individuals eligible for your study.

This sort of non-random sampling leads to statistical analyses failing or becoming misleading and there are again some well-considered examples to explain and illustrate this bias. Reading this chapter brought to mind my recent critique of the Capgemini World Quality Report, in which I noted that both the size of the organizations and the roles of the participants in the survey were problematic. (I again note from my vegan research that many big ag-funded studies suffer from this bias too.)

A hefty chapter is devoted to data visualization, with the authors noting the relatively recent proliferation of charts and data graphics in the media due to the technology becoming available to more easily produce them. The treatment of the various ways that charts can be misleading is again excellent with sound examples (including axis scaling, axis starting values, and the “binning” of axis values). I loved the idea of glass slippers here, viz.

Glass slippers take one type of data and shoehorn it into a visual form designed to display another. In doing so, they trade on the authority of good visualizations to appear authoritative themselves. They are to data visualizations what mathiness is to mathematical equations.

The misuse of the periodic table visualization is cited as an example and, of course, the testing industry has its own glass slippers in this area, for example Santhosh Tuppad’s Heuristic Table of Testing! This chapter also discusses visualizations that look like Venn diagrams but aren’t, and highlights the dangers of 3-D bar graphs, line graphs and pie charts. A new concept for me in this chapter was the principle of proportional ink:

Edward Tufte…in his classic book The Visual Display of Quantitative Information…states that “the representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented.” The principle of proportional ink applies this rule to how shading is used on graphs.

The illustration of this principle by well-chosen examples is again very effective here.

It’s great to see some sensible commentary on the subject of big data in the next chapter. The authors say “We want to provide an antidote to [the] hype” and they certainly achieve this aim. They discuss AI & ML and the critical topic of how training data influences outcomes. They also note how machine learning algorithms perpetuate human biases.

The problem is the hype, the notion that something magical will emerge if only we can accumulate data on a large enough scale. We just need to be reminded: Big data is not better; it’s just bigger. And it certainly doesn’t speak for itself.

The topics of Big Data, AI and ML are certainly hot in the testing industry at the moment, with tool vendors and big consultancies all extolling the virtues of these technologies to change the world of testing. These claims have been made for quite some time now and, as I noted in my critique of the Capgemini World Quality Report recently, the reality has yet to catch up with the hype. I commend the authors here for their reality check in this over-hyped area.

In the chapter titled “The Susceptibility of Science”, the authors discuss the scientific method and how statistical significance (p-values) is often manipulated to aid with getting research papers published in journals. Their explanation of the base rate fallacy is excellent and a worthy inclusion, as it is such a common mistake. While the publication of dodgy papers and misleading statistics are acknowledged, the authors’ belief is that “science just plain works” – and I agree with them. (From my experience in vegan research, I’ve read so many dubious studies funded by big ag but these don’t undermine my faith in science, rather my faith in human nature sometimes!) In closing:

Empirically, science is successful. Individual papers may be wrong and individual studies misreported in the popular press, but the institution as a whole is strong. We should keep this in perspective when we compare science to much of the other human knowledge – and human bullshit – that is out there.

In the penultimate chapter, “Spotting Bullshit”, the discussion of the various means by which BS arises (covered throughout the book) is split out into six ways of spotting it, viz.

  • Question the source of information
  • Beware of unfair comparisons
  • If it seems too good or bad to be true…
  • Think in orders of magnitude
  • Avoid confirmation bias
  • Consider multiple hypotheses

These ways of spotting BS act as a handy checklist, I think, and will certainly help me refine my skills in this area. While I was still reading this book, I listened to a testing panel session online in which one of the panelists was from the testing tool vendor Applitools. He briefly mentioned some claims about their visual AI-powered test automation tool. These claims piqued my interest and I managed to find the same statistics on their website:

[Image: Applitools’ claims about their visual AI-powered test automation tool]

I’ll leave it as an exercise for the reader to decide if any of the above falls under the various ways BS manifests itself according to this book!

The final chapter, “Refuting Bullshit”, is really a call to action:

…a solution to the ongoing bullshit epidemic is going to require more than just an ability to see it for what it is. We need to shine a light on bullshit where it occurs, and demand better from those who promulgate it.

The authors provide some methods to refute BS, as they themselves use throughout the book in the many well-chosen examples used to illustrate their points:

  • Use reductio ad absurdum
  • Be memorable
  • Find counterexamples
  • Provide analogies
  • Redraw figures
  • Deploy a null model

They also “conclude with a few thoughts about how to [call BS] in an ethical and constructive manner”, viz.

  • Be correct
  • Be charitable
  • Admit fault
  • Be clear
  • Be pertinent

In summary, this book is highly recommended reading for all testers to help them become more skilled spotters of BS; be that from vendors, testing consultants or others presenting information about testing. This skill will also come in very handy in spotting BS in claims made about the products you work on in your own organization!

The amount of energy needed to refute bullshit is an order of magnitude bigger than [that needed] to produce it.

Alberto Brandolini (Italian software engineer, 2014)

After reading this book, you should have the skills to spot BS and I actively encourage you to then find inventive ways to refute it publicly so that others might not get fooled by the same BS.

Our industry needs those of us who genuinely care about testing to call out BS when we see it, and I’m hoping to see more of this in our community! (My critique of the Capgemini World Quality Report and my review of a blog post by Cigniti are examples of my own work in this area as I learn and refine these skills.)

Common search engine questions about testing #3: “When should software testing activities start?”

This is the third of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “When should software testing activities start?” (and the related question, “When to do software testing?”).

It feels very timely to be answering this question as there is so much noise in the industry at the moment around not only “shifting” testing to the left but also to the right. “Shifting left” is the idea that testing activities should be moved more towards the start of (and then throughout) the development cycle, while shifting right is more about testing “in production” (i.e. testing the software after it’s been deployed and is in use by customers). It seems to me that there is a gap now forming in the middle where much of our testing used to be performed (and, actually, probably still is), viz. testing of a built system by humans before it is deployed.

Let’s start by looking at what we mean by “testing activities” and who might perform these activities.

For teams with dedicated testers, the testers can participate in design meetings and ask questions about how customers really work. They can also review user stories (and other claims of how the software is intended to work) to look for inconsistencies and other issues. Testers might also work with developers to help them generate test ideas for unit and API tests. Testers with coding skills might work with API developers to write stubs or API tests during development. Testers might pair with developers to test some new functionality locally before it even makes it into a more formal build. For teams without dedicated testers, the developers will be covering these – and other – testing activities themselves, perhaps with assistance from a roaming testing/quality coach if the organization is following that kind of model. All of the above activities are performed before a built system is ready for testing in its entirety, so are probably what many would now refer to as “shift left” testing in practice.

The shifting left of testing activities seems to have been heavily influenced by the agile movement. Practitioners such as Janet Gregory and Lisa Crispin have written books on “Agile Testing” which cover many of these same themes, without referring to them as “shift left”. The idea that the critical thinking skills of testers can be leveraged from the earliest stages of developing a piece of software seems sound enough to me. The term “agile tester”, though, seems odd – I prefer to think of testing as testing, with “agile” being part of the context here (and this context enables some of these shift-left activities to occur whereas a different development approach might make these activities difficult or impossible).

In more “traditional” approaches to software development (and also in dysfunctional agile teams), testing activities tend to be pushed towards the end of the cycle (or sprint/iteration) when there is a built “test ready” version of the software available for testing. Testing at this point is highly valuable in my opinion and is still required even if all of the “shift left” testing activities are being performed. If testing activities only start at this late stage, though, there is a lot of opportunity for problems to accumulate that could have been detected earlier and resolving these issues so late in the cycle may be much more difficult (e.g. significant architectural changes may not be feasible). To help mitigate risk and learn by evaluating the developing product, testers should look for ways to test incremental integration even in such environments.

The notion that “testing in production” is an acceptable – and potentially useful – thing is really quite new in our industry. When I first started in the testing industry, suggesting that we tested in production was akin to a bad joke; Microsoft’s release of Windows Vista again comes to mind. Of course, a lot has changed since then in terms of the technologies we use and the deployment methods available to us, so we shouldn’t be surprised that testing of the deployed software is now a more reasonable thing to do. We can learn a lot from genuine production use that we could never hope to simulate in pre-production environments, and automated monitoring and rollback systems give us scope to “un-deploy” a bad version much more easily than recalling millions of 3.5-inch floppies! This “shift right” approach can add valuable additional testing information but, again, it is not in itself a replacement for other testing we might perform at other times during the development cycle.

In considering when testing activities should start then, it’s useful to broaden your thinking about what a “testing activity” is away from just system testing of the entire solution and also to be clear about your testing mission. Testing activities should start as early as makes sense in your context (e.g. you’ll probably start testing at different times in an agile team than when working on a waterfall project). Different types of testing activities can occur at different times and remember that critical thinking applied to user stories, designs, etc. is all testing. Use information from production deployments to learn about real customer usage and feed this information back into your ongoing testing activities pre-deployment.

And, by way of final word, I encourage you to advocate for opportunities to test your software before deployment using humans (i.e. not just relying on a set of “green” results from your automated checks), whether your team is shifting left, shifting right or not dancing at all.

You can find the first two parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #2: How does software testing impact software quality?

This is the second of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I address the question “How does software testing impact software quality?” (and the related question, “How is software testing related to software quality?”).

It’s worth taking a moment to clarify what I mean by “quality”, via this definition from Jerry Weinberg and Cem Kaner:

Quality is value to some person (that matters)

I like this definition because it puts a person at the centre of the concept and acknowledges the subjectivity of quality. What is considered to be a bug by one customer may well be viewed as a feature by another! This inherent subjectivity means that quality is more amenable to assessment than measurement, as has been well discussed in a blog post from James Bach, Assess Quality, Don’t Measure It.

So, what then of the relationship between testing and quality?

If we think of testing as an information service provider, then the impact of testing on the quality of the end product is heavily dependent on both the quality of that information and also on the actions & decisions taken on that information. If testing provides information that is difficult to interpret or fails to communicate in a way that is meaningful to its consumers, then it is less likely to be taken seriously and acted upon. If stakeholders choose to do nothing with the information arising from testing (even if it is in fact highly valuable), then that testing effort has no demonstrable impact on quality. Clearly then, the pervasive idea in our industry that testing improves quality isn’t necessarily true – but it’s certainly the case that good testing can have an influence on quality.

It may even be the case that performing more testing reduces the quality of your delivered software. If the focus of testing is on finding bugs – over identifying threats to the software’s value – then performing more testing will probably result in finding more bugs, but they might not represent the important problems in the product. The larger number of bugs found by testing then results in more change in the software and potentially increases risk, rather than reducing it (and the currently popular idea of “defect-free/zero defect” software seems to leave itself wide open to this counterintuitive problem).

Testers were once seen as gatekeepers of quality, but this notion thankfully seems to be almost resigned to the history books. Everyone on a development team has responsibility for quality in some way and testers should be well placed to help other people in the team to improve their own testing, skill up in risk analysis, etc. In this sense, we’re acting more as quality assistants and I note that some organisations explicitly have the role of “Quality Assistant” now (and it makes sense to say “I am a QA” in this sense whereas it never did when “QA” was synonymous with “Quality Assurance”).

I like this quote from James Bach in his blog post, Why I Am A Tester:

…my intent as a tester is not to improve quality. That’s a hopeful side effect of my process, but I call that a side effect because it is completely beyond our control. Testers do not create, assure, ensure, or insure quality. We do not in any deep sense prove that a product “works.” The direct intent of testing – what occupies our minds and lies at least somewhat within our power – is to discover the truth about the product. The best testers I know are in love with dispelling illusions for the benefit of our clients.

Testing is a way to identify threats to the value of the software for our customer – and, given our definition of quality, the relationship between testing and quality therefore seems very clear. The tricky part is how to perform testing in a way which keeps the value of the software for our customer at the forefront of our efforts while we look for these threats. We’ll look at this again in answering later questions in this blog series.

I highly recommend also reading Michael Bolton’s blog post, Testers: Get Out of the Quality Assurance Business, for its treatment of where testing – and testers – fit into building good quality software.

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

The first part of this blog series answered the question, “Why is software testing important?”.

Thanks to my review team (Paul Seaman and Ky) for their helpful feedback on this post.

Reviewing Capgemini’s “World Quality Report 2020-21”

As I noted in another recent blog post, it’s that time of year again when predictions get made for the year ahead as well as reviews of the year. Right on cue, Capgemini released the latest edition of their annual World Quality Report to cover 2020 and 2021.

I reviewed the 2018/19 edition of their report in depth, so I thought it worth examining the new edition to compare and contrast it with the one from two years ago.

TL;DR

The findings and recommendations in the 2020/21 edition of this report are very similar to those in the 2018/19 report. It appears that the survey respondents are drawn from a very similar pool, and the lack of responses from smaller organisations means that the results are heavily skewed towards very large corporate environments.

There’s still plenty of talk about “more automation” and about the importance of AI/ML in revolutionizing QA/testing (and there is again no real differentiation or definition of the difference between testing and QA from the authors). There is almost no talk of “testing” (as I understand it) in the report; instead there is a heavy focus on agile, DevOps, automation, AI, ML, etc., and removing or reducing human involvement in testing seems to be a clear goal. The fact that the report is co-authored by Micro Focus, a tool vendor, may have something to do with this direction. Agile and DevOps are almost always referred to in the report together, as though they are the same thing or one depends on the other.

I would have liked to see some deep questions about testing practice in the survey, to learn more about what’s going on in terms of human testing in these large organizations, but alas there were none to be seen.

There is evidence of confirmation bias on the part of the authors throughout the report. When the survey results confirm what they expect, there is little questioning of the potential for misunderstanding of the questions. However, in cases where the results don’t confirm what they expected, many reasons are suggested for why this might be so. For me, this severely damages the credibility of the report and brings into question why they survey so many organisations and ask so many questions if the idea ultimately is to deliver the recommendations that the authors and their sponsors are looking for.

I’m obviously not really the target audience for these corporate-type reports, and I can imagine the content being quite demoralizing for testers to read if their organisations appear to be so far behind what these large organisations claim to be doing. I would suggest not believing the hype, doing your own critical thinking, and taking the conclusions from all such surveys and reports with a pinch of salt.

The survey (pages 68-71)

This year’s report comes in at a hefty 76 pages (a few pages heavier than the 2018/19 edition). I again chose to look first at where the data came from to build the report, which is presented towards the end. The survey size was 1750 (compared to 1700 in 2018/19) and the organizations taking part were again all of over 1,000 employees, with the largest number coming from organizations of over 10,000 employees. The response breakdown by organizational size was within a percentage point of the 2018/19 report in every category, so it seems likely that it’s essentially the same organizations contributing each time. The lack of input from smaller organizations is a concern, as I imagine smaller, more nimble organizations might actually be where the innovation and genuine change in the way testing is performed comes from.

The survey had a good spread of countries & regions as well as industry sectors (with the top three sectors accounting for almost half of the responses, viz. financial services, public sector/government, and telecommunications). The types of people who provided survey responses this year bear a startling resemblance to those in the 2018/19 report. In terms of job title breakdown (2020/21 vs. 2018/19), they were grouped as follows:

  • CIO (25% vs. 27%)
  • IT director (20% vs. 22%)
  • QA/Testing Manager (19% vs. 20%)
  • VP Applications (17% vs. 18%)
  • CMO/CDO (7% in both cases)
  • CTO/Product Head (6% in both cases)
  • VP/Director of R&D (6%, reported in 2020/21 only)

These striking similarities in the data again lead me to the conclusion that the report relies on the same people in the same organizations providing responses year on year.

Introduction (pages 4-5)

In his introduction, Mark Buenen of Capgemini (page 4) notes that

the use of test automation is growing, as for most organizations it is still not at the level required

which seems to suggest there is some level of automation at which these organizations will cease to look for other ways to leverage automation. He also says

It is reassuring that 69% of the organizations interviewed in this survey feel they always or virtually always meet their quality goals

I’m not sure what is reassuring about this statistic. Given the senior folks interviewed for this survey, I wonder how many of these places actually have clearly defined quality goals and how they go about measuring whether they meet (or “virtually always meet”!) these goals. Another interesting point Mark makes is that

One of the main challenges noted this year is the lack of the right testing methodologies for teams, as reported by 55% of respondents.

I wonder what the “right testing methodologies” being sought here are. Are organizations looking for a silver-bullet “testing methodology” to solve their quality problems? In his introduction, Raffi Margaliot of Micro Focus (page 5) says

This year’s WQR shows that QA has transitioned from being an independent function in a separate team, towards becoming an integrated part of the software delivery team, with responsibilities reaching beyond testing and finding defects. QA engineers are now charged with enabling the entire team to achieve their quality objectives, and incorporating better engineering practices and state-of-the-art techniques such as AI and ML to achieve these aims.

The move to embedding responsibility for testing and quality within development teams began so long ago that it seems nonsensical to still be talking about it as an improvement. The idea that AI and ML are playing big parts in the way whole-team approaches to quality are implemented is popular, especially with the CTO/CIO types interviewed for reports like this, but I still believe the reality within most development teams is very different. We’ll return to the actual evidence for these claims as we examine the detail of the report below.

Executive Summary (pages 6-8)

The most interesting part of the summary for me was the commentary on the answers to the survey question about the “Objectives of Quality Assurance and Testing in the organization”.

Apparently 62% of survey respondents said that this was an objective of QA and testing: “Automate: Make QA and Testing a smarter automated process”. The implication here is that automation is smarter than what isn’t automated and that moving away from human involvement in testing is seen as a good thing. I still don’t understand why organizations remain confused about the fact that testing cannot be automated, but obviously the lure of the idea and the spruiking of vendors suggesting otherwise (including the co-sponsor of this report, Micro Focus) are both very strong factors.

Some 60% of respondents said “Quality Enablement: Support everybody in the team to achieve higher quality” was an objective. The “whole team approach to quality” idea is nothing new and is already commonplace in modern software development organizations anyway. The report commentary around this in the Summary is pretty extraordinary and it would be remiss of me not to quote it in its full glory:

It won’t be an exaggeration if we say that out of all the software disciplines, QA has witnessed the most rapid transformation. QA has been steadily evolving – from an independent function to an integrated function, and now to an inclusive function. Also, the role of the QA practitioner is transforming from testing and finding defects, to ensuring that other engineering team members inculcate quality in their way of working. They need to do this by enabling them and by removing impediments on their way to achieving quality objectives.

Actually, I think it is an exaggeration: QA/testing has moved pretty slowly over the years, and “steadily evolving” is closer to the mark. It wouldn’t be a report of this type if it didn’t mention shifting left and shifting right, and the authors don’t disappoint:

QA is not only shifting left but also moving right. We see more and more enterprises talk about exploratory testing, chaos engineering, and ensuring that the product is experienced the way end users will experience it in real life before releasing it to the market.

Testers must be dizzy these days with all this shifting to the left and shifting to the right. I wonder what testing is left in the “middle” now; you know, the sort of testing where a human interacts with the product we’ve built to look for deep and subtle problems that might threaten its value to customers. This is where I’d imagine most exploratory testing efforts would sit and, while the report notes that more organizations are talking about exploratory testing, my feeling is that they’re talking about something quite different from what excellent practitioners of this approach mean by exploratory testing.

Key findings (pages 9-10)

QA orchestration in agile and DevOps

In this first category of findings, the report claims:

The adoption of agile and DevOps is steadily increasing, resulting in QA teams becoming orchestrators of quality.

While I don’t doubt that more and more organizations claim to be using agile and DevOps (even without any broad consensus on what either of those terms has come to mean), it sounds like they still expect some group (i.e. “QA”) to somehow arrange for the quality to happen. The idea of “full-stack QA” comes next:

We see a trend towards wanting QA engineers who have developer type skills, who yet retain their quality mindset and business-cum-user centricity.

Is this expecting too much? Yes, we think so. Only a few QA professionals can have all these skills in their repertoire. That’s why organizations are experimenting with the QA operational structure, with the way QA teams work, and with the skill acquisition and training of QA professionals.

I agree with the report on this point: there’s still a key role for excellent human testers even if they can’t write code. This now seems to be a contrarian viewpoint in our industry. The “moon on a stick” desire for testers who can perform excellent testing, find problems and fix them, then write the automated checks for them and also handle production monitoring of the deployed code feels like incredibly wishful thinking, and it’s not helpful to the discussion of advancing genuine testing skills.

Artificial intelligence and machine learning

The report is surprisingly honest and realistic here:

Expectations of the benefits that AI and ML can bring to quality assurance remain high, but while adoption is on the increase, and some organizations are blazing trails, there are few signs of significant general progress.

Nonetheless, enthusiasm hasn’t diminished: organizations are putting AI high among their selection criteria for new QA solutions and tools, and almost 90% of respondents to this year’s survey said AI now formed the biggest growth area in their test activities. It seems pretty clear they feel smart technologies will increase cost-efficiency, reduce the need for manual testing, shorten time to market – and, most importantly of all, help to create and sustain a virtuous circle of continuous quality improvements.

It’s good that the authors acknowledge the very slow progress in this area, despite AI and ML being touted as the next big things for many years (by this report itself, as you’ll note from my review of the 2018/19 report). What’s sad is that almost all the respondents say AI is the biggest growth area around testing, which is worrying when other parts of the report indicate more significant issues (e.g. a lack of good testing methodologies). The report’s interpretation of why so many organizations continue to be so invested in making AI work for them in testing is questionable, I think. What does “increased cost-efficiency” really mean? And is it the direct result of the “reduced need for manual testing”? The “virtuous circle of quality improvements” they mention currently looks more like a death spiral: reducing human interaction with the software before release, pushing poor-quality software out to customers more often, seeing their complaints, fixing them, pushing fixes out more often, …

Budgets and cost containment

The next category is on budget/costs around testing and the report says:

The main focus remains manpower cost reduction, but some organizations are also looking at deriving the best out of their tool investments, moving test environments to the cloud, and looking for efficiencies from technologies including AI, machine learning and test automation.

There is again the claim about using AI/ML for “efficiency” gains and it’s a concern that reducing the number of people involved in QA/testing is seen as a priority. It should be clear by now that I believe humans are key in performing excellent testing and they cannot be replaced by tools, AI/ML or automation (though their capabilities can, of course, be extended by the use of these technologies).

Test automation

You’ll be pleased to know that automation is getting smarter:

The good thing we saw this year is that more and more practitioners are talking about in-sprint automation, about automation in all parts of QA lifecycle and not just in execution, and also about doing it smartly.

This raises the question of how automation was being tackled before, but let’s suppose organizations are being smarter about its use – even though this conclusion, to me at least, again makes it sound like more and more human interaction with the software is being deliberately removed under the banner of “smart automation”.

While the momentum could have been higher, the automation scores have mostly risen since last year. Also, the capability of automation tools being used seems to satisfy many organizations, but the signs are that the benefits aren’t being fully realized: only around a third of respondents (37%) felt they were currently getting a return on their investment. It really depends on how that return is being measured and communicated to the relevant stakeholders. Another factor may be that the tools are getting smarter, but the teams are not yet sufficiently skilled to take full advantage of them.

This paragraph sums up the confused world most organizations seem to be living in when it comes to the benefits (& limitations) expected of automation. While I’m no fan of the ROI concept applied to automation (or anything else in software development), this particular survey response indicates that the majority of organizations are dissatisfied with the benefits derived from their investments in automation. There are many potential reasons for this – unrealistic expectations, poor tool selection, misunderstanding of what automation can and can’t do, etc. – but I had to smile when reading that last sentence, which could be restated as “the automation tools are just too smart for the humans to use”!

Test environment management (TEM) and test data management (TDM)

There was nothing much to say under this category, but this closing statement caught my eye:

It was also interesting to note that process and governance came out as a bigger challenge than technology in this area.

I think the same statement probably applies to almost everything we do in software development. We’re not – and never really have been – short of toys (technology & tools), but assembling groups of humans to play well with those toys and end up with something of value to customers has always been a challenge.

Key recommendations (pages 11-13)

The recommendations are structured basically around the same categories as the findings discussed above, in summary:

  • QA orchestration in agile and DevOps
    • Don’t silo responsibility for QA. Share it.
    • Spread the word
    • Be part of the business
    • Make room for dashboards
    • Listen more to users
  • Artificial intelligence and machine learning
    • Focus on what matters
    • Keep learning
    • Have a toolkit
    • Testing AI systems: have a strategy
  • Budgets and cost containment
    • Greater savings can be achieved by using test infrastructure smartly
    • Use advantages in analytics, AI, and machine learning to make testing smarter
    • Be prepared to pay well for smarter talent
    • Don’t put all key initiatives on hold. Strive to be more efficient instead
  • Test automation
    • Change the status quo
    • Think ahead
    • Choose the right framework
    • Balance automation against skills needs
    • Don’t think one size fits all
    • Get smart
  • Test environment management and test data management
    • Create a shared center of excellence for TEM/TDM
    • Get as much value as you can out of your tool investment
    • Have strong governance in place
  • Getting ready to succeed in a post-COVID world
    • Be better prepared for business continuity
    • Focus more on security
    • Don’t look at COVID-19 as a way to cut costs, but as an opportunity to transform
    • Continue to use the best practices adopted during the pandemic

There’s nothing too revolutionary here, with a lot of motherhood-statement advice – and copious use of the word “smart”. One part did catch my eye, though, under “Change the status quo” for test automation:

Testing will always be squeezed in the software development lifecycle. Introducing more automation – and pursuing it vigorously – is the only answer.

This messaging is, in my opinion, misleading and dangerous. It reinforces the idea that testing is an activity separate from development and so can be “squeezed” while other parts of the lifecycle are not. It seems odd – and contradictory – to say this when so many of the report’s conclusions are about “inclusive QA” and whole-team approaches. The idea that “more automation” is the “only answer” is highly problematic: there is no acknowledgement of context here, and adding more automation can often just lead to more of the same problems (or even introduce new and exciting ones), when part of the solution might be to re-involve humans in the lifecycle, especially when it comes to testing.

Current Trends in Quality Assurance and Testing (pages 14-49)

Almost half of the WQR is again dedicated to discussing current trends in QA and testing and some of the most revealing content is to be found in this part of the report. I’ll break down my analysis in the same way as the report (the ordering of these sections is curiously different to the ordering of the key findings and recommendations).

QA orchestration in agile and DevOps

I’ll highlight a few areas from this section of the report. Firstly, the topic of how much project effort is allocated to testing:

…in agile and DevOps models, 40% of our respondents said 30% of their overall project effort is allocated to testing…. there is quite a wide variation between individual countries: for instance, almost half of US respondents using agile (47%) said they do so for more than 30% of their overall test effort, whereas only 11% of Italian respondents and 4% of UK respondents said the same.

The question really makes no sense in truly agile teams, since testing activities run alongside development and trying to measure how much of the total effort relates specifically to testing serves little purpose – and I suspect (i.e. hope) it is not explicitly tracked by most teams. As such, the wide variation in responses to this question is exactly what I’d expect; it might just be that the Italians and the UK folks are being honest in acknowledging the ridiculousness of the question.

There are a couple of worrying statistics next. Firstly, 51% of respondents said they ‘always’ or ‘almost always’ aim to “maximize the automation of test”. Does this mean they’re willing to sacrifice other areas (e.g. human interactions with the software) to achieve this? Why would they want to achieve it anyway? And what did they take the question to mean by “the automation of test”?

Meanwhile, another (almost) half said they ‘always’ or ‘almost always’ “test less during development and focus more on quality monitoring/production test” (maybe the same half as in the above automation response?). I assume this is the “shift-right” brigade again, but I really don’t see this idea of shifting right (or left) as removing the need for human testing of the software before it gets to production (where I acknowledge that lots of helpful, cool monitoring and information gathering can also take place).

It was a little surprising to find that the most common challenge [in applying testing to agile development] (50%) was a reported difficulty in aligning appropriate tools for automated testing. However, this may perhaps be explained by the fact that 42% of respondents reported a lack of professional test expertise in agile teams – and so this lack of skills may explain the uncertainty about identifying and applying the right tools.

The fact that the most common challenge was related to technology (tools in this case) comes as no surprise, but it highlights how misguided most organizations are about agile in general. Almost half of the respondents acknowledge that a lack of test expertise in agile teams is a challenge, while half also say that tooling is the problem. This focus on technology over people comes up again and again in the report.

Which metrics are teams using to track applications [sic] quality? Code coverage by test was the most important indicator, with 53% of respondents saying they always or almost always use it. This is compliant with agile test pyramid good practices, although it can be argued that this is more of a development indicator, since it measures unit tests. Almost as high in response, with 51% saying always or almost always, was risk covered by test. This is particularly significant: if test strategy is based on risk, it’s a very good thing.

The good ol’ “test pyramid” had to make an appearance and here it is, elevated to the status of a compliance mechanism. At least they note that forming a test strategy around risk is “a very good thing”, but there’s little in the response to this question about tracking quality that refers to anything meaningful in terms of my preferred definition of quality (“Value to some person (who matters)”).
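To make concrete why I’d treat “code coverage by test” as a development indicator rather than a quality one, here’s a minimal sketch (Python is assumed here, along with pytest and coverage.py; the discount() function is a hypothetical example of mine, not anything from the report):

```python
# Hypothetical code under test - invented purely for illustration.
def discount(price: float, is_member: bool) -> float:
    """Apply a 10% discount for members, otherwise return the price unchanged."""
    return price * 0.9 if is_member else price


# A single unit-level check of the "happy path".
def test_member_discount():
    assert discount(100.0, True) == 90.0


# Running something like `coverage run -m pytest` followed by `coverage report`
# would show every line of discount() as covered by this one check. Yet nothing
# here has questioned the non-member path, rounding of odd prices, negative
# prices, or whether a 10% discount is even what the business wanted - the
# metric tells us which code the unit tests executed, not which risks to the
# product's value were explored.
```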

In closing this section of the report:

Out of all the insights relating to the agile and DevOps theme, perhaps the greatest surprise for us in this year’s survey was the response to a question about the importance of various criteria for successful agile and DevOps adoption. The technology stack was rated as essential or almost essential by 65% of respondents, while skill sets and organizational culture came in the bottom, with 34% and 28% respectively rating these highly. Operational and business priorities were rated highly by 41% of respondents.

How can this be? Maybe some respondents thought these criteria were a given, or maybe they interpreted these options differently. That’s certainly a possibility: we noted wide variations between countries in response to this question. For instance, the highest figure for skills needs was Poland, with 64%, while the lowest was Brazil, with just 5%. Similarly, for culture, the highest figure was Sweden, with 69%, and the lowest was once again Brazil, with only 2%. (Brazil’s perceived technology stack need was very high, at 98%.) The concept of culture could mean different things in different countries, and may therefore be weighted differently for people.

Regardless of what may have prompted some of these responses, we remain resolute in our own view that success in agile and DevOps adoption is predicated on the extent to which developments are business-driven. We derive this opinion not just from our own experience in the field, but from the pervasive sense of a commercial imperative that emerges from many other areas of this year’s report.

I would also expect culture to be ranked much higher (probably highest), but the survey responses don’t rate it so among the criteria for successful agile and DevOps adoption. The report’s authors suggest that this might be due to misunderstanding of the question (which is possible for this and every other question in the survey, of course) and then display their confirmation bias by drawing conclusions that are not supported by the survey data (and make sense of that closing sentence if you will!). My take is a little different – it seems to me that most organizations focus on solving technology problems rather than people problems, so it’s not too surprising that the technology stack is rated at the top.

Artificial intelligence and machine learning

There is a lot of talk about AI and ML in this report again (as in previous years) and it feels like these technologies are always the next big thing, yet never quite become it in any significant way (or, in the report’s more grandiose language: “…adoption and application have still not reached the required maturity to show visible results”).

The authors ask this question in this section:

Can one produce high-quality digital assets such as e-commerce, supply chain systems, and engineering and workface management solutions, without spending time and money assuring quality? In other words, can a system be tested without testing it? That may sound like a pipedream, but the industry has already started talking about developing systems and processes with intelligent quality engineering capabilities.

This doesn’t sound like a pipedream to me; it just sounds like complete nonsense. The industry may be talking about it, but largely via those with vested interests in selling products and tools based on the idea that removing humans from testing activities is a worthy goal.

Almost nine out of ten respondents (88%) said that AI was now the strongest growth area of their test activities [testing with AI and testing of AI]

This is an interesting but relatively meaningless claim, given the very limited impact that AI has had so far on everyday testing activities in most organizations. It’s not clear from the report what this huge percentage of respondents is pursuing through this growth in the use of AI; maybe next year’s report will reveal that (though I strongly doubt it).

In wrapping up this section:

Even though the benefits may not yet be fully in reach, the vast majority of people are genuinely enthusiastic about the prospects for AI and ML. These smart technologies have real potential not just in cost-efficiency, in zero-touch testing, and in time to market, but in the most important way of all – and that is in helping to achieve continuous quality improvements.

There is a lot of noise and enthusiasm around the use of AI and ML, in the IT industry generally and not just testing. The danger here is in the expectations of benefits from introducing these technologies and the report fuels this fire by claiming potential to reduce costs, reduce/remove the need for humans in testing, and speed up development. Adopting AI and ML in the future may yield some of these benefits but the idea that doing so now (with such a lack of expertise and practical experience as outlined in this very report) will help “achieve continuous quality improvements” doesn’t make sense to me.

Test automation

This is always an interesting section of the report, and one stat hit me early on: only 37% of respondents agreed that “We get ROI from our automation efforts”. This is a pretty damning indictment of how automation projects are viewed and handled, especially in larger organizations. The authors note in response that “We feel moving towards scriptless automation tools may provide better return on investment in the long term” and I’d be interested to know why they say that.

In terms of the degree to which automation is being used:

…our respondents told us that around 15% of all testing was automated. Only 3% of them said they were automating 21% or more of their test activities.

These are quite low numbers considering the noise about “automating all testing” and using AI/ML to reduce or remove humans from the testing effort. Something doesn’t quite add up here.

Test data management and test environment management

This section of the report wasn’t very exciting, noting the fairly obvious increase in the use of cloud-based and containerized test environments. Some 29% of respondents reported still using on-premise hardware for their test environments, though.

Budgets and cost containment

The breakdown of QA budget made for interesting reading, with 45% going on hardware & infrastructure, 31% on tools, and just 25% on “human resources”. The fact that the big organizations surveyed for this report spend more on tools than on humans when it comes to QA/testing really says it all for me, as does:

We see greater emphasis being placed on reducing the human resources budget rather than the hardware and infrastructure budget.

While the overall allocation of total IT budget to QA remained similar to previous years (slowly declining year-on-year), this year’s report does at least recognize the “blurring boundaries” between QA and other functions:

It’s more difficult these days to track the movement of QA budgets specifically as an individual component. This is because the budget supports the overall team: the boundaries are blurring, and there is less delineation between different activities performed and the people who perform them in the agile environment.

Andy Armstrong (Head of Quality Assurance and Testing, Nordea Bank)

while the dedicated QA budget may show a downward trend, it’s difficult to ascertain how much of that budget is now consumed by the developers doing the testing.

The impact of COVID-19 and its implications on quality assurance activities in a post-pandemic world

This (hopefully!) one-off section of the report wasn’t particularly revealing for me, though I noted the stat that 74% of respondents said that “We need to automate more of QA and testing” in a post-pandemic world – as though these people need more reasons/excuses to ramp up automation!

Sector Analysis (pages 50-67)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors and discuss particular trends and challenges within each, with a strong focus on the impact of the COVID-19 pandemic. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy, utilities and chemicals
  • Financial services
  • Healthcare and life sciences
  • High-tech
  • Government and public sector
  • Telecoms, media and entertainment

Geography-specific reports

The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find anything worthy of special mention.

Before I go…

I respond to reports and articles of this nature in order to provide a different perspective, based on my opinion of what good software testing looks like as well as my experience in the industry. I provide such content as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting.

If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Common search engine questions about testing #1: Why is software testing important?

This is the first of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

The first cab off the rank is “Why is software testing important?” (with related questions being “why is software testing necessary?”, “why is software testing needed?”, “why is software testing important in software engineering?” and “why is software testing important in SDLC?”).

Let’s begin by looking at this from a different angle: how would teams/organisations behave if software testing wasn’t important to them? They’d probably try to cut the cost of it or find ways to justify not doing it at all (especially with expensive humans). They might devalue the people doing such work by compensating them differently to other team members, or look upon their work as a commodity that can be performed by lowest-common-denominator staff (perhaps in a cheaper location). They would capitalize on their confirmation bias by appealing to the authority of the many articles and presentations claiming that “testing is dead”. They would ensure that testing is seen as a function separate from the rest of development, to enable their desire to remove it completely. They would view testing as a necessary evil.

Listening to the way some organisations and some parts of the software development community talk about testing, it’s common to see these indications that software testing just isn’t important to them. In trying to understand why this is so, I’ve come to believe that this largely stems from the software testing industry traditionally doing a poor job of articulating its value and not being clear on what it is that good testing actually provides. We’ve spent a long time working off the assumption that it’s obvious to people paying the bills that testing is important and necessary.

To be clear, my preferred definition of testing comes from Michael Bolton and James Bach, viz.

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.

I like this definition because it highlights all of the aspects of why testing is important to me, with its focus on interacting with the product, engaging in learning and exploration, and running experiments to help find out if the thing in front of me as a tester is the thing we wanted. It seems to me that this type of evaluation is important and would likely also be viewed as important by the business. However, if we sell the importance of testing based on providing turgid test reports of passed and failed test cases, it’s not too surprising that stakeholders view testing as being more of a costly nuisance than a valued and trusted advisor. Too often, I’ve seen the outputs of testing being focused on describing the testing approach, techniques, test cases run and bugs logged – in other words, we too often provide information about what we did and fail to tell a story about what we discovered during the process.

The reality is that most stakeholders (and certainly customers) don’t care about what you did as a tester, but they probably care about what you learned while doing it that can be valuable in terms of deciding whether we want to proceed with giving the product to customers. Learning to present testing outcomes in a language that helps consumers of the information to make good decisions is a real skill and one that is lacking in our industry. Talking about risk (be that product, project, business or societal) based on what we’ve learned during testing, for example, might be exactly what a business stakeholder is looking for in terms of value from that testing effort. In deliberately looking for problems that threaten the value of the product, there is more chance of finding them before they can impact our customers.

Another spanner in these works is the confusion caused by the common use of the term “automated testing”. It should be clear from the definition I presented above that testing is a deeply human activity, requiring key human skills such as the ability to subjectively experience using the product, make judgements about it and perform experiments against it. While the topic of “automated testing” will be covered in more depth in answering a later question in this blog series, I also wanted to briefly mention automation here to be clear when answering why software testing is important. In this context, I’m going to include the help and leverage we can gain by automation under the umbrella term of “software testing”, while reminding you that the testing itself cannot be automated since it requires distinctly human traits in its performance.
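To make that distinction a little more concrete, here’s a minimal, hypothetical sketch of an automated check (Python and pytest are assumed; the login() function and FakeResponse class are stand-ins I’ve invented so the example is self-contained, not any real API):

```python
# Everything below is illustrative only - a stand-in system under test and a
# single automated check against it.
class FakeResponse:
    """A stand-in for an HTTP response, so the sketch runs on its own."""
    def __init__(self, status_code: int, body: dict):
        self.status_code = status_code
        self._body = body

    def json(self) -> dict:
        return self._body


def login(username: str, password: str) -> FakeResponse:
    """Hypothetical system under test: returns a token for any credentials."""
    return FakeResponse(200, {"token": "abc123", "user": username})


def test_login_returns_token():
    # The machine can evaluate this predetermined assertion quickly,
    # repeatedly and consistently...
    response = login("alice", "correct-password")
    assert response.status_code == 200
    assert "token" in response.json()

# ...but it cannot notice that an error message leaks usernames, that the flow
# is confusing on a small screen, or that the product no longer solves the
# customer's problem. Judgements like those are why the checking can be
# automated while the testing itself cannot.
```

A check like the one above is valuable leverage, but it only ever confirms what someone already thought to encode – the learning, exploration and experimentation in the definition above remain human work.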

Let’s wrap up this post with a couple of reasons why I think software testing is important.

Software testing is important because:

  • We want to find out if there are problems that might threaten the value of the product, so that they can be fixed before the product reaches the customer.
  • We have a desire to know if the product we’ve built is the product we (and, by extension, our customers) wanted to build.
    • The machines alone can’t provide us with this kind of knowledge.
    • We can’t rely solely on the builders of the product either as they lack the critical distance from what they’ve built to find deep and subtle problems with it.

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks to my review team (Paul Seaman and Ky) for their helpful feedback on this post.