Monthly Archives: November 2020

Common search engine questions about testing #2: How does software testing impact software quality?

This is the second of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I address the question “How does software testing impact software quality?” (and the related question, “How is software testing related to software quality?”).

It’s worth taking a moment to clarify what I mean by “quality”, via this definition from Jerry Weinberg and Cem Kaner:

Quality is value to some person (that matters)

I like this definition because it puts a person at the centre of the concept and acknowledges the subjectivity of quality. What is considered to be a bug by one customer may well be viewed as a feature by another! This inherent subjectivity means that quality is more amenable to assessment than measurement, as has been well discussed in a blog post from James Bach, Assess Quality, Don’t Measure It.

So, what then of the relationship between testing and quality?

If we think of testing as an information service provider, then the impact of testing on the quality of the end product depends heavily on both the quality of that information and the actions & decisions taken based on it. If testing provides information that is difficult to interpret, or fails to communicate it in a way that is meaningful to its consumers, then it is less likely to be taken seriously and acted upon. If stakeholders choose to do nothing with the information arising from testing (even if it is in fact highly valuable), then that testing effort has no demonstrable impact on quality. Clearly then, the pervasive idea in our industry that testing improves quality isn’t necessarily true – but it’s certainly the case that good testing can have an influence on quality.

It may even be the case that performing more testing reduces the quality of your delivered software. If the focus of testing is on finding bugs – rather than on identifying threats to the software’s value – then performing more testing will probably result in finding more bugs, but they might not represent the important problems in the product. The larger number of bugs found by testing then results in more change to the software and potentially increases risk rather than reducing it (and the currently popular idea of “defect-free/zero defect” software seems to leave itself wide open to this counterintuitive problem).

Testers were once seen as gatekeepers of quality, but this notion thankfully seems to be almost consigned to the history books. Everyone on a development team has responsibility for quality in some way, and testers should be well placed to help other people in the team to improve their own testing, skill up in risk analysis, etc. In this sense, we’re acting more as quality assistants and I note that some organisations explicitly have the role of “Quality Assistant” now (and it makes sense to say “I am a QA” in this sense, whereas it never did when “QA” was synonymous with “Quality Assurance”).

I like this quote from James Bach in his blog post, Why I Am A Tester:

…my intent as a tester is not to improve quality. That’s a hopeful side effect of my process, but I call that a side effect because it is completely beyond our control. Testers do not create, assure, ensure, or insure quality. We do not in any deep sense prove that a product “works.” The direct intent of testing – what occupies our minds and lies at least somewhat within our power – is to discover the truth about the product. The best testers I know are in love with dispelling illusions for the benefit of our clients.

Testing is a way to identify threats to the value of the software for our customer – and, given our definition of quality, the relationship between testing and quality therefore seems very clear. The tricky part is how to perform testing in a way which keeps the value of the software for our customer at the forefront of our efforts while we look for these threats. We’ll look at this again in answering later questions in this blog series.

I highly recommend also reading Michael Bolton’s blog post, Testers: Get Out of the Quality Assurance Business, for its treatment of where testing – and testers – fit into building good quality software.

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

The first part of this blog series answered the question, “Why is software testing important?”.

Thanks to my review team (Paul Seaman and Ky) for their helpful feedback on this post.

Reviewing Capgemini’s “World Quality Report 2020-21”

As I noted in another recent blog post, it’s that time of year again when predictions get made for the year ahead as well as reviews of the year. Right on cue, Capgemini released the latest edition of their annual World Quality Report to cover 2020 and 2021.

I reviewed the 2018/19 edition of their report in depth and I thought it worth reviewing the new report to compare and contrast it with the one from two years ago.

TL;DR

The findings and recommendations in the 2020/21 edition of this report are really very similar to those in the 2018/19 report. It appears that the survey respondents are drawn from a very similar pool, and the lack of responses from smaller organisations means that the results are heavily skewed towards very large corporate environments.

There’s still plenty of talk about “more automation” and about the importance of AI/ML in revolutionizing QA/testing (there is again no real differentiation or definition of the difference between testing and QA from the authors). There is almost no talk of “testing” (as I understand it) in the report; instead there is a heavy focus on agile, DevOps, automation, AI, ML, etc., and removing or reducing human involvement in testing seems to be a clear goal. The fact that the report is co-authored by Micro Focus, a tool vendor, may have something to do with this direction. Agile and DevOps are almost always referred to in the report together, as though they are the same thing or one depends on the other.

I would have liked to see some deep questions about testing practice in the survey, to learn more about what’s going on in terms of human testing in these large organizations, but alas such questions were nowhere to be seen.

The report’s authors show evidence of confirmation bias throughout. When the survey results confirm what they expect, there is little questioning of the potential for the questions to have been misunderstood; when the results don’t confirm what they expected, many reasons are suggested for why this might be so. This severely undermines the credibility of the report for me, and it brings into question why they survey so many organisations and ask so many questions if the idea ultimately is to deliver the recommendations that the authors and their sponsors are looking for.

I’m obviously not really the target audience for these corporate-type reports, and I can imagine the content being quite demoralizing for testers reading it if their organisations appear to be so far behind what these large organisations claim to be doing. I would suggest not believing the hype, doing your own critical thinking, and taking the conclusions from all such surveys and reports with a pinch of salt.

The survey (pages 68-71)

This year’s report comes in at a hefty 76 pages (a few pages heavier than the 2018/19 edition). I again chose to look first at where the data came from to build the report, which is presented towards the end. The survey size was 1750 (compared to 1700 in 2018/19) and the organizations taking part were again all of over 1000 employees, with the largest number coming from organizations of over 10,000 employees. The response breakdown by organizational size was within a percentage point of the 2018/19 report in every band, so it seems likely that it’s really the same organizations contributing every time. The lack of input from smaller organizations is a concern, as I imagine smaller, more nimble organizations might actually be where the innovation and genuine change in the way testing is performed comes from.

The survey had a good spread of countries & regions as well as industry sectors (with the top three sectors accounting for almost half of the responses, viz. financial services, public sector/government, and telecommunications). The types of people who provided survey responses this year bear a startling resemblance to those in the 2018/19 report – in terms of job title breakdown (2020/21 vs. 2018/19), they were grouped as follows: CIO (25% vs. 27%), IT director (20% vs. 22%), QA/Testing Manager (19% vs. 20%), VP Applications (17% vs. 18%), CMO/CDO (7% in both cases), CTO/Product Head (6% in both cases) and VP/Director of R&D (6% in 2020/21 only). These striking similarities in the data again lead me to conclude that the report relies on the same people in the same organizations providing responses year on year.

Introduction (pages 4-5)

In his introduction, Mark Buenen of Capgemini (page 4) notes that

the use of test automation is growing, as for most organizations it is still not at the level required

which seems to suggest there is some level of automation at which these organizations will cease to look for other ways to leverage automation. He also says

It is reassuring that 69% of the organizations interviewed in this survey feel they always or virtually always meet their quality goals

I’m not sure what is reassuring about this statistic. Given the senior folks interviewed for this survey, I wonder how many of these places actually have clearly defined quality goals and how they go about measuring whether they meet (or “virtually always meet”!) these goals. Another interesting point Mark makes is that

One of the main challenges noted this year is the lack of the right testing methodologies for teams, as reported by 55% of respondents.

I wonder what the “right testing methodologies” being sought are. Are organizations looking for a silver bullet “testing methodology” to solve their quality problems? In his introduction, Raffi Margaliot of Micro Focus (page 5) says

This year’s WQR shows that QA has transitioned from being an independent function in a separate team, towards becoming an integrated part of the software delivery team, with responsibilities reaching beyond testing and finding defects. QA engineers are now charged with enabling the entire team to achieve their quality objectives, and incorporating better engineering practices and state-of-the-art techniques such as AI and ML to achieve these aims.

The move to embedding responsibility for testing and quality within development teams began so long ago that it seems nonsensical to still even be talking about it as an improvement. The idea that AI and ML are playing big parts in the way whole-team approaches to quality are implemented is popular, especially with the CTO/CIO types interviewed for reports like this, but I still believe the reality within most development teams is very different. We’ll return to the actual evidence for these claims as we examine the detail of the report below.

Executive Summary (pages 6-8)

The most interesting part of the summary for me was the commentary on the answers to the survey question about the “Objectives of Quality Assurance and Testing in the organization”.

Apparently 62% of survey respondents said that this was an objective of QA and testing: “Automate: Make QA and Testing a smarter automated process”. The implication here is that automation is smarter than what isn’t automated and that moving away from human involvement in testing is seen as a good thing. I still don’t understand why organizations remain confused about the fact that testing cannot be automated, but obviously the lure of the idea and the spruiking of vendors suggesting otherwise (including the co-sponsor of this report, Micro Focus) are both very strong factors.

Some 60% of respondents said “Quality Enablement: Support everybody in the team to achieve higher quality” was an objective. The “whole team approach to quality” idea is nothing new and seems to be ubiquitous in most modern software development organizations anyway. The report commentary around this in the Summary is pretty extraordinary and it would be remiss of me not to quote it in its full glory:

It won’t be an exaggeration if we say that out of all the software disciplines, QA has witnessed the most rapid transformation. QA has been steadily evolving – from an independent function to an integrated function, and now to an inclusive function. Also, the role of the QA practitioner is transforming from testing and finding defects, to ensuring that other engineering team members inculcate quality in their way of working. They need to do this by enabling them and by removing impediments on their way to achieving quality objectives.

Actually, I think it is an exaggeration: QA/testing has moved pretty slowly over the years and “steadily evolving” is closer to the mark. It wouldn’t be a report of this type if it didn’t mention shifting left and shifting right, and the authors don’t disappoint:

QA is not only shifting left but also moving right. We see more and more enterprises talk about exploratory testing, chaos engineering, and ensuring that the product is experienced the way end users will experience it in real life before releasing it to the market.

Testers must be dizzy these days with all this shifting to the left and also shifting to the right. I wonder what testing is left in the “middle” now; you know, the sort of testing where a human interacts with the product we’ve built to look for deep and subtle problems that might threaten its value to customers. This is where I’d imagine most exploratory testing efforts would sit and, while the report notes that more organizations are talking about exploratory testing, my feeling is that they’re talking about something quite different than what excellent practitioners of this approach mean by exploratory testing.

Key findings (pages 9-10)

QA orchestration in agile and DevOps

In this first category of findings, the report claims:

The adoption of agile and DevOps is steadily increasing, resulting in QA teams becoming orchestrators of quality.

While I don’t doubt that more and more organizations claim to be using agile and DevOps (even without any broad consensus on what either of those terms has come to mean), it sounds like they still expect some group (i.e. “QA”) to somehow arrange for the quality to happen. The idea of “full-stack QA” comes next:

We see a trend towards wanting QA engineers who have developer type skills, who yet retain their quality mindset and business-cum-user centricity.

Is this expecting too much? Yes, we think so. Only a few QA professionals can have all these skills in their repertoire. That’s why organizations are experimenting with the QA operational structure, with the way QA teams work, and with the skill acquisition and training of QA professionals.

I agree with the report on this point: there’s still a key role for excellent human testers even if they can’t write code. This now seems to be a contrarian viewpoint in our industry, and this “moon on a stick” desire for testers who can perform excellent testing, find problems and fix them, then write the automated checks for them and do the production monitoring of the deployed code feels like incredibly wishful thinking – and is not helpful to the discussion of advancing genuine testing skills.

Artificial intelligence and machine learning

The report is surprisingly honest and realistic here:

Expectations of the benefits that AI and ML can bring to quality assurance remain high, but while adoption is on the increase, and some organizations are blazing trails, there are few signs of significant general progress.

Nonetheless, enthusiasm hasn’t diminished: organizations are putting AI high among their selection criteria for new QA solutions and tools, and almost 90% of respondents to this year’s survey said AI now formed the biggest growth area in their test activities. It seems pretty clear they feel smart technologies will increase cost-efficiency, reduce the need for manual testing, shorten time to market – and, most importantly of all, help to create and sustain a virtuous circle of continuous quality improvements.

It’s good that the authors acknowledge the very slow progress in this area, despite AI and ML being touted as the next big things for many years (by this report itself, as you’ll note from my review of the 2018/19 report). What’s sad is that almost all the respondents say that AI is the biggest growth area around testing, which is worrying when other parts of the report indicate more significant issues (e.g. a lack of good testing methodologies). The report’s interpretation of why so many organizations continue to be so invested in making AI work for them in testing is questionable, I think. What does “increased cost-efficiency” really mean? And is it the direct result of the “reduced need for manual testing”? The “virtuous circle of quality improvements” they mention currently looks more like a death spiral: reducing human interaction with the software before release, pushing poor quality software out to customers more often, seeing their complaints, fixing them, pushing fixes out more often, …

Budgets and cost containment

The next category is on budget/costs around testing and the report says:

The main focus remains manpower cost reduction, but some organizations are also looking at deriving the best out of their tool investments, moving test environments to the cloud, and looking for efficiencies from technologies including AI, machine learning and test automation.

There is again the claim about using AI/ML for “efficiency” gains and it’s a concern that reducing the number of people involved in QA/testing is seen as a priority. It should be clear by now that I believe humans are key in performing excellent testing and they cannot be replaced by tools, AI/ML or automation (though their capabilities can, of course, be extended by the use of these technologies).

Test automation

You’ll be pleased to know that automation is getting smarter:

The good thing we saw this year is that more and more practitioners are talking about in-sprint automation, about automation in all parts of QA lifecycle and not just in execution, and also about doing it smartly.

This raises the question of how automation was being tackled before, but let’s suppose organizations are being smarter about its use – even though this conclusion, to me at least, again makes it sound like more and more human interaction with the software is being deliberately removed under the banner of “smart automation”.

While the momentum could have been higher, the automation scores have mostly risen since last year. Also, the capability of automation tools being used seems to satisfy many organizations, but the signs are that the benefits aren’t being fully realized: only around a third of respondents (37%) felt they were currently getting a return on their investment. It really depends on how that return is being measured and communicated to the relevant stakeholders. Another factor may be that the tools are getting smarter, but the teams are not yet sufficiently skilled to take full advantage of them.

This paragraph sums up the confused world most organizations seem to be living in when it comes to the benefits (& limitations) expected of automation. While I’m no fan of the ROI concept applied to automation (or anything else in software development), this particular survey response indicates dissatisfaction with the benefits derived from automation investments in the majority of organizations. There are many potential reasons for this, including unrealistic expectations, poor tool selection and misunderstanding about what automation can and can’t do, but I had to smile when reading that last sentence, which could be restated as “the automation tools are just too smart for the humans to use”!

Test environment management (TEM) and test data management (TDM)

There was nothing much to say under this category, but this closing statement caught my eye:

It was also interesting to note that process and governance came out as a bigger challenge than technology in this area.

I think the same statement probably applies to almost everything we do in software development. We’re not – and never really have been – short of toys (technology & tools), but assembling groups of humans to play well with those toys and end up with something of value to customers has always been a challenge.

Key recommendations (pages 11-13)

The recommendations are structured basically around the same categories as the findings discussed above, in summary:

  • QA orchestration in agile and DevOps
    • Don’t silo responsibility for QA. Share it.
    • Spread the word
    • Be part of the business
    • Make room for dashboards
    • Listen more to users
  • Artificial intelligence and machine learning
    • Focus on what matters
    • Keep learning
    • Have a toolkit
    • Testing AI systems: have a strategy
  • Budgets and cost containment
    • Greater savings can be achieved by using test infrastructure smartly
    • Use advantages in analytics, AI, and machine learning to make testing smarter
    • Be prepared to pay well for smarter talent
    • Don’t put all key initiatives on hold. Strive to be more efficient instead
  • Test automation
    • Change the status quo
    • Think ahead
    • Choose the right framework
    • Balance automation against skills needs
    • Don’t think one size fits all
    • Get smart
  • Test environment management and test data management
    • Create a shared center of excellence for TEM/TDM
    • Get as much value as you can out of your tool investment
    • Have strong governance in place
  • Getting ready to succeed in a post-COVID world
    • Be better prepared for business continuity
    • Focus more on security
    • Don’t look at COVID-19 as a way to cut costs, but as an opportunity to transform
    • Continue to use the best practices adopted during the pandemic

There’s nothing too revolutionary here, with a lot of motherhood-type advice – and copious use of the word “smart”. One part did catch my eye, though, under “Change the status quo” for test automation:

Testing will always be squeezed in the software development lifecycle. Introducing more automation – and pursuing it vigorously – is the only answer.

This messaging is, in my opinion, misleading and dangerous. It reinforces the idea that testing is a separate activity from development and so can be “squeezed” while other parts of the lifecycle are not. It seems odd – and contradictory – to say this when so many of the report’s conclusions are about “inclusive QA” and whole team approaches. The idea that “more automation” is the “only answer” is highly problematic – there is no acknowledgement of context here, and adding more automation can often just lead to more of the same problems (or even introduce new and exciting ones), when part of the solution might be to re-involve humans in the lifecycle, especially when it comes to testing.

Current Trends in Quality Assurance and Testing (pages 14-49)

Almost half of the WQR is again dedicated to discussing current trends in QA and testing and some of the most revealing content is to be found in this part of the report. I’ll break down my analysis in the same way as the report (the ordering of these sections is curiously different to the ordering of the key findings and recommendations).

QA orchestration in agile and DevOps

I’ll highlight a few areas from this section of the report. Firstly, the topic of how much project effort is allocated to testing:

…in agile and DevOps models, 40% of our respondents said 30% of their overall project effort is allocated to testing…. there is quite a wide variation between individual countries: for instance, almost half of US respondents using agile (47%) said they do so for more than 30% of their overall test effort, whereas only 11% of Italian respondents and 4% of UK respondents said the same.

The question really makes no sense in truly agile teams, since testing activities will run alongside development and measuring how much of the total effort relates specifically to testing is nonsensical – and I suspect (i.e. hope) is not explicitly tracked by most teams. As such, the wide variation in response to this question is exactly what I’d expect; it might just be that it is the Italians and UK folks who are being honest in acknowledging the ridiculousness of this question.

There are a couple of worrying statistics next. 51% of respondents said they ‘always’ or ‘almost always’ aim to “maximize the automation of test”. Does this mean they’re willing to sacrifice in other areas (e.g. human interactions with the software) to achieve this? Why would they want to achieve this anyway? What did they believe the question meant by “the automation of test”?

Meanwhile, another (almost) half said they ‘always’ or ‘almost always’ “test less during development and focus more on quality monitoring/production test” (maybe the same half as in the above automation response?). I assume this is the “shift-right” brigade again, but I really don’t see this idea of shifting right (or left) as removing the need for human testing of the software before it gets to production (where I acknowledge that lots of helpful, cool monitoring and information gathering can also take place).

It was a little surprising to find that the most common challenge [in applying testing to agile development] (50%) was a reported difficulty in aligning appropriate tools for automated testing. However, this may perhaps be explained by the fact that 42% of respondents reported a lack of professional test expertise in agile teams – and so this lack of skills may explain the uncertainty about identifying and applying the right tools.

The fact that the most common challenge was related to technology (tools in this case) comes as no surprise, but it highlights how misguided most organizations are around agile in general. Almost half of the respondents acknowledge that a lack of test expertise in agile teams is a challenge, while half also say that tooling is the problem. This focus on technology over people comes up again and again in this report.

Which metrics are teams using to track applications [sic] quality? Code coverage by test was the most important indicator, with 53% of respondents saying they always or almost always use it. This is compliant with agile test pyramid good practices, although it can be argued that this is more of a development indicator, since it measures unit tests. Almost as high in response, with 51% saying always or almost always, was risk covered by test. This is particularly significant: if test strategy is based on risk, it’s a very good thing.

The good ol’ “test pyramid” had to make an appearance and here it is, elevated to the status of a compliance mechanism. At least they note that forming a test strategy around risk is “a very good thing”, but there’s little in the response to this question about tracking quality that refers to anything meaningful in terms of my preferred definition of quality (“value to some person (who matters)”).
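By way of illustration – and this is my own sketch, not something from the report – here’s a tiny, hypothetical Python example of how a check can give a coverage tool 100% of the lines in the code under test while saying almost nothing about whether that code delivers value to anyone:

```python
# Hypothetical production code: note the unguarded behaviour for any
# percentage over 100, which quietly produces a negative price.
def apply_discount(price: float, percent: float) -> float:
    return price - (price * percent / 100)


# A check that executes every line of apply_discount, so a coverage tool
# (e.g. coverage.py) reports 100% line coverage of it...
def test_apply_discount_returns_a_number():
    # ...yet the only thing asserted is that *some* float came back,
    # so the negative-price problem sails straight through.
    assert isinstance(apply_discount(100.0, 150.0), float)
```

Coverage tells us which code was executed, not whether anything of value was actually evaluated – which is why I’d treat it as a development indicator rather than a quality one.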

In closing this section of the report:

Out of all the insights relating to the agile and DevOps theme, perhaps the greatest surprise for us in this year’s survey was the response to a question about the importance of various criteria for successful agile and DevOps adoption. The technology stack was rated as essential or almost essential by 65% of respondents, while skill sets and organizational culture came in the bottom, with 34% and 28% respectively rating these highly. Operational and business priorities were rated highly by 41% of respondents.

How can this be? Maybe some respondents thought these criteria were a given, or maybe they interpreted these options differently. That’s certainly a possibility: we noted wide variations between countries in response to this question. For instance, the highest figure for skills needs was Poland, with 64%, while the lowest was Brazil, with just 5%. Similarly, for culture, the highest figure was Sweden, with 69%, and the lowest was once again Brazil, with only 2%. (Brazil’s perceived technology stack need was very high, at 98%.) The concept of culture could mean different things in different countries, and may therefore be weighted differently for people.

Regardless of what may have prompted some of these responses, we remain resolute in our own view that success in agile and DevOps adoption is predicated on the extent to which developments are business-driven. We derive this opinion not just from our own experience in the field, but from the pervasive sense of a commercial imperative that emerges from many other areas of this year’s report.

I would also expect culture to be ranked much higher (probably highest), but the survey responses don’t elevate it as a criterion for successful agile and DevOps adoption. The report’s authors suggest that this might be due to misunderstanding of the question (which is possible for this and every other question in the survey, of course) and then display their confirmation bias by drawing conclusions that are not supported by the survey data (and make sense of that closing sentence if you will!). My take is a little different – it seems to me that most organizations focus on solving technology issues rather than people ones, so it’s not too surprising that the technology stack is rated at the top.

Artificial intelligence and machine learning

There is a lot of talk about AI and ML in this report again (as in previous years) and it feels like these technologies are always the next big thing, yet never really become the next thing in any significant way (or, in the report’s more grandiose language: “…adoption and application have still not reached the required maturity to show visible results”).

The authors ask this question in this section:

Can one produce high-quality digital assets such as e-commerce, supply chain systems, and engineering and workface management solutions, without spending time and money assuring quality? In other words, can a system be tested without testing it? That may sound like a pipedream, but the industry has already started talking about developing systems and processes with intelligent quality engineering capabilities.

This doesn’t sound like a pipedream to me; it just sounds like complete nonsense. The industry may be talking about this, but largely via the vested interests in selling products and tools based on the idea that removing humans from testing activities is a worthy goal.

Almost nine out of ten respondents (88%) said that AI was now the strongest growth area of their test activities [testing with AI and testing of AI]

This is an interesting but relatively meaningless claim, given the very limited impact that AI has had so far on everyday testing activities in most organizations. It’s not clear from the report what this huge percentage of respondents are pursuing through this growth in the use of AI; maybe next year’s report will reveal that (but I strongly doubt it).

In wrapping up this section:

Even though the benefits may not yet be fully in reach, the vast majority of people are genuinely enthusiastic about the prospects for AI and ML. These smart technologies have real potential not just in cost-efficiency, in zero-touch testing, and in time to market, but in the most important way of all – and that is in helping to achieve continuous quality improvements.

There is a lot of noise and enthusiasm around the use of AI and ML, in the IT industry generally and not just testing. The danger here is in the expectations of benefits from introducing these technologies and the report fuels this fire by claiming potential to reduce costs, reduce/remove the need for humans in testing, and speed up development. Adopting AI and ML in the future may yield some of these benefits but the idea that doing so now (with such a lack of expertise and practical experience as outlined in this very report) will help “achieve continuous quality improvements” doesn’t make sense to me.

Test automation

This is always an interesting section of the report, and one stat hit me early on: only 37% of respondents agreed that “We get ROI from our automation efforts”. This is a pretty damning indictment of how automation projects are viewed and handled, especially in larger organizations. The authors note in response that “We feel moving towards scriptless automation tools may provide better return on investment in the long term” and I’m interested in why they said that.

In terms of the degree to which automation is being used:

…our respondents told us that around 15% of all testing was automated. Only 3% of them said they were automating 21% or more of their test activities.

These are quite low numbers considering the noise about “automating all testing” and using AI/ML to reduce or remove humans from the testing effort. Something doesn’t quite add up here.

Test data management and test environment management

This section of the report wasn’t very exciting, noting the fairly obvious increase in the use of cloud-based and containerized test environments. Some 29% of respondents reported still using on-premise hardware for their test environments, though.

Budgets and cost containment

The breakdown of QA budget made for interesting reading, with 45% going on hardware & infrastructure, 31% on tools, and just 25% on “human resources”. The fact that the big organizations surveyed for this report spend more on tools than on humans when it comes to QA/testing really says it all for me, as does:

We see greater emphasis being placed on reducing the human resources budget rather than the hardware and infrastructure budget.

While the overall allocation of total IT budget to QA remained similar to previous years (slowly declining year-on-year), this year’s report does at least recognize the “blurring boundaries” between QA and other functions:

It’s more difficult these days to track the movement of QA budgets specifically as an individual component. This is because the budget supports the overall team: the boundaries are blurring, and there is less delineation between different activities performed and the people who perform them in the agile environment.

Andy Armstrong (Head of Quality Assurance and Testing, Nordea Bank)

while the dedicated QA budget may show a downward trend, it’s difficult to ascertain how much of that budget is now consumed by the developers doing the testing.

The impact of COVID-19 and its implications on quality assurance activities in a post-pandemic world

This (hopefully!) one-off section of the report wasn’t particularly revealing for me, though I noted the stat that 74% of respondents said that “We need to automate more of QA and testing” in a post-pandemic world – as though these people need more reasons/excuses to ramp up automation!

Sector Analysis (pages 50-67)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors and discuss particular trends and challenges within each, with a strong focus on the impact of the COVID-19 pandemic. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy, utilities and chemicals
  • Financial services
  • Healthcare and life sciences
  • High-tech
  • Government and public sector
  • Telecoms, media and entertainment

Geography-specific reports

The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find anything worthy of special mention.

Before I go…

I respond to reports and articles of this nature in order to provide a different perspective, based on my opinion of what good software testing looks like as well as my experience in the industry. I provide such content as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting.

If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Common search engine questions about testing #1: Why is software testing important?

This is the first of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

The first cab off the rank is “Why is software testing important?” (with related questions being “why is software testing necessary?”, “why is software testing needed?”, “why is software testing important in software engineering?” and “why is software testing important in SDLC?”).

Let’s begin by looking at this from a different angle: how would teams/organisations behave if software testing wasn’t important to them? They’d probably try to cut the cost of it or find ways to justify not doing it at all (especially with expensive humans). They might devalue the people doing such work by compensating them differently to other team members, or look upon their work as a commodity to be performed by lowest-common-denominator staff (perhaps in a cheaper location). They would capitalize on their confirmation bias by appealing to the authority of the many articles and presentations claiming that “testing is dead”. They would ensure that testing is seen as a function separate from the rest of development, to enable their desire to remove it completely. They would view testing as a necessary evil.

Listening to the way some organisations and some parts of the software development community talk about testing, it’s common to see these indications that software testing just isn’t important to them. In trying to understand why this is so, I’ve come to believe that this largely stems from the software testing industry traditionally doing a poor job of articulating its value and not being clear on what it is that good testing actually provides. We’ve spent a long time working off the assumption that it’s obvious to people paying the bills that testing is important and necessary.

To be clear, my preferred definition of testing comes from Michael Bolton and James Bach, viz.

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.

I like this definition because it highlights all of the aspects of why testing is important to me, with its focus on interacting with the product, engaging in learning and exploration, and running experiments to help find out if the thing in front of me as a tester is the thing we wanted. It seems to me that this type of evaluation is important and would likely also be viewed as important by the business. However, if we sell the importance of testing based on providing turgid test reports of passed and failed test cases, it’s not too surprising that stakeholders view testing as being more of a costly nuisance than a valued and trusted advisor. Too often, I’ve seen the outputs of testing being focused on describing the testing approach, techniques, test cases run and bugs logged – in other words, we too often provide information about what we did and fail to tell a story about what we discovered during the process.

The reality is that most stakeholders (and certainly customers) don’t care about what you did as a tester, but they probably care about what you learned while doing it that can be valuable in terms of deciding whether we want to proceed with giving the product to customers. Learning to present testing outcomes in a language that helps consumers of the information to make good decisions is a real skill and one that is lacking in our industry. Talking about risk (be that product, project, business or societal) based on what we’ve learned during testing, for example, might be exactly what a business stakeholder is looking for in terms of value from that testing effort. In deliberately looking for problems that threaten the value of the product, there is more chance of finding them before they can impact our customers.

Another spanner in these works is the confusion caused by the common use of the term “automated testing”. It should be clear from the definition I presented above that testing is a deeply human activity, requiring key human skills such as the ability to subjectively experience using the product, make judgements about it and perform experiments against it. While the topic of “automated testing” will be covered in more depth in answering a later question in this blog series, I also wanted to briefly mention automation here to be clear when answering why software testing is important. In this context, I’m going to include the help and leverage we can gain by automation under the umbrella term of “software testing”, while reminding you that the testing itself cannot be automated since it requires distinctly human traits in its performance.

Let’s wrap up this post with a couple of reasons why I think software testing is important.

Software testing is important because:

  • We want to find out if there are problems that might threaten the value of the product, so that they can be fixed before the product reaches the customer.
  • We have a desire to know if the product we’ve built is the product we (and, by extension, our customers) wanted to build.
    • The machines alone can’t provide us with this kind of knowledge.
    • We can’t rely solely on the builders of the product either as they lack the critical distance from what they’ve built to find deep and subtle problems with it.

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks to my review team (Paul Seaman and Ky) for their helpful feedback on this post.

In response to Cigniti’s “The emerging trends in Software Testing and Quality Assurance” blog post

A LinkedIn post led me to a blog post from Cigniti, titled The emerging trends in Software Testing and Quality Assurance (published on 2nd November 2020).

These kinds of posts are particularly common in the later part of the year as predictions for the next big things for the year ahead start to surface. While I read such posts with no expectation of learning anything profound from them, I’m still generally disappointed by what they have to say. This latest effort from Cigniti continues that theme of disappointment.

The post notionally provides only three “emerging” trends, but the build-up to the big reveal is worth critiquing, starting with the opening gambit:

Software testing and Quality Assurance practices are on the ride to continuous evolution, improvement, and inclusion. Rather than being a separate function, QA is all set to become implicit to the development itself. For many software development teams, the process of making QA a part of the software development has already begun. This transition in the perception regarding QA from being a choice to a necessity is one of the most significant milestones that the IT sector has achieved in the evolutionary process. 

Removing the filler words and fluff, the message seems to be that “QA” (by which I’m going to assume they mean “testing”) is more a part of the development process than a separate “function”. The idea that testing has been seen as a choice in the recent past but is now seen as a necessity doesn’t marry with my experience in the industry at all. With the move to more agile ways of working, most teams I’ve encountered in the last few years have made their testing efforts part of their development practice with embedded testers (or whatever label they have) rather than having separate testing teams (or the surely now deprecated idea of “Testing Centres of Excellence”).

Next up, the blog makes an appeal to authority – and authorities don’t come much more credible than Gartner (right?):

Gartner, in the ‘Top Strategic Technology Trends for 2021’ report, has emphasized the need for people centricity

It’s worth making a mental note here about “people centricity” in readiness for the emerging trends that come later.

A number of big claims are made next:

Having a streamlined QA function eliminates the bottlenecks that may hinder timely code releases into production, resulting in a better satisfaction ratio among the intra-organizational teams. A high-quality code release translates into fewer defects into production and improved end user experience. An increased customer satisfaction level offers increased ROI to the organization.

There is no evidence provided in the post or in citations to support:

  • “Having a streamlined QA function eliminates the bottlenecks…” – the implication here appears to be that it’s QA that hinders “timely” releases, whereas the reality is often quite different in my experience. There are many potential bottlenecks and obstacles to production releases given the uncertainty and changing environment of most software development projects.
  • “… resulting in a better satisfaction ratio among the intra-organizational teams” – maybe I’ve missed the idea of “satisfaction ratio” coming into the industry, but I really don’t understand what they’re saying or how this claim can be backed up.
  • “A high-quality code release translates into fewer defects into production and improved end user experience” – there is no indication here about what is meant by quality (nor an acknowledgement that “quality” can mean very different things to different people in different contexts in different organizations) but connecting “high-quality code” to “fewer defects in production and improved end user experience” seems to me to be highly questionable.
  • “An increased customer satisfaction level offers increased ROI to the organization” – I cringe when I see mention of “ROI” around QA/testing and I cringed on cue here. The start of the very same blog post suggested that QA/testing is a necessity and not a choice, so why talk about its ROI? We don’t hear demands to increase the ROI of development or management or support, so why attempt to measure it for another necessary aspect of software development? (Note that Paul Seaman blogged on the use of terms from economics in testing recently, though not specifically on ROI.) And how exactly does “increased customer satisfaction” arise from a “streamlined QA function”? Customer satisfaction is again a many-faceted thing, so suggesting that a change to the way testing is integrated into the development process necessarily increases customer satisfaction oversimplifies what is a complex and deeply contextual rating.

We finally get to the three “emerging trends that will shape the [QA] function in the coming year and beyond”, starting with “Scriptless test automation”:

Test automation has been one of the top software testing trends for past few years. With a wider acceptance of test automation within the SDLC, there also comes the realization for constantly optimizing it for fulfilling the evolving requirements. One of the major challenges that organizations have been facing in adopting test automation has been the lack of skilled test automation resources for test script maintenance. Scriptless test automation is enabling organizations to overcome this challenge and still have efficient test cases for automating software testing.

Responding to this part alone could form its own post, but this is a perfect example of the kind of misguided thinking and advice that is so prevalent when it comes to the topic of automation. They are reinforcing the idea that automation is a trend but all that pesky code that’s required to tell the machines what to do is just too hard to maintain. It seems to be a hard message to get across, but creating automated checks is software development – and needs to be treated as such. The problem isn’t a “lack of skilled test automation resources for test script maintenance” as much as using people not skilled in programming to write the test code, in my opinion. While there is some value in scriptless automation technologies, I think their importance is generally overestimated at the same time as the effort involved in crafting genuinely maintainable and valuable automated check code is underestimated. (And, yes, I noted that they closed out this trend with the phrase “automating software testing”, as though that’s a thing.)
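To make that point a little more concrete, here’s a small, entirely hypothetical Python sketch (the names and the FakeDriver stand-in are mine, invented purely for illustration) of what even a simple automated check depends on: an abstraction layer that has to be designed, reviewed and maintained just like any other production code:

```python
import pytest


class CheckoutPage:
    """Hypothetical page abstraction: locators and actions live in one place,
    so UI changes are absorbed here rather than in every individual check."""

    def __init__(self, driver):
        self.driver = driver

    def add_item(self, sku: str) -> None:
        self.driver.click(f"button[data-sku='{sku}']")

    def total(self) -> float:
        return float(self.driver.read_text("#order-total").strip("$"))


class FakeDriver:
    """Stand-in for a real browser driver, so this sketch runs on its own."""

    def __init__(self):
        self._total = 0.0

    def click(self, selector: str) -> None:
        self._total += 19.99  # pretend every SKU costs the same

    def read_text(self, selector: str) -> str:
        return f"${self._total:.2f}"


def test_single_item_total():
    page = CheckoutPage(FakeDriver())
    page.add_item("ABC-123")
    assert page.total() == pytest.approx(19.99)
```

When the product’s UI or behaviour changes, it’s code like CheckoutPage that has to change with it. “Scriptless” tools may hide that code from view, but the design and maintenance problem it represents doesn’t disappear – which is why treating automation as software development, done by people with programming skills, matters.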

The next trend is “The marriage of AI, ML and QA”:

Introducing AI and ML into Quality Assurance process allows an organization to get out of the ‘Test Automation trap’, which can be explained as – “Test automation trap is when the test teams are not getting enough time to be able to do the failure triage from the previous test run before building the next test automation code.” 

It wouldn’t be a trend piece without mentioning “AI” and “ML”, of course. The fact that there are three parties in this marriage should be a worrying sign in itself, but really this idea that AI and ML will significantly impact the way the majority of organizations perform testing anytime soon just doesn’t stack up. Looking at the common questions on LinkedIn, for example, most organizations are still struggling with such basic aspects of testing and quality management that introducing AI and ML would probably be more disruptive than helpful. I’ve not heard of this “Test Automation Trap” before either, which to me sounds like an issue of poorly written tests or an acceptance of many “test failures” as the norm. Using AI/ML to tackle the issue of failure triage seems to be coming at the problem from the wrong end.

The final trend is “Continuous integration for continuous quality”:

With the help of DevOps, about 59% of organizations are now deploying multiple times a day, once a day, or once every few days. For these organizations, the code quality has been one of the biggest benefits of embracing DevOps.  

The CI/CD pipeline combined with test automation has done wonders for organization in terms of the quality of their releases. Not only has it positioned QA as an imperative instead of a bottleneck, it has resulted in higher returns as well. At present, only a handful of organizations have deployed bots to review their code, but this number is expected to go up as we move further into the future. 

Continuous integration is hardly a new thing and, in this “trend” towards “continuous quality”, there is conflation of the terms continuous integration, continuous delivery and DevOps. This is a common problem in the current discourse around these subjects, unfortunately. Another common problem is the idea that more frequent deployments necessarily lead to improved quality. The use of CI/CD pipelines and more automation doesn’t in any way guarantee higher quality deliverables; in fact, it all too frequently seems to end up delivering poor quality more often. The post again uses the term “bottleneck” and implies that these more automated delivery mechanisms remove the bottleneck, but the human element of critically evaluating what’s been built before it’s delivered is implicitly being devalued here, and is increasingly seen as an element to be minimized or removed. I’m not sure what the code review bots referenced here are about, but the idea again stems from the notion that we’re better off having machines do this kind of critical analysis work rather than humans, so as to remove “bottlenecks” to deploying software (with little or no human critique along the way).

It seems to me that all of these trends are essentially centred on increasing our reliance on machines as part of our testing and quality practice, which seems to contradict the “people centricity” trend cited from Gartner at the top of the post. I can’t recall any recent trend identified in one of these reports that focuses on improving the testing skills of humans. If I were being cynical, I’d think that there’s more money to be made in identifying trends around which vendors can build tools for profit…

There was nothing of substance in Cigniti’s post that would prove useful to someone in testing with a genuine interest in learning where our craft is headed in the years ahead. I wonder if anyone is interested in a contrarian article about “trends”, something like “The folly of following trends” perhaps?

If the thoughts expressed in this post resonate with you, my new consultancy might be able to help you with improving your testing and quality practices – please see www.drleeconsulting.com.au for more details of my offering.