Reviewing Capgemini’s “World Quality Report 2020-21”

As I noted in another recent blog post, it’s that time of year again when predictions get made for the year ahead as well as reviews of the year. Right on cue, Capgemini released the latest edition of their annual World Quality Report to cover 2020 and 2021.

I reviewed the 2018/19 edition of their report in depth and I thought it worth reviewing the new report to compare and contrast it with the one from two years ago.

TL;DR

The findings and recommendations in the 2020/21 edition of this report are really very similar to those in the 2018/19 report. It appears that the survey respondents are drawn from a very similar pool, and the lack of responses from smaller organisations means that the results are heavily skewed towards very large corporate environments.

There’s still plenty of talk about “more automation” and of the importance of AI/ML in revolutionizing QA/testing (there is again no real differentiation or definition of the difference between testing and QA from the authors). There is almost no talk of “testing” (as I understand it) in the report; instead, there is a heavy focus on agile, DevOps, automation, AI, ML, etc., and removing or reducing human involvement in testing seems to be a clear goal. The fact that the report is co-authored by Micro Focus, a tool vendor, may have something to do with this direction. Agile and DevOps are almost always referred to together in the report, as though they are the same thing or one depends on the other.

I would have liked to see some deep questions around testing practice in the survey, to learn more about what’s going on in terms of human testing in these large organizations, but alas such questions were nowhere to be seen.

There is evidence of confirmation bias by the report’s authors throughout. When the survey results confirm what they expect, there is little questioning of the potential for the questions to have been misunderstood. However, in cases where the results don’t confirm what they expected, many reasons are suggested for why this might be so. This severely undermines the report’s credibility for me and raises the question of why they survey so many organisations and ask so many questions if the idea is ultimately to deliver the recommendations that the authors and their sponsors are looking for.

I’m obviously not really the target audience for these corporate-type reports, and I can imagine the content being quite demoralizing for testers reading it if their organisations appear to be so far behind what these large organisations are claiming to be doing. I would suggest not believing the hype, doing your own critical thinking, and taking the conclusions from all such surveys and reports with a pinch of salt.

The survey (pages 68-71)

This year’s report comes in at a hefty 76 pages (so a few pages heavier than the 2018/19 edition). I again chose to look first at where the data came from to build the report, which is presented towards the end. The survey size was 1,750 (compared to 1,700 in 2018/19) and the organizations taking part were again all of over 1,000 employees, with the largest number coming from organizations of over 10,000 employees. The response breakdown by organizational size was within a percentage point of the 2018/19 report in every band, so it seems likely that it’s really the same organizations contributing every time. The lack of input from smaller organizations is a concern, as I imagine smaller, more nimble organizations might actually be where the innovation and genuine change in the way testing is performed comes from.

The survey had a good spread of countries & regions as well as industry sectors (with the top three sectors accounting for almost half of the responses, viz. financial services, public sector/government, and telecommunications). The types of people who provided survey responses this year bear a startling resemblance to those in the 2018/19 report – in terms of job title breakdown (2020/21 vs. 2018/19), they were grouped as follows: CIO (25% vs. 27%), IT director (20% vs. 22%), QA/Testing Manager (19% vs. 20%), VP Applications (17% vs. 18%), CMO/CDO (7% in both cases), CTO/Product Head (6% in both cases) and VP/Director of R&D (6% in 2020/21 only). These striking similarities in the data again lead me to the conclusion that the report relies on the same people in the same organizations providing responses year on year.

Introduction (pages 4-5)

In his introduction, Mark Buenen of Capgemini (page 4) notes that

the use of test automation is growing, as for most organizations it is still not at the level required

which seems to suggest there is some level of automation at which these organizations will cease to look for other ways to leverage automation. He also says

It is reassuring that 69% of the organizations interviewed in this survey feel they always or virtually always meet their quality goals

I’m not sure what is reassuring about this statistic. Given the senior folks interviewed for this survey, I wonder how many of these places actually have clearly defined quality goals and how they go about measuring whether they meet (or “virtually always meet”!) these goals. Another interesting point Mark makes is that

One of the main challenges noted this year is the lack of the right testing methodologies for teams, as reported by 55% of respondents.

I wonder what the “right testing methodologies” being sought are. Are organizations looking for a silver bullet “testing methodology” to solve their quality problems? In his introduction, Raffi Margaliot of Micro Focus (page 5) says

This year’s WQR shows that QA has transitioned from being an independent function in a separate team, towards becoming an integrated part of the software delivery team, with responsibilities reaching beyond testing and finding defects. QA engineers are now charged with enabling the entire team to achieve their quality objectives, and incorporating better engineering practices and state-of-the-art techniques such as AI and ML to achieve these aims.

The move to embedding responsibility for testing and quality within development teams began so long ago that it seems nonsensical to still be talking about it as an improvement. The idea that AI and ML play a big part in how whole-team approaches to quality are implemented is popular, especially with the CTO/CIO types interviewed for reports like this, but I still believe the reality within most development teams is very different. We’ll return to the actual evidence to support these claims as we examine the detail of the report below.

Executive Summary (pages 6-8)

The most interesting part of the summary for me was the commentary around the answers to the survey question about the “Objectives of Quality Assurance and Testing in the organization”.

Apparently 62% of survey respondents said that this was an objective of QA and testing: “Automate: Make QA and Testing a smarter automated process”. The implication here is that automation is smarter than what isn’t automated and that moving away from human involvement in testing is seen as a good thing. I still don’t understand why organizations are confused about the fact that testing cannot be automated, but obviously the lure of the idea and the spruiking of vendors suggesting otherwise (including the co-sponsor of this report, Micro Focus) are both very strong factors.

Some 60% of respondents said “Quality Enablement: Support everybody in the team to achieve higher quality” was an objective. The “whole team approach to quality” idea is nothing new and seems to be ubiquitous in most modern software development organizations anyway. The report commentary around this in the Summary is pretty extraordinary and it would be remiss of me not to quote it in its full glory:

It won’t be an exaggeration if we say that out of all the software disciplines, QA has witnessed the most rapid transformation. QA has been steadily evolving – from an independent function to an integrated function, and now to an inclusive function. Also, the role of the QA practitioner is transforming from testing and finding defects, to ensuring that other engineering team members inculcate quality in their way of working. They need to do this by enabling them and by removing impediments on their way to achieving quality objectives.

Actually, I think it is an exaggeration: QA/testing has moved pretty slowly over the years, and “steadily evolving” is closer to the mark. It wouldn’t be a report of this type if it didn’t mention shifting left and shifting right, and the authors don’t disappoint:

QA is not only shifting left but also moving right. We see more and more enterprises talk about exploratory testing, chaos engineering, and ensuring that the product is experienced the way end users will experience it in real life before releasing it to the market.

Testers must be dizzy these days with all this shifting to the left and also shifting to the right. I wonder what testing is left in the “middle” now; you know, the sort of testing where a human interacts with the product we’ve built to look for deep and subtle problems that might threaten its value to customers. This is where I’d imagine most exploratory testing efforts would sit and, while the report notes that more organizations are talking about exploratory testing, my feeling is that they’re talking about something quite different than what excellent practitioners of this approach mean by exploratory testing.

Key findings (pages 9-10)

QA orchestration in agile and DevOps

In this first category of findings, the report claims:

The adoption of agile and DevOps is steadily increasing, resulting in QA teams becoming orchestrators of quality.

While I don’t doubt that more and more organizations claim to be using agile and DevOps (even without any broad consensus on what either of those terms has come to mean), it sounds like they still expect some group (i.e. “QA”) to kind of arrange for the quality to happen. The idea of “full-stack QA” comes next:

We see a trend towards wanting QA engineers who have developer type skills, who yet retain their quality mindset and business-cum-user centricity.

Is this expecting too much? Yes, we think so. Only a few QA professionals can have all these skills in their repertoire. That’s why organizations are experimenting with the QA operational structure, with the way QA teams work, and with the skill acquisition and training of QA professionals.

I agree with the report on this point: there’s still a key role for excellent human testers even if they can’t write code. This now seems to be a contrarian viewpoint in our industry, and the “moon on a stick” desire for testers who can perform excellent testing, find problems and fix them, then write the automated checks for them and also do the production monitoring of the deployed code feels like incredibly wishful thinking; it is not helpful to the discussion of advancing genuine testing skills.

Artificial intelligence and machine learning

The report is surprisingly honest and realistic here:

Expectations of the benefits that AI and ML can bring to quality assurance remain high, but while adoption is on the increase, and some organizations are blazing trails, there are few signs of significant general progress.

Nonetheless, enthusiasm hasn’t diminished: organizations are putting AI high among their selection criteria for new QA solutions and tools, and almost 90% of respondents to this year’s survey said AI now formed the biggest growth area in their test activities. It seems pretty clear they feel smart technologies will increase cost-efficiency, reduce the need for manual testing, shorten time to market – and, most importantly of all, help to create and sustain a virtuous circle of continuous quality improvements.

It’s good that the authors acknowledge the very slow progress in this area, despite AI and ML being touted as the next big things for many years (by this report itself, as you’ll note from my review of the 2018/19 report). What’s sad is that almost all the respondents say AI is the biggest growth area around testing, which is worrying when other parts of the report indicate more significant issues (e.g. a lack of good testing methodologies). The report’s interpretation of why so many organizations continue to be so invested in making AI work for them in testing is questionable, I think. What does “increased cost-efficiency” really mean? And is it the direct result of the “reduced need for manual testing”? The “virtuous circle of quality improvements” they mention currently looks more like a death spiral: reducing human interaction with the software before release, pushing poor quality software out to customers more often, seeing their complaints, fixing them, pushing fixes out more often, …

Budgets and cost containment

The next category is on budget/costs around testing and the report says:

The main focus remains manpower cost reduction, but some organizations are also looking at deriving the best out of their tool investments, moving test environments to the cloud, and looking for efficiencies from technologies including AI, machine learning and test automation.

There is again the claim about using AI/ML for “efficiency” gains and it’s a concern that reducing the number of people involved in QA/testing is seen as a priority. It should be clear by now that I believe humans are key in performing excellent testing and they cannot be replaced by tools, AI/ML or automation (though their capabilities can, of course, be extended by the use of these technologies).

Test automation

You’ll be pleased to know that automation is getting smarter:

The good thing we saw this year is that more and more practitioners are talking about in-sprint automation, about automation in all parts of QA lifecycle and not just in execution, and also about doing it smartly.

This raises the question of how automation was being tackled before, but let’s suppose organizations are being smarter about its use – even though this conclusion, to me at least, again makes it sound like more and more human interaction with the software is being deliberately removed under the banner of “smart automation”.

While the momentum could have been higher, the automation scores have mostly risen since last year. Also, the capability of automation tools being used seems to satisfy many organizations, but the signs are that the benefits aren’t being fully realized: only around a third of respondents (37%) felt they were currently getting a return on their investment. It really depends on how that return is being measured and communicated to the relevant stakeholders. Another factor may be that the tools are getting smarter, but the teams are not yet sufficiently skilled to take full advantage of them.

This paragraph sums up the confused world most organizations seem to be living in when it comes to the benefits (& limitations) expected of automation. While I’m no fan of the ROI concept applied to automation (or anything else in software development), this particular survey response indicates dissatisfaction with the benefits derived from automation investments in the majority of organizations. There are many potential reasons for this, including unrealistic expectations, poor tool selection, misunderstanding of what automation can and can’t do, etc., but I had to smile when reading that last sentence, which could be restated as “the automation tools are just too smart for the humans to use”!

Test environment management (TEM) and test data management (TDM)

There was nothing much to say under this category, but this closing statement caught my eye:

It was also interesting to note that process and governance came out as a bigger challenge than technology in this area.

I think the same statement probably applies to almost everything we do in software development. We’re not – and never really have been – short of toys (technology & tools), but assembling groups of humans to play well with those toys and end up with something of value to customers has always been a challenge.

Key recommendations (pages 11-13)

The recommendations are structured around basically the same categories as the findings discussed above. In summary:

  • QA orchestration in agile and DevOps
    • Don’t silo responsibility for QA. Share it.
    • Spread the word
    • Be part of the business
    • Make room for dashboards
    • Listen more to users
  • Artificial intelligence and machine learning
    • Focus on what matters
    • Keep learning
    • Have a toolkit
    • Testing AI systems: have a strategy
  • Budgets and cost containment
    • Greater savings can be achieved by using test infrastructure smartly
    • Use advantages in analytics, AI, and machine learning to make testing smarter
    • Be prepared to pay well for smarter talent
    • Don’t put all key initiatives on hold. Strive to be more efficient instead
  • Test automation
    • Change the status quo
    • Think ahead
    • Choose the right framework
    • Balance automation against skills needs
    • Don’t think one size fits all
    • Get smart
  • Test environment management and test data management
    • Create a shared center of excellence for TEM/TDM
    • Get as much value as you can out of your tool investment
    • Have strong governance in place
  • Getting ready to succeed in a post-COVID world
    • Be better prepared for business continuity
    • Focus more on security
    • Don’t look at COVID-19 as a way to cut costs, but as an opportunity to transform
    • Continue to use the best practices adopted during the pandemic

There’s nothing too revolutionary here, with a lot of motherhood-style advice – and copious use of the word “smart”. One part did catch my eye, though, under “Change the status quo” for test automation:

Testing will always be squeezed in the software development lifecycle. Introducing more automation – and pursuing it vigorously – is the only answer.

This messaging is, in my opinion, misleading and dangerous. It reinforces the idea that testing is a separate activity from development and so can be “squeezed” while other parts of the lifecycle are not. It seems odd – and contradictory – to say this when so many of the report’s conclusions are about “inclusive QA” and whole-team approaches. The idea that “more automation” is the “only answer” is highly problematic – there is no acknowledgement of context here, and adding more automation can often just lead to more of the same problems (or even introduce exciting new ones), when part of the solution might be to re-involve humans in the lifecycle, especially when it comes to testing.

Current Trends in Quality Assurance and Testing (pages 14-49)

Almost half of the WQR is again dedicated to discussing current trends in QA and testing and some of the most revealing content is to be found in this part of the report. I’ll break down my analysis in the same way as the report (the ordering of these sections is curiously different to the ordering of the key findings and recommendations).

QA orchestration in agile and DevOps

I’ll highlight a few areas from this section of the report. Firstly, the topic of how much project effort is allocated to testing:

…in agile and DevOps models, 40% of our respondents said 30% of their overall project effort is allocated to testing…. there is quite a wide variation between individual countries: for instance, almost half of US respondents using agile (47%) said they do so for more than 30% of their overall test effort, whereas only 11% of Italian respondents and 4% of UK respondents said the same.

The question really makes no sense in truly agile teams, since testing activities run alongside development and measuring how much of the total effort relates specifically to testing is nonsensical – and I suspect (i.e. hope) it is not explicitly tracked by most teams. As such, the wide variation in responses to this question is exactly what I’d expect; it might just be that the Italians and UK folks are the ones being honest in acknowledging the ridiculousness of the question.

There are a couple of worrying statistics next: 51% of respondents said they ‘always’ or ‘almost always’ aim to “maximize the automation of test”. Does this mean they’re willing to sacrifice other areas (e.g. human interactions with the software) to achieve this? Why would they want to achieve this anyway? And what did they take the question to mean by “the automation of test”?

Meanwhile, another (almost) half said they ‘always’ or ‘almost always’ “test less during development and focus more on quality monitoring/production test” (maybe the same half as in the above automation response?). I assume this is the “shift-right” brigade again, but I really don’t see this idea of shifting right (or left) as removing the need for human testing of the software before it gets to production (where I acknowledge that lots of helpful, cool monitoring and information gathering can also take place).

It was a little surprising to find that the most common challenge [in applying testing to agile development] (50%) was a reported difficulty in aligning appropriate tools for automated testing. However, this may perhaps be explained by the fact that 42% of respondents reported a lack of professional test expertise in agile teams – and so this lack of skills may explain the uncertainty about identifying and applying the right tools.

The fact that the most common challenge was related to technology (tools in this case) comes as no surprise, but it highlights how misguided most organizations are around agile in general. Almost half of the respondents acknowledge that a lack of test expertise in agile teams is a challenge, while half also say that tooling is the problem. This focus on technology over people comes up again and again in this report.

Which metrics are teams using to track applications [sic] quality? Code coverage by test was the most important indicator, with 53% of respondents saying they always or almost always use it. This is compliant with agile test pyramid good practices, although it can be argued that this is more of a development indicator, since it measures unit tests. Almost as high in response, with 51% saying always or almost always, was risk covered by test. This is particularly significant: if test strategy is based on risk, it’s a very good thing.

The good ol’ “test pyramid” had to make an appearance and here it is, elevated to the status of a compliance mechanism. At least they note that forming a test strategy around risk is “a very good thing”, but there’s little in the response to this question about tracking quality that refers to anything meaningful in terms of my preferred definition of quality (“Value to some person (who matters)”).

In closing this section of the report:

Out of all the insights relating to the agile and DevOps theme, perhaps the greatest surprise for us in this year’s survey was the response to a question about the importance of various criteria for successful agile and DevOps adoption. The technology stack was rated as essential or almost essential by 65% of respondents, while skill sets and organizational culture came in the bottom, with 34% and 28% respectively rating these highly. Operational and business priorities were rated highly by 41% of respondents.

How can this be? Maybe some respondents thought these criteria were a given, or maybe they interpreted these options differently. That’s certainly a possibility: we noted wide variations between countries in response to this question. For instance, the highest figure for skills needs was Poland, with 64%, while the lowest was Brazil, with just 5%. Similarly, for culture, the highest figure was Sweden, with 69%, and the lowest was once again Brazil, with only 2%. (Brazil’s perceived technology stack need was very high, at 98%.) The concept of culture could mean different things in different countries, and may therefore be weighted differently for people.

Regardless of what may have prompted some of these responses, we remain resolute in our own view that success in agile and DevOps adoption is predicated on the extent to which developments are business-driven. We derive this opinion not just from our own experience in the field, but from the pervasive sense of a commercial imperative that emerges from many other areas of this year’s report.

I would also expect culture to be ranked much higher (probably highest), but the survey responses don’t rate it so highly as a criterion for successful agile and DevOps adoption. The report’s authors suggest that this might be due to misunderstanding of the question (which is possible for this and every other question in the survey, of course) and then display their confirmation bias by drawing their own conclusions that are not supported by the survey data (and make sense of that closing sentence if you will!). My take is a little different – it seems to me that most organizations focus on solving technology problems rather than people problems, so it’s not too surprising that the technology stack is rated at the top.

Artificial intelligence and machine learning

There is a lot of talk about AI and ML in this report again (as in previous years) and it feels like these technologies are always the next big thing, yet never really become the next big thing in any significant way (or, in the report’s more grandiose language: “…adoption and application have still not reached the required maturity to show visible results”).

The authors ask this question in this section:

Can one produce high-quality digital assets such as e-commerce, supply chain systems, and engineering and workface management solutions, without spending time and money assuring quality? In other words, can a system be tested without testing it? That may sound like a pipedream, but the industry has already started talking about developing systems and processes with intelligent quality engineering capabilities.

This doesn’t sound like a pipedream to me; it just sounds like complete nonsense. The industry may be talking about this, but largely via those with vested interests in selling products and tools based on the idea that removing humans from testing activities is a worthy goal.

Almost nine out of ten respondents (88%) said that AI was now the strongest growth area of their test activities [testing with AI and testing of AI]

This is an interesting claim but a relatively meaningless one, given the very limited impact that AI has had so far on everyday testing activities in most organizations. It’s not clear from the report what this huge percentage of respondents are actually pursuing with this growth in the use of AI; maybe next year’s report will reveal that (but I strongly doubt it).

In wrapping up this section:

Even though the benefits may not yet be fully in reach, the vast majority of people are genuinely enthusiastic about the prospects for AI and ML. These smart technologies have real potential not just in cost-efficiency, in zero-touch testing, and in time to market, but in the most important way of all – and that is in helping to achieve continuous quality improvements.

There is a lot of noise and enthusiasm around the use of AI and ML in the IT industry generally, not just in testing. The danger here lies in the expectations of benefits from introducing these technologies, and the report fuels this fire by claiming potential to reduce costs, reduce or remove the need for humans in testing, and speed up development. Adopting AI and ML in the future may yield some of these benefits, but the idea that doing so now (with such a lack of expertise and practical experience, as outlined in this very report) will help “achieve continuous quality improvements” doesn’t make sense to me.

Test automation

This is always an interesting section of the report and one stat hit me early on: only 37% of respondents agreed that “We get ROI from our automation efforts”. This is a pretty damning indictment of how automation projects are viewed and handled, especially in larger organizations. The authors note in response that “We feel moving towards scriptless automation tools may provide better return on investment in the long term” and I’d be interested to know why they said that.

In terms of the degree to which automation is being used:

…our respondents told us that around 15% of all testing was automated. Only 3% of them said they were automating 21% or more of their test activities.

These are quite low numbers considering the noise about “automating all testing” and using AI/ML to reduce or remove humans from the testing effort. Something doesn’t quite add up here.

Test data management and test environment management

This section of the report wasn’t very exciting, noting the fairly obvious increase in the use of Cloud-based and containerized test environments. 29% of respondents reported still using on-premise hardware for their test environments, though.

Budgets and cost containment

The breakdown of QA budget made for interesting reading, with 45% going on hardware & infrastructure, 31% on tools, and just 25% on “human resources”. The fact that the big organizations surveyed for this report spend more on tools than on humans when it comes to QA/testing really says it all for me, as does:

We see greater emphasis being placed on reducing the human resources budget rather than the hardware and infrastructure budget.

While the overall allocation of total IT budget to QA remained similar to previous years (slowly declining year-on-year), this year’s report does at least recognize the “blurring boundaries” between QA and other functions:

It’s more difficult these days to track the movement of QA budgets specifically as an individual component. This is because the budget supports the overall team: the boundaries are blurring, and there is less delineation between different activities performed and the people who perform them in the agile environment.

As Andy Armstrong (Head of Quality Assurance and Testing, Nordea Bank) puts it in the report:

while the dedicated QA budget may show a downward trend, it’s difficult to ascertain how much of that budget is now consumed by the developers doing the testing.

The impact of COVID-19 and its implications on quality assurance activities in a post-pandemic world

This (hopefully!) one-off section of the report wasn’t particularly revealing for me, though I noted the stat that 74% of respondents said that “We need to automate more of QA and testing” in a post-pandemic world – as though these people need more reasons/excuses to ramp up automation!

Sector Analysis (pages 50-67)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors and discuss particular trends and challenges within each, with a strong focus on the impact of the COVID-19 pandemic. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy, utilities and chemicals
  • Financial services
  • Healthcare and life sciences
  • High-tech
  • Government and public sector
  • Telecoms, media and entertainment

Geography-specific reports

The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find anything worthy of special mention in it.

Before I go…

I respond to reports and articles of this nature in order to provide a different perspective, based on my opinion of what good software testing looks like as well as my experience in the industry. I provide such content as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting.

If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
