Reviewing Capgemini’s “World Quality Report 2022-23”

It’s that time of year again and I’ve gone through the pain of reviewing the latest edition of Capgemini’s annual World Quality Report (to cover 2022/23) so you don’t have to.

I reviewed both the 2018/19 and 2020/21 editions of their report in some depth in previous blog posts and I’ll take the same approach to this year’s effort, comparing and contrasting it with the previous two reports. Although this review might seem lengthy, it’s a mere summary of the 80 pages of the full report!

TL;DR

The survey results in this year’s report are more of the same really and I don’t feel like I learned a great deal about the state of testing from wading through it. My lived reality working with organizations to improve their testing and quality practices is quite different to the sentiments expressed in this report.

It’s good to see the report highlighting sustainability issues, a topic that hasn’t received much coverage yet but will become more of an issue for our industry I’m sure. The way we design, build and deploy our software has huge implications for its carbon footprint, both before release and for its lifetime in production usage.

The previous reports I reviewed were very focused on AI & ML, but these topics barely get a mention this year. I don’t think the promise of these technologies has been realised at large in the testing industry and maybe the lack of focus in the report reflects that reality.

It appears that the survey respondents are drawn from a very similar pool to previous reports, and the lack of responses from smaller organizations means that the results are heavily skewed towards very large corporate environments.

I would have liked to see some deep questions around testing practice in the survey to learn more about what’s going on in terms of human testing in these large organizations, but alas there was no such questioning here (and these organizations seem to be less forthcoming with this information via other avenues too, unfortunately).

The visualizations used in the report are very poor. They look unprofessional, the use of multiple different styles is unnecessary and many are hard to interpret (as evidenced by the fact that the authors saw fit to include text explanations of what you’re looking at on many of these charts).

I reiterate my advice from last year – don’t believe the hype, do your own critical thinking and take the conclusions from surveys and reports like this with a (very large) grain of salt. Keep an interested eye on trends but don’t get too attached to them and instead focus on building excellent foundations in the craft of testing that will serve you well no matter what the technology du jour happens to be.

The survey (pages 72-75)

This year’s report runs to 80 pages, continuing the theme of being slightly thicker each year. I looked at the survey description section of the report first as it’s important to get a picture of where the data came from to build the report and support its recommendations and conclusions.

The survey size was 1750, suspiciously exactly the same number as for the 2020/21 report. The organizations taking part again all had over 1000 employees, with the largest share of responses (35%) coming from organizations of over 10,000 employees. The response breakdown by organizational size was very similar to that of the previous two reports, reinforcing the concern that the same organizations are contributing every time. The lack of input from smaller organizations unfortunately continues.

While responses came from 32 countries, they were heavily skewed to North America and Western Europe, with the US alone contributing 16% and then France with 9%. Industry sector spread was similar to past reports, with “High Tech” (18%) and “Financial Services” (15%) topping the list.

The types of people who provided survey responses this year were also very similar to previous reports, with CIOs at the top again (24% here vs. 25% last year), followed by QA Testing Managers and IT Directors. These three roles comprised over half (59%) of all responses.

Introduction (pages 4-5)

There’s a definite move towards talking about Quality Engineering in this year’s report (though it’s a term that’s not explicitly defined anywhere) and the stage is set right here in the Introduction:

We also heartily agree with the six pillars of Quality Engineering the report documents: orchestration, automation, AI, provisioning, metrics, and skill. Those are six nails in the coffin of manual testing. After all, brute force simply doesn’t suffice in the present age.

So, the talk of the death of manual testing (via a coffin reference for a change) continues, but let’s see if this conclusion is backed up by any genuine evidence in the survey’s findings.

Executive Summary (pages 6-7)

The idea of a transformation occurring from Quality Assurance (QA) to Quality Engineering (QE) is the key message again in the Executive Summary, set out via what the authors consider their six pillars of QE:

  1. Agile quality orchestration
  2. Quality automation
  3. Quality infrastructure testing and provisioning
  4. Test data provisioning and data validation
  5. The right quality indicators
  6. Increasing skill levels

In addition to these six pillars, they also bring in the concepts of “Sustainable IT” and “Value stream management”, more on those later.

Key recommendations (pages 8-9)

The set of key recommendations from the entirety of this hefty tome comprises little more than one page of the report and the recommendations are roughly split up as per the QE pillars.

For “Agile quality orchestration”, an interesting recommendation is:

Track and monitor metrics that are holistic quality indicators across the development lifecycle. For example: a “failed deployments” metric gives a holistic view of quality across teams.

While I like the idea of more holistic approaches to quality (rather than hanging our quality hat on just one metric), the example seems like a strange choice. Deployments can fail for all manner of reasons and, on the flipside, “successful” deployments may well be perceived as low quality by end users of the deployed software.
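To make that concrete, here's a minimal illustrative sketch (in Python, with hypothetical field names of my own invention) of how a "failed deployments" rate might be calculated. Note that nothing in the calculation distinguishes a code defect from a flaky pipeline or an infrastructure outage, let alone says anything about the quality users actually experience:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    service: str
    succeeded: bool     # did the rollout complete?
    rolled_back: bool   # was it reverted after going live?

def failed_deployment_rate(deployments: list[Deployment]) -> float:
    """Share of deployments that failed outright or were rolled back."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if not d.succeeded or d.rolled_back)
    return failed / len(deployments)

# Example: one failed rollout and one rollback out of four deployments
history = [
    Deployment("checkout", succeeded=True, rolled_back=False),
    Deployment("checkout", succeeded=False, rolled_back=False),
    Deployment("search", succeeded=True, rolled_back=False),
    Deployment("search", succeeded=True, rolled_back=True),
]
print(failed_deployment_rate(history))  # 0.5 - but says nothing about *why* they failed
```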

For “Quality automation”, it’s pleasing to see a recommendation like this in such a report:

Focus on what delivers the best benefits to customers and the business rather than justifying ROI.

It’s far too common for automation vendors to make their case based on ROI (and they rarely actually mean ROI in any traditional financial use of that term) and I agree that we should be looking at automation – just like any other ingredient of what goes into making the software cake – from a perspective of its cost, value and benefits.

Moving on to “Quality and sustainable IT”, they recommend:

Customize application performance monitoring tools to support the measurement of environmental impacts at a transactional level.

This is an interesting topic and one that I’ve looked into in some depth during volunteer research work for the UK’s Vegan Society. The design, implementation and hosting decisions we make for our applications all have significant impacts on the carbon footprint of the application and it’s not a subject that is currently receiving as much attention as it deserves, so I appreciate this being called out in this report.
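As a rough illustration of what "measurement at a transactional level" could look like, here's a back-of-envelope sketch. The power, PUE and grid-intensity figures are placeholder assumptions rather than measured values, and a real APM integration would attribute CPU time to each transaction rather than hard-coding it:

```python
def transaction_co2e_grams(cpu_seconds: float,
                           watts_per_core: float = 10.0,       # assumed average draw of a busy core
                           pue: float = 1.5,                    # assumed data-centre overhead factor
                           grid_g_co2e_per_kwh: float = 450.0   # assumed grid carbon intensity
                           ) -> float:
    """Very rough CO2e estimate for a single transaction.

    energy (kWh) = CPU time x power draw x PUE; emissions = energy x grid intensity.
    All constants are placeholders - real values depend on hardware, hosting
    region and how your monitoring tool attributes CPU time to a transaction.
    """
    kwh = (cpu_seconds * watts_per_core * pue) / 3_600_000  # watt-seconds -> kWh
    return kwh * grid_g_co2e_per_kwh

# e.g. a request consuming 0.2s of CPU time
print(f"{transaction_co2e_grams(0.2):.6f} g CO2e per transaction")
```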

In the same area, they also recommend:

Bring quality to the center of the strategy for sustainable IT for a consistent framework to measure, control, and quantify progress across the social, environmental, economic, and human facets of sustainable IT, even to the extent of establishing “green quality gates.”

Looking at “Quality engineering for emerging technology trends”, the recommendations are all phrased as questions, which seems strange to me and I don’t quite understand what the authors are trying to communicate in this section.

Finally, in “Value stream management”, they say:

Make sure you define with business owners and project owners the expected value outcome of testing and quality activities.

This is a reasonable idea and an activity that I’ve rarely seen done, well or otherwise. Communicating the value of testing and quality-related activities is far from straightforward, especially in ways that don’t fall victim to simplistic numerical metrics-based systems.

Current trends in Quality Engineering & Testing (p10-53)

More than half of the report is focused on current trends, again around the pillars discussed in the previous sections. Some of the most revealing content is to be found in this part of the report. I’ll break down my analysis into the same sections as the report.

Quality Orchestration in Agile Enterprises

I’m still not sure what “Quality Orchestration” actually is and fluff such as this doesn’t really help:

Quality orchestration in Agile enterprises continues to see an upward trend. Its adoption in Agile and DevOps has seen an evolution in terms of team composition and skillset of quality engineers.

The first chart in this section is pretty uninspiring, suggesting that only around half of the respondents are getting 20%+ improvements in “better quality” and “faster releases” as a result of adopting “Agile/DevOps” (which are frustratingly again treated together as though they’re one thing, the same mistake as in the last report).

The next section used a subset of the full sample (750 of the 1750 responses, though it's not explained why) and an interesting statistic here is that 62% of respondents said that "testing is carried out by business SMEs as opposed to quality engineers" either "always" or "often". This seems to directly contradict the report's premise of a strong movement towards QE.

For the results of the question "How important are the following QA skills when executing a successful Agile development program?", the legend and the chart are not consistent (the legend suggesting "very important" responses only, while the chart includes both "very important" and "extremely important") and, disappointingly, none of the answers have anything to do with more human testing skills.

The next question is “What proportion of your teams are professional quality engineers?” and the chart of the results is a case in point of how badly the visuals have been designed throughout this report. It’s an indication that the visualizations are hard to comprehend when they need text to try to explain what they’re showing:

Figure 04 from the World Quality Report

Using different chart styles for each chart isn’t helpful and it makes the report look inconsistent and unprofessional. This data again doesn’t suggest a significant shift to a “QE first” approach in most organizations.

The closing six recommendations (page 16) are not revolutionary and I question the connection that’s being made here between code quality and product quality (and also the supposed cost reduction):

Grow end-to-end test automation and increase levels of test automation across CI/CD processes, with automated continuous testing, to drive better code quality. This will enable improved product quality while reducing the cost of quality.

Quality Automation

The Introduction acknowledges a problem I’ve seen throughout my career and, if anything, it’s getting worse over time:

Teams prioritize selecting the test automation tools but forget to define a proper test automation plan and strategy.

They also say that:

All organizations need a proper level of test automation today as Agile approaches are pushing the speed of development up. Testing, therefore, needs to be done faster, but it should not lose any of its rigor. To put it simply, too much manual testing will not keep up with development.

This notion of “manual” testing failing to keep up with the pace of development is common, but suggests to me that (a) the purpose of human testing is not well understood and (b) many teams continue to labour under the misapprehension that they can work at an unsustainable pace without sacrificing quality.

In answering the question “What are the top three most important factors in determining your test automation approach?”, only 26% said that “Automation ROI, value realization” was one of the top 3 most important factors (while, curiously, “maintainability” came out top with 46%). Prioritizing maintainability over an ability to realize value from the automation effort seems strange to me.

Turning to benefits, all eight possible answers to the question "What proportion (if any) of your team currently achieves the following benefits from test automation?" were suspiciously close to 50%, so perhaps the intent of the question was not understood and respondents effectively flipped a coin. (For reference, the benefits offered in this question were "Continuous integration and delivery", "Reduce test team size", "Increase test coverage", "Better quality/fewer defects", "Reliability of systems", "Cost control", "Allowing faster release cycle" and "Autonomous and self-adaptive solutions".) I don't understand why "Reduce test team size" would be seen as a benefit and this reflects the ongoing naivety about what automation can and can't realistically achieve. The low level of benefits reported across the board led the authors to note:

…it does seem that communications about what can and cannot be done are still not managed as well as they could be, especially when looking to justify the return on investment. The temptation to call out the percentage of manual tests as automated sets teams on a path to automate more than they should, without seeing if the manual tests are good cases for automation and would bring value.

and

We have been researching the test automation topic for many years, and it is disappointing that organizations still struggle to make test automation work.

Turning to recommendations in this area, it’s good to see this:

Focus on what delivers the best benefits to customers and the business rather than justifying ROI.

It’s also interesting that they circle back to the sustainability piece, especially as automated tests are often run across large numbers of physical/virtual machines and for multiple configurations:

A final thought: sustainability is a growing and important trend – not just in IT, but across everything. We need to start thinking now about how automation can show its benefit and cost to the world. Do you know what the carbon footprint of your automation test is? How long will it be before you have to be able to report on that for your organization? Now’s the time to start thinking about how and what so you are ready when that question is asked.
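As a starting point for answering that question, a back-of-envelope estimate for a whole suite run might look something like the sketch below. The power draw, PUE and grid-intensity numbers are placeholder assumptions and would need replacing with figures for your own CI infrastructure and hosting region:

```python
def suite_run_co2e_kg(runners: int,
                      hours: float,
                      watts_per_runner: float = 200.0,       # assumed draw of one build agent/VM
                      pue: float = 1.5,                       # assumed data-centre overhead factor
                      grid_g_co2e_per_kwh: float = 450.0      # assumed grid carbon intensity
                      ) -> float:
    """Back-of-envelope CO2e estimate for one automated test suite run."""
    kwh = runners * hours * watts_per_runner * pue / 1000
    return kwh * grid_g_co2e_per_kwh / 1000  # grams -> kilograms

# e.g. 20 parallel agents running a 2-hour cross-browser suite, nightly for a year
per_run = suite_run_co2e_kg(runners=20, hours=2)
print(f"{per_run:.1f} kg CO2e per run, ~{per_run * 365:.0f} kg per year")
```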

Quality Infrastructure Testing and Provisioning

This section of the report is very focused on adoption of cloud environments for testing. In answer to “What proportion of non-production environments are provisioned on the cloud?”, they claim that:

49% of organizations have more than 50% of their non-production environments on cloud. This cloud adoption of non-production environments is showing a positive trend, compared to last year’s survey, when only an average of 23% of testing was done in a cloud environment

The accompanying chart does not support this conclusion, showing 39% of respondents having 26-50% of their non-production environments in the cloud and just 10% having 51-75% there. They also conflate “non-production environment” with “testing done in a cloud environment” when comparing this data with the previous report, when in reality there could be many non-testing non-production environments inflating this number.

They go on to look at the mix of on-premise and cloud environments and whether single vendor or multiple vendor clouds are in use.

In answer to “Does your organization include cloud and infrastructure testing as part of the development lifecycle?”, the data looked like this:

World Quality Report figure 11

The authors interpreted this data to conclude that "It emerged that around 96% of all the respondents mention that cloud testing is now included as part of the testing lifecycle" – where does 96% come from? The question is a little odd and the responses even more so – the first answer, for example, suggests that for projects where applications are hosted on the cloud, only 3% of respondents mandate testing in the cloud – doesn't that seem strange?

The recommendations in this section were unremarkable. I found the categorization of the content in this part of the report (and the associated questions) quite confusing and can’t help but wonder if participants in the survey really understood the distinctions trying to be drawn out here.

Test Data Provisioning and Data Validation

Looking at where test data is located, we see the following data (from a subset of just 580 of the total 1750 responses; the reason is again not provided):

World Quality Report figure 16

I’m not sure what to make of this data, especially as the responses are not valid answers to the question!

The following example just shows how leading some of the questions posed in the survey really are. Asking a high-level question like this to the senior types involved in the survey is guaranteed to produce a close to 100% affirmative response:

World Quality Report figure 19

Equally unsurprising are the results of the next questions around data validation, where organizations reveal how much trouble they have actually doing it.

The recommendations in this section were again unremarkable, none really requiring the results of an expensive survey to come up with.

Quality and Sustainable IT

The sustainability theme is new to this year’s report, although the authors refer to it as though everyone knows what “sustainability” means from an IT perspective and that it’s been front of mind for some time in the industry (which I don’t believe to be the case). They say:

Sustainable quality engineering is quality engineering that helps achieve sustainable IT. A higher quality ensures less wastage of resources and increased efficiencies. This has always been a keystone focus of quality as a discipline. From a broader perspective, any organization focusing on sustainable practices while running its business cannot do so without a strong focus on quality. “Shifting quality left” is not a new concept, and it is the only sustainable way to increase efficiencies. Simply put, there is no sustainability without quality!

Getting “shift left” into this discussion about sustainability is drawing a pretty long bow in my opinion. And it’s not the only one – consider this:

Only 72% of organizations think that quality could contribute to the environmental aspect of sustainable IT. If organizations want to be environmentally sustainable, they need to learn to use available resources optimally. A stronger strategic focus on quality is the way to achieve that.

We should be mindful when we see definitive claims, such as “the way” – there are clearly many different factors involved in achieving environmental sustainability of an organization and a focus on quality is just one of them.

I think the results of this question about the benefits of sustainable IT say it all:

World Quality Report figure 22

It would have been nice to see the environmental benefits topping this data, but it’s more about the organization being seen to be socially responsible than it is about actually being sustainable.

When it comes to testing, the survey explicitly asked whether “sustainability attributes” were being covered:

World Quality Report figure 23

I’m again suspicious of these results. Firstly, it’s another of the questions only asked of a subset of the 1750 participants (and it’s not explained why). Secondly, the results are all very close to 50% so might simply indicate a flip of the coin type response, especially to such a nebulous question. The idea that even 50% of organizations are deliberately targeting testing on these attributes (especially the efficiency attributes) doesn’t seem credible to me.

One of the recommendations in this section is again around “shift left”:

Bring true “shift left” to the application lifecycle to increase resource utilization and drive carbon footprint reduction.

While the topic of sustainability in IT is certainly interesting to me, I’m not seeing a big focus on it in everyday projects. Some of the claims in the report are hard to believe, but I acknowledge that my lack of exposure to IT projects in such big organizations may mean I’ve missed this particular boat already setting sail.

Quality Engineering for Emerging Technologies

This section of the report focuses on emerging technologies and impacts on QE and testing. The authors kick off with this data:

World Quality Report figure 26

This data again comes from a subset of the participants (1000 out of 1750) and I would have expected the “bars” for Blockchain and Web 3.0 to be the same length if the values are the same. The report notes that “…Web 3.0 is still being defined and there isn’t a universally accepted definition of what it means” so it seems odd that it’s such a high priority.

I note that, in answer to "Which of the following are the greatest benefits of new emerging technologies improving quality outcomes?", 59% chose "More velocity without compromising quality", so the age-old desire to go faster while keeping or improving quality persists!

The report doesn’t make any recommendations in this area, choosing instead to ask pretty open-ended questions. I’m not clear what value this section added; it feels like crystal ball gazing (and, indeed, the last part of this section is headed “Looking into the crystal ball”!).

Value Stream Management

The opening gambit of this section of the report reads:

One of the expectations of the quality and test function is to assure and ensure that the software development process delivers the expected value to the business and end-users. However, in practice, many teams and organizations struggle to make the value outcomes visible and manageable.

Is this your expectation of testing? Or your organization’s expectation? I’m not familiar with such an expectation being set against testing, but acknowledge that there are organizations that perhaps think this way.

The first chart in this section just makes me sad:

World Quality Report figure 30

I find it staggering that only 35% of respondents feel that detecting defects before going live is even in their top three objectives from testing. The authors had an interesting take on that, saying “Finding faults is not seen as a priority for most of the organizations we interviewed, which indicates that this is becoming a standard expectation”, mmm.

The rest of this section focused more on value and, in particular, the lean process of “value stream mapping”. An astonishing 69% of respondents said they use this approach “almost every time” when improving the testing process in Agile/DevOps projects – this high percentage doesn’t resonate with my experience but again it may be that larger organizations have taken value stream mapping on board without me noticing (or publicizing their love of it more broadly so that I do notice).

Sector analysis (p54-71)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors (almost identically to last year) and discuss particular trends and challenges within each. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy, utilities, natural resources and chemicals
  • Financial services
  • Healthcare and life sciences
  • Manufacturing
  • Public sector
  • Technology, media and telecoms

Four metrics are given in summary for each sector, viz. the percentage of:

  • Agile teams have professional quality engineers integrated
  • Teams achieved better reliability of systems through test automation
  • Agile teams have test automation implemented
  • Teams achieved faster release times through test automation

It’s interesting to note that, for each of these metrics, almost all the sectors reported around the 50% mark, with financial services creeping a little higher. These results seem quite weak and it’s remarkable that, after so long and so much investment, only about half of Agile teams report that they’ve implemented test automation.

Geography-specific reports

The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find it particularly revealing, though this comment stood out (emphasis is mine):

We see other changes, specifically to quality engineering. In recent years, QE has been decentralizing. Quality practices were merging into teams, and centers of excellence were being dismantled. Now, organizations are recognizing that centralized command and control have their benefits and, while they aren’t completely retracing their steps, they are trying to find a balance that gives them more visibility and greater governance of quality assurance (QA) in practice across the software development lifecycle.


4 thoughts on “Reviewing Capgemini’s ‘World Quality Report 2022-23’”

  1. Paul Holland (@PaulHolland_TWN)

    Great review Lee! I have an idea about the “only 26% said that “Automation ROI, value realization” was one of the top 3 most important factors (while, curiously, “maintainability” came out top with 46%).” comment.
    Perhaps the issue of maintainability is rising so high in companies because they are starting to get crushed under the weight of maintaining all of the automation that they have written over the past few years.
    I have seen that many companies do not consider maintainability until they are faced with a crisis.

    1. therockertester (post author)

      Thanks for commenting, Paul, and you make a great point that I hadn’t thought about when writing my post.

      There’s certainly been a push – and continues to be – in reports like this to increase the amount of automation, so maybe that message has been taken on board and big steaming piles of poorly thought out automation code are now demanding ever-increasing efforts to keep them up and running.

      This comes back to one of the other points well made in this report that “Teams prioritize selecting the test automation tools but forget to define a proper test automation plan and strategy” and I see this very frequently when talking automation with development teams – their first questions are about “how” (which tool to use) rather than the “why” and “what”.

      Maintainability is one thing, but making bad choices about what to automate in the first place is also a huge issue in most teams I work with.

  2. Pingback: Five Blogs – 4 November 2022 – 5blogs

  3. Pingback: 2022 in review | Rockin' and Testing All Over The World
