Reviewing “Reimagine The Future of Quality Assurance” – (yet) another corporate report on the state of QA/testing

Capgemini recently released another 100+ page report around QA/testing, called Reimagine The Future of Quality Assurance. You might recall that I reviewed another of their long reports, the World Quality Report 2018/2019, and this new report also seemed worthy of some commentary. This is a long post given the length of the report, but I provide a summary of my feelings at the end of the post if the detailed content review below is too hefty.


It’s not clear to me whether this report is focusing on Quality Assurance (QA) or testing or both; the term “Quality Assurance” is not clearly defined or differentiated from testing anywhere in the report and, judging from the responses of some of the industry people interviewed in it, it’s obvious that most of them were also unclear about the focus. It should be noted that my analysis and comments are specifically targeted at what the report discusses around testing.

The report is described as “Featuring the trends shaping the future of quality assurance, and a practitioners’ view of how QA can reinvent customer experiences for competitive advantage”. This doesn’t really tell me what the focus is either, but let’s start to look at the content.

The Contents suggest the existence of a section on “Methodology” (page 9) but this is not present in the report and wouldn’t be required anyway, as this is not a survey results report (in contrast to the World Quality Report) but is instead based on case studies and industry commentary. This oversight in the Contents is indicative of a lack of proofing evident throughout the report – there are many typos, copy/paste errors, and grammar issues, suggesting the report itself wasn’t subject to a very diligent quality assurance process before it was published.

Introductory content

The foreword comes from Olaf Pietschner (Managing Director, Capgemini Australia & New Zealand). He claims that “QA [is] moving up in the agile value chain”, maybe in reference to testing being seen as more important and valuable as more organizations move to more frequent releases, adopt DevOps, etc. but his intent here may well be something different.

In another introductory piece – titled “Transforming testing for digital transformation: Speed is the new currency” – Sandeep Johri (CEO, Tricentis) says:

Reinventing testing is essential for achieving the speed and agility required to thrive in the digital future. Why? Speed is the new currency but traditional software testing is the #1 enemy of speed.

I have several issues with this. What exactly about testing needs “reinventing”? While speed seems to be a focus for many businesses – following the “the fast will eat the slow” mantra – it’s a stretch to argue that testing is, or has been, the number one reason businesses can’t deliver software faster. So many factors influence an organization’s ability to get software out of the door that labelling testing as “enemy number 1” seems simplistic, and so context-independent as to be meaningless.

Industry sector analysis

The next seventy-odd pages of the report focus on sector analysis from five industry sectors. Each sector includes an introductory piece from a Capgemini representative followed by case pieces from different businesses in that sector.

The first sector is “Consumer Products, Retail, Distribution & Transport” (CPRDT) and is introduced by Amit Singhania and Prashant Chaturvedi (both Vice-Presidents, Capgemini Australia & New Zealand). They say:

The move from QA to Quality Engineering (QE) is not an option. The equation is simple: Test less and assure more. Serious and continuous disruption in IT means the way testing and QA has been approached in the past must be overhauled.

I think they’re suggesting that it’s necessary to move away from QA towards QE, though they don’t define what they mean by QE. I’m unsure what they’re suggesting when they say “test less and assure more” (which is not an equation, by the way). These soundbite messages don’t really say anything useful to those involved in testing.

As DevOps spreads it is imperative that software – with the continuous development – needs to be continuously tested. This needs a paradigm shift in the skills of a developer and tester as the thin line between these skills is disappearing and same individuals are required to do both.

This continues to be a big subject of debate in the testing world and they seem to be suggesting that testers are now “required” to be developers (and vice-versa). While there may be benefits in some contexts to testers having development skills, I don’t buy this as a “catch all” statement. We do a disservice to skilled human testers when we suggest they have to develop code as well or they’re somehow unworthy of being part of such DevOps/agile teams. We need to do a better job of articulating the value of skilled testing as distinct from the value of excellent development skills, bearing in mind the concept of critical distance.

The first business piece from this sector comes from Australia Post’s Donna Shepherd (Head of Testing and Service Assurance). She talks a lot about DevOps, Agile, increased levels of automation, AI/ML, and Quality Engineering at Australia Post but then also says:

The role of the tester is also changing, moving away from large scale manual testing and embracing automation into a more technical role

I remain unclear as to whether large-scale manual testing is still the norm in her organization or whether significant moves towards a more automation-focused testing approach have already taken place. Donna also says:

The quality assurance team are the gatekeepers and despite the changes in delivery approaches, automation and skillset, QA will continue to play an important role in the future.

This doesn’t make it sound like a genuine DevOps mentality has been embedded yet, and in her case, “QA [is a] governance layer having oversight of deliverables”.

The second business piece representing the CPRDT sector comes from McDonald’s, in the shape of David McMullen (Director of Technology) & Matt Cottee (Manager – POS Systems, Cashless, Digital and Technology Deployment), who manage to say nothing about testing in the couple of pages they’ve contributed to the report.

The next sector is “Energy, Utilities, Mining & Chemicals” and this is introduced by Jan Lindhaus (Vice-President, Head of Sector EUC, Capgemini Australia and New Zealand) and there’s not much about testing here. He says:

Smart QA needs to cover integrated ecosystems supported by cognitive and analytical capabilities along end-to-end business value chains with high speed, agility and robustness.

Maybe read that again, as I’ve done many times. I literally have no idea what “smart QA” is based on this description!

A theme gaining in popularity is new ways of working (NWW), which looks beyond Agile project delivery for a discrete capability.

I heard this NWW idea for the first time fairly recently in relation to one of Australia’s big four banks, but I don’t have a good handle on how this is different from the status quo of businesses adapting and changing the way they work to deal with changes in the business landscape over time. Is NWW an excuse to say “we’re Agile but not following its principles”? (Please point me in the direction of any resources that might help me understand this NWW concept more clearly.)

There are three business pieces for this sector, the first of which comes from Uthkusa Gamanayake (Test Capability Manager, AGL). It’s pleasing to finally start to read some more sensible commentary around testing here (maybe as we should expect given his title). He says:

Testers are part of scrum teams. This helps them to work closely with developers and provides opportunities to take on development tasks in addition to testing. The QA responsibility has moved from the quality assurance team to the scrum teams. This is a cultural and mindset shift.

It’s good to hear about this kind of shift happening in a large utility company like AGL and it at least sounds like testers have the option to take on automation development tasks but are not being moved away from manual testing as the norm. On automation, he says:

Individual teams within an organization will use their own automation tools and frameworks that best suit their requirements and platforms. There is no single solution or framework that will work for the entire organization. We should not be trying to standardize test automation. It can slow down delivery.

Again, this is refreshing to hear: they’re not looking for a “one size fits all” automation solution across such a huge IT organization but rather using best-fit tools to solve problems in the context of individual teams. Turning to the topic du jour, AI, he states his opinion:

In my view AI is the future of test automation. AI will replace some testing roles in the future.

I think the jury is still out on this topic. I can imagine some work that some organizations refer to as “testing” being within the realms of the capability of AI even now. But what I understand “testing” to be seems unlikely to be replaceable by AI anytime soon.

There are many people who can test and provide results, but it is hard to find people who have a vision and can implement it.

I’m not sure what he was getting at here – maybe that it’s hard to find people who can clearly articulate the value of testing and tell coherent & compelling stories about the testing they perform. I see this challenge too, and coaching testers in the skills of storytelling is a priority for me if we are to see human testers better understood and more valued by stakeholders. He also says:

As for the role of head of testing, it will still exist. It won’t go away, but its function will change. This role will have broader QA responsibilities. The head of QA role does exist in some organizations. I think the responsibilities are still limited to testing practices.

I basically agree with this assessment, in that some kind of senior leadership position dedicated to quality/testing is still required in larger organizations, even when the responsibility for quality and for performing testing is pushed down into the Scrum delivery teams.

The next business piece comes from David Hayman (Test Practice Manager, Genesis, and Chair of ANZTB). His “no nonsense” commentary is refreshingly honest and frank, exactly what I’d expect based on my experience of meeting and listening to David at past ANZTB conferences and events. On tooling, he says:

The right tool is the tool that you need to do the job. Sometimes they are more UI-focused, sometimes they are more AI-focused, sometimes they are more desktop-focused. As a result, with respect to the actual tools themselves, I’m not going to go into it because I don’t think it’s a value-add and can often be misleading. But actually, it doesn’t generate any value. The great thing is that sanity appears to have overtaken the market so that now we automate what’s valuable as opposed to automating a process because it’s a challenge, or because we can, or we want to, or because it looks good on a CV. The automation journey, though not complete, has reached a level of maturity, where sanity prevails. So that is at least a good thing.

I like the fact that David acknowledges that context is important in our choice of tooling for automation (or anything else for that matter) and that “sanity” is prevailing, at least in some teams in some organizations. That said, I commonly read articles and LinkedIn posts from folks still of the opinion that a particular tool is the saviour or that everyone should have a goal to “automate all the testing” so there’s still some way to go before sanity is the default position on this.

He goes on to talk about an extra kind of testing that he sees as missing from his current testing mix, which he labels “Product Intent Testing”:

I have been thinking that we’re going to need another phase in the testing process – perhaps an acceptance process, or similar. At the moment, we do component testing – we can automate that. We do functional testing – we can automate that. We do system integration testing – we can automate that. We have UAT – we can automate some of that, though obviously it requires a lot more business input.

When you have a situation where the expected results from AI tests are changing all the time, there is no hard and fast expected result. The result might get closer. As long as the function delivers the intent of the requirement, of the use case, or the story, then that’s close enough. But with an automated script, that doesn’t work. You can’t have ‘close enough’.

So I believe there’s an extra step, or an extra phase, I call Product Intent Testing [PIT]. This should be applied once we’ve run the functional tests. What we are investigating is ‘Has the intent that you were trying to achieve from a particular story, been provided?’ That requires human input – decision-making, basically.

It sounds like David is looking for a way to inject a healthy dose of human testing into this testing process, where it might be missing due to “replacement” by automation of existing parts of the process. I personally view this checking of intent as exactly what we should be doing during story testing – it’s easy to become very focused on (and, paradoxically perhaps, distracted by) the acceptance criteria on our stories and miss the intent of the story that stepping back a little would reveal. I’m interested to hear what others think about this topic of covering intent during our testing.
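David’s point that an automated script “can’t have ‘close enough’” is worth unpacking: a scripted check compares against a fixed expected result, and any tolerance you build into it is itself a human judgment about what preserves intent. A small, hypothetical Python sketch (the function names and values are mine, not from the report):

```python
# An exact-match check has no notion of "close enough": any drift from the
# hard-coded expected value fails the check, even when the intent of the
# feature is still being met.
def check_discount(price: float) -> bool:
    # Brittle when the "right" answer can legitimately vary a little.
    return price == 90.0

# A tolerance widens the check, but a human still has to decide what
# tolerance preserves the *intent* of the story -- the judgment call that
# David's "Product Intent Testing" points at.
def check_discount_with_tolerance(price: float, tolerance: float = 0.5) -> bool:
    return abs(price - 90.0) <= tolerance

print(check_discount(90.2))                 # False: exact check rejects it
print(check_discount_with_tolerance(90.2))  # True: within tolerance
```

The code can only encode a decision a person has already made; it can’t make the “is this close enough to the intent?” call itself, which is exactly why David argues this phase requires human input.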

The last business piece in this sector comes from Ian Robertson (CIO, Water NSW) and, as a non-testing guy, he doesn’t talk about testing in particular, focusing more on domain specifics, but he does mention tooling in the shape of Azure DevOps and Tosca (a Tricentis tool, coincidentally?).

The chunkiest section of the report is dedicated to the “Financial Services” sector with six business pieces, introduced by Sudhir Pai (Chief Technology and Innovation Officer, Capgemini Financial Services). Part of his commentary is almost identical to that from Jan Lindhaus’s introduction for the “Energy, Utilities, Mining & Chemicals” sector:

Smart QA solutions integrating end-to-end ecosystems powered by cognitive and analytical capabilities are vital

Sudhir also again refers to the “New Ways of Working” idea and he makes a bold claim around “continuous testing”:

Our Continuous Testing report shows that the next 2-3 years is a critical time period for continuous testing – with increased automation in test data management and use of model-based testing for auto-generation of test cases, adoption is all set to boom.

I haven’t seen the “Continuous Testing report” he’s referring to, but I feel like these predictions of booming AI and automation of test case generation have been around for a while already and I don’t see widespread or meaningful adoption. Is “auto-generation of test cases” even something we’d want to adopt? If so, why and what other kinds of risk would we actually amplify by doing so?
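For context on what “model-based testing for auto-generation of test cases” typically means: test cases are derived mechanically from a model of the system’s expected behaviour rather than written individually. A toy, hypothetical sketch in Python (the model and names are mine, purely for illustration):

```python
from itertools import product

# Hypothetical model of a login form: each input is either valid or invalid,
# and the model predicts the resulting state. Test cases are then enumerated
# from the model rather than hand-written one by one.
transitions = {
    ("valid", "valid"): "logged_in",
    ("valid", "invalid"): "error_shown",
    ("invalid", "valid"): "error_shown",
    ("invalid", "invalid"): "error_shown",
}

def generate_test_cases():
    # Enumerate every input combination the model allows, paired with the
    # outcome the model predicts.
    return [
        {"username": u, "password": p, "expected": transitions[(u, p)]}
        for u, p in product(["valid", "invalid"], repeat=2)
    ]

for case in generate_test_cases():
    print(case)
```

Note that the generated cases are only as good as the model: if the model misses a behaviour, so does every generated case – one of the risks that would be amplified by wholesale adoption.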

Interestingly, none of the six business pieces in this sector come from specialists in testing. The first one is by Nathalie Turgeon (Head of Project Delivery, AXA Shared Services Centre) and she hardly mentions testing but does appear to argue the case for a unified QA framework despite clearly articulating the very different landscapes of their legacy and digital businesses.

The next piece comes from Jarrod Sawers (Head of Enterprise Delivery Australia and New Zealand, AIA). He makes the observation:

The role of QA has evolved in the past five years, and there a [sic] few different parts to that. One part is mindset. If you go back several years across the market, testing was seen as the last thing you did, and it took along [sic] time, and was always done under pressure. Because if anything else went slow, the time to test was always challenged and, potentially, compromised. And that is the wrong idea.

It’s very much a mindset shift to say, ‘Well, let’s think about moving to a more Agile way of working, thinking about testing and QA and assurance of that.’ That is the assurance of what that outcome needs to be for the customer from the start of that process.

This shift away from “testing at the end of the process” has been happening for a very long time now, but Enterprise IT is perhaps a laggard in many respects, so it’s not surprising to hear that this shift is a fairly recent thing inside AIA. At least they’ve finally got there as they adopt more agile ways of working. Inevitably for an Enterprise guy, AI is top of mind:

A great part of the AI is around the move from doing QA once to continuous QA. Think about computing speed, and the power available now compared to just a few years ago, and the speed of these activities. Having that integrated within that decision process makes sense. To build it in so that you’re constantly getting feedback that, yes, it’s operating as expected. Yes, it’s giving us the outcomes we’re looking for.

The customer experience or customer outcome is much better, because no organization without AI has one-to-one QA for all of their operational processes. There is risk in manual processing and human decision-making.

I find myself confused by Jarrod’s comments here and unsure what he means when he says that “no organization without AI has one-to-one QA for all of their operational processes” – “one-to-one QA” is not a term I’m familiar with. While I agree that there is risk in using humans for processing and making decisions, it’s simply untrue that there are no risks when the humans are replaced by AI/automation. All that really happens is that a different set of risks now applies, and human decision-making, especially in the context of testing, is typically a risk worth taking. On “QA” specifically, Jarrod notes:

It has to be inherently part of the organisational journey to ensure that when we have a new product entering the market, all those things we say it’s going to do must actually happen. If it doesn’t work, it’s very damaging. So how do we know that we’re going to get there? The answer needs to be, ‘We know because we have taken the correct steps through the process’.

And somebody can say, ‘I know we’re doing this properly, it’s going to be very valuable throughout the process’. Whether that is a product owner or a test manager, it has to be somebody who can guarantee the QA and give assurance to the quality.

His closing statement here is interesting and one I disagree with. Putting such responsibility onto a single person is unfair and goes against the idea of the whole team being responsible for the quality of what they deliver. This gatekeeper (read: scapegoat) for quality is not helpful and sets the person up for failure, almost by definition.

The third business piece comes from Nicki Doble (Group CIO, Cover-More) and it’s clear that, for her, QA/testing is all about confidence-building:

We need to move faster with confidence, and that means leveraging continuous testing and deployment practices at the same time as meeting the quality and security requirements.

This will involve automated releases, along with test-driven development and automated testing to ensure confidence is maintained.

Historically, testing has been either quite manual or it involved a huge suite of automated tests that took a lot of effort to build and maintain, but which didn’t always support the value chain of the business.

In future, we need to focus on building only the right automated testing required to instill confidence and surety into our practices. This needs to be a mix of Test Driven Development (TDD) undertaken by our developers but supported by the QA team, automated performance and functional testing to maintain our minimum standards and create surety. And it needs to be paired with continuous testing running across our development branches.

It worries me to see words like “confidence” and “surety” in relation to expectations from testing. It sounds like she believes that TDD and automated testing are providing them more certainty than when they had their “quite manual” testing. It would have been more encouraging to instead read that she understands that an appropriate mix of human testing and automated checks can help them meet their quality goals, alongside an acknowledgement that surety cannot be achieved no matter what this mix looks like.
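For readers unfamiliar with the practice Nicki Doble mentions, TDD works by writing a small failing automated check first, then the code that makes it pass; the check then lives on as part of the regression suite. A minimal, hypothetical sketch in Python (the names and the premium calculation are mine, not from the report):

```python
# A minimal sketch of the TDD rhythm: in practice the check below is written
# *before* calculate_premium exists, fails ("red"), and drives the simplest
# implementation that makes it pass ("green").
def calculate_premium(base: float, rate: float) -> float:
    # Simplest implementation that satisfies the check below.
    return round(base * rate, 2)

def test_premium_calculation():
    assert calculate_premium(100.0, 0.125) == 12.5

test_premium_calculation()  # raises AssertionError if the check fails
```

Even done well, such checks only re-confirm the specific expectations someone thought to encode – they are a source of useful feedback, not of the “surety” the quote hopes for.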

The next business piece comes from Raoul Hamilton-Smith (General Manager Product Architecture & CTO NZ, Equifax). He sets out his vision around testing all too clearly:

We want to have all testing automated, but we’re not there yet. It’s a cultural shift as much as a technology shift to make this happen.

It’s not a cultural or technology shift to make all testing automated; it’s simply an impossible mission – if you really believe testing is more than algorithmic checking. So much for “sanity” taking over (per David Hayman’s observations) when it comes to the “automate everything” nonsense! Raoul goes on to talk about Equifax’s Agile journey:

The organisation has been set up for Agile delivery for quite some time, including a move to the scaled Agile framework around 18 months ago. A standard Agile team (or squad) consists of a product owner, some application engineers, some QA analysts and a scrum master. As far as line management is concerned, there is a QA tower. However, the QA people are embedded in the Agile teams so their day-to-day leadership is via their scrum master/project manager.

What we have not been so good at is being very clear about the demand to automate testing. We probably haven’t shown all [sic] how that can be achieved, with some areas of delivery being better than others.

This is the challenge that we’re facing now – we have people who have been manually testing with automation skills that haven’t really had the opportunity to build out the automaton [sic]. So right now, we are at the pivot point, whereby automation is the norm.

It sounds like someone in this organization has loaded up on the Kool-Aid, adopting SAFe and borrowing the idea of squads from the so-called Spotify Model (itself a form of scaling framework for Scrum teams). The desire for complete automation is also evident again here, “the demand to automate testing”. It would be interesting to hear from this organization again in a year or two when the folly of this “automate everything” approach has made them rethink and take a more human-centred approach when it comes to testing.

The penultimate piece for this sector comes courtesy of David Lochrie (General Manager, Digital & Integration, ME Bank). He talks a lot about “cycle time” as a valuable metric and has this to say about testing:

From a quality assurance perspective, as a practice, Lochrie characterises the current status as being in the middle of an evolution. This involves transforming from highly manual, extremely slow, labour-intensive enterprise testing processes, and instead heading towards leveraging automation to reduce the cycle time of big, expensive, fragile [sic] and regression test suites.

“We’ve started our QA by focusing purely on automation. The next phase QA transformation journey will be to broaden our definition of QA. Rather than just focusing on test execution, and the automation of test execution, it will focus on what other disciplines come under that banner of QA and how do we move those to the left.”

The days of QA equating to testing are gone, he says.

QA these days involves much more than the old-school tester sitting at the end of the value chain and waiting for a new feature to be thrown over the fence from a developer for testing. “Under the old model the tester knew little about the feature or its origins, or about the business need, the design or the requirement. But those days are over.”

This again sounds like typical laggard enterprise IT – testers in more genuinely agile organizations have been embedded into development teams (and fully across features from the very start) as the norm for many years already. Unfortunately, here again it sounds like ME Bank will make the same fundamental error of trying to automate everything as the way to move faster and reduce their precious cycle time. I’d fully expect sanity to prevail in the long term for this organization too, simply out of necessity, so perhaps we’ll revisit their comments in a future report as well.

The sixth and final business piece is by Mark Zanetich (Head of Technology, Risk Compliance and Controls for Infrastructure & Operations, Westpac) and he has nothing substantive to say around testing.

Next up in terms of sectors comes “Higher Education” and the introduction is by David Harper (Vice-President, Head of Public Services, Capgemini Australia and New Zealand) who has nothing to say about testing either.

There are two business pieces for this sector, the first coming from Vicki Connor (Enterprise Test Architect, Deakin University) and she says this around AI in testing:

As far as testing applications based on AI, we are doing some exploratory testing and we are learning as we go. We are very open-minded about it. Whilst maintaining our basic principles of understanding why we are testing, what we are testing, when to test, who is best suited to test, where to conduct the testing and how to achieve the best results.

It’s good to read that they’re at least looking at AI in testing via the lens of the basics of why they’re testing and so on, rather than blindly adding it to their mix based on what every other organization is claiming to be doing. I assume when Vicki refers to “exploratory testing” here that she’s really meaning they’re experimenting with these AI approaches to testing and evaluating their usefulness in their own unique context (rather than using ET as a testing approach for their applications generally).

The second business piece comes from Dirk Vandenbulcke (Director – Digital Platforms, RMIT University) and more frequent releases are a hot topic for him:

RMIT us [sic] currently in a monthly release cadence. By only having monthly releases, we want to ensure the quality of these releases matches what you would normally find in Waterfall circumstances.

Automation is not only a form of cost control; it is also a question of quality control to meet these timelines. If the test cycles are six weeks, there is no way you can operate on a release cadence of four weeks.

Ultimately, we would like to move to fortnightly-releases for speed-to-market reason[sic], which means our QA cycles need to be automated, improved, and sped up.

For the moment, our QA is more journey-focused. As such, we want to make sure our testing needs are optimised, and use cases are properly tested. Potentially, that means not every single edge case will be tested ever single time. When they were originally developed they were tested – but they won’t be every single time we deploy.

We have started to focus our activities around the paths and journeys our students and staff will take through an experience, rather than doing wide, unfocused tests.

Especially in a fast release cadence, you can’t test every single thing, every time, or automate every single thing, so it’s essential to be focused.

I find it fascinating that the quality bar after moving to monthly releases is “what you would normally find in Waterfall circumstances”. This sounds like a case of fear of the unknown in moving to more frequent releases, when in reality the risk involved in such releases should be lower since fewer changes are involved in each release. His approach of workflow/journey testing, though, strikes me as sensible and he also seems to have a handle on the folly of attempting to automate everything as a way out of the issues he’s facing with these more frequent releases.

The final sector considered in this report is “Government” and this is introduced by David Harper again. He manages to mention all the buzzwords in just a few sentences:

Technology trends continue to encourage new innovative approaches of testing code and opportunities for QA with, for example, DevOps and continuous integration and continuous delivery, further enabling Agile operating environments. Most notable is the emergence of the applicability of AI/machine learning as it relates to driving efficiency at scale in large transaction processing environments.

While these techniques are starting to be deployed in business process, it is interesting to explore how learning algorithms will be used to improve QA activities. Such smart or advanced automaton [sic] in testing will emerge once agencies have found their feet with automated testing.

My read of this is that government departments are still struggling with automated testing, let alone applying AI and machine learning.

There are two business pieces for this sector, firstly from Srinivas Kotha (Technology Testing Services Manager, Airservices) and he talks a lot about frameworks and strategy, focusing on the future but with less comment about the current state. He suggests that the organization will first look around to determine their strategy:

As part of the test strategy development, I will be looking at the market trends and emerging technologies in testing and quality assurance space to be able to effectively satisfy our future needs and demands. I believe technology evolution is on the upward trend and there is lot out there in the market that we can leverage to enhance our testing and QA capability and deliver business value.

I hope that they will actually look at their own unique requirements first and then at what technologies can help them meet those requirements, rather than looking at these “market trends” and fitting those into their strategy. As we can see from this very report, this “trend” noise is generally not helpful, and the organization’s own context and specific needs should be the key drivers behind choices of technology. Talking about automation and AI, he says:

I will be keen to look at implementing more automation and use of Artificial Intelligence (AI) to scale up to increase the coverage (depth and breadth) of testing to reduce risks and time to market. We will be looking at two components within automation – basic and smart automation. We have done little bit of basic automation at the project level. However, we are not going to reuse that for ongoing testing, nor are we maintaining those scripts. There are some areas within the organisation where specific automated scripts are maintained and run for specific testing needs. We currently using a combination of market-leading and open source tools for test management and automation. Key basic automation items that are for immediate consideration are around ongoing functional, regression and performance (load and stress) testing.

Smart automation uses emerging technology such as AI. The questions we are asking are: how we can automate that for testing and data analysis for improving quality outcomes? And what testing can we do from a DevOps and CI/CD perspective, which we aim to adopt in the coming 1-2 years? In the next 6 months we will put up the framework, create the strategy and then begin implementing the initiatives in the strategy. The key potential strategy areas are around automation, test environment and data, and some of the smart test platforms/labs capability.

It sounds like they are in the very early days of building an automation capability, yet they’re already thinking about applying AI and so-called “smart automation”. There’s real danger here in losing sight of why they are trying to automate some of their testing.

The second piece comes from Philip John (QA and Testing Services Manager, WorkSafe Victoria) and his comments see the first mention (I think) of BDD in this report:

When it comes to QA resourcing, we are bringing in more Agile testers who can offer assistance in automation, with an aim to support continuous QA to underpin a CI/CD approach. We have behavioural-driven development and DevOps in our mix and are focusing our delivery model into shift-left testing.

The organisation is also using more Agile/SAFe Agile delivery models.

It all sounds very modern and on trend; hopefully the testers are adding genuine value and not just becoming Cucumbers. Note the mention of SAFe here – not the first time this heavyweight framework appears in the business pieces of this report. Philip heads down the KPI path as well:

From the KPI perspective, the number of KPIs in the testing and QA space is only going to grow, rather than diminish. We expect that there will be a tactical shift in the definition of some KPIs. In any case, we will need to have a reasonable level of KPIs established to ensure the adherence of testing and quality standards.

I don’t understand the fascination with KPIs and, even if we could conceive of some sensible ones around testing, why would having more and more of them necessarily be better? Hitting a KPI number and ensuring adherence to some standard are, of course, completely different things too.

Trend analysis

Moving on from the sector analysis, the report identifies eight “Key trends in Quality Assurance”, viz.

  • DevOps changes the game
  • Modern testing approaches for a DevOps world
  • The status of performance testing in Agile and DevOps
  • Digital Transformation and Artificial Intelligence
  • Unlocking the value of QA teams
  • Connected ecosystem for effective and efficient QA
  • RPA and what we can learn from test automation
  • Data democratisation for competitive advantage

Ignoring the fact that these are not actually trends (at least not as they are stated here) and that there is no indication of the source of them, let’s look at each in turn.

Each trend is supported by a business piece again, often by a tool vendor or some other party with something of a vested interest.

For “DevOps changes the game”, it’s down to Thomas Hadorn and Dominik Weissboeck (both Managing Directors APAC, Tricentis) to discuss the trend, kicking off with:

Scaled agile and DevOps are changing the game for software testing

There’s that “scaled agile” again but there’s a reasonable argument for the idea that adopting DevOps does change the game for testing. They discuss a little of the “how”:

In the past, when software testing was a timeboxed activity at the end of the cycle, the focus was on answering the question, ‘Are we done testing?’ When this was the primary question, “counting” metrics associated with the number of tests run, incomplete tests, passed tests and failed tests, drove the process and influenced the release decision. These metrics are highly ineffective in understanding the actual quality of a release. Today, the question to answer is: ‘Does the release have an acceptable level of risk?’

To provide the DevOps community with an objective perspective on the quality metrics most critical to answering this question, Tricentis commissioned Forrester to research the topic. The goal was to analyse how DevOps leaders measured and valued 75 quality metrics (selected by Forrester), then identify which metrics matter most for DevOps success

I like their acknowledgement that the fetish of counting things around testing is ineffective and that answering questions about risk is a much more meaningful way of showing the value of testing. Turning to the Forrester research they mention, they provide this “quadrant” representation, where the horizontal axis represents the degree to which metrics are measured and the vertical axis the value gained from measuring them (note that in this image, “Directions” should read “Distractions”):


I find it truly bizarre that a “hidden gem” is the idea of prioritizing automated tests based on risk (how else would you do it?!), while high value still seems to be placed on the very counting of things they’ve said is ineffective (e.g. total number of defects, test cases executed, etc.).
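To illustrate why risk-based prioritization is hardly a “hidden gem”, here is a minimal sketch of ordering an automated suite by risk. The test names, likelihood and impact scores are all invented for illustration; in practice they would come from your own risk analysis.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: int  # 1 (rare) to 5 (frequent)
    failure_impact: int      # 1 (cosmetic) to 5 (critical)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring, the most common scheme
        return self.failure_likelihood * self.failure_impact

# Hypothetical suite with hand-assigned risk factors
suite = [
    TestCase("checkout_payment", 4, 5),
    TestCase("profile_avatar_upload", 2, 1),
    TestCase("login_session_expiry", 3, 4),
]

# Run the riskiest tests first, so a timeboxed run covers what matters most
for test in sorted(suite, key=lambda t: t.risk_score, reverse=True):
    print(test.name, test.risk_score)
```

A timeboxed pipeline stage can then simply execute from the top of this ordering until the budget runs out.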

The next trend, “Modern testing approaches for a DevOps world”, is discussed by Sanjeev Sharma (VP, Global Practice Director | Data Modernization, Delphix). He makes an observation on the “Move Fast and Break Things” notion:

Although it was promulgated by startups that were early adopters of DevOps, “the notion of Move Fast and Break Things” is passé today. It was a Silicon Valley thing, and that era no longer exists. Enterprises require both speed and high quality, and the need to deliver products and services faster, while maintaining the high expectations of quality and performance are challenges modern day testing and QA teams must address.

This is a fair comment and I see most organizations still having a focus on quality over speed. The desire to have both is certainly challenging many aspects of the way software is built, tested and delivered – and “DevOps” is not a silver bullet in this regard. Sanjeev also makes this observation around AI/ML:

… will drive the need for AI and ML-driven testing, meaning testing and QA are guided by learning from the data generated by the tests being run, by the performance of systems in production, and by introducing randomness – chaos – into systems under test.

This is something I’ve seen far less of in the testing industry than I’d have expected: taking the data generated by different kinds of tests (be they automated or not) and using that data to guide further or different testing. We have the tooling to do this, but even basic measures such as the code covered by automated test suites are not generally collected and, even when they are, not used as input into the risk analysis for human testing.
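As a tiny sketch of that idea, per-module coverage figures (such as those exported from a coverage tool) can feed directly into deciding where human exploratory testing should focus. The module names, figures and threshold here are all hypothetical.

```python
# Hypothetical per-module line-coverage figures from an automated suite
coverage_by_module = {
    "billing": 0.91,
    "auth": 0.78,
    "report_export": 0.34,
    "notifications": 0.12,
}

THRESHOLD = 0.50  # below this, the automated suite barely touches the module

# Modules with thin automated coverage are prime candidates for
# risk analysis and human exploratory testing, lowest coverage first
exploratory_candidates = sorted(
    (m for m, cov in coverage_by_module.items() if cov < THRESHOLD),
    key=lambda m: coverage_by_module[m],
)
print(exploratory_candidates)
```

Nothing here is sophisticated – which is rather the point: this data already exists in most pipelines and simply goes unused.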

The next (not) trend is “The status of performance testing in Agile and DevOps”, covered by Henrik Rexed (Performance Engineer, Neotys) and his focus – unsurprisingly since he works for a performance testing tool vendor – is performance testing. He comments:

That is why the most popular unicorn companies have invested in a framework that would allow them to automatically build, test, and deploy their releases to production with minimal human interaction.

Every organisation moving to Agile or DevOps will add continuous testing to their release management. Without implementing the performance scoring mechanism, they would quickly be blocked and will start to move performance testing to major releases only.

We are taking major risks by removing performance testing of our pipeline. Let’s be smart and provide a more efficient performance status.

I’m not keen on the idea of taking what so-called “unicorn companies” do as a model for what every other company should do – remember context matters and what’s good for SpaceX or WeWork might not be good for your organization. I agree that continuous testing is a direction most teams will take as they feel pressured to deploy more frequently and I see plenty of evidence for this already (including within Quest). Henrik makes the good point here that the mix of tests generally considered in “continuous testing” often doesn’t include performance testing and there are likely benefits from adding such testing into the mix rather than kicking the performance risk can down the road.
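Folding even a crude performance check into the continuous testing mix needn’t be elaborate. The sketch below simulates the shape of such a gate – sample some request latencies and fail the build if a budget is blown. The endpoint stand-in, sample count and budget are invented; a real setup would use a proper load-testing tool against a deployed environment.

```python
import time

def fake_request() -> float:
    """Stand-in for an HTTP call to the system under test; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate some work
    return time.perf_counter() - start

LATENCY_BUDGET_S = 0.5  # fail the pipeline stage if we blow this budget

samples = [fake_request() for _ in range(5)]
worst = max(samples)

# A failing assertion here fails the CI stage, surfacing the
# performance risk on every build rather than only at major releases
assert worst < LATENCY_BUDGET_S, f"worst latency {worst:.3f}s exceeds budget"
print(f"worst of {len(samples)} samples: {worst:.3f}s (budget {LATENCY_BUDGET_S}s)")
```

Even this blunt an approach keeps performance visible per build, rather than deferring it to major releases as Henrik warns.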

The next trend is “Digital Transformation and Artificial Intelligence” and is discussed by Shyam Narayan (Director | Head of Managed Services, Capgemini Australia and New Zealand). On the goal of AI in testing, he says:

AI interactions with the system multiply the results normally obtained by manual testing. A test automation script can be designed to interact with the system, but it can’t distinguish between the correct and incorrect outcomes for applications.

The end goal of leveraging AI in testing is to primarily reduce the testing lifecycle, making it shorter, smarter, and capable of augmenting the jobs of testers by equipping them with technology. AI is directly applicable to all aspects of testing including performance, exploratory and functional regression testing, identifying and resolving test failures, and performing usability testing.

I’m not sure what he means by “multiplying the results normally obtained by manual testing”, and I’m also not convinced that the goal of leveraging AI is to reduce the time it takes to test; I’d see the advantages more in terms of enabling us to do things we currently cannot with humans or existing automation technologies. He also sees a very broad surface area of applicability across testing; it’ll be interesting to see how the reality pans out. In terms of skill requirements for testers in this new world, Shyam says:

Agile and DevOps-era organisations are seeking software development engineers in test (SDET) – technical software testers. But with AI-based applications the requirement will change from SDET to SDET plus data science/statistical modelling – software development artificial intelligence in rest [sic] (SDAIET). This means that QA experts will need knowledge and training not only in development but also in data science and statistical modelling.

This honestly seems ridiculous. The SDET idea hasn’t even been adopted broadly and, where organizations went “all in” around that idea, they’ve generally pulled back and realized that the testing performed by humans is actually a significant value add. Something like a SDAIET is so niche that I can’t imagine it catching on in any significant way.

The next trend is “Unlocking the value of QA teams” and is discussed by Remco Oostelaar (Director, Capgemini Australia and New Zealand). His main point seems to be that SAFe adoption has been a great thing, but that testing organizations haven’t really adapted into this new framework:

In some cases, the test organisation has not adapted to the new methods of Agile, Lean, Kanban that are integrated into the model. Instead it is still structurally based on the Waterfall model with the same processes and tools. At best these test organisations can deliver some short-term value, but not the breakthrough performance that enables the organisation to change the way it competes.

It’s interesting that he considers SAFe to be a model incorporating Agile, Lean and Kanban ideas; I didn’t get that impression when I took a SAFe course some years ago, but I acknowledge that my understanding of, and interest in, the framework is limited.

It is also important to consider how to transform low-value activities into a high-value outcome. An example is the build of manual test scenarios to automation that can be integrated as part of the continuous integration and continuous delivery (CI/CD) model. Other examples are: automatic code quality checks, continuous testing for unit tests, the application performance interface (API), and monitoring performance and security.

It’s sad to see this blanket view of manual testing as a “low-value activity” and we continue to have a lot of work to do in explaining the value of human testing and why & where it still fits even in this new world of Agile, CI/CD, DevOps, SAFe, NWW, AI, <insert buzzword here>.

Implementing SAFe is not about cost reduction; it is about delivering better and faster. Companies gain a competitive edge and improved customer relationship. The focus is on the velocity, throughput, efficiency improvement and quality of the delivery stream.

I’m sure no organization takes on SAFe believing it will reduce costs; just a glance at the framework overview shows how heavyweight it is and how much extra work you’ll need to do to implement it by the book. I’d be interested to see case studies of efficiency improvements and quality upticks after adopting SAFe.

The next trend is “Connected ecosystem for effective and efficient QA” and it’s over to Ajay Walgude (Vice President, Capgemini Financial Services) for the commentary. He makes reference to the World Quality Report (per my previous blog):

Everything seems to be in place or getting in order, but we still have lower defect removal efficiency (DRE), high cost of quality especially the cost on non-conformance based on interactions with various customers. While the World Quality Report (WQR) acknowledges these changes and comments on the budgets for QA being stable or reduced, there is no credible source that can comment on metrics such as cost of quality, and DRE across phases, and type of defects (the percentage of coding defects versus environment defects).

He doesn’t cite any sources for these claims – do we really have lower DRE across the industry? How would we know? And would we care? Metrics like DRE are not gathered by many organizations (and rightly so, as far as I’m concerned), so such claims for the industry as a whole make no sense.

Effective and efficient QA relates to testing the right things in the right way. Effectiveness can be determined in terms of defect and coverage metrics such as defect removal efficiency, defect arrival rate, code coverage, test coverage and efficiency that can be measured in terms of the percentage automated (designed and executed), cost of quality and testing cycle timeframe. The connected eco system not only has a bearing on the QA metrics – cost of appraisal and prevention can go down significantly – but also on the cost of failure.

I’m with Ajay on the idea that we should strive to test the right things in the right way – this is again an example of context awareness, though that’s probably not what he’s referring to. I disagree with measuring effectiveness and efficiency via the types of metrics he mentions, however. Measuring “percentage automated” is meaningless to me: it treats human and automated tests as countable in the same way (which is nonsense) and reinforces the notion that more automation is better (which is not necessarily the case). And how exactly would one measure the “cost of quality” as a measure of efficiency?

He also clearly sees the so-called “Spotify Model” as being in widespread usage and makes the following claim about more agile team organizations:

The aim of squads, tribes, pods and scrum teams is to bring everybody together and drive towards the common goal of building the minimum viable product (MVP) that is release worthy. While the focus is on building the product, sufficient time should be spent on building this connected eco system that will significantly reduce the time and effort needed to achieve that goal and, in doing so, addressing the effective and efficient QA.

The goal of an agile development team is not to build an MVP; that may be a goal at some early stage of a product’s life, but it won’t generally be the goal.

The penultimate trend, “RPA and what we can learn from test automation”, is covered by Remco Oostelaar again (Director, Capgemini Australia and New Zealand) and he starts off by defining what he means by RPA:

Robotic Process Automation (RPA) is the automation of repetitive business tasks and it replaces the human aspect of retrieving or entering data from or into a system, such as entering invoices or creating user accounts across multiple systems. The goal is to make the process faster, more reliable, and cost-effective.

He argues that many of the challenges that organizations face when implementing RPA are similar to those they faced previously when implementing automated testing, leading to this bold claim:

In the future, RPA and test automation will merge into one area, as both have the same customer drivers – cost, speed and quality – and the skillsets are exchangeable. Tool providers are crossing-over to each other’s areas and, with machine learning and AI, this will only accelerate.

It troubles me when I see “test automation” positioned as a cost-reduction initiative. The ROI on test automation (like any form of testing) is zero – it’s a cost, just like writing code is a cost, and yet I’ve literally never seen anyone ask for an ROI justification to write product code.

The last trend covered here is “Data democratisation for competitive advantage”, discussed by Malliga Krishnan (Director | Head of Insights and Data, Capgemini Australia and New Zealand) and she doesn’t discuss testing at all.

In another report error, there is one more trend that wasn’t mentioned until we get here, so the real final trend is “The transformative impact of Cloud”, covered by David Fodor (Business Development – Financial Services, Amazon Web Services, Australia & New Zealand). It’s a strongly pro-AWS piece, as you’d probably expect, but it’s interesting to read the reality around AI implementations for testing viewed through the lens of such an infrastructure provider:

When it comes to quality assurance, it’s very early days. I haven’t seen significant investment in use cases that employ AI for assurance processes yet, but I’m sure as organisations redevelop their code deployment and delivery constructs, evolve their DevOps operating models and get competent at managing CI/CD and blue/green deployments, they will look at the value they can get from AI techniques to further automate this process back up the value chain.

It sounds like a lot of organizations have a long way to go in getting their automation, CI/CD pipelines and deployment models right before they need to worry about layering on AI. He makes the following points re: developers and testers:

Traditionally, there was a clear delineation between developers and testers. Now developers are much more accountable – and will increasingly be accountable – for doing a significant part of the assurance process themselves. And, as a result, organisations will want to automate that as much as possible. We should expect to see the balance of metrics – or what success looks like for a development cycle – to evolve very much to cycle time over and above pure defect rate. As techniques such as blue/green and canary deployments evolve even further, and as microservices architectures evolve further, the impacts of defects in production will become localised to the extent where you can afford to bias speed over failure.

The more you bias to speed, the more cycles that you can produce, the better you get and the lower your failure rates become. There is a growing bias to optimise for speed over perfection within an environment of effective rollback capabilities, particularly in a blue/green environment construct. The blast radius in a microservices architecture means that point failures don’t bring down your whole application. It might bring down one service within a broader stack. That’s definitely the future that we see. We see organisations who would rather perform more deployments with small failure rates, than have protracted Waterfall cycle development timelines with monolithic failure risk.

The “cycle time” metric is mentioned again here and at least he sees nonsense metrics such as defect rates going away over time in these more modern environments. His comment that “the impacts of defects in production will become localised to the extent where you can afford to bias speed over failure” rings true, but I still think many organizations are far away from the maturity in their DevOps, CI/CD, automation, rollbacks, etc. that make this a viable reality. The illusion of having that maturity is probably leading some to already be making this mistake, though.
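The “afford to bias speed over failure” argument rests entirely on having automatic rollback in place. A minimal sketch of the canary-style decision it implies is below – route a slice of traffic to the new version, compare its error rate against the stable baseline, and roll back if it misbehaves. All the numbers and names are invented for illustration.

```python
def should_rollback(baseline_error_rate: float,
                    canary_error_rate: float,
                    tolerance: float = 0.01) -> bool:
    """Roll back if the canary's error rate exceeds the baseline plus a tolerance."""
    return canary_error_rate > baseline_error_rate + tolerance

# Stable version at 0.2% errors; a canary at 3% trips the rollback,
# while one at 0.4% is within tolerance and is allowed to proceed
print(should_rollback(0.002, 0.030))  # True  -> roll back
print(should_rollback(0.002, 0.004))  # False -> promote
```

The mechanism is trivial; the maturity the author questions lies in reliably collecting those error rates and wiring the decision into the deployment tooling.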

Takeaways for the future of Quality Assurance

With the trends really all covered, the last section of the report of any substance is the “10 Key takeaways for the future of Quality Assurance” which are again listed without any citations or sources, so can only be taken as Capgemini opinion:

  • Digital transformation
  • Shift-Left or Right-Shift
  • Automation
  • Redefined KPIs
  • Evolution of QA Tools
  • QA Operating Model
  • QA framework and strategy
  • Focus on capability uplift
  • Role of industry bodies
  • Role of system integrators

Wrapping up

This is another hefty report from the Capgemini folks and, while the content is gathered less from data and more from opinion pieces when compared to the World Quality Report, it results in a very similar document.

There are plenty of buzzwords and vested interest commentary from tool vendors, but little to encourage me that such a report tells us much about the true state of testing or its future. While it was good to read some of the more sensible and practical commentary, the continued predictions of AI playing a significant role in QA/testing sometime soon simply don’t reflect the reality that those of us who spend time learning about what’s actually going on in testing teams are seeing. Most organizations still have a hard enough time getting genuine value from a move to agile ways of working and particularly leveraging automation to best effect, so the extra complexity and contextual application of AI seems a long way off the mainstream to me.

I realize that I’m probably not the target audience for these corporate-type reports, and that the target audience will probably take on board some of the ideas from such a high-profile report. Unfortunately, this will likely result in poor decisions about testing direction and strategy in large organizations, when a more context-aware investigation of which good practices make sense in each of their unique environments would probably produce better outcomes.

I find reading these reports both interesting and quite depressing in about equal measure, but I hope I’ve highlighted some of the more discussion-worthy pieces in this blog post.

2 thoughts on “Reviewing “Reimagine The Future of Quality Assurance” – (yet) another corporate report on the state of QA/testing”

  1. Pingback: My Week's Favorites - December 15, 2019 - Thao Vo - Blog

  2. John Wilson

    Hmm. Perhaps the report could be summarised as, a report generated by those so far removed from actual testing for those so far removed from actual testing that the contents bear little resemblance to reality, possibly.
    Whilst I don’t always agree with Lee’s views I do welcome this thought provoking article.

