I’ve been coming across talk of DevOps a lot recently, both in my work and in the broader software development community. The continuous delivery & deployment folks all seem very excited about how frequently releases can now be made, thanks to great tooling and “all of that automated testing” before deployment. From what I’ve been hearing, this has the potential to be a perfect storm of poor agile practices meeting a lack of understanding of what can and cannot be automated when it comes to “testing”.
The following quote comes from John Ferguson Smart in his recent article, The Role of QA in a DevOps World:
there is a lot more to DevOps than simply automating a build pipeline. DevOps is not an activity, it is a culture, and a culture that goes much deeper than what appears to the naked eye. You could say that DevOps involves creating a culture where software development teams work seamlessly with IT operations so that they can work together to build, test, release and update applications more frequently and efficiently.
I think this quote says a lot about DevOps and makes it clear that it’s not just about automating the stuff around building and deploying the software. With the big focus on automation, it seems somewhat inevitable that the same misunderstandings about the role of human testers are being made as were common during the early stages of agile adoption:
Some organisations also seem to believe that they can get away with fewer testers when they adopt DevOps. After all, if all the tests are automated, who needs manual testers?
In reading more about the culture of DevOps, I see two areas whose limitations we need to be talking about and making people aware of, viz. “acceptance criteria” and “automated testing”.
Acceptance criteria
Let’s be clear from the start that meeting all of the acceptance criteria does not mean the story/product is acceptable to your customer. There is a real danger of acceptance criteria being used in the same fallacious way we previously used “requirements”, as though we can completely specify all the dimensions of acceptability by producing a set of criteria. Meeting all of the specified acceptance criteria might mean the story/product is acceptable; failing to meet them means the story/product is definitely not acceptable. So, we’d be better off thinking of them as “rejection criteria”. Michael Bolton wrote an excellent blog post on this, Acceptance Tests: Let’s Change the Title, Too, and he says (the bold emphasis is mine):
The idea that we’re done when the acceptance tests pass is a myth. As a tester, I can assure you that a suite of passing acceptance tests doesn’t mean that the product is acceptable to the customer, nor does it mean that the customer should accept it. It means that the product is ready for serious exploration, discovery, investigation, and learning—that is, for testing—so that we can find problems that we didn’t anticipate with those tests but that would nonetheless destroy value in the product.
Have a think about what that means for automated acceptance tests…
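To make this concrete, here’s a deliberately trivial sketch (the product code, the criterion and the check are all invented for illustration, not taken from anyone’s real pipeline):

# Hypothetical product code: apply a percentage discount to an order total.
def discounted_total(total, discount_percent):
    return total - total * discount_percent / 100

# A programmed acceptance check, written straight from the stated criterion:
# "a 10% discount on a 100.00 order gives a 90.00 total".
def test_ten_percent_discount():
    assert discounted_total(100.00, 10) == 90.00

if __name__ == "__main__":
    test_ten_percent_discount()
    print("acceptance check passed")
    # The check passes, yet it says nothing about the cases nobody wrote
    # down: a discount above 100% producing a negative total, rounding of
    # odd amounts to whole cents, and so on.

The green tick tells us that the anticipated condition still holds; it takes a human to go looking for the problems nobody anticipated.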
“Automated testing”
Although I prefer the term “automated checks” over “automated tests” (to highlight the fact that “testing” requires human thinking skills), I’ll indulge the common parlance for the purposes of this topic. It feels like ever greater reliance is being placed on automated tests to signify that all is OK with the products we build, especially in the world of DevOps, where deploying code changes without any further human interaction is seen as normal as long as all the automated tests are “green”.
Let’s reflect for a moment on why we write automated tests. In another excellent blog post, s/automation/programming/, Michael Bolton says:
people do programming, and that good programming can be hard, and … good programming requires skill. And even good programming is vulnerable to errors and other problems.
We acknowledge that writing programs is difficult and prone to human error. Now, suppose instead of calling our critical checks “automated tests”, we instead referred to them as “programmed tests” – this would make it very clear that we’re:
writing more programs when we’re already rationally uncertain about the programs we’ve got.
Michael suggests similar substitutions:
Let’s automate to do all the testing. “Let’s write programs to do all the testing.”
Testing will be faster and cheaper if we automate. “Testing will be faster and cheaper if we write programs.”
Automation will replace human testers. “Writing programs will replace human testers.”
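To make that substitution concrete, here’s a small hypothetical sketch (the product code and the check are invented for illustration; none of this comes from Michael’s post) of a programmed test that stays green while the very defect it was meant to guard against ships:

# Hypothetical product code with a boundary defect: items priced exactly
# at max_price should be included, but the comparison excludes them.
def affordable_items(prices, max_price):
    return [p for p in prices if p < max_price]  # should be <=

# A "programmed test" is just another program, with its own blind spots:
# none of the test data sits on the boundary, so the check passes and
# stays green in every pipeline run while the defect remains.
def test_affordable_items():
    assert affordable_items([5, 10, 20], max_price=15) == [5, 10]

if __name__ == "__main__":
    test_affordable_items()
    print("all checks green")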
I think this makes it very clear that we cannot automate all of our testing on our way to quality products, be it in a DevOps environment or otherwise.
(For a great reference piece on the use of automation in testing, I recommend Michael Bolton & James Bach’s A Context-Driven Approach to Automation in Testing.)
What then of the role for “manual” testers? John Ferguson Smart notes the changing role of testing in such environments, and these changes again mirror the kind of role I’ve been advocating for testers within agile teams:
It is true that you don’t have time for a separate, dedicated testing phase in a DevOps environment. But that doesn’t mean there is no manual testing going on. On the contrary, manual testing, in the form of exploratory testing, performance testing, usability testing and so on, is done continuously throughout the project, not just at the end… The role of the tester in a DevOps project involves exploring, discovering, and providing feedback about the product quality and design, as early as it is feasible to do so, and not just at the end of the process.
I’ll quote John Ferguson Smart again in conclusion – and hopefully I’ve made it clear why I agree with his opinion (the emphasis is again mine):
So testers are not made redundant because you have the technical capability to deploy ten times a day. On the contrary, testers play a vital role in ensuring that the application that gets deployed ten times a day is worth deploying.