This is the fifth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
As I reach the halfway point in this series, I come to the question “Can you automate software testing?” (and the related question, “How can software test automation be done?”).
If you spend any time on Twitter or LinkedIn following threads about testing, the question of whether testing can be automated crops up with monotonous regularity and often results in very heated discussion, with strong opinions from both the “yes” and “no” camps.
As a reminder (from part one of this blog series), my preferred definition of testing comes from Michael Bolton and James Bach, viz.
Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.
Looking at this definition, testing is clearly a deeply human activity, since skills such as learning, exploring, questioning and inferring are not generally well modelled by machines (even with AI/ML). Humans may or may not be assisted by tools or automated means while exercising these skills, but that doesn’t mean the performance of testing is itself “automated”.
The distinction drawn between “testing” and “checking” made by James Bach and Michael Bolton has been incredibly helpful for me when talking about automation and countering the idea that testing can be automated (much more so than “validation” and “verification” in my experience). As a refresher, their definition of checking is:
Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
As Michael says, “We might choose to automate the exercise of some functions in a program, and then automatically compare the output of that program with some value obtained by another program or process. I’d call that a check.” Checking is a valuable component of our overall testing effort and, by this definition, lends itself to be automated. But the binary evaluations resulting from the execution of such checks form only a small part of the testing story and there are many aspects of product quality that are not amenable to such black and white evaluation.
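To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical names and data) of what such a check might look like:

```python
def check_cart_total(item_prices_cents, expected_total_cents):
    """A check: an algorithmic decision rule applied to a specific
    observation of the product, yielding a binary result."""
    observed = sum(item_prices_cents)        # the specific observation
    return observed == expected_total_cents  # pass/fail, nothing more

# The machine can confirm the arithmetic...
result = check_cart_total([999, 500], 1499)  # True
```

The check tells us the total matched; it cannot tell us whether the price display is clear, the rounding policy is fair, or the checkout flow feels trustworthy. Those evaluations remain human testing work.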
Thinking about checks, there’s a lot that goes into them apart from the actual execution (by a machine or otherwise): someone decided we needed a check (risk analysis), someone designed the check, someone implemented the check (coding), someone decided what to observe and how to observe it, and someone evaluated the results of executing the check. These aspects of the check are testing activities and, importantly, they’re not the aspects that can be handed over to a machine, i.e. be automated. There is significant testing skill required in the design, implementation and analysis of the check and its results; the execution (the automated bit) is really the easy part.
To quote Michael again:
A machine producing a bit is not doing the testing; the machine, by performing checks, is accelerating and extending our capacity to perform some action that happens as part of the testing that we humans do. The machinery is invaluable, but it’s important not to be dazzled by it. Instead, pay attention to the purpose that it helps us to fulfill, and to developing the skills required to use tools wisely and effectively.
We also need to be mindful not to conflate automation in testing with “automated checking”. There are many other ways that automation can help us, extending human abilities and enabling testing that humans cannot practically perform. Some examples of applications of automation include test data generation, test environment creation & configuration, software installation & configuration, monitoring & logging, simulating large user loads, repeating actions en masse, etc.
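As an illustration of automation assisting testing without being “automated testing”, here is a small sketch of a test data generator (hypothetical Python; the field names and ranges are invented for the example):

```python
import random
import string

def generate_users(n, seed=None):
    """Generate n synthetic user records for testing.
    Automation extending human testers, not replacing them."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    users = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        users.append({
            "id": i,
            "email": f"{name}@example.test",
            "age": rng.randint(18, 90),
        })
    return users

sample = generate_users(3, seed=42)
```

A human still decides what realistic data looks like, and must verify that the generated records have not quietly diverged from reality.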
If we make the mistake of allowing ourselves to believe that “automated testing” exists, then we can all too easily fall into the trap of narrowing our thinking about testing to just automated checking, with a resulting focus on the development and execution of more and more automated checks. I’ve seen this problem many times across different teams in different geographies, especially in regression testing.
I think we are well served to eliminate “automated testing” from our vocabulary, instead talking about “automation in testing” and the valuable role automation can play in both testing & checking. The continued propaganda around “automated testing” as a thing, though, makes this job much harder than it sounds. You don’t have to look too hard to find examples of test tool vendors using this term and making all sorts of bold claims about their “automated testing solutions”. It’s no wonder that so many testers remain confused in answering the question about whether testing can be automated when a quick Google search got me to some of these gems within the top few results: What is automated testing? (SmartBear), Automated software testing (Atlassian) and Test Automation vs. Automated Testing: The Difference Matters (Tricentis).
I’ve only really scratched the surface of this big topic in this blog, but it should be obvious by now that I don’t believe you can automate software testing. There is often value to be gained by automating checks and leveraging automation to assist and extend humans in their testing efforts, but the real testing lies with the humans – and always will.
Some recommended reading related to this question:
- The Testing and Checking Refined article by James Bach and Michael Bolton, in which the distinction between testing and checking is discussed in depth, as well as the difference between checks performed by humans and those by machines.
- The Automation in Testing (AiT) site by Richard Bradshaw and Mark Winteringham; their six principles of AiT make a lot of sense to me.
- Bas Dijkstra’s blog
You can find the first four parts of this blog series at:
- Why is software testing important?
- How does software testing impact software quality?
- When should software testing activities start?
- How is software testing done?
I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.
Our company is adopting test automation in areas where it can help us, but we’ve also had an automated data generation tool for some time. Our business is computerised timetabling for universities and colleges, so we need test data that replicates the likely timetable of a university with hundreds of rooms, thousands of staff and tens of thousands of students. We have a bespoke tool that generates such timetables, but its results sometimes contain anomalous data structures; once you generate data items with one or two relationships thoroughly mapped, the mapping of further relationships can diverge from practice in the real world. For a complex dataset full of multiple inter-relationships, this is almost inevitable.
So (apart from some unintentionally funny personal names, such as a student called Matt Grey, but not his sister Satin) we sometimes find that situations arise in this generated dataset which just don’t work as perfect simulations of reality, and require human intervention to adapt the generated dataset for our test purposes.
Perhaps the most eyebrow-raising instance was the test timetable that got generated for such a fictional university which had seminars, symposiums, tutorials, conferences and exhibitions, but no lectures whatsoever! If a “simple” data generation tool can have this sort of problem, testers must be aware of possible shortcomings in their other automated tools and use them with a proper understanding of what they can and can’t do.
Thanks for your comment, Robert. You make a good point and one that seems to be largely overlooked, in that testers (and others) need to be well-practised and skilled in the use of their tools, not only to get the most benefit from them but also to avoid being fooled by them in some way (as per your example). To quote Michael Bolton again: “by accelerating some action, tools can enable us to do bad testing faster than ever, far worse than we could possibly do it without using the tool.”
“a quick Google search got me to some of these gems within the top few results: What is automated testing? (SmartBear), Automated software testing (Atlassian) and Test Automation vs. Automated Testing: The Difference Matters (Tricentis).”
In one sense, this is completely explicable when you look at the business these companies believe that they are in: selling magic wands. The errors come with 1) believing that there is actual magic; and 2) believing that, in the performance of a feat of apparent magic, the power is in the wand, rather than in the magician who wields it. Caveat lector, and caveat emptor.
Thanks for taking a look at my blog and for your reply, Michael. It seems far too easy to make claims about these tools without much in the way of foundation while still managing to persuade decision-makers to fork out big dollars to buy them. I like the image of the tester as magician (as opposed to companies acting as magicians in making testers disappear).