It feels like much less than a year since I was penning my review of 2021, but the calendar doesn’t lie so it really is time to take the opportunity to review my 2022.
I published just 10 blog posts this year, so didn’t quite meet my personal target cadence of a post every month. There were a few reasons for this, the main one being my unexpected re-entry into employment (more on that below). Perhaps due to my more limited output, my blog traffic dropped by about 40% compared to 2021. I continue to be grateful for the amplification of my blog posts via their regular inclusion in lists such as 5Blogs and Software Testing Weekly.
March was the biggest month for my blog by far this year, thanks to a popular post about a video detailing how testers should fake experience to secure roles. I note in writing this blog post now that the video in question has been removed from YouTube, but no doubt there are similar videos doing the rounds that encourage inexperienced testers to cheat and misrepresent themselves – to the detriment of both themselves and the reputation of our industry.
I again published a critique of an industry report in November (after publishing similar critiques in 2020 and 2021) and this was my second most popular post of the year, so it’s good to see the considerable effort that goes into these critique-style posts being rewarded by good engagement.
I closed out the year with about 1,200 followers on Twitter, steady year on year, but maybe everyone will leave Twitter soon if the outrage many are expressing recently isn’t fake!
Work life
For the first few months of 2022, I continued doing a small amount of consulting work through my own business, Dr Lee Consulting. It was good to work directly with clients to help solve testing challenges and I was encouraged by their positive feedback.
Quite unexpectedly, an ex-colleague from my days at Quest persuaded me to interview at SSW, the consultancy he joined after Quest. A lunch with the CEO and some formalities quickly led to an offer to become SSW’s first Test Practice Lead (on a permanent part-time basis). I’ve now been with SSW for about seven months and it’s certainly been an interesting journey so far!
The environment is quite different from Quest. Firstly, SSW is a consultancy rather than a product company and I’ve come to realise how different the approach is in the consulting world compared to the product world. Secondly, SSW is a small Australian company, whereas Quest is a large international one, so meetings all happen within standard working hours (and I certainly don’t miss the very early and very late meetings that so frequently formed part of my Quest working day!).
I have been warmly welcomed across SSW and I’m spreading the word on good testing internally, as well as working directly with some of SSW’s clients to improve their approaches to testing and quality management.
Testing-related events
As I announced mid-2021, I was excited to be part of the programme for the in-person Testing Talks 2021 (The Reunion) conference in Melbourne, rescheduled for October 2022. Unfortunately, I had to give up my spot on the programme due to my COVID vaccination status – though, surprise surprise, all such restrictions had been removed by the time the event actually took place. But I did attend the conference and it was awesome to see so many people in the one place for a testing event, after the hiatus thanks to the pandemic and the incredibly harsh restrictions that resulted for Melbourne. (I blogged about my experience of attending Testing Talks 2022.)
In terms of virtual events, I was fortunate to be invited to act as a peer advisor for one of Michael Bolton’s virtual RST classes running in the Australian timezone. This was an awesome three-day experience and I enjoyed interacting with the students as well as sharpening my understanding of some of the RST concepts from Michael’s current version of the class.
Two very enjoyable virtual events came courtesy of the Association for Software Testing (AST) and their Lean Coffees. I participated in the May and September events suited to my timezone and they were enlightening and fun, as well as offering a great way to engage with other testers in an informal online setting.
I had an enjoyable conversation with James Bach too, forming part of his “Testing Voices” series on the Rapid Software Testing YouTube channel:
Although I’ve interacted with James online and also in person several times (especially during his visits to Melbourne), this was our most in-depth conversation to date and it was fun to talk about my journey into testing, my love of mathematics and my approach to testing. I appreciate James’s continued passion for testing and, in particular, his desire to move the craft forward.
Testing books
I didn’t publish an updated version of my book An Exploration of Testers during 2022, but may do in 2023. I’m always open to additional contributions to this book, so please contact me if you’re interested in telling your story via the answers to the questions posed in the book!
I made good progress on the free AST e-book, Navigating the World as a Context-Driven Tester, though. This book provides responses to common questions and statements about testing from a context-driven perspective, with its content being crowdsourced from the membership of the AST and the broader testing community. I added a further 10 responses in 2022, bringing the total to 16. I will continue to ask for contributions about once a month in 2023. The book is available from the AST’s GitHub.
Podcasting
Paul Seaman, Toby Thompson and I kicked off The 3 Amigos of Testing podcast in 2021 and produced three episodes in that first year, but we failed to reconvene to produce more content in 2022. There were a number of reasons for this, but we did get together to work up our next episode recently, so expect our next podcast instalment to drop in early 2023!
Volunteering for the UK Vegan Society
I’ve continued to volunteer with the UK’s Vegan Society, both as a proofreader and as a contributor to their web research efforts. I’ve learned a lot about SEO as a result of the web-related tasks and I undertook an interesting research project on membership/join pages to help the Society improve its own, with the aim of increasing new memberships.
I really enjoy working with The Vegan Society, increasing my contribution to and engagement with the vegan community worldwide. It was particularly rewarding and humbling to be awarded “Volunteer of the Season” and be featured in the Society’s member magazine, The Vegan, towards the end of the year.
In closing
As always, I’m grateful for the attention of my readers here and also followers on other platforms. I wish you all a Happy New Year and I hope you enjoy my posts and other contributions to the testing community to come through 2023. The first public opportunity to engage with me in 2023 will be the AST’s Steel Yourselves webinar on January 30, when I’ll be arguing the case for a testing phase – I hope to “see you” there!
It’s that time of year again and I’ve gone through the pain of reviewing the latest edition of Capgemini’s annual World Quality Report (to cover 2022/23) so you don’t have to.
I reviewed both the 2018/19 and 2020/21 editions of their report in some depth in previous blog posts and I’ll take the same approach to this year’s effort, comparing and contrasting it with the previous two reports. Although this review might seem lengthy, it’s a mere summary of the 80 pages of the full report!
TL;DR
The survey results in this year’s report are more of the same really and I don’t feel like I learned a great deal about the state of testing from wading through it. My lived reality working with organizations to improve their testing and quality practices is quite different to the sentiments expressed in this report.
It’s good to see the report highlighting sustainability issues, a topic that hasn’t received much coverage yet but will become more of an issue for our industry I’m sure. The way we design, build and deploy our software has huge implications for its carbon footprint, both before release and for its lifetime in production usage.
The previous reports I reviewed were very focused on AI & ML, but these topics barely get a mention this year. I don’t think the promise of these technologies has been realised at large in the testing industry and maybe the lack of focus in the report reflects that reality.
It appears that the survey respondents are drawn from a very similar pool to previous reports, and the lack of responses from smaller organizations means that the results are heavily skewed to very large corporate environments.
I would have liked to see some deep questions around testing practice in the survey to learn more about what’s going on in terms of human testing in these large organizations, but alas there was no such questioning here (and these organizations seem to be less forthcoming with this information via other avenues too, unfortunately).
The visualizations used in the report are very poor. They look unprofessional, the use of multiple different styles is unnecessary and many are hard to interpret (as evidenced by the fact that the authors saw fit to include text explanations of what you’re looking at on many of these charts).
I reiterate my advice from last year – don’t believe the hype, do your own critical thinking and take the conclusions from surveys and reports like this with a (very large) grain of salt. Keep an interested eye on trends but don’t get too attached to them and instead focus on building excellent foundations in the craft of testing that will serve you well no matter what the technology du jour happens to be.
The survey (pages 72-75)
This year’s report runs to 80 pages, continuing the theme of being slightly thicker each year. I looked at the survey description section of the report first as it’s important to get a picture of where the data came from to build the report and support its recommendations and conclusions.
The survey size was 1,750, suspiciously the exact same number as for the 2020/21 report. The organizations taking part again all had over 1,000 employees, with the largest number (35% of responses) coming from organizations of over 10,000 employees. The response breakdown by organizational size was very similar to that of the previous two reports, reinforcing the concern that the same organizations are contributing every time. The lack of input from smaller organizations unfortunately continues.
While responses came from 32 countries, they were heavily skewed to North America and Western Europe, with the US alone contributing 16% and then France with 9%. Industry sector spread was similar to past reports, with “High Tech” (18%) and “Financial Services” (15%) topping the list.
The mix of people who provided survey responses this year was also very similar to previous reports, with CIOs at the top again (24% here vs. 25% last year), followed by QA Testing Managers and IT Directors. These three roles accounted for over half (59%) of all responses.
Introduction (pages 4-5)
There’s a definite move towards talking about Quality Engineering in this year’s report (though it’s a term that’s not explicitly defined anywhere) and the stage is set right here in the Introduction:
We also heartily agree with the six pillars of Quality Engineering the report documents: orchestration, automation, AI, provisioning, metrics, and skill. Those are six nails in the coffin of manual testing. After all, brute force simply doesn’t suffice in the present age.
So, the talk of the death of manual testing (via a coffin reference for a change) continues, but let’s see if this conclusion is backed up by any genuine evidence in the survey’s findings.
Executive Summary (pages 6-7)
The idea of a transformation occurring from Quality Assurance (QA) to Quality Engineering (QE) is the key message again in the Executive Summary, set out via what the authors consider their six pillars of QE:
Agile quality orchestration
Quality automation
Quality infrastructure testing and provisioning
Test data provisioning and data validation
The right quality indicators
Increasing skill levels
In addition to these six pillars, they also bring in the concepts of “Sustainable IT” and “Value stream management”, more on those later.
Key recommendations (pages 8-9)
The set of key recommendations from the entirety of this hefty tome comprises little more than one page of the report and the recommendations are roughly split up as per the QE pillars.
For “Agile quality orchestration”, an interesting recommendation is:
Track and monitor metrics that are holistic quality indicators across the development lifecycle. For example: a “failed deployments” metric gives a holistic view of quality across teams.
While I like the idea of more holistic approaches to quality (rather than hanging our quality hat on just one metric), the example seems like a strange choice. Deployments can fail for all manner of reasons and, on the flipside, “successful” deployments may well be perceived as low quality by end users of the deployed software.
For “Quality automation”, it’s pleasing to see a recommendation like this in such a report:
Focus on what delivers the best benefits to customers and the business rather than justifying ROI.
It’s far too common for automation vendors to make their case based on ROI (and they rarely actually mean ROI in any traditional financial use of that term) and I agree that we should be looking at automation – just like any other ingredient of what goes into making the software cake – from a perspective of its cost, value and benefits.
Moving on to “Quality and sustainable IT”, they recommend:
Customize application performance monitoring tools to support the measurement of environmental impacts at a transactional level.
This is an interesting topic and one that I’ve looked into in some depth during volunteer research work for the UK’s Vegan Society. The design, implementation and hosting decisions we make for our applications all have significant impacts on the carbon footprint of the application and it’s not a subject that is currently receiving as much attention as it deserves, so I appreciate this being called out in this report.
In the same area, they also recommend:
Bring quality to the center of the strategy for sustainable IT for a consistent framework to measure, control, and quantify progress across the social, environmental, economic, and human facets of sustainable IT, even to the extent of establishing “green quality gates.”
Looking at “Quality engineering for emerging technology trends”, the recommendations are all phrased as questions, which seems strange to me and I don’t quite understand what the authors are trying to communicate in this section.
Finally, in “Value stream management”, they say:
Make sure you define with business owners and project owners the expected value outcome of testing and quality activities.
This is a reasonable idea and an activity that I’ve rarely seen done, well or otherwise. Communicating the value of testing and quality-related activities is far from straightforward, especially in ways that don’t fall victim to simplistic numerical metrics-based systems.
Current trends in Quality Engineering & Testing (pages 10-53)
More than half of the report is focused on current trends, again around the pillars discussed in the previous sections. Some of the most revealing content is to be found in this part of the report. I’ll break down my analysis into the same sections as the report.
Quality Orchestration in Agile Enterprises
I’m still not sure what “Quality Orchestration” actually is and fluff such as this doesn’t really help:
Quality orchestration in Agile enterprises continues to see an upward trend. Its adoption in Agile and DevOps has seen an evolution in terms of team composition and skillset of quality engineers.
The first chart in this section is pretty uninspiring, suggesting that only around half of the respondents are getting 20%+ improvements in “better quality” and “faster releases” as a result of adopting “Agile/DevOps” (which are frustratingly again treated together as though they’re one thing, the same mistake as in the last report).
The next section uses a subset of the full sample (750 of the 1,750, though it’s not explained why) and an interesting statistic here is that 62% of respondents said “testing is carried out by business SMEs as opposed to quality engineers” either “always” or “often”. This seems to directly contradict the report’s premise of a strong movement towards QE.
For the results of the question “How important are the following QA skills when executing a successful Agile development program?”, the legend and the chart are not consistent (the legend suggesting “very important” response only, the chart including both “very important” and “extremely important”) and, disappointingly, none of the answers have anything to do with more human testing skills.
The next question is “What proportion of your teams are professional quality engineers?” and the chart of the results is a case in point of how badly the visuals have been designed throughout this report. It’s an indication that the visualizations are hard to comprehend when they need text to try to explain what they’re showing:
Using different chart styles for each chart isn’t helpful and it makes the report look inconsistent and unprofessional. This data again doesn’t suggest a significant shift to a “QE first” approach in most organizations.
The closing six recommendations (page 16) are not revolutionary and I question the connection that’s being made here between code quality and product quality (and also the supposed cost reduction):
Grow end-to-end test automation and increase levels of test automation across CI/CD processes, with automated continuous testing, to drive better code quality. This will enable improved product quality while reducing the cost of quality.
Quality Automation
The Introduction acknowledges a problem I’ve seen throughout my career and, if anything, it’s getting worse over time:
Teams prioritize selecting the test automation tools but forget to define a proper test automation plan and strategy.
They also say that:
All organizations need a proper level of test automation today as Agile approaches are pushing the speed of development up. Testing, therefore, needs to be done faster, but it should not lose any of its rigor. To put it simply, too much manual testing will not keep up with development.
This notion of “manual” testing failing to keep up with the pace of development is common, but suggests to me that (a) the purpose of human testing is not well understood and (b) many teams continue to labour under the misapprehension that they can work at an unsustainable pace without sacrificing quality.
In answering the question “What are the top three most important factors in determining your test automation approach?”, only 26% said that “Automation ROI, value realization” was one of the top 3 most important factors (while, curiously, “maintainability” came out top with 46%). Prioritizing maintainability over an ability to realize value from the automation effort seems strange to me.
Turning to benefits, all eight possible answers to the question “What proportion (if any) of your team currently achieves the following benefits from test automation?” were suspiciously close to 50%, so perhaps the intent of the question was not understood and resulted in flip-of-the-coin responses. (For reference, the benefits in response to this question were “Continuous integration and delivery”, “Reduce test team size”, “Increase test coverage”, “Better quality/fewer defects”, “Reliability of systems”, “Cost control”, “Allowing faster release cycle” and “Autonomous and self-adaptive solutions”.) I don’t understand why “Reduce test team size” would be seen as a benefit and this reflects the ongoing naivety about what automation can and can’t realistically achieve. The low level of benefits reported across the board leads the authors to note:
…it does seem that communications about what can and cannot be done are still not managed as well as they could be, especially when looking to justify the return on investment. The temptation to call out the percentage of manual tests as automated sets teams on a path to automate more than they should, without seeing if the manual tests are good cases for automation and would bring value.
and
We have been researching the test automation topic for many years, and it is disappointing that organizations still struggle to make test automation work.
Turning to recommendations in this area, it’s good to see this:
Focus on what delivers the best benefits to customers and the business rather than justifying ROI.
It’s also interesting that they circle back to the sustainability piece, especially as automated tests are often run across large numbers of physical/virtual machines and for multiple configurations:
A final thought: sustainability is a growing and important trend – not just in IT, but across everything. We need to start thinking now about how automation can show its benefit and cost to the world. Do you know what the carbon footprint of your automation test is? How long will it be before you have to be able to report on that for your organization? Now’s the time to start thinking about how and what so you are ready when that question is asked.
Quality Infrastructure Testing and Provisioning
This section of the report is very focused on adoption of cloud environments for testing. In answer to “What proportion of non-production environments are provisioned on the cloud?”, they claim that:
49% of organizations have more than 50% of their non-production environments on cloud. This cloud adoption of non-production environments is showing a positive trend, compared to last year’s survey, when only an average of 23% of testing was done in a cloud environment
The accompanying chart does not support this conclusion, showing 39% of respondents having 26-50% of their non-production environments in the cloud and just 10% having 51-75% there. They also conflate “non-production environment” with “testing done in a cloud environment” when comparing this data with the previous report, when in reality there could be many non-testing non-production environments inflating this number.
They go on to look at the mix of on-premise and cloud environments and whether single vendor or multiple vendor clouds are in use.
In answer to “Does your organization include cloud and infrastructure testing as part of the development lifecycle?”, the data looked like this:
The authors interpreted this data to conclude that “It emerged that around 96% of all the respondents mention that cloud testing is now included as part of the testing lifecycle” – but where does 96% come from? The question is a little odd and the responses even more so; the first answer, for example, suggests that for projects where applications are hosted on the cloud, only 3% of respondents mandate testing in the cloud – doesn’t that seem strange?
The recommendations in this section were unremarkable. I found the categorization of the content in this part of the report (and the associated questions) quite confusing and can’t help but wonder if participants in the survey really understood the distinctions trying to be drawn out here.
Test Data Provisioning and Data Validation
Looking at where test data is located, we see the following data (from a subset of just 580 of the 1,750 total responses; the reason is again not provided):
I’m not sure what to make of this data, especially as the responses are not valid answers to the question!
The following example just shows how leading some of the questions posed in the survey really are. Asking a high-level question like this to the senior types involved in the survey is guaranteed to produce a close to 100% affirmative response:
Equally unsurprising are the results of the next questions around data validation, where organizations reveal how much trouble they have actually doing it.
The recommendations in this section were again unremarkable, none really requiring the results of an expensive survey to come up with.
Quality and Sustainable IT
The sustainability theme is new to this year’s report, although the authors refer to it as though everyone knows what “sustainability” means from an IT perspective and as though it’s been front of mind in the industry for some time (which I don’t believe to be the case). They say:
Sustainable quality engineering is quality engineering that helps achieve sustainable IT. A higher quality ensures less wastage of resources and increased efficiencies. This has always been a keystone focus of quality as a discipline. From a broader perspective, any organization focusing on sustainable practices while running its business cannot do so without a strong focus on quality. “Shifting quality left” is not a new concept, and it is the only sustainable way to increase efficiencies. Simply put, there is no sustainability without quality!
Getting “shift left” into this discussion about sustainability is drawing a pretty long bow in my opinion. And it’s not the only one – consider this:
Only 72% of organizations think that quality could contribute to the environmental aspect of sustainable IT. If organizations want to be environmentally sustainable, they need to learn to use available resources optimally. A stronger strategic focus on quality is the way to achieve that.
We should be mindful when we see definitive claims, such as “the way” – there are clearly many different factors involved in achieving environmental sustainability of an organization and a focus on quality is just one of them.
I think the results of this question about the benefits of sustainable IT say it all:
It would have been nice to see the environmental benefits topping this data, but it’s more about the organization being seen to be socially responsible than it is about actually being sustainable.
When it comes to testing, the survey explicitly asked whether “sustainability attributes” were being covered:
I’m again suspicious of these results. Firstly, it’s another of the questions only asked of a subset of the 1750 participants (and it’s not explained why). Secondly, the results are all very close to 50% so might simply indicate a flip of the coin type response, especially to such a nebulous question. The idea that even 50% of organizations are deliberately targeting testing on these attributes (especially the efficiency attributes) doesn’t seem credible to me.
One of the recommendations in this section is again around “shift left”:
Bring true “shift left” to the application lifecycle to increase resource utilization and drive carbon footprint reduction.
While the topic of sustainability in IT is certainly interesting to me, I’m not seeing a big focus on it in everyday projects. Some of the claims in the report are hard to believe, but I acknowledge that my lack of exposure to IT projects in such big organizations may mean I’ve missed this particular boat already setting sail.
Quality Engineering for Emerging Technologies
This section of the report focuses on emerging technologies and impacts on QE and testing. The authors kick off with this data:
This data again comes from a subset of the participants (1000 out of 1750) and I would have expected the “bars” for Blockchain and Web 3.0 to be the same length if the values are the same. The report notes that “…Web 3.0 is still being defined and there isn’t a universally accepted definition of what it means” so it seems odd that it’s such a high priority.
I note that, in answer to “Which of the following are the greatest benefits of new emerging technologies improving quality outcomes?”, 59% chose “More velocity without compromising quality”, so the age-old desire to go faster while maintaining or improving quality persists!
The report doesn’t make any recommendations in this area, choosing instead to ask pretty open-ended questions. I’m not clear what value this section added; it feels like crystal-ball gazing (and, indeed, the last part of this section is headed “Looking into the crystal ball”!).
Value Stream Management
The opening gambit of this section of the report reads:
One of the expectations of the quality and test function is to assure and ensure that the software development process delivers the expected value to the business and end-users. However, in practice, many teams and organizations struggle to make the value outcomes visible and manageable.
Is this your expectation of testing? Or your organization’s expectation? I’m not familiar with such an expectation being set against testing, but acknowledge that there are organizations that perhaps think this way.
The first chart in this section just makes me sad:
I find it staggering that only 35% of respondents feel that detecting defects before going live is even in their top three objectives from testing. The authors had an interesting take on that, saying “Finding faults is not seen as a priority for most of the organizations we interviewed, which indicates that this is becoming a standard expectation”, mmm.
The rest of this section focused more on value and, in particular, the lean process of “value stream mapping”. An astonishing 69% of respondents said they use this approach “almost every time” when improving the testing process in Agile/DevOps projects – this high percentage doesn’t resonate with my experience, but again it may be that larger organizations have taken value stream mapping on board without me noticing (or publicizing their love of it more broadly so that I do notice).
Sector analysis (pages 54-71)
I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors (almost identical to last year’s) and discuss particular trends and challenges within each. The sectors are:
Automotive
Consumer products, retail and distribution
Energy, utilities, natural resources and chemicals
Financial services
Healthcare and life sciences
Manufacturing
Public sector
Technology, media and telecoms
Four metrics are given in summary for each sector, viz. the percentage of:
Agile teams that have professional quality engineers integrated
Teams that achieved better reliability of systems through test automation
Agile teams that have test automation implemented
Teams that achieved faster release times through test automation
It’s interesting to note that, for each of these metrics, almost all the sectors reported around the 50% mark, with financial services creeping a little higher. These results seem quite weak and it’s remarkable that, after so long and so much investment, only about half of Agile teams report that they’ve implemented test automation.
Geography-specific reports
The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find it particularly revealing, though this comment stood out (emphasis is mine):
We see other changes, specifically to quality engineering. In recent years, QE has been decentralizing. Quality practices were merging into teams, and centers of excellence were being dismantled. Now, organizations are recognizing that centralized command and control have their benefits and, while they aren’t completely retracing their steps, they are trying to find a balance that gives them more visibility and greater governance of quality assurance (QA) in practice across the software development lifecycle.
As another year draws to a close, I’ll take the opportunity to review my 2021.
I published 14 blog posts during the year, just about meeting my personal target cadence of a post every month. I wrapped up my ten-part series answering common search engine questions about testing and covered several different topics during my blogging through the year. My blog attracted about 25% more views than in 2020, somewhat surprisingly, and I continue to be really grateful for the amplification of my blog posts via their regular inclusion in lists such as 5Blogs, Testing Curator’s Testing Bits and Software Testing Weekly.
December 2021 was the biggest month for my blog by far this year, with a similar number of views to my all-time high back in November 2020 – interestingly, I published a critique of an industry report in December and published similar critiques in November 2020, so clearly these types of posts are popular (even if they can be somewhat demoralizing to write)!
I closed out the year with about 1,200 followers on Twitter, again up around 10% over the year.
Conferences and meetups
2021 was my quietest year for perhaps fifteen years in terms of conferences and meetups, mainly due to the ongoing impacts of the COVID-19 pandemic around the world.
I was pleased to announce mid-2021 that I would be speaking at the in-person Testing Talks 2021 (The Reunion) conference in Melbourne in October. Sadly, the continuing harsh response to the pandemic in this part of the world made an in-person event too difficult to hold, but hopefully I can keep that commitment for its rescheduled date in 2022.
I didn’t participate in any virtual or remote events during the entire year.
Consulting
After launching my testing consultancy, Dr Lee Consulting, towards the end of 2020, I noted in last year’s review post that “I’m confident that my approach, skills and experience will find a home with the right organisations in the months and years ahead.” This confidence turned out to be well founded and I’ve enjoyed working with my first clients during 2021.
Consulting is a very different gig to full-time permanent employment but it’s been great so far, offering me the opportunity to work in different domains with different types of organizations while also allowing me the freedom to enjoy a more relaxed lifestyle. I’m grateful to those who have put their faith (and dollars!) in me during 2021 as I begin my consulting journey and I’m looking forward to helping more organizations to improve their testing and quality practices during 2022.
Testing books
After publishing my first testing book in October 2020, in the shape of An Exploration of Testers, it’s been pleasing to see a steady stream of sales through 2021. I made my first donation of proceeds to the Association for Software Testing (AST) from sales of the book and another donation will follow early in 2022. I also formalized an arrangement with the AST so that all future proceeds will be donated to them and all new & existing members will receive a free copy of the book. (I’m open to additional contributions to this book, so please contact me if you’re interested in telling your story via the answers to the questions posed in the book!)
I started work on another book project in 2021, also through the AST. Navigating the World as a Context-Driven Tester provides responses to common questions and statements about testing from a context-driven perspective, with its content being crowdsourced from the membership of the AST and the broader testing community. There are responses to six questions in the book so far and I’m adding another response every month (or so). The book is available for free from the AST’s GitHub.
Podcasting
It was fun to kick off a new podcasting venture with two good mates from the local testing industry, Paul Seaman and Toby Thompson. We’ve produced three episodes of The 3 Amigos of Testing podcast so far and aim to get back on the podcasting horse early in 2022 to continue our discussions around automation started back in August. The process of planning content for the podcast, discussing and dry-running it, and finally recording is an interesting one and kudos to Paul for driving the project and doing the heavy lifting around editing and publishing each episode.
Volunteering for the UK Vegan Society
I’ve continued to volunteer with the UK’s Vegan Society and, while I’ve worked on proofreading tasks again through the year, I’ve also started contributing to their web research efforts over the last six months or so.
It was exciting to be part of one of the Society’s most significant outputs of 2021, viz. the Planting Value in the Food System report. This 40,000-word report was a mammoth research project and my work in proofing it was also a big job! The resulting report and the website are high quality and show the credibility of The Vegan Society in producing well-researched reference materials in the vegan space.
Joining the web research volunteer group immediately gave me the opportunity to learn, being tasked with leading the research efforts around green websites and accessibility testing.
I found the green website research particularly engaging, as it was not an area I’d even considered before and the carbon footprint of websites – and how it can easily be reduced – doesn’t seem to (yet) be on the radar of most companies. The lengthy recommendations resulting from my research in this area will inform changes to the Vegan Society website over time and this work has inspired me to look into offering advice in this area to companies who may have overlooked this potentially significant contributor to their carbon footprint.
I also spent considerable time investigating website accessibility and tooling to help with development & testing in this area. While accessibility testing is something I was tangentially aware of in my testing career, the opportunity to deep dive into it was great and, again, my recommendations will be implemented over time to improve the accessibility of the society’s own website.
I continue to enjoy working with The Vegan Society, increasing my contribution to and engagement with the vegan community worldwide. The passion and commitment of the many volunteers I interact with is invigorating. I see it as my form of vegan activism and a way to utilize my existing skills in research and the IT industry as well as gaining valuable new skills and knowledge along the way.
Status Quo projects
I was honoured to be asked to write a lengthy article for the Status Quo official fan club magazine, FTMO, following the sad passing of the band’s original bass player, Alan Lancaster, in September. Alan spent much of his life here in Australia, having migrated to Sydney in 1978, and he was very active in the music industry in this country following his departure from Quo in the mid-1980s. It was a labour of love putting together a 5000-word article and selecting interesting photos to accompany it from my large collection of Quo scrapbooks.
I spent time during 2021 on a new Quo project too, also based around my scrapbook collection. This project should go live in 2022 and has been an interesting learning exercise, not just in terms of website development but also photography. Returning to coding after a 20+ year hiatus has been a challenge but I’m reasonably happy with the simple website I’ve put together using HTML, CSS, JavaScript, PHP and a MySQL database. Gathering the equipment and skills to take great photos of scrapbook clippings has also been fun and it’s nice to get back into photography, a keen hobby of mine especially in my university days back in the UK.
In closing
As always, I’m grateful for the attention of my readers here and also followers on other platforms. I wish you all a Happy New Year and I hope you enjoy my posts and other contributions to the testing community to come through 2022!
Almost unbelievably, it’s now been a year since I left my long stint at Quest Software. It’s been a very different year for me than any of the previous 25-or-so spent in full-time employment in the IT industry. The continuing impact of COVID-19 on day-to-day life in my part of the world has also made for an unusual 12 months in many ways.
While I haven’t missed working at Quest as much as I expected, I’ve missed the people I had the chance to work with for so long in Melbourne and I’ve also missed my opportunities to spend time with the teams in China that I’d built up such a strong relationship with over the last few years (and who, sadly, have all since departed Quest too, as their operations there were closed down this year).
Starting to work with my first clients in a consulting capacity is an interesting experience with a lot of learning opportunities. I plan to blog on some of my lessons learned from these early engagements later in the year.
Another fun and testing-related project kicked off in May, working with my good friends from the industry, Paul Seaman and Toby Thompson, to start The 3 Amigos of Testing podcast. We’ve always caught up regularly to chat about testing and life in general over a cold one or two, and this new podcast has given us plenty of opportunities to talk testing again, albeit virtually. A new episode of this podcast should drop very soon after this blog post.
On more personal notes, I’ve certainly been finding more time for myself since ending full-time employment. There are some non-negotiables, such as daily one-hour (or more) walks and meditation practice, and I’ve also been prioritizing bike riding and yoga practice. I’ve been reading a lot too – more than a book a week – on a wide variety of different topics. These valuable times away from technology are foundational in helping me to live with much more ease than in the past.
I’ve continued to do volunteer work with The Vegan Society (UK). I started off performing proofreading tasks and have also now joined their web volunteers’ team where I’ve been leading research projects on how to reduce the carbon footprint of the Society’s website and also to improve its accessibility. These web research projects have given me the welcome opportunity to learn about areas that I was not very familiar with before, the “green website” work being particularly interesting and it has inspired me to pursue other opportunities in this area (watch this space!). A massive proofreading task led to the recent publication of the awesome Planting Value in the Food System reports, with some deep research and great ideas for transitioning UK farming away from animal-based agriculture.
Looking to the rest of 2021, the only firm commitment I have in the testing space – outside of consulting work – is an in-person conference talk at Testing Talks 2021 in Melbourne. I’ll be continuing with my considerable volunteering commitment with the Vegan Society and I have a big Status Quo project in the works too! With little to no prospect of long-distance travel in Australia or overseas in this timeframe, we will enjoy short breaks locally between lockdowns and also press on with various renovation projects on our little beach house.
(Given the title of this blog, I can’t waste this opportunity to include a link to one of my favourite Status Quo songs, “A Year” – this powerful ballad morphs into a heavier piece towards the end, providing some light amongst the heaviness of its parent album, “Piledriver”. Enjoy!)
This is the final part of a ten-part blog series in which I’ve answered some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
In this last post, I ponder the open question of “What will software testing look like in 2021?” (note: updated the year from 2020 in my original dataset from Answer The Public to 2021).
The reality for most people involved in the software testing business is that testing will look pretty much the same in 2021 as it did in 2020 – and probably as it did for many of the years before that too. Incremental improvements take time in organisations and the scope & impact of such changes will vary wildly between different organisations and even within different parts of the same organisation.
I fully expect 2021 to yield a number of reports about trends in software testing and quality, akin to Capgemini’s annual World Quality Report (which I critiqued again last year). There will probably be a lot of noise around the application of AI and machine learning to testing, especially from tool vendors and the big consultancies.
I feel certain that automation (especially of the “codeless” variety) will continue to be one of the main threads around testing with companies continuing to recruit on the basis of “automated testing” prowess over exploratory testing skills.
I think a small but dedicated community of people genuinely interested in advancing the craft of software testing will continue to publish their ideas and look to inject some reality into the various places that testing gets discussed online.
My daily meditation practice has applications here too. In the same way that the practice helps me to recognise when thoughts are happening without getting caught up in their storyline, I think you should make an effort to observe the inevitable commentary on trends in the testing industry through 2021 without going out of your way to follow them. These trends are likely to change again next year and expending effort trying to keep “on trend” is likely effort better spent elsewhere. Instead, I would recommend focusing on the fundamentals of good software testing, while continuing to demonstrate the value of good testing and advancing the practice as best you can in the context of your organisation.
I would also encourage you to make 2021 the year that you tell your testing stories for the benefit of the wider community – your stories are unique, valuable and a great way for others to learn what’s really going on in our industry. There are many avenues to share your first-person experiences – blog about them, share them as LinkedIn articles, talk about them at meetups or present them at a conference (many of which seem destined to remain as virtual events through 2021, which I see as a positive in terms of widening the opportunity for more diverse stories to be heard).
I’ve provided the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
I’m grateful to Paul Seaman and Ky who acted as reviewers for every part of this blog series; I couldn’t have completed the series without their help, guidance and encouragement along the way, thank you!
Thanks also to all those who’ve amplified the posts in this series via their blogs, lists and social media posts – it’s been much appreciated. And, last but not least, thanks to Terry Rice for the underlying idea for the content of this series.
This is the penultimate part of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
In this post, I answer the question “Which software testing certification is the best?“.
There has been much controversy around certification in our industry for a very long time. The certification market is dominated by the International Software Testing Qualifications Board (ISTQB), which they describe as “the world’s most successful scheme for certifying software testers”. The scheme arose out of the British Computer Society’s ISEB testing certification in the late 1990s and has grown to become the de facto testing certification scheme. With a million-or-so exams administered and 700,000+ certifications issued, the scheme has certainly been successful in dishing out certifications across its ever-increasing range of offerings (broadly grouped into Agile, Core and Specialist areas).
In the interests of disclosure, I am Foundation certified by the ANZTB and I encouraged all of the testers at Quest in the early-to-mid 2000s to get certified too. At the time, it felt to me like this was the only certification that gave a stamp of professionalism to testers. After attending Rapid Software Testing with Michael Bolton in 2007, I soon realised the errors in my thinking – and then put many of those same testers through RST with James Bach a few years later!
Although the ISTQB scheme has issued many certifications, the value of these certifications is less clear. The lower-level certifications, particularly Foundation, are very easy to obtain and require little to no practical knowledge or experience in software testing. It’s been disappointing to witness how this simple certification became a de facto pre-requisite for hiring testers all over the world. The requirement to be ISTQB-certified doesn’t seem to crop up very often on job ads in the Australian market now, though, so maybe its perceived value is falling over time.
If your desire is to become an excellent tester, then I would encourage you to adopt some of the approaches to learning outlined in the previous post in this series. Following a path of serious self-learning about the craft (and maybe challenging yourself with one of the more credible training courses such as BBST or RST) is likely to provide you with much more value in the long-term than ticking the ISTQB certification box. If you’re concerned about your resume “making the cut” when applying for jobs without having ISTQB certification, consider taking Michael Bolton’s advice in No Certification, No Problem!
Coming back to the original question: imagine what the best software testing certification might be if you happen to be a for-profit training provider for ISTQB certifications. Then think about what the best software testing certification might be if you’re a tester with a few years of experience in the industry looking to take your skills to the next level. I don’t think it makes sense to ask which (of anything) is the “best” as there are so many context-specific factors to consider.
The de facto standard for certification in our industry, viz. ISTQB, is not a requirement for you to become an excellent and credible software tester, in my opinion.
If you’re interested in a much fuller treatment of the issues with testing certifications, I think James Bach has covered all the major arguments in his blog post, Against Certification. Ilari Henrik Aegerter’s short Super Single Slide Sessions #6 – On Certifications video is also worth a look and, for some light relief around this controversial topic, see the IQSTD website!
You can find the first eight parts of this blog series at:
I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
Thanks again to my review team (Paul Seaman and Ky) for their helpful feedback on this post, their considerable effort and input as this series comes towards an end has been instrumental in producing posts that I’m proud of.
This is the eighth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
In this post, I answer the question “Can I learn software testing on my own?” (and the related questions, “Can I learn software testing online?” and “Can anybody learn software testing?”).
The skills needed to be an excellent tester can be learned. How you choose to undertake that learning is a personal choice, but there’s really no need to tackle this substantial task as a solo effort – and I would strongly encourage you not to go it alone. The testing community is strong and, in my experience, exceptionally willing to help people on their journey to becoming better testers so utilizing this vast resource should be part of your strategy. There is so much great content online for free and engaging with great testers is straightforward via, most notably in my opinion, Twitter and LinkedIn.
While it’s great to learn the various techniques and approaches to testing, it’s also worth looking more broadly into fields such as psychology and sociology. Becoming an excellent tester requires more than just great testing and technical skills so broadening your learning should be helpful. While I don’t recommend most of the testing books from “experts”, I’ve made a few recommendations in the Resources section of my consultancy website (and you can find a bunch of blogs, articles, etc. as starting points for further reading there too).
The next part of this blog series will cover the topic of certifications, so I won’t discuss this in depth here – but I don’t believe it’s necessary to undertake the most common certifications in our industry, viz. those offered by the ISTQB. The only formal courses around testing that I choose to recommend are Rapid Software Testing (which I’ve personally attended twice, with Michael Bolton and then James Bach) and the great value Black Box Software Testing courses from the Association for Software Testing.
You can certainly learn the skills required to be an excellent tester and there’s simply no need to go it alone in doing so. There is no need to attend expensive training courses or go through certification schemes on your way to becoming excellent, but you will need persistence, a growth mindset and a keen interest in continuous learning. I recommend leveraging the large, strong and helpful testing community in your journey of learning the craft – engaging with this community has helped me tremendously over many years and I try to give back to it in whatever ways I can, hopefully inspiring and helping more people to experience the awesomeness of the craft of software testing.
You might find the following blog posts useful too in terms of guiding your learning process:
I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.
This is the seventh of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
In this post, I answer the question “Is software testing a good career?” (and the related questions, “How is software testing a career?” and “Why choose software testing as a career?”).
Reflecting first on my own experience, software testing ended up being an awesome career. I didn’t set out to become a career software tester, though. After a few years as a developer in the UK, I moved to Australia and started looking for work in the IT industry. Within a couple of weeks of arriving in the country, I landed an interview at Quest Software (then in the Eastern suburbs of Melbourne) for a technical writer position. After interviewing for that position, they mentioned that the “QA Manager” was also looking for people and asked whether I’d be interested in chatting with her also. Long story short, I didn’t land the technical writing job but was offered a “Senior Tester” position – and I accepted it without hesitation! I was simply happy to have secured my first job in a new country, never intending it to be a long-term proposition with Quest or the start of a new career in the field of software testing. As it turned out, I stayed with Quest for 21 years in testing/quality related roles from start to finish!
So, there was some luck involved in finding a good company to work for and a job that I found interesting. I’m not sure I’d have stayed in testing, though, had it not been for the revelation that was attending Rapid Software Testing with Michael Bolton in 2007 – that really gave me the motivation to treat software testing more seriously as a long-term career prospect and also marked the time, in my opinion, that I really started to add much more value to Quest as well. The company appreciated the value that good testers were adding to their development teams and I was fortunate to mentor, train, coach and work alongside some great testers, not only in Australia but all over the world. Looking back on my Quest journey, I think it was the clear demonstration of value from testing that led to more and more opportunities for me (and other testers), as predicted by Steve Martin when he said “be so good they can’t ignore you”!
The landscape has changed considerably in the testing industry over the last twenty years, of course. It has to be acknowledged that it’s becoming very difficult to secure testing roles in which you can expect to perform exploratory testing as the mainstay of your working day (especially so in higher-cost locations). I’ve rarely seen an advertisement for such a role in Australia in the last few years, with most employers now also demanding some “automated testing” skills as part of the job. Whether the post-employment reality is that nearly all testers are now performing a mix of testing (be it scripted, exploratory or a combination of both) and automation development, I’m not so sure. If you want to become an excellent (exploratory) tester without having some coding knowledge or experience, I think there are still some limited opportunities out there, but finding them will most likely require being in the network of people in similar positions at companies that understand the value this kind of testing can bring.
Making the effort to learn some coding skills is likely to be beneficial in terms of getting your resume over the line. I’d recommend not worrying too much about which language(s)/framework(s) you choose to learn, but rather focusing on the fundamentals of good programming. I would also suggest building an understanding of the “why” and “what” in terms of automation (over the “how”, i.e. which language and framework to leverage in a particular context) as this understanding will allow you to quickly add value and not be so vulnerable to the inevitable changes in language and framework preferences over time.
I think customers of the software we build expect that the software has undergone some critical evaluation by humans before they acquire it, so it both intrigues and concerns me that so many big tech companies publicly express their lack of “testers” as some kind of badge of honour. I simply don’t understand why this is seen as a good thing, and it seems likely to me that the trend will come full (or full-ish) circle at some point, when the downsides of removing testing specialists from the development, release and deployment process outweigh the perceived benefits (not that I’m sure what those benefits are, apart from reduced headcount and cost).
I still believe that software testing is a good career choice. It can be intellectually challenging, varied and stimulating in the right organisation. It’s certainly not getting any easier to secure roles in which you’ll spend all of your time performing exploratory testing, though, so broadening your arsenal to include some coding skills and building a good understanding of why and what makes sense to automate are likely to help you along the way to gaining meaningful employment in this industry.
You can find the first six parts of this blog series at:
I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
Thanks again to my ever-reliable review team (Paul Seaman and Ky) for their helpful feedback on this post.
This is the sixth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
In this post, I answer the question “Is software testing easy?” (and the related question, “Why is software testing so hard?”).
There exists a perception that “anyone can test” and, since testing is really just “playing with the software”, it’s therefore easy. By contrast, it seems that programming is universally viewed as being difficult. This reasoning leads people to believe that a good place to start their career in IT is as a tester, with a view to moving “up” to the more hallowed ranks of developers.
My experience suggests that many people have no issue with telling testers how to do their job, in a way those same people wouldn’t dream of doing to developers. This generally seems to be based on some past experience from their career when they considered themselves a tester, even if that experience is now significantly outdated and they never engaged in any serious work to advance themselves as testers. Such interactions are a red flag that many in the IT industry view testing as the easy part of the software development process.
The perception that testing is easy is also not helped by the existence and prevalence of the simple and easy-to-achieve ISTQB Foundation certification. The certification has been achieved by several hundred thousand people worldwide (the ISTQB had issued 721,000+ certifications as of May 2020, with the vast majority of those likely to be at Foundation level), so it’s clearly not difficult to obtain (even without study) and has flooded the market with “testers” who have little but this certification behind them.
Thanks to Michael Bolton (via this very recent tweet) for identifying another reason why this perception exists. “Testing” is often conflated with “finding bugs” and we all know how easy it is to find bugs in the software we use every day:
There’s a reason that many people think testing is easy, due to an asymmetry. No one ever fired up a computer and stumbled into creating a slick UI or a sophisticated algorithm, but people stumble into bugs every day. Finding bugs is easy, they think. So testing must be easy.
Another unfortunate side effect of the idea that testing is easy is that testers are viewed as fungible, i.e. any tester can simply be replaced by another one since there’s not much skill required to perform the role. The move to outsource testing capability to lower-cost locations then becomes an attractive proposition. I’m not going to discuss test outsourcing and offshoring in any depth here, but I’ve seen a lot of great, high-value testers around the world lose their jobs to offshoring based on this misplaced notion of the fungibility of testing resources.
Enough about the obvious downsides of mistakenly viewing testing as easy! I don’t believe good software testing is at all easy and hopefully my reasons for saying this will help you to counter any claims that testing (at least, testing as I talk about it) is easy work and can be performed equally well by anyone.
As good testers, we are tasked with evaluating a product by learning about it through exploration, experimentation, observation and inference. This requires us to adopt a curious, imaginative and critical thinking mindset, while we constantly make decisions about what’s interesting to investigate further and evaluate the opportunity cost of doing so. We look for inconsistencies by referring to descriptions of the product, claims about it and within the product itself. These are not easy things to do.
We study the product and build models of it to help us make conjectures and design useful experiments. We perform risk analysis, taking into account many different factors to generate a wealth of test ideas. This modelling and risk analysis work is far from easy.
We ask questions and provide information to help our stakeholders understand the product we’ve built so that they can decide if it’s the product they wanted. We identify important problems and inform our stakeholders about them – and this is information they sometimes don’t want to hear. Revealing problems (or what might be problems) in an environment generally focused on proving we built the right thing is not easy and requires emotional intelligence & great communication skills.
We choose, configure and use tools to help us with our work and to question the product in ways we’re incapable of (or inept at) as humans without the assistance of tools. We might also write some code (e.g. code developed specifically for the purpose of exercising other code or implementing algorithmic decision rules against specific observations of the product, “checks”), as well as working closely with developers to help them improve their own test code. Using tooling and test code appropriately is not easy.
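To make the idea of a “check” a little more concrete, here’s a minimal sketch in Python. The apply_discount function and its expected behaviour are entirely hypothetical, invented purely for illustration; the point is that the machine only executes the decision rule, while deciding what was worth checking and interpreting the outcome remain human testing work.

```python
def apply_discount(price: float, percentage: float) -> float:
    """Hypothetical product code under test."""
    return round(price * (1 - percentage / 100), 2)


def check_ten_percent_discount() -> bool:
    """Algorithmic decision rule: does a 10% discount on 200.00 yield 180.00?"""
    observed = apply_discount(200.00, 10)
    expected = 180.00
    return observed == expected


if __name__ == "__main__":
    # The machine executes the rule and reports a bit; deciding that this check
    # was worth having, and making sense of a failure, remain human work.
    print("PASS" if check_ten_percent_discount() else "FAIL")
```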
(You might want to check out Michael Bolton’s Testing Rap, from which some of the above was inspired, as a fun way to remind people about all the awesome things human testers actually do!)
This heady mix of aspects of art, science, sociology, psychology and more – requiring skills in technology, communication, experiment design, modelling, risk analysis, tooling and more – makes it clear to me why good software testing is hard to do.
In wrapping up, I don’t believe that good software testing is easy. Good testing is challenging to do well, in part due to the broad reach of subject areas it touches on and also the range of different skills required – but this is actually good news. The challenging nature of testing enables a varied and intellectually stimulating job and the skills to do it well can be learned.
It’s not easy, but most worthwhile things in life aren’t!
You can find the first five parts of this blog series at:
I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
Thanks again to my dedicated review team (Paul Seaman and Ky) for their helpful feedback on this post. Paul’s blog, Not Everybody Can Test, is worth a read in relation to the subject matter of this post.
This is the fifth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).
As I reach the halfway point in this series, I come to the question “Can you automate software testing?” (and the related question, “How can software test automation be done?”).
If you spend any time on Twitter and LinkedIn following threads around testing, this question of whether testing can be automated crops up with monotonous regularity and often seems to result in very heated discussion, with strong opinions from both the “yes” and “no” camps.
As a reminder (from part one of this blog series), my preferred definition of testing comes from Michael Bolton and James Bach, viz.
Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.
Looking at this definition, testing is clearly a deeply human activity, since skills such as learning, exploring, questioning and inferring are not generally well modelled by machines (even with AI/ML). Humans may or may not be assisted by tools or automated means while exercising these skills, but that doesn’t mean that the performance of testing is itself “automated”.
The distinction drawn between “testing” and “checking” made by James Bach and Michael Bolton has been incredibly helpful for me when talking about automation and countering the idea that testing can be automated (much more so than “validation” and “verification” in my experience). As a refresher, their definition of checking is:
Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
As Michael says, “We might choose to automate the exercise of some functions in a program, and then automatically compare the output of that program with some value obtained by another program or process. I’d call that a check.” Checking is a valuable component of our overall testing effort and, by this definition, lends itself to be automated. But the binary evaluations resulting from the execution of such checks form only a small part of the testing story and there are many aspects of product quality that are not amenable to such black and white evaluation.
Thinking about checks, there’s a lot that goes into them apart from the actual execution (by a machine or otherwise): someone decided we needed a check (risk analysis), someone designed the check, someone implemented the check (coding), someone decided what to observe and how to observe it, and someone evaluated the results from executing the check. These aspects of the check are testing activities and, importantly, they’re not the aspects that can be given over to a machine, i.e. be automated. There is significant testing skill required in the design, implementation and analysis of the check and its results; the execution (the automated bit) is really the easy part.
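A small sketch may help to show where the human work sits around an automated check. The shipping_cost function and the business rule it encodes are hypothetical, made up for this example; the comments mark which parts are human testing activities and which part the machine actually performs.

```python
def shipping_cost(weight_kg: float) -> float:
    """Hypothetical product code: flat rate up to 5 kg, then 2.00 per extra kg."""
    return 10.0 if weight_kg <= 5 else 10.0 + (weight_kg - 5) * 2.0


def test_shipping_cost_at_the_boundary():
    # Risk analysis (human): the 5 kg boundary is where mistakes are most likely.
    # Check design (human): choosing these inputs and expected values.
    # Execution (machine): the only part that is actually automated.
    assert shipping_cost(5.0) == 10.0
    assert shipping_cost(6.0) == 12.0
    # Evaluating a failure (product bug? bad expectation? misunderstood rule?)
    # is human work again.
```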
A machine producing a bit is not doing the testing; the machine, by performing checks, is accelerating and extending our capacity to perform some action that happens as part of the testing that we humans do. The machinery is invaluable, but it’s important not to be dazzled by it. Instead, pay attention to the purpose that it helps us to fulfill, and to developing the skills required to use tools wisely and effectively.
We also need to be mindful to not conflate automation in testing with “automated checking”. There are many other ways that automation can help us, extending human abilities and enabling testing that humans cannot practically perform. Some examples of applications of automation include test data generation, test environment creation & configuration, software installation & configuration, monitoring & logging, simulating large user loads, repeating actions en masse, etc.
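As one illustration of automation in testing that isn’t checking, here’s a hypothetical sketch of bulk test data generation: producing ten thousand customer records, something no human would reasonably type by hand. The record shape is invented for the example and not taken from any particular product.

```python
import csv
import random
import string


def random_customer(customer_id: int) -> dict:
    """Build one made-up customer record."""
    name = "".join(random.choices(string.ascii_letters, k=8))
    return {"id": customer_id,
            "name": name,
            "age": random.randint(18, 95),
            "country": random.choice(["AU", "NZ", "UK", "US"])}


if __name__ == "__main__":
    with open("customers.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "age", "country"])
        writer.writeheader()
        writer.writerows(random_customer(i) for i in range(10_000))
```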
If we make the mistake of allowing ourselves to believe that “automated testing” exists, then we can all too easily fall into the trap of narrowing our thinking about testing to just automated checking, with a resulting focus on the development and execution of more and more automated checks. I’ve seen this problem many times across different teams in different geographies, especially so in terms of regression testing.
I think we are well served to eliminate “automated testing” from our vocabulary, instead talking about “automation in testing” and the valuable role automation can play in both testing & checking. The continued propaganda around “automated testing” as a thing, though, makes this job much harder than it sounds. You don’t have to look too hard to find examples of test tool vendors using this term and making all sorts of bold claims about their “automated testing solutions”. It’s no wonder that so many testers remain confused in answering the question about whether testing can be automated when a quick Google search got me to some of these gems within the top few results: What is automated testing? (SmartBear), Automated software testing (Atlassian) and Test Automation vs. Automated Testing: The Difference Matters (Tricentis).
I’ve only really scratched the surface of this big topic in this post, but it should be obvious by now that I don’t believe you can automate software testing. There is often value to be gained by automating checks and leveraging automation to assist and extend humans in their testing efforts, but the real testing lies with the humans – and always will.
Some recommended reading related to this question:
The Testing and Checking Refined article by James Bach and Michael Bolton, in which the distinction between testing and checking is discussed in depth, as well as the difference between checks performed by humans and those by machines.
The Automation in Testing (AiT) site by Richard Bradshaw and Mark Winteringham; their six principles of AiT make a lot of sense to me.
I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.
Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.