Speaking at the Testing Talks 2021 (The Reunion) conference (28 October, Melbourne)

After almost two decades of very regularly attending testing conferences, the combined impacts of COVID-19 and finishing up my career at Quest have curtailed these experiences in more recent times. I’ve missed the in-person interaction with the testing community facilitated by such events, as I know many others have also.

The latter stages of 2020 saw me give three talks; firstly for the DDD Melbourne By Night meetup, then a two-minute talk for the “Community Strikes The Soapbox” part of EuroSTAR 2020 Online, and finally a contribution to the inaugural TestFlix conference. All of these were virtual events and at least gave me some presentation practice.

The opportunity to be part of an in-person conference in Melbourne was very appealing and, after chatting with Cameron Bradley, I committed to building a new talk in readiness for his Testing Talks 2021 Conference.

With the chance to develop a completely new talk, I riffed on a few ideas before settling on what seemed like a timely story for me to tell, namely what I’ve learned from twenty-odd years in the testing industry. I’ve titled the talk “Lessons Learned in Software Testing”, in a deliberate nod to the awesome book of the same name.

I’ve stuck with my usual routine in putting this new talk together, using a mindmap to help me come up with the structure and key messages before starting to cut a slide deck. It remains a challenge for me to focus more on the talk content than refining the slides at this stage, but I’m making a conscious effort to get the messaging down on rough slides before putting finishing touches to them later on.

It’s been interesting to look back over such a long career in the one industry, thinking about the trends that have come and gone, and realizing how much remains the same in terms of being a good tester adding value to projects. I’m looking forward to sharing some of the lessons I’ve learned along the way – some specifically around testing and some more general – in this new talk later in the year.

Fingers crossed (and COVID-permitting!), I’ll be taking the stage at the Melbourne Convention & Exhibition Centre on 28th October to deliver my talk to what I hope will be a packed house. Maybe you can join me? More details and tickets are available from the Testing Talks 2021 Conference website.

Is talking about “scaling” human testing missing the point?

I recently came across an article from Adam Piskorek about the way Google tests its software.

While I was already familiar with the book How Google Tests Software (by James Whittaker, Jason Arbon et al, 2012), Adam’s article introduced another newer book about how Google approaches software engineering more generally, Software Engineering at Google: Lessons Learned from Programming Over Time (by Titus Winters, Tom Manshreck & Hyrum Wright, 2020).

The following quote in Adam’s article is lifted from this newer book and made me want to dive deeper into the book’s broader content around testing*:

Attempting to assess product quality by asking humans to manually interact with every feature just doesn’t scale. When it comes to testing, there is one clear answer: automation.

Chapter 11 (Testing Overview), p210 (Adam Bender)

I was stunned by this quote from the book. It felt like they were saying that development simply goes too quickly for adequate testing to be performed and also that automation is seen as the silver bullet to moving as fast as they desire while maintaining quality, without those pesky slow humans interacting with the software they’re pushing out.

But, in the interests of fairness, I decided to study the four main chapters of the book devoted to testing to more fully understand how they arrived at the conclusion in this quote – Chapter 11, which offers an overview of the testing approach at Google; Chapter 12, devoted to unit testing; Chapter 13, on test doubles; and Chapter 14, on “Larger Testing”. The book is, perhaps unsurprisingly, available to read freely on Google Books.

I didn’t find anything too controversial in chapter 12, rather mostly sensible advice around unit testing. The following quote from this chapter is worth noting, though, as it highlights that “testing” generally means automated checks in their world view:

After preventing bugs, the most important purpose of a test is to improve engineers’ productivity. Compared to broader-scoped tests, unit tests have many properties that make them an excellent way to optimize productivity.

Chapter 13 on test doubles was similarly straightforward, covering the challenges of mocking and giving decent advice around when to opt for faking, stubbing and interaction testing as approaches in this area. Chapter 14 dealt with the challenges of authoring tests of greater scope and I again wasn’t too surprised by what I read there.

It is chapter 11 of this book, Testing Overview (written by Adam Bender), that contains the most interesting content in my opinion and the remainder of this blog post looks in detail at this chapter.

The author says:

since the early 2000s, the software industry’s approach to testing has evolved dramatically to cope with the size and complexity of modern software systems. Central to that evolution has been the practice of developer-driven, automated testing.

I agree that the general industry approach to testing has changed a great deal in the last twenty years. These changes have been driven in part by changes in technology and the ways in which software is delivered to users. They’ve also been driven to some extent by the desire to cut cost and it seems to me that focusing more on automation has been seen (misguidedly) as a way to reduce the overall cost of delivering software solutions. This focus has led to a reduction in the investment in humans to assess what we’re building and I think we all too often experience the results of that reduced level of investment.

Automated testing can prevent bugs from escaping into the wild and affecting your users. The later in the development cycle a bug is caught, the more expensive it is; exponentially so in many cases.

Given the perception of Google as a leader in IT, I was very surprised to see this nonsense about the cost of defects being regurgitated here. This idea is “almost entirely anecdotal” according to Laurent Bossavit in his excellent The Leprechauns of Software Engineering book and he has an entire chapter devoted to this particular mythology. I would imagine that fixing bugs in production for Google is actually inexpensive given the ease with which they can go from code change to delivery into the customer’s hands.

Much ink has been spilled about the subject of testing software, and for good reason: for such an important practice, doing it well still seems to be a mysterious craft to many.

I find the choice of words here particularly interesting, describing testing as “a mysterious craft”. While I think of software testing as a craft, I don’t think it’s mysterious although my experience suggests that it’s very difficult to perform well. I’m not sure whether the wording is a subtle dig at parts of the testing industry in which testing is discussed in terms of it being a craft (e.g. the context-driven testing community) or whether they are genuinely trying to clear up some of the perceived mystery by explaining in some detail how Google approaches testing in this book.

The ability for humans to manually validate every behavior in a system has been unable to keep pace with the explosion of features and platforms in most software. Imagine what it would take to manually test all of the functionality of Google Search, like finding flights, movie times, relevant images, and of course web search results… Even if you can determine how to solve that problem, you then need to multiply that workload by every language, country, and device Google Search must support, and don’t forget to check for things like accessibility and security. Attempting to assess product quality by asking humans to manually interact with every feature just doesn’t scale. When it comes to testing, there is one clear answer: automation.

(note: bold emphasis is mine)

We then come to the source of the quote that first piqued my interest. I find it interesting that they seem to be suggesting the need to “test everything” and using that as a justification for saying that using humans to interact with “everything” isn’t scalable. I’d have liked to see some acknowledgement here that the intent is not to attempt to test everything, but rather to make skilled, risk-based judgements about what’s important to test in a particular context for a particular mission (i.e. what are we trying to find out about the system?). The subset of the entire problem space that’s important to us is something we can potentially still ask humans to interact with in valuable ways. The “one clear answer” for testing being “automation” makes little sense to me, given the well-documented shortcomings of automated checks (some of which are acknowledged in this same book) and the different information we should be looking to gather from human interactions with the software compared to that from algorithmic automated checks.

Unlike the QA processes of yore, in which rooms of dedicated software testers pored over new versions of a system, exercising every possible behavior, the engineers who build systems today play an active and integral role in writing and running automated tests for their own code. Even in companies where QA is a prominent organization, developer-written tests are commonplace. At the speed and scale that today’s systems are being developed, the only way to keep up is by sharing the development of tests around the entire engineering staff.

Of course, writing tests is different from writing good tests. It can be quite difficult to train tens of thousands of engineers to write good tests. We will discuss what we have learned about writing good tests in the chapters that follow.

I think it’s great that developers are more involved in testing than they were in the days of yore. Well-written automated checks provide some safety around changing product code and help to prevent a skilled tester from wasting their time on known “broken” builds. But, again, the only discussion that follows in this particular book (as promised in the last sentence above) is about automation and not skilled human testing.

Fast, high-quality releases
With a healthy automated test suite, teams can release new versions of their application with confidence. Many projects at Google release a new version to production every day—even large projects with hundreds of engineers and thousands of code changes submitted every day. This would not be possible without automated testing.

The ability to get code changes to production safely and quickly is appealing, and having good automated checks in place can certainly help to increase the safety of doing so. “Confidence” is an interesting choice of word here (and one used frequently in this book), though – the Oxford dictionary definition of “confidence” is “a feeling or belief that one can have faith in or rely on someone or something”, so the “healthy automated test suite” referred to here appears to be one that these engineers trust enough to decide whether new code should go to production or not.

The other interesting point here is the perceived need to release new versions so frequently. While it makes sense to have deployment pipelines and systems in place that make releasing to production smooth and uneventful, the desire to push changes out to customers very frequently seems to have become an end in itself these days. For most testers in most organizations, there is probably no need or desire for such frequent production changes, so basing testing strategy on the perceived need for them could lead to goal displacement – and potentially take an important aspect of assessing those changes (viz. human testers) out of the picture altogether.

If test flakiness continues to grow, you will experience something much worse than lost productivity: a loss of confidence in the tests. It doesn’t take needing to investigate many flakes before a team loses trust in the test suite. After that happens, engineers will stop reacting to test failures, eliminating any value the test suite provided. Our experience suggests that as you approach 1% flakiness, the tests begin to lose value. At Google, our flaky rate hovers around 0.15%, which implies thousands of flakes every day. We fight hard to keep flakes in check, including actively investing engineering hours to fix them.

It’s good to see this acknowledgement of the issues around automated check stability and the propensity for unstable checks to lead to a collapse in trust in the entire suite. I’d be interested to know how they go about categorizing failing checks as “flaky” for inclusion in their overall 0.15% “flaky rate”; no doubt there’s some additional human effort involved there too.
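The book’s numbers invite a quick back-of-envelope check: a 0.15% flake rate only “implies thousands of flakes every day” if millions of test executions run daily. The daily execution count below is purely illustrative (the book doesn’t state one), but it shows the scale being implied:

```python
# Back-of-envelope check on the quoted figures. The daily execution
# count is an assumption for illustration; the 0.15% rate is from the book.

def expected_flakes(daily_executions: int, flake_rate: float) -> float:
    """Expected number of flaky test results per day."""
    return daily_executions * flake_rate

daily_executions = 4_000_000  # illustrative assumption, not from the book
flake_rate = 0.0015           # the 0.15% quoted in the book

flakes = expected_flakes(daily_executions, flake_rate)
print(f"{flakes:,.0f} flaky results per day")
```

At around four million executions a day, that 0.15% already means some six thousand flaky results to triage daily, which makes their point about actively investing engineering hours quite concrete.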

Just as we encourage tests of smaller size, at Google, we also encourage engineers to write tests of narrower scope. As a very rough guideline, we tend to aim to have a mix of around 80% of our tests being narrow-scoped unit tests that validate the majority of our business logic; 15% medium-scoped integration tests that validate the interactions between two or more components; and 5% end-to-end tests that validate the entire system. Figure 11-3 depicts how we can visualize this as a pyramid.

It was inevitable during coverage of automation that some kind of “test pyramid” would make an appearance! In this case, they use the classic Mike Cohn automated test pyramid but I was shocked to see them labelling the three different layers with percentages based on test case count. By their own reasoning, the tests in the different layers are of different scope (that’s why they’re in different layers, right?!) so counting them against each other really makes no sense at all.
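The objection to count-based percentages can be made concrete with a toy calculation (all figures below are invented for illustration): a suite that is 80/15/5 by test count looks completely different when weighted by execution time, precisely because the layers differ in scope:

```python
# Toy illustration (all figures invented): the same suite described by
# test count vs. total execution time gives very different "mixes".

suite = {
    # layer: (number of tests, average seconds per test)
    "unit":        (8000, 0.05),
    "integration": (1500, 2.0),
    "end-to-end":  (500, 30.0),
}

total_tests = sum(n for n, _ in suite.values())
total_time = sum(n * t for n, t in suite.values())

for layer, (n, t) in suite.items():
    by_count = 100 * n / total_tests
    by_time = 100 * n * t / total_time
    print(f"{layer:12} {by_count:5.1f}% of tests, {by_time:5.1f}% of runtime")
```

With these made-up numbers, the 5% of tests that are end-to-end consume over 80% of the runtime – so stating the mix as percentages of test case count tells you very little about where the effort (or the risk coverage) actually sits.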

Our recommended mix of tests is determined by our two primary goals: engineering productivity and product confidence. Favoring unit tests gives us high confidence quickly, and early in the development process. Larger tests act as sanity checks as the product develops; they should not be viewed as a primary method for catching bugs.

The concept of “confidence” being afforded by particular kinds of checks arises again and it’s also clear that automated checks are viewed as enablers of productivity.

Trying to answer the question “do we have enough tests?” with a single number ignores a lot of context and is unlikely to be useful. Code coverage can provide some insight into untested code, but it is not a substitute for thinking critically about how well your system is tested.

It’s good to see context being mentioned and also the shortcomings of focusing on coverage numbers alone. What I didn’t really find anywhere in what I read in this book was the critical thinking that would lead to an understanding that humans interacting with what’s been built is also a necessary part of assessing whether we’ve got what we wanted. The closest they get to talking about humans experiencing the software in earnest comes from their thoughts around “exploratory testing”:

Exploratory Testing is a fundamentally creative endeavor in which someone treats the application under test as a puzzle to be broken, maybe by executing an unexpected set of steps or by inserting unexpected data. When conducting an exploratory test, the specific problems to be found are unknown at the start. They are gradually uncovered by probing commonly overlooked code paths or unusual responses from the application. As with the detection of security vulnerabilities, as soon as an exploratory test discovers an issue, an automated test should be added to prevent future regressions.

Using automated testing to cover well-understood behaviors enables the expensive and qualitative efforts of human testers to focus on the parts of your products for which they can provide the most value – and avoid boring them to tears in the process.

This description of what exploratory testing is, and what it’s best suited to, is completely unfamiliar to me as a practitioner of exploratory testing for fifteen years or so. I don’t treat the software “as a puzzle to be broken” and I’m not even sure what it would mean to do so. It also doesn’t make sense to me to say “the specific problems to be found are unknown at the start” – surely this applies to any type of testing? If we already knew what the problems were, we wouldn’t need to test to discover them. My exploratory testing efforts are not focused on “commonly overlooked code paths” either; in fact, I’m rarely interested in the code but rather in the behaviour of the software as experienced by the end user. Given that “exploratory testing” as an approach has been formally defined for such a long time (and refined over that time), it concerns me to see such a different notion being labelled as “exploratory testing” in this book.

TL;DRs
Automated testing is foundational to enabling software to change.
For tests to scale, they must be automated.
A balanced test suite is necessary for maintaining healthy test coverage.
“If you liked it, you should have put a test on it.”
Changing the testing culture in organizations takes time.

In wrapping up chapter 11 of the book, the focus is again on automated checks with essentially no mention of human testing. The scaling issue is highlighted here also, but thinking solely in terms of scale is missing the point, I think.

The chapters of this book devoted to “testing” in some way cover a lot of ground, but the vast majority of that journey is devoted to automated checks of various kinds. Given Google’s reputation and perceived leadership status in IT, I was really surprised to see mention of the “cost of change curve” and the test automation pyramid, but not surprised by the lack of focus on human exploratory testing.

Circling back to that triggering quote I saw in Adam’s blog (“Attempting to assess product quality by asking humans to manually interact with every feature just doesn’t scale”), I didn’t find an explanation of how they do in fact assess product quality – at least in the chapters I read. I was encouraged that they used the term “assess” rather than “measure” when talking about quality (on which James Bach wrote the excellent blog post, Assess Quality, Don’t Measure It), but I only read about their various approaches to using automated checks to build “confidence”, etc. rather than how they actually assess the quality of what they’re building.

I think it’s also important to consider your own context before taking Google’s ideas as a model for your own organization. The vast majority of testers don’t operate in organizations of Google’s scale and so don’t need to copy their solutions to these scaling problems. It seems we’re very fond of taking models, processes, methodologies, etc. from one organization and trying to copy the practices in an entirely different one (the widespread adoption of the so-called “Spotify model” is a perfect example of this problem).

Context is incredibly important and, in this particular case, I’d encourage anyone reading about Google’s approach to testing to be mindful of how different their scale is and not use the argument from the original quote that inspired this post to argue against the need for humans to assess the quality of the software we build.

* It would be remiss of me not to mention a brilliant response to this same quote from Michael Bolton – in the form of his 47-part Twitter thread (yes, 47!).

Lessons learned from writing a ten-part blog series

After leaving Quest back in August 2020, I spent some time working on ideas for a new venture. During this time, I learned some useful lessons from courses by Pat Flynn and got some excellent ideas from Teachable’s Share What You Know Summit. When I launched my new software testing consultancy, Dr Lee Consulting, I decided to try out one of the ideas I’d heard for generating content around my new brand and so started a blog series, inspired most notably by Terry Rice.

After committing to a ten-part series of posts, I decided to announce my intention publicly (on Twitter and LinkedIn) to keep myself honest, but chose not to commit to a cadence for publishing the parts. I felt that publishing a new blog post once a week was about right and made an internal note to aim for this cadence. Some posts took longer to write than others and the review cycle was more involved for some posts. The series also spread over the Christmas/New Year period, but the entire series took me just on three months to complete so my cadence ended up being close to what I initially thought it would be.

My blogging over the last several years has usually been inspired by something I’ve read or observed, or by an event I’ve attended such as a conference or meetup. These somewhat more spontaneous and sporadic content ideas mean that my posts have been inconsistent in both topic and cadence, not that I see any of this as being an issue.

Committing to a series of posts for which the subject matter was determined for me (in this case by search engine data) meant that I didn’t need to be creative in coming up with ideas for posts, but instead could focus on trying to add something new to the conversation in terms of answering these common questions. I found it difficult to add much nuance in answering some of the questions, but others afforded more lengthy and perhaps controversial responses. Hopefully the series in its entirety is of some value anyway.

My thanks again to Paul Seaman and Ky for reviewing every part of this blog series, as well as to all those who’ve amplified the posts in this series via their blogs, newsletters, lists and social media posts.

The ten parts of my first blog series can be accessed using the links below:

  1. Why is software testing important?
  2. How does software testing impact software quality?
  3. When should software testing activities start?
  4. How is software testing done?
  5. Can you automate software testing?
  6. Is software testing easy?
  7. Is software testing a good career?
  8. Can I learn software testing on my own?
  9. Which software testing certification is the best?
  10. What will software testing look like in 2021?

(Feel free to send me ideas for any topics you’d like to see covered in a multi-part blog series in the future.)

Donation of proceeds from sales of “An Exploration of Testers” book

In October 2020, I published my first software testing book, “An Exploration of Testers”. As I mentioned then, one of my intentions with this project was to generate some funds to give back to the testing community (with 100% of all proceeds I receive from book sales being returned to the community).

I’m delighted to announce that I’ve now made my first donation as a result of sales so far, based on royalties for the book in LeanPub to date:

(screenshot: LeanPub royalties)

(Note that there is up to a 45-day lag between book sales and my receipt of those funds, so some recent sales are not included in this first donation amount.)

I’ve personally rounded up the royalties paid so far (US$230.93) to form a donation of US$250 (and covered their processing fees) to the Association for Software Testing for use in their excellent Grant Program. I’m sure these funds will help meetup and peer conference organizers greatly in the future.

I will make further donations of royalties received from book sales not covered by this first donation.

“An Exploration of Testers” is available for purchase via LeanPub and a second edition featuring more contributions from great testers around the world should be coming soon. My thanks to all of the contributors so far for making the book a reality and also to those who’ve purchased a copy, without whom this valuable donation to the AST wouldn’t have been possible.

Common search engine questions about testing #10: “What will software testing look like in 2021?”

This is the final part of a ten-part blog series in which I’ve answered some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this last post, I ponder the open question of “What will software testing look like in 2021?” (note: I updated the year from 2020, as it appeared in my original Answer The Public dataset, to 2021).

The reality for most people involved in the software testing business is that testing will look pretty much the same in 2021 as it did in 2020 – and probably as it did for many of the years before that too. Incremental improvements take time in organisations and the scope & impact of such changes will vary wildly between different organisations and even within different parts of the same organisation.

I fully expect 2021 to yield a number of reports about trends in software testing and quality, akin to Capgemini’s annual World Quality Report (which I critiqued again last year). There will probably be a lot of noise around the application of AI and machine learning to testing, especially from tool vendors and the big consultancies.

I feel certain that automation (especially of the “codeless” variety) will continue to be one of the main threads around testing with companies continuing to recruit on the basis of “automated testing” prowess over exploratory testing skills.

I think a small but dedicated community of people genuinely interested in advancing the craft of software testing will continue to publish their ideas and look to inject some reality into the various places that testing gets discussed online.

My daily meditation practice has applications here too. In the same way that the practice helps me to recognise when thoughts are happening without getting caught up in their storyline, I think you should make an effort to observe the inevitable commentary on trends in the testing industry through 2021 without going out of your way to follow them. These trends are likely to change again next year and expending effort trying to keep “on trend” is likely effort better spent elsewhere. Instead, I would recommend focusing on the fundamentals of good software testing, while continuing to demonstrate the value of good testing and advancing the practice as best you can in the context of your organisation.

I would also encourage you to make 2021 the year that you tell your testing stories for the benefit of the wider community – your stories are unique, valuable and a great way for others to learn what’s really going on in our industry. There are many avenues to share your first-person experiences – blog about them, share them as LinkedIn articles, talk about them at meetups or present them at a conference (many of which seem destined to remain as virtual events through 2021, which I see as a positive in terms of widening the opportunity for more diverse stories to be heard).

For some alternative opinions on what 2021 might look like, check out the responses to the recent question “What trends do you think will emerge for testing in 2021?” posed by Ministry of Testing on LinkedIn.

You can find the previous nine parts of this blog series at:

I’ve provided the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

I’m grateful to Paul Seaman and Ky who acted as reviewers for every part of this blog series; I couldn’t have completed the series without their help, guidance and encouragement along the way, thank you!

Thanks also to all those who’ve amplified the posts in this series via their blogs, lists and social media posts – it’s been much appreciated. And, last but not least, thanks to Terry Rice for the underlying idea for the content of this series.

Common search engine questions about testing #9: “Which software testing certification is the best?”

This is the penultimate part of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Which software testing certification is the best?“.

There has been much controversy around certification in our industry for a very long time. The certification market is dominated by the International Software Testing Qualifications Board (ISTQB), which they describe as “the world’s most successful scheme for certifying software testers”. The scheme arose out of the British Computer Society’s ISEB testing certification in the late 1990s and has grown to become the de facto testing certification scheme. With a million-or-so exams administered and 700,000+ certifications issued, the scheme has certainly been successful in dishing out certifications across its ever-increasing range of offerings (broadly grouped into Agile, Core and Specialist areas).

In the interests of disclosure, I am Foundation certified by the ANZTB and I encouraged all of the testers at Quest in the early-mid 2000s to get certified too. At the time, it felt to me like this was the only certification that gave a stamp of professionalism to testers. After I received education from Michael Bolton during Rapid Software Testing in 2007, I soon realised the errors in my thinking – and then put many of the same testers through RST with James Bach a few years later!

Although the ISTQB scheme has issued many certifications, the value of these certifications is less clear. The lower-level certifications, particularly Foundation, are very easy to obtain and require little to no practical knowledge or experience in software testing. It’s been disappointing to witness how this simple de facto certification became a pre-requisite for hiring testers all over the world. The requirement to be ISTQB-certified doesn’t seem to crop up very often on job ads in the Australian market now, though, so maybe its perceived value is falling over time.

If your desire is to become an excellent tester, then I would encourage you to adopt some of the approaches to learning outlined in the previous post in this series. Following a path of serious self-learning about the craft (and maybe challenging yourself with one of the more credible training courses such as BBST or RST) is likely to provide you with much more value in the long-term than ticking the ISTQB certification box. If you’re concerned about your resume “making the cut” when applying for jobs without having ISTQB certification, consider taking Michael Bolton’s advice in No Certification, No Problem!

Coming back to the original question: imagine what the best software testing certification might be if you happen to be a for-profit training provider for ISTQB certifications. Then think about what the best software testing certification might be if you’re a tester with a few years of experience in the industry looking to take your skills to the next level. I don’t think it makes sense to ask which (of anything) is the “best” as there are so many context-specific factors to consider.

The de facto standard for certification in our industry, viz. ISTQB, is not a requirement for you to become an excellent and credible software tester, in my opinion.

If you’re interested in a much fuller treatment of the issues with testing certifications, I think James Bach has covered all the major arguments in his blog post, Against Certification. Ilari Henrik Aegerter’s short Super Single Slide Sessions #6 – On Certifications video is also worth a look and, for some light relief around this controversial topic, see the IQSTD website!

You can find the first eight parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my review team (Paul Seaman and Ky) for their helpful feedback on this post. Their considerable effort and input as this series comes towards an end have been instrumental in producing posts that I’m proud of.

Common search engine questions about testing #8: “Can I learn software testing on my own?”

This is the eighth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Can I learn software testing on my own?” (and the related questions, “Can I learn software testing online?” and “Can anybody learn software testing?”).

The skills needed to be an excellent tester can be learned. How you choose to undertake that learning is a personal choice, but there’s really no need to tackle this substantial task as a solo effort – and I would strongly encourage you not to go it alone. The testing community is strong and, in my experience, exceptionally willing to help people on their journey to becoming better testers, so utilizing this vast resource should be part of your strategy. There is so much great content online for free, and engaging with great testers is straightforward, most notably (in my opinion) via Twitter and LinkedIn.

While it’s great to learn the various techniques and approaches to testing, it’s also worth looking more broadly into fields such as psychology and sociology. Becoming an excellent tester requires more than just great testing and technical skills so broadening your learning should be helpful. While I don’t recommend most of the testing books from “experts”, I’ve made a few recommendations in the Resources section of my consultancy website (and you can find a bunch of blogs, articles, etc. as starting points for further reading there too).

The next part of this blog series will cover the topic of certifications, so I won’t discuss this in depth here – but I don’t believe it’s necessary to undertake the most common certifications in our industry, viz. those offered by the ISTQB. The only formal courses around testing that I choose to recommend are Rapid Software Testing (which I’ve personally attended twice, with Michael Bolton and then James Bach) and the great value Black Box Software Testing courses from the Association for Software Testing.

You can certainly learn the skills required to be an excellent tester and there’s simply no need to go it alone in doing so. There is no need to attend expensive training courses or go through certification schemes on your way to becoming excellent, but you will need persistence, a growth mindset and a keen interest in continuous learning. I recommend leveraging the large, strong and helpful testing community in your journey of learning the craft – engaging with this community has helped me tremendously over many years and I try to give back to it in whatever ways I can, hopefully inspiring and helping more people to experience the awesomeness of the craft of software testing.

You might find the following blog posts useful too in terms of guiding your learning process:

You can find the first seven parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #7: “Is software testing a good career?”

This is the seventh of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Is software testing a good career?” (and the related questions, “How is software testing a career?” and “Why choose software testing as a career?”).

Reflecting first on my own experience, software testing ended up being an awesome career. I didn’t set out to become a career software tester, though. After a few years as a developer in the UK, I moved to Australia and started looking for work in the IT industry. Within a couple of weeks of arriving in the country, I landed an interview at Quest Software (then in the Eastern suburbs of Melbourne) for a technical writer position. After interviewing for that position, they mentioned that the “QA Manager” was also looking for people and asked whether I’d be interested in chatting with her also. Long story short, I didn’t land the technical writing job but was offered a “Senior Tester” position – and I accepted it without hesitation! I was simply happy to have secured my first job in a new country, never intending it to be a long-term proposition with Quest or the start of a new career in the field of software testing. As it turned out, I stayed with Quest for 21 years in testing/quality related roles from start to finish!

So, there was some luck involved in finding a good company to work for and a job that I found interesting. I’m not sure I’d have stayed in testing, though, had it not been for the revelation that was attending Rapid Software Testing with Michael Bolton in 2007 – that really gave me the motivation to treat software testing more seriously as a long-term career prospect and also marked the time, in my opinion, that I really started to add much more value to Quest as well. The company appreciated the value that good testers were adding to their development teams and I was fortunate to mentor, train, coach and work alongside some great testers, not only in Australia but all over the world. Looking back on my Quest journey, I think it was the clear demonstration of value from testing that led to more and more opportunities for me (and other testers), as predicted by Steve Martin when he said “be so good they can’t ignore you”!

The landscape has changed considerably in the testing industry over the last twenty years, of course. It has to be acknowledged that it’s becoming very difficult to secure testing roles in which you can expect to perform exploratory testing as the mainstay of your working day (and especially so in higher cost locations). I’ve rarely seen an advertisement for such a role in Australia in the last few years, with most employers now also demanding some “automated testing” skills as part of the job. Whether the reality post-employment is that nearly all testers are now performing a mix of testing (be it scripted, exploratory or a combination of both) and automation development, I’m not so sure. If your desire is to become an excellent (exploratory) tester without having some coding knowledge/experience, then I think there are still some limited opportunities out there, but seeking them out will most likely require you to be in the network of people in similar positions in companies that understand the value that testing of this kind can bring.

Making the effort to learn some coding skills is likely to be beneficial in terms of getting your resume over the line. I’d recommend not worrying too much about which language(s)/framework(s) you choose to learn, but rather focusing on the fundamentals of good programming. I would also suggest building an understanding of the “why” and “what” in terms of automation (over the “how”, i.e. which language and framework to leverage in a particular context) as this understanding will allow you to quickly add value and not be so vulnerable to the inevitable changes in language and framework preferences over time.

I think customers of the software we build expect that the software has undergone some critical evaluation by humans before they acquire it, so it both intrigues and concerns me that so many big tech companies publicly express their lack of “testers” as some kind of badge of honour. I simply don’t understand why this is seen as a good thing and it seems to me that this trend is likely to come full (or full-ish) circle at some point when the downsides of removing specialists in testing from the development, release and deployment process outweigh the perceived benefits (not that I’m sure what these are, apart from reduced headcount and cost?).

I still believe that software testing is a good career choice. It can be intellectually challenging, varied and stimulating in the right organization. It’s certainly not getting any easier to secure roles in which you’ll spend all of your time performing exploratory testing, though, so broadening your arsenal to include some coding skills and building a good understanding of why and what makes sense to automate are likely to help you along the way to gaining meaningful employment in this industry.

You can find the first six parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my steadfast review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #6: “Is software testing easy?”

This is the sixth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Is software testing easy?” (and the related question, “Why is software testing so hard?”).

There exists a perception that “anyone can test” and, since testing is really just “playing with the software”, it’s therefore easy. By contrast, it seems that programming is universally viewed as being difficult. This reasoning leads people to believe that a good place to start their career in IT is as a tester, with a view to moving “up” to the more hallowed ranks of developers.

My experience suggests that many people have no issue with telling testers how to do their job, in a way those same people wouldn’t dream of doing with developers. This generally seems to be based on some past experience from their career when they considered themselves a tester, even if that experience is now significantly outdated and they never engaged in any serious work to advance themselves as testers. Such interactions are a red flag that many in the IT industry view testing as the easy part of the software development process.

The perception that testing is easy is also not helped by the existence and prevalence of the simple, easy-to-achieve ISTQB Foundation certification. The certification has been achieved by several hundred thousand people worldwide (the ISTQB had issued 721,000+ certifications as of May 2020, with the vast majority of those likely to be at Foundation level), so it’s clearly not difficult to obtain (even without study) and has flooded the market with “testers” who have little but this certification behind them.

Thanks to Michael Bolton (via this very recent tweet) for identifying another reason why this perception exists. “Testing” is often conflated with “finding bugs” and we all know how easy it is to find bugs in the software we use every day:

There’s a reason that many people think testing is easy, due to an asymmetry. No one ever fired up a computer and stumbled into creating a slick UI or a sophisticated algorithm, but people stumble into bugs every day. Finding bugs is easy, they think. So testing must be easy.

Another unfortunate side effect of the idea that testing is easy is that testers are viewed as fungible, i.e. any tester can simply be replaced by another one since there’s not much skill required to perform the role. The move to outsource testing capability to lower cost locations then becomes an attractive proposition. I’m not going to discuss test outsourcing and offshoring in any depth here, but I’ve seen a lot of great, high value testers around the world lose their jobs due to this process of offshoring based on the misplaced notion of fungibility of testing resources.

Enough about the obvious downsides of mistakenly viewing testing as easy! I don’t believe good software testing is at all easy and hopefully my reasons for saying this will help you to counter any claims that testing (at least, testing as I talk about it) is easy work and can be performed equally well by anyone.

As a good tester, we are tasked with evaluating a product by learning about it through exploration, experimentation, observation and inference. This requires us to adopt a curious, imaginative and critical thinking mindset, while we constantly make decisions about what’s interesting to investigate further and evaluate the opportunity cost of doing so. We look for inconsistencies by referring to descriptions of the product, claims about it and within the product itself. These are not easy things to do.

We study the product and build models of it to help us make conjectures and design useful experiments. We perform risk analysis, taking into account many different factors to generate a wealth of test ideas. This modelling and risk analysis work is far from easy.

We ask questions and provide information to help our stakeholders understand the product we’ve built so that they can decide if it’s the product they wanted. We identify important problems and inform our stakeholders about them – and this is information they sometimes don’t want to hear. Revealing problems (or what might be problems) in an environment generally focused on proving we built the right thing is not easy and requires emotional intelligence & great communication skills.

We choose, configure and use tools to help us with our work and to question the product in ways we’re incapable of (or inept at) as humans without the assistance of tools. We might also write some code (e.g. code developed specifically for the purpose of exercising other code or implementing algorithmic decision rules against specific observations of the product, “checks”), as well as working closely with developers to help them improve their own test code. Using tooling and test code appropriately is not easy.

(You might want to check out Michael Bolton’s Testing Rap, from which some of the above was inspired, as a fun way to remind people about all the awesome things human testers actually do!)

This heady mix of aspects of art, science, sociology, psychology and more – requiring skills in technology, communication, experiment design, modelling, risk analysis, tooling and more – makes it clear to me why good software testing is hard to do.

In wrapping up, I don’t believe that good software testing is easy. Good testing is challenging to do well, in part due to the broad reach of subject areas it touches on and also the range of different skills required – but this is actually good news. The challenging nature of testing enables a varied and intellectually stimulating job and the skills to do it well can be learned.

It’s not easy, but most worthwhile things in life aren’t!

You can find the first five parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my dedicated review team (Paul Seaman and Ky) for their helpful feedback on this post. Paul’s blog, Not Everybody Can Test, is worth a read in relation to the subject matter of this post.

Common search engine questions about testing #5: “Can you automate software testing?”

This is the fifth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

As I reach the halfway point in this series, I come to the question “Can you automate software testing?” (and the related question, “How can software test automation be done?”).

If you spend any time on Twitter and LinkedIn following threads around testing, this question of whether testing can be automated crops up with monotonous regularity and often seems to result in very heated discussion, with strong opinions from both the “yes” and “no” camps.

As a reminder (from part one of this blog series), my preferred definition of testing comes from Michael Bolton and James Bach, viz.

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.

Looking at this definition, testing is clearly a deeply human activity since skills such as learning, exploring, questioning and inferring are not generally those well modelled by machines (even with AI/ML). Humans may or may not be assisted by tools or automated means while exercising these skills, but that doesn’t mean that the performance of testing is itself “automated”.

The distinction drawn between “testing” and “checking” made by James Bach and Michael Bolton has been incredibly helpful for me when talking about automation and countering the idea that testing can be automated (much more so than “validation” and “verification” in my experience). As a refresher, their definition of checking is:

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

As Michael says, “We might choose to automate the exercise of some functions in a program, and then automatically compare the output of that program with some value obtained by another program or process. I’d call that a check.” Checking is a valuable component of our overall testing effort and, by this definition, lends itself to being automated. But the binary evaluations resulting from the execution of such checks form only a small part of the testing story and there are many aspects of product quality that are not amenable to such black and white evaluation.
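To make the distinction concrete, here’s a minimal sketch of what a “check” looks like in code – an algorithmic decision rule applied to a specific observation of a product, producing a binary result. The `apply_discount` function and its expected value are entirely hypothetical, standing in for whatever product behaviour a real check would observe:

```python
# A minimal sketch of a "check": apply an algorithmic decision rule
# to a specific observation of a product. The product function and
# expected value here are hypothetical stand-ins.

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the product code under observation."""
    return round(price * (1 - percent / 100), 2)

def check_discount() -> bool:
    """The check itself: observe an output, compare it to a fixed rule."""
    observed = apply_discount(200.0, 10.0)
    expected = 180.0
    return observed == expected  # a binary evaluation: pass or fail

print("PASS" if check_discount() else "FAIL")
```

Note how little of the testing work is visible in the execution: the risk analysis that decided this check was worth having, the choice of what to observe, and the judgement about what a failure would mean all happened outside the code.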

Thinking about checks, there’s a lot that goes into them apart from the actual execution (by a machine or otherwise): someone decided we needed a check (risk analysis), someone designed the check, someone implemented the check (coding), someone decided what to observe and how to observe it, and someone evaluated the results from executing the check. These aspects of the check are testing activities and, importantly, they’re not the aspects that can be given over to a machine, i.e. be automated. There is significant testing skill required in the design, implementation and analysis of the check and its results; the execution (the automated bit) is really the easy part.

To quote Michael again:

A machine producing a bit is not doing the testing; the machine, by performing checks, is accelerating and extending our capacity to perform some action that happens as part of the testing that we humans do. The machinery is invaluable, but it’s important not to be dazzled by it. Instead, pay attention to the purpose that it helps us to fulfill, and to developing the skills required to use tools wisely and effectively.

We also need to be mindful not to conflate automation in testing with “automated checking”. There are many other ways that automation can help us, extending human abilities and enabling testing that humans cannot practically perform. Some examples of applications of automation include test data generation, test environment creation & configuration, software installation & configuration, monitoring & logging, simulating large user loads, repeating actions en masse, etc.
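As an illustration of automation in testing that isn’t a check, here’s a small sketch of test data generation – using a machine to produce far more varied input than a human could practically type, for use in later (human) testing. The record shape and file name are illustrative only:

```python
# A sketch of "automation in testing" that is not an automated check:
# generating bulk, varied test data to extend what a human tester
# could practically produce by hand. Field names are illustrative.
import csv
import random
import string

def random_email() -> str:
    """Produce a throwaway email address for a synthetic user record."""
    user = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.com"

def generate_users(path: str, count: int) -> None:
    """Write `count` synthetic user records to a CSV for later testing."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "email"])
        for i in range(count):
            writer.writerow([i, random_email()])

generate_users("test_users.csv", 1000)
```

No evaluation happens here at all – no pass, no fail – yet the tooling clearly extends the humans doing the testing, which is exactly the “automation in testing” framing.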

If we make the mistake of allowing ourselves to believe that “automated testing” exists, then we can all too easily fall into the trap of narrowing our thinking about testing to just automated checking, with a resulting focus on the development and execution of more and more automated checks. I’ve seen this problem many times across different teams in different geographies, especially so in terms of regression testing.

I think we are well served to eliminate “automated testing” from our vocabulary, instead talking about “automation in testing” and the valuable role automation can play in both testing & checking. The continued propaganda around “automated testing” as a thing, though, makes this job much harder than it sounds. You don’t have to look too hard to find examples of test tool vendors using this term and making all sorts of bold claims about their “automated testing solutions”. It’s no wonder that so many testers remain confused in answering the question about whether testing can be automated when a quick Google search got me to some of these gems within the top few results: What is automated testing? (SmartBear), Automated software testing (Atlassian) and Test Automation vs. Automated Testing: The Difference Matters (Tricentis).

I’ve only really scratched the surface of this big topic in this blog, but it should be obvious by now that I don’t believe you can automate software testing. There is often value to be gained by automating checks and leveraging automation to assist and extend humans in their testing efforts, but the real testing lies with the humans – and always will.

Some recommended reading related to this question:

  • The Testing and Checking Refined article by James Bach and Michael Bolton, in which the distinction between testing and checking is discussed in depth, as well as the difference between checks performed by humans and those by machines.
  • The Automation in Testing (AiT) site by Richard Bradshaw and Mark Winteringham, their six principles of AiT make a lot of sense to me.
  • Bas Dijkstra’s blog

You can find the first four parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.