Category Archives: Context-driven testing

Common search engine questions about testing #10: “What will software testing look like in 2021?”

This is the final part of a ten-part blog series in which I’ve answered some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this last post, I ponder the open question of “What will software testing look like in 2021?” (note: I’ve updated the year from 2020, as it appeared in my original Answer The Public dataset, to 2021).

The reality for most people involved in the software testing business is that testing will look pretty much the same in 2021 as it did in 2020 – and probably as it did for many of the years before that too. Incremental improvements take time in organisations and the scope & impact of such changes will vary wildly between different organisations and even within different parts of the same organisation.

I fully expect 2021 to yield a number of reports about trends in software testing and quality, akin to Capgemini’s annual World Quality Report (which I critiqued again last year). There will probably be a lot of noise around the application of AI and machine learning to testing, especially from tool vendors and the big consultancies.

I feel certain that automation (especially of the “codeless” variety) will continue to be one of the main threads around testing with companies continuing to recruit on the basis of “automated testing” prowess over exploratory testing skills.

I think a small but dedicated community of people genuinely interested in advancing the craft of software testing will continue to publish their ideas and look to inject some reality into the various places that testing gets discussed online.

My daily meditation practice has applications here too. In the same way that the practice helps me to recognise when thoughts are happening without getting caught up in their storyline, I think you should make an effort to observe the inevitable commentary on trends in the testing industry through 2021 without going out of your way to follow them. These trends are likely to change again next year and expending effort trying to keep “on trend” is likely effort better spent elsewhere. Instead, I would recommend focusing on the fundamentals of good software testing, while continuing to demonstrate the value of good testing and advancing the practice as best you can in the context of your organisation.

I would also encourage you to make 2021 the year that you tell your testing stories for the benefit of the wider community – your stories are unique, valuable and a great way for others to learn what’s really going on in our industry. There are many avenues to share your first-person experiences – blog about them, share them as LinkedIn articles, talk about them at meetups or present them at a conference (many of which seem destined to remain as virtual events through 2021, which I see as a positive in terms of widening the opportunity for more diverse stories to be heard).

For some alternative opinions on what 2021 might look like, check out the responses to the recent question “What trends do you think will emerge for testing in 2021?” posed by Ministry of Testing on LinkedIn.

You can find the previous nine parts of this blog series at:

I’ve provided the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

I’m grateful to Paul Seaman and Ky who acted as reviewers for every part of this blog series; I couldn’t have completed the series without their help, guidance and encouragement along the way, thank you!

Thanks also to all those who’ve amplified the posts in this series via their blogs, lists and social media posts – it’s been much appreciated. And, last but not least, thanks to Terry Rice for the underlying idea for the content of this series.

Common search engine questions about testing #9: “Which software testing certification is the best?”

This is the penultimate part of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Which software testing certification is the best?“.

There has been much controversy around certification in our industry for a very long time. The certification market is dominated by the International Software Testing Qualifications Board (ISTQB), which they describe as “the world’s most successful scheme for certifying software testers”. The scheme arose out of the British Computer Society’s ISEB testing certification in the late 1990s and has grown to become the de facto testing certification scheme. With a million-or-so exams administered and 700,000+ certifications issued, the scheme has certainly been successful in dishing out certifications across its ever-increasing range of offerings (broadly grouped into Agile, Core and Specialist areas).

In the interests of disclosure, I am Foundation certified by the ANZTB and I encouraged all of the testers at Quest in the early-mid 2000s to get certified too. At the time, it felt to me like this was the only certification that gave a stamp of professionalism to testers. After attending Rapid Software Testing with Michael Bolton in 2007, I soon realised the error in my thinking – and then put many of those same testers through RST with James Bach a few years later!

Although the ISTQB scheme has issued many certifications, the value of these certifications is less clear. The lower-level certifications, particularly Foundation, are very easy to obtain and require little to no practical knowledge or experience in software testing. It’s been disappointing to witness how this de facto simple certification became a pre-requisite for hiring testers all over the world. The requirement to be ISTQB-certified doesn’t seem to crop up very often on job ads in the Australian market now, though, so maybe its perceived value is falling over time.

If your desire is to become an excellent tester, then I would encourage you to adopt some of the approaches to learning outlined in the previous post in this series. Following a path of serious self-learning about the craft (and maybe challenging yourself with one of the more credible training courses such as BBST or RST) is likely to provide you with much more value in the long-term than ticking the ISTQB certification box. If you’re concerned about your resume “making the cut” when applying for jobs without having ISTQB certification, consider taking Michael Bolton’s advice in No Certification, No Problem!

Coming back to the original question, imagine what the best software testing certification might be if you happen to be a for-profit training provider for ISTQB certifications. Then think about what the best software testing certification might be if you’re a tester with a few years of experience in the industry looking to take your skills to the next level. I don’t think it makes sense to ask which (of anything) is the “best” as there are so many context-specific factors to consider.

The de facto standard for certification in our industry, viz. ISTQB, is not a requirement for you to become an excellent and credible software tester, in my opinion.

If you’re interested in a much fuller treatment of the issues with testing certifications, I think James Bach has covered all the major arguments in his blog post, Against Certification. Ilari Henrik Aegerter’s short Super Single Slide Sessions #6 – On Certifications video is also worth a look and, for some light relief around this controversial topic, see the IQSTD website!

You can find the first eight parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my review team (Paul Seaman and Ky) for their helpful feedback on this post; their considerable effort and input as this series comes towards an end have been instrumental in producing posts that I’m proud of.

Common search engine questions about testing #8: “Can I learn software testing on my own?”

This is the eighth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Can I learn software testing on my own?” (and the related questions, “Can I learn software testing online?” and “Can anybody learn software testing?”).

The skills needed to be an excellent tester can be learned. How you choose to undertake that learning is a personal choice, but there’s really no need to tackle this substantial task as a solo effort – and I would strongly encourage you not to go it alone. The testing community is strong and, in my experience, exceptionally willing to help people on their journey to becoming better testers, so utilizing this vast resource should be part of your strategy. There is so much great content online for free, and engaging with great testers is straightforward, most notably (in my opinion) via Twitter and LinkedIn.

While it’s great to learn the various techniques and approaches to testing, it’s also worth looking more broadly into fields such as psychology and sociology. Becoming an excellent tester requires more than just great testing and technical skills so broadening your learning should be helpful. While I don’t recommend most of the testing books from “experts”, I’ve made a few recommendations in the Resources section of my consultancy website (and you can find a bunch of blogs, articles, etc. as starting points for further reading there too).

The next part of this blog series will cover the topic of certifications, so I won’t discuss this in depth here – but I don’t believe it’s necessary to undertake the most common certifications in our industry, viz. those offered by the ISTQB. The only formal courses around testing that I choose to recommend are Rapid Software Testing (which I’ve personally attended twice, with Michael Bolton and then James Bach) and the great value Black Box Software Testing courses from the Association for Software Testing.

You can certainly learn the skills required to be an excellent tester and there’s simply no need to go it alone in doing so. There is no need to attend expensive training courses or go through certification schemes on your way to becoming excellent, but you will need persistence, a growth mindset and a keen interest in continuous learning. I recommend leveraging the large, strong and helpful testing community in your journey of learning the craft – engaging with this community has helped me tremendously over many years and I try to give back to it in whatever ways I can, hopefully inspiring and helping more people to experience the awesomeness of the craft of software testing.

You might find the following blog posts useful too in terms of guiding your learning process:

You can find the first seven parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #7: “Is software testing a good career?”

This is the seventh of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Is software testing a good career?” (and the related questions, “How is software testing a career?” and “Why choose software testing as a career?”).

Reflecting first on my own experience, I can say that software testing ended up being an awesome career for me. I didn’t set out to become a career software tester, though. After a few years as a developer in the UK, I moved to Australia and started looking for work in the IT industry. Within a couple of weeks of arriving in the country, I landed an interview at Quest Software (then in the Eastern suburbs of Melbourne) for a technical writer position. After interviewing for that position, they mentioned that the “QA Manager” was also looking for people and asked whether I’d be interested in chatting with her too. Long story short, I didn’t land the technical writing job but was offered a “Senior Tester” position – and I accepted it without hesitation! I was simply happy to have secured my first job in a new country, never intending it to be a long-term proposition with Quest or the start of a new career in the field of software testing. As it turned out, I stayed with Quest for 21 years in testing/quality related roles from start to finish!

So, there was some luck involved in finding a good company to work for and a job that I found interesting. I’m not sure I’d have stayed in testing, though, had it not been for the revelation that was attending Rapid Software Testing with Michael Bolton in 2007 – that really gave me the motivation to treat software testing more seriously as a long-term career prospect and also marked the time, in my opinion, that I really started to add much more value to Quest as well. The company appreciated the value that good testers were adding to their development teams and I was fortunate to mentor, train, coach and work alongside some great testers, not only in Australia but all over the world. Looking back on my Quest journey, I think it was the clear demonstration of value from testing that led to more and more opportunities for me (and other testers), as predicted by Steve Martin when he said “be so good they can’t ignore you”!

The landscape has changed considerably in the testing industry over the last twenty years, of course. It has to be acknowledged that it’s becoming very difficult to secure testing roles in which you can expect to perform exploratory testing as the mainstay of your working day (and especially so in higher cost locations). I’ve rarely seen an advertisement for such a role in Australia in the last few years, with most employers now also demanding some “automated testing” skills as part of the job. Whether the reality post-employment is that nearly all testers are now performing a mix of testing (be it scripted, exploratory or a combination of both) and automation development, I’m not so sure. If your desire is to become an excellent (exploratory) tester without having some coding knowledge/experience, then I think there are still some limited opportunities out there but seeking them out will most likely require you to be in the network of people in similar positions in companies that understand the value that testing of this kind can bring.

Making the effort to learn some coding skills is likely to be beneficial in terms of getting your resume over the line. I’d recommend not worrying too much about which language(s)/framework(s) you choose to learn, but rather focusing on the fundamentals of good programming. I would also suggest building an understanding of the “why” and “what” in terms of automation (over the “how”, i.e. which language and framework to leverage in a particular context) as this understanding will allow you to quickly add value and not be so vulnerable to the inevitable changes in language and framework preferences over time.

I think customers of the software we build expect that the software has undergone some critical evaluation by humans before they acquire it, so it both intrigues and concerns me that so many big tech companies publicly express their lack of “testers” as some kind of badge of honour. I simply don’t understand why this is seen as a good thing and it seems to me that this trend is likely to come full (or full-ish) circle at some point when the downsides of removing specialists in testing from the development, release and deployment process outweigh the perceived benefits (not that I’m sure what these are, apart from reduced headcount and cost?).

I still believe that software testing is a good career choice. It can be intellectually challenging, varied and stimulating in the right organization. It’s certainly not getting any easier to secure roles in which you’ll spend all of your time performing exploratory testing, though, so broadening your arsenal to include some coding skills and building a good understanding of why and what makes sense to automate are likely to help you along the way to gaining meaningful employment in this industry.

You can find the first six parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my ever-reliable review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #6: “Is software testing easy?”

This is the sixth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “Is software testing easy?” (and the related question, “Why is software testing so hard?”).

There exists a perception that “anyone can test” and, since testing is really just “playing with the software”, it’s therefore easy. By contrast, it seems that programming is universally viewed as being difficult. This reasoning leads people to believe that a good place to start their career in IT is as a tester, with a view to moving “up” to the more hallowed ranks of developers.

My experience suggests that many people have no issue with trying to tell testers how to do their job, in a way that those same people wouldn’t dream of doing with developers. This generally seems to be based on some past experience from their career when they considered themselves a tester, even if that experience is now significantly outdated and they didn’t engage in any serious work to advance themselves as testers. Such interactions are a red flag that many in the IT industry view testing as the easy part of the software development process.

The perception that testing is easy is also not helped by the existence and prevalence of the simple, easy-to-achieve ISTQB Foundation certification. The certification has been achieved by several hundred thousand people worldwide (the ISTQB had issued 721,000+ certifications as of May 2020, with the vast majority of those likely to be at Foundation level), so it’s clearly not difficult to obtain (even without study) and the scheme has flooded the market with “testers” who have little but this certification behind them.

Thanks to Michael Bolton (via this very recent tweet) for identifying another reason why this perception exists. “Testing” is often conflated with “finding bugs” and we all know how easy it is to find bugs in the software we use every day:

There’s a reason that many people think testing is easy, due to an asymmetry. No one ever fired up a computer and stumbled into creating a slick UI or a sophisticated algorithm, but people stumble into bugs every day. Finding bugs is easy, they think. So testing must be easy.

Another unfortunate side effect of the idea that testing is easy is that testers are viewed as fungible, i.e. any tester can simply be replaced by another one since there’s not much skill required to perform the role. The move to outsource testing capability to lower cost locations then becomes an attractive proposition. I’m not going to discuss test outsourcing and offshoring in any depth here, but I’ve seen a lot of great, high value testers around the world lose their jobs due to this process of offshoring based on the misplaced notion of fungibility of testing resources.

Enough about the obvious downsides of mistakenly viewing testing as easy! I don’t believe good software testing is at all easy and hopefully my reasons for saying this will help you to counter any claims that testing (at least, testing as I talk about it) is easy work and can be performed equally well by anyone.

As good testers, we are tasked with evaluating a product by learning about it through exploration, experimentation, observation and inference. This requires us to adopt a curious, imaginative and critical-thinking mindset, while we constantly make decisions about what’s interesting to investigate further and evaluate the opportunity cost of doing so. We look for inconsistencies by referring to descriptions of the product, claims about it, and the product itself. These are not easy things to do.

We study the product and build models of it to help us make conjectures and design useful experiments. We perform risk analysis, taking into account many different factors to generate a wealth of test ideas. This modelling and risk analysis work is far from easy.

We ask questions and provide information to help our stakeholders understand the product we’ve built so that they can decide if it’s the product they wanted. We identify important problems and inform our stakeholders about them – and this is information they sometimes don’t want to hear. Revealing problems (or what might be problems) in an environment generally focused on proving we built the right thing is not easy and requires emotional intelligence & great communication skills.

We choose, configure and use tools to help us with our work and to question the product in ways we’re incapable of (or inept at) as humans without the assistance of tools. We might also write some code (e.g. code developed specifically for the purpose of exercising other code or implementing algorithmic decision rules against specific observations of the product, “checks”), as well as working closely with developers to help them improve their own test code. Using tooling and test code appropriately is not easy.

(You might want to check out Michael Bolton’s Testing Rap, from which some of the above was inspired, as a fun way to remind people about all the awesome things human testers actually do!)

This heady mix of aspects of art, science, sociology, psychology and more – requiring skills in technology, communication, experiment design, modelling, risk analysis, tooling and more – makes it clear to me why good software testing is hard to do.

In wrapping up, I don’t believe that good software testing is easy. Good testing is challenging to do well, in part due to the broad reach of subject areas it touches on and also the range of different skills required – but this is actually good news. The challenging nature of testing enables a varied and intellectually stimulating job and the skills to do it well can be learned.

It’s not easy, but most worthwhile things in life aren’t!

You can find the first five parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my dedicated review team (Paul Seaman and Ky) for their helpful feedback on this post. Paul’s blog, Not Everybody Can Test, is worth a read in relation to the subject matter of this post.

Common search engine questions about testing #5: “Can you automate software testing?”

This is the fifth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

As I reach the halfway point in this series, I come to the question “Can you automate software testing?” (and the related question, “How can software test automation be done?”).

If you spend any time on Twitter and LinkedIn following threads around testing, this question of whether testing can be automated crops up with monotonous regularity and often seems to result in very heated discussion, with strong opinions from both the “yes” and “no” camps.

As a reminder (from part one of this blog series), my preferred definition of testing comes from Michael Bolton and James Bach, viz.

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.

Looking at this definition, testing is clearly a deeply human activity since skills such as learning, exploring, questioning and inferring are not generally those well modelled by machines (even with AI/ML). Humans may or may not be assisted by tools or automated means while exercising these skills, but that doesn’t mean that the performance of testing is itself “automated”.

The distinction drawn between “testing” and “checking” made by James Bach and Michael Bolton has been incredibly helpful for me when talking about automation and countering the idea that testing can be automated (much more so than “validation” and “verification” in my experience). As a refresher, their definition of checking is:

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

As Michael says, “We might choose to automate the exercise of some functions in a program, and then automatically compare the output of that program with some value obtained by another program or process. I’d call that a check.” Checking is a valuable component of our overall testing effort and, by this definition, lends itself to be automated. But the binary evaluations resulting from the execution of such checks form only a small part of the testing story and there are many aspects of product quality that are not amenable to such black and white evaluation.

Thinking about checks, there’s a lot that goes into them apart from the actual execution (by a machine or otherwise): someone decided we needed a check (risk analysis), someone designed the check, someone implemented the check (coding), someone decided what to observe and how to observe it, and someone evaluated the results from executing the check. These aspects of the check are testing activities and, importantly, they’re not the aspects that can be given over to a machine, i.e. be automated. There is significant testing skill required in the design, implementation and analysis of the check and its results; the execution (the automated bit) is really the easy part.
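
To make that split concrete, here’s a minimal sketch of a check in Python (pytest style). The product function, its behaviour and the expected value are all hypothetical; the point is simply that the machine executes only the algorithmic decision rule, not the thinking around it:

```python
# A minimal sketch of an automated "check" (hypothetical product code and
# expected value). The machine executes the decision rule and produces a
# pass/fail bit; deciding that this comparison matters, choosing what to
# observe, and interpreting a failure remain human testing work.

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the product code under test (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # The check itself: one specific observation, one decision rule, one bit.
    assert apply_discount(100.00, 10) == 90.00
```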

To quote Michael again:

A machine producing a bit is not doing the testing; the machine, by performing checks, is accelerating and extending our capacity to perform some action that happens as part of the testing that we humans do. The machinery is invaluable, but it’s important not to be dazzled by it. Instead, pay attention to the purpose that it helps us to fulfill, and to developing the skills required to use tools wisely and effectively.

We also need to be mindful to not conflate automation in testing with “automated checking”. There are many other ways that automation can help us, extending human abilities and enabling testing that humans cannot practically perform. Some examples of applications of automation include test data generation, test environment creation & configuration, software installation & configuration, monitoring & logging, simulating large user loads, repeating actions en masse, etc.
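
As one illustration of automation in testing that isn’t checking, here’s a small sketch of test data generation; the file name, fields and value ranges are my own assumptions for the example:

```python
# A sketch of automation assisting testing without making any evaluation:
# generating bulk test data for humans (or checks) to use later. Field
# names and value ranges are illustrative assumptions only.
import csv
import random
import string

def random_name(length: int = 8) -> str:
    """Produce a simple random customer name."""
    return "".join(random.choices(string.ascii_lowercase, k=length)).title()

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "credit_limit"])
    for i in range(10_000):
        # Mix routine values with boundary-ish ones (0, huge) to make later
        # exploration and load scenarios more interesting.
        credit = random.choice([0, 1, 999_999, random.randint(1, 5_000)])
        writer.writerow([i, random_name(), credit])
```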

If we make the mistake of allowing ourselves to believe that “automated testing” exists, then we can all too easily fall into the trap of narrowing our thinking about testing to just automated checking, with a resulting focus on the development and execution of more and more automated checks. I’ve seen this problem many times across different teams in different geographies, especially so in terms of regression testing.

I think we are well served to eliminate “automated testing” from our vocabulary, instead talking about “automation in testing” and the valuable role automation can play in both testing & checking. The continued propaganda around “automated testing” as a thing, though, makes this job much harder than it sounds. You don’t have to look too hard to find examples of test tool vendors using this term and making all sorts of bold claims about their “automated testing solutions”. It’s no wonder that so many testers remain confused in answering the question about whether testing can be automated when a quick Google search got me to some of these gems within the top few results: What is automated testing? (SmartBear), Automated software testing (Atlassian) and Test Automation vs. Automated Testing: The Difference Matters (Tricentis).

I’ve only really scratched the surface of this big topic in this blog, but it should be obvious by now that I don’t believe you can automate software testing. There is often value to be gained by automating checks and leveraging automation to assist and extend humans in their testing efforts, but the real testing lies with the humans – and always will.

Some recommended reading related to this question:

  • The Testing and Checking Refined article by James Bach and Michael Bolton, in which the distinction between testing and checking is discussed in depth, as well as the difference between checks performed by humans and those by machines.
  • The Automation in Testing (AiT) site by Richard Bradshaw and Mark Winteringham, their six principles of AiT make a lot of sense to me.
  • Bas Dijkstra’s blog

You can find the first four parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my awesome review team (Paul Seaman and Ky) for their helpful feedback on this post.

Common search engine questions about testing #4: “How is software testing done?”

This is the fourth of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I ponder the question “How is software testing done?” (and the related questions, “What are software testing methodologies?”, “What is the software testing life cycle?” and “What is the software testing process?”).

There are many different ways in which software testing is performed, by different people in different organizations with different ideas about what constitutes “good testing”. Don’t be fooled into believing there is “one way” to do testing! There is certainly no single, approved, credible and official way to perform testing – and this is actually a good thing, in my opinion.

So, the question should perhaps be “How might software testing be done?” and, in answering this question, the idea of context is paramount. James Bach defines “context” (in Context-Driven Methodology) as follows:

When I say “context” I mean the totality of a situation that influences the success or failure of an enterprise.

(and Dictionary.com similarly offers “the set of circumstances or facts that surround a particular event, situation, etc.”) The first principle of Context-Driven Testing says “The value of any practice depends on its context.” The way you would approach the testing of a medical device (where a defect could result in loss of life) is likely quite different to how you would test a website for a local business, for example. The context is different – and the differences are important.

While there may be books or certifications that propose a “testing process” or methodology, you should consider the context of your particular situation to assess whether any of these processes or methodologies have valuable elements to leverage. Remember that testing requires a broad variety of different skills and activities: working with other people, formulating hypotheses, creating & changing strategies, critical thinking & evaluation, finding the right people when you need help, assessing what all this might mean for risk and then finding ways to relate this information in compelling and credible ways. What we need is a way of thinking about testing that is flexible enough to cover such a range of skills and activities across many different contexts.

The following from context-driven-testing.com puts it well, I think:

Context-driven testers choose their testing objectives, techniques, and deliverables (including test documentation) by looking first to the details of the specific situation, including the desires of the stakeholders who commissioned the testing. The essence of context-driven testing is project-appropriate application of skill and judgment. The Context-Driven School of testing places this approach to testing within a humanistic social and ethical framework.

Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply “best practices,” we accept that very different practices (even different definitions of common testing terms) will work best under different circumstances.

Bearing the above in mind, the only software testing methodology that I feel comfortable to recommend is Rapid Software Testing (RST) developed by James Bach and Michael Bolton. RST isn’t a prescriptive process but rather a way to understand testing with a focus on context and people:

[RST] is a responsible approach to software testing, centered around people who do testing and people who need it done. It is a methodology (in the sense of “a system of methods”) that embraces tools (aka “automation”) but emphasizes the role of skilled technical personnel who guide and drive the process.

Rather than being a set of templates and rules, RST is a mindset and a skill set. It is a way to understand testing; it is a set of things a tester knows how to do; and it includes approaches to effective leadership in testing.

https://rapid-software-testing.com/about-rapid-software-testing/

RST is therefore quite different from some of the prevalent processes/methodologies that you might come across in searching for resources to answer the question of how testing is done, such as ISTQB and TMap. These systems are often referred to as “factory-style testing” and an excellent summary of how RST differs from these can be found at https://www.satisfice.com/download/how-rst-is-different-from-factory-style-testing

Given how different your context and testing mission is likely to be on different projects in different organizations at different times for different customers, the way “testing is done” necessarily needs to be flexible and adaptable enough to respect these very different situations. Any formal process or methodology that seeks to prescribe how to test is likely to be sub-optimal in your particular context, so I suggest adopting something like the mindset proposed by RST and adapting your approach to testing to suit your context.

You can find the first three parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my patient and dependable review team (Paul Seaman and Ky) for their helpful feedback on this post.

All testing is exploratory: change my mind

I’ve recently returned to Australia after several weeks in Europe, mainly for pleasure with a small amount of work along the way. Catching up on some of the testing-related chatter on my return, I spotted that Rex Black repeated his “Myths of Exploratory Testing” webinar in September. I respect the fact that he shares his free webinar content every month and, even though I often find myself disagreeing with his opinions, hearing what others think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good testing looks like.

Rex started off with his definition of exploratory testing (ET), viz.

A technique that uses knowledge, experience and skills to test software in a non-linear and investigatory fashion

He claimed that this is a “pretty widely shared definition of ET” but I don’t agree. The ISTQB Glossary uses the following definition:

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

The definition I hear most often is something like the following James Bach/Michael Bolton effort (their edit of Cem Kaner’s suggestion, which they used until 2015):

An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project

They have since deprecated the term “exploratory testing” in favour of simply “testing” (from 2015), defining testing as:

Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.

Rex went on to say that the test basis and test oracles in ET “are primarily skills, knowledge and experience” and any such testing is referred to as “experience-based testing” (per the ISTQB definition, viz. “Testing based on the tester’s experience, knowledge and intuition.”). Experience-based testing that is investigatory is then deemed to be exploratory. I have several issues with this. There is an implication here that ET involves testing without using a range of oracles that might include specifications, user stories, or other more “formal” sources of what the software is meant to do. Rex reinforces this when he goes on to say that ET is a form of validation and “may tell us little or nothing about conformance to specification because the specification may not even be consulted by the tester”. Also, I can’t imagine any valuable testing that doesn’t rely on the tester’s skills, knowledge and experience so it seems to me that all testing would fall under this “experience-based testing” banner.

The first myth Rex discussed was the “origin myth”, that ET was invented in the 1990s in Silicon Valley or at least that was when a “name got hung on it” (e.g. Cem Kaner). He argued instead that it was invented by whoever wrote the first program, that IBM were doing it in the 1960s, that the independent test teams in Fred Brooks’s 1975 book The Mythical Man-Month were using ET, and that “error guessing” as introduced by Glenford Myers in the classic book The Art of Software Testing sounds “a whole lot like a form of ET”. The History of Definitions of ET on James Bach’s blog is a good reference in this regard, in my opinion. While I agree that programmers have been performing some kind of investigatory or unscripted testing in their development and debugging activities for as long as programming has been a thing, it’s important that we define our testing activities in a way that makes the way we talk about what we do both accurate and credible. I see the argument for suggesting that error guessing is a form of ET, but it’s just one tactic that might be employed by a tester skilled in the much broader approach that is ET.

The next myth Rex discussed was the “completeness myth”, that “playing around” with the software is sufficient to test it. He mentioned that there is little education around testing in Software Engineering degrees, so people don’t understand what testing can and cannot do, which leads to myths like this. I agree that there is a general lack of understanding in our industry of how important structured ET is as part of a testing strategy, though I haven’t personally heard this myth being espoused anywhere recently.

Next up was the “sufficiency myth”, that some teams bring in a “mighty Jedi warrior of ET & this person has helped [them] to find every bug that can matter”. He mentioned a study from Microsoft where they split their testing groups for the same application, with one using ET (and other reactive strategies) only, while the other used pre-designed tests (including automated tests) only. The sets of bugs found by these two teams were partially but not fully overlapping, hence proving that ET alone is not sufficient. I’m confident that even if the groups had been divided up and did the same kind of testing (be it ET or pre-designed), the sets of bugs from the two teams would also have been partially but not fully overlapping (there is some evidence to support this, albeit from a one-off small case study, from Aaron Hodder & James Bach in their article Test Cases Are Not Testing)! I’m not sure where this myth comes from; I’ve not heard it from anyone in the testing industry, nor seen a testing strategy that relies solely on ET. I do find that using ET as an approach can really help in focusing on finding bugs that matter, though, and that seems like a good thing to me.

Rex continued with the “irrelevance myth”, that we don’t have to worry about ET (or, indeed, any validation testing at all) because of the use of ATDD, BDD, or TDD. He argued that all of these approaches are verification rather than validation, so some validation is still relevant (and necessary). I’ve seen this particular myth and, if anything, it seems to be more prevalent over time especially in the CI/CD/DevOps world where automated checks (of various kinds) are viewed as sufficient gates to production deployment. Again, I see this as a lack of understanding of what value ET can add and that’s on us as a testing community to help people understand that value (and explain where ET fits into these newer, faster deployment approaches).

The final myth that Rex brought up was the “ET is not manageable myth”. In dispelling this myth, he mentioned the Rapid Reporter tool, timeboxed sessions, and scoping using charters (where a “charter is a set of one or more test conditions”). This was all quite reasonable, basically referring to session-based test management (SBTM) without using that term. One of his recommendations seemed odd, though: “record planned session time versus actual [session] time” – sessions are strictly timeboxed in an SBTM situation so planned and actual time are always the same. While this seems to be one of the more difficult aspects of SBTM at least initially for testers in my experience, sticking to the timebox is critical if ET is to be truly manageable.
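
For readers unfamiliar with SBTM, the manageability comes from exactly these elements: a charter scoping the session, a strictly fixed timebox, and structured notes reviewed in a debrief. Here is a minimal sketch of how a session record might be structured (the field names are illustrative, not taken from any particular SBTM tool):

```python
# A minimal sketch of an SBTM-style session record, with illustrative field
# names. The timebox is fixed up front, which is why planned and actual
# session time are the same by definition.
from dataclasses import dataclass, field

@dataclass
class TestSession:
    charter: str                  # the mission, e.g. one or more test conditions
    timebox_minutes: int = 90     # strictly fixed; the session ends when time is up
    notes: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)

session = TestSession(
    charter="Explore the checkout flow with invalid payment details "
            "to discover error-handling problems",
)
session.notes.append("Card expiry field accepted a date in the past")
```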

Moving on from the myths, Rex talked about “reactive strategies” in general, suggesting they were suitable in agile environments but that we also need risk-based strategies and automation in addition to ET. He said that the reliance on skills and experience when using ET (in terms of the test basis and test oracle) mean that heuristics are a good way of triggering test ideas and he made the excellent point that all of our “traditional” test techniques still apply when using ET.

Rex’s conclusion was also sound, “I consider (the best practice of) ET to be essential but not sufficient by itself” and I have no issue with that (well, apart from his use of the term “best practice”) – and again don’t see any credible voices in the testing community arguing otherwise.

The last twenty minutes of the webinar was devoted to Q&A from both the online and live audience (the webinar was delivered in person at the STPCon conference). An interesting question from the live audience was “Has ET finally become embedded in the software testing lifecycle?” Rex responded that the “religious warfare… in the late 2000s/early 2010s has abated, some of the more obstreperous voices of that era have kinda taken their show off the road for various reasons and aren’t off stirring the pot as much”. This was presumably in reference to the somewhat heated debate going on in the context-driven testing community in that timeframe, some of which was unhelpful but much of which helped to shape much clearer thinking around ET, SBTM and CDT in general in my opinion. I wouldn’t describe it as “religious warfare”, though.

Rex also mentioned in response to this question that he actually now sees the opposite problem in the DevOps world, with “people running around saying automate everything” and the belief that automated tests by themselves are sufficient to decide when software is worthy of deployment to production. In another reference to Bolton/Bach, he argued that the “checking” and “testing” distinction was counterproductive in pointing out the fallacy of “automate everything”. I found this a little ironic since Rex constantly seeks to make the distinction between validation and verification, which is very close to the distinction that the testing and checking terminology seeks to draw (albeit in much more lay terms as far as I’m concerned). I’ve actually found the “checking” and “testing” terminology extremely helpful in making exactly the point that there is “testing” (as commonly understood by those outside of our profession) that cannot be automated; it’s a great conversation starter in this area for me.

One of Rex’s closing comments was again directed to the “schism” of the past with the CDT community, “I’m relieved that we aren’t still stuck in these incredibly tedious religious wars we had for that ten year period of time”.

There was a lot of good content in Rex’s webinar and nothing too controversial. His way of talking about ET (even the definition he chooses to use) is different to what I’m more familiar with from the CDT community but it’s good to hear him referring to ET as an essential part of a testing strategy. I’ve certainly seen an increased willingness to use ET as the mainstay of so-called “manual” testing efforts and putting structure around it using SBTM adds a lot of credibility. For the most part in my teams across Quest, we now treat test efforts as ET only if they are performed within the framework of SBTM, so that we have the accountability and structure in place for the various stakeholders to treat this approach as credible and worthy of their investment.

So, finally getting to the reason for the title of this post: whether by Rex’s (I would argue unusual) definition, by the ISTQB’s definition, or by the more widely accepted definition (Bach/Bolton above), it seems to me that all testing is exploratory. I’m open to your arguments to change my mind!

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/. The one I refer to in this blog post has not appeared there as yet, but the audio is available via https://rbcs-us.com/resources/podcast/)

Testing in Context Conference Australia 2019

The third annual conference of the Association for Software Testing (AST) outside of North America took place in Melbourne in the shape of Testing in Context Conference Australia 2019 (TiCCA19) on February 28 & March 1. The conference was held at the Jasper Hotel near the Queen Victoria Market.

The event drew a crowd of about 50, mainly from Australia and New Zealand but also with a decent international contingent (including a representative of the AST and a couple of testers all the way from Indonesia!).

I co-organized the event with Paul Seaman and the AST allowed us great freedom in how we put the conference together. We decided on the theme first, From Little Things Big Things Grow, and had a great response to our call for papers, resulting in what we thought was an awesome programme.

The Twitter hashtag for the event was #ticca19 and this was fairly active across the conference.

The event consisted of a first day of workshops followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical AST/peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

Takeaways

  • Testing is not dead, despite what you might hear on social media or from some automation tooling vendors. There is a vibrant community of skilled human testers who display immense value in their organizations. My hope is that these people will promote their skills more broadly and advocate for human involvement in producing great software.
  • Ben Simo’s keynote highlighted just how normalized bad software has become, we really can do better as a software industry and testers have a key role to play.
  • While “automation” is still a hot topic, I got a sense of a move back towards valuing the role of humans in producing quality software. This might not be too surprising given the event was a context-driven testing conference, but it’s still worth noting.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue). There was evidence of genuine conferring happening all over the place, exactly what we aimed for!
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for continuing to back our event in Melbourne.
  • I had a tiring but rewarding experience in co-organizing this event with Paul, the testing community in Melbourne is a great place to be!

Workshop day (Thursday 28th February)

We offered two full-day workshops to kick the event off, with “Applied Exploratory Testing” presented by Toby Thompson (from Software Education) and “Leveraging the Power of API Testing” presented by Scott Miles. Both workshops went well and it was pleasing to see them being well attended. Feedback on both workshops has been excellent so well done to Toby and Scott on their big efforts in putting the workshops together and delivering them so professionally.

Toby Thompson setting up his ET workshop
Scott Miles ready to start his API testing workshop

Pre-conference meetup (Thursday 28th February)

We decided to hold a free meetup on the evening before the main conference day to offer the broader Melbourne testing community the chance to meet some of the speakers as well as hearing a great presentation and speaker panel session. Thanks to generous sponsorship, the meetup went really well, with a small but highly engaged audience – I’ve blogged in detail about the meetup at https://therockertester.wordpress.com/2019/03/04/pre-ticca19-conference-meetup/

Aaron Hodder addresses the meetup
Graeme, Aaron, Sam and Ben talking testing during the panel session

Conference day (Friday 1st March)

The conference was kicked off at 8.30am with some opening remarks from me, including an acknowledgement of traditional owners and a shout-out to two students from the EPIC TestAbility Academy whom we sponsored to attend. Next up was Ilari Henrik Aegerter (board member of the AST) who briefly explained what the AST’s mission is and what services and benefits membership provides, followed by Richard Robinson outlining the way “open season” would be facilitated after each track talk.

I then introduced our opening keynote, Ben Simo with “Is There A Problem Here?”. Ben joined us all the way from Phoenix, Arizona, and this was his first time in Australia so we were delighted to have him “premiere” at our conference! His 45-minute keynote showed us many cases where he has experienced problems when using systems & software in the real world – from Australian road signs to his experience of booking his flights with Qantas, from hotel booking sites to roadtrip/mapping applications, and of course covering his well-publicized work around Healthcare.gov some years ago. He encouraged us to move away from “pass/fail” to asking “is there a problem here?” and, while not expecting perfection, know that our systems and software can be better. A brief open season brought an excellent first session to a close.

Ben Simo during his keynote (photo from Lynne Cazaly)

After a short break, the conference split into two track sessions with delegates having the choice of “From Prototype to Product: Building a VR Testing Effort” with Nick Pass or “Tales of Fail – How I failed a Quality Coach role” with Samantha Connelly (who has blogged about her talk and also her TiCCA19 conference experience in general).

While Sam’s talk attracted the majority of the audience, I opted to spend an hour with Nick Pass as he gave an excellent experience report of his time over in the UK testing virtual reality headsets for DisplayLink. Nick was in a new country, working for a new company in a new domain and also working on a brand new product within that company. He outlined the many challenges, including technical, physical (simulator sickness), process (“sort of agile”) and personal (“I have no idea”). Due to the nature of the product, there were rapid functionality changes and lots of experimentation and prototyping. Nick said he viewed “QA” as “Question Asker” in this environment and he advocated a Quality Engineering approach focused on both product and process. Test design was emergent but, when they got their first customer (HTC), the move to productizing meant a tightening up of processes, more automated checks, stronger testing techniques and adoption of the LeSS framework. This was a good example of a well-crafted first-person experience report from Nick, with a simple but effective deck to guide the way. His 40-minute talk was followed by a full open season with a lot of questions both around the cool VR product and his role in building a test discipline for it.

Nick Pass talks VR

Morning tea was a welcome break and was well catered by the Jasper, before tracks resumed in the shape of “Test Reporting in the Hallway” with Morris Nye and “The Automation Gum Tree” with Michelle Macdonald.

I joined Michelle – a self-confessed “automation enthusiast” – as she described her approach to automation for the Pronto ERP product using the metaphor of the Aussie gum tree (which meant some stunning visuals in her slide deck). Firstly, she set the scene – she has built an automated testing framework using Selenium and Appium to deal with the 50,000 screens, 2000 data objects and 27 modules across Pronto’s system. She talked about their “Old Gum”, a Rational Robot system to test their Win32 application, which then matured to use TestComplete. Her “new species” needed to cover both web and device UIs, preferably be based on open source technologies, be easy for others to create scripts with, and be well supported. Selenium IDE was a first step and the resulting framework is seen as successful as it’s easy to install, everyone has access to use it, knowledge has been shared, and patience has paid off. The gum tree analogies came thick and fast as the talk progressed. She talked about Inhabitants, be they consumers, diggers or travellers, then the need to sometimes burn off (throw away and start again), using the shade (developers working in feature branches) and controlling the giants (it’s all too easy for automation to get too big and out of control). Michelle had a little too much content and her facilitator had to wrap her up 50 minutes into the session so that we had time for some questions during open season. There were some sound ideas in Michelle’s talk and she delivered it with passion, supported by the best-looking deck of the conference.

A sample of the beautiful slides in Michelle's talk
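For anyone curious about the building blocks Michelle mentioned, here’s a minimal sketch of what a single Selenium WebDriver check might look like in Python. To be clear, this is purely illustrative – the URL, locators and assertion are hypothetical and not taken from her Pronto framework.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical example only - not Michelle's actual framework code
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder screen under test
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-pass")
    driver.find_element(By.ID, "login-button").click()
    # A simple check on the outcome of the flow
    assert "Dashboard" in driver.title, "Login did not reach the dashboard"
finally:
    driver.quit()

Part of the appeal of starting with Selenium IDE, as Michelle described, is that checks like this can be recorded first and refined into maintainable scripts later, which lowers the barrier for others on the team to contribute.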

Lunch was a chance to relax over nice food and it was great to see people genuinely conferring over the content from the morning’s sessions. The hour passed quickly before delegates reconvened for another two track sessions.

First up for the afternoon was a choice between “Old Dog, New Tricks: How Traditional Testers Can Embrace Code” with Graeme Harvey and “The Uncertain Future of Non-Technical Testing” with Aaron Hodder.

I chose Aaron’s talk and he started off by challenging us on what “technical” meant (and, as a large group, we failed to reach a consensus) as well as what “testing” meant. He gave his own definitions of the terms: “non-technical testing” means manually writing test scripts in English for a person to execute, while “technical testing” means manually writing test scripts in Java for a machine to execute! He talked about the modern development environment and what he termed “inadvertent algorithmic cruelty”, supported by examples. He noted that he’s never seen a persona of someone in crisis or a troll when looking at user stories; we place great focus on technical risks but much less on human risks. There are embedded prejudices in much modern software and he recommended the book Weapons of Math Destruction by Cathy O’Neil. This was another excellent talk from Aaron, covering a little of the same ground as his meetup talk but also breaking new ground and providing us with much food for thought about the way we build and test our software for real humans in the real world. Open season was busy and fully exhausted the hour in Aaron’s company.

Adam Howard introduces Aaron Hodder for his track

Graeme Harvey ready to present

A very brief break gave delegates time to make their next choice: “Exploratory Testing: LIVE!” with Adam Howard or “The Little Agile Testing Manifesto” with Samantha Laing. Having seen Adam’s session before (at TestBash Australia 2018), I decided to attend Samantha’s talk. She introduced the Agile Testing Manifesto that she put together with Karen Greaves, which highlights that testing is an activity rather than a phase, that we should aim to prevent bugs rather than focusing on finding them, that we should value testing over checking, that we should aim to help build the best system possible instead of trying to break it, and that quality is the whole team’s responsibility. She gave us three top tips to take away: 1) ask “how can we test that?”, 2) use a “show me” column on your agile board (instead of an “in test” column), and 3) do all the testing tasks first (before the development ones). This was a useful talk for the majority of her audience, who didn’t seem to be very familiar with this testing manifesto.

Sam Laing presenting her track session (photo from Lynne Cazaly)

With the track sessions done for the day, afternoon tea was another chance to network and confer before the conference came back together in the large Function Hall for the closing keynote. Paul did the honours in introducing the well-known Lynne Cazaly with “Try to See It My Way: Communication, Influence and Persuasion”.

She encouraged us to view people as part of the system and deliberately choose to “entertain” different ideas and information. In trying to understand differences, you will actually find similarities. Lynne pointed out that we over-simplify our view of others and this leads to a lack of empathy. She introduced the Karpman Drama Triangle and the Empowerment Dynamic (by David Emerald). Lynne claimed that “all we’re ever trying to do is feel better about ourselves” and, rather than blocking ideas, we should yield and adopt a “go with” style of facilitation.

Lynne was a great choice of closing keynote and we were honoured to have her agree to present at the conference. Her vast experience translated into an entertaining, engaging and valuable presentation. She spent the whole day with us and thoroughly enjoyed her interactions with the delegates at this, her first dedicated testing conference.

Slide from Lynne Cazaly’s keynote

Paul Seaman closed out the conference with some acknowledgements and closing remarks. As the crowd dispersed, it was pleasing to see so many people joining us for the post-conference cocktail reception, splendidly catered by the Jasper. The vibe was fantastic and it was nice for us as organisers to finally relax a little and enjoy chatting with delegates.

Acknowledgements

A conference doesn’t happen by accident; it takes a lot of work over many months by a whole bunch of people, so it’s time to acknowledge the various help we had along the way.

The conference has been actively supported by the Association for Software Testing and couldn’t happen without their backing, so thanks to the AST and particularly to Ilari, who continues to be an enthusiastic promoter of the Australian conference via his presence on the AST board. Our wonderful event planner, Val Gryfakis, makes magic happen and saves the rest of us so much work in dealing with the venue and making sure everything runs to plan – we seriously couldn’t run the event without you, Val!

We had a big response to our call for proposals for TiCCA19, so thanks to everyone who took the time and effort to apply to provide content for the conference. Paul and I were assisted by Michele Playfair in selecting the programme and it was great to have Michele’s perspective as we narrowed down the field. We can only choose a very small subset for a one-day conference and we hope many of you will have another go when the next CFP comes around.

There is, of course, no conference without content, so a huge thanks to our great presenters, whether they delivered workshops, keynotes or track sessions. Thanks also to those who bought tickets and supported the event as delegates; your engagement and positive feedback meant a lot to us as organisers.

Finally, my personal thanks go to my mate Paul for his help, encouragement, ideas and listening ear during the weeks and months leading up to the event. We make a great team and neither of us would do this gig with anyone else – cheers, mate.


Pre-TiCCA19 conference meetup

In the weeks leading up to the Testing in Context Conference Australia 2019, our thoughts turned to how we might sneak in a meetup event alongside the conference to make the most of the fact that Melbourne would be home to so many awesome testers at the same time.

Thanks to the conference venue – the Jasper Hotel – giving us the use of one of our workshop rooms for an evening, and to food & drink sponsorship from House of Test (Switzerland), the meetup became feasible. A bit of social media advertising, coupled with a free Eventbrite campaign, led to about twenty keen testers (including a number of TiCCA19 conference speakers) assembling at the Jasper on the evening of Thursday 28th February.

Some pre-meetup networking gave people the chance to make new friends as well as giving the conference speakers a chance to meet some of their fellow presenters. After I gave a very brief opening, it was time for the content to kick off in the shape of a presentation by well-known and respected Kiwi context-driven tester, Aaron Hodder. His talk was titled “Inclusive Collaboration – how our differences can make the difference” in which he explored how having a neurodiverse workforce can give you a competitive edge, and how the workplace can respect diverse needs and different requirements for interaction and collaboration to bring out the best in everyone’s differences. This was a beautifully-crafted talk, delivered with Aaron’s unique blend of personal connection to the topic and a smattering of self-deprecation, while still driving home a hard-hitting message. (Aaron also shared some great resources on Inclusive Collaboration at https://goo.gl/768M0u).

Aaron Hodder addresses the meetup
The idea of “My user manual” presented by Aaron Hodder

A short networking break then gave everyone the chance to mingle some more and clean up the remains of the food, before we kicked off the panel session. Ably facilitated by Rich Robinson, the panel consisted of four TiCCA19 speakers, in the shape of Graeme Harvey, Aaron Hodder, Sam Connelly and Ben Simo. The conversation was driven by a few questions from Rich: How have you seen the testing role change in your career? How do you think the testing role will change into the future? Is the manual testing role dead? The resulting 45-minute discussion between the panel and audience was engaging and interesting – and kudos to Rich for such a great job in running the panel.

Graeme, Aaron, Sam and Ben talking testing during the panel session

We enjoyed putting this meetup on for the Melbourne testing community and the feedback from everyone involved was very positive, so thanks again to everyone who made it happen.