Common search engine questions about testing #3: “When should software testing activities start?”

This is the third of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I answer the question “When should software testing activities start?” (and the related question, “When to do software testing?”).

It feels very timely to be answering this question as there is so much noise in the industry at the moment around not only “shifting” testing to the left but also to the right. “Shifting left” is the idea that testing activities should be moved towards the start of (and then throughout) the development cycle, while “shifting right” is about testing “in production” (i.e. testing the software after it’s been deployed and is in use by customers). It seems to me that a gap is now forming in the middle, where much of our testing used to be performed (and, actually, probably still is), viz. testing of a built system by humans before it is deployed.

Let’s start by looking at what we mean by “testing activities” and who might perform these activities.

For teams with dedicated testers, the testers can participate in design meetings and ask questions about how customers really work. They can also review user stories (and other claims of how the software is intended to work) to look for inconsistencies and other issues. Testers might also work with developers to help them generate test ideas for unit and API tests, while testers with coding skills might work with API developers to write stubs or API tests during development. Testers might even pair with developers to test new functionality locally before it makes it into a more formal build.

For teams without dedicated testers, the developers will be covering these – and other – testing activities themselves, perhaps with assistance from a roaming testing/quality coach if the organization is following that kind of model. All of the above activities are performed before a built system is ready for testing in its entirety, so they are probably what many would now refer to as “shift left” testing in practice.
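To make the stub-writing idea concrete, here is a minimal sketch of the kind of thing a tester and an API developer might pair on. Everything here is hypothetical – the `get_user_stub` function stands in for a real `GET /users/{id}` endpoint that doesn’t exist yet, and the field names are invented for illustration:

```python
import json
import unittest


def get_user_stub(user_id):
    """Stub standing in for a hypothetical GET /users/{id} endpoint,
    so tests can be written before the real implementation exists."""
    if user_id <= 0:
        return {"status": 404, "body": None}
    return {"status": 200, "body": json.dumps({"id": user_id, "name": "Test User"})}


class GetUserApiTests(unittest.TestCase):
    def test_known_user_returns_200_with_matching_id(self):
        response = get_user_stub(42)
        self.assertEqual(response["status"], 200)
        self.assertEqual(json.loads(response["body"])["id"], 42)

    def test_invalid_id_returns_404(self):
        self.assertEqual(get_user_stub(-1)["status"], 404)
```

The value of an exercise like this isn’t the stub itself – it’s the conversation: the tester’s questions (“what should an invalid ID return?”) surface design decisions before any real endpoint is built.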

The shifting left of testing activities seems to have been heavily influenced by the agile movement. Practitioners such as Janet Gregory and Lisa Crispin have written books on “Agile Testing” which cover many of these same themes, without referring to them as “shift left”. The idea that the critical thinking skills of testers can be leveraged from the earliest stages of developing a piece of software seems sound enough to me. The term “agile tester”, though, seems odd – I prefer to think of testing as testing, with “agile” being part of the context here (and this context enables some of these shift-left activities to occur whereas a different development approach might make these activities difficult or impossible).

In more “traditional” approaches to software development (and also in dysfunctional agile teams), testing activities tend to be pushed towards the end of the cycle (or sprint/iteration) when there is a built “test ready” version of the software available for testing. Testing at this point is highly valuable in my opinion and is still required even if all of the “shift left” testing activities are being performed. If testing activities only start at this late stage, though, there is a lot of opportunity for problems to accumulate that could have been detected earlier, and resolving these issues so late in the cycle may be much more difficult (e.g. significant architectural changes may not be feasible). To help mitigate risk and learn by evaluating the developing product, testers should look for ways to test incremental integration even in such environments.

The notion that “testing in production” is an acceptable – and potentially useful – thing is really quite new in our industry. When I first started in the testing industry, suggesting that we tested in production was akin to a bad joke (Microsoft’s release of Windows Vista again comes to mind). Of course, a lot has changed since then in terms of the technologies we use and the deployment methods available to us, so we shouldn’t be surprised that testing the deployed software is now a more reasonable thing to do. We can learn a lot from genuine production use that we could never hope to simulate in pre-production environments, and automated monitoring and rollback systems give us scope to “un-deploy” a bad version much more easily than recalling millions of 3.5-inch floppies! This “shift right” approach can add valuable additional testing information but, again, it is not in itself a replacement for the other testing we might perform at other times during the development cycle.
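The monitoring-and-rollback idea can be sketched very simply. This is not any particular tool’s API – just an illustrative decision function, with invented threshold and sample-size parameters, of the kind a deployment pipeline might evaluate against live error metrics:

```python
def should_roll_back(total_requests, failed_requests, threshold=0.05, min_sample=100):
    """Decide whether a newly deployed version should be rolled back,
    based on its observed error rate in production.

    threshold  -- maximum acceptable error rate (5% here, purely illustrative)
    min_sample -- minimum traffic before we trust the signal
    """
    if total_requests < min_sample:
        return False  # not enough production traffic yet to judge
    return (failed_requests / total_requests) > threshold
```

For example, a version that has failed 100 of its first 1,000 requests (a 10% error rate) would be flagged for rollback, while 50 failures in only 50 requests would not yet trigger anything because the sample is too small to trust. Real systems layer on alerting, gradual traffic shifting and human sign-off, but the core “observe, compare, revert” loop is this small.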

In considering when testing activities should start, then, it’s useful to broaden your thinking about what a “testing activity” is beyond just system testing of the entire solution, and also to be clear about your testing mission. Testing activities should start as early as makes sense in your context (e.g. you’ll probably start testing at different times in an agile team than when working on a waterfall project). Different types of testing activities can occur at different times, and remember that critical thinking applied to user stories, designs, etc. is all testing. Use information from production deployments to learn about real customer usage and feed this information back into your ongoing pre-deployment testing activities.

And, by way of a final word, I encourage you to advocate for opportunities for humans to test your software before deployment (i.e. not just relying on a set of “green” results from your automated checks), whether your team is shifting left, shifting right or not dancing at all.

You can find the first two parts of this blog series at:

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks again to my review team (Paul Seaman and Ky) for their helpful feedback on this post.