Author Archives: therockertester

“The Influence of Organizational Structure on Software Quality: An Empirical Case Study” (Microsoft Research and subsequent blogs)

A Microsoft Research paper from back in 2008 has recently been getting a lot of renewed attention after a blog post about it did the rounds on Twitter, Reddit, etc. The paper is titled “The Influence of Organizational Structure on Software Quality: An Empirical Case Study” and it looks at defining metrics to measure organizational complexity, and at whether those metrics are better at predicting “failure-proneness” of software modules (specifically, those comprising the Windows Vista operating system) than other metrics such as code complexity.

The authors end up defining eight such “organizational metrics”, as follows:

  • Number of engineers – “the absolute number of unique engineers who have touched a binary and are still employed by the company”. The claim here is that higher values for this metric result in lower quality.
  • Number of ex-engineers – similar to the first metric, but defined as “the total number of unique engineers who have touched a binary and have left the company as of the release date of the software system”. Again, higher values for this metric should result in lower quality.
  • Edit frequency – “the total number times the source code, that makes up the binary, was edited”. Again, the claim is that higher values for this metric suggest lower quality.
  • Depth of Master Ownership – “This metric (DMO) determines the level of ownership of the binary depending on the number of edits done. The organization level of the person whose reporting engineers perform more than 75% of the rolled up edits is deemed as the DMO.” Don’t ask me, read the paper for more on this one, but the idea is that the lower the level of ownership, the higher the quality.
  • Percentage of Org contributing to development – “The ratio of the number of people reporting at the DMO level owner relative to the Master owner org size.” Higher values of this metric are claimed to point to higher quality.
  • Level of Organizational Code Ownership – “the percent of edits from the organization that contains the binary owner or if there is no owner then the organization that made the majority of the edits to that binary.” Higher values of this metric are again claimed to point to higher quality.
  • Overall Organization Ownership – “the ratio of the percentage of people at the DMO level making edits to a binary relative to total engineers editing the binary.” Higher values of this metric are claimed to point to higher quality.
  • Organization Intersection Factor – “a measure of the number of different organizations that contribute greater than 10% of edits, as measured at the level of the overall org owners.” Low values of this metric indicate higher quality.

These metrics are then used in a statistical model to predict failure-proneness of the over 3,000 modules comprising the 50m+ lines of source code in Windows Vista. The results apparently indicated that this organizational structure model is better at predicting failure-proneness of a module than any of these more common models: code churn, code complexity, dependencies, code coverage, and pre-release bugs.
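
To make the shape of this more concrete, here’s a minimal sketch (not the paper’s actual methodology – just an illustrative, logistic-regression-style classifier with invented file, column and label names) of how eight organizational metrics per module might be used to predict a binary “failure-prone” label:

```python
# Illustrative sketch only: predict a 0/1 "failure-prone" label for each
# binary/module from the eight organizational metrics. The CSV file, column
# names and modelling choices here are assumptions, not the paper's own setup.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

ORG_METRICS = [
    "num_engineers",           # Number of engineers
    "num_ex_engineers",        # Number of ex-engineers
    "edit_frequency",          # Edit frequency
    "depth_master_ownership",  # Depth of Master Ownership (DMO)
    "pct_org_contributing",    # Percentage of Org contributing to development
    "level_org_ownership",     # Level of Organizational Code Ownership
    "overall_org_ownership",   # Overall Organization Ownership
    "org_intersection_factor", # Organization Intersection Factor
]

# One row per module: the eight metrics plus a 0/1 "failure_prone" label
# (however "failure" was actually counted - see my gripe below).
df = pd.read_csv("modules.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df[ORG_METRICS], df["failure_prone"], test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```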

I guess this finding is sort of interesting, if not very surprising or indeed helpful.

One startling omission from this paper is what constitutes a “failure”. There are complicated statistical models built from these eight organizational metrics and comparisons made to other models (and really the differences in predictive power between all of them are not exactly massive), but nowhere does the paper explain what a “failure” is. This seems like a big problem to me. I literally don’t know what they’re counting – which is maybe just a problem for me – but, much more significantly, I don’t know whether the different models are counting the same things (which would be a big deal when comparing the outputs from these models against one another).

Now, a lot has changed in our industry since 2008 in terms of the way we build, test and deploy software. In particular, agile ways of working are now commonplace and I imagine this has a significant organizational impact, so these organizational metrics might not offer as much value as they did when this research was undertaken (if indeed they did even then).

But, after reading this paper and the long discussions that have ensued online recently after it came back into the light, I can’t help but ask myself what value we get from becoming better at predicting which modules have “bugs” in them. On this, the paper says:

More generally, it is beneficial to obtain early estimates of software quality (e.g. failure-proneness) to help inform decisions on testing, code inspections, design rework, as well as financial costs associated with a delayed release.

I get the point they’re making here, but the information provided by this organizational metric model is not very useful in informing such decisions, compared to, say, a coherent testing story revealed by exploratory testing. Suppose I predict that module X likely has bugs in it: then what? This data point tells me nothing about where to look for issues or whether it’s even worth my while to do so, given the mission I have from my stakeholders.

We spend a lot of time and effort in software development as a whole – and testing specifically – trying to put numbers against things, perhaps as a means of appearing more scientific or accurate. When faced with questions about quality, though, such measurements are problematic. I thank James Bach for his very timely blog post in which he encourages us to assess quality rather than measure it – I suggest that taking the time to read his blog post is time better spent than trying to make sense of over-complicated and meaningless pseudo-science such as that presented in the paper I’ve reviewed here.

(The original 11-page MS Research paper can be found at https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2008-11.pdf)

2019 in review

It’s almost unbelievable that it’s time to close out my blogging for the year already! I published 13 blog posts during 2019, right on my target cadence of a post per month but down in number from 2017 and 2018. In terms of traffic, my blog attracted a very similar number of views to 2018, and I ended the year having reached 1,000 followers on Twitter for the first time.

If there are particular topics you’d like to see me talking about here (especially to encourage more new readers), please feel free to reach out.

Working at Quest

I reached a milestone during 2019, notching up twenty years at Quest! It’s been an amazing journey since I started here in 1999 as a new migrant from the UK to Australia and I continue to enjoy a varied role working with dedicated people around the world. I travelled extensively again during the year and visited our folks in China, Austin (Texas) and the Czech Republic. The regular opportunities to travel and work with people from different cultures remains one of the most enjoyable (and sometimes most challenging!) aspects of my role.


I spent more time through 2019 helping teams to improve their agility, while still assisting widely around testing. As Quest modernizes both in terms of its products (e.g. new SaaS offerings) and processes, there is plenty to keep me busy helping the teams to deal with the different demands of more frequent delivery.

Conferences & meetups

I had another quieter year in terms of conference and meetup attendance. While I didn’t speak at a conference in 2019, I was lucky enough to co-organize the Association for Software Testing’s third Australian conference, Testing in Context Conference Australia 2019 (TiCCA19). Paul Seaman and I put together an excellent programme and the fifty-or-so delegates gave very positive feedback on what we offered. Although we had hoped to continue the TiCCA event as an annual conference, our small delegate numbers and ongoing challenges in attracting sponsorship unfortunately made it impossible for us to commit to continuing the event. It’s sad that we couldn’t build a sustainable true context-driven testing conference in a city as large as Melbourne, but Paul and I are happy to have tried hard, with both CASTx18 and TiCCA19 providing great content for our local community.

The only other conference I attended was a non-IT event and something very different in many ways, the Animal Activists Forum in Melbourne. I contrasted the experience of attending this conference against the typical testing/IT conferences I’ve attended in my blog post, A very different conference experience.

I made it to a couple of meetups, the first being a pre-conference meetup we organized around TiCCA19. This meetup was enjoyable to organize and attend, featuring an excellent presentation by Aaron Hodder and a panel session with four TiCCA19 conference speakers – in the shape of Graeme Harvey, Aaron, Sam Connelly and Ben Simo – ably facilitated by Rich Robinson. The second meetup I attended was one of the high-quality Software Art Thou? series and saw the UK’s Kevlin Henney talking on “What do you mean?” (which he quickly modified to “WTF do you mean?”).

Community work

It was disappointing to learn that EPIC Assist had decided to pull out of the Melbourne market during 2019, resulting in the end of the software testing training course Paul Seaman and I had been delivering through them, the EPIC TestAbility Academy.

We would still love to share our knowledge and experience of software testing (and IT more generally) in a community setting and we continue to look for a partner organization to make this happen.

Other stuff

I’ve found myself reading a lot more books during 2019, a very welcome return to something I really enjoy and a useful way to reduce screen time (yes, I’m a physical book reader!). Many of the books came from the library and we are blessed with an excellent service in Melbourne (they purchased a number of books I requested through the year). Some of the books were purchased and shared with others in my office. I didn’t read testing books per se, but I became very interested in the subject of algorithms, AI and so on, reading a number of books in this area. Other areas of focus were leadership and knowledge acquisition.

I’ve also been spending more time to educate myself around animal rights and veganism, plus contributing in small ways to animal rights advocacy. It’s been an interesting change of tack to read books on these topics and also to see the reactions to my posts, tweets, etc. when this is the subject matter rather than my usual content! A handy summary of my thoughts around some of this can be found in my post, What becoming vegan taught me about software testing.

I hit another milestone early in 2019 when I acquired my first smartphone! I still find the form factor challenging and it seems unlikely I’ll ever become addicted to my phone, but I admit that it can be very handy when out and about – and Google Maps on the go during our travels made life a lot easier (though I was surprised offline maps don’t work in China, not a huge issue as we don’t drive there and taxis are incredibly cheap).

It felt like I had a much heavier workload during 2019 as well as some hefty stints of travel, so my outside projects didn’t get as much attention as in the previous few years. But I was glad to have the opportunity to organize the TiCCA19 conference as well as turning some work travel commitments into enjoyable holidays to see some new and interesting places. This time last year I was hinting at a new (personal) testing-related project that I hoped to kick off in 2019 and, while this didn’t eventuate, the project is still alive and I fully expect to get it up and running in 2020!

Thanks to my readers here and also followers on other platforms, I wish you all a very Merry Christmas & Happy New Year, and I hope you enjoy my posts to come through 2020. (And, remember, please let me know if there are any topics you particularly want me to express opinions on – I’m happy to take suggestions!)

Reviewing “Reimagine The Future of Quality Assurance” – (yet) another corporate report on the state of QA/testing

Capgemini recently released another 100+ page report around QA/testing, called Reimagine The Future of Quality Assurance. You might recall that I reviewed another of their long reports, the World Quality Report 2018/2019, and this new report also seemed worthy of some commentary. This is a long post given the length of the report, but I provide a summary of my feelings at the end of the post if the detailed content review below is too hefty.

Scope

It’s not clear to me whether this report is focusing on Quality Assurance (QA) or testing or both. The term “Quality Assurance” is not clearly defined or differentiated from testing anywhere in the report and, judging from the responses from some of the industry people interviewed in the report, it’s obvious that most of them were also unclear about the focus. It should be noted that my analysis and comments are specifically targeted towards what is discussed around testing in this report.

The report is described as “Featuring the trends shaping the future of quality assurance, and a practitioners’ view of how QA can reinvent customer experiences for competitive advantage”. This also doesn’t really tell me what the focus is, but let’s start to look at the content.

The Contents suggest the existence of a section on “Methodology” (page 9) but this is not present in the report and wouldn’t be required anyway as this is not a survey results report (in contrast to the World Quality Report) but is rather based on case study/industry commentary. This oversight in the Contents is indicative of a lack of proofing evident throughout the report – there are many typos, copy/paste errors, and grammar issues, suggesting the report itself wasn’t subject to a very diligent process of quality assurance before it was published.

Introductory content

The foreword comes from Olaf Pietschner (Managing Director, Capgemini Australia & New Zealand). He claims that “QA [is] moving up in the agile value chain”, maybe in reference to testing being seen as more important and valuable as more organizations move to more frequent releases, adopt DevOps, etc. but his intent here may well be something different.

In another introductory piece – titled “Transforming testing for digital transformation: Speed is the new currency” – Sandeep Johri (CEO, Tricentis) says:

Reinventing testing is essential for achieving the speed and agility required to thrive in the digital future. Why? Speed is the new currency but traditional software testing is the #1 enemy of speed.

I have several issues with this. What exactly about testing needs “reinventing”? While speed seems to be a focus for many businesses – following the “the fast will eat the slow” mantra – it’s a stretch to argue that testing is or has been the number one reason that businesses can’t move faster in terms of delivering software. There are so many factors that influence an organization’s ability to get software out of the door that to label testing as “enemy number 1” seems simplistic and so context-independent as to be meaningless.

Industry sector analysis

The next seventy-odd pages of the report focus on sector analysis from five industry sectors. Each sector includes an introductory piece from a Capgemini representative followed by case pieces from different businesses in that sector.

The first sector is “Consumer Products, Retail, Distribution & Transport” (CPRDT) and is introduced by Amit Singhania and Prashant Chaturvedi (both Vice-Presidents, Capgemini Australia & New Zealand). They say:

The move from QA to Quality Engineering (QE) is not an option. The equation is simple: Test less and assure more. Serious and continuous disruption in IT means the way testing and QA has been approached in the past must be overhauled.

I think they’re suggesting that it’s necessary to move away from QA towards QE, though they don’t define what they mean by QE. I’m unsure what they’re suggesting when they say “test less and assure more” (which is not an equation, by the way). These soundbite messages don’t really say anything useful to those involved in testing.

As DevOps spreads it is imperative that software – with the continuous development – needs to be continuously tested. This needs a paradigm shift in the skills of a developer and tester as the thin line between these skills is disappearing and same individuals are required to do both.

This continues to be a big subject of debate in the testing world and they seem to be suggesting that testers are now “required” to be developers (and vice-versa). While there may be benefits in some contexts to testers having development skills, I don’t buy this as a “catch all” statement. We do a disservice to skilled human testers when we suggest they have to develop code as well or they’re somehow unworthy of being part of such DevOps/agile teams. We need to do a better job of articulating the value of skilled testing as distinct from the value of excellent development skills, bearing in mind the concept of critical distance.

The first business piece from this sector comes from Australia Post’s Donna Shepherd (Head of Testing and Service Assurance). She talks a lot about DevOps, Agile, increased levels of automation, AI/ML, and Quality Engineering at Australia Post but then also says:

The role of the tester is also changing, moving away from large scale manual testing and embracing automation into a more technical role

I remain unclear as to whether large-scale manual testing is still the norm in her organization or whether significant moves towards a more automation-focused testing approach have already taken place. Donna also says:

The quality assurance team are the gatekeepers and despite the changes in delivery approaches, automation and skillset, QA will continue to play an important role in the future.

This doesn’t make it sound like a genuine DevOps mentality has been embedded yet and in her case, “QA [is a] governance layer having oversight of deliverables”.

The second business piece representing the CPRDT sector comes from McDonald’s, in the shape of David McMullen (Director of Technology) & Matt Cottee (Manager – POS Systems, Cashless, Digital and Technology Deployment), who manage to say nothing about testing in the couple of pages they’ve contributed to the report.

The next sector is “Energy, Utilities, Mining & Chemicals”, introduced by Jan Lindhaus (Vice-President, Head of Sector EUC, Capgemini Australia and New Zealand), and there’s not much about testing here. He says:

Smart QA needs to cover integrated ecosystems supported by cognitive and analytical capabilities along end-to-end business value chains with high speed, agility and robustness.

Maybe read that again, as I’ve done many times. I literally have no idea what “smart QA” is based on this description!

A theme gaining in popularity is new ways of working (NWW), which looks beyond Agile project delivery for a discrete capability.

I heard this NWW idea for the first time fairly recently in relation to one of Australia’s big four banks, but I don’t have a good handle on how this is different from the status quo of businesses adapting and changing the way they work to deal with changes in the business landscape over time. Is NWW an excuse to say “we’re Agile but not following its principles”? (Please point me in the direction of any resources that might help me understand this NWW concept more clearly.)

There are three business pieces for this sector, the first of which comes from Uthkusa Gamanayake (Test Capability Manager, AGL). It’s pleasing to finally start to read some more sensible commentary around testing here (maybe as we should expect given his title). He says:

Testers are part of scrum teams. This helps them to work closely with developers and provides opportunities to take on development tasks in addition to testing. The QA responsibility has moved from the quality assurance team to the scrum teams. This is a cultural and mindset shift.

It’s good to hear about this kind of shift happening in a large utility company like AGL and it at least sounds like testers have the option to take on automation development tasks but are not being moved away from manual testing as the norm. On automation, he says:

Individual teams within an organization will use their own automation tools and frameworks that best suit their requirements and platforms. There is no single solution or framework that will work for the entire organization. We should not be trying to standardize test automation. It can slow down delivery.

Again, this is refreshing to hear: they’re not looking for a “one size fits all” automation solution across such a huge IT organization, but rather using best-fit tools to solve problems in the context of individual teams. Turning to the topic du jour, AI, he states his opinion:

In my view AI is the future of test automation. AI will replace some testing roles in the future.

I think the jury is still out on this topic. I can imagine some work that some organizations refer to as “testing” being within the realms of the capability of AI even now. But what I understand “testing” to be seems unlikely to be replaceable by AI anytime soon.

There are many people who can test and provide results, but it is hard to find people who have a vision and can implement it.

I’m not sure what he was getting at here – maybe that it’s hard to find people who can clearly articulate the value of testing and tell coherent & compelling stories about the testing they perform. I see this challenge too, and coaching testers in the skills of storytelling is a priority for me if we are to see human testers better understood and more valued by stakeholders. He also says:

As for the role of head of testing, it will still exist. It won’t go away, but its function will change. This role will have broader QA responsibilities. The head of QA role does exist in some organizations. I think the responsibilities are still limited to testing practices.

I basically agree with this assessment, in that some kind of senior leadership position dedicated to quality/testing is required in larger organizations even when the responsibility for quality and performing of tests is pushed down into the Scrum delivery teams.

The next business piece comes from David Hayman (Test Practice Manager, Genesis, and Chair of the ANZTB). His “no nonsense” commentary is refreshingly honest and frank, exactly what I’d expect based on my experience of meeting and listening to David at past ANZTB conferences and events. On tooling, he says:

The right tool is the tool that you need to do the job. Sometimes they are more UI-focused, sometimes they are more AI-focused, sometimes they are more desktop-focused. As a result, with respect to the actual tools themselves, I’m not going to go into it because I don’t think it’s a value-add and can often be misleading. But actually, it doesn’t generate any value. The great thing is that sanity appears to have overtaken the market so that now we automate what’s valuable as opposed to automating a process because it’s a challenge, or because we can, or we want to, or because it looks good on a CV. The automation journey, though not complete, has reached a level of maturity, where sanity prevails. So that is at least a good thing.

I like the fact that David acknowledges that context is important in our choice of tooling for automation (or anything else for that matter) and that “sanity” is prevailing, at least in some teams in some organizations. That said, I commonly read articles and LinkedIn posts from folks still of the opinion that a particular tool is the saviour or that everyone should have a goal to “automate all the testing” so there’s still some way to go before sanity is the default position on this.

He goes on to talk about an extra kind of testing that he sees as missing from his current testing mix, which he labels “Product Intent Testing”:

I have been thinking that we’re going to need another phase in the testing process – perhaps an acceptance process, or similar. At the moment, we do component testing – we can automate that. We do functional testing – we can automate that. We do system integration testing – we can automate that. We have UAT – we can automate some of that, though obviously it requires a lot more business input.

When you have a situation where the expected results from AI tests are changing all the time, there is no hard and fast expected result. The result might get closer. As long as the function delivers the intent of the requirement, of the use case, or the story, then that’s close enough. But with an automated script, that doesn’t work. You can’t have ‘close enough’.

So I believe there’s an extra step, or an extra phase, I call Product Intent Testing [PIT]. This should be applied once we’ve run the functional tests. What we are investigating is ‘Has the intent that you were trying to achieve from a particular story, been provided?’ That requires human input – decision-making, basically.

It sounds like David is looking for a way to inject a healthy dose of human testing into this testing process, where it might be missing due to “replacement” by automation of existing parts of the process. I personally view this checking of intent to be exactly what we should be doing during story testing – it’s easy to get very focused on (and, paradoxically perhaps, distracted by) the acceptance criteria on our stories and so miss the intent of the story that stepping back a little would reveal. I’m interested to hear what others think about this topic of covering the intent during our testing.

The last business piece in this sector comes from Ian Robertson (CIO, Water NSW) and, as a non-testing guy, he doesn’t talk about testing in particular, focusing more on domain specifics, but he does mention tooling in the shape of Azure DevOps and Tosca (a Tricentis tool, coincidentally?).

The chunkiest section of the report is dedicated to the “Financial Services” sector with six business pieces, introduced by Sudhir Pai (Chief Technology and Innovation Officer, Capgemini Financial Services). Part of his commentary is almost identical to that from Jan Lindhaus’s introduction for the “Energy, Utilities, Mining & Chemicals” sector:

Smart QA solutions integrating end-to-end ecosystems powered by cognitive and analytical capabilities are vital

Sudhir also again refers to the “New Ways of Working” idea and he makes a bold claim around “continuous testing”:

Our Continuous Testing report shows that the next 2-3 years is a critical time period for continuous testing – with increased automation in test data management and use of model-based testing for auto-generation of test cases, adoption is all set to boom.

I haven’t seen the “Continuous Testing report” he’s referring to, but I feel like these predictions of booming AI and automation of test case generation have been around for a while already and I don’t see widespread or meaningful adoption. Is “auto-generation of test cases” even something we’d want to adopt? If so, why and what other kinds of risk would we actually amplify by doing so?

Interestingly, none of the six business pieces in this sector come from specialists in testing. The first one is by Nathalie Turgeon (Head of Project Delivery, AXA Shared Services Centre) and she hardly mentions testing, but does appear to argue the case for a unified QA framework despite clearly articulating the very different landscapes of their legacy and digital businesses.

The next piece comes from Jarrod Sawers (Head of Enterprise Delivery Australia and New Zealand, AIA). He makes the observation:

The role of QA has evolved in the past five years, and there a few different parts to that. One part is mindset. If you go back several years across the market, testing was seen as the last thing you did, and it took along time, and was always done under pressure. Because if anything else went slow, the time to test was always challenged and, potentially, compromised. And that is the wrong idea.

It’s very much a mindset shift to say, ‘Well, let’s think about moving to a more Agile way of working, thinking about testing and QA and assurance of that.’ That is the assurance of what that outcome needs to be for the customer from the start of that process.

This shift away from “testing at the end of the process” has been happening for a very long time now, but Enterprise IT is perhaps a laggard in many respects, so it’s not surprising to hear that this shift is a fairly recent thing inside AIA – at least they’ve finally got there as they adopt more agile ways of working. Inevitably from an Enterprise guy, AI is top of mind:

A great part of the AI is around the move from doing QA once to continuous QA. Think about computing speed, and the power available now compared to just a few years ago, and the speed of these activities. Having that integrated within that decision process makes sense. To build it in so that you’re constantly getting feedback that, yes, it’s operating as expected. Yes, it’s giving us the outcomes we’re looking for.

The customer experience or customer outcome is much better, because no organization without AI has one-to-one QA for all of their operational processes. There is risk in manual processing and human decision-making.

I find myself feeling confused by Jarrod’s comments here and unsure what he means when he says that “no organization without AI has one-to-one QA for all of their operational processes”. “One-to-one QA” is not a term I’m familiar with. While I agree that there is risk in using humans for processing and making decisions, it’s simply untrue that there are no risks when the humans are replaced by AI/automation. All that really happens is that a different set of risks applies instead, and human decision-making, especially in the context of testing, is typically a risk worth taking. On “QA” specifically, Jarrod notes:

It has to be inherently part of the organisational journey to ensure that when we have a new product entering the market, all those things we say it’s going to do must actually happen. If it doesn’t work, it’s very damaging. So how do we know that we’re going to get there? The answer needs to be, ‘We know because we have taken the correct steps through the process’.

And somebody can say, ‘I know we’re doing this properly, it’s going to be very valuable throughout the process’. Whether that is a product owner or a test manager, it has to be somebody who can guarantee the QA and give assurance to the quality.

His closing statement here is interesting and one I disagree with. Putting such responsibility onto a single person is unfair and goes against the idea of the whole team being responsible for the quality of what they deliver. This gatekeeper (read: scapegoat) for quality is not helpful and sets the person up for failure, almost by definition.

The third business piece comes from Nicki Doble (Group CIO, Cover-More) and it’s clear that, for her, QA/testing is all about confidence-building:

We need to move faster with confidence, and that means leveraging continuous testing and deployment practices at the same time as meeting the quality and security requirements.

This will involve automated releases, along with test-driven development and automated testing to ensure confidence is maintained.

Historically, testing has been either quite manual or it involved a huge suite of automated tests that took a lot of effort to build and maintain, but which didn’t always support the value chain of the business.

In future, we need to focus on building only the right automated testing required to instill confidence and surety into our practices. This needs to be a mix of Test Driven Development (TDD) undertaken by our developers but supported by the QA team, automated performance and functional testing to maintain our minimum standards and create surety. And it needs to be paired with continuous testing running across our development branches.

It worries me to see words like “confidence” and “surety” in relation to expectations from testing. It sounds like she believes that TDD and automated testing are providing them more certainty than when they had their “quite manual” testing. It would have been more encouraging to instead read that she understands that an appropriate mix of human testing and automated checks can help them meet their quality goals, alongside an acknowledgement that surety cannot be achieved no matter what this mix looks like.

The next business piece comes from Raoul Hamilton-Smith (General Manager Product Architecture & CTO NZ, Equifax). He sets out his vision around testing all too clearly:

We want to have all testing automated, but we’re not there yet. It’s a cultural shift as much as a technology shift to make this happen.

It’s not a cultural or technology shift to make all testing automated; it’s simply an impossible mission – if you really believe testing is more than algorithmic checking. So much for “sanity” taking over (per David Hayman’s observations) when it comes to the “automate everything” nonsense! Raoul goes on to talk about Equifax’s Agile journey:

The organisation has been set up for Agile delivery for quite some time, including a move to the scaled Agile framework around 18 months ago. A standard Agile team (or squad) consists of a product owner, some application engineers, some QA analysts and a scrum master. As far as line management is concerned, there is a QA tower. However, the QA people are embedded in the Agile teams so their day-to-day leadership is via their scrum master/project manager.

What we have not been so good at is being very clear about the demand to automate testing. We probably haven’t shown all [sic] how that can be achieved, with some areas of delivery being better than others.

This is the challenge that we’re facing now – we have people who have been manually testing with automation skills that haven’t really had the opportunity to build out the automaton. So right now, we are at the pivot point, whereby automation is the norm.

It sounds like someone in this organization has loaded up on the Kool-Aid, adopting SAFe and borrowing the idea of squads from the so-called Spotify Model (itself a form of scaling framework for Scrum teams). The desire for complete automation is also evident again here, “the demand to automate testing”. It would be interesting to hear from this organization again in a year or two when the folly of this “automate everything” approach has made them rethink and take a more human-centred approach when it comes to testing.

The penultimate piece for this sector comes courtesy of David Lochrie (General Manager, Digital & Integration, ME Bank). He talks a lot about “cycle time” as a valuable metric and has this to say about testing:

From a quality assurance perspective, as a practice, Lochrie characterises the current status as being in the middle of an evolution. This involves transforming from highly manual, extremely slow, labour-intensive enterprise testing processes, and instead heading towards leveraging automation to reduce the cycle time of big, expensive, fragile [sic] and regression test suites.

“We’ve started our QA by focusing purely on automation. The next phase QA transformation journey will be to broaden our definition of QA. Rather than just focusing on test execution, and the automation of test execution, it will focus on what other disciplines come under that banner of QA and how do we move those to the left.”

The days of QA equating to testing are gone, he says.

QA these days involves much more than the old-school tester sitting at the end of the value chain and waiting for a new feature to be thrown over the fence from a developer for testing. “Under the old model the tester knew little about the feature or its origins, or about the business need, the design or the requirement. But those days are over.”

This again sounds like typical laggard enterprise IT, with testers in more genuinely agile organizations having been embedded into development teams (and being fully across features from the very start) as the norm for many years already. Unfortunately, here again it sounds like ME Bank will make the same fundamental error in trying to automate everything as the way to move faster and reduce their precious cycle time. I’d fully expect sanity to prevail in the long term for this organization too, simply out of necessity, so let’s revisit their comments in a future report too perhaps.

The sixth and final business piece is by Mark Zanetich (Head of Technology, Risk Compliance and Controls for Infrastructure & Operations, Westpac) and he has nothing substantive to say around testing.

Next up in terms of sectors comes “Higher Education” and the introduction is by David Harper (Vice-President, Head of Public Services, Capgemini Australia and New Zealand) who has nothing to say about testing either.

There are two business pieces for this sector, the first coming from Vicki Connor (Enterprise Test Architect, Deakin University) and she says this around AI in testing:

As far as testing applications based on AI, we are doing some exploratory testing and we are learning as we go. We are very open-minded about it. Whilst maintaining our basic principles of understanding why we are testing, what we are testing, when to test, who is best suited to test, where to conduct the testing and how to achieve the best results.

It’s good to read that they’re at least looking at AI in testing via the lens of the basics of why they’re testing and so on, rather than blindly adding it to their mix based on what every other organization is claiming to be doing. I assume that when Vicki refers to “exploratory testing” here, she really means they’re experimenting with these AI approaches to testing and evaluating their usefulness in their own unique context (rather than using ET as a testing approach for their applications generally).

The second business piece comes from Dirk Vandenbulcke (Director – Digital Platforms, RMIT University) and more frequent releases are a hot topic for him:

RMIT us currently in a monthly release cadence. By only having monthly releases, we want to ensure the quality of these releases matches what you would normally find in Waterfall circumstances.

Automation is not only a form of cost control; it is also a question of quality control to meet these timelines. If the test cycles are six weeks, there is no way you can operate on a release cadence of four weeks.

Ultimately, we would like to move to fortnightly-releases for speed-to-market reason[sic], which means our QA cycles need to be automated, improved, and sped up.

For the moment, our QA is more journey-focused. As such, we want to make sure our testing needs are optimised, and use cases are properly tested. Potentially, that means not every single edge case will be tested ever single time. When they were originally developed they were tested – but they won’t be every single time we deploy.

We have started to focus our activities around the paths and journeys our students and staff will take through an experience, rather than doing wide, unfocused tests.

Especially in a fast release cadence, you can’t test every single thing, every time, or automate every single thing, so it’s essential to be focused.

I find it fascinating that the quality bar after moving to monthly releases is “what you would normally find in Waterfall circumstances.” This sounds like a case of fear of the unknown in moving to more frequent releases, when in reality the risk involved in such releases should be lower since fewer changes are involved in each release. His approach of workflow/journey testing, though, strikes me as sensible and he also seems to have a handle on the folly of attempting to automate everything as a way out of the issues he’s facing with these more frequent releases.

The final sector considered in this report is “Government” and this is introduced by David Harper again. He manages to mention all the buzzwords in just a few sentences:

Technology trends continue to encourage new innovative approaches of testing code and opportunities for QA with, for example, DevOps and continuous integration and continuous delivery, further enabling Agile operating environments. Most notable is the emergence of the applicability of AI/machine learning as it relates to driving efficiency at scale in large transaction processing environments.

While these techniques are starting to be deployed in business process, it is interesting to explore how learning algorithms will be used to improve QA activities. Such smart or advanced automaton in testing will emerge once agencies have found their feet with automated testing.

My read of this is that government departments are still struggling with automated testing, let alone applying AI and machine learning.

There are two business pieces for this sector, firstly from Srinivas Kotha (Technology Testing Services Manager, Airservices) and he talks a lot about frameworks and strategy, focusing on the future but with less comment about the current state. He suggests that the organization will first look around to determine their strategy:

As part of the test strategy development, I will be looking at the market trends and emerging technologies in testing and quality assurance space to be able to effectively satisfy our future needs and demands. I believe technology evolution is on the upward trend and there is lot out there in the market that we can leverage to enhance our testing and QA capability and deliver business value.

I hope that they will actually look at their own unique requirements and then look at what technologies can help them meet those requirements, rather than looking at these “market trends” and fitting those into their strategy. As we can see from this very report, this “trend” noise is generally not helpful and the organization’s own context and specific needs should be the key drivers behind choices of technology. Talking about automation and AI, he says:

I will be keen to look at implementing more automation and use of Artificial Intelligence (AI) to scale up to increase the coverage (depth and breadth) of testing to reduce risks and time to market. We will be looking at two components within automation – basic and smart automation. We have done little bit of basic automation at the project level. However, we are not going to reuse that for ongoing testing, nor are we maintaining those scripts. There are some areas within the organisation where specific automated scripts are maintained and run for specific testing needs. We currently using a combination of market-leading and open source tools for test management and automation. Key basic automation items that are for immediate consideration are around ongoing functional, regression and performance (load and stress) testing.

Smart automation uses emerging technology such as AI. The questions we are asking are: how we can automate that for testing and data analysis for improving quality outcomes? And what testing can we do from a DevOps and CI/CD perspective, which we aim to adopt in the coming 1-2 years? In the next 6 months we will put up the framework, create the strategy and then begin implementing the initiatives in the strategy. The key potential strategy areas are around automation, test environment and data, and some of the smart test platforms/labs capability.

It sounds like they are in the very early days of building an automation capability, yet they’re already thinking about applying AI and so-called “smart automation”. There’s real danger here in losing sight of why they are trying to automate some of their testing.

The second piece comes from Philip John (QA and Testing Services Manager, WorkSafe Victoria) and his comments see the first mention (I think) of BDD in this report:

When it comes to QA resourcing, we are bringing in more Agile testers who can offer assistance in automation, with an aim to support continuous QA to underpin a CI/CD approach. We have behavioural-driven development and DevOps in our mix and are focusing our delivery model into shift-left testing.

The organisation is also using more Agile/SAFe Agile delivery models.

It all sounds very modern and on trend; hopefully the testers are adding genuine value and not just becoming Cucumbers. Note the mention of SAFe here, not the first time this heavyweight framework appears in the business pieces of this report. Philip heads down the KPI path as well:

From the KPI perspective, the number of KPIs in the testing and QA space is only going to grow, rather than diminish. We expect that there will be a tactical shift in the definition of some KPIs. In any case, we will need to have a reasonable level of KPIs established to ensure the adherence of testing and quality standards.

I don’t understand the fascination with KPIs and, even if we could conceive of some sensible ones around testing, why would more and more of them necessarily equal better? Hitting a KPI number and ensuring adherence to some standard are, of course, completely different things too.

Trend analysis

Moving on from the sector analysis, the report identifies eight “Key trends in Quality Assurance”, viz.

  • DevOps changes the game
  • Modern testing approaches for a DevOps world
  • The status of performance testing in Agile and DevOps
  • Digital Transformation and Artificial Intelligence
  • Unlocking the value of QA teams
  • Connected ecosystem for effective and efficient QA
  • RPA and what we can learn from test automation
  • Data democratisation for competitive advantage

Ignoring the fact that these are not actually trends (at least not as they are stated here) and that there is no indication of the source of them, let’s look at each in turn.

Each trend is supported by a business piece again, often by a tool vendor or some other party with something of a vested interest.

For “DevOps changes the game”, it’s down to Thomas Hadorn and Dominik Weissboeck (both Managing Directors APAC, Tricentis) to discuss the trend, kicking off with:

Scaled agile and DevOps are changing the game for software testing

There’s that “scaled agile” again but there’s a reasonable argument for the idea that adopting DevOps does change the game for testing. They discuss a little of the “how”:

In the past, when software testing was a timeboxed activity at the end of the cycle, the focus was on answering the question, ‘Are we done testing?’ When this was the primary question, “counting” metrics associated with the number of tests run, incomplete tests, passed tests and failed tests, drove the process and influenced the release decision. These metrics are highly ineffective in understanding the actual quality of a release. Today, the question to answer is: ‘Does the release have an acceptable level of risk?’

To provide the DevOps community with an objective perspective on the quality metrics most critical to answering this question, Tricentis commissioned Forrester to research the topic. The goal was to analyse how DevOps leaders measured and valued 75 quality metrics (selected by Forrester), then identify which metrics matter most for DevOps success

I like their acknowledgement that the fetish for counting things around testing is ineffective and that answering questions about risk is a much more profound way of showing the value of testing. Turning to the Forrester research they mention, they provide this “quadrant” representation, where the horizontal axis represents the degree to which metrics are measured and the vertical axis the value gained from measuring them (note that in this image, “Directions” should read “Distractions”):

[Image: Forrester quality metrics quadrant]

I find it truly bizarre that a “hidden gem” is the idea of prioritizing automated tests based on risk (how else would you do it?!), while high value still seems to be placed on the very counting of things they’ve said is ineffective (e.g. total number of defects, test cases executed, etc.).

The next trend, “Modern testing approaches for a DevOps world”, is discussed by Sanjeev Sharma (VP, Global Practice Director | Data Modernization, Delphix). He makes an observation on the “Move Fast and Break Things” notion:

Although it was promulgated by startups that were early adopters of DevOps, “the notion of Move Fast and Break Things” is passé today. It was a Silicon Valley thing, and that era no longer exists. Enterprises require both speed and high quality, and the need to deliver products and services faster, while maintaining the high expectations of quality and performance are challenges modern day testing and QA teams must address.

This is a fair comment and I see most organizations still having a focus on quality over speed. The desire to have both is certainly challenging many aspects of the way software is built, tested and delivered – and “DevOps” is not a silver bullet in this regard. Sanjeev also makes this observation around AI/ML:

… will drive the need for AI and ML-driven testing, meaning testing and QA are guided by learning from the data generated by the tests being run, by the performance of systems in production, and by introducing randomness – chaos – into systems under test.

This is something I’ve seen far less of in the testing industry than I’d have expected: taking the data generated by different kinds of tests (be they automated or not) and using that data to guide further or different tests. We have the tooling to do this, but even basic measures such as the code covered by automated test suites are not generally collected and, even when they are, not used as input into the risk analysis for human testing.
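
To make that concrete, here’s a tiny, hypothetical sketch (the module names and numbers are invented) of one way such data could feed a risk analysis – ranking modules for exploratory testing attention by combining automated-test coverage with recent change counts:

```python
# Hypothetical sketch: combine automated-test coverage with recent change
# counts to rank modules for human/exploratory testing attention.
# Module names and numbers are invented purely for illustration.
coverage = {"billing": 0.42, "auth": 0.88, "reporting": 0.30, "search": 0.75}
recent_changes = {"billing": 27, "auth": 3, "reporting": 14, "search": 21}

def risk_score(module: str) -> float:
    # Crude heuristic: heavily-changed, lightly-covered code rises to the top.
    return recent_changes.get(module, 0) * (1.0 - coverage.get(module, 0.0))

for module in sorted(coverage, key=risk_score, reverse=True):
    print(f"{module:10s} risk={risk_score(module):6.2f} "
          f"coverage={coverage[module]:.0%} changes={recent_changes[module]}")
```

Nothing sophisticated, but even a heuristic like this would at least put test-generated data to work in deciding where human testing attention goes next.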

The next (not) trend is “The status of performance testing in Agile and DevOps”, covered by Henrik Rexed (Performance Engineer, Neotys) and his focus – unsurprisingly since he works for a performance testing tool vendor – is performance testing. He comments:

That is why the most popular unicorn companies have invested in a framework that would allow them to automatically build, test, and deploy their releases to production with minimal human interaction.

Every organisation moving to Agile or DevOps will add continuous testing to their release management. Without implementing the performance scoring mechanism, they would quickly be blocked and will start to move performance testing to major releases only.

We are taking major risks by removing performance testing of our pipeline. Let’s be smart and provide a more efficient performance status.

I’m not keen on the idea of taking what so-called “unicorn companies” do as a model for what every other company should do – remember context matters and what’s good for SpaceX or WeWork might not be good for your organization. I agree that continuous testing is a direction most teams will take as they feel pressured to deploy more frequently and I see plenty of evidence for this already (including within Quest). Henrik makes the good point here that the mix of tests generally considered in “continuous testing” often doesn’t include performance testing and there are likely benefits from adding such testing into the mix rather than kicking the performance risk can down the road.

The next trend is “Digital Transformation and Artificial Intelligence” and is discussed by Shyam Narayan (Director | Head of Managed Services, Capgemini Australia and New Zealand). On the goal of AI in testing, he says:

AI interactions with the system multiply the results normally obtained by manual testing. A test automation script can be designed to interact with the system, but it can’t distinguish between the correct and incorrect outcomes for applications.

The end goal of leveraging AI in testing is to primarily reduce the testing lifecycle, making it shorter, smarter, and capable of augmenting the jobs of testers by equipping them with technology. AI is directly applicable to all aspects of testing including performance, exploratory and functional regression testing, identifying and resolving test failures, and performing usability testing.

I’m not sure what he means by “multiplying the results normally obtained by manual testing” and I’m also not convinced that the goal of leveraging AI is to reduce the time it takes to test; I’d see the advantages more in terms of enabling us to do things we currently cannot using humans or existing automation technologies. He also sees a very broad surface area of applicability across testing, so it’ll be interesting to see how the reality pans out. In terms of skill requirements for testers in this new world, Shyam says:

Agile and DevOps-era organisations are seeking software development engineers in test (SDET) – technical software testers. But with AI-based applications the requirement will change from SDET to SDET plus data science/statistical modelling – software development artificial intelligence in rest [sic] (SDAIET). This means that QA experts will need knowledge and training not only in development but also in data science and statistical modelling.

This honestly seems ridiculous. The SDET idea hasn’t even been adopted broadly and, where organizations went “all in” around that idea, they’ve generally pulled back and realized that the testing performed by humans is actually a significant value add. Something like a SDAIET is so niche that I can’t imagine it catching on in any significant way.

The next trend is “Unlocking the value of QA teams” and is discussed by Remco Oostelaar (Director, Capgemini Australia and New Zealand). His main point seems to be that SAFe adoption has been a great thing, but that testing organizations haven’t really adapted into this new framework:

In some cases, the test organisation has not adapted to the new methods of Agile, Lean, Kanban that are integrated into the model. Instead it is still structurally based on the Waterfall model with the same processes and tools. At best these test organisations can deliver some short-term value, but not the breakthrough performance that enables the organisation to change the way it competes.

It’s interesting that he considers SAFe to be a model incorporating Agile, Lean and Kanban ideas; I didn’t get that impression when I took a SAFe course some years ago, but I acknowledge that my understanding of, and interest in, the framework is limited.

It is also important to consider how to transform low-value activities into a high-value outcome. An example is the build of manual test scenarios to automation that can be integrated as part of the continuous integration and continuous delivery (CI/CD) model. Other examples are: automatic code quality checks, continuous testing for unit tests, the application performance interface (API), and monitoring performance and security.

It’s sad to see this blanket view of manual testing as a “low-value activity” and we continue to have a lot of work to do in explaining the value of human testing and why & where it still fits even in this new world of Agile, CI/CD, DevOps, SAFe, NWW, AI, <insert buzzword here>.

Implementing SAFe is not about cost reduction; it is about delivering better and faster. Companies gain a competitive edge and improved customer relationship. The focus is on the velocity, throughput, efficiency improvement and quality of the delivery stream.

I’m sure no organization takes on SAFe believing it will reduce costs; just a glance at the framework overview shows you how heavyweight it is and the extra work you’ll need to do to implement it by the book. I’d be interested to see case studies of efficiency improvements and quality upticks after adopting SAFe.

The next trend is “Connected ecosystem for effective and efficient QA” and it’s over to Ajay Walgude (Vice President, Capgemini Financial Services) for the commentary. He makes reference to the World Quality Report (per my previous blog):

Everything seems to be in place or getting in order, but we still have lower defect removal efficiency (DRE), high cost of quality especially the cost on non-conformance based on interactions with various customers. While the World Quality Report (WQR) acknowledges these changes and comments on the budgets for QA being stable or reduced, there is no credible source that can comment on metrics such as cost of quality, and DRE across phases, and type of defects (the percentage of coding defects versus environment defects).

He doesn’t cite any sources for these claims. Do we really have lower DRE across the industry? How would we know? And would we care? Metrics like DRE are not gathered by many organizations (and rightly so, as far as I’m concerned), so such claims about the industry as a whole make no sense.

Effective and efficient QA relates to testing the right things in the right way. Effectiveness can be determined in terms of defect and coverage metrics such as defect removal efficiency, defect arrival rate, code coverage, test coverage and efficiency that can be measured in terms of the percentage automated (designed and executed), cost of quality and testing cycle timeframe. The connected eco system not only has a bearing on the QA metrics – cost of appraisal and prevention can go down significantly – but also on the cost of failure.

I’m with Ajay on the idea that we should strive to test the right things in the right way – this is again an example of context awareness, though that’s probably not what he’s referring to. I disagree with measuring effectiveness and efficiency via the kinds of metrics he mentions, however. Measuring “percentage automated” is meaningless to me: it treats human and automated tests as countable in the same way (which is nonsense) and reinforces the notion that more automation is better (which is not necessarily the case). And how exactly would one measure the “cost of quality” as a measure of efficiency?

He also clearly sees the so-called “Spotify Model” as being in widespread usage and makes the following claim about more agile team organizations:

The aim of squads, tribes, pods and scrum teams is to bring everybody together and drive towards the common goal of building the minimum viable product (MVP) that is release worthy. While the focus is on building the product, sufficient time should be spent on building this connected eco system that will significantly reduce the time and effort needed to achieve that goal and, in doing so, addressing the effective and efficient QA.

The goal of an agile development team is not to build an MVP; that may be a goal at some early stage of a product’s life, but it won’t generally be the goal.

The penultimate trend, “RPA and what we can learn from test automation”, is covered by Remco Oostelaar again (Director, Capgemini Australia and New Zealand) and he starts off by defining what he means by RPA:

Robotic Process Automation (RPA) is the automation of repetitive business tasks and it replaces the human aspect of retrieving or entering data from or into a system, such as entering invoices or creating user accounts across multiple systems. The goal is to make the process faster, more reliable, and cost-effective.
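
To make that definition a little more concrete, here is a minimal sketch (my own illustration, not taken from the report) of the kind of repetitive data-entry task an RPA tool typically takes over – reading exported invoice rows and entering them into a target system. The file name, endpoint and field names are all hypothetical.

```python
# Minimal sketch of an RPA-style task: repetitive data entry automated end to end.
# All file names, endpoints and field names below are hypothetical examples.
import csv

import requests  # third-party package; assumes 'requests' is installed

TARGET_API = "https://erp.example.com/api/invoices"  # hypothetical target system


def enter_invoices(csv_path: str) -> None:
    """Read exported invoice rows and enter each one into the target system."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "supplier": row["supplier"],
                "invoice_number": row["invoice_number"],
                "amount": float(row["amount"]),
            }
            response = requests.post(TARGET_API, json=payload, timeout=30)
            response.raise_for_status()  # fail loudly rather than silently skipping rows


if __name__ == "__main__":
    enter_invoices("invoices_export.csv")
```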

He argues that many of the challenges that organizations face when implementing RPA are similar to those they faced previously when implementing automated testing, leading to this bold claim:

In the future, RPA and test automation will merge into one area, as both have the same customer drivers – cost, speed and quality – and the skillsets are exchangeable. Tool providers are crossing-over to each other’s areas and, with machine learning and AI, this will only accelerate.

It troubles me when I see “test automation” positioned as a cost-reduction initiative; the ROI on test automation (like any form of testing) is zero – it’s a cost, just as writing code is a cost, and yet I’ve literally never seen anyone ask for an ROI justification to write product code.

The last trend covered here is “Data democratisation for competitive advantage”, discussed by Malliga Krishnan (Director | Head of Insights and Data, Capgemini Australia and New Zealand) and she doesn’t discuss testing at all.

In another error in the report, there is actually one more trend that isn’t mentioned until we get here, so the final final trend is “The transformative impact of Cloud”, covered by David Fodor (Business Development – Financial Services, Amazon Web Services, Australia & New Zealand). It’s a strongly pro-AWS piece, as you’d probably expect, but it’s interesting to read the reality around AI implementations for testing viewed through the lens of such an infrastructure provider:

When it comes to quality assurance, it’s very early days. I haven’t seen significant investment in use cases that employ AI for assurance processes yet, but I’m sure as organisations redevelop their code deployment and delivery constructs, evolve their DevOps operating models and get competent at managing CI/CD and blue/green deployments, they will look at the value they can get from AI techniques to further automate this process back up the value chain.

It sounds like a lot of organizations have a long way to go in getting their automation, CI/CD pipelines and deployment models right before they need to worry about layering on AI. He makes the following points re: developers and testers:

Traditionally, there was a clear delineation between developers and testers. Now developers are much more accountable – and will increasingly be accountable – for doing a significant part of the assurance process themselves. And, as a result, organisations will want to automate that as much as possible. We should expect to see the balance of metrics – or what success looks like for a development cycle – to evolve very much to cycle time over and above pure defect rate. As techniques such as blue/green and canary deployments evolve even further, and as microservices architectures evolve further, the impacts of defects in production will become localised to the extent where you can afford to bias speed over failure.

The more you bias to speed, the more cycles that you can produce, the better you get and the lower your failure rates become. There is a growing bias to optimise for speed over perfection within an environment of effective rollback capabilities, particularly in a blue/green environment construct. The blast radius in a microservices architecture means that point failures don’t bring down your whole application. It might bring down one service within a broader stack. That’s definitely the future that we see. We see organisations who would rather perform more deployments with small failure rates, than have protracted Waterfall cycle development timelines with monolithic failure risk.

The “cycle time” metric is mentioned again here and at least he sees nonsense metrics such as defect rates going away over time in these more modern environments. His comment that “the impacts of defects in production will become localised to the extent where you can afford to bias speed over failure” rings true, but I still think many organizations are far away from the maturity in their DevOps, CI/CD, automation, rollbacks, etc. that makes this a viable reality. The illusion of having that maturity is probably already leading some to make this mistake, though.
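
As an aside, the kind of automated canary analysis that makes “bias speed over failure” viable can be sketched very simply: route a small share of traffic to the new version, watch its error rate, and roll back automatically if it degrades. The traffic share, threshold and decision logic below are purely my own illustration, not anything from the report or from AWS.

```python
# Minimal sketch of canary-release decision logic; all thresholds are illustrative.
import random

CANARY_TRAFFIC_SHARE = 0.05   # illustrative: 5% of requests routed to the new version
ERROR_RATE_THRESHOLD = 0.02   # illustrative: roll back if canary error rate exceeds 2%


def route_request(handle_stable, handle_canary):
    """Send a small share of traffic to the canary, the rest to the stable version."""
    handler = handle_canary if random.random() < CANARY_TRAFFIC_SHARE else handle_stable
    return handler()


def evaluate_canary(canary_errors: int, canary_requests: int) -> str:
    """Decide whether to promote or roll back based on the observed canary error rate."""
    if canary_requests == 0:
        return "keep observing"
    error_rate = canary_errors / canary_requests
    if error_rate > ERROR_RATE_THRESHOLD:
        return "roll back"  # blast radius limited to the canary share of traffic
    return "promote"


if __name__ == "__main__":
    # e.g. 1 error in 200 canary requests is a 0.5% error rate -> "promote"
    print(evaluate_canary(canary_errors=1, canary_requests=200))
```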

Takeaways for the future of Quality Assurance

With the trends all covered, the last section of the report with any substance is the “10 Key takeaways for the future of Quality Assurance”, which are again listed without any citations or sources and so can only be taken as Capgemini opinion:

  • Digital transformation
  • Shift-Left or Right-Shift
  • Automation
  • Redefined KPIs
  • Evolution of QA Tools
  • QA Operating Model
  • QA framework and strategy
  • Focus on capability uplift
  • Role of industry bodies
  • Role of system integrators

Wrapping up

This is another hefty report from the Capgemini folks and, while the content is gathered less from data and more from opinion pieces when compared to the World Quality Report, it results in a very similar document.

There are plenty of buzzwords and vested interest commentary from tool vendors, but little to encourage me that such a report tells us much about the true state of testing or its future. While it was good to read some of the more sensible and practical commentary, the continued predictions of AI playing a significant role in QA/testing sometime soon simply don’t reflect the reality that those of us who spend time learning about what’s actually going on in testing teams are seeing. Most organizations still have a hard enough time getting genuine value from a move to agile ways of working and particularly leveraging automation to best effect, so the extra complexity and contextual application of AI seems a long way off the mainstream to me.

I realize that I’m probably not the target audience for these corporate-type reports and that the actual target audience will probably take on board some of the ideas from such a high-profile report – unfortunately, this is likely to result in poor decisions about testing direction and strategy in large organizations, when a more context-aware investigation of which good practices make sense in each of their unique environments would produce better outcomes.

I find reading these reports both interesting and quite depressing in about equal measure, but I hope I’ve highlighted some of the more discussion-worthy pieces in this blog post.

A very different conference experience

My Twitter feed has been busy in recent weeks with testing conference season in full swing.

First on my radar after some time away in Europe on holidays was TestBash Australia, followed soon afterwards by its New Zealand and San Francisco incarnations. Next up was the massive Agile Testing Days in Germany, and another mega-conference, European stalwart EuroSTAR, is in progress as I write.

It’s one of the joys of social media that we can share in the goings on of these conferences even if we can’t attend in person. The only testing conference I’ve attended in 2019 has been TiCCA19 in Melbourne (an event I co-organized with Paul Seaman and the Association for Software Testing) but I hope to get to an event or two in 2020.

I did attend a very different kind of conference at the Melbourne Town Hall in October, though, in the shape of the full weekend Animal Activists Forum. There was a great range of talks across several tracks on both days and I saw inspiring presentations from passionate activists. Organizations like Voiceless, Animals Australia, Aussie Farms, The Vegan Society, and the Animal Justice Party – as well as many individuals – are doing so much good work for this movement.

There were some marked differences between this conference and the testing/IT conferences I generally attend. Firstly, the cost for the two full days of this event (including refreshments but not lunches) was just AU$80 (early bird), representing remarkable value given the location and range of great talks on offer.

Another obvious difference was the prevalence of female speakers on the programme, probably due to the fact that the vegan community is believed to be around 70-80% female. It was good to see more passion and positivity emanating from the stage too, all the more remarkable when considering the atrocities and realities of the animal exploitation industries that many of us are regularly exposed to within this movement.

The focus of most of the talks I attended was on actionable content, things we could do to help advance the movement. While there was some discussion of theory, history and philosophy, it was for the most part discussed with a view to providing ideas for what we can do now to advance animal rights. Many IT conference talks would do well to similarly focus on actionable takeaways.

While there were many differences compared to tech conferences, there was also evidence of common themes. One of the areas of commonality was how difficult it is to persuade people to change, even in the face of facts and evidence in support of the positive impacts of the change, such as going vegan (with the focus being squarely on going vegan for the animals in this audience, while also considering the environmental and health benefits). It was good to hear the different ideas and approaches from different speakers and activist groups. We need many different styles of advocacy when it comes to context-driven testing too – different people are going to be reached in different ways (it’s almost as though context matters!).

It’s interesting to me how easy it sometimes seems to be to change people’s minds or opinions, though. An example I’ve seen unfolding is the introduction of dairy products into China. I’ve been working with testing teams there for seven years and, for the first few years, I rarely saw or heard any mention of dairy products. This situation has changed very rapidly, thanks to massive marketing efforts by the dairy industry (most notably – and sadly – from Australian and New Zealand dairy companies). Even though the vast majority of Chinese people are lactose intolerant and have little idea about how to use products like dairy milk and cheese, the consumption of these products has become very mainstream. From infant formula (a very lucrative business) to milk on supermarket shelves (with some very familiar Australian brands on show) to Starbucks, the dairy offerings are now ubiquitous. The fact that these products are normalized in the West enables an easier sell to the Chinese, and the marketing has been heavily contextualized – some of the advertising claims, for example, that drinking cow’s milk will help children grow taller. These nutritional falsehoods have worked in the West and are now working in China. The dairy mythology has been successfully sold to this enormous market and the unbelievable levels of cruelty that will result from this, as well as the inevitable negative human health implications, are tragic. Such large industries, of course, have dollars on their side to mount huge marketing campaigns and are driven by profit, placed above the welfare of animals or the health of their consumers. But maybe there are lessons to be learned from their approaches to messaging that can be beneficial in selling good approaches to testing (without the blatant untruths, of course)?

(By the way, does anyone reading this post know if the ISTQB is having a marketing push in China right now? A couple of my colleagues there have talked to me about ISTQB certification just in the last week, while no-one had mentioned it before in the seven years I’ve been working with testers in China…)

If you found this post interesting, I humbly recommend that you also read this one: What becoming vegan taught me about software testing.

All testing is exploratory: change my mind

I’ve recently returned to Australia after several weeks in Europe, mainly for pleasure with a small amount of work along the way. Catching up on some of the testing-related chatter on my return, I spotted that Rex Black repeated his “Myths of Exploratory Testing” webinar in September. I respect the fact that he shares his free webinar content every month and, even though I often find myself disagreeing with his opinions, hearing what others think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good testing looks like.

Rex started off with his definition of exploratory testing (ET), viz.

A technique that uses knowledge, experience and skills to test software in a non-linear and investigatory fashion

He claimed that this is a “pretty widely shared definition of ET” but I don’t agree. The ISTQB Glossary uses the following definition:

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

The definition I hear most often is something like the following James Bach/Michael Bolton effort (which they used until 2015):

An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project

They have since deprecated the term “exploratory testing” in favour of simply “testing” (from 2015), defining testing as:

Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.

Rex went on to say that the test basis and test oracles in ET “are primarily skills, knowledge and experience” and any such testing is referred to as “experience-based testing” (per the ISTQB definition, viz. “Testing based on the tester’s experience, knowledge and intuition.”). Experience-based testing that is investigatory is then deemed to be exploratory. I have several issues with this. There is an implication here that ET involves testing without using a range of oracles that might include specifications, user stories, or other more “formal” sources of what the software is meant to do. Rex reinforces this when he goes on to say that ET is a form of validation and “may tell us little or nothing about conformance to specification because the specification may not even be consulted by the tester”. Also, I can’t imagine any valuable testing that doesn’t rely on the tester’s skills, knowledge and experience, so it seems to me that all testing would fall under this “experience-based testing” banner.

The first myth Rex discussed was the “origin myth”, that ET was invented in the 1990s in Silicon Valley or at least that was when a “name got hung on it” (e.g. by Cem Kaner). He argued instead that it was invented by whoever wrote the first program, that IBM were doing it in the 1960s, that the independent test teams in Fred Brooks’s 1975 book The Mythical Man-Month were using ET, and that “error guessing” as introduced by Glenford Myers in the classic book The Art of Software Testing sounds “a whole lot like a form of ET”. The History of Definitions of ET on James Bach’s blog is a good reference in this regard, in my opinion. While I agree that programmers have been performing some kind of investigatory or unscripted testing in their development and debugging activities for as long as programming has been a thing, it’s important that we define our testing activities in a way that makes the way we talk about what we do both accurate and credible. I see the argument for suggesting that error guessing is a form of ET, but it’s just one tactic that might be employed by a tester skilled in the much broader approach that is ET.

The next myth Rex discussed was the “completeness myth”, that “playing around” with the software is sufficient to test it. He mentioned that there is little education around testing in Software Engineering degrees, so people don’t understand what testing can and cannot do, which leads to myths like this. I agree that there is a general lack of understanding in our industry of how important structured ET is as part of a testing strategy, though I haven’t personally heard this myth being espoused anywhere recently.

Next up was the “sufficiency myth”, that some teams bring in a “mighty Jedi warrior of ET & this person has helped [them] to find every bug that can matter”. He mentioned a study from Microsoft where they split their testing groups for the same application, with one using ET (and other reactive strategies) only, while the other used pre-designed tests (including automated tests) only. The sets of bugs found by these two teams were partially but not fully overlapping, hence proving that ET alone is not sufficient. I’m confident that even if the groups had been divided up and did the same kind of testing (be it ET or pre-designed), the sets of bugs from the two teams would also have been partially but not fully overlapping (there is some evidence to support this, albeit from a one-off small case study, from Aaron Hodder & James Bach in their article Test Cases Are Not Testing)! I’m not sure where this myth comes from; I’ve not heard it from anyone in the testing industry and haven’t seen a testing strategy that relies solely on ET. I do find that using ET as an approach can really help in focusing on finding bugs that matter, though, and that seems like a good thing to me.

Rex continued with the “irrelevance myth”, that we don’t have to worry about ET (or, indeed, any validation testing at all) because of the use of ATDD, BDD, or TDD. He argued that all of these approaches are verification rather than validation, so some validation is still relevant (and necessary). I’ve seen this particular myth and, if anything, it seems to be more prevalent over time, especially in the CI/CD/DevOps world where automated checks (of various kinds) are viewed as sufficient gates to production deployment. Again, I see this as a lack of understanding of what value ET can add, and it’s on us as a testing community to help people understand that value (and explain where ET fits into these newer, faster deployment approaches).

The final myth that Rex brought up was the “ET is not manageable myth”. In dispelling this myth, he mentioned the Rapid Reporter tool, timeboxed sessions, and scoping using charters (where a “charter is a set of one or more test conditions”). This was all quite reasonable, basically referring to session-based test management (SBTM) without using that term. One of his recommendations seemed odd, though: “record planned session time versus actual [session] time” – sessions are strictly timeboxed in an SBTM situation, so planned and actual time are always the same. While sticking to the timebox seems to be one of the more difficult aspects of SBTM for testers initially, in my experience, it is critical if ET is to be truly manageable.
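
For readers unfamiliar with SBTM, here is a minimal sketch of what a session record might capture – a charter, a fixed timebox, and the notes and bugs gathered during the session. The structure and field names are my own illustration (not from Rex’s webinar or any particular SBTM tool), but they show why “planned versus actual session time” makes little sense when the timebox is fixed up front.

```python
# Minimal sketch of an SBTM-style session record; field names are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestSession:
    charter: str                 # the mission for this session, e.g. one or more test conditions
    timebox_minutes: int = 90    # fixed up front; in strict SBTM the session ends when this elapses
    tester: str = ""
    notes: List[str] = field(default_factory=list)   # observations, questions, setup issues
    bugs: List[str] = field(default_factory=list)    # bug summaries or tracker IDs raised

    def summary(self) -> str:
        return (f"Charter: {self.charter}\n"
                f"Timebox: {self.timebox_minutes} minutes\n"
                f"Bugs raised: {len(self.bugs)}, notes: {len(self.notes)}")


# Example usage
session = TestSession(
    charter="Explore CSV import with malformed and oversized files",
    timebox_minutes=60,
    tester="A. Tester",
)
session.notes.append("Import silently truncates rows longer than 4096 characters")
session.bugs.append("BUG-1234: truncation on long rows not reported to user")
print(session.summary())
```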

Moving on from the myths, Rex talked about “reactive strategies” in general, suggesting they were suitable in agile environments but that we also need risk-based strategies and automation in addition to ET. He said that the reliance on skills and experience when using ET (in terms of the test basis and test oracle) means that heuristics are a good way of triggering test ideas, and he made the excellent point that all of our “traditional” test techniques still apply when using ET.

Rex’s conclusion was also sound: “I consider (the best practice of) ET to be essential but not sufficient by itself”. I have no issue with that (well, apart from his use of the term “best practice”) – and again don’t see any credible voices in the testing community arguing otherwise.

The last twenty minutes of the webinar was devoted to Q&A from both the online and live audience (the webinar was delivered in person at the STPCon conference). An interesting question from the live audience was “Has ET finally become embedded in the software testing lifecycle?” Rex responded that the “religious warfare… in the late 2000s/early 2010s has abated, some of the more obstreperous voices of that era have kinda taken their show off the road for various reasons and aren’t off stirring the pot as much”. This was presumably in reference to the somewhat heated debate going on in the context-driven testing community in that timeframe, some of which was unhelpful but much of which helped to shape much clearer thinking around ET, SBTM and CDT in general in my opinion. I wouldn’t describe it as “religious warfare”, though.

Rex also mentioned in response to this question that he actually now sees the opposite problem in the DevOps world, with “people running around saying automate everything” and the belief that automated tests by themselves are sufficient to decide when software is worthy of deployment to production. In another reference to Bolton/Bach, he argued that the “checking” and “testing” distinction was counterproductive in pointing out the fallacy of “automate everything”. I found this a little ironic since Rex constantly seeks to make the distinction between validation and verification, which is very close to the distinction that “testing” and “checking” seeks to draw (albeit in much more lay terms, as far as I’m concerned). I’ve actually found the “checking” and “testing” terminology extremely helpful in making exactly the point that there is “testing” (as commonly understood by those outside of our profession) that cannot be automated; it’s a great conversation starter in this area for me.

One of Rex’s closing comments was again directed to the “schism” of the past with the CDT community, “I’m relieved that we aren’t still stuck in these incredibly tedious religious wars we had for that ten year period of time”.

There was a lot of good content in Rex’s webinar and nothing too controversial. His way of talking about ET (even the definition he chooses to use) is different to what I’m more familiar with from the CDT community, but it’s good to hear him referring to ET as an essential part of a testing strategy. I’ve certainly seen an increased willingness to use ET as the mainstay of so-called “manual” testing efforts, and putting structure around it using SBTM adds a lot of credibility. For the most part in my teams across Quest, we now consider test efforts to be ET only if they are performed within the framework of SBTM, so that we have the accountability and structure in place for the various stakeholders to treat this approach as credible and worthy of their investment.

So, finally getting to the reason for the title of this post: whether by Rex’s (I would argue unusual) definition (or even the ISTQB’s definition), or by what I would argue is the more widely accepted definition (Bach/Bolton above), it seems to me that all testing is exploratory. I’m open to your arguments to change my mind!

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/ – the one I refer to in this blog post has not appeared there as yet, but the audio is available via https://rbcs-us.com/resources/podcast/)

What becoming vegan taught me about software testing

While I’ve been in the software testing industry for twenty years, I’ve only been vegan for a few years. Veganism is a movement around minimizing harm to our fellow human and non-human animals with compassion at its core, and I think of it as being built on three main pillars, viz. ethics, the environment, and health.

I will discuss how being vegan has strong parallels with being a software tester in terms of the three pillars of veganism, the similar challenges involved in changing the mindsets of others – and also the need to frequently justify one’s own beliefs!

Be prepared to learn something about veganism and maybe have some long-held assumptions challenged, but also expect to take away some ideas for changing hearts and minds in your life as a tester.

Ethics

At its heart, veganism is an ethical principle designed to reduce – to the greatest extent possible – the suffering we cause to other human and non-human animals. It is unfortunate that the movement is seen as “radical” or “militant” when it essentially asks nothing more of us than to not cause unnecessary suffering. It feels to me like the animal rights movement is on the rise, just as the human rights movement was through the 1800s and 1900s.

With ethics at the core of veganism, they also need to be at the core of software testing. Doing the right thing is the right thing to do, and our job as testers often involves relaying bad or unpopular information. As algorithms take over so many decision-making processes in the modern world, our dependency on the developers writing these algorithms to do the right thing only increases. The Volkswagen “dieselgate”[1] scandal is an example where many people involved in the development of the “defeat devices” (possibly including both software developers and testers) didn’t speak up enough to stop these devices going into the public domain and potentially causing harm to so many people.

Since we have now acknowledged as a society that humans have the right to live a life free from unnecessary suffering and so on, organizations employing testers have an obligation to engage them in intellectually challenging work fit for humans to invest their valuable lives in. Putting intelligent humans into roles where they are expected to mimic the actions of machines (e.g. executing step-by-step detailed test cases) is, to me at least, unethical.

Environment

According to a recent study by the University of Oxford[2], animal agriculture is one of the leading causes of greenhouse gas emissions, with the bulk of these emissions emanating from factory farms. This significant contribution to one of our planet’s most pressing problems is rarely acknowledged or publicly discussed, especially by governments. Our disregard for the environment is bad for the animals and bad for humans. It’s not too much of a stretch to view factory farming and so-called “factory testing” in a similar light. The mechanistic dogma of factory testing is bad for our business, yet for many organizations it remains the default position and unquestioned “best practice” approach to software testing, despite the evidence of its inefficiencies.

The animal agriculture business is also the largest contributor to deforestation, either to create pasture for grazing or to grow soy beans or grain to feed to animals. Animals are very inefficient at converting food and water into the products eaten by humans (in terms of calories and protein[3]), so we could dramatically reduce the amount of deforestation while at the same time providing more food for humans to eat directly. The dominant ideology around consuming animal products again makes this a conversation rarely heard. We could similarly cut out the inefficiencies in a large proportion of the world’s software testing industry by removing the “middle man” of excessive documentation. I had hoped that the rise in popularity of more agile approaches to software development would spell the end of heavyweight “testing processes” in which we spend more time writing about the testing we might do than actually interacting with the software to see what it does do and assessing whether that’s what we wanted. The dominant ideology around software testing – very successfully promoted by organizations like the ISTQB – still manages to push challenges to these approaches to the fringes, however, and paints detractors as radical or even unprofessional.

Health

Credible studies[4] suggest that reducing animal products in the human diet (or, better, eliminating them altogether) leads to improved long-term health outcomes. A wholefood, plant-based diet appears to be optimal for human health, in terms of both physical and mental wellbeing.

It’s good to see some important aspects of self-care entering the conversation in the IT community, such as practicing mindfulness and getting adequate sleep. The mythology around “all-nighters” and hero efforts popularized in the IT community thankfully seems to be unravelling as we realize the importance of physical and mental wellbeing when it comes to our ability to perform well in our work settings. Testers have often borne the brunt of last-minute hero efforts to get releases out, and I’m encouraged by the changes I see in our industry towards a mindset of the whole development team being responsible for the quality of deliverables – a positive outcome from the move to more agile approaches, perhaps.

The same old questions, time and time again

One of the challenges of being vegan is the fact that everyone suddenly becomes an expert on nutrition, keen to explain why it is unhealthy to not consume animal products. The questions are surprisingly similar from many different people and can be seen in the many online materials from vegan bloggers and influencers. The most common question is probably “where do you get your protein?”, reflecting both a modern obsession with protein intake (it’s almost impossible to be protein deficient when consuming an average amount of calories) and poor nutritional advice from doctors, government bodies, and mainstream media. It’s worth remembering that your typical GP receives only a few hours of nutritional training during their time at medical school, that government policy is heavily influenced by lobbying from big agriculture, and that mainstream media relies heavily on advertising revenues from the animal agriculture industry. The animals consumed by humans as meat get their protein from eating plants, just like humans can.

Another common question is “What about vitamin B12?” and, yes, most vegans will supplement with (vegan-sourced) B12. What most people don’t realize is that factory-farmed animals are themselves supplemented with B12, so vegans are simply cutting out the middle man (or, rather, middle animal). Animals are not some magic B12-producing machine; B12 comes naturally from bacteria in soil, hence the need for supplementation in animals now raised in such unnatural conditions.

The animal agriculture industry relies on this marketing and mythology, as well as the general lack of questioning about what is seen as the norm in consuming animal products. The same applies to the software testing industry, where big players and repeated mythology have created such a dominant worldview of testing that it goes unquestioned by so many. If you go against the grain in this industry (say, by aligning yourself with the context-driven testing “school”, as I have chosen to do), you can expect some of these common questions:

  • How do you measure coverage?
  • How can you test without test cases?
  • What percentage of your testing is automated?
  • Why do you focus so much on semantics (like “testing” and “checking”)?

Software testing is also one of the few specialties I’ve seen in the IT industry in which people outside of the specialty believe and openly express that they know how to do it well, despite having little or no experience of actually performing good testing. The ongoing misapprehension that testing is somehow “easy” or lesser than other software development specialties is something we should all counter whenever we have the opportunity – software testing is a professional, cognitive activity that requires skills including – but not limited to – critical thinking, analysis, attention to detail, and the ability to both focus and defocus. Let’s not devalue ourselves as good testers by not speaking up for our craft! Knowing how to respond well to the common questions is a good place to start.

Hello old friend, confirmation bias!

When it comes to cognitive biases, one of the most common I’ve witnessed is “confirmation bias”, especially when talking about food. Confirmation bias is the tendency to search for, interpret, favour and recall information in a way that confirms your pre-existing beliefs or hypotheses. Talking specifically in the context of the food we eat, everyone is looking for good news about their bad habits, so any “research” that suggests eating meat, eggs, dairy products, chocolate, etc. is actually good for you gets big press and plenty of clicks. (One trick is to “follow the money” when it comes to research articles like this, as funding from big agriculture can invariably be found somewhere behind them, in my experience.) It should come as no surprise that the overwhelming scientific evidence to the contrary doesn’t attract the same attention!

I see the same confirmation bias being displayed by many in the testing community, with big differences of opinion between the so-called “schools of testing”[5] and a lack of willingness to even consider the ideas from one school within the others. Even when credible experience reports[6] have been published around the poor efficacy of a test case-based approach to testing, for example, there is no significant shift away from what I’d call “traditional” approaches to testing in many large organizations and testing outsourcing service providers.

To counter confirmation bias when trying to change the mindset of testers, it’s worth looking at the very different approaches taken by different activists in the vegan movement. I was first introduced to the Socratic Method when I took Michael Bolton’s Rapid Software Testing course back in 2007 and it’s been a powerful tool for fostering critical thinking in my work since then. Many vegan activists also use the Socratic Method to help non-vegans explore their logical inconsistencies, but their approach to using it varies widely. A well-known Australian activist, Joey Carbstrong[7], pulls no punches with his somewhat “in your face” style, whereas the high-profile UK activist Earthling Ed[8] uses a gentler approach to achieve similar results. These stylistic differences remind me of those I’ve personally experienced attending Rapid Software Testing delivered by Michael Bolton and, later, by James Bach. I strongly believe in the power of the Socratic Method in terms of fostering critical thinking skills and it’s a powerful approach to use when confronted by those who doubt or disregard your preferred approach to testing based on little but their own confirmation bias.

Time for dessert!

Any belief or action that doesn’t conform to the mainstream narrative or paradigm causes people to become defensive. Just as humans consuming animal products is seen as natural, normal and necessary[9] (when it is demonstrably none of those), a departure from the norms of the well-peddled testing methodologies is likely to result in you being questioned and criticized, and feeling the need to justify your own beliefs. I would encourage you to navigate your own path, be well informed and find approaches and techniques that work well in your context, then advocate for your good ideas via respectful dialogue and use of the Socratic Method.

And, yes, even vegans get to eat yummy desserts! I hope you’ve learned a little more about veganism and maybe I’ve helped to dispel a few myths around it – maybe try that vegan option the next time you go out for a meal, or check out one of the many vegan activists[7,8,10,11] spreading the word about veganism.

References

[1] Volkswagen “dieselgate”: https://www.sbs.com.au/news/what-is-the-volkswagen-dieselgate-emissions-scandal

[2] “Reducing food’s environmental impacts through producers and consumers”, Science Volume 360, Issue 6392, 1st June 2018: https://science.sciencemag.org/content/360/6392/987

[3] “Energy and protein feed-to-food conversion efficiencies in the US and potential food security gains from dietary changes” (A Shepon, G Eshel, E Noor and R Milo), Environmental Research Letters Volume 11, Number 10, 4th October 2016: https://iopscience.iop.org/article/10.1088/1748-9326/11/10/105002

[4] Examples from the Physicians Committee for Responsible Medicine (US): https://www.pcrm.org/clinical-research

[5] “Four Schools of Testing” (Bret Pettichord), Workshop on Teaching Software Testing, Florida Tech, February 2003: http://www.testingeducation.org/conference/wtst_pettichord_FSofST2.pdf

[6] “Test Cases Are Not Testing” (James Bach & Aaron Hodder), Testing Trapeze magazine, February 2015: https://www.satisfice.com/download/test-cases-are-not-testing

[7] Joey Carbstrong https://www.youtube.com/channel/UCG6usHVNuRbexyisxE27nDw

[8] Earthling Ed https://www.youtube.com/channel/UCVRrGAcUc7cblUzOhI1KfFg/videos

[9] “Why We Love Dogs, Eat Pigs, and Wear Cows: An Introduction to Carnism” (Melanie Joy): https://www.carnism.org/

[10] That Vegan Couple https://www.youtube.com/channel/UCV8d4At_1yUUgpsnqyDchrw

[11] Mic The Vegan https://www.youtube.com/channel/UCGJq0eQZoFSwgcqgxIE9MHw/videos

Two decades at Quest Software

Today (2nd August 2019) marks twenty years since I first sat down at a desk at Quest Software in Melbourne as a “Senior Tester”.

I’d migrated from the UK just a few weeks earlier and arrived in Australia in the middle of the late-90s tech boom. The local broadsheet newspaper, The Age, had a separate section once a week which was a hefty tome packed full of IT jobs. I sent my CV to many different recruitment companies advertising in the newspaper and started to get some interest. My scattergun approach was a response to the lack of opportunities for LISP developers (my previous skill from three years as a developer back in the UK, working on expert systems for IBM), but I did focus a little on openings for technical writers, believing I could string words together pretty well and had a decent grasp of technology.

One of the first interviews I secured for such a technical writing position was for a company I’d never heard of, Quest Software out in the Eastern suburbs of Melbourne (Ashburton, at that time). After some hasty company research, I remember catching a train there and following the recruiter’s directions to “take the staircase next to the bottle shop” to locate the Quest office (actually, one of two offices in the same street due to recent expansion). My interview would be with the head of the technical writing team and we started off with a chat over coffee in the kitchen. I didn’t even realize this was the interview, it was so relaxed and welcoming! At the end of the coffee/interview, he asked whether I’d also like to chat with the head of the testing team as she was looking for people too, so of course I took the opportunity to do so. This was again a very informal chat and I left the office with a technical writing task to complete. After completing the task, I was soon contacted to return to the Quest office to further my application for a software testing position, but not the technical writing one. A test case writing task formed part of this next slightly more formal interview, my first attempt at writing such a document! It was very shortly afterwards that the recruiter let me know I had an offer of a role as a “Senior Tester” and I couldn’t return the required paperwork fast enough – I’d found my first job in Australia!

I considered myself very fortunate to have secured a position so quickly after arriving into Australia. I was certainly lucky to find a great recruiter, Keith Phillips from Natural Solutions, and I recall visiting him in person for the first time after the deal was done with Quest, down at his office in South Melbourne. It turned out we had a common connection to the University of Wales in Aberystwyth, where I studied for both my undergraduate and doctoral degrees. We also studied in the same department (Mathematics) and, although Keith’s studies were some years before mine, many of the same department staff were still around during my time there as well. I believe Keith is still in the recruitment industry and I have fond memories of his kind, professional and unhurried approach to his work, something not common during my experiences with recruiters back then.

Back to 2nd August 1999, then, and my first day at the Quest office in Ashburton. Amidst the dotcom madness, Quest were growing rapidly and I was just one of many new starters coming through the door every week. We were sitting two to a desk for a while until we moved to bigger new digs in Camberwell, about three months after I joined. We grew rapidly and I enjoyed my time as a tester, slotting in well with a couple of different development teams and learning the ropes from other testers in the office. Being new to the testing game, I didn’t realize that we had a very “traditional” approach to testing at Quest at that time – I was part of an independent testing team under a Test Manager and spent a lot of my time writing and executing test cases, and producing lots of documentation (thanks, Rational Unified Process).

I was also learning the ropes of living in a new country and I’m indebted to my colleagues at the time for their patience and help in many aspects of me settling into a Melbourne life!

I worked across a few teams in my role as a “Senior Tester” from 1999 until 2004, when I was promoted to “Test Team Lead” and given people management responsibility for the first time, leading a small group of testers as well as retaining hands-on testing commitments. I realize now that I was a classic “process cop” and quality gate fanatic, persisting with very traditional ideas around testing and test management. This was an interesting and challenging time for me and, while I enjoyed some aspects of managing people, it was not the most enjoyable part of my job.

It was during my time as test lead that Quest ran the Rapid Software Testing course in-house with Michael Bolton, in our Ottawa office in 2007. It was a very long way to travel to attend this course, but it was truly career-changing for me and opened my eyes to a new world of what testing was for and how it could be done differently. I returned to work in Melbourne inspired to change the way we thought about testing at Quest and took every chance I could to spread the word about the great new ideas I’d been exposed to. Looking back on it now, I banged this drum pretty hard and was probably quite annoying – but challenging the status quo seemed like the right thing to do.

During a shift to adopting Scrum within some of the Melbourne teams and a move away from the independent test team, I really saw an opportunity to bring in new testing ideas from Rapid Software Testing and so, in 2008, a new position was created to enable me to focus on doing so, viz. “Test Architect”. Evangelizing the new ideas and approaches across the Melbourne teams was the main job here and the removal of people management responsibility gave me a welcome chance to focus on effecting change in our testing approach. I enjoyed this new role very much over the next five years, during which time we moved to Southbank and Quest Software was acquired by Dell to form part of their new Software business.

My role expanded in 2013 to provide test architectural guidance across the worldwide Information Management group as “Principal Test Architect”. One of the great benefits of this promotion was the chance to work closely with colleagues in other parts of the world, and I became a very regular visitor to our office in China, helping the talented and enthusiastic young testers there. I also started my conference presentation journey in 2014, a massive step outside my comfort zone! While attending a testing peer conference in Sydney in 2013, I was fortunate to meet Rob Sabourin (who was acting as content owner for the event) and he encouraged me to share my story (of implementing session-based exploratory testing with the teams in China) with a much wider audience, leading to my first conference talk at Let’s Test in Sweden the following year. This started a journey of giving conference talks all over the world, another great set of experiences, and I appreciate the support I’ve had from Quest along the way in expanding the reach of my messages.

Dell sold off its software business in late 2016 and so I was again working for Quest but this time under its new owners, Francisco Partners.

My last promotion came in 2018, becoming “Director of Software Craft” to work across all of the Information Management business in helping to improve the way we develop, build and test our software. This continues to be both a challenging and rewarding role, in which I’m fortunate to work alongside supportive peers at the Director level as we strive for continuous improvement, not just in the way we test but the way we do software development.

My thanks go to the many great colleagues I’ve shared this journey with; some have gone on to other things, but a surprising number are still here with 20+ years of service. The chance to work with many of my colleagues on the ground across the world has been – and continues to be – a highlight of my job.

I’ve been fortunate to enjoy the support and encouragement of some excellent managers too, allowing me the freedom to grow, contribute to the testing community outside of Quest, and ultimately expand my purview across all of the Information Management business unit in my capacity as Director of Software Craft.

Little did I think on 2nd August 1999 that my first job in Australia would be the only one I’d know some twenty years later, but I consider myself very lucky to have found Quest and I’ve enjoyed learning & growing both personally & professionally alongside the company. My thanks to everyone along the way who’s made this two decade-long journey so memorable!