Category Archives: Testing

Interview with Rob Sabourin for McGill University undergraduates (8th November 2022)

Thanks to the wonders of modern communication technology, I was interviewed by Rob Sabourin as part of his course on Software Engineering Practice for McGill University undergraduates in Montreal, Canada.

I took part in a small group interview with McGill students back in February 2016, as I wrote about in my blog post, A small contribution to the next generation of software engineering professionals. I did another interview in April 2022 for the same Software Engineering Practice course and was more than happy to repeat that experience when Rob invited me to join his Fall cohort.

The early evening timeslot for Rob’s lecture on “Estimation” was perfect for me in Australia and I sat in on the lecture piece before my interview.

I’ve spent a lot of time in Rob’s company over the years, in both personal and professional settings, watching him give big keynote presentations, workshops, meetup group talks and so on. But I’d never witnessed his style in the university lecture setting so it was fascinating to watch him in action with his McGill students. He covered the topic very well, displaying his deep knowledge of the history of software engineering to take us from older approaches such as function point analysis, through to agile and estimating “at the last responsible moment”. Rob talked about story points (pointing out that they’re not an agile version of function points!) and estimating via activities such as planning poker. He also covered T-shirt sizing as an alternative approach, before wrapping up his short lecture with some ideas around measuring progress (e.g. burndown charts). Rob’s depth of knowledge was clear, but he presented this material in a very pragmatic and accessible way, perfectly pitched for an undergraduate audience.

With the theory over, it was time for me to be in the hot seat – for what ended up being about 50 minutes! Rob structured the interview by walking through the various steps of the Scrum lifecycle, asking me about my first-person experience of all these moving parts. He was especially interested in my work with Scrum in highly-distributed teams (spanning Europe, Israel, the US, China and Australia) and how these team structures impacted the way we did Scrum. It was good to share my experiences and present a “real world” version of agile in practice for the students to compare and contrast with the theory.

It was a lot of fun spending time with Rob in this setting and I thank him & his students for their engagement and questions. I’m always open to sharing my knowledge and experience; it’s very rewarding and the least I can do given all the help I’ve had along the journey that is my career so far (including from Rob himself).

Reviewing Capgemini’s “World Quality Report 2022-23”

It’s that time of year again and I’ve gone through the pain of reviewing the latest edition of Capgemini’s annual World Quality Report (to cover 2022/23) so you don’t have to.

I reviewed both the 2018/19 and 2020/21 editions of their report in some depth in previous blog posts and I’ll take the same approach to this year’s effort, comparing and contrasting it with the previous two reports. Although this review might seem lengthy, it’s a mere summary of the 80 pages of the full report!

TL;DR

The survey results in this year’s report are more of the same really and I don’t feel like I learned a great deal about the state of testing from wading through it. My lived reality working with organizations to improve their testing and quality practices is quite different to the sentiments expressed in this report.

It’s good to see the report highlighting sustainability issues, a topic that hasn’t received much coverage yet but will become more of an issue for our industry I’m sure. The way we design, build and deploy our software has huge implications for its carbon footprint, both before release and for its lifetime in production usage.

The previous reports I reviewed were very focused on AI & ML, but these topics barely get a mention this year. I don’t think the promise of these technologies has been realised at large in the testing industry and maybe the lack of focus in the report reflects that reality.

It appears that the survey respondents are drawn from a very similar pool to previous reports and the lack of responses from smaller organizations means that the results are heavily skewed to very large corporate environments.

I would have liked to see some deep questions around testing practice in the survey to learn more about what’s going on in terms of human testing in these large organizations, but alas there was no such questioning here (and these organizations seem to be less forthcoming with this information via other avenues too, unfortunately).

The visualizations used in the report are very poor. They look unprofessional, the use of multiple different styles is unnecessary and many are hard to interpret (as evidenced by the fact that the authors saw fit to include text explanations of what you’re looking at on many of these charts).

I reiterate my advice from last year – don’t believe the hype, do your own critical thinking and take the conclusions from surveys and reports like this with a (very large) grain of salt. Keep an interested eye on trends but don’t get too attached to them and instead focus on building excellent foundations in the craft of testing that will serve you well no matter what the technology du jour happens to be.

The survey (pages 72-75)

This year’s report runs to 80 pages, continuing the theme of being slightly thicker each year. I looked at the survey description section of the report first as it’s important to get a picture of where the data came from to build the report and support its recommendations and conclusions.

The survey size was 1750, suspiciously exactly the same number as for the 2020/21 report. The organizations taking part again all had over 1000 employees, with the largest share (35% of responses) coming from organizations of over 10,000 employees. The response breakdown by organizational size was very similar to that of the previous two reports, reinforcing the concern that the same organizations are contributing every time. The lack of input from smaller organizations unfortunately continues.

While responses came from 32 countries, they were heavily skewed to North America and Western Europe, with the US alone contributing 16% and then France with 9%. Industry sector spread was similar to past reports, with “High Tech” (18%) and “Financial Services” (15%) topping the list.

The types of people who provided survey responses this year were also very similar to previous reports, with CIOs at the top again (24% here vs. 25% last year), followed by QA Testing Managers and IT Directors. These three roles comprised over half (59%) of all responses.

Introduction (pages 4-5)

There’s a definite move towards talking about Quality Engineering in this year’s report (though it’s a term that’s not explicitly defined anywhere) and the stage is set right here in the Introduction:

We also heartily agree with the six pillars of Quality Engineering the report documents: orchestration, automation, AI, provisioning, metrics, and skill. Those are six nails in the coffin of manual testing. After all, brute force simply doesn’t suffice in the present age.

So, the talk of the death of manual testing (via a coffin reference for a change) continues, but let’s see if this conclusion is backed up by any genuine evidence in the survey’s findings.

Executive Summary (pages 6-7)

The idea of a transformation occurring from Quality Assurance (QA) to Quality Engineering (QE) is the key message again in the Executive Summary, set out via what the authors consider their six pillars of QE:

  1. Agile quality orchestration
  2. Quality automation
  3. Quality infrastructure testing and provisioning
  4. Test data provisioning and data validation
  5. The right quality indicators
  6. Increasing skill levels

In addition to these six pillars, they also bring in the concepts of “Sustainable IT” and “Value stream management”, more on those later.

Key recommendations (pages 8-9)

The set of key recommendations from the entirety of this hefty tome comprises little more than one page of the report and the recommendations are roughly split up as per the QE pillars.

For “Agile quality orchestration”, an interesting recommendation is:

Track and monitor metrics that are holistic quality indicators across the development lifecycle. For example: a “failed deployments” metric gives a holistic view of quality across teams.

While I like the idea of more holistic approaches to quality (rather than hanging our quality hat on just one metric), the example seems like a strange choice. Deployments can fail for all manner of reasons and, on the flipside, “successful” deployments may well be perceived as low quality by end users of the deployed software.

For “Quality automation”, it’s pleasing to see a recommendation like this in such a report:

Focus on what delivers the best benefits to customers and the business rather than justifying ROI.

It’s far too common for automation vendors to make their case based on ROI (and they rarely actually mean ROI in any traditional financial use of that term) and I agree that we should be looking at automation – just like any other ingredient of what goes into making the software cake – from a perspective of its cost, value and benefits.

Moving on to “Quality and sustainable IT”, they recommend:

Customize application performance monitoring tools to support the measurement of environmental impacts at a transactional level.

This is an interesting topic and one that I’ve looked into in some depth during volunteer research work for the UK’s Vegan Society. The design, implementation and hosting decisions we make for our applications all have significant impacts on the carbon footprint of the application and it’s not a subject that is currently receiving as much attention as it deserves, so I appreciate this being called out in this report.

In the same area, they also recommend:

Bring quality to the center of the strategy for sustainable IT for a consistent framework to measure, control, and quantify progress across the social, environmental, economic, and human facets of sustainable IT, even to the extent of establishing “green quality gates.”

Looking at “Quality engineering for emerging technology trends”, the recommendations are all phrased as questions, which seems strange to me and I don’t quite understand what the authors are trying to communicate in this section.

Finally, in “Value stream management”, they say:

Make sure you define with business owners and project owners the expected value outcome of testing and quality activities.

This is a reasonable idea and an activity that I’ve rarely seen done, well or otherwise. Communicating the value of testing and quality-related activities is far from straightforward, especially in ways that don’t fall victim to simplistic numerical metrics-based systems.

Current trends in Quality Engineering & Testing (pages 10-53)

More than half of the report is focused on current trends, again around the pillars discussed in the previous sections. Some of the most revealing content is to be found in this part of the report. I’ll break down my analysis into the same sections as the report.

Quality Orchestration in Agile Enterprises

I’m still not sure what “Quality Orchestration” actually is and fluff such as this doesn’t really help:

Quality orchestration in Agile enterprises continues to see an upward trend. Its adoption in Agile and DevOps has seen an evolution in terms of team composition and skillset of quality engineers.

The first chart in this section is pretty uninspiring, suggesting that only around half of the respondents are getting 20%+ improvements in “better quality” and “faster releases” as a result of adopting “Agile/DevOps” (which are frustratingly again treated together as though they’re one thing, the same mistake as in the last report).

The next section used a subset of the full sample (750 out of the 1750, but it’s not explained why this is the case) and an interesting statistic here is that “testing is carried out by business SMEs as opposed to quality engineers” “always” or “often” by 62% of the respondents. This seems to directly contradict the report’s premise of a strong movement towards QE.

For the results of the question “How important are the following QA skills when executing a successful Agile development program?”, the legend and the chart are not consistent (the legend suggesting “very important” response only, the chart including both “very important” and “extremely important”) and, disappointingly, none of the answers have anything to do with more human testing skills.

The next question is “What proportion of your teams are professional quality engineers?” and the chart of the results is a case in point of how badly the visuals have been designed throughout this report. It’s an indication that the visualizations are hard to comprehend when they need text to try to explain what they’re showing:

Figure 04 from the World Quality Report

Using different chart styles for each chart isn’t helpful and it makes the report look inconsistent and unprofessional. This data again doesn’t suggest a significant shift to a “QE first” approach in most organizations.

The closing six recommendations (page 16) are not revolutionary and I question the connection that’s being made here between code quality and product quality (and also the supposed cost reduction):

Grow end-to-end test automation and increase levels of test automation across CI/CD processes, with automated continuous testing, to drive better code quality. This will enable improved product quality while reducing the cost of quality.

Quality Automation

The Introduction acknowledges a problem I’ve seen throughout my career and, if anything, it’s getting worse over time:

Teams prioritize selecting the test automation tools but forget to define a proper test automation plan and strategy.

They also say that:

All organizations need a proper level of test automation today as Agile approaches are pushing the speed of development up. Testing, therefore, needs to be done faster, but it should not lose any of its rigor. To put it simply, too much manual testing will not keep up with development.

This notion of “manual” testing failing to keep up with the pace of development is common, but suggests to me that (a) the purpose of human testing is not well understood and (b) many teams continue to labour under the misapprehension that they can work at an unsustainable pace without sacrificing quality.

In answering the question “What are the top three most important factors in determining your test automation approach?”, only 26% said that “Automation ROI, value realization” was one of the top 3 most important factors (while, curiously, “maintainability” came out top with 46%). Prioritizing maintainability over an ability to realize value from the automation effort seems strange to me.

Turning to benefits, all eight possible answers to the question “What proportion (if any) of your team currently achieves the following benefits from test automation?” were suspiciously close to 50%, so perhaps the intent of the question was not understood and respondents effectively flipped a coin. (For reference, the benefits offered in response to this question were “Continuous integration and delivery”, “Reduce test team size”, “Increase test coverage”, “Better quality/fewer defects”, “Reliability of systems”, “Cost control”, “Allowing faster release cycle” and “Autonomous and self-adaptive solutions”.) I don’t understand why “Reduce test team size” would be seen as a benefit, and this reflects the ongoing naivety about what automation can and can’t realistically achieve. The low level of benefits reported across the board led the authors to note:

…it does seem that communications about what can and cannot be done are still not managed as well as they could be, especially when looking to justify the return on investment. The temptation to call out the percentage of manual tests as automated sets teams on a path to automate more than they should, without seeing if the manual tests are good cases for automation and would bring value.

and

We have been researching the test automation topic for many years, and it is disappointing that organizations still struggle to make test automation work.

Turning to recommendations in this area, it’s good to see this:

Focus on what delivers the best benefits to customers and the business rather than justifying ROI.

It’s also interesting that they circle back to the sustainability piece, especially as automated tests are often run across large numbers of physical/virtual machines and for multiple configurations:

A final thought: sustainability is a growing and important trend – not just in IT, but across everything. We need to start thinking now about how automation can show its benefit and cost to the world. Do you know what the carbon footprint of your automation test is? How long will it be before you have to be able to report on that for your organization? Now’s the time to start thinking about how and what so you are ready when that question is asked.
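To make the closing questions in that quote a little more concrete, here is a minimal back-of-envelope sketch (in Python) of how a team might start estimating the footprint of its automated test runs. Every figure in it – agent count, run duration, average power draw and grid carbon intensity – is an illustrative assumption of mine, not anything taken from the report:

# Back-of-envelope estimate of the carbon footprint of an automated test run.
# All figures (agent count, run time, power draw, grid intensity) are
# illustrative assumptions, not data from the World Quality Report.

def test_run_emissions_kg(machines: int,
                          hours_per_run: float,
                          avg_power_kw: float = 0.2,
                          grid_kg_co2_per_kwh: float = 0.5) -> float:
    """Estimated kilograms of CO2e for a single automated test run."""
    energy_kwh = machines * hours_per_run * avg_power_kw
    return energy_kwh * grid_kg_co2_per_kwh

if __name__ == "__main__":
    # e.g. a nightly cross-browser suite spread across 50 agents for 2 hours
    per_run = test_run_emissions_kg(machines=50, hours_per_run=2)
    print(f"~{per_run:.1f} kg CO2e per run, ~{per_run * 365:.0f} kg CO2e per year if run nightly")

Even a rough model like this makes the trade-off visible: every additional configuration or redundant nightly run multiplies the machine-hours, and therefore the footprint, of the suite.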

Quality Infrastructure Testing and Provisioning

This section of the report is very focused on adoption of cloud environments for testing. In answer to “What proportion of non-production environments are provisioned on the cloud?”, they claim that:

49% of organizations have more than 50% of their non-production environments on cloud. This cloud adoption of non-production environments is showing a positive trend, compared to last year’s survey, when only an average of 23% of testing was done in a cloud environment

The accompanying chart does not support this conclusion, showing 39% of respondents having 26-50% of their non-production environments in the cloud and just 10% having 51-75% there (perhaps the 49% figure comes from adding those two bands together, even though only the latter represents “more than 50%”). They also conflate “non-production environment” with “testing done in a cloud environment” when comparing this data with the previous report, when in reality there could be many non-testing non-production environments inflating this number.

They go on to look at the mix of on-premise and cloud environments and whether single vendor or multiple vendor clouds are in use.

In answer to “Does your organization include cloud and infrastructure testing as part of the development lifecycle?”, the data looked like this:

World Quality Report figure 11

The authors interpreted this data to conclude that “It emerged that around 96% of all the respondents mention that cloud testing is now included as part of the testing lifecycle” – where does 96% come from? The question is a little odd and the responses even more so – the first answer, for example, suggests that for projects where applications are hosted on the cloud, only 3% of respondents mandate testing in the cloud – doesn’t that seem strange?

The recommendations in this section were unremarkable. I found the categorization of the content in this part of the report (and the associated questions) quite confusing and can’t help but wonder if participants in the survey really understood the distinctions trying to be drawn out here.

Test Data Provisioning and Data Validation

Looking at where test data is located, we see the following data (from a subset of just 580 from the total of 1750 responses, the reason is again not provided):

World Quality Report figure 16

I’m not sure what to make of this data, especially as the responses are not valid answers to the question!

The following example just shows how leading some of the questions posed in the survey really are. Asking a high-level question like this to the senior types involved in the survey is guaranteed to produce a close to 100% affirmative response:

World Quality Report figure 19

Equally unsurprising are the results of the next questions around data validation, where organizations reveal how much trouble they have actually doing it.

The recommendations in this section were again unremarkable, none really requiring the results of an expensive survey to come up with.

Quality and Sustainable IT

The sustainability theme is new to this year’s report, although the authors refer to it as though everyone knows what “sustainability” means from an IT perspective and that it’s been front of mind for some time in the industry (which I don’t believe to be the case). They say:

Sustainable quality engineering is quality engineering that helps achieve sustainable IT. A higher quality ensures less wastage of resources and increased efficiencies. This has always been a keystone focus of quality as a discipline. From a broader perspective, any organization focusing on sustainable practices while running its business cannot do so without a strong focus on quality. “Shifting quality left” is not a new concept, and it is the only sustainable way to increase efficiencies. Simply put, there is no sustainability without quality!

Getting “shift left” into this discussion about sustainability is drawing a pretty long bow in my opinion. And it’s not the only one – consider this:

Only 72% of organizations think that quality could contribute to the environmental aspect of sustainable IT. If organizations want to be environmentally sustainable, they need to learn to use available resources optimally. A stronger strategic focus on quality is the way to achieve that.

We should be mindful when we see definitive claims, such as “the way” – there are clearly many different factors involved in achieving environmental sustainability of an organization and a focus on quality is just one of them.

I think the results of this question about the benefits of sustainable IT say it all:

World Quality Report figure 22

It would have been nice to see the environmental benefits topping this data, but it’s more about the organization being seen to be socially responsible than it is about actually being sustainable.

When it comes to testing, the survey explicitly asked whether “sustainability attributes” were being covered:

World Quality Report figure 23

I’m again suspicious of these results. Firstly, it’s another of the questions only asked of a subset of the 1750 participants (and it’s not explained why). Secondly, the results are all very close to 50% so might simply indicate a flip of the coin type response, especially to such a nebulous question. The idea that even 50% of organizations are deliberately targeting testing on these attributes (especially the efficiency attributes) doesn’t seem credible to me.

One of the recommendations in this section is again around “shift left”:

Bring true “shift left” to the application lifecycle to increase resource utilization and drive carbon footprint reduction.

While the topic of sustainability in IT is certainly interesting to me, I’m not seeing a big focus on it in everyday projects. Some of the claims in the report are hard to believe, but I acknowledge that my lack of exposure to IT projects in such big organizations may mean I’ve missed this particular boat already setting sail.

Quality Engineering for Emerging Technologies

This section of the report focuses on emerging technologies and impacts on QE and testing. The authors kick off with this data:

World Quality Report figure 26

This data again comes from a subset of the participants (1000 out of 1750) and I would have expected the “bars” for Blockchain and Web 3.0 to be the same length if the values are the same. The report notes that “…Web 3.0 is still being defined and there isn’t a universally accepted definition of what it means” so it seems odd that it’s such a high priority.

I note that, in answer to “Which of the following are the greatest benefits of new emerging technologies improving quality outcomes?”, 59% chose “More velocity without compromising quality”, so the age-old desire to go faster while keeping or improving quality persists!

The report doesn’t make any recommendations in this area, choosing instead to ask pretty open-ended questions. I’m not clear what value this section added; it feels like crystal ball gazing (and, indeed, the last part of this section is headed “Looking into the crystal ball”!).

Value Stream Management

The opening gambit of this section of the report reads:

One of the expectations of the quality and test function is to assure and ensure that the software development process delivers the expected value to the business and end-users. However, in practice, many teams and organizations struggle to make the value outcomes visible and manageable.

Is this your expectation of testing? Or your organization’s expectation? I’m not familiar with such an expectation being set against testing, but acknowledge that there are organizations that perhaps think this way.

The first chart in this section just makes me sad:

World Quality Report figure 30

I find it staggering that only 35% of respondents feel that detecting defects before going live is even in their top three objectives from testing. The authors had an interesting take on that, saying “Finding faults is not seen as a priority for most of the organizations we interviewed, which indicates that this is becoming a standard expectation” – mmm.

The rest of this section focused more on value and, in particular, the lean process of “value stream mapping”. An astonishing 69% of respondents said they use this approach “almost every time” when improving the testing process in Agile/DevOps projects – this high percentage doesn’t resonate with my experience, but again it may be that larger organizations have taken value stream mapping on board without me noticing (or without publicizing their love of it broadly enough for me to notice).

Sector analysis (pages 54-71)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors (almost identically to last year) and discuss particular trends and challenges within each. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy, utilities, natural resources and chemicals
  • Financial services
  • Healthcare and life sciences
  • Manufacturing
  • Public sector
  • Technology, media and telecoms

Four metrics are given in summary for each sector, viz. the percentage of:

  • Agile teams with professional quality engineers integrated
  • Teams achieving better reliability of systems through test automation
  • Agile teams with test automation implemented
  • Teams achieving faster release times through test automation

It’s interesting to note that, for each of these metrics, almost all the sectors reported around the 50% mark, with financial services creeping a little higher. These results seem quite weak and it’s remarkable that, after so long and so much investment, only about half of Agile teams report that they’ve implemented test automation.

Geography-specific reports

The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find it particularly revealing, though this comment stood out (emphasis is mine):

We see other changes, specifically to quality engineering. In recent years, QE has been decentralizing. Quality practices were merging into teams, and centers of excellence were being dismantled. Now, organizations are recognizing that centralized command and control have their benefits and, while they aren’t completely retracing their steps, they are trying to find a balance that gives them more visibility and greater governance of quality assurance (QA) in practice across the software development lifecycle.

Attending the AST’s virtual “lean coffee” (September 21, 2022)

The Association for Software Testing ran another lean coffee over a Zoom meeting on 21st September, designed primarily for the European timezone (in their morning) but also again being convenient for me to attend in Australia.

For anyone unfamiliar with the concept of a Lean Coffee, it’s an agenda-less meeting in which the participants gather, build an agenda and then talk about the topics one by one (usually with a timebox around each topic, which can be extended if there’s energy around it).

After my good experience of their first virtual lean coffee on May 24, I was really looking forward to this session – and it was certainly well worth attending.

This lean coffee was facilitated by new AST board member Trisha Chetani using Miro. We spent a chunk of time at the start of the session getting set up in Miro, suggesting topics for discussion and then voting on them (again a process we had to learn in Miro).

It was a larger group in this session than last time, which had its pros and cons. There were more topics to choose from and more diverse opinions & experiences being shared, but there was also the inevitable Zoom meeting “talking over each other” issue which made some of the discussions a little frustrating. It was good to find myself not being the only Australian in attendance with Anne-Marie Charrett joining the session and also great to see a wide range of tester experience, from newbies to, erm, more established testers.

AST lean coffee Zoom meeting screenshot

We covered the following three topics in the lean coffee:

  • Teaching developers about testing
  • A book, blog or podcast that inspired you recently
  • How can a test team show their value?

I enjoyed the session and the topics we managed to discuss. The hour went really quickly and I got some valuable different perspectives especially on the final topic we covered. Thanks to the AST for organizing this session at a time that was reasonable for folks on my side of the world to attend. I’m looking forward to more lean coffees in the future (when hopefully we’ve nailed down the tech so we can focus most of the hour on what we all love, talking testing!).

Note that one of the lean coffee attendees, AST board member James Thomas, has penned an excellent blog which covers the content of the discussions from this session in some detail.

Testing & “Exponential Organizations” (Salim Ismail, Michael S. Malone & Yuri Van Geest)

I’m not sure how I came across the book Exponential Organizations (by Salim Ismail, Michael S. Malone & Yuri Van Geest) but it ended up on my library reservation list and was a fairly quick read. The book’s central theme is that new technologies allow for a new type of organization – the “ExO” (Exponential Organization) – that can out-achieve more traditional styles of company. The authors claim that:

An ExO can eliminate the incremental, linear way traditional companies get bigger, leveraging assets like community, big data, algorithms, and new technology into achieving performance benchmarks ten times better than its peers.

This blog post isn’t intended to be an in-depth review of the book which, although I found interesting, was far too laden with buzzwords to make it an enjoyable (or even credible) read at times. The content hasn’t aged well, as you might expect when it contains case studies of these hyper-growth companies – many of which went on to implode. A second edition of the book, based on a new study of ExOs from 2021, is due later in 2022, though.

The motivation for this blog post arose from the following quote (which appears on page 140 of the 2014 paperback edition):

One of the reasons Facebook has been so successful is the inherent trust that the company has placed in its people. At most software companies (and certainly the larger ones), a new software release goes through layers upon layers of unit testing, system testing and integration testing, usually administered by separate quality assurance departments. At Facebook, however, development teams enjoy the full trust of management. Any team can release new code onto the live site without oversight. As a management style, it seems counterintuitive, but with individual reputations at stake – and no-one else to catch shoddy coding – Facebook teams end up working that much harder to ensure there are no errors. The result is that Facebook has been able to release code of unimaginable complexity faster than any other company in Silicon Valley history. In the process, it has seriously raised the bar.

I acknowledge that the authors of this book are not well versed in software testing and the focus of their book is not software development. Having said that, writing about testing as they’ve done here is potentially damaging in the broader context of those tangential to software development who might be misled by such claims about testing. Let’s unpack this a little more.

The idea that “separate quality assurance departments” were still the norm when this book was written (2014) doesn’t feel quite right to me. The agile train was already rolling along by then and the move to having testers embedded within development teams was well underway. What they’re describing at Facebook sounds more in line with what Microsoft, as an example, were doing around this time with their move to SDETs (Software Development Engineers in Test) as a different model to having embedded testers focused on the more human aspects of testing.

The idea that “development teams enjoy the full trust of management” and “Any team can release new code onto the live site without oversight” is interesting with the benefit of hindsight, given the many public issues Facebook has had around some of the features and capabilities included within its platform. There have been many questions raised around the ethics of Facebook’s algorithms and data management (e.g. the Cambridge Analytica scandal), perhaps unintended consequences of the free rein that has resulted from this level of trust in the developers.

It’s a surprisingly common claim that developers will do a better job of their own testing when there is no obvious safety net being provided by dedicated testers. I’ve not seen evidence to support this but acknowledge that there might be some truth to the claim for some developers. As a general argument, though, it doesn’t feel as strong to me as arguing that people specializing in testing can both help developers to improve their own testing game while also adding their expertise in human testing at a higher level. And, of course, it’s nonsense to suggest that any amount of hard work – by developers, testers or anybody else – can “ensure there are no errors”.

While I can’t comment on the validity of the claim that Facebook has released complex software “faster than any other company in Silicon Valley history”, it doesn’t seem to me like a claim that has much relevance even if it’s true. The claim of “unimaginable complexity”, though, is much more believable, given the benefit of hindsight and the evidence suggesting they probably don’t fully understand what they’ve built either (and we know that there are emergent behaviours inherent in complex software products, as covered expertly by James Christie in his many blog posts on this topic).

The closing sentence claiming that Facebook has “seriously raised the bar” doesn’t provide any context, so what might the authors be referring to here? Has Facebook raised the bar in testing practice? Or in frequently releasing well-considered, ethically-responsible features to its users? Or in some other way? I don’t consider Facebook to be high quality software or a case study of what great testing looks like, but maybe the authors had a different bar in mind that has been raised by Facebook in the area of software development/delivery/testing.

In wrapping up this short post, it was timely that Michael Bolton posted on LinkedIn about the subject matter that is so often lacking in any discussion or writing around software testing today – and his observations cover this paragraph on testing at Facebook perfectly. I encourage you to read his LinkedIn post.

“The Great Post Office Scandal” (Nick Wallis)

I’ve been following the story of the UK Post Office and its dubious prosecutions of sub-postmasters based on “evidence” of their wrongdoings from its IT system, Horizon, for some years.

My mother worked in the Post Office all of her working life and I also used to work there part-time during school and university holidays. There were no computer terminals on the counters back then; it was all very much paper trail accounting and I remember working on the big ledger when it came to balancing the weekly account every Wednesday afternoon (a process that often continued well into the evening).

Nick Wallis’s book covers the story in incredible detail, describing how the Post Office’s Horizon system (built by Fujitsu under an outsourcing arrangement) was badly managed by both the Post Office and Fujitsu (along with poor Government oversight) and resulted in thousands of innocent people having their lives turned upside down. It is both a moving account of the personal costs shouldered by so many individuals as well as being a reference piece for all of us in IT when it comes to governance, the importance of taking bugs seriously, and having the courage to speak up even if the implications of doing so might be personally difficult.

It’s amazing to think this story might never have been told – and justice never been served – were it not for a few heroes who stepped up, made their voices heard and fought to have the truth exposed. The author’s dedication to telling this story is commendable and he’s done an incredible job of documenting the many travesties that comprise the full awfulness of this sorry tale. This case is yet another example of the truth of Margaret Mead’s quote:

Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.

One of the more surprising aspects of the story for me was the fact that very complex IT systems like Horizon have been considered in UK law (since 1990) to be “mechanical instruments” and they’re assumed to be working correctly unless shown otherwise. This was a key factor in the data shown by Horizon being trusted over the word of sub-postmasters (many of whom had been in the loyal service of the Post Office in small communities for decades).

Jones wanted the Law Commission’s legal presumption (that ‘in the absence of evidence to the contrary, the courts will presume that mechanical instruments were in order at the material time’ [from 1990]) modified to reflect reality. He told the minister, ‘If people found it difficult to prove a computer was operating reliably in the early 1990s, we can only imagine how difficult it might be to do that today, with the likes of machine-learning algorithms coming to conclusions for reasons even the computer programmer doesn’t understand.’

Darren Jones, chair of the BEIS Select Committee, p. 456 of “The Great Post Office Scandal”

It’s now clear that the complex systems we all build and engage with today (and even back when Horizon was first rolled out) have emergent behaviours that can’t be predicted. The Post Office’s continued denial that there were any bugs in Horizon (and Fujitsu’s lack of co-operation in providing the evidence to the contrary) seems utterly ridiculous – and it was this denial that allowed so many miscarriages of justice in prosecuting people based on the claimed infallibility of Horizon.

Program testing can be used to show the presence of bugs, but never to show their absence!

Edsger W. Dijkstra

Reading this story really made me think about what the onus on testers is in terms of revealing important problems and advocating for them to be addressed. The tragic cases described in the book illustrate how important it is for testing to be focused on finding important problems in the software under test, not just proving that it passes some big suite of algorithmic checks. Fujitsu, under duress, eventually had to disclose sets of bug reports from the Horizon system and acknowledged that there were known bugs that could have resulted in the balance discrepancies that resulted in so many prosecutions for theft. There are of course much bigger questions to be answered as to why these bugs didn’t get fixed. As a tester raising an issue, there’s only so far you can go in advocating for that issue to be addressed and your ability to do that is highly context-dependent. In this case, even if the testers were doing a great job of finding and raising important problems and advocating for them to be fixed, the toxic swill of Fujitsu, Post Office and government in which everyone was swimming obviously made it very difficult for those problems to get the attention they deserved.

Coming back to my anchors that are the principles of context-driven testing, these seem particularly relevant:

  • People, working together, are the most important part of any project’s context.
  • Projects unfold over time in ways that are often not predictable.
  • Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

I think part of our job as testers is not only to test the software, but also to test the project and the processes that form the context around our development of the software. Pointing out problems in the project is no easy task, especially in some contexts. But, by bearing in mind cases like the Post Office scandal, maybe we can all find more courage to speak up and share our concerns – doing so could quite literally be the difference between life and death for someone negatively impacted by the system we’re working on.

It would be remiss of me not to mention the amazing work of James Christie in discussing many aspects of the Post Office scandal, bringing his unique experience in both auditing and software testing to dig deep into the issues at hand. I strongly encourage you to read his many blog posts on this story (noting that he has also written an excellent review of the book).

“The Great Post Office Scandal” is available direct from the publisher and the author maintains the Post Office Scandal website to share all the latest news of what is, incredibly, still an ongoing story.

My first virtual “lean coffee” (May 24, 2022)

The Association for Software Testing ran a lean coffee over a Zoom meeting on 24th May, designed primarily for the European timezone (in their morning) but also being convenient for me to attend in Australia.

While I’ve attended Lean Coffee sessions at conferences and other events over the years, this was my first experience of this style of meeting in a virtual format. For those unfamiliar with the concept of a Lean Coffee, it’s an agenda-less meeting in which the participants gather, build an agenda and then talk about the topics one by one (usually with a timebox around each topic, which can be extended if there’s energy around it).

The meeting was facilitated by Joel Montvelisky using Lean Coffee Table and we all learned our way around the tool as we went along. It was a small but very engaged group, and it felt like the perfect size for me to both learn from others as well as feeling comfortable to contribute in (what I hope was) a meaningful way to the discussions.

We covered the following four topics in the lean coffee:

  • Why do people want to speak at conferences, and can we support them to get what they need in other ways?
  • What one thing would you prefer never to have to do again (as a tester)?
  • Working conditions for testers
  • Tactics for learning while testing

The virtual experience was of course a little different to an in-person one. Firstly, it was later in the day for me so well past the time I’d want to be drinking coffee! But, more seriously, I think the format lends itself well to the virtual environment and perhaps enables more reserved participants to engage more easily than in a physical meeting.

I really enjoyed the session, the time went really quickly and I felt like I got some interesting different perspectives on pretty much all of the topics we covered. I thank the AST for organizing it at a time that was reasonable for folks on my side of the world to attend and I hope to attend more lean coffees in the future (and also hope to see more Aussies and Kiwis in attendance!).

Note that one of the lean coffee attendees, AST board member James Thomas, has penned an excellent blog which covers the content of the discussions from this session in some detail.

Proof that I can (surprisingly) still be surprised: “Fake Experience on Software Testing”

After 25 years or so in the IT industry, and with the vast majority of my experience being in testing, I rarely find myself surprised by even the most nonsensical stuff that crosses my virtual desk. It often feels like the same mistakes and traps are being made and fallen into by a new generation of testers or the same old things get rebranded as the latest shiny thing.

I’ve seen the “testing is dead” narrative come and go. I’ve seen automation via screen comparison dismissed as a ridiculous idea, only for it now to be touted as the next big thing. I’ve witnessed the ridicule around record & playback as an automation technique, while I’m now inundated with marketing for automation tools that magically instruct computers how to do stuff apparently “without any code”. I’ve become conditioned to the nonsense and I’m well aware that everything comes and everything goes, so my attention is rarely distracted by such obviously ridiculous “trends” in the testing industry.

But I recently came to realise that my capability to be surprised in this industry hasn’t been completely destroyed; it has merely been dormant. Thanks to Eric Proegler posting a link to a YouTube video on the Association for Software Testing’s Slack, my surprise (and, along with it, a hefty dose of dismay) has been awoken from its slumber.

The video in question is titled “Fake Experiance on Software Testing” by Venkata Krishna (published on February 6th, 2022):

While I’m at pains not to give this abomination any oxygen, the video has already been viewed over 131,000 times as I write so calling it out here seems worthwhile even if it adds another click or two to this incredible view count. It’s worth noting that the video has over 2,700 likes and has received over 270 comments, some (rightly) calling it out as promoting unethical practice but the vast majority sadly praising it as being useful.

The typo in the titling of the video and the fact that it was recorded on a computer running an unregistered copy of Windows gives an indication of the standard of the material to come. Venkata’s introduction sets out his purpose for producing this video:

“how to put the fake experience on software testing… what are all the major challenges you may face and how to overcome those challenges – and how to happily do your work even though you go with fake experience”.

This stunning opening gambit originally made me think that the video must itself be a fake or some kind of joke piece, but alas I was mistaken. Disturbingly, he claims that the video was made in response to requests from his subscribers.

His early advice is to do one “real-time project” for manual and automated testing before claiming fake experience, claiming that “there is no issue” in doing this to get into a company (this claim is repeated frequently throughout the video).

He advises applicants to approach those “good” small consultancies who can provide fake experience documents (and even payslips, bank statements, etc.) to help them clear background checks by employers when applying for jobs (Venkata reminds his viewers not to ask him specifically for the names of consultancies providing such “services”).

Heading into the workplace after clearing any checks using the fake experience documents, he suggests that the tester be careful with the questions they ask or “risk being identified as a fake resource”. He claims that “automation is all about Selenium” and, for any question they might be asked, the solution is “already in Google”. Both “manual” and “automated” testing are described in very simplistic ways, as requiring little skill or knowledge to “get by” without arousing suspicion as a “fake resource”.

If the tester can survive two to three months without being found out, then “no-one will stop you” and if they somehow manage to do the work, “no-one will touch you”. He mentioned that there are so many jobs in the US and India with so much demand that it’s easy to use fake experience to land a position.

One of the more obvious challenges of this approach (!) is that “you might not be able to do the work”, in which case Venkata advises relying on friends with actual experience as testers or utilizing a “job support service”. If the tester really can’t do the work, the employer might re-do their background checks and flag them as fake. In this case, he said HR will be the first to start asking questions, such as “is your experience real or fake?”, and the tester should always say their experience is real and suggest that HR contact their previous employer. Acting confident (while you lie) here is the key, apparently. If HR really do re-check and the tester’s fake experience is revealed, the tester should offer to resign and then leave. There is no problem here, we’re assured: “It’s software, everything is soft, you won’t go to jail”.

While Venkata rightly suggests that it can be hard to find your first testing job (and claims that there are “no fresher jobs” in his market), these facts don’t justify encouraging candidates to misrepresent themselves (essentially, committing fraud). This reflects badly not only on the people following this path, but more generally on the testing industry. The reputation of some outsourced testing companies already isn’t great and this kind of “advice” to fraudulently join them only serves to devalue testing and diminish its reputation even further.

It is on those of us who employ testers to provide entry-level opportunities into our industry, maybe via apprenticeship style models, where keen new testers can learn their craft and not have to be dishonest in doing so. There are excellent examples in India of companies and communities around testing who are treating the craft with care and promoting its value – Moolya and The Test Tribe immediately come to my mind. Publishing blogs, articles and other materials that can point potential new entrants into testing towards better quality resources seems important to me. This blog is a small attempt to do this, but these words are very unlikely to attract 100,000+ readers!

I remain surprised that anyone would publish a video recommending this fraudulent behaviour, along with the many justifications they make for doing so. A job in the testing industry can be varied, fulfilling and intellectually challenging – it’s not a place to live a fake life. For anyone looking to enter the testing industry, I hope you will choose to look for guidance and help from professional testers passionate about their craft rather than doing yourself a disservice by “faking it”.

ER: acting as a Rapid Software Testing Explored “peer advisor” (7-10 February 2022)

A relatively rare scheduling of the online version of the Rapid Software Testing Explored course for Australasian timezones presented me with an invitation from presenter Michael Bolton to act as a “peer advisor” for the course running from 7-10 February.

I had already participated in RST twice before, thanks to in-person classes with Michael in Canada back in 2007 and then again with James Bach in Melbourne in 2011, so the opportunity to experience the class online and in its most current form was doubly appealing. I was quick to accept Michael’s offer to volunteer for the duration of the course.

While the peer advisor role is voluntary and came with no obligation to attend for any particular duration, I made room in my consulting schedule to attend every session over the four days (with the consistent afternoon scheduling making this a practical option for me). Each afternoon consisted of three 90-minute sessions with two 30-minute breaks, making a total of 18 hours of class time. The class retailed at AU$600 for paying participants so offers incredible value in its virtual format, in my opinion.

As a peer advisor, I added commentary here and there during Michael’s sessions but contributed more during exercises in the breakout rooms, nudging the participants as required to help them. I was delighted to be joined by Paul Seaman and Aaron Hodder as peer advisors, both testers I have huge respect for and who have made significant contributions to the context-driven testing community. Eugenio Elizondo did a sterling job as PA, being quick to provide links to resources, etc. as well as keeping on top of the various administrivia required to run a smooth virtual class.

The class was attended by over twenty students from across Australia, New Zealand and Malaysia. Zoom was used for all of Michael’s main sessions with breakout rooms being used to split the participants into smaller groups for exercises (with the peer advisors roaming these rooms to assist as needed). Asynchronous collaboration was facilitated via a Mattermost instance (an open source Slack clone), which seemed to work well for posing questions to Michael, documenting references, general chat between participants, etc.

While no two runs of an RST class are the same, all the “classic” content was covered over the four days, including testing & checking, heuristics & oracles, the heuristic test strategy model & product coverage outlines, shallow & deep testing, session-based test management, and “manual” & “automated” testing. The intent is not to cover a slide deck but rather to follow the energy in the (virtual) room and tailor the content to maximize its value to the particular group of participants. This nature of the class meant that even during this third pass through it, I still found the content fresh, engaging and valuable – and it really felt like the other participants did too.

The various example applications used throughout the class are generally simple but reveal complexity (and I’d seen all of them before, I think). It was good to see how other participants dealt with the tasks around testing these applications and I enjoyed nudging them along in the breakouts to explore different ways of thinking about the problems at hand.

The experience of RST in an online format was of course quite different to an in-person class. I missed the more direct and instant feedback from the faces and body language of participants (not everyone decided to have their video turned on either) and I imagine this also makes this format challenging for the presenter. I wondered sometimes whether there was confusion or misunderstanding that lay hidden from obvious view, in a way that wouldn’t happen so readily if everyone was physically present in the same room. Michael’s incredibly rich, subtle and nuanced use of language is always a joy for me, but I again wondered if some of this richness and subtlety was lost especially for participants without English as their first language.

The four hefty afternoons of this RST class passed so quickly and I thoroughly enjoyed both the course itself as well as the experience of helping out in a small way as a peer advisor. It was fun to spend some social time with some of the group after the last session in a “virtual pub” where Michael could finally enjoy a hard-earned beer! The incredible pack of resources sent to all participants is also hugely valuable and condenses so much learned experience and practical knowledge into forms well suited to application in the day-to-day life of a tester.

Since I first participated in RST back in 2007, I’ve been a huge advocate for this course and experiencing the online version (and seeing the updates to its content over the last fifteen years) has only made my opinions even stronger about the value and need for this quality of testing education. In a world of such poor messaging and content around testing, RST is a shining light and a source of hope – take this class if you ever have the chance (check out upcoming RST courses)!

(I would like to publicly offer my thanks to Michael for giving me the opportunity to act as a peer advisor during this virtual RST class – as I hope I’ve communicated above, it was an absolute pleasure!)

A tester’s critique of “The 2021 State of Software Quality: The View from Enterprise Leaders & Followers”

I really should know better, but I decided to watch a webinar titled The 2021 State of Software Quality: The View from Enterprise Leaders & Followers from MicroFocus and Enterprise Management Associates, Inc. The promo spiel for the webinar read as follows:

The rapid rise of the digital economy became twice as important after layering on a worldwide pandemic. With every company having to become a software company, enterprise application development speed, volume, cost, quality, and risk are key determinants that define who survives and who does not. The pressure on application development teams to build more software faster and cheaper often runs counter to the objectives of software quality and managing risk.

Join Steve Hendrick, Research Director at Enterprise Management Associates, to hear key findings from a recent worldwide survey about software quality. This webinar will look at the characteristics and differences between software quality leaders and followers. Key to this discussion of software quality is the impact that people, process, and products are having on enterprise software quality. Completing this view into software quality will be a discussion of best and worst practices and their differences across three levels of software quality leadership.

While the opening gambit of this promo literally makes no sense – “The rapid rise of the digital economy became twice as important after layering on a worldwide pandemic” – the webinar sounded like it at least held some promise in terms of identifying differences between those “leading” in software quality and those “following”.

The survey data presented in this webinar was formed from 316 responses by Directors, VPs and C-level executives of larger enterprises (2000 employees or more). The presenter noted specifically that the mean enterprise size in the survey was over 11,000 employees and that this was a good thing, since larger enterprises have a “more complex take on DevOps”. This focus on garnering responses from people far away from the actual work of developing software in very large enterprises immediately makes me suspicious of the value of the responses for practitioners.

Unusually for surveys and reports of this type, though, the webinar started in earnest with a slide titled “What is Software Quality”:

While the three broad software quality attributes seem to me to represent some dimensions of quality, they don’t answer the question of what the survey means when it refers to “software quality”. If this was the definition given in the survey to guide participants, then it feels like their responses are likely skewed to thinking solely about these three dimensions and not the many more that are familiar to those of us with a broader perspective aligned with, for example, Jerry Weinberg’s definition of quality as “Value to some person”.

The next slide was particularly important as it introduced the segmentation of respondents into Outliers, Laggards, Mainstreamers and Leaders based on their self-assessment of the quality of their products:

This “leadership segmentation” is the foundation for the analysis throughout the rest of the webinar, yet it is completely based on self-assessment! Note that over half (55%) self-assess their quality as 8/10, 9/10 or even 10/10, while only 11% rate themselves as 5/10 or below. This looks like a classic example of cognitive bias and illusory superiority. This poor basis for the segmentation used so heavily in the analysis which follows is troubling.

Moving on, imagine being faced with answering this question: “How does your enterprise balance the contribution to software quality that is made by people, policy, processes, and products (development and DevOps tools)?” You might need to read that again. The survey responses came back as follows:

Call me cynical, but this almost-impossible-to-answer question looks like it resulted in most people simply giving equal weight to all five choices, ending up with roughly 20% in each category.

It was soon time to look to “agile methodologies” for clues as to how “adopting” agile relates to quality leadership segmentation:

It was noted here that the “leaders” (again, remember this is respondents self-assessing themselves as quality leaders) were most likely to represent enterprises in which “Nearly all teams are using agile methods”. A reminder that correlation does not imply causation feels in order at this point.

The revelations kept coming; next, let’s look at the “phases” in which enterprises are “measuring quality”:

The presenter made a big deal here about the “leaders” showing much higher scores for measuring quality in the requirements and testing management “phases” than the “mainstreamers” and “laggards”. Of course, this provided the perfect opportunity to propagate the “cost of change” curve nonsense, with the presenter claiming it is “many times more expensive to resolve defects found in production than found during development”. He also sagely suggested that the leaders’ focus on requirements management and testing was part of their “secret sauce”.

When the surveyed enterprises were asked about their “software quality journey over the last two years”, the results looked like this:

The conclusion here was that “leaders” are establishing centres of excellence for software quality. There was a question about this during the short Q&A at the end of the deck presentation, asking what such a function actually does, to which the presenter said a CoE is “A good way to jumpstart an enterprise thinking about quality, it elevates the importance of quality in the enterprise” and “raises visibility of the fact that software quality is important”. An interesting but overlooked part of the data on this slide in my opinion is that about 20% of enterprises (even the “leaders”) said that their “focus on agile and DevOps has not had any impact on software quality”. I assume this data didn’t fit the narrative for this highly DevOps-focused webinar.

Attention then turned to tooling, firstly looking at development tools:

I find it interesting that all of these different types of development tooling are considered “DevOps tools”. It’s surprising that only around half of the “laggards” even claim to use source code management tools (it’s not clear why “mainstreamers” were left off this slide) and that only just over half of the “leaders” are using continuous integration tools. These statistics seem contrary to the idea that even the leaders are really mature in their use of DevOps tooling. (It’s also worth noting that there is considerable wiggle room in the wording of this question, “regularly used or will be used”.) Deployment, rather than development, tooling was also analyzed but I didn’t spot anything interesting there, apart from the very fine-grained breakdown of tooling types (resulting in an incredible 19 different categories).

The presenter then examined why software quality was improving:

Notice that the slide is titled “Why has your software quality been improving since 2019?” while the actual survey question was “Why has your approach to software quality improved since the beginning of 2019?” Improvements in approach may or may not result in quality improvements. Some of the choices offered in response to this question don’t really answer it, but clearly the idea was to suggest that adding more DevOps process and tooling leads to quality improvements, while the data suggests otherwise (business drivers feature more prominently).

Moving from the “why” to the “how” came next (again with the same subtle difference between the slide title and the survey question):

There are again business/customer drivers behind most of these responses, but increased automation and use of tooling also show up highly. A standout is the “leaders” highlighting “our multifunctional teams have learned how to work more effectively together” as a way in which they had improved quality.

Some realizations/revelations about quality followed:

There were at least signs here of enterprises accepting that improving quality takes significant effort, not just from additional testing and tooling, but also from management and the business. The presenter focused on the idea of “shifting left” and there was a question on this during the Q&A too, asking “how important is shift left?”, to which the presenter said it was “very important to leaders, it’s a best practice and it makes intuitive sense”. But he also noted an additional finding in the deeper data: enterprises found it a “challenge in piling more responsibility on developers, made it harder for developers to get their job done, it alienates them and gets them bogged down with activities that are not coding” and they were sensitive to these concerns. From that response, it doesn’t sound to me like the “leaders” have really grasped the concept of “shift left” as I understand it, and they are still not viewing some types of testing as being part of developers’ responsibilities. The final entry on this slide also stood out to me (but was not highlighted by the presenter), with 17% of the “leaders” saying that “software quality is a problem if it is too high”. Interesting!

Presentations like this usually end up talking about best practices and this webinar was no different:

The presenter focused on the high rating given by the “leaders” to “adoption of quality standards (such as ISO)” but overlooked what I took as one of the few positives in any of the data in the webinar, namely that adopting “a more comprehensive approach to software testing” was a practice generally seen as worth continuing.

The deck wrapped up with a summary of the “Best Practices of Software Quality Leaders”:

These don’t strike me as actually being best practices, but rather statements and dubious conclusions drawn from the survey data. Point 4 on this slide – “Embracing agile and improving your DevOps practice will improve your software quality” – was highlighted (of course) but is seriously problematic. Remember that the self-assessed “leaders” claimed their software quality was increasing due to expanding their “DevOps processes and toolchain”, but correlation does not imply the causation that this point on the final slide suggests. The presenter reinforced this apparent causality during the Q&A when asked “what is one thing we can do to improve quality?”. He said his preference is to understand the impact that software quality has on the business, but his pragmatic answer is to “take stock of your DevOps practice and look for ways to improve it, since maturing your DevOps practice improves quality.”

There were so many issues for me with the methodology behind the data presented in this webinar. The self-assessment of software quality produced by these enterprises makes the foundation for all of the conclusions drawn from the survey data very shaky in my opinion. The same enterprises who probably over-rated themselves on quality are also likely to have over-rated themselves in other areas (which appears to be the case throughout). There is also evidence of mistakenly taking correlation to imply causation, e.g. suggesting that adding more DevOps process and tooling improves quality. (Even claiming correlation is dubious given the self-assessment problem underneath all the data.)

There’s really not much in the results of this survey to help me understand what differences in approach, process, practice, tooling, etc. might lead to higher quality outcomes. I’m not at all surprised or disappointed in feeling this way, as my expectations of such fluffy marketing-led surveys are very low (based on my experience of critiquing a number of them over the last few years). What does disappoint me is not the “state of software quality” supposedly evidenced by such surveys, but rather the state of the quality of dialogue and critical thinking around testing and quality in our industry.

The webinar can be viewed from https://content.microfocus.com/optimize-devops-tb/2021-software-quality (note that registration is required).

The power of the pause

While writing my last blog post, a review of Cal Newport’s “Deep Work” book, I reminded myself of a topic I’ve been meaning to blog about for a while, viz. the power of the pause.

Coming at this from a software development perspective, I mentioned in the last blog post that:

“There seems to be a new trend forming around “deployments to production” as being a useful measure of productivity, when really it’s more an indicator of busyness and often comes as a result of a lack of appetite for any type of pause along the pipeline for humans to meaningfully (and deeply!) interact with the software before it’s deployed.”

I often see the goal of deploying every change directly (and automatically) to production adopted without any compelling reasons for doing so, apart from maybe “it’s what <insert big name tech company here> does”, even though you’re likely nothing like those companies in most other important ways. What’s the rush? While there are some cases where a very quick deployment to production is of course important, the idea that every change needs to be deployed in the same way is questionable for most organizations I’ve worked with.

Automated deployment pipelines can be great mechanisms for de-risking the process of getting updated software into production, removing opportunities for human error and making such deployments less of a drama when they’re required. But, just because you have this mechanism at your disposal, it doesn’t mean you need to use it for each and every change made to the software.

I’ve seen a lot of power in pausing along the deployment pipeline to give humans the opportunity to interact with the software before customers are exposed to the changes. I don’t believe we can automate our way out of the need for human interaction for software designed for use by humans, but I’m also coming to appreciate that this is increasingly seen as a contrarian position (and one I’m happy to hold). I’d ask you to consider whether there is a genuine need for automated deployment of every change to production in your organization and whether you’re removing the opportunity to find important problems by removing humans from the process.
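To make that pause concrete, here’s a minimal sketch in Python of a delivery flow where the automation takes a change as far as a staging environment and a human explicitly decides when it goes any further. The build and deploy functions are hypothetical placeholders standing in for whatever your real pipeline does, not any particular tool’s API:

# A minimal sketch of a delivery flow with a deliberate human pause before production.
# The build/deploy functions are hypothetical placeholders, not a real tool's API.

def build_and_check() -> bool:
    """Run the automated build and checks; return True if they pass."""
    print("Running automated build and checks...")
    return True  # placeholder result

def deploy_to_staging() -> None:
    print("Deploying to staging so humans can explore the change...")

def human_approval(prompt: str) -> bool:
    """The pause: a person interacts with the software and explicitly decides."""
    return input(f"{prompt} [y/N]: ").strip().lower() == "y"

def deploy_to_production() -> None:
    print("Deploying to production...")

def pipeline() -> None:
    if not build_and_check():
        print("Automated checks failed; stopping here.")
        return
    deploy_to_staging()
    # Nothing reaches customers until someone has actually used the software and said so.
    if human_approval("Has the change been explored on staging and is it ready to ship?"):
        deploy_to_production()
    else:
        print("Holding the release for now; no drama.")

if __name__ == "__main__":
    pipeline()

In a real CI/CD tool this pause would more likely be a manual approval step or a protected environment rather than a script prompting on the command line, but the shape is the same: the automation does the heavy lifting up to the point where humans can meaningfully interact with the software, and a person decides when customers see it.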

Taking a completely different perspective, I’ve been practicing mindfulness meditation for a while now and haven’t missed a daily practice since finishing up full-time employment back in August 2020. One of the most valuable things I’ve learned from this practice is the idea of putting space between stimulus and response – being deliberate in taking pause.

Exploring the work of Gerry Hussey has been very helpful in this regard and he says:

The things and situations that we encounter in our outer world are the stimulus, and the way in which we interpret and respond mentally and emotionally to that stimulus is our response.

Consciousness enables us to create a gap between stimulus and response, and when we expand that gap, we are no longer operating as conditioned reflexes. By creating a gap between stimulus and response, we create an opportunity to choose our response. It is in this gap between stimulus and response that our ability to grow and develop exists. The more we expand this gap, the less we are conditioned by reflexes and the more we grow our ability to be defined not by what happens to us but by how we choose to respond.

Awaken Your Power Within: Let Go of Fear. Discover Your Infinite Potential. Become Your True Self (Gerry Hussey)

I’ve found this idea really helpful in both my professional and personal lives. It’s helped with listening, focusing on understanding rather than rushing to respond. The power of the pause in this sense has been especially helpful in my consulting work, as it has the welcome side effect of lowering the chances of jumping into solution mode before fully understanding the problem at hand. Accepting that things will happen outside my control in my day-to-day life, but that I have a choice in how to respond to whatever happens, has been transformational.

Inevitably, there are still times where my response to stimuli is quick, conditioned and primitive (with system 1 thinking doing its job) – and sometimes not kind. But I now at least recognize when this has happened and bring myself back to what I’ve learned from regular practice so as to continue improving.

So, whether it’s thinking specifically about software delivery pipelines or my interactions with the world around me, I’m seeing great power in the pause – and maybe you can too.