
2021 in review

As another year draws to a close, I’ll take the opportunity to review my 2021.

I published 14 blog posts during the year, just about meeting my personal target cadence of a post every month. I wrapped up my ten-part series answering common search engine questions about testing and covered a range of other topics along the way. Somewhat surprisingly, my blog attracted about 25% more views than in 2020, and I continue to be really grateful for the amplification of my blog posts via their regular inclusion in lists such as 5Blogs, Testing Curator’s Testing Bits and Software Testing Weekly.

December 2021 was by far the biggest month for my blog this year, with a similar number of views to my all-time high back in November 2020. Interestingly, I published a critique of an industry report in December and similar critiques in November 2020, so these types of posts are clearly popular (even if they can be somewhat demoralizing to write)!

I closed out the year with about 1,200 followers on Twitter, again up around 10% over the year.

Conferences and meetups

2021 was my quietest year for conferences and meetups in perhaps fifteen years, mainly due to the ongoing impacts of the COVID-19 pandemic around the world.

I was pleased to announce mid-2021 that I would be speaking at the in-person Testing Talks 2021 (The Reunion) conference in Melbourne in October. Sadly, the continuing harsh response to the pandemic in this part of the world made an in-person event too difficult to hold, but hopefully I can keep that commitment for its rescheduled date in 2022.

I didn’t participate in any virtual or remote events during the entire year.

Consulting

After launching my testing consultancy, Dr Lee Consulting, towards the end of 2020, I noted in last year’s review post that “I’m confident that my approach, skills and experience will find a home with the right organisations in the months and years ahead.” This confidence turned out to be well founded and I’ve enjoyed working with my first clients during 2021.

Consulting is a very different gig to full-time permanent employment, but it’s been great so far, offering me the opportunity to work in different domains with different types of organizations while also allowing me the freedom to enjoy a more relaxed lifestyle. I’m grateful to those who put their faith (and dollars!) in me during 2021 as I began my consulting journey, and I’m looking forward to helping more organizations to improve their testing and quality practices during 2022.

Testing books

After publishing my first testing book in October 2020, in the shape of An Exploration of Testers, it’s been pleasing to see a steady stream of sales through 2021. I made my first donation of proceeds to the Association for Software Testing (AST) from sales of the book and another donation will follow early in 2022. I also formalized an arrangement with the AST so that all future proceeds will be donated to them and all new & existing members will receive a free copy of the book. (I’m open to additional contributions to this book, so please contact me if you’re interested in telling your story via the answers to the questions posed in the book!)

I started work on another book project in 2021, also through the AST. Navigating the World as a Context-Driven Tester provides responses to common questions and statements about testing from a context-driven perspective, with its content being crowdsourced from the membership of the AST and the broader testing community. There are responses to six questions in the book so far and I’m adding another response every month (or so). The book is available for free from the AST’s GitHub.

Podcasting

It was fun to kick off a new podcasting venture with two good mates from the local testing industry, Paul Seaman and Toby Thompson. We’ve produced three episodes of The 3 Amigos of Testing podcast so far and aim to get back on the podcasting horse early in 2022 to continue the discussion around automation that we started back in August. The process of planning content for the podcast, discussing and dry-running it, and finally recording it is an interesting one – kudos to Paul for driving the project and doing the heavy lifting of editing and publishing each episode.

Volunteering for the UK Vegan Society

I’ve continued to volunteer with the UK’s Vegan Society and, while I’ve worked on proofreading tasks again through the year, I’ve also started contributing to their web research efforts over the last six months or so.

It was exciting to be part of one of the Society’s most significant outputs of 2021, viz. the Planting Value in the Food System report. This 40,000-word report was a mammoth research project and proofreading it was a big job in itself! The resulting report and website are high quality and demonstrate The Vegan Society’s credibility in producing well-researched reference materials in the vegan space.

Joining the web research volunteer group immediately gave me the opportunity to learn, as I was tasked with leading the research efforts around green websites and accessibility testing.

I found the green website research particularly engaging, as it was not an area I’d even considered before, and the carbon footprint of websites – and how easily it can be reduced – doesn’t yet seem to be on the radar of most companies. The lengthy recommendations resulting from my research in this area will inform changes to the Vegan Society website over time, and this work has inspired me to look into offering advice to companies who may have overlooked this potentially significant contributor to their carbon footprint.
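To give a sense of how tangible this can be: a simple first-order proxy for a page’s carbon footprint is how many bytes it transfers, since heavier pages cost more energy to serve, ship and render. Here’s a minimal sketch (my own illustration, not part of the Society’s research) using the browser’s standard Resource Timing API to get a rough page-weight figure from the developer console:

```javascript
// Minimal sketch: estimate the bytes transferred by the current page.
// Paste into the browser's developer console. Note that transferSize is
// reported as 0 for cached resources and for cross-origin resources that
// don't send Timing-Allow-Origin, so treat the result as a lower bound.
const resources = performance.getEntriesByType('resource');
const totalBytes = resources.reduce((sum, r) => sum + (r.transferSize || 0), 0);
console.log(`~${(totalBytes / 1024).toFixed(0)} KiB transferred by this page`);
```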

I also spent considerable time investigating website accessibility and tooling to help with development & testing in this area. While accessibility testing was something I’d only been tangentially aware of in my testing career, the opportunity to deep-dive into it was great and, again, my recommendations will be implemented over time to improve the accessibility of the Society’s own website.
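To give a flavour of the tooling in this space: automated scanners can catch a useful subset of accessibility problems (missing alt text, unlabelled form fields, poor contrast), even though they’re no substitute for testing with real assistive technology. A minimal sketch using the open-source pa11y scanner – just one of many such tools, and not necessarily what the Society will adopt – might look like this:

```javascript
// Minimal sketch: scan a page for accessibility issues with pa11y
// (npm install pa11y). The URL is a placeholder, not the Society's site.
const pa11y = require('pa11y');

(async () => {
  const results = await pa11y('https://example.com');
  for (const issue of results.issues) {
    console.log(`${issue.code}: ${issue.message}`);
    console.log(`  at: ${issue.selector}`);
  }
})();
```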

I continue to enjoy working with The Vegan Society, increasing my contribution to and engagement with the vegan community worldwide. The passion and commitment of the many volunteers I interact with is invigorating. I see it as my form of vegan activism and a way to utilize my existing skills in research and the IT industry while gaining valuable new skills and knowledge along the way.

Status Quo projects

I was honoured to be asked to write a lengthy article for the Status Quo official fan club magazine, FTMO, following the sad passing of the band’s original bass player, Alan Lancaster, in September. Alan spent much of his life here in Australia, migrating to Sydney in 1978, and he was very active in the music industry in this country following his departure from Quo in the mid-1980s. It was a labour of love putting together a 5,000-word article and selecting interesting photos to accompany it from my large collection of Quo scrapbooks.

I spent time during 2021 on a new Quo project too, also based around my scrapbook collection. This project should go live in 2022 and has been an interesting learning exercise, not just in terms of website development but also photography. Returning to coding after a 20+ year hiatus has been a challenge but I’m reasonably happy with the simple website I’ve put together using HTML, CSS, JavaScript, PHP and a MySQL database. Gathering the equipment and skills to take great photos of scrapbook clippings has also been fun and it’s nice to get back into photography, a keen hobby of mine especially in my university days back in the UK.

In closing

As always, I’m grateful for the attention of my readers here and also followers on other platforms. I wish you all a Happy New Year and I hope you enjoy my posts and other contributions to the testing community to come through 2022!

A tester’s critique of “The 2021 State of Software Quality: The View from Enterprise Leaders & Followers”

I really should know better, but I decided to watch a webinar titled The 2021 State of Software Quality: The View from Enterprise Leaders & Followers from Micro Focus and Enterprise Management Associates, Inc. The promo spiel for the webinar read as follows:

The rapid rise of the digital economy became twice as important after layering on a worldwide pandemic. With every company having to become a software company, enterprise application development speed, volume, cost, quality, and risk are key determinants that define who survives and who does not. The pressure on application development teams to build more software faster and cheaper often runs counter to the objectives of software quality and managing risk.

Join Steve Hendrick, Research Director at Enterprise Management Associates, to hear key findings from a recent worldwide survey about software quality. This webinar will look at the characteristics and differences between software quality leaders and followers. Key to this discussion of software quality is the impact that people, process, and products are having on enterprise software quality. Completing this view into software quality will be a discussion of best and worst practices and their differences across three levels of software quality leadership.

While the opening gambit of this promo literally makes no sense – “The rapid rise of the digital economy became twice as important after layering on a worldwide pandemic” – the webinar sounded like it at least held some promise in terms of identifying differences between those “leading” in software quality and those “following”.

The survey data presented in this webinar came from 316 responses by Directors, VPs and C-level executives of larger enterprises (2,000 employees or more). The presenter noted specifically that the mean enterprise size in the survey was over 11,000 employees and that this was a good thing, since larger enterprises have a “more complex take on DevOps”. This focus on garnering responses from people far removed from the actual work of developing software in very large enterprises immediately makes me suspicious of the value of the responses for practitioners.

Unusually for surveys and reports of this type, though, the webinar started in earnest with a slide titled “What is Software Quality”:

While the three broad software quality attributes seem to me to represent some dimensions of quality, they don’t answer the question of what the survey means when it refers to “software quality”. If this was the definition given in the survey to guide participants, then it feels like their responses are likely skewed to thinking solely about these three dimensions and not the many more that are familiar to those of us with a broader perspective aligned with, for example, Jerry Weinberg’s definition of quality as “Value to some person”.

The next slide was particularly important as it introduced the segmentation of respondents into Outliers, Laggards, Mainstreamers and Leaders based on their self-assessment of the quality of their products:

This “leadership segmentation” is the foundation for the analysis throughout the rest of the webinar, yet it is completely based on self-assessment! Note that over half (55%) of respondents self-assessed their quality as 8/10, 9/10 or even 10/10, while only 11% rated themselves at 5/10 or below – a classic example of cognitive bias and illusory superiority. That such a poor basis underpins the segmentation used so heavily in the analysis which follows is troubling.

Moving on, imagine being faced with answering this question: “How does your enterprise balance the contribution to software quality that is made by people, policy, processes, and products (development and DevOps tools)?” You might need to read that again. The survey responses came back as follows:

Call me cynical, but this almost-impossible-to-answer question looks like it resulted in most people simply giving equal weight to all five choices, ending up with roughly 20% in each category.

It was soon time to look to “agile methodologies” for clues as to how “adopting” agile relates to quality leadership segmentation:

It was noted here that the “leaders” (again, remember this is respondents self-assessing themselves as quality leaders) were most likely to represent enterprises in which “Nearly all teams are using agile methods”. A reminder that correlation does not imply causation feels in order at this point.

The revelations kept coming; let’s look next at the “phases” in which enterprises are “measuring quality”:

The presenter made a big deal here about the “leaders” showing much higher scores for measuring quality in the requirements and testing management “phases” than the “mainstreamers” and “laggards”. Of course, this provided the perfect opportunity to propagate the “cost of change” curve nonsense, with the presenter claiming it is “many times more expensive to resolve defects found in production than found during development”. He also sagely suggested that the leaders’ focus on requirements management and testing was part of their “secret sauce”.

When the surveyed enterprises were asked about their “software quality journey over the last two years”, the results looked like this:

The conclusion here was that “leaders” are establishing centres of excellence for software quality. There was a question about this during the short Q&A at the end of the deck presentation, asking what such a function actually does, to which the presenter said a CoE is “A good way to jumpstart an enterprise thinking about quality, it elevates the importance of quality in the enterprise” and “raises visibility of the fact that software quality is important”. An interesting but overlooked part of the data on this slide in my opinion is that about 20% of enterprises (even the “leaders”) said that their “focus on agile and DevOps has not had any impact on software quality”. I assume this data didn’t fit the narrative for this highly DevOps-focused webinar.

Attention then turned to tooling, firstly looking at development tools:

I find it interesting that all of these different types of development tooling are considered “DevOps tools”. It’s surprising that only around half of the “laggards” even claim to use source code management tools (it’s not clear why “mainstreamers” were left off this slide) and that only just over half of the “leaders” are using continuous integration tools. These statistics seem contrary to the idea that even the leaders are really mature in their use of tooling around DevOps. (It’s also worth noting that there is considerable wiggle room in the wording of this question, “regularly used or will be used”.) Deployment, rather than development, tooling was also analyzed, but I didn’t spot anything interesting there apart from the very fine-grained breakdown of tooling types (resulting in an incredible 19 different categories).

The presenter then examined why software quality was improving:

Notice that the slide is titled “Why has your software quality been improving since 2019?” while the actual survey question was “Why has your approach to software quality improved since the beginning of 2019?” Improvements in approach may or may not result in quality improvements. Some of the choices for response to this question don’t really answer the question, but clearly the idea was to suggest that adding more DevOps process and tooling leads to quality improvements while the data suggests otherwise (more around business drivers).

Moving from the “why” to the “how” came next (again with the same subtle difference between the slide title and the survey question):

There are again business/customer drivers behind most of these responses, but increased automation and use of tooling also show up highly. A standout is the “leaders” highlighting that “our multifunctional teams have learned how to work more effectively together” was a way to improve quality.

Some realizations/revelations about quality followed:

There were at least signs here of enterprises accepting that improving quality takes significant effort, not just from additional testing and tooling, but also from management and the business. The presenter focused on the idea of “shifting left” and there was a question on this during the Q&A too, asking “how important is shift left?” to which the presenter said it was “very important to leaders, it’s a best practice and it makes intuitive sense”. But he also noted that there was an additional finding in the deeper data around this that enterprises found it to be a “challenge in piling more responsibility on developers, made it harder for developers to get their job done, it alienates them and gets them bogged down with activities that are not coding” and that enterprises were sensitive to these concerns. From that response, it doesn’t sound to me like the “leaders” have really grasped the concept of “shift left” as I understand it and are still not viewing some types of testing as being part of developers’ responsibilities. The final entry on this slide also stood out to me (but was not highlighted by the presenter), with 17% of the “leaders” saying that “software quality is a problem if it is too high”, interesting!

Presentations like this usually end up talking about best practices and this webinar was no different:

The presenter focused on the high rating given by the “leaders” to “adoption of quality standards (such as ISO)” but overlooked what I took to be one of the few positives in any of the data in the webinar, namely that adopting “a more comprehensive approach to software testing” was a practice generally seen as worth continuing.

The deck wrapped up with a summary of the “Best Practices of Software Quality Leaders”:

These don’t strike me as actually being best practices, but rather statements and dubious conclusions drawn from the survey data. Point 4 on this slide – “Embracing agile and improving your DevOps practice will improve your software quality” – was highlighted (of course) but is seriously problematic. Remember that the self-assessed “leaders” claimed their software quality was increasing due to expanding their “DevOps processes and toolchain”, but correlation does not imply the causation that this point on the final slide asserts. The apparent causality was reinforced by the presenter’s answer to a Q&A question asking “what is one thing we can do to improve quality?” He said his preference would be to understand the impact that software quality has on the business, but his pragmatic answer was to “take stock of your DevOps practice and look for ways to improve it, since maturing your DevOps practice improves quality.”

There were so many issues for me with the methodology behind the data presented in this webinar. The self-assessment of software quality produced by these enterprises makes the foundation for all of the conclusions drawn from the survey data very shaky in my opinion. The same enterprises who probably over-rated themselves on quality are also likely to have over-rated themselves in other areas (which appears to be the case throughout). There is also evidence of mistakenly taking correlation to imply causation, e.g. suggesting that adding more DevOps process and tooling improves quality. (Even claiming correlation is dubious given the self-assessment problem underneath all the data.)
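To make the self-assessment problem concrete, here’s a hypothetical toy simulation (entirely my own illustration, with no connection to the survey’s actual data): if a respondent’s general optimism inflates both their self-rated quality and their self-rated DevOps maturity, the two ratings come out strongly correlated even though neither reflects any real relationship between quality and DevOps practice.

```javascript
// Hypothetical illustration: a shared optimism bias in self-assessment
// produces a strong correlation between two ratings that have no real
// relationship to each other.
function pearson(x, y) {
  const n = x.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x), my = mean(y);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - mx) * (y[i] - my);
    dx += (x[i] - mx) ** 2;
    dy += (y[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

const quality = [], devops = [];
for (let i = 0; i < 316; i++) {      // same sample size as the survey
  const optimism = Math.random();    // respondent's self-rating bias
  quality.push(optimism + 0.3 * Math.random()); // no true quality signal
  devops.push(optimism + 0.3 * Math.random());  // no true DevOps signal
}
console.log(pearson(quality, devops).toFixed(2)); // typically ~0.9
```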

There’s really not much for me to take away from the results of this survey in terms of understanding what differences in approach, process, practice, tooling, etc. might lead to higher quality outcomes. I’m not at all surprised or disappointed to feel this way, as my expectations of such fluffy marketing-led surveys are very low (based on my experience of critiquing a number of them over the last few years). What does disappoint me is not the “state of software quality” supposedly evidenced by such surveys, but rather the state of the quality of dialogue and critical thinking around testing and quality in our industry.

The webinar can be viewed from https://content.microfocus.com/optimize-devops-tb/2021-software-quality (note that registration is required).