Category Archives: Testing

Common search engine questions about testing #2: How does software testing impact software quality?

This is the second of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

In this post, I address the question “How does software testing impact software quality?” (and the related question, “How is software testing related to software quality?”).

It’s worth taking a moment to clarify what I mean by “quality”, via this definition from Jerry Weinberg and Cem Kaner:

Quality is value to some person (who matters)

I like this definition because it puts a person at the centre of the concept and acknowledges the subjectivity of quality. What is considered to be a bug by one customer may well be viewed as a feature by another! This inherent subjectivity means that quality is more amenable to assessment than measurement, as has been well discussed in a blog post from James Bach, Assess Quality, Don’t Measure It.

So, what then of the relationship between testing and quality?

If we think of testing as an information service provider, then the impact of testing on the quality of the end product depends heavily both on the quality of that information and on the actions & decisions taken based on it. If testing provides information that is difficult to interpret, or fails to communicate in a way that is meaningful to its consumers, then it is less likely to be taken seriously and acted upon. If stakeholders choose to do nothing with the information arising from testing (even if it is in fact highly valuable), then that testing effort has no demonstrable impact on quality. Clearly then, the pervasive idea in our industry that testing improves quality isn’t necessarily true – but it’s certainly the case that good testing can have an influence on quality.

It may even be the case that performing more testing reduces the quality of your delivered software. If the focus of testing is on finding bugs – over identifying threats to the software’s value – then performing more testing will probably result in finding more bugs, but they might not represent the important problems in the product. The larger number of bugs found by testing then results in more change in the software and potentially increases risk, rather than reducing it (and the currently popular idea of “defect-free/zero defect” software seems to leave itself wide open to this counterintuitive problem).

Testers were once seen as gatekeepers of quality, but this notion thankfully seems to be almost consigned to the history books. Everyone on a development team has some responsibility for quality, and testers should be well placed to help other people in the team to improve their own testing, skill up in risk analysis, and so on. In this sense, we’re acting more as quality assistants – and I note that some organisations now explicitly have the role of “Quality Assistant” (it makes sense to say “I am a QA” in this sense, whereas it never did when “QA” was synonymous with “Quality Assurance”).

I like this quote from James Bach in his blog post, Why I Am A Tester:

…my intent as a tester is not to improve quality. That’s a hopeful side effect of my process, but I call that a side effect because it is completely beyond our control. Testers do not create, assure, ensure, or insure quality. We do not in any deep sense prove that a product “works.” The direct intent of testing – what occupies our minds and lies at least somewhat within our power – is to discover the truth about the product. The best testers I know are in love with dispelling illusions for the benefit of our clients.

Testing is a way to identify threats to the value of the software for our customer – and, given our definition of quality, the relationship between testing and quality therefore seems very clear. The tricky part is how to perform testing in a way which keeps the value of the software for our customer at the forefront of our efforts while we look for these threats. We’ll look at this again in answering later questions in this blog series.

I highly recommend also reading Michael Bolton’s blog post, Testers: Get Out of the Quality Assurance Business, for its treatment of where testing – and testers – fit into building good quality software.

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

The first part of this blog series answered the question, “Why is software testing important?”.

Thanks to my review team (Paul Seaman and Ky) for their helpful feedback on this post.

Reviewing Capgemini’s “World Quality Report 2020-21”

As I noted in another recent blog post, it’s that time of year again when predictions get made for the year ahead as well as reviews of the year. Right on cue, Capgemini released the latest edition of their annual World Quality Report to cover 2020 and 2021.

I reviewed the 2018/19 edition of their report in depth and I thought it worth reviewing the new report to compare and contrast it with the one from two years ago.

TL;DR

The findings and recommendations in the 2020/21 edition of this report are very similar to those in the 2018/19 report. It appears that the survey respondents are drawn from a very similar pool, and the lack of responses from smaller organisations means that the results are heavily skewed towards very large corporate environments.

There’s still plenty of talk about “more automation” and the importance of AI/ML in revolutionizing QA/testing (again, the authors offer no real definition of, or differentiation between, testing and QA). There is almost no discussion of “testing” (as I understand it) in the report, while there is a heavy focus on agile, DevOps, automation, AI, ML, etc., and removing or reducing human involvement in testing seems to be a clear goal. The fact that the report is co-authored by Micro Focus, a tool vendor, may have something to do with this direction. Agile and DevOps are almost always referred to together in the report, as though they are the same thing or one depends on the other.

I would have liked to see some deep questions around testing practice in the survey, to learn more about what’s going on in terms of human testing in these large organizations, but alas this was nowhere to be seen.

There is evidence of confirmation bias by the report’s authors throughout. When the survey results confirm what they expect, there is little questioning of the potential for misunderstanding of the questions; where the results don’t confirm their expectations, many reasons are suggested for why this might be so. This severely damages the report’s credibility for me, and brings into question why they survey so many organisations and ask so many questions if the idea ultimately is to deliver the recommendations that the authors and their sponsors are looking for.

I’m obviously not really the target audience for these corporate-type reports, and I can imagine the content being quite demoralizing for testers whose organisations appear to be so far behind what these large organisations claim to be doing. I would suggest not believing the hype, doing your own critical thinking, and taking the conclusions from all such surveys and reports with a pinch of salt.

The survey (pages 68-71)

This year’s report comes in at a hefty 76 pages (a few pages heavier than the 2018/19 edition). I again chose to look first at where the data came from, which is presented towards the end of the report. The survey size was 1750 (compared to 1700 in 2018/19) and the organizations taking part were again all of over 1000 employees, with the largest number coming from organizations of over 10,000 employees. The response breakdown by organizational size was within a percentage point of the 2018/19 report in every band, so it seems likely that it’s really the same organizations contributing each time. The lack of input from smaller organizations is a concern, as I imagine smaller, more nimble organizations might actually be where the innovation and genuine change in the way testing is performed comes from.

The survey had a good spread of countries & regions as well as industry sectors (with the top three sectors – financial services, public sector/government, and telecommunications – accounting for almost half of the responses). The types of people who provided survey responses this year bear a startling resemblance to those in the 2018/19 report. By job title (2020/21 vs. 2018/19), they were grouped as follows:

  • CIO (25% vs. 27%)
  • IT director (20% vs. 22%)
  • QA/Testing Manager (19% vs. 20%)
  • VP Applications (17% vs. 18%)
  • CMO/CDO (7% in both cases)
  • CTO/Product Head (6% in both cases)
  • VP/Director of R&D (6% in 2020/21 only)

These striking similarities in the data again lead me to the conclusion that the report relies on the same people in the same organizations providing responses year on year.

Introduction (pages 4-5)

In his introduction, Mark Buenen of Capgemini (page 4) notes that

the use of test automation is growing, as for most organizations it is still not at the level required

which seems to suggest there is some level of automation at which these organizations will cease to look for other ways to leverage automation. He also says

It is reassuring that 69% of the organizations interviewed in this survey feel they always or virtually always meet their quality goals

I’m not sure what is reassuring about this statistic. Given the senior folks interviewed for this survey, I wonder how many of these places actually have clearly defined quality goals and how they go about measuring whether they meet (or “virtually always meet”!) these goals. Another interesting point Mark makes is that

One of the main challenges noted this year is the lack of the right testing methodologies for teams, as reported by 55% of respondents.

I wonder what the “right testing methodologies” being sought are. Are organizations looking for a silver-bullet “testing methodology” to solve their quality problems? In his introduction, Raffi Margaliot of Micro Focus (page 5) says

This year’s WQR shows that QA has transitioned from being an independent function in a separate team, towards becoming an integrated part of the software delivery team, with responsibilities reaching beyond testing and finding defects. QA engineers are now charged with enabling the entire team to achieve their quality objectives, and incorporating better engineering practices and state-of-the-art techniques such as AI and ML to achieve these aims.

The move to embedding responsibility for testing and quality within development teams began so long ago that it seems nonsensical to still be talking about it as an improvement. The idea that AI and ML play big parts in how whole-team approaches to quality are implemented is popular – especially with the CTO/CIO types interviewed for reports like this – but I still believe the reality within most development teams is very different. We’ll return to the actual evidence for these claims as we examine the detail of the report.

Executive Summary (pages 6-8)

The most interesting part of the summary for me was the commentary around the answers to the survey question around “Objectives of Quality Assurance and Testing in the organization”.

Apparently 62% of survey respondents said that this was an objective of QA and testing: “Automate: Make QA and Testing a smarter automated process”. The implication is that automation is smarter than what isn’t automated, and that moving away from human involvement in testing is a good thing. I still don’t understand why organizations are confused about the fact that testing cannot be automated, but obviously the lure of the idea and the spruiking of vendors suggesting otherwise (including the co-sponsor of this report, Micro Focus) are both very strong factors.

Some 60% of respondents said “Quality Enablement: Support everybody in the team to achieve higher quality” was an objective. The “whole team approach to quality” idea is nothing new and seems to be ubiquitous in most modern software development organizations anyway. The report commentary around this in the Summary is pretty extraordinary and it would be remiss of me not to quote it in its full glory:

It won’t be an exaggeration if we say that out of all the software disciplines, QA has witnessed the most rapid transformation. QA has been steadily evolving – from an independent function to an integrated function, and now to an inclusive function. Also, the role of the QA practitioner is transforming from testing and finding defects, to ensuring that other engineering team members inculcate quality in their way of working. They need to do this by enabling them and by removing impediments on their way to achieving quality objectives.

Actually, I think it is an exaggeration: QA/testing has moved pretty slowly over the years, and “steadily evolving” is closer to the mark. It wouldn’t be a report of this type if it didn’t mention shifting left and shifting right, and the authors don’t disappoint:

QA is not only shifting left but also moving right. We see more and more enterprises talk about exploratory testing, chaos engineering, and ensuring that the product is experienced the way end users will experience it in real life before releasing it to the market.

Testers must be dizzy these days with all this shifting to the left and shifting to the right. I wonder what testing is left in the “middle” now – you know, the sort of testing where a human interacts with the product we’ve built to look for deep and subtle problems that might threaten its value to customers. This is where I’d imagine most exploratory testing efforts would sit and, while the report notes that more organizations are talking about exploratory testing, my feeling is that they’re talking about something quite different from what excellent practitioners of the approach mean by exploratory testing.

Key findings (pages 9-10)

QA orchestration in agile and DevOps

In this first category of findings, the report claims:

The adoption of agile and DevOps is steadily increasing, resulting in QA teams becoming orchestrators of quality.

While I don’t doubt that more and more organizations claim to be using agile and DevOps (even without any broad consensus on what either of those terms have come to mean), it sounds like they still expect some group (i.e. “QA”) to kind of arrange for the quality to happen. The idea of “full-stack QA” comes next:

We see a trend towards wanting QA engineers who have developer type skills, who yet retain their quality mindset and business-cum-user centricity.

Is this expecting too much? Yes, we think so. Only a few QA professionals can have all these skills in their repertoire. That’s why organizations are experimenting with the QA operational structure, with the way QA teams work, and with the skill acquisition and training of QA professionals.

I agree with the report on this point: there’s still a key role for excellent human testers even if they can’t write code. That now seems to be a contrarian viewpoint in our industry, and the “moon on a stick” desire for testers who can perform excellent testing, find problems and fix them, then write the automated checks for them and monitor the deployed code in production feels like incredibly wishful thinking – it’s not helpful to the discussion of advancing genuine testing skills.

Artificial intelligence and machine learning

The report is surprisingly honest and realistic here:

Expectations of the benefits that AI and ML can bring to quality assurance remain high, but while adoption is on the increase, and some organizations are blazing trails, there are few signs of significant general progress.

Nonetheless, enthusiasm hasn’t diminished: organizations are putting AI high among their selection criteria for new QA solutions and tools, and almost 90% of respondents to this year’s survey said AI now formed the biggest growth area in their test activities. It seems pretty clear they feel smart technologies will increase cost-efficiency, reduce the need for manual testing, shorten time to market – and, most importantly of all, help to create and sustain a virtuous circle of continuous quality improvements.

It’s good that the authors acknowledge the very slow progress in this area, despite AI and ML being touted as the next big things for many years (by this report itself, as you’ll note from my review of the 2018/19 edition). What’s sad is that almost all respondents say AI is the biggest growth area around testing, which is worrying when other parts of the report indicate more significant issues (e.g. a lack of good testing methodologies). The report’s interpretation of why so many organizations remain so invested in making AI work for them in testing is questionable, I think. What does “increased cost-efficiency” really mean? And is it the direct result of the “reduced need for manual testing”? The “virtuous circle of quality improvements” they mention currently looks more like a death spiral: reducing human interaction with the software before release, pushing poor quality software out to customers more often, seeing their complaints, fixing them, pushing fixes out more often, …

Budgets and cost containment

The next category is on budget/costs around testing and the report says:

The main focus remains manpower cost reduction, but some organizations are also looking at deriving the best out of their tool investments, moving test environments to the cloud, and looking for efficiencies from technologies including AI, machine learning and test automation.

There is again the claim about using AI/ML for “efficiency” gains and it’s a concern that reducing the number of people involved in QA/testing is seen as a priority. It should be clear by now that I believe humans are key in performing excellent testing and they cannot be replaced by tools, AI/ML or automation (though their capabilities can, of course, be extended by the use of these technologies).

Test automation

You’ll be pleased to know that automation is getting smarter:

The good thing we saw this year is that more and more practitioners are talking about in-sprint automation, about automation in all parts of QA lifecycle and not just in execution, and also about doing it smartly.

This begs the question of how automation was being tackled before, but let’s suppose organizations are being smarter about its use – even though this conclusion, to me at least, again makes it sound like more and more human interaction with the software is being deliberately removed under the banner of “smart automation”.

While the momentum could have been higher, the automation scores have mostly risen since last year. Also, the capability of automation tools being used seems to satisfy many organizations, but the signs are that the benefits aren’t being fully realized: only around a third of respondents (37%) felt they were currently getting a return on their investment. It really depends on how that return is being measured and communicated to the relevant stakeholders. Another factor may be that the tools are getting smarter, but the teams are not yet sufficiently skilled to take full advantage of them.

This paragraph sums up the confused world most organizations seem to be living in when it comes to the benefits (& limitations) expected of automation. While I’m no fan of the ROI concept applied to automation (or anything else in software development), this particular survey response indicates dissatisfaction with benefits derived from investments made in automation in the majority of organizations. There are many potential reasons for this, including unrealistic expectations, poor tool selection, misunderstanding about what automation can and can’t do, etc. but I had to smile when reading that last sentence which could be restated as “the automation tools are just too smart for the humans to use”!

Test environment management (TEM) and test data management (TDM)

There was nothing much to say under this category, but this closing statement caught my eye:

It was also interesting to note that process and governance came out as a bigger challenge than technology in this area.

I think the same statement probably applies to almost everything we do in software development. We’re not – and never really have been – short of toys (technology & tools), but assembling groups of humans to play well with those toys and end up with something of value to customers has always been a challenge.

Key recommendations (pages 11-13)

The recommendations are structured basically around the same categories as the findings discussed above, in summary:

  • QA orchestration in agile and DevOps
    • Don’t silo responsibility for QA. Share it.
    • Spread the word
    • Be part of the business
    • Make room for dashboards
    • Listen more to users
  • Artificial intelligence and machine learning
    • Focus on what matters
    • Keep learning
    • Have a toolkit
    • Testing AI systems: have a strategy
  • Budgets and cost containment
    • Greater savings can be achieved by using test infrastructure smartly
    • Use advantages in analytics, AI, and machine learning to make testing smarter
    • Be prepared to pay well for smarter talent
    • Don’t put all key initiatives on hold. Strive to be more efficient instead
  • Test automation
    • Change the status quo
    • Think ahead
    • Choose the right framework
    • Balance automation against skills needs
    • Don’t think one size fits all
    • Get smart
  • Test environment management and test data management
    • Create a shared center of excellence for TEM/TDM
    • Get as much value as you can out of your tool investment
    • Have strong governance in place
  • Getting ready to succeed in a post-COVID world
    • Be better prepared for business continuity
    • Focus more on security
    • Don’t look at COVID-19 as a way to cut costs, but as an opportunity to transform
    • Continue to use the best practices adopted during the pandemic

There’s nothing too revolutionary here, with a lot of motherhood-type advice – and copious use of the word “smart”. One part did catch my eye, though, under “Change the status quo” for test automation:

Testing will always be squeezed in the software development lifecycle. Introducing more automation – and pursuing it vigorously – is the only answer.

This messaging is, in my opinion, misleading and dangerous. It reinforces the idea that testing is a separate activity to development and so can be “squeezed” while other parts of the lifecycle are not. It seems odd – and contradictory – to say this when so many of the report’s conclusions are about “inclusive QA” and whole-team approaches. The idea that “more automation” is the “only answer” is highly problematic: there is no acknowledgement of context, and adding more automation can often just lead to more of the same problems (or introduce further new and exciting ones), when part of the solution might be to re-involve humans in the lifecycle, especially when it comes to testing.

Current Trends in Quality Assurance and Testing (pages 14-49)

Almost half of the WQR is again dedicated to discussing current trends in QA and testing and some of the most revealing content is to be found in this part of the report. I’ll break down my analysis in the same way as the report (the ordering of these sections is curiously different to the ordering of the key findings and recommendations).

QA orchestration in agile and DevOps

I’ll highlight a few areas from this section of the report. Firstly, the topic of how much project effort is allocated to testing:

…in agile and DevOps models, 40% of our respondents said 30% of their overall project effort is allocated to testing…. there is quite a wide variation between individual countries: for instance, almost half of US respondents using agile (47%) said they do so for more than 30% of their overall test effort, whereas only 11% of Italian respondents and 4% of UK respondents said the same.

The question really makes no sense in truly agile teams, since testing activities will run alongside development and measuring how much of the total effort relates specifically to testing is nonsensical – and I suspect (i.e. hope) is not explicitly tracked by most teams. As such, the wide variation in response to this question is exactly what I’d expect; it might just be that it is the Italians and UK folks who are being honest in acknowledging the ridiculousness of this question.

There are a couple of worrying statistics next: 51% of respondents said they ‘always’ or ‘almost always’ aim to “maximize the automation of test”. Does this mean they’re willing to sacrifice other areas (e.g. human interactions with the software) to achieve it? Why would they want to achieve it anyway? And what did they take the question to mean by “the automation of test”?

Meanwhile, another (almost) half said they ‘always’ or ‘almost always’ “test less during development and focus more on quality monitoring/production test” (maybe the same half as in the above automation response?). I assume this is the “shift-right” brigade again, but I really don’t see this idea of shifting right (or left) as removing the need for human testing of the software before it gets to production (where I acknowledge that lots of helpful, cool monitoring and information gathering can also take place).

It was a little surprising to find that the most common challenge [in applying testing to agile development] (50%) was a reported difficulty in aligning appropriate tools for automated testing. However, this may perhaps be explained by the fact that 42% of respondents reported a lack of professional test expertise in agile teams – and so this lack of skills may explain the uncertainty about identifying and applying the right tools.

The fact that the most common challenge was related to technology (tools in this case) comes as no surprise, but it highlights how misguided most organizations are around agile in general. Almost half of the respondents acknowledge that a lack of test expertise in agile teams is a challenge, while half also say that tooling is the problem. This focus on technology over people comes up again and again in this report.

Which metrics are teams using to track applications [sic] quality? Code coverage by test was the most important indicator, with 53% of respondents saying they always or almost always use it. This is compliant with agile test pyramid good practices, although it can be argued that this is more of a development indicator, since it measures unit tests. Almost as high in response, with 51% saying always or almost always, was risk covered by test. This is particularly significant: if test strategy is based on risk, it’s a very good thing.

The good ol’ “test pyramid” had to make an appearance and here it is, elevated to the status of a compliance mechanism. At least the authors note that forming a test strategy around risk is “a very good thing”, but there’s little in the responses to this question about tracking quality that refers to anything meaningful in terms of my preferred definition of quality (“value to some person (who matters)”).
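As a rough sketch of what a “risk covered by test” metric might look like in practice – the risk names, weights and test IDs below are entirely hypothetical, not taken from the report – one could score each product risk by likelihood × impact and flag the risks that have no test ideas attached:

```python
# Minimal sketch of risk-based test coverage: score each product risk by
# likelihood x impact, sort by exposure, and report which risks have at
# least one test idea attached. All names/weights here are illustrative.

risks = [
    {"id": "payment-timeout", "likelihood": 4, "impact": 5, "tests": ["t1", "t2"]},
    {"id": "locale-rounding", "likelihood": 2, "impact": 4, "tests": []},
    {"id": "login-lockout",   "likelihood": 3, "impact": 3, "tests": ["t3"]},
]

def risk_score(risk):
    # Simple exposure score: how likely the problem is, times how much it hurts.
    return risk["likelihood"] * risk["impact"]

# Highest-scoring risks first, so testing effort follows exposure.
by_exposure = sorted(risks, key=risk_score, reverse=True)

covered = [r["id"] for r in by_exposure if r["tests"]]
uncovered = [r["id"] for r in by_exposure if not r["tests"]]

coverage_pct = 100 * len(covered) / len(risks)
print(f"Risk coverage: {coverage_pct:.0f}%, uncovered: {uncovered}")
```

Sorting by exposure keeps the testing effort pointed at the biggest threats to value first, which is the spirit of basing a test strategy on risk rather than on counting unit tests.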

In closing this section of the report:

Out of all the insights relating to the agile and DevOps theme, perhaps the greatest surprise for us in this year’s survey was the response to a question about the importance of various criteria for successful agile and DevOps adoption. The technology stack was rated as essential or almost essential by 65% of respondents, while skill sets and organizational culture came in the bottom, with 34% and 28% respectively rating these highly. Operational and business priorities were rated highly by 41% of respondents.

How can this be? Maybe some respondents thought these criteria were a given, or maybe they interpreted these options differently. That’s certainly a possibility: we noted wide variations between countries in response to this question. For instance, the highest figure for skills needs was Poland, with 64%, while the lowest was Brazil, with just 5%. Similarly, for culture, the highest figure was Sweden, with 69%, and the lowest was once again Brazil, with only 2%. (Brazil’s perceived technology stack need was very high, at 98%.) The concept of culture could mean different things in different countries, and may therefore be weighted differently for people.

Regardless of what may have prompted some of these responses, we remain resolute in our own view that success in agile and DevOps adoption is predicated on the extent to which developments are business-driven. We derive this opinion not just from our own experience in the field, but from the pervasive sense of a commercial imperative that emerges from many other areas of this year’s report.

I would also have expected culture to be ranked much higher (probably highest), but the survey responses don’t elevate it as a criterion for successful agile and DevOps adoption. The report authors suggest this might be due to misunderstanding of the question (which is possible for this and every other question in the survey, of course) and then display their confirmation bias by drawing conclusions that are not supported by the survey data (and make sense of that closing sentence if you will!). My take is a little different – it seems to me that most organizations focus on solving technology problems rather than people ones, so it’s not too surprising that the technology stack is rated at the top.

Artificial intelligence and machine learning

There is a lot of talk about AI and ML in this report again (as in previous years) and it feels like these technologies are always the next big thing, yet never really become the next thing in any significant way (or, in the report’s more grandiose language: “…adoption and application have still not reached the required maturity to show visible results”).

The authors ask this question in this section:

Can one produce high-quality digital assets such as e-commerce, supply chain systems, and engineering and workface management solutions, without spending time and money assuring quality? In other words, can a system be tested without testing it? That may sound like a pipedream, but the industry has already started talking about developing systems and processes with intelligent quality engineering capabilities.

This doesn’t sound like a pipedream to me; it just sounds like complete nonsense. The industry may be talking about it, but largely via the vested interests in selling products and tools based on the idea that removing humans from testing activities is a worthy goal.

Almost nine out of ten respondents (88%) said that AI was now the strongest growth area of their test activities [testing with AI and testing of AI]

This is an interesting but relatively meaningless claim, given the very limited impact that AI has so far had on everyday testing activities in most organizations. It’s not clear from the report what this huge percentage of respondents are pursuing through this growth in the use of AI; maybe next year’s report will reveal that (though I strongly doubt it).

In wrapping up this section:

Even though the benefits may not yet be fully in reach, the vast majority of people are genuinely enthusiastic about the prospects for AI and ML. These smart technologies have real potential not just in cost-efficiency, in zero-touch testing, and in time to market, but in the most important way of all – and that is in helping to achieve continuous quality improvements.

There is a lot of noise and enthusiasm around the use of AI and ML in the IT industry generally, not just in testing. The danger lies in inflated expectations of the benefits of introducing these technologies, and the report fuels this fire by claiming the potential to reduce costs, reduce or remove the need for humans in testing, and speed up development. Adopting AI and ML in the future may yield some of these benefits, but the idea that doing so now (with such a lack of expertise and practical experience, as outlined in this very report) will help “achieve continuous quality improvements” doesn’t make sense to me.

Test automation

This is always an interesting section of the report and one stat hit me early on: only 37% of respondents agreed that “We get ROI from our automation efforts”. This is a pretty damning indictment of how automation projects are viewed and handled, especially in larger organizations. The authors note in response that “We feel moving towards scriptless automation tools may provide better return on investment in the long term” and I’d be interested to know why they said that.

In terms of the degree to which automation is being used:

…our respondents told us that around 15% of all testing was automated. Only 3% of them said they were automating 21% or more of their test activities.

These are quite low numbers considering the noise about “automating all testing” and using AI/ML to reduce or remove humans from the testing effort. Something doesn’t quite add up here.

Test data management and test environment management

This section of the report wasn’t very exciting, noting the fairly obvious increase in the use of Cloud-based and containerized test environments. 29% of respondents reported still using on-premise hardware for their test environments, though.

Budgets and cost containment

The breakdown of QA budget made for interesting reading, with 45% going on hardware & infrastructure, 31% on tools, and just 25% on “human resources”. The fact that the big organizations surveyed for this report spend more on tools than on humans when it comes to QA/testing really says it all for me, as does:

We see greater emphasis being placed on reducing the human resources budget rather than the hardware and infrastructure budget.

While the overall allocation of total IT budget to QA remained similar to previous years (slowly declining year-on-year), this year’s report does at least recognize the “blurring boundaries” between QA and other functions:

It’s more difficult these days to track the movement of QA budgets specifically as an individual component. This is because the budget supports the overall team: the boundaries are blurring, and there is less delineation between different activities performed and the people who perform them in the agile environment.

Andy Armstrong (Head of Quality Assurance and Testing, Nordea Bank)

while the dedicated QA budget may show a downward trend, it’s difficult to ascertain how much of that budget is now consumed by the developers doing the testing.

The impact of COVID-19 and its implications on quality assurance activities in a post-pandemic world

This (hopefully!) one-off section of the report wasn’t particularly revealing for me, though I noted the stat that 74% of respondents said that “We need to automate more of QA and testing” in a post-pandemic world – as though these people need more reasons/excuses to ramp up automation!

Sector Analysis (pages 50-67)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors and discuss particular trends and challenges within each, with a strong focus on the impact of the COVID-19 pandemic. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy, utilities and chemicals
  • Financial services
  • Healthcare and life sciences
  • High-tech
  • Government and public sector
  • Telecoms, media and entertainment

Geography-specific reports

The main World Quality Report was supplemented by a number of short reports for specific locales. I only reviewed the Australia/New Zealand one and didn’t find anything worthy of special mention here.

Before I go…

I respond to reports and articles of this nature in order to provide a different perspective, based on my opinion of what good software testing looks like as well as my experience in the industry. I provide such content as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting.

If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Common search engine questions about testing #1: Why is software testing important?

This is the first of a ten-part blog series in which I will answer some of the most common questions asked about software testing, according to search engine autocomplete results (thanks to Answer The Public).

The first cab off the rank is “Why is software testing important?” (with related questions being “why is software testing necessary?”, “why is software testing needed?”, “why is software testing important in software engineering?” and “why is software testing important in SDLC?”).

Let’s begin by looking at this from a different angle: how would teams/organisations behave if software testing wasn’t important to them? They’d probably try to cut the cost of it or find ways to justify not doing it at all (especially with expensive humans). They might devalue the people doing such work by compensating them differently to other team members, or look upon their work as a commodity that can be performed by lowest-common-denominator staff (perhaps in a cheaper location). They would capitalize on their confirmation bias by appealing to the authority of the many articles and presentations claiming that “testing is dead”. They would ensure that testing is seen as a separate function from the rest of development, to enable their desire to remove it completely. They would view testing as a necessary evil.

Listening to the way some organisations and some parts of the software development community talk about testing, it’s common to see these indications that software testing just isn’t important to them. In trying to understand why this is so, I’ve come to believe that this largely stems from the software testing industry traditionally doing a poor job of articulating its value and not being clear on what it is that good testing actually provides. We’ve spent a long time working off the assumption that it’s obvious to people paying the bills that testing is important and necessary.

To be clear, my preferred definition of testing comes from Michael Bolton and James Bach, viz.

Testing is the process of evaluating a product by learning about it through experiencing, exploring, and experimenting, which includes to some degree: questioning, study, modelling, observation, inference, etc.

I like this definition because it highlights all of the aspects of why testing is important to me, with its focus on interacting with the product, engaging in learning and exploration, and running experiments to help find out if the thing in front of me as a tester is the thing we wanted. It seems to me that this type of evaluation is important and would likely also be viewed as important by the business. However, if we sell the importance of testing based on providing turgid test reports of passed and failed test cases, it’s not too surprising that stakeholders view testing as being more of a costly nuisance than a valued and trusted advisor. Too often, I’ve seen the outputs of testing being focused on describing the testing approach, techniques, test cases run and bugs logged – in other words, we too often provide information about what we did and fail to tell a story about what we discovered during the process.

The reality is that most stakeholders (and certainly customers) don’t care about what you did as a tester; they care about what you learned while doing it, insofar as that learning helps decide whether to give the product to customers. Learning to present testing outcomes in a language that helps consumers of the information to make good decisions is a real skill, and one that is lacking in our industry. Talking about risk (be that product, project, business or societal risk) based on what we’ve learned during testing, for example, might be exactly what a business stakeholder is looking for in terms of value from the testing effort. In deliberately looking for problems that threaten the value of the product, there is more chance of finding them before they can impact our customers.

Another spanner in these works is the confusion caused by the common use of the term “automated testing”. It should be clear from the definition I presented above that testing is a deeply human activity, requiring key human skills such as the ability to subjectively experience using the product, make judgements about it and perform experiments against it. While the topic of “automated testing” will be covered in more depth in answering a later question in this blog series, I also wanted to briefly mention automation here to be clear when answering why software testing is important. In this context, I’m going to include the help and leverage we can gain by automation under the umbrella term of “software testing”, while reminding you that the testing itself cannot be automated since it requires distinctly human traits in its performance.

Let’s wrap up this post with a couple of reasons why I think software testing is important.

Software testing is important because:

  • We want to find out if there are problems that might threaten the value of the product, so that they can be fixed before the product reaches the customer.
  • We have a desire to know if the product we’ve built is the product we (and, by extension, our customers) wanted to build.
    • The machines alone can’t provide us with this kind of knowledge.
    • We can’t rely solely on the builders of the product either as they lack the critical distance from what they’ve built to find deep and subtle problems with it.

I’m providing the content in this blog series as part of the “not just for profit” approach of my consultancy business, Dr Lee Consulting. If the way I’m writing about testing resonates with you and you’re looking for help with the testing & quality practices in your organisation, please get in touch and we can discuss whether I’m the right fit for you.

Thanks to my review team (Paul Seaman and Ky) for their helpful feedback on this post.

In response to Cigniti’s “The emerging trends in Software Testing and Quality Assurance” blog post

A LinkedIn post led me to a blog post from Cigniti, titled The emerging trends in Software Testing and Quality Assurance (published on 2nd November 2020).

These kinds of posts are particularly common in the later part of the year as predictions for the next big things for the year ahead start to surface. While I read such posts with no expectation of learning anything profound from them, I’m still generally disappointed by what they have to say. This latest effort from Cigniti continues that theme of disappointment.

The post notionally provides only three “emerging” trends, but the build-up to their big reveal is worth critiquing, starting with their opening gambit:

Software testing and Quality Assurance practices are on the ride to continuous evolution, improvement, and inclusion. Rather than being a separate function, QA is all set to become implicit to the development itself. For many software development teams, the process of making QA a part of the software development has already begun. This transition in the perception regarding QA from being a choice to a necessity is one of the most significant milestones that the IT sector has achieved in the evolutionary process. 

Removing the filler words and fluff, the message seems to be that “QA” (by which I’m going to assume they mean “testing”) is more a part of the development process than a separate “function”. The idea that testing has been seen as a choice in the recent past but is now seen as a necessity doesn’t marry with my experience in the industry at all. With the move to more agile ways of working, most teams I’ve encountered in the last few years have made their testing efforts part of their development practice with embedded testers (or whatever label they have) rather than having separate testing teams (or the surely now deprecated idea of “Testing Centres of Excellence”).

Next up, the blog makes an appeal to authority – and authorities don’t come much more credible than Gartner (right?):

Gartner, in the ‘Top Strategic Technology Trends for 2021’ report, has emphasized the need for people centricity

It’s worth making a mental note here about “people centricity” in readiness for the emerging trends that come later.

A number of big claims are made next:

Having a streamlined QA function eliminates the bottlenecks that may hinder timely code releases into production, resulting in a better satisfaction ratio among the intra-organizational teams. A high-quality code release translates into fewer defects into production and improved end user experience. An increased customer satisfaction level offers increased ROI to the organization.

There is no evidence provided in the post or in citations to support:

  • “Having a streamlined QA function eliminates the bottlenecks…” – the implication here appears to be that it’s QA that hinders “timely” releases, whereas the reality is often quite different in my experience. There are many potential bottlenecks and obstacles to production releases given the uncertainty and changing environment of most software development projects.
  • “… resulting in a better satisfaction ratio among the intra-organizational teams” – maybe I’ve missed the idea of “satisfaction ratio” coming into the industry, but I really don’t understand what they’re saying or how this claim can be backed up.
  • “A high-quality code release translates into fewer defects into production and improved end user experience” – there is no indication here of what is meant by quality (nor an acknowledgement that “quality” can mean very different things to different people, in different contexts, in different organizations), but connecting “high-quality code” to “fewer defects in production and improved end user experience” seems highly questionable to me.
  • “An increased customer satisfaction level offers increased ROI to the organization” – I cringe when I see mention of “ROI” around QA/testing, and I cringed on cue here. The start of the very same blog post suggested that QA/testing is a necessity and not a choice, so why talk about its ROI? We don’t hear demands to increase the ROI of development or management or support, so why attempt to measure it for another necessary aspect of software development? (Note that Paul Seaman blogged on the use of terms from economics in testing recently, though not specifically on ROI.) And how exactly does “increased customer satisfaction” arise from a “streamlined QA function”? Customer satisfaction is, again, a many-faceted thing, so suggesting that a change to the way testing is integrated into the development process necessarily increases customer satisfaction oversimplifies what is a complex and deeply contextual rating.

We finally get to the three “emerging trends that will shape the [QA] function in the coming year and beyond”, starting with “Scriptless test automation”:

Test automation has been one of the top software testing trends for past few years. With a wider acceptance of test automation within the SDLC, there also comes the realization for constantly optimizing it for fulfilling the evolving requirements. One of the major challenges that organizations have been facing in adopting test automation has been the lack of skilled test automation resources for test script maintenance. Scriptless test automation is enabling organizations to overcome this challenge and still have efficient test cases for automating software testing.

Responding to this part alone could form its own post, but this is a perfect example of the kind of misguided thinking and advice that is so prevalent when it comes to the topic of automation. They are reinforcing the idea that automation is a trend but all that pesky code that’s required to tell the machines what to do is just too hard to maintain. It seems to be a hard message to get across, but creating automated checks is software development – and needs to be treated as such. The problem isn’t a “lack of skilled test automation resources for test script maintenance” as much as using people not skilled in programming to write the test code, in my opinion. While there is some value in scriptless automation technologies, I think their importance is generally overestimated at the same time as the effort involved in crafting genuinely maintainable and valuable automated check code is underestimated. (And, yes, I noted that they closed out this trend with the phrase “automating software testing”, as though that’s a thing.)
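The point about treating automated checks as software development can be sketched in code. This is a minimal, hypothetical example (the names and scenario are mine, not Cigniti’s): the check itself states intent, while the parsing mechanics live in a small helper layer, so when the product’s price format changes, maintenance is confined to one function rather than scattered across every check.

```python
# A minimal sketch of an automated check written as software, not script.
# All names here are illustrative assumptions, not from any real product.

def normalise_price(raw: str) -> float:
    """Helper layer: parsing mechanics kept out of the checks themselves."""
    return float(raw.strip().lstrip("$").replace(",", ""))

def check_cart_total(line_items: list[str], expected_total: float) -> bool:
    """The check reads as intent: do the line items sum to the expected total?"""
    total = sum(normalise_price(item) for item in line_items)
    # Compare with a small tolerance to avoid float representation noise.
    return abs(total - expected_total) < 0.01

# If the price format changes, only normalise_price needs maintenance.
assert check_cart_total(["$19.99", "$1,080.00"], 1099.99)
```

The design choice is the point: separating the stable intent from the volatile mechanics is ordinary software engineering, which is exactly the skill the “lack of resources for test script maintenance” framing glosses over.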

The next trend is “The marriage of AI, ML and QA”:

Introducing AI and ML into Quality Assurance process allows an organization to get out of the ‘Test Automation trap’, which can be explained as – “Test automation trap is when the test teams are not getting enough time to be able to do the failure triage from the previous test run before building the next test automation code.” 

It wouldn’t be a trend piece without mentioning “AI” and “ML”, of course. The fact that there are three parties in this marriage should be a worrying sign in itself, but really the idea that AI and ML will significantly impact the way the majority of organizations perform testing anytime soon just doesn’t stack up. Looking at the common questions on LinkedIn, for example, most organizations are still struggling with such basic aspects of testing and quality management that introducing AI and ML would probably be more disruptive than helpful. I’ve not heard of this “Test Automation Trap” before either, which to me sounds like an issue of poorly written tests or an acceptance of many “test failures” as the norm. Using AI/ML to tackle the issue of failure triage seems to be coming at the problem from the wrong end.

The final trend is “Continuous integration for continuous quality”:

With the help of DevOps, about 59% of organizations are now deploying multiple times a day, once a day, or once every few days. For these organizations, the code quality has been one of the biggest benefits of embracing DevOps.  

The CI/CD pipeline combined with test automation has done wonders for organization in terms of the quality of their releases. Not only has it positioned QA as an imperative instead of a bottleneck, it has resulted in higher returns as well. At present, only a handful of organizations have deployed bots to review their code, but this number is expected to go up as we move further into the future. 

Continuous integration is hardly a new thing and, in this “trend” towards “continuous quality”, the terms continuous integration, continuous delivery and DevOps are conflated. This is a common problem in the current discourse around these subjects, unfortunately. Another common problem is the idea that more frequent deployments necessarily lead to improved quality. The use of CI/CD pipelines and more automation doesn’t in any way guarantee higher-quality deliverables; in fact, it all too frequently seems to end up delivering poor quality more often. The post again uses the term “bottleneck” and implies that these more automated delivery mechanisms remove it, but the human element of critically evaluating what’s been built before it’s delivered is implicitly devalued here and increasingly seen as something to be minimized or removed. I’m not sure what the code review bots referenced here are about, but the reference again stems from the notion that we’re better off having machines do this kind of critical analysis work rather than humans, so as to remove “bottlenecks” to deploying software (with little or no human critique along the way).

It seems to me that all of these trends are essentially centred on increasing our reliance on machines as part of our testing and quality practice, which seems to contradict the “people centricity” trend cited from Gartner at the top of the post. I can’t recall any recent trend identified in one of these reports that focuses on improving the testing skills of humans. If I was being cynical, I’d think that there’s more money to be made in identifying trends around which vendors can build tools for profit…

There was nothing of substance in Cigniti’s post that would prove useful to someone in testing with a genuine interest in learning where our craft is headed in the years ahead. I wonder if anyone is interested in a contrarian article about “trends”, something like “The folly of following trends” perhaps?

If the thoughts expressed in this post resonate with you, my new consultancy might be able to help you with improving your testing and quality practices – please see www.drleeconsulting.com.au for more details of my offering.

Launching my testing consultancy, Dr Lee Consulting

As I mentioned in a recent blog post, I’ve been working on setting up my own software testing consultancy following my exit from Quest back in August.

I finally got all my ducks in a row and launched the business – Dr Lee Consulting – publicly on 21st October 2020!

I spent a few weeks focusing on business basics and refining the idea of what my consultancy should look like. I completed two of Pat Flynn’s excellent courses in the process, viz. Will It Fly? (the book and companion course) and Smart From Scratch. Mindmaps were my friend during this ideation and refinement stage, and my thanks go to those connections I reached out to along the way for their valuable help and feedback.

Another great source of inspiration and ideas was the “Share What You Know Summit” run as a virtual event by Teach:able (on 22-24 September). This was an excellent three-day event and furnished me with some great tips around LinkedIn profile tweaks and ideas for content generation.

In terms of administrivia, I registered for an ABN and business name online, then purchased the corresponding domain name (business name and domain name need to closely match for a .com.au domain, unlike most other domain extensions). I built my website using a free WordPress site, mainly due to my familiarity with their platform after blogging there for many years. I’ll probably upgrade to a paid plan sometime soon to remove ads and allow me to more professionally map my domain to that site.

I feel like the time I invested in the ideation and refinement of the idea was well spent and I tried not to go overboard in perfecting my website – at some point you just need to pull the trigger and get the thing out there!

My aim has always been to share what I’ve learned about testing with other organizations and Dr Lee Consulting is now my vehicle to do this. While I realize I’m not the right fit for every organization, I hope there are organizations/teams out there who will see the value in my services – I’m ready and waiting to help!

Check out www.drleeconsulting.com.au for full details of my offering and how to contact me for a no obligation conversation about engaging my services.

Publishing my first testing book, “An Exploration of Testers”

As I mentioned in my last blog post, I’ve been working on a testing book for the last year-or-so. With more free time since leaving full-time employment back in August, I’m delighted to have now published my first e-book on testing, called An Exploration of Testers.

The book is formed of contributions from various testers around the world, with seventeen contributions in the first edition. Each tester answered the same set of eleven questions designed to tease out testing, career and life lessons. I was humbled by how much time and effort went into the contributions and also by how willing the community was to engage with the project, with almost every tester I invited to contribute then committing to doing so. A number of contributions will be added in the coming months (and additional versions of the book are free after your initial purchase, so don’t be afraid to buy now!).

My experience of using LeanPub as the publishing platform has been generally very good. When I was researching ways to self-publish, LeanPub seemed to get good reviews and was free to try, so I gave it a go and ended up sticking with it. I’m still on the free plan and it suffices for this project for now. The platform makes most aspects of creating, publishing and selling a book really straightforward, and the markdown language used for writing the manuscript is easy to learn (though it sometimes comes with frustrating limitations on control of layout). I would recommend LeanPub to others looking to write their first book.

At the very start of the project, I decided that any proceeds from sales of the book would be ploughed back into the testing community and this fact seemed to encourage participation in the project. I will be transparent about the money received from book sales (with the only expenses being those taken by LeanPub as the publishing & sales platform) and also where I decide to invest it back into our community. It seems only fair to give back to the community that has been so generous to me over the years and also generated the content for the book.

For more details and to buy a copy, please visit https://leanpub.com/anexplorationoftesters

Pre-launch announcements for my new projects

After six weeks or so of resetting following my unplanned exit from Quest, I’m getting close to publicly announcing more details on a couple of new projects.

One of these has been in the making for about a year, while the other has arisen as a direct result of leaving full-time employment.

I’ve always been drawn to the idea of writing a book and I will finally realize this idea with the release of a testing-related e-book very soon. It’s been a highly collaborative effort with input from many members of the testing community. Having more free time since finishing up at Quest has given me the opportunity to wrap up what I think is worthy of publishing as a first edition. I will return all proceeds from sales of this book to the testing community. Look out for more details of the book via this blog and my social media presences in the coming weeks!

My other project is a new boutique software testing consultancy business. The intention is to offer something quite different in the consulting space, utilizing my skills and experience from the last twenty years to help organizations to improve their testing practices. This consultancy won’t suit everyone but I hope that my niche offering will both help those who see the value in the way I think about testing and also give me the chance to share my knowledge and experience in a meaningful way outside of full-time corporate employment. I expect to launch this business before the end of the year, but feel free to express interest in securing my services now if you believe that my thinking around software testing could be of value in your organization. Note that I will not be making myself available full-time (as I’m deliberately carving out time for volunteer work and to focus on my wellbeing), so now is a good time to secure some of my limited future availability before the formal launch of the consultancy. Again, keep an eye on this blog and my socials for more details of the testing consultancy project.

ER of presenting at DDD Melbourne By Night meetup (10th September 2020)

In response to a tweet looking for speakers for an online meetup organized by DDD Melbourne By Night, I submitted an idea – “Testing is not dead!” – and it was accepted.

I had a few weeks to prepare for this short (ten-minute) talk and went through my usual process of sketching out the content in a mindmap first (using the free version of XMind), then putting together a short slide deck (in PowerPoint) to cover that content.

I find it harder to nail down my content for short talks like this than for a typical longer conference track talk. The restricted time forces focus and I landed on just a few key points: looking at the claims of “testing is dead”, defining what “testing” means to me (and contrasting with “checking”), where automation fits in, and wrapping up with a few tips for non-specialist testers (as this is primarily a meetup with a developer audience).

I did two practice runs of the talk over the same conference call technology that the meetup would be using (Zoom), even though my willing audience of one (my wife) was only in the next room at home! I find practice runs to be an essential part of my preparation and I was pleased to find both runs coming in very close to the ten-minute timebox.

The September DDD by Night meetup took place on the evening of 10th September and featured nine lightning talks with some preamble and also time for questions between each talk. I was third up on the bill and managed to whizz through my talk in a few seconds under ten minutes! The content seemed to be well received and some of my ideas were clearly new to this audience, so I was pleased to have the opportunity to spread my opinion about testing to a different part of the Melbourne tech community.

Lee kicking off his talk

It was also great to see Vanessa Morgan as a first-time presenter during this meetup and her talk was a very polished performance.

Thanks to the DDD Melbourne crew for putting on meetup events during these interesting times and, as a newcomer, the friendly community spirit in this group was obvious.

You can watch my talk on YouTube.

Everyone’s talking about testing!

I can’t remember a time in my life when “testing” has been such a hot topic of coverage in the media. It feels like every news item leads with some mention of the number of tests conducted to detect the COVID-19 coronavirus, whether locally or further afield. This level of coverage of the topic of testing even exceeds that during Y2K (according to my memory at least), albeit in a very different context.

It was interesting to see the reaction when the President of the United States said that the US case numbers are high because so many tests have been conducted – and that a reduction in testing might be in order. This led Ben Simo to tweet on this idea in the context of software testing:

Stop testing your software! Bugs are the result of testing. Bugs that don’t kill users dead instantly aren’t really bugs. No testing is the key to zero defect software! If you don’t see it, it doesn’t exist. If you don’t count it, it doesn’t matter. No testing!

I felt similarly when I read some of the coverage of the worldwide testing efforts during the pandemic. “Testing” for COVID-19 is revealing valuable information and informing public health responses to the differing situations in which we find ourselves in different parts of the world right now. (In this context, “testing” is really “checking” as it results in an algorithmically-determinable “pass” or “fail” result.)
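The checking-vs-testing distinction can be made concrete with a tiny, hypothetical sketch (the function names here are stand-ins, not anything from the post): a check’s verdict is computed entirely by a mechanical decision rule, with no human evaluation in the loop.

```python
# A minimal sketch of a "check": the pass/fail verdict is algorithmically
# determinable. (add_numbers is a hypothetical stand-in for any product
# behaviour under check.)

def add_numbers(a: int, b: int) -> int:
    return a + b

def check(actual, expected) -> str:
    # The decision rule is entirely mechanical: compare and report.
    return "pass" if actual == expected else "fail"

assert check(add_numbers(2, 2), 4) == "pass"
assert check(add_numbers(2, 2), 5) == "fail"
```

Whether adding numbers is the right behaviour at all – whether the product in front of us is the product we wanted – is the testing part, and no assertion can decide that for us.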

When we test software, we reveal information about it and some of that information might not be to the liking of some of our stakeholders. A subset of that information will be made up of the bugs we find. In testing “more”, we will likely unearth more bugs as we explore for different kinds of problems, while “more” in the COVID-19 sense means performing the same test but on more people (or more frequently on the same people).

We should remain mindful of when we’ve done “enough” testing. If we are genuinely not discovering any new valuable information, then we might decide to stop testing and move on to something else. Our findings from any test, though, represent our experience only at a point in time – a code change tomorrow could cause problems we didn’t see in our testing today and an unlucky person could acquire COVID-19 the day after a test giving them the all clear.

There is a balance to be struck in terms of what constitutes “enough” testing, be that in the context of COVID-19 or software. There comes a point where the cost of discovering new information from testing outweighs the value of that information. We could choose not to test at all, but this is risky as we then have no information to help us understand changing risks. We could try to test everyone every day for COVID-19, but this would be hugely expensive and completely overwhelm our testing capacity – and would be overkill given what we already understand about its risks.
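
One way to picture this balance is as a diminishing-returns curve. The sketch below is a toy heuristic of my own, not a rule from any formal method: track how much new information each testing session yields, and consider stopping once recent sessions have stopped producing anything new.

```python
# Toy "have we tested enough?" heuristic (the window size and counts are
# invented for illustration): stop when the last few sessions have
# yielded no new findings at all.

def should_stop(findings_per_session, window=3):
    """Stop once the last `window` sessions yielded no new findings."""
    return (len(findings_per_session) >= window
            and sum(findings_per_session[-window:]) == 0)

sessions = [12, 7, 4, 1, 0, 0, 0]  # hypothetical new-findings counts
print(should_stop(sessions))  # True - returns have diminished to zero
```

Of course, as noted above, "no new findings" only describes a point in time; a change tomorrow can restart the curve.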

Many of us are testing products in which bugs are potentially irritating for our users, but not life and death issues if they go undetected before release. The context is clearly very different in the case of detecting those infected by COVID-19.

As levels of COVID-19 testing coverage have increased, the risk of acquiring the virus has become better understood. By understanding the risks, different mitigation strategies have been employed, such as (so-called) social distancing, progressively more stringent “lockdowns”, and mandatory mask wearing. These strategies are influenced by risk analysis derived from the results of the testing effort. This is exactly what we do in software testing too: testing provides us with information about risks and threats to value.

It’s also interesting to observe how decisions are being made with a broader context in mind, not just the testing results from a particular area or country. Data from all across the world is being collated and research studies are being published & referenced to build up the bigger picture. Even anecdotes are proving to be useful inputs. This is the situation we find ourselves in as software testers too: the software in front of us is a small part of the picture, and it’s one of the key tenets of context-driven testing that we are deliberate in our efforts to explore the context rather than just looking at the software in isolation. In this sense, anecdotes and stories – as perhaps less formal sources of information – are incredibly valuable in helping us to more fully understand the context.

Test reporting continues to be a topic of great debate in our industry, with some preferring lightweight visual styles of report and others producing lengthy, wordy documents. The reporting of COVID-19 case numbers continues to be frequent and newsworthy, as people look to form a picture of the situation in their locale. Some of this media reporting is very lightweight, in the form of just new daily case and fatality numbers, while some is much deeper and allows the consumer to slice and dice worldwide data. Charts seem to be the reporting style of choice, sometimes with misleading axes that either exaggerate or downplay the extent of the problem depending on the slant of the publisher.

Different people react to the same virus infection reports in quite different ways, based on their own judgement, biases and influences. We see the same issue with software test reporting, especially when such reporting is based purely on quantitative measures (such as test case counts, pass/fail ratios, etc.). The use of storytelling as a means of reporting is nothing new in the media and I’d argue we would be well served in software testing to tell a story about our testing when we’re asked to report on what we did (see Michael Bolton’s blog for an example of how to tell a three-part testing story – a story about the product and its status, a story about how the testing was done, and a story about the quality of the testing work).
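
To see how a purely quantitative report can mislead, consider this small sketch (the check names and severities are invented for illustration): a healthy-looking pass ratio that quietly hides the one failure that matters most.

```python
# Hypothetical results: nine passing checks and one failing one. A report
# built only on counts says "90% passed", which sounds reassuring, yet
# the single failure sits in the most valuable area of the product.
results = [{"name": f"check_{i}", "passed": True, "severity": "minor"}
           for i in range(9)]
results.append({"name": "payment_processing",
                "passed": False, "severity": "critical"})

pass_ratio = sum(r["passed"] for r in results) / len(results)
critical_failures = [r["name"] for r in results
                     if not r["passed"] and r["severity"] == "critical"]

print(f"Pass ratio: {pass_ratio:.0%}")            # reassuring on its own
print(f"Critical failures: {critical_failures}")  # the story the ratio hides
```

A testing story about *what* failed and *why it matters* carries the information that the ratio alone strips away.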

While I don’t normally focus on counting tests and their results, I’ll be happy to see more COVID-19 tests taking place and fewer new daily positive results in both my area of Australia and the world more generally. Stay safe.

(Many thanks to Paul Seaman for his review of this post, his sage feedback has made for a much better post than it otherwise would have been.)

Going meta: a blog post about writing a blog post

My recent experiences of authoring blog posts in WordPress have been less enjoyable than usual thanks to the use of their latest “block editor”, leading me to ask on Twitter:

WordPress seems to update their post editor very frequently so I just about learn the quirks of one when it is superseded by another.

This post will serve as my (long) answer to WordPress’s reply. I’m going to spend the next 45 minutes running an exploratory testing session, creating a blog post and noting issues as I come across them while using the block editor.

Session from Tuesday 21st July 2020, on a Windows 10 laptop (using keyboard and mouse controls only) using the Chrome browser

4:10pm I’m starting my session by writing a very basic block of unformatted text. I note that when I move my mouse, a small toolbar appears which covers the end of the previous block (this could be an issue when in the flow of writing). The toolbar disappears as soon as I type and reappears on every mouse movement. The content of the toolbar seems very limited, maybe to just the most used formatting features (most used by the whole WordPress community or most used by me)? At least each icon in the toolbar has a tooltip. There’s a very odd control that only appears when hovering over the leftmost icon (to change block type or style) which appears to facilitate moving the whole block up or down in the post. I wonder why the toolbar is so narrow, since there is plenty of room to add more icons to allow easier discovery of available options here. I’ve been distracted by the toolbar but now resume my mission to complete a basic paragraph of text.

OK, so hitting Enter gives me a new paragraph block, that makes sense. Let’s get more creative now, how about changing the colour of some text? The toolbar doesn’t appear to have a colour picker, oh, it’s tucked away under “More rich text controls”. I’ve typed some text, highlighted it and then selected a custom colour. That worked OK once I found the colour picker. The colour picker control seems to stay in the toolbar after using it – or does it? I’ll try it again but lo, it’s back under the hidden controls again. There’s probably a deliberate choice of behaviour here, but I’ll choose not to investigate it right now.

I’m trying to select some text across blocks using Shift+Arrow keys but that doesn’t work as I’d expect, being inconsistent with text selection using this keyboard combination in other text processing applications. (Ctrl+Shift+Arrow keys suffers the same fate.) Shift+Page Up/Down only selects within the current block, again not what I’d expect.

4:30pm After adding this new block (just by pressing Enter from the previous one), I’m intrigued by the array of block types to choose from when pressing the “+” button, which appears in seemingly different spots below here (and I just spotted another “+” icon on the very top toolbar of the page, which looks like it does the same thing). There are many block types, so many that a search feature is provided (a testing rabbit hole I’ll choose not to go down at the moment). Some of the block types have names which indicate they require payment to use, and the available block types are categorized (e.g. Text, Media, etc.). I decide to try a few of the different block types.

Adding a “quote” block now, which offers two areas, one for the quote and one for the citation. It appears that the citation cannot be removed and so more space is left below the quote text than I’d like (but maybe it doesn’t render the empty space when published?).

A test quote without citation

Moving on to adding a list and this works as I’d expected, offering a choice between bulleted and numbered with indentation (maybe there’s a limit on nesting here, but not investigated).

  • First item of my list
  • Next item of my list
    • Indented!

Even though I’ve been using this editor for my last few blog posts, I still tend to forget that auto-save is no longer a thing. I just happened to notice the “Save Draft” button in the top right corner of the page, so let’s save.

In reality, my blog posts are mainly paragraphs of text with an occasional quote and image so exploring more block types doesn’t seem worth the effort. But looking at images feels like a path worth following.

Pasting an image from the clipboard seems to work OK, though it immediately puts cursor focus into the caption, so I incorrectly started typing my next bunch of paragraph text as the image caption.

Options in the toolbar for the image make sense and I tried adding an image from a file with similar results (deleted from the post before publishing). Adding images into a post is straightforward and it’s good to see copying in directly from the clipboard working well as there have been issues with doing so in previous incarnations of the editor.

4:45pm Returning to simply writing text, I often add hyperlinks in my posts so let’s try that next. Ctrl+K is my usual “go to” for hyperlinks (from good ol’ Word) and it pops up a small edit window to add the URL and Enter adds it in: http://www.google.com Selecting some text and using the same shortcut does the same thing, allowing the text and the URL to be different. The hyperlinking experience is fine (and I note after adding the two hyperlinks here that there’s a “Link” icon in the toolbar also).

I remember to save my draft. As I resume typing, the toolbar catches my eye again and I check out “More options” under the ellipsis icon. I notice there are two very similar options, “Copy” and “Duplicate”, so I’ll try those. Selecting “Copy” changes the option to “Copied!” and pasting into Notepad shows the text of this block with some markup. I note that “Copied!” has now changed back to “Copy”. Selecting “Duplicate” immediately copies the content of this block right underneath (deleted for brevity), I’m not sure what the use case would be for doing that over and above the existing standard copy functionality. OK, I’ve just realised that I’ve been distracted by the toolbar yet again.

I just added this block via a “hidden” control, I’m not sure why products persist with undiscoverable features like this. Hovering just below an existing block halfway across the block reveals the “+” icon to add a block (though it often seems to get ‘blocked’ by, you’ve guessed it, that toolbar again).

My time is just about up. As I review my short session to create this blog post, I think it’s the appearing/disappearing toolbar that frustrates me the most during authoring of posts. I almost never use it (e.g. I always use keyboard shortcuts to bold and italicize text, and add hyperlinks) and, when I do, the option I’m after is usually tucked away.

Thanks to WordPress for responding to my tweet (and providing what is still generally a great free platform for blogging!) and for giving me a good excuse to test, learn and document a session!