On the testing community merry-go-round

This tweet from Katrina Clokie started a long and interesting discussion on Twitter:

I was a little surprised to see Katrina saying this as she’s been a very active and significant contributor to the testing community for many years and is an organizer for the highly-regarded WeTest conferences in New Zealand. It seems that her tweet was motivated by her recent experiences at non-testing conferences and it’s been great to see such a key member of the testing community taking opportunities to present at non-testing events.

The replies to this tweet were plentiful and largely supportive of the position that (a) the testing community has been talking about the same things for a decade or more, and (b) does not reach out to learn from & help educate other IT communities.

Groundhog Day?

Are we, as a testing community, really talking about the same things over and over again? I think we both are and aren't; it really depends on the lens through which you look at it.

As Maria Kedemo replied on the Twitter thread, “What is old to you and me might be new to others”, and I certainly think many conference topics repeat the same subject matter year on year – but this is not necessarily a bad thing. A show of hands in answer to “who’s a first-timer?” at a conference usually sees a large proportion of hands go up, so there is always a new audience for the same messages. Provided these messages are sound and valuable, why not repeat them for new entrants to the community? A talk that looks the same from its title on a programme could also be very different in content to what it was a decade ago. While I’m not familiar with developer conference content, I imagine the situation there is similar, with some foundational developer topics being mainstays of conference programmes year on year.

I’ve been a regular testing conference delegate since 2007 (and, since 2014, a speaker) and have noticed significant changes in the “topics du jour” over this period. I’ve seen a move away from a focus on testing techniques and “testing as an independent thing” towards topics like quality coaching, testing as part of a whole-team approach to quality (thanks, agile), and the human factors in being successful as a tester. At developer-centric conferences, I imagine the topics shift frequently with changes in technology and language, and have likely also shifted with agile adoption.

As you may know, I’m involved with organizing the Association for Software Testing conferences in Australia and I do this for a number of reasons. One is to offer a genuine context-driven testing community conference in this geography (because I see that as a tremendously valuable thing in itself) and another is to build conference programmes offering something different from what I see at other testing events in Australia. The recently-released TiCCA19 conference programme, for example, features a keynote presentation from Lynne Cazaly; she is not directly connected with software testing, but will deliver very relevant messages to our audience, which is drawn mainly from the testing community.

Reach out

I think most disciplines – be they IT, testing or otherwise – fail to capitalize on the potential to learn from others; maybe it’s just human nature.

At least in the context-driven part of the testing world, though, I’ve seen genuine progress in taking learnings from a broader range of disciplines including social science, systems thinking, psychology and philosophy. I personally thank Michael Bolton for introducing me to many interesting topics from these broader disciplines that have helped me greatly in understanding the human aspects involved in testing.

In terms of broadening our message about what we believe good testing looks like, I agree that it’s generally the case that the more public members of the testing community are not presenting at, for example, developer-centric conferences. I have recently seen Katrina and others (e.g. Anne-Marie Charrett) taking the initiative to do so, though, and hopefully more non-testing conferences will see the benefit of including testing/quality talks on their programmes. (I have so far been completely unsuccessful in securing a presentation slot at non-testing conferences via their usual CFP routes.)

So I think it’s a two-way street here – we as testing conference organizers need to be more open to including content from “other” communities and also vice versa.

I hope Katrina continues to contribute to the testing community – her voice would be sorely missed.

PS: I will blog separately about some of the replies to Katrina’s thread that were specifically aimed at the context-driven testing community.


The TiCCA19 conference programme is live!

After a successful 2018 event in the shape of CASTx18, the Association for Software Testing were keen to continue Australian conferences and so Paul Seaman and I took on the job of organizing the 2019 event on the AST’s behalf. We opted to rebrand the conference to avoid confusion with the AST’s well-known CAST event (held annually in North America) and so Testing in Context Conference Australia was born.

It’s been a busy few months in getting to the point where the full line-up for this conference is now live.

Paul and I decided to go for a theme with an Australian bent again, this time “From Little Things, Big Things Grow”. It’s always a milestone in planning a conference when it comes time to open up the call for proposals and we watched the proposals flowing in, with the usual surge towards the CFP closing date of 31st October.

The response to the CFP was excellent, with 95 proposals coming in from all around the globe. We had ideas from first-time presenters and also some from very seasoned campaigners on the testing conference circuit. My thanks go to everyone who took the time and effort to put forward a proposal.

We were joined by Michele Playfair to help us select a programme from the CFP responses. This was an interesting process (as usual), involving some hard decisions to build what we considered the best conference programme from what was submitted. With only eight track session slots to fill, we unfortunately couldn’t choose all of the excellent talks we were offered.

The tracks we have chosen are hopefully broad enough in topic to be interesting to many testers. Our keynotes come from Ben Simo (making his first trip and conference appearance in Australia!) and local legend, Lynne Cazaly. Rounding out our programme are three full-day workshops showcasing top Melbourne talent, in the shape of Neil Killick, Scott Miles and Toby Thompson. I’m proud of the programme we have on offer and thank all the speakers who’ve accepted our invitation to help us deliver an awesome event.

The complete TiCCA19 line-up is:

Keynotes (March 1st)

  • Ben Simo with “Is There A Problem Here?”
  • Lynne Cazaly with “Try to see it my way: How developers, technicians, managers and leaders can better understand each other”

Tracks (March 1st)

  •  with “From Prototype to Product: Building a VR Testing Effort”
  •  with “Tales of Fail – How I failed a Quality Coach role”
  •  with “Test Reporting in the Hallway”
  •  with “The Automation Gum Tree”
  •  with “Old Dog, New Tricks: How Traditional Testers Can Embrace Code”
  •  with “The Uncertain Future of Non-Technical Testing”
  • Adam Howard with “Exploratory Testing: LIVE!”
  •  with “The Little Agile Testing Manifesto”

Workshops (February 28th)

  • Neil Killick with “From “Quality Assurance” to “Quality Champion” – How to be a successful tester in an agile team”
  • Scott Miles with “Leveraging the Power of API Testing”
  • Toby Thompson with “Applied Exploratory Testing”

For more details about TiCCA19, including the full schedule and the chance to benefit from significant discounts during the “Little Ripper” period of registration, visit ticca19.org

I hope to see you in Melbourne next year!

Another survey on the state of testing worldwide, the “ISTQB Worldwide Software Testing Practices Report 2017-18”

I blogged recently about the Capgemini/Micro Focus/Sogeti “World Quality Report 2018/19” (WQR) and, shortly afterwards, another report from a worldwide survey around software testing appeared, this time in the shape of the ISTQB Worldwide Software Testing Practices Report 2017-18. The publication of this report felt like another opportunity to review the findings and conclusions, as well as comparing and contrasting with the WQR.

The survey size is stated as “more than 2000” so it’s similar in reach to the WQR, but it’s good to see that the responses to the ISTQB survey are much more heavily weighted to testers than managers/executives (with 43% of responses from people identifying themselves as a “Tester” and 77% being “technical” vs. 23% “managers”). Organizational size information is not provided in this report, whereas the WQR data showed it was heavily skewed towards the largest companies.

The ISTQB report comes in at a light forty pages compared to the WQR’s seventy, in part due to the different presentation style here. This report mainly consists of data with some commentary on it, without the grand treatise-style conclusions drawn in the WQR.

Main findings (pages 4-5)

The report’s “main findings” are listed as:

  1. More than 2000 people from 92 countries contributed to the … report. In this year’s report, respondents’ geographic distribution is quite well balanced.
  2. The outcome of the 2017-2018 report is mostly in parallel with the results of the one done in 2015-16.
  3. Test analyst, test manager and technical test analyst titles are the top three titles used in a typical tester’s career path.
  4. Main improvement areas in software testing are test automation, knowledge about test processes, and communication between development and testing.
  5. Top five test design techniques utilized by software testing teams are use case testing, exploratory testing, boundary value analysis, checklist based, and error guessing.
  6. New technologies or subjects that are expected to affect software testing in near future are security, artificial intelligence, and big data.
  7. Trending topics for software testing profession in near future will be test automation, agile testing, and security testing.
  8. Non-testing skills expected from a typical tester are soft skills, business/domain knowledge, and business analysis skills.

The first “finding” is not really a finding; it’s data about the survey and its respondents. There is nothing particularly surprising in the other findings. Finding number 5 is interesting, though, as I wouldn’t expect to see “exploratory testing” being considered a test technique alongside the likes of boundary value analysis. For me, exploratory testing is an approach to testing during which we can employ a variety of techniques (such as BVA).
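To illustrate the distinction: boundary value analysis is a concrete technique for choosing test inputs at and around the edges of a valid range, whereas exploratory testing is an approach within which techniques like BVA can be applied. Here is a minimal sketch of BVA, assuming a hypothetical validate_quantity function that accepts integers from 1 to 100:

```python
import pytest

def validate_quantity(quantity: int) -> bool:
    """Hypothetical function under test: accepts order quantities from 1 to 100."""
    return 1 <= quantity <= 100

# Boundary value analysis picks inputs at, and just either side of, the boundaries.
@pytest.mark.parametrize("quantity,expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(quantity, expected):
    assert validate_quantity(quantity) == expected
```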

Background of respondents (pages 8-9)

The geographical distribution of responses is probably indicative of ISTQB strongholds, with 33% from Asia, 27% from North America and 27% from Europe.

More than half of the respondents come from “Information Technology” organizations, so this is again a difference from the WQR and indicates a different target demographic. Three quarters of the responses here are from just four organization types, viz. IT, Financial Services, Healthcare & Medical, and Telecom, Media & Entertainment.

Organizational and economic aspects of testing (pages 11-14)

In answering “Who is responsible for software testing in your company?”, a huge 79% said “In-house test team” but this isn’t the whole story as just 30% said “Only in-house test team”, so most are also using some other supplemental source of testers (e.g. 19% said “off-shore test team”). When it comes to improving testers’ competency, 50% responded with “Certification of competencies” which is again probably due to the ISTQB slant on the targets for the survey. It’s good to see a hefty 27% of respondents saying “Participation at conferences”, though.

The classic “What percent of a typical IT/R&D project budget is allocated to software testing?” question comes next. This continues to baffle me as a meaningful question, especially in agile environments where it’s not easy to determine what constitutes testing as opposed to development. The most common answer here (41% of responses) was “11-25%” while only 8.5% said “>40%”. You might recall that the WQR finding in this area was 26%, so this report is broadly consistent. But it still doesn’t make sense as something to measure, at least not in the agile context of my organization.

When asked about their expectations of testing budget for the year ahead, 61% indicated some growth, 31% expected it to be stable and just 8% expected a decline.

Processes (pages 15-23)

As you’d probably expect, the Processes section is the chunkiest of the whole report.

It kicks off by asking “What are the main objectives of your testing activities?” with the top three responses being “To detect bugs”, “To show the system is working properly” and “To gain confidence”. While finding bugs is an important part of our job as testers, it is but one part of the job and arguably not the most important one. The idea that testing can show the system “is working properly” concerns me, as does the idea that we can give other people confidence by testing. What we need to focus on is testing to reveal information about the product and communicating that information in ways that help others to make decisions about whether we have the product they want and whether the risks to its value that we identify are acceptable or not. A worrying 15% of responses to this question were “To have zero defects”.

A set of 17 testing types and 18 testing topics form the basis for the next question, “Which of the below testing types and/or topics are important for your organization?” Functional testing easily won the testing types competition (at 83%) while user acceptance testing took the gong in the topics race (at 66%). Thinking of testing in this “types” sort of breakdown is a feature of the ISTQB syllabus but I’m not convinced it has much relevance in day-to-day testing work, though I appreciate other contexts might see this differently. 53% of respondents said exploratory testing was an important topic, but later responses (see “Testing techniques and levels”) make me uneasy about what people are thinking of as ET here.

When it comes to improvement, 64% of respondents said “Test automation” was the main improvement area in their testing activities. I’m not sure whether the question was asking which areas they see as having improved the most or the areas that still have the most room for improvement, but either way, it’s not surprising to see automation heading this list.

The final question in this section asks “What are the top testing challenges in your agile projects?”, with “Test automation”, “Documentation” and “Collaboration” heading the answers. The report suggests: “The root cause behind these challenges may be continuously evolving nature of software in Agile projects, cultural challenges/resistance to Agile ways of working”. While these are possible causes, another is the mistaken application of “traditional” approaches to software testing (as still very much highlighted by the ISTQB syllabus) in agile environments.

Skills & career paths (pages 24-31)

The second largest portion of the report kicks off by looking at the career path of testers in the surveyed organizations. The most commonly reported career path is “Tester -> Test Analyst”, closely followed by “Tester -> Test Manager”. I don’t find titles like those used here very relevant or informative; they mean quite different things in different organizations, so this data is of questionable value. Similarly, the next question – “What could be the next level in the career path for a test manager?” (with the top response being a dead heat between “Test Department Director” and “Project Manager”) – doesn’t really tell me very much.

More interesting are the results of the next question, “Which testing skills do you expect from testers?” with the answers: Test Execution (70%), Bug Reporting (68%), Test Design (67%), Test Analysis (67%), Test Automation (62%), Test Planning (60%), Test Strategy (52%), Test Implementation (50%), Test Monitoring (38%), Bug Advocacy (29%) and Other (2%). This indicates, as the report itself concludes, that today’s tester is expected to have a broad range of skills – testing is no longer about “running tests and reporting bugs”.

The last two questions in this section are around “non-testing skills” expected of testers, firstly in an agile context and then in a non-agile context. The answers are surprisingly similar, with “Soft skills”, “Business/domain knowledge” and “Business Analysis” forming the top three in both cases (albeit with the second two skills reversed in their order). It troubles me to think in terms of “non-testing skills” when we really should be encouraging testers to build skills in the areas that add most value to their teams, in whatever context that happens to be. In drawing distinctions between what is and isn’t a testing skill, I think we diminish the incredibly varied skills that a great tester can bring to a team.

Tools & automation (pages 32-33)

On tool usage, the majority of respondents indicated use of defect tracking, test automation, test execution, test management, and performance testing tools. This is unsurprising as raw statistics, but it would be nice to know how those tools are being used to improve outcomes in the respondents’ environments.

The other question in this section is “What is the percentage of automated test cases you use with respect to your overall test cases?” Maybe you can hear my sighs. How anyone can honestly answer this question is beyond me but, anyway, 19% of respondents said more than 50%, while close to half of them said less than 10%. The report makes the mistake of interpreting these numbers as coverage percentages, when that is not what the question asked: “Almost half of respondents that implemented automated tests reported that their coverage is up to 20%”. The question in itself is meaningless and reinforces the common misconceptions that all tests are equal and that you can compare automated tests to “other” tests in a meaningful way.

Testing techniques & levels (pages 34-35)

It’s interesting to see the list of “test techniques” on offer in answering “Which test techniques are utilized by your testing team?”. The top five responses were Use Case Testing (73%), Exploratory Testing (67.2%), Boundary Value Analysis (52.3%), Checklist-Based (49.7%) and Error Guessing (36%). I’m assuming respondents answered here in accordance with the definition of these techniques from the ISTQB syllabus. I find it almost impossible to believe that two-thirds of the sample are really doing what those of us in the context-driven testing world would recognize as exploratory testing. The list of techniques doesn’t contain comparable things for me anyway; again, I see ET as an approach rather than a technique comparable to boundary value analysis, equivalence partitioning, decision tables, etc.

When it comes to test “levels”, system and integration testing are indicated as consuming the most of the testing budget, unsurprisingly. It’s not clear where spend on automated testing fits into these levels.

Future of testing (pages 36-39)

In answering “Which new technologies or subjects will be important to the software testing industry in the following 5 years?”, around half of the respondents said Security, Artificial Intelligence, Big Data and Cloud. Answering “What will be the most trending topic for software testing profession in near future”, the top responses were Test Automation, Agile Testing and Security Testing. This second question doesn’t seem very useful – what does “most trending topic” really mean? The two questions in this section of the survey were unlikely to result in revelations – and they didn’t.

Wrapping up

With less wordy conclusion-drawing in the ISTQB report than in the World Quality Report, there is more room for the reader to look at the data and form their own opinions of what it is telling them. For me, the questions and possible answers generally don’t tell me a great deal about what testers are really doing, what challenges they are facing, or how we grow both testers and testing in the future.

ER of attending and presenting at the inaugural TestBash Australia conference (Sydney)

The first TestBash conference to be held in Australia/New Zealand took place in Sydney on October 19. The well-established conference brand of the Ministry of Testing ensured a sell-out crowd (of around 130) for this inaugural event, quite an achievement in the tough Australian market for testing conferences. The conference was held in the Aerial function centre at the University of Technology in Sydney.

The Twitter hashtag for the event was #testbash (from which I’ve borrowed the photos in this post) and this was very active across the conference and in the days after.

I was there to both attend and present at the conference. In fact, I would be co-presenting with Paul Seaman on our volunteer work teaching software testing to young adults on the autism spectrum. It was great to have this opportunity and we were humbled to be selected from the vast response the conference had to its call for papers.

The event followed the normal TestBash format, viz. a single day conference consisting of a single track with an opening and closing keynote plus a session of “99 second talks” (the TestBash version of lightning talks). Track sessions were 30 or 45 minutes in duration, generally with very little time after each talk for questions from the audience (especially in the case of the 30-minute slots).

Early arrivals were rewarded with the opportunity to participate in a Lean Coffee session out on the balcony at the Aerial function centre, a nice way to start the day in the morning sunshine (and with pretty good barista coffee too!).

The conference proper kicked off at 8.50am with a short opening address from the event MC, Trish Koo. She welcomed everyone, gave some background about the Ministry of Testing and also gave a shout out to all of the sponsors (viz. Enov8, Applitools, Gumtree, Tyro and Testing Times).

The opening keynote came from Maaret Pyhajarvi (from Finland) with “Next Level Teamwork: Pairing and Mobbing”. Maaret is very well-known for her work around mobbing and this was a good introductory talk on the topic. She mentioned that mobbing involves everyone in the team working together around one computer, which helps learning as everyone knows something that the others don’t. By way of contrast, she outlined strong-style pairing, in which “I have an idea, you take the keyboard to drive”. In this style, different levels of skill help; being unequal at the task is actually a good thing. Maaret said she now only uses pairing as a way to train people, not to actually test software. In a mobbing scenario, there is always one driver on the keyboard who is only following instructions and not thinking. A designated navigator makes decisions on behalf of the group. The roles are rotated every four minutes and a retro is held at the end of every session. Maaret also noted the importance of mixing roles in the mob (e.g. testers, developers, automation engineers). This was a strong opening keynote with content pitched at just the right level for it to be of general interest.


Next up was a 30-minute talk from Alister Scott (from Automattic) with “How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com”. He introduced his talk by giving some context about the way the company is organized – 800 people across 69 countries, with everyone remote (i.e. no offices!), and all internal communications being facilitated by WordPress (dogfooding). Alister structured his talk as a series of problems and their solutions, starting with the problem of broken customer flows in production (when they moved to continuous delivery). Their solution to this problem was to add automated end-to-end testing of signup flows in production (and only in production). This solution led to the next problem, having non-deterministic end-to-end tests due to ever-changing A/B tests. The solution to this problem was an override of A/B tests during testing. The next problem was these new tests being too slow, too late (only in production) and too hidden, so they moved to parallel tests and adding “canaries” on merge (before deployment) – simple tests of key features (signing up and publishing a page) designed to give fast feedback on major breaking changes. This led to the next problem, having to revert merges and slow local runs, to which the solution was live branch tests with canaries on every pull request. This led to the observation that, of course, canaries don’t find all the problems, so the solution then was to add optional full test suites on live branches. Even then, a problem persisted with Internet Explorer 11 and Safari 10 specific issues, so IE11 and Safari 10 canaries were added. The final problem is still current, in that people still break end-to-end tests! This was a nicely structured short talk about a journey of end-to-end testing and how solving one problem led to another (and ultimately has put them in a position of having no manual regression testing), with good content.
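The “canary with an A/B override” pattern is easy to picture in code. Automattic’s real suite is their own Node-based framework, so the following is purely an illustrative sketch in Python with Playwright, assuming a hypothetical signup URL, form selectors and an A/B override cookie:

```python
# Illustrative sketch only: a "canary" end-to-end check of one key user flow,
# run on every merge to give fast feedback on major breakages.
# The URL, selectors and A/B override cookie are hypothetical placeholders,
# not Automattic's actual implementation.
from playwright.sync_api import sync_playwright

def run_signup_canary() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()
        # Pin A/B experiments to a known variant so the test is deterministic.
        context.add_cookies([{
            "name": "ab_test_override",
            "value": "control",
            "domain": "signup.example.com",
            "path": "/",
        }])
        page = context.new_page()
        page.goto("https://signup.example.com/start")
        page.fill("#email", "canary@example.com")
        page.click("button[type=submit]")
        # Fail fast if the critical flow is broken.
        page.wait_for_selector("text=Check your email", timeout=10_000)
        browser.close()

if __name__ == "__main__":
    run_signup_canary()
```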


A welcome break for morning tea and a chance to catch up with familiar faces came next before the delegates reconvened, with Enov8 getting the chance for a 99-second sponsor talk before sessions resumed.

First up after the break was a 30-minute session thanks to Michele Playfair (of YOW!) with “A Tester’s Guide to Changing Hearts and Minds”. Her key message was that the ability to change people’s opinions about testing was essentially a marketing exercise and she introduced the “4 P’s of marketing”, viz. Product, Price, Promotion and Placement. She argued that, as testers, we need to be better at defining our product (we should be able to answer questions like “what do you do here?”) and also promoting ourselves (by building relationships and networks, and revealing our value). This was a good short talk from Michele, a different angle on the topic of testers describing and showing their value.


Next up was Peter Bartlett (of Campaign Monitor) with a 45-minute talk on “Advancing Quality by Turning Developers into Quality Champions”. He defined a “quality champion” as “a developer who actively promotes quality in their team”, with this being a temporary role (typically lasting six months or so) which is rotated amongst the team. He generally selects someone who already displays a quality mindset or is an influencer within the team to take on the role initially and then trains them via one-on-one meetings, contextual training and against set goals. He encourages them to ask questions like “what areas are hard to test and why?”, “what can I do to make it easier for you to develop your code and be confident in its quality?”, and “what’s the riskiest piece of what you’re working on?”.  Pete holds regular group meetings with all of the quality champions, these might be demo meetings, lean coffees or workshops/activities (e.g. how to write good acceptance criteria, dealing with automation flakiness, playing the dice game, introducing a new tool, how to use heuristics, live group testing). He has noted some positive changes as a result of using this quality champions model, including increased testability, a growth in knowledge and understanding around quality, new automation tests and performance tool testing research. Pete wrapped up with some tips, including starting small, taking time to explain and listen (across all project stakeholders), and to keep reviewing. This was a similar talk to Pete’s talk at the CASTx18 conference earlier in the year but it felt more fully developed here, no doubt as a result of another six months or so of trying this approach in Campaign Monitor.


As the clock struck noon, it was time for Paul Seaman (of Travelport Locomote) and me to take the big stage for our 30-minute talk, “A Spectrum of Difference – Creating EPIC Software Testers”. We outlined the volunteer work we’ve been doing with EPIC Assist to teach software testing to young adults on the autism spectrum (a topic on which I’ve already blogged extensively) and we were pleased with how our co-presenting effort went – and we thought we looked pretty cool in our EPIC polo shirts! We managed to finish up just about on time and the content seemed to resonate with this audience.


With our talk commitment completed, it was lunch hour (albeit with very limited vegan options despite pre-ordering) and it was good to get some fresh air and sunshine out on the venue’s balcony. Paul and I received lots of great feedback about our talk during lunch; it’s always so nice when people make the effort to express their thanks or interest.

Returning from lunch, Applitools got their 99 seconds of fame as a sponsor before presentations resumed with a 45-minute session by Adam Howard (of TradeMe), “Exploratory Testing: LIVE”. This was a really brave presentation, with Adam performing exploratory testing of a feature in the TradeMe website (New Zealand’s eBay) that had been deliberately altered by a developer in ways Adam was not aware of (via an A/B deployment in production). It was brave in many ways: he relied on internet connectivity and a stable VPN connection back to his office in New Zealand, and also exposed himself to testing a feature for the first time in front of 130 eagle-eyed testers! He applied some classic ET techniques and talked through everything he was doing in very credible terms, so this session served as an object lesson to anyone unfamiliar with what genuine exploratory testing looks like and how valuable it can be (Adam unearthed many issues, some of which probably weren’t deliberately introduced for the purposes of his session!). Great work from a solid presenter.


The following 30-minute talk was Paul Maxwell-Walters with “Avoid Sleepwalking to Failure! On Abstractions and Keeping it Real in Software Teams”. This was a really serious talk, high on well-researched content, and it was a struggle to give it all the coverage it deserved in such a short slot. He introduced the ideas of hyper-normalization and hyper-reality before getting into abstractions, viz. “quality” and “measurement”. I particularly liked this quote from his talk: “bad metrics and abstractions are delusional propaganda”! This might have been a better talk had he tried to cover less content, but it was nevertheless really engaging and interesting stuff.


The final break came next before we reconvened for the push to the finish. First up after the break was another 99-second sponsor talk, this time Anne-Marie Charrett (conference co-organizer) on her consultancy business, Testing Times.

The last 30-minute slot went to first-time conference presenter, Georgia de Pont (of Tyro), with “Test Representatives – An Alternative Approach to Test Practice Management”, and she presented very confidently and calmly on her first outing. She outlined how Tyro moved to having testers embedded in agile teams and, while there were lots of positives from doing this, there was also a lack of consistency in test practice across the teams and no way to consider practice-wide improvements. She went on to talk about the move to “test representatives” (who are themselves embedded testers in teams), one from each tribe, who have a mission to provide a community for testers and act as points of contact for initiatives impacting testing. Each representative then shares the outputs of the representatives group with their team. Initiatives the representatives have covered so far include clarifying the role of the embedded tester, improving the test recruitment process (via a pair testing exercise), onboarding new test engineers, performance criteria for test engineers, upskilling test engineers, co-ordinating engineering-wide test engineers and developing a Quality Engineering strategy. There is also a stretch goal for testers to operate across teams. Georgia’s recommended steps to implement such a model were to start small, look for volunteers over selection, communicate the work of the representatives across the organization, survey to get feedback, hold retros within the representatives group and foster support from engineering leadership. This was a solid talk, especially impressive considering Georgia’s lack of experience in this environment.


The final presentation of the day was a closing keynote thanks to Parimala Hariprasad (of Amadeus) with “Enchanting Experiences – The Future of Mobile Apps”. Her time on stage was pretty brief (using only a little over half of her 45-minute slot before Q&A) but she was very engaging. She argued that designing great products isn’t about good screens, it’s about great – enchanting – experiences. She said we should think more about ecosystems than apps and screens as systems become more complex and interconnected. Her neat slides and confident presentation style made her messaging very clear and she also handled Q&A pretty well.


The last session of the conference was dedicated to “99 second talks”, the TestBash version of lightning talks in which each speaker gets just 99 seconds to present on a topic of their choice. There were plenty of volunteers, and the time freed up by the short closing keynote allowed for more of these talks than usual – some 18 in total, as follows:

  • Sam Connelly – on depression (and introducing “spoon theory”)
  • Amanda Dean – on why she believes testing is not a craft and should be thought of as a profession
  • Maaret Pyhajarvi – live exploratory testing of an API (using the Gilded Rose example, as per her recent webinar on the same topic)
  • Cameron Bradley – on why a common automation framework is a good thing (based on his experience of implementing one at Tabcorp)
  • Dany Matthias – on experimenting with coffee!
  • Melissa Ngau – on giving and receiving feedback
  • Geoff Dunn – on conflict and how testers can help to resolve it
  • Catherine Karena – on mentoring
  • Nicky West – what is good strategy?
  • Kim Nepata – Blockchain 101
  • Sunil Kumar – mobile application testing: how, what and why?
  • Said – on rotations and why they’re essential in development teams
  • Melissa (Editor Boss at Ministry of Testing) – living a dream as a writer
  • Leela – on transitioning from a small to a large company
  • Haramut – demo of a codeless automation framework
  • Trish Koo – promoting her test automation training course
  • Anne-Marie Charrett – “Audience-Driven Speaking”
  • Maaret Pyhajarvi – promoting the Speak Easy mentoring programme


After a brief closing speech from Trish Koo, the conference closed out. The action then moved to the nearby Knox Street Bar for a post-conference “meetup” with free drinks courtesy of sponsorship from Innodev. This was a fun evening, relaxing with old friends from the testing community and talking conference organizing with others involved in this, erm, fun activity!


I’ll finish off this blog post with some general thoughts on this conference.

The standard of presentations was excellent, as you might expect from a TestBash and the massive response to their call for papers (around 250). The mix of topics was also very good, from live exploratory testing (I would love to see something like this at every testing conference) to automation to coaching/training/interpersonal talks.

The single track format of all TestBash conferences means there is no fear of missing out, but the desire to pack as many talks as possible into the single day means very limited opportunity for Q&A (which is often where the really interesting discussions are). I personally missed the deep questioning that occurs post-presentations at conferences like CAST.

Although the sponsor talks were kept to short 99-second formats, I still find sponsor talks of any kind uncomfortable, especially at a relatively expensive conference.

Paul and I enjoyed presenting to this audience and the Ministry of Testing do an excellent job in terms of pre-gig information and speaker compensation (expensing literally door-to-door). We appreciated the opportunity to share our story and broaden awareness of our programme with EPIC Assist.

Attending the pre-TestBash Sydney Testers meetup

Arriving in Sydney the day before our presentation at the Ministry of Testing’s TestBash Australia 2018 conference allowed me (along with Paul Seaman) to attend the pre-TestBash meetup organized by the well-known Sydney Testers group.

The meetup was held in the offices of Gumtree, up on the 22nd floor of the tower at 1 York Street in the CBD. On entering their office, the most striking feature was the simply stunning view it affords their lucky employees of the famous Sydney Harbour Bridge. The other noticeable thing about this relatively newly-renovated space for the company is that it has been furnished using items themselves sourced from the Gumtree platform, so no cookie-cutter corporate office furnishings here!

View of the Sydney harbour bridge from the Gumtree office

It was good to see a decent crowd of about thirty people enjoying the free food and drinks before the “main event”, viz. Trish Koo with her short presentation on “The Future of Testing”. She covered some interesting predictions in her talk, including:

  • Exploratory Testing will be really weird
  • End to end testing will become meaningless
  • Black box testing will be cool again
  • Testers may be the only ones who can stop the robot apocalypse

This was at least a very different treatment compared to the many similarly-named talks out there. Her hand-drawn slides were another point of difference and she certainly got some interesting reactions from the audience! The Q&A afterwards was engaging, but still left ample time for us all to mingle before we formally overstayed our welcome at Gumtree!

Richard Bradshaw and Trish Koo

Trish says testers may be the only ones who can stop the Robot Apocalypse!

A vision of everyone working together from Trish Koo

It was good to see Richard Bradshaw there representing Ministry of Testing as well as TestBash Australia conference organizers, David Greenlees and Anne-Marie Charrett. Thanks to Sam Connelly and the Sydney Testers crew for putting on a good meetup in the run up to TestBash, it’s always good to mingle with the local testing community before a major event in another city.

(First three photos above from Michele Playfair, the last one from Paul Maxwell-Walters.)

ER of attending the TechDiversity Awards 2018 (Melbourne)

I had the pleasure of attending the TechDiversity Awards in Melbourne on 27th September. I was there as part of the EPIC Assist contingent as a nominee for an award in the Education category for the EPIC TestAbility Academy (ETA), the software testing programme for young adults with autism delivered by Paul Seaman and me. (You can view our nomination here.)

The venues for the two parts of the event were both within a renovated wharf shed at Docklands.

The first part of the event took place in the Sumac space and saw all the shortlisted nominees (around 40 different groups) assembled for the selection of the merit award winners who would later battle it out for the top spot in each category (viz. Government, Business, Media, and Education). The Education category had the most entries on the shortlist (18) and just five were selected for merit awards – ETA didn’t make it to the next stage, unfortunately. We were still very proud to have been nominated and shortlisted amongst such a great bunch of programmes in the tech diversity space around Education.

Paul & Maria, Michele, Lee & Kylie at the merit awards

Moving on to the Gala Dinner in the massive Peninsula space, we had our own table consisting of (clockwise in the below photo) Kym Vassiliou (EPIC Assist), Lee, Kylie (Lee’s wife), Bill Gamack (CEO of EPIC Assist), Paul Seaman, Maria (Paul’s wife), Michele Playfair and Craig Thompson (EPIC Assist). The event was a packed house with about 400 people sitting down for the dinner.


The MC for the evening was Soozey Johnstone and she did a really good job of keeping things on track and injecting her own passion for diversity into proceedings. Apart from revealing the award winners, there were three keynote speakers sprinkled throughout the evening.

First up for an opening keynote was Philip Dalidakis (Minister for Trade & Investment, Innovation & the Digital Economy and Small Business in Victoria) and he announced the winner of the Minister’s Prize in the shape of the Grad Girls Program 2018 (VICICT4.Women).

Next up was Georgina McEncroe, founder of the all-female shared ride service, Shebah.  Her background as a comedian made this a very open and entertaining short speech!

The last keynote was a very personal one, from Alan Lachman who shared the story of his daughter losing her sight and this being the inspiration for setting up Insight. The three keynotes were all quite different but each made for a welcome break between award presentations and food courses.

In terms of the all-important awards, the winners were:

It was great to see the “Champion” award going to the RISE programme at the Department of Health & Human Services, so even though ETA didn’t get up, at least an autism-related initiative took the main gong.

This was a well-run event and the venue was impressive, with good service and fine catering for our vegan needs. It was inspiring to see all of the great work going on towards improving diversity in the tech sector, but a little surprising to see something of a lack of diversity amongst the nominations (e.g. there was a very heavy bias towards gender diversity). The breakdown of nominations by the four categories also needs to be reconsidered, as there were very large numbers of nominations in Business and Education (17 and 18 respectively) while only 3 in Media and 4 in Government.

It was a really enjoyable evening and I consider myself fortunate to be working with a bunch of genuinely nice people on this initiative. I’m looking forward to running the third course of ETA in 2019 and maybe, just maybe we’ll have better luck at these awards next year if we’re nominated again!

The World Quality Report 2018/19 – applying critical thinking

The tenth edition of the World Quality Report 2018/19 (WQR) was published recently. This 70-page opus was produced by Capgemini, Micro Focus and Sogeti (a division of Capgemini anyway) and is perhaps the largest survey of its kind.

After digesting this report, I feel it’s important to apply some critical thinking to both the data and the conclusions drawn from it in this report. This is a long blog post as there is a lot to unpack here.

The survey (pages 66-69)

I’m always interested in understanding where the data comes from when I see survey results like these, and this information is openly provided in the report. Understanding the origin of the data is important context, so I read this section first (whereas the report itself presents it at the end).

The survey consisted of 1700 interviews. In terms of the organizations taking part, the survey was restricted to organizations with more than 1,000 employees (40% from those with more than 10,000, 34% from those with 5,000-10,000 and 26% from those with 1,000-5,000), so the results are in fact heavily skewed towards the very largest corporations. The survey had a good spread of countries & regions as well as industry sectors (although the top three sectors accounted for almost half of the responses, viz. financial services, public sector/government, and telecommunications).

The types of people who provided survey responses are more interesting, though – in terms of job title breakdown, they were grouped as follows: CIO (27%), IT director (22%), QA/Testing Manager (20%), VP Applications (18%), CMO/CDO (7%) and CTO/Product Head (6%). With the (possible) exception of the QA/Testing Manager types, most of these people are likely a long way away from the actual day-to-day testing work happening in their organizations.

Let’s look at each section of the report now.

Introduction (pages 4-5)

In his introduction, Brad Little of Capgemini (page 4) says the WQR is “a comprehensive and balanced overview of the key trends shaping quality assurance (QA) and testing today”. In his introduction, Raffi Margaliot of Micro Focus (page 5) says “The results from this tenth edition of the WQR are conclusive: all but one percent of organizations are now using DevOps practices in their organization. Their focus is no longer on whether to move to DevOps; rather how to refine their DevOps approach and continuously improve.” Perhaps coincidentally, it’s worth noting that Micro Focus offers solutions in the DevOps space and, as you’ll see later in my post, the “conclusive” part of this statement is highly questionable.

Executive Summary (pages 6-9)

On the role of AI in QA and testing, “AI will enable enterprises to transform testing into a fully self-generating, self-running and self-adapting activity”, really? And why would that be seen as a good thing?

On agile: “Our survey also reveals that organizations are customizing Agile and combining it with waterfall to develop hybrid frameworks that are a best fit to their organizational, regulatory, cultural and business requirements” – so not thinking of a move towards agility as a mindset change, but rather a process/framework from which to cherrypick the bits that are easy and then calling themselves “agile”.

On automation: “The objective of automation has also changed as there is less focus on shortening of testing times and more on better coverage and effective use of test cases. This, again, is related to the dictum of “Quality at Speed”” – what about the almost complete uptake of DevOps and agile you just mentioned? Aren’t these dependent on fast feedback loops? I’d argue that the focus of automation has changed, but largely in support of CI/CD pipelines where fast feedback on recent changes before deployment is key. “Moving forward, organizations will need to move toward higher levels of end-to-end testing automation”, why?

The cost of testing is also covered in this executive summary; I’ll talk about that in detail later, but for now enjoy the statistic that QA and testing account for 26% of the IT budget (the same number as in the previous year’s report).

WQR findings (pages 10-11)

The findings kick off with the following revelation: “Expecting QA and testing to directly contribute to “ensuring end-user satisfaction” is not an obvious or intuitive expectation. However, this year, it came out as the top objective of QA and testing strategy.” I’m not sure why this is so surprising to the authors of this report, but I’m intrigued by having this specifically as an objective of a test strategy.

AI is a big focus of this report and this big claim is made here: “The convergence of AI, ML, and analytics and their use in carrying out smarter automation will be the biggest disruptive force which will transform QA and testing over the next two to three years.” It will be interesting to see whether such significant disruption really does occur in such a short timeframe; I don’t see a great deal of evidence to support this claim from my part of the testing world, but I acknowledge that others may have different views and their organizations might actually be more active in these areas than I’d expect: “57% of our respondents said they had AI projects for quality assurance, already in place or planned for the next 12 months.” (though the “or planned” part of that survey question leaves a lot of wriggle room).

On the topic of automation: “These challenges around automation, test data, and environments, create a situation where organizations are unable to keep pace with the volume and frequency of testing required. Essentially, they slow down testing, thus defeating one of the main objectives of adopting frameworks such as agile and DevOps. This also came through in our survey results, when 43% of respondents said that “too slow testing process” was a challenge when it came to developing applications today.” It’s interesting that agile and DevOps are referred to as “frameworks” and that this also seems to imply that faster testing is one of the main objectives of agile and DevOps.

Key recommendations (pages 12-13)

The authors make five key recommendations out of the mountain of survey data underlying this WQR, viz.

  • Increase the level of basic and smart test automation but do so in a smart, phased manner
  • Implement a non-siloed approach for test environment and data provisioning
  • Build quality engineering skills beyond SDETs
  • Improve tracking to optimize spends
  • Develop a testing approach for AI solutions now

The first recommendation around automation is based on this conclusion: “We believe that automation is the biggest bottleneck holding back the evolution of QA and testing today. This is due, in part, to automation’s key role as an enabler of successful agile and DevOps transformation. With increasing agile and DevOps adoption (99% according to the 2018 survey), the importance of automation for delivering “Quality at Speed” has also risen.” While I agree that automation has an important role to play in our more modern development approaches, I’m not convinced that a lack of automation is holding back the “evolution of QA and testing” – I’d see the lack of focus on genuine human testing skills to be a much bigger issue in fact. The 99% DevOps adoption statistic is reeled out again in support of their conclusion, see the “Trends” section below for more on the dubious grounding of this number.

When it comes to test environments, the survey responses indicate that a lot of organizations have a lot of issues with generating and maintaining appropriate test data, something that’s been a common thread in the industry for years. The recommendation here is to centralize test data and environment provisioning and “move towards “smart” test data and test environment management, i.e. the creation of self-provisioning, self-monitoring and self-healing environments”.

In terms of skillsets for those in QA and testing, the “first priority is to attract/reskill towards agile test specialists who have functional automation skills and domain testing skills. We would recommend that automation be a must-have skill for everyone in the QA function today.” This topic is pretty hot in our industry right now, and recommendations like this in reports like this add even more weight to the idea that every tester needs to be able to “do automation”. None of their recommendations in this area deal with upskilling humans in performing excellent testing and this is a huge gap in most organizations. Recruiting recently for an expert exploratory tester here in Melbourne showed just how few highly-skilled testers are around, while there is a vast supply of “traditional”/ISTQB style testers plying their wares in large organizations. I do think that testers need to understand where automation makes sense and where it doesn’t, and can greatly assist those writing automation code to focus on the right places – but that doesn’t mean every tester needs to write automation code in my opinion.

On the subject of tracking spend on testing, the report makes the fairly obvious observation that “The adoption of agile and DevOps in project teams has led to a situation where QA and testing activities are being done by many, including developers as well as specified testing professionals. This makes it tough to accurately track, understand or optimize QA and testing spends.” The recommendation is to “create a detailed and elaborate tracking mechanism” to work out who is spending what time on testing activities so it can be more accurately tracked. Given that the report itself claims that everyone is working in an agile fashion with a whole-team approach to quality, I’m not sure why anyone would want to try to track the spend in this way. Surely the spend of interest is the total spend on developing the product, and trying to split out testing reinforces the old ways of thinking of divisions between development and testing. I’ll talk about cost more in the “Trends” section, but recommending this artificial and arbitrary boundary between testing and everything else required to build, test and produce the product is an anti-recommendation at best and dangerous at worst (in that the percentage spent on QA and testing is always an easy target for “optimization”, and we all know what that means!).

I acknowledge that testing AI solutions is a tricky problem so the recommendation that “every organization needs to start developing a QA and test strategy for their AI applications now” has some merit. I still think the idea that so many organizations are actively working on genuine AI solutions is questionable (and the data in this report is not conclusive, given the nature of the survey responses).

Current Trends in Quality Assurance and Testing (pages 14-43)

Almost half of the WQR is dedicated to discussing current trends in QA and testing and this was by far the most revealing content in the report for me. I’ll break down my analysis in the same way as the report.

Key trends in IT (pages 16-21)

This section discusses the results of survey questions around executive management objectives with QA and testing, before looking specifically into digital transformation & the API economy, internet of things, Cloud, cybersecurity, and blockchain. The conclusion in this section is bang on: “It’s also important to remember that the new IT model is not just about the latest technologies or improved processes. Above all, it is a change in culture, attitude, and mindset, which will also require a change in the traditional ways of working or delivering services. This means that people, processes, and technology will all have to go through a period of change and improvement before we can fully realize the benefits promised by the new technologies and frameworks.” Finally a mention of culture and mindset change – and the challenges that always brings.

Artificial intelligence (pages 22-25)

I just watched a EuroSTAR webinar from Jason Arbon (CEO of test.ai) titled AI Will Soon Test Everything (there’s a lot to unpack in that webinar too, especially around exploratory testing, but that’s a whole other blog post!), so I was interested in the data in the AI section of this report, especially given the prominence of AI in the summary, recommendations and key findings sections before it.

A little over half of the respondents indicated that AI is “in place or planned on quality assurance” and the report comments that a “big challenge lies in identifying possible use cases for AI in testing” (with over half of the respondents citing “identifying where business might actually apply AI” as a challenge). Interestingly, exactly half of the respondents indicated that they think there is no change required in skillset around “test strategy and test design skills” when including AI. I agree with the report here when it states “Clearly, there is enthusiasm for and excitement around AI technologies and solutions, but their actual application in testing is still emerging.”

Test automation (pages 26-30)

This trend is subtitled “The single-biggest enabler of maturity in QA and testing”, interesting! The authors nail their colours to the mast right from the introduction in this section, with “For QA, this means an increased focus on the concept of “Quality at Speed” and its associated promises of avoiding human intervention wherever possible, reducing costs, increasing quality, and achieving better time to market. And the way to achieve each of these goals? Automation.” I don’t think we should be viewing any one thing as offering solutions to so many different issues in more agile software delivery projects – and “avoiding human intervention wherever possible” is not a goal for me either.

The well-worn 99% statistic is presented yet again as a factor driving more automation: “the adoption of agile and DevOps, which seems to have reached a tipping point today, with 99% of our respondents saying they use DevOps for at least some part of their business”. The statistics around DevOps adoption (see next section) don’t suggest a tipping point anytime soon.

A whopping 61% of the respondents suggest application change as an obstacle to automation success: “When asked about the main challenges in achieving their desired level of automation, 61% of respondents said they had difficulties automating as their applications changed with every release. This could be a direct result of the flexibility provided by frameworks like agile and DevOps, which allow organizations to change their requirements or stories frequently. This often leads to too many changes with every release and puts additional pressure on testers as test cases generated earlier or previous automation work no longer remains relevant.” It amazes me that people think application changes in a release are some kind of surprise and even more so that all those pesky changes we make to help and please customers are thought of as “too many changes with every release”! For products under active development, we should expect plenty of change and adapt our practices – not just in relation to automation but throughout our development process – to match that reality.

In discussing the benefits of automation, 60+% of respondents indicated “better test coverage”, “better control and transparency of test activities”, “better reuse of test cases”, “reduction of test cycle time”, “better detection of defects” and “reduction of test costs”. It would be interesting to understand more about what these categories mean and how organizations are measuring such benefits. It would seem likely to me that different organizations would have quite different goals around their automation initiatives, so it’s unlikely those goals would lead to the same set of benefits in all cases.

In summary, the authors say “To be successful, organizations must understand that automation is not only about replacing manual testing and chasing some incremental cost savings. Instead, focus on delivery quality at speed and supporting frameworks such as agile and DevOps to deliver much greater results and take QA and testing to the next level.” I would argue that to be successful, organizations need to realize that automation cannot replace manual testing but can extend and supplement human testing when applied appropriately. Chasing cost savings is a fool’s errand in this case; automation really is just writing (and maintaining) more code, so why would that save money?

The quality assurance organization (pages 31-35)

This is the section of the report that details the claim that “according to this year’s survey, a full 99% of respondents said that they were using DevOps principles in their organization”. This sounds like a pretty impressive statistic, so impressive that it is littered throughout the rest of the WQR. Looking into the actual survey results, though, the story is not quite so impressive:

  • 42% of respondents said “Fewer than 20% of our projects use DevOps principles”
  • 30% of respondents said “20-50% of our projects use DevOps principles”
  • 15% of respondents said “50-70% of our projects use DevOps principles”
  • 9% of respondents said “70-90% of our projects use DevOps principles”
  • Just 3% of respondents said “90-95% of our projects use DevOps principles”

In other words, almost three quarters of the respondents are using DevOps principles for a minority (i.e. fewer than half) of their projects. These underlying statistics are much more instructive than the “banner” 99% claim the authors choose to perpetuate in the WQR. What’s also interesting in these same statistics over time (2015-2018) – which the authors either didn’t spot or chose not to mention – is that they actually suggest a decrease in the use of DevOps! In 2015, for example, some 58% of respondents fell into the “using DevOps principles for 50% or more of their projects” categories.
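
To make the gap between the headline and the detail explicit, here is a minimal sketch of the arithmetic (in Python, purely as illustration; the values are the bucket percentages quoted above and the labels are shortened for readability):

```python
# DevOps adoption buckets as quoted from the WQR survey results above.
# Values are percentages of respondents; labels shortened for readability.
devops_buckets = {
    "fewer than 20% of projects": 42,
    "20-50% of projects": 30,
    "50-70% of projects": 15,
    "70-90% of projects": 9,
    "90-95% of projects": 3,
}

# Summing every bucket reproduces the headline figure repeated through the WQR.
any_use = sum(devops_buckets.values())  # 99

# Respondents applying DevOps to fewer than half of their projects.
minority_use = (devops_buckets["fewer than 20% of projects"]
                + devops_buckets["20-50% of projects"])  # 72

print(f"Any use of DevOps principles: {any_use}%")
print(f"DevOps on fewer than half of projects: {minority_use}%")
```

Summing the buckets reproduces the impressive-sounding 99%, which shows just how little that headline actually tells you about depth of adoption.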

My general impression here (and elsewhere in the report) is that the authors do not clearly distinguish DevOps from agile. This is reinforced by these comments: “As already stated, only one percent of respondents indicated that they were not experimenting with or applying DevOps in any form. According to the survey, the top DevOps processes being followed were “breaking down large efforts into smaller batches of work” (44% of respondents currently using and 38% planning to use), “cloud-based development and test environments” (43% using and 40% planning to use), and the “continuous monitoring of apps in production” (41% using and 40% planning to use).” Isn’t the idea of breaking down work into smaller pieces very much an agile mindset? What does it really have to do with DevOps?

Looking at the section on challenges in applying testing to agile development, one of the conclusions is “As the skillset is moving from functional to SDET (Software Development Engineer in Test), organizations are faced with challenges of reskilling the existing testing teams and attracting the right testing talent to build future-ready testing teams.” The data shows that 41% of respondents cite “Lack of a good testing approach that fits with the agile development method” as a challenge. This challenge is not solved by making everyone an SDET; in fact, probably quite the opposite. The role of expert human testing is again not discussed at all here, even though the data clearly supports a view that skilled human testers are critical in solving many of the challenges seen when testing in agile environments.

Test data and environments management (pages 36-39)

I didn’t find anything particularly surprising or controversial in this part of the report. The authors identified the increased use of containerized, virtualized and Cloud test environments and also noted the impact that standards & regulations such as GDPR and IFRS9 are having on test data management.

[Note that there is some errant data from a different section of the report mistakenly placed in this part of the report.]

Efficiency and cost containment in quality assurance (pages 40-43)

It’s a case of leaving the best (or is that worst?) to last in terms of my objections to the findings in this report, viz. when talking about “efficiency and cost containment”. The authors’ opening gambit relates to the proportion of IT budget spent on testing: “According to this year’s survey, the proportion of the IT budget spent on QA and testing is pegged at 26%. This is the same as last year, though considerably below the highs of 31% in 2016 and 35% in 2015. Before that, QA and testing budgets accounted for 26% in 2014 and 23% in 2013.” I’ll leave it to you as the reader to ponder why these statistics are the way they are and how you would even measure this percentage in your own organization. The authors surmise that “as organizations have gained experience and maturity in handling these new frameworks and technologies [e.g. agile, DevOps, cloud test environments, etc.], they have started reaping the benefits of these changes. A number of testing activities have gained efficiency and this has driven down costs. This is reflected in the fall in the proportion of IT budgets devoted to testing that we have seen over the last two years.”

Interestingly, the authors note that “According to our 2018 survey, when respondents were asked whether they had seen an increase in the proportional effort and cost spending on QA and testing over the last four to five years, a whopping 72% said “yes”. This directly contradicts the overall budgetary trends.” The authors then dig “deep” into the reasons for this confusing (or contradictory) data.

Firstly, they make the reasonable argument that IT budgets have generally been increasing due to the take-up of new technologies, digitalization and so on, so the absolute test effort and budget has increased while staying relatively flat as a proportion of those larger overall budgets. The second argument is around divvying up test effort and spend across back-end legacy systems compared to front-end systems, with the back-end benefiting greatly in terms of cost from increased automation, while the drive for speed at the front-end means more spend there to keep up with changing business requirements.
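
That first argument is easier to see with some numbers; the figures in this little sketch are invented purely for illustration and are not taken from the report:

```python
# Illustrative only: hypothetical budget figures, not taken from the WQR.
testing_share = 0.26        # proportion of the IT budget spent on QA and testing

it_budget_before = 100      # arbitrary units, before digitalization-driven growth
it_budget_after = 140       # the overall IT budget grows

spend_before = it_budget_before * testing_share   # 26.0
spend_after = it_budget_after * testing_share     # 36.4

print(f"Testing spend grows from {spend_before:.1f} to {spend_after:.1f} "
      f"while remaining {testing_share:.0%} of the IT budget")
```

In other words, respondents can honestly report spending more on testing year on year while the proportional figure the WQR headlines stays flat.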

These first two arguments are somewhat reasonable. It’s the third observation, however, that to me makes a mockery of the whole idea of measuring the cost of testing and QA as a percentage of total IT budget: “The third and final factor is probably the biggest of them all. This is the difficulty in accurately capturing testing spends due to the coming of age of agile and DevOps. Before agile and DevOps, testing often operated as a separate profit or cost center, with operations centralized in a Test Center of Excellence (TCoE). This made it easier to measure spends and track how much was being spent on what. However, agile and DevOps have made this kind of tracking difficult since testing is now integrated into the project or the Scrum teams. This makes it extremely difficult to track exactly how much time is spent on testing activities, especially when the now-prevalent Software Development Engineer in Test (SDET) profile (who engages in development, analysis, and testing activities). It is entirely possible, for instance, that the efforts of these SDETs, or of entire Scrum teams, is being tagged to the development or the testing budget or allocated on the basis of a thumb-rule percentage between these two budgets.” At least the authors have acknowledged here that trying to measure this is a largely pointless exercise in agile teams and, since they keep claiming that almost everyone is doing agile, why even persist in trying to measure this? They essentially say here that it’s almost impossible to measure or, when asked to do so, people just make it up (“a thumb-rule percentage”).

Another interesting bunch of statistics comes next, in the breakdown of QA and testing spend between hardware/infrastructure, tools (licenses), and human resources, which come in at 44%, 31% and 26% respectively this year. I’m simply amazed that the human element is the lowest proportion of the total cost, and this suggests to me that a lot of effort is being misdirected away from human interaction with the product and towards automation (much of which is probably of questionable value).

The claim that “expert opinion holds that the increased number of test cycles brought about by the shift to agile and DevOps is perhaps one of the biggest reasons for a rise in testing effort and expenditures” is not supported by reference to who these experts are (and the survey responses do not support this conclusion directly in my opinion).

The summary for this section makes some recommendations and this is the most depressing part of all: “To gain the maximum benefit from their QA and testing spends, we would recommend that organizations focus on three key areas over the next couple of years. First, work on creating successful use cases (in testing) for new technologies such as AI, machine learning (ML), or robotics process automation. Second, create detailed and elaborate tracking mechanisms to understand exactly how much cost and effort is going into testing in Agile or DevOps teams. It would be impossible to reduce costs without understanding clearly how much is being spent and where. Finally, there is one step that organizations can immediately take to improve testing efficiencies, that is the use of end-to-end automation in testing. While investments are being made, they are nowhere near the optimal levels. All three of these steps will go a long way to improving testing efficiency and the quality of their IT systems in the long term.”

The last thing a truly agile team should be doing is layering on pointless bureaucracy such as “detailed and elaborate tracking mechanisms” to track testing time and effort. At least the authors make it clear in their recommendations that the point here is to “reduce costs”, but this is probably in opposition to the main business driver for QA and testing, which was “contribute to end-user satisfaction”. Not every organization will see the need to cut costs in QA and testing if the end products are attractive to customers and making good revenues. Also, suggesting that everyone should add more end-to-end test automation goes against almost every recently published practitioner article on this topic in my part of the testing community.

Sector Analysis (pages 44-65)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors and discuss particular trends and challenges within each. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy and utilities
  • Financial services
  • Healthcare and life sciences
  • High-Tech
  • Government and public sector
  • Telecom, media and entertainment

Wrapping up

A huge amount of data collection obviously goes into producing a report of this nature and the high-profile publishers will no doubt mean its recommendations get plenty of publicity. As I’ve tried to detail above, some of the conclusions drawn from the data don’t make sense to me and the skewed nature of the sample (opinions from CIO/CTO types in the very largest corporations) means most of the recommendations don’t resonate with the testing industry as I’m familiar with it.

A few other points I’d like to draw attention to:

  • The report always says “QA and testing” together, with neither term being defined anywhere, so it’s not clear what they’re talking about or whether they correctly view them as separate concepts. I wonder whether the interview questions were also couched in this language and, if so, how that might have affected the answers?
  • Similarly, the report usually says “agile and DevOps” together, as though they also necessarily go together. For me, they’re only somewhat related and I know of plenty of organizations practising agile while not taking on board DevOps yet. It is also worrying that both agile and DevOps in this report are most often referred to as “frameworks”, rather than focusing on them more as mindsets.
  • There is almost no talk of “testing” (as I understand it) in the report, while there is a heavy focus on agile, DevOps, automation, AI, ML, etc. I would have liked to see some deep questions around testing practices to learn more about what’s going on in terms of human testing in these large organizations.

The report claims to be “a comprehensive and balanced overview of the key trends shaping quality assurance (QA) and testing today”, but I don’t see the balance I’d like, and the lack of questioning around actual testing practices means the report is not comprehensive for me either. The target audience for these corporate-type reports will probably take on board some of the recommendations from this high-profile report and I imagine in some cases this will simply perpetuate some poor decision-making in and around testing in large organizations.