I blogged recently about the Capgemini/Micro Focus/Sogeti “World Quality Report 2018/19” (WQR) and, shortly afterwards, another report from a worldwide survey around software testing appeared, this time in the shape of the ISTQB Worldwide Software Testing Practices Report 2017-18. The publication of this report felt like another opportunity to review the findings and conclusions, and to compare and contrast them with the WQR.
The survey size is stated as “more than 2000”, so it’s similar in reach to the WQR, but it’s good to see that the responses to the ISTQB survey are much more heavily weighted towards testers than managers/executives (43% of respondents identified themselves as a “Tester”, and 77% were “technical” vs. 23% “managers”). Organizational size information is not provided in this report, whereas the WQR data showed it was heavily skewed towards the largest companies.
The ISTQB report comes in at a light forty pages compared to the WQR’s seventy, in part due to its different presentation style. This report mainly consists of data with some commentary on it, rather than the big treatise-style conclusions of the WQR.
Main findings (pages 4-5)
The report’s “main findings” are listed as:
- More than 2000 people from 92 countries contributed to the … report. In this year’s report, respondents’ geographic distribution is quite well balanced.
- The outcome of the 2017-2018 report is mostly in parallel with the results of the one done in 2015-16.
- Test analyst, test manager and technical test analyst titles are the top three titles used in a typical tester’s career path.
- Main improvement areas in software testing are test automation, knowledge about test processes, and communication between development and testing.
- Top five test design techniques utilized by software testing teams are use case testing, exploratory testing, boundary value analysis, checklist based, and error guessing.
- New technologies or subjects that are expected to affect software testing in near future are security, artificial intelligence, and big data.
- Trending topics for software testing profession in near future will be test automation, agile testing, and security testing.
- Non-testing skills expected from a typical tester are soft skills, business/domain knowledge, and business analysis skills.
The first “finding” is not really a finding; it’s data about the survey and its respondents. There is nothing particularly surprising in the other findings. Finding number 5 is interesting, though, as I wouldn’t expect to see “exploratory testing” considered a test technique alongside the likes of boundary value analysis. For me, exploratory testing is an approach to testing during which we can employ a variety of techniques (such as BVA).
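To make the distinction concrete, here’s a minimal sketch of what a technique like BVA actually produces: test cases derived mechanically from the edges of a valid range. The validate_age function and its 18-65 range are hypothetical, invented purely for illustration.

```python
import pytest

def validate_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# BVA derives its cases mechanically from the boundaries of the valid
# range: the values just below, on, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # on the upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```

An exploratory session, by contrast, has no such mechanical derivation: the tester designs and executes tests in the moment, and might well use BVA-style reasoning as one tool among many. That’s why treating the two as peers on the same list seems odd to me.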
Background of respondents (pages 8-9)
The geographical distribution of responses is probably indicative of ISTQB strongholds, with 33% from Asia, 27% from North America and 27% from Europe.
More than half of the respondents come from “Information Technology” organizations, another difference from the WQR that indicates a different target demographic. Three-quarters of the responses here come from just four organization types, viz. IT, Financial Services, Healthcare & Medical, and Telecom, Media & Entertainment.
Organizational and economic aspects of testing (pages 11-14)
In answering “Who is responsible for software testing in your company?”, a huge 79% said “In-house test team”, but this isn’t the whole story: just 30% said “Only in-house test team”, so most are also using some supplemental source of testers (e.g. 19% said “off-shore test team”). When it comes to improving testers’ competency, 50% responded with “Certification of competencies”, again probably due to the ISTQB slant of the survey’s target audience. It’s good to see a hefty 27% of respondents saying “Participation at conferences”, though.
The classic “What percent of a typical IT/R&D project budget is allocated to software testing?” question comes next. This continues to baffle me as a meaningful question, especially in agile environments where what constitutes testing as opposed to development is not easy to determine. The most common answer here (41% of responses) was “11-25%”, while only 8.5% said “>40%”. You might recall that the WQR finding in this area was 26%, so this report is broadly consistent. But it still doesn’t make sense as something to measure, at least not in the agile context of my organization.
When asked about their expectations of testing budget for the year ahead, 61% indicated some growth, 31% expected it to be stable and just 8% expected a decline.
Processes (pages 15-23)
As you’d probably expect, the Processes section is the chunkiest of the whole report.
It kicks off by asking “What are the main objectives of your testing activities?” with the top three responses being “To detect bugs”, “To show the system is working properly” and “To gain confidence”. While finding bugs is an important part of our job as testers, it is but one part of the job and arguably not the most important one. The idea that testing can show the system “is working properly” concerns me, as does the idea that we can give other people confidence by testing. What we need to focus on is testing to reveal information about the product and communicating that information in ways that help others to make decisions about whether we have the product they want and whether the risks to its value that we identify are acceptable or not. A worrying 15% of responses to this question were “To have zero defects”.
A set of 17 testing types and 18 testing topics form the basis for the next question, “Which of the below testing types and/or topics are important for your organization?” Functional testing easily won the testing types competition (at 83%) while user acceptance testing took the gong in the topics race (at 66%). Breaking testing down into “types” like this is a feature of the ISTQB syllabus, but I’m not convinced it has much relevance in day-to-day testing work, though I appreciate other contexts might see this differently. 53% of respondents said exploratory testing was an important topic, but later responses (see “Testing techniques and levels”) make me uneasy about what people are thinking of as ET here.
When it comes to improvement, 64% of respondents said “Test automation” was the main improvement area in their testing activities. I’m not sure whether the question was asking which areas they see as having improved the most or the areas that still have the most room for improvement, but either way, it’s not surprising to see automation heading this list.
The final question in this section asks “What are the top testing challenges in your agile projects?”, with “Test automation”, “Documentation” and “Collaboration” heading the answers. The report suggests: “The root cause behind these challenges may be continuously evolving nature of software in Agile projects, cultural challenges/resistance to Agile ways of working”. While these are possible causes, another is the mistaken application of “traditional” approaches to software testing (as still very much highlighted by the ISTQB syllabus) in agile environments.
Skills & career paths (pages 24-31)
The second largest portion of the report kicks off by looking at the career path of testers in the surveyed organizations. The most commonly reported career path is “Tester -> Test Analyst”, closely followed by “Tester -> Test Manager”. I don’t find titles like those used here very relevant or informative; they mean quite different things in different organizations, so this data is of questionable value. Similarly, the next question – “What could be the next level in the career path for a test manager?” (with the top response being a dead heat between “Test Department Director” and “Project Manager”) – doesn’t really tell me very much.
More interesting are the results of the next question, “Which testing skills do you expect from testers?” with the answers: Test Execution (70%), Bug Reporting (68%), Test Design (67%), Test Analysis (67%), Test Automation (62%), Test Planning (60%), Test Strategy (52%), Test Implementation (50%), Test Monitoring (38%), Bug Advocacy (29%) and Other (2%). This indicates, as the report itself concludes, that today’s tester is expected to have a broad range of skills – testing is no longer about “running tests and reporting bugs”.
The last two questions in this section are around “non-testing skills” expected of testers, firstly in an agile context and then in a non-agile context. The answers are surprisingly similar, with “Soft skills”, “Business/domain knowledge” and “Business Analysis” forming the top three in both cases (albeit with the second two skills reversed in order). It troubles me to think in terms of “non-testing skills” when we really should be encouraging testers to skill up in the areas that add most value to their teams, in whatever context that happens to be. In drawing distinctions between what is and isn’t a testing skill, I think we diminish the incredibly varied skills that a great tester can bring to a team.
Tools & automation (pages 32-33)
On tool usage, the majority of respondents indicated use of defect tracking, test automation, test execution, test management, and performance testing tools. These are unsurprising as raw statistics, but it would be nice to know how those tools are being used to improve outcomes in the respondents’ environments.
The other question in this section is “What is the percentage of automated test cases you use with respect to your overall test cases?” Maybe you can hear my sighs. How anyone can honestly answer this question is beyond me, but anyway: 19% of respondents said more than 50%, while close to half of them said less than 10%. The report makes the mistake of interpreting these numbers as coverage percentages, when that is not what the question asked: “Almost half of respondents that implemented automated tests reported that their coverage is up to 20%”. The question itself is meaningless and reinforces the common misconceptions that all tests are equal and that you can compare automated tests to “other” tests in a meaningful way.
Testing techniques & levels (pages 34-35)
It’s interesting to see the list of “test techniques” on offer in answering “Which test techniques are utilized by your testing team?”. The top five responses were Use Case Testing (73%), Exploratory Testing (67.2%), Boundary Value Analysis (52.3%), Checklist-Based (49.7%) and Error Guessing (36%). I’m assuming respondents answered here in accordance with the definitions of these techniques in the ISTQB syllabus. I find it almost impossible to believe that two-thirds of the sample are really doing what those of us in the context-driven testing world would recognize as exploratory testing. The list of techniques doesn’t contain comparable things for me anyway; again, I see ET as an approach rather than a technique comparable to boundary value analysis, equivalence partitioning, decision tables, etc.
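What makes techniques like these comparable to each other (and unlike ET) is that each derives its cases mechanically from a model. As a second minimal sketch, here’s a decision table reduced to executable checks; the discount rule and its figures are invented for illustration, not taken from the report.

```python
import pytest

def discount_percent(is_member: bool, order_total: float) -> int:
    """Hypothetical system under test: members get 10% off, and any
    order over 100 gets a further 5% off."""
    percent = 10 if is_member else 0
    if order_total > 100:
        percent += 5
    return percent

# Each column of the decision table (every combination of the two
# conditions) becomes one test case with an expected action.
@pytest.mark.parametrize("is_member, order_total, expected_percent", [
    (False,  50,  0),  # non-member, small order
    (False, 150,  5),  # non-member, large order
    (True,   50, 10),  # member, small order
    (True,  150, 15),  # member, large order
])
def test_discount_decision_table(is_member, order_total, expected_percent):
    assert discount_percent(is_member, order_total) == expected_percent
```

Exploratory testing doesn’t yield a derivation like this; it’s a way of working within which any of these techniques might be applied, which is why listing it alongside them muddies the survey data for me.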
When it comes to test “levels”, system and integration testing are indicated as consuming most of the testing budget, unsurprisingly. It’s not clear where spend on automated testing fits into these levels.
Future of testing (pages 36-39)
In answering “Which new technologies or subjects will be important to the software testing industry in the following 5 years?”, around half of the respondents said Security, Artificial Intelligence, Big Data and Cloud. Answering “What will be the most trending topic for software testing profession in near future”, the top responses were Test Automation, Agile Testing and Security Testing. This second question doesn’t seem very useful: what does “most trending topic” really mean? The two questions in this section of the survey were unlikely to result in revelations – and they didn’t.
Wrapping up
With less wordy conclusion-drawing in the ISTQB report than in the World Quality Report, there is more room for the reader to look at the data and form their own opinions of what it is telling them. For me, the questions and possible answers generally don’t tell me a great deal about what testers are really doing, what challenges they are facing, or how we grow both testers and testing in the future.