Monthly Archives: November 2018

The TiCCA19 conference programme is live!

After a successful 2018 event in the shape of CASTx18, the Association for Software Testing were keen to continue Australian conferences and so Paul Seaman and I took on the job of organizing the 2019 event on the AST’s behalf. We opted to rebrand the conference to avoid confusion with the AST’s well-known CAST event (held annually in North America) and so Testing in Context Conference Australia was born.

It’s been a busy few months in getting to the point where the full line-up for this conference is now live.

Paul and I decided to go for a theme with an Australian bent again, this time “From Little Things, Big Things Grow”. It’s always a milestone in planning a conference when it comes time to open up the call for proposals and we watched the proposals flowing in, with the usual surge towards the CFP closing date of 31st October.

The response to the CFP was excellent, with 95 proposals coming in from all around the globe. We had ideas from first-time presenters and also some from very seasoned campaigners on the testing conference circuit. My thanks go to everyone who took the time and effort to put forward a proposal.

We were joined by Michele Playfair to help us select a programme from the CFP responses. This was an interesting process (as usual), making some hard decisions to build what we considered the best conference programme from what was submitted. With only eight track session slots to fill, we unfortunately couldn’t choose all of the excellent talks we were offered.

The tracks we have chosen are hopefully broad enough in topic to be interesting to many testers. Our keynotes come from Ben Simo (making his first trip and conference appearance in Australia!) and local legend, Lynne Cazaly. Rounding out our programme are three full-day workshops showcasing top Melbourne talent, in the shape of Neil Killick, Scott Miles and Toby Thompson. I’m proud of the programme we have on offer and thank all the speakers who’ve accepted our invitation to help us deliver an awesome event.

The complete TiCCA19 line-up is:

Keynotes (March 1st)

  • Ben Simo with “Is There A Problem Here?”
  • Lynne Cazaly with “Try to see it my way: How developers, technicians, managers and leaders can better understand each other”

Tracks (March 1st)

  • “From Prototype to Product: Building a VR Testing Effort”
  • “Tales of Fail – How I failed a Quality Coach role”
  • “Test Reporting in the Hallway”
  • “The Automation Gum Tree”
  • “Old Dog, New Tricks: How Traditional Testers Can Embrace Code”
  • “The Uncertain Future of Non-Technical Testing”
  • Adam Howard with “Exploratory Testing: LIVE!”
  • “The Little Agile Testing Manifesto”

Workshops (February 28th)

  • Neil Killick with “From “Quality Assurance” to “Quality Champion” – How to be a successful tester in an agile team”
  • Scott Miles with “Leveraging the Power of API Testing”
  • Toby Thompson with “Applied Exploratory Testing”

For more details about TiCCA19, including the full schedule and the chance to benefit from significant discounts during the “Little Ripper” period of registration, visit the TiCCA19 website.

I hope to see you in Melbourne next year!

Another survey on the state of testing worldwide, the “ISTQB Worldwide Software Testing Practices Report 2017-18”

I blogged recently about the Capgemini/Micro Focus/Sogeti “World Quality Report 2018/19” (WQR) and, shortly afterwards, another report from a worldwide survey around software testing appeared, this time in the shape of the ISTQB Worldwide Software Testing Practices Report 2017-18. The publication of this report felt like another opportunity to review the findings and conclusions, as well as comparing and contrasting with the WQR.

The survey size is stated as “more than 2000” so it’s similar in reach to the WQR, but it’s good to see that the responses to the ISTQB survey are much more heavily weighted to testers than managers/executives (with 43% of responses from people identifying themselves as a “Tester” and 77% being “technical” vs. 23% “managers”). Organizational size information is not provided in this report, whereas the WQR data showed it was heavily skewed towards the largest companies.

The ISTQB report comes in at a light forty pages compared to the WQR’s seventy, in part due to the different presentation style here. This report mainly consists of data with some commentary on it, but no big treatise conclusions as was the case in the WQR.

Main findings (pages 4-5)

The report’s “main findings” are listed as:

  1. More than 2000 people from 92 countries contributed to the … report. In this year’s report, respondents’ geographic distribution is quite well balanced.
  2. The outcome of the 2017-2018 report is mostly in parallel with the results of the one done in 2015-16.
  3. Test analyst, test manager and technical test analyst titles are the top three titles used in a typical tester’s career path.
  4. Main improvement areas in software testing are test automation, knowledge about test processes, and communication between development and testing.
  5. Top five test design techniques utilized by software testing teams are use case testing, exploratory testing, boundary value analysis, checklist based, and error guessing.
  6. New technologies or subjects that are expected to affect software testing in near future are security, artificial intelligence, and big data.
  7. Trending topics for software testing profession in near future will be test automation, agile testing, and security testing.
  8. Non-testing skills expected from a typical tester are soft skills, business/domain knowledge, and business analysis skills.

The first “finding” is not really a finding; it’s data about the survey and its respondents. There is nothing particularly surprising in the other findings. Finding number 5 is interesting, though, as I wouldn’t expect to see “exploratory testing” being considered a test technique alongside the likes of boundary value analysis. For me, exploratory testing is an approach to testing during which we can employ a variety of techniques (such as BVA).
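To make the distinction concrete: a technique like boundary value analysis mechanically derives specific test inputs from a specification, whereas exploratory testing is an approach that guides which techniques to apply as learning unfolds. A minimal sketch in Python (the `validate_age` function and its 1–120 valid range are hypothetical, purely for illustration):

```python
def validate_age(age: int) -> bool:
    """Hypothetical rule under test: ages 1..120 inclusive are valid."""
    return 1 <= age <= 120

# Boundary value analysis: derive test inputs at and either side of
# each boundary of the valid partition [1, 120].
bva_cases = {
    0: False,    # just below lower boundary
    1: True,     # lower boundary
    2: True,     # just above lower boundary
    119: True,   # just below upper boundary
    120: True,   # upper boundary
    121: False,  # just above upper boundary
}

for age, expected in bva_cases.items():
    assert validate_age(age) == expected, f"BVA case failed for age={age}"
```

An exploratory session might well use BVA-style inputs like these along the way, but the session is steered by what the tester learns as they go, which is why treating ET as just another item in a list of techniques feels like a category error.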

Background of respondents (pages 8-9)

The geographical distribution of responses is probably indicative of ISTQB strongholds, with 33% from Asia, 27% from North America and 27% from Europe.

More than half of the respondents come from “Information Technology” organizations, so this is again a difference from the WQR and indicates a different target demographic. Three quarters of the responses here are from just four organization types, viz. IT, Financial Services, Healthcare & Medical, and Telecom, Media & Entertainment.

Organizational and economic aspects of testing (pages 11-14)

In answering “Who is responsible for software testing in your company?”, a huge 79% said “In-house test team” but this isn’t the whole story as just 30% said “Only in-house test team”, so most are also using some other supplemental source of testers (e.g. 19% said “off-shore test team”). When it comes to improving testers’ competency, 50% responded with “Certification of competencies” which is again probably due to the ISTQB slant on the targets for the survey. It’s good to see a hefty 27% of respondents saying “Participation at conferences”, though.

The classic “What percent of a typical IT/R&D project budget is allocated to software testing?” question comes next. This continues to baffle me as a meaningful question, especially in agile environments where what constitutes testing as opposed to development is not easy to determine. The most common answer here (41% of responses) was “11-25%” while only 8.5% said “>40%”. You might recall that the WQR finding in this area was 26%, so this report is broadly consistent. But it still doesn’t make sense as something to measure, at least not in the agile context of my organization.

When asked about their expectations of testing budget for the year ahead, 61% indicated some growth, 31% expected it to be stable and just 8% expected a decline.

Processes (pages 15-23)

As you’d probably expect, the Processes section is the chunkiest of the whole report.

It kicks off by asking “What are the main objectives of your testing activities?” with the top three responses being “To detect bugs”, “To show the system is working properly” and “To gain confidence”. While finding bugs is an important part of our job as testers, it is but one part of the job and arguably not the most important one. The idea that testing can show the system “is working properly” concerns me, as does the idea that we can give other people confidence by testing. What we need to focus on is testing to reveal information about the product and communicating that information in ways that help others to make decisions about whether we have the product they want and whether the risks to its value that we identify are acceptable or not. A worrying 15% of responses to this question were “To have zero defects”.

A set of 17 testing types and 18 testing topics form the basis for the next question, “Which of the below testing types and/or topics are important for your organization?” Functional testing easily won the testing types competition (at 83%) while user acceptance testing took the gong in the topics race (at 66%). Thinking of testing in this “types” sort of breakdown is a thing in the ISTQB syllabus but I’m not convinced it has much relevance in day-to-day testing work, though I appreciate other contexts might see this differently. 53% of respondents said exploratory testing was an important topic, but later responses (see “Testing techniques and levels”) make me uneasy about what people are thinking of as ET here.

When it comes to improvement, 64% of respondents said “Test automation” was the main improvement area in their testing activities. I’m not sure whether the question was asking which areas they see as having improved the most or the areas that still have the most room for improvement, but either way, it’s not surprising to see automation heading this list.

The final question in this section asks “What are the top testing challenges in your agile projects?”, with “Test automation”, “Documentation” and “Collaboration” heading the answers. The report suggests: “The root cause behind these challenges may be continuously evolving nature of software in Agile projects, cultural challenges/resistance to Agile ways of working”. While these are possible causes, another is the mistaken application of “traditional” approaches to software testing (as still very much highlighted by the ISTQB syllabus) in agile environments.

Skills & career paths (pages 24-31)

The second largest portion of the report kicks off by looking at the career path of testers in the surveyed organizations. The most commonly reported career path is “Tester -> Test Analyst”, closely followed by “Tester -> Test Manager”. I don’t find titles like those used here very relevant or informative; they mean quite different things in different organizations, so this data is of questionable value. Similarly, the next question – “What could be the next level in the career path for a test manager?” (with the top response being a dead heat between “Test Department Director” and “Project Manager”) – doesn’t really tell me very much.

More interesting are the results of the next question, “Which testing skills do you expect from testers?” with the answers: Test Execution (70%), Bug Reporting (68%), Test Design (67%), Test Analysis (67%), Test Automation (62%), Test Planning (60%), Test Strategy (52%), Test Implementation (50%), Test Monitoring (38%), Bug Advocacy (29%) and Other (2%). This indicates, as the report itself concludes, that today’s tester is expected to have a broad range of skills – testing is no longer about “running tests and reporting bugs”.

The last two questions in this section are around “non-testing skills” expected of testers, firstly in an agile context and then in a non-agile context. The answers are surprisingly similar, with “Soft skills”, “Business/domain knowledge” and “Business Analysis” forming the top three in both cases (albeit with the second two skills reversed in their order). It troubles me to think in terms of “non-testing skills” when we really should be encouraging testers to build skills in the areas that add most value to their teams, in whatever context that happens to be. In drawing distinctions between what is and isn’t a testing skill, I think we diminish the incredibly varied skills that a great tester can bring to a team.

Tools & automation (pages 32-33)

On tool usage, the majority of respondents indicated use of defect tracking, test automation, test execution, test management, and performance testing tools. This is unsurprising as raw statistics, but it would be nice to know how those tools are being used to improve outcomes in the respondents’ environments.

The other question in this section is “What is the percentage of automated test cases you use with respect to your overall test cases?” Maybe you can hear my sighs. How anyone can honestly answer this question is beyond me, but anyway, 19% of respondents said more than 50%, while close to half of them said less than 10%. The report makes the mistake of interpreting these numbers as coverage percentages, when that is not what the question asked: “Almost half of respondents that implemented automated tests reported that their coverage is up to 20%”. The question in itself is meaningless and reinforces the common misconceptions that all tests are equal and that you can compare automated tests to “other” tests in a meaningful way.

Testing techniques & levels (pages 34-35)

It’s interesting to see the list of “test techniques” on offer in answering “Which test techniques are utilized by your testing team?”. The top five responses were Use Case Testing (73%), Exploratory Testing (67.2%), Boundary Value Analysis (52.3%), Checklist-Based (49.7%) and Error Guessing (36%). I’m assuming respondents answered here in accordance with the definition of these techniques from the ISTQB syllabus. I find it almost impossible to believe that two-thirds of the sample are really doing what those of us in the context-driven testing world would recognize as exploratory testing. The list of techniques doesn’t contain comparable things for me anyway; again, I see ET as an approach rather than a technique comparable to boundary value analysis, equivalence partitioning, decision tables, etc.

When it comes to test “levels”, system and integration testing are indicated as consuming most of the testing budget, unsurprisingly. It’s not clear where spend on automated testing fits into these levels.

Future of testing (pages 36-39)

In answering “Which new technologies or subjects will be important to the software testing industry in the following 5 years?”, around half of the respondents said Security, Artificial Intelligence, Big Data and Cloud. Answering “What will be the most trending topic for software testing profession in near future”, the top responses were Test Automation, Agile Testing and Security Testing. This second question doesn’t seem very useful: what does “most trending topic” really mean? The two questions in this section of the survey were unlikely to result in revelations – and they didn’t.

Wrapping up

With less wordy conclusion-drawing in the ISTQB report than in the World Quality Report, there is more room for the reader to look at the data and form their own opinions of what that data is telling them. For me, the questions and possible answers generally don’t tell me a great deal about what testers are really doing, what challenges they are facing, or how we grow both testers and testing in the future.