Attending the pre-TestBash Sydney Testers meetup

Arriving in Sydney the day before our presentation at the Ministry of Testing’s TestBash Australia 2018 conference allowed me (along with Paul Seaman) to attend the pre-TestBash meetup organized by the well-known Sydney Testers group.

The meetup was held in the offices of Gumtree, up on the 22nd floor of the tower at 1 York Street in the CBD. The most striking feature on entering their office was the simply stunning view it affords their lucky employees of the famous Sydney Harbour Bridge. The other noticeable thing about this recently renovated space is that it has been furnished with items sourced from the Gumtree platform itself, so no cookie-cutter corporate office furnishings here!

View of the Sydney Harbour Bridge from the Gumtree office

It was good to see a decent crowd of about thirty people enjoying the free food and drinks before the “main event”, viz. Trish Koo with her short presentation on “The Future of Testing”. She covered some interesting predictions in her talk, including:

  • Exploratory Testing will be really weird
  • End to end testing will become meaningless
  • Black box testing will be cool again
  • Testers may be the only ones who can stop the robot apocalypse

This was at least a very different treatment compared to the many similarly-named talks out there. Her hand-drawn slides were another point of difference and she certainly got some interesting reactions from the audience! The Q&A afterwards was engaging, but still left ample time for us all to mingle before we formally overstayed our welcome at Gumtree!

Richard Bradshaw and Trish Koo

Trish says testers may be the only ones who can stop the Robot Apocalypse!

A vision of everyone working together from Trish Koo

It was good to see Richard Bradshaw there representing the Ministry of Testing, as well as TestBash Australia conference organizers David Greenlees and Anne-Marie Charrett. Thanks to Sam Connelly and the Sydney Testers crew for putting on a good meetup in the run-up to TestBash; it’s always good to mingle with the local testing community before a major event in another city.

(First three photos above from Michele Playfair, the last one from Paul Maxwell-Walters.)

ER of attending the TechDiversity Awards 2018 (Melbourne)

I had the pleasure of attending the TechDiversity Awards in Melbourne on 27th September. I was there as part of the EPIC Assist contingent as a nominee for an award in the Education category for the EPIC TestAbility Academy (ETA), the software testing programme for young adults with autism delivered by Paul Seaman and me. (You can view our nomination here.)

The venues for the two parts of the event were both within a renovated wharf shed at Docklands.

The first part of the event took place in the Sumac space and saw all the shortlisted nominees (around 40 different groups) assembled to select the merit award winners who would later battle it out for the top spot in each category (viz. Government, Business, Media, and Education). The Education category had the most entries on the shortlist (18) and just five were selected for merit awards – ETA didn’t make it to the next stage unfortunately. We were still very proud to have been nominated and shortlisted amongst such a great bunch of programmes in the tech diversity space around Education.

Paul & Maria, Michele, Lee & Kylie at the merit awards

Moving on to the Gala Dinner in the massive Peninsula space, we had our own table consisting of (clockwise in the below photo) Kym Vassiliou (EPIC Assist), Lee, Kylie (Lee’s wife), Bill Gamack (CEO of EPIC Assist), Paul Seaman, Maria (Paul’s wife), Michele Playfair and Craig Thompson (EPIC Assist). The event was a packed house with about 400 people sitting down for the dinner.

Our table at the TechDiversity Awards gala dinner

The MC for the evening was Soozey Johnstone and she did a really good job of keeping things on track and injecting her own passion for diversity into proceedings. Apart from revealing the award winners, there were three keynote speakers sprinkled throughout the evening.

First up for an opening keynote was Philip Dalidakis (Minister for Trade & Investment, Innovation & the Digital Economy and Small Business in Victoria) and he announced the winner of the Minister’s Prize in the shape of the Grad Girls Program 2018 (VICICT4.Women).

Next up was Georgina McEncroe, founder of the all-female shared ride service, Shebah.  Her background as a comedian made this a very open and entertaining short speech!

The last keynote was a very personal one, from Alan Lachman who shared the story of his daughter losing her sight and this being the inspiration for setting up Insight. The three keynotes were all quite different but each made for a welcome break between award presentations and food courses.

In terms of the all-important awards, it was great to see the “Champion” award going to the RISE programme at the Department of Health & Human Services. So, even though ETA didn’t get up, at least an autism-related initiative took the main gong.

This was a well-run event and the venue was impressive, with good service and fine catering for our vegan needs. It was inspiring to see all of the great work going on towards improving diversity in the tech sector, but a little surprising to see something of a lack of diversity amongst the nominations (e.g. there was a very heavy bias towards gender diversity). The breakdown of nominations by the four categories also needs to be reconsidered, as there were very large numbers of nominations in Business and Education (17 and 18 respectively) while only 3 in Media and 4 in Government.

It was a really enjoyable evening and I consider myself fortunate to be working with a bunch of genuinely nice people on this initiative. I’m looking forward to running the third course of ETA in 2019 and maybe, just maybe we’ll have better luck at these awards next year if we’re nominated again!

The World Quality Report 2018/19 – applying critical thinking

The tenth edition of the World Quality Report 2018/19 (WQR) was published recently. This 70-page opus was produced by Capgemini, Micro Focus and Sogeti (itself a division of Capgemini) and is perhaps the largest survey of its kind.

After digesting this report, I feel it’s important to apply some critical thinking to both the data and the conclusions drawn from it in this report. This is a long blog post as there is a lot to unpack here.

The survey (pages 66-69)

I’m always interested in understanding where the data comes from when I see survey results like these, and this information is openly provided in the report. Understanding the origin of the data is important context, so I read this section first (whereas the report itself presents it at the end).

The survey consisted of 1,700 interviews. In terms of the organizations taking part, the survey was restricted to organizations with more than 1,000 employees (40% from those with more than 10,000 employees, 34% from those with 5,000-10,000 and 26% from those with 1,000-5,000), so the results are in fact heavily skewed towards the very largest corporations. The survey had a good spread of countries & regions as well as industry sectors (although the top three sectors accounted for almost half of the responses, viz. financial services, public sector/government, and telecommunications).

The types of people who provided survey responses are more interesting, though. In terms of job title, the respondents were grouped as follows: CIO (27%), IT director (22%), QA/Testing Manager (20%), VP Applications (18%), CMO/CDO (7%) and CTO/Product Head (6%). With the (possible) exception of the QA/Testing Manager types, most of these people are likely a long way away from the actual day-to-day testing work happening in their organizations.

Let’s look at each section of the report now.

Introduction (pages 4-5)

In his introduction, Brad Little of Capgemini (page 4) says the WQR is “a comprehensive and balanced overview of the key trends shaping quality assurance (QA) and testing today”. In his introduction, Raffi Margaliot of Micro Focus (page 5) says “The results from this tenth edition of the WQR are conclusive: all but one percent of organizations are now using DevOps practices in their organization. Their focus is no longer on whether to move to DevOps; rather how to refine their DevOps approach and continuously improve.” Perhaps coincidentally, it’s worth noting that Micro Focus offers solutions in the DevOps space and, as you’ll see later in my post, the “conclusive” part of this statement is highly questionable.

Executive Summary (pages 6-9)

On the role of AI in QA and testing, “AI will enable enterprises to transform testing into a fully self-generating, self-running and self-adapting activity”, really? And why would that be seen as a good thing?

On agile: “Our survey also reveals that organizations are customizing Agile and combining it with waterfall to develop hybrid frameworks that are a best fit to their organizational, regulatory, cultural and business requirements” – so not thinking of a move towards agility as a mindset change, but rather as a process/framework from which to cherry-pick the bits that are easy and then calling themselves “agile”.

On automation: “The objective of automation has also changed as there is less focus on shortening of testing times and more on better coverage and effective use of test cases. This, again, is related to the dictum of “Quality at Speed”” – what about the almost complete uptake of DevOps and agile you just mentioned? Aren’t these dependent on fast feedback loops? I’d argue that the focus of automation has changed, but largely in support of CI/CD pipelines where fast feedback on recent changes before deployment is key. “Moving forward, organizations will need to move toward higher levels of end-to-end testing automation”, why?

The cost of testing is also covered in this executive summary. I’ll talk about that in detail later, but for now enjoy the statistic that QA and testing account for 26% of the IT budget (the same figure as in the previous year’s report).

WQR findings (pages 10-11)

The findings kick off with the following revelation: “Expecting QA and testing to directly contribute to “ensuring end-user satisfaction” is not an obvious or intuitive expectation. However, this year, it came out as the top objective of QA and testing strategy.” I’m not sure why this is so surprising to the authors of this report, but I’m intrigued by having this specifically as an objective of a test strategy.

AI is a big focus of this report and this big claim is made here: “The convergence of AI, ML, and analytics and their use in carrying out smarter automation will be the biggest disruptive force which will transform QA and testing over the next two to three years.” It will be interesting to see whether such significant disruption really does occur in such a short timeframe. I don’t see a great deal of evidence to support this claim from my part of the testing world, but I acknowledge that others may have different views and their organizations might actually be more active in these areas than I’d expect: “57% of our respondents said they had AI projects for quality assurance, already in place or planned for the next 12 months.” (though the “or planned” part of that survey question leaves a lot of wriggle room).

On the topic of automation: “These challenges around automation, test data, and environments, create a situation where organizations are unable to keep pace with the volume and frequency of testing required. Essentially, they slow down testing, thus defeating one of the main objectives of adopting frameworks such as agile and DevOps. This also came through in our survey results, when 43% of respondents said that “too slow testing process” was a challenge when it came to developing applications today.” It’s interesting that agile and DevOps are referred to as “frameworks” and that this also seems to imply that faster testing is one of the main objectives of agile and DevOps.

Key recommendations (pages 12-13)

The authors make five key recommendations out of the mountain of survey data underlying this WQR, viz.

  • Increase the level of basic and smart test automation but do so in a smart, phased manner
  • Implement a non-siloed approach for test environment and data provisioning
  • Build quality engineering skills beyond SDETs
  • Improve tracking to optimize spends
  • Develop a testing approach for AI solutions now

The first recommendation around automation is based on this conclusion: “We believe that automation is the biggest bottleneck holding back the evolution of QA and testing today. This is due, in part, to automation’s key role as an enabler of successful agile and DevOps transformation. With increasing agile and DevOps adoption (99% according to the 2018 survey), the importance of automation for delivering “Quality at Speed” has also risen.” While I agree that automation has an important role to play in our more modern development approaches, I’m not convinced that a lack of automation is holding back the “evolution of QA and testing” – I’d see the lack of focus on genuine human testing skills as a much bigger issue, in fact. The 99% DevOps adoption statistic is reeled out again in support of their conclusion; see the “Trends” section below for more on the dubious grounding of this number.

When it comes to test environments, the survey responses indicate that a lot of organizations have a lot of issues with generating and maintaining appropriate test data, something that’s been a common thread in the industry for years. The recommendation here is to centralize test data and environment provisioning and “move towards “smart” test data and test environment management, i.e. the creation of self-provisioning, self-monitoring and self-healing environments”.

In terms of skillsets for those in QA and testing, the “first priority is to attract/reskill towards agile test specialists who have functional automation skills and domain testing skills. We would recommend that automation be a must-have skill for everyone in the QA function today.” This topic is pretty hot in our industry right now, and recommendations like this in a high-profile report add even more weight to the idea that every tester needs to be able to “do automation”. None of their recommendations in this area deal with upskilling humans in performing excellent testing, and this is a huge gap in most organizations. Recruiting recently for an expert exploratory tester here in Melbourne showed just how few highly-skilled testers are around, while there is a vast supply of “traditional”/ISTQB style testers plying their wares in large organizations. I do think that testers need to understand where automation makes sense and where it doesn’t, and can greatly assist those writing automation code to focus on the right places – but that doesn’t mean every tester needs to write automation code in my opinion.

On the subject of tracking spend on testing, the report makes the fairly obvious observation that “The adoption of agile and DevOps in project teams has led to a situation where QA and testing activities are being done by many, including developers as well as specified testing professionals. This makes it tough to accurately track, understand or optimize QA and testing spends.” The recommendation is to “create a detailed and elaborate tracking mechanism” to work out who is spending what time on testing activities so it can be more accurately tracked. Given that the report itself claims that everyone is working in an agile fashion with a whole-team approach to quality, I’m not sure why anyone would want to try to track the spend in this way. Surely the spend of interest is the total spend on developing the product, and trying to split out testing reinforces the old way of thinking in which development and testing are divided. I’ll talk about cost more in the “Trends” section, but recommending this artificial and arbitrary boundary between testing and everything else required to build, test and produce the product is an anti-recommendation at best and dangerous at worst (in that the percentage spent on QA and testing is always an easy target for “optimization”, and we all know what that means!).

I acknowledge that testing AI solutions is a tricky problem so the recommendation that “every organization needs to start developing a QA and test strategy for their AI applications now” has some merit. I still think the idea that so many organizations are actively working on genuine AI solutions is questionable (and the data in this report is not conclusive, given the nature of the survey responses).

Current Trends in Quality Assurance and Testing (pages 14-43)

Almost half of the WQR is dedicated to discussing current trends in QA and testing and this was by far the most revealing content in the report for me. I’ll break down my analysis in the same way as the report.

Key trends in IT (pages 16-21)

This section discusses the results of survey questions around executive management objectives with QA and testing, before looking specifically into digital transformation & the API economy, internet of things, Cloud, cybersecurity, and blockchain. The conclusion in this section is bang on: “It’s also important to remember that the new IT model is not just about the latest technologies or improved processes. Above all, it is a change in culture, attitude, and mindset, which will also require a change in the traditional ways of working or delivering services. This means that people, processes, and technology will all have to go through a period of change and improvement before we can fully realize the benefits promised by the new technologies and frameworks.” Finally a mention of culture and mindset change – and the challenges that always brings.

Artificial intelligence (pages 22-25)

I recently watched a EuroSTAR webinar from Jason Arbon (CEO of test.ai) titled AI Will Soon Test Everything (there’s a lot to unpack in that webinar too, especially around exploratory testing, but that’s a whole other blog post!), so I was interested in the data in the AI section of this report, especially given the prominence given to AI in the summary, recommendations and key findings sections before it.

A little over half of the respondents indicated that AI is “in place or planned on quality assurance” and the report comments that a “big challenge lies in identifying possible use cases for AI in testing” (with over half of the respondents citing “identifying where business might actually apply AI” as a challenge). Interestingly, exactly half of the respondents indicated that they think there is no change required in skillset around “test strategy and test design skills” when including AI. I agree with the report here when it states “Clearly, there is enthusiasm for and excitement around AI technologies and solutions, but their actual application in testing is still emerging.”

Test automation (pages 26-30)

This trend is subtitled “The single-biggest enabler of maturity in QA and testing”, interesting! The authors nail their colours to the mast right from the introduction in this section, with “For QA, this means an increased focus on the concept of “Quality at Speed” and its associated promises of avoiding human intervention wherever possible, reducing costs, increasing quality, and achieving better time to market. And the way to achieve each of these goals? Automation.” I don’t think we should be viewing any one thing as offering solutions to so many different issues in more agile software delivery projects – and “avoiding human intervention wherever possible” is not a goal for me either.

The well-worn 99% statistic is presented yet again as a factor driving more automation: “the adoption of agile and DevOps, which seems to have reached a tipping point today, with 99% of our respondents saying they use DevOps for at least some part of their business”. The statistics around DevOps adoption (see next section) don’t suggest a tipping point anytime soon.

A whopping 61% of the respondents suggest application change as an obstacle to automation success: “When asked about the main challenges in achieving their desired level of automation, 61% of respondents said they had difficulties automating as their applications changed with every release. This could be a direct result of the flexibility provided by frameworks like agile and DevOps, which allow organizations to change their requirements or stories frequently. This often leads to too many changes with every release and puts additional pressure on testers as test cases generated earlier or previous automation work no longer remains relevant.” It amazes me that people think application changes in a release are some kind of surprise and even more so that all those pesky changes we make to help and please customers are thought of as “too many changes with every release”! For products under active development, we should expect plenty of change and adapt our practices – not just in relation to automation but throughout our development process – to match that reality.

In discussing the benefits of automation, 60+% of respondents indicated “better test coverage”, “better control and transparency of test activities”, “better reuse of test cases”, “reduction of test cycle time”, “better detection of defects” and “reduction of test costs”. It would be interesting to understand more about what these categories mean and how organizations are measuring such benefits. It would seem likely to me that different organizations would have quite different goals around their automation initiatives, so it’s unlikely those goals would lead to the same set of benefits in all cases.

In summary, the authors say “To be successful, organizations must understand that automation is not only about replacing manual testing and chasing some incremental cost savings. Instead, focus on delivery quality at speed and supporting frameworks such as agile and DevOps to deliver much greater results and take QA and testing to the next level.” I would argue that to be successful, organizations need to realize that automation cannot replace manual testing but can extend and supplement human testing when applied appropriately. Chasing cost savings is a fool’s errand in this case; automation really is just writing (and maintaining) more code, so why would that save money?

The quality assurance organization (pages 31-35)

This is the section of the report that details the claim that “according to this year’s survey, a full 99% of respondents said that they were using DevOps principles in their organization”. This sounds like a pretty impressive statistic, so impressive that it is littered throughout the rest of the WQR. Looking into the actual survey results, though, the story is not quite so impressive:

  • 42% of respondents said “Fewer than 20% of our projects use DevOps principles”
  • 30% of respondents said “20-50% of our projects use DevOps principles”
  • 15% of respondents said “50-70% of our projects use DevOps principles”
  • 9% of respondents said “70-90% of our projects use DevOps principles”
  • Just 3% of respondents said “90-95% of our projects use DevOps principles”

In other words, almost three quarters of the respondents are using DevOps principles in the minority (i.e. less than 50%) of their projects. These underlying statistics are much more instructive than the “banner” 99% claim the authors choose to perpetuate in the WQR. What’s also interesting in these same statistics over time (2015-2018) – which the authors either didn’t spot or chose not to mention – is that they actually suggest a decrease in the use of DevOps! In 2015, for example, some 58% of respondents fell into the “using DevOps principles for 50% or more of their projects” categories.
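
To make the arithmetic behind this explicit, here is a quick sketch in Python (my own tallying of the percentages quoted above, not anything taken from the report itself) showing how the same survey data produces both the headline “99% adoption” figure and the much less impressive reading:

```python
# Re-tally the 2018 WQR survey breakdown quoted above: percentage of respondents
# per "share of projects using DevOps principles" band. Figures are taken from
# this post's summary of the survey, not pulled from the report directly.
breakdown_2018 = {
    "fewer than 20% of projects": 42,
    "20-50% of projects": 30,
    "50-70% of projects": 15,
    "70-90% of projects": 9,
    "90-95% of projects": 3,
}

any_devops = sum(breakdown_2018.values())  # 99 -> the "99% use DevOps" headline
minority_use = (breakdown_2018["fewer than 20% of projects"]
                + breakdown_2018["20-50% of projects"])  # 72
majority_use = any_devops - minority_use   # 27, versus ~58% reported back in 2015

print(f"Use DevOps on at least some projects: {any_devops}%")
print(f"Use DevOps on fewer than half of their projects: {minority_use}%")
print(f"Use DevOps on half or more of their projects: {majority_use}%")
```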

My general impression here (and elsewhere in the report) is that DevOps and agile are not clearly understood as being different by the authors. This is reinforced by these comments: “As already stated, only one percent of respondents indicated that they were not experimenting with or applying DevOps in any form. According to the survey, the top DevOps processes being followed were “breaking down large efforts into smaller batches of work” (44% of respondents currently using and 38% planning to use), “cloud-based development and test environments” (43% using and 40% planning to use), and the “continuous monitoring of apps in production” (41% using and 40% planning to use).” Isn’t the idea of breaking down work into smaller pieces very much an agile mindset? What does it really have to do with DevOps?

Looking at the section on challenges in applying testing to agile developments, one of the conclusions is “As the skillset is moving from functional to SDET (Software Development Engineer in Test), organizations are faced with challenges of reskilling the existing testing teams and attracting the right testing talent to build future-ready testing teams.” The data shows that 41% of respondents cite “Lack of a good testing approach that fits with the agile development method” as a challenge. This challenge is not solved by making everyone an SDET; in fact, probably quite the opposite. The role of expert human testing is again not discussed at all here, even though the data clearly supports a view that skilled human testers are critical in solving many of the challenges seen when testing in agile environments.

Test data and environments management (pages 36-39)

I didn’t find anything particularly surprising or controversial in this part of the report. The authors identified increased use of containerized, virtualized and cloud-based test environments and also noted the impact that standards & regulations such as GDPR and IFRS9 are having on test data management.

[Note that there is some errant data from a different section of the report mistakenly placed in this part of the report.]

Efficiency and cost containment in quality assurance (pages 40-43)

It’s a case of leaving the best (or is that worst?) to last in terms of my objections to the findings in this report, viz. when talking about “efficiency and cost containment”. The authors’ opening gambit relates to the proportion of IT budget spent on testing: “According to this year’s survey, the proportion of the IT budget spent on QA and testing is pegged at 26%. This is the same as last year, though considerably below the highs of 31% in 2016 and 35% in 2015. Before that, QA and testing budgets accounted for 26% in 2014 and 23% in 2013.” I’ll leave it to you as the reader to ponder why these statistics are the way they are and how you would even measure this percentage in your own organization. The authors surmise that “as organizations have gained experience and maturity in handling these new frameworks and technologies [e.g. agile, DevOps, cloud test environments, etc.], they have started reaping the benefits of these changes. A number of testing activities have gained efficiency and this has driven down costs. This is reflected in the fall in the proportion of IT budgets devoted to testing that we have seen over the last two years.”

Interestingly, the authors note that “According to our 2018 survey, when respondents were asked whether they had seen an increase in the proportional effort and cost spending on QA and testing over the last four to five years, a whopping 72% said “yes”. This directly contradicts the overall budgetary trends.” The authors then dig “deep” into the reasons for this confusing (or contradictory) data.

Firstly, they make the reasonable argument that IT budgets have generally been increasing due to the take up of new technologies, digitalization and so on, so the absolute test effort and budget has increased but stayed relatively the same against those increased overall budgets. The second argument is around divvying up test effort and spend across back-end legacy systems compared to front-end systems, with the back-end benefiting greatly in terms of cost from increased automation while the drive for speed at the front-end means more spend there to keep up with changing business requirements.

These first two arguments are somewhat reasonable. It’s the third observation, however, that to me makes a mockery of the whole idea of measuring the cost of testing and QA as a percentage of total IT budget: “The third and final factor is probably the biggest of them all. This is the difficulty in accurately capturing testing spends due to the coming of age of agile and DevOps. Before agile and DevOps, testing often operated as a separate profit or cost center, with operations centralized in a Test Center of Excellence (TCoE). This made it easier to measure spends and track how much was being spent on what. However, agile and DevOps have made this kind of tracking difficult since testing is now integrated into the project or the Scrum teams. This makes it extremely difficult to track exactly how much time is spent on testing activities, especially when the now-prevalent Software Development Engineer in Test (SDET) profile (who engages in development, analysis, and testing activities). It is entirely possible, for instance, that the efforts of these SDETs, or of entire Scrum teams, is being tagged to the development or the testing budget or allocated on the basis of a thumb-rule percentage between these two budgets.” At least the authors have acknowledged here that trying to measure this is a largely pointless exercise in agile teams and, since they keep claiming that almost everyone is doing agile, why even persist in trying to measure this? They essentially say here that it’s almost impossible to measure or, when asked to do so, people just make it up (“a thumb-rule percentage”).

Another interesting bunch of statistics comes next, in the breakdown of QA and testing spend between hardware/infrastructure, tools (licenses), and human resources, with these coming in at 44%, 31% and 26% respectively this year. I’m simply amazed that the human element is the lowest proportion of the total cost; this indicates to me a lot of effort being misdirected away from human interaction with the product and towards automation (much of which is probably of questionable value).

The claim that “expert opinion holds that the increased number of test cycles brought about by the shift to agile and DevOps is perhaps one of the biggest reasons for a rise in testing effort and expenditures” is not supported by reference to who these experts are (and the survey responses do not support this conclusion directly in my opinion).

The summary for this section makes some recommendations and this is the most depressing part of all: “To gain the maximum benefit from their QA and testing spends, we would recommend that organizations focus on three key areas over the next couple of years. First, work on creating successful use cases (in testing) for new technologies such as AI, machine learning (ML), or robotics process automation. Second, create detailed and elaborate tracking mechanisms to understand exactly how much cost and effort is going into testing in Agile or DevOps teams. It would be impossible to reduce costs without understanding clearly how much is being spent and where. Finally, there is one step that organizations can immediately take to improve testing efficiencies, that is the use of end-to-end automation in testing. While investments are being made, they are nowhere near the optimal levels. All three of these steps will go a long way to improving testing efficiency and the quality of their IT systems in the long term.”

The last thing a truly agile team should be doing is layering on pointless bureaucracy such as “detailed and elaborate tracking mechanisms” to track testing time and effort. At least the authors make it clear in their recommendations that the point here is to “reduce costs”, but this is probably in opposition to the main business driver for QA and testing, which was to “contribute to end-user satisfaction”. Not every organization will see the need to cut costs in QA and testing if the end products are attractive to customers and making good revenues. Also, suggesting that everyone should add more end-to-end test automation goes against almost every recent practitioner article published on this topic in my part of the testing community.

Sector Analysis (pages 44-65)

I didn’t find this section of the report as interesting as the trends section. The authors identify eight sectors and discuss particular trends and challenges within each. The sectors are:

  • Automotive
  • Consumer products, retail and distribution
  • Energy and utilities
  • Financial services
  • Healthcare and life sciences
  • High-Tech
  • Government and public sector
  • Telecom, media and entertainment

Wrapping up

A huge amount of data collection obviously goes into producing a report of this nature, and its high-profile publishers will no doubt mean its recommendations get plenty of publicity. As I’ve tried to detail above, some of the conclusions drawn from the data don’t make sense to me, and the skewed nature of the sample (opinions from CIO/CTO types in the very largest corporations) means most of the recommendations don’t resonate with the testing industry as I’m familiar with it.

A few other points I’d like to draw attention to:

  • The report always says “QA and testing” together, with neither term being defined anywhere, so it’s not clear what they’re talking about or whether they correctly view them as separate concepts. I wonder whether the interview questions were also couched in this language and, if so, how that might have affected the answers.
  • Similarly, the report usually says “agile and DevOps” together, as though they also necessarily go together. For me, they’re only somewhat related and I know of plenty of organizations practising agile while not taking on board DevOps yet. It is also worrying that both agile and DevOps in this report are most often referred to as “frameworks”, rather than focusing on them more as mindsets.
  • There is almost no talk of “testing” (as I understand it) in the report, while there is a heavy focus on agile, DevOps, automation, AI, ML, etc. I would have liked to see some deep questions around testing practices to learn more about what’s going on in terms of human testing in these large organizations.

The report claims to be “a comprehensive and balanced overview of the key trends shaping quality assurance (QA) and testing today”, but I don’t see the balance I’d like and the lack of questioning around actual testing practices means the report is not comprehensive for me either. The target audience for these corporate type reports will probably take on board some of the recommendations from this high-profile report and I imagine in some cases this will simply perpetuate some poor decision-making in and around testing in large organizations.

Looking for testing talent – an ER from Melbourne (Australia)

I’ve recently had the opportunity to look for an experienced tester to fill a new position at Quest in Melbourne, viz. a Test Coach/Lead Tester. The main responsibilities included coaching the wider development group here in improving their testing as well as hands-on exploratory testing. I crafted the job description accordingly with a deliberate focus on the key coaching and exploratory testing experience we were looking for in the ideal candidate.

Once the ad went up on Seek, our Talent Acquisition group was inundated with the usual flood of resumes – while it surprised them, it didn’t surprise me, as previous experience suggests a large response to any “testing” category of job ad on sites like Seek. Rather than waste their time and mine reviewing unsuitable applications, we soon whittled the field down to a very small number based on whether the resume even contained the phrase “exploratory testing”.

For the resumes that got through this simple filtering, I was a little surprised by their poor quality given that this was clearly advertised as a senior role with quite specific expectations on skills & experience. So, a few words of advice for testers looking for work in this market when it comes to the resume:

  • Show attention to detail (I view this as kind of important for a tester): this means removing all obvious spelling and grammatical errors – Word and other document creation tools will show you these mistakes quite clearly, so if you didn’t bother to fix them, what does that say about you?
  • Use plain English: so many resumes use fancy words when simple ones would do, using a thesaurus “don’t impress me much”!
  • Tell me what you did:
    • Focus less on talking about project specific detail (and especially the project budget, why is this important?)
    • Focus more on describing what your role was and what you actually did, noting anything that you feel is particularly relevant and important to the job you’re applying for.
  • Keep it short: bearing in mind the advice above, focus on key information and keep the overall resume down to a couple of pages. Always think about the information you absolutely have to communicate in order to best represent yourself and remember that resume length alone is not an indicator of experience, skill or anything else.
  • Tailor the resume to the job and company: this shows you’ve made a small effort during your application, in the same way that a generic resume shows that you haven’t.

We shortlisted a small number of candidates (it wouldn’t be a shortlist otherwise, right?!) for phone interviews, in which they were asked a number of questions designed to gauge their practical experience in the areas of interest, while also attempting to determine their general attitude towards testing. While most of the candidates were good at explaining their current work in the context of their current employer, the more open questions about their opinions on some testing topics were often answered by again referring to the way things are in their current job. It would have been good to hear more genuine personal opinions backed up with their reasoning, but it seemed most were either unable or afraid to offer such opinions in this setting.

The more worthy candidates after the phone interviews were then asked to complete a “take home test” in which their actual testing skills were examined, in particular their ability to perform exploratory testing and document what they did. This set of responses was highly instructive, and any nagging doubts we had about a candidate from their phone interview were generally cleared up by their test answers. It was clear in most cases that “exploratory testing” on a resume was not an indication of practitioner experience of performing structured (e.g. session-based) exploratory testing.

After reviewing these test responses, only a small number of in-person interviews resulted and I am pleased to say that we found an excellent candidate – we’re looking forward to welcoming them to our Melbourne team very soon.

Based on this recent experience in this particular market (i.e. testing in Melbourne), a few recommendations for job seekers:

  • Please heed the CV advice given above!
  • Don’t apply for jobs where you fail to meet many of the asks: job sites like Seek have made applying for jobs too easy, so think of the people on the other end before you do your daily search in a category, select all, and hit Apply.
  • Don’t be afraid to have and express your own opinion: especially when you are explicitly asked for it – and be prepared to back up your opinion with sound reasons based on your unique experience.
  • Stand out from the crowd: being active in the testing community and/or showing signs of continuous learning (attending conferences, meetups, etc. or following some blogs on testing) are easy ways to do this. Wearing your ISTQB certification as a badge of honour does the exact opposite (for me).

(The “Testing in the Pub” podcast has covered the topic of hiring testers and they have some great recommendations too, so check out episodes 50 and 51.)

Time for another podcast

A few months ago, Johan Steyn reached out to me from South Africa (via LinkedIn) to ask if I’d like to be a guest on his Careers in Software Testing podcast. Johan has been very busy interviewing lots of testers around the world for his relatively new podcast and I was happy to accept his invitation.

It took a while for our calendars and timezones to line up, but we eventually got there and recorded the podcast in early September. Johan likes to keep the podcasts to around twenty minutes but we still managed to talk about a few different topics. With the subject of the podcast series in mind, he always asks his guests how they ended up in software testing and it’s been interesting to hear the answers to this question from his numerous guests so far. The common theme is “falling into testing” and my story is no different. For those who don’t know how I ended up in testing, you’ll have to listen to the podcast to hear the story – but it all started when I moved to a different country and interviewed for a job as a technical writer!

Johan’s been a very busy man recording the podcasts, so my contribution won’t be “live” for a while, but I’ll update here when it is. Thanks again to Johan for the invitation and the chance to share my thoughts on software testing.

 

“The Coaching Habit” (Michael Bungay Stanier)

In my job working with teams across various worldwide locations, I am often coaching testers and leaders on how to improve their testing. I also specifically mentor a number of testers in our office in China in a one-on-one setting. I really enjoy this aspect of my work and, in the interests of continuously improving, Michael Bungay Stanier’s best-seller The Coaching Habit seemed like a worthy addition to my library.

There are two big ideas in this book. The first is in the subtitle “Say Less, Ask More & Change the Way You Lead Forever”, namely that as a coach, it’s important to learn to stop jumping in with advice and instead ask more questions. Michael acknowledges that this is not easy as we tend to naturally assume that responding with advice or solutions is what we’re meant to do: “…the seemingly simple behaviour change of giving a little less advice and asking a few more questions is surprisingly difficult”.

The second key idea is that just seven simple questions can help to break out of the cycle of advice giving and instead move to genuine coaching by seeking more from the person being coached and helping them learn for themselves. The bulk of the book (which is a short and easy read) is given over to detailing these seven questions:

  1. What’s on your mind?
  2. And what else?
  3. What’s the real challenge here for you?
  4. What do you want?
  5. How can I help?
  6. If you are saying Yes to this, what are you saying No to?
  7. What was most useful for you?

The first question is a simple conversation starter and invites the person to share what’s actually important to them right now. The second question helps to stop us leaping to offer advice: “…even though we don’t really know what the issue is, or what’s going on for the person, we’re quite sure we’ve got the answer she needs.” Asking “And what else?” is “…often the simplest way to stay lazy and stay curious. It’s a self-management tool to keep your Advice Monster under restraints.” The author goes as far as suggesting that this second question is “the best coaching question in the world” and I immediately realized how effective this one will be in curbing what I hadn’t previously recognized as an inclination to jump in with advice before fully understanding the person’s concerns, context and actual problems. I also love this, erm, advice: “stop offering up advice with a question mark attached” (e.g. “Have you thought of…?”).

The third question – “What’s the real challenge here for you?” – acts as an excellent focusing question, especially if there are many issues/challenges exposed by the previous question. The fourth question – “What do you want?” – works as a clarifying question and I like the suggestion to also offer to share what you want when asking the other person this question.

The fifth question – “How can I help?” (or, more bluntly, “What do you want from me?”) – really cuts to the chase and is a potential saviour of falling back into our default helpful, action mode.

The penultimate question – “If you are saying Yes to this, what are you saying No to?” – works to combat over-commitment. Most of us have said “yes” to additional work, knowing that it’s really over-committing and this is ultimately unsustainable. This very clear question helps to clarify priorities and helps people to only say “yes” to more important tasks, knowing they can ditch some other lower priority work in the process. (I can see this working well in sprint planning sessions too!)

The final question – “What was most useful for you?” – feels like a great way to capture feedback and learning from coaching interactions. (Again, I can see value in this question in more general meeting situations too.)

(Note that almost all of the questions are “What?” questions, deliberately contradicting the advice of “Why?” advocates such as Peter Senge and Simon Sinek.)

I particularly liked that each of the questions is supported by some science and there are also videos available to show how to put them into action.

I really enjoyed reading this short, easily digestible book and it’s packed full of great takeaways. The seven questions are already posted visibly at my workspaces to remind me to utilize them in my ongoing coaching and mentoring activities. I started making use of the lessons from this little book right away, which is a very good indicator of the quality and usefulness of the content.

Serendipity

It was early December 2017 when we found out that an ex-colleague at Quest in Melbourne had passed away. Bruce was a very popular guy during his few years with us as a tester – his flat-top haircut, dapper clothing and brightly coloured socks made him stand out amongst an office full of the usual IT crowd attire! He stood out to me, though, for just being a good bloke – he and his wife, Denise (who also worked at Quest), were incredibly generous to me when I first moved to Australia and started working at Quest, fielding those naive questions from a new arrival with patience and being good friends who just happened to live in the same area in which I’d chosen to settle.

It was testament to Bruce’s reputation as a good bloke that his funeral was a large affair, drawing representation from the various communities he was involved with around cars, dancing, and surf life saving. A few of us current Quest folks attended and we were pleased to find that a bunch of ex-Questers had also made the effort to be there to remember him.

It was good to see some of the old Quest faces again and catch up with our various work and life changes since we’d all last seen each other (in many cases meaning ten-plus years). It was during one of these conversations that I happened to talk about the volunteer work I’d been doing to teach software testing to young adults on the autism spectrum (along with my good mate Paul Seaman). Dennis mentioned that his son, Dom, had a spectrum diagnosis and might be interested in the training, so I sent Dennis some details on the application process shortly after the funeral.

We had completed the first run of the EPIC TestAbility Academy in June 2017 and were actively looking for participants for the second run, so it was a timely opportunity for Dom. I was delighted when EPIC Assist informed us that Dom had applied – and we were very happy to accept him onto the second course starting in March 2018.

We had ten students on this second course, with nine making it to the end. It was a great group and I was disappointed to only be present for four of the twelve sessions due to work travel commitments. But I saw Dom as an engaged student, always contributing to discussions, and always tackling the homework between sessions. (I’ve already blogged about this second run in more detail here.)

Dom receiving his ETA completion certificate

I returned to Australia after the course ended in June and I knew that Paul had been working hard (along with Kym Vassiliou from EPIC Assist) to get some kind of placement going at his workplace, Travelport Locomote. The usual ping-pong between departments and HR burned a lot of time, but eventually it came to pass that Dom is taking up a placement at Travelport Locomote as part of their just-launched “LocoStart” programme, working alongside Paul two days per week.

Out of something so sad, something so wonderful has come about. Dom should be very proud of himself for taking the plunge to be part of the training course and for being such a diligent and engaged student throughout. His dedication and potential have been recognized by Travelport Locomote and I hope this opportunity to engage in a real-world software testing job in a modern IT company is a very positive one, both for him and Travelport Locomote. I know Paul is going to enjoy having Dom as part of his team and is committed to his success.

Finally, another shout-out to Bruce, without whom this opportunity would never have happened for Dom; that good bloke karma just keeps on giving!