My experience of working from home full-time during the pandemic

Thanks to Twitter, I spotted that a new testing-related ebook had been published recently, titled Software People Work From Home, in which a number of testers from around the world describe their personal experiences of working from home thanks to coronavirus-induced restrictions.

I really enjoyed reading this (free) ebook and felt inspired to share some of my own experiences based on more than two months of working full-time at home for Quest.

Firstly, some context around my “normal” work situation. Although I am based in our Melbourne city office, I’ve been working from home for three to four days a week for the last five years or so. Part of the reason for this mode of working is due to the long commute from my home to the Melbourne office (around two hours door-to-door, each way) and another factor is the many early morning/late night meetings I’m involved with thanks to my collaboration with Engineering teams across basically every timezone!

This hybrid model has worked well for me and for Quest over the last few years. The reduced requirement to be within “normal” commuting distance of Melbourne means I can live in a beautiful location, right on the beach with peace and fresh air – this has made a huge difference to my lifestyle and well-being. We always make time during the day for a long walk (typically around 5km) and having this as part of my routine continues to be really important to me.

I intended to resume this model of working following my return from some international travel back in mid-March. (I’ve blogged about the experience of travelling during the pandemic separately here and here.) Of course, on returning from the UK towards the end of March, our office had already been closed until further notice thanks to COVID-19 – and so began my immediate transition to working from home full-time.

It’s been an interesting three months in this new mode of working. In many ways, I am very fortunate. Firstly, I’ve kept a full-time job on the same salary as pre-COVID so there are no additional financial pressures resulting from this change. I’ve also had plenty of practice at working from home over the last several years, so the adjustment to full-time at home hasn’t been as significant as for many people. I’m lucky enough to have a dedicated room in our (albeit very small) home to work from (and the incredible water view from there never gets old!) and we are a couple with no kids so it’s pretty straightforward to maintain a quiet environment in which to concentrate on work.

Even adding the extra day or two at home every week compared to my usual routine has revealed some additional benefits of the arrangement. I’m settling into a great circadian rhythm (save for a few early morning meetings) and not commuting has freed up some more time to enjoy relaxing at home. We’re cooking together more often too.

While it’s generally a positive for me to work from home all the time, there are challenges too, the most significant of which for me is avoiding overworking. I spotted a great quote on Twitter recently:

Stop calling it ‘working from home’ and start calling it ‘living at work’

Heather De-Quincey

My boundaries between “work” and “not work” aren’t – and never have been – very strict. I have access to all of the systems folks within Quest might use to contact me pretty much all of the time. My role operates across our entire business unit, which has people in timezones spanning the whole world.

Shortly after lockdowns became part of almost everyone’s life experience, we decided to convert what would have been a large in-person meeting in California into a virtual event. While it was a great success, organizing and running a 70+ person meeting across so many timezones was a huge effort by many people – I worked sixteen-hour days for three days straight during the event, resulting in severe over-tiredness (and unwelcome grumpiness on the home front).

There is always someone looking for something from me, so it can be hard to ignore those requests even when they’re outside what most would think of as “business hours”. The early morning and late night meetings, necessitated particularly by the need to interact with folks in the US, are draining, but I’ve learned to speak up more and refuse meetings before 7am or after 11pm so as to allow for a reasonably consistent sleep pattern.

There are some things I am missing as a result not just of working from home all the time, but also of the general world situation thanks to the pandemic. I was looking forward to attending the CAST conference in Austin in August, but that of course has been cancelled. This is also the first time in recent memory that I haven’t had an overseas trip (or two or three!) in planning, either for work or leisure (or sometimes a combination of both). It feels strange not having these adventures to look forward to anytime soon.

Closer to home, my day or two up in Melbourne affords me a number of opportunities and benefits that I’d taken for granted. I miss my weekly coffee catch-ups with Paul Seaman, although we maintain close virtual contact. I make very good use of the Melbourne Library Service and usually have an interesting book or two on the go, but there’s been no opportunity to borrow books recently. I also miss the chance to have lunch or coffee with ex-colleagues, something I do almost every time I travel into the city. And, of course, the kitchen banter with my Quest colleagues is sadly lacking – and Teams meetings are just not the same!

Overall, I’m enjoying the experience of working from home full-time and the downsides certainly don’t outweigh the positives for both my work output and my lifestyle. With Quest keeping its offices closed until at least September 2020, I’ll get to enjoy it for a while longer.

The Melbourne testing market and spotting “red flags”

One of the more popular job listing sites in Australia is Seek. A search on this site for Melbourne-based jobs under the Information & Communication Technology -> Testing & Quality Assurance classification yielded 70 results as I began writing this post on 29th May 2020.

Looking at these jobs, I’ve broadly categorized them as follows:

  • Automation (32)
    • Automation/SDET
    • Test Automation Engineer (x2)
    • Automation Test Consultant (x2)
    • Senior QA Automation Engineer
    • Test Automation Specialist (x2)
    • Automation Test Analyst (x5)
    • Automation Test Analyst – Tosca
    • Senior Automation Test Analyst
    • QA Engineer (Automation)
    • Automation Tester (x4)
    • Senior Test Automation Engineer/Lead (x2)
    • Java Development Engineer in Test (DevOps)
    • Lead/Sr.Automation Tester
    • Senior Test Automation Specialist
    • Automation Test Engineer
    • Automation Test Engineer (Embedded Software)
    • Applications Testing Automation Engineers (x4)
    • Technical Test Lead – Automation Lead Tosca
  • Management (7)
    • Test Manager
    • Performance Test Manager (x2)
    • Defect Manager (x4)
  • Specialists (14)
    • Performance Tester (x4)
    • Penetration Testing
    • Performance Test Consultant
    • Infrastructure and Security Test Analyst
    • Senior Test Analyst – Infrastructure testing
    • Performance Test Analyst (x2)
    • Performance Engineer (x2)
    • Jmeter Tester
    • Network Test Engineer (hardware-related)
  • “Manual” / Other (17)
    • Test Analyst
    • Test Analyst – Business Intelligence
    • Senior Test Analyst (x2)
    • UAT Tester
    • Mobile QA Engineer
    • Quality Assurance Engineer
    • QA Engineer
    • Graduate Test Analyst
    • Automation Specialists and Defect Managers
    • Junior Quality Assurance Tester Traineeship
    • Senior Software Test / V&V Engineers (Defence)
    • Validation & Verification Lead
    • Integration Testers (x4)

These ads don’t all represent unique jobs, as exactly the same ad is sometimes posted at different times and/or the same job is typically posted by a number of different recruiters (especially for government roles).

The breakdown in terms of the number of opportunities didn’t surprise me. The focus at the moment seems to be on automation, driven by CI/CD investments and DevOps/agile transformations – and, of course, by overestimation of what automation can actually do. Similarly, performance and security-related testing are topics du jour, as represented by a swathe of ads in these areas. Test management doesn’t seem like a good place to be, with very few roles being advertised in recent times – a change that, in my experience, has been heavily driven by agile adoption.

I generally take more interest in the ads focused on human testing to see what companies are looking for in this area. Most of the more “traditional” human testing roles (e.g. in government departments) also now mandate some degree of proficiency in tools commonly associated with “automated testing”. It’s pleasing to see requests for ISTQB certification becoming much less common and I’ll occasionally even spot a reference to “exploratory testing”.

But there are often “red flags” in these ads and I proffer a few examples from this most recent search of the “opportunities” on offer.

First up, a “Senior Test Analyst” role “to take accountability for the testing capabilities within the Microservices team.” The “Technical experience” requirements are listed first and include “Linux/Unix Shell Scripting, Java Programming, Clean Coding, Git, Jenkins2, Gradle, Docker, REST APIs, SQL, Unit Testing (Junit), Component Testing, BDD, Integration Testing” and then finally the “Testing Practices” requirements are presented, viz. “Exploratory Testing, Mind Mapping, Requirements Analysis, Peer Collaboration, Continuous Integration, Continuous Deployment, Stubbing and Mocking”.

There are a few red flags here for me. If this were a true microservice team, then it would have a clear sense of ownership of its microservice and a whole team approach to the testing and quality of that service. I’d be looking for clarification of what “accountability for the testing capabilities” really means in the context of this team. Another issue is the lack of clarity about whether this is a human testing role or a development (automation code) role, or a hybrid, or something else. The fact that the technical (mainly development) skills requirements are listed before the testing ones would immediately lead me to believe that more value is placed on automation than on deep human testing here. While it’s good to see exploratory testing explicitly listed (albeit as a “testing practice”), the other requirements listed around testing are much less convincing as to whether this organization would truly value the contribution of an excellent exploratory tester.

Next up, a “Quality Assurance Engineer” role where the successful applicant (who becomes just a “Quality Engineer” as the ad goes on) will “play a critical role in transforming quality practices… and work to develop a world-class test automation solution. [They will] work as part of a cross functional squad, acting as the QA expert, across new features, enhancements and bug fixes. [They’ll use their] testing experience to advise on automation opportunities and review requirements to ensure a fit for purpose solution in (sic) delivered.”

In describing what the day-to-day work entails for this role, there are some positive signs: “You’ll be passionate about testing and use your experience to identify critical defects. You’ll be naturally curious and explore the latest tools and techniques to continuously improve testing.” But there are also some less positive signs: “You’ll work as a fully engaged member of cross functional squad including Developers, UX/UI Designers and Product Managers to ensure the quality of products. You’ll create, maintain and execute tests (manual or automated) to verify adherence to requirements.” Again, it’s good to see exploratory testing getting a mention, albeit in what comes across as a confused way (especially in light of the “verify adherence to requirements” elsewhere): “You’ll perform risk based exploratory testing.”

In terms of skills, they’re expecting “expertise in developing functional tests that cover 100% of requirements and execute manually or automated (with focus on least maintenance and faster script development time)” and possession of “strong skills on efficient test design”.

There are obviously a few red flags in this ad. It sounds like this organization has latched onto the so-called “Spotify Model” since it mentions “squads” a number of times, even though this model was actually not successfully adopted within Spotify itself. It talks about “ensuring” quality and verifying “adherence to requirements”, while at the same time asking for exploratory testing skills (of the risk-based variety, of course). Covering “100% of requirements” completes a picture of an organization where verification, preferably by machine, is valued much more than human testing.

My final example is a “QA Engineer” role which asks for a background in “both manual and automated testing”. The red flags come really early in this one: “Can you imagine how it would feel to be responsible for ensuring that our key products work well in all conditions? Are you interested in working in a Global Centre of Excellence…?” I’ll choose not to imagine how it would feel to be given an impossible mission.

This lucky candidate “will be responsible for both manual and automated testing as we move towards complete automation.” At least there is no doubt left here about the value of deep human testing as this organization seeks to “automate everything.”

I like the requirement to be “Excellent at finding the problems others miss”, but I’m much less keen to see this followed by “Able to document test cases based on solution requirements provided. Understanding of Developing in an Agile environment”. Advanced-level “ISTBQ” (sic) is then included in their list of “desirable” skills.

I don’t understand why this organization believes this ad would attract an excellent human tester who could leverage exploratory testing to genuinely find “the problems others miss”. They describe themselves as Agile but want the successful candidate to write test cases, all the while trying to get to a point where “manual” testing is no longer required. Based on the ad alone, either this is an organization with a confused outlook on testing or they’re really looking for someone to take on an impossible mission while devaluing their actual testing skills – either way, this doesn’t sound like a great way for a decent tester to spend their working life (but I acknowledge the ad could just be very poorly written and misrepresent the core beliefs of the organization when it comes to testing).

While it’s interesting to review occasionally what organizations are looking for when they’re advertising for testing-related roles, it’s also pretty depressing to see just how little value seems to now be placed on deep human testing skill. The “automate everything” bandwagon seems to roll on and is taking over this market, while opportunities for those genuinely skilled in testing as a craft seem to become fewer and fewer, at least in the Melbourne market represented by published job ads.

As the economic impact of COVID-19 takes its toll, a large number of folks are hitting the IT market at the same time. In just the last couple of weeks, large IT staff reductions have been reported in Melbourne by big tech names such as MYOB and Culture Amp – and more seem likely in the coming months. If there’s a bright side to this situation, it’s that there are no doubt lots of good people coming back into the market so it’s probably a great chance to pick up testing talent for those organisations able to do so. If you’re representing such an organisation and really want to hire skilled human testers, please take the time to construct better tester job ads! Making it sound like you understand the value of testing is likely to attract those really good testers who might have come back into the market thanks to COVID-19.

Remember that a good tester should be a strong critical thinker and will be testing your ad, so catch your own “red flags” before they do!

ER of international travel during the coronavirus pandemic (part 2)

This is the second part of my travelogue/ER about travelling between the UK and Australia during the coronavirus pandemic (the first part can be found here). I’m writing this post on day 5 of our enforced 14-day home quarantine period following our return to Australia.

This second week began on Monday 23rd March and you might recall that we were now based in Aberystwyth on the West coast of Wales. The day started with news from Cathay Pacific about our new return flight option, that being a departure from London on 18th April and only getting us back as far as Sydney as their routes to Melbourne were stopping completely. We suddenly felt like mid-April was a long way away given how fast things had changed in just the previous seven days, so we decided to look for alternatives to get us home earlier. (As it turned out, Hong Kong would soon announce the banning of transit passengers through its airport so these new return flights with Cathay would never happen anyway.)

Looking for flights back to Australia, we spotted some Emirates options via Dubai and booked new flights departing London on 1st April, costing almost AU$3000 for the two of us. This process wasn’t straightforward as these newly-announced flights were selling so fast that the website didn’t respond well to the load and we missed out on flights for several other earlier dates as the transactions failed part-way through. At least 1st April didn’t feel so far away and we felt comfortable waiting it out for a week-or-so in Aber before we could return home. With the flights booked, I refocused on my Quest work for a while before we made the most of the lovely sunny day with a walk along the Prom and South Beach. We bumped into my Mathematics PhD supervisor from all those years ago, Alun Morris, and his wife Mary while walking and it was great to see them. They stopped for a chat (at a distance, of course!) and it was our first interaction with someone we knew since we’d left Australia. Alun looked in great health (it’s that Aber sea air!) and seeing him again was a fillip to our morale. Lunch back at the apartment gave way to more work in the afternoon.

Work was soon interrupted, however, by an announcement from Emirates that they were suspending operations from 25th March (and, as it turned out shortly afterwards, Dubai would be prohibiting transit passengers), rendering our newly booked flights useless.

We were really disappointed and again looked for alternatives, this time coming across flights with Etihad from Manchester via Abu Dhabi to Melbourne leaving the next day. We booked these flights, accruing another AU$5000+ on our credit card, and started to make our plans for leaving Aber (including informing our host that we would now be leaving earlier than expected).

Now thoroughly distracted, we headed out into the late afternoon sunshine and took what we thought would be a last chance to climb Constitution Hill. The stunning views over this pretty town and coastline never fail to impress and we soaked them in, before again watching the starlings doing their thing at sunset over the Pier. Back at the apartment, we cooked up a big meal to use up at least some of the nice organic veggies we’d bought at the Farmers Market. It was shortly after devouring this feed that the news came through that Abu Dhabi (as part of the United Arab Emirates) was also banning transit passengers with almost immediate effect, so our latest flight booking was again rendered useless. Our host came up to see us about our early departure and, luckily for us, she was very understanding about the fact that we would yet again have to change our plans and stay on. We were thoroughly exhausted by the end of the day after all the ups and downs.

View over Aber from Constitution Hill

We woke early on Tuesday morning to read the news that Australia was enforcing a complete travel ban, even on its own citizens (this was an escalation of the previous restrictions on entry) so we resigned ourselves to the fact that we were in for the long haul with our stay in Aber. We considered ourselves fortunate to be in stable accommodation in a familiar place and my ability to continue working meant we were under no financial pressure. Many other stranded Aussies were in far worse situations across the world.

As resignation set in, we tried to reset mentally and I buried myself into work again. It was another sunny and very mild day so we enjoyed a nice walk back out to Tan-y-Bwlch beach (which was completely deserted) before returning for lunch in our apartment. Back into work in the afternoon, a strange Twitter DM arrived from Qantas re: a new flight option to replace our previous Emirates booking. This new flight would be via South Africa, leaving on 27th March. This was very confusing in light of the news we’d heard around border closure in Australia, so we called the Australian High Commission in London in the hope of gaining some clarity. It turned out that the new restrictions were only on Australian citizens leaving Australia and that citizens could still enter if they could find a flight to do so. The poor communication by Scott Morrison (Prime Minister) on this was unhelpful, but we were at least comforted that we could return to Australia if only we could find a way. We took up the option of the new Qantas flight through South Africa and were pleased that they waived the ~AU$3000 change cost on these new flights thanks to my Frequent Flyer status. We reset our plans and expectations yet again around a departure from the UK at the weekend.

A nice walk along the Prom and South beach late in the afternoon and a tasty dinner made up of food from the Parsnipship and Anuna bakery rounded out our day. We called it a night believing our luck had changed and we’d be right to get home fairly soon thanks to these new Qantas flights.

It was a frosty, clear and sunny start to Wednesday and forecast to head right up to 18C, beautiful! An early start secured a good few hours of work in the morning before we headed out for a walk, this time to try and find a modernist-style house we’d noticed from almost every vantage point over Aber. It turned out to be towards the top of Cae Melyn and it was great to see this unusual piece of architecture up close, we could only imagine what the 270-degree views out across Aber must look like from inside the place. A stroll through the lush greenery of Penglais Nature Park was great for our spirits and we walked back to the apartment along the Prom in time for lunch. Returning to work in the afternoon, we soon spotted news that South Africa was heading into lockdown from 26th March – of course! We failed to find any information on the impact of this on transit passengers so sought assistance from the Australian High Commission in South Africa. They were also unsure and suggested we ask our airline. We contacted Qantas and, you’ve guessed it, they suggested we seek government advice. By now we were exasperated and had no confidence that this set of flights would happen either. Desperation was certainly setting in as we searched again for any remaining options for flights from any UK airport to any Australian airport.

The only flights we could find now were with Qatar Airways via Doha. We’d seen their flights before but dismissed them based on long layovers in Doha, scared that regulations might change during the layover and leave us stranded somewhere we really didn’t want to be. We were encouraged, though, by the fact that some Australian relatives had successfully made the trip back with Qatar just the day before so we decided we’d book flights with them, from Birmingham (as the closest and easiest airport to return to from Aber) to Melbourne via Doha. Demand, of course, was really heavy for their flights as they were basically the last option flying into Australia so the flight prices were very high at around AU$5000 each. This presented us with our next problem. We’d been transferring cash over to our credit card as fast as we could, but the timezone difference to Australia meant that this basically took a day each time. We didn’t have enough credit left on our main card for both flights, but could cover one. A call to our bank asking for an emergency credit limit increase fell on deaf ears as it was during the night in Australia with no-one available to authorize such a request. I did have one more credit card in my wallet, unused for years and with enough of a credit limit (from memory) to also cover one ticket, but would this go through? We tried booking one ticket with this card… and it went through successfully! We could then book the other on our usual card. By now, we’d spent close to AU$20,000 on flights in a few days (and no refunds in sight, as Qantas/Emirates and Etihad all want to issue credit notes and not cash refunds), so this really was our last gasp attempt. (The eventual cancellation of our Qantas flights via South Africa was not communicated to us until after we left the UK, by the way.)

With our new flights in place, we needed some fresh air and we headed to the Treehouse to grab some supplies to sustain us on the very long trip home. They were doing a great job of continuing to service the community via their “shout your orders through the door” approach! We came away well armed with enough snacks to keep us going. A final (maybe?!) walk along South beach and the Prom on this lovely clear Spring afternoon was delightful, Aber looked resplendent in the sunshine. Dinner back in the apartment was an exercise in using up what supplies we already had open in order to save waste (we’d already decided to leave most of our haul of vegan organic goodies with the host, as a small token of our appreciation for her help and flexibility). She popped up later in the evening and we said our farewells (maybe?!).

View of the Old College, Prom and Constitution Hill

We had to make an early start on Thursday to pack, tidy the apartment and make the trip to Birmingham. It was with some trepidation that we first checked SMS and email, as well as the latest news updates, to see if Qatar had decided to end transit overnight, but all seemed well.

It was sunny and frosty as we headed out onto the deserted streets of Aber at 7.30am and loaded up our hire car. The drive over Pumlumon Fawr was just stunning with frost-covered paddocks, an abundance of newborn lambs and clear blue skies. The familiar drive back to the Midlands was effortless with so little traffic on the road, so we comfortably covered the distance in under three hours, including a fuel top-up before returning the hire car to Budget at Birmingham airport. The agent at Budget mentioned that her only customers recently were people just like us, returning way too early (we still had sixteen days of our prepaid hire to go) and to the wrong location (we should have returned the car to Brighton), resulting in over three hundred early returns and basically no cars going out.

Wandering down, the terminal showed all the signs of being closed – no cars, no passengers walking around, no signs of life. Even after entering, it was still deadly quiet and our Qatar flight was basically the only sizeable departure of the day. Check-in was easy at about 11am, leaving us in no rush to make our 2pm flight. There wasn’t too much in the way of distraction during our wait, with only WH Smith’s and Boots open in the entire terminal (and nowhere to source even a coffee!)

The flight unsurprisingly left on time and the six-and-a-half-hour journey down to Doha was very comfortable – and we experienced awesome service from an airline voted world’s best in recent times. As we’d booked our flights less than 24 hours before departure, we couldn’t order special (vegan) meals for this first flight. We notified the cabin crew when we boarded and they promised to look into it. We got amazing personal service from Melina, who cobbled together a tray of vegan goodies from other meal trays to keep us going, then later delivered us a delicious vegan meze plate from business class. Impressive attention to detail – we actually felt like she cared about us, which was very much appreciated under the circumstances (and hopefully her employer does something nice for her based on the feedback we’ve given them).

The flight arrived into Doha early at 11.20pm local time – and the airport was packed! It was strange to see so many people – for the first time in a couple of weeks – and attempts at social distancing during the lines for security checks weren’t very successful. Many people were in full hazmat gear from head to toe, we had no protective equipment at all as our attempts to source even a face mask in the UK had failed. On entering the main terminal, we were impressed by the spaciousness and feel of the place, but very surprised again to see all shops and eateries open seemingly as usual here, certainly in stark contrast to Birmingham airport.

As we passed into Friday in the airport, we had around twenty hours to kill before our (hopefully!) last flight from Doha direct to Melbourne. We managed to find a quiet spot with comfortable seating we could fashion into a makeshift bed and tag-teamed short spells of sleep. We had power and internet too, so could pass the time on our laptops, even if most of that time was spent following the latest news updates in the hope that nothing scuppered our plans during the long wait.

Of course, it wasn’t too long before another potential problem arose with the news that all arrivals into Australia would soon be subject to 14-day quarantine in hotels of the government’s choosing (to replace the existing home quarantine scheme). The messaging around this change of policy was again inconsistent and confusing – some reports said this new scheme would be in effect “by midnight on Saturday 28th” while others said it would take effect “from midnight on Saturday 28th”. We were due into Melbourne at about 6pm on Saturday, so this small wording difference could matter a great deal to us. We didn’t get clarity on this point before the time finally rolled around for us to board the flight to Melbourne after our long, long wait.

The flight left Doha on time and landed early into a deserted Melbourne airport at about 5.30pm on Saturday. We still didn’t know what we’d find as we left the plane and headed to immigration. Thankfully, we had a lucky break: ours was one of the last flights whose passengers were allowed to head home to begin their 14-day quarantine period, so we were very thankful for that!

The journey home had been long but we were grateful to finally get back to our little house on the beach to begin our quarantine. The ups and downs of the previous few days had taken their toll and, five days later, we’re still adjusting to the new normal. Looking back on the events of last week while writing this blog, it almost doesn’t feel real and it feels all the more remarkable that we actually made it home at all. Attempts to obtain cash refunds from Qantas and Etihad continue…

There were many issues that made the process of finding a way home more complicated and stressful than it needed to be. Firstly, the information on changes to regulations coming out of government was not clear or well-communicated – the complete closing of the Australian border and the timing of the hotel quarantine scheme were two examples of this. Sourcing precise information in both of these cases was difficult as news outlets and even trusted sources like the High Commissions didn’t have consistent or reliable information at hand when the announcements were made.

Secondly, a number of IT systems had clearly failed to cope with the load, and automated systems hadn’t kept pace with changes along the way. We have numerous examples of such problems, from the Emirates booking system failing part-way through flight bookings to automated “online check-in is now open” emails from Qantas, received just today, about a flight cancelled almost a week ago. The icing on this particular cake, though, has to go to Qantas again, who sent us this SMS about the rescheduling of our South Africa-routed flight after it was cancelled due to the lockdown (asterisks and bold are mine):

We’ve now rebooked you onto flight QF9324 on Fri * Apr from Johannesburg at 10.00 arriving Fictitious Point at 11.00

It feels like we’re actually living at Fictitious Point right now, but we’re home and safe and, so far, feeling healthy. If there is a moral from this story, it’s probably that travelling during a pandemic is not a great idea!

An experience report of international travel during the coronavirus pandemic (part 1)

My next couple of blog posts are not my usual subject matter, but I wanted to tell this story (and, honestly, it feels cathartic right now to do so). These posts are perhaps best described as travelogues, serving as experience reports of what it was like travelling between Australia and the UK during the pandemic. I’m writing this one on day 4 of an enforced 14-day home quarantine period following our return to Australia.

Late last year, we were pleased to learn that a good friend in the UK was getting married in April 2020 and so planned a month-long trip from Australia to attend the wedding, do some travelling around the UK to catch up with friends and family, and take in a few events along the way.

As we looked for good flight options, Cathay Pacific offered a good deal and we were happy to fly with them given good experiences in the past. Since we would be transiting through Hong Kong, I decided to take the opportunity to break the trip there and spend a few days in Quest’s Zhuhai office, as I was long overdue a visit there anyway. For the rest of the trip, I planned to work a day or two a week to minimize the impact of my absence on Quest.

We built an itinerary of about a week in China followed by three weeks in the UK, mainly based in Wales, then finishing up with the wedding and a few relaxing days in Brighton before the long trip home just after Easter.

The first spanner in the works came with the impact of the coronavirus outbreak in China, meaning a visit to the office was too dangerous (and also, as it later turned out, pointless as all our staff would be working remotely during isolation). So we changed our plans slightly to transit directly through Hong Kong, tacking on a few days in London to start the trip to replace the planned time in China.

As our departure date approached, the coronavirus news of course became much worse as the spread continued around the world. The UK seemed to be faring well, though, and Boris was persisting with his “keep calm and carry on”/herd immunity idea so we risk-assessed and decided to still make the trip, arriving in London very early on Monday 16th March.

It was a beautiful sunny & crisp day in Hyde Park as we killed time before checking into our hotel, La Suite West (with its very own vegan restaurant!). Exploring the local area again, Notting Hill was noticeably quiet but most shops, cafes and restaurants were open and it felt pretty much like “business as usual”. We were really tired by mid-afternoon and opted for a very early dinner at the excellent farm-to-table vegan eatery, Farmacy.

Back at the hotel by 5pm, we tuned in to the first of Boris’s daily missives to the people of the UK – and this is when everything really started to change. In this first speech, he announced that pubs and other places of social gathering would be closed – the real start of social distancing in the UK.

We decided to avoid all forms of public transport for the rest of our time in London and so explored locally on foot on Tuesday. The Design Museum, housed in an amazing building in Kensington, was an excellent start and then we took in part of the glorious Victoria & Albert Museum, thinking we could come back to explore some more in the days ahead. There were quite a few people in the museums but their vast size made it feel safe in terms of keeping well apart from each other. We headed to the Brasserie at Cloud Twelve in Notting Hill – a lovely, quiet and relaxing organic vegan eatery – for dinner at about 5pm. We were warmly welcomed with very friendly service – we later found out we were their first customers for the day. We left with a pile of free vegan cakes, walking back through the very quiet local streets to our hotel for the next installment from Boris – this time finding out, among other things, that all the museums would now close – so much for our V&A revisit! If the Melbourne Cup is the race that stops the nation in Australia, Boris’s daily speeches quickly became those that stopped the UK as people tuned in to hear of the latest disruptions coming to their daily lives.

It was another sunny day on Wednesday so we kicked the day off with a long walk in Hyde Park around the Serpentine. There were plenty of people out exercising but respecting the social distancing rules. Heading to Kensington High Street, most of the shops along this usually busy street were closed, but Wholefoods was doing good business – a little too crowded for comfort, though we stocked up on some supplies there while we had the chance. We had a relaxing coffee stop over at Cloud Twelve again (still very quiet and remarkable that it was still open) before walking back to the hotel along Westbourne Grove, which was eerily quiet but for a few food shops staying open (including the excellent Planet Organic, where we gathered more supplies). We noted that Farmacy had signage indicating even more reduced opening hours (closing at 5pm) so we made sure to head back there for a very early dinner at about 4pm, enjoying another excellent meal. It was almost empty and we enjoyed nice conversations with the manager as he sought to work out what to do for the best in terms of staying open or closing up indefinitely. A post-dinner stroll in Hyde Park at dusk revealed many others making the most of the fresh air but the vastness of the park made social distancing very easy.

By now, it was clear day-by-day that more and more facilities and events were being cancelled around us and Boris’s daily updates clearly indicated stronger and stronger restrictions on what could stay open as well as limits on personal movements. All of the events we had planned – a Status Quo tribute band gig in a pub, Francis Rossi speaking in Aberystwyth and the “Only Fools and Horses” musical in the West End – were already cancelled and the postponement of our friend’s wedding, though not officially made yet, seemed likely.

Based on what we were seeing and hearing, we decided to cancel the various touring around the country we had planned to do and instead extend our stay in my old University town of Aberystwyth in Wales until our scheduled return date (14th April). With lower population density and no confirmed cases, Aber seemed like a good spot to take safe haven in familiar surroundings – I had the good fortune to live there for seven years and have returned almost every year for the last twenty years too. Cancelling our existing hotels and so on was straightforward and we could extend our AirBnB stay in Aber easily, as all of the host’s other bookings for months ahead had already been cancelled.

Our last full day in London on Thursday was a more typical drizzly, damp cold affair but we still donned our walking shoes and enjoyed a stroll around Little Venice, an area we’d discovered some years ago on a previous trip to London. It was a quiet locale with only a few locals on the tow paths around the canals. A return to Hyde Park followed and what would be our last coffee stop at Cloud Twelve, remarkably staying open while almost all around it had already closed indefinitely. We also stocked up on some more supplies at Planet Organic on the way back to the hotel. Our area was much quieter than even the previous day. Dinner was again taken early at Farmacy before they closed at 5pm, this time closing indefinitely. The manager was clearly upset at having to stand down all of his staff at this great restaurant. A final dusk walk in Hyde Park rounded out our outdoor time in London and we packed up ready to leave on Friday. Restrictions on the numbers for social gatherings such as weddings confirmed that our friend’s wedding would need to be postponed.

We had already booked a hire car to pick up from Heathrow on Friday as part of our original itinerary, so we opted to spend the extra for a private taxi back to the airport rather than travelling by tube. Terminal 3 was almost deserted as we caught the empty Budget car hire shuttle bus to collect our car. It was a lovely clear day for a drive and the roads (especially around Heathrow itself) were really quiet, so it was a relatively easy, if long, drive to Aber via the M4, Severn Bridge, Abergavenny, Brecon, Builth Wells, Rhayader and Llangurig. It was late afternoon by the time we concluded our almost five hour drive but we were relieved to be in Aber and our AirBnB apartment was great. Its central location, full kitchen and strong wi-fi boded well for a comfortable (and, if need be, longer term) stay. We figured we had enough time to make it to Dragonfly Bistro, a nice vegetarian eatery near the Castle, so headed there for a coffee and ordered lots of takeaway food to help us see out the coming days. Boris was giving his latest speech while we waited for our takeaway food and it ironically included the news that cafes and restaurants would now also need to close (apart from those offering takeaway and/or delivery services).

It was a stunning clear and sunny Saturday as we ventured out early on a shopping run in Aber. Our first port of call was the newly-opened vegan deli, Iwtopia. Yes, a vegan deli in Aber, who’d have ever thought it!? This would be the last day for Iwtopia for a while so we took the chance to strongly support this great new place and stocked up on a heap of vegan goodies. We enjoyed a very long chat with the lovely owner too – we hope she can survive the downtime and come back to continue offering this great selection of vegan food to the people of Aber. It was especially pleasing to see that lots of the products were local and actively promoting other small vegan businesses. We were a little surprised that the splendid Farmers Market was still on (and practising good social distancing too) so we also grabbed some quality fresh organic veggies, yummy sourdough bread from Anuna Bakery (and chatted with the owner, who relocated from Melbourne to rural Wales!) and interesting food from the ingeniously named vegetarian producer, Parsnipship. Next stop was an old favourite, the stalwart organic shop, the Treehouse, where we stocked up on more quality produce as well as taking advantage of their new bulk supplies store. We’d had a big morning of shopping and felt like we were in a good position to not have to shop again for a week or two. But, more importantly, we’d enjoyed some great conversations with people and felt warmly welcomed – that familiar warm embrace of Aberystwyth was still there for me all these years later. We returned to the Farmers Market to grab lunch from the all-vegan Renegade Kitchen food van and again had an enjoyable conversation with the owner-operators while they prepared our food.

With shopping and lunch sorted, we could finally take the opportunity to enjoy a long walk along the Prom and South beach. A hefty storm the week before had deposited some of the beach up onto the Prom. Aber was surprisingly crowded – it felt more like a Bank Holiday weekend than a town in semi-lockdown – but the expansive Prom gave everyone room to spread out (though not all chose to do so). We supported Dragonfly Bistro by grabbing a takeaway coffee and bought a few interesting Welsh ciders from the new Bottle and Barrel bottle shop. As sunset approached, we walked the Prom again and watched the murmuration of starlings over the Pier, always an amazing sight! Settling into our apartment, we assembled a big vegan dinner from our day’s shopping haul.

Some of the beach dumped onto Aber’s Prom during a recent storm
The murmuration of starlings over Aber pier

Aber turned on another clear and sunny day on Sunday, albeit chilly out of the sun in a cool wind. We drove out the short distance to Penparcau to climb Pen Dinas, with its wonderful views across all of Aber and the Cardigan Bay coastline. A few hardy souls were hiking in this area but again it was easy to keep our distance from others. Heading downhill, we relaxed at Tan-y-Bwlch beach, before the climb back to our car. Lunch back at the apartment was followed by a visit to Dragonfly Bistro for a takeaway coffee – and we then learned that the owner had decided that this would be her last day of opening for the foreseeable future. Back at the apartment, we felt like the world was closing around us with the vegan deli and vegetarian restaurant now gone. It was lovely in the sun down on the Prom and we sat there for a couple of hours to again watch the free show put on by the thousands of starlings heading home for the night under the Pier. Dinner back in the apartment was followed by the news that our return flight with Cathay Pacific (on 14th April) had been cancelled, with advice to await news of an alternative flight home. Little did we know then that this would be the start of a series of flight-related highs and lows in the week ahead.

Pen Dinas monument
Looking down to Tan-y-Bwlch beach from Pen Dinas
The Ystwyth at Tan-y-Bwlch
A stunning view of South Beach and the Castle from Tan-y-Bwlch

In my next blog post, I’ll cover one of the most uncertain, stressful and expensive weeks of our lives.


One of the joys of reading is the books you come across by accident. Reading a couple of Tim Wu’s excellent books (viz. “The Attention Merchants” and “The Master Switch”) led me to books on solitude, including “Solitude: In Pursuit of a Singular Life in a Crowded World” by Michael Harris.

It seemed timely to read on this topic, as I’ve been implementing a “digital declutter” after recently reading Digital Minimalism by Cal Newport. I’m fortunate to live in a beautiful and peaceful location so I’m being much more mindful of making the most of the spot to deliberately separate myself from technology sometimes and take in the simple pleasures of time spent watching the ocean and listening to the birds.

The inspiration for writing “Solitude” came from the author reading about Dr Edith Bone. Hers is a remarkable story (and worth reading about in itself) of seven years spent in solitary confinement.

A little reading – and a hero in Dr. Bone – had turned malaise into a mission. I wanted to become acquainted again with the still night, with my own hapless daydreaming, with the bare self I had (for how long?) been running from. I kept asking myself: why am I so afraid of my own quiet company? This book is the closest I’ve come to an answer.

Aligning closely with Wu’s work, Harris discusses the rise of social media and the “connectedness” it was designed to create. But we all know by now that the “likes” and sharing are highly addictive, triggering small but frequent dopamine hits. This has had a devastating impact on our ability to find solitude:

We’re given opportunities to practise being alone every day, almost every hour. Go on a drive. Sit on a lawn. Stick your phone in a drawer. Once we start looking, we find solitude is always just below the surface of things. I thought at first that solitude was a lost art. Now I know that’s too pretty a term, too soft a metaphor.

Solitude has become a resource.

Like all resources, it can be harvested and hoarded, taken up by powerful forces without permission or inquiry, and then transformed into private wealth, until the fields of empty space we once took for granted first dwindle, then disappear.

Harris goes on to ask the question: what is solitude for? He comes up with three answers: the formulation of fresh ideas, self-knowledge, and (paradoxically) bonding with others.

Taken together, these three ingredients build a rich interior life. It turns out that merely escaping crowds was never the point of solitude at all: rather, solitude is a resource – an ecological niche – inside of which these benefits can be reaped. And so it matters enormously when that resource is under attack.

Our modern, hyperconnected, “always on” world sees solitude under constant threat and it takes a determined effort to find it in our lives:

Our online crowds are so insistent, so omnipresent, that we must now actively elbow out the forces that encroach on solitude’s borders, or else forfeit to them a large portion of our mental landscape.

It turns out that some research has already been done around daydreaming. MRI scanning reveals that daydreaming “constitutes an intense and heterogeneous set of brain functions” and:

…this industrious activity plays out while the conscious mind remains utterly unaware of the work – so our thoughts (sometimes really great thoughts) emerge without our anticipation or understanding. They emerge from the blue. Daydreaming thoughts may look like “pointless fantasizing” or “complex planning” or “the generation of creative ideas”. But, whatever their utility, they arrive unbidden.

Einstein believed that “the daydreaming mind’s ability to link things is, in fact, our only path toward fresh ideas.” Harris describes his own attempts to daydream during a three-hour wander and he says of this experience:

I start to see time-devouring apps like Candy Crush as pacifiers for a culture unwilling or unable to experience a finer, adult form of leisure. We believed those who told us that the devil loves idle hands. And so we gave our hands over for safekeeping. We long for constant proof of our effectiveness, our accomplishments. And perhaps it’s this longing for proof, for glittering external validation, that makes our solitude so vulnerable to those who would harvest it.

The addictive nature of social media (see ludic loops) has seen us giving up what few moments of spare time we have:

To a media baron looking for short-term profits, a daydreaming mind must look like an awful waste. All that time and attention left to wander, directionless! Making use of the blank spaces in a person’s life – draining the well of reverie – has become one of the missions of modernity.

But we do need to break out of this cycle and, bizarrely, doing so is seen as an odd and disruptive thing to do (e.g. I see the disbelief every time I mention to someone that I’m not, and never have been, “on Facebook”):

Choosing a mental solitude, then, is a disruptive act, a true sabotage of the schemes of ludic loop engineers and social media barons. Choosing solitude is a gorgeous waste.

Harris then discusses how we’ve all become part of the crowd and true marks of individualism are being eroded as a result:

…today we need to safeguard our inner weirdo, seal it off and protect it from being buffeted. Learn an old torch song that nobody knows; read a musty out-of-print detective novel; photograph a honey-perfect sunset and show it to no-one. We may need to build new and stronger weirdo cocoons, in which to entertain our private selves. Beyond the sharing, the commenting, the constant thumbs-upping, beyond all that distracting gilt, there are stranger things to be loved.

Harris explores the impact that technologies like Google Maps have had on our ability to truly lose ourselves and wander freely in nature, activities that have historically yielded great insights but are much more difficult to achieve in our hyper-connected and increasingly urban lives. He goes on to look at reading and writing – and the socialization of those activities. Proust once defined reading as “that fruitful miracle of a communication in the midst of solitude” but even this is under threat:

But that solitary reading experience is now endangered, and so is the empathy it fosters. Our stories are going social. We can assume that, in thirty years, readers and writers will use platform technologies to constantly interact with and shape each other, for better or worse. Authors will enlist crowd-sourcing and artificial intelligence to help them write their stories.

In his final chapter, Harris tells the story of his seven-day experience of solitude in a cabin in the woods, offline and alone:

Near the end of this lonely week my thoughts stop floating so much and return to the problem of solitude in a digital culture. Only now, out on the meditative trail I’ve been hiking before and after my crackers-and-apple lunch, I’m thinking about it differently, more expansively. Things here call for wide lenses.

From this dirt vantage, all that clicking and sharing and liking and posting looks like a pile of iron shackles. We are the ones creating the content, yet we’re never compensated with anything but the tremulous, fast-evaporating pleasures that social grooming delivers. Validation and self-expression, we are told, are far greater prizes than the measly cash that flows upward to platform owners…. [these] systems we live by can expropriate no value from solitude, and so they abhor it.

I enjoyed reading this book – it’s written in a very approachable style with many personal anecdotes (which you may or may not find interesting in themselves). I took this read as a reminder to make room for “daydreaming”, be that looking out over the ocean or simply not pulling out my phone during a short tram ride. Nicholas Carr says it well in the Foreword of the book:

Solitude is refreshing. It strengthens memory, sharpens awareness, and spurs creativity. It makes us calmer, more attentive, clearer headed. Most important of all, it relieves the pressure of conformity. It gives us the space we need to discover the deepest sources of passion, enjoyment, and fulfillment in our lives. Being alone frees us to be ourselves – and that makes us better company when we rejoin the crowd.

I also recently read another book on the same topic, but given a much more serious treatment by Raymond Kethledge & Mike Erwin, in the shape of “Lead Yourself First: Inspiring Leadership Through Solitude” – I highly recommend this book.

“The Influence of Organizational Structure on Software Quality: An Empirical Case Study” (Microsoft Research and subsequent blogs)

A Microsoft Research paper from back in 2008 has recently been getting a lot of renewed attention after a blog post about it did the rounds on Twitter, Reddit, etc. The paper is titled “The Influence of Organizational Structure on Software Quality: An Empirical Case Study” and it looks at defining metrics to measure organizational complexity and whether those metrics are better at predicting “failure-proneness” of software modules (specifically, those comprising the Windows Vista operating system) than other metrics such as code complexity.

The authors end up defining eight such “organizational metrics”, as follows:

  • Number of engineers – “the absolute number of unique engineers who have touched a binary and are still employed by the company”. The claim here is that higher values for this metric result in lower quality.
  • Number of ex-engineers – similar to the first metric, but defined as “the total number of unique engineers who have touched a binary and have left the company as of the release date of the software system”. Again, higher values for this metric should result in lower quality.
  • Edit frequency – “the total number of times the source code, that makes up the binary, was edited”. Again, the claim is that higher values for this metric suggest lower quality.
  • Depth of Master Ownership – “This metric (DMO) determines the level of ownership of the binary depending on the number of edits done. The organization level of the person whose reporting engineers perform more than 75% of the rolled up edits is deemed as the DMO.” Don’t ask me, read the paper for more on this one, but the idea is that the lower the level of ownership, the higher the quality.
  • Percentage of Org contributing to development – “The ratio of the number of people reporting at the DMO level owner relative to the Master owner org size.” Higher values of this metric are claimed to point to higher quality.
  • Level of Organizational Code Ownership – “the percent of edits from the organization that contains the binary owner or if there is no owner then the organization that made the majority of the edits to that binary.” Higher values of this metric are again claimed to point to higher quality.
  • Overall Organization Ownership – “the ratio of the percentage of people at the DMO level making edits to a binary relative to total engineers editing the binary.” Higher values of this metric are claimed to point to higher quality.
  • Organization Intersection Factor – “a measure of the number of different organizations that contribute greater than 10% of edits, as measured at the level of the overall org owners.” Low values of this metric indicate higher quality.
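To make the first few of these metrics concrete, here’s a minimal sketch of how they might be computed from an edit log. To be clear, the data structures and names here are entirely my own invention for illustration – the paper itself works from Microsoft’s internal source-control and HR data, which we obviously don’t have:

```python
from dataclasses import dataclass

# Hypothetical edit-log record: who edited which binary, and whether
# that engineer was still employed at the release date. These field
# names are mine, not the paper's.
@dataclass(frozen=True)
class Edit:
    binary: str
    engineer: str
    still_employed: bool

def org_metrics(edits, binary):
    """Sketch of the first three organizational metrics for one binary."""
    touched = [e for e in edits if e.binary == binary]
    current = {e.engineer for e in touched if e.still_employed}
    ex = {e.engineer for e in touched if not e.still_employed}
    return {
        "number_of_engineers": len(current),
        "number_of_ex_engineers": len(ex),
        "edit_frequency": len(touched),  # total edits, not unique people
    }

log = [
    Edit("kernel.dll", "alice", True),
    Edit("kernel.dll", "bob", False),
    Edit("kernel.dll", "alice", True),
]
print(org_metrics(log, "kernel.dll"))
# {'number_of_engineers': 1, 'number_of_ex_engineers': 1, 'edit_frequency': 3}
```

The ownership-based metrics (DMO and friends) would additionally need the reporting hierarchy, which is exactly why this kind of analysis is much easier inside a single large company than for the rest of us.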

These metrics are then used in a statistical model to predict failure-proneness of the over 3,000 modules comprising the 50m+ lines of source code in Windows Vista. The results apparently indicated that this organizational structure model is better at predicting failure-proneness of a module than any of these more common models: code churn, code complexity, dependencies, code coverage, and pre-release bugs.
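The paper doesn’t publish its model or data, but the general idea can be illustrated with a toy failure-proneness predictor. This is my own minimal logistic regression trained on fabricated numbers, not the paper’s actual model, which used many more metrics and real Vista data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(rows, labels, lr=0.1, epochs=2000):
    """Plain per-sample gradient descent on log-loss; rows are feature lists."""
    w = [0.0] * (len(rows[0]) + 1)  # last weight is the intercept
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            xb = x + [1.0]  # append bias input
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, xb)))
            g = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * g * xi for wi, xi in zip(w, xb)]
    return w

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0])))

# Fabricated training data. Features: [number_of_engineers, edit_frequency],
# scaled to 0..1; label 1 = module had post-release failures.
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y = [0, 1, 0, 1]
w = fit_logistic(X, y)
print(predict(w, [0.85, 0.9]) > 0.5)  # high-churn, many-engineer module -> True
```

The model outputs a probability per module, which is then thresholded or ranked – and that ranking is all the paper’s comparison of predictive power really rests on.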

I guess this finding is sort of interesting, if not very surprising or indeed helpful.

One startling omission from this paper is what constitutes a “failure”. There are complicated statistical models built from these eight organizational metrics and comparisons made to other models (and really the differences in the predictive power between all of them are not exactly massive), but nowhere does the paper explain what a “failure” is. This seems like a big problem to me. I literally don’t know what they’re counting – which is maybe just a problem for me – but, much more significantly, I don’t know whether what the different models are counting are the same things (which would be a big deal in comparing the outputs from these models against one another).

Now, a lot has changed in our industry since 2008 in terms of the way we build, test and deploy software. In particular, agile ways of working are now commonplace and I imagine this has a significant organizational impact, so these organizational metrics might not offer as much value as they did when this research was undertaken (if indeed they did even then).

But, after reading this paper and the long discussions that have ensued online recently after it came back into the light, I can’t help but ask myself what value we get from becoming better at predicting which modules have “bugs” in them. On this, the paper says:

More generally, it is beneficial to obtain early estimates of software quality (e.g. failure-proneness) to help inform decisions on testing, code inspections, design rework, as well as financial costs associated with a delayed release.

I get the point they’re making here but the information provided by this organizational metric model is not very useful in informing such decisions, compared to, say, a coherent testing story revealed by exploratory testing. Suppose I predict that module X likely has bugs in it, then what? This data point tells me nothing in terms of where to look for issues or whether it’s worth my while to do so based on my mission to my stakeholders.

We spend a lot of time and effort in software development as a whole – and testing specifically – trying to put numbers against things, perhaps as a means of appearing more scientific or accurate. When faced with questions about quality, though, such measurements are problematic. I thank James Bach for his very timely blog post in which he encourages us to assess quality rather than measure it – taking the time to read his post is time better spent than trying to make sense of over-complicated and meaningless pseudo-science such as that presented in the paper I’ve reviewed here.

(The original 11-page MS Research paper can be found at

2019 in review

It’s almost unbelievable that it’s time to close out my blogging for the year already! I published 13 blog posts during 2019, right on my target cadence of a post per month but down in number from 2017 and 2018. In terms of traffic, my blog attracted a very similar number of views to 2018 and I closed out the year with 1,000 followers on Twitter for the first time.

If there are particular topics you’d like to see me talking about here (especially to encourage more new readers), please feel free to reach out.

Working at Quest

I reached a milestone during 2019, notching up twenty years at Quest! It’s been an amazing journey since I started here in 1999 as a new migrant from the UK to Australia and I continue to enjoy a varied role working with dedicated people around the world. I travelled extensively again during the year and visited our folks in China, Austin (Texas) and the Czech Republic. The regular opportunities to travel and work with people from different cultures remains one of the most enjoyable (and sometimes most challenging!) aspects of my role.


I spent more time through 2019 helping teams to improve their agility, while still assisting widely around testing. As Quest modernizes both in terms of its products (e.g. new SaaS offerings) and processes, there is plenty to keep me busy helping the teams to deal with the different demands of more frequent delivery.

Conferences & meetups

I had another quieter year in terms of conference and meetup attendance. While I didn’t speak at a conference in 2019, I was lucky enough to co-organize the Association for Software Testing’s third Australian conference, Testing in Context Conference Australia 2019 (TiCCA19). Working with Paul Seaman, we put together an excellent programme and the fifty-or-so delegates gave very positive feedback on what we offered. Although we had hoped to continue the TiCCA event as an annual conference, our small delegate numbers and ongoing challenges in attracting sponsorship unfortunately made it impossible for us to commit to the continuation of the event. It’s sad that we couldn’t build a sustainable true context-driven testing conference in a city as large as Melbourne, but Paul and I are happy to have tried hard, with both CASTx18 and TiCCA19 providing great content for our local community.

The only other conference I attended was a non-IT event and something very different in many ways, the Animal Activists Forum in Melbourne. I contrasted the experience of attending this conference against the typical testing/IT conferences I’ve attended in my blog post, A very different conference experience.

I made it to a couple of meetups, the first being a pre-conference meetup we organized around TiCCA19. This meetup was enjoyable to organize and attend, featuring an excellent presentation by Aaron Hodder and a panel session with four TiCCA19 conference speakers – in the shape of Graeme Harvey, Aaron, Sam Connelly and Ben Simo – ably facilitated by Rich Robinson. The second meetup I attended was one of the high-quality Software Art Thou? series and saw the UK’s Kevlin Henney talking on “What do you mean?” (which he quickly modified to “WTF do you mean?”).

Community work

It was disappointing to learn that EPIC Assist had decided to pull out of the Melbourne market during 2019, resulting in the end of the software testing training course Paul Seaman and I had been delivering through them, the EPIC TestAbility Academy.

We would still love to share our knowledge and experience of software testing (and IT more generally) in a community setting and we continue to look for a partner organization to make this happen.

Other stuff

I’ve found myself reading a lot more books during 2019, a very welcome return to something I really enjoy and a useful way to reduce screen time (yes, I’m a physical book reader!). Many of the books came from the library and we are blessed with an excellent service in Melbourne (they purchased a number of books I requested through the year). Some of the books were purchased and shared with others in my office. I didn’t read testing books per se, but I became very interested in the subject of algorithms, AI and so on, reading a number of books in this area. Other areas of focus were leadership and knowledge acquisition.

I’ve also been spending more time to educate myself around animal rights and veganism, plus contributing in small ways to animal rights advocacy. It’s been an interesting change of tack to read books on these topics and also to see the reactions to my posts, tweets, etc. when this is the subject matter rather than my usual content! A handy summary of my thoughts around some of this can be found in my post, What becoming vegan taught me about software testing.

I hit another milestone early in 2019 when I acquired my first smartphone! I still find the form factor challenging and it seems unlikely I’ll ever become addicted to my phone, but I admit that it can be very handy when out and about – and Google Maps on the go during our travels made life a lot easier (though I was surprised offline maps don’t work in China, not a huge issue as we don’t drive there and taxis are incredibly cheap).

It felt like I had a much heavier workload during 2019 as well as some hefty stints of travel, so my outside projects didn’t get as much attention as in the previous few years. But I was glad to have the opportunity to organize the TiCCA19 conference as well as turning some work travel commitments into enjoyable holidays to see some new and interesting places. This time last year I was hinting at a new (personal) testing-related project that I hoped to kick off in 2019 and, while this didn’t eventuate, the project is still alive and I fully expect to get it up and running in 2020!

Thanks to my readers here and also followers on other platforms, I wish you all a very Merry Christmas & Happy New Year, and I hope you enjoy my posts to come through 2020. (And, remember, please let me know if there are any topics you particularly want me to express opinions on; I’m happy to take suggestions!)

Reviewing “Reimagine The Future of Quality Assurance” – (yet) another corporate report on the state of QA/testing

Capgemini recently released another 100+ page report around QA/testing, called Reimagine The Future of Quality Assurance. You might recall that I reviewed another of their long reports, the World Quality Report 2018/2019, and this new report also seemed worthy of some commentary. This is a long post given the length of the report, but I provide a summary of my feelings at the end of the post if the detailed content review below is too hefty.

It’s not clear to me whether this report is focusing on Quality Assurance (QA), on testing, or on both. The term “Quality Assurance” is not clearly defined or differentiated from testing anywhere in the report and, judging from the responses of some of the industry people interviewed in the report, it’s obvious that most of them were also unclear about the focus. It should be noted that my analysis and comments are specifically targeted at what the report discusses around testing.

The report is described as “Featuring the trends shaping the future of quality assurance, and a practitioners’ view of how QA can reinvent customer experiences for competitive advantage”. This description also doesn’t really tell me what the focus is, but let’s start to look at the content.

The Contents suggest the existence of a section on “Methodology” (page 9) but this is not present in the report and wouldn’t be required anyway as this is not a survey results report (in contrast to the World Quality Report) but is rather based on case study/industry commentary. This oversight in the Contents is indicative of a lack of proofing evident throughout the report – there are many typos, copy/paste errors, and grammar issues, suggesting the report itself wasn’t subject to a very diligent process of quality assurance before it was published.

Introductory content

The foreword comes from Olaf Pietschner (Managing Director, Capgemini Australia & New Zealand). He claims that “QA [is] moving up in the agile value chain”, maybe in reference to testing being seen as more important and valuable as more organizations move to more frequent releases, adopt DevOps, etc. but his intent here may well be something different.

In another introductory piece – titled “Transforming testing for digital transformation: Speed is the new currency” – Sandeep Johri (CEO, Tricentis) says:

Reinventing testing is essential for achieving the speed and agility required to thrive in the digital future. Why? Speed is the new currency but traditional software testing is the #1 enemy of speed.

I have several issues with this. What exactly about testing needs “reinventing”? While speed seems to be a focus for many businesses – following the “the fast will eat the slow” mantra – it’s a stretch to argue that testing is or has been the number one reason that businesses can’t move faster in terms of delivering software. There are so many factors that influence an organization’s ability to get software out of the door that to label testing as “enemy number 1” seems simplistic and so context-independent as to be meaningless.

Industry sector analysis

The next seventy-odd pages of the report focus on sector analysis from five industry sectors. Each sector includes an introductory piece from a Capgemini representative followed by case pieces from different businesses in that sector.

The first sector is “Consumer Products, Retail, Distribution & Transport” (CPRDT) and is introduced by Amit Singhania and Prashant Chaturvedi (both Vice-Presidents, Capgemini Australia & New Zealand). They say:

The move from QA to Quality Engineering (QE) is not an option. The equation is simple: Test less and assure more. Serious and continuous disruption in IT means the way testing and QA has been approached in the past must be overhauled.

I think they’re suggesting that it’s necessary to move away from QA towards QE, though they don’t define what they mean by QE. I’m unsure what they’re suggesting when they say “test less and assure more” (which is not an equation, by the way). These soundbite messages don’t really say anything useful to those involved in testing.

As DevOps spreads it is imperative that software – with the continuous development – needs to be continuously tested. This needs a paradigm shift in the skills of a developer and tester as the thin line between these skills is disappearing and same individuals are required to do both.

This continues to be a big subject of debate in the testing world and they seem to be suggesting that testers are now “required” to be developers (and vice-versa). While there may be benefits in some contexts to testers having development skills, I don’t buy this as a “catch all” statement. We do a disservice to skilled human testers when we suggest they have to develop code as well or they’re somehow unworthy of being part of such DevOps/agile teams. We need to do a better job of articulating the value of skilled testing as distinct from the value of excellent development skills, bearing in mind the concept of critical distance.

The first business piece from this sector comes from Australia Post’s Donna Shepherd (Head of Testing and Service Assurance). She talks a lot about DevOps, Agile, increased levels of automation, AI/ML, and Quality Engineering at Australia Post but then also says:

The role of the tester is also changing, moving away from large scale manual testing and embracing automation into a more technical role

I remain unclear as to whether large-scale manual testing is still the norm in her organization or whether significant moves towards a more automation-focused testing approach have already taken place. Donna also says:

The quality assurance team are the gatekeepers and despite the changes in delivery approaches, automation and skillset, QA will continue to play an important role in the future.

This doesn’t make it sound like a genuine DevOps mentality has been embedded yet and in her case, “QA [is a] governance layer having oversight of deliverables”.

The second business piece representing the CPRDT sector comes from McDonald’s, in the shape of David McMullen (Director of Technology) & Matt Cottee (Manager – POS Systems, Cashless, Digital and Technology Deployment), who manage to say nothing about testing in the couple of pages they’ve contributed to the report.

The next sector is “Energy, Utilities, Mining & Chemicals” and this is introduced by Jan Lindhaus (Vice-President, Head of Sector EUC, Capgemini Australia and New Zealand) and there’s not much about testing here. He says:

Smart QA needs to cover integrated ecosystems supported by cognitive and analytical capabilities along end-to-end business value chains with high speed, agility and robustness.

Maybe read that again, as I’ve done many times. I literally have no idea what “smart QA” is based on this description!

A theme gaining in popularity is new ways of working (NWW), which looks beyond Agile project delivery for a discrete capability.

I heard this NWW idea for the first time fairly recently in relation to one of Australia’s big four banks, but I don’t have a good handle on how this is different from the status quo of businesses adapting and changing the way they work to deal with changes in the business landscape over time. Is NWW an excuse to say “we’re Agile but not following its principles”? (Please point me in the direction of any resources that might help me understand this NWW concept more clearly.)

There are three business pieces for this sector, the first of which comes from Uthkusa Gamanayake (Test Capability Manager, AGL). It’s pleasing to finally start to read some more sensible commentary around testing here (maybe as we should expect given his title). He says:

Testers are part of scrum teams. This helps them to work closely with developers and provides opportunities to take on development tasks in addition to testing. The QA responsibility has moved from the quality assurance team to the scrum teams. This is a cultural and mindset shift.

It’s good to hear about this kind of shift happening in a large utility company like AGL and it at least sounds like testers have the option to take on automation development tasks but are not being moved away from manual testing as the norm. On automation, he says:

Individual teams within an organization will use their own automation tools and frameworks that best suit their requirements and platforms. There is no single solution or framework that will work for the entire organization. We should not be trying to standardize test automation. It can slow down delivery.

Again, this is refreshing to hear: they’re not looking for a “one size fits all” automation solution across such a huge IT organization, but rather using best-fit tools to solve problems in the context of individual teams. Turning to the topic du jour, AI, he states his opinion:

In my view AI is the future of test automation. AI will replace some testing roles in the future.

I think the jury is still out on this topic. I can imagine some work that some organizations refer to as “testing” being within the realms of the capability of AI even now. But what I understand “testing” to be seems unlikely to be replaceable by AI anytime soon.

There are many people who can test and provide results, but it is hard to find people who have a vision and can implement it.

I’m not sure what he was getting at here, maybe that it’s hard to find people who can clearly articulate the value of testing and tell coherent & compelling stories about the testing they perform. I see this challenge too, and coaching testers in the skills of storytelling is a priority for me if we are to see human testers being better understood and more valued by stakeholders. He also says:

As for the role of head of testing, it will still exist. It won’t go away, but its function will change. This role will have broader QA responsibilities. The head of QA role does exist in some organizations. I think the responsibilities are still limited to testing practices.

I basically agree with this assessment, in that some kind of senior leadership position dedicated to quality/testing is required in larger organizations even when the responsibility for quality and performing of tests is pushed down into the Scrum delivery teams.

The next business piece comes from David Hayman (Test Practice Manager, Genesis, and Chair of ANZTB). His “no nonsense” commentary is refreshingly honest and frank, exactly what I’d expect based on my experience of meeting and listening to David at past ANZTB conferences and events. On tooling, he says:

The right tool is the tool that you need to do the job. Sometimes they are more UI-focused, sometimes they are more AI-focused, sometimes they are more desktop-focused. As a result, with respect to the actual tools themselves, I’m not going to go into it because I don’t think it’s a value-add and can often be misleading. But actually, it doesn’t generate any value. The great thing is that sanity appears to have overtaken the market so that now we automate what’s valuable as opposed to automating a process because it’s a challenge, or because we can, or we want to, or because it looks good on a CV. The automation journey, though not complete, has reached a level of maturity, where sanity prevails. So that is at least a good thing.

I like the fact that David acknowledges that context is important in our choice of tooling for automation (or anything else for that matter) and that “sanity” is prevailing, at least in some teams in some organizations. That said, I commonly read articles and LinkedIn posts from folks still of the opinion that a particular tool is the saviour or that everyone should have a goal to “automate all the testing” so there’s still some way to go before sanity is the default position on this.

He goes on to talk about an extra kind of testing that he sees as missing from his current testing mix, which he labels “Product Intent Testing”:

I have been thinking that we’re going to need another phase in the testing process – perhaps an acceptance process, or similar. At the moment, we do component testing – we can automate that. We do functional testing – we can automate that. We do system integration testing – we can automate that. We have UAT – we can automate some of that, though obviously it requires a lot more business input.

When you have a situation where the expected results from AI tests are changing all the time, there is no hard and fast expected result. The result might get closer. As long as the function delivers the intent of the requirement, of the use case, or the story, then that’s close enough. But with an automated script, that doesn’t work. You can’t have ‘close enough’.

So I believe there’s an extra step, or an extra phase, I call Product Intent Testing [PIT]. This should be applied once we’ve run the functional tests. What we are investigating is ‘Has the intent that you were trying to achieve from a particular story, been provided?’ That requires human input – decision-making, basically.

It sounds like David is looking for a way to inject a healthy dose of human testing into this testing process, where it might be missing due to “replacement” by automation of existing parts of the process. I personally view this checking of intent to be exactly what we should be doing during story testing – it’s easy to get very focused on (and, paradoxically perhaps, distracted by) the acceptance criteria on our stories and, by failing to step back a little, potentially miss the intent of the story. I’m interested to hear what others think about this topic of covering the intent during our testing.

The last business piece in this sector comes from Ian Robertson (CIO, Water NSW) and, as a non-testing guy, he doesn’t talk about testing in particular, focusing more on domain specifics, but he does mention tooling in the shape of Azure DevOps and Tosca (a Tricentis tool, coincidentally?).

The chunkiest section of the report is dedicated to the “Financial Services” sector, with six business pieces, introduced by Sudhir Pai (Chief Technology and Innovation Officer, Capgemini Financial Services). Part of his commentary is almost identical to that from Jan Lindhaus’s introduction for the “Energy, Utilities, Mining & Chemicals” sector:

Smart QA solutions integrating end-to-end ecosystems powered by cognitive and analytical capabilities are vital

Sudhir also again refers to the “New Ways of Working” idea and he makes a bold claim around “continuous testing”:

Our Continuous Testing report shows that the next 2-3 years is a critical time period for continuous testing – with increased automation in test data management and use of model-based testing for auto-generation of test cases, adoption is all set to boom.

I haven’t seen the “Continuous Testing report” he’s referring to, but I feel like these predictions of booming AI and automation of test case generation have been around for a while already and I don’t see widespread or meaningful adoption. Is “auto-generation of test cases” even something we’d want to adopt? If so, why and what other kinds of risk would we actually amplify by doing so?

Interestingly, none of the six business pieces in this sector come from specialists in testing. The first one is by Nathalie Turgeon (Head of Project Delivery, AXA Shared Services Centre) and she hardly mentions testing, but does appear to argue the case for a unified QA framework despite clearly articulating the very different landscapes of their legacy and digital businesses.

The next piece comes from Jarrod Sawers (Head of Enterprise Delivery Australia and New Zealand, AIA). He makes the observation:

The role of QA has evolved in the past five years, and there a few different parts to that. One part is mindset. If you go back several years across the market, testing was seen as the last thing you did, and it took along time, and was always done under pressure. Because if anything else went slow, the time to test was always challenged and, potentially, compromised. And that is the wrong idea.

It’s very much a mindset shift to say, ‘Well, let’s think about moving to a more Agile way of working, thinking about testing and QA and assurance of that.’ That is the assurance of what that outcome needs to be for the customer from the start of that process.

This shift away from “testing at the end of the process” has been happening for a very long time now, but Enterprise IT is perhaps a laggard in many respects, so it’s not surprising to hear that this shift is a fairly recent thing inside AIA – at least they’ve finally got there as they adopt more agile ways of working. Inevitably for an Enterprise guy, AI is top of mind:

A great part of the AI is around the move from doing QA once to continuous QA. Think about computing speed, and the power available now compared to just a few years ago, and the speed of these activities. Having that integrated within that decision process makes sense. To build it in so that you’re constantly getting feedback that, yes, it’s operating as expected. Yes, it’s giving us the outcomes we’re looking for.

The customer experience or customer outcome is much better, because no organization without AI has one-to-one QA for all of their operational processes. There is risk in manual processing and human decision-making.

I find myself feeling confused by Jarrod’s comments here and unsure what he means when he says that “no organization without AI has one-to-one QA for all of their operational processes” – “one-to-one QA” is not a term I’m familiar with. While I agree that there is risk in using humans for processing and making decisions, it’s simply untrue that there are no risks when the humans are replaced by AI/automation; all that really happens is that a different set of risks applies, and human decision-making, especially in the context of testing, is typically a risk worth taking. On “QA” specifically, Jarrod notes:

It has to be inherently part of the organisational journey to ensure that when we have a new product entering the market, all those things we say it’s going to do must actually happen. If it doesn’t work, it’s very damaging. So how do we know that we’re going to get there? The answer needs to be, ‘We know because we have taken the correct steps through the process’.

And somebody can say, ‘I know we’re doing this properly, it’s going to be very valuable throughout the process’. Whether that is a product owner or a test manager, it has to be somebody who can guarantee the QA and give assurance to the quality.

His closing statement here is interesting and one I disagree with. Putting such responsibility onto a single person is unfair and goes against the idea of the whole team being responsible for the quality of what they deliver. This gatekeeper (read: scapegoat) for quality is not helpful and sets the person up for failure, almost by definition.

The third business piece comes from Nicki Doble (Group CIO, Cover-More) and it’s clear that, for her, QA/testing is all about confidence-building:

We need to move faster with confidence, and that means leveraging continuous testing and deployment practices at the same time as meeting the quality and security requirements.

This will involve automated releases, along with test-driven development and automated testing to ensure confidence is maintained.

Historically, testing has been either quite manual or it involved a huge suite of automated tests that took a lot of effort to build and maintain, but which didn’t always support the value chain of the business.

In future, we need to focus on building only the right automated testing required to instill confidence and surety into our practices. This needs to be a mix of Test Driven Development (TDD) undertaken by our developers but supported by the QA team, automated performance and functional testing to maintain our minimum standards and create surety. And it needs to be paired with continuous testing running across our development branches.

It worries me to see words like “confidence” and “surety” in relation to expectations from testing. It sounds like she believes that TDD and automated testing are providing them more certainty than when they had their “quite manual” testing. It would have been more encouraging to instead read that she understands that an appropriate mix of human testing and automated checks can help them meet their quality goals, alongside an acknowledgement that surety cannot be achieved no matter what this mix looks like.

The next business piece comes from Raoul Hamilton-Smith (General Manager Product Architecture & CTO NZ, Equifax). He sets out his vision around testing all too clearly:

We want to have all testing automated, but we’re not there yet. It’s a cultural shift as much as a technology shift to make this happen.

Making all testing automated is not a cultural or technology shift; it’s simply an impossible mission – if you really believe testing is more than algorithmic checking. So much for “sanity” taking over (per David Hayman’s observations) when it comes to the “automate everything” nonsense! Raoul goes on to talk about Equifax’s Agile journey:

The organisation has been set up for Agile delivery for quite some time, including a move to the scaled Agile framework around 18 months ago. A standard Agile team (or squad) consists of a product owner, some application engineers, some QA analysts and a scrum master. As far as line management is concerned, there is a QA tower. However, the QA people are embedded in the Agile teams so their day-to-day leadership is via their scrum master/project manager.

What we have not been so good at is being very clear about the demand to automate testing. We probably haven’t shown all [sic] how that can be achieved, with some areas of delivery being better than others.

This is the challenge that we’re facing now – we have people who have been manually testing with automation skills that haven’t really had the opportunity to build out the automaton. So right now, we are at the pivot point, whereby automation is the norm.

It sounds like someone in this organization has loaded up on the Kool-Aid, adopting SAFe and borrowing the idea of squads from the so-called Spotify Model (itself a form of scaling framework for Scrum teams). The desire for complete automation is also evident again here in “the demand to automate testing”. It would be interesting to hear from this organization again in a year or two when the folly of this “automate everything” approach has made them rethink and take a more human-centred approach when it comes to testing.

The penultimate piece for this sector comes courtesy of David Lochrie (General Manager, Digital & Integration, ME Bank). He talks a lot about “cycle time” as a valuable metric and has this to say about testing:

From a quality assurance perspective, as a practice, Lochrie characterises the current status as being in the middle of an evolution. This involves transforming from highly manual, extremely slow, labour-intensive enterprise testing processes, and instead heading towards leveraging automation to reduce the cycle time of big, expensive, fragile [sic] and regression test suites.

“We’ve started our QA by focusing purely on automation. The next phase QA transformation journey will be to broaden our definition of QA. Rather than just focusing on test execution, and the automation of test execution, it will focus on what other disciplines come under that banner of QA and how do we move those to the left.”

The days of QA equating to testing are gone, he says.

QA these days involves much more than the old-school tester sitting at the end of the value chain and waiting for a new feature to be thrown over the fence from a developer for testing. “Under the old model the tester knew little about the feature or its origins, or about the business need, the design or the requirement. But those days are over.”

This again sounds like typical laggard enterprise IT – in more genuinely agile organizations, testers have been embedded into development teams (and fully across features from the very start) as the norm for many years already. Unfortunately, here again it sounds like ME Bank will make the same fundamental error of trying to automate everything as the way to move faster and reduce their precious cycle time. I’d fully expect sanity to prevail in the long term for this organization too, simply out of necessity, so perhaps let’s revisit their comments in a future report as well.

The sixth and final business piece is by Mark Zanetich (Head of Technology, Risk Compliance and Controls for Infrastructure & Operations, Westpac) and he has nothing substantive to say around testing.

Next up in terms of sectors comes “Higher Education” and the introduction is by David Harper (Vice-President, Head of Public Services, Capgemini Australia and New Zealand) who has nothing to say about testing either.

There are two business pieces for this sector, the first coming from Vicki Connor (Enterprise Test Architect, Deakin University) and she says this around AI in testing:

As far as testing applications based on AI, we are doing some exploratory testing and we are learning as we go. We are very open-minded about it. Whilst maintaining our basic principles of understanding why we are testing, what we are testing, when to test, who is best suited to test, where to conduct the testing and how to achieve the best results.

It’s good to read that they’re at least looking at AI in testing via the lens of the basics of why they’re testing and so on, rather than blindly adding it to their mix based on what every other organization is claiming to be doing. I assume when Vicki refers to “exploratory testing” here that she’s really meaning they’re experimenting with these AI approaches to testing and evaluating their usefulness in their own unique context (rather than using ET as a testing approach for their applications generally).

The second business piece comes from Dirk Vandenbulcke (Director – Digital Platforms, RMIT University) and more frequent releases are a hot topic for him:

RMIT us currently in a monthly release cadence. By only having monthly releases, we want to ensure the quality of these releases matches what you would normally find in Waterfall circumstances.

Automation is not only a form of cost control; it is also a question of quality control to meet these timelines. If the test cycles are six weeks, there is no way you can operate on a release cadence of four weeks.

Ultimately, we would like to move to fortnightly-releases for speed-to-market reason[sic], which means our QA cycles need to be automated, improved, and sped up.

For the moment, our QA is more journey-focused. As such, we want to make sure our testing needs are optimised, and use cases are properly tested. Potentially, that means not every single edge case will be tested ever single time. When they were originally developed they were tested – but they won’t be every single time we deploy.

We have started to focus our activities around the paths and journeys our students and staff will take through an experience, rather than doing wide, unfocused tests.

Especially in a fast release cadence, you can’t test every single thing, every time, or automate every single thing, so it’s essential to be focused.

I find it fascinating that the quality bar after moving to monthly releases is “what you would normally find in Waterfall circumstances.” This sounds like a case of fear of the unknown in moving to more frequent releases, when in reality the risk involved in such releases should be lower since fewer changes are involved in each release. His approach of workflow/journey testing, though, strikes me as sensible and he also seems to have a handle on the folly of attempting to automate everything as a way out of the issues he’s facing with these more frequent releases.

The final sector considered in this report is “Government” and this is introduced by David Harper again. He manages to mention all the buzzwords in just a few sentences:

Technology trends continue to encourage new innovative approaches of testing code and opportunities for QA with, for example, DevOps and continuous integration and continuous delivery, further enabling Agile operating environments. Most notable is the emergence of the applicability of AI/machine learning as it relates to driving efficiency at scale in large transaction processing environments.

While these techniques are starting to be deployed in business process, it is interesting to explore how learning algorithms will be used to improve QA activities. Such smart or advanced automaton in testing will emerge once agencies have found their feet with automated testing.

My read of this is that government departments are still struggling with automated testing, let alone applying AI and machine learning.

There are two business pieces for this sector, firstly from Srinivas Kotha (Technology Testing Services Manager, Airservices) and he talks a lot about frameworks and strategy, focusing on the future but with less comment about the current state. He suggests that the organization will first look around to determine their strategy:

As part of the test strategy development, I will be looking at the market trends and emerging technologies in testing and quality assurance space to be able to effectively satisfy our future needs and demands. I believe technology evolution is on the upward trend and there is lot out there in the market that we can leverage to enhance our testing and QA capability and deliver business value.

I hope that they will actually start from their own unique requirements and then consider which technologies can help them meet those requirements, rather than taking these “market trends” and fitting them into their strategy. As we can see from this very report, this “trend” noise is generally not helpful; the organization’s own context and specific needs should be the key drivers behind choices of technology. Talking about automation and AI, he says:

I will be keen to look at implementing more automation and use of Artificial Intelligence (AI) to scale up to increase the coverage (depth and breadth) of testing to reduce risks and time to market. We will be looking at two components within automation – basic and smart automation. We have done little bit of basic automation at the project level. However, we are not going to reuse that for ongoing testing, nor are we maintaining those scripts. There are some areas within the organisation where specific automated scripts are maintained and run for specific testing needs. We currently using a combination of market-leading and open source tools for test management and automation. Key basic automation items that are for immediate consideration are around ongoing functional, regression and performance (load and stress) testing.

Smart automation uses emerging technology such as AI. The questions we are asking are: how we can automate that for testing and data analysis for improving quality outcomes? And what testing can we do from a DevOps and CI/CD perspective, which we aim to adopt in the coming 1-2 years? In the next 6 months we will put up the framework, create the strategy and then begin implementing the initiatives in the strategy. The key potential strategy areas are around automation, test environment and data, and some of the smart test platforms/labs capability.

It sounds like they are in the very early days of building an automation capability, yet they’re already thinking about applying AI and so-called “smart automation”. There’s real danger here in losing sight of why they are trying to automate some of their testing.

The second piece comes from Philip John (QA and Testing Services Manager, WorkSafe Victoria) and his comments see the first mention (I think) of BDD in this report:

When it comes to QA resourcing, we are bringing in more Agile testers who can offer assistance in automation, with an aim to support continuous QA to underpin a CI/CD approach. We have behavioural-driven development and DevOps in our mix and are focusing our delivery model into shift-left testing.

The organisation is also using more Agile/SAFe Agile delivery models.

It all sounds very modern and on trend; hopefully the testers are adding genuine value and not just becoming Cucumbers. Note the mention of SAFe here, not the first time this heavyweight framework appears in the business pieces of this report. Philip heads down the KPI path as well:

From the KPI perspective, the number of KPIs in the testing and QA space is only going to grow, rather than diminish. We expect that there will be a tactical shift in the definition of some KPIs. In any case, we will need to have a reasonable level of KPIs established to ensure the adherence of testing and quality standards.

I don’t understand the fascination with KPIs and, even if we could conceive of some sensible ones around testing, why would more of them necessarily be better? Hitting a KPI number and ensuring adherence to a standard are, of course, completely different things too.

Trend analysis

Moving on from the sector analysis, the report identifies eight “Key trends in Quality Assurance”, viz.

  • DevOps changes the game
  • Modern testing approaches for a DevOps world
  • The status of performance testing in Agile and DevOps
  • Digital Transformation and Artificial Intelligence
  • Unlocking the value of QA teams
  • Connected ecosystem for effective and efficient QA
  • RPA and what we can learn from test automation
  • Data democratisation for competitive advantage

Ignoring the fact that these are not actually trends (at least not as they are stated here) and that there is no indication of their source, let’s look at each in turn.

Each trend is supported by a business piece again, often by a tool vendor or some other party with something of a vested interest.

For “DevOps changes the game”, it’s down to Thomas Hadorn and Dominik Weissboeck (both Managing Directors APAC, Tricentis) to discuss the trend, kicking off with:

Scaled agile and DevOps are changing the game for software testing

There’s that “scaled agile” again but there’s a reasonable argument for the idea that adopting DevOps does change the game for testing. They discuss a little of the “how”:

In the past, when software testing was a timeboxed activity at the end of the cycle, the focus was on answering the question, ‘Are we done testing?’ When this was the primary question, “counting” metrics associated with the number of tests run, incomplete tests, passed tests and failed tests, drove the process and influenced the release decision. These metrics are highly ineffective in understanding the actual quality of a release. Today, the question to answer is: ‘Does the release have an acceptable level of risk?’

To provide the DevOps community with an objective perspective on the quality metrics most critical to answering this question, Tricentis commissioned Forrester to research the topic. The goal was to analyse how DevOps leaders measured and valued 75 quality metrics (selected by Forrester), then identify which metrics matter most for DevOps success

I like their acknowledgement that the fetish of counting things around testing is ineffective and that answering questions about risk is a much more profound way of showing the value of testing. Turning to the Forrester research they mention, they provide this “quadrant” representation, where the horizontal axis represents the degree to which metrics are measured and the vertical axis the value gained from measuring them (note that in this image, “Directions” should read “Distractions”):


I find it truly bizarre that a “hidden gem” is the idea of prioritizing automated tests based on risk (how else would you do it?!), while high value still seems to be placed on the very counting of things they’ve said is ineffective (e.g. total number of defects, test cases executed, etc.).
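Prioritizing automated tests by risk really is as obvious as it sounds. As a minimal sketch (the likelihood × impact scoring is the standard risk formulation; the test names and scores here are entirely invented for illustration):

```python
# Minimal sketch of risk-based test prioritization: each automated test is
# tagged with likelihood-of-failure and impact-of-failure scores (1-5), and
# tests run in descending order of risk = likelihood * impact, so a
# time-boxed run covers the riskiest areas first.
# The test names and scores are invented for illustration.

tests = [
    {"name": "test_checkout_payment", "likelihood": 4, "impact": 5},
    {"name": "test_profile_avatar_upload", "likelihood": 2, "impact": 1},
    {"name": "test_login", "likelihood": 3, "impact": 5},
]

def risk(test):
    return test["likelihood"] * test["impact"]

prioritized = sorted(tests, key=risk, reverse=True)
print([t["name"] for t in prioritized])
# → ['test_checkout_payment', 'test_login', 'test_profile_avatar_upload']
```

The interesting (and hard) part is not the sorting, of course, but deciding on sensible likelihood and impact scores in the first place.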

The next trend, “Modern testing approaches for a DevOps world”, is discussed by Sanjeev Sharma (VP, Global Practice Director | Data Modernization, Delphix). He makes an observation on the “Move Fast and Break Things” notion:

Although it was promulgated by startups that were early adopters of DevOps, “the notion of Move Fast and Break Things” is passé today. It was a Silicon Valley thing, and that era no longer exists. Enterprises require both speed and high quality, and the need to deliver products and services faster, while maintaining the high expectations of quality and performance are challenges modern day testing and QA teams must address.

This is a fair comment and I see most organizations still having a focus on quality over speed. The desire to have both is certainly challenging many aspects of the way software is built, tested and delivered – and “DevOps” is not a silver bullet in this regard. Sanjeev also makes this observation around AI/ML:

… will drive the need for AI and ML-driven testing, meaning testing and QA are guided by learning from the data generated by the tests being run, by the performance of systems in production, and by introducing randomness – chaos – into systems under test.

This is something I’ve seen far less of in the testing industry than I’d have expected: taking the data generated by different kinds of tests (be they automated or not) and using that data to guide further or different testing. We have the tooling to do this, but even basic measures such as the code covered by automated test suites are not generally collected and, even where they are, they’re not used as input into the risk analysis for human testing.
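Even a crude version of this idea is straightforward once per-module coverage figures are exported from your coverage tool of choice (coverage.py can emit them as JSON, for example). The module names, figures and threshold below are invented for illustration:

```python
# Sketch: use automated-test coverage data as one input into the risk
# analysis for human testing, by flagging poorly-covered modules as
# candidates for exploratory testing.
# Assumes per-module line-coverage percentages have been exported from a
# coverage tool; the module names and figures are invented for illustration.

coverage_by_module = {
    "billing/invoices.py": 34.0,
    "auth/session.py": 92.5,
    "reports/export.py": 58.0,
}

THRESHOLD = 60.0  # below this, flag the module for exploratory testing

# Worst-covered modules first.
candidates = sorted(
    (m for m, pct in coverage_by_module.items() if pct < THRESHOLD),
    key=lambda m: coverage_by_module[m],
)
print(candidates)
# → ['billing/invoices.py', 'reports/export.py']
```

Coverage is a weak oracle on its own, naturally; the point is only that this data already exists in most pipelines and could feed the human risk conversation.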

The next (not) trend is “The status of performance testing in Agile and DevOps”, covered by Henrik Rexed (Performance Engineer, Neotys) and his focus – unsurprisingly since he works for a performance testing tool vendor – is performance testing. He comments:

That is why the most popular unicorn companies have invested in a framework that would allow them to automatically build, test, and deploy their releases to production with minimal human interaction.

Every organisation moving to Agile or DevOps will add continuous testing to their release management. Without implementing the performance scoring mechanism, they would quickly be blocked and will start to move performance testing to major releases only.

We are taking major risks by removing performance testing of our pipeline. Let’s be smart and provide a more efficient performance status.

I’m not keen on the idea of taking what so-called “unicorn companies” do as a model for what every other company should do – remember context matters and what’s good for SpaceX or WeWork might not be good for your organization. I agree that continuous testing is a direction most teams will take as they feel pressured to deploy more frequently and I see plenty of evidence for this already (including within Quest). Henrik makes the good point here that the mix of tests generally considered in “continuous testing” often doesn’t include performance testing and there are likely benefits from adding such testing into the mix rather than kicking the performance risk can down the road.
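A “performance scoring mechanism” of the kind Henrik alludes to can start very simply: compare key latency percentiles from a pipeline load-test run against agreed budgets and fail the build on any breach. The metric names and budget figures below are invented for illustration:

```python
# Sketch of a simple performance gate for a CI/CD pipeline: latency
# percentiles from a load-test run are checked against agreed budgets,
# and the build fails if any budget is exceeded.
# The metric names and budgets (in milliseconds) are invented for illustration.

budgets_ms = {"p50": 200, "p95": 800, "p99": 1500}

def performance_gate(results_ms):
    """Return the list of breached metrics; an empty list means the gate passes."""
    return [m for m, budget in budgets_ms.items()
            if results_ms.get(m, float("inf")) > budget]

breaches = performance_gate({"p50": 180, "p95": 950, "p99": 1400})
print(breaches)  # → ['p95']
```

Missing metrics are treated as breaches here (via the `float("inf")` default), on the assumption that a gate should fail closed rather than silently pass.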

The next trend is “Digital Transformation and Artificial Intelligence” and is discussed by Shyam Narayan (Director | Head of Managed Services, Capgemini Australia and New Zealand). On the goal of AI in testing, he says:

AI interactions with the system multiply the results normally obtained by manual testing. A test automation script can be designed to interact with the system, but it can’t distinguish between the correct and incorrect outcomes for applications.

The end goal of leveraging AI in testing is to primarily reduce the testing lifecycle, making it shorter, smarter, and capable of augmenting the jobs of testers by equipping them with technology. AI is directly applicable to all aspects of testing including performance, exploratory and functional regression testing, identifying and resolving test failures, and performing usability testing.

I’m not sure what he means by “multiplying the results normally obtained by manual testing” and I’m also not convinced that the goal of leveraging AI is to reduce the time it takes to test; I’d see the advantages more in terms of enabling us to do things we currently cannot using humans or existing automation technologies. He also sees a very broad surface area of applicability across testing; it’ll be interesting to see how the reality pans out. In terms of skill requirements for testers in this new world, Shyam says:

Agile and DevOps-era organisations are seeking software development engineers in test (SDET) – technical software testers. But with AI-based applications the requirement will change from SDET to SDET plus data science/statistical modelling – software development artificial intelligence in rest [sic] (SDAIET). This means that QA experts will need knowledge and training not only in development but also in data science and statistical modelling.

This honestly seems ridiculous. The SDET idea hasn’t even been adopted broadly and, where organizations went “all in” around that idea, they’ve generally pulled back and realized that the testing performed by humans is actually a significant value add. Something like a SDAIET is so niche that I can’t imagine it catching on in any significant way.

The next trend is “Unlocking the value of QA teams” and is discussed by Remco Oostelaar (Director, Capgemini Australia and New Zealand). His main point seems to be that SAFe adoption has been a great thing, but that testing organizations haven’t really adapted into this new framework:

In some cases, the test organisation has not adapted to the new methods of Agile, Lean, Kanban that are integrated into the model. Instead it is still structurally based on the Waterfall model with the same processes and tools. At best these test organisations can deliver some short-term value, but not the breakthrough performance that enables the organisation to change the way it competes.

It’s interesting that he considers SAFe to be a model incorporating Agile, Lean and Kanban ideas; I didn’t get that impression when I took a SAFe course some years ago, but I acknowledge that my understanding of, and interest in, the framework is limited.

It is also important to consider how to transform low-value activities into a high-value outcome. An example is the build of manual test scenarios to automation that can be integrated as part of the continuous integration and continuous delivery (CI/CD) model. Other examples are: automatic code quality checks, continuous testing for unit tests, the application performance interface (API), and monitoring performance and security.

It’s sad to see this blanket view of manual testing as a “low-value activity” and we continue to have a lot of work to do in explaining the value of human testing and why & where it still fits even in this new world of Agile, CI/CD, DevOps, SAFe, NWW, AI, <insert buzzword here>.

Implementing SAFe is not about cost reduction; it is about delivering better and faster. Companies gain a competitive edge and improved customer relationship. The focus is on the velocity, throughput, efficiency improvement and quality of the delivery stream.

I’m sure no organization takes on SAFe believing it will reduce costs; just a glance at the framework overview shows you how heavyweight it is and how much extra work you’ll need to do to implement it by the book. I’d be interested to see case studies of efficiency improvements and quality upticks after adopting SAFe.

The next trend is “Connected ecosystem for effective and efficient QA” and it’s over to Ajay Walgude (Vice President, Capgemini Financial Services) for the commentary. He makes reference to the World Quality Report (per my previous blog):

Everything seems to be in place or getting in order, but we still have lower defect removal efficiency (DRE), high cost of quality especially the cost on non-conformance based on interactions with various customers. While the World Quality Report (WQR) acknowledges these changes and comments on the budgets for QA being stable or reduced, there is no credible source that can comment on metrics such as cost of quality, and DRE across phases, and type of defects (the percentage of coding defects versus environment defects).

He doesn’t cite any sources for these claims. Do we really have lower DRE across the industry? How would we know? And would we care? Metrics like DRE are not gathered by many organizations (and rightly so, as far as I’m concerned), so such claims for the industry as a whole make no sense.

Effective and efficient QA relates to testing the right things in the right way. Effectiveness can be determined in terms of defect and coverage metrics such as defect removal efficiency, defect arrival rate, code coverage, test coverage and efficiency that can be measured in terms of the percentage automated (designed and executed), cost of quality and testing cycle timeframe. The connected eco system not only has a bearing on the QA metrics – cost of appraisal and prevention can go down significantly – but also on the cost of failure.

I’m with Ajay on the idea that we should strive to test the right things in the right way; this is again an example of context awareness, though that’s probably not what he’s referring to. I disagree with measuring effectiveness and efficiency via the types of metrics he mentions, however. Measuring “percentage automated” is meaningless to me: it treats human and automated tests as countable in the same way (which is nonsense) and reinforces the notion that more automation is better (which is not necessarily the case). And how exactly would one measure the “cost of quality” as a measure of efficiency?

He also clearly sees the so-called “Spotify Model” as being in widespread usage and makes the following claim about more agile team organizations:

The aim of squads, tribes, pods and scrum teams is to bring everybody together and drive towards the common goal of building the minimum viable product (MVP) that is release worthy. While the focus is on building the product, sufficient time should be spent on building this connected eco system that will significantly reduce the time and effort needed to achieve that goal and, in doing so, addressing the effective and efficient QA.

The goal of an agile development team is not to build an MVP; this may be a goal at some early stage of a product’s life, but it won’t generally be the goal.

The penultimate trend, “RPA and what we can learn from test automation”, is covered by Remco Oostelaar again (Director, Capgemini Australia and New Zealand) and he starts off by defining what he means by RPA:

Robotic Process Automation (RPA) is the automation of repetitive business tasks and it replaces the human aspect of retrieving or entering data from or into a system, such as entering invoices or creating user accounts across multiple systems. The goal is to make the process faster, more reliable, and cost-effective.

He argues that many of the challenges that organizations face when implementing RPA are similar to those they faced previously when implementing automated testing, leading to this bold claim:

In the future, RPA and test automation will merge into one area, as both have the same customer drivers – cost, speed and quality – and the skillsets are exchangeable. Tool providers are crossing-over to each other’s areas and, with machine learning and AI, this will only accelerate.

It troubles me when I see “test automation” positioned as a cost-reduction initiative. The ROI on test automation (like any form of testing) is zero – it’s a cost, just as writing code is a cost – and yet I’ve literally never seen anyone ask for an ROI justification to write product code.

The last trend covered here is “Data democratisation for competitive advantage”, discussed by Malliga Krishnan (Director | Head of Insights and Data, Capgemini Australia and New Zealand) and she doesn’t discuss testing at all.

In another report error, there is actually another trend not mentioned until we get here, so the final final trend is “The transformative impact of Cloud”, covered by David Fodor (Business Development – Financial Services, Amazon Web Services, Australia & New Zealand). It’s a strongly pro-AWS piece, as you’d probably expect, but it’s interesting to read the reality around AI implementations for testing viewed through the lens of such an infrastructure provider:

When it comes to quality assurance, it’s very early days. I haven’t seen significant investment in use cases that employ AI for assurance processes yet, but I’m sure as organisations redevelop their code deployment and delivery constructs, evolve their DevOps operating models and get competent at managing CI/CD and blue/green deployments, they will look at the value they can get from AI techniques to further automate this process back up the value chain.

It sounds like a lot of organizations have a long way to go in getting their automation, CI/CD pipelines and deployment models right before they need to worry about layering on AI. He makes the following points re: developers and testers:

Traditionally, there was a clear delineation between developers and testers. Now developers are much more accountable – and will increasingly be accountable – for doing a significant part of the assurance process themselves. And, as a result, organisations will want to automate that as much as possible. We should expect to see the balance of metrics – or what success looks like for a development cycle – to evolve very much to cycle time over and above pure defect rate. As techniques such as blue/green and canary deployments evolve even further, and as microservices architectures evolve further, the impacts of defects in production will become localised to the extent where you can afford to bias speed over failure.

The more you bias to speed, the more cycles that you can produce, the better you get and the lower your failure rates become. There is a growing bias to optimise for speed over perfection within an environment of effective rollback capabilities, particularly in a blue/green environment construct. The blast radius in a microservices architecture means that point failures don’t bring down your whole application. It might bring down one service within a broader stack. That’s definitely the future that we see. We see organisations who would rather perform more deployments with small failure rates, than have protracted Waterfall cycle development timelines with monolithic failure risk.

The “cycle time” metric is mentioned again here and at least he sees nonsense metrics such as defect rates going away over time in these more modern environments. His comment that “the impacts of defects in production will become localised to the extent where you can afford to bias speed over failure” rings true, but I still think many organizations are far away from the maturity in their DevOps, CI/CD, automation, rollbacks, etc. that make this a viable reality. The illusion of having that maturity is probably leading some to already be making this mistake, though.

Takeaways for the future of Quality Assurance

With the trends all covered, the last substantive section of the report is the “10 Key takeaways for the future of Quality Assurance”, which are again listed without any citations or sources and so can only be taken as Capgemini opinion:

  • Digital transformation
  • Shift-Left or Right-Shift
  • Automation
  • Redefined KPIs
  • Evolution of QA Tools
  • QA Operating Model
  • QA framework and strategy
  • Focus on capability uplift
  • Role of industry bodies
  • Role of system integrators

Wrapping up

This is another hefty report from the Capgemini folks and, while the content is gathered less from data and more from opinion pieces when compared to the World Quality Report, it results in a very similar document.

There are plenty of buzzwords and plenty of vested-interest commentary from tool vendors, but little to encourage me that such a report tells us much about the true state of testing or its future. While it was good to read some of the more sensible and practical commentary, the continued predictions of AI playing a significant role in QA/testing sometime soon simply don’t reflect the reality seen by those of us who spend time learning about what’s actually going on in testing teams. Most organizations still have a hard enough time getting genuine value from a move to agile ways of working, and particularly from leveraging automation to best effect, so the extra complexity and contextual application of AI seems a long way off the mainstream to me.

I realize that I’m probably not the target audience for these corporate-type reports, and that their target audience will probably take on board some of the ideas from such a high-profile document – unfortunately, this is likely to result in poor decisions about testing direction and strategy in large organizations, when a more context-aware investigation of which practices make sense in each unique environment would likely produce better outcomes.

I find reading these reports both interesting and quite depressing in about equal measure, but I hope I’ve highlighted some of the more discussion-worthy pieces in this blog post.

A very different conference experience

My Twitter feed has been busy in recent weeks with testing conference season in full swing.

First on my radar after some time away in Europe on holidays was TestBash Australia, followed soon afterwards by their New Zealand and San Francisco incarnations. Next up was the German version of the massive Agile Testing Days and another mega-conference in the shape of European stalwart EuroSTAR is in progress as I write.

It’s one of the joys of social media that we can share in the goings on of these conferences even if we can’t attend in person. The only testing conference I’ve attended in 2019 has been TiCCA19 in Melbourne (an event I co-organized with Paul Seaman and the Association for Software Testing) but I hope to get to an event or two in 2020.

I did attend a very different kind of conference at the Melbourne Town Hall in October, though, in the shape of the full weekend Animal Activists Forum. There was a great range of talks across several tracks on both days and I saw inspiring presentations from passionate activists. Organizations like Voiceless, Animals Australia, Aussie Farms, The Vegan Society, and the Animal Justice Party – as well as many individuals – are doing so much good work for this movement.

There were some marked differences between this conference and the testing/IT conferences I generally attend. Firstly, the cost for the two full days of this event (including refreshments but not lunches) was just AU$80 (early bird), representing remarkable value given the location and range of great talks on offer.

Another obvious difference was the prevalence of female speakers on the programme, probably due to the fact that the vegan community is believed to be around 70-80% female. It was good to see more passion and positivity emanating from the stage too, all the more remarkable when considering the atrocities and realities of the animal exploitation industries that many of us are regularly exposed to within this movement.

The focus of most of the talks I attended was on actionable content, things we could do to help advance the movement. While there was some discussion of theory, history and philosophy, it was for the most part discussed with a view to providing ideas for what we can do now to advance animal rights. Many IT conference talks would do well to similarly focus on actionable takeaways.

While there were many differences compared to tech conferences, there was also evidence of common themes. One of the areas of commonality was how difficult it is to persuade people to change, even in the face of facts and evidence in support of the positive impacts of the change, such as going vegan (with the focus being squarely on going vegan for the animals in this audience, while also considering the environmental and health benefits). It was good to hear the different ideas and approaches from different speakers and activist groups. We need many different styles of advocacy when it comes to context-driven testing too – different people are going to be reached in different ways (it’s almost as though context matters!).

It’s interesting to me how easy it sometimes seems to be to change people’s minds or opinions, though. An example I’ve seen unfolding is the introduction of dairy products into China. I’ve been working with testing teams there for seven years and, for the first few years, I rarely saw or heard any mention of dairy products. This situation has changed very rapidly, thanks to massive marketing efforts by the dairy industry (most notably – and sadly – from Australian and New Zealand dairy companies). Even though almost all Chinese people are lactose intolerant and have little idea how to use products like dairy milk and cheese, the consumption of these products has become very mainstream. From infant formula (a very lucrative business) to milk on supermarket shelves (with some very familiar Australian brands on show) to Starbucks, the dairy offerings are now ubiquitous.

The fact that these products are normalized in the West enables an easier sell to the Chinese, and the marketing has been heavily contextualized; for example, some of the advertising claims that drinking cow’s milk will help children grow taller. These nutritional falsehoods have worked in the West and are now working in China. The dairy mythology has been successfully sold to this enormous market, and the unbelievable levels of cruelty that will result from this, as well as the inevitable negative human health implications, are tragic. Such large industries, of course, have dollars on their side to mount huge marketing campaigns and are driven by profit above concern for the abuse of animals or the health of their consumers. But maybe there are lessons to be learned from their approaches to messaging that can be beneficial in selling good approaches to testing (without the blatant untruths, of course)?

(By the way, does anyone reading this post know if the ISTQB is having a marketing push in China right now? A couple of my colleagues there have talked to me about ISTQB certification just in the last week, while no-one had mentioned it before in the seven years I’ve been working with testers in China…)

If you found this post interesting, I humbly recommend that you also read this one: What becoming vegan taught me about software testing.

All testing is exploratory: change my mind

I’ve recently returned to Australia after several weeks in Europe, mainly for pleasure with a small amount of work along the way. Catching up on some of the testing-related chatter on my return, I spotted that Rex Black repeated his “Myths of Exploratory Testing” webinar in September. I respect the fact that he shares his free webinar content every month and, even though I often find myself disagreeing with his opinions, hearing what others think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good testing looks like.

Rex started off with his definition of exploratory testing (ET), viz.

A technique that uses knowledge, experience and skills to test software in a non-linear and investigatory fashion

He claimed that this is a “pretty widely shared definition of ET” but I don’t agree. The ISTQB Glossary uses the following definition:

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

The definition I hear most often is something like the following James Bach/Michael Bolton effort (which they used until 2015):

An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project

They have since deprecated the term “exploratory testing” in favour of simply “testing” (from 2015), defining testing as:

Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.

Rex went on to say that the test basis and test oracles in ET “are primarily skills, knowledge and experience” and any such testing is referred to as “experience-based testing” (per the ISTQB definition, viz. “Testing based on the tester’s experience, knowledge and intuition.”). Experience-based testing that is investigatory is then deemed to be exploratory. I have several issues with this. There is an implication here that ET involves testing without using a range of oracles that might include specifications, user stories, or other more “formal” sources of what the software is meant to do. Rex reinforces this when he goes on to say that ET is a form of validation and “may tell us little or nothing about conformance to specification because the specification may not even be consulted by the tester”. Also, I can’t imagine any valuable testing that doesn’t rely on the tester’s skills, knowledge and experience so it seems to me that all testing would fall under this “experience-based testing” banner.

The first myth Rex discussed was the “origin myth”, that ET was invented in the 1990s in Silicon Valley, or at least that was when a “name got hung on it” (e.g. by Cem Kaner). He argued instead that it was invented by whoever wrote the first program, that IBM were doing it in the 1960s, that the independent test teams in Fred Brooks’s 1975 book The Mythical Man-Month were using ET, and that “error guessing” as introduced by Glenford Myers in the classic book The Art of Software Testing sounds “a whole lot like a form of ET”. The History of Definitions of ET on James Bach’s blog is a good reference in this regard, in my opinion. While I agree that programmers have been performing some kind of investigatory or unscripted testing in their development and debugging activities for as long as programming has been a thing, it’s important that we define our testing activities in a way that makes how we talk about our work both accurate and credible. I see the argument for suggesting that error guessing is a form of ET, but it’s just one tactic that might be employed by a tester skilled in the much broader approach that is ET.

The next myth Rex discussed was the “completeness myth”, that “playing around” with the software is sufficient to test it. He mentioned that there is little education around testing in Software Engineering degrees, so people don’t understand what testing can and cannot do, which leads to myths like this. I agree that there is a general lack of understanding in our industry of how important structured ET is as part of a testing strategy, though I haven’t personally heard this myth espoused anywhere recently.

Next up was the “sufficiency myth”, that some teams bring in a “mighty Jedi warrior of ET & this person has helped [them] to find every bug that can matter”. He mentioned a study from Microsoft where they split their testing groups for the same application, with one using ET (and other reactive strategies) only, while the other used pre-designed tests (including automated tests) only. The sets of bugs found by these two teams were partially but not fully overlapping, hence demonstrating that ET alone is not sufficient. I’m confident that even if the groups had been divided up and did the same kind of testing (be it ET or pre-designed), the sets of bugs from the two teams would also have been partially but not fully overlapping (there is some evidence to support this, albeit from a one-off small case study, from Aaron Hodder & James Bach in their article Test Cases Are Not Testing)! I’m not sure where this myth comes from; I’ve not heard it from anyone in the testing industry and haven’t seen a testing strategy that relies solely on ET. I do find that using ET as an approach can really help in focusing on finding bugs that matter, though, and that seems like a good thing to me.

Rex continued with the “irrelevance myth”, that we don’t have to worry about ET (or, indeed, any validation testing at all) because of the use of ATDD, BDD, or TDD. He argued that all of these approaches are verification rather than validation, so some validation is still relevant (and necessary). I’ve seen this particular myth and, if anything, it seems to be becoming more prevalent over time, especially in the CI/CD/DevOps world where automated checks (of various kinds) are viewed as sufficient gates to production deployment. Again, I see this as a lack of understanding of what value ET can add, and it’s on us as a testing community to help people understand that value (and explain where ET fits into these newer, faster deployment approaches).

The final myth that Rex brought up was the “ET is not manageable myth”. In dispelling this myth, he mentioned the Rapid Reporter tool, timeboxed sessions, and scoping using charters (where a “charter is a set of one or more test conditions”). This was all quite reasonable, essentially describing session-based test management (SBTM) without using that term. One of his recommendations seemed odd, though: “record planned session time versus actual [session] time” – sessions are strictly timeboxed in an SBTM situation, so planned and actual time are always the same. While sticking to the timebox seems to be one of the more difficult aspects of SBTM for testers, at least initially in my experience, it is critical if ET is to be truly manageable.
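To make the SBTM idea concrete, here is a minimal sketch of what a session record might look like: the charter scopes the exploration, the timebox is fixed up front (which is why there is no “planned vs actual” time to track), and the notes, bugs and issues feed the debrief. This is purely illustrative – the class and field names are my own assumptions, not part of Rex’s webinar or any official SBTM tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """A hypothetical SBTM session record (illustrative only)."""
    charter: str                  # the mission scoping this exploration
    timebox_minutes: int = 90     # fixed up front; 60/90/120 are common choices
    notes: list = field(default_factory=list)   # observations and test ideas
    bugs: list = field(default_factory=list)    # bug summaries raised in-session
    issues: list = field(default_factory=list)  # questions/obstacles for debrief

    def debrief_summary(self) -> str:
        # A terse summary a test manager might review at the debrief.
        return (f"Charter: {self.charter}\n"
                f"Timebox: {self.timebox_minutes} min\n"
                f"Bugs: {len(self.bugs)}, Issues: {len(self.issues)}")

session = Session(charter="Explore CSV import with malformed input")
session.bugs.append("Import silently drops rows with embedded quotes")
session.issues.append("No sample files available for multi-byte encodings")
print(session.debrief_summary())
```

The point of the structure is accountability: each session produces a reviewable artifact, which is what makes ET manageable for stakeholders.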

Moving on from the myths, Rex talked about “reactive strategies” in general, suggesting they were suitable in agile environments but that we also need risk-based strategies and automation in addition to ET. He said that the reliance on skills and experience when using ET (in terms of the test basis and test oracle) means that heuristics are a good way of triggering test ideas, and he made the excellent point that all of our “traditional” test techniques still apply when using ET.

Rex’s conclusion was also sound, “I consider (the best practice of) ET to be essential but not sufficient by itself” and I have no issue with that (well, apart from his use of the term “best practice”) – and again don’t see any credible voices in the testing community arguing otherwise.

The last twenty minutes of the webinar was devoted to Q&A from both the online and live audience (the webinar was delivered in person at the STPCon conference). An interesting question from the live audience was “Has ET finally become embedded in the software testing lifecycle?” Rex responded that the “religious warfare… in the late 2000s/early 2010s has abated, some of the more obstreperous voices of that era have kinda taken their show off the road for various reasons and aren’t off stirring the pot as much”. This was presumably in reference to the somewhat heated debate going on in the context-driven testing community in that timeframe, some of which was unhelpful but much of which helped to shape much clearer thinking around ET, SBTM and CDT in general in my opinion. I wouldn’t describe it as “religious warfare”, though.

Rex also mentioned in response to this question that he actually now sees the opposite problem in the DevOps world, with “people running around saying automate everything” and the belief that automated tests by themselves are sufficient to decide when software is worthy of deployment to production. In another reference to Bolton/Bach, he argued that the “checking” and “testing” distinction was counterproductive in pointing out the fallacy of “automate everything”. I found this a little ironic since Rex constantly seeks to make the distinction between validation and verification, which is very close to the distinction that “testing” and “checking” seek to draw (albeit in much more lay terms as far as I’m concerned). I’ve actually found the “checking” and “testing” terminology extremely helpful in making exactly the point that there is “testing” (as commonly understood by those outside of our profession) that cannot be automated; it’s a great conversation starter in this area for me.

One of Rex’s closing comments was again directed to the “schism” of the past with the CDT community, “I’m relieved that we aren’t still stuck in these incredibly tedious religious wars we had for that ten year period of time”.

There was a lot of good content in Rex’s webinar and nothing too controversial. His way of talking about ET (even the definition he chooses to use) is different to what I’m more familiar with from the CDT community, but it’s good to hear him referring to ET as an essential part of a testing strategy. I’ve certainly seen an increased willingness to use ET as the mainstay of so-called “manual” testing efforts, and putting structure around it using SBTM adds a lot of credibility. For the most part in my teams across Quest, we now treat test efforts as ET only if they are performed within the framework of SBTM, so that we have the accountability and structure in place for the various stakeholders to treat this approach as credible and worthy of their investment.

So, finally getting to the reason for the title of this post: whether by Rex’s (I would argue unusual) definition, by the ISTQB’s definition, or by what I would argue is the more widely accepted definition (the Bach/Bolton one above), it seems to me that all testing is exploratory. I’m open to your arguments to change my mind!

(For reference, Rex publishes all his webinars on the RBCS website at The one I refer to in this blog post has not appeared there as yet, but the audio is available via