Category Archives: Conferences

Attending the “Testing Talks 2022 – The Reunion” conference (Melbourne, 20th October)

It was a long time coming (thanks to COVID and the harsh restrictions imposed on Melbourne especially during 2020 and 2021) but, after three years, the Testing Talks conference finally took place at the Melbourne Convention and Exhibition Centre on Thursday 20th October.

Appropriately billed as “The Reunion”, the conference saw over 400 keen testers assembling for the single-track event, one of the largest tech conferences in Australia post-COVID, so hats off to Cameron Bradley and his team for making the event happen!

Delegates arriving and registering at Testing Talks 2022 conference

I arrived fairly early and there were already plenty of people checked in and enjoying catching up. I bumped into several familiar faces almost immediately (g’day Paul, Pete and Rob!) and it was lovely to meet up in person again after a long time between conference drinks.

Cameron Bradley opening the Testing Talks 2022 conference

The conference kicked off in the massive Clarendon room at about 9am with a brief introduction by Cameron Bradley, who showed his great passion for testing and community while displaying genuine humility and appreciation for others. An excellent start to the day’s proceedings.

The opening talk came from David Colwell (VP of AI & ML, Tricentis) with “How to test a learning system”. He defined a learning system as any system that improves with respect to a task given more exposure to the task. Such systems are more than just rules, with artificial neural networks being an early example. David noted that many modern learning systems are good at getting the right answers after learning, but it’s often difficult to know why. When testing a learning system, looking at its accuracy alone is not enough and we need to look at where it’s inaccurate to see if small degrees of inaccuracy are actually indicative of big problems. He gave the example of a system which had been trained on data with a small population of Indigenous people that led to significant issues with its outputs for Indigenous people while appearing to be high-90% accurate overall.

Inevitably, David used Tricentis’s own product, Vision AI, as a case study, but only very briefly and he mentioned that good old combinatorial testing focusing on the intersections that matter was key in testing this system. His key message was that the same testing techniques (e.g. combinatorial, automation, exploratory testing) and same testing skills are still relevant for these types of learning systems, it’s just a different application of those techniques and skills. David is an excellent presenter and he pitched this talk at a level suitable for a general audience (without turning it into a vendor pitch). I was pleased to see a focus on understanding why such systems give the results they do rather than just celebrating their “accuracy”. An interesting and well-presented opening session, sadly missing an opportunity for Q&A.
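
David’s point about overall accuracy masking subgroup problems is easy to demonstrate. Here is a minimal sketch (the data, slice names and function are invented for illustration, not taken from the talk) that reports accuracy per population slice rather than a single aggregate figure:

```python
from collections import defaultdict

def accuracy_by_slice(records):
    """Report prediction accuracy per demographic slice.

    records: iterable of (slice_name, predicted, actual) tuples.
    """
    totals = defaultdict(lambda: [0, 0])  # slice -> [correct, total]
    for slice_name, predicted, actual in records:
        totals[slice_name][0] += int(predicted == actual)
        totals[slice_name][1] += 1
    return {s: correct / total for s, (correct, total) in totals.items()}

# Hypothetical results: a high overall score hides a failing minority slice.
records = (
    [("majority", 1, 1)] * 95 + [("majority", 0, 1)] * 5    # 95% accurate
    + [("minority", 0, 1)] * 4 + [("minority", 1, 1)]       # 20% accurate
)
per_slice = accuracy_by_slice(records)
overall = sum(p == a for _, p, a in records) / len(records)
# overall is above 91%, yet the minority slice is only 20% accurate
```

The aggregate looks healthy while one slice is badly broken, which is exactly the kind of problem that per-slice (and combinatorial) analysis surfaces.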

Next up on the big stage was Andrew Whitehouse (Wipro) with “Design-driven concepts for automated testing”. He used the analogy of a refrigerator and the mechanism it uses to stop itself from freezing inside. His message was to focus on testing system behaviours and look at interactions to drive design decisions. Andrew suggested the use of contract tests to check that the structure of interactions stays OK and collaboration tests to check that the behaviour of interactions stays OK. The key is to use both of these approaches at scale, under load and over time to reveal different types of issues. He really laboured the fridge analogy and (pun intended) it left me cold. The key message made sense but the construction of the argument around the fridge didn’t work too well and the slides didn’t help either (too many words and poor colour choices leading to contrast & readability issues). There was again no Q&A following this talk.

St Ali coffee at the barista coffee cart at Testing Talks 2022 conference

Morning tea (or coffee in my case, thanks to the excellent St Ali-fuelled barista coffee cart!) was a good opportunity for a stretch and it was nice to bump into more old friends (g’day Erik!). The catering from MCEC was vegan-friendly with clear labelling and lots of yummy choices, much appreciated.

Heading back into the conference room, the next session was “Automate any app with Appium” by Rohan Singh (Headspin). He gave a brief introduction to Appium (which uses the Selenium WebDriver protocol) and then went straight into a demo, in which he installed the Appium Python client, connected to his real Android device and then created a simple automated check against the eBay app. Rohan’s demo was well prepared and went well – perhaps too well as his 45-minute session was all over in about 15 minutes (even including a short spiel about his employer, Headspin)! The very rapid execution left a hole in the schedule so we all headed back out into the open space until the next session.

The only lightning talk of the day wrapped up the morning session, in the form of Matt Fellows (SmartBear) giving an “Introduction to Contract Testing”. It was great to see Matt on stage and I’ve personally appreciated his help around contract testing in the past. He continues to be a strong advocate for this approach, as co-founder of PACTFlow and now through the SmartBear offerings. He kicked off by noting some of the problems with traditional integration testing which – while it exercises the whole stack – is often slow, fragile, hard to debug, difficult to manage in terms of environments, has questionable coverage and can result in build queues. Matt outlined the basics of contract testing as an API integration testing technique that is simpler, requires no test environment, runs fast, scales linearly and can be deployed independently. This was a perfect lightning talk, executed in bang on 10 minutes and providing a worthy introduction to this important topic.
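
The core idea Matt outlined can be sketched without any particular tool (this is an illustration of the concept only, not the Pact API): the consumer records the response shape it depends on, and the provider’s code is verified against that expectation with no shared test environment:

```python
# The consumer states the request it makes and the response shape it relies on.
consumer_contract = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response_shape": {"id": int, "status": str},
}

def provider_handler(path):
    """Stand-in for the real provider endpoint (hypothetical)."""
    order_id = int(path.rsplit("/", 1)[1])
    return {"id": order_id, "status": "shipped"}

def verify_contract(contract, handler):
    """Replay the consumer's expectation directly against provider code."""
    response = handler(contract["request"]["path"])
    shape = contract["response_shape"]
    missing = [key for key in shape if key not in response]
    wrong_type = [key for key in shape
                  if key in response and not isinstance(response[key], shape[key])]
    return not missing and not wrong_type

ok = verify_contract(consumer_contract, provider_handler)  # True
```

Because the provider is exercised in-process against a recorded expectation, the check is fast, deterministic and needs no deployed environment: the properties Matt highlighted.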

Matt Fellows talking contract testing at Testing Talks 2022 conference

Lunch again saw MCEC stepping up to the plate in terms of vegan options and the leisurely schedule left enough time to enjoy some pleasant sunshine along the Yarra out the front of the exhibition centre before heading back for the long afternoon session.

Laszlo Simity (SauceLabs) had the job of engaging with the post-lunch audience with his talk, “Elevate your testing”, and he was more than up to the task. He began with some history lessons on the development of IT systems and outlined the current pain point: an exponential growth in testing requirements at the same time as an exponential decay in testing timeframes. He said more tests + more people + more tools = brute force, but there is an alternative to this brute force approach, viz. what he called “signal driven quality”:

Signal-driven quality slide from Laszlo Simity's Testing Talks 2022 conference talk

Laszlo’s idea was to connect information from all of our different types of testing into one place, with the aim of making smarter testing decisions. He outlined a few signals to illustrate the approach:

  1. Failure analysis – a small number of failures generally cause most of the test quality issues
  2. API testing – validate the business layer with API tests and reduce tests through the UI
  3. UI performance testing – to provide early awareness of performance degradation, e.g. using Google Lighthouse
  4. Accessibility testing – applying WCAG and using tools such as Axe (axe-core)

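To illustrate how such signals might feed a single decision, here’s a minimal sketch (the signal names, values and thresholds are my own invention, not from Laszlo’s talk):

```python
def quality_gate(signals, thresholds):
    """Combine independent quality signals into one release decision,
    returning the signals that breached their threshold."""
    return [name for name, value in signals.items()
            if value < thresholds.get(name, 0.0)]

# Hypothetical signal values, normalised to 0..1 (higher is better).
signals = {
    "failure_analysis": 0.90,  # share of failures triaged to a root cause
    "api_coverage": 0.80,      # business-layer API checks passing
    "ui_performance": 0.60,    # e.g. a Lighthouse performance score
    "accessibility": 0.95,     # e.g. share of pages with no axe violations
}
thresholds = {"failure_analysis": 0.85, "api_coverage": 0.75,
              "ui_performance": 0.70, "accessibility": 0.90}

breaches = quality_gate(signals, thresholds)  # ["ui_performance"]
```
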
Only in his last slide did Laszlo refer to his employer, SauceLabs, noting that their solution combines all of the above signals in one platform. This was a nicely-crafted talk, taking us on a journey from history into pain points and through to potential solutions. It was an object lesson in how to give a talk as a vendor & sponsor and there was also a good Q&A at the end of his session.

A big name in the Selenium community was up next, with Manoj Kumar (LambdaTest) talking about “What’s new in Selenium 4?”. Manoj mentioned that relative locators, Selenium Grid observability and a DevTools protocol (e.g. to mock geolocation) are all new in version 4 and that WebDriver BiDi (bi-directional) is now available for cross-browser automation. He provided some short demos of the new features in this session, which was (unsurprisingly) very focused on (and pro) Selenium. While this content was probably interesting for the toolsmiths in the audience, it didn’t feel like a talk of general relevance to me.

A short break for afternoon tea (and more delicious vegan treats) was welcome before heading into the home stretch.

The next session, “Interactive Exploratory Testing” by Sarmila Padmanabhan & Cameron Bradley (both of Bunnings), was the one that stood out to me from the programme given my particular interest in all things Exploratory Testing. Sarmila gave the familiar definition of exploratory testing from Cem Kaner and also mentioned context-driven testing! The session then moved on to explaining three “exploratory testing techniques” in the shape of mob testing, bug bashes and crowd testing.

In mob testing, a group from different disciplines tests the system via one driver working on a single device. The delegates were split up into groups (one per row in the room) to test a deliberately buggy web app using a mob approach, but the groups were far too large and undisciplined to make this work well. Reconvening, the next topic was bug bashes, defined as short bursts of intense usage by a group of people from different disciplines testing on multiple devices/browsers. Sarmila suggested this was a useful approach for production-ready features. The planned bug bash exercise was abandoned since the previous exercise had basically degenerated into a bug bash. The final topic was crowd testing, where real people in real-world conditions test the same app, as a complement to other types of testing. It has the benefit of a diversity of people and environments (e.g. devices). The exercise for this was to test the site, but it unfortunately crashed under the large load soon after the exercise started.

I didn’t feel that this session was really about exploratory testing as I understand and practice it. The large audience made it too hard to run meaningful exercises, with the group becoming somewhat out of control at times. I’d love to see a session specifically on exploratory testing at the next conference, highlighting what a credible, structured and valuable approach it can be when done well.

Another big name in the automation space came next, Anand Bagmar (Software Quality Evangelist, Applitools) talking about “Techniques to Eradicate Flaky Tests”. Anand backpedalled from the talk’s claim right from the off, noting that eradication is likely impossible. He mentioned some common challenges in UI-level automation, such as long-running tests with slow feedback, limitations on automatable scenarios at this level, the pain of cross-browser/cross-device execution and, of course, “flaky” tests. Anand outlined three ways to reduce flakiness:

  1. Reduce the number of tests – less testing at the UI level, with more at lower levels (and, yes, he mentioned the test pyramid as a model!)
  2. Remove external dependencies via Intelligent Virtualization – he recommended SpecMatic as a stub server, describing it as “PACT on steroids”!
  3. Use visual assertions – Anand argued that the current approach to testing is incorrect: it’s mundane, tedious and error-prone. Testing is about much more than “spot the difference” and we need to “leverage AI” and “replicate human eyes and brain”. He then pitched the VisualAI tool from his employer (and sponsor) Applitools as a way of achieving “perfection across all screens and browsers”. UX using VisualAI then became part of his updated test pyramid.
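
Point 2 can be illustrated generically (this sketch uses Python’s standard-library mocking rather than a virtualization product such as SpecMatic, and the service URL is invented): the external HTTP dependency, a classic source of flakiness, is replaced with a deterministic stub:

```python
import io
import json
from unittest import mock
from urllib import request

def fetch_order_status(order_id):
    """Production code with an external HTTP dependency."""
    with request.urlopen(f"https://orders.example.com/{order_id}") as resp:
        return json.load(resp)["status"]

# Stub the network layer so the check is fast and deterministic.
fake_body = io.BytesIO(json.dumps({"status": "shipped"}).encode())
fake_resp = mock.MagicMock()
fake_resp.__enter__.return_value = fake_body

with mock.patch.object(request, "urlopen", return_value=fake_resp):
    status = fetch_order_status(42)  # no network call is made
```

The test no longer depends on the real service being up, fast or returning stable data, which removes one whole class of intermittent failures.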

I liked his closing message to “make automation intelligent but not magic” and he was a good presenter with great audience interaction, but the talk became too much of a pitch for VisualAI towards the end unfortunately.

It was left to Basarat Syed (Pepperstore) to close out the presentations for the day, with “Automating testing web apps with Cypress”. His session consisted almost entirely of demos, in which he built end-to-end tests of a simple MVC “to do” application. His naturally laid back style made for an entertaining session, even if he perhaps chose to cover too many very similar examples in his demo. His takeaway message was to test behaviours and not implementation – and that Cypress is awesome! A short Q&A wrapped things up.

It was then time for Cameron Bradley to return to the lectern to close out the conference with the usual thank-yous and formalities. A large number of prize draws then followed out in the open space from the many sponsor competitions held during the day.

For those interested in continuing the festivities, the conference moved on to the nearby Munich Brauhaus on South Wharf for drinks and nibbles. It was good to see so many people turning up to socialize, even if the ability to communicate with each other was compromised by the very noisy pub and its Band Karaoke (which enticed a number of Testing Talks speakers to take the mic!). I enjoyed chatting with friends old and new for a couple of hours over a few ciders, a nice way to end a big day with the testing community.

Apart from the talks themselves, I made a few other observations during the day.

Venue – The venue was excellent, with a good comfortable room, top notch audio/visuals and thoughtful vegan catering. The coffee cart with St Ali coffee was very welcome too (even though it didn’t offer oat milk!).

Audience – As an, erm, more senior member of the Melbourne testing community, it was interesting to see the audience here. While I was in the company of a few friends of similar vintages, the majority of the crowd were young and obviously keen to engage with the sponsors. I was a little disappointed that parts of the audience weren’t as respectful as they might have been, with talking during presentations being common no matter where I sat in the auditorium.

Programme – I generally avoid talks by sponsors at conferences but that was impossible to do here as most of the presenters were from one of the event’s sponsors. For the most part, they didn’t indulge in product pitches during their talks, though, which was good to see. I would have liked to see more Q&A after each talk – there was generally no time for Q&A and, when there was some Q&A, no audience mics were used and the presenters didn’t repeat the question for the broader audience to know what question they were answering.

The programme was very focused on automation/tooling and I would have liked to see more talks about human testing: the challenges, interesting new approaches and first-person experience reports. Given the younger audience at this conference and the prevalence of tooling vendors as sponsors, it concerns me that it would be too easy for them to think this is what testing is all about and then miss out on learning the fundamentals of our craft.

Kudos to Cameron and the Testing Talks team for making this event finally happen. I know from personal experience of organizing a number of testing events in Melbourne how much work is involved and how hard it can be to get a crowd, even in more “normal” times! Cam’s authenticity and desire for community building shone through from the opening remarks to his easy-going conversations with delegates at the pub, my congratulations to all involved in bringing so many of us together for a great day.

Speaking at the Testing Talks 2021 (The Reunion) conference (28 October, Melbourne)

After almost two decades of very regularly attending testing conferences, the combined impacts of COVID-19 and finishing up my career at Quest have curtailed these experiences in more recent times. I’ve missed the in-person interaction with the testing community facilitated by such events, as I know many others have also.

The latter stages of 2020 saw me give three talks; firstly for the DDD Melbourne By Night meetup, then a two-minute talk for the “Community Strikes The Soapbox” part of EuroSTAR 2020 Online, and finally a contribution to the inaugural TestFlix conference. All of these were virtual events and at least gave me some presentation practice.

The opportunity to be part of an in-person conference in Melbourne was very appealing and, after chatting with Cameron Bradley, I committed to building a new talk in readiness for his Testing Talks 2021 Conference.

With the chance to develop a completely new talk, I riffed on a few ideas before settling on what seemed like a timely story for me to tell, namely what I’ve learned from twenty-odd years in the testing industry. I’ve titled the talk “Lessons Learned in Software Testing”, in a deliberate nod to the awesome book of the same name.

I’ve stuck with my usual routine in putting this new talk together, using a mindmap to help me come up with the structure and key messages before starting to cut a slide deck. It remains a challenge for me to focus more on the talk content than refining the slides at this stage, but I’m making a conscious effort to get the messaging down on rough slides before putting finishing touches to them later on.

It’s been interesting to look back over such a long career in the one industry, thinking about the trends that have come and gone, and realizing how much remains the same in terms of being a good tester adding value to projects. I’m looking forward to sharing some of the lessons I’ve learned along the way – some specifically around testing and some more general – in this new talk later in the year.

Fingers crossed (and COVID-permitting!), I’ll be taking the stage at the Melbourne Convention & Exhibition Centre on 28th October to deliver my talk to what I hope will be a packed house. Maybe you can join me? More details and tickets are available from the Testing Talks 2021 Conference website.

ER of attending my first virtual testing conference, Tribal Qonf (27 & 28 June 2020)

I spotted some promotion on Twitter for a new testing conference in India, Tribal Qonf, and the virtual nature of it (thanks to COVID-19) plus the impressive speaker line-up (including James Bach and Michael Bolton) spurred my interest. Looking into it further, the pricing was incredibly low so I decided to register for it (for around AU$30 at the time I registered).

Although the weekend scheduling of the conference and the Indian timezone weren’t ideal, the conference promised to provide all content via recordings so I didn’t tune into any of the presentations “live”, waiting instead the ten days or so for recordings to be made available. I then watched most of the presentations from the two-day event over a period of a few days.

The first presentation I watched was the opening talk from day 1 by James Bach, titled “Weaving Testing: Thread by Thread”. This was a fascinating talk and it was great to see such a detailed analysis of what actually happens during good testing by skilled practitioners, especially compared to the mythology we’ve generally been conditioned with about what makes for ‘proper’ testing.

Next up, I opted for Pradeep Soundararajan’s talk on “The Business Value of Testing”. I’ve unfortunately never managed to catch Pradeep presenting in person, but this virtual presentation displayed the passion I expected from him. It was also engaging and refreshingly honest about the challenges we face in terms of recognizing how different stakeholders view the “value” of what we provide as testers.

My next choice was “Adopting a simplified Risk-Based Testing Approach” by Nishi Grover Garg, in which she outlined the basics of the approach, very much in the style of practitioners like Rex Black. The approach was presented very clearly here and I liked the way Nishi contextualized the risk-based testing approach to her startup environment.
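
The mechanics of a simple risk-based approach of this style boil down to scoring each area for likelihood and impact and letting the product of the two drive test priority. A small sketch (the areas and scores are invented for illustration):

```python
def prioritise(risk_items):
    """Order test areas by risk score (likelihood x impact, each 1-5),
    highest first, so that test effort follows risk."""
    return sorted(risk_items,
                  key=lambda item: item["likelihood"] * item["impact"],
                  reverse=True)

items = [
    {"area": "checkout payment", "likelihood": 4, "impact": 5},       # 20
    {"area": "profile avatar upload", "likelihood": 3, "impact": 1},  # 3
    {"area": "search results", "likelihood": 5, "impact": 3},         # 15
]
ranked = prioritise(items)
# checkout payment first, search results second, avatar upload last
```
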

A nicely-crafted story came next thanks to Ajay Balamurugadas and his talk “Lessons from 14 Years of Software Testing Career”. He detailed his learnings from each of his testing jobs and offered practical suggestions for areas to focus on at different levels of experience in testing. This presentation reminded me very much of my “A Day In The Life Of A Test Architect” talk which I gave at STARWest in 2016 and again at CAST in 2017.

Rounding out the talks for day 1, I somewhat hesitantly tuned into the ‘expert panel’ on “Testing after 2020”. I’ve become a little jaded about panel sessions but I really enjoyed this one featuring Aprajita Mathur, Ashok Thiruvengadam, Rahul Verma and Pradeep Soundararajan. The panelists’ responses to the various questions were refreshingly down to earth and practical. I was particularly pleased to see the considered, reasonable and sensible discussions around AI/ML in testing, providing welcome relief from the usual Kool Aid drinkers around these topics in the industry at the moment. A shout out to Lalit Bhamare too for his skillful moderation of this panel session which was a significant factor in its success for me.

I kicked off my “day 2” viewing with the first talk from that day, viz. Ashok Thiruvengadam with “Be in a Flow. Test Brilliantly”. This was something a little different in terms of topic for a testing conference (which is always good to see), focusing on introducing the idea of “flow”. I was reminded of the importance of uninterrupted sessions when performing exploratory testing while listening to this talk.

Next, I opted for Mike Talks with “The Hard Lessons Learned in Test Automation”, in which he shared some interesting stories of lessons learned resulting from his chats with testers over coffee in his home city of Wellington (New Zealand). It was unsurprising to me that his chats resulted in a few very common themes, all of which were familiar territory from my various conversations about automation with testers from all over the world over the last twenty-odd years. It seems we have a long way to go in terms of learning these hard lessons, despite them being covered ad nauseam in blogs, articles, books and conference talks.

My next choice was “A Quick Recipe for Test Strategy” from Brijesh Deb and I immediately liked his take on the topic. He defined a test strategy simply as a “set of ideas that guide test design” and made it clear that we shouldn’t conflate this with a hefty “one size fits all” document of some sort. I also liked his focus on driving test strategy by asking questions, with not just a shout out to James Bach’s Heuristic Test Strategy Model but also an example of using it in practice.

The penultimate talk I watched was “Who Are Your Stakeholders?” with Anna Royzman. We often hear the term “stakeholders” used in testing (and software development more generally) but rarely do we seem to agree on what this term means in the context of our projects. Anna gave a good introduction on how to identify different types of stakeholders and what kinds of information these different stakeholders might be looking for.

I concluded my binge watching of the conference talks with the closing session from the event, in the shape of a “Fireside Chat with Michael Bolton” with questions coming from Ajay Balamurugadas. I loved Michael’s answer to Ajay’s question “what has changed in testing from 1994 to 2020?”: “not enough!” This was a fun fifty-minute session and a perfect way to wrap up the conference.

Obviously, “attending” a virtual conference is a completely different experience to an in-person event. I chose not to watch all of the presentation recordings but did watch most of them and the quality was high. I didn’t watch the recordings back-to-back either, rather spreading out my viewing across a few days alongside my usual work commitments. I also didn’t contribute to the conference’s Slack channels as the event had been over for two weeks or so by the time I got to the recordings.

I personally missed the in-person aspects that make traditional conferences so valuable, but it might not be the case that we have to choose one over the other as we move forward. I wonder if we’re entering a new era for conferences, driven by changes forced upon us by COVID-19. There are enormous accessibility benefits of the virtual model, thanks to lower pricing and the removal of the need to travel and spend time away from home & family. Such virtual events also open up opportunities for new voices who might be unable or unwilling to travel to a “normal” event, or are too uncomfortable to address a physical audience.

The selection of topics on offer during this event was good and the talks were of a high standard. It appeared to be well organized too, so thanks to Lalit and the Test Tribe crew for putting on a worthwhile testing event during these difficult times! I enjoyed the experience of this virtual conference and I am now considering attending other virtual testing conferences through 2020 before – maybe! – more normal service resumes in 2021…

A very different conference experience

My Twitter feed has been busy in recent weeks with testing conference season in full swing.

First on my radar after some time away in Europe on holidays was TestBash Australia, followed soon afterwards by their New Zealand and San Francisco incarnations. Next up was the German version of the massive Agile Testing Days and another mega-conference in the shape of European stalwart EuroSTAR is in progress as I write.

It’s one of the joys of social media that we can share in the goings on of these conferences even if we can’t attend in person. The only testing conference I’ve attended in 2019 has been TiCCA19 in Melbourne (an event I co-organized with Paul Seaman and the Association for Software Testing) but I hope to get to an event or two in 2020.

I did attend a very different kind of conference at the Melbourne Town Hall in October, though, in the shape of the full weekend Animal Activists Forum. There was a great range of talks across several tracks on both days and I saw inspiring presentations from passionate activists. Organizations like Voiceless, Animals Australia, Aussie Farms, The Vegan Society, and the Animal Justice Party – as well as many individuals – are doing so much good work for this movement.

There were some marked differences between this conference and the testing/IT conferences I generally attend. Firstly, the cost for the two full days of this event (including refreshments but not lunches) was just AU$80 (early bird), representing remarkable value given the location and range of great talks on offer.

Another obvious difference was the prevalence of female speakers on the programme, probably due to the fact that the vegan community is believed to be around 70-80% female. It was good to see more passion and positivity emanating from the stage too, all the more remarkable when considering the atrocities and realities of the animal exploitation industries that many of us are regularly exposed to within this movement.

The focus of most of the talks I attended was on actionable content, things we could do to help advance the movement. While there was some discussion of theory, history and philosophy, it was for the most part discussed with a view to providing ideas for what we can do now to advance animal rights. Many IT conference talks would do well to similarly focus on actionable takeaways.

While there were many differences compared to tech conferences, there was also evidence of common themes. One of the areas of commonality was how difficult it is to persuade people to change, even in the face of facts and evidence in support of the positive impacts of the change, such as going vegan (with the focus being squarely on going vegan for the animals in this audience, while also considering the environmental and health benefits). It was good to hear the different ideas and approaches from different speakers and activist groups. We need many different styles of advocacy when it comes to context-driven testing too – different people are going to be reached in different ways (it’s almost as though context matters!).

It’s interesting to me how easy it sometimes seems to be to change people’s minds or opinions, though. An example I’ve seen unfolding is the introduction of dairy products into China. I’ve been working with testing teams there for seven years and, for the first few years, I rarely saw or heard any mention of dairy products. This situation has changed very rapidly, thanks to massive marketing efforts by the dairy industry (most notably – and sadly – from Australian and New Zealand dairy companies). Even though almost all Chinese people are lactose intolerant and have little idea about how to use products like dairy milk and cheese, the consumption of these products has become very mainstream. From infant formula (a very lucrative business) to milk on supermarket shelves (with some very familiar Australian brands on show) to Starbucks, the dairy offerings are now ubiquitous.

The fact that these products are normalized in the West enables an easier sell to the Chinese and their marketing has been heavily contextualized, for example, some of the advertising claims that drinking cow’s milk will help children grow taller. These nutritional falsehoods have worked in the West and are now working in China. The dairy mythology has been successfully sold to this enormous market and the unbelievable levels of cruelty that will result from this, as well as the inevitable negative human health implications, are tragic. Such large industries, of course, have dollars on their side to mount huge marketing campaigns and are driven by profit ahead of the welfare of animals or the health of their consumers. But maybe there are lessons to be learned from their approaches to messaging that can be beneficial in selling good approaches to testing (without the blatant untruths, of course)?

(By the way, does anyone reading this post know if the ISTQB is having a marketing push in China right now? A couple of my colleagues there have talked to me about ISTQB certification just in the last week, while no-one has mentioned it before in the seven years I’ve been working with testers in China…)

If you found this post interesting, I humbly recommend that you also read this one, What becoming vegan taught me about software testing.

Testing in Context Conference Australia 2019

The third annual conference of the Association for Software Testing (AST) outside of North America took place in Melbourne in the shape of Testing in Context Conference Australia 2019 (TiCCA19) on February 28 & March 1. The conference was held at the Jasper Hotel near the Queen Victoria Market.

The event drew a crowd of about 50, mainly from Australia and New Zealand but also with a decent international contingent (including a representative of the AST and a couple of testers all the way from Indonesia!).

I co-organized the event with Paul Seaman and the AST allowed us great freedom in how we put the conference together. We decided on the theme first, From Little Things Big Things Grow, and had a great response to our call for papers, resulting in what we thought was an awesome programme.

The Twitter hashtag for the event was #ticca19 and this was fairly active across the conference.

The event consisted of a first day of workshops followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical AST/peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

My key takeaways from the event:
  • Testing is not dead, despite what you might hear on social media or from some automation tooling vendors. There is a vibrant community of skilled human testers who display immense value in their organizations. My hope is that these people will promote their skills more broadly and advocate for human involvement in producing great software.
  • Ben Simo’s keynote highlighted just how normalized bad software has become; we really can do better as a software industry, and testers have a key role to play.
  • While “automation” is still a hot topic, I got a sense of a move back towards valuing the role of humans in producing quality software. This might not be too surprising given the event was a context-driven testing conference, but it’s still worth noting.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue). There was evidence of genuine conferring happening all over the place, exactly what we aimed for!
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for continuing to back our event in Melbourne.
  • I had a tiring but rewarding experience in co-organizing this event with Paul, the testing community in Melbourne is a great place to be!

Workshop day (Thursday 28th February)

We offered two full-day workshops to kick the event off, with “Applied Exploratory Testing” presented by Toby Thompson (from Software Education) and “Leveraging the Power of API Testing” presented by Scott Miles. Both workshops went well and it was pleasing to see them being well attended. Feedback on both workshops has been excellent so well done to Toby and Scott on their big efforts in putting the workshops together and delivering them so professionally.

Toby Thompson setting up his ET workshop
Scott Miles ready to start his API testing workshop

Pre-conference meetup (Thursday 28th February)

We decided to hold a free meetup on the evening before the main conference day to offer the broader Melbourne testing community the chance to meet some of the speakers as well as hear a great presentation and speaker panel session. Thanks to generous sponsorship, the meetup went really well, with a small but highly engaged audience – I’ve blogged about the meetup in detail separately.

Aaron Hodder addresses the meetup
Graeme, Aaron, Sam and Ben talking testing during the panel session

Conference day (Friday 1st March)

The conference was kicked off at 8.30am with some opening remarks from me including an acknowledgement of traditional owners and calling out two students who we sponsored to attend from the EPIC TestAbility Academy. Next up was Ilari Henrik Aegerter (board member of the AST) who briefly explained what the AST’s mission is and what services and benefits membership provides, followed by Richard Robinson outlining the way “open season” would be facilitated after each track talk.

I then introduced our opening keynote, Ben Simo with “Is There A Problem Here?”. Ben joined us all the way from Phoenix, Arizona, and this was his first time in Australia so we were delighted to have him “premiere” at our conference! His 45-minute keynote showed us many cases where he has experienced problems when using systems & software in the real world – from Australian road signs to his experience of booking his flights with Qantas, from hotel booking sites to roadtrip/mapping applications, and of course covering his well-publicized testing work from some years ago. He encouraged us to move away from “pass/fail” to asking “is there a problem here?” and, while not expecting perfection, to recognize that our systems and software can be better. A brief open season brought an excellent first session to a close.

Ben Simo during his keynote (photo from Lynne Cazaly)

After a short break, the conference split into two track sessions with delegates having the choice of “From Prototype to Product: Building a VR Testing Effort” with Nick Pass or “Tales of Fail – How I failed a Quality Coach role” with Samantha Connelly (who has blogged about her talk and also her TiCCA19 conference experience in general).

While Sam’s talk attracted the majority of the audience, I opted to spend an hour with Nick Pass as he gave an excellent experience report of his time over in the UK testing virtual reality headsets for DisplayLink. Nick was in a new country, working for a new company in a new domain and also working on a brand new product within that company. He outlined the many challenges including technical, physical (simulator sickness), processes (“sort of agile”) and personal (“I have no idea”). Due to the nature of the product, there were rapid functionality changes and lots of experimentation and prototyping. Nick said he viewed “QA” as “Question Asker” in this environment and he advocated a Quality Engineering approach focused on both product and process. Test design was emergent but, when they got their first customer (HTC), the move to productizing meant a tightening up of processes, more automated checks, stronger testing techniques and adoption of the LeSS framework. This was a good example of a well-crafted first-person experience report from Nick, with a simple but effective deck to guide the way. His 40-minute talk was followed by a full open season with a lot of questions both around the cool VR product and his role in building a test discipline for it.

Nick Pass talks VR

Morning tea was a welcome break and was well catered by the Jasper, before tracks resumed in the shape of “Test Reporting in the Hallway” with Morris Nye and “The Automation Gum Tree” with Michelle Macdonald.

I joined Michelle – a self-confessed “automation enthusiast” – as she described her approach to automation for the Pronto ERP product using the metaphor of the Aussie gum tree (which meant some stunning visuals in her slide deck). Firstly, she set the scene – she has built an automated testing framework using Selenium and Appium to deal with the 50,000 screens, 2000 data objects and 27 modules across Pronto’s system. She talked about their “Old Gum”, a Rational Robot system to test their Win32 application which then matured to use TestComplete. Her “new species” needed to cover both web and device UIs, preferably be based on open source technologies, make it easy for others to create scripts, and be well supported. Selenium IDE was the first step, and the resulting framework is seen as successful as it’s easy to install, everyone has access to use it, knowledge has been shared, and patience has paid off. The gum tree analogies came thick and fast as the talk progressed. She talked about Inhabitants, be they consumers, diggers or travellers, then the need to sometimes burn off (throw away and start again), using the shade (developers working in feature branches) and controlling the giants (it’s all too easy for automation to get too big and out of control). Michelle had a little too much content and her facilitator had to wrap her up 50 minutes into the session so that we had time for some questions during open season. There were some sound ideas in Michelle’s talk and she delivered it with passion, supported by the best-looking deck of the conference.

A sample of the beautiful slides in Michelle's talk

Lunch was a chance to relax over nice food and it was great to see people genuinely conferring over the content from the morning’s sessions. The hour passed quickly before delegates reconvened for another two track sessions.

First up for the afternoon was a choice between “Old Dog, New Tricks: How Traditional Testers Can Embrace Code” with Graeme Harvey and “The Uncertain Future of Non-Technical Testing” with Aaron Hodder.

I chose Aaron’s talk and he started off by challenging us on what “technical” meant (and, as a large group, we failed to reach a consensus) as well as what “testing” meant. He gave his idea of what “non-technical testing” means: manually writing test scripts in English for a person to execute, while “technical testing” means manually writing test scripts in Java for a machine to execute! He talked about the modern development environment and what he termed “inadvertent algorithmic cruelty”, supported by examples. He mentioned that he’s never seen a persona of someone in crisis or a troll when looking at user stories; we have a great focus on technical risks but much less so on human risks. There are embedded prejudices in much modern software and he recommended the book Weapons of Math Destruction by Cathy O’Neil. This was another excellent talk from Aaron, covering a little of the same ground as his meetup talk but also breaking new ground and providing us with much food for thought about the way we build and test our software for real humans in the real world. Open season was busy and fully exhausted the hour in Aaron’s company.

Adam Howard introduces Aaron Hodder for his track

Graeme Harvey ready to present

A very brief break gave time for delegates to make their next choice, “Exploratory Testing: LIVE!” with Adam Howard or “The Little Agile Testing Manifesto” with Samantha Laing. Having seen Adam’s session before (at TestBash Australia 2018), I decided to attend Samantha’s talk. She introduced the Agile Testing Manifesto that she put together with Karen Greaves, which highlights that testing is an activity rather than a phase, we should aim to prevent bugs rather than focusing on finding them, look at testing over checking, aim to help build the best system possible instead of trying to break it, and emphasizes the whole team responsibility for quality. She gave us three top tips to take away: 1) ask “how can we test that?”, 2) use a “show me” column on your agile board (instead of an “in test” column), and 3) do all the testing tasks first (before development ones). This was a useful talk for the majority of her audience who didn’t seem to be very familiar with this testing manifesto.

Sam Laing presenting her track session (photo from Lynne Cazaly)

With the track sessions done for the day, afternoon tea was another chance to network and confer before the conference came back together in the large Function Hall for the closing keynote. Paul did the honours in introducing the well-known Lynne Cazaly with “Try to See It My Way: Communication, Influence and Persuasion”.

She encouraged us to view people as part of the system and deliberately choose to “entertain” different ideas and information. In trying to understand differences, you will actually find similarities. Lynne pointed out that we over-simplify our view of others and this leads to a lack of empathy. She introduced the Karpman Drama Triangle and the Empowerment Dynamic (by David Emerald). Lynne claimed that “all we’re ever trying to do is feel better about ourselves” and, rather than blocking ideas, we should yield and adopt a “go with” style of facilitation.

Lynne was a great choice of closing keynote and we were honoured to have her agree to present at the conference. Her vast experience translated into an entertaining, engaging and valuable presentation. She spent the whole day with us and thoroughly enjoyed her interactions with the delegates at this her first dedicated testing conference.

Slides from Lynne Cazaly's keynote

Paul Seaman closed out the conference with some acknowledgements and closing remarks, before the crowd dispersed and it was pleasing to see so many people joining us for the post-conference cocktail reception, splendidly catered by the Jasper. The vibe was fantastic and it was nice for us as organizers to finally relax a little and enjoy chatting with delegates.


A conference doesn’t happen by accident, there’s a lot of work over many months for a whole bunch of people, so it’s time to acknowledge the various help we had along the way.

The conference has been actively supported by the Association for Software Testing and couldn’t happen without their backing so thanks to the AST and particularly Ilari who continues to be an enthusiastic promoter of the Australian conference via his presence on the AST board. Our wonderful event planner, Val Gryfakis, makes magic happen and saves the rest of us so much work in dealing with the venue and making sure everything runs to plan – we seriously couldn’t run the event without you, Val!

We had a big response to our call for proposals for TiCCA19, so thanks to everyone who took the time and effort to apply to provide content for the conference. Paul and I were assisted by Michele Playfair in selecting the programme and it was great to have Michele’s perspective as we narrowed down the field. We can only choose a very small subset for a one-day conference and we hope many of you will have another go when the next CFP comes around.

There is of course no conference without content so a huge thanks to our great presenters, be they delivering workshops, keynotes or track sessions. Thanks to those who bought tickets and supported the event as delegates, your engagement and positive feedback meant a lot to us as organizers.

Finally, my personal thanks go to my mate Paul for his help, encouragement, ideas and listening ear during the weeks and months leading up to the event, we make a great team and neither of us would do this gig with anyone else, cheers mate.


A year off giving conference presentations

Having just received a rejection from my only pending CFP submission, 2019 will likely be the first year since 2013 where I don’t give a conference presentation.

It’s always disappointing when the effort of crafting a talk in response to a CFP doesn’t result in the opportunity to give the talk, but my strike rate over the last few years has been pretty good and I’m grateful for the awesome opportunities I’ve been afforded by events in New Zealand, Sweden, Estonia, Vietnam, US and Australia.

As anyone who’s prepared and given a conference talk will know, there’s a lot of time and effort involved – from crafting a CFP submission, to refining the story, building a slide deck, performing some practice runs, travelling to the event (especially from somewhere as remote as Australia!), and actually delivering the talk. In the absence of this work, I’m looking forward to putting more effort into my community projects as well as kicking off a new testing-related personal project very soon.

In the short term, though, my focus is on the Testing in Context Conference Australia coming up in Melbourne at the end of February. It’s great to be working with Paul Seaman and the Association for Software Testing on this event and I’m really looking forward to putting on a great show, as well as meeting up with old friends from the testing community and hopefully making some new ones as we come together to learn, share and enjoy the company of great testers from around the world.

(There’s still plenty of time to register for the conference and the pre-conference workshops; all the details are on the conference website.)


The TiCCA19 conference programme is live!

After a successful 2018 event in the shape of CASTx18, the Association for Software Testing were keen to continue Australian conferences and so Paul Seaman and I took on the job of organizing the 2019 event on the AST’s behalf. We opted to rebrand the conference to avoid confusion with the AST’s well-known CAST event (held annually in North America) and so Testing in Context Conference Australia was born.

It’s been a busy few months in getting to the point where the full line-up for this conference is now live.

Paul and I decided to go for a theme with an Australian bent again, this time “From Little Things, Big Things Grow”. It’s always a milestone in planning a conference when it comes time to open up the call for proposals and we watched the proposals flowing in, with the usual surge towards the CFP closing date of 31st October.

The response to the CFP was excellent, with 95 proposals coming in from all around the globe. We had ideas from first-time presenters and also some from very seasoned campaigners on the testing conference circuit. My thanks go to everyone who took the time and effort to put forward a proposal.

We were joined by Michele Playfair to help us select a programme from the CFP responses. This was an interesting process (as usual), making some hard decisions to build what we considered the best conference programme from what was submitted. With only eight track session slots to fill, we couldn’t choose all of the excellent talks we were offered unfortunately.

The tracks we have chosen are hopefully broad enough in topic to be interesting to many testers. Our keynotes come from Ben Simo (making his first trip and conference appearance in Australia!) and local legend, Lynne Cazaly. Rounding out our programme are three full-day workshops showcasing top Melbourne talent, in the shape of Neil Killick, Scott Miles and Toby Thompson. I’m proud of the programme we have on offer and thank all the speakers who’ve accepted our invitation to help us deliver an awesome event.

The complete TiCCA19 line-up is:

Keynotes (March 1st)

  • Ben Simo with “Is There A Problem Here?”
  • Lynne Cazaly with “Try to see it my way: How developers, technicians, managers and leaders can better understand each other”

Tracks (March 1st)

  • Nick Pass with “From Prototype to Product: Building a VR Testing Effort”
  • Samantha Connelly with “Tales of Fail – How I failed a Quality Coach role”
  • Morris Nye with “Test Reporting in the Hallway”
  • Michelle Macdonald with “The Automation Gum Tree”
  • Graeme Harvey with “Old Dog, New Tricks: How Traditional Testers Can Embrace Code”
  • Aaron Hodder with “The Uncertain Future of Non-Technical Testing”
  • Adam Howard with “Exploratory Testing: LIVE!”
  • Samantha Laing with “The Little Agile Testing Manifesto”

Workshops (February 28th)

  • Neil Killick with “From “Quality Assurance” to “Quality Champion” – How to be a successful tester in an agile team”
  • Scott Miles with “Leveraging the Power of API Testing”
  • Toby Thompson with “Applied Exploratory Testing”

For more details about TiCCA19, including the full schedule and the chance to benefit from significant discounts during the “Little Ripper” period of registration, visit the conference website.

I hope to see you in Melbourne next year!

ER of attending and presenting at the inaugural TestBash Australia conference (Sydney)

The first TestBash conference to be held in Australia/New Zealand took place in Sydney on October 19. The well-established conference brand of the Ministry of Testing ensured a sell-out crowd (of around 130) for this inaugural event, quite an achievement in the tough Australian market for testing conferences. The conference was held in the Aerial function centre at the University of Technology in Sydney.

The Twitter hashtag for the event was #testbash (from which I’ve borrowed the photos in this post) and this was very active across the conference and in the days after.

I was there to both attend and present at the conference. In fact, I would be co-presenting with Paul Seaman on our volunteer work teaching software testing to young adults on the autism spectrum. It was great to have this opportunity and we were humbled to be selected from the vast response the conference had to its call for papers.

The event followed the normal TestBash format, viz. a single day conference consisting of a single track with an opening and closing keynote plus a session of “99 second talks” (the TestBash version of lightning talks). Track sessions were 30 or 45 minutes in duration, generally with very little time after each talk for questions from the audience (especially in the case of the 30-minute slots).

Early arrivals were rewarded with the opportunity to participate in a Lean Coffee session out on the balcony at the Aerial function centre, a nice way to start the day in the morning sunshine (and with pretty good barista coffee too!).

The conference proper kicked off at 8.50am with a short opening address from the event MC, Trish Koo. She welcomed everyone, gave some background about the Ministry of Testing and also gave a shout out to all of the sponsors (viz. Enov8, Applitools, Gumtree, Tyro and Testing Times).

The opening keynote came from Maaret Pyhajarvi (from Finland) with “Next Level Teamwork: Pairing and Mobbing”. Maaret is very well-known for her work around mobbing and this was a good introductory talk on the topic. She mentioned that mobbing involves everyone in the team working together around one computer, which helps learning as everyone knows something that the others don’t. By way of contrast, she outlined strong-style pairing, in which “I have an idea, you take the keyboard to drive”. In this style, different levels of skill help; being unequal at the task is actually a good thing. Maaret said she now only uses pairing as a way to train people, not to actually test software. In a mobbing scenario, there is always one driver on the keyboard who is only following instructions and not thinking. A designated navigator makes decisions on behalf of the group. The roles are rotated every four minutes and a retro is held at the end of every session. Maaret also noted the importance of mixing roles in the mob (e.g. testers, developers, automation engineers). This was a strong opening keynote with content pitched at just the right level for it to be of general interest.
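The rotation mechanics Maaret described can be sketched as a tiny scheduler – a hypothetical illustration of the driver/navigator hand-off, not her actual tooling (names and turn length are made up):

```python
from collections import deque

def rotate_mob(members, turns):
    """Yield (driver, navigator) pairs for a strong-style mob.

    The driver only follows instructions; the navigator decides on
    behalf of the group. After each fixed-length turn (e.g. four
    minutes), everyone shifts along one seat.
    """
    mob = deque(members)
    for _ in range(turns):
        yield mob[0], mob[1]  # current driver and navigator
        mob.rotate(-1)        # everyone moves up one position

# Example: a mixed mob of testers, developers and automation engineers
schedule = list(rotate_mob(["Ana", "Ben", "Cho", "Dee"], turns=4))
# schedule[0] == ("Ana", "Ben"); after one rotation, ("Ben", "Cho"), etc.
```

After four turns everyone has driven and navigated once, which is the point of the fixed rotation: no one sits in either role long enough to dominate or switch off.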


Next up was a 30-minute talk from Alister Scott (from Automattic) with “How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing”. He introduced his talk by giving some context about the way the company is organized – 800 people across 69 countries, with everyone remote (i.e. no offices!), and all internal communications being facilitated by WordPress (dogfooding). Alister structured his talk as a series of problems and their solutions, starting with the problem of broken customer flows in production (when they moved to continuous delivery). Their solution to this problem was to add automated end-to-end testing of signup flows in production (and only in production). This solution led to the next problem, having non-deterministic end-to-end tests due to ever-changing A/B tests. The solution to this problem was an override of A/B tests during testing. The next problem was these new tests being too slow, too late (only in production) and too hidden, so they moved to parallel tests and adding “canaries” on merge (before deployment), simple tests of key features (signing up and publishing a page) designed to give fast feedback of major breaking changes. This led to the next problem, having to revert merges and slow local runs, to which the solution was having live branch tests with canaries on every pull request. This led to the observation that, of course, canaries don’t find all the problems, so the solution then was to add optional full test suites on live branches. Even then, a problem persisted with Internet Explorer 11 and Safari 10 specific issues, so IE11 and Safari 10 canaries were added. The final problem is still current, in that people still break end-to-end tests! This was a nicely structured short talk about a journey of end-to-end testing and how solving one problem led to another (and ultimately has put them in a position of having no manual regression testing), good content.
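One way to picture the A/B override Alister described – pinning every experiment to a known variant so end-to-end tests see deterministic behaviour – is a small URL helper along these lines. This is purely illustrative: the `ab_` parameter convention and function names are my invention, not Automattic’s actual mechanism:

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def force_ab_variants(url, overrides):
    """Append explicit A/B experiment assignments to a URL so that an
    end-to-end test always sees the same variant, regardless of the
    random bucketing users normally get.

    `overrides` maps experiment name -> variant,
    e.g. {"signup_flow": "control"}. The "ab_" query-parameter prefix
    is a made-up convention for illustration only.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    for experiment, variant in overrides.items():
        query[f"ab_{experiment}"] = variant
    return urlunparse(parts._replace(query=urlencode(query)))

# A test navigates to the pinned URL instead of the bare one:
url = force_ab_variants("https://example.com/start?plan=free",
                        {"signup_flow": "control"})
# url -> https://example.com/start?plan=free&ab_signup_flow=control
```

The production code would then read such overrides before falling back to random assignment, which is what turns a flaky, variant-dependent test into a deterministic one.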


A welcome break for morning tea and a chance to catch up with familiar faces came next before the delegates reconvened, with Enov8 getting the chance for a 99-second sponsor talk before sessions resumed.

First up after the break was a 30-minute session thanks to Michele Playfair (of YOW!) with “A Tester’s Guide to Changing Hearts and Minds”. Her key message was that the ability to change people’s opinions about testing was essentially a marketing exercise and she introduced the “4 P’s of marketing”, viz. Product, Price, Promotion and Placement. She argued that, as testers, we need to be better at defining our product (we should be able to answer questions like “what do you do here?”) and also promoting ourselves (by building relationships and networks, and revealing our value). This was a good short talk from Michele, a different angle on the topic of testers describing and showing their value.


Next up was Peter Bartlett (of Campaign Monitor) with a 45-minute talk on “Advancing Quality by Turning Developers into Quality Champions”. He defined a “quality champion” as “a developer who actively promotes quality in their team”, with this being a temporary role (typically lasting six months or so) which is rotated amongst the team. He generally selects someone who already displays a quality mindset or is an influencer within the team to take on the role initially and then trains them via one-on-one meetings, contextual training and against set goals. He encourages them to ask questions like “what areas are hard to test and why?”, “what can I do to make it easier for you to develop your code and be confident in its quality?”, and “what’s the riskiest piece of what you’re working on?”.  Pete holds regular group meetings with all of the quality champions, these might be demo meetings, lean coffees or workshops/activities (e.g. how to write good acceptance criteria, dealing with automation flakiness, playing the dice game, introducing a new tool, how to use heuristics, live group testing). He has noted some positive changes as a result of using this quality champions model, including increased testability, a growth in knowledge and understanding around quality, new automation tests and performance tool testing research. Pete wrapped up with some tips, including starting small, taking time to explain and listen (across all project stakeholders), and to keep reviewing. This was a similar talk to Pete’s talk at the CASTx18 conference earlier in the year but it felt more fully developed here, no doubt as a result of another six months or so of trying this approach in Campaign Monitor.


As the clock struck noon, it was time for Paul Seaman (of Travelport Locomote) and me to take the big stage for our 30-minute talk, “A Spectrum of Difference – Creating EPIC Software Testers”. We outlined the volunteer work we’ve been doing with EPIC Assist to teach software testing to young adults on the autism spectrum (a topic on which I’ve already blogged extensively) and we were pleased with how our co-presenting effort went – and we thought we looked pretty cool in our EPIC polo shirts! We managed to finish up just about on time and the content seemed to resonate with this audience.


With our talk commitment completed, it was lunch hour (albeit with very limited vegan options despite pre-ordering) and it was good to get some fresh air and sunshine out on the venue’s balcony. Paul and I received lots of great feedback about our talk during lunch, it’s always so nice when people make the effort to express their thanks or interest.

Returning from lunch, it was Applitools’ turn to get their 99-seconds of fame as a sponsor before presentations resumed, in the form of a 45-minute session by Adam Howard (of TradeMe) with “Exploratory Testing: LIVE”. This was a really brave presentation, with Adam performing exploratory testing of a feature in the TradeMe website (New Zealand’s EBay) that had been deliberately altered by a developer in ways Adam was not aware of (via an A/B deployment in production). It was brave in many ways: he relied on internet connectivity and a stable VPN connection back to his office in New Zealand, and also exposed himself to testing a feature for the first time in front of 130 eagle-eyed testers! He applied some classic ET techniques and talked through everything he was doing in very credible terms, so this session served as an object lesson to anyone unfamiliar with what genuine exploratory testing looks like and how valuable it can be (Adam unearthed many issues, some of which probably weren’t deliberately introduced for the purposes of his session!). Great work from a solid presenter.


The following 30-minute talk was Paul Maxwell-Walters with “Avoid Sleepwalking to Failure! On Abstractions and Keeping it Real in Software Teams”. This was a seriously content-rich, well-researched talk, and it was a struggle to give everything the coverage it deserved in such a short slot. He introduced the ideas of hyper-normalization and hyper-reality before getting into talking about abstractions, viz. “quality” and “measurement”. I particularly liked this quote from his talk: “bad metrics and abstractions are delusional propaganda”! It might have been a better talk had he tried to cover less content, but nevertheless it was really engaging and interesting stuff.


The final break came next before we reconvened for the push to the finish. First up after the break was another 99-second sponsor talk, this time Anne-Marie Charrett (conference co-organizer) on her consultancy business, Testing Times.

The last 30-minute slot went to first-time conference presenter, Georgia de Pont (of Tyro), with “Test Representatives – An Alternative Approach to Test Practice Management”, and she presented very confidently and calmly on her first outing. She outlined how Tyro moved to having testers embedded in agile teams and, while there were lots of positives from doing this, there was also a lack of consistency in test practice across the teams and no way to consider practice-wide improvements. She went on to talk about the move to “test representatives” (who are themselves embedded testers in teams), one from each tribe, who have a mission to provide a community for testers and act as points of contact for initiatives impacting testing. Each representative then shares the outputs of the representatives group with their team. Initiatives the representatives have covered so far include clarifying the role of the embedded tester, improving the test recruitment process (via a pair testing exercise), onboarding new test engineers, performance criteria for test engineers, upskilling test engineers, co-ordinating engineering-wide test initiatives and developing a Quality Engineering strategy. There is also a stretch goal for testers to operate across teams. Georgia’s recommended steps to implement such a model were to start small, look for volunteers over selection, communicate the work of the representatives across the organization, survey to get feedback, hold retros within the representatives group and foster support from engineering leadership. This was a solid talk, especially impressive considering Georgia’s lack of experience in this environment.


The final presentation of the day was a closing keynote thanks to Parimala Hariprasad (of Amadeus) with “Enchanting Experiences – The Future of Mobile Apps”. Her time on stage was pretty brief (using only a little over half of her 45-minute slot before Q&A) but was very engaging.  She argued that designing great products isn’t about good screens, it’s about great – enchanting – experiences. She said we should think more about ecosystems than apps and screens as systems become more complex and interconnected. Her neat slides and confident presentation style made her messaging very clear and she also handled Q&A pretty well.


The last session of the conference was dedicated to “99-second talks”, the TestBash version of lightning talks in which each speaker gets just 99 seconds to present on a topic of their choice. There were plenty of volunteers, so the time freed up by the short closing keynote was filled with extra talks – 18 in total, as follows:

  • Sam Connelly – on depression (and introducing “spoon theory”)
  • Amanda Dean – on why she believes testing is not a craft and should be thought of as a profession
  • Maaret Pyhajarvi – live exploratory testing of an API (using the Gilded Rose example, as per her recent webinar on the same topic)
  • Cameron Bradley – on why a common automation framework is a good thing (based on his experience of implementing one at Tabcorp)
  • Dany Matthias – on experimenting with coffee!
  • Melissa Ngau – on giving and receiving feedback
  • Geoff Dunn – on conflict and how testers can help to resolve it
  • Catherine Karena – on mentoring
  • Nicky West – what is good strategy?
  • Kim Nepata – Blockchain 101
  • Sunil Kumar – mobile application testing: how, what and why?
  • Said – on rotations and why they’re essential in development teams
  • Melissa (Editor Boss at Ministry of Testing) – living a dream as a writer
  • Leela – on transitioning from a small to a large company
  • Haramut – demo of a codeless automation framework
  • Trish Koo – promoting her test automation training course
  • Anne-Marie Charrett – “Audience-Driven Speaking”
  • Maaret Pyhajarvi – promoting the Speak Easy mentoring programme


After a brief closing speech from Trish Koo, the conference closed out. The action then moved to the nearby Knox Street Bar for a post-conference “meetup” with free drinks courtesy of sponsorship from Innodev. This was a fun evening, relaxing with old friends from the testing community and talking conference organizing with others involved in this, erm, fun activity!


I’ll finish off this blog post with some general thoughts on this conference.

The standard of presentations was excellent, as you might expect from a TestBash and the massive response to their call for papers (around 250 submissions). The mix of topics was also very good, from live exploratory testing (I would love to see something like this at every testing conference) to automation to coaching/training/interpersonal talks.

The single track format of all TestBash conferences means there is no fear of missing out, but the desire to pack as many talks as possible into the single day means very limited opportunity for Q&A (which is often where the really interesting discussions are). I personally missed the deep questioning that occurs post-presentations at conferences like CAST.

Although the sponsor talks were kept to short 99-second formats, I still find sponsor talks of any kind uncomfortable, especially at a relatively expensive conference.

Paul and I enjoyed presenting to this audience and the Ministry of Testing do an excellent job in terms of pre-gig information and speaker compensation (expensing literally door-to-door). We appreciated the opportunity to share our story and broaden awareness of our programme with EPIC Assist.

Speaking at the inaugural TestBash Australia conference

I’m delighted to have recently found out that I’ll be co-presenting (with Paul Seaman) at the first TestBash conference in Sydney, Australia, in October 2018.

It was Paul’s idea to submit a proposal to TestBash to talk about our continuing experience of teaching software testing to young adults on the autism spectrum through the EPIC TestAbility Academy. We presented on this topic at the LAST conference in Melbourne in 2017 and this time we’ll be able to share more experience, as we’re already halfway through the second run of the programme as I write.

The first course had six students (five of whom completed the full 12 weeks) while the current one has ten students. As we expected, the course is quite different each time we run it, based on the unique attributes of the students involved – suffice to say, it’s another very revealing and rewarding experience as we work with these ten inspiring young people.

Thanks again to Paul for suggesting we submit to this conference and also to the Ministry of Testing for giving us the opportunity to share this great story. I must also extend thanks to EPIC Assist for their ongoing support of this programme and especially to Kym Vassiliou without whose tireless efforts this second run might not have got off the ground.

See you at TestBash Australia 2018!

CASTx18, context-driven testing fun in Melbourne


Way back in May 2017, I blogged about the fact that I was invited to be the Program Chair for the CASTx18 context-driven testing conference in Melbourne. Fast forward many months and lots of organizing & planning later, the conference took place last week – and was great fun and very well-received by its audience.

Pre-conference meetup

A bonus event came about the evening before the conference started when my invited opening keynote speaker, Katrina Clokie, offered to give a meetup-style talk if I could find a way to make it happen. Thanks to excellent assistance from the Association for Software Testing and the Langham Hotel, we managed to run a great meetup and Katrina’s talk on testing in DevOps was an awesome way to kick off a few days of in-depth treatment of testing around CASTx18. (I’ve blogged about this meetup here.)

Conference format

The conference itself was quite traditional in its format, consisting of a first day of tutorials followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

Day 1 – tutorials

The first day of CASTx18 consisted of two concurrent tutorials, viz.

  • Introduction to Coaching Testing (Anne-Marie Charrett & Pete Bartlett)
  • Testing Strategies for Microservices (Scott Miles)

There were good-sized groups in both tutorials and presenters and students alike seemed to have enjoyable days. My thanks to the presenters for putting together such good-quality content to share and to the participants for making the most of the opportunity.

After the tutorials, we held a cocktail reception for two hours to which all conference delegates were invited as well as other testers from the general Melbourne testing community. This was an excellent networking opportunity and it was good to see most of the conference speakers in attendance, sharing their experiences with delegates. The friendly, relaxed and collaborative vibe on display at this reception was a sign of things to come!

Day 2 – conference report

The conference was kicked off at 8.30am with an introduction by Ilari Henrik Aegerter (board member of the AST) and then by me as conference program chair, followed by Richard Robinson outlining the way open season would be facilitated after each track talk.


It was then down to me to introduce the opening keynote, which came from Katrina Clokie (of Bank of New Zealand), with “Broken Axles: A Tale of Test Environments”. Katrina talked about when she first started as a test practice manager at BNZ and she was keen to find out what was holding testing back across the bank, to which the consistent response was test environments. She encouraged the teams to start reporting descriptions of issues and their impact (how many hours they were impacted for and how many people were impacted). It turned out the teams were good at complaining but not so good at explaining to the business why these problems really mattered. Moving to expressing the impact in terms of dollars seemed to help a lot in this regard! She noted that awareness was different from the ability to take action so visualizations of the impact of test environment problems for management along with advocacy for change (using the SPIN model) were required to get things moving. All of these tactics apply to “fixing stuff that’s already broken” so she then moved on to more proactive measures being taken at BNZ to stop or detect test environment problems before their impact becomes so high. Katrina talked about monitoring and alerting, noting that this needs to be treated quite differently in a test environment than in the production environment. She stumbled across the impressive Rabobank 3-D model of IT systems dependencies and thought it might help to visualize dependencies at BNZ but, after she identified 54 systems, this idea was quickly abandoned as being too complex and time-consuming. Instead of mapping all the dependencies between systems, she has instead built dashboards that map the key architectural pieces and show the status of those. This was a nice opening keynote (albeit a little short at 25 minutes), covering a topic that seldom makes its way onto conference programmes.
The 20 minutes of open season indicated that problems with test environments are certainly nothing unique to BNZ!


A short break followed before participants had a choice of two track sessions, in the shapes of Adam Howard (of New Zealand’s answer to eBay, TradeMe) with “Automated agility!? Let’s talk truly agile testing” and James Espie (of Pushpay) with “Community whack-a-mole! Bug bashes, why they’re great and how to run them effectively”. I opted for James’s talk and he kicked off by immediately linking his topic to the conference theme, suggesting that involving other people in testing (via bug bashes) is just like Burke and Wills, who had a team around them to enable them to be successful. At Pushpay, they run a bug bash for every major feature they release – the group consists of 8-18 people (some of whom have not seen the feature before) testing for 60-90 minutes, around two weeks before the beta release of the feature. James claimed such bug bashes are useful for a number of reasons: bringing fresh eyes (preventing snowblindness), bringing a diversity of brains (different people know different things) and bringing diversity of perspectives (quality means different things to different people). Given his experience of running a large number of bug bashes, James shared some lessons learned:

  • Coverage – provide some direction or you might find important things have been left uncovered (e.g. everyone tested on the same browser).
  • Keeping track – don’t use a formal bug tracking system like JIRA; use something simpler like Slack, a wiki page or a Google sheet.
  • Logistics – be ready, with the right hardware, software and test data in place, as well as internet, wi-fi, etc.
  • Marketing – it’s hard to get different people each time, so advertise in at least three different ways; a “shoulder tap” invitation works well and providing snacks helps (the “hummus effect”!).
  • Triage – you might end up with very few bugs or a very large number (potentially with a lot of duplicates), so consider triaging “on the go” during the running of the bug bash.
James noted that for some features, the cost of setting up and running a bug bash is not worth it and he also mentioned that these events need to be run with sufficient time between them so that people don’t get fatigued or simply tired of the idea. He highlighted some bonuses, including accidental load testing, knowledge sharing and team building. This was a really strong talk, full of practical takeaways, delivered confidently and with some beautiful slide work (James is a cartoonist). The open season exhausted all of the remaining session time, always a good sign that the audience has been engaged and interested in the topic.



A morning tea break followed before participants again had a choice of two track sessions, either “Journey to continuous delivery” from Kim Engel or “My Journey as a Quality Coach” from Lalitha Yenna (of Xero). I attended Lalitha’s talk, having brought her into the programme as a first-time presenter. I’d reviewed Lalitha’s talk content in the weeks leading up to the conference, so I was confident in the content but unsure of how she’d deliver it on the day – I certainly need not have worried! From her very first opening remarks, she came across as very confident and calm, pacing herself perfectly and using pauses very effectively – the audience would not have known it was her first time and her investment in studying other presenters (via TED talks in particular) seriously paid off. Lalitha’s role was an experiment for Xero as they wanted to move towards collective ownership of quality. She spent time observing the teams and started off by “filling the gaps” as she saw them. She met with some passive resistance as she did this, making her realize the importance of empathy. She recommended the book The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever as it helped her become more competent as she coached the teams around her. She noted that simply removing the “Testing” column from their JIRA boards had a big effect in terms of pushing testing left in their development process. Lalitha was open about the challenges she faced and the mistakes she’d made. Initially, she found it hard to feel or show her accomplishments, later realizing that she needed instead to quantify her learnings. She noted that individual coaching was sometimes required and that old habits still came back sometimes within the teams (especially under times of stress). She also realized that she gave the teams too much education and moved to a “just in time” model of educating them based on their current needs and maturity.
A nice takeaway was her DANCEBAR story kickoff mnemonic: Draw/mindmap, Acceptance Criteria, Non-functional requirements, Think like the Customer, Error conditions, Business rules, Automation, Regression. In summary, Lalitha said her key learnings on her journey so far in quality coaching were persistence, passion, continuous learning, empathy, and asking lots of questions. This was a fantastic 30-minute talk from a first-time presenter, so confidently delivered, and she also dealt well with 15 minutes or so of open season questioning.


Lunch was a splendid buffet affair in the large open area outside the Langham ballroom and it was great to see the small but engaged crowd networking so well (we looked for any singletons to make them feel welcome, but couldn’t find any!)

The afternoon gave participants a choice of either two track sessions or one longer workshop before the closing keynote. The first of the tracks on offer came from Nicky West (of Yambay) with “How I Got Rid of Test Cases”, with the concurrent workshop courtesy of Paul Holland (of Medidata Solutions) on “Creativity, Imagination, and Creating Better Test Ideas”. I chose Nicky’s track session and she kicked off by setting some context. Yambay is a 25-person company that had been using an outsourced testing service, running their testing via step-by-step test cases. The outsourcing arrangement was stopped in 2016 with Nicky being brought in to set up a testing team and process. She highlighted a number of issues with using detailed test cases, including duplicating detailed requirements, lack of visibility to the business and reinforcement of the fallacy that “anyone can test”. When Yambay made the decision to move to agile, this also inspired change in the testing practice. Moving to user stories with acceptance criteria was a quick win for the business stakeholders and acceptance criteria became the primary basis for testing (with the user story then being the single source of truth in terms of both requirements and testing). Nicky indicated some other types of testing that take place in Yambay, including “shakedown” tests (which are documented via mindmaps, marked up to show progress and then finally exported as Word documents for external stakeholders), performance & load tests (which are automated) and operating system version update tests (which are documented in the same way as shakedown tests). In terms of regression testing, “product user stories” are used plus automation (using REST Assured for end-to-end tests), re-using user stories to form test plans.
Nicky closed by highlighting efficiency gains from her change of approach, including maintaining one set of assets (user stories), time savings from not writing test cases (and more time to perform exploratory testing), and not needing a test management tool (saving both time and money). This was a handy 40-minute talk, with a good message. The idea of moving away from a test case-driven testing approach shouldn’t have been new for this audience but the ten-minute open season suggested otherwise and it was clear that a number of people got new ideas from this talk.

A short break followed, before heading into the final track session (or the continuation of Paul’s workshop). I spent the hour with Pete Bartlett (of Campaign Monitor) and “Flying the Flag for Quality as a 1-Man-Band”. Pete talked about finding himself in the position of being the only “tester” in his part of the organization and the tactics he used to bring quality across the development cycle. Firstly, he was “finding his bearings” by conducting surveys (to gain an understanding of what “quality” meant to different people), meeting with team leads and measuring some stuff (to both see if his changes were having an impact and also to justify what he was doing). Then he started creating plans based on the strengths and weaknesses identified in the surveys, with clear achievable goals. Executing on those plans meant getting people on board, continuing to measure and refine, and being vocal. Pete also enlisted some “Quality Champions” across the teams to help him out with sending the quality message. This good 45-minute talk was jam-packed, maybe spending a little too long on the opening points and feeling slightly rushed towards the end. The open season fully used the rest of his session.

With the track sessions over, it was time for the afternoon tea break and the last opportunity for more networking.

It was left to James Christie (of Claro Testing) to provide the closing keynote, “Embrace bullshit? Or embrace complexity?”, introduced by Lee. I invited James based on conversations I’d had with him at a conference dinner in Dublin some years ago and his unique background in auditing as well as testing gives him a very different perspective. His basic message in the keynote was that we can either continue to embrace bullshit jobs that actually don’t add much value or we can become more comfortable with complexity and all that it brings with it. There was way too much content in his talk, meaning he used the whole hour before we could break for a few questions! This was an example of where less would have been more; half the content would have made a great talk. The only way to summarize this keynote is to provide some quotes and links to recommended reading, there is so much good material to follow up on here:

  • Complex systems are always broken. Success and failure are not absolutes. Complex systems can be broken but still very valuable to someone.
  • Nobody knows how a socio-technical system really works.
  • Why do accidents happen? Heinrich domino model, Swiss cheese model, Systems Theory
  • Everything that can go wrong usually goes right, with a drift to failure.
  • The root cause is just where you decide to stop looking.
  • Testing is exploring the unknowns and finding the differences between the imagined and the found.
  • Safety II (notable names in this area: Sidney Dekker, John Allspaw, Noah Sussman, Richard Cook)
  • Instead of focusing on accidents, understand why systems work safely.
  • Cynefin model (Dave Snowden, Liz Keogh)
  • John Gall Systemantics: How Systems Work and Especially How They Fail
  • Richard Cook How Complex Systems Fail
  • Steven Shorrock & Claire Williams Human Factors & Ergonomics in Practice


The conference was closed out by a brief closing speech from Ilari, during which he mentioned the AST’s kind US$1000 donation to the EPIC TestAbility Academy, the software testing training programme for young adults on the autism spectrum run by Paul Seaman and me through EPIC Assist.


I’ll wrap up with some general observations on the conference:
  • The move away from embedded testers in agile teams seems to be accelerating, with many companies adopting the test coach approach of operating across teams to help developers become better testers of their own work. There was little consistency on display here, though, about the best model for test coaching. I see this as an interesting trend and still see a role for dedicated testers within agile teams but with a next “level” of coaching/architect role operating cross-teams in the interests of skills development, consistency and helping to build a testing community across an organization.
  • A common thread was fewer testers in organizations, with testing now being seen as more of a team responsibility thanks to the widespread adoption of agile approaches to software development. The future for “testers as test case executors” looks grim.
  • The “open season” discussion time after each presentation was much better than I’ve seen at any other conference using the K-cards system. The open seasons felt more like those at peer conferences and perhaps the small audience enabled some people to speak up who otherwise wouldn’t have.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue).
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for again taking the chance on running such an event.

With thanks

I’d like to take the opportunity to publicly express my thanks to:

  • The AST for putting their trust in me (along with Paul Seaman as Assistant Program Chair) to select the programme for this conference,
  • The speakers for sharing their stories, without you there is no content to create a conference,
  • Valerie Gryfakis, Roxane Jackson and the wonderful event staff at the Langham for their smiling faces and wonderful smooth running of the conference,
  • Paul Seaman for always being there for me when I needed advice or assistance, and
  • The AST for their donation to the EPIC TestAbility Academy.

The only trouble with running a successful and fun event is the overwhelming desire to do it all again, so watch this space…