It was a long time coming (thanks to COVID and the harsh restrictions imposed on Melbourne especially during 2020 and 2021) but, after three years, the Testing Talks conference finally took place at the Melbourne Convention and Exhibition Centre on Thursday 20th October.
Appropriately billed as “The Reunion”, the conference saw over 400 keen testers assemble for the single-track event, one of the largest tech conferences in Australia post-COVID, so hats off to Cameron Bradley and his team for making the event happen!

I arrived fairly early and there were already plenty of people checked in and enjoying catching up. I bumped into several familiar faces almost immediately (g’day Paul, Pete and Rob!) and it was lovely to meet up in person again after a long time between conference drinks.

The conference kicked off in the massive Clarendon room at about 9am with a brief introduction by Cameron Bradley, who showed his great passion for testing and community while displaying genuine humility and appreciation for others. An excellent start to the day’s proceedings.
The opening talk came from David Colwell (VP of AI & ML, Tricentis) with “How to test a learning system”. He defined a learning system as any system that improves with respect to a task given more exposure to the task. Such systems are more than just rules, with artificial neural networks being an early example. David noted that many modern learning systems are good at getting the right answers after learning, but it’s often difficult to know why. When testing a learning system, looking at its accuracy alone is not enough; we need to look at where it’s inaccurate to see if small degrees of inaccuracy are actually indicative of big problems. He gave the example of a system trained on data containing only a small proportion of Indigenous people, which led to significant issues in its outputs for Indigenous people even though it appeared to be high-90s% accurate overall. Inevitably, David used Tricentis’s own product, Vision AI, as a case study, but only very briefly, and he mentioned that good old combinatorial testing focusing on the intersections that matter was key in testing this system. His key message was that the same testing techniques (e.g. combinatorial, automation, exploratory testing) and the same testing skills are still relevant for these types of learning systems; it’s just a different application of those techniques and skills. David is an excellent presenter and he pitched this talk at a level suitable for a general audience (without turning it into a vendor pitch). I was pleased to see a focus on understanding why such systems give the results they do rather than just celebrating their “accuracy”. An interesting and well-presented opening session, though sadly there was no opportunity for Q&A.
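David’s point about overall accuracy hiding subgroup problems is easy to show with a few lines of code. The following is just a sketch with invented numbers (not David’s data), disaggregating accuracy by a subgroup attribute:

```python
# Sketch only: invented evaluation records, purely to show how a healthy-looking
# overall accuracy can hide a subgroup the model gets badly wrong.
from collections import defaultdict

# (subgroup, predicted, actual) - hypothetical records
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1),
    ("majority", 0, 0), ("majority", 1, 1), ("majority", 1, 1),
    ("minority", 0, 1), ("minority", 1, 0),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    correct[group] += int(predicted == actual)

overall = sum(correct.values()) / len(records)
print(f"overall accuracy: {overall:.0%}")  # 75% - looks respectable
for group in totals:
    # the minority subgroup scores 0% despite the reasonable overall number
    print(f"{group} accuracy: {correct[group] / totals[group]:.0%}")
```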
Next up on the big stage was Andrew Whitehouse (Wipro) with “Design-driven concepts for automated testing”. He used the analogy of a refrigerator and the mechanism it uses to stop itself from freezing inside. His message was to focus on testing system behaviours and look at interactions to drive design decisions. Andrew suggested the use of contract tests to check that the structure of interactions stays OK and collaboration tests to check that the behaviour of interactions stays OK. The key is to use both of these approaches at scale, under load and over time to reveal different types of issues. He really laboured the fridge analogy and (pun intended) it left me cold. The key message made sense but the construction of the argument around the fridge didn’t work too well and the slides didn’t help either (too many words and poor colour choices leading to contrast & readability issues). There was again no Q&A following this talk.

Morning tea (or coffee in my case, thanks to the excellent St Ali-fuelled barista coffee cart!) was a good opportunity for a stretch and it was nice to bump into more old friends (g’day Erik!). The catering from MCEC was vegan-friendly with clear labelling and lots of yummy choices, much appreciated.
Heading back into the conference room, the next session was “Automate any app with Appium” by Rohan Singh (HeadSpin). He gave a brief introduction to Appium (which uses the Selenium WebDriver protocol) and then went straight into a demo, in which he installed the Appium Python client, connected to his real Android device and then created a simple automated check against the eBay app. Rohan’s demo was well prepared and went well – perhaps too well, as his 45-minute session was all over in about 15 minutes (even including a short spiel about his employer, HeadSpin)! The very rapid execution left a hole in the schedule, so we all headed back out into the open space until the next session.
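For anyone who hasn’t used the Appium Python client, the gist of such a check looks something like the sketch below. This isn’t Rohan’s demo code – the app package, activity and locators are all made up – and it assumes a recent Appium-Python-Client (with the options API) talking to a local Appium server and a connected Android device:

```python
# Sketch only: hypothetical app package, activity and locators, assuming a
# local Appium server and a connected Android device using UiAutomator2.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options().load_capabilities({
    "platformName": "Android",
    "deviceName": "Android",                # any connected device
    "appPackage": "com.example.shopping",   # hypothetical app under test
    "appActivity": ".MainActivity",         # hypothetical launch activity
})

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Search for something and assert that at least one result shows up
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "search_box").send_keys("headphones")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "search_button").click()
    results = driver.find_elements(AppiumBy.ID, "com.example.shopping:id/result")
    assert results, "expected at least one search result"
finally:
    driver.quit()
```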
The only lightning talk of the day wrapped up the morning session, in the form of Matt Fellows (SmartBear) giving an “Introduction to Contract Testing”. It was great to see Matt on stage and I’ve personally appreciated his help around contract testing in the past. He continues to be a strong advocate for this approach, as co-founder of PactFlow and now through the SmartBear offerings. He kicked off by noting some of the problems with traditional integration testing which – while it exercises the whole stack – is often slow, fragile, hard to debug, difficult to manage in terms of environments, has questionable coverage and can result in build queues. Matt outlined the basics of contract testing as an API integration testing technique that is simpler, requires no test environment, runs fast, scales linearly and can be deployed independently. This was a perfect lightning talk, executed in bang on 10 minutes and providing a worthy introduction to this important topic.
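For readers new to the idea, a consumer-side contract test with pact-python looks roughly like this. It’s just a sketch – the service names, endpoint and payload are invented, not from Matt’s talk:

```python
# Sketch of a consumer-driven contract test with pact-python; service names,
# endpoint and payload are hypothetical. The Pact mock service stands in for
# the real provider and records the contract for later provider verification.
import atexit
import requests
from pact import Consumer, Provider

pact = Consumer("OrderWeb").has_pact_with(Provider("ProductService"))
pact.start_service()
atexit.register(pact.stop_service)

def test_get_product():
    expected = {"id": 10, "name": "Widget", "type": "hardware"}

    (pact
     .given("a product with id 10 exists")
     .upon_receiving("a request for product 10")
     .with_request("get", "/products/10")
     .will_respond_with(200, body=expected))

    with pact:
        # The consumer's own client code would make this call in a real test
        result = requests.get(f"{pact.uri}/products/10").json()

    assert result == expected
```

No shared test environment and it runs in milliseconds – which is exactly the appeal Matt was describing.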


Lunch again saw MCEC stepping up to the plate in terms of vegan options and the leisurely schedule left enough time to enjoy some pleasant sunshine along the Yarra out the front of the exhibition centre before heading back for the long afternoon session.
Laszlo Simity (SauceLabs) had the job of engaging with the post-lunch audience with his talk, “Elevate your testing”, and he was more than up to the task. He began with some history lessons on the development of IT systems and outlined the current pain point: an exponential growth in testing requirements at the same time as an exponential decay in testing timeframes. He said more tests + more people + more tools = brute force, but there is an alternative to this brute force approach, viz. what he called “signal-driven quality”.

Laszlo’s idea was to connect information from all of our different types of testing into one place, with the aim of making smarter testing decisions. He outlined a few signals to illustrate the approach:
- Failure analysis – a small number of failures generally cause most of the test quality issues
- API testing – validate the business layer with API tests and reduce tests through the UI
- UI performance testing – to provide early awareness of performance degradation, e.g. using Google Lighthouse
- Accessibility testing – applying WCAG and using tools such as Axe (axe-core)
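To make the last of those signals concrete: with Selenium, an automated WCAG scan using axe-core can be wired in via the community-maintained axe-selenium-python package, something like the sketch below. The URL and output path are arbitrary, and I’m assuming that package rather than anything Laszlo actually showed:

```python
# Sketch of an automated accessibility scan using axe-core via the
# axe-selenium-python package (my assumption, not Laszlo's demo).
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
driver.get("https://example.com")        # hypothetical page under test

axe = Axe(driver)
axe.inject()                             # inject the axe-core JavaScript
results = axe.run()                      # run the WCAG rule checks
axe.write_results(results, "a11y.json")  # keep the full report as an artefact

driver.quit()
assert not results["violations"], axe.report(results["violations"])
```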
Only in his last slide did Laszlo refer to his employer, SauceLabs, noting that their platform brings all of the above signals together in one place. This was a nicely-crafted talk, taking us on a journey from history into pain points and through to potential solutions. It was an object lesson in how to give a talk as a vendor & sponsor and there was also a good Q&A at the end of his session.
A big name in the Selenium community was up next, with Manoj Kumar (LambdaTest) talking about “What’s new in Selenium 4?”. Manoj mentioned that relative locators, Selenium Grid observability and support for the Chrome DevTools Protocol (e.g. to mock geolocation) are all new in version 4, and that WebDriver BiDi (bi-directional) is now available for cross-browser automation. He provided some short demos of the new features in this session, which was (unsurprisingly) very focused on (and pro) Selenium. While this content was probably interesting for the toolsmiths in the audience, it didn’t feel like a talk of general relevance to me.
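As a flavour of the features Manoj demoed, here’s a rough Selenium 4 sketch in Python combining a relative locator with a CDP geolocation override (Chromium-only); the page and element IDs are made up:

```python
# Sketch of two Selenium 4 features: relative locators and mocking geolocation
# via the Chrome DevTools Protocol. Page URL and element ids are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.relative_locator import locate_with

driver = webdriver.Chrome()

# Pretend the browser is in Melbourne before the page asks for a location
driver.execute_cdp_cmd("Emulation.setGeolocationOverride", {
    "latitude": -37.8136,
    "longitude": 144.9631,
    "accuracy": 100,
})

driver.get("https://example.com/signup")

# Relative locator: the input field sitting directly below the "Email" label
email_label = driver.find_element(By.ID, "email-label")
email_input = driver.find_element(locate_with(By.TAG_NAME, "input").below(email_label))
email_input.send_keys("tester@example.com")

driver.quit()
```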
A short break for afternoon tea (and more delicious vegan treats) was welcome before heading into the home stretch.
The next session, “Interactive Exploratory Testing” by Sarmila Padmanabhan & Cameron Bradley (both of Bunnings), was the one that stood out to me from the programme given my particular interest in all things Exploratory Testing. Sarmila gave the familiar definition of exploratory testing from Cem Kaner and also mentioned context-driven testing! The session then moved onto explaining three “exploratory testing techniques” in the shape of mob testing, bug bashes and crowd testing. In mob testing, a group from different disciplines test the system via one driver working on a single device. The delegates were split up into groups (one per row in the room) to test a deliberately buggy web app using a mob approach, but the groups were far too large and undisciplined to make this work well. Reconvening, the next topic was bug bashes, defined as short bursts of intense usage, using a group of people from different disciplines testing using multiple devices/browsers. Sarmila suggested this was a useful approach for production-ready features. The planned bug bash exercise was abandoned since the previous exercise had basically degenerated into a bug bash. The final topic was crowd testing, where real people in real-world conditions test the same app, as a complement to other types of testing. It has the benefit of a diversity of people and environments (e.g. devices). The exercise for this was to test the testingtalks.com.au site, but it unfortunately crashed under the large load soon after starting the exercise. I didn’t feel that this session was really about exploratory testing as I understand and practice it. The large audience made it too hard to practically run meaningful exercises, with the group becoming somewhat out of control at times. I’d love to see a session specifically on exploratory testing at the next conference, highlighting what a credible, structured and valuable approach it can be when done well.
Another big name in the automation space came next, Anand Bagmar (Software Quality Evangelist, Applitools) talking about “Techniques to Eradicate Flaky Tests”. Anand backpedalled from the talk’s claim right from the off, noting that eradication is likely impossible. He mentioned some common challenges in UI-level automation, such as long-running tests with slow feedback, limitations on automatable scenarios at this level, the pain of cross-browser/cross-device execution and, of course, “flaky” tests. Anand outlined three ways to reduce flakiness:
- Reduce the number of tests – less testing at the UI level, with more at lower levels (and, yes, he mentioned the test pyramid as a model!)
- Remove external dependencies via Intelligent Virtualization – he recommended SpecMatic as a stub server, describing it as “PACT on steroids”!
- Use visual assertions – Anand argued that the current approach to testing is incorrect: it’s mundane, tedious and error-prone. Testing is about much more than “spot the difference” and we need to “leverage AI” and “replicate human eyes and brain”. He then pitched the VisualAI tool from his employer (and sponsor) Applitools as a way of achieving “perfection across all screens and browsers”. UX using VisualAI then became part of his updated test pyramid.
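VisualAI is obviously doing something far cleverer than a pixel diff, but the basic idea of a visual assertion – compare against a known-good baseline and fail on unexpected change – can be sketched in a few lines with Pillow (my own toy illustration, not anything Anand showed):

```python
# Toy visual assertion: compare a fresh screenshot against a stored baseline
# and fail if too many pixels differ. Not how VisualAI works - just the idea.
from PIL import Image, ImageChops

def assert_visually_similar(baseline_path, current_path, max_diff_ratio=0.01):
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    assert baseline.size == current.size, "screenshot dimensions have changed"

    diff = ImageChops.difference(baseline, current)
    changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    ratio = changed / (diff.width * diff.height)
    assert ratio <= max_diff_ratio, f"{ratio:.1%} of pixels differ from the baseline"

# e.g. assert_visually_similar("baselines/home.png", "screenshots/home.png")
```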
I liked his closing message to “make automation intelligent but not magic” and he was a good presenter with great audience interaction, but the talk became too much of a pitch for VisualAI towards the end unfortunately.
It was left to Basarat Syed (Pepperstone) to close out the presentations for the day, with “Automating testing web apps with Cypress”. His session consisted almost entirely of demos, in which he built end-to-end tests of a simple MVC “to do” application. His naturally laid-back style made for an entertaining session, even if he perhaps chose to cover too many very similar examples in his demo. His takeaway message was to test behaviours and not implementation – and that Cypress is awesome! A short Q&A wrapped things up.
It was then time for Cameron Bradley to return to the lectern to close out the conference with the usual thank-yous and formalities. A large number of prize draws from the many sponsor competitions held during the day then followed out in the open space.
For those interested in continuing the festivities, the conference moved on to the nearby Munich Brauhaus on South Wharf for drinks and nibbles. It was good to see so many people turning up to socialize, even if the ability to communicate with each other was compromised by the very noisy pub and its Band Karaoke (which enticed a number of Testing Talks speakers to take the mic!). I enjoyed chatting with friends old and new for a couple of hours over a few ciders, a nice way to end a big day with the testing community.
Apart from the talks themselves, I made a few other observations during the day.
Venue – The venue was excellent, with a good comfortable room, top notch audio/visuals and thoughtful vegan catering. The coffee cart with St Ali coffee was very welcome too (even though it didn’t offer oat milk!).
Audience – As an, erm, more senior member of the Melbourne testing community, it was interesting to see the audience here. While I was in the company of a few friends of similar vintages, the majority of the crowd were young and obviously keen to engage with the sponsors. I was a little disappointed that parts of the audience weren’t as respectful as they might have been, with talking during presentations being common no matter where I sat in the auditorium.
Programme – I generally avoid talks by sponsors at conferences but that was impossible to do here, as most of the presenters were from one of the event’s sponsors. For the most part, though, they didn’t indulge in product pitches during their talks, which was good to see. I would have liked to see more Q&A after each talk – there was generally no time for it and, when there was some Q&A, no audience mics were used and the presenters didn’t repeat the questions, so the broader audience often had no idea what was being answered.
The programme was very focused on automation/tooling and I would have liked to see more talks about human testing: the challenges, interesting new approaches and first-person experience reports. Given the younger audience at this conference and the prevalence of tooling vendors as sponsors, it concerns me that it would be too easy for them to think this is what testing is all about and to miss out on learning the fundamentals of our craft.
Kudos to Cameron and the Testing Talks team for making this event finally happen. I know from personal experience of organizing a number of testing events in Melbourne how much work is involved and how hard it can be to get a crowd, even in more “normal” times! Cam’s authenticity and desire for community building shone through, from the opening remarks to his easy-going conversations with delegates at the pub. My congratulations to all involved in bringing so many of us together for a great day.