Same story, different companies

I’ve recently had the chance to deliver presentations in the offices of a couple of different companies. Both of these opportunities arose out of the work I’ve been doing with Paul Seaman in running software testing training for young adults on the autism spectrum through EPIC Recruit Assist and our programme, the EPIC TestAbility Academy (ETA).

Having delivered a presentation about ETA at the Melbourne LAST conference, one of the audience members from our talk – Darko Zoroja – reached out to us to see if we’d be willing to deliver the same talk again, this time at his workplace, Seek. Both Paul and I are keen to spread the ETA message as much as we’re able so we immediately said yes and were soon heading along St Kilda Road to Seek’s headquarters to meet Darko. Running the ETA presentation as a “brown bag” over lunch worked well, with a good crowd gathering in the big open kitchen/lunch/presentation space to hear our talk. We got a lot of thoughtful questions from this audience too and some interest was shown in EPIC and ETA (and maybe we even found a candidate for the next run of ETA as well). Thanks to Darko and everyone we met at Seek for their warm hospitality and excellent presentation facilities – and for giving up lunch breaks to listen to Paul and me!

Lee presenting at Seek

The next opportunity came thanks to Paul’s employer, Travelport Locomote, so it was another trip down St Kilda Road to give the ETA presentation again, this time as a “lunch and learn” session in their open space (which comes handily equipped with an incredibly distracting wide view over Albert Park lake and Port Phillip Bay). It was a small but engaged bunch of Paul’s colleagues and there were some great questions at the end as well as another offer of assistance in running future ETA sessions. Thanks to this group also for giving up their lunch hour to spend listening to us telling our story.

A big “Thanks” to everyone who’s already shown interest in what we’re doing with EPIC and, of course, to my mate Paul without whose grit and determination in finding an organization to get this thing off the ground we’d have no story to tell.

If your organization has a genuine interest in diversity and would be keen to find out more about the EPIC TestAbility Academy, we’d be more than happy to give our talk on your premises too so just reach out if that’s of interest.

ER of presenting at the LAST conference (and observations on the rise of “QA”)

As I’ve blogged previously, I was set to experience three “firsts” at the recent LAST conference held in Melbourne. Now on the other side of the experience, it’s worth reviewing each of those firsts.

It was my first time attending a LAST conference and it was certainly quite a different experience to any other conference I’ve attended. Most of my experience is in attending testing-related conferences (of both commercial and community varieties) and LAST was a much broader church, but still with a few testing talks to be found on the programme.

With about a dozen concurrent tracks, choosing talks was a tough job – and that many tracks just seems a bit OTT to me. As is usually the case, it was the first-person experience reports that made for the highlights of this conference. The Seek guys, Brian Rankin and Norman Noble, presented Seek’s agile transformation story in “Building quality products as a team” and this was a compelling and honest account of their journey. In “Agile @ Uni: patience young grasshopper”, Toby Durden and Tim Hetherington (both of Deakin University) talked about a similar journey at their university and the challenges of adopting more agile approaches at program rather than project level – again good, open, honest and genuine storytelling.

(I also made an effort to attend the talks specifically on testing, see later in this blog post for my general thoughts around those.)

The quality of information provided by the LAST organizers in the lead-up to the conference was second to none, so hats off to them for preparing so well and giving genuinely useful information to presenters. Having said that, the experience “on the day” wasn’t great in my opinion. It still amazes me that conferences think it’s OK not to have a room helper for each and every session, especially conferences like this one that encourage lots of new or inexperienced presenters. A room helper can cover introductions, facilitate Q&A, keep things on track time-wise, and assist with any AV issues – and their presence can simply be a comfort to a nervous presenter.

Secondly, this was the first time I’d co-presented a talk at a conference and it turned out to be a very good experience. Paul Seaman and I practiced our talk a few times, both via Skype calls and also in front of an audience, so we were confident in our content and timing as we went into the “live” situation. It was great to have some company up there and sharing the load felt very natural & comfortable. Paul and I are already discussing future joint presentations now that we know we can make a decent job of it. (The only negatives surrounding the actual delivery of the talk related to the awful room we had been given, with the AV connection being at the back of the room meaning we couldn’t see our soft-copy speaker notes while presenting – but neither of us thought this held us back from delivering a good presentation.)

Lee and Paul kicking off their presentation at LAST

Thirdly, this was the first time I’d given a conference talk about my involvement with the EPIC TestAbility Academy. The first run of this 12-week software testing training programme for young adults on the autism spectrum has just finished and Paul & I are both delighted with the way it’s gone. We’ve had amazing support from EPIC Recruit Assist and learned a lot along the way, so the next run of the programme should be even better. My huge thanks to the students who stuck with us and hopefully they can use some of the skills we’ve passed on to secure themselves meaningful employment in the IT sector. The feedback from our talk on this topic at LAST was incredible, with people offering their (free) help during future runs of the training, describing what we’re doing as “heartwarming” and organizations reaching out to have us give the same talk in their offices to spread the word. This was a very rewarding talk and experience – and a big “thank you” to Paul for being such a great bloke to work with on this journey.

Turning to the testing talks at LAST (and also the way testing was being discussed at Agile Australia the week before), I am concerned about the way “QA” has become a thing again in the agile community. I got the impression that agile teams are looking for a way to describe the sort of contributions I’d expect a good tester to make to a team, but are unwilling to refer to that person as a “tester”. Choosing the term “QA” appeared to be seen as a way to talk about the broader responsibilities a tester might have apart from “just testing stuff”. The danger here is in the loading of the term “QA” – as in “Quality Assurance” – and using it seems to go against the whole team approach to quality that agile teams strive for. What’s suddenly wrong with calling someone a “tester”? Does that very title limit them to such an extent that they can’t “shift left”, be involved in risk analysis, help out with automation, coach others on how to do better testing, etc.? I’d much rather we refer to specialist testers as testers and let them show their potentially huge value in agile teams as they apply those testing skills to more than “just testing stuff”.

Attending the Agile Australia conference (June 22 & 23, 2017)

Although the Agile Australia conference has been running for nine years, I attended it for the first time recently when it took place in Sydney. It was again sold out (and oversold if the “standing room only” keynotes and rumours of mass late registrations from one of the larger sponsors were anything to go by) and it’s become a massive commercial conference, set to celebrate its tenth anniversary next year in Melbourne.

There was a big selection of talks, with each day being kicked off by three back-to-back forty-minute keynotes before splitting into multiple tracks (with one track comprised of so-called “sponsored content”).

The keynotes on both days were of high quality and certainly some of the best talks of the conference for me. Barry O’Reilly was entertaining and engaging in his talk on lessons learned in trying to deploy lean in enterprise environments, while Jez Humble busted a few myths about the deployability of continuous delivery in various organizations. He won me over when he mentioned Exploratory Testing as part of the CD pipeline – the only time I heard ET mentioned during the entire event. Neal Ford did a good job in his keynote on how best practices turn into anti-patterns, and Sami Honkonen’s talk on the building blocks required to build a responsive organization was a highlight of the conference.

In terms of track sessions, there wasn’t a single session dedicated to testing and maybe everyone with a good testing story to tell has simply given up submitting to this conference now (my last two submissions haven’t got up) but there was plenty to keep me occupied. Highlights were John Contad‘s passionately delivered talk about mentoring at REA Group, Dr Lisa Harvey-Smith‘s fascinating presentation on dark matter, and Estie & Anthony Boteler‘s talk about working with an intern software tester on the autism spectrum, also at REA Group. This talk resonated strongly with me thanks to my recent work with Paul Seaman and EPIC Recruit Assist in delivering the EPIC TestAbility Academy software testing training programme for young adults on the autism spectrum.

My takeaways were:

  • The focus in the agile community has moved away from “doing Scrum better” to looking at the human factors in successful projects.
  • Talks on psychological safety, neurodiversity, mentorship and such were great to see here, as the importance of people in project success becomes better understood.
  • Testing as a skilled craft is still not being valued by this community, with the crucial role of exploratory testing being mentioned only once in all the talks I attended.

Out of the thousand or so official photos from this conference, there’s only one to provide evidence of my attendance – waiting in line at the coffee cart, kind of says it all really.


Some firsts at the LAST conference (Melbourne)

My next conference speaking gig has just come in – the LAST conference in Melbourne at the end of June 2017. This event will mark a series of “firsts” for me.

Firstly (pun intended), this will be my first time attending a LAST conference so I’m looking forward to the huge variety of speakers they have and being part of a community-driven event.

Secondly, this will be the first time I’ve co-presented a talk at a conference. I expect this to be quite a different experience to “going solo” but, given that I’m doing it with my good mate Paul Seaman, I’m comfortable it will go very well.

Finally, this will be the first time I’ve given a conference talk about my involvement with the EPIC TestAbility Academy. Both Paul and I are excited about this project to teach software testing to young adults on the autism spectrum (and we’ve both blogged about it previously – Paul’s blog, Lee’s blog) and we’re pleased to have the opportunity to share our story at this conference. Working together to create a slide deck is another first for both of us and it’s an interesting & enjoyable challenge, for which we’ve found effective new ways of collaborating.

Thanks to LAST for selecting our talk – I’ll blog about the experience of delivering it after the event.

Program Chair for CASTx18 in Melbourne

It was to my great surprise that I was recently asked to be Program Chair for the AST’s second conference venture in Australia, CASTx18 in Melbourne next year.

The first CAST outside of North America was held in Sydney in February 2017 and was so successful that the AST have opted to give Australia another go, moving down South to Melbourne. Although my main responsibility as Program Chair revolves around the conference content, as a local I can also help out with various tasks “on the ground” to assist the rest of the AST folks who are based outside of Australia.

Coming up with a theme was my first challenge and I’ve opted to give it some Aussie flavour, with “Testing in the Spirit of Burke & Wills” to evoke the ideas of pioneering and exploration.

I’m excited by this opportunity to put together a conference of great content for the local and international testing community – and also humbled by the AST’s faith in me to do so.

Keep an eye on the AST CASTx18 website for more details; the date and venue shouldn’t be too far away, with a CFP to follow.

We’re the voice

A few things have crossed my feeds in the last couple of weeks around the context-driven testing community, so I thought I’d post my thoughts on them here.

It’s always good to see a new edition of Testing Trapeze magazine and the April edition was no exception in providing some very readable and thought-provoking content. In the first article, Hamish Tedeschi wrote on “Value in Testing” and made this claim:

Testing communities bickering about definitions of inane words, certification and whether automation is actually testing has held the testing community back

I don’t agree with Hamish’s opinion here and wonder what basis there is for claiming that these things (or indeed any others) have “held the testing community back” – held it back from what, compared to some hypothetical state it might otherwise have reached?

Michael Bolton tweeted shortly after this publication went live (but not in response to it) that:

Some symptoms [of testers who don’t actually like testing] include fixation on tools (but not business risk); reluctance to discuss semantics and why chosen words matter in context.

It seems to be an increasingly common criticism of those of us in the context-driven testing community that we’re overly focused on “semantics” (or “bickering about definitions of inane words”). We’re not talking about the meaning of words for the sake of it, but rather to “make certain distinctions clear, with the goal of reducing the risk that someone will misunderstand—or miss—something important” (Michael Bolton again, [1]).


I believe these distinctions have led to less ambiguity in the way we talk about testing (at least within this community), and that doesn’t feel like something that would hold us back – rather the opposite. As an example, the introduction (and refinement) of “testing” and “checking” (see [2]) was such an important distinction that it allows for much easier conversations with many different kinds of stakeholders about the differences – in a way that the terminology of “validation” and “verification”, for example, really didn’t.

While writing this blog post, Michael posted a blog in which he mentions this subject again (see [3]):

Speaking more precisely costs very little, helps us establish our credibility, and affords deeper thinking about testing

Thanks to Twitter, I then stumbled across an interview between Rex Black and Joe Colantonio, titled “Best Practices Vs Good Practices – Ranting with Rex Black” (see [4]). In this interview, there are some less-than-subtle swipes at the CDT community, e.g. “Rex often sees members of the testing community take a common phrase and somehow impart attributes to it that no one else does.” The example used for the “common phrase” throughout the interview is “best practices” and, of course, the very tenets of CDT call the use of this phrase into question.

Rex offered up an awesome rebuttal to use the next time you find yourself attempting to explain best practices to people, which is: Think pattern, not recipe.

How can some people have such an amazingly violent reaction to such an anodyne phrase? And why do they think it means “recipe” when it’s clearly not meant that way?

In case you’re unfamiliar with the word, “anodyne” is defined in the Oxford English dictionary as meaning “Not likely to cause offence or disagreement and somewhat dull”. So, the suggestion is that the term “best practices” is unlikely to cause disagreement – and therein lies the exact problem with using it. Rex suggests that we “take a common phrase [best practices] and somehow impart attributes to it that no one else does” (emphasis is mine). The fact that he goes on to offer a rebuttal to misuse of the term suggests to me that the common understanding of what it means is not so common. Surely it’s not too much of a stretch to see that some people might read “best” as meaning “there are no better”, thus taking so-called “best practices” and applying them in contexts where they simply don’t make any sense.

Still in my Twitter feed, it was good to see James Christie continuing his work in standing against the ISO 29119 software testing standard. You might remember that James presented about this at CAST 2014 (see [5]) and this started something of a movement against the imposition of a pointless and potentially damaging standard on software testing – the resulting “Stop 29119” campaign was the first time I’d seen the CDT community coming together so strongly and voicing its opposition to something in such a united way (I blogged about it too, see [6]).

It appears that some of our concerns were warranted with the first job advertisements now starting to appear that demand experience in applying ISO 29119.

James recently tweeted a link to a blog post (see [7]):

Has this author spoken to any #stop29119 campaigners? There’s little evidence of understanding the issues. … #testing

Read the blog post and make of it what you will. This part stood out to me:

Innitally there was controversy over the content of the ISO 29119 standard, with several organizations in opposition to the content (2014).  Several individuals in particular from the Context-Driven School of testing were vocal in their opposition, even beginning a petition against the new testing standards, they gained over a thousand signatures to it.  The opposition seems to have been the result of a few individuals who were ill – informed about the new standards as well as those that felt excluded from the standards creation process

An interesting take on our community’s opposition to the standard!

To end on a wonderfully positive note, I’m looking forward to attending and presenting at CAST 2017 in Nashville later in the year – a gathering of our community is always something special and the chance to exchange experiences & opinions with the engaged folks of CDT is an opportunity not to be missed.

We’re the voices in support of a context-driven approach to testing, let’s not be afraid to use them.


[1] Michael Bolton “The Rapid Software Testing Namespace”

[2] James Bach & Michael Bolton “Testing and Checking Refined”

[3] Michael Bolton “Deeper Testing (2): Automating the Testing”

[4] Rex Black and Joe Colantonio “Best Practices Vs Good Practices – Ranting with Rex Black”

[5] James Christie “Standards – Promoting Quality or Restricting Competition” (CAST 2014)

[6] Lee Hawkins “A Turning Point for the Context-driven Testing Community”

[7] Eva Johnson “ISO 29119 Testing Standard – Why the controversy?”

Creativity and testing

I’ve just finished reading Scott Berkun’s new book, The Dance of the Possible – “The Mostly Honest, Completely Irreverent Guide to Creativity”. As with his previous books, it makes for easy reading and he makes his points clearly and honestly. I read this book based on enjoying a couple of his other works – in the shape of Confessions of a Public Speaker and Ghost of my Father – and wasn’t anticipating the amount of testing-related goodness I found in his new one!

In just the second chapter, Scott tackles the tricky topic of where to begin when starting a piece of creative work. He talks about taking an exploratory approach:

The primary goal when you’re starting creative work is to explore, and to explore demands you do things where you are not sure of the outcome. There will be false starts, twists, turns and pivots. These should be welcomed as natural parts of the experience, rather than resisted as mistakes or failures.

Exploratory testing, anyone?! One of the joys of taking a session-based exploratory testing approach, in my experience, is the uncertainty of what information we’ll learn about our product in each session – this is so much more rewarding for the tester than knowing they’ll just report a “pass” or “fail” at the end of following a test case, for example.

As Scott moves on to methods for finding ideas (in chapter 4), one of my favourite tools for test planning and reporting makes an appearance:

Another approach to finding interesting combinations is called a mind map. On a large piece of paper write your main goal, subject or idea down in the center and circle it. Then think of an attribute, or an idea, related to the main one and write it down, drawing a line back to the main idea. Then think of another and another, connecting each one to any previous idea that seems most related.

Keep drawing lines and making associations. Soon you’ll have a page full of circles and lines capturing different ways to think about your main thought.

Exploratory testing puts the onus on the tester to come up with test ideas, and this seems to be one of the biggest challenges for testers moving from a scripted approach: “how will I know what to test?” The skill of coming up with test ideas is one that requires practice, and mind maps are a great way both to organize those ideas and to give the tester a way to visualize them, start to combine (or separate) them, and so on.

In chapter 5, Scott talks about creative projects being “a dance between two forces, expanding to consider more ideas and shrinking to narrow things down enough to finish” and how this very idea can be challenging for those who focus on efficiency:

Looking back on a finished project, you might think the time spent exploring ideas that didn’t get used was wasted. It’s easy to believe you should have known from the beginning which ideas would work best and which wouldn’t. This is an illusion. Creativity is exploration. You are going into the unknown on purpose. You can only learn about ideas as you develop them, and there’s no reliable predictor of which ones will pay off and which ones won’t. Certainly the more conservative you are in the ideas you pick, the more predictable the process will be, but by being more conservative you are likely being less creative and will discover fewer insights. Arguably the more creations you make the better your intuition gets, but you won’t find any successful creator, even the legends, who gets it right all the time.

People obsessed with efficiency have a hard time accepting this truth. They would also have a very hard time being at sea with Magellan, working with Edison in his lab or with Frida Kahlo in her art studio. They’d be stunned to see the “waste” of prototypes and sketches, and mystified by how many days Magellan had to spend at sea without discovering a single thing. Discovery is never efficient.

I see this “dance” a lot and it’s the natural tester dance between doing more testing (“consider more ideas”) and calling “good enough” (“narrow things down enough to finish”). I think these words from Scott are a great way to express the benefit of more creative testing in helping us better assess risks in our products:

Certainly the more conservative you are in the ideas you pick, the more predictable the process will be, but by being more conservative you are likely being less creative and will discover fewer insights.

Scott’s new book was a fairly short and enjoyable read. I always say that testing is a creative endeavour – seeking to counter the idea that testing is repetitive, boring work whenever I come across it – so Scott’s words on creativity will be a handy reference in this regard.