Teaching testing again at the EPIC TestAbility Academy

After an enjoyable experience with EPIC Assist in 2017 offering testing training to young adults on the autism spectrum, we (that is, Paul Seaman and I) have just started another run of the EPIC TestAbility Academy.

This second course is being held in the excellent facilities of ANZ’s Docklands office and it’s great to have their support in providing high quality surroundings in which to teach the course. Jon O’Neill, head of ANZ Testing Services, gave a brief (and entertaining) introduction as we kicked off the first session of this second course:

Jon O'Neill (ANZ) introducing himself at the first session of the second EPIC TestAbility Academy

Thanks to some great marketing efforts and a lot of legwork on EPIC’s behalf (a big shout out to Kym Vassiliou especially), this course has filled to our maximum of ten students and it was great to hear them all introducing themselves during the first session – so a big welcome to Braeden, Monique, Dom, Mario, Shen, Damian, Zoe, Caleb, Scott and Marco.

Lee and Paul introducing themselves at the first session of the second EPIC TestAbility Academy

After all the intros, the first session was devoted to discussion about the “what” and “why” of testing, before we wrapped up with a critical thinking exercise, “test the ball”. The engagement and insightful contributions from the group made the opening session very enjoyable for us as teachers (and hopefully also for the students!).

In the second session, we spent quite some time going over the students’ findings from the homework (viz. testing this Palindrome Checker website) and they had come up with some awesome test ideas (including some both Paul and I hadn’t thought of). Next up, we covered the importance of stakeholders before we dived into a group exercise in the shape of the Wason Selection Task. This proved to be a big hit, with excellent engagement, lots of differing opinions, good discussions and (almost!) reaching consensus. To wrap up the session, we ran another group testing exercise in which we all explored puzzle number 2 of James Lyndsay’s Black Box Puzzles (the students already tried to explain the behaviour of puzzle number 1 as part of the application process for the course). This was a fun session and the group is already forming good social bonds and everyone appears to be comfortable contributing ideas.
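As an aside for readers who want to try the homework themselves: the behaviour the students were probing can be sketched with a hypothetical palindrome checker (the function below is my own illustration, not the code behind the website). Many of the best test ideas target exactly the edge cases where implementations disagree:

```python
# A hypothetical palindrome checker, similar in spirit to the website
# the students tested. Real checkers differ on these edge cases, which
# is exactly what makes them interesting test targets.
def is_palindrome(text: str) -> bool:
    # Normalise: ignore case and non-alphanumeric characters.
    cleaned = [c.lower() for c in text if c.isalnum()]
    return cleaned == cleaned[::-1]

# Test ideas of the kind the group explored: happy paths plus the
# awkward cases where the "rules" of palindromes are open to debate.
cases = {
    "racecar": True,                          # simple happy path
    "RaceCar": True,                          # mixed case - should case matter?
    "A man, a plan, a canal: Panama": True,   # punctuation and spaces
    "": True,                                 # empty string - a palindrome at all?
    "x": True,                                # single character
    "12321": True,                            # digits
    "hello": False,                           # clear negative case
}
for text, expected in cases.items():
    assert is_palindrome(text) == expected, text
print("all palindrome test ideas pass")
```

Whether the empty string or a punctuation-laden phrase “counts” is a matter of interpretation rather than code, which is exactly the sort of question that makes this a good critical thinking exercise.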

In addition to Craig Thompson, our ever-present helper from EPIC, Michele Playfair has been taking part in the sessions too as she will stand in for me during absences due to work travel in the coming months. (Paul, Michele and I are all offering our services on a voluntary basis.)

It’s great to have a “full house” for the second run of this course and initial signs are very encouraging, so it’s going to be an interesting twelve weeks as we seek to impart some of our knowledge and passion for good testing and watch these young adults learn and grow.

We are also grateful to the Association for Software Testing for their donation which supports refreshment breaks for the duration of this course.

(Thanks to Kym for the photos used in this post.)


CASTx18, context-driven testing fun in Melbourne


Way back in May 2017, I blogged about the fact that I was invited to be the Program Chair for the CASTx18 context-driven testing conference in Melbourne. Fast forward many months and lots of organizing & planning later, the conference took place last week – and was great fun and very well-received by its audience.

Pre-conference meetup

A bonus event came about the evening before the conference started when my invited opening keynote speaker, Katrina Clokie, offered to give a meetup-style talk if I could find a way to make it happen. Thanks to excellent assistance from the Association for Software Testing and the Langham Hotel, we managed to run a great meetup and Katrina’s talk on testing in DevOps was an awesome way to kick off a few days of in-depth treatment of testing around CASTx18. (I’ve blogged about this meetup here.)

Conference format

The conference itself was quite traditional in its format, consisting of a first day of tutorials followed by a single conference day of opening and closing keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

Day 1 – tutorials

The first day of CASTx18 consisted of two concurrent tutorials, viz.

  • Introduction to Coaching Testing (Anne-Marie Charrett & Pete Bartlett)
  • Testing Strategies for Microservices (Scott Miles)

There were good-sized groups in both tutorials and presenters and students alike seemed to have enjoyable days. My thanks to the presenters for putting together such good-quality content to share and to the participants for making the most of the opportunity.

After the tutorials, we held a cocktail reception for two hours to which all conference delegates were invited as well as other testers from the general Melbourne testing community. This was an excellent networking opportunity and it was good to see most of the conference speakers in attendance, sharing their experiences with delegates. The friendly, relaxed and collaborative vibe on display at this reception was a sign of things to come!

Day 2 – conference report

The conference was kicked off at 8.30am with an introduction by Ilari Henrik Aegerter (board member of the AST) and then by me as conference program chair, followed by Richard Robinson outlining the way open season would be facilitated after each track talk.


It was then down to me to introduce the opening keynote, which came from Katrina Clokie (of Bank of New Zealand), with “Broken Axles: A Tale of Test Environments”. Katrina talked about when she first started as a test practice manager at BNZ, keen to find out what was holding testing back across the bank; the consistent response was test environments. She encouraged the teams to start reporting descriptions of issues and their impact (how many hours and how many people were affected). It turned out the teams were good at complaining but not so good at explaining to the business why these problems really mattered. Moving to expressing the impact in terms of dollars seemed to help a lot in this regard! She noted that awareness was different from the ability to take action, so visualizations of the impact of test environment problems for management, along with advocacy for change (using the SPIN model), were required to get things moving.

All of these tactics apply to “fixing stuff that’s already broken”, so she then moved on to more proactive measures being taken at BNZ to stop or detect test environment problems before their impact becomes so high. Katrina talked about monitoring and alerting, noting that this needs to be treated quite differently in a test environment than in production. She stumbled across the impressive Rabobank 3-D model of IT systems dependencies and thought it might help to visualize dependencies at BNZ but, after she identified 54 systems, this idea was quickly abandoned as being too complex and time-consuming. Instead of mapping all the dependencies between systems, she has built dashboards that map the key architectural pieces and show their status.

This was a nice opening keynote (albeit a little short at 25 minutes), covering a topic that seldom makes its way onto conference programmes. The 20 minutes of open season indicated that problems with test environments are certainly nothing unique to BNZ!


A short break followed before participants had a choice of two track sessions, in the shapes of Adam Howard (of New Zealand’s answer to eBay, TradeMe) with “Automated agility!? Let’s talk truly agile testing” and James Espie (of Pushpay) with “Community whack-a-mole! Bug bashes, why they’re great and how to run them effectively”. I opted for James’s talk and he kicked off by immediately linking his topic to the conference theme, suggesting that involving other people in testing (via bug bashes) is just like Burke and Wills having a team around them to enable their success. At Pushpay, they run a bug bash for every major feature they release – a group of 8-18 people (some of whom have not seen the feature before) testing for 60-90 minutes, around two weeks before the beta release of the feature. James claimed such bug bashes are useful for a number of reasons: bringing fresh eyes (preventing snowblindness), bringing a diversity of brains (different people know different things) and bringing diversity of perspectives (quality means different things to different people). Given his experience of running a large number of bug bashes, James shared some lessons learned:

  • Coverage – provide some direction or you might find important things have been left uncovered (e.g. everyone tested on the same browser).
  • Keeping track – don’t use a formal bug tracking system like JIRA; use something simpler like Slack, a wiki page or a Google sheet.
  • Logistics – be ready, with the right hardware, software and test data in place, as well as internet, wi-fi, etc.
  • Marketing – it’s hard to get different people each time, so advertise in at least three different ways; a “shoulder tap” invitation works well, and provide snacks (the “hummus effect”!).
  • Triage – you might end up with very few bugs or a very large number (potentially with a lot of duplicates), so consider triaging “on the go” during the bug bash.

James noted that for some features, the cost of setting up and running a bug bash is not worth it and he also mentioned that these events need to be run with sufficient time between them so that people don’t get fatigued or simply tired of the idea. He highlighted some bonuses, including accidental load testing, knowledge sharing and team building. This was a really strong talk, full of practical takeaways, delivered confidently and with some beautiful slide work (James is a cartoonist). The open season exhausted all of the remaining session time, always a good sign that the audience has been engaged and interested in the topic.



A morning tea break followed before participants again had a choice of two track sessions, either “Journey to continuous delivery” from Kim Engel or “My Journey as a Quality Coach” from Lalitha Yenna (of Xero). I attended Lalitha’s talk, having brought her into the programme as a first-time presenter. I’d reviewed Lalitha’s talk content in the weeks leading up to the conference, so I was confident in the content but unsure of how she’d deliver it on the day – I certainly need not have worried! From her very first opening remarks, she came across as confident and calm, pacing herself perfectly and using pauses very effectively – the audience would not have known it was her first time, and her investment in studying other presenters (via TED talks in particular) seriously paid off.

Lalitha’s role was an experiment for Xero as they wanted to move towards collective ownership of quality. She spent time observing the teams and started off by “filling the gaps” as she saw them. She met with some passive resistance as she did this, making her realize the importance of empathy. She recommended the book The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever as it helped her become more competent as she coached the teams around her. She noted that simply removing the “Testing” column from their JIRA boards had a big effect in terms of pushing testing left in their development process. Lalitha was open about the challenges she faced and the mistakes she’d made. Initially, she found it hard to feel or show her accomplishments, later realizing that she needed instead to quantify her learnings. She noted that individual coaching was sometimes required and that old habits still came back sometimes within the teams (especially under times of stress). She also realized that she gave the teams too much education and moved to a “just in time” model of educating them based on their current needs and maturity.

A nice takeaway was her DANCEBAR story kickoff mnemonic: Draw/mindmap, Acceptance criteria, Non-functional requirements, think like the Customer, Error conditions, Business rules, Automation, Regression. In summary, Lalitha said her key learnings on her journey so far in quality coaching were persistence, passion, continuous learning, empathy, and asking lots of questions. This was a fantastic 30-minute talk from a first-time presenter, so confidently delivered, and she also dealt well with 15 minutes or so of open season questioning.


Lunch was a splendid buffet affair in the large open area outside the Langham ballroom and it was great to see the small but engaged crowd networking so well (we looked for any singletons to make them feel welcome, but couldn’t find any!).

The afternoon gave participants a choice of either two track sessions or one longer workshop before the closing keynote. The first of the tracks on offer came from Nicky West (of Yambay) with “How I Got Rid of Test Cases”, with the concurrent workshop courtesy of Paul Holland (of Medidata Solutions) on “Creativity, Imagination, and Creating Better Test Ideas”. I chose Nicky’s track session and she kicked off by setting some context. Yambay is a 25-person company that had been using an outsourced testing service, running their testing via step-by-step test cases. The outsourcing arrangement was stopped in 2016, with Nicky being brought in to set up a testing team and process. She highlighted a number of issues with using detailed test cases, including duplicating detailed requirements, lack of visibility to the business and reinforcement of the fallacy that “anyone can test”. When Yambay made the decision to move to agile, this also inspired change in the testing practice. Moving to user stories with acceptance criteria was a quick win for the business stakeholders and acceptance criteria became the primary basis for testing (with the user story then being the single source of truth in terms of both requirements and testing). Nicky described some other types of testing that take place at Yambay, including “shakedown” tests (which are documented via mindmaps, marked up to show progress and then finally exported as Word documents for external stakeholders), performance & load tests (which are automated) and operating system version update tests (which are documented in the same way as shakedown tests). In terms of regression testing, “product user stories” are used plus automation (using REST Assured for end-to-end tests), re-using user stories to form test plans.

Nicky closed by highlighting efficiency gains from her change of approach, including maintaining only one set of assets (user stories), time savings from not writing test cases (and more time to perform exploratory testing), and not needing a test management tool (saving both time and money). This was a handy 40-minute talk with a good message. The idea of moving away from a test case-driven testing approach shouldn’t have been new for this audience, but the ten-minute open season suggested otherwise and it was clear that a number of people got new ideas from this talk.
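For readers unfamiliar with REST Assured: it’s a Java DSL for making assertions against HTTP APIs. The shape of such an end-to-end check can be sketched in any language; here’s a rough Python equivalent using only the standard library, run against a throwaway stub service (the /health endpoint and response body are my own invention for illustration, not Yambay’s actual suite):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in service so the example is self-contained; in a real setup
# the checks would run against the deployed application instead.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Bind to port 0 so the OS picks a free port.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/health"

# The end-to-end assertion itself: status code plus response body -
# the same shape of check REST Assured expresses as
# given().when().get("/health").then().statusCode(200).
with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
    payload = json.load(resp)
assert payload["status"] == "ok"
print("end-to-end check passed")
server.shutdown()
```

The appeal of this style for regression testing is that each check reads as a behavioural claim about the running system, rather than a step-by-step script for a human to follow.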

A short break followed, before heading into the final track session (or the continuation of Paul’s workshop). I spent the hour with Pete Bartlett (of Campaign Monitor) and “Flying the Flag for Quality as a 1-Man-Band”. Pete talked about finding himself in the position of being the only “tester” in his part of the organization and the tactics he used to bring quality across the development cycle. Firstly, he “found his bearings” by conducting surveys (to gain an understanding of what “quality” meant to different people), meeting with team leads and measuring some things (both to see if his changes were having an impact and to justify what he was doing). Then he started creating plans based on the strengths and weaknesses identified in the surveys, with clear, achievable goals. Executing on those plans meant getting people on board, continuing to measure and refine, and being vocal. Pete also enlisted some “Quality Champions” across the teams to help him send the quality message. This good 45-minute talk was jam-packed, maybe spending a little too long on the opening points and feeling slightly rushed towards the end. The open season fully used the rest of his session.

With the track sessions over, it was time for the afternoon tea break and the last opportunity for more networking.

It was left to James Christie (of Claro Testing) to provide the closing keynote, “Embrace bullshit? Or embrace complexity?”, which I introduced. I invited James based on conversations I’d had with him at a conference dinner in Dublin some years ago, and his unique background in auditing as well as testing gives him a very different perspective. His basic message in the keynote was that we can either continue to embrace bullshit jobs that actually don’t add much value or we can become more comfortable with complexity and all that it brings with it. There was way too much content in his talk, meaning he used the whole hour before we could break for a few questions! This was an example of where less would have been more; half the content would have made a great talk. The only way to summarize this keynote is to provide some quotes and links to recommended reading – there is so much good material to follow up on here:

  • Complex systems are always broken. Success and failure are not absolutes. Complex systems can be broken but still very valuable to someone.
  • Nobody knows how a socio-technical system really works.
  • Why do accidents happen? Heinrich domino model, Swiss cheese model, Systems Theory
  • Everything that can go wrong usually goes right, with a drift to failure.
  • The root cause is just where you decide to stop looking.
  • Testing is exploring the unknowns and finding the differences between the imagined and the found.
  • Safety II (notable names in this area: Sidney Dekker, John Allspaw, Noah Sussman, Richard Cook)
  • Instead of focusing on accidents, understand why systems work safely.
  • Cynefin model (Dave Snowden, Liz Keogh)
  • John Gall Systemantics: How Systems Work and Especially How They Fail
  • Richard Cook How Complex Systems Fail
  • Steven Shorrock & Claire Williams Human Factors & Ergonomics in Practice


The conference was closed out by a brief closing speech from Ilari, during which he mentioned the AST’s kind US$1000 donation to the EPIC TestAbility Academy, the software testing training programme for young adults on the autism spectrum run by Paul Seaman and me through EPIC Assist.


Reflections

  • The move away from embedded testers in agile teams seems to be accelerating, with many companies adopting the test coach approach of operating across teams to help developers become better testers of their own work. There was little consistency on display here, though, about the best model for test coaching. I see this as an interesting trend and still see a role for dedicated testers within agile teams but with a next “level” of coaching/architect role operating cross-teams in the interests of skills development, consistency and helping to build a testing community across an organization.
  • A common thread was fewer testers in organizations, with testing now being seen as more of a team responsibility thanks to the widespread adoption of agile approaches to software development. The future for “testers as test case executors” looks grim.
  • The “open season” discussion time after each presentation was much better than I’ve seen at any other conference using the K-cards system. The open seasons felt more like those at peer conferences and perhaps the small audience enabled some people to speak up who otherwise wouldn’t have.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue).
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for again taking the chance on running such an event.

With thanks

I’d like to take the opportunity to publicly express my thanks to:

  • The AST for putting their trust in me (along with Paul Seaman as Assistant Program Chair) to select the programme for this conference,
  • The speakers for sharing their stories, without you there is no content to create a conference,
  • Valerie Gryfakis, Roxane Jackson and the wonderful event staff at the Langham for their smiling faces and wonderful smooth running of the conference,
  • Paul Seaman for always being there for me when I needed advice or assistance, and
  • The AST for their donation to the EPIC TestAbility Academy.

The only trouble with running a successful and fun event is the overwhelming desire to do it all again, so watch this space…

Pre-CASTx18 meetup with Katrina Clokie

With Katrina Clokie being one of my invited keynotes for the CASTx18 conference, she kindly offered to give a meetup-style talk on the evening before the conference. After some searching around for a suitable venue, the AST kindly sponsored the event as part of their deal with the Langham Hotel, allowing me to advertise it. I used a free Eventbrite account and easily sold out the meetup simply via promotion on Twitter and LinkedIn.

View from my room at the Langham Hotel

When it came to the evening of Tuesday 27th February, the lovely Flinders Room in the Langham had been nicely laid out and keen participants started arriving early and partaking of the fine food and beverages on offer. We left a good half-hour for people to arrive and network before kicking off the meetup at 6pm.

Ilari Henrik Aegerter formally initiated proceedings, starting with an acknowledgement of country to the traditional owners of the land on which the event was being held and then talking about the mission and activities of the AST. Next up, I introduced Katrina and she took the stage to a crowd of about 25 keen listeners.

Katrina spoke for about 45 minutes, sharing four first-person experience stories and referencing them back to her book, “A Practical Guide to Testing in DevOps”. Her experience of working in a DevOps environment within a large bank has given her lots of opportunity to gain experience in different teams at different stages of their DevOps journey. She made a deliberate choice to include a story of failure too, always a good idea as there are often more learnings to be had from failure than success. Katrina’s easy presentation style makes her content both engaging and readily consumable, with great practical takeaways. The lengthy Q&A session after her talk indicated that many people found the content relevant and went away with ideas to try in their own workplaces.

Katrina giving her presentation

We still had the room and catering for another half-hour or so after Katrina’s talk, so there were some excellent discussions and further questions for Katrina before we wrapped up. The feedback from participants was overwhelmingly positive, both in terms of the awesome content from Katrina’s talk and also the venue facilities, service & catering.

My personal thanks go to Katrina for offering to do a talk of this nature for the Melbourne testing community and also to the AST for making it happen within such a beautiful venue (with a big shout out to Valerie Gryfakis for doing all the leg work with the hotel).

(If you haven’t already bought a copy, Katrina’s book is an excellent resource for anyone involved in modern development projects, packed full of advice and examples, and is very reasonably priced – check it out on LeanPub. I’ve previously written a review of the book on this blog too.)

Building a testing community of practice

I’m a regular visitor to China where Quest has a large R&D facility in Zhuhai in the Guangdong province. My responsibility extends to a group of 50-60 testing-related people in that office and so there is always something new and interesting going on, whether I’m “on site” or working with the teams remotely.

Many of these testers were lucky enough last year to attend three days of training in their office with Rob Sabourin and his infectious enthusiasm immediately paid great dividends across the group. One of the excellent side-effects of bringing testers from different teams together for the training was their realization of the benefits of sharing knowledge between each other. The teams are all working in a Scrum style with testers embedded into teams and this had resulted in the common problem of a lack of knowledge sharing around testing practice.

Striking while that iron was hot, I decided to establish a Testing Guild, essentially a community of practice around testing for the people in this group. Thanks to great local management support, this initiative got going quickly and the Guild now meets every two weeks to discuss and share knowledge around testing. They are documenting their meetings and discussions so I get a feel for what’s going on – but the Testing Guild really belongs to them and they set the direction. I occasionally act as “guest speaker” and will of course participate when I’m in the office.

After the Testing Guild had been running for a few months (based on some initial ideas I had – inspired by the so-called Spotify Model – and local management input), I thought it would be wise to learn more about how others recommend building such communities of practice. For no other reason than it looked like exactly the kind of content I was after, I bought the succinct (70-page A5) Building Successful Communities of Practice book by Emily Webber, an Agile consultant and coach from the UK.

The reasons that Emily cites for having a community of practice closely match with my intentions, viz. sharing knowledge & building better practice, breaking down organisational silos, accelerating professional development across the organisation, happier & more motivated people, and hiring & building a better team. Obviously some of these reasons are longer-term benefits but I believe we’re already seeing some of these benefits in our Testing Guild.

She also covers the different stages of development of such a community, how to create the right environment for it and some different models of leadership. Emily discusses how to work out who belongs to the community – in our case, this was very straightforward and we decided that everyone with a testing-related role in our part of the organisation should be part of the Testing Guild.

She discusses becoming a community, getting value from it, using the community to identify skills gaps in its members, growing the community and making it self-sustaining. The idea of using the Testing Guild to identify skills gaps wasn’t something I’d thought about and this will be a useful takeaway from this book. Emily packs a fair bit of content into a small book and it’s good general advice about how to build a community of practice with some first person experience of how to make them successful. Most of the content helped reinforce that we’re basically on the right track with what we’re doing in the Testing Guild in Zhuhai.

I’m looking forward to my next trip to China soon to experience the Testing Guild firsthand and actively contribute to one of their sessions while I have the chance. It will be interesting to see how it develops and changes over time too, certainly worthy of a future blog post!

2017 in review

It really is that time again as another year comes to a close and I take some time to look back on 2017.

In terms of this blog, I wrote 22 posts in 2017, coincidentally exactly the same as 2016! This remains well in excess of my (internal) target cadence of one post per month and my blogging was much more regular in 2017. The stats indicate that Twitter was again the main driver of traffic to my blog and it received about the same number of views in 2017 as in 2016, so if there are topics you’d like to see me talking about here (especially to encourage new readers), please let me know.


I made it to four conferences during the year: two specialized testing conferences and two agile-ish ones, and I presented at two of these four.

My first conference of 2017 came in February with the Association for Software Testing’s first conference outside North America, in the shape of CASTx17 in Sydney. This was a good testing conference and was successful enough for the AST to bring their conference back to Australia in 2018, more on that below! A review of this conference appears in a previous blog post.

It was another trip to Sydney for my next conference in June, the enormous Agile Australia event. There was no testing-related content in sight here, but there were some decent talks (especially the keynotes) that made it worth enduring the mass commercialism of this conference. I blogged about my experience of attending Agile Australia here.

My first speaking gig of the year came at the end of June, co-presenting with Paul Seaman at the LAST (Lean, Agile, Systems Thinking) conference in Melbourne. This community-focused event had a massive range of speakers and talks over two days and it was a good chance to share our story of building and running a software testing training course for young adults on the autism spectrum (much more on this to come below). It was an enjoyable gig and marked the first time I’d co-presented, so also served as handy presentation experience (see a previous blog post for details).

My last conference of the year in August provided my second speaking gig, at the AST’s main event, CAST, held in Nashville. This small conference was very enjoyable to attend, with a lot of great talks from people with an interest in context-driven testing. My talk – A Day in the Life of a Test Architect – went well with a very active “open season” of questioning following my presentation. It was also great to catch up with so many familiar faces including my mentor, Rob Sabourin, and the chance to explore this part of the US some more after the conference was too good an opportunity to miss (including experiencing the total solar eclipse from the Great Smoky Mountains national park). My experience report of attending and presenting at CAST previously appeared on this blog.

I only made it to one testing meetup during the year, that being the Sydney Testers event held around CASTx17. This well-attended meetup was a great experience and the large membership base of this meetup group continues to reflect a vibrant testing community in Sydney.

Work stuff

It’s been a good year following the sale of the Dell Software group to Francisco Partners. We’re back under the name of Quest and our first year as a standalone company has gone well with my role thankfully not really changing as a result, so I’m still lucky enough to get to work with some amazing people all around the globe. Our big pockets of testers continue to be in China and the Czech Republic with a few others in the US and Australia. I expect to visit most of our overseas offices during 2018, having only been to the Zhuhai (China) office once in 2017.

Community work

My community efforts through 2017 were all directed to a new venture, offering software testing training to young adults on the autism spectrum with the help of the not-for-profit disability organization, EPIC Assist. Together with Paul Seaman, we have built the EPIC TestAbility Academy and completed our first run of the 12-week course. It’s been an incredibly rewarding experience, with a lot of learning opportunities both for us as presenters and the students on the course. We both give our time for free and it’s nice to give back and share our knowledge in the hope of securing meaningful employment for some of these young people. We’re also looking forward to running the course again, starting early in 2018. The programme has received a lot of interest and Paul & I have been happy to present about it at the LAST conference, within the offices of Seek and Locomote, and also at an ANZTB SiGIST event.

Other stuff

My community work on the EPIC TestAbility Academy led to a couple of co-authored articles with Paul Seaman during the year. The first appeared in Women Testers magazine and the second in Testing Trapeze magazine, so thanks to these two publications for the opportunity to share our story with the broader software testing community.

In May, I was offered the chance to be Program Chair for the AST’s second conference in Australia, CASTx18 in Melbourne. I was very happy to accept their invitation and it’s been a busy few months organizing the call for papers and ultimately selecting a programme from the submissions we received. I announced the programme in November and it’s an excellent collection of local and international talent, all headed to Melbourne for the event running on February 28 and March 1 at the Langham Hotel on Southbank – I hope to see some of you there!

It’s been a busy year professionally and no doubt 2018 has some exciting opportunities in store. In the meantime, I wish you all a very Happy New Year and hope you enjoy my posts to come through 2018.

The CASTx18 conference programme is live!

Back in May, I was asked by the Association for Software Testing to be the Program Chair for CASTx18 in Melbourne and it’s been a busy six months or so since then getting to the point where the full line-up for this conference is now live.

After coming up with a theme with an Australian bent – “Testing in the Spirit of Burke & Wills” – it was exciting to open up the call for proposals and then watch the proposals trickling in, that trickle turning into a fast-flowing river as the CFP closing date approached!

The response to the CFP was very pleasing, with a broad range of proposals from all over the world, from first-time presenters to some very seasoned campaigners. My thanks go to everyone who took the time and effort to put forward a proposal.

Helped by my Assistant Program Chair, Paul Seaman, I found it a time-consuming process to select content to fill the just eight track session slots available on the conference day. It was always our intention to provide a balanced & diverse programme and hopefully we’ve achieved that:

  • The tracks cover a broad range of topics – from automation to working as a lone tester, from continuous delivery to running bug bashes.
  • We have a brand new voice, Monica Diaz, giving her first conference talk as a result of opening up a track session slot to the Speak Easy programme (for which I am also a volunteer).
  • Across the conference day line-up, we have four female speakers and five male.
  • It’s a truly international menu, with speakers from Australia, New Zealand, Canada and the UK.

It gives me great pride to announce our complete line-up, as follows:

Keynotes (March 1st)

Tracks (March 1st)

  • Adam Howard with “Automated agility!? Let’s talk truly agile testing”
  • James Espie with “Community whack-a-mole! Bug bashes, why they’re great and how to run them effectively”
  • Monica Diaz with “Evolution of Testing”
  • Kim Engel with “Journey to continuous delivery”
  • Paul Holland with “Creativity, Imagination, and Creating Better Test Ideas” (workshop)
  • Nicola West with “How I Got Rid of Test Cases”
  • Peter Bartlett with “Flying the Flag for Quality as a 1-Man-Band”

Tutorials (February 28th)

For more details about CASTx18, including the full schedule and the chance to benefit from significant discounts during the “early bird” period of registration, visit castx18.org.

Thanks again to the AST for the trust they’ve placed in me to build the programme. Hopefully what’s on offer not only appeals to a wide audience of testers & others but also adds to the legacy of great CAST conferences.

I hope to see you in Melbourne next year!

“A Practical Guide to Testing in DevOps” (Katrina Clokie)

I was excited to learn that well-known New Zealand tester, Katrina Clokie, had decided to write a book. Her popular blog, Katrina The Tester, already provided plenty of evidence of her ability to write clearly across a broad range of topics of interest to the testing community and so I had high expectations of her book, A Practical Guide to Testing in DevOps (released through Leanpub).

The book starts off with an overview of what DevOps is (and isn’t), along with some opening thoughts around where testing fits into a DevOps culture. The next couple of chapters compare and contrast testing in development with testing in production. While those of us who’ve been in software testing for a decade or more will have been schooled to think of testing in production as a huge “no-no”, the move to DevOps (along with the new engineering around it that makes excellent alerting, monitoring and rollback possible) means we need to think differently. Katrina does a great job of balancing how testing can add value during development (highlighting the importance of automation but also the high value of human exploration) against what good testing in production looks like. This was highly useful content for me and I liked the way she introduced the concepts of A/B testing, beta testing and monitoring in production as actually being “test practices”, and the risk mitigation (“exposure control”) that can come from ideas such as staged rollouts and dark launching.

The next chapter focuses on test environments, looking at the way platforms have evolved and the use of infrastructure as code, configuration management, containers and cloud. Katrina offers advice on test practices around these environments and I liked the idea of testing the infrastructure as being a part of the overall test effort in a DevOps environment (and this was something I hadn’t read about anywhere else).

A highlight chapter for me comes next in the shape of seven industry examples. Real-world examples are a good way to set context and there is a broad range of industries and project types reflected here. Each case study is short, focusing on just one aspect of the organization’s DevOps journey, e.g. A/B testing or using Docker.

Just when I thought this book had peaked in its content and usefulness, the final chapter – “Test Strategy in DevOps” – proved me wrong! There is some directly applicable material in here for anyone who is currently facing the challenge of defining test strategy in a DevOps environment. The section on rethinking the test pyramid is particularly noteworthy, I think, presenting the idea of a “DevOps Bug Filter”, a simple graphical representation of the way in which bugs might find their way through our various levels of testing. This looks like a very simple but effective way to communicate around a test strategy in a DevOps environment and I certainly intend to make use of it!

In her preface, Katrina says:

This book is for testers who want to understand DevOps and what it means for their role. It’s also for people in other roles who want to know more about how testing fits into a DevOps model. I hope to encourage people to explore the opportunities for testing in DevOps, and to become curious about how they might approach quality in a different way.

She’s achieved this mission, but also so much more in my opinion. This book offers so much practical content, much of which I feel is applicable to a wide variety of software development projects and not just those “doing DevOps” – I actually see it as more of a manual for what software testing looks like in the modern world.

Katrina’s book is a steal at the suggested Leanpub price of $15 (and, to her credit, she is also making it available for free) and a worthy new addition to the essential toolkit for anyone involved in software development and testing.

(A quick plug for the CASTx18 conference coming to Melbourne early in 2018, for which I’m Program Chair: Katrina is both a keynote speaker and a tutorial presenter, so this conference offers a great opportunity to hear more from her.)