Background
Way back in May 2017, I blogged about being invited to be the Program Chair for the CASTx18 context-driven testing conference in Melbourne. Many months of organizing and planning later, the conference took place last week – and it was great fun and very well received by its audience.
Pre-conference meetup
A bonus event came about the evening before the conference started when my invited opening keynote speaker, Katrina Clokie, offered to give a meetup-style talk if I could find a way to make it happen. Thanks to excellent assistance from the Association for Software Testing and the Langham Hotel, we managed to run a great meetup and Katrina’s talk on testing in DevOps was an awesome way to kick off a few days of in-depth treatment of testing around CASTx18. (I’ve blogged about this meetup here.)
Conference format
The conference itself was quite traditional in its format, consisting of a first day of tutorials followed by a single conference day of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).
Day 1 – tutorials
The first day of CASTx18 consisted of two concurrent tutorials, viz.
- Introduction to Coaching Testing (Anne-Marie Charrett & Pete Bartlett)
- Testing Strategies for Microservices (Scott Miles)
There were good-sized groups in both tutorials, and presenters and students alike seemed to have enjoyable days. My thanks to the presenters for putting together such good-quality content to share and to the participants for making the most of the opportunity.
After the tutorials, we held a cocktail reception for two hours to which all conference delegates were invited as well as other testers from the general Melbourne testing community. This was an excellent networking opportunity and it was good to see most of the conference speakers in attendance, sharing their experiences with delegates. The friendly, relaxed and collaborative vibe on display at this reception was a sign of things to come!
Day 2 – conference report
The conference was kicked off at 8.30am with an introduction by Ilari Henrik Aegerter (board member of the AST) and then by me as conference program chair, followed by Richard Robinson outlining the way open season would be facilitated after each track talk.
It was then down to me to introduce the opening keynote, which came from Katrina Clokie (of Bank of New Zealand), with “Broken Axles: A Tale of Test Environments”. Katrina talked about when she first started as a test practice manager at BNZ and was keen to find out what was holding testing back across the bank, to which the consistent response was test environments. She encouraged the teams to start reporting descriptions of issues and their impact (how many hours they were impacted for and how many people were impacted). It turned out the teams were good at complaining but not so good at explaining to the business why these problems really mattered. Moving to expressing the impact in terms of dollars seemed to help a lot in this regard! She noted that awareness was different from the ability to take action, so visualizations of the impact of test environment problems for management, along with advocacy for change (using the SPIN model), were required to get things moving. All of these tactics apply to “fixing stuff that’s already broken”, so she then moved on to more proactive measures being taken at BNZ to stop or detect test environment problems before their impact becomes so high. Katrina talked about monitoring and alerting, noting that this needs to be treated quite differently in a test environment than in the production environment. She stumbled across the impressive Rabobank 3-D model of IT systems dependencies and thought it might help to visualize dependencies at BNZ but, after she identified 54 systems, this idea was quickly abandoned as being too complex and time-consuming. Instead of mapping all the dependencies between systems, she has built dashboards that map the key architectural pieces and show the status of those. This was a nice opening keynote (albeit a little short at 25 minutes), covering a topic that seldom makes its way onto conference programmes. The 20 minutes of open season indicated that problems with test environments are certainly nothing unique to BNZ!
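As a rough illustration of the kind of status checks a dashboard like Katrina described could sit on top of, here is a minimal Java sketch that polls the health endpoints of a few key systems and prints a one-line status summary for each; the system names and URLs are entirely hypothetical, not BNZ’s actual architecture.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.Map;

// Polls the health endpoints of a few key systems in a test environment
// and prints a one-line status summary for each - the kind of raw data a
// test-environment status dashboard could be built on.
public class TestEnvStatusCheck {

    // Hypothetical systems and endpoints, for illustration only.
    private static final Map<String, String> SYSTEMS = Map.of(
            "internet-banking", "https://test-env.example.com/ib/health",
            "payments",         "https://test-env.example.com/payments/health",
            "customer-core",    "https://test-env.example.com/core/health");

    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        SYSTEMS.forEach((name, url) -> {
            String status;
            try {
                HttpResponse<Void> response = client.send(
                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.discarding());
                status = response.statusCode() == 200
                        ? "UP"
                        : "DEGRADED (HTTP " + response.statusCode() + ")";
            } catch (Exception e) {
                status = "DOWN (" + e.getClass().getSimpleName() + ")";
            }
            System.out.printf("%-20s %s%n", name, status);
        });
    }
}
```

In practice the results would feed a dashboard rather than stdout, but the underlying data gathering really can be this simple.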
A short break followed before participants had a choice of two track sessions, in the shapes of Adam Howard (of New Zealand’s answer to eBay, TradeMe) with “Automated agility!? Let’s talk truly agile testing” and James Espie (of Pushpay) with “Community whack-a-mole! Bug bashes, why they’re great and how to run them effectively”. I opted for James’s talk and he kicked off by immediately linking his topic to the conference theme, suggesting that involving other people in testing (via bug bashes) is just like Burke and Wills, who had a team around them to enable them to be successful. At Pushpay, they run a bug bash for every major feature they release – the group consists of 8-18 people (some of whom have not seen the feature before) testing for 60-90 minutes, around two weeks before the beta release of the feature. James claimed such bug bashes are useful for a number of reasons: bringing fresh eyes (preventing snowblindness), bringing a diversity of brains (different people know different things) and bringing diversity of perspectives (quality means different things to different people). Given his experience of running a large number of bug bashes, James shared some lessons learned: 1) coverage (provide some direction or you might find important things have been left uncovered, e.g. everyone tested on the same browser), 2) keeping track (don’t use a formal bug tracking system like JIRA, use something simpler like Slack, a wiki page or a Google sheet), 3) logistics (be ready, with the right hardware, software and test data in place as well as internet, wi-fi, etc.), 4) marketing (it’s hard to get different people each time, so advertise in at least three different ways; “shoulder tap” invitations work well, and provide snacks – the “hummus effect”!), and 5) triage (you might end up with very few bugs or a very large number, potentially with a lot of duplicates, so consider triaging “on the go” during the bug bash). James noted that for some features, the cost of setting up and running a bug bash is not worth it, and he also mentioned that these events need to be run with sufficient time between them so that people don’t get fatigued or simply tired of the idea. He highlighted some bonuses, including accidental load testing, knowledge sharing and team building. This was a really strong talk, full of practical takeaways, delivered confidently and with some beautiful slide work (James is a cartoonist). The open season exhausted all of the remaining session time, always a good sign that the audience has been engaged and interested in the topic.
A morning tea break followed before participants again had a choice of two track sessions, either “Journey to continuous delivery” from Kim Engel or “My Journey as a Quality Coach” from Lalitha Yenna (of Xero). I attended Lalitha’s talk, having brought her into the programme as a first-time presenter. I’d reviewed Lalitha’s talk content in the weeks leading up to the conference, so I was confident in the content but unsure of how she’d deliver it on the day – I certainly need not have worried! From her very first opening remarks, she came across as very confident and calm, pacing herself perfectly and using pauses very effectively – the audience would not have known it was her first time and her investment in studying other presenters (via TED talks in particular) seriously paid off. Lalitha’s role was an experiment for Xero as they wanted to move towards collective ownership of quality. She spent time observing the teams and started off by “filling the gaps” as she saw them. She met with some passive resistance as she did this, making her realize the importance of empathy. She recommended the book The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever as it helped her become more competent as she coached the teams around her. She noted that simply removing the “Testing” column from their JIRA boards had a big effect in terms of pushing testing left in their development process. Lalitha was open about the challenges she faced and the mistakes she’d made. Initially, she found it hard to feel or show her accomplishments, later realizing that she needed instead to quantify her learnings. She noted that individual coaching was sometimes required and that old habits still came back sometimes within the teams (especially in times of stress). She also realized that she gave the teams too much education and moved to a “just in time” model of educating them based on their current needs and maturity. A nice takeaway was her DANCEBAR story kickoff mnemonic: Draw/mindmap, Acceptance Criteria, Non-functional requirements, Think like the Customer, Error conditions, Business rules, Automation, Regression. In summary, Lalitha said her key learnings on her journey so far in quality coaching were persistence, passion, continuous learning, empathy, and asking lots of questions. This was a fantastic 30-minute talk from a first-time presenter, so confidently delivered, and she also dealt well with 15 minutes or so of open season questioning.
Lunch was a splendid buffet affair in the large open area outside the Langham ballroom and it was great to see the small but engaged crowd networking so well (we looked for any singletons to make them feel welcome, but couldn’t find any!)
The afternoon gave participants a choice of either two track sessions or one longer workshop before the closing keynote. The first of the tracks on offer came from Nicky West (of Yambay) with “How I Got Rid of Test Cases”, with the concurrent workshop courtesy of Paul Holland (of Medidata Solutions) on “Creativity, Imagination, and Creating Better Test Ideas”. I chose Nicky’s track session and she kicked off by setting some context. Yambay is a 25-person company that had been using an outsourced testing service, running their testing via step-by-step test cases. The outsourcing arrangement was stopped in 2016 with Nicky being brought in to set up a testing team and process. She highlighted a number of issues with using detailed test cases, including duplicating detailed requirements, lack of visibility to the business and reinforcement of the fallacy that “anyone can test”. When Yambay made the decision to move to agile, this also inspired change in the testing practice. Moving to user stories with acceptance criteria was a quick win for the business stakeholders and acceptance criteria became the primary basis for testing (with the user story then being the single source of truth in terms of both requirements and testing). Nicky indicated some other types of testing that take place at Yambay, including “shakedown” tests (which are documented via mindmaps, marked up to show progress and then finally exported as Word documents for external stakeholders), performance & load tests (which are automated) and operating system version update tests (which are documented in the same way as shakedown tests). In terms of regression testing, “product user stories” are used alongside automation (using REST Assured for end-to-end tests), with user stories re-used to form test plans. Nicky closed by highlighting efficiency gains from her change of approach, including maintaining one set of assets (user stories), time savings from not writing test cases (and more time to perform exploratory testing), and not needing a test management tool (saving both time and money). This was a handy 40-minute talk, with a good message. The idea of moving away from a test case-driven testing approach shouldn’t have been new for this audience but the ten-minute open season suggested otherwise and it was clear that a number of people got new ideas from this talk.
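For readers unfamiliar with REST Assured, here is a minimal sketch of what an end-to-end check in that style might look like; the base URI, endpoint and response body are hypothetical examples, not Yambay’s actual services.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.jupiter.api.Test;

// A minimal REST Assured end-to-end check, written as a JUnit 5 test.
public class ShakedownApiTest {

    // Hypothetical base URI, for illustration only.
    private static final String BASE_URI = "https://api.example.com";

    @Test
    public void healthEndpointReportsOk() {
        given()
            .baseUri(BASE_URI)
        .when()
            .get("/health")                     // hypothetical endpoint
        .then()
            .statusCode(200)                    // expect HTTP 200
            .body("status", equalTo("UP"));     // and a JSON body like {"status":"UP"}
    }
}
```

The given/when/then fluent style is what makes such tests readable enough to double as lightweight documentation, which fits nicely with Nicky’s theme of keeping one set of assets.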
A short break followed, before heading into the final track session (or the continuation of Paul’s workshop). I spent the hour with Pete Bartlett (of Campaign Monitor) and “Flying the Flag for Quality as a 1-Man-Band”. Pete talked about finding himself in the position of being the only “tester” in his part of the organization and the tactics he used to bring quality across the development cycle. Firstly, he was “finding his bearings” by conducting surveys (to gain an understanding of what “quality” meant to different people), meeting with team leads and measuring some stuff (both to see if his changes were having an impact and to justify what he was doing). Then he started creating plans based on the strengths and weaknesses identified in the surveys, with clear achievable goals. Executing on those plans meant getting people on board, continuing to measure and refine, and being vocal. Pete also enlisted some “Quality Champions” across the teams to help him out with sending the quality message. This good 45-minute talk was jam-packed, maybe spending a little too long on the opening points and feeling slightly rushed towards the end. The open season used up the rest of his session.
With the track sessions over, it was time for the afternoon tea break and the last opportunity for more networking.
It was left to James Christie (of Claro Testing) to provide the closing keynote, “Embrace bullshit? Or embrace complexity?”, introduced by Lee. I invited James based on conversations I’d had with him at a conference dinner in Dublin some years ago, and his unique background in auditing as well as testing gives him a very different perspective. His basic message in the keynote was that we can either continue to embrace bullshit jobs that actually don’t add much value or we can become more comfortable with complexity and all that it brings with it. There was way too much content in his talk, meaning he used the whole hour before we could break for a few questions! This was an example of where less would have been more; half the content would have made a great talk. The only way to summarize this keynote is to provide some quotes and links to recommended reading, as there is so much good material to follow up on here:
- Complex systems are always broken. Success and failure are not absolutes. Complex systems can be broken but still very valuable to someone.
- Nobody knows how a socio-technical system really works.
- Why do accidents happen? Heinrich domino model, Swiss cheese model, Systems Theory
- Everything that can go wrong usually goes right, with a drift to failure.
- The root cause is just where you decide to stop looking.
- Testing is exploring the unknowns and finding the differences between the imagined and the found.
- Safety-II (notable names in this area: Sidney Dekker, John Allspaw, Noah Sussman, Richard Cook)
- Instead of focusing on accidents, understand why systems work safely.
- Cynefin model (Dave Snowden, Liz Keogh)
- John Gall, Systemantics: How Systems Work and Especially How They Fail
- Richard Cook, How Complex Systems Fail
- Steven Shorrock & Claire Williams, Human Factors & Ergonomics in Practice
The conference was closed out by a brief closing speech from Ilari, during which he mentioned the AST’s kind US$1000 donation to the EPIC TestAbility Academy, the software testing training programme for young adults on the autism spectrum run by Paul Seaman and me through EPIC Assist.
Takeaways
- The move away from embedded testers in agile teams seems to be accelerating, with many companies adopting the test coach approach of operating across teams to help developers become better testers of their own work. There was little consistency on display here, though, about the best model for test coaching. I see this as an interesting trend and still see a role for dedicated testers within agile teams, but with a next “level” of coaching/architect role operating across teams in the interests of skills development, consistency and helping to build a testing community across an organization.
- A common thread was fewer testers in organizations, with testing now being seen as more of a team responsibility thanks to the widespread adoption of agile approaches to software development. The future for “testers as test case executors” looks grim.
- The “open season” discussion time after each presentation was much better than I’ve seen at any other conference using the K-cards system. The open seasons felt more like those at peer conferences and perhaps the small audience enabled some people to speak up who otherwise wouldn’t have.
- The number of delegates was quite small but the vibe was great and the feedback incredibly positive (especially about the programme and the venue).
- It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for again taking a chance on running such an event.
With thanks
I’d like to take the opportunity to publicly express my thanks to:
- The AST for putting their trust in me (along with Paul Seaman as Assistant Program Chair) to select the programme for this conference,
- The speakers for sharing their stories – without you, there is no content to create a conference,
- Valerie Gryfakis, Roxane Jackson and the wonderful event staff at the Langham for their smiling faces and the wonderfully smooth running of the conference,
- Paul Seaman for always being there for me when I needed advice or assistance, and
- The AST for their donation to the EPIC TestAbility Academy.
The only trouble with running a successful and fun event is the overwhelming desire to do it all again, so watch this space…