
Pre-TiCCA19 conference meetup

In the weeks leading up to the Testing in Context Conference Australia 2019, our thoughts turned to how we might sneak in a meetup event alongside the conference to make the most of the fact that Melbourne would be home to so many awesome testers at the same time.

Thanks to the conference venue – the Jasper Hotel – giving us the use of one of their workshop rooms for an evening, and to food & drink sponsorship from House of Test (Switzerland), the meetup became feasible. A bit of social media advertising, coupled with a free Eventbrite campaign, led to about twenty keen testers (including a number of TiCCA19 conference speakers) assembling at the Jasper on the evening of Thursday 28th February.

Some pre-meetup networking gave people the chance to make new friends as well as giving the conference speakers a chance to meet some of their fellow presenters. After I gave a very brief opening, it was time for the content to kick off in the shape of a presentation by the well-known and respected Kiwi context-driven tester, Aaron Hodder. His talk, titled “Inclusive Collaboration – how our differences can make the difference”, explored how having a neurodiverse workforce can give you a competitive edge, and how the workplace can respect diverse needs and different requirements for interaction and collaboration to bring out the best in everyone. This was a beautifully-crafted talk, delivered with Aaron’s unique blend of personal connection to the topic and a smattering of self-deprecation, while still driving home a hard-hitting message. (Aaron also shared some great resources on Inclusive Collaboration at https://goo.gl/768M0u.)

Aaron Hodder addresses the meetup; the idea of “My user manual” presented by Aaron Hodder

A short networking break then gave everyone the chance to mingle some more and clean up the remains of the food, before we kicked off the panel session. Ably facilitated by Rich Robinson, the panel consisted of four TiCCA19 speakers, in the shape of Graeme Harvey, Aaron Hodder, Sam Connelly and Ben Simo. The conversation was driven by a few questions from Rich: How have you seen the testing role change during your career? How do you think the testing role will change in the future? Is the manual testing role dead? The resulting 45-minute discussion between the panel and audience was engaging and interesting – kudos to Rich for such a great job in running the panel.

Graeme, Aaron, Sam and Ben talking testing during the panel session

We enjoyed putting this meetup on for the Melbourne testing community and the feedback from everyone involved was very positive, so thanks again to everyone who made it happen.

In response to: “context-driven testing” is the “don’t do stupid stuff” school of testing

I blogged about the Twitter conversation that ensued from this tweet from Katrina Clokie:

One of the threads that came out of this conversation narrowed the focus down to “schools of testing” and, in particular, the context-driven testing community:

There’s a bit to unpack here, so let me address these replies piece by piece.

“Divisive rhetoric from some of the thought leaders in that camp”

I can only assume that Rex Black was referring to the more vocal members of the CDT community, such as James Bach. I haven’t personally experienced anyone trying to be deliberately divisive in the CDT community, but I acknowledge that passion sometimes manifests itself in some strongly-worded comments. Even then, I wouldn’t see this as “rhetoric”, as that implies a lack of sincerity or meaningful content. The CDT community, in my experience, attracts those who are sincere about improving software testing, the way it’s done, and the value it delivers.

The use of the term “thought leaders” is also interesting as I don’t see anyone within this community referring to themselves or anyone else as thought leaders. There are obviously more prominent members of the CDT community but also many doing great work in advancing the craft of software testing in line with the principles of CDT behind the scenes (i.e. not so vocally via avenues such as social media).

“CDT is more accurately called the “pay attention” or the “don’t do stupid stuff” school of testing”

I’m not sure whether Matt Griscom’s response was designed to provoke CDT community members or stemmed from a genuine misunderstanding of the seven principles of CDT, which are:

  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
  6. Good software testing is a challenging intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

I agree that we should all be paying attention as testers (or as any other contributor to a project). Paying attention to the broader project context is really important if we are to do a great job of testing, but it is still overlooked and too many testers seem to think the software in front of them is the most important (or, worse, only) aspect of the context that they need to care about.

The seven principles of CDT may well also help to decrease the chances of testers spending their time doing “stupid stuff”, but that seems like a good thing to me. Working in alignment with these principles is, to me, a better approach than following standards or “best practices” that fail to account for the unique context of the project I’m working in. I’d argue that many best practices or recommendations from other “schools” actively promote what would in fact be “stupid stuff” in many contexts.

“the value of the phrase “context-driven””

I don’t see “context-driven” as a phrase – we have a clear statement of the seven principles backing what “context-driven testing” is (see above) and the value comes from understanding what those principles mean and performing testing in alignment with them. Rex replied to Matt’s request for enlightenment, saying “‘Marketing’ is the value enjoyed by a small few testers. ‘Schism’ is the price paid by all other testers.” I don’t agree with this, and the use of the term “schism” is exactly the kind of divisive language Rex was accusing CDT community members of using. Does anyone “outside” of the CDT community really “pay a price” for the existence of that community? I just don’t see it.

(The domain that Matt refers to is http://context-driven-testing.com/ and it’s not being actively maintained as far as I’m aware, but it does at least give us a reference point for the principles.)

There – obviously – remain challenges for the context-driven testing community in communicating the very real value and benefits that come from testing viewed through the lens of the CDT principles. It’s great to see the continued efforts of the Association for Software Testing in this regard, with their most recent CAST conference having the theme of “bridging between communities”. I’m also proud to co-organize the AST’s Australian conference, TiCCA19, and look forward to delivering a great programme to a broad representation of the local testing community, with a focus on CDT and the value that approaches built around CDT principles offer.

On the testing community merry-go-round

This tweet from Katrina Clokie started a long and interesting discussion on Twitter:

I was a little surprised to see Katrina saying this as she’s been a very active and significant contributor to the testing community for many years and is an organizer for the highly-regarded WeTest conferences in New Zealand. It seems that her tweet was motivated by her recent experiences at non-testing conferences and it’s been great to see such a key member of the testing community taking opportunities to present at non-testing events.

The replies to this tweet were plentiful and largely supportive of the position that (a) the testing community has been talking about the same things for a decade or more, and (b) it does not reach out to learn from & help educate other IT communities.

Groundhog Day?

Are we, as a testing community, really talking about the same things over and over again? I actually think we are and we aren’t – it really depends on the lens through which you view the question.

As Maria Kedemo replied on the Twitter thread, “What is old to you and me might be new to others” and I certainly think it’s the case that many conference topics repeat the same subject matter year on year – but this is not necessarily a bad thing. A show of hands in answer to “who’s a first-timer?” at a conference usually results in a large proportion of hands going up, so there is always a new audience for the same messages. Provided these messages are sound and valuable, why not repeat them to cover new entrants to the community? What looks like the same talk from a presentation title on a programme could well be very different in content from what it was a decade ago, too. While I’m not familiar with developer conference content, I would imagine that such conferences are not dissimilar in this area, with some foundational developer topics being mainstays of conference programmes year on year.

I’ve been a regular testing conference delegate since 2007 (and a speaker since 2014) and have noticed significant changes in the “topics du jour” over this period. I’ve seen a move away from a focus on testing techniques and “testing as an independent thing” towards topics like quality coaching, testing as part of a whole-team approach to quality (thanks, agile), and the human factors involved in being successful as a tester. At developer-centric conferences, I imagine topics shift frequently with changes in technology and languages, with likely shifts due to agile adoption too.

As you may know, I’m involved with organizing the Association for Software Testing conferences in Australia and I do this for a number of reasons. One is to offer a genuine context-driven testing community conference in this geography (because I see that as a tremendously valuable thing in itself) and another is to build conference programmes offering something different from what I see at other testing events in Australia. The recently-released TiCCA19 conference programme, for example, features a keynote presentation from Lynne Cazaly, who is not directly connected with software testing but will deliver very relevant messages to our audience, drawn mainly from the testing community.

Reach out

I think most disciplines – be they IT, testing or otherwise – fail to capitalize on the potential to learn from others; maybe it’s just human nature.

At least in the context-driven part of the testing world, though, I’ve seen genuine progress in taking learnings from a broader range of disciplines including social science, systems thinking, psychology and philosophy. I personally thank Michael Bolton for introducing me to many interesting topics from these broader disciplines that have helped me greatly in understanding the human aspects involved in testing.

In terms of broadening our message about what we believe good testing looks like, I agree that it’s generally the case that the more public members of the testing community are not presenting at, for example, developer-centric conferences. I have recently seen Katrina and others (e.g. Anne-Marie Charrett) taking the initiative to do so, though, and hopefully more non-testing conferences will see the benefit of including testing/quality talks on their programmes. (I have so far been completely unsuccessful in securing a presentation slot at non-testing conferences via their usual CFP routes.)

So I think it’s a two-way street here – we as testing conference organizers need to be more open to including content from “other” communities and also vice versa.

I hope Katrina continues to contribute to the testing community – her voice would be sorely missed.

PS: I will blog separately about some of the replies to Katrina’s thread that were specifically aimed at the context-driven testing community.

The TiCCA19 conference programme is live!

After a successful 2018 event in the shape of CASTx18, the Association for Software Testing were keen to continue Australian conferences and so Paul Seaman and I took on the job of organizing the 2019 event on the AST’s behalf. We opted to rebrand the conference to avoid confusion with the AST’s well-known CAST event (held annually in North America) and so Testing in Context Conference Australia was born.

It’s been a busy few months in getting to the point where the full line-up for this conference is now live.

Paul and I decided to go for a theme with an Australian bent again, this time “From Little Things, Big Things Grow”. It’s always a milestone in planning a conference when it comes time to open up the call for proposals, and we watched the proposals flowing in, with the usual surge towards the CFP closing date of 31st October.

The response to the CFP was excellent, with 95 proposals coming in from all around the globe. We had ideas from first-time presenters and also some from very seasoned campaigners on the testing conference circuit. My thanks go to everyone who took the time and effort to put forward a proposal.

We were joined by Michele Playfair to help us select a programme from the CFP responses. This was an interesting process (as usual), involving some hard decisions to build what we considered the best conference programme from what was submitted. With only eight track session slots to fill, we unfortunately couldn’t choose all of the excellent talks we were offered.

The tracks we have chosen are hopefully broad enough in topic to be interesting to many testers. Our keynotes come from Ben Simo (making his first trip and conference appearance in Australia!) and local legend, Lynne Cazaly. Rounding out our programme are three full-day workshops showcasing top Melbourne talent, in the shape of Neil Killick, Scott Miles and Toby Thompson. I’m proud of the programme we have on offer and thank all the speakers who’ve accepted our invitation to help us deliver an awesome event.

The complete TiCCA19 line-up is:

Keynotes (March 1st)

  • Ben Simo with “Is There A Problem Here?”
  • Lynne Cazaly with “Try to see it my way: How developers, technicians, managers and leaders can better understand each other”

Tracks (March 1st)

  • “From Prototype to Product: Building a VR Testing Effort”
  • “Tales of Fail – How I failed a Quality Coach role”
  • “Test Reporting in the Hallway”
  • “The Automation Gum Tree”
  • “Old Dog, New Tricks: How Traditional Testers Can Embrace Code”
  • “The Uncertain Future of Non-Technical Testing”
  • Adam Howard with “Exploratory Testing: LIVE!”
  • “The Little Agile Testing Manifesto”

Workshops (February 28th)

  • Neil Killick with “From “Quality Assurance” to “Quality Champion” – How to be a successful tester in an agile team”
  • Scott Miles with “Leveraging the Power of API Testing”
  • Toby Thompson with “Applied Exploratory Testing”

For more details about TiCCA19, including the full schedule and the chance to benefit from significant discounts during the “Little Ripper” period of registration, visit ticca19.org

I hope to see you in Melbourne next year!

CASTx18, context-driven testing fun in Melbourne

Background

Way back in May 2017, I blogged about the fact that I was invited to be the Program Chair for the CASTx18 context-driven testing conference in Melbourne. Fast forward many months – and lots of organizing & planning later – and the conference took place last week. It was great fun and very well-received by its audience.

Pre-conference meetup

A bonus event came about the evening before the conference started when my invited opening keynote speaker, Katrina Clokie, offered to give a meetup-style talk if I could find a way to make it happen. Thanks to excellent assistance from the Association for Software Testing and the Langham Hotel, we managed to run a great meetup and Katrina’s talk on testing in DevOps was an awesome way to kick off a few days of in-depth treatment of testing around CASTx18. (I’ve blogged about this meetup here.)

Conference format

The conference itself was quite traditional in its format, consisting of a first day of tutorials followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

Day 1 – tutorials

The first day of CASTx18 consisted of two concurrent tutorials, viz.

  • Introduction to Coaching Testing (Anne-Marie Charrett & Pete Bartlett)
  • Testing Strategies for Microservices (Scott Miles)

There were good-sized groups in both tutorials and presenters and students alike seemed to have enjoyable days. My thanks to the presenters for putting together such good-quality content to share and to the participants for making the most of the opportunity.

After the tutorials, we held a cocktail reception for two hours to which all conference delegates were invited as well as other testers from the general Melbourne testing community. This was an excellent networking opportunity and it was good to see most of the conference speakers in attendance, sharing their experiences with delegates. The friendly, relaxed and collaborative vibe on display at this reception was a sign of things to come!

Day 2 – conference report

The conference was kicked off at 8.30am with an introduction by Ilari Henrik Aegerter (board member of the AST) and then by me as conference program chair, followed by Richard Robinson outlining the way open season would be facilitated after each track talk.

It was then down to me to introduce the opening keynote, which came from Katrina Clokie (of Bank of New Zealand), with “Broken Axles: A Tale of Test Environments”. Katrina talked about when she first started as a test practice manager at BNZ, keen to find out what was holding testing back across the bank – the consistent response was test environments. She encouraged the teams to start reporting descriptions of issues and their impact (how many hours they were impacted for and how many people were impacted). It turned out the teams were good at complaining but not so good at explaining to the business why these problems really mattered. Moving to expressing the impact in terms of dollars seemed to help a lot in this regard! She noted that awareness was different from the ability to take action, so visualizations of the impact of test environment problems for management, along with advocacy for change (using the SPIN model), were required to get things moving. All of these tactics apply to “fixing stuff that’s already broken”, so she then moved on to more proactive measures being taken at BNZ to stop or detect test environment problems before their impact becomes so high. Katrina talked about monitoring and alerting, noting that this needs to be treated quite differently in a test environment than in the production environment. She stumbled across the impressive Rabobank 3-D model of IT systems dependencies and thought it might help to visualize dependencies at BNZ but, after she identified 54 systems, this idea was quickly abandoned as being too complex and time-consuming. Instead of mapping all the dependencies between systems, she has built dashboards that map the key architectural pieces and show the status of those. This was a nice opening keynote (albeit a little short at 25 minutes), covering a topic that seldom makes its way onto conference programmes. The 20 minutes of open season indicated that problems with test environments are certainly nothing unique to BNZ!

A short break followed before participants had a choice of two track sessions, in the shapes of Adam Howard (of New Zealand’s answer to eBay, TradeMe) with “Automated agility!? Let’s talk truly agile testing” and James Espie (of Pushpay) with “Community whack-a-mole! Bug bashes, why they’re great and how to run them effectively”. I opted for James’s talk and he kicked off by immediately linking his topic to the conference theme, suggesting that involving other people in testing (via bug bashes) is just like Burke and Wills having a team around them to enable them to be successful. At Pushpay, they run a bug bash for every major feature they release – the group consists of 8-18 people (some of whom have not seen the feature before) testing for 60-90 minutes, around two weeks before the beta release of the feature. James claimed such bug bashes are useful for a number of reasons: bringing fresh eyes (preventing snowblindness), bringing a diversity of brains (different people know different things) and bringing diversity of perspectives (quality means different things to different people). Given his experience of running a large number of bug bashes, James shared some lessons learned: 1) coverage (provide some direction or you might find important things have been left uncovered, e.g. everyone tested on the same browser), 2) keeping track (don’t use a formal bug tracking system like JIRA; use something simpler like Slack, a wiki page or a Google sheet), 3) logistics (be ready, with the right hardware, software and test data in place, as well as internet, wi-fi, etc.), 4) marketing (it’s hard to get different people each time; advertise in at least three different ways, “shoulder tap” invitations work well, and provide snacks – the “hummus effect”!), and 5) triage (you might end up with very few bugs or a very large number, potentially with a lot of duplicates, so consider triaging “on the go” during the running of the bug bash). James noted that for some features, the cost of setting up and running a bug bash is not worth it, and he also mentioned that these events need to be run with sufficient time between them so that people don’t get fatigued or simply tired of the idea. He highlighted some bonuses, including accidental load testing, knowledge sharing and team building. This was a really strong talk, full of practical takeaways, delivered confidently and with some beautiful slide work (James is a cartoonist). The open season exhausted all of the remaining session time, always a good sign that the audience has been engaged and interested in the topic.

A morning tea break followed before participants again had a choice of two track sessions, either “Journey to continuous delivery” from Kim Engel or “My Journey as a Quality Coach” from Lalitha Yenna (of Xero). I attended Lalitha’s talk, having brought her into the programme as a first-time presenter. I’d reviewed Lalitha’s talk content in the weeks leading up to the conference, so I was confident in the content but unsure of how she’d deliver it on the day – I certainly need not have worried! From her very first opening remarks, she came across as confident and calm, pacing herself perfectly and using pauses very effectively – the audience would not have known it was her first time, and her investment in studying other presenters (via TED talks in particular) seriously paid off. Lalitha’s role was an experiment for Xero as they wanted to move towards collective ownership of quality. She spent time observing the teams and started off by “filling the gaps” as she saw them. She met with some passive resistance as she did this, making her realize the importance of empathy. She recommended the book The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever as it helped her become more competent as she coached the teams around her. She noted that simply removing the “Testing” column from their JIRA boards had a big effect in terms of pushing testing left in their development process. Lalitha was open about the challenges she faced and the mistakes she’d made. Initially, she found it hard to feel or show her accomplishments, later realizing that she needed instead to quantify her learnings. She noted that individual coaching was sometimes required and that old habits still came back sometimes within the teams (especially under times of stress). She also realized that she gave the teams too much education early on and moved to a “just in time” model of educating them based on their current needs and maturity. A nice takeaway was her DANCEBAR story kickoff mnemonic: Draw/mindmap, Acceptance criteria, Non-functional requirements, think like the Customer, Error conditions, Business rules, Automation, Regression. In summary, Lalitha said her key learnings on her journey so far in quality coaching were persistence, passion, continuous learning, empathy, and asking lots of questions. This was a fantastic 30-minute talk from a first-time presenter, so confidently delivered, and she also dealt well with 15 minutes or so of open season questioning.

Lunch was a splendid buffet affair in the large open area outside the Langham ballroom and it was great to see the small but engaged crowd networking so well (we looked for any singletons to make them feel welcome, but couldn’t find any!).

The afternoon gave participants a choice of either two track sessions or one longer workshop before the closing keynote. The first of the tracks on offer came from Nicky West (of Yambay) with “How I Got Rid of Test Cases”, with the concurrent workshop courtesy of Paul Holland (of Medidata Solutions) on “Creativity, Imagination, and Creating Better Test Ideas”. I chose Nicky’s track session and she kicked off by setting some context. Yambay is a 25-person company that had been using an outsourced testing service, running their testing via step-by-step test cases. The outsourcing arrangement was stopped in 2016, with Nicky being brought in to set up a testing team and process. She highlighted a number of issues with using detailed test cases, including duplicating detailed requirements, lack of visibility to the business and reinforcement of the fallacy that “anyone can test”. When Yambay made the decision to move to agile, this also inspired change in the testing practice. Moving to user stories with acceptance criteria was a quick win for the business stakeholders, and acceptance criteria became the primary basis for testing (with the user story then being the single source of truth in terms of both requirements and testing). Nicky described some other types of testing that take place at Yambay, including “shakedown” tests (which are documented via mindmaps, marked up to show progress and then finally exported as Word documents for external stakeholders), performance & load tests (which are automated) and operating system version update tests (which are documented in the same way as shakedown tests). In terms of regression testing, “product user stories” are used plus automation (using REST Assured for end-to-end tests), re-using user stories to form test plans. Nicky closed by highlighting efficiency gains from her change of approach, including maintaining just one set of assets (user stories), time savings from not writing test cases (and more time to perform exploratory testing), and not needing a test management tool (saving both time and money). This was a handy 40-minute talk with a good message. The idea of moving away from a test case-driven testing approach shouldn’t have been new for this audience, but the ten-minute open season suggested otherwise and it was clear that a number of people got new ideas from this talk.
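For readers who haven’t come across it, here’s a minimal sketch of the kind of REST Assured end-to-end check Nicky was describing – note that the base URI, endpoint and response fields below are hypothetical illustrations, not Yambay’s actual API:

```java
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

// A minimal sketch of a REST Assured end-to-end API check.
// The base URI, endpoint and response fields are hypothetical.
public class DeviceStatusTest {

    @Test
    public void activeDeviceReportsActiveState() {
        given()
            .baseUri("https://api.example.com")        // hypothetical service under test
            .accept("application/json")
        .when()
            .get("/devices/{id}/status", "device-123") // path parameter filled in by REST Assured
        .then()
            .statusCode(200)                           // the service responded successfully...
            .body("state", equalTo("ACTIVE"));         // ...and the payload matches expectations
    }
}
```

Checks like this live in version control alongside the product code, which fits neatly with keeping the user story as the single source of truth rather than maintaining a separate test management tool.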

A short break followed, before heading into the final track session (or the continuation of Paul’s workshop). I spent the hour with Pete Bartlett (of Campaign Monitor) and “Flying the Flag for Quality as a 1-Man-Band”. Pete talked about finding himself in the position of being the only “tester” in his part of the organization and the tactics he used to bring quality across the development cycle. Firstly, he “found his bearings” by conducting surveys (to gain an understanding of what “quality” meant to different people), meeting with team leads and measuring some stuff (both to see if his changes were having an impact and to justify what he was doing). Then he started creating plans based on the strengths and weaknesses identified in the surveys, with clear, achievable goals. Executing on those plans meant getting people on board, continuing to measure and refine, and being vocal. Pete also enlisted some “Quality Champions” across the teams to help him out with sending the quality message. This good 45-minute talk was jam-packed, though it maybe spent a little too long on the opening points and felt slightly rushed towards the end. The open season fully used the rest of his session.

With the track sessions over, it was time for the afternoon tea break and the last opportunity for more networking.

It was left to James Christie (of Claro Testing) to provide the closing keynote, “Embrace bullshit? Or embrace complexity?”, which I introduced. I invited James based on conversations I’d had with him at a conference dinner in Dublin some years ago, and his unique background in auditing as well as testing gives him a very different perspective. His basic message in the keynote was that we can either continue to embrace bullshit jobs that don’t actually add much value, or we can become more comfortable with complexity and all that it brings with it. There was way too much content in his talk, meaning he used the whole hour before we could break for a few questions! This was an example of where less would have been more; half the content would have made a great talk. The only way to summarize this keynote is to provide some quotes and links to recommended reading – there is so much good material to follow up on here:

  • Complex systems are always broken. Success and failure are not absolutes. Complex systems can be broken but still very valuable to someone.
  • Nobody knows how a socio-technical system really works.
  • Why do accidents happen? Heinrich domino model, Swiss cheese model, Systems Theory
  • Everything that can go wrong usually goes right, with a drift to failure.
  • The root cause is just where you decide to stop looking.
  • Testing is exploring the unknowns and finding the differences between the imagined and the found.
  • Safety II (notable names in this area: Sidney Dekker, John Allspaw, Noah Sussman, Richard Cook)
  • Instead of focusing on accidents, understand why systems work safely.
  • Cynefin model (Dave Snowden, Liz Keogh)
  • John Gall, Systemantics: How Systems Work and Especially How They Fail
  • Richard Cook, How Complex Systems Fail
  • Steven Shorrock & Claire Williams, Human Factors & Ergonomics in Practice

The conference was closed out by a brief closing speech from Ilari, during which he mentioned the AST’s kind US$1000 donation to the EPIC TestAbility Academy, the software testing training programme for young adults on the autism spectrum run by Paul Seaman and me through EPIC Assist.

Takeaways

  • The move away from embedded testers in agile teams seems to be accelerating, with many companies adopting the test coach approach of operating across teams to help developers become better testers of their own work. There was little consistency on display here, though, about the best model for test coaching. I see this as an interesting trend and still see a role for dedicated testers within agile teams, but with a next “level” of coaching/architect role operating across teams in the interests of skills development, consistency and helping to build a testing community across an organization.
  • A common thread was fewer testers in organizations, with testing now seen as more of a team responsibility thanks to the widespread adoption of agile approaches to software development. The future for “testers as test case executors” looks grim.
  • The “open season” discussion time after each presentation was much better than I’ve seen at any other conference using the K-cards system. The open seasons felt more like those at peer conferences and perhaps the small audience enabled some people to speak up who otherwise wouldn’t have.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue).
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for again taking the chance on running such an event.

With thanks

I’d like to take the opportunity to publicly express my thanks to:

  • The AST for putting their trust in me (along with Paul Seaman as Assistant Program Chair) to select the programme for this conference,
  • The speakers for sharing their stories, without you there is no content to create a conference,
  • Valerie Gryfakis, Roxane Jackson and the wonderful event staff at the Langham for their smiling faces and the wonderfully smooth running of the conference,
  • Paul Seaman for always being there for me when I needed advice or assistance, and
  • The AST for their donation to the EPIC TestAbility Academy.

The only trouble with running a successful and fun event is the overwhelming desire to do it all again, so watch this space…

Pre-CASTx18 meetup with Katrina Clokie

With Katrina Clokie being one of my invited keynote speakers for the CASTx18 conference, she kindly offered to give a meetup-style talk on the evening before the conference. After some searching around for a suitable venue, the AST kindly sponsored the event as part of their deal with the Langham Hotel, so I could then advertise the event. I used a free Eventbrite account and easily sold out the meetup simply via promotion on Twitter and LinkedIn.

View from my room at the Langham Hotel

When it came to the evening of Tuesday 27th February, the lovely Flinders Room in the Langham had been nicely laid out and keen participants started arriving early and partaking of the fine food and beverages on offer. We left a good half-hour for people to arrive and network before kicking off the meetup at 6pm.

Ilari Henrik Aegerter formally initiated proceedings, starting with an acknowledgement of country to the traditional owners of the land on which the event was being held and then talking about the mission and activities of the AST. Next up, I introduced Katrina and she took the stage to a crowd of about 25 keen listeners.

Katrina spoke for about 45 minutes, sharing four first-person experience stories and referencing them back to her book, “A Practical Guide to Testing in DevOps”. Her work in a DevOps environment within a large bank has given her plenty of opportunities to gain experience with different teams at different stages of their DevOps journey. She made a deliberate choice to include a story of failure too – always a good idea, as there are often more learnings to be had from failure than from success. Katrina’s easy presentation style makes her content both engaging and readily consumable, with great practical takeaways. The lengthy Q&A session after her talk indicated that many people found the content relevant and went away with ideas to try in their own workplaces.

Katrina giving her presentation

We still had the room and catering for another half-hour or so after Katrina’s talk, so there were some excellent discussions and further questions for Katrina before we wrapped up. The feedback from participants was overwhelmingly positive, both in terms of the awesome content from Katrina’s talk and also the venue facilities, service & catering.

My personal thanks go to Katrina for offering to do a talk of this nature for the Melbourne testing community and also to the AST for making it happen within such a beautiful venue (with a big shout out to Valerie Gryfakis for doing all the leg work with the hotel).

(If you haven’t already bought a copy, Katrina’s book is an excellent resource for anyone involved in modern development projects, packed full of advice and examples, and is very reasonably priced – check it out on LeanPub. I’ve previously written a review of the book on this blog too.)

Attending and presenting at CAST 2017 (Nashville)

Back in March, I was delighted to learn that my proposal to speak at the Conference of the Association for Software Testing in Nashville had been accepted, and then came the usual nervous & lengthy wait between acceptance and the actual event.

It was a long trip from Melbourne to Nashville for CAST 2017 – this would be my first CAST since the 2014 event in New York and also my first time as a speaker at their event. This was the 12th annual conference of the AST, which took place on August 16, 17 & 18 and was held at the totally ridiculous Gaylord Opryland Resort, a 3000-room resort and convention centre with a massive indoor atrium (and river!) a few miles outside of downtown Nashville. The conference theme was “What the heck do testers do anyway?”

The event drew a crowd of 160, mainly from the US but with a number of internationals too (I was the only participant from Australia, unsurprisingly!).

My track session was “A Day in the Life of a Test Architect”, a talk I’d first given at STARWest in Anaheim in 2016, and I was up on the first conference day, right after lunch. I arrived early to set up and the AV all worked seamlessly so I felt confident as my talk kicked off to a nicely filled room with about fifty in attendance.

I felt like the delivery of the talk itself went really well. I’d rehearsed the talk a few times in the weeks before the conference and I didn’t forget too many of the points I meant to make. The talk took about 35 minutes before the “open season” started – this is the CAST facilitated Q&A session using the familiar “K-cards” system (borrowed from peer conferences but now a popular choice at bigger conferences too). The questions kept coming and it was an interesting & challenging 25 minutes to field them all. My thanks to Griffin Jones, who facilitated my open season, and to the audience for their engagement and thoughtful, respectful questioning.

A number of the questions during open season related to my recent volunteer work with Paul Seaman in teaching software testing to young adults on the autism spectrum. My mentor, Rob Sabourin, attended my talk and suggested afterwards that a lightning talk about this work would be a good idea to share a little more about what was obviously a topic of some interest to this audience. And so it was that I found myself unexpectedly signing up to do another talk at CAST 2017!

Even with only a five-minute slot, giving the lightning talk was a worthwhile experience and it led to a number of good conversations afterwards, resulting in some connections to follow up and some resources to review. Thanks to all those who offered help and useful information as a result of this lightning talk – it’s greatly appreciated.

With my talk(s) over, the Welcome Reception was a chance to relax with friends old and new over an open bar. A photo booth probably seemed like a good idea at the time, but people always get silly as evidenced by the following three clowns (viz. yours truly, Rob Sabourin and Ben Simo) who got the ball rolling by being the first to take the plunge:

I thought the quality of the keynotes and track sessions at CAST 2017 was excellent and I didn’t feel like I attended any bad talks at all. Of course, there are always those talks that stand out for various reasons, and two tracks really deserve a shout-out.

It’s not every conference where you walk into a session to find the presenter dressed in a pilot’s uniform and asking you to take your seats in preparation for take-off! But that’s what we got with Alexandre Bauduin (of House of Test, Switzerland) and his talk “Your Safety as a Boeing 777 Passenger is the Product of a ‘Big Gaming Rig'”. Alexandre used to be an airline pilot and his talk was about the time he spent working for CAE in Montreal, the world’s leading manufacturer of simulators for the aviation and medical industries. He was a certification engineer, test pilot and then test strategy lead for the company’s Boeing 777 simulator and spent in excess of 10,000 hours test flying it. He mentioned that the simulator had 10-20 million lines of code and 1-2 million physical parts – amazing machinery. His anecdotes about the testing challenges were entertaining but also very serious, and it was clear that the marriage of his actual pilot skills with his testing skills had made for a strong combination in terms of finding bugs that really mattered in this critical simulator. This was a fantastic talk delivered with style and confidence; Alexandre is the sort of presenter you could listen to for hours. An inspired pick by the program committee.

Based purely on the title, I took a punt on Chris Glaettli (of Thales, Switzerland) with “How we tested Gotthard Base Tunnel to start operation one year early” – and again this was an inspired move! Chris was part of the test team for various systems in the 57km Gotthard Base Tunnel (the longest and deepest rail tunnel in the world), which creates a “flat rail” route through the Swiss Alps towards Italy, and it was fascinating to hear about the challenges of being involved in such a huge engineering project, both in terms of construction and test environments (and some of the factors they needed to consider). Chris delivered his talk very well and he’d clearly made some very wise choices along the way to help the project be delivered early. In such a regulated environment, he’d done a great job of working closely with auditors to keep the testing documentation to a minimum while still meeting their strict requirements. This was another superb session, classic conference material.

I noted that some of the “big names” in the context-driven testing community were not present at the conference this year and, perhaps coincidentally, there didn’t seem to be as much controversy or “red carding” during open seasons. For me, the environment seemed much friendlier and safer for presenters than I’d seen at the last CAST I attended (and, as a first-time presenter at CAST, I very much appreciated that feeling of safety). It was also interesting to learn that the theme for the 2018 conference is “Bridging Communities” and I see this as a very positive step for the CDT community which, rightly or wrongly, has earned a reputation for being disrespectful and unwilling to engage in discussion with those from other “schools” of testing.

I’d like to take this chance to thank Rob Sabourin and the AST program committee for selecting my talk and giving me the opportunity to present at their conference. It was a thoroughly enjoyable experience.

We’re the voice

A few things have crossed my feeds in the last couple of weeks around the context-driven testing community, so I thought I’d post my thoughts on them here.

It’s always good to see a new edition of Testing Trapeze magazine and the April edition was no exception in providing some very readable and thought-provoking content. In the first article, Hamish Tedeschi wrote on “Value in Testing” and made this claim:

Testing communities bickering about definitions of inane words, certification and whether automation is actually testing has held the testing community back

I don’t agree with Hamish’s opinion here and wonder what basis there is for claiming that these things (or indeed any others) have “held the testing community back” – held it back from what, compared to some unknowable state it might otherwise have reached?

Michael Bolton tweeted shortly after this publication went live (but not in response to it) that:

Some symptoms [of testers who don’t actually like testing] include fixation on tools (but not business risk); reluctance to discuss semantics and why chosen words matter in context.

It seems to be a common – and increasingly frequent – criticism of those of us in the context-driven testing community that we’re overly focused on “semantics” (or “bickering about definitions of inane words”). We’re not just talking about the meaning of words for the sake of it, but rather to “make certain distinctions clear, with the goal of reducing the risk that someone will misunderstand—or miss—something important” (Michael Bolton again, [1]).


I believe these distinctions have led to less ambiguity in the way we talk about testing (at least within this community) and that doesn’t feel like something that would hold us back – rather the opposite. As an example, the introduction (and refinement) of “testing” and “checking” (see [2]) was such an important one: it allows for much easier conversations with many different kinds of stakeholders about the differences, in a way that the terminology of “validation” and “verification”, for example, really didn’t.

While I was writing this blog post, Michael published a post in which he mentions this subject again (see [3]):

Speaking more precisely costs very little, helps us establish our credibility, and affords deeper thinking about testing

Thanks to Twitter, I then stumbled across an interview between Rex Black and Joe Colantonio, titled “Best Practices Vs Good Practices – Ranting with Rex Black” (see [4]). In this interview, there are some less than subtle swipes at the CDT community, e.g. “Rex often sees members of the testing community take a common phrase and somehow impart attributes to it that no one else does.” The example used for the “common phrase” throughout the interview is “best practices” and, of course, the very tenets of CDT call the use of this phrase into question.

In the interview, Rex offers up what is billed as an awesome rebuttal to use the next time you find yourself attempting to explain best practices to people: “Think pattern, not recipe.” The interview then goes on to ask:

How can some people have such an amazingly violent reaction to such an anodyne phrase? And why do they think it means “recipe” when it’s clearly not meant that way?

In case you’re unfamiliar with the word, “anodyne” is defined in the Oxford English Dictionary as meaning “Not likely to cause offence or disagreement and somewhat dull”. So, the suggestion is that the term “best practices” is unlikely to cause disagreement – and therein lies the exact problem with using it. Rex suggests that we “take a common phrase [best practices] and somehow impart attributes to it that no one else does” (emphasis is mine). The fact that he goes on to offer a rebuttal to misuse of the term suggests to me that the common understanding of what it means is not so common. Surely it’s not too much of a stretch to see that some people might read “best” as meaning “there are no better”, thus taking so-called “best practices” and applying them in contexts where they simply don’t make any sense.

Still in my Twitter feed, it was good to see James Christie continuing his work in standing against the ISO 29119 software testing standard. You might remember that James presented about this at CAST 2014 (see [5]) and this started something of a movement against the imposition of a pointless and potentially damaging standard on software testing – the resulting “Stop 29119” campaign was the first time I’d seen the CDT community coming together so strongly and voicing its opposition to something in such a united way (I blogged about it too, see [6]).

It appears that some of our concerns were warranted, with the first job advertisements now starting to appear that demand experience in applying ISO 29119.

James recently tweeted a link to a blog post (see [7]):

Has this author spoken to any #stop29119 campaigners? There’s little evidence of understanding the issues.
http://intland.com/blog/agile/test-management/iso-29119-testing-standard-why-the-controversy/ … #testing

Read the blog post and make of it what you will. This part stood out to me:

Innitally there was controversy over the content of the ISO 29119 standard, with several organizations in opposition to the content (2014).  Several individuals in particular from the Context-Driven School of testing were vocal in their opposition, even beginning a petition against the new testing standards, they gained over a thousand signatures to it.  The opposition seems to have been the result of a few individuals who were ill – informed about the new standards as well as those that felt excluded from the standards creation process

An interesting take on our community’s opposition to the standard!

To end on a wonderfully positive note, I’m looking forward to attending and presenting at CAST 2017 in Nashville later in the year – a gathering of our community is always something special and the chance to exchange experiences & opinions with the engaged folks of CDT is an opportunity not to be missed.

We’re the voices in support of a context-driven approach to testing, let’s not be afraid to use them.

References

[1] Michael Bolton “The Rapid Software Testing Namespace” http://www.developsense.com/blog/2015/02/the-rapid-software-testing-namespace/

[2] James Bach & Michael Bolton “Testing and Checking Refined” http://www.satisfice.com/blog/archives/856

[3] Michael Bolton “Deeper Testing (2): Automating the Testing” http://www.developsense.com/blog/2017/04/deeper-testing-2-automating-the-testing/

[4] Rex Black and Joe Colantonio “Best Practices Vs Good Practices – Ranting with Rex Black” https://www.joecolantonio.com/2017/04/13/best-practices-rant/

[5] James Christie “Standards – Promoting Quality or Restricting Competition” (CAST 2014)

[6] Lee Hawkins “A Turning Point for the Context-driven Testing Community” https://therockertester.wordpress.com/2014/08/21/a-turning-point-for-the-context-driven-testing-community/

[7] Eva Johnson “ISO 29119 Testing Standard – Why the controversy?” https://intland.com/blog/agile/test-management/iso-29119-testing-standard-why-the-controversy/

Being part of a community

There has been a lot of Twitter activity about the CDT community in the last couple of weeks. Katrina Clokie also penned an excellent blog post, A community discussion, and there seem to be a lot of unresolved disputes between different folks representing different parts of the testing community. Some of this just feels like the normal level of background noise spiking for a short time. It’s not the first time a storm of this type has blown up around the CDT community and it won’t be the last.

I particularly liked Katrina’s statement that “I strive to be approachable, humble and open to questions”, as this is also my own approach to both being a member of a testing community and also helping to bring others into it.

I have been heavily involved in the TEAM meetup, building a new testing community in Melbourne, and also in helping to make the Australian Testing Days conference happen (though I will not be involved in the future of the event). I write this blog in the hope of sharing my ideas and opinions, and maybe bringing readers into my community as a result.

I’ve chosen not to add to the noise by responding to the Twitter commentary around the CDT community right now, but my lack of contribution to the discussion reflects neither approval nor disapproval of the behaviour of any member of any of the communities that consider themselves CDT.

As I’ve blogged before, our Values and principles define us.


Software testing: craft or engineering?

I like the fact that Rex Black shares his thoughts every month through his free webinar series, even if I often don’t agree with his content. Hearing what other people think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good software testing looks like.

I recently attended Rex’s webinar titled “Why does software quality still suck?” and his premise was that software quality is abysmal and always has been.

This was one of the webinars where his content was very far away from my own ideas about software testing. Let’s start with the premise that software quality is bad and that’s the way it’s always been. Is it really still bad? Is it as bad as it was 20 years ago? Is it better than it was 5 years ago? I don’t know of a way of measuring quality such that these questions could be meaningfully answered. What I do know is that the way software is developed, deployed and consumed has changed a great deal, but much of the teaching around how to test that software has its roots in the past. Maybe software quality still sucks because the testing industry (in general) has failed to adapt to the changes in the way software is built, deployed and consumed?

Rex noted that manufacturing industries are capable of six sigma levels of quality (roughly 3.4 defects per million opportunities), yet fairly recent Capers Jones research suggests that C++ and Java code typically contains around 13,000 defects per million lines – so “software quality has not matured to true engineering yet”. There is the implicit suggestion here that building software is like building widgets, so we in the software business should be able to achieve six sigma levels of quality in the code we write and deliver as software to our customers. In repeatable production-line manufacturing processes, it’s not too hard to see how you could whittle down the problems during production to achieve very low levels of defects. However, building software is not a repeatable production-line process; every piece of software is different. It’s also harder to define what a defect means in software, and it’s not clear that the presence of more defects necessarily means poorer quality in the opinion of the customer.
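Taking both of those figures at face value (and noting that they measure quite different things – defects per million manufacturing opportunities versus defects per million lines of code), the gap being pointed to looks like this:

```latex
\frac{13\,000 \ \text{defects per million lines}}{3.4 \ \text{defects per million opportunities}} \approx 3800
```

That’s over three orders of magnitude, which is the crux of the “not yet true engineering” claim – though, as noted above, the two units are hardly comparable.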


Let’s suppose for argument’s sake that software quality does still suck, what are the causes of that? Rex had a few broad categories of causes, a couple of which I will mention here, viz. under-qualified software professionals and a failure to follow best practices.

In terms of under-qualified software professionals being a cause of bad software, he said “certifications are a start, especially if we make them omnipresent” and he noted that such certifications need to be credible & valuable and also need to be seen to be credible & valuable. When it comes to testing, there is no omnipresent certification (though perhaps ISTQB is coming frighteningly close) and I remain unconvinced that there should be. The link from software testers not being certified to software sucking is a seriously tenuous one as far as I’m concerned. Bad software is not just a product of bad testing and the best testers on earth can’t make a bad piece of software good if the environment isn’t right for them to help in doing so. What would help – in a general sense – is highly skilled software testers and there are many ways of acquiring skills outside of any certification scheme. Let’s not confuse qualification with skill.

Making the link between sucky software and a failure to follow best practices was one of Rex’s main points in this webinar. His claim was that “if we applied best practices, software would suck a lot less” and he capped it off with the bold statement that “Failure to follow best practices [in software development and testing] is negligence” (in the legal sense). This was again supported by references to manufacturing industries and the idea that if we could move software development to being true engineering, then we’d be in a position where following best practices was not only the norm, but was a legal requirement. As is common knowledge, I associate myself with the context-driven school of testing and one of their principles is “There are good practices in context, but there are no best practices.” So does this mean testers following context-driven principles are contributing to the software they produce being of bad quality? I see no evidence of that and my experience suggests that the exact opposite happens when testers move to more CDT styles of thinking, focusing on skills and applying appropriate approaches and techniques that make sense in the context of the project they’re contributing to.

Rex made the comment a few times that we’re still in the “craft” stage in terms of quality when it comes to building software and that we need to strive to get to the “true engineering” stage. When I think of a “craftsman”, I imagine a person who is very skilled at doing something (words like “bespoke”, “excellence” and “experience” all come to mind) and software testing is such a thing – the difference between a tester who is truly skilled in this craft and one who is inexperienced or lacks the right skills is enormous, in terms of the contribution they can make to projects and specifically to helping software suck less. There are also great benefits to taking an engineering approach to our work, of course, but I don’t see it as a continuum from craft to engineering; I see one complementing the other.

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/ and the one I refer to in the above post can be listened to in full at http://rbcs-us.com/resources/webinars/why-does-software-quality-still-suck/)