Category Archives: Context-driven testing

All testing is exploratory: change my mind

I’ve recently returned to Australia after several weeks in Europe, mainly for pleasure with a small amount of work along the way. Catching up on some of the testing-related chatter on my return, I spotted that Rex Black repeated his “Myths of Exploratory Testing” webinar in September. I respect the fact that he shares his free webinar content every month and, even though I often find myself disagreeing with his opinions, hearing what others think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good testing looks like.

Rex started off with his definition of exploratory testing (ET), viz.

A technique that uses knowledge, experience and skills to test software in a non-linear and investigatory fashion

He claimed that this is a “pretty widely shared definition of ET” but I don’t agree. The ISTQB Glossary uses the following definition:

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

The definition I hear most often is something like the following James Bach/Michael Bolton effort (which they used until 2015):

An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project

They have since deprecated the term “exploratory testing” in favour of simply “testing” (from 2015), defining testing as:

Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.

Rex went on to say that the test basis and test oracles in ET “are primarily skills, knowledge and experience” and any such testing is referred to as “experience-based testing” (per the ISTQB definition, viz. “Testing based on the tester’s experience, knowledge and intuition.”). Experience-based testing that is investigatory is then deemed to be exploratory. I have several issues with this. There is an implication here that ET involves testing without using a range of oracles that might include specifications, user stories, or other more “formal” sources of what the software is meant to do. Rex reinforces this when he goes on to say that ET is a form of validation and “may tell us little or nothing about conformance to specification because the specification may not even be consulted by the tester”. Also, I can’t imagine any valuable testing that doesn’t rely on the tester’s skills, knowledge and experience so it seems to me that all testing would fall under this “experience-based testing” banner.

The first myth Rex discussed was the “origin myth”, that ET was invented in the 1990s in Silicon Valley or at least that was when a “name got hung on it” (e.g. by Cem Kaner). He argued instead that it was invented by whoever wrote the first program, that IBM were doing it in the 1960s, that the independent test teams in Fred Brooks’s 1975 book The Mythical Man-Month were using ET, and that “error guessing” as introduced by Glenford Myers in the classic book The Art of Software Testing sounds “a whole lot like a form of ET”. The History of Definitions of ET on James Bach’s blog is a good reference in this regard, in my opinion. While I agree that programmers have been performing some kind of investigatory or unscripted testing in their development and debugging activities for as long as programming has been a thing, it’s important that we define our testing activities in a way that makes the way we talk about what we do both accurate and credible. I see the argument for suggesting that error guessing is a form of ET, but it’s just one tactic that might be employed by a tester skilled in the much broader approach that is ET.

The next myth Rex discussed was the “completeness myth”, that “playing around” with the software is sufficient to test it. He mentioned that there is little education around testing in Software Engineering degrees, so people don’t understand what testing can and cannot do, which leads to myths like this. I agree that there is a general lack of understanding in our industry of how important structured ET is as part of a testing strategy, though I haven’t personally heard this myth being espoused anywhere recently.

Next up was the “sufficiency myth”, that some teams bring in a “mighty Jedi warrior of ET & this person has helped [them] to find every bug that can matter”. He mentioned a study from Microsoft where they split their testing groups for the same application, with one using ET (and other reactive strategies) only, while the other used pre-designed tests (including automated tests) only. The sets of bugs found by these two teams were partially but not fully overlapping, hence proving that ET alone is not sufficient. I’m confident that even if the groups had been divided up and did the same kind of testing (be it ET or pre-designed), the sets of bugs from the two teams would also have been partially but not fully overlapping (there is some evidence to support this, albeit from a one-off small case study, from Aaron Hodder & James Bach in their article Test Cases Are Not Testing)! I’m not sure where this myth comes from; I’ve not heard it from anyone in the testing industry and haven’t seen a testing strategy that relies solely on ET. I do find that using ET as an approach can really help in focusing on finding bugs that matter, though, and that seems like a good thing to me.

Rex continued with the “irrelevance myth”, that we don’t have to worry about ET (or, indeed, any validation testing at all) because of the use of ATDD, BDD, or TDD. He argued that all of these approaches are verification rather than validation, so some validation is still relevant (and necessary). I’ve seen this particular myth and, if anything, it seems to be becoming more prevalent over time, especially in the CI/CD/DevOps world where automated checks (of various kinds) are viewed as sufficient gates to production deployment. Again, I see this as a lack of understanding of what value ET can add, and it’s on us as a testing community to help people understand that value (and explain where ET fits into these newer, faster deployment approaches).

The final myth that Rex brought up was the “ET is not manageable myth”. In dispelling this myth, he mentioned the Rapid Reporter tool, timeboxed sessions, and scoping using charters (where a “charter is a set of one or more test conditions”). This was all quite reasonable, basically referring to session-based test management (SBTM) without using that term. One of his recommendations seemed odd, though: “record planned session time versus actual [session] time” – sessions are strictly timeboxed in an SBTM situation, so planned and actual time are always the same. In my experience, sticking to the timebox is one of the more difficult aspects of SBTM for testers (at least initially), but it is critical if ET is to be truly manageable.

Moving on from the myths, Rex talked about “reactive strategies” in general, suggesting they were suitable in agile environments but that we also need risk-based strategies and automation in addition to ET. He said that the reliance on skills and experience when using ET (in terms of the test basis and test oracle) means that heuristics are a good way of triggering test ideas, and he made the excellent point that all of our “traditional” test techniques still apply when using ET.

Rex’s conclusion was also sound, “I consider (the best practice of) ET to be essential but not sufficient by itself” and I have no issue with that (well, apart from his use of the term “best practice”) – and again don’t see any credible voices in the testing community arguing otherwise.

The last twenty minutes of the webinar was devoted to Q&A from both the online and live audience (the webinar was delivered in person at the STPCon conference). An interesting question from the live audience was “Has ET finally become embedded in the software testing lifecycle?” Rex responded that the “religious warfare… in the late 2000s/early 2010s has abated, some of the more obstreperous voices of that era have kinda taken their show off the road for various reasons and aren’t off stirring the pot as much”. This was presumably in reference to the somewhat heated debate going on in the context-driven testing community in that timeframe, some of which was unhelpful but much of which helped to shape much clearer thinking around ET, SBTM and CDT in general in my opinion. I wouldn’t describe it as “religious warfare”, though.

Rex also mentioned in response to this question that he actually now sees the opposite problem in the DevOps world, with “people running around saying automate everything” and the belief that automated tests by themselves are sufficient to decide when software is worthy of deployment to production. In another reference to Bolton/Bach, he argued that the “checking” and “testing” distinction was counterproductive in pointing out the fallacy of “automate everything”. I found this a little ironic since Rex constantly seeks to make the distinction between validation and verification, which is very close to the distinction that testing and checking seeks to draw (albeit in much more lay terms as far as I’m concerned). I’ve actually found the “checking” and “testing” terminology extremely helpful in making exactly the point that there is “testing” (as commonly understood by those outside of our profession) that cannot be automated, it’s a great conversation starter in this area for me.

One of Rex’s closing comments was again directed to the “schism” of the past with the CDT community, “I’m relieved that we aren’t still stuck in these incredibly tedious religious wars we had for that ten year period of time”.

There was a lot of good content in Rex’s webinar and nothing too controversial. His way of talking about ET (even the definition he chooses to use) is different to what I’m more familiar with from the CDT community, but it’s good to hear him referring to ET as an essential part of a testing strategy. I’ve certainly seen an increased willingness to use ET as the mainstay of so-called “manual” testing efforts, and putting structure around it using SBTM adds a lot of credibility. For the most part in my teams across Quest, we now consider test efforts to be ET only if they are performed within the framework of SBTM, so that we have the accountability and structure in place for the various stakeholders to treat this approach as credible and worthy of their investment.

So, finally getting to the reason for the title of this post: whether by Rex’s (I would argue unusual) definition, by the ISTQB’s definition, or by what I would argue is the more widely accepted definition (Bach/Bolton above), it seems to me that all testing is exploratory. I’m open to your arguments to change my mind!

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/. The one I refer to in this blog post has not appeared there as yet, but the audio is available via https://rbcs-us.com/resources/podcast/)

Testing in Context Conference Australia 2019

The third annual conference of the Association for Software Testing (AST) outside of North America took place in Melbourne in the shape of Testing in Context Conference Australia 2019 (TiCCA19) on February 28 & March 1. The conference was held at the Jasper Hotel near the Queen Victoria Market.

The event drew a crowd of about 50, mainly from Australia and New Zealand but also with a decent international contingent (including a representative of the AST and a couple of testers all the way from Indonesia!).

I co-organized the event with Paul Seaman and the AST allowed us great freedom in how we put the conference together. We decided on the theme first, From Little Things Big Things Grow, and had a great response to our call for papers, resulting in what we thought was an awesome programme.

The Twitter hashtag for the event was #ticca19 and this was fairly active across the conference.

The event consisted of a first day of workshops followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical AST/peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

Takeaways

  • Testing is not dead, despite what you might hear on social media or from some automation tooling vendors. There is a vibrant community of skilled human testers who deliver immense value in their organizations. My hope is that these people will promote their skills more broadly and advocate for human involvement in producing great software.
  • Ben Simo’s keynote highlighted just how normalized bad software has become; we really can do better as a software industry and testers have a key role to play.
  • While “automation” is still a hot topic, I got a sense of a move back towards valuing the role of humans in producing quality software. This might not be too surprising given the event was a context-driven testing conference, but it’s still worth noting.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue). There was evidence of genuine conferring happening all over the place, exactly what we aimed for!
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for continuing to back our event in Melbourne.
  • I had a tiring but rewarding experience in co-organizing this event with Paul, the testing community in Melbourne is a great place to be!

Workshop day (Thursday 28th February)

We offered two full-day workshops to kick the event off, with “Applied Exploratory Testing” presented by Toby Thompson (from Software Education) and “Leveraging the Power of API Testing” presented by Scott Miles. Both workshops went well and it was pleasing to see them so well attended. Feedback on both has been excellent, so well done to Toby and Scott on their big efforts in putting the workshops together and delivering them so professionally.

Toby Thompson setting up his ET workshop
Scott Miles ready to start his API testing workshop

Pre-conference meetup (Thursday 28th February)

We decided to hold a free meetup on the evening before the main conference day to offer the broader Melbourne testing community the chance to meet some of the speakers as well as hearing a great presentation and speaker panel session. Thanks to generous sponsorship, the meetup went really well, with a small but highly engaged audience – I’ve blogged in detail about the meetup at https://therockertester.wordpress.com/2019/03/04/pre-ticca19-conference-meetup/

Aaron Hodder addresses the meetup
Graeme, Aaron, Sam and Ben talking testing during the panel session

Conference day (Friday 1st March)

The conference was kicked off at 8.30am with some opening remarks from me, including an acknowledgement of traditional owners and calling out two students whom we sponsored to attend from the EPIC TestAbility Academy. Next up was Ilari Henrik Aegerter (board member of the AST), who briefly explained what the AST’s mission is and what services and benefits membership provides, followed by Richard Robinson outlining the way “open season” would be facilitated after each track talk.

I then introduced our opening keynote, Ben Simo with “Is There A Problem Here?”. Ben joined us all the way from Phoenix, Arizona, and this was his first time in Australia so we were delighted to have him “premiere” at our conference! His 45-minute keynote showed us many cases where he has experienced problems when using systems & software in the real world – from Australian road signs to his experience of booking his flights with Qantas, from hotel booking sites to roadtrip/mapping applications, and of course covering his well-publicized work around Healthcare.gov some years ago. He encouraged us to move away from “pass/fail” to asking “is there a problem here?” and, while not expecting perfection, to know that our systems and software can be better. A brief open season brought an excellent first session to a close.

Ben Simo during his keynote (photo from Lynne Cazaly)

After a short break, the conference split into two track sessions with delegates having the choice of “From Prototype to Product: Building a VR Testing Effort” with Nick Pass or “Tales of Fail – How I failed a Quality Coach role” with Samantha Connelly (who has blogged about her talk and also her TiCCA19 conference experience in general).

While Sam’s talk attracted the majority of the audience, I opted to spend an hour with Nick Pass as he gave an excellent experience report of his time over in the UK testing virtual reality headsets for DisplayLink. Nick was in a new country, working for a new company in a new domain, and also working on a brand new product within that company. He outlined the many challenges, including technical, physical (simulator sickness), processes (“sort of agile”) and personal (“I have no idea”). Due to the nature of the product, there were rapid functionality changes and lots of experimentation and prototyping. Nick said he viewed “QA” as “Question Asker” in this environment and he advocated a Quality Engineering approach focused on both product and process. Test design was emergent but, when they got their first customer (HTC), the move to productizing meant a tightening up of processes, more automated checks, stronger testing techniques and adoption of the LeSS framework. This was a good example of a well-crafted first-person experience report from Nick, with a simple but effective deck to guide the way. His 40-minute talk was followed by a full open season with a lot of questions both around the cool VR product and his role in building a test discipline for it.

Nick Pass talks VR

Morning tea was a welcome break and was well catered by the Jasper, before tracks resumed in the shape of “Test Reporting in the Hallway” with Morris Nye and “The Automation Gum Tree” with Michelle Macdonald.

I joined Michelle – a self-confessed “automation enthusiast” – as she described her approach to automation for the Pronto ERP product using the metaphor of the Aussie gum tree (which meant some stunning visuals in her slide deck). Firstly, she set the scene: she has built an automated testing framework using Selenium and Appium to deal with the 50,000 screens, 2,000 data objects and 27 modules across Pronto’s system. She talked about their “Old Gum”, a Rational Robot system to test their Win32 application, which then matured to use TestComplete. Her “new species” needed to cover both web and device UIs, preferably be based on open source technologies, be easy for others to create scripts with, and be well supported. Selenium IDE was the first step, and the resulting framework is seen as successful because it’s easy to install, everyone has access to use it, knowledge has been shared, and patience has paid off. The gum tree analogies came thick and fast as the talk progressed. She talked about Inhabitants, be they consumers, diggers or travellers, then the need to sometimes burn off (throw away and start again), using the shade (developers working in feature branches) and controlling the giants (it’s all too easy for automation to get too big and out of control). Michelle had a little too much content and her facilitator had to wrap her up 50 minutes into the session so that we had time for some questions during open season. There were some sound ideas in Michelle’s talk and she delivered it with passion, supported by the best-looking deck of the conference.

A sample of the beautiful slides in Michelle's talk
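For readers unfamiliar with the tools Michelle mentioned, here’s a minimal sketch (in Java) of the kind of Selenium WebDriver check a framework like hers might wrap – note that the URL and element IDs are hypothetical, not Pronto’s actual screens:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginCheck {
        public static void main(String[] args) {
            // Assumes chromedriver is available on the PATH
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/login"); // hypothetical URL
                driver.findElement(By.id("username")).sendKeys("demo");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login")).click();
                // A simple check on where we landed
                if (!driver.getTitle().contains("Dashboard")) {
                    throw new AssertionError("Expected to land on the dashboard");
                }
            } finally {
                driver.quit(); // always release the browser session
            }
        }
    }

Much of the value of a framework like Michelle’s is in hiding this kind of boilerplate, so that others can create scripts without writing raw WebDriver code themselves.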

Lunch was a chance to relax over nice food and it was great to see people genuinely conferring over the content from the morning’s sessions. The hour passed quickly before delegates reconvened for another two track sessions.

First up for the afternoon was a choice between “Old Dog, New Tricks: How Traditional Testers Can Embrace Code” with Graeme Harvey and “The Uncertain Future of Non-Technical Testing” with Aaron Hodder.

I chose Aaron’s talk and he started off by challenging us as to what “technical” meant (and, as a large group, we failed to reach a consensus) as well as what “testing” meant. He gave his idea of what “non-technical testing” means: manually writing test scripts in English and a person executing them, while “technical testing” means: manually writing test scripts in Java and a machine executing them! He talked about the modern development environment and what he termed “inadvertent algorithmic cruelty”, supported by examples. He mentioned that he’s never seen a persona of someone in crisis or a troll when looking at user stories, while we have a great focus on technical risks but much less so on human risks. There are embedded prejudices in much modern software and he recommended the book Weapons of Math Destruction by Cathy O’Neil. This was another excellent talk from Aaron, covering a little of the same ground as his meetup talk but also breaking new ground and providing us with much food for thought in the way we build and test our software for real humans in the real world. Open season was busy and fully exhausted the one-hour in Aaron’s company.

Adam Howard introduces Aaron Hodder for his track

Graeme Harvey ready to present

A very brief break gave time for delegates to make their next choice, “Exploratory Testing: LIVE!” with Adam Howard or “The Little Agile Testing Manifesto” with Samantha Laing. Having seen Adam’s session before (at TestBash Australia 2018), I decided to attend Samantha’s talk. She introduced the Agile Testing Manifesto that she put together with Karen Greaves, which highlights that testing is an activity rather than a phase, that we should aim to prevent bugs rather than focusing on finding them, look at testing over checking, and aim to help build the best system possible instead of trying to break it, and which emphasizes the whole-team responsibility for quality. She gave us three top tips to take away: 1) ask “how can we test that?”, 2) use a “show me” column on your agile board (instead of an “in test” column), and 3) do all the testing tasks first (before development ones). This was a useful talk for the majority of her audience, who didn’t seem to be very familiar with this testing manifesto.

Sam Laing presenting her track session (photo from Lynne Cazaly)

With the track sessions done for the day, afternoon tea was another chance to network and confer before the conference came back together in the large Function Hall for the closing keynote. Paul did the honours in introducing the well-known Lynne Cazaly with “Try to See It My Way: Communication, Influence and Persuasion”.

She encouraged us to view people as part of the system and deliberately choose to “entertain” different ideas and information. In trying to understand differences, you will actually find similarities. Lynne pointed out that we over-simplify our view of others and this leads to a lack of empathy. She introduced the Karpman Drama Triangle and the Empowerment Dynamic (by David Emerald). Lynne claimed that “all we’re ever trying to do is feel better about ourselves” and, rather than blocking ideas, we should yield and adopt a “go with” style of facilitation.

Lynne was a great choice of closing keynote and we were honoured to have her agree to present at the conference. Her vast experience translated into an entertaining, engaging and valuable presentation. She spent the whole day with us and thoroughly enjoyed her interactions with the delegates at this, her first dedicated testing conference.

Slide from Lynne Cazaly's keynote

Paul Seaman closed out the conference with some acknowledgements and closing remarks, before the crowd dispersed and it was pleasing to see so many people joining us for the post-conference cocktail reception, splendidly catered by the Jasper. The vibe was fantastic and it was nice for us as organizers to finally relax a little and enjoy chatting with delegates.

Acknowledgements

A conference doesn’t happen by accident; there’s a lot of work over many months for a whole bunch of people, so it’s time to acknowledge the various help we had along the way.

The conference has been actively supported by the Association for Software Testing and couldn’t happen without their backing so thanks to the AST and particularly Ilari who continues to be an enthusiastic promoter of the Australian conference via his presence on the AST board. Our wonderful event planner, Val Gryfakis, makes magic happen and saves the rest of us so much work in dealing with the venue and making sure everything runs to plan – we seriously couldn’t run the event without you, Val!

We had a big response to our call for proposals for TiCCA19, so thanks to everyone who took the time and effort to apply to provide content for the conference. Paul and I were assisted by Michele Playfair in selecting the programme and it was great to have Michele’s perspective as we narrowed down the field. We can only choose a very small subset for a one-day conference and we hope many of you will have another go when the next CFP comes around.

There is of course no conference without content so a huge thanks to our great presenters, be they delivering workshops, keynotes or track sessions. Thanks to those who bought tickets and supported the event as delegates, your engagement and positive feedback meant a lot to us as organizers.

Finally, my personal thanks go to my mate Paul for his help, encouragement, ideas and listening ear during the weeks and months leading up to the event – we make a great team and neither of us would do this gig with anyone else, cheers mate.

Pre-TiCCA19 conference meetup

In the weeks leading up to the Testing in Context Conference Australia 2019, our thoughts turned to how we might sneak in a meetup event alongside the conference to make the most of the fact that Melbourne would be home to so many awesome testers at the same time.

Thanks to the conference venue – the Jasper Hotel – giving us use of one of its workshop rooms for an evening, and also food & drink sponsorship from House of Test (Switzerland), the meetup became feasible, and a bit of social media advertising coupled with a free Eventbrite campaign led to about twenty keen testers (including a number of TiCCA19 conference speakers) assembling at the Jasper on the evening of Thursday 28th February.

Some pre-meetup networking gave people the chance to make new friends as well as giving the conference speakers a chance to meet some of their fellow presenters. After I gave a very brief opening, it was time for the content to kick off in the shape of a presentation by well-known and respected Kiwi context-driven tester, Aaron Hodder. His talk was titled “Inclusive Collaboration – how our differences can make the difference” in which he explored how having a neurodiverse workforce can give you a competitive edge, and how the workplace can respect diverse needs and different requirements for interaction and collaboration to bring out the best in everyone’s differences. This was a beautifully-crafted talk, delivered with Aaron’s unique blend of personal connection to the topic and a smattering of self-deprecation, while still driving home a hard-hitting message. (Aaron also shared some great resources on Inclusive Collaboration at https://goo.gl/768M0u).

Aaron Hodder addresses the meetup
The idea of "My user manual" presented by Aaron Hodder

A short networking break then gave everyone the chance to mingle some more and clean up the remains of the food, before we kicked off the panel session. Ably facilitated by Rich Robinson, the panel consisted of four TiCCA19 speakers, in the shape of Graeme Harvey, Aaron Hodder, Sam Connelly and Ben Simo. The conversation was driven by a few questions from Rich: How have you seen the testing role change in your career? How do you think the testing role will change into the future? Is the manual testing role dead? The resulting 45-minute discussion between the panel and audience was engaging and interesting – and kudos to Rich for such a great job in running the panel.

Graeme, Aaron, Sam and Ben talking testing during the panel session

We enjoyed putting this meetup on for the Melbourne testing community and the feedback from everyone involved was very positive, so thanks again to everyone who made it happen.

In response to “context-driven testing” is the “don’t do stupid stuff” school of testing

I blogged about the Twitter conversation that ensued from this tweet from Katrina Clokie:

One of the threads that came out of this conversation narrowed the focus down to “schools of testing” and, in particular, the context-driven testing community:

There’s a bit to unpack here, so let me address these replies piece by piece.

“Divisive rhetoric from some of the thought leaders in that camp”

I can only assume that Rex was referring to the more vocal members of the CDT community, such as James Bach. I haven’t personally experienced anyone trying to be deliberately divisive in the CDT community, but I acknowledge that passion sometimes manifests itself in some strongly-worded comments. Even then, I wouldn’t see this as “rhetoric” as that implies a lack of sincerity or meaningful content. The CDT community, in my experience, attracts those who are sincere about improving software testing, the way it’s done, and the value it delivers.

The use of the term “thought leaders” is also interesting as I don’t see anyone within this community referring to themselves or anyone else as thought leaders. There are obviously more prominent members of the CDT community but also many doing great work in advancing the craft of software testing in line with the principles of CDT behind the scenes (i.e. not so vocally via avenues such as social media).

“CDT is more accurately called the “pay attention” or the “don’t do stupid stuff” school of testing”

I’m not sure whether Matt Griscom’s response was designed to provoke CDT community members or stemmed from a genuine misunderstanding of the seven principles of CDT, which are:

  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
  6. Good software testing is a challenging intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

I agree that we should all be paying attention as testers (or as any other contributor to a project). Paying attention to the broader project context is really important if we are to do a great job of testing, but it is still overlooked and too many testers seem to think the software in front of them is the most important (or, worse, only) aspect of the context that they need to care about.

The seven principles of CDT may well also help to decrease the chances of testers spending their time doing “stupid stuff”, but that seems like a good thing to me. Working in alignment with these principles is, to me, a better approach than following standards or “best practices” that fail to account for the unique context of the project I’m working in. I’d argue that many best practices or recommendations from other “schools” actively promote what would in fact be “stupid stuff” in many contexts.

“the value of the phrase “context-driven””

I don’t see “context-driven” as a phrase – we have a clear statement of the seven principles backing what “context-driven testing” is (see above) and the value comes from understanding what those principles mean and performing testing in alignment with them. Rex replied to Matt’s request for enlightenment, saying “”Marketing” is the value enjoyed by a small few testers. “Schism” is the price paid by all other testers.” I don’t agree with this, and the use of the term “schism” is exactly the kind of divisive language Rex was accusing CDT community members of using. Does anyone “outside” of the CDT community really “pay a price” for the existence of that community? I just don’t see it.

(The domain that Matt refers to is http://context-driven-testing.com/ and it’s not being actively maintained as far as I’m aware, but it does at least give us a reference point for the principles.)

There – obviously – remain challenges for the context-driven testing community in communicating the very real value and benefits that come from testing viewed via the lens of the CDT principles. It’s great to see the continued efforts of the Association for Software Testing in this regard, with their most recent CAST conference having the theme of “bridging between communities”. I’m also proud to co-organize the AST’s Australian conference, TiCCA19, and look forward to delivering a great programme to a broad representation of the local testing community, with a focus on CDT and the value that approaches built around CDT principles offer.

On the testing community merry-go-round

This tweet from Katrina Clokie started a long and interesting discussion on Twitter:

I was a little surprised to see Katrina saying this as she’s been a very active and significant contributor to the testing community for many years and is an organizer for the highly-regarded WeTest conferences in New Zealand. It seems that her tweet was motivated by her recent experiences at non-testing conferences and it’s been great to see such a key member of the testing community taking opportunities to present at non-testing events.

The replies to this tweet were plentiful and largely supportive of the position that (a) the testing community has been talking about the same things for a decade or more, and (b) it does not reach out to learn from & help educate other IT communities.

Groundhog Day?

Are we, as a testing community, really talking about the same things over and over again? I actually think we are and we aren’t – it really depends on the lens through which you look at this.

As Maria Kedemo replied on the Twitter thread, “What is old to you and me might be new to others” and I certainly think it’s the case that many conference topics repeat the same subject matter year on year – but this is not necessarily a bad thing. A show of hands in answering “who’s a first-timer?” at a conference usually results in a large proportion of hands going up, so there is always a new audience for the same messages. Provided these messages are sound and valuable, then why not repeat them to cover new entrants to the community? What might sound like the same talk/content from a presentation title on a programme could well be very different in content to what it was a decade ago, too. While I’m not familiar with developer conference content, I would imagine that they’re not dissimilar in this area, with some foundational developer topics being mainstays of conference programmes year on year.

I’ve been a regular testing conference delegate since 2007 (and, since 2014, a speaker) and have noticed significant changes in the “topics du jour” over this period. I’ve seen a move away from a focus on testing techniques and “testing as an independent thing” towards topics like quality coaching, testing as part of a whole-team approach to quality (thanks, agile), and human factors in being successful as a tester. At developer-centric conferences, I imagine shifts in topics driven frequently by changes in technology/language and also likely shifts due to agile adoption too.

As you may know, I’m involved with organizing the Association for Software Testing conferences in Australia and I do this for a number of reasons. One is to offer a genuine context-driven testing community conference in this geography (because I see that as a tremendously valuable thing in itself) and another is to build conference programmes offering something different from what I see at other testing events in Australia. The recently-released TiCCA19 conference programme, for example, features a keynote presentation from Lynne Cazaly; she is not directly connected with software testing but will deliver very relevant messages to our audience, mainly drawn from the testing community.

Reach out

I think most disciplines – be they IT, testing or otherwise – fail to capitalize on the potential to learn from others; maybe it’s just human nature.

At least in the context-driven part of the testing world, though, I’ve seen genuine progress in taking learnings from a broader range of disciplines including social science, systems thinking, psychology and philosophy. I personally thank Michael Bolton for introducing me to many interesting topics from these broader disciplines that have helped me greatly in understanding the human aspects involved in testing.

In terms of broadening our message about what we believe good testing looks like, I agree that it’s generally the case that the more public members of the testing community are not presenting at, for example, developer-centric conferences. I have recently seen Katrina and others (e.g. Anne-Marie Charrett) taking the initiative to do so, though, and hopefully more non-testing conferences will see the benefit of including testing/quality talks on their programmes. (I have so far been completely unsuccessful in securing a presentation slot at non-testing conferences via their usual CFP routes.)

So I think it’s a two-way street here – we as testing conference organizers need to be more open to including content from “other” communities and also vice versa.

I hope Katrina continues to contribute to the testing community, her voice would be sorely missed.

PS: I will blog separately about some of the replies to Katrina’s thread that were specifically aimed at the context-driven testing community.

The TiCCA19 conference programme is live!

After a successful 2018 event in the shape of CASTx18, the Association for Software Testing were keen to continue Australian conferences and so Paul Seaman and I took on the job of organizing the 2019 event on the AST’s behalf. We opted to rebrand the conference to avoid confusion with the AST’s well-known CAST event (held annually in North America) and so Testing in Context Conference Australia was born.

It’s been a busy few months in getting to the point where the full line-up for this conference is now live.

Paul and I decided to go for a theme with an Australian bent again, this time “From Little Things, Big Things Grow”. It’s always a milestone in planning a conference when it comes time to open up the call for proposals, and we watched the proposals flowing in, with the usual surge towards the CFP closing date of 31st October.

The response to the CFP was excellent, with 95 proposals coming in from all around the globe. We had ideas from first-time presenters and also some from very seasoned campaigners on the testing conference circuit. My thanks go to everyone who took the time and effort to put forward a proposal.

We were joined by Michele Playfair to help us select a programme from the CFP responses. This was an interesting process (as usual), involving some hard decisions to build what we considered the best conference programme from what was submitted. Unfortunately, with only eight track session slots to fill, we couldn’t choose all of the excellent talks we were offered.

The tracks we have chosen are hopefully broad enough in topic to be interesting to many testers. Our keynotes come from Ben Simo (making his first trip and conference appearance in Australia!) and local legend, Lynne Cazaly. Rounding out our programme are three full-day workshops showcasing top Melbourne talent, in the shape of Neil Killick, Scott Miles and Toby Thompson. I’m proud of the programme we have on offer and thank all the speakers who’ve accepted our invitation to help us deliver an awesome event.

The complete TiCCA19 line-up is:

Keynotes (March 1st)

  • Ben Simo with “Is There A Problem Here?”
  • Lynne Cazaly with “Try to see it my way: How developers, technicians, managers and leaders can better understand each other”

Tracks (March 1st)

  • Nick Pass with “From Prototype to Product: Building a VR Testing Effort”
  • Samantha Connelly with “Tales of Fail – How I failed a Quality Coach role”
  • Morris Nye with “Test Reporting in the Hallway”
  • Michelle Macdonald with “The Automation Gum Tree”
  • Graeme Harvey with “Old Dog, New Tricks: How Traditional Testers Can Embrace Code”
  • Aaron Hodder with “The Uncertain Future of Non-Technical Testing”
  • Adam Howard with “Exploratory Testing: LIVE!”
  • Samantha Laing with “The Little Agile Testing Manifesto”

Workshops (February 28th)

  • Neil Killick with “From “Quality Assurance” to “Quality Champion” – How to be a successful tester in an agile team”
  • Scott Miles with “Leveraging the Power of API Testing”
  • Toby Thompson with “Applied Exploratory Testing”

For more details about TiCCA19, including the full schedule and the chance to benefit from significant discounts during the “Little Ripper” period of registration, visit ticca19.org

I hope to see you in Melbourne next year!

CASTx18, context-driven testing fun in Melbourne

Background

Way back in May 2017, I blogged about the fact that I was invited to be the Program Chair for the CASTx18 context-driven testing conference in Melbourne. Fast forward through many months of organizing & planning: the conference took place last week – and was great fun and very well received by its audience.

Pre-conference meetup

A bonus event came about the evening before the conference started when my invited opening keynote speaker, Katrina Clokie, offered to give a meetup-style talk if I could find a way to make it happen. Thanks to excellent assistance from the Association for Software Testing and the Langham Hotel, we managed to run a great meetup and Katrina’s talk on testing in DevOps was an awesome way to kick off a few days of in-depth treatment of testing around CASTx18. (I’ve blogged about this meetup here.)

Conference format

The conference itself was quite traditional in its format, consisting of a first day of tutorials followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

Day 1 – tutorials

The first day of CASTx18 consisted of two concurrent tutorials, viz.

  • Introduction to Coaching Testing (Anne-Marie Charrett & Pete Bartlett)
  • Testing Strategies for Microservices (Scott Miles)

There were good-sized groups in both tutorials, and presenters and students alike seemed to have enjoyable days. My thanks to the presenters for putting together such good-quality content to share and to the participants for making the most of the opportunity.

After the tutorials, we held a cocktail reception for two hours to which all conference delegates were invited as well as other testers from the general Melbourne testing community. This was an excellent networking opportunity and it was good to see most of the conference speakers in attendance, sharing their experiences with delegates. The friendly, relaxed and collaborative vibe on display at this reception was a sign of things to come!

Day 2 – conference report

The conference was kicked off at 8.30am with an introduction by Ilari Henrik Aegerter (board member of the AST) and then by me as conference program chair, followed by Richard Robinson outlining the way open season would be facilitated after each track talk.

It was then down to me to introduce the opening keynote, which came from Katrina Clokie (of Bank of New Zealand), with “Broken Axles: A Tale of Test Environments”. Katrina talked about when she first started as a test practice manager at BNZ and was keen to find out what was holding testing back across the bank, to which the consistent response was test environments. She encouraged the teams to start reporting descriptions of issues and their impact (how many hours they were impacted for and how many people were impacted). It turned out the teams were good at complaining but not so good at explaining to the business why these problems really mattered. Moving to expressing the impact in terms of dollars seemed to help a lot in this regard! She noted that awareness was different from the ability to take action, so visualizations of the impact of test environment problems for management along with advocacy for change (using the SPIN model) were required to get things moving. All of these tactics apply to “fixing stuff that’s already broken” so she then moved on to more proactive measures being taken at BNZ to stop or detect test environment problems before their impact becomes so high. Katrina talked about monitoring and alerting, noting that this needs to be treated quite differently in a test environment than in the production environment. She stumbled across the impressive Rabobank 3-D model of IT systems dependencies and thought it might help to visualize dependencies at BNZ but, after she identified 54 systems, this idea was quickly abandoned as being too complex and time-consuming. Instead of mapping all the dependencies between systems, she built dashboards that map the key architectural pieces and show the status of those. This was a nice opening keynote (albeit a little short at 25 minutes), covering a topic that seldom makes its way onto conference programmes. The 20 minutes of open season indicated that problems with test environments are certainly nothing unique to BNZ!

A short break followed before participants had a choice of two track sessions, in the shapes of Adam Howard (of New Zealand’s answer to eBay, TradeMe) with “Automated agility!? Let’s talk truly agile testing” and James Espie (of Pushpay) with “Community whack-a-mole! Bug bashes, why they’re great and how to run them effectively”. I opted for James’s talk and he kicked off by immediately linking his topic to the conference theme, by suggesting that involving other people in testing (via bug bashes) is just like Burke and Wills who had a team around them to enable them to be successful. At Pushpay, they run a bug bash for every major feature they release – the group consists of 8-18 people (some of whom have not seen the feature before) testing for 60-90 minutes, around two weeks before the beta release of the feature. James claimed such bug bashes are useful for a number of reasons: bringing fresh eyes (preventing snowblindness), bringing a diversity of brains (different people know different things) and bringing diversity of perspectives (quality means different things to different people). Given his experience of running a large number of bug bashes, James shared some lessons learned: 1) coverage (provide some direction or you might find important things have been left uncovered, e.g. everyone tested on the same browser), 2) keeping track (don’t use a formal bug tracking system like JIRA, use something simpler like Slack, a wiki page or a Google sheet), 3) logistics (be ready, have the right hardware, software and test data in place as well as internet, wi-fi, etc.), 4) marketing (it’s hard to get different people each time; advertise in at least three different ways, “shoulder tap” invitations work well, provide snacks – the “hummus effect”!), and 5) triage (you might end up with very few bugs or a very large number, potentially a lot of duplicates, so consider triaging “on the go” during the running of the bug bash). James noted that for some features, the cost of setting up and running a bug bash is not worth it, and he also mentioned that these events need to be run with sufficient time between them so that people don’t get fatigued or simply tired of the idea. He highlighted some bonuses, including accidental load testing, knowledge sharing and team building. This was a really strong talk, full of practical takeaways, delivered confidently and with some beautiful slide work (James is a cartoonist). The open season exhausted all of the remaining session time, always a good sign that the audience has been engaged and interested in the topic.

A morning tea break followed before participants again had a choice of two track sessions, either “Journey to continuous delivery” from Kim Engel or “My Journey as a Quality Coach” from Lalitha Yenna (of Xero). I attended Lalitha’s talk, having brought her into the programme as a first-time presenter. I’d reviewed Lalitha’s talk content in the weeks leading up to the conference, so I was confident in the content but unsure of how she’d deliver it on the day – I certainly need not have worried! From her very first opening remarks, she came across as very confident and calm, pacing herself perfectly and using pauses very effectively – the audience would not have known it was her first time, and her investment in studying other presenters (via TED talks in particular) seriously paid off. Lalitha’s role was an experiment for Xero as they wanted to move towards collective ownership of quality. She spent time observing the teams and started off by “filling the gaps” as she saw them. She met with some passive resistance as she did this, making her realize the importance of empathy. She recommended the book The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever as it helped her become more competent as she coached the teams around her. She noted that simply removing the “Testing” column from their JIRA boards had a big effect in terms of pushing testing left in their development process. Lalitha was open about the challenges she faced and the mistakes she’d made. Initially, she found it hard to feel or show her accomplishments, later realizing that she needed instead to quantify her learnings. She noted that individual coaching was sometimes required and that old habits still came back sometimes within the teams (especially under times of stress). She also realized that she gave the teams too much education and moved to a “just in time” model of educating them based on their current needs and maturity. A nice takeaway was her DANCEBAR story kickoff mnemonic: Draw/mindmap, Acceptance Criteria, Non-functional requirements, Think like the Customer, Error conditions, Business rules, Automation, Regression. In summary, Lalitha said her key learnings on her journey so far in quality coaching were persistence, passion, continuous learning, empathy, and asking lots of questions. This was a fantastic 30-minute talk from a first-time presenter, so confidently delivered, and she also dealt well with 15 minutes or so of open season questioning.

Lunch was a splendid buffet affair in the large open area outside the Langham ballroom and it was great to see the small but engaged crowd networking so well (we looked for any singletons to make them feel welcome, but couldn’t find any!)

The afternoon gave participants a choice of either two track sessions or one longer workshop before the closing keynote. The first of the tracks on offer came from Nicky West (of Yambay) with “How I Got Rid of Test Cases”, with the concurrent workshop courtesy of Paul Holland (of Medidata Solutions) on “Creativity, Imagination, and Creating Better Test Ideas”. I chose Nicky’s track session and she kicked off by setting some context. Yambay is a 25-person company that had been using an outsourced testing service, running their testing via step-by-step test cases. The outsourcing arrangement was stopped in 2016, with Nicky being brought in to set up a testing team and process. She highlighted a number of issues with using detailed test cases, including duplicating detailed requirements, lack of visibility to the business and reinforcement of the fallacy that “anyone can test”. When Yambay made the decision to move to agile, this also inspired change in the testing practice. Moving to user stories with acceptance criteria was a quick win for the business stakeholders, and acceptance criteria became the primary basis for testing (with the user story then being the single source of truth in terms of both requirements and testing). Nicky described some other types of testing that take place at Yambay, including “shakedown” tests (which are documented via mindmaps, marked up to show progress and then finally exported as Word documents for external stakeholders), performance & load tests (which are automated) and operating system version update tests (which are documented in the same way as shakedown tests). In terms of regression testing, “product user stories” are used plus automation (using REST Assured for end-to-end tests), re-using user stories to form test plans. Nicky closed by highlighting efficiency gains from her change of approach, including maintaining just one set of assets (user stories), time savings from not writing test cases (and more time to perform exploratory testing), and not needing a test management tool (saving both time and money). This was a handy 40-minute talk with a good message. The idea of moving away from a test case-driven testing approach shouldn’t have been new for this audience, but the ten-minute open season suggested otherwise and it was clear that a number of people got new ideas from this talk.
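As an aside for readers who haven’t come across REST Assured, here’s a minimal sketch (in Java) of the style of automated API check Nicky described – the endpoint and JSON field are hypothetical, not Yambay’s actual API:

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.equalTo;

    public class StatusCheck {
        public static void main(String[] args) {
            given()
                .baseUri("https://api.example.com") // hypothetical base URI
            .when()
                .get("/status")                     // hypothetical endpoint
            .then()
                .statusCode(200)                    // expect HTTP 200
                .body("status", equalTo("OK"));     // assert on a JSON field
        }
    }

In practice such checks would normally live in a JUnit or TestNG suite rather than a main method, but the given/when/then flow is the heart of the library.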

A short break followed, before heading into the final track session (or the continuation of Paul’s workshop). I spent the hour with Pete Bartlett (of Campaign Monitor) and “Flying the Flag for Quality as a 1-Man-Band”. Pete talked about finding himself in the position of being the only “tester” in his part of the organization and the tactics he used to bring quality across the development cycle. Firstly, he was “finding his bearings” by conducting surveys (to gain an understanding of what “quality” meant to different people), meeting with team leads and measuring some stuff (both to see if his changes were having an impact and also to justify what he was doing). Then he started creating plans based on the strengths and weaknesses identified in the surveys, with clear achievable goals. Executing on those plans meant getting people on board, continuing to measure and refine, and being vocal. Pete also enlisted some “Quality Champions” across the teams to help him out with sending the quality message. This good 45-minute talk was jam-packed; it maybe spent a little too long on the opening points and felt slightly rushed towards the end. The open season fully used the rest of his session.

With the track sessions over, it was time for the afternoon tea break and the last opportunity for more networking.

It was left to James Christie (of Claro Testing) to provide the closing keynote, “Embrace bullshit? Or embrace complexity?”, introduced by Lee. I invited James based on conversations I’d had with him at a conference dinner in Dublin some years ago, and his unique background in auditing as well as testing gives him a very different perspective. His basic message in the keynote was that we can either continue to embrace bullshit jobs that actually don’t add much value, or we can become more comfortable with complexity and all that it brings with it. There was way too much content in his talk, meaning he used the whole hour before we could break for a few questions! This was an example of where less would have been more; half the content would have made a great talk. The only way to summarize this keynote is to provide some quotes and links to recommended reading – there is so much good material to follow up on here:

  • Complex systems are always broken. Success and failure are not absolutes. Complex systems can be broken but still very valuable to someone.
  • Nobody knows how a socio-technical system really works.
  • Why do accidents happen? Heinrich domino model, Swiss cheese model, Systems Theory
  • Everything that can go wrong usually goes right, with a drift to failure.
  • The root cause is just where you decide to stop looking.
  • Testing is exploring the unknowns and finding the differences between the imagined and the found.
  • Safety II (notable names in this area: Sidney Dekker, John Allspaw, Noah Sussman, Richard Cook)
  • Instead of focusing on accidents, understand why systems work safely.
  • Cynefin model (Dave Snowden, Liz Keogh)
  • John Gall, Systemantics: How Systems Work and Especially How They Fail
  • Richard Cook, How Complex Systems Fail
  • Steven Shorrock & Claire Williams, Human Factors & Ergonomics in Practice


The conference was closed out by a brief closing speech from Ilari, during which he mentioned the AST's kind US$1000 donation to the EPIC TestAbility Academy, the software testing training programme for young adults on the autism spectrum run by Paul Seaman and me through EPIC Assist.

Takeaways

  • The move away from embedded testers in agile teams seems to be accelerating, with many companies adopting the test coach approach of operating across teams to help developers become better testers of their own work. There was little consistency on display here, though, about the best model for test coaching. I see this as an interesting trend; I still see a role for dedicated testers within agile teams, but with a next "level" of coaching/architect role operating across teams in the interests of skills development, consistency and helping to build a testing community across an organization.
  • A common thread was fewer testers in organizations, with testing now being seen as more of a whole-team responsibility thanks to the widespread adoption of agile approaches to software development. The future for "testers as test case executors" looks grim.
  • The “open season” discussion time after each presentation was much better than I’ve seen at any other conference using the K-cards system. The open seasons felt more like those at peer conferences and perhaps the small audience enabled some people to speak up who otherwise wouldn’t have.
  • The delegation was quite small but the vibe was great and the feedback incredibly positive (especially about the programme and the venue).
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for again taking the chance on running such an event.

With thanks

I’d like to take the opportunity to publicly express my thanks to:

  • The AST for putting their trust in me (along with Paul Seaman as Assistant Program Chair) to select the programme for this conference,
  • The speakers for sharing their stories, without you there is no content to create a conference,
  • Valerie Gryfakis, Roxane Jackson and the wonderful event staff at the Langham for their smiling faces and the wonderfully smooth running of the conference,
  • Paul Seaman for always being there for me when I needed advice or assistance, and
  • The AST for their donation to the EPIC TestAbility Academy.

The only trouble with running a successful and fun event is the overwhelming desire to do it all again, so watch this space…

Pre-CASTx18 meetup with Katrina Clokie

With Katrina Clokie being one of my invited keynotes for the CASTx18 conference, she kindly offered to give a meetup-style talk on the evening before the conference. After I'd searched around for a suitable venue, the AST kindly sponsored the event as part of their deal with the Langham Hotel, so I could then advertise it. I used a free Eventbrite account and easily sold out the meetup simply via promotion on Twitter and LinkedIn.

View from my room at the Langham Hotel

When it came to the evening of Tuesday 27th February, the lovely Flinders Room in the Langham had been nicely laid out and keen participants started arriving early and partaking of the fine food and beverages on offer. We left a good half-hour for people to arrive and network before kicking off the meetup at 6pm.

Ilari Henrik Aegerter formally initiated proceedings, starting with an acknowledgement of country recognizing the traditional owners of the land on which the event was being held, and then talking about the mission and activities of the AST. Next up, I introduced Katrina and she took the stage to a crowd of about 25 keen listeners.

Katrina spoke for about 45 minutes, sharing four first-person experience stories and referencing them back to her book, "A Practical Guide to Testing in DevOps". Working in a DevOps environment within a large bank has given her plenty of opportunities to gain experience with different teams at different stages of their DevOps journeys. She made a deliberate choice to include a story of failure too, always a good idea as there is often more to learn from failure than from success. Katrina's easy presentation style makes her content both engaging and readily consumable, with great practical takeaways. The lengthy Q&A session after her talk indicated that many people found the content relevant and went away with ideas to try in their own workplaces.

Katrina giving her presentation

We still had the room and catering for another half-hour or so after Katrina’s talk, so there were some excellent discussions and further questions for Katrina before we wrapped up. The feedback from participants was overwhelmingly positive, both in terms of the awesome content from Katrina’s talk and also the venue facilities, service & catering.

My personal thanks go to Katrina for offering to do a talk of this nature for the Melbourne testing community and also to the AST for making it happen within such a beautiful venue (with a big shout out to Valerie Gryfakis for doing all the leg work with the hotel).

(If you haven’t already bought a copy, Katrina’s book is an excellent resource for anyone involved in modern development projects, packed full of advice and examples, and is very reasonably priced – check it out on LeanPub. I’ve previously written a review of the book on this blog too.)

Attending and presenting at CAST 2017 (Nashville)

Back in March, I was delighted to learn that my proposal to speak at the Conference of the Association for Software Testing in Nashville had been accepted, and then came the usual nervous & lengthy gap between acceptance and the actual event.

It was a long trip from Melbourne to Nashville for CAST 2017 – this would be my first CAST since the 2014 event in New York and also my first time as a speaker at the event. This was the 12th annual conference of the AST, taking place on August 16, 17 & 18 at the totally ridiculous Gaylord Opryland Resort, a 3000-room resort and convention centre with a massive indoor atrium (and river!) a few miles outside of downtown Nashville. The conference theme was "What the heck do testers do anyway?"

The event drew a crowd of 160, mainly from the US but with a number of internationals too (I was the only participant from Australia, unsurprisingly!).

My track session was “A Day in the Life of a Test Architect”, a talk I’d first given at STARWest in Anaheim in 2016, and I was up on the first conference day, right after lunch. I arrived early to set up and the AV all worked seamlessly so I felt confident as my talk kicked off to a nicely filled room with about fifty in attendance.


I felt like the delivery of the talk itself went really well. I'd rehearsed the talk a few times in the weeks before the conference and didn't forget too many of the points I meant to make. The talk took about 35 minutes before "open season" started – this is CAST's facilitated Q&A session using the familiar K-cards system (borrowed from peer conferences but now a popular choice at bigger conferences too). The questions kept coming and it was an interesting & challenging 25 minutes fielding them all. My thanks to Griffin Jones who facilitated my open season, and thanks to the audience for their engagement and thoughtful, respectful questioning.


A number of the questions during open season related to my recent volunteer work with Paul Seaman in teaching software testing to young adults on the autism spectrum. My mentor, Rob Sabourin, attended my talk and suggested afterwards that a lightning talk about this work would be a good idea to share a little more about what was obviously a topic of some interest to this audience. And so it was that I found myself unexpectedly signing up to do another talk at CAST 2017!


Even with only a five-minute slot, giving the lightning talk was a worthwhile experience and it led to a number of good conversations afterwards, resulting in some connections to follow up and some resources to review. Thanks to all those who offered help and useful information as a result of this lightning talk; it's greatly appreciated.


With my talk(s) over, the Welcome Reception was a chance to relax with friends old and new over an open bar. A photo booth probably seemed like a good idea at the time, but people always get silly as evidenced by the following three clowns (viz. yours truly, Rob Sabourin and Ben Simo) who got the ball rolling by being the first to take the plunge:


I thought the quality of the keynotes and track sessions at CAST 2017 was excellent and I didn't feel like I attended any bad talks at all. Of course, there are always those talks that stand out for various reasons and two track sessions really deserve a shout-out.

It’s not every conference where you walk into a session to find the presenter dressed in a pilot’s uniform and asking you to take your seats in preparation for take-off! But that’s what we got with Alexandre Bauduin (of House of Test, Switzerland) and his talk “Your Safety as a Boeing 777 Passenger is the Product of a ‘Big Gaming Rig'”. Alexandre used to be an airline pilot and his talk was about the time he spent working for CAE in Montreal, the world’s leading manufacturer of simulators for the aviation and medical industries. He was a certification engineer, test pilot and then test strategy lead for the company’s Boeing 777 simulator, and spent in excess of 10,000 hours test flying it. He mentioned that the simulator had 10-20 million lines of code and 1-2 million physical parts: amazing machinery. His anecdotes about the testing challenges were entertaining but also very serious, and it was clear that the marriage of his actual pilot skills with his testing skills had made for a strong combination in terms of finding bugs that really mattered in this critical simulator. This was a fantastic talk delivered with style and confidence; Alexandre is the sort of presenter you could listen to for hours. An inspired pick by the program committee.


Based purely on the title, I took a punt on Chris Glaettli (of Thales, Switzerland) with “How we tested Gotthard Base Tunnel to start operation one year early” – and again this was an inspired move! Chris was part of the test team for various systems in the 57km Gotthard Base Tunnel (the longest and deepest rail tunnel in the world), creating a “flat rail” route through the Swiss Alps towards Italy, and it was fascinating to hear about the challenges of being involved in such a huge engineering project, both in terms of construction and test environments (and some of the factors they needed to consider). Chris delivered his talk very well and he’d clearly made some very wise choices along the way to help the project be delivered early. In such a regulated environment, he’d done a great job of working closely with auditors to keep the testing documentation down to a minimum while still meeting their strict requirements. This was another superb session – classic conference material.

I noted that some of the “big names” in the context-driven testing community were not present at the conference this year and, perhaps coincidentally, there didn’t seem to be as much controversy or “red carding” during open seasons. For me, the environment seemed much friendlier and safer for presenters than I’d seen at the last CAST I attended (and, as a first-time presenter at CAST, I very much appreciated that feeling of safety). It was also interesting to learn that the theme for the 2018 conference is “Bridging Communities” and I see this as a very positive step for the CDT community which, rightly or wrongly, has earned a reputation for being disrespectful and unwilling to engage in discussion with those from other “schools” of testing.

I’d like to take this chance to thank Rob Sabourin and the AST program committee for selecting my talk and giving me the opportunity to present at their conference. It was a thoroughly enjoyable experience.

We’re the voice

A few things have crossed my feeds in the last couple of weeks around the context-driven testing community, so thought I’d post my thoughts on them here.

It’s always good to see a new edition of Testing Trapeze magazine and the April edition was no exception in providing some very readable and thought-provoking content. In the first article, Hamish Tedeschi wrote on “Value in Testing” and made this claim:

Testing communities bickering about definitions of inane words, certification and whether automation is actually testing has held the testing community back

I don’t agree with Hamish’s opinion here and wonder what basis there is for claiming that these things (or indeed any others) have “held the testing community back” – held it back from what, compared to what hypothetical state it might otherwise have reached?

Michael Bolton tweeted shortly after this publication went live (but not in response to it) that:

Some symptoms [of testers who don’t actually like testing] include fixation on tools (but not business risk); reluctance to discuss semantics and why chosen words matter in context.

It seems to be an increasingly common criticism of those of us in the context-driven testing community that we’re overly focused on “semantics” (or “bickering about definitions of inane words”). We’re not talking about the meaning of words just for the sake of it, but rather to “make certain distinctions clear, with the goal of reducing the risk that someone will misunderstand—or miss—something important” (Michael Bolton again, [1]).


I believe these distinctions have led to less ambiguity in the way we talk about testing (at least within this community) and that doesn’t feel like something that would hold us back – rather the opposite. As an example, the introduction (and refinement) of the distinction between “testing” and “checking” (see [2]) was an important one: it allows for much easier conversations with many different kinds of stakeholders about the differences, in a way that the terminology of “validation” and “verification”, for example, really didn’t.

While writing this blog post, Michael posted a blog in which he mentions this subject again (see [3]):

Speaking more precisely costs very little, helps us establish our credibility, and affords deeper thinking about testing

Thanks to Twitter, I then stumbled across an interview between Rex Black and Joe Colantonio, titled “Best Practices Vs Good Practices – Ranting with Rex Black” (see [4]). In this interview, there are some less than subtle swipes at the CDT community, e.g. “Rex often sees members of the testing community take a common phrase and somehow impart attributes to it that no one else does.” The example used for the “common phrase” throughout the interview is “best practices” and, of course, the very tenets of CDT call the use of this phrase into question.

Rex offered up an awesome rebuttal to use the next time you find yourself attempting to explain best practices to people, which is: Think pattern, not recipe.

How can some people have such an amazingly violent reaction to such an anodyne phrase? And why do they think it means “recipe” when it’s clearly not meant that way?

In case you’re unfamiliar with the word, “anodyne” is defined in the Oxford English Dictionary as “not likely to cause offence or disagreement and somewhat dull”. So the suggestion is that the term “best practices” is unlikely to cause disagreement – and therein lies the exact problem with using it. Rex suggests that we “take a common phrase [best practices] and somehow impart attributes to it that no one else does” (emphasis is mine). The fact that he goes on to offer a rebuttal to misuse of the term suggests to me that the common understanding of what it means is not so common. Surely it’s not too much of a stretch to see that some people might read “best” as meaning “there are none better”, thus taking so-called “best practices” and applying them in contexts where they simply don’t make any sense.

Still in my Twitter feed, it was good to see James Christie continuing his work in standing against the ISO 29119 software testing standard. You might remember that James presented on this at CAST 2014 (see [5]), and that this started something of a movement against the imposition of a pointless and potentially damaging standard on software testing – the resulting “Stop 29119” campaign was the first time I’d seen the CDT community come together so strongly and voice its opposition to something in such a united way (I blogged about it too, see [6]).

It appears that some of our concerns were warranted, with the first job advertisements now starting to appear that demand experience in applying ISO 29119.

James recently tweeted a link to a blog post (see [7]):

Has this author spoken to any #stop29119 campaigners? There’s little evidence of understanding the issues.
http://intland.com/blog/agile/test-management/iso-29119-testing-standard-why-the-controversy/ … #testing

Read the blog post and make of it what you will. This part stood out to me:

Innitally there was controversy over the content of the ISO 29119 standard, with several organizations in opposition to the content (2014).  Several individuals in particular from the Context-Driven School of testing were vocal in their opposition, even beginning a petition against the new testing standards, they gained over a thousand signatures to it.  The opposition seems to have been the result of a few individuals who were ill – informed about the new standards as well as those that felt excluded from the standards creation process

An interesting take on our community’s opposition to the standard!

To end on a wonderfully positive note, I’m looking forward to attending and presenting at CAST 2017 in Nashville later in the year – a gathering of our community is always something special and the chance to exchange experiences & opinions with the engaged folks of CDT is an opportunity not to be missed.

We’re the voices in support of a context-driven approach to testing, let’s not be afraid to use them.

References

[1] Michael Bolton “The Rapid Software Testing Namespace” http://www.developsense.com/blog/2015/02/the-rapid-software-testing-namespace/

[2] James Bach & Michael Bolton “Testing and Checking Refined” http://www.satisfice.com/blog/archives/856

[3] Michael Bolton “Deeper Testing (2): Automating the Testing” http://www.developsense.com/blog/2017/04/deeper-testing-2-automating-the-testing/

[4] Rex Black and Joe Colantonio “Best Practices Vs Good Practices – Ranting with Rex Black” https://www.joecolantonio.com/2017/04/13/best-practices-rant/

[5] James Christie “Standards – Promoting Quality or Restricting Competition” (CAST 2014)

[6] Lee Hawkins “A Turning Point for the Context-driven Testing Community” https://therockertester.wordpress.com/2014/08/21/a-turning-point-for-the-context-driven-testing-community/

[7] Eva Johnson “ISO 29119 Testing Standard – Why the controversy?” https://intland.com/blog/agile/test-management/iso-29119-testing-standard-why-the-controversy/