Program Chair for CASTx18 in Melbourne

To my great surprise, I was recently asked to be Program Chair for the AST’s second conference venture in Australia, CASTx18 in Melbourne next year.

The first CAST outside of North America was held in Sydney in February 2017 and was so successful that the AST have opted to give Australia another go, moving south to Melbourne. Although my main responsibility as Program Chair revolves around the conference content, as a local I can also help out with various tasks “on the ground” to assist the rest of the AST folks who are based outside of Australia.

Coming up with a theme was my first challenge and I’ve opted to give it some Aussie flavour, with “Testing in the Spirit of Burke & Wills” to evoke the ideas of pioneering and exploration.

I’m excited by this opportunity to put together a conference of great content for the local and international testing community – and also humbled by the AST’s faith in me to do so.

Keep an eye on the AST CASTx18 website for more details; the date and venue announcement shouldn’t be too far away, with a CFP to follow.

We’re the voice

A few things around the context-driven testing community have crossed my feeds in the last couple of weeks, so I thought I’d post my thoughts on them here.

It’s always good to see a new edition of Testing Trapeze magazine and the April edition was no exception in providing some very readable and thought-provoking content. In the first article, Hamish Tedeschi wrote on “Value in Testing” and made this claim:

Testing communities bickering about definitions of inane words, certification and whether automation is actually testing has held the testing community back

I don’t agree with Hamish’s opinion here and wonder what basis there is for claiming that these things (or indeed any others) have “held the testing community back” – held it back from what, compared to what hypothetical state it might otherwise have reached?

Michael Bolton tweeted shortly after this publication went live (but not in response to it) that:

Some symptoms [of testers who don’t actually like testing] include fixation on tools (but not business risk); reluctance to discuss semantics and why chosen words matter in context.

It seems to be an increasingly common criticism of those of us in the context-driven testing community that we’re overly focused on “semantics” (or “bickering about definitions of inane words”). We’re not just talking about the meaning of words for the sake of it, but rather to “make certain distinctions clear, with the goal of reducing the risk that someone will misunderstand—or miss—something important” (Michael Bolton again, [1]).


I believe these distinctions have led to less ambiguity in the way we talk about testing (at least within this community) and that doesn’t feel like something that would hold us back – rather the opposite. As an example, the introduction (and refinement) of “testing” and “checking” (see [2]) was such an important one; it allows for much easier conversations with many different kinds of stakeholders about the differences, in a way that the terminology of “validation” and “verification”, for example, really didn’t.
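As a deliberately simplified illustration of the distinction (the discount function and its values below are invented for this example, not drawn from any of the referenced articles), a check is an algorithmic, binary evaluation, while testing is the exploratory thinking around it:

```python
# A "check" in the testing-vs-checking sense: an algorithmic
# evaluation with a binary outcome. The product function below is
# purely hypothetical, invented for illustration.

def apply_discount(price, percent):
    """Hypothetical code under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# The check can only confirm the specific facts it was programmed
# to look for:
assert apply_discount(100.0, 10) == 90.0

# Testing is the questioning around the check: a tester might wonder
# what a discount over 100% means, and discover that the function
# cheerfully produces a negative price.
surprise = apply_discount(100.0, 110)
print(surprise)
```

The check passes either way; noticing that a negative price is possible, and asking whether that matters in context, is the part no assertion performs on its own.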

While writing this blog post, Michael posted a blog in which he mentions this subject again (see [3]):

Speaking more precisely costs very little, helps us establish our credibility, and affords deeper thinking about testing

Thanks to Twitter, I then stumbled across an interview between Rex Black and Joe Colantonio, titled “Best Practices Vs Good Practices – Ranting with Rex Black” (see [4]). In this interview, there are some less than subtle swipes at the CDT community, e.g. “Rex often sees members of the testing community take a common phrase and somehow impart attributes to it that no one else does.” The example used for the “common phrase” throughout the interview is “best practices” and, of course, the very tenets of CDT call the use of this phrase into question.

Rex offered up an awesome rebuttal to use the next time you find yourself attempting to explain best practices to people, which is: Think pattern, not recipe.

How can some people have such an amazingly violent reaction to such an anodyne phrase? And why do they think it means “recipe” when it’s clearly not meant that way?

In case you’re unfamiliar with the word, “anodyne” is defined in the Oxford English Dictionary as “not likely to cause offence or disagreement and somewhat dull”. So the suggestion is that the term “best practices” is unlikely to cause disagreement – and therein lies the exact problem with using it. Rex suggests that we “take a common phrase [best practices] and somehow impart attributes to it that no one else does” (emphasis mine). The fact that he goes on to offer a rebuttal to misuse of the term suggests to me that the common understanding of what it means is not so common. Surely it’s not too much of a stretch to see that some people might read “best” as meaning “there are none better”, and so take so-called “best practices” and apply them in contexts where they simply don’t make any sense.

Still in my Twitter feed, it was good to see James Christie continuing his work in standing against the ISO 29119 software testing standard. You might remember that James presented on this at CAST 2014 (see [5]), which started something of a movement against the imposition of a pointless and potentially damaging standard on software testing. The resulting “Stop 29119” campaign was the first time I’d seen the CDT community come together so strongly and voice its opposition to something in such a united way (I blogged about it too, see [6]).

It appears that some of our concerns were warranted with the first job advertisements now starting to appear that demand experience in applying ISO 29119.

James recently tweeted a link to a blog post (see [7]):

Has this author spoken to any #stop29119 campaigners? There’s little evidence of understanding the issues. … #testing

Read the blog post and make of it what you will. This part stood out to me:

Innitally there was controversy over the content of the ISO 29119 standard, with several organizations in opposition to the content (2014).  Several individuals in particular from the Context-Driven School of testing were vocal in their opposition, even beginning a petition against the new testing standards, they gained over a thousand signatures to it.  The opposition seems to have been the result of a few individuals who were ill – informed about the new standards as well as those that felt excluded from the standards creation process

An interesting take on our community’s opposition to the standard!

To end on a wonderfully positive note, I’m looking forward to attending and presenting at CAST 2017 in Nashville later in the year – a gathering of our community is always something special and the chance to exchange experiences & opinions with the engaged folks of CDT is an opportunity not to be missed.

We’re the voices in support of a context-driven approach to testing, let’s not be afraid to use them.


[1] Michael Bolton “The Rapid Software Testing Namespace”

[2] James Bach & Michael Bolton “Testing and Checking Refined”

[3] Michael Bolton “Deeper Testing (2): Automating the Testing”

[4] Rex Black and Joe Colantonio “Best Practices Vs Good Practices – Ranting with Rex Black”

[5] James Christie “Standards – Promoting Quality or Restricting Competition” (CAST 2014)

[6] Lee Hawkins “A Turning Point for the Context-driven Testing Community”

[7] Eva Johnson “ISO 29119 Testing Standard – Why the controversy?”

Creativity and testing

I’ve just finished reading Scott Berkun’s new book, The Dance of the Possible – “The Mostly Honest, Completely Irreverent Guide to Creativity”. As with his previous books, it makes for easy reading and he makes his points clearly and honestly. I picked up this book after enjoying a couple of his other works – Confessions of a Public Speaker and The Ghost of My Father – and wasn’t anticipating the amount of testing-related goodness I found in his new one!

In just the second chapter, Scott tackles the tricky topic of where to begin when starting a piece of creative work. He talks about taking an exploratory approach:

The primary goal when you’re starting creative work is to explore, and to explore demands you do things where you are not sure of the outcome. There will be false starts, twists, turns and pivots. These should be welcomed as natural parts of the experience, rather than resisted as mistakes or failures.

Exploratory testing, anyone?!  One of the joys of taking a session-based exploratory testing approach in my experience is the uncertainty of what information we’ll learn about our product in each session – this is so much more rewarding for the tester than knowing they’ll just report a “pass” or “fail” at the end of following a test case, for example.

As Scott moves on to methods for finding ideas (in chapter 4), one of my favourite tools for test planning and reporting makes an appearance:

Another approach to finding interesting combinations is called a mind map. On a large piece of paper write your main goal, subject or idea down in the center and circle it. Then think of an attribute, or an idea, related to the main one and write it down, drawing a line back to the main idea. Then think of another and another, connecting each one to any previous idea that seems most related.

Keep drawing lines and making associations. Soon you’ll have a page full of circles and lines capturing different ways to think about your main thought.

Exploratory testing puts the onus on the tester to come up with test ideas and this seems to be one of the biggest challenges for testers moving from a scripted approach: “how will I know what to test?” The skill of coming up with test ideas is one that requires practice, and mind maps are a great way to both organize those ideas and give the tester a way to visualize them, start to combine (or separate) them, and so on.
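As a toy sketch of that process (the product area and test ideas below are invented examples, not from Scott’s book), a mind map is essentially a tree of associations radiating out from a central idea:

```python
# A mind map as a simple tree: the central idea maps to branches,
# and each branch to further sub-ideas. All ideas here are invented
# examples of test ideas for a hypothetical login page.

mind_map = {
    "Login page": {
        "Input handling": {"Empty fields": {}, "Unicode names": {}},
        "Sessions": {"Timeout": {}, "Concurrent logins": {}},
        "Accessibility": {"Keyboard-only": {}, "Screen reader": {}},
    }
}

def render(node, depth=0):
    """Render the map as an indented outline, one idea per line."""
    lines = []
    for idea, children in node.items():
        lines.append("  " * depth + "- " + idea)
        lines.extend(render(children, depth + 1))
    return lines

print("\n".join(render(mind_map)))
```

Drawing a new line in the map is just attaching another child node; the visual form makes it easy to spot thin branches worth expanding or related ideas worth combining.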

In chapter 5, Scott talks about creative projects being “a dance between two forces, expanding to consider more ideas and shrinking to narrow things down enough to finish” and how this very idea can be challenging for those who focus on efficiency:

Looking back on a finished project, you might think the time spent exploring ideas that didn’t get used was wasted. It’s easy to believe you should have known from the beginning which ideas would work best and which wouldn’t. This is an illusion. Creativity is exploration. You are going into the unknown on purpose. You can only learn about ideas as you develop them, and there’s no reliable predictor of which ones will pay off and which ones won’t. Certainly the more conservative you are in the ideas you pick, the more predictable the process will be, but by being more conservative you are likely being less creative and will discover fewer insights. Arguably the more creations you make the better your intuition gets, but you won’t find any successful creator, even the legends, who gets it right all the time.

People obsessed with efficiency have a hard time accepting this truth. They would also have a very hard time being at sea with Magellan, working with Edison in his lab or with Frida Kahlo in her art studio. They’d be stunned to see the “waste” of prototypes and sketches, and mystified by how many days Magellan had to spend at sea without discovering a single thing. Discovery is never efficient.

I see this “dance” a lot and it’s the natural tester dance between doing more testing (“consider more ideas”) and calling “good enough” (“narrow things down enough to finish”). I think these words from Scott are a great way to express the benefit of more creative testing in helping us better assess risks in our products:

Certainly the more conservative you are in the ideas you pick, the more predictable the process will be, but by being more conservative you are likely being less creative and will discover fewer insights.

Scott’s new book was a fairly short and enjoyable read. I always say that testing is a creative endeavour – seeking to counter the idea that testing is repetitive, boring work whenever I come across it – so Scott’s words on creativity will be a handy reference in this regard.

Attending the CASTx conference in Sydney (21st February, 2017)

The annual conference of the Association for Software Testing (AST) took its first step outside of North America in 2017, with the CASTx conference in Sydney on February 20 & 21. Since I align myself with the context-driven principles advocated by the AST, I decided to attend the event’s conference day on the 21st (disclaimer: I submitted a track session proposal for this conference but it was not accepted).

The conference was held in the stunning Art Deco surrounds of the Grace Hotel in the Sydney CBD and drew a crowd of about 90, mainly from Australia and New Zealand but also with a decent international contingent (including representatives of the AST). The Twitter hashtag for the event was #castx17 and this was fairly active across the conference and in the days since.

The full event consisted of a first day of tutorials (a choice of three, by Michael Bolton, Goranka Bjedov and Abigail Bangser & Mark Winteringham) followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with forty minutes for the presentation followed by twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

My conference day turned out to include:

  • Conference opening by Ilari Henrik Aegerter (board member of the AST), Anne-Marie Charrett (conference program chair) and Eric Proegler (board member and treasurer of the AST).
  • Opening keynote “Managing Capacity and Performance in a Large Scale Production Environment” by Goranka Bjedov (of Facebook)
  • Track session “Rise of the Machine (Learning)” from Stephanie Wilson (of Xero)
  • Track session “Testing with Humans: How Atlassian Validates Its Products With Customers” by Georgie Bottomley (of Atlassian)
  • Track session “To Boldly Go: Taking the Enterprise to SBTM” by Aaron Hodder (of Assurity Consulting NZ)
  • Track session “Auditing Agile Projects” by Michelle Moffat (of Tyro Payments)
  • Closing keynote by Michael Bolton (of Developsense) with “The Secret Life of Automation”

The opening keynote was fantastic. I last heard Goranka speak when she keynoted the STANZ conference here in 2011. She started off by describing how well Facebook had prepared for the US elections in terms of handling load (and the coincidental additional load arising from India’s ban on large currency notes), but then told the story of how around half of all Facebook users had been declared dead just a few days after the election – an unfortunate by-product of releasing their new “memorial” feature without actually checking that a member was dead before showing the memorial! This illustrated her theme that Facebook doesn’t care about quality in the traditional sense: such changes can be made by developers without being discovered before release, but resolution times are fast once problems start being reported by users.

The stats she provided about Facebook load were incredible – 1.7 billion monthly active users for the main site, around 1 billion for each of WhatsApp and Messenger, plus around 0.5 billion for Instagram. Facebook now has the largest photo storage in the world and already holds more video content than YouTube. Her 2013 stats showed that, per 30 minutes, their infrastructure handled 108 billion MySQL queries, the upload of 10 million photos, and 105TB scanned with Hive! This load is handled by Facebook’s private cloud, built in ten locations across the US and Europe. Servers are all Linux and all data centres are powered using green power (and it was interesting to note that they rely on evaporative cooling to keep power usage down). The reasons for the lack of an Australian data centre became obvious when Goranka talked about the big long-term power contracts they require – and also “world class internet” (at which point the room burst into laughter). Details of all the server specifications are publicly available.

Her objectives in managing capacity and performance are: low latency for users, the ability to launch things quickly (succeed or fail quickly, don’t worry about efficiency, don’t care about quality) and conservation (in terms of power, money, computers, network and developer time). Her goals are: the right things running on the right gear, running efficiently, knowing if something is broken or about to break, and knowing why something is growing. She also talked through their load testing approach – which runs every second of every day – and their testing around shutting down an entire region to be ready for disasters.

Although this wasn’t really a pure testing talk, it was fascinating to learn more about the Facebook infrastructure and how it is managed and evolving. It was made all the more interesting by Goranka’s irreverent style – she openly admitted to not being a Facebook user and cannot understand why people want to post photos of cats and their lunches on the internet!

From the tracks, it was interesting to hear about Xero’s QA mission statement, viz. “Influence Xero culture to be more quality oriented and transform software from “good” to “wow”” (Stephanie Wilson’s talk), and it was surprising to me to learn that Atlassian was not doing any decent sort of UX research until quite recently (from Georgie Bottomley’s talk) – but maybe that explains some of the quirky interactions we’ve all come to know and love in JIRA!

I’ve seen Aaron Hodder present a few times before and he always delivers real experiences with unique insight – and this session was no exception. His talk gave a fascinating look into dysfunctional, contract-heavy client/vendor enterprise IT environments. The novel approach he came up with at Assurity was session-based test management in light disguise, its terminology and reporting adjusted to make it palatable, and it was very cleverly done – as a result, the project sounds like it’s in much better shape than it was. A really good talk with handy takeaways, and not just for a tester finding themselves in the unfortunate position of a project like the one Aaron experienced.

Michelle Moffat presented the idea that the agile practices are, in audit terms, controls and it is the way evidence is gathered in this environment that is so different – she uses photos, videos, attends meetings and automated controls (for example, from the build system) rather than relying on the creation of documents. This was a really interesting talk and it was great to see someone from well outside of our sphere taking on the ideas of agile and finding ways to meet her auditing responsibility without imposing any additional work on the teams doing the development and testing.

Michael Bolton’s closing keynote was a highlight of my day and he used his time well, offering us his usual thought-provoking content delivered with theatre. Michael’s first “secret” was that a test cannot be automated and automated testing does not exist. He made the excellent point that if we keep talking about automated testing, then people will continue to believe that it does exist. He has also observed that people focus on the How and What of automated button-pushing, but rarely the Why. He identified some common automation (anti)patterns and noted that “tools are helping us to do more lousy, shallow testing faster and worse than ever before”! He revealed a few more secrets along the way (such as there being no such thing as “flaky” checks) before his time ran out all too soon.

There were a few takeaways for me from this conference:

  • Testing’s focus is shifting, as SaaS and continuous delivery make it possible to respond to problems in production much more quickly and easily than ever before.
  • The “open season” discussion time after each presentation was, as usual, a great success and is a really good way of getting some deeper Q&A going than in more traditionally-run conferences.
  • It’s great to have a context-driven testing conference on Australian soil and the AST are to be commended for taking the chance on running such an event (that said, the awareness of what context-driven testing means in practice seemed surprisingly low in the audience).
  • The AST still seems to struggle with meeting its mission (viz. “to advance the understanding of the science and practice of software testing according to context-driven principles”) and I personally didn’t see how some of the track sessions on offer in this conference (interesting though they were) worked towards achieving that mission.

In summary, I’m glad I attended CASTx and it was good to see the level of support for the AST’s first international conference event – hopefully the first of many to help broaden the appeal and reach of the AST’s advocacy for context-driven testing.

An excellent set of summary photos has been put together from Twitter.

A worthwhile 40-minute roundtable discussion with five CASTx speakers/organizers (viz. Abigail Bangser, Mark Winteringham, Aaron Hodder, Anne-Marie Charrett and Ilari Henrik Aegerter) is also available to listen to.

What we can learn from the “schools” of organic food production

The Sustainable Living Festival took place in Melbourne recently and we decided to take a look on the Sunday of the festival. It was a pleasant setup, along the banks of the Yarra River next to Federation Square, with a wide variety of stalls, eateries and venues for talks throughout the day.

After wandering the stalls for a while and then enjoying an early lunch, we opted to head to a talk and ended up at the outdoor stage for Permaculture – The 4th Ethic, presented by “Pete The Permie”. Not knowing anything about permaculture before this talk, we were perhaps not his target audience but he was an engaging presenter and the content was really interesting.

Permaculture is a system of agricultural and social design principles centered on simulating or directly utilizing the patterns and features observed in natural ecosystems. It was developed, and the term coined, by Bill Mollison and David Holmgren in 1978. It is based on three core tenets, viz.

  • Care for the earth: This is the first principle, because without a healthy earth, humans cannot flourish.
  • Care for the people: Provision for people to access those resources necessary for their existence.
  • Return of surplus: Reinvesting surpluses back into the system to provide for the first two ethics.

You can become “certified” in permaculture via the “Permaculture Design Course” (PDC), based around these three core tenets of the approach. Pete’s talk asked whether such PDCs should also include a fourth ethic, “Care of spirit” – that is, whether religion, Biodynamics or other spiritual systems should be included in the teaching (and, if not, where the nurturing of people fits in a “science only” design system, in a world that needs a lot more caring for oneself and each other). This was fascinating stuff, and whether these less scientific aspects should be included in the certification is clearly a live issue in the permaculture community.

These other aspects are a feature of the Biodynamic approach.

Biodynamic agriculture is a form of alternative agriculture very similar to organic farming, but it includes various esoteric concepts drawn from the ideas of Rudolf Steiner (1861–1925). First developed in 1924, it was the earliest of the organic agriculture movements. It treats soil fertility, plant growth, and livestock care as ecologically interrelated tasks, emphasizing spiritual and mystical perspectives.

It soon became clear as Pete talked about the differences of opinion between the “schools” of organic food production – Permaculture and Biodynamic – that there were similarities with the “schools of testing” that our industry appears to have become somewhat preoccupied with in recent years. Pete’s approach has been to learn lots more about some of the unscientific aspects of the Biodynamic approach, as he argued it can’t do him any harm to learn about them and see if there are lessons to be learned in his application of permaculture. It was notable that he didn’t show disrespect to people following Biodynamics but was open to learn more about their ideas, while maintaining his strong association with Permaculture.

The lesson I took away from this talk was that isolating your thinking to one particular school of thought – in any field – is limiting and you might be surprised by the usefulness of approaches or ideas from a different school of thought. Hopefully those of us who align ourselves strongly with the context-driven “school” of testing can always remember to be respectful of those who align themselves with other schools and also become students of those schools to understand them better and perhaps find useful ideas to apply in our own contexts.

My first Sydney Testers Meetup – 20 February 2017

The Sydney Testers meetup is one of the largest software testing meetup groups in the world and they hold meetups and other gatherings very frequently in the Sydney CBD.

With the CASTx conference taking place on 21st February, the group organized a meetup the evening before in the offices of IAG (just across the road from the conference venue, The Grace Hotel) and so I went along to take part in my first Sydney Testers meetup event.

A complicated and diligent security operation meant that only those who had explicitly RSVP’d to the meetup would be allowed entry to the well-secured IAG building, so only around 50 people actually got into the meetup. The first 45 minutes or so were an opportunity to network over pizza and drinks, and it was good to meet some familiar faces from both the local (Australia and New Zealand) and international testing communities, as well as to chat with some unfamiliar testers.

Eric Proegler kicked off proceedings at 6.15pm, talking about the Association for Software Testing (organizers of the CASTx event), for which he is a board member; he had travelled from the US to be at the conference. Eric has been a key player in expanding the AST’s reach outside of North America, with the CASTx conference being its first outside those shores (hence confirming his joke that AST does not stand for “American Software Testers”!).

The 2000th member of the meetup was in the house and received a gift for their trouble. This is a seriously big group, and it was instrumental in helping to bring the CASTx conference to Australia, so kudos to Sydney Testers for their efforts.

The next 50 minutes of the meetup were devoted to a panel Q&A session on “Questions Facing Software Testing”, with a panel consisting of some serious testing talent: Eric Proegler, Ilari Henrik Aegerter, Goranka Bjedov and Aaron Hodder.

The first question was around whether “manual testing” is dead. Goranka talked about the “death of quality”, thanks to SaaS delivery and the ability to fix very quickly when customers discover a problem. Aaron questioned the use of the term “manual testing” but managed to avoid ranting too much, noting that “testing looks like it’s easy to understand” when in fact it isn’t. Michael Bolton, sitting just behind me in the audience, pointed out that as testers “our job is to demolish unwarranted confidence”.

The next question simply asked “What are the top three challenges facing software testing today?” It came down to Eric, Ilari and Aaron to proffer one challenge each, viz. “explaining what good testing is to people who think they already know what testing is”, “not losing your humour” and “describing our value to stakeholders” respectively.

The third question was “Why hasn’t the AST had global reach and how is software testing different in different parts of the world?” Ilari suggested that previous AST boards made up entirely of Americans hadn’t helped the situation. On the topic of differences between testing around the world, Eric suggested that there were no big differences but Goranka strongly disagreed and said she’d recently travelled to New Zealand mainly to get the different perspectives from their testing community. She also mentioned that ISTQB is more favoured in some parts of the world than others, with Europe being “in love with ISTQB”. Ilari came back to the discussion and said the main differences around the world were simply “just different flavours of stupidity” when it comes to testing!

Question four was “Can we show senior managers what better testing looks like? Can we win the battle?” The panel were fairly pessimistic in answering this question, with Eric pointing out that only a very small percentage of testers attend conferences or meetups, but the AST really want to reach disengaged testers to help them become more passionate about their craft. Ilari noted that “most people in testing don’t give a sh*t” (like in most professions, he argued). Aaron spoke from his experience in consulting and noted that many testers & their managers are “cut off from the outside world and better ideas” in their organizations with a “cult-like” devotion to following company processes.

With time running out, one more question was directed to the panel, “What should we as a testing community be focused on next?” Eric suggested looking at robots and automation in general, Ilari said we should focus more on continuous learning in general rather than a particular technology, Goranka said all things Cloud (but especially around performance and security), and Aaron suggested doubling-down on the human stuff (UX, systems thinking, ethics, etc) in a world where we are now able to “churn out crap faster than ever”.

This was a good-natured panel session and it was interesting that even these highly-regarded individuals (most of whom associate themselves very strongly with the context-driven testing community) disagreed on many things but were able to maintain a civilized and engaging discussion.

A fifteen-minute break was welcome, before the group reformed for the next part of the meetup. This part was led by Paul Holland and Michael Bolton, who ran one of the exercises from the Rapid Software Testing (RST) class with the entire group. I’ve been lucky enough to attend RST twice and had already seen their chosen exercise before, so I chose to observe rather than participate.

The exercise was the so-called Wason Selection Task, devised by cognitive psychologist Peter Wason. This seemingly simple puzzle occupied the next 40 minutes or so and made for a great group exercise. It was interesting to watch people fall into the traps along the way, and to see Paul & Michael drawing out the testing-related lessons from it. If you haven’t seen this puzzle before, go try it!
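For readers who have already had a go: the classic formulation of the task can even be brute-forced in a few lines (this is the standard textbook version of the puzzle; the variant Paul and Michael ran may have differed):

```python
# The classic Wason formulation: four cards show A, K, 2, 7; each
# card has a letter on one side and a number on the other. Rule
# under test: "if a card has a vowel on one side, it has an even
# number on the other side." Which cards MUST be flipped?

VISIBLE = ["A", "K", "2", "7"]

def is_vowel(ch):
    return ch in "AEIOU"

def could_falsify(face):
    """A card is worth flipping only if some hidden side could
    combine with the visible face into vowel + odd number."""
    if face.isalpha():
        # Hidden side is a number: only a vowel card can break the
        # rule (via an odd number on the back).
        return is_vowel(face)
    # Hidden side is a letter: only an odd visible number can break
    # the rule (via a vowel on the back).
    return int(face) % 2 == 1

must_flip = [card for card in VISIBLE if could_falsify(card)]
print(must_flip)  # ['A', '7']
```

The trap is that most people pick A and 2 – confirming the rule rather than trying to falsify it, which is exactly the testing lesson the exercise draws out.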

[Photos: Paul Holland fields questions for the panel of Eric, Ilari, Aaron and Goranka; Paul Holland and the Wason selection task]

The meetup wrapped up at around 8.30pm and it was great to see a bunch of such passionate and engaged testers in the one room, a good experience at my first Sydney Testers event!


Sydney Testers Twitter handle: @SydneyTesters

Sydney Testers meetup website:

Testers, how good is your waggle dance?

Since returning from Europe, I’ve been enjoying a new commuting option in the shape of a ferry service across the bay and this relaxing 90-minute trip is proving to be a great alternative to my only previous option of a 30km drive plus one-hour train journey.

Cruising to Melbourne in this way has opened up some more time for reading and I’m just finishing The Wisdom of Crowds by James Surowiecki.

The book explores a simple idea: “large groups of people are smarter than an elite few, no matter how brilliant – better at solving problems, fostering innovation, coming to wise decisions, even predicting the future.”

It’s enjoyable stuff and, of course, I can’t help but draw connections between some of its content and software testing.

The following paragraphs from the book describe how bees go about finding good food sources for their hive (emphasis is mine):

Bees are remarkably efficient at finding food. According to Thomas Seeley, author of “The Wisdom of the Hive”, a typical bee colony can search six or more kilometres from the hive, and if there is a flower patch within two kilometres of the hive, the bees have a better-than half chance of finding it. How do the bees do this? They don’t sit around and have a collective discussion about where foragers should go. Instead, the hive sends out a host of scout bees to search the surrounding area. When a scout bee has found a nectar source that seems strong, he comes back and does a waggle dance, the intensity of which is shaped, in some way, by the excellence of the nectar supply at the site. The waggle dance attracts other forager bees, which follow the first forager, while foragers who have found less-good sites attract fewer followers and, in some cases, eventually abandon their sites entirely. The result is that bee foragers end up distributing themselves across different nectar sources in an almost perfect fashion, meaning that they get as much food as possible relative to the time and energy they put into searching. It is a collectively brilliant solution to the colony’s food problem.

What’s important, though, is the way the colony gets to that collectively intelligent solution. It does not get there by first rationally considering all the alternatives and then determining an ideal foraging pattern. It can’t do this, because it doesn’t have any idea what the possible alternatives – that is, where the different flower patches – are. So instead, it sends out scouts in many different directions and trusts that at least one of them will find the best patch, return, and do a good dance so that the hive will know where the food source is.

I immediately saw similarities with exploratory testing when I read this.

When we’re looking to identify interesting or risky areas of a product under test, our initial charters are quite loose, since we don’t necessarily have a good idea of where to look yet. Debriefing our sessions gives us the chance to home in on where to look next, or where to return as fertile ground for finding interesting information about the product.
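The recruitment scheme Surowiecki describes can be sketched as a toy simulation (the patch names, quality numbers and the proportional-recruitment rule below are my own simplifications, not from the book): foragers re-follow dances with probability proportional to their intensity, and attention concentrates on the richest source without anyone ever comparing all the sources directly.

```python
import random

random.seed(1)  # reproducible toy run

# Nectar quality of three hypothetical flower patches.
SOURCES = {"patch_A": 0.9, "patch_B": 0.5, "patch_C": 0.2}

def forage(n_bees=100, rounds=20):
    # Every bee starts as a scout at a random patch.
    assignment = [random.choice(sorted(SOURCES)) for _ in range(n_bees)]
    for _ in range(rounds):
        # Each bee "dances" with intensity equal to its patch's
        # quality; bees then re-follow a dance chosen with
        # probability proportional to that intensity.
        weights = [SOURCES[patch] for patch in assignment]
        assignment = random.choices(assignment, weights=weights, k=n_bees)
    return assignment

result = forage()
counts = {patch: result.count(patch) for patch in SOURCES}
print(counts)
```

Run repeatedly, the colony almost always ends up concentrated on patch_A – which is the analogy to debriefs steering the next round of charters toward the most productive areas of the product.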

So, as a tester returning information to your team, how good is your waggle dance?

(The Wisdom of Crowds is an interesting read – and not just for the story of the waggle dance!)