Monthly Archives: July 2014

Release decisions and marshmallows

The Stanford “marshmallow experiment” was a series of studies on delayed gratification led by psychologist Walter Mischel in the late ’60s and early ’70s. A recent revisiting of this experiment can be seen in this short video and, whether you believe the claims of the follow-up studies or not, it’s interesting to watch the different behaviours of different kids (even the difference between twin boys in this regard).

I was working on a presentation about thinking of testing as being an information service provider when this video came to my attention (thanks to my wife). It got me thinking about the people who make release decisions for the software we work on.

We can provide one marshmallow’s worth of valuable product information right now and you can release based on that information. Or, we can spend some more time doing really good testing and then give you two marshmallows’ worth of information, to make an even more informed release decision!

The problem with this analogy is that neither marshmallow necessarily tastes very good – and the second one is likely to taste worse than the first, right?!

Anyway, maybe just give your business folks a marshmallow; it might just make their day.

Pros and cons of a new work environment

I have recently moved into my own office for the first time, after some minor reshuffling of our floor of the building. I am adapting to my new environment and discovering the pros and cons of (relative) isolation.

On the plus side, it’s nice to have more dedicated space to store my testing book collection and also to have a small meeting space without the need to book a meeting room. The option to close the door to indicate “do not disturb” makes conference calls easier and also serves as a vehicle to block out time for uninterrupted thought – this already seems to be translating into more frequent blog posts as well as more reading.

On the downside, there is a feeling of isolation that comes from no longer overhearing team conversations (these are often a way of spotting misunderstandings or blockages needing to be resolved). The most annoying thing so far, however, is the light sensor that insists on turning the office lights off after around fifteen minutes when it doesn’t detect movement – say, when I’m sitting typing behind my large monitor…! While the resulting arm-waving might be good exercise (and is no doubt mirth-inducing for those on the office floor), it is incredibly frustrating. I was convinced that turning fluoros off and then back on again used more electricity than leaving them on (which would have supported my request to have the light sensor bypassed the next time the electrician visits the office), but it turns out that’s an urban myth, as per this Mythbusters episode.

While cleaning out the office before moving in, I found a pile of books left behind by the previous occupier, most of which were eagerly snapped up once their recycling demise was advertised. I took the opportunity to grab one for myself as I remembered hearing good reports about it, and as a result I’m now about halfway through the excellent Peopleware by Tom DeMarco and Tim Lister. By a weird coincidence, much of what I’ve read so far is about providing intellect-based workers with quiet, spacious surroundings in which to do their best work!

Getting our message across about what “testing” really is

A recent Tweet about the BugBuster product again made me realise what a long journey we have as a community to educate the wider populace about what “testing” actually is (and is not).

The BugBuster website, for example, says this on its “Features” page:

Who said testing meant writing and endlessly maintaining test cases? BugBuster runs smart software agents that explore and test your website automatically. That’s right, no need to write test cases! The agents … test the various elements of the web app as if it was done by a human being.

The emphasis on the tool doing the same thing as humans reflects a common perception of what testing can be reduced to; the “checking”* mentality is everywhere. I have no issue with using tools to help with testing, or with automation that performs mundane checking to help speed up development (not testing). But I do take issue with the idea that testing is dehumanizable.

They raise a good question here: “who said testing meant writing and endlessly maintaining test cases?” I spent too long thinking this was my job too and it’s almost unbelievable to look back at that time and think that I was adding any value to anything. The realization that testing really isn’t this but is in fact intellectually challenging and can add incredible value to the process of delivering great software for our users took me too long to reach, but at least I got there (thanks to Michael Bolton and the life changing experience that was his Rapid Software Testing course back in 2007).

How do we help others in this industry come to the same realization when they are bombarded with messages that dehumanize what “testing” really is? The context-driven testing community is full of great thinkers and their ideas about how to do great testing, but how do we in that community get our message across to the masses? While we do already have organizations like AST and ISST flying the CDT flag, what else can we do to broaden the wider community’s knowledge of what “testing” really is?

* Want to know more about the “Testing vs. checking” distinction? Start here with this Michael Bolton blog post.

Snooker and test planning

I was lucky enough to spend the weekend watching the semi-finals and final of the Bendigo Goldfields Open snooker tournament. This world-ranking event is relatively new on the snooker calendar and I’ve taken the opportunity to attend each year, with the “finals” weekend now becoming an annual event for me. Coverage of snooker is sadly almost non-existent in Australia so this is my one chance each year to watch the sport I love, up close and personal. (I will use some snooker terminology in what follows, so if you’re not too familiar with the game, it might be worth a look at some basic rules and terms first.)

As I watched the deft skills of the players, it occurred to me that there are parallels between how they go about constructing a break (with the aim of scoring enough points to win the frame) and how we as exploratory testers go about testing a feature or product.

Break building is all about strategy and planning, with good players working out several shots ahead so that they maneuver the cue ball into the right place after each shot in order to make the next shot (and subsequent ones) as straightforward as possible. These plans don’t look too far ahead though and the actual outcome of each shot will often mean their plan for subsequent shots is no longer valid, so they need to adjust. Sometimes this will leave the player with a choice between deliberately ending the break now with a safety shot or taking on a more difficult or risky shot in order to continue the break. These risk assessments seem to improve with player experience (“match fitness”) and are also heavily influenced by the timing within the game (e.g. whether this player has a large advantage in terms of frames won over the opponent).

This is similar to test planning when we’re using exploratory testing – planning just enough ahead so we know where we’re heading, but being mindful that we will probably need to adjust as we go along, depending on what happens along the way. It is this ability to course adjust that gives power to exploratory testing and also allows the professional snooker player to deal with the flow of the balls during play.

The snooker player also has black swan events to worry about. One of the most notable of these events is the so-called kick – this is where either the cue ball or the object ball literally jumps in the air slightly after receiving contact from the cue or the cue ball respectively. This is almost always bad news for the player, since the angle on either ball is disturbed and contact is rarely clean. Many a seemingly simple shot has been scuppered by a kick. (This is why you will see players requesting balls – particularly the cue ball which becomes dirty with residual chalk from the tip of the cue – be cleaned frequently especially at critical points in a break.)

We can take a few cues (forgive the pun) from snooker when we’re thinking about our test execution/planning – consider your options just a few moves ahead, take considered risks, and be patient when you need to be. And those black swans are always lurking…

Oh, and for the sake of completeness, the tournament was won by Englishman Judd Trump (world number 6), who defeated Aussie Neil Robertson (world number 1). That’s right, the world number 1 snooker player is from Melbourne – surely the most underrated sports professional in this country, with zero media coverage (even when he won the World Championship!).