The way we say things matters

This lovely little piece of mis-translation came through my Twitter feed over the weekend (originally from Sheenagh Pugh):

[Image: photo of a Chinese sign with a mis-translated English message]

I am reliably informed by teammates in our Zhuhai office that a better translation would read “Keep off the grass”, but the wording as it stands makes for a much nicer message, I think. (For a much more in-depth look at this translation problem, check out the post on this very topic on the Language Log.)

This got me thinking about the way we express ourselves as testers. We’re often the bringers of bad news, and choosing how to express that information to our stakeholders can make a huge difference to how both the individual tester and the profession of testing are perceived.

I’ve been reading recently about interactional expertise (Collins and Evans) and emotional intelligence, and I think these are subjects testers need to be familiar with to help them interact with their varied stakeholders in more effective ways. While writing good defect reports is still an essential (and overlooked) skill, the ability to communicate with stakeholders more generally is becoming ever more important, especially in agile teams. I’m sure that developing these skills will elevate testers within their teams and help to make them the valued team members they really should be.

(And, while I’m here, I strongly recommend that you grab yourself a copy of the latest Status Quo album, “Aquostic”. This is the band’s first all-acoustic effort and has just charted in the UK at number 5 – not bad for a band in its sixth decade!)

How to win the war? Follow the script

In mid-2002, the US armed forces ran one of the largest and most expensive war game experiments in history, known as “Millennium Challenge 2002”. It was designed to test new technologies for network-centric warfare, intended to give better command and control over both current and future weaponry and tactics.

The scenario was that a crazed but cunning (and strongly anti-American) military commander had broken away from his government somewhere in the Persian Gulf. Religious and ethnic loyalty gave him power and strong links to terrorist organizations made him even more dangerous. War was imminent.

The US side, known as the “Blue” team (as they always are in such military exercises apparently), were pitted against the “Red” team – with the rogue commander being played by retired Marine Corps Lieutenant General, Paul Van Riper.

It’s worth a quick note on the character of Van Riper at this point. His forty-year military career included Vietnam and, reading about him (especially in the words of those he led), it is clear that he was a straight-talking leader who inspired his teams to work for him even in the most dangerous and difficult of circumstances. By the time of this war game, he was retired and in his mid-60s, with no real need to be circumspect.

What actually happened during the running of the war game is described well in [1]:

In the first few days of the exercise, using surprise and unorthodox tactics, the wily 64-year-old Vietnam veteran sank most of the US expeditionary fleet in the Persian Gulf, bringing the US assault to a halt.

What happened next will be familiar to anyone who ever played soldiers in the playground. Faced with an abrupt and embarrassing end to the most expensive and sophisticated military exercise in US history, the Pentagon top brass simply pretended the whole thing had not happened. They ordered their dead troops back to life and “refloated” the sunken fleet. Then they instructed the enemy forces to look the other way as their marines performed amphibious landings. Eventually, Van Riper got so fed up with all this cheating that he refused to play any more. Instead, he sat on the sidelines making abrasive remarks until the three-week war game – grandiosely entitled Millennium Challenge – staggered to a star-spangled conclusion on August 15, with a US “victory”.

Van Riper very publicly aired his opinions on how ridiculously the game had been played and strongly criticized the idea that the ultimate “Blue” victory validated anything about the technology and approach the game was designed to test. In [2], he says:

There were accusations that Millennium Challenge was rigged. I can tell you it was not. It started out as a free-play exercise, in which both Red and Blue had the opportunity to win the game. However, about the third or fourth day, when the concepts that the command was testing failed to live up to their expectations, the command then began to script the exercise in order to prove these concepts.

This was my critical complaint. You might say, “Well, why didn’t these concepts live up to the expectations?” I think they were fundamentally flawed in that they leaned heavily on technology. They leaned heavily on systems analysis of decision-making.

It would seem that the skills and experience of Paul Van Riper, and his ability to react quickly to what he observed, gave him a significant advantage over the scripted, process-driven approach of his enemy. Yet, rather than making any effort to incorporate his alternative strategies, it was deemed better to constrain his actions so that the script could play out the way it was “meant to”.

The analogy with scripted versus exploratory testing is a strong one, I think. So perhaps next time you’re locked in battle with a factory-schooled commander of scripted testing, take up the challenge and demonstrate your superior powers of testing. Even if your testing war game ends up the same way as the Millennium Challenge, you might at least win the battle – and win some supporters for your exploratory testing cause along the way.

For reference…

[1] “Wake-up Call” (The Guardian, UK): http://www.theguardian.com/world/2002/sep/06/usa.iraq

[2] “The Immutable Nature of War” (Scott Willis interview with Van Riper): http://www.pbs.org/wgbh/nova/military/immutable-nature-war.html

(You can read more about Millennium Challenge 2002 on Wikipedia.)

Inspiration for this post came from reading about this war game in the fascinating book Blink by Malcolm Gladwell (the same book that inspired my previous post, Maier’s Two Cord Puzzle and Testing Heuristics).

Maier’s “two cord puzzle” and testing heuristics

Norman Maier was an experimental psychologist at the University of Michigan. In 1931, he was interested in exploring how people solve problems and came up with a puzzle which has become known as the “two cord puzzle”.

He attached two cords to the ceiling of his lab and asked people to come up with ways to tie the two ends together. The two cords were placed just far enough apart that, while holding on to one cord, you couldn’t reach the other cord (it wouldn’t be much of a puzzle otherwise!). Some objects were placed around the room – such as extension cords, poles, clamps, and weights – and participants could use any of these items to help them solve his puzzle.

While most people quickly worked out that attaching an extension cord to one of the cords would solve the problem, as would using a pole, these obvious answers didn’t satisfy Maier – he was looking for a different, simple and elegant solution. He would keep asking the participants for new solutions until they ran out of ideas.

The elegant solution Maier was looking for was to attach a weight to one of the cords and set it swinging. You could then grab the other cord and catch the swinging cord as it came towards you. Very few participants worked out this solution – until they were given a seemingly accidental clue.

Throughout the experiment, Maier would wander around the lab and, once people had run out of ideas, would seemingly by accident brush against one of the cords and set it swinging. Within a minute of this clue, most people would come up with the solution.

This experiment shows how easily we can be primed with a solution to a problem – without even realizing it. When the participants in Maier’s experiment were asked afterwards, only one-third of them realized they’d been given a massive clue by him setting one of the cords swinging. The rest had stories about how they came to the solution for themselves and, while these stories might have been representative of their conscious experience, they were clearly not the real reason they solved the problem.

This story got me thinking about testing and, in particular, heuristics. I see heuristics as clues that help us as testers find problems in the products we’re testing. While their use is neither unconscious nor accidental, experienced practitioners who’ve been applying heuristics (and constantly developing new ones of their own) over time probably reach the point where their use becomes unconscious (“unconscious competence”).

I’m sure you’ve had that feeling where you look at a feature or product and just know there’s a bug. Maybe that’s a heuristic gently brushing against the product and handing you a clue.

For more about heuristics and their power in testing, try these resources:

(I came across Maier’s two cord puzzle while reading Blink by Malcolm Gladwell. This is a great book with lots of testing takeaways, maybe more blog posts to come when I finish reading it.)