“Range: Why Generalists Triumph in a Specialized World” (David Epstein)

I’m a sucker for the airport bookshop and I’ve blogged before on books acquired from these venerable establishments. On a recent trip to the US, a book stood out to me as I browsed, because of its subtitle: “Why generalists triumph in a specialized world”. It immediately triggered a memory of the “generalizing specialists” idea that seemed so popular in the agile community maybe ten years ago (but hasn’t been so hot recently, at least not in what I’ve been reading around agile). And so it was that Range: Why Generalists Triumph in a Specialized World by David Epstein accompanied me on my travels, giving me a fascinating read along the way.

David’s opening gambit is a comparison of the journeys of two well-known sportsmen, viz. Roger Federer and Tiger Woods. Woods was singularly focused on becoming excellent at golf from a very young age, while Federer tried many different sports before eventually becoming the best male tennis player the world has ever seen. Where Woods went for early specialization, Federer opted for breadth and a range of sports before realizing where he truly wanted to specialize and excel. David notes:

The challenge we all face is how to maintain the benefits of breadth, diverse experience, interdisciplinary thinking, and delayed concentration in a world that increasingly incentivizes, even demands, hyperspecialization. While it is undoubtedly true that there are areas that require individuals with Tiger’s precocity and clarity of purpose, as complexity increases – and technology spins the world into vaster webs of interconnected systems in which each individual sees only a small part – we also need more Rogers: people who start broad and embrace diverse experiences and perspectives while they progress. People with range.

Chapter 1 – “The Cult of the Head Start” – uses the example of chess grandmasters as another domain, like golf, where early specialization works well. David makes an interesting observation here around AI, a topic which seems to be finding its way into more and more conversations in software testing, and the last line of the following quote applies well, in my opinion, to the very real challenges involved in thinking about AI as a replacement for human testers:

The progress of AI in the closed and orderly world of chess, with instant feedback and bottomless data, has been exponential. In the rule-bound but messier world of driving, AI has made tremendous progress, but challenges remain. In a truly open-world problem devoid of rigid rules and reams of perfect historical data, AI has been disastrous. IBM’s Watson destroyed at Jeopardy! and was subsequently pitched as a revolution in cancer care, where it flopped so spectacularly that several AI experts told me they worried its reputation would taint AI research in health-related fields. As one oncologist put it, “The difference between winning at Jeopardy! and curing all cancer is that we know the answer to Jeopardy! questions.” With cancer, we’re still working on posing the right questions in the first place.

In Chapter 2 – “How the Wicked World Was Made” – David shares some interesting stories around IQ testing and notes that:

…society, and particularly higher education, has responded to the broadening of the mind by pushing specialization, rather than focusing early training on conceptual, transferable knowledge.

I see the same pattern in software testing, with people choosing to specialize in one particular automation tool over learning more broadly about good testing, risk analysis, critical thinking and so on, skills that could be applied more generally (and are also less prone to redundancy as technology changes). In closing out the chapter, David makes the following observation which again rings very true in testing:

The more constrained and repetitive a challenge, the more likely it will be automated, while great rewards will accrue to those who can take conceptual knowledge from one problem or domain and apply it in an entirely new one.

A fascinating – and new to me – story about early Venetian music opens Chapter 3 – “When Less of the Same Is More”. Discussing how musicians learn and apply their skills across genres, David reaches a conclusion that makes for poignant reading for testers, especially those with a desire to become excellent exploratory testers:

[This] is in line with a classic research finding that is not specific to music: breadth of training predicts breadth of transfer. That is, the more contexts in which something is learned, the more the learner creates abstract models, and the less they rely on any particular example. Learners become better at applying their knowledge to a situation they’ve never seen before, which is the essence of creativity.

Chapter 4’s title – “Learning, Fast and Slow” – is a nod to Daniel Kahneman, and the chapter looks at the difficulty of making teaching and training broadly applicable beyond the case directly under instruction, using examples from maths students and naval officers:

Knowledge with enduring utility must be very flexible, composed of mental schemes that can be matched to new problems. The virtual naval officers in the air defense simulation and the math students who engaged in interleaved practice were learning to recognize deep structural similarities in types of problems. They could not rely on the same type of problem repeating, so they had to identify underlying conceptual connections in simulated battle threats, or math problems, that had never actually been seen before. They then matched a strategy to each new problem. When a knowledge structure is so flexible that it can be applied effectively even in new domains or extremely novel situations, it is called “far transfer.”

I think we face similar challenges in software testing. We’re usually testing something different from what we’ve tested before; we’re generally not (hopefully) testing the same thing over and over again. Thinking about how we’ve faced similar testing challenges in the past, and applying appropriate learnings from those to new situations, is a key skill and helps us to develop the toolbox of ideas, strategies and other tools from which we can draw when faced with something new. This “range” and ability to make conceptual connections is also very important in performing good risk analysis, another key testing skill.

In Chapter 5 – “Thinking Outside Experience” – David tells the story of Kepler and how he derived new insights into astronomy by using analogies from very disparate areas, leading to his invention of astrophysics. He was a fastidious note taker too, just like a good tester:

Before he began his tortuous march of analogies toward reimagining the universe, Kepler had to get very confused on his homework. Unlike Galileo and Isaac Newton, he documented his confusion. “What matters to me,” Kepler wrote, “is not merely to impart to the reader what I have to say, but above all to convey to him the reasons, subterfuges, and lucky hazards which led to my discoveries.”

Chapter 6 – “The Trouble with Too Much Grit” – starts by telling the story of Van Gogh, noting:

It would be easy enough to cherry-pick stories of exceptional late developers overcoming the odds. But they aren’t exceptions by virtue of their late starts, and those late starts did not stack the odds against them. Their late starts were integral to their eventual success.

David also shares a story about a major retention issue experienced by a select part of the US Army, concluding:

In the industrial era, or the “company man era”…”firms were highly specialized,” with employees generally tackling the same suite of challenges repeatedly. Both the culture of the time – pensions were pervasive and job switching might be viewed as disloyal – and specialization were barriers to worker mobility outside of the company. Plus, there was little incentive for companies to recruit from outside when employees regularly faced kind learning environments, the type where repetitive experience alone leads to improvement. By the 1980s, corporate culture was changing. The knowledge economy created “overwhelming demand for…employees with talents for conceptualization and knowledge creation.” Broad conceptual skills now helped in an array of jobs, and suddenly control over career trajectory shifted from the employer, who looked inward at a ladder of opportunity, to the employee, who peered out at a vast web of possibility. In the private sector, an efficient talent market rapidly emerged as workers shuffled around in pursuit of match quality [the degree of fit between the work someone does and who they are – their abilities and proclivities]. While the world changed, the Army stuck with the industrial-era ladder.

In Chapter 7 – “Flirting with Your Possible Selves” – David shares the amazing career story of Frances Hesselbein as an example of changing tack many times rather than choosing an early specialization and sticking with it, and of the many successes such a path can yield along the journey. He cites:

[computational neuroscientist Ogi Ogas] uses the shorthand “standardization covenant” for the cultural notion that it is rational to trade a winding path of self-exploration for a rigid goal with a head start because it ensures stability. “The people we study who are fulfilled do pursue a long-term goal, but they only formulate it after a period of discovery,” he told me. “Obviously, there’s nothing wrong with getting a law or medical degree or PhD. But it’s actually riskier to make that commitment before you know how it fits you. And don’t consider the path fixed. People realize things about themselves halfway through medical school.” Charles Darwin, for example.

Chapter 8 – “The Outsider Advantage” – talks about the benefits of bringing diverse skills and experiences to bear in problem solving:

[Alph] Bingham had noticed that established companies tended to approach problems with so-called local search, that is, using specialists from a single domain, and trying solutions that worked before. Meanwhile, his invitation to outsiders worked so well that it was spun off as an entirely different company. Named InnoCentive, it facilitates entities in any field acting as “seekers” paying to post “challenges” and rewards for outside “solvers.” A little more than one-third of challenges were completely solved, a remarkable portion given that InnoCentive selected for problems that had stumped the specialists who posted them. Along the way, InnoCentive realized it could help seekers tailor their posts to make a solution more likely. The trick: to frame the challenge so that it attracted a diverse array of solvers. The more likely a challenge was to appeal not just to scientists but also to attorneys and dentists and mechanics, the more likely it was to be solved.

Bingham calls it “outside-in” thinking: finding solutions in experiences far outside of focused training for the problem itself. History is littered with world-changing examples.

This sounds like the overused “think outside the box” concept, but there’s a lot of validity here; the fact is that InnoCentive works:

…as specialists become more narrowly focused, “the box” is more like Russian nesting dolls. Specialists divide into subspecialties, which soon divide into sub-subspecialties. Even if they get outside the small doll, they may get stuck inside the next, slightly larger one. 

In Chapter 9 – “Lateral Thinking with Withered Technology” – David tells the fascinating story of Nintendo and how the Game Boy was such a huge success despite being built using older (“withered”) technology. Out of this story, he mentions the idea of “frogs” and “birds” from the physicist and mathematician Freeman Dyson:

…Dyson styled it this way: we need both focused frogs and visionary birds. “Birds fly high in the air and survey broad vistas of mathematics out to the far horizon,” Dyson wrote in 2009. “They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and only see the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time.” As a mathematician, Dyson labeled himself a frog but contended, “It is stupid to claim that birds are better than frogs because they see farther, or that frogs are better than birds because they see deeper.” The world, he wrote, is both broad and deep. “We need birds and frogs working together to explore it.” Dyson’s concern was that science is increasingly overflowing with frogs, trained only in a narrow specialty and unable to change as science itself does. “This is a hazardous situation,” he warned, “for the young people and also for the future of science.”

I like this frog and bird analogy and can picture examples from working with teams where excellent testing arose from a combination of frogs and birds working together to produce the kind of product information neither would have provided alone.

David makes the observation that communication technology and our increasingly easy access to vast amounts of information are also playing a part in reducing our need for specialists:

…narrowly focused specialists in technical fields…are still absolutely critical, it’s just that their work is widely accessible, so fewer suffice

An interesting study on patents further reinforces the benefits of “range”:

In low-uncertainty domains, teams of specialists were more likely to author useful patents. In high-uncertainty domains – where the fruitful questions themselves were less obvious – teams that included individuals who had worked on a wide variety of technologies were more likely to make a splash. The higher the domain uncertainty, the more important it was to have a high-breadth team member… When the going got uncertain, breadth made the difference.

In Chapter 10 – “Fooled by Expertise” – David looks at how poorly “experts” are able to predict the future and discusses work from the psychologist and political scientist Philip Tetlock:

Tetlock conferred nicknames… that became famous throughout the psychology and intelligence-gathering communities: the narrow-view hedgehogs, who “know one big thing,” and the integrator foxes, who “know many little things.”

Hedgehog experts were deep but narrow. Some had spent their careers studying a single problem. Like [Paul] Ehrlich and [Julian] Simon, they fashioned tidy theories of how the world works through the single lens of their specialty, and then bent every event to fit them. The hedgehogs, according to Tetlock, “toil devotedly” within one tradition of their specialty, “and reach for formulaic solutions to ill-defined problems.” Outcomes did not matter; they were proven right by both successes and failures, and burrowed further into their ideas. It made them outstanding at predicting the past, but dart-throwing chimps at predicting the future. The foxes, meanwhile, “draw from an eclectic array of traditions, and accept ambiguity and contradiction,” Tetlock wrote. Where hedgehogs represented narrowness, foxes ranged outside a single discipline or theory and embodied breadth.

David’s observations on this later in the chapter reminded me of some testers I’ve worked with over the years who are unwilling to see beyond the binary “pass” or “fail” outcome of a test:

Beneath complexity, hedgehogs tend to see simple, deterministic rules of cause and effect framed by their area of expertise, like repeating patterns on a chessboard. Foxes see complexity in what others mistake for simple cause and effect. They understand that most cause and effect relationships are probabilistic, not deterministic. There are unknowns, and luck, and even when history apparently repeats, it does not do so precisely. They recognize that they are operating in the very definition of a wicked learning environment, where it can be very hard to learn, from either wins or losses.

Chapter 11 – “Learning to Drop Your Familiar Tools” – starts off by telling the story of the Challenger space shuttle disaster and how, even though some people knew about the potential for the problem that caused the disaster, existing practices and culture within NASA got in the way of that knowledge being heard. The “Carter Racing” Harvard Business School case study mimics the Challenger disaster, but the participants have to make a race/no-race decision on whether to run a racing car with some known potential problems. Part of this story reminded me very much of the infamous Dice Game so favoured by the context-driven testing community:

“Okay…here comes a quantitative question,” the professor says. “How many times did I say yesterday if you want additional information let me know?” Muffled gasps spread across the room. “Four times,” the professor answers himself. “Four times I said if you want additional information let me know.” Not one student asked for the missing data [they needed to make a good decision].

A fascinating story about the behaviour of firefighters in bushfire situations was very revealing, with many of those who perished found weighed down with heavy equipment when they could have ditched their tools and probably run to safety:

Rather than adapting to unfamiliar situations, whether airline accidents or fire tragedies, [psychologist and organizational behaviour expert Karl] Weick saw that experienced groups became rigid under pressure and “regress to what they know best.” They behaved like a collective hedgehog, bending an unfamiliar situation to a familiar comfort zone, as if trying to will it to become something they had actually experienced before. For wildland firefighters, their tools are what they know best. “Firefighting tools define the firefighter’s group membership, they are the firefighter’s reason for being deployed in the first place,” Weick wrote. “Given the central role of tools in defining the essence of a firefighter, it’s not surprising that dropping one’s tools creates an existential crisis.” As Maclean succinctly put it, “When a firefighter is told to drop his firefighting tools, he is told to forget he is a firefighter.”

This reminded me of some testers who hang on to test management tools or a particular automation tool as though it defines them and their work. We should be thinking more broadly and using tools to aid us, not define us:

There are fundamentals – scales and chords – that every member must overlearn, but those are just tools for sensemaking in a dynamic environment. There are no tools that cannot be dropped, reimagined, or repurposed in order to navigate an unfamiliar challenge. Even the most sacred tools. Even the tools so taken for granted they become invisible.

Chapter 12 – “Deliberate Amateurs” – wraps up the main content of the book. I love this idea:

They [amateurs] embrace what Max Delbruck, a Nobel laureate who studied the intersection of physics and biology, called “the principle of limited sloppiness.” Be careful not to be too careful, Delbruck warned, or you will unconsciously limit your exploration.

This note on the global financial crisis rings true in testing also; all too often we see testing compartmentalized and systemic issues go undetected:

While I was researching this book, an official with the US Securities and Exchange Commission learned I was writing about specialization and contacted me to make sure I knew that specialization had played a critical role in the 2008 global financial crisis. “Insurance regulators regulated insurance, bank regulators regulated banks, securities regulators regulated securities, and consumer regulators regulated consumers,” the official told me. “But the provision of credit goes across all those markets. So we specialized products, we specialized regulation, and the question is, ‘Who looks across those markets?’ The specialized approach to regulation missed systemic issues.”

We can also learn something from this observation about team structures, especially in the world of microservices and so on:

In professional networks that acted as fertile soil for successful groups, individuals moved easily between teams, crossing organizational and disciplinary boundaries and finding new collaborators. Networks that spawned unsuccessful teams, conversely, were broken into small, isolated clusters in which the same people collaborated over and over. Efficient and comfortable, perhaps, but apparently not a creative engine.

In his Conclusion, David offers some good advice:

Approach your personal voyage and projects like Michelangelo approached a block of marble, willing to learn and adjust as you go, and even to abandon a previous goal and change directions entirely should the need arise. Research on creators in domains from technological innovation to comic books shows that a diverse group of specialists cannot fully replace the contributions of broad individuals. Even when you move on from an area of work or an entire domain, that experience is not wasted.

Finally, remember that there is nothing inherently wrong with specialization. We all specialize to one degree or another, at some point or other.

I thoroughly enjoyed reading “Range”. David’s easy writing style and his use of good stories and examples make this a very accessible and comprehensible book. There were many connections to what we see in the world of software testing, and hopefully I’ve managed to illuminate some of them in this post.

This is recommended reading for anyone involved in technology, and I think testers in particular will gain a lot of insights from this book. And, remember, “Be careful not to be too careful”!
