
A very different conference experience

My Twitter feed has been busy in recent weeks with testing conference season in full swing.

First on my radar after some time away in Europe on holidays was TestBash Australia, followed soon afterwards by its New Zealand and San Francisco incarnations. Next up was the German edition of the massive Agile Testing Days, and another mega-conference, European stalwart EuroSTAR, is in progress as I write.

It’s one of the joys of social media that we can share in the goings-on of these conferences even if we can’t attend in person. The only testing conference I’ve attended in 2019 has been TiCCA19 in Melbourne (an event I co-organized with Paul Seaman and the Association for Software Testing) but I hope to get to an event or two in 2020.

I did attend a very different kind of conference at the Melbourne Town Hall in October, though, in the shape of the full weekend Animal Activists Forum. There was a great range of talks across several tracks on both days and I saw inspiring presentations from passionate activists. Organizations like Voiceless, Animals Australia, Aussie Farms, The Vegan Society, and the Animal Justice Party – as well as many individuals – are doing so much good work for this movement.

There were some marked differences between this conference and the testing/IT conferences I generally attend. Firstly, the cost for the two full days of this event (including refreshments but not lunches) was just AU$80 (early bird), representing remarkable value given the location and range of great talks on offer.

Another obvious difference was the prevalence of female speakers on the programme, probably due to the fact that the vegan community is believed to be around 70-80% female. It was good to see more passion and positivity emanating from the stage too, all the more remarkable when considering the atrocities and realities of the animal exploitation industries that many of us are regularly exposed to within this movement.

The focus of most of the talks I attended was on actionable content, things we could do to help advance the movement. While there was some discussion of theory, history and philosophy, it was for the most part discussed with a view to providing ideas for what we can do now to advance animal rights. Many IT conference talks would do well to similarly focus on actionable takeaways.

While there were many differences compared to tech conferences, there was also evidence of common themes. One of the areas of commonality was how difficult it is to persuade people to change, even in the face of facts and evidence in support of the positive impacts of the change, such as going vegan (with the focus being squarely on going vegan for the animals in this audience, while also considering the environmental and health benefits). It was good to hear the different ideas and approaches from different speakers and activist groups. We need many different styles of advocacy when it comes to context-driven testing too – different people are going to be reached in different ways (it’s almost as though context matters!).

It’s interesting to me how easy it sometimes seems to be to change people’s minds or opinions, though. An example I’ve seen unfolding is the introduction of dairy products into China. I’ve been working with testing teams there for seven years and, for the first few years, I rarely saw or heard any mention of dairy products. This situation has changed very rapidly, thanks to massive marketing efforts by the dairy industry (most notably – and sadly – from Australian and New Zealand dairy companies). Even though almost all Chinese people are lactose intolerant and have little idea about how to use products like dairy milk and cheese, the consumption of these products has become very mainstream. From infant formula (a very lucrative business) to milk on supermarket shelves (with some very familiar Australian brands on show) to Starbucks, the dairy offerings are now ubiquitous.

The fact that these products are normalized in the West enables an easier sell to the Chinese, and the marketing has been heavily contextualized – some of the advertising claims that drinking cow’s milk will help children grow taller, for example. These nutritional falsehoods have worked in the West and are now working in China. The dairy mythology has been successfully sold to this enormous market, and the unbelievable levels of cruelty that will result from this, as well as the inevitable negative human health implications, are tragic.

Such large industries, of course, have dollars on their side to mount huge marketing campaigns and place profit above both the welfare of animals and the health of their consumers. But maybe there are lessons to be learned from their approaches to messaging that could be beneficial in selling good approaches to testing (without the blatant untruths, of course)?

(By the way, does anyone reading this post know if the ISTQB is having a marketing push in China right now? A couple of my colleagues there have talked to me about ISTQB certification just in the last week, while no-one has mentioned it before in the seven years I’ve been working with testers in China…)

If you found this post interesting, I humbly recommend that you also read this one: What becoming vegan taught me about software testing.

All testing is exploratory: change my mind

I’ve recently returned to Australia after several weeks in Europe, mainly for pleasure with a small amount of work along the way. Catching up on some of the testing-related chatter on my return, I spotted that Rex Black repeated his “Myths of Exploratory Testing” webinar in September. I respect the fact that he shares his free webinar content every month and, even though I often find myself disagreeing with his opinions, hearing what others think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good testing looks like.

Rex started off with his definition of exploratory testing (ET), viz.

A technique that uses knowledge, experience and skills to test software in a non-linear and investigatory fashion

He claimed that this is a “pretty widely shared definition of ET” but I don’t agree. The ISTQB Glossary uses the following definition:

An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.

The definition I hear most often is something like the following James Bach/Michael Bolton effort (which they used until 2015):

An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project

They have since deprecated the term “exploratory testing” in favour of simply “testing” (from 2015), defining testing as:

Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.

Rex went on to say that the test basis and test oracles in ET “are primarily skills, knowledge and experience” and any such testing is referred to as “experience-based testing” (per the ISTQB definition, viz. “Testing based on the tester’s experience, knowledge and intuition.”). Experience-based testing that is investigatory is then deemed to be exploratory. I have several issues with this. There is an implication here that ET involves testing without using a range of oracles that might include specifications, user stories, or other more “formal” sources of what the software is meant to do. Rex reinforces this when he goes on to say that ET is a form of validation and “may tell us little or nothing about conformance to specification because the specification may not even be consulted by the tester”. Also, I can’t imagine any valuable testing that doesn’t rely on the tester’s skills, knowledge and experience so it seems to me that all testing would fall under this “experience-based testing” banner.

The first myth Rex discussed was the “origin myth”, that ET was invented in the 1990s in Silicon Valley or at least that was when a “name got hung on it” (e.g. by Cem Kaner). He argued instead that it was invented by whoever wrote the first program, that IBM were doing it in the 1960s, that the independent test teams in Fred Brooks’s 1975 book The Mythical Man-Month were using ET, and that “error guessing” as introduced by Glenford Myers in the classic book The Art of Software Testing sounds “a whole lot like a form of ET”. The History of Definitions of ET on James Bach’s blog is a good reference in this regard, in my opinion. While I agree that programmers have been performing some kind of investigatory or unscripted testing in their development and debugging activities for as long as programming has been a thing, it’s important that we define our testing activities in a way that makes how we talk about what we do both accurate and credible. I see the argument for suggesting that error guessing is a form of ET, but it’s just one tactic that might be employed by a tester skilled in the much broader approach that is ET.

The next myth Rex discussed was the “completeness myth”, that “playing around” with the software is sufficient to test it. He mentioned that there is little education around testing in Software Engineering degrees, so people don’t understand what testing can and cannot do, which leads to myths like this. I agree that there is a general lack of understanding in our industry of how important structured ET is as part of a testing strategy, but I haven’t personally heard this myth espoused anywhere recently.

Next up was the “sufficiency myth”, that some teams bring in a “mighty Jedi warrior of ET & this person has helped [them] to find every bug that can matter”. He mentioned a study from Microsoft where they split their testing groups for the same application, with one using ET (and other reactive strategies) only, while the other used pre-designed tests (including automated tests) only. The sets of bugs found by these two teams were partially but not fully overlapping, hence proving that ET alone is not sufficient. I’m confident that even if the groups had been divided up and did the same kind of testing (be it ET or pre-designed), the sets of bugs from the two teams would also have been partially but not fully overlapping (there is some evidence to support this, albeit from a one-off small case study, from Aaron Hodder & James Bach in their article Test Cases Are Not Testing)! I’m not sure where this myth comes from; I’ve not heard it from anyone in the testing industry and haven’t seen a testing strategy that relies solely on ET. I do find that using ET as an approach can really help in focusing on finding bugs that matter, though, and that seems like a good thing to me.

Rex continued with the “irrelevance myth”, that we don’t have to worry about ET (or, indeed, any validation testing at all) because of the use of ATDD, BDD, or TDD. He argued that all of these approaches are verification rather than validation, so some validation is still relevant (and necessary). I’ve seen this particular myth and, if anything, it seems to be more prevalent over time especially in the CI/CD/DevOps world where automated checks (of various kinds) are viewed as sufficient gates to production deployment. Again, I see this as a lack of understanding of what value ET can add and that’s on us as a testing community to help people understand that value (and explain where ET fits into these newer, faster deployment approaches).

The final myth that Rex brought up was the “ET is not manageable myth”. In dispelling this myth, he mentioned the Rapid Reporter tool, timeboxed sessions, and scoping using charters (where a “charter is a set of one or more test conditions”). This was all quite reasonable, basically referring to session-based test management (SBTM) without using that term. One of his recommendations seemed odd, though: “record planned session time versus actual [session] time” – sessions are strictly timeboxed in an SBTM situation so planned and actual time are always the same. While this seems to be one of the more difficult aspects of SBTM at least initially for testers in my experience, sticking to the timebox is critical if ET is to be truly manageable.
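To make the manageability point concrete, the core of SBTM is really just a charter plus a strict timebox per session. Here’s a minimal sketch in Python of how session records might be tracked – the field names and charters are purely illustrative, not any standard SBTM tooling:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One exploratory testing session, SBTM-style (names are illustrative)."""
    charter: str          # the mission, e.g. "Explore login error handling"
    timebox_minutes: int  # planned, fixed duration (commonly 60 or 90)
    actual_minutes: int   # time actually spent in the session

    def within_timebox(self) -> bool:
        # In strict SBTM the session ends when the timebox does,
        # so actual time should never exceed the planned timebox.
        return self.actual_minutes <= self.timebox_minutes

sessions = [
    Session("Explore report export with malformed data", 90, 90),
    Session("Investigate intermittent login failures", 60, 75),
]

# Flag any sessions that overran their timebox for follow-up in debriefs.
overruns = [s.charter for s in sessions if not s.within_timebox()]
print(overruns)  # prints ['Investigate intermittent login failures']
```

In a strict SBTM world the `overruns` list should always be empty, which is exactly why Rex’s “planned versus actual time” recommendation reads oddly: the timebox is the planned time, and the discipline lies in ending the session when it expires.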

Moving on from the myths, Rex talked about “reactive strategies” in general, suggesting they were suitable in agile environments but that we also need risk-based strategies and automation in addition to ET. He said that the reliance on skills and experience when using ET (in terms of the test basis and test oracle) mean that heuristics are a good way of triggering test ideas and he made the excellent point that all of our “traditional” test techniques still apply when using ET.

Rex’s conclusion was also sound, “I consider (the best practice of) ET to be essential but not sufficient by itself” and I have no issue with that (well, apart from his use of the term “best practice”) – and again don’t see any credible voices in the testing community arguing otherwise.

The last twenty minutes of the webinar was devoted to Q&A from both the online and live audience (the webinar was delivered in person at the STPCon conference). An interesting question from the live audience was “Has ET finally become embedded in the software testing lifecycle?” Rex responded that the “religious warfare… in the late 2000s/early 2010s has abated, some of the more obstreperous voices of that era have kinda taken their show off the road for various reasons and aren’t off stirring the pot as much”. This was presumably in reference to the somewhat heated debate going on in the context-driven testing community in that timeframe, some of which was unhelpful but much of which helped to shape clearer thinking around ET, SBTM and CDT in general, in my opinion. I wouldn’t describe it as “religious warfare”, though.

Rex also mentioned in response to this question that he actually now sees the opposite problem in the DevOps world, with “people running around saying automate everything” and the belief that automated tests by themselves are sufficient to decide when software is worthy of deployment to production. In another reference to Bolton/Bach, he argued that the “checking” and “testing” distinction was counterproductive in pointing out the fallacy of “automate everything”. I found this a little ironic since Rex constantly seeks to make the distinction between validation and verification, which is very close to the distinction that testing and checking seeks to draw (albeit in much more lay terms as far as I’m concerned). I’ve actually found the “checking” and “testing” terminology extremely helpful in making exactly the point that there is “testing” (as commonly understood by those outside of our profession) that cannot be automated, it’s a great conversation starter in this area for me.

One of Rex’s closing comments was again directed to the “schism” of the past with the CDT community, “I’m relieved that we aren’t still stuck in these incredibly tedious religious wars we had for that ten year period of time”.

There was a lot of good content in Rex’s webinar and nothing too controversial. His way of talking about ET (even the definition he chooses to use) is different to what I’m more familiar with from the CDT community, but it’s good to hear him referring to ET as an essential part of a testing strategy. I’ve certainly seen an increased willingness to use ET as the mainstay of so-called “manual” testing efforts, and putting structure around it using SBTM adds a lot of credibility. For the most part in my teams across Quest, we now consider test efforts to be ET only if they are performed within the framework of SBTM, so that we have the accountability and structure in place for the various stakeholders to treat this approach as credible and worthy of their investment.

So, finally getting to the reason for the title of this post: whether by Rex’s (I would argue unusual) definition, by the ISTQB’s definition, or by what I would argue is the more widely accepted definition (Bach/Bolton above), it seems to me that all testing is exploratory. I’m open to your arguments to change my mind!

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/. The one I refer to in this blog post has not appeared there as yet, but the audio is available via https://rbcs-us.com/resources/podcast/)

What becoming vegan taught me about software testing

While I’ve been in the software testing industry for twenty years, I’ve only been vegan for a few years. Veganism is a movement around minimizing harm to our fellow human and non-human animals with compassion at its core, and I think of it as being built on three main pillars, viz. ethics, the environment, and health.

I will discuss how being vegan has strong parallels with being a software tester in terms of the three pillars of veganism, the similar challenges involved in changing the mindsets of others – and also the need to frequently justify one’s own beliefs!

Be prepared to learn something about veganism and maybe having some long-held assumptions challenged, but also expect to take away some ideas for changing hearts and minds in your life as a tester.

Ethics

At its heart, veganism is an ethical principle designed to reduce – to the greatest extent possible – the suffering we cause to other human and non-human animals. It is unfortunate that the movement is seen as “radical” or “militant” when it essentially asks nothing more of us than to not cause unnecessary suffering. It feels to me like the animal rights movement is on the rise, just as the human rights movement was through the 1800s and 1900s.

With ethics being at the core of veganism, they also need to be at the core of software testing. Doing the right thing is the right thing to do and our job as testers often involves the relay of bad or unpopular information. As algorithms take over so many decision-making processes in the modern world, our dependency on the developers writing these algorithms to do the right thing only increases. The Volkswagen “dieselgate”[1] scandal is an example of where many people involved with the development of the “defeat devices” (possibly including both software developers and testers) didn’t speak up enough to stop these devices going into the public domain and potentially causing harm to so many people.

Just as we have now acknowledged as a society that humans have the right to live a life free from unnecessary suffering, organizations employing testers have an obligation to engage them in intellectually challenging work fit for humans to invest their valuable lives in. Putting intelligent humans into roles where they are expected to mimic the actions of machines (e.g. executing step-by-step detailed test cases) is, to me at least, unethical.

Environment

According to a recent study by the University of Oxford[2], one of the leading causes of greenhouse gas emissions is the animal agriculture industry, with the bulk of these emissions emanating from factory farms. This significant contribution to one of our planet’s most pressing problems is rarely acknowledged or publicly discussed, especially by governments. Our disregard for the environment is bad for the animals and bad for humans. It’s not too much of a stretch to view factory farming and so-called “factory testing” in similar lights. The mechanistic dogma of factory testing is bad for our business, yet it remains for many organizations their default position and unquestioned “best practice” approach to software testing, despite the evidence of its inefficiencies.

The animal agriculture business is also the largest contributor to deforestation, either to create pasture for grazing or to grow soybeans or grain to feed to animals. Animals are very inefficient in their conversion of food and water into the products eaten by humans (in terms of calories and protein[3]), so we could dramatically reduce the amount of deforestation while at the same time providing more food for humans to eat directly. The dominant ideology around consuming animal products again makes this a conversation rarely heard. We could similarly cut out the inefficiencies in a large proportion of the world’s software testing industry by removing the “middle man” of excessive documentation. I hoped that the rise in popularity of more agile approaches to software development would spell the end of heavyweight “testing processes” in which we spend more time writing about the testing we might do and less time actually interacting with the software to see what it does do and assessing whether that’s what we wanted. The dominant ideology around software testing – very successfully promoted by organizations like the ISTQB – still manages to push challenges to these approaches to the fringes, however, and talks of detractors as radical or even unprofessional.

Health

Credible studies[4] point to the fact that reducing animal products in the human diet (or, better, eliminating them altogether) leads to improved long-term health outcomes. A wholefood plant-based diet appears to be optimal for human health, both in terms of physical and mental wellbeing.

It’s good to see some important aspects of self-care entering the conversation in the IT community, such as practicing mindfulness and getting adequate sleep. The mythology around “all nighters” and hero efforts popularized in the IT community thankfully seems to be unravelling as we realize the importance of physical and mental wellbeing when it comes to our ability to perform well in our work settings. Testers have often borne the brunt of last-minute hero efforts to get releases out and I’m encouraged by the changes I see in our industry to a mindset of the whole development team being responsible for the quality of deliverables, a positive outcome from the move to more agile approaches perhaps.

The same old questions, time and time again

One of the challenges of being vegan is the fact that everyone suddenly becomes an expert on nutrition to explain why it is unhealthy to not consume animal products. The questions are surprisingly similar from many different people and can be seen in the many online materials from vegan bloggers and influencers. The most common question is probably “where do you get your protein?”, reflecting both a modern obsession over protein intake (it’s almost impossible to be protein deficient when consuming an average amount of calories) and also poor nutritional advice from doctors, government bodies, and mainstream media. It’s worth remembering that your typical GP receives only a few hours of nutritional training during their time at medical school, while government policy is heavily influenced by lobbying from big agriculture and mainstream media relies heavily on advertising revenues from the animal agriculture industry. The animals consumed by humans as meat get their protein from eating plants, just like humans can.

Another common question is “What about vitamin B12?” and, yes, most vegans will supplement with (vegan-sourced) B12. What most people don’t realize is that factory farmed animals are supplemented with B12 so vegans are simply cutting out the middle man (or, rather, middle animal). Animals are not some magic B12 producing machine, B12 comes naturally from bacteria in soil and hence the need for supplementation in animals now raised in such unnatural conditions.

The animal agriculture industry relies on this marketing and mythology, as well as the general lack of questioning about what is seen as the norm in consuming animal products. The same applies to the software testing industry where big players and repeated mythology have created such a dominant world view of testing that it goes unquestioned by so many. If you go against the grain in this industry (say, aligning yourself with the context-driven testing “school”, as I have chosen to do), you can expect some of these common questions:

  • How do you measure coverage?
  • How can you test without test cases?
  • What percentage of your testing is automated?
  • Why do you focus so much on semantics (like “testing” and “checking”)?

Software testing is also one of the few specialties I’ve seen in the IT industry in which people outside of the specialty believe and openly express that they know how to do it well, despite having little or no experience of actually performing good testing. The ongoing misapprehension that testing is somehow “easy” or lesser than other software development specialties is something we should all counter whenever we have the opportunity – software testing is a professional, cognitive activity that requires skills including – but not limited to – critical thinking, analysis, attention to detail, and the ability to both focus and defocus. Let’s not devalue ourselves as good testers by not speaking up for our craft! Knowing how to respond well to the common questions is a good place to start.

Hello old friend, confirmation bias!

When it comes to cognitive biases, one of the most common I’ve witnessed is “confirmation bias”, especially when talking about food. Confirmation bias is the tendency to search for, interpret, favour and recall information in a way that confirms your pre-existing beliefs or hypotheses. Talking specifically in the context of the food we eat, everyone is looking for good news about their bad habits so any “research” that suggests eating meat, eggs, dairy products, chocolate, etc. is actually good for you gets big press and plenty of clicks. (One trick is to “follow the money” when it comes to research articles like this, as funding from big agriculture invariably can be found somewhere behind them, in my experience.) It should come as no surprise that the overwhelming scientific evidence to the contrary doesn’t warrant the same attention!

I see the same confirmation bias being displayed by many in the testing community, with big differences of opinion between the so-called “schools of testing”[5] and a lack of willingness to even consider the ideas from one school within the others. Even when credible experience reports[6] have been published around the poor efficacy of a test case-based approach to testing, for example, there is no significant shift away from what I’d call “traditional” approaches to testing in many large organizations and testing outsourcing service providers.

To counter confirmation bias when trying to change the mindset of testers, it’s worth looking at the very different approaches taken by different activists in the vegan movement. I was first introduced to the Socratic Method when I took Michael Bolton’s Rapid Software Testing course back in 2007 and it’s been a powerful tool for fostering critical thinking in my work since then. Many vegan activists also use the Socratic Method to help non-vegans explore their logical inconsistencies, but their approach to using it varies widely. A well-known Australian activist, Joey Carbstrong[7], pulls no punches with his somewhat “in your face” style, whereas the high-profile UK activist Earthling Ed[8] uses a gentler approach to achieve similar results. These stylistic differences remind me of those I’ve personally experienced by attending Rapid Software Testing delivered by Michael Bolton and, later, by James Bach. I strongly believe in the power of the Socratic Method in terms of fostering critical thinking skills and it’s a powerful approach to use when confronted by those who doubt or disregard your preferred approach to testing based on little but their own confirmation bias.

Time for dessert!

Any belief or action that doesn’t conform to the mainstream narrative or paradigm causes people to become defensive. Just as humans consuming animal products is seen as natural, normal and necessary[9] (when it is demonstrably none of those), a departure from the norms of the well-peddled testing methodologies is likely to result in you being questioned, criticized and feeling the need to justify your own beliefs. I would encourage you to navigate your own path, be well informed and find approaches and techniques that work well in your context, then advocate for your good ideas via respectful dialogue and use of the Socratic Method.

And, yes, even vegans get to eat yummy desserts! I hope you’ve learned a little more about veganism and maybe I’ve helped to dispel a few myths around it – maybe try that vegan option the next time you go out for a meal or check out one of the many vegan activists[7,8,10,11] spreading the word about veganism.

References

[1] Volkswagen “dieselgate”: https://www.sbs.com.au/news/what-is-the-volkswagen-dieselgate-emissions-scandal

[2] “Reducing food’s environmental impacts through producers and consumers”, Science Volume 360, Issue 6392, 1st June 2018: https://science.sciencemag.org/content/360/6392/987

[3] “Energy and protein feed-to-food conversion efficiencies in the US and potential food security gains from dietary changes” (A Shepon, G Eshel, E Noor and R Milo), Environmental Research Letters Volume 11, Number 10, 4th October 2016: https://iopscience.iop.org/article/10.1088/1748-9326/11/10/105002

[4] Examples from the Physicians Committee for Responsible Medicine (US): https://www.pcrm.org/clinical-research

[5] “Four Schools of Testing” (Bret Pettichord), Workshop on Teaching Software Testing, Florida Tech, February 2003: http://www.testingeducation.org/conference/wtst_pettichord_FSofST2.pdf

[6] “Test Cases Are Not Testing” (James Bach & Aaron Hodder), Testing Trapeze magazine, February 2015: https://www.satisfice.com/download/test-cases-are-not-testing

[7] Joey Carbstrong https://www.youtube.com/channel/UCG6usHVNuRbexyisxE27nDw

[8] Earthling Ed https://www.youtube.com/channel/UCVRrGAcUc7cblUzOhI1KfFg/videos

[9] “Why We Love Dogs, Eat Pigs, and Wear Cows: An Introduction to Carnism” (Melanie Joy): https://www.carnism.org/

[10] That Vegan Couple https://www.youtube.com/channel/UCV8d4At_1yUUgpsnqyDchrw

[11] Mic The Vegan https://www.youtube.com/channel/UCGJq0eQZoFSwgcqgxIE9MHw/videos

Two decades at Quest Software

Today (2nd August 2019) marks twenty years since I first sat down at a desk at Quest Software in Melbourne as a “Senior Tester”.

I’d migrated from the UK just a few weeks earlier and arrived in Australia in the middle of the late 90s tech boom. The local broadsheet newspaper, The Age, had a separate weekly section, a hefty tome packed full of IT jobs. I sent my CV to many different recruitment companies advertising in the newspaper and started to get some interest. My scatter-gun approach was a response to the lack of opportunities for LISP developers (my previous skill from three years as a developer back in the UK, working on expert systems for IBM) but I did focus a little on openings for technical writers, believing I could string words together pretty well and had a decent grasp of technology.

One of the first interviews I secured for such a technical writing position was for a company I’d never heard of, Quest Software out in the Eastern suburbs of Melbourne (Ashburton, at that time). After some hasty company research, I remember catching a train there and following the recruiter’s directions to “take the staircase next to the bottle shop” to locate the Quest office (actually, one of two offices in the same street due to recent expansion). My interview would be with the head of the technical writing team and we started off with a chat over coffee in the kitchen. I didn’t even realize this was the interview, it was so relaxed and welcoming! At the end of the coffee/interview, he asked whether I’d also like to chat with the head of the testing team as she was looking for people too, so of course I took the opportunity to do so. This was again a very informal chat and I left the office with a technical writing task to complete. After completing the task, I was soon contacted to return to the Quest office to further my application for a software testing position, but not the technical writing one. A test case writing task formed part of this next slightly more formal interview, my first attempt at writing such a document! It was very shortly afterwards that the recruiter let me know I had an offer of a role as a “Senior Tester” and I couldn’t return the required paperwork fast enough – I’d found my first job in Australia!

I considered myself very fortunate to have secured a position so quickly after arriving into Australia. I was certainly lucky to find a great recruiter, Keith Phillips from Natural Solutions, and I recall visiting him in person for the first time after the deal was done with Quest, down at his office in South Melbourne. It turned out we had a common connection to the University of Wales in Aberystwyth, where I studied for both my undergraduate and doctoral degrees. We also studied in the same department (Mathematics) and, although Keith’s studies were some years before mine, many of the same department staff were still around during my time there as well. I believe Keith is still in the recruitment industry and I have fond memories of his kind, professional and unhurried approach to his work, something not common during my experiences with recruiters back then.

Back to 2nd August 1999, then, and my first day at the Quest office in Ashburton. Amidst the dotcom madness, Quest were growing rapidly and I was just one of many new starters coming through the door every week. We were sitting two to a desk for a while until we moved to bigger new digs in Camberwell, about three months after I joined. I enjoyed my time as a tester, slotting in well to a couple of different development teams and learning the ropes from other testers in the office. Being new to the testing game, I didn’t realize that we had a very “traditional” approach to testing in Quest at that time – I was part of an independent testing team under a Test Manager and spent a lot of my time writing and executing test cases, and producing lots of documentation (thanks, Rational Unified Process).

I was also learning the ropes of living in a new country and I’m indebted to my colleagues at the time for their patience and help in many aspects of me settling into a Melbourne life!

I worked across a few teams in my role as a “Senior Tester” from 1999 until 2004, when I was promoted to a “Test Team Lead” and given people management responsibility for the first time, leading a small group of testers as well as retaining hands-on testing commitments. I realize now that I was a classic “process cop” and quality gate fanatic, persisting with the very traditional ideas around testing and test management. This was an interesting and challenging time for me and, while I enjoyed some aspects of managing people, it wasn’t the part of my job I enjoyed most.

It was during my time as test lead that Quest ran the Rapid Software Testing course in-house with Michael Bolton, in our Ottawa office in 2007. It was a very long way to travel to attend this course, but it was truly career-changing for me and opened my eyes to a new world of what testing was for and how it could be done differently. I returned to work in Melbourne inspired to change the way we thought about testing at Quest and took every chance I could to spread the word about the great new ideas I’d been exposed to. Looking back on it now, I banged this drum pretty hard and was probably quite annoying – but challenging the status quo seemed like the right thing to do.

During a shift to adopting Scrum within some of the Melbourne teams and a move away from the independent test team, I saw a real opportunity to bring in new testing ideas from Rapid Software Testing and so, in 2008, a new position was created to enable me to focus on doing so, viz. “Test Architect”. Evangelizing the new ideas and approaches across the Melbourne teams was the main job here and the removal of people management responsibility gave me a welcome chance to focus on effecting change in our testing approach. I enjoyed this new role very much over the next five years, during which time we moved to Southbank and Quest Software was acquired by Dell to form part of their new Software business.

My role expanded in 2013 to provide test architectural guidance across all of the worldwide Information Management group as “Principal Test Architect”. One of the great benefits of this promotion was the chance to work closely with colleagues in other parts of the world and I became a very regular visitor to our office in China, helping the talented and enthusiastic young testers there. I also started my conference presentation journey in 2014, a massive step outside my comfort zone! While attending a testing peer conference in Sydney in 2013, I was fortunate to meet Rob Sabourin (who was acting as content owner for the event) and he encouraged me to share my story (of implementing session-based exploratory testing with the teams in China) with a much wider audience, leading to my first conference talk at Let’s Test in Sweden the following year. This started a journey of giving conference talks all over the world, another great set of experiences, and I appreciate the support I’ve had from Quest along the way in expanding the reach of my messages.

Dell sold off its software business in late 2016 and so I was again working for Quest but this time under its new owners, Francisco Partners.

My last promotion came in 2018, becoming “Director of Software Craft” to work across all of the Information Management business in helping to improve the way we develop, build and test our software. This continues to be both a challenging and rewarding role, in which I’m fortunate to work alongside supportive peers at the Director level as we strive for continuous improvement, not just in the way we test but the way we do software development.

My thanks go to the many great colleagues I’ve shared this journey with; some have gone on to other things, but a surprising number are still here with 20+ years of service. The chance to work with many of my colleagues on the ground across the world has been – and continues to be – a highlight of my job.

I’ve been fortunate to enjoy the support and encouragement of some excellent managers too, allowing me the freedom to grow, contribute to the testing community outside of Quest, and ultimately expand my purview across all of the Information Management business unit in my capacity as Director of Software Craft.

Little did I think on 2nd August 1999 that my first job in Australia would be the only one I’d know some twenty years later, but I consider myself very lucky to have found Quest and I’ve enjoyed learning & growing both personally & professionally alongside the company. My thanks to everyone along the way who’s made this two decade-long journey so memorable!

On AI

I’ve read a number of books on similar topics this year around artificial intelligence, machine learning, algorithms, etc. Coming to this topic with little in the way of prior knowledge, I feel like I’ve learned a great deal.

Our increasing reliance on decisions made by machines instead of humans is having significant – and sometimes truly frightening – consequences. Despite the supposed objectivity of algorithmic decision making, there is plenty of evidence of human biases encoded into these algorithms, and the proprietary nature of some of these systems means that many are left powerless in their search for explanations about the decisions these algorithms make.

Each of these books tackles the subject from a different perspective and I recommend them all.

It feels like “AI in testing” is becoming a thing, with my feeds populated with articles, blog posts and ads about the increasingly large role AI is playing – or will play – in software testing. It strikes me that we would be wise to learn from the mistakes discussed in these books before trying to fully replace human decision making in testing with decisions made by machines. The biases encoded into these algorithms should also be acknowledged: confirmatory biases seem likely to creep into testing contexts too. We neglect the power of human ingenuity and exploration at our peril when it comes to delivering software that both solves problems for and makes sense to (dare I say “delights”) our customers.

“Range: Why Generalists Triumph in a Specialized World” (David Epstein)

I’m a sucker for the airport bookshop and I’ve blogged before on books acquired from these venerable establishments. On a recent trip to the US, a book stood out to me as I browsed, because of its subtitle: “Why generalists triumph in a specialized world”. It immediately triggered memories of the “generalizing specialists” idea that seemed so popular in the agile community maybe ten years ago (but hasn’t been so hot recently, at least not in what I’ve been reading around agile). And so it was that Range: Why Generalists Triumph in a Specialized World by David Epstein accompanied me on my travels, giving me a fascinating read along the way.

David’s opening gambit is a comparison of the journeys of two well-known sportsmen, viz. Roger Federer and Tiger Woods. While Woods was singularly focused on becoming excellent at golf from a very young age, Federer tried many different sports before eventually becoming the best male tennis player the world has ever seen. While Woods went for early specialization, Federer opted for breadth and a range of sports before realizing where he truly wanted to specialize and excel. David notes:

The challenge we all face is how to maintain the benefits of breadth, diverse experience, interdisciplinary thinking, and delayed concentration in a world that increasingly incentivizes, even demands, hyperspecialization. While it is undoubtedly true that there are areas that require individuals with Tiger’s precocity and clarity of purpose, as complexity increases – and technology spins the world into vaster webs of interconnected systems in which each individual sees only a small part – we also need more Rogers: people who start broad and embrace diverse experiences and perspectives while they progress. People with range.

Chapter 1 – “The Cult of the Head Start” – uses the example of chess grandmasters, a domain where, like golf, early specialization works well. David makes an interesting observation here around AI, a topic which seems to be finding its way into more and more conversations in software testing, and the last line of the following quote applies well, in my opinion, to the very real challenges involved in thinking about AI as a replacement for human testers:

The progress of AI in the closed and orderly world of chess, with instant feedback and bottomless data, has been exponential. In the rule-bound but messier world of driving, AI has made tremendous progress, but challenges remain. In a truly open-world problem devoid of rigid rules and reams of perfect historical data, AI has been disastrous. IBM’s Watson destroyed at Jeopardy! and was subsequently pitched as a revolution in cancer care, where it flopped so spectacularly that several AI experts told me they worried its reputation would taint AI research in health-related fields. As one oncologist put it, “The difference between winning at Jeopardy! and curing all cancer is that we know the answer to Jeopardy! questions.” With cancer, we’re still working on posing the right questions in the first place.

In Chapter 2 – “How the Wicked World Was Made” – David shares some interesting stories around IQ testing and notes that:

…society, and particularly higher education, has responded to the broadening of the mind by pushing specialization, rather than focusing early training on conceptual, transferable knowledge.

I see the same pattern in software testing, with people choosing to specialize in one particular automation tool over learning more broadly about good testing, risk analysis, critical thinking and so on, skills that could be applied more generally (and are also less prone to redundancy as technology changes). In closing out the chapter, David makes the following observation which again rings very true in testing:

The more constrained and repetitive a challenge, the more likely it will be automated, while great rewards will accrue to those who can take conceptual knowledge from one problem or domain and apply it in an entirely new one.

A fascinating – and new to me – story about early Venetian music opens Chapter 3 – “When Less of the Same Is More”. In discussing how musicians learn and apply their craft across genres, his conclusion again makes for poignant reading for testers, especially those with a desire to become excellent exploratory testers:

[This] is in line with a classic research finding that is not specific to music: breadth of training predicts breadth of transfer. That is, the more contexts in which something is learned, the more the learner creates abstract models, and the less they rely on any particular example. Learners become better at applying their knowledge to a situation they’ve never seen before, which is the essence of creativity.

Chapter 4’s title is a nod to Daniel Kahneman, “Learning, Fast and Slow”, and the chapter looks at the difficulty of making teaching and training transfer more broadly than the case directly under instruction, using examples from maths students and naval officers:

Knowledge with enduring utility must be very flexible, composed of mental schemes that can be matched to new problems. The virtual naval officers in the air defense simulation and the math students who engaged in interleaved practice were learning to recognize deep structural similarities in types of problems. They could not rely on the same type of problem repeating, so they had to identify underlying conceptual connections in simulated battle threats, or math problems, that had never actually been seen before. They then matched a strategy to each new problem. When a knowledge structure is so flexible that it can be applied effectively even in new domains or extremely novel situations, it is called “far transfer.”

I think we face similar challenges in software testing. We’re usually testing something different from what we’ve tested before and (hopefully) not testing the same thing over and over again. Thinking about how we’ve faced similar testing challenges in the past and applying appropriate learnings from those to new testing situations is a key skill and helps us to develop our toolbox of ideas, strategies and other tools from which to draw when faced with a new situation. This “range” and ability to make conceptual connections is also very important in performing good risk analysis, another key testing skill.

In Chapter 5 – “Thinking Outside Experience” – David tells the story of Kepler and how he drew new insights about astronomy from analogies to very disparate areas, leading to his invention of astrophysics. He was a fastidious note taker too, just like a good tester:

Before he began his tortuous march of analogies toward reimagining the universe, Kepler had to get very confused on his homework. Unlike Galileo and Isaac Newton, he documented his confusion. “What matters to me,” Kepler wrote, “is not merely to impart to the reader what I have to say, but above all to convey to him the reasons, subterfuges, and lucky hazards which led to my discoveries.”

Chapter 6 – “The Trouble with Too Much Grit” – starts by telling the story of Van Gogh, noting:

It would be easy enough to cherry-pick stories of exceptional late developers overcoming the odds. But they aren’t exceptions by virtue of their late starts, and those late starts did not stack the odds against them. Their late starts were integral to their eventual success.

David also shares a story about a major retention issue experienced by a select part of the US Army, concluding:

In the industrial era, or the “company man era”…”firms were highly specialized,” with employees generally tackling the same suite of challenges repeatedly. Both the culture of the time – pensions were pervasive and job switching might be viewed as disloyal – and specialization were barriers to worker mobility outside of the company. Plus, there was little incentive for companies to recruit from outside when employees regularly faced kind learning environments, the type where repetitive experience alone leads to improvement. By the 1980s, corporate culture was changing. The knowledge economy created “overwhelming demand for…employees with talents for conceptualization and knowledge creation.” Broad conceptual skills now helped in an array of jobs, and suddenly control over career trajectory shifted from the employer, who looked inward at a ladder of opportunity, to the employee, who peered out at a vast web of possibility. In the private sector, an efficient talent market rapidly emerged as workers shuffled around in pursuit of match quality. [the degree of fit between the work someone does and who they are – their abilities and proclivities] While the world changed, the Army stuck with the industrial-era ladder.

In Chapter 7 – “Flirting with Your Possible Selves” – David shares the amazing career story of Frances Hesselbein as an example of changing tack many times rather than choosing an early specialization and sticking with it, and the many successes it can yield along the journey. He cites:

[computational neuroscientist Ogi Ogas] uses the shorthand “standardization covenant” for the cultural notion that it is rational to trade a winding path of self-exploration for a rigid goal with a head start because it ensures stability. “The people we study who are fulfilled do pursue a long-term goal, but they only formulate it after a period of discovery,” he told me. “Obviously, there’s nothing wrong with getting a law or medical degree or PhD. But it’s actually riskier to make that commitment before you know how it fits you. And don’t consider the path fixed. People realize things about themselves halfway through medical school.” Charles Darwin for example.

Chapter 8 – “The Outsider Advantage” – talks about the benefits of bringing diverse skills and experiences to bear in problem solving:

[Alph] Bingham had noticed that established companies tended to approach problems with so-called local search, that is, using specialists from a single domain, and trying solutions that worked before. Meanwhile, his invitation to outsiders worked so well that it was spun off as an entirely different company. Named InnoCentive, it facilitates entities in any field acting as “seekers” paying to post “challenges” and rewards for outside “solvers.” A little more than one-third of challenges were completely solved, a remarkable portion given that InnoCentive selected for problems that had stumped the specialists who posted them. Along the way, InnoCentive realized it could help seekers tailor their posts to make a solution more likely. The trick: to frame the challenge so that it attracted a diverse array of solvers. The more likely a challenge was to appeal not just to scientists but also to attorneys and dentists and mechanics, the more likely it was to be solved.

Bingham calls it “outside-in” thinking: finding solutions in experiences far outside of focused training for the problem itself. History is littered with world-changing examples.

This sounds like the overused “think outside the box” concept, but there’s a lot of validity here; the fact is that InnoCentive works:

…as specialists become more narrowly focused, “the box” is more like Russian nesting dolls. Specialists divide into subspecialties, which soon divide into sub-subspecialties. Even if they get outside the small doll, they may get stuck inside the next, slightly larger one. 

In Chapter 9 – “Lateral Thinking with Withered Technology” – David tells the fascinating story of Nintendo and how the Game Boy was such a huge success although built using older (“withered”) technology. Out of this story, he mentions the idea of “frogs” and “birds” from physicist and mathematician, Freeman Dyson:

…Dyson styled it this way: we need both focused frogs and visionary birds. “Birds fly high in the air and survey broad vistas of mathematics out to the far horizon,” Dyson wrote in 2009. “They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and only see the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time.” As a mathematician, Dyson labeled himself a frog but contended, “It is stupid to claim that birds are better than frogs because they see farther, or that frogs are better than birds because they see deeper.” The world, he wrote, is both broad and deep. “We need birds and frogs working together to explore it.” Dyson’s concern was that science is increasingly overflowing with frogs, trained only in a narrow specialty and unable to change as science itself does. “This is a hazardous situation,” he warned, “for the young people and also for the future of science.”

I like this frog and bird analogy and can picture examples from working with teams where excellent testing arose from a combination of frogs and birds working together to produce the kind of product information neither would have provided alone.

David makes the observation that communication technology and our increasingly easy access to vast amounts of information is also playing a part in reducing our need for specialists:

…narrowly focused specialists in technical fields…are still absolutely critical, it’s just that their work is widely accessible, so fewer suffice

An interesting study on patents further reinforces the benefits of “range”:

In low-uncertainty domains, teams of specialists were more likely to author useful patents. In high-uncertainty domains – where the fruitful questions themselves were less obvious – teams that included individuals who had worked on a wide variety of technologies were more likely to make a splash. The higher the domain uncertainty, the more important it was to have a high-breadth team member… When the going got uncertain, breadth made the difference.

In Chapter 10 – “Fooled by Expertise” – David looks at how poorly “experts” are able to predict the future and talks about work from psychologist and political scientist, Philip Tetlock:

Tetlock conferred nicknames… that became famous throughout the psychology and intelligence-gathering communities: the narrow view hedgehogs, who “know one big thing” and the integrator foxes, who “know many little things.”

Hedgehog experts were deep but narrow. Some had spent their careers studying a single problem. Like [Paul] Ehrlich and [Julian] Simon, they fashioned tidy theories of how the world works through the single lens of their specialty, and then bent every event to fit them. The hedgehogs, according to Tetlock, “toil devotedly” within one tradition of their specialty, “and reach for formulaic solutions to ill-defined problems.” Outcomes did not matter; they were proven right by both successes and failures, and burrowed further into their ideas. It made them outstanding at predicting the past, but dart-throwing chimps at predicting the future. The foxes, meanwhile, “draw from an eclectic array of traditions, and accept ambiguity and contradiction,” Tetlock wrote. Where hedgehogs represented narrowness, foxes ranged outside a single discipline or theory and embodied breadth.

David’s observations on this later in the chapter reminded me of some testers I’ve worked with over the years who are unwilling to see beyond the binary “pass” or “fail” outcome of a test:

Beneath complexity, hedgehogs tend to see simple, deterministic rules of cause and effect framed by their area of expertise, like repeating patterns on a chessboard. Foxes see complexity in what others mistake for simple cause and effect. They understand that most cause and effect relationships are probabilistic, not deterministic. There are unknowns, and luck, and even when history apparently repeats, it does not do so precisely. They recognize that they are operating in the very definition of a wicked learning environment, where it can be very hard to learn, from either wins or losses.

Chapter 11 – “Learning to Drop Your Familiar Tools” – starts off by telling the story of the Challenger space shuttle disaster and how, even though some people knew about the potential for the problem that caused the disaster, existing practices and culture within NASA got in the way of that knowledge being heard. The “Carter Racing” Harvard Business School case study mimics the Challenger disaster, but the participants have to make a race/no race decision on whether to run a racing car with some known potential problems. Part of this story reminded me very much of the infamous Dice Game so favoured by the context-driven testing community:

“Okay…here comes a quantitative question,” the professor says. “How many times did I say yesterday if you want additional information let me know?” Muffled gasps spread across the room. “Four times,” the professor answers himself. “Four times I said if you want additional information let me know.” Not one student asked for the missing data [they needed to make a good decision].

A fascinating story about the behaviour of firefighters in bushfire situations was very revealing, with many of those who perish being found weighed down with heavy equipment when they could have ditched their tools and probably run to safety:

Rather than adapting to unfamiliar situations, whether airline accidents or fire tragedies, [psychologist and organizational behaviour expert Karl] Weick saw that experienced groups became rigid under pressure and “regress to what they know best.” They behaved like a collective hedgehog, bending an unfamiliar situation to a familiar comfort zone, as if trying to will it to become something they had actually experienced before. For wildland firefighters, their tools are what they know best. “Firefighting tools define the firefighter’s group membership, they are the firefighter’s reason for being deployed in the first place,” Weick wrote. “Given the central role of tools in defining the essence of a firefighter, it’s not surprising that dropping one’s tools creates an existential crisis.” As Maclean succinctly put it, “When a firefighter is told to drop his firefighting tools, he is told to forget he is a firefighter.”

This reminded me of some testers who hang on to test management tools or a particular automation tool as though it defines them and their work. We should be thinking more broadly and using tools to aid us, not define us:

There are fundamentals – scales and chords – that every member must overlearn, but those are just tools for sensemaking in a dynamic environment. There are no tools that cannot be dropped, reimagined, or repurposed in order to navigate an unfamiliar challenge. Even the most sacred tools. Even the tools so taken for granted they become invisible.

Chapter 12 – “Deliberate Amateurs” – wraps up the main content of the book. I love this idea:

They [amateurs] embrace what Max Delbruck, a Nobel laureate who studied the intersection of physics and biology, called “the principle of limited sloppiness.” Be careful not to be too careful, Delbruck warned, or you will unconsciously limit your exploration.

This note on the global financial crisis rings true in testing also: all too often we see testing compartmentalized and systemic issues going undetected:

While I was researching this book, an official with the US Securities and Exchange Commission learned I was writing about specialization and contacted me to make sure I knew that specialization had played a critical role in the 2008 global financial crisis. “Insurance regulators regulated insurance, bank regulators regulated banks, securities regulators regulated securities, and consumer regulators regulated consumers,” the official told me. “But the provision of credit goes across all those markets. So we specialized products, we specialized regulation, and the question is, ‘Who looks across those markets?’ The specialized approach to regulation missed systemic issues.”

We can also learn something from this observation about team structures, especially in the world of microservices and so on:

In professional networks that acted as fertile soil for successful groups, individuals moved easily between teams, crossing organizational and disciplinary boundaries and finding new collaborators. Networks that spawned unsuccessful teams, conversely, were broken into small, isolated clusters in which the same people collaborated over and over. Efficient and comfortable, perhaps, but apparently not a creative engine.

In his Conclusion, David offers some good advice:

Approach your personal voyage and projects like Michelangelo approached a block of marble, willing to learn and adjust as you go, and even to abandon a previous goal and change directions entirely should the need arise. Research on creators in domains from technological innovation to comic books shows that a diverse group of specialists cannot fully replace the contributions of broad individuals. Even when you move on from an area of work or an entire domain, that experience is not wasted.

Finally, remember that there is nothing inherently wrong with specialization. We all specialize to one degree or another, at some point or other.

I thoroughly enjoyed reading “Range”. David’s easy writing style illustrated his points with good stories and examples, making this a very accessible and comprehensible book. There were many connections to what we see in the world of software testing; hopefully I’ve managed to illuminate some of these in this post.

This is recommended reading for anyone involved in technology, and testers in particular will, I think, gain a lot of insight from this book. And, remember, “Be careful not to be too careful”!

“Essentialism: The Disciplined Pursuit of Less” (Greg McKeown)

After seeing several recommendations for the book Essentialism: The Disciplined Pursuit of Less, I borrowed a copy from the Melbourne Library Service recently – and then read the book from cover to cover in just a couple of sittings. This is a sign of how much I enjoyed reading it, and the messages in the book resonated strongly with me on both a personal and professional level. The parallels between what Greg McKeown writes about here and the Agile movement in software development are also (perhaps surprisingly) strong, and this helped make the book even more contextually significant for me.

The fundamental idea here is “Less but better.”

The way of the Essentialist is the relentless pursuit of less but better… Essentialism is not about how to get more things done; it’s about how to get the right things done. It doesn’t mean just doing less for the sake of less either. It is about making the wisest possible investment of your time and energy in order to operate at your highest point of contribution by doing only what is essential.

Greg argues that we have forgotten our ability to choose and feel compelled to “do it all” and say yes to everything:

The ability to choose cannot be taken away or even given away – it can only be forgotten… When we forget our ability to choose, we learn to be helpless. Drip by drip we allow our power to be taken away until we end up becoming a function of other people’s choices – or even a function of our own past choices.

It’s all too easy in our busy, hyper-connected lives to think almost everything is essential and that the opportunities that come our way are almost equal. But the Essentialist thinks almost everything is non-essential and “distinguishes the vital few from the trivial many.”

Greg makes an important point about trade-offs, something again that it’s all too easy to forget and instead over-commit and try to do everything asked of us or take on all the opportunities coming our way:

Essentialists see trade-offs as an inherent part of life, not as an inherently negative part of life. Instead of asking “What do I have to give up?”, they ask “What do I want to go big on?” The cumulative impact of this small change in thinking can be profound.

The trap of "busyness" means we don't spend the time we should reflecting on what's really important.

Essentialists spend as much time as possible exploring, listening, debating, questioning, and thinking. But their exploration is not an end in itself. The purpose of their exploration is to discern the vital few from the trivial many.

The topic of sleep comes next and this seems to be a hot topic right now. A non-Essentialist thinks "One hour less of sleep equals one more hour of productivity" while the Essentialist thinks "One more hour of sleep equals several more hours of much higher productivity." Protecting the asset of sleep is increasingly being shown to be important, not only for productivity but also for mental health.

Our highest priority is to protect our ability to prioritize.

Prioritizing which opportunities to take on is a challenge for many of us; I've certainly taken on too much at times. Greg's advice when selecting opportunities is simple:

If it isn’t a clear yes, then it’s a clear no

Of course, actually saying "no" can be difficult and a non-Essentialist will avoid doing so, dodging the social awkwardness and pressure by saying "yes" to everything. An Essentialist, meanwhile, "dares to say no firmly, resolutely and gracefully" and says "yes" only to the things that really matter. This feels like great advice, and thankfully Greg offers a few tips for how to say "no" gracefully:

  • Separate the decision from the relationship
  • Saying “no” gracefully doesn’t have to mean using the word no
  • Focus on the trade-off
  • Remind yourself that everyone is selling something
  • Make your peace with the fact that saying “no” often requires trading popularity for respect
  • Remember that a clear "no" can be more graceful than a vague or non-committal "yes"

The section on subtracting (removing obstacles to bring forth more) resonated strongly with my experiences in software development:

Essentialists don’t default to Band-Aid solutions. Instead of looking for the most obvious or immediate obstacles, they look for the ones slowing down progress. They ask “What is getting in the way of achieving what is essential?” While the non-Essentialist is busy applying more and more pressure and piling on more and more solutions, the Essentialist simply makes a one-time investment in removing obstacles. This approach goes beyond just solving problems, it’s a method of reducing your efforts to maximize your results.

Similarly when looking at progress, there are obvious similarities with the way agile teams think and work:

A non-Essentialist starts with a big goal and gets small results and they go for the flashiest wins. An Essentialist starts small and gets big results and they celebrate small acts of progress.

The benefits of routine are also highlighted, for “without routine, the pull of non-essential distractions will overpower us” and I see the value in the routines of Scrum, for example, as a way of keeping distractions at bay and helping team execution appear more effortless.

This relatively short book is packed with great stories and useful takeaways. As we lead ever more connected and busy lives, with the division between work and not-work so blurred for so many of us, the ideas in this book offer practical ways to focus on what really matters. I'm certainly motivated to now focus on a smaller number of projects, especially outside of work. That's a decision I'd already taken before reading this book, but reading it validated the decision as well as giving me good ways of dealing with whatever opportunities may arise and truly prioritizing the ones that matter.