Two decades at Quest Software

Today (2nd August 2019) marks twenty years since I first sat down at a desk at Quest Software in Melbourne as a “Senior Tester”.

I’d migrated from the UK just a few weeks earlier and arrived in Australia in the middle of the late 90s tech boom. The local broadsheet newspaper, The Age, had a weekly IT section, a hefty tome packed full of IT jobs. I sent my CV to many of the recruitment companies advertising in the newspaper and started to get some interest. My scattergun approach was a response to the lack of opportunities for LISP developers (my previous skill from three years as a developer back in the UK, working on expert systems for IBM), but I did focus a little on openings for technical writers, believing I could string words together pretty well and had a decent grasp of technology.

One of the first interviews I secured for such a technical writing position was for a company I’d never heard of, Quest Software out in the Eastern suburbs of Melbourne (Ashburton, at that time). After some hasty company research, I remember catching a train there and following the recruiter’s directions to “take the staircase next to the bottle shop” to locate the Quest office (actually, one of two offices in the same street due to recent expansion). My interview would be with the head of the technical writing team and we started off with a chat over coffee in the kitchen. I didn’t even realize this was the interview, it was so relaxed and welcoming! At the end of the coffee/interview, he asked whether I’d also like to chat with the head of the testing team as she was looking for people too, so of course I took the opportunity to do so. This was again a very informal chat and I left the office with a technical writing task to complete. After completing the task, I was soon contacted to return to the Quest office to further my application for a software testing position, but not the technical writing one. A test case writing task formed part of this next slightly more formal interview, my first attempt at writing such a document! It was very shortly afterwards that the recruiter let me know I had an offer of a role as a “Senior Tester” and I couldn’t return the required paperwork fast enough – I’d found my first job in Australia!

I considered myself very fortunate to have secured a position so quickly after arriving in Australia. I was certainly lucky to find a great recruiter, Keith Phillips from Natural Solutions, and I recall visiting him in person for the first time after the deal was done with Quest, down at his office in South Melbourne. It turned out we had a common connection to the University of Wales in Aberystwyth, where I studied for both my undergraduate and doctoral degrees. We also studied in the same department (Mathematics) and, although Keith’s studies were some years before mine, many of the same department staff were still around during my time there as well. I believe Keith is still in the recruitment industry and I have fond memories of his kind, professional and unhurried approach to his work, something not common in my experiences with recruiters back then.

Back to 2nd August 1999, then, and my first day at the Quest office in Ashburton. Amidst the dotcom madness, Quest were growing rapidly and I was just one of many new starters coming through the door every week. We sat two to a desk for a while until we moved to bigger new digs in Camberwell, about three months after I joined. I enjoyed my time as a tester, slotting in well to a couple of different development teams and learning the ropes from the other testers in the office. Being new to the testing game, I didn’t realize that we had a very “traditional” approach to testing in Quest at that time – I was part of an independent testing team under a Test Manager and spent a lot of my time writing and executing test cases, and producing lots of documentation (thanks, Rational Unified Process).

I was also learning the ropes of living in a new country and I’m indebted to my colleagues at the time for their patience and help in many aspects of me settling into a Melbourne life!

I worked across a few teams in my role as a “Senior Tester” from 1999 until 2004, when I was promoted to “Test Team Lead” and given people management responsibility for the first time, leading a small group of testers while retaining hands-on testing commitments. I realize now that I was a classic “process cop” and quality gate fanatic, persisting with very traditional ideas around testing and test management. This was an interesting and challenging time for me and, while managing people had its rewards, it was not the part of the job I enjoyed most.

It was during my time as test lead that Quest ran the Rapid Software Testing course in-house with Michael Bolton, in our Ottawa office in 2007. It was a very long way to travel to attend this course, but it was truly career-changing for me and opened my eyes to a new world of what testing was for and how it could be done differently. I returned to work in Melbourne inspired to change the way we thought about testing at Quest and took every chance I could to spread the word about the great new ideas I’d been exposed to. Looking back on it now, I banged this drum pretty hard and was probably quite annoying – but challenging the status quo seemed like the right thing to do.

During a shift to adopting Scrum within some of the Melbourne teams and a move away from the independent test team, I really saw an opportunity to bring in new testing ideas from Rapid Software Testing and so, in 2008, a new position was created to enable me to focus on doing so, viz. “Test Architect”. Evangelizing the new ideas and approaches across the Melbourne teams was the main job here and the removal of people management responsibility gave me a welcome chance to focus on effecting change in our testing approach. I enjoyed this new role very much over the next five years, during which time we moved to Southbank and Quest Software was acquired by Dell to form part of their new Software business.

My role expanded in 2013 to provide test architectural guidance across all of the worldwide Information Management group as “Principal Test Architect”. One of the great benefits of this promotion was the chance to work closely with colleagues in other parts of the world, and I became a very regular visitor to our office in China, helping the talented and enthusiastic young testers there. I also started my conference presentation journey in 2014, a massive step outside my comfort zone! While attending a testing peer conference in Sydney in 2013, I was fortunate to meet Rob Sabourin (who was acting as content owner for the event) and he encouraged me to share my story (of implementing session-based exploratory testing with the teams in China) with a much wider audience, leading to my first conference talk at Let’s Test in Sweden the following year. This started a journey of giving conference talks all over the world – another great set of experiences – and I appreciate the support I’ve had from Quest along the way in expanding the reach of my messages.

Dell sold off its software business in late 2016 and so I was again working for Quest but this time under its new owners, Francisco Partners.

My most recent promotion came in 2018, when I became “Director of Software Craft”, working across all of the Information Management business to help improve the way we develop, build and test our software. This continues to be both a challenging and rewarding role, in which I’m fortunate to work alongside supportive peers at the Director level as we strive for continuous improvement, not just in the way we test but in the way we do software development.

My thanks go to the many great colleagues I’ve shared this journey with; some have gone on to other things, but a surprising number are still here with 20+ years of service. The chance to work with many of my colleagues on the ground across the world has been – and continues to be – a highlight of my job.

I’ve been fortunate to enjoy the support and encouragement of some excellent managers too, allowing me the freedom to grow, contribute to the testing community outside of Quest, and ultimately expand my purview across all of the Information Management business unit in my capacity as Director of Software Craft.

Little did I think on 2nd August 1999 that my first job in Australia would be the only one I’d know some twenty years later, but I consider myself very lucky to have found Quest and I’ve enjoyed learning & growing both personally & professionally alongside the company. My thanks to everyone along the way who’s made this two decade-long journey so memorable!


On AI

I’ve read a number of books on similar topics this year around artificial intelligence, machine learning, algorithms, etc. Coming to this topic with little in the way of prior knowledge, I feel like I’ve learned a great deal.

Our increasing reliance on decisions made by machines instead of humans is having significant – and sometimes truly frightening – consequences. Despite the supposed objectivity of algorithmic decision making, there is plenty of evidence of human biases encoded into these algorithms, and the proprietary nature of some of these systems means that many are left powerless in their search for explanations about the decisions these algorithms make.

Each of these books tackles the subject from a different perspective and I recommend them all:

It feels like “AI in testing” is becoming a thing, with my feeds populated with articles, blog posts and ads about the increasingly large role AI is playing – or will play – in software testing. It strikes me that we would be wise to learn from the mistakes discussed in these books before trying to fully replace human decision making in testing with decisions made by machines. The biases encoded into these algorithms should also be acknowledged – it seems likely that confirmatory biases will find their way into testing too. We neglect the power of human ingenuity and exploration at our peril when it comes to delivering software that both solves problems for and makes sense to (dare I say “delights”) our customers.

“Range: Why Generalists Triumph in a Specialized World” (David Epstein)

I’m a sucker for the airport bookshop and I’ve blogged before about books acquired from these venerable establishments. On a recent trip to the US, a book stood out to me as I browsed, because of its subtitle: “Why generalists triumph in a specialized world”. It immediately triggered memories of the “generalizing specialists” idea that seemed so popular in the agile community maybe ten years ago (but hasn’t been so hot recently, at least not in what I’ve been reading around agile). And so it was that Range: Why Generalists Triumph in a Specialized World by David Epstein accompanied me on my travels, giving me a fascinating read along the way.

David’s opening gambit is a comparison of the journeys of two well-known sportsmen, viz. Roger Federer and Tiger Woods. While Woods was singularly focused on becoming excellent at golf from a very young age, Federer tried many different sports before eventually becoming the best male tennis player the world has ever seen. Woods went for early specialization; Federer opted for breadth and a range of sports before realizing where he truly wanted to specialize and excel. David notes:

The challenge we all face is how to maintain the benefits of breadth, diverse experience, interdisciplinary thinking, and delayed concentration in a world that increasingly incentivizes, even demands, hyperspecialization. While it is undoubtedly true that there are areas that require individuals with Tiger’s precocity and clarity of purpose, as complexity increases – and technology spins the world into vaster webs of interconnected systems in which each individual sees only a small part – we also need more Rogers: people who start broad and embrace diverse experiences and perspectives while they progress. People with range.

Chapter 1 – “The Cult of the Head Start” – uses the example of chess grand masters, a domain which, like golf, rewards early specialization. David makes an interesting observation here around AI, a topic which seems to be finding its way into more and more conversations in software testing, and the last line of this quote from the chapter applies well, in my opinion, to the very real challenges involved in thinking about AI as a replacement for human testers:

The progress of AI in the closed and orderly world of chess, with instant feedback and bottomless data, has been exponential. In the rule-bound but messier world of driving, AI has made tremendous progress, but challenges remain. In a truly open-world problem devoid of rigid rules and reams of perfect historical data, AI has been disastrous. IBM’s Watson destroyed at Jeopardy! and was subsequently pitched as a revolution in cancer care, where it flopped so spectacularly that several AI experts told me they worried its reputation would taint AI research in health-related fields. As one oncologist put it, “The difference between winning at Jeopardy! and curing all cancer is that we know the answer to Jeopardy! questions.” With cancer, we’re still working on posing the right questions in the first place.

In Chapter 2 – “How the Wicked World Was Made” – David shares some interesting stories around IQ testing and notes that:

…society, and particularly higher education, has responded to the broadening of the mind by pushing specialization, rather than focusing early training on conceptual, transferable knowledge.

I see the same pattern in software testing, with people choosing to specialize in one particular automation tool over learning more broadly about good testing, risk analysis, critical thinking and so on, skills that could be applied more generally (and are also less prone to redundancy as technology changes). In closing out the chapter, David makes the following observation which again rings very true in testing:

The more constrained and repetitive a challenge, the more likely it will be automated, while great rewards will accrue to those who can take conceptual knowledge from one problem or domain and apply it in an entirely new one.

A fascinating – and new to me – story about early Venetian music opens Chapter 3 – “When Less of the Same Is More”. In discussing how musicians learn and apply their skills across genres, his conclusion again makes for poignant reading for testers, especially those with a desire to become excellent exploratory testers:

[This] is in line with a classic research finding that is not specific to music: breadth of training predicts breadth of transfer. That is, the more contexts in which something is learned, the more the learner creates abstract models, and the less they rely on any particular example. Learners become better at applying their knowledge to a situation they’ve never seen before, which is the essence of creativity.

Chapter 4 – “Learning, Fast and Slow” – is a nod to Daniel Kahneman and looks at the difficulty of teaching and training in ways that apply more broadly than the case directly under instruction, using examples from maths students and naval officers:

Knowledge with enduring utility must be very flexible, composed of mental schemes that can be matched to new problems. The virtual naval officers in the air defense simulation and the math students who engaged in interleaved practice were learning to recognize deep structural similarities in types of problems. They could not rely on the same type of problem repeating, so they had to identify underlying conceptual connections in simulated battle threats, or math problems, that had never actually been seen before. They then matched a strategy to each new problem. When a knowledge structure is so flexible that it can be applied effectively even in new domains or extremely novel situations, it is called “far transfer.”

I think we face similar challenges in software testing. We’re usually testing something different from what we’ve tested before; we’re generally not testing the same thing over and over again (hopefully). Thinking about how we’ve faced similar testing challenges in the past and applying appropriate learnings from those to new testing situations is a key skill, and it helps us develop the toolbox of ideas, strategies and other tools from which to draw when faced with a new situation. This “range” and ability to make conceptual connections is also very important in performing good risk analysis, another key testing skill.

In Chapter 5 – “Thinking Outside Experience” – David tells the story of Kepler and how he drew new information about astronomy by using analogies from very disparate areas, leading to his invention of astrophysics. He was a fastidious note taker too, just like a good tester:

Before he began his tortuous march of analogies toward reimagining the universe, Kepler had to get very confused on his homework. Unlike Galileo and Isaac Newton, he documented his confusion. “What matters to me,” Kepler wrote, “is not merely to impart to the reader what I have to say, but above all to convey to him the reasons, subterfuges, and lucky hazards which led to my discoveries.”

Chapter 6 – “The Trouble with Too Much Grit” – starts by telling the story of Van Gogh, noting:

It would be easy enough to cherry-pick stories of exceptional late developers overcoming the odds. But they aren’t exceptions by virtue of their late starts, and those late starts did not stack the odds against them. Their late starts were integral to their eventual success.

David also shares a story about a major retention issue experienced by a select part of the US Army, concluding:

In the industrial era, or the “company man era”…”firms were highly specialized,” with employees generally tackling the same suite of challenges repeatedly. Both the culture of the time – pensions were pervasive and job switching might be viewed as disloyal – and specialization were barriers to worker mobility outside of the company. Plus, there was little incentive for companies to recruit from outside when employees regularly faced kind learning environments, the type where repetitive experience alone leads to improvement. By the 1980s, corporate culture was changing. The knowledge economy created “overwhelming demand for…employees with talents for conceptualization and knowledge creation.” Broad conceptual skills now helped in an array of jobs, and suddenly control over career trajectory shifted from the employer, who looked inward at a ladder of opportunity, to the employee, who peered out at a vast web of possibility. In the private sector, an efficient talent market rapidly emerged as workers shuffled around in pursuit of match quality. [the degree of fit between the work someone does and who they are – their abilities and proclivities] While the world changed, the Army stuck with the industrial-era ladder.

In Chapter 7 – “Flirting with Your Possible Selves” – David shares the amazing career story of Frances Hesselbein as an example of changing tack many times rather than choosing an early specialization and sticking with it, and the many successes it can yield along the journey. He cites:

[computational neuroscientist Ogi Ogas] uses the shorthand “standardization covenant” for the cultural notion that it is rational to trade a winding path of self-exploration for a rigid goal with a head start because it ensures stability. “The people we study who are fulfilled do pursue a long-term goal, but they only formulate it after a period of discovery,” he told me. “Obviously, there’s nothing wrong with getting a law or medical degree or PhD. But it’s actually riskier to make that commitment before you know how it fits you. And don’t consider the path fixed. People realize things about themselves halfway through medical school.” Charles Darwin, for example.

Chapter 8 – “The Outsider Advantage” – talks about the benefits of bringing diverse skills and experiences to bear in problem solving:

[Alph] Bingham had noticed that established companies tended to approach problems with so-called local search, that is, using specialists from a single domain, and trying solutions that worked before. Meanwhile, his invitation to outsiders worked so well that it was spun off as an entirely different company. Named InnoCentive, it facilitates entities in any field acting as “seekers” paying to post “challenges” and rewards for outside “solvers.” A little more than one-third of challenges were completely solved, a remarkable portion given that InnoCentive selected for problems that had stumped the specialists who posted them. Along the way, InnoCentive realized it could help seekers tailor their posts to make a solution more likely. The trick: to frame the challenge so that it attracted a diverse array of solvers. The more likely a challenge was to appeal not just to scientists but also to attorneys and dentists and mechanics, the more likely it was to be solved.

Bingham calls it “outside-in” thinking: finding solutions in experiences far outside of focused training for the problem itself. History is littered with world-changing examples.

This sounds like the overused “think outside the box” concept, but there’s a lot of validity here – the fact is that InnoCentive works:

…as specialists become more narrowly focused, “the box” is more like Russian nesting dolls. Specialists divide into subspecialties, which soon divide into sub-subspecialties. Even if they get outside the small doll, they may get stuck inside the next, slightly larger one. 

In Chapter 9 – “Lateral Thinking with Withered Technology” – David tells the fascinating story of Nintendo and how the Game Boy was such a huge success although built using older (“withered”) technology. Out of this story, he mentions the idea of “frogs” and “birds” from physicist and mathematician, Freeman Dyson:

…Dyson styled it this way: we need both focused frogs and visionary birds. “Birds fly high in the air and survey broad vistas of mathematics out to the far horizon,” Dyson wrote in 2009. “They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and only see the flowers that grow nearby. They delight in the details of particular objects, and they solve problems one at a time.” As a mathematician, Dyson labeled himself a frog but contended, “It is stupid to claim that birds are better than frogs because they see farther, or that frogs are better than birds because they see deeper.” The world, he wrote, is both broad and deep. “We need birds and frogs working together to explore it.” Dyson’s concern was that science is increasingly overflowing with frogs, trained only in a narrow specialty and unable to change as science itself does. “This is a hazardous situation,” he warned, “for the young people and also for the future of science.”

I like this frog and bird analogy and can picture examples from working with teams where excellent testing arose from a combination of frogs and birds working together to produce the kind of product information neither would have provided alone.

David makes the observation that communication technology and our increasingly easy access to vast amounts of information is also playing a part in reducing our need for specialists:

…narrowly focused specialists in technical fields…are still absolutely critical, it’s just that their work is widely accessible, so fewer suffice

An interesting study on patents further reinforces the benefits of “range”:

In low-uncertainty domains, teams of specialists were more likely to author useful patents. In high-uncertainty domains – where the fruitful questions themselves were less obvious – teams that included individuals who had worked on a wide variety of technologies were more likely to make a splash. The higher the domain uncertainty, the more important it was to have a high-breadth team member… When the going got uncertain, breadth made the difference.

In Chapter 10 – “Fooled by Expertise” – David looks at how poorly “experts” are able to predict the future and talks about work from psychologist and political scientist, Philip Tetlock:

Tetlock conferred nicknames…that became famous throughout the psychology and intelligence-gathering communities: the narrow-view hedgehogs, who “know one big thing,” and the integrator foxes, who “know many little things.”

Hedgehog experts were deep but narrow. Some had spent their careers studying a single problem. Like [Paul] Ehrlich and [Julian] Simon, they fashioned tidy theories of how the world works through the single lens of their specialty, and then bent every event to fit them. The hedgehogs, according to Tetlock, “toil devotedly” within one tradition of their specialty, “and reach for formulaic solutions to ill-defined problems.” Outcomes did not matter; they were proven right by both successes and failures, and burrowed further into their ideas. It made them outstanding at predicting the past, but dart-throwing chimps at predicting the future. The foxes, meanwhile, “draw from an eclectic array of traditions, and accept ambiguity and contradiction,” Tetlock wrote. Where hedgehogs represented narrowness, foxes ranged outside a single discipline or theory and embodied breadth.

David’s observations on this later in the chapter reminded me of some testers I’ve worked with over the years who are unwilling to see beyond the binary “pass” or “fail” outcome of a test:

Beneath complexity, hedgehogs tend to see simple, deterministic rules of cause and effect framed by their area of expertise, like repeating patterns on a chessboard. Foxes see complexity in what others mistake for simple cause and effect. They understand that most cause and effect relationships are probabilistic, not deterministic. There are unknowns, and luck, and even when history apparently repeats, it does not do so precisely. They recognize that they are operating in the very definition of a wicked learning environment, where it can be very hard to learn, from either wins or losses.

Chapter 11 – “Learning to Drop Your Familiar Tools” – starts off by telling the story of the Challenger space shuttle disaster and how, even though some people knew about the potential for the problem that caused the disaster, existing practices and culture within NASA got in the way of that knowledge being heard. The “Carter Racing” Harvard Business School case study mimics the Challenger disaster, but the participants have to make a race/no-race decision on whether to run a racing car with some known potential problems. Part of this story reminded me very much of the infamous Dice Game so favoured by the context-driven testing community:

“Okay…here comes a quantitative question,” the professor says. “How many times did I say yesterday if you want additional information let me know?” Muffled gasps spread across the room. “Four times,” the professor answers himself. “Four times I said if you want additional information let me know.” Not one student asked for the missing data [they needed to make a good decision].

A fascinating story about the behaviour of firefighters in bushfire situations was very revealing – many of those who perished were found weighed down with heavy equipment when they could have ditched their tools and probably run to safety:

Rather than adapting to unfamiliar situations, whether airline accidents or fire tragedies, [psychologist and organizational behaviour expert Karl] Weick saw that experienced groups became rigid under pressure and “regress to what they know best.” They behaved like a collective hedgehog, bending an unfamiliar situation to a familiar comfort zone, as if trying to will it to become something they had actually experienced before. For wildland firefighters, their tools are what they know best. “Firefighting tools define the firefighter’s group membership, they are the firefighter’s reason for being deployed in the first place,” Weick wrote. “Given the central role of tools in defining the essence of a firefighter, it’s not surprising that dropping one’s tools creates an existential crisis.” As Maclean succinctly put it, “When a firefighter is told to drop his firefighting tools, he is told to forget he is a firefighter.”

This reminded me of some testers who hang on to test management tools or a particular automation tool as though it defines them and their work. We should be thinking more broadly and using tools to aid us, not define us:

There are fundamentals – scales and chords – that every member must overlearn, but those are just tools for sensemaking in a dynamic environment. There are no tools that cannot be dropped, reimagined, or repurposed in order to navigate an unfamiliar challenge. Even the most sacred tools. Even the tools so taken for granted they become invisible.

Chapter 12 – “Deliberate Amateurs” – wraps up the main content of the book. I love this idea:

They [amateurs] embrace what Max Delbruck, a Nobel laureate who studied the intersection of physics and biology, called “the principle of limited sloppiness.” Be careful not to be too careful, Delbruck warned, or you will unconsciously limit your exploration.

This note on the global financial crisis rings true in testing also; all too often we see testing compartmentalized and systemic issues go undetected:

While I was researching this book, an official with the US Securities and Exchange Commission learned I was writing about specialization and contacted me to make sure I knew that specialization had played a critical role in the 2008 global financial crisis. “Insurance regulators regulated insurance, bank regulators regulated banks, securities regulators regulated securities, and consumer regulators regulated consumers,” the official told me. “But the provision of credit goes across all those markets. So we specialized products, we specialized regulation, and the question is, ‘Who looks across those markets?’ The specialized approach to regulation missed systemic issues.”

We can also learn something from this observation about team structures, especially in the world of microservices and so on:

In professional networks that acted as fertile soil for successful groups, individuals moved easily between teams, crossing organizational and disciplinary boundaries and finding new collaborators. Networks that spawned unsuccessful teams, conversely, were broken into small, isolated clusters in which the same people collaborated over and over. Efficient and comfortable, perhaps, but apparently not a creative engine.

In his Conclusion, David offers some good advice:

Approach your personal voyage and projects like Michelangelo approached a block of marble, willing to learn and adjust as you go, and even to abandon a previous goal and change directions entirely should the need arise. Research on creators in domains from technological innovation to comic books shows that a diverse group of specialists cannot fully replace the contributions of broad individuals. Even when you move on from an area of work or an entire domain, that experience is not wasted.

Finally, remember that there is nothing inherently wrong with specialization. We all specialize to one degree or another, at some point or other.

I thoroughly enjoyed reading “Range”. David’s easy writing style illustrated his points with good stories and examples, making this a very accessible and comprehensible book. There were many connections to what we see in the world of software testing; hopefully I’ve managed to illuminate some of these in this post.

This is recommended reading for anyone involved in technology, and I think testers in particular will gain a lot of insight from this book. And, remember, “Be careful not to be too careful”!

“Essentialism: The Disciplined Pursuit of Less” (Greg McKeown)

After seeing several recommendations for the book Essentialism: The Disciplined Pursuit of Less, I borrowed a copy from the Melbourne Library Service recently – and then read it cover to cover in only a couple of sittings. This is a sign of how much I enjoyed it, and the messages in the book resonated strongly with me, on both a personal and professional level. The parallels between what Greg McKeown writes about here and the Agile movement in software development are also (perhaps surprisingly) strong, and this helped make the book even more contextually significant for me.

The fundamental idea here is “Less but better.”

The way of the Essentialist is the relentless pursuit of less but better… Essentialism is not about how to get more things done; it’s about how to get the right things done. It doesn’t mean just doing less for the sake of less either. It is about making the wisest possible investment of your time and energy in order to operate at your highest point of contribution by doing only what is essential.

Greg argues that we have forgotten our ability to choose and feel compelled to “do it all” and say yes to everything:

The ability to choose cannot be taken away or even given away – it can only be forgotten… When we forget our ability to choose, we learn to be helpless. Drip by drip we allow our power to be taken away until we end up becoming a function of other people’s choices – or even a function of our own past choices.

It’s all too easy in our busy, hyper-connected lives to think almost everything is essential and that the opportunities that come our way are almost equal. But the Essentialist thinks almost everything is non-essential and “distinguishes the vital few from the trivial many.”

Greg makes an important point about trade-offs, something that, again, is all too easy to forget – instead we over-commit, trying to do everything asked of us or taking on every opportunity that comes our way:

Essentialists see trade-offs as an inherent part of life, not as an inherently negative part of life. Instead of asking “What do I have to give up?”, they ask “What do I want to go big on?” The cumulative impact of this small change in thinking can be profound.

The trap of “busyness” leads us to not spend the time we should reflecting on what’s really important.

Essentialists spend as much time as possible exploring, listening, debating, questioning, and thinking. But their exploration is not an end in itself. The purpose of their exploration is to discern the vital few from the trivial many.

The subject of sleep comes next, and it seems to be a hot topic right now. A non-Essentialist thinks “One hour less of sleep equals one more hour of productivity” while the Essentialist thinks “One more hour of sleep equals several more hours of much higher productivity.” This protection of the asset that is sleep is increasingly being shown to be important, not only for productivity but also for mental health.

Our highest priority is to protect our ability to prioritize.

Prioritizing which opportunities to take on is a challenge for many of us; I’ve certainly taken on too much at times. Greg’s advice when selecting opportunities is simple:

If it isn’t a clear yes, then it’s a clear no

Of course, actually saying “no” can be difficult, and a non-Essentialist will say “yes” to everything rather than risk social awkwardness and pressure. An Essentialist, meanwhile, “dares to say no firmly, resolutely and gracefully” and says “yes” only to things that really matter. This feels like great advice and thankfully Greg offers a few tips for how to say “no” gracefully:

  • Separate the decision from the relationship
  • Saying “no” gracefully doesn’t have to mean using the word no
  • Focus on the trade-off
  • Remind yourself that everyone is selling something
  • Make your peace with the fact that saying “no” often requires trading popularity for respect
  • Remember that a clear “no” can be more graceful than a vague or non-committal “yes”

The section on subtracting (removing obstacles to bring forth more) resonated strongly with my experiences in software development:

Essentialists don’t default to Band-Aid solutions. Instead of looking for the most obvious or immediate obstacles, they look for the ones slowing down progress. They ask “What is getting in the way of achieving what is essential?” While the non-Essentialist is busy applying more and more pressure and piling on more and more solutions, the Essentialist simply makes a one-time investment in removing obstacles. This approach goes beyond just solving problems, it’s a method of reducing your efforts to maximize your results.

Similarly when looking at progress, there are obvious similarities with the way agile teams think and work:

A non-Essentialist starts with a big goal, gets small results and goes for the flashiest wins. An Essentialist starts small, gets big results and celebrates small acts of progress.

The benefits of routine are also highlighted, for “without routine, the pull of non-essential distractions will overpower us” and I see the value in the routines of Scrum, for example, as a way of keeping distractions at bay and helping team execution appear more effortless.

This relatively short book is packed with great stories and useful takeaways. As we all lead more connected and busy lives, where the division between work and not-work has become so blurred for so many of us, the ideas in this book are practical ways to help focus on what really matters. I’m certainly motivated to focus on a smaller number of projects now, especially outside of work – a decision I’d already taken before reading this book, but reading it validated that decision and also gave me good ways of dealing with whatever opportunities may arise and truly prioritizing the ones that matter.

Kevlin Henney at the “Software Art Thou?” meetup (Melbourne, 7th March 2019)

The latest in Zendesk’s excellent “Software Art Thou?” meetup series saw the UK’s Kevlin Henney addressing a packed house (of over 100) in Melbourne on the evening of 7th March.

Kevlin is an independent consultant, speaker, writer and trainer. His development interests are in patterns, programming, practice and process. He is co-author of A Pattern Language for Distributed Computing and On Patterns and Pattern Languages, two volumes in the Pattern-Oriented Software Architecture series. He is also editor of 97 Things Every Programmer Should Know and 97 Things Every Java Programmer Should Know.

The talk was advertised as follows:

“It’s just semantics.” How many conversations about philosophy, politics and programming are derailed by this thought-stopping comment?

Semantics is all about meaning. If there is one thing we struggle with and need to get better at, it is the search for and clarification of meaning. The world in which a software system lives is filled with meaning. The structure, concepts and names that inform the code, its changes and the mental models held by developers and other business roles are expressions of meaning. The very act of development is an exercise in meaning — its discovery, its formulation, its communication. Paradigms, processes and practices are anchored in different ways of thinking about and arriving at meaning.

But just because we are immersed in concepts of meaning from an early age, and just because the daily work of software development is about wrangling meaning, and just because it’s just semantics, that doesn’t mean we’re necessarily good at it. It takes effort and insight. Let’s talk about what we mean.

Kevlin’s talk was titled “What do you mean?” which he quickly modified to “WTF do you mean?”. He kicked off by talking about abstraction and this quote from Dijkstra:

The purpose of abstraction is not to be vague but to create a new semantic level in which one can be absolutely precise

He pointed out that when we’re criticized for trying to be precise with language, with statements such as “It’s just semantics”, we need to remember that this literally means “It’s just meaning” – so why wouldn’t we seek that?!

Turning specifically to software development, Kevlin argued that code, tests, scripts, etc. are all “code”, literally the codification of knowledge. Software development, to him, is a process of knowledge acquisition through learning, communication and social negotiation. Software architecture is a model of participation with design comprising synthesis and analysis (which are opposites of each other).

An Ernest Hemingway quote came next:

The only kind of writing is rewriting

Kevlin argued that software development is the production of variation; it’s not manufacturing. Kudos to him for this messaging – it’s still all too common to see the ill-placed manufacturing/factory model imposed on software development, and it leads to nonsense like “Quality Assurance” when we really should be talking about testing. He also made the good point that what we do straddles natural and programming languages.

Kevlin said that the domain (whatever it is) always looks very different from the inside and that:

Your customer doesn’t mean what they say

They use their terms and context, they leave out significant details (what they don’t say), they make assumptions, and actually don’t know what they want in the first place (it’s just a property of humans)!

As an argument for iterative development and the idea of slowing down to become faster, he quoted Neil Gaiman:

You learn from finishing things

There is a big difference between speed and velocity, with the latter being overloaded by the agile community. He claimed speed often leads to us “building the wrong thing brilliantly” rather than going slower in the right direction.

He made another excellent point as he was close to finishing up, around our modern fascination with “prioritizing by business value”: he suggested we say “estimated business value”, since this judgement of value is itself prone to error.

This was a very professionally-delivered talk, serious in nature but delivered with anecdotes and some dry British humour along the way, supported by a very nice slide deck.

Zendesk always put on a great meetup and this was no exception. Their space is large and airy with excellent audio-visual facilities, plus they lay on a lot of finger food and a well-stocked (and varied) bar service. The quality of their presenters is also always top notch and, although I wasn’t familiar with Kevlin and his work, this was a really engaging and interesting talk (even after seventy minutes of non-stop talking!) on a topic that gets scant treatment in the software development industry – and in testing specifically. Many of us in the context-driven testing community are often accused of playing semantic games but, as Kevlin ably demonstrated, conveying meaning is critically important and genuinely difficult to do well.

“Turn the Ship Around!” (L. David Marquet)

After seeing a number of positive reviews and recommendations for this book, I asked the Melbourne Library Service to procure a copy – they agreed and I’ve recently enjoyed reading the fruits of their investment.

Marquet is a former nuclear submarine commander and the book details his moves to change the leadership on a poorly-performing submarine from leader-follower to what he calls “leader-leader”.

He starts out by describing what leadership meant in the (US) Navy, quoting from the “Naval Academy leadership book”:

Leadership is the art, science, or gift by which a person is enabled or privileged to direct the thoughts, plans, and actions of others in such a manner as to obtain and command their obedience, their confidence, their respect, and their loyal co-operation

As he points out “leadership in the Navy, and in most organizations, is about controlling people. It divides the world into two groups of people: leaders and followers.” His solution to the leader-follower pattern is the leader-leader model:

The leader-leader structure is fundamentally different from the leader-follower structure. At its core is the belief that we can all be leaders and, in fact, it’s best when we are all leaders. Leadership is not some mystical quality that some possess and others do not. As humans, we all have what it takes, and we all need to use our leadership abilities in every aspect of our work life.

The leader-leader model not only achieves great improvements in effectiveness and morale, but also makes the organization stronger. Most critically, these improvements are enduring, decoupled from the leader’s personality and presence. Leader-leader structures are significantly more resilient, and they do not rely on the designated leader always being right. Further, leader-leader structures spawn additional leaders throughout the organization naturally. It can’t be stopped.

Marquet details his journey of building the leader-leader model during his time turning around the flagging fortunes of the Santa Fe submarine. His passion, guts and honesty in making the changes he did shine through the narrative and result in a really simple but powerful model for changing the way we view leadership in organizations.

He argues that “the core of the leader-leader model is giving employees control over what they work on and how they work. It means letting them make meaningful decisions. The two enabling pillars are competency and clarity.”

One of the first things Marquet noticed on joining the Santa Fe was their focus on avoiding mistakes:

What happened with Santa Fe…was that the crew was becoming gun-shy about making mistakes. The best way not to make a mistake is not to do anything or make any decisions. It dawned on me the day I assumed command that focusing on avoiding errors is helpful for understanding the mechanics of procedures and detecting impending major problems before they occur, but it is a debilitating approach when adopted as the objective of an organization.

This observation led to his first mechanism for clarity: achieve excellence, don’t just avoid errors.

His discovery of the processes around signing off sailors’ leave led to his first mechanism for control: find the genetic code for control and rewrite it:

The first step in changing the genetic code of any organization or system is delegating control, or decision-making authority, as much as is comfortable, and then adding a pinch more. This isn’t an empowerment “program”. It’s changing the way the organization controls decisions in an enduring, personal way.

I like this idea of “Don’t move information to authority, move authority to information” in this area too.

The next control mechanism Marquet came up with was: act your way to new thinking:

When you’re trying to change employees’ behaviours, you have basically two approaches to choose from: change your own thinking and hope this leads to new behaviour, or change your behaviour and hope this leads to new thinking. On board Santa Fe, the officers and I did the latter, acting our way to new thinking.

The next control mechanism rings very true in software development, especially when adopting agile practices: have short early conversations to get efficient work:

Supervisors needed to recognize that the demand for perfect products the first time they see them results in significant waste and frustration throughout the entire organization. Even a thirty-second check early on could save your people numerous hours of work… a well-meaning yet erroneous translation of intent [could result] in a significant waste of resources.

In his mission to turn passive followers into active leaders, a “minor trick of language” turned into an effective means of control: use “I intend to…”. Marquet decided to avoid giving orders: officers would state their intentions with “I intend to…”, he would reply “Very well”, and then each man would execute his plan. It turned out that this simple change “profoundly shifted ownership of the plan to the officers.”

Another control mechanism follows: resist the urge to provide solutions. This is a really hard habit to break and I’ve been working on this personally during my coaching and mentoring activities. I can see how it breaks down the leader-follower mentality, but it takes a deliberate effort to stop yourself from stepping in and solutionifying!

Marquet’s next control mechanism is: eliminate top-down monitoring systems:

Supervisors frequently bemoan the “lack of ownership” in their employees. When I observe what they do and what practices they have in their organization, I can see how they defeat any attempt to build ownership.

Worse, if they’ve voiced their frustrations out loud, their employees perceive them as hypocritical and they lose credibility. Don’t preach and hope for ownership; implement mechanisms that actually give ownership… Eliminating top-down monitoring systems will do it for you. I’m not talking about eliminating data collection and measuring processes that simply report conditions without judgement. Those are important as they “make the invisible visible”. What you want to avoid are the systems whereby senior personnel are determining what junior personnel should be doing.

More control follows in the shape of: think out loud:

When I heard what my watch officers were thinking, it made it much easier for me to keep my mouth shut and let them execute their plans. It was generally when they were quiet and I didn’t know what they would do next that I was tempted to step in. Thinking out loud is essential for making the leap from leader-follower to leader-leader.

Regular inspections were part of Navy life and Marquet and his crew decided to “be open and invite outside criticism”, in a control mechanism he calls: embrace the inspectors.

Embrace the inspectors can be viewed as a mechanism to enhance competence, but I think it fits even better in the discussion of control because it allowed us not only to be better submariners but also to maintain control of our destiny.

[It] also turned out to be an incredibly powerful vehicle for learning. Whenever an inspection team was on board, I would hear crew members saying things like, “I’ve been having a problem with this. What have you seen other ships do to solve it?” Most inspection teams found this attitude remarkable.

While Marquet started off by pushing decision making and control to lower and lower levels in the organization, he found that control by itself was not enough and he also needed to bolster the technical competence of his crew if this approach was to be successful.

The first mechanism for competence he outlines is: take deliberate action. After an incident in which a circuit-breaker was mistakenly closed on the submarine, the idea of taking deliberate action arose from the postmortem:

“Well, he was just in auto. He didn’t engage his brain before he did what he did: he was just executing a procedure.”

I thought that was perceptive. We discussed a mechanism for engaging your brain before acting. We decided that when operating a nuclear-powered submarine we wanted people to act deliberately, and we decided on “take deliberate action” as our mechanism. This meant prior to any action, the operator paused and vocalized and gestured toward what he was about to do, and only after taking a deliberate pause would he execute the action. Our intent was to eliminate those “automatic” mistakes. Since the goal of “take deliberate action” was to introduce deliberateness in the mind of the operator, it didn’t matter whether anyone was around or not. Deliberate actions were not performed for the benefit of an observer or an inspector. They weren’t for show.

This particular mechanism reminded me of the ideas in Daniel Kahneman’s “Thinking, Fast and Slow” book.

The next competence mechanism is easy to relate to: we learn (everywhere, all the time). Embedding the idea that everyone in an organization needs to be constantly learning is a very good thing, be it in a military setting like Marquet’s or in software engineering. I actually think folks in IT are generally on board with this idea, given the high rate of change in technology, programming languages, etc.; the popularity of IT-related meetup groups, for example, is an indicator of a willingness to continue learning outside the scope of the day-to-day in the office.

His next mechanism for competence is: don’t brief, certify.

A briefing is a passive activity for everyone except the briefer. Everyone else “is briefed”. There is no responsibility for preparation or study. It’s easy to just nod and say “ready” without full intellectual engagement. Furthermore, the sole responsibility in participating in a brief is to show up. Finally, a brief, as such, is not a decision point. The operation is going to happen and we are simply talking about it first.

We decided to do away with briefs. From that point on we would do certifications.

A certification is different from a brief in that during a certification, the person in charge of his team asks them questions. At the end of the certification, a decision is made whether or not the team is ready to perform the upcoming operation. If the team has not demonstrated the necessary knowledge during the certification, the operation should be postponed.

Another competence mechanism is presented next: continually and consistently repeat the same message:

Repeat the same message, day after day, meeting after meeting, event after event. Sounds redundant, repetitive, and boring. But what’s the alternative? Changing the message? That results in confusion and a lack of direction. I didn’t realize the degree to which old habits die hard, even when people are emotionally on board with the change.

This mechanism is one I’ve employed frequently in coaching testers around the world and it’s surprisingly effective (I say “surprising” since it surprised me that it is both necessary and valuable to do it).

Marquet’s last competence mechanism is: specify goals, not methods. This arose from a fire drill in which the team members followed a prescribed response but failed to extinguish the fire within the safe time limit:

…[now] the crew was motivated to devise the best approach to putting out the fire. Once they were freed from following a prescribed way of doing things, they came up with many ingenious ways to shave seconds off our response time [to fires].

The problem with specifying the method along with the goal is one of diminished control. Provide your people with the objective and let them figure out the method.

Lovers of “best practices” please take note!

The third and final set of mechanisms Marquet introduces are around clarity:

As more decision-making authority is pushed down the chain of command, it becomes increasingly important that everyone throughout the organization understands what the organization is about. This is called clarity.

Clarity means people at all levels of an organization clearly and completely understand what the organization is about. This is needed because people in the organization make decisions against a set of criteria that includes what the organization is trying to accomplish. If clarity of purpose is misunderstood, then the criteria by which a decision is made will be skewed, and suboptimal decisions will be made.

The next mechanism for clarity is: build trust and take care of your people:

It’s hard to find a leadership book that doesn’t encourage us to “take care of our people”. What I learned is this: Taking care of your people does not mean protecting them from the consequences of their own behaviour. That’s the path to irresponsibility. What it does mean is giving them every available tool and advantage to achieve their aims in life, beyond the specifics of the job. In some cases that meant further education; in other cases crewmen’s goals were incompatible with Navy life and they separated on good terms.

The next mechanism for clarity is: use your legacy for inspiration. This one helped to provide organizational clarity, explaining the “why” for the crewmen’s service:

Many organizations have inspiring early starts and somehow “lose their way” at some later point. I urge you to tap into the sense of purpose and urgency that developed during those early days or during some crisis. The trick is to find real ways to keep those alive as the organization grows. One of the easiest ways is simply to talk about them. Embed them into your guiding principles and use those words in efficiency reports and personnel awards.

Another mechanism for clarity comes next: use guiding principles for decision criteria.

Leaders like to hang a list of guiding principles on office walls for display, but often those principles don’t become part of the fabric of the organization. Not on Santa Fe. We did several things to reinforce these principles and make them real to the crew. For example, when we wrote awards or evaluations, we tried to couch behaviours in the language of these principles. “Petty Officer M exhibited Courage and Openness when reporting…”

Most of you have organizational principles. Go out and ask the first three people you see what they are. I was at one organization that proudly displayed its motto in Latin. I asked everyone I saw what it meant. The only one who knew was the CEO. That’s not good.

I’ve personally seen these working well within Quest – we have a set of Core Values and we refer to them regularly at all levels of the organization.

Another mechanism for clarity is: use immediate recognition to reinforce desired behaviours. I really like this one; it’s simple but easy to forget to do:

When I say immediate recognition, I mean immediate. Not thirty days. Not thirty minutes. Immediate.

Look at your structures for awards. Are they limited? Do they pit some of your employees against others? That structure will result in competition at the lowest level. If what you want is collaboration, then you are destroying it.

A mechanism for organizational clarity comes next: begin with the end in mind.

As you work with individuals in your organization to develop their vision for the future, it is crucial that you establish specific, measurable goals. These goals will help the individuals realize their ambitions. In addition, you as a mentor have to establish that you are sincerely interested in the problems of the person you are mentoring. By taking action to support the individual, you will prove that you are indeed working in their best interest and always keeping the end in mind.

His final mechanism for clarity is: encourage a questioning attitude over blind obedience. He asks “Will your people follow an order that isn’t correct? Do you want obedience or effectiveness? Have you built a culture that embraces a questioning attitude?” Reinforcing that asking questions is a good idea is so important in what we do as software testers (I recently heard Nick Pass define “QA” as “Question Asker” during his talk at the TiCCA19 conference). There are sometimes personality and cultural barriers to overcome in encouraging people to question – the latter is something I have much experience of from working with our teams in China.

In summary, Marquet’s mechanisms for Control, Competence and Clarity are as follows.

Control

  • Find the genetic code for control and rewrite it
  • Act your way to new thinking
  • Short, early conversations make efficient work
  • Use “I intend to…” to turn passive followers into active leaders
  • Resist the urge to provide solutions
  • Eliminate top-down monitoring systems
  • Think out loud (both superiors and subordinates)
  • Embrace the inspectors

Competence

  • Take deliberate action
  • We learn (everywhere, all the time)
  • Don’t brief, certify
  • Continually and consistently repeat the message
  • Specify goals, not methods

Clarity

  • Achieve excellence, don’t just avoid errors
  • Build trust and take care of your people
  • Use your legacy for inspiration
  • Use guiding principles for decision criteria
  • Use immediate recognition to reinforce desired behaviours
  • Begin with the end in mind
  • Encourage a questioning attitude over blind obedience

There are so many useful takeaways in this book; it’s a short but engaging read and its direct applicability to the way we manage people on software development projects is very clear – especially if you’re aiming for truly self-organizing agile teams!

I highly recommend reading Turn the Ship Around to anyone interested in genuinely empowering people in their teams.

Testing in Context Conference Australia 2019

The third annual conference of the Association for Software Testing (AST) outside of North America took place in Melbourne in the shape of Testing in Context Conference Australia 2019 (TiCCA19) on February 28 & March 1. The conference was held at the Jasper Hotel near the Queen Victoria Market.

The event drew a crowd of about 50, mainly from Australia and New Zealand but also with a decent international contingent (including a representative of the AST and a couple of testers all the way from Indonesia!).

I co-organized the event with Paul Seaman and the AST allowed us great freedom in how we put the conference together. We decided on the theme first, From Little Things Big Things Grow, and had a great response to our call for papers, resulting in what we thought was an awesome programme.

The Twitter hashtag for the event was #ticca19 and this was fairly active across the conference.

The event consisted of a first day of workshops followed by a single conference day, with opening and closing keynotes book-ending one-hour track sessions. The track sessions were in typical AST/peer conference style, with around forty minutes for the presentation followed by around twenty minutes of “open season” (facilitated question-and-answer time, following the K-cards approach).

Takeaways

  • Testing is not dead, despite what you might hear on social media or from some automation tooling vendors. There is a vibrant community of skilled human testers who deliver immense value in their organizations. My hope is that these people will promote their skills more broadly and advocate for human involvement in producing great software.
  • Ben Simo’s keynote highlighted just how normalized bad software has become; we really can do better as a software industry, and testers have a key role to play.
  • While “automation” is still a hot topic, I got a sense of a move back towards valuing the role of humans in producing quality software. This might not be too surprising given the event was a context-driven testing conference, but it’s still worth noting.
  • The delegation was quite small but the vibe was great and feedback incredibly positive (especially about the programme and the venue). There was evidence of genuine conferring happening all over the place, exactly what we aimed for!
  • It’s great to have a genuine context-driven testing conference on Australian soil and the AST are to be commended for continuing to back our event in Melbourne.
  • I had a tiring but rewarding experience in co-organizing this event with Paul; the testing community in Melbourne is a great place to be!

Workshop day (Thursday 28th February)

We offered two full-day workshops to kick the event off, with “Applied Exploratory Testing” presented by Toby Thompson (from Software Education) and “Leveraging the Power of API Testing” presented by Scott Miles. Both workshops went well and it was pleasing to see them so well attended. Feedback on both has been excellent, so well done to Toby and Scott for their big efforts in putting the workshops together and delivering them so professionally.

Toby Thompson setting up his ET workshop
Scott Miles ready to start his API testing workshop

Pre-conference meetup (Thursday 28th February)

We decided to hold a free meetup on the evening before the main conference day to offer the broader Melbourne testing community the chance to meet some of the speakers, as well as hear a great presentation and speaker panel session. Thanks to generous sponsorship, the meetup went really well, with a small but highly engaged audience – I’ve blogged in detail about the meetup at https://therockertester.wordpress.com/2019/03/04/pre-ticca19-conference-meetup/

Aaron Hodder addresses the meetup
Graeme, Aaron, Sam and Ben talking testing during the panel session

Conference day (Friday 1st March)

The conference was kicked off at 8.30am with some opening remarks from me, including an acknowledgement of traditional owners and a mention of the two students from the EPIC TestAbility Academy whom we sponsored to attend. Next up was Ilari Henrik Aegerter (board member of the AST), who briefly explained the AST’s mission and the services and benefits membership provides, followed by Richard Robinson outlining the way “open season” would be facilitated after each track talk.

I then introduced our opening keynote, Ben Simo, with “Is There A Problem Here?”. Ben joined us all the way from Phoenix, Arizona, and this was his first time in Australia so we were delighted to have him “premiere” at our conference! His 45-minute keynote showed us many cases where he has experienced problems when using systems & software in the real world – from Australian road signs to his experience of booking his flights with Qantas, from hotel booking sites to roadtrip/mapping applications, and of course covering his well-publicized work around Healthcare.gov some years ago. He encouraged us to move away from “pass/fail” to asking “is there a problem here?” and, while not expecting perfection, to know that our systems and software can be better. A brief open season brought an excellent first session to a close.

Ben Simo during his keynote (photo from Lynne Cazaly)

After a short break, the conference split into two track sessions with delegates having the choice of “From Prototype to Product: Building a VR Testing Effort” with Nick Pass or “Tales of Fail – How I failed a Quality Coach role” with Samantha Connelly (who has blogged about her talk and also her TiCCA19 conference experience in general).

While Sam’s talk attracted the majority of the audience, I opted to spend an hour with Nick Pass as he gave an excellent experience report of his time over in the UK testing virtual reality headsets for DisplayLink. Nick was in a new country, working for a new company, in a new domain, and on a brand new product within that company. He outlined the many challenges, including technical, physical (simulator sickness), process (“sort of agile”) and personal (“I have no idea”). Due to the nature of the product, there were rapid functionality changes and lots of experimentation and prototyping. Nick said he viewed “QA” as “Question Asker” in this environment and he advocated a Quality Engineering approach focused on both product and process. Test design was emergent but, when they got their first customer (HTC), the move to productizing meant a tightening up of processes, more automated checks, stronger testing techniques and adoption of the LeSS framework. This was a well-crafted first-person experience report from Nick, with a simple but effective deck to guide the way. His 40-minute talk was followed by a full open season with a lot of questions both around the cool VR product and his role in building a test discipline for it.

Nick Pass talks VR

Morning tea was a welcome break and was well catered by the Jasper, before tracks resumed in the shape of “Test Reporting in the Hallway” with Morris Nye and “The Automation Gum Tree” with Michelle Macdonald.

I joined Michelle – a self-confessed “automation enthusiast” – as she described her approach to automation for the Pronto ERP product using the metaphor of the Aussie gum tree (which meant some stunning visuals in her slide deck). Firstly, she set the scene: she has built an automated testing framework using Selenium and Appium to deal with the 50,000 screens, 2,000 data objects and 27 modules across Pronto’s system. She talked about their “Old Gum”, a Rational Robot system for testing their Win32 application, which later matured to use TestComplete. Her “new species” needed to cover both web and device UIs, preferably be based on open source technologies, make it easy for others to create scripts, and be well supported. Selenium IDE was the first step and the resulting framework is seen as successful as it’s easy to install, everyone has access to use it, knowledge has been shared, and patience has paid off. The gum tree analogies came thick and fast as the talk progressed. She talked about Inhabitants, be they consumers, diggers or travellers, then the need to sometimes burn off (throw away and start again), using the shade (developers working in feature branches) and controlling the giants (all too easy for automation to get too big and out of control). Michelle had a little too much content and her facilitator had to wrap her up 50 minutes into the session so that we had time for some questions during open season. There were some sound ideas in Michelle’s talk and she delivered it with passion, supported by the best-looking deck of the conference.

A sample of the beautiful slides in Michelle's talk

Lunch was a chance to relax over nice food and it was great to see people genuinely conferring over the content from the morning’s sessions. The hour passed quickly before delegates reconvened for another two track sessions.

First up for the afternoon was a choice between “Old Dog, New Tricks: How Traditional Testers Can Embrace Code” with Graeme Harvey and “The Uncertain Future of Non-Technical Testing” with Aaron Hodder.

I chose Aaron’s talk and he started off by challenging us as to what “technical” meant (and, as a large group, we failed to reach a consensus) as well as what “testing” meant. He gave his idea of what “non-technical testing” means: manually writing test scripts in English and a person executing them, while “technical testing” means manually writing test scripts in Java and a machine executing them! He talked about the modern development environment and what he termed “inadvertent algorithmic cruelty”, supported by examples. He mentioned that he’s never seen a persona of someone in crisis or a troll when looking at user stories, and that we have a great focus on technical risks but much less so on human risks. There are embedded prejudices in much modern software and he recommended the book Weapons of Math Destruction by Cathy O’Neil. This was another excellent talk from Aaron, covering a little of the same ground as his meetup talk but also breaking new ground and providing us with much food for thought about the way we build and test our software for real humans in the real world. Open season was busy and fully exhausted the hour in Aaron’s company.

Adam Howard introduces Aaron Hodder for his track

Graeme Harvey ready to present

A very brief break gave delegates time to make their next choice, “Exploratory Testing: LIVE!” with Adam Howard or “The Little Agile Testing Manifesto” with Samantha Laing. Having seen Adam’s session before (at TestBash Australia 2018), I decided to attend Samantha’s talk. She introduced the Agile Testing Manifesto that she put together with Karen Greaves, which highlights that testing is an activity rather than a phase, that we should aim to prevent bugs rather than focus on finding them, value testing over checking, aim to help build the best system possible instead of trying to break it, and treat quality as a whole-team responsibility. She gave us three top tips to take away: 1) ask “how can we test that?”, 2) use a “show me” column on your agile board (instead of an “in test” column), and 3) do all the testing tasks first (before development ones). This was a useful talk for the majority of her audience, who didn’t seem to be very familiar with this testing manifesto.

Sam Laing presenting her track session (photo from Lynne Cazaly)

With the track sessions done for the day, afternoon tea was another chance to network and confer before the conference came back together in the large Function Hall for the closing keynote. Paul did the honours in introducing the well-known Lynne Cazaly with “Try to See It My Way: Communication, Influence and Persuasion”.

She encouraged us to view people as part of the system and deliberately choose to “entertain” different ideas and information. In trying to understand differences, you will actually find similarities. Lynne pointed out that we over-simplify our view of others and this leads to a lack of empathy. She introduced the Karpman Drama Triangle and the Empowerment Dynamic (by David Emerald). Lynne claimed that “all we’re ever trying to do is feel better about ourselves” and, rather than blocking ideas, we should yield and adopt a “go with” style of facilitation.

Lynne was a great choice of closing keynote and we were honoured to have her agree to present at the conference. Her vast experience translated into an entertaining, engaging and valuable presentation. She spent the whole day with us and thoroughly enjoyed her interactions with the delegates at this, her first dedicated testing conference.

Slides from Lynne Cazaly's keynote

Paul Seaman closed out the conference with some acknowledgements and closing remarks before the crowd dispersed, and it was pleasing to see so many people joining us for the post-conference cocktail reception, splendidly catered by the Jasper. The vibe was fantastic and it was nice for us as organizers to finally relax a little and enjoy chatting with delegates.

Acknowledgements

A conference doesn’t happen by accident; there’s a lot of work over many months by a whole bunch of people, so it’s time to acknowledge the various help we had along the way.

The conference has been actively supported by the Association for Software Testing and couldn’t happen without their backing, so thanks to the AST and particularly to Ilari, who continues to be an enthusiastic promoter of the Australian conference via his presence on the AST board. Our wonderful event planner, Val Gryfakis, makes magic happen and saves the rest of us so much work in dealing with the venue and making sure everything runs to plan – we seriously couldn’t run the event without you, Val!

We had a big response to our call for proposals for TiCCA19, so thanks to everyone who took the time and effort to apply to provide content for the conference. Paul and I were assisted by Michele Playfair in selecting the programme and it was great to have Michele’s perspective as we narrowed down the field. We can only choose a very small subset for a one-day conference and we hope many of you will have another go when the next CFP comes around.

There is of course no conference without content, so a huge thanks to our great presenters, whether they were delivering workshops, keynotes or track sessions. Thanks also to those who bought tickets and supported the event as delegates – your engagement and positive feedback meant a lot to us as organizers.

Finally, my personal thanks go to my mate Paul for his help, encouragement, ideas and listening ear during the weeks and months leading up to the event. We make a great team and neither of us would do this gig with anyone else – cheers, mate.