Monthly Archives: February 2016

TEAM meetup number 8

The eighth TEAM meetup was held on 24th February, in the very funky offices of Envato (thanks to them for providing the venue, pizzas and drinks). Our membership had increased by 50 from the previous meetup, to stand at 350, and we had a record response in terms of RSVPs so it’s great to see the group becoming more and more popular.

After a short introduction by Rajesh Mathur, I gave the first presentation, “Growing Testing Skills using the Agile Testing Ecosystem”. I had previously given this talk at Nordic Testing Days 2015 (Estonia) and the Agile Testing & Automation Summit 2016 (Melbourne), but the content seemed new for most of this audience and there was a very active Q&A session afterwards which had to be drawn to a close in the interests of timekeeping. This excellent model for testing within agile teams doesn’t seem very well-known and is not getting the attention it deserves in my opinion. Based on the questions and post-presentation discussions I had with participants, the model will certainly get a run at a few companies in Melbourne now!

A short break allowed the group of about 40 to network and clean up what was left of the pizzas. Good conversations ensued and, as usual, the group were a very engaged bunch and somewhat reluctantly regrouped in readiness for the second presentation.

The second presentation came from fellow TEAM co-organizer Paul Seaman, who shared his personal experiences of “Why/how Agile Transformations Fail”. This was a “warts and all” type of presentation, pulling no punches in terms of its content detailing how hard it had been to implement an agile transformation in Paul’s company. His conclusion was that aiming for “pure agile” was a bridge too far for most companies and that transformation to an agile hybrid approach was a much more realistic target.

For further meetup announcements, remember to follow our meetup.com page at:

http://www.meetup.com/Test-Engineering-Alliance-Melbourne/

Also keep an eye on our website, http://www.testengineeringalliance.com, where you will find all of our different offerings – including the opportunity to take the Rapid Software Testing course with the one and only Michael Bolton and a brand new testing conference, Australian Testing Days, all happening in May 2016. The programme for the conference is world-class, including Michael Bolton, Lee Copeland and Anne-Marie Charrett, as well as more local talent from Australia and New Zealand.

Looking back on Rapid Software Testing

I’m currently preparing a new conference talk and part of the talk covers a transformational period of my testing career, viz. when I attended the Rapid Software Testing (RST) course with Michael Bolton.

Back in 2007, my employer – Quest Software (since acquired by Dell) – decided to bring Michael Bolton into their office in Kanata (near Ottawa, Canada) to run a three-day in-house RST class. Although I worked out of the Melbourne (Australia) office (and still do), I was invited to participate in the class and I jumped at the opportunity.

Up to this point, I’d been in testing for about eight years and was following a fairly traditional approach, with a heavy focus on writing test cases, traceability, etc. Although I was considered to be doing well and adding value to the projects I worked on, there were some internal struggles with the amount of time I was spending on stuff that didn’t really seem to add much value to anything. Having said that, I wasn’t really looking around for alternatives at this stage and wasn’t even aware that there was a community of testers doing things differently and coming from a very different viewpoint as to what good testing looked like.

So, returning to November 2007 and a long flight from Melbourne to Ottawa (via Los Angeles and Chicago) in the early stages of the Canadian Winter (and the start of the Australian Summer). Moving from around 30 degrees Celsius to less than 30 degrees Fahrenheit would be the first shock of the trip!

Being an in-house course meant meeting up with around 20 other employees of Quest Software, mainly from Canadian offices, a few from the US and just me from Australia. Michael learned all of our names right at the start of the class and never got one wrong for the rest of the three days, so it felt like there were personal connections between students and teacher right from the start.

Those of you familiar with RST will know the kind of content that’s on offer, but it was the great content combined with the passionate and engaging teaching style of Michael that made the class so awesome. My natural shyness was confronted frequently by the use of the Socratic method to stimulate critical thinking, but I soon became more comfortable and the revelations just kept coming. It was a very tiring three days but also the most valuable time in a classroom I’ve ever had in terms of changing the way I think – my idea of what good testing looked like was changed forever by those three days and it set me up to genuinely enjoy software testing and become passionate about it.

Taking RST was just the beginning of what felt like a new start in testing for me. Since then, I’ve spread my wings to engage much more with the worldwide testing community and now regularly attend & present at international testing conferences. I blog about testing (obviously) and follow great testers on Twitter. I co-organize the TEAM testing meetup in Melbourne and I’m also co-organizing the inaugural Australian Testing Days conference in Melbourne in May (and we’re delighted to have Michael as our opening keynote speaker). And, of course, I now add more value during my work at Dell Software. All of these varied and enjoyable aspects of my testing career can really be traced back to the change of mindset that the RST class gave me – what an awesome return on three days of time invested in the class!

When James Bach came to Melbourne in June 2011 to give a public RST class, I advocated strongly for all of our Melbourne testers to attend, so we were the majority group in that class. I deliberately took something of a back seat during the course, to let others experience the exercises and learning via the Socratic method. It was certainly interesting to see essentially the same content delivered by James as opposed to Michael – different styles but the same great learning opportunities. I was pleased that most of my group enjoyed the class and found useful takeaways (although they had already benefited from my RST experience and knowledge sharing before taking the class themselves).

With my positive and valuable experiences of RST, it’s great news that the TEAM group has secured Michael to visit Melbourne to present the RST class and also the one-day “RST for Managers” class. This is an amazing opportunity to learn a different way of thinking about testing – and maybe it will be as career-changing for you as it was for me!

TEAM 3-day RST class: http://testengineeringalliance.com/rapid-software-public-class/

TEAM 1-day RST for Managers class: http://testengineeringalliance.com/rapid-software-testing-for-managers/

A small contribution to the next generation of software engineering professionals

I just took part in my first ever Google Hangouts video call (and the technology worked very well). The call was with a group of three undergraduate students from McGill University in Montreal, Canada, and came about as a result of my response to the call for participation in a “Test Management Survey”. The survey was run by A Quality Leadership Institute – founded by Anna Royzman – in partnership with the well-known testing consultant, trainer and speaker, Rob Sabourin (who teaches at McGill too).

Although I’d seen Rob present at conferences, my first opportunity to spend quality time with him came during the OZWST 2013 peer conference held at Google’s offices in Sydney. Rob acted as content owner for the conference and he did an incredible job. He showed a genuine interest during my experience report (which was on implementing session-based exploratory testing with an offshore testing team in China) and it was great to spend time with him outside of the workshop to discuss it some more. Since then, Rob has been a great supporter of my work and his passion & encouragement to share my story has led me to give a number of international conference presentations. So, when I heard that Rob was involved in this Test Management Survey, it was a good chance for me to give something back to him – albeit quite indirectly – in thanks for his inspiration, mentorship and support over the last few years.

The call for participation in the survey indicated the following:

McGill University undergraduate students of the course “ECSE 428 Software Engineering Practice” prepare a paper and short research project as part of their assignment work.

In the winter 2016 Semester, January through April 2016, groups of 4 students will research software engineering process from the perspective of software test management.

Each group of students will interview two software test managers from different organizations.

Interviews will be done in person, via phone, Skype, FaceTime or similar technologies.

The interview questions seemed to be partly scripted and partly left to the students to follow leads as they came up during my answers. This is a great idea, bringing some real-life experience of testing into the view of the bright young next generation of software engineers. Their questions were sensible and, as most of my recent experience has been with agile teams using exploratory testing, my answers appeared to be of genuine interest to them (maybe because they fell outside of the theory they’d been taught?). Hopefully some of this research will be made available as it would be really interesting to see the conclusions these students come to after speaking to a diverse range of test managers from all over the world during the course of their research.

I think it’s important to give back to the testing community to help it become stronger and encourage newer players to enjoy the great career that testing can offer. This interview was a small contribution in the grand scheme of things, of course, but sharing experiences is what helps us all to improve the way we work. I also volunteer for the excellent Speak Easy initiative and this is helping to bring lots of great new voices to the conference circuit (including one for the conference I am co-organizing in Melbourne in May, Australian Testing Days!). I am enjoying co-organizing the Melbourne TEAM testing meetups too and it’s great to meet engaged testers who are actively looking for opportunities to learn and improve themselves.

Reading matter & Simple Rules

I see two very common patterns among testers when it comes to their reading habits – most have never read a book dedicated to software testing and even fewer have thought to read books in other fields (such as psychology and social sciences) that would help them understand the problems of performing great testing. This is an amazing state of affairs for “professionals”, especially when there is so much excellent material from good practitioners, in both our field and related ones, just waiting to be read, absorbed and applied.

So, if you haven’t read any software testing books, I suggest you do so – something like Lessons Learned in Software Testing: A Context-Driven Approach (Cem Kaner, James Bach, Brett Pettichord) or Perfect Software: And Other Illusions about Testing (Jerry Weinberg) would be a pretty good place to start. But also think about some broader reading, with Thinking, Fast and Slow (Daniel Kahneman) and Tacit and Explicit Knowledge (Harry Collins) being regularly cited as helpful texts for testers to broaden their horizons.

As a case in point, I recently stumbled across the book Simple Rules: How to Thrive in a Complex World (by Donald Sull and Kathleen M. Eisenhardt) after it came onto my wife’s radar in the trading world and she thought it might be interesting in relation to software testing too.

The book recognizes the complexity of modern life and how simple rules can help us to cut through some of that complexity, rather than coming up with increasingly complicated solutions.

For most of us, complexity is a problem we struggle to manage in our own lives every day… By limiting the number of guidelines, simple rules help maintain a strict focus on what matters most while remaining easy to remember and use. In a wide range of decisions, simple rules can guide choice while leaving ample room to exercise judgement and creativity.

I liked the idea that simplification leaves room for judgement and creativity, which resonates closely with the benefits I see in using exploratory testing (over detailed scripted tests).

Fighting complexity is an ongoing battle that can wear us down. Disheartened, people tolerate complicated solutions that don’t work, or cling to overly simplistic narratives… that deny the interdependencies characterizing modern life.

Again, I immediately thought of how the world of context-driven testing acknowledges this complexity and interdependency, but doesn’t try to come up with one size fits all hefty processes or standards to address these realities.

The authors talk about four key aspects of simple rules:

First off, simple rules consist of a handful of guidelines applied to a specific activity or decision… They’re intended to offer a limited amount of guidance, so there’s no need for a lot of them. Keeping the number of rules to a handful forces you to focus on what matters most. You might think that capping the number of rules would result in guidelines that are too simplistic to solve complex problems. Not so. In many situations, a handful of factors matter a great deal, while a long tail of peripheral variables can be safely ignored.

Second, simple rules are tailored to the situations of the particular people who will use them, versus one-size-fits-all rules that apply to everyone.

Third, simple rules are applied to a single well-defined activity or decision… Simple rules are most effective when they apply to critical activities or decisions that represent bottlenecks to accomplishing an important goal.

Finally, simple rules give concrete guidance without being overly prescriptive… Simple rules leave room to exercise creativity and pursue unanticipated opportunities.

The authors include some interesting examples of where such simple rules have been successful in battling complexity and have resulted in good decision-making, without being so overly prescriptive as to be useless or ignored.

Simple rules work best when flexibility matters more than consistency… Both flexibility and consistency have their advantages, but increasing one reduces the other. Then a large number of highly directive rules… are the best tools to use. Detailed rules are particularly useful for avoiding catastrophic errors, such as plane crashes, mishaps in nuclear power plants, and surgical deaths that result from known causes. Pilots are fond of saying that “checklists are written in blood”, a reference to how these lists are developed in the first place.

Checklists are a valuable tool in testing, just as they are for doctors and pilots. But flexibility is also very important in testing, acknowledging that there is much we don’t know about a system (despite whatever level of specification you might have) until we start to explore its behaviour.

Simple rules impose a threshold level of structure while avoiding the rigidity that results from too many restrictions. The resulting flexibility makes it easier to adapt to changing circumstances and seize fleeting opportunities. Simple rules can also produce better decisions than more complicated models can, particularly when time and information are limited.

Have you ever been in a situation during software development (including testing) where time and information were limited?! I like the idea of seizing “fleeting opportunities” as might be revealed during exploratory testing.

In the context-driven testing world, you will hear the term heuristics a lot and, for those of us who are practitioners, using heuristics turns out to be very powerful in our testing. The authors of this book also recognize the benefits of “rules of thumb”:

Simple rules enable people to make quick, reasonably accurate decisions that require less effort than more complicated approaches. When there is not much time or when information is at a minimum, these rules of thumb can save the day. Simple rules work because they focus on key aspects of a decision while ignoring peripheral considerations. By using simple rules, people can function without constantly stopping to rethink every aspect of a decision every time they make it.

Rules of thumb are often viewed as second-rate measures for use when people lack the time or information to come to a more considered judgement. Indeed, the term rule of thumb refers to a rough and practical approach that is not particularly accurate or reliable in every situation…

Counterintuitive as it may sound, simple rules can outperform more analytically complicated and information-intensive approaches even when there is ample time and information to make a decision. This is especially true in situations where links between cause and effect are poorly understood, when important variables are highly correlated, when a few factors matter most, and when a gap exists between knowing what to do and actually doing it. Simple rules do not trump complicated models every time, but they do so more often than you might think.

Heuristics are a powerful decision-making tool, often matching or even outperforming more sophisticated approaches. They are easy to remember and use, attributes that increase the odds that people will not only make the right choice, but translate their decision into action and stick with it over time.

The case for using simple rules seems pretty strong and I’ve certainly found heuristics to be a powerful means of generating test ideas during exploratory testing.

The authors ask the obvious question:

These rules also raise an important question: if simple rules are so effective in so many situations, why do complex solutions remain so prevalent? Regulators churn out ever more detailed rules, personnel departments promulgate thick policy manuals, and self-styled experts promote ever more arcane diet and exercise regimes. People crave simplicity and… simple rules often outperform more complicated approaches. Why aren’t simple rules even more common? What are the obstacles to simplifying our lives, corporations and societies? And, more importantly, how can we overcome these obstacles to achieve simplicity?

While most testers I’ve met are keen to simplify the way they work, they are often caught up in highly bureaucratic procedures and heavyweight documentation that result in less time spent actually testing than they would prefer.

The first obstacle is the effort required to develop simple rules. Like most worthwhile endeavours, it takes time and energy to get them right. The process of developing simple rules requires ruthless prioritization – honing in on the essential and decluttering the peripheral… The payoffs of simplification often dwarf the costs of getting there.

The people who benefit from complexity pose a second obstacle to simplicity. The costs of complex solutions are distributed across many people while the benefits of complexity tend to be concentrated in the hands of a few. These beneficiaries have, as a consequence, strong incentives to resist simplification. Much of the complexity of the US tax code, for example, exists because special interest groups secure tax breaks… that benefit a small number of individuals. These special interest groups obviously benefit from complexity, but so do the lobbyists who make their case to legislators, as well as the lawmakers themselves. After creating a labyrinth of rules, regulators and politicians often walk through the revolving door to join the companies they formerly supervised.

As in many other industries, the software testing industry has seen the rise of certifications and standards that appear to resist simplicity and generate lots of work for the bodies overseeing them.

The third obstacle to simplicity is what we call the “myth of requisite complexity,” the mistaken belief that complex problems demand complicated solutions. There are, naturally, situations when complicated solutions are appropriate (e.g. detailed checklists used by pilots and surgeons). But detailed rules and regulations aren’t the only possible way to deal with complexity… Complicated solutions should be a considered choice, not the result of regulatory autopilot.

Often, complex rules and regulations arise out of a distrust of human nature. If people cannot be trusted to do the right thing, detailed regulations are necessary to prevent malfeasance. Many corporations, for example, rely on thick policy manuals to control people who might abuse their discretion. But these bad apples represent a tiny fraction of all employees.

Taking Netflix as an example, the authors note that:

the company’s policy for expenses, travel, gifts, and conducting personal business at work… was reduced to four rules: (1) expense what you would not otherwise spend, (2) travel as if it were your own money, (3) disclose nontrivial gifts from vendors, and (4) do personal stuff at work when it is inefficient not to.

There are other good examples in the book of where simple solutions actually work, where complex problems don’t necessarily demand complicated solutions. Good software testing is a complex problem (as we all know!) but we can tackle this problem in simple ways that add genuine value, rather than always resorting to complicated approaches, processes or standards of how to go about our work.

As they close out the book, the authors say:

…Simple rules work because they provide a threshold level of structure while leaving ample scope to exercise discretion. Complex rules, in contrast, attempt to anticipate every contingency and dictate what to do in each scenario, thereby reducing people to automatons who do what they are told. But human discretion is not a defect to be eliminated, it is our greatest hope in the battle against complexity. Close to the facts on the ground, individuals can draw on their judgement and creativity to manage risks and seize unexpected opportunities. The latitude to exercise discretion not only makes simple rules effective, it makes them attractive. People thrive given the opportunity to apply their judgement and creativity to the situations they face day to day. And if they benefit from simple rules, they are more likely to use them and use them well.

These are all great arguments for more exploratory approaches to testing, where the tester is in control of their own destiny – after all, “human discretion is not a defect to be eliminated”!

I really enjoyed this book and (as I’ve summarized above) saw many parallels with the world of software testing. It’s great to see a book that has nothing to do with software testing providing such insight and applicable ideas. Reading it also gave me hope that simplicity can be seen as a positive thing and that pragmatic approaches to software testing (for which I am a strong advocate) have the ability to add huge value to software development projects.