My ER of attending and presenting at STARWest 2016

I recently had the pleasure of heading to Southern California to attend and present at the long-running STARWest conference. Although the event is always held at the Disneyland Resort, it’s a serious conference and attracted a record delegation of over 1200 participants. For a testing conference, this is just about as big as it gets and was probably on a par with some recent EuroSTARs that I’ve attended.

My conference experience consisted of attending two full days of tutorials then two conference days, plus presenting one track session and doing an interview for the Virtual Conference event. It was an exhausting few days but also a very engaging & enjoyable time.

Rather than going through every presentation, I’ll talk through a few highlights:

  • Michael Bolton tutorial “Critical Thinking for Software Testers”
    The prospect of again spending a full day with Michael was an exciting one – and he didn’t disappoint. His tutorial drew heavily from the content of Rapid Software Testing (as expected), but this wasn’t a problem for his audience (of about 50) as hardly anyone was familiar with RST, his work with James Bach, Jerry Weinberg, etc. Michael defined “critical thinking” as “thinking about thinking with the aim of not getting fooled” and illustrated this many times with interesting examples. The “checking vs. testing” distinction, critical distance, models of testing, system 1 vs. system 2 thinking, and the “Huh? Really? And? So?” heuristic (all familiar to those of us who follow RST and Bolton/Bach’s work) were covered, and it seemed that Michael converted a few early skeptics during this class. An enjoyable and stimulating day.
  • Rob Sabourin tutorial “Test Estimation in the Face of Uncertainty”
    I was equally excited to be spending half a day in the company of someone who has given me great support and encouragement – and without whom I probably wouldn’t have made the leap into presenting at conferences. Whenever Rob Sabourin presents or teaches, you’re guaranteed passion and engagement, and he did a fine job of covering what can be a pretty dry subject. His audience of about 40 was split roughly 50/50 between those on agile and waterfall projects; some of the estimation techniques he outlined suited one SDLC model better than the other, while others were generic. He covered most of the common estimation techniques and often expressed his opinion on their usefulness! For example, using “% of project effort/spend” as a way of estimating the testing required was seen as ignoring many factors that influence how much testing we need to do, as well as the fact that small development efforts can result in big testing efforts. Rob also said this technique “belittles the cognitive aspects of testing”, a sentiment with which I heartily agreed! He cited Steve McConnell’s work on developer:tester ratios, which found wide variability depending on the organization and environment (e.g. NASA has 10 testers to each developer for flight control software systems, while in business systems McConnell found ratios of between 3:1 and 20:1), making talk of an “industry standard” for this measurement seem futile. More agile-friendly techniques such as Wisdom of the Crowd, planning poker and T-shirt sizing were also covered. Rob finished off with his favourite technique, Hadden’s Size/Complexity Technique (from Rita Hadden), which seemed like a simple way to arrive at decent estimates and iterate on them over time (see the sketch after this list).
  • Mary Thorn keynote “Optimize Your Test Automation to Deliver More Value”
    The second conference day kicked off with a keynote from Mary Thorn (of Ipreo). She based her talk around various experiences of implementing automation during her consulting work, so it was full of good practical content. I wasn’t familiar with Mary before this keynote but I enjoyed her presentation style and pragmatic approach.
  • Jared Richardson keynote “Take Charge of Your Testing Career: Bring Your Skills to the Next Level”
    The conference was closed out by another keynote, from Jared Richardson (of Agile Artisans). Jared is best known as one of the authors of the GROWS methodology and he had some good ideas around skills development in line with that methodology. He argued that experiments lead to experience, and that we gain experience both by accident and intentionally. He also mentioned the Dreyfus model of skills acquisition. He questioned why we so often compare ourselves to other “building” industries when, with their hundreds or thousands of years of experience, they are far more mature than our very young industry. He implored us to adopt a learner mentality (rather than an expert mentality) and to become “habitual experimenters”. This was an engaging keynote, delivered very well by Jared and packed full of great ideas.
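
To give a flavour of the iterative idea behind a size/complexity approach, here is a minimal sketch in Python. The buckets, effort figures and function names are my own invention for illustration – they are not Rita Hadden’s published technique – but the shape is the same: classify each work item by size and complexity, sum a baseline effort per bucket, then feed actual effort back in to refine the table over time.

```python
# Illustrative sketch only: a size/complexity lookup table for test estimation.
# All categories and figures are invented for this example.

# Baseline effort (in tester-days) for each (size, complexity) bucket.
effort_table = {
    ("small", "simple"): 0.5,
    ("small", "complex"): 2,
    ("medium", "simple"): 2,
    ("medium", "complex"): 5,
    ("large", "simple"): 5,
    ("large", "complex"): 10,
}

def estimate(items):
    """Sum the baseline effort for a list of (size, complexity) work items."""
    return sum(effort_table[item] for item in items)

def refine(bucket, actual_efforts):
    """After a release, replace a bucket's baseline with the observed average."""
    effort_table[bucket] = sum(actual_efforts) / len(actual_efforts)

# Example: estimate a small backlog, then refine one bucket with actuals.
backlog = [("small", "simple"), ("medium", "complex"), ("medium", "complex")]
print(estimate(backlog))              # 10.5 tester-days with the initial table
refine(("medium", "complex"), [4, 6, 7])
print(estimate(backlog))              # the estimate shifts as the table is refined
```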

Moving onto my track session presentation, my topic was “A Day in the Life of a Test Architect” and I was up immediately after lunch on the second day of the conference (and pitted directly against the legendary – and incredibly entertaining – Isabel Evans):

Room signage for Lee's talk at STARWest

I was very pleased to get essentially a full house for my talk, and my initial worries about it being a little short for the one-hour slot were unfounded as I ended up speaking for a good 45 minutes.


There was a good Q&A session after my talk too, which I had to cut short to make way for the next speaker to set up in the same room. It was good to meet some other people in my audience with the title of “Test Architect” and compare notes.

Shortly after my talk, I had the pleasure of giving a short speaker interview as part of the event’s “Virtual Conference” (a free way to remotely see the keynotes and some other talks from the event), with Jennifer Bonine:

Lee being interviewed by Jennifer Bonine for the STARWest Virtual Conference

Looking at some of the good and not so good aspects of the event overall:

The good

  • The whole show was very well organized; everything worked seamlessly, reflecting years of experience of running this and similar conferences.
  • There was a broad range of talks to choose from and they were generally of a good standard.
  • The keynotes were all excellent.

Not so good

  • The sheer size of the event was quite overwhelming, with so much going on all the time that it was hard for me to choose what to see and when (with the resulting FOMO).
  • As a speaker, I was surprised not to have a dedicated facilitator for my room, to introduce me, facilitate Q&A, etc. (I had made the assumption that track talks – at such a large and mature event – would be facilitated, but there was nothing in the conference speaker pack to indicate that this would be the case.)
  • I’ve never received so much sponsor email spam after registering for a conference.
  • I generally stuck to my conference attendance heuristic of “don’t attend talks given by anyone who works for a conference sponsor”, which immediately restricted my programme quite considerably. There were just too many sponsor talks for my liking.

In terms of takeaways:


  • Continuous Delivery and DevOps were hot topics, with their own theme of track sessions. There was a common undercurrent of fear about testers losing their jobs within such environments, but also some good talks about how testing changes – rather than disappears – in these environments.
  • Agile is mainstream (informal polls in some talks indicated 50-70% of the audience were on agile projects), yet many testers are still not embracing it. There is some leading-edge work from (some of) the true CD companies and some very traditional work in enterprise environments, with a big middle ground of agile/hybrid adoption rife with poor process, confusion and learning challenges.
  • The topic of “schools of testing” again came up, perhaps due to the recent James Bach “Slide Gate” incident. STARWest is a broad church and the idea of a “school of schools” (proposed by Julie Gardiner during her lightning keynote talk) seemed to be well received.
  • There is plenty of life left in big commercial testing conferences with the big vendors as sponsors – this was the biggest STARWest yet and the Expo was huge and full of the big names in testing tools, all getting plenty of interest. The size of the task in challenging these big players shouldn’t be underestimated by anyone trying to move towards more pragmatic and people-oriented approaches to testing.

Thanks again to Lee Copeland and all at TechWell for this amazing opportunity, I really appreciated it and had a great time attending & presenting at this event.

Making the most of conference attendance

I attend a lot of testing conferences (and present at a few too), most recently the massive STARWest held at Disneyland in Anaheim, California. I’ve been regularly attending such conferences for about ten years now and have noticed some big changes in the behaviour of people during these events.

Back in the day, most conferences dished out printed copies of the presentation slides and audience members generally seemed to follow along in the hard copy, making notes as the presentation unfolded. It was rare to see anyone checking emails on a laptop or phone during talks. The level of engagement with the speaker generally seemed quite high.

Fast forward ten years and it’s a very different story. Thankfully, most conferences no longer feel the need to demolish a forest to print out the slides for everyone in attendance. However, I have noticed a dramatic decrease in note taking during talks (whether that be on paper or virtually) and a dramatic increase in electronic distractions (such as checking email, internet surfing, and tweeting). The level of engagement with the presentation content seems much lower (to me) than it used to be.

I’m probably old school in that I like to take notes – on paper – during every presentation I attend, not only to give me a reference for what was of interest to me during the talk, but also to practice the key testing skill of note taking. Taking good notes is an under-rated element of the testing toolbox and so important for those practicing session-based exploratory testing.

Given that conference speakers put huge effort into preparing & giving their talks and employers spend large amounts of money for their employees to attend a conference, I’d encourage conference attendees to make every effort to be “in the moment” for each talk, take some notes, and then catch up on those important emails in the many breaks on offer. (Employers, please give your conference attendees the opportunity to engage more by letting them know that those “urgent” emails can probably wait till the end of each talk before getting a response.)

Conferences are a great opportunity to learn, network and share experiences. Remember how fortunate you are to be able to attend them and engage deeply while you have the chance.

(And, yes, I will blog about my experiences of attending and presenting at STARWest separately.)

Testers and Twitter

I was lucky enough to attend and present at the massive STARWest conference, held at Disneyland in Anaheim, last week. I’ll blog separately about the experience but I wanted to answer a question I got after my presentation right here on my blog.

Part of my presentation was discussing my decision to join Twitter and how it has become my “go to” place for keeping up-to-date with the various goings on in the world of testing. (If you’re interested, I was persuaded to join Twitter when I attended the Kiwi Workshop on Software Testing in Wellington in 2013 – and very glad I made the leap!)

I think I made a good case for joining Twitter as a tester and hence the question after my talk, “Who should I follow then?” Looking through my list, I think the following relatively small set would give a Twitter newbie a good flavour of what’s going on in testing (feel free to comment with your ideas too).

Ilari Henrik Aegerter: @ilarihenrik

James Marcus Bach: @jamesmarcusbach

Jon Bach: @jbtestpilot

Michael Bolton: @michaelbolton

Richard Bradshaw: @FriendlyTester

Alexandra Casapu: @coveredincloth

Fiona Charles: @FionaCCharles

Anne-Marie Charrett: @charrett

James Christie: @james_christie

Katrina Clokie: @katrina_tester

David Greenlees: @DMGreenlees

Aaron Hodder: @AWGHodder

Martin Hynie: @vds4

Stephen Janaway: @stephenjanaway

Helena Jeret-Mäe: @HelenaJ_M

Keith Klain: @KeithKlain

Nick Pass: @SlatS

Erik Petersen: @erik_petersen

Richard Robinson: @richrichnz

Rich Rogers: @richrtesting

Robert Sabourin: @RobertASabourin

Paul Seaman: @beaglesays

Testing Trapeze: @TestingTrapeze

Santhosh Tuppad: @santhoshst & @TestInsane

Attending a non-testing conference

I have recently found myself enjoying the latter stages of a fine British Summer (really, no sarcasm intended) and headed down to Cornwall to attend the Agile On The Beach conference. This was the first non-testing conference I’ve attended in a very long time, so it was certainly an interesting experience and a chance to compare and contrast what I see on the testing “circuit”.

This was the sixth running of this not-for-profit agile conference and it was sold out with 350 participants. It was traditional in its structure, with an opening keynote each day followed by five tracks of 45-minute sessions punctuated by morning tea, lunch and afternoon tea. One idea I’d never seen before was inviting all the speakers to give elevator pitches for their talks immediately following each morning’s keynote. This gave the speakers a good chance to promote their slots and gave the audience the chance to hear an up-to-date description of the content.

The highlight for me was the opening keynote, “Better Decision Making: Science or Stories?”, from well-known agilist Linda Rising (perhaps best known for her book Fearless Change). Her talk discussed whether the adoption of agile is being based on stories rather than scientific evidence. Do we even need evidence if agile practices are seen as “common sense”? Linda argued that we’re reluctant to believe science/data and find stories (experience reports) much more compelling. Science validates but doesn’t always convince people (and scientists suffer from confirmation bias). Could Agile be a placebo? Does it work because we believe in it? Linda noted that organizations don’t really encourage the scientific method, as decision makers want action rather than investigation. Does any of this sound familiar from the testing world, particularly the context-driven part of it? I’m a big fan of experience reports as evidence; they make for compelling presentations and provide me with stories that I can relate to the challenges I may be facing in my testing. Interestingly, Linda also mentioned Thinking, Fast and Slow (Daniel Kahneman), a much-cited reference in the testing community these days.

Maybe I shouldn’t have been so surprised, but there was very little talk about the role of human testing in software delivery in agile teams. Only one presentation I attended mentioned exploratory testing, with all the others talking only about automation: automating all the tests, automating away manual tests. Some of the continuous delivery talks made claims along the lines of “you can only do CD by automating everything”, so I’m sure we’ll be seeing more frequently delivered bad software if this mindset prevails.


In terms of takeaways, I noted:

  • The agile movement is mainstream, but still with little consensus about many aspects of it.
  • Testing as a specialization is not widely discussed, with most experience reports talking up automation but failing to recognize the requirement for human testers to help teams build quality in. “Code quality” (as defined by various static code analysis techniques) was also commonly mentioned.
  • There was a lot of mention of metrics in various talks – avoiding vanity metrics and identifying useful metrics that drive your desired behaviours (rather than choosing “industry standard” metrics that are often prone to encouraging bad behaviours).
  • Continuous delivery is a hot topic, moving agility up from the CI level to deployment and release. CD is again being used as a reason for automating all testing when the focus should be the opposite: building better quality in, with the help of people who specialize in helping the team do that, is an obvious way to reduce the risk of simply deploying bad product more frequently.
  • “Business agility” is also a hot topic; moving other parts of the business – not just software development/IT – to a more agile way of working is a big challenge, especially in larger organizations.
  • Speaker elevator pitches are a great conference idea (as is a conference party on the beach; take note, Aussies!)

(The conference organizers are kindly collating photos, blog posts, presentations, etc. from the event if you’re looking for more detail.)

Next stop, STARWest in Anaheim where I’m presenting A Day In The Life Of A Test Architect – say g’day if you’ll be there too!

ER: guest editor for Testing Trapeze magazine

I’ve recently had the pleasure of acting as guest editor for Testing Trapeze magazine and thought it would be worth briefly writing about this experience.

If you’re not familiar with this magazine, it started in February 2014 and is published online bi-monthly. It is normally edited by Katrina Clokie, a well-known and respected member of the context-driven testing community from New Zealand. From the “About” page of the magazine’s website:

We want to see a small, simple, quality magazine that amplifies the voices of testers from Australia and New Zealand, presenting a clear, consistent message of good testing practice from existing and emerging leaders. We want to demonstrate the caliber of our community and encourage new testers to join us by engaging in their work at a different level. We want to create a publication that we feel proud of, that truly represents Australia and New Zealand on the international stage; a magazine that you want to read, share and contribute to.

Over the last two and a half years, the magazine has consistently provided a high-quality experience for its readers by focusing on a relatively small number of articles per issue and wrapping them up in a beautifully presented publication. Each edition typically comprises four articles from local (i.e. Australian and New Zealand) authors plus another from an international author. There is no set theme per edition and new writers are actively encouraged, so the sixteen editions to date have given plenty of opportunity to new voices from the testing community, particularly across Australia and New Zealand.

The main tasks for the editor are logistical and organizational in nature – communicating with authors to get their articles in for review, organizing reviewers for each article, finalizing the content, ordering the articles in the magazine, and writing the editorial. The ease or difficulty of the job is largely dictated by the other people involved and, in my brief experience, everyone was on the same page (no pun intended) in terms of getting good content ready in time to publish to our deadline. Luckily for me, Katrina wrote a blog post Behind the Scenes: Editor of Testing Trapeze in 2015 which helped me work out the various tasks I needed to check off along the way.

It was interesting to see the draft articles coming in from the various authors and the different amounts of review feedback that needed to be incorporated to get to “final” versions for the magazine. Thanks to the reviewers for doing such timely and diligent jobs in providing constructive feedback which was taken on board by the authors.

The magazine is free to download (as a PDF) from the Testing Trapeze website; I strongly encourage you to become a regular reader and also to consider expressing an interest in writing an article – you will be warmly welcomed and provided with practical and helpful feedback from the reviewers; there’s nothing to be afraid of!

Thanks again to Katrina for the opportunity to briefly edit the magazine, to Adam for the amazing work with layout and the website, and to all of the authors and reviewers without whom we’d have no content to share. I hope you enjoy the edition I was lucky enough to have the opportunity to bring together.

Being part of a community

There has been a lot of Twitter activity about the CDT community in the last couple of weeks. Katrina Clokie also penned an excellent blog post, A community discussion, and there seem to be a lot of unresolved disputes between different folks representing different parts of the testing community. Some of this just feels like the normal level of background noise spiking for a short time. It’s not the first time a storm of this type has blown up around the CDT community and it won’t be the last.

I particularly liked Katrina’s statement that “I strive to be approachable, humble and open to questions”, as this is also my own approach to both being a member of a testing community and also helping to bring others into it.

I have been heavily involved in the TEAM meetup to build a new testing community in Melbourne and in helping to make the Australian Testing Days conference happen (though I will not be involved in the future of the event). I write this blog in the hope of sharing my ideas and opinions and maybe bringing readers into my community as a result.

I’ve chosen not to add to the noise by responding to the Twitter commentary around the CDT community right now, but my lack of contribution to the discussion reflects neither approval nor disapproval of the behaviour of any member of any of the communities that consider themselves CDT.

As I’ve blogged before, our Values and principles define us.


“Testing is dead” (again) thanks to DevOps

I’ve been coming across talk of DevOps a lot recently, both in my work and also in the broader software development community. The continuous delivery & deployment folks all seem very excited about the manifold releases that can now be achieved, thanks to great tooling and “all of that automated testing” before deployment. From what I’ve been hearing, this has the potential to be the perfect storm of poor agile practices meeting a lack of understanding of what can and cannot be automated when it comes to “testing”.

The following quote comes from John Ferguson Smart in his recent article, The Role of QA in a DevOps World:

there is a lot more to DevOps than simply automating a build pipeline. DevOps is not an activity, it is a culture, and a culture that goes much deeper than what appears to the naked eye. You could say that DevOps involves creating a culture where software development teams works seamlessly with IT operations so that they can work together to build, test, release and update applications more frequently and efficiently.

I think this quote says a lot about DevOps and makes it clear that it’s not just about automating the stuff around building and deploying the software. With the big focus on automating, it is somewhat inevitable that the same misunderstandings are being made about the role of human testers in all this as were common during the early stages of agile adoption:

Some organisations also seem to believe that they can get away with fewer testers when they adopt a DevOps. After all, if all the tests are automated, who needs manual testers?

In reading more about the culture of DevOps, there are two obvious limitations we need to be talking about and making people aware of, viz. “acceptance criteria” and “automated testing”.

Acceptance criteria

Let’s be clear from the start that meeting all of the acceptance criteria does not mean the story/product is acceptable to your customer. There is a real danger of acceptance criteria being used in the same fallacious way that we previously used “requirements”, as though we can completely specify all the dimensions of acceptability by producing a set of criteria. Meeting all of the specified acceptance criteria might mean the story/product is acceptable; when the specified acceptance criteria are not met, the story/product is definitely not acceptable. So we’d be better off thinking of them as “rejection criteria”. Michael Bolton wrote an excellent blog post on this, Acceptance Tests: Let’s Change the Title, Too, in which he says (the bold emphasis is mine):

The idea that we’re done when the acceptance tests pass is a myth. As a tester, I can assure you that a suite of passing acceptance tests doesn’t mean that the product is acceptable to the customer, nor does it mean that the customer should accept it. It means that the product is ready for serious exploration, discovery, investigation, and learning—that is, for testing—so that we can find problems that we didn’t anticipate with those tests but that would nonetheless destroy value in the product.

Have a think about what that means for automated acceptance tests…
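
As a toy illustration of why a green set of acceptance checks can’t tell us the product is acceptable, here is a minimal sketch in Python; the feature, checks and data are entirely invented (they are not from Michael’s post).

```python
# Hypothetical example (invented for illustration): programmed
# "acceptance checks" for a simple discount feature.

def apply_discount(price, code):
    """Apply a 10% discount when a valid code is supplied."""
    return round(price * 0.9, 2) if code == "SAVE10" else price

def test_discount_is_applied():
    # Agreed acceptance criterion: a valid code gives 10% off.
    assert apply_discount(100.00, "SAVE10") == 90.00

def test_invalid_code_is_ignored():
    # Agreed acceptance criterion: an unknown code changes nothing.
    assert apply_discount(100.00, "BOGUS") == 100.00
```

Both checks pass, yet they say nothing about whether the discount should stack with other offers, how the code field handles lower-case input, or whether the confirmation email shows the right total; these are exactly the kinds of unanticipated problems that still need exploring before anyone should call the story “done”.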

“Automated testing”

Although I prefer the term “automated checks” over “automated tests” (to highlight the fact that “testing” requires human thinking skills), I’ll indulge the common parlance for the purposes of this topic. It feels like ever greater reliance is being placed on automated tests to signify that all is OK with the products we build, especially in the world of DevOps, where deployments of code changes without any further human interaction are seen as normal as long as all the automated tests are “green”.

Let’s reflect for a moment on why we write automated tests. In another excellent blog post, s/automation/programming/, Michael Bolton says:

people do programming, and that good programming can be hard, and … good programming requires skill.  And even good programming is vulnerable to errors and other problems.

We acknowledge that writing programs is difficult and prone to human error. Now, suppose that instead of calling our critical checks “automated tests” we referred to them as “programmed tests” – this would make it very clear that we’re:

writing more programs when we’re already rationally uncertain about the programs we’ve got.

Michael suggests similar substitutions:

Let’s automate to do all the testing? “Let’s write programs to do all the testing.”

Testing will be faster and cheaper if we automate. “Testing will be faster and cheaper if we write programs.”

Automation will replace human testers. “Writing programs will replace human testers.”

I think this makes it very clear that we cannot automate all of our testing on our way to quality products, be it in a DevOps environment or otherwise.
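
And, since programmed tests are themselves programs, they can share the blind spots of the code they check. Here is a tiny invented illustration (again, not from Michael’s post):

```python
# Hypothetical illustration: the check's author repeats the product
# programmer's off-by-one thinking, so a "green" run hides the problem.

def items_under_limit(items, limit=10):
    """Product code: should allow up to and including `limit` items."""
    return len(items) < limit      # off-by-one bug: should be <=

def check_limit():
    assert items_under_limit(list(range(9))) is True     # well under the limit
    assert items_under_limit(list(range(11))) is False    # well over the limit
    print("All checks passed")

check_limit()   # prints "All checks passed"; the boundary case of exactly
                # 10 items, where the bug lives, is never exercised
```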

(For a great reference piece on the use of automation in testing, I recommend Michael Bolton & James Bach’s A Context-Driven Approach to Automation in Testing.)

What, then, of the role for “manual” testers? John Ferguson Smart notes the changing role of testing in such environments, and these changes again mirror the kind of role I’ve been advocating for testers within agile teams:

It is true that you don’t have time for a separate, dedicated testing phase in a DevOps environment. But that doesn’t mean there is no manual testing going on. On the contrary, manual testing, in the form of exploratory testing, performance testing, usability testing and so on, is done continuously throughout the project, not just at the end… The role of the tester in a DevOps project involves exploring, discovering, and providing feedback about the product quality and design, as early as it is feasible to do so, and not just at the end of the process.

I’ll quote John Ferguson Smart again in conclusion – and hopefully I’ve made it clear why I agree with his opinion (the emphasis is again mine):

So testers are not made redundant because you have the technical capability to deploy ten times a day. On the contrary, testers play a vital role in ensuring that the application that gets deployed ten times a day is worth deploying.