ER: Attending the Cambridge Exploratory Workshop on Testing (CEWT)

One of the great things about being out of Australia for a while is the ability to experience testing community events in other parts of the world.

I recently attended a Belfast Testers meetup and, shortly afterwards, received an invitation from James Thomas to take part in the third Cambridge Exploratory Workshop on Testing – an invitation I readily accepted!

This peer workshop took place on Sunday 6th November and was held in the offices of games developer Jagex on the (enormous) Cambridge Science Park, with a total of 12 participants (the perfect size for such an event), as follows:

  • Michael Ambrose (Jagex)
  • James Thomas, Karo Stoltzenburg, Sneha Bhat, Aleksandar Simic (all from Linguamatics)
  • Alan Wallace (GMSL)
  • James Coombes (Nokia)
  • Neil Younger (DisplayLink)
  • Chris Kelly (Redgate)
  • Iuliana Silvasan
  • Chris George (Cambridge Consultants)
  • Lee Hawkins (Quest)

The workshop theme was “Why do we Test, and What is Testing (Anyway)?” and, after some introductions and housekeeping about how the workshop would be run, it was time for the first ten-minute talk, from Michael Ambrose with “Teach Them to Fish”. He talked about teaching developers to test at Jagex, as well as upskilling testers to be pseudo-developers. He said there was a technical need to cover more and more as well as a desire to get testers learning more (as a different approach to, say, pushing developers to do automation). Michael noted that there were a number of implications of these changes, including the perception of testers, working out what’s unique about what testers do, and knowing how far to go (getting testers to the level of junior developers might be enough). This was an interesting take on the current “testers need to be more technical” commentary in the industry and the twenty-minute discussion period was easily filled up.

Next up was James Coombes with “Who should do testing and what can they test?” He talked about the “I own quality” culture within Nokia and how he sees different roles being responsible for different aspects of quality. James suggested that developers should find most of the bugs and fix them, while QA then find the next highest number of bugs. Security testers act as specialists with generally few (but important) bugs being found. Documenters/trainers are well placed to find usability bugs, while customer support staff have good knowledge of how customers actually use their products and so can provide good testing tours. Alpha test engineers are responsible for integration/end-to-end testing and catch the low frequency bugs. Finally, customers are hopefully finding the very low frequency bugs. This was an interesting talk about getting everyone involved in the testing activity (and highlighted the “testing is an activity, not a role” idea). I particularly liked what James said about unit testing – “if someone changes your code and they don’t know they’ve broken it, it’s not their problem, it’s yours for not writing a good enough unit test”.

After a short break, I was up next with my talk “What is Testing? It depends …” I decided to tackle the latter half of the theme (i.e. the “what” rather than the “why”) and my idea was to discuss what testing means depending on the perspective of the stakeholder. We spend a lot of time and effort in the community refining a definition of testing (and I favour the James Bach & Michael Bolton definition given towards the end of the Exploratory Testing 3.0 blog post) but this (or any other) definition is probably not very helpful to some stakeholders. I covered a number of perspectives, such as “Testing is a way to make money” (if you’re a testing tools vendor or a testing outsourcing services provider), “Testing is a cost centre” (if you’re a CFO) and “Testing is dead” (if you’re a CxO type reading some of the headline IT magazines & websites). There was a good discussion after my talk, mainly focused on the cost centre perspective and how this has impacted people in their day-to-day work. I was pleased with how my talk went (especially given the short time I had to prepare) and received some good feedback, particularly on the concise nature of the slides and the confidence with which it was presented. My slide deck can be seen at What is Testing? It depends…


The last session before lunch saw Aleksandar Simic with “A Two-day Testing Story”. He did a fine job of breaking down a two-day period in his work into various different activities, some testing-related (e.g. pairing on test design) and some not (e.g. working with IT support on a networking issue). Aleksandar’s level of introspection was impressive, as was his naming of the various activities, learning opportunities and challenges along the way. His “testing diary” seems to be working well for him in identifying and naming his testing activities and this would make an interesting conference talk with some further development.

Lunch provided a good chance for us all to chat and unwind a little after the intensive morning spent talking testing.

First up after the lunch break was Karo Stoltzenburg with “I test, therefore I am”. She had adopted the idea of substitution in preparing her talk so looked to answer the question “Why do I test?” and see where that took her. Karo’s answer was “Because I like it” and then she explored why she liked it, identifying investigation, learning, exploring, use of the scientific method, collaborating, thinking in different contexts and diversity as aspects of testing that appealed to her. I liked Karo’s closing remarks in which she said “I test because it makes me happy, because it’s interesting, challenging and varied work”. We really need more positive messages like Karo’s being expressed in the testing community (and wider still), so I’d love to see this become a full conference talk one day. She did a good job of communicating her passion for testing and there were some interesting discussions in the group following her talk, with a degree of agreement about why testing is so engaging for some of us.

The sixth and final talk of the day came from James Thomas with “Testing All the Way Down, and Other Directions”. He walked through an in-depth analysis of Elisabeth Hendrickson’s “Tested = Checked + Explored” from her book, Explore It! James decided to explore this definition of testing using testing itself, looking for actions of his that the definition wouldn’t classify as testing. He described how he’d interacted with Elisabeth on some of his questions after exploring the idea in this way and finally presented his proposed alternative definition of testing as “the pursuit of actual or potential incongruity”. (Note that James describes this talk more fully in his blog post, Testing All the Way Down, and Other Directions.) The main focus of discussion after James’s talk was his proposed definition of testing and I’ll be following the broader community’s response with interest.

A few discussion points arose during the day for which we didn’t have time to go deep between talks, so we dedicated ten minutes to each of the following topics to close out the workshop content:

  • Quality – what does it mean? (Weinberg definition, but are others more helpful?)
  • Domain knowledge (can bias you, can empathy with the end user be a disadvantage? How do we best adjust strategy to mitigate for any lack of domain knowledge?)
  • Evaluating success (how do we measure the success of spreading testing into development and other disciplines?)
  • Is testing just “the stuff that testers do”? (probably not!)
  • How do we make a difference? (blogging, workshops in our own workplaces, brown bag sessions, broader invitation list to peer conferences)

To wrap up, a short retrospective was held where we were all encouraged to note good things to continue, anything we’d like to stop doing, and suggestions for what we should start to do. There were some good ideas, briefly discussed by the group, and I’d expect to see some of them taken up by the CEWT organizers as part of their fourth incarnation.

The CEWT group standing outside number 10 Downing Street (or inside the Jagex office, maybe):


This was a really good day of deep-diving with a passionate group of testers, exactly what a peer conference should be all about. Thanks again to James for the invitation and thanks to all the participants for making me so welcome.

For reflections on the event from others, keep an eye on the CEWT blog at

Living up to my handle: rocking and testing in Belfast

I was lucky enough to find myself in Northern Ireland recently, with a trip based around attending one of Status Quo‘s final ‘electric’ shows in Belfast. This was my first trip to Northern Ireland and it was a very enjoyable few days. Visiting the stunning North coast around the Giant’s Causeway was a highlight, as was exploring the varied districts of the city of Belfast itself, from the colourful (both literally and figuratively) walls of the Falls/Shankill area to the grand City Hall.

So, firstly, the “rocker” part of the trip. The big SSE Arena would be home to Status Quo for one night on Friday 30th October. The Emerald Isle has always been good territory for the band and this gig would be no exception. Luckily, our hotel – the Premier Inn Titanic Quarter – was literally next door to the venue and our room had a view over the loading area at the back, so a perfect spot to keep an eye on proceedings during the day in the lead up to the gig. An early queue formed (as usual) and it was good to catch up with friends there and also make some new ones (one very generous bloke in the queue kindly gave my wife a free ticket so she could join me in attending the gig!). With doors opening a little later than usual (due to the Quo crew arriving late because of travel problems from the mainland to Ireland), the queue was very long by the time we were allowed in, and it was a sprint down to the barrier to secure a spot centre stage.

Support came from Uriah Heep and they got a great reception, having not played live in this part of the world for over thirty years. While their brand of heavy rock (somewhat like Deep Purple to my ears) is not really to my taste, they did a good job of getting the 7,000-8,000-strong audience involved and provided a good warm-up during their hour-long set.

At just after 9pm, it was finally time for Quo to take the stage and it was another top performance, with Rick Parfitt’s replacement, Richie Malone, getting a particularly warm welcome in his home territory. There were no setlist surprises (it is Status Quo after all!) but, as always with a live gig, there are subtle differences from night to night and the enthusiastic crowd made this a very enjoyable show.


Now, where does the “tester” fit into the trip? Well, luckily for me, the fairly new Belfast Testers Meetup group had announced a meetup during one of the nights I was in Belfast (and thankfully not the same night as the Status Quo gig), so I decided to go along and see what this new testing community looked like.

It was the fourth meetup of the Belfast Testers Meetup group and it took place on the evening of Thursday 29th October, in the boardroom of Shopkeep. I was warmly welcomed as a “one off” attendee of the meetup, which drew a crowd of about 25 (many of whom were first-timers as you’d expect at such a new meetup). It’s always good to see a new community of testers being built and this one should do well, with strong leaders and a burgeoning IT industry in the city.

Meetup co-organizer Neill Boyd kicked off the meetup, after gathering up the crowd enjoying the hospitality and impressive surroundings of the Shopkeep office (all very “startuppy”) and funneling them into the boardroom. It was cosy, with 20+ of us in the room but it made for a good space for easy conversation and Q&A with the presenters.

His introduction was handy for newcomers; he announced that the TestBash conference is adding Belfast to its growing list of city destinations in 2017 and outlined the agenda of two talks to fill the evening.

First up was Allan Hunter (Senior Tester in PwC’s Emerging Technologies team) with “User Focused Testing”. This short talk discussed how user research is a key ingredient in powering insightful testing. He talked about his experiences of user research and how it can be applied to more traditional testing activities. His basic message was that we as testers are well placed to test the problem and not just the solution. He packed a lot of cool content into ten minutes and then held his own through a lengthy period of questioning.

Secondly, we had Ursula Wlodarczyk (a tester working at the startup SaltDNA) with “Tester vs Designer: Why tester can make a great (UX) designer”. Ursula’s talk was much longer and tried to cover a huge amount of ground, enough for at least a couple of good talks I’d say. She explored a tester’s skillset in the context of the User Experience field and, similarly to Allan, argued that testers can provide great insights into design (not just at the user interface either) and that organizations should give testers the opportunity to engage in design meetings and provide their valuable insights and suggestions before the software is built. A very long talk for a meetup (it would actually make a decent conference talk), but some nice ideas and good resources.

The meetup kicks off
Allan Hunter presenting
Ursula Wlodarczyk presenting

So, thanks to Belfast and thanks to the two very different communities – Status Quo and testing – that continue to give me the chance to rock and test all over the world.

My ER of attending and presenting at STARWest 2016

I recently had the pleasure of heading to Southern California to attend and present at the long-running STARWest conference. Although the event is always held at the Disneyland Resort, it’s a serious conference and attracted a record delegation of over 1200 participants. For a testing conference, this is just about as big as it gets and was probably on a par with some recent EuroSTARs that I’ve attended.

My conference experience consisted of attending two full days of tutorials then two conference days, plus presenting one track session and doing an interview for the Virtual Conference event. It was an exhausting few days but also a very engaging & enjoyable time.

Rather than going through every presentation, I’ll talk to a few highlights:

  • Michael Bolton tutorial “Critical Thinking for Software Testers”
    The prospect of again spending a full day with Michael was an exciting one – and he didn’t disappoint. His tutorial drew heavily from the content of Rapid Software Testing (as expected), but this was not a big issue for his audience (of about 50) here as hardly anyone was familiar with RST, his work with James Bach, Jerry Weinberg, etc. Michael defined “critical thinking” to be “thinking about thinking with the aim of not getting fooled” and he illustrated this many times with interesting examples. The usual “checking vs. testing”, critical distance, models of testing, system 1 vs. system 2 thinking, and “Huh? Really? And? So?” heuristic familiar to those of us who follow RST and Bolton/Bach’s work were all covered and it seemed that Michael converted a few early skeptics during this class. An enjoyable and stimulating day’s class.
  • Rob Sabourin tutorial “Test Estimation in the Face of Uncertainty”
    I was equally excited to be spending half a day in the company of someone who has given me great support and encouragement – and without whose support I probably wouldn’t have made the leap into presenting at conferences. Whenever Rob Sabourin presents or teaches, you’re guaranteed passion and engagement and he did a fine job of covering what can be a pretty dry subject. His audience of about 40 was split 50/50 between those on agile and waterfall projects, and some of the estimation techniques he outlined suited one or other SDLC model better, while others were generic. He covered most of the common estimation techniques and often expressed his opinion on their usefulness! For example, using “% of project effort/spend” as a way of estimating the testing required was seen as ignoring many factors that influence how much testing we need to do, as well as the fact that small development efforts can result in big testing efforts. Rob also said this technique “belittles the cognitive aspects of testing” – I heartily agreed! Rob also cited the work of Steve McConnell on developer:tester ratios, in which he found wide variability depending on the organization and environment (e.g. NASA has 10 testers to each developer for flight control software systems, while in business systems Steve found ratios of between 3:1 and 20:1), making talk of an “industry standard” for this measurement seem futile. More agile-friendly techniques such as Wisdom of the Crowd, planning poker and T-shirt sizing were also covered. Rob finished off with his favourite technique, Hadden’s Size/Complexity Technique (from Rita Hadden), and this seemed like a simple way to arrive at decent estimates to iterate on over time.
  • Mary Thorn keynote “Optimize Your Test Automation to Deliver More Value”
    The second conference day kicked off with a keynote from Mary Thorn (of Ipreo). She based her talk around various experiences of implementing automation during her consulting work and, as such, it was really good practical content. I wasn’t familiar with Mary before this keynote but I enjoyed her presentation style and pragmatic approach.
  • Jared Richardson keynote “Take Charge of Your Testing Career: Bring Your Skills to the Next Level”
    The conference was closed out by another keynote, from Jared Richardson (of Agile Artisans). Jared is best known as one of the authors of the GROWS methodology and he had some good ideas around skills development in line with that methodology. He argued that experiments lead to experience and that we gain experience both by accident and intentionally. He also mentioned the Dreyfus model of skills acquisition. He questioned why we so often compare ourselves to other “building” industries when software is very young compared to industries with hundreds or thousands of years of experience behind them. He implored us to adopt a learner mentality (rather than an expert mentality) and to become “habitual experimenters”. This was an engaging keynote, delivered very well by Jared and packed full of great ideas.

Moving onto my track session presentation, my topic was “A Day in the Life of a Test Architect” and I was up immediately after lunch on the second day of the conference (and pitted directly against the legendary – and incredibly entertaining – Isabel Evans):

Room signage for Lee's talk at STARWest

I was very pleased to get essentially a full house for my talk and my initial worries about the talk being a little short for the one hour slot were unfounded as I ended up going for a good 45 minutes:


There was a good Q&A session after my talk too, though I had to cut it short to make way for the next speaker to set up in the same room. It was good to meet some other people in my audience with the title of “Test Architect” and compare notes.

Shortly after my talk, I had the pleasure of giving a short speaker interview as part of the event’s “Virtual Conference” (a free way to remotely see the keynotes and some other talks from the event), with Jennifer Bonine:

Lee being interviewed by Jennifer Bonine for the STARWest Virtual Conference

Looking at some of the good and not so good aspects of the event overall:

Good

  • The whole show was very well-organized, everything worked seamlessly based on years of experience of running this and similar conferences.
  • There was a broad range of talks to choose from and they were generally of a good standard.
  • The keynotes were all excellent.

Not so good

  • The sheer size of the event was quite overwhelming, with so much going on all the time; it was hard for me to choose what to see when (and to cope with the resulting FOMO).
  • As a speaker, I was surprised not to have a dedicated facilitator for my room, to introduce me, facilitate Q&A, etc. (I had made the assumption that track talks – at such a large and mature event – would be facilitated, but there was nothing in the conference speaker pack to indicate that this would be the case.)
  • I’ve never received so much sponsor email spam after registering for a conference.
  • I generally stuck to my conference attendance heuristic of “don’t attend talks given by anyone who works for a conference sponsor”, which immediately restricted my programme quite considerably. There were just too many sponsor talks for my liking.

In terms of takeaways:


  • Continuous Delivery and DevOps was a hot topic, with its own theme of track sessions dedicated to it – there seemed to be a common thread of fear about testers losing their jobs within such environments, but also some good talks about how testing changes – rather than disappears – in these environments.
  • Agile is mainstream (informal polls in some talks indicated 50-70% of the audience were in agile projects) and many testers are still not embracing it. There seems to be some leading edge work from (some of) the true CD companies and some very traditional work in enterprise environments, with a big middle ground of agile/hybrid adoption rife with poor process, confusion and learning challenges.
  • The topic of “schools of testing” again came up, perhaps due to the recent James Bach “Slide Gate” incident. STARWest is a broad church and the idea of a “school of schools” (proposed by Julie Gardiner during her lightning keynote talk) seemed to be well received.
  • There is plenty of life left in big commercial testing conferences with the big vendors as sponsors – this was the biggest STARWest yet and the Expo was huge and full of the big names in testing tools, all getting plenty of interest. The size of the task in challenging these big players shouldn’t be underestimated by anyone trying to move towards more pragmatic and people-oriented approaches to testing.

Thanks again to Lee Copeland and all at TechWell for this amazing opportunity, I really appreciated it and had a great time attending & presenting at this event.

Making the most of conference attendance

I attend a lot of testing conferences (and present at a few too), most recently the massive STARWest held at Disneyland in Anaheim, California. I’ve been regularly attending such conferences for about ten years now and have noticed some big changes in the behaviour of people during these events.

Back in the day, most conferences dished out printed copies of the presentation slides and audience members generally seemed to follow along in the hard copy, making notes as the presentation unfolded. It was rare to see anyone checking emails on a laptop or phone during talks. The level of engagement with the speaker generally seemed quite high.

Fast forward ten years and it’s a very different story. Thankfully, most conferences no longer feel the need to demolish a forest to print out the slides for everyone in attendance. However, I have noticed a dramatic decrease in note taking during talks (whether that be on paper or virtually) and a dramatic increase in electronic distractions (such as checking email, internet surfing, and tweeting). The level of engagement with the presentation content seems much lower (to me) than it used to be.

I’m probably old school in that I like to take notes – on paper – during every presentation I attend, not only to give me a reference for what was of interest to me during the talk, but also to practice the key testing skill of note taking. Taking good notes is an under-rated element of the testing toolbox and so important for those practicing session-based exploratory testing.

Given that conference speakers put huge effort into preparing & giving their talks and employers spend large amounts of money for their employees to attend a conference, I’d encourage conference attendees to make every effort to be “in the moment” for each talk, take some notes, and then catch up on those important emails in the many breaks on offer. (Employers, please give your conference attendees the opportunity to engage more by letting them know that those “urgent” emails can probably wait till the end of each talk before getting a response.)

Conferences are a great opportunity to learn, network and share experiences. Remember how fortunate you are to be able to attend them and engage deeply while you have the chance.

(And, yes, I will blog about my experiences of attending and presenting at STARWest separately.)

Testers and Twitter

I was lucky enough to attend and present at the massive STARWest conference, held at Disneyland in Anaheim, last week. I’ll blog separately about the experience but I wanted to answer a question I got after my presentation right here on my blog.

Part of my presentation was discussing my decision to join Twitter and how it has become my “go to” place for keeping up-to-date with the various goings on in the world of testing. (If you’re interested, I was persuaded to join Twitter when I attended the Kiwi Workshop on Software Testing in Wellington in 2013 – and very glad I made the leap!)

I think I made a good case for joining Twitter as a tester and hence the question after my talk, “Who should I follow then?” Looking through my list, I think the following relatively small set would give a Twitter newbie a good flavour of what’s going on in testing (feel free to comment with your ideas too).

Ilari Henrik Aegerter: @ilarihenrik

James Marcus Bach: @jamesmarcusbach

Jon Bach: @jbtestpilot

Michael Bolton: @michaelbolton

Richard Bradshaw: @FriendlyTester

Alexandra Casapu: @coveredincloth

Fiona Charles: @FionaCCharles

Anne-Marie Charrett: @charrett

James Christie: @james_christie

Katrina Clokie: @katrina_tester

David Greenlees: @DMGreenlees

Aaron Hodder: @AWGHodder

Martin Hynie: @vds4

Stephen Janaway: @stephenjanaway

Helena Jeret-Mäe: @HelenaJ_M

Keith Klain: @KeithKlain

Nick Pass: @SlatS

Erik Petersen: @erik_petersen

Richard Robinson: @richrichnz

Rich Rogers: @richrtesting

Robert Sabourin: @RobertASabourin

Paul Seaman: @beaglesays

Testing Trapeze: @TestingTrapeze

Santhosh Tuppad: @santhoshst & @TestInsane

Attending a non-testing conference

I have recently found myself enjoying the latter stages of a fine British Summer (really, no sarcasm intended) and headed down to Cornwall to attend the Agile On The Beach conference. This was the first non-testing conference I’ve attended in a very long time, so it was certainly an interesting experience and a chance to compare and contrast what I see on the testing “circuit”.

This was the sixth running of this not-for-profit agile conference and it was sold out with 350 participants. It was traditional in its structure, with an opening keynote each day followed by five tracks of 45-minute sessions punctuated by morning tea, lunch and afternoon tea. One idea I’d never seen before was inviting all the speakers to give elevator pitches for their talks immediately following each morning’s keynote. This gave the speakers a good chance to promote their slots and gave the audience an up-to-date description of each talk’s content.

The highlight for me was the opening keynote, from well-known agilist Linda Rising (perhaps best known for her book Fearless Change), with “Better Decision Making: Science or Stories?”. Her talk discussed whether the adoption of agile is based on stories rather than scientific evidence. Do we even need evidence when agile practices are seen as “common sense”? Linda argued that we’re reluctant to believe science/data and find stories (experience reports) much more compelling. Science validates but doesn’t always convince people (and scientists suffer from confirmation bias). Could agile be a placebo? Does it work because we believe in it? Linda noted that organizations don’t really encourage the scientific method, as decision makers want action rather than investigation. Does any of this sound familiar from the testing world, particularly the context-driven part of it? I’m a big fan of experience reports as evidence; they make for compelling presentations and provide me with stories that I can relate to the challenges I may be facing in my own testing. Interestingly, Linda also mentioned Thinking, Fast & Slow (Daniel Kahneman), a much-cited reference in the testing community these days.

Maybe I shouldn’t be so surprised, but there was very little talk about the role of human testing in software delivery in agile teams. Only one presentation I attended mentioned exploratory testing, with all the others talking only about automation/automating all the tests/automating away manual tests. Some of the continuous delivery talks made claims along the lines of “you can only do CD by automating everything”, so I’m sure we’ll be seeing more frequently delivered bad software if this mindset prevails.


In terms of takeaways, I noted:

  • The agile movement is mainstream, but still with little consensus about many aspects of it.
  • Testing as a specialization is not widely discussed, with most experience reports talking up automation but failing to recognize the requirement for human testers to help teams build quality in. “Code quality” (as defined by various static code analysis techniques) was also commonly mentioned.
  • There was a lot of mention of metrics in various talks – avoiding vanity metrics and identifying useful metrics that drive your desired behaviours (rather than choosing “industry standard” metrics that are often prone to encouraging bad behaviours).
  • Continuous delivery is a hot topic, moving agility up from the CI level to deployment and release. CD is again being used as a reason for automating all testing when it should be the opposite: building better quality in, with the help of people who specialize in helping the team do that, is an obvious way to reduce the risk of simply deploying bad product more frequently.
  • “Business agility” is also a hot topic, moving other parts of the business – not just software development/IT – to a more agile means of working is a big challenge especially in larger organizations.
  • Speaker elevator pitches are a great conference idea (as is a conference party on the beach, take note Aussies!)

(The conference organizers are kindly collating photos, blog posts, presentations, etc. at if you’re looking for more detail.)

Next stop, STARWest in Anaheim where I’m presenting A Day In The Life Of A Test Architect – say g’day if you’ll be there too!

ER: guest editor for Testing Trapeze magazine

I’ve recently had the pleasure of acting as guest editor for Testing Trapeze magazine and thought it would be worth briefly writing about this experience.

If you’re not familiar with this magazine, it started in February 2014 and is published online bi-monthly. It is normally edited by well-known and respected member of the context-driven testing community, Katrina Clokie, from New Zealand. From the magazine’s website “About”:

We want to see a small, simple, quality magazine that amplifies the voices of testers from Australia and New Zealand, presenting a clear, consistent message of good testing practice from existing and emerging leaders. We want to demonstrate the caliber of our community and encourage new testers to join us by engaging in their work at a different level. We want to create a publication that we feel proud of, that truly represents Australia and New Zealand on the international stage; a magazine that you want to read, share and contribute to.

Over the last two and a half years, the magazine has consistently provided a high quality experience for its readers by focusing on a relatively small number of articles per issue and wrapping them up in a beautifully presented publication. Each edition typically comprises four articles from local (i.e. Australia and New Zealand) authors plus another from an international author. There is no set theme per edition and new writers are actively encouraged, so the sixteen editions to date have given lots of opportunities to new voices from the testing community, particularly across Australia and New Zealand.

The main tasks for the editor are logistical and organizational in nature – communicating with authors to get their articles in for review, organizing reviewers for each article, finalizing the content, ordering the articles in the magazine, and writing the editorial. The ease or difficulty of the job is largely dictated by the other people involved and, in my brief experience, everyone was on the same page (no pun intended) in terms of getting good content ready in time to publish to our deadline. Luckily for me, Katrina wrote a blog post Behind the Scenes: Editor of Testing Trapeze in 2015 which helped me work out the various tasks I needed to check off along the way.

It was interesting to see the draft articles coming in from the various authors and the different amounts of review feedback that needed to be incorporated to get to “final” versions for the magazine. Thanks to the reviewers for doing such timely and diligent jobs in providing constructive feedback which was taken on board by the authors.

The magazine is free to download (as a PDF) from the Testing Trapeze website. I strongly encourage you to become a regular reader and also consider expressing an interest in writing an article – you will be warmly welcomed and provided with practical, helpful feedback from the reviewers; there’s nothing to be afraid of!

Thanks again to Katrina for the opportunity to briefly edit the magazine, to Adam for the amazing work with layout and the website, and to all of the authors and reviewers without whom we’d have no content to share. I hope you enjoy the edition I was lucky enough to have the opportunity to bring together.