Category Archives: Conferences

Attending and presenting at CAST 2017 (Nashville)

Back in March, I was delighted to learn that my proposal to speak at the Conference of the Association for Software Testing in Nashville had been accepted. Then began the usual nervous & lengthy gap between acceptance and the actual event.

It was a long trip from Melbourne to Nashville for CAST 2017 – this would be my first CAST since the 2014 event in New York and also my first time as a speaker at their event. This, the 12th annual conference of the AST, took place on August 16, 17 & 18 at the totally ridiculous Gaylord Opryland Resort, a 3000-room resort and convention centre with a massive indoor atrium (and river!) a few miles outside of downtown Nashville. The conference theme was “What the heck do testers do anyway?”

The event drew a crowd of 160, mainly from the US but with a number of internationals too (I was the only participant from Australia, unsurprisingly!).

My track session was “A Day in the Life of a Test Architect”, a talk I’d first given at STARWest in Anaheim in 2016, and I was up on the first conference day, right after lunch. I arrived early to set up and the AV all worked seamlessly so I felt confident as my talk kicked off to a nicely filled room with about fifty in attendance.

I felt like the delivery of the talk itself went really well. I’d rehearsed the talk a few times in the weeks before the conference and I didn’t forget too many of the points I meant to make. The talk took about 35 minutes before the “open season” started – this is the CAST-facilitated Q&A session using the familiar “K-cards” system (borrowed from peer conferences but now a popular choice at bigger conferences too). The questions kept coming and it was an interesting & challenging 25 minutes to field them all. My thanks to Griffin Jones, who facilitated my open season, and thanks to the audience for their engagement and thoughtful, respectful questioning.

A number of the questions during open season related to my recent volunteer work with Paul Seaman in teaching software testing to young adults on the autism spectrum. My mentor, Rob Sabourin, attended my talk and suggested afterwards that a lightning talk about this work would be a good idea to share a little more about what was obviously a topic of some interest to this audience. And so it was that I found myself unexpectedly signing up to do another talk at CAST 2017!

Even with only a five-minute slot, giving the lightning talk was a worthwhile experience and it led to a number of good conversations afterwards, resulting in some connections to follow up and some resources to review. Thanks to all those who offered help and useful information as a result of this lightning talk – it’s greatly appreciated.

With my talk(s) over, the Welcome Reception was a chance to relax with friends old and new over an open bar. A photo booth probably seemed like a good idea at the time, but people always get silly, as evidenced by three clowns (viz. yours truly, Rob Sabourin and Ben Simo) who got the ball rolling by being the first to take the plunge.

I thought the quality of the keynotes and track sessions at CAST 2017 was excellent and I didn’t feel like I attended any bad talks at all. Of course, there are always those talks that stand out for various reasons, and two track sessions really deserve a shout-out.

It’s not every conference where you walk into a session to find the presenter dressed in a pilot’s uniform and asking you to take your seats in preparation for take-off! But that’s what we got with Alexandre Bauduin (of House of Test, Switzerland) and his talk “Your Safety as a Boeing 777 Passenger is the Product of a ‘Big Gaming Rig’”. Alexandre used to be an airline pilot and his talk was about the time he spent working for CAE in Montreal, the world’s leading manufacturer of simulators for the aviation and medical industries. He was a certification engineer, test pilot and then test strategy lead for the company’s Boeing 777 simulator and spent in excess of 10,000 hours test flying it. He mentioned that the simulator had 10-20 million lines of code and 1-2 million physical parts – amazing machinery. His anecdotes about the testing challenges were entertaining but also very serious, and it was clear that the marriage of his actual pilot skills with his testing skills had made for a strong combination in terms of finding bugs that really mattered in this critical simulator. This was a fantastic talk delivered with style and confidence; Alexandre is the sort of presenter you could listen to for hours. An inspired pick by the program committee.

Based purely on the title, I took a punt on Chris Glaettli (of Thales, Switzerland) with “How we tested Gotthard Base Tunnel to start operation one year early” – and again this was an inspired move! Chris was part of the test team for various systems in the 57km Gotthard Base Tunnel (the longest and deepest railway tunnel in the world), which creates a “flat rail” route through the Swiss Alps on the way from Switzerland to Italy, and it was fascinating to hear about the challenges of being involved in such a huge engineering project, both in terms of construction and test environments (and some of the factors they needed to consider). Chris delivered his talk very well and he’d clearly made some very wise choices along the way to help the project be delivered early. In such a regulated environment, he’d done a great job in working closely with auditors to keep the testing documentation down to a minimum while still meeting their strict requirements. This was another superb session, classic conference material.

I noted that some of the “big names” in the context-driven testing community were not present at the conference this year and, perhaps coincidentally, there didn’t seem to be as much controversy or “red carding” during open seasons. For me, the environment seemed much friendlier and safer for presenters than I’d seen at the last CAST I attended (and, as a first-time presenter at CAST, I very much appreciated that feeling of safety). It was also interesting to learn that the theme for the 2018 conference is “Bridging Communities” and I see this as a very positive step for the CDT community which, rightly or wrongly, has earned a reputation for being disrespectful and unwilling to engage in discussion with those from other “schools” of testing.

I’d like to take this chance to thank Rob Sabourin and the AST program committee for selecting my talk and giving me the opportunity to present at their conference. It was a thoroughly enjoyable experience.

ER of presenting at the LAST conference (and observations on the rise of “QA”)

As I’ve blogged previously, I was set to experience three “firsts” at the recent LAST conference held in Melbourne. Now on the other side of the experience, it’s worth reviewing each of those firsts.

It was my first time attending a LAST conference and it was certainly quite a different experience to any other conference I’ve attended. Most of my experience is in attending testing-related conferences (of both commercial and community varieties) and LAST was a much broader church, but still with a few testing talks to be found on the programme.

With about a dozen concurrent tracks, choosing talks was a tough job and having so many tracks just seems a bit OTT to me. As is usually the case, it was the first-person experience reports that made for the highlights of this conference. The Seek guys, Brian Rankin and Norman Noble, presented Seek’s agile transformation story in “Building quality products as a team” and this was a compelling and honest story about their journey. In “Agile @ Uni: patience young grasshopper”, Toby Durden and Tim Hetherington (both of Deakin University) talked about a similar journey at their university and the challenges of adopting more agile approaches at program rather than project levels – this was again good open, honest and genuine storytelling.

(I also made an effort to attend the talks specifically on testing, see later in this blog post for my general thoughts around those.)

The quality of information provided by the LAST organizers in the lead-up to the conference was second to none, so hats off to them for preparing so well and giving genuinely useful information to presenters. Having said that, the experience “on the day” wasn’t great in my opinion. It still amazes me that conferences think it’s OK not to have a room helper for each and every session, especially at conferences like this one that encourage lots of new or inexperienced presenters. A room helper can cover introductions, facilitate Q&A, keep things on track time-wise, and assist with any AV issues – while their presence can simply be a comfort to a nervous presenter.

Secondly, this was the first time I’d co-presented a talk at a conference and it turned out to be a very good experience. Paul Seaman and I practiced our talk a few times, both via Skype calls and also in front of an audience, so we were confident in our content and timing as we went into the “live” situation. It was great to have some company up there and sharing the load felt very natural & comfortable. Paul and I are already discussing future joint presentations now that we know we can make a decent job of it. (The only negatives surrounding the actual delivery of the talk related to the awful room we had been given, with the AV connection being at the back of the room meaning we couldn’t see our soft-copy speaker notes while presenting – but neither of us thought this held us back from delivering a good presentation.)

Lee and Paul kicking off their presentation at LAST

Thirdly, this was the first time I’d given a conference talk about my involvement with the EPIC TestAbility Academy. The first run of this 12-week software testing training programme for young adults on the autism spectrum has just finished and Paul & I are both delighted with the way it’s gone. We’ve had amazing support from EPIC Recruit Assist and learned a lot along the way, so the next run of the programme should be even better. My huge thanks to the students who stuck with us and hopefully they can use some of the skills we’ve passed on to secure themselves meaningful employment in the IT sector. The feedback from our talk on this topic at LAST was incredible, with people offering their (free) help during future runs of the training, describing what we’re doing as “heartwarming” and organizations reaching out to have us give the same talk in their offices to spread the word. This was a very rewarding talk and experience – and a big “thank you” to Paul for being such a great bloke to work with on this journey.

Turning to the testing talks at LAST (and also the way testing was being discussed at Agile Australia the week before), I am concerned about the way “QA” has become a thing again in the agile community. I got the impression that agile teams are looking for a way to describe the sort of contributions I’d expect a good tester to make to a team, but are unwilling to refer to that person as a “tester”. Choosing the term “QA” appeared to be seen as a way to talk about the broader responsibilities a tester might have apart from “just testing stuff”. The danger here is in the loading of the term “QA” – as in “Quality Assurance” – and using it seems to go against the whole team approach to quality that agile teams strive for. What’s suddenly wrong with calling someone a “tester”? Does that very title limit them to such an extent that they can’t “shift left”, be involved in risk analysis, help out with automation, coach others on how to do better testing, etc.? I’d much rather we refer to specialist testers as testers and let them show their potentially huge value in agile teams as they apply those testing skills to more than “just testing stuff”.

Attending the Agile Australia conference (June 22 & 23, 2017)

Although the Agile Australia conference has been running for nine years, I attended it for the first time recently when it took place in Sydney. It was again sold out (and oversold if the “standing room only” keynotes and rumours of mass late registrations from one of the larger sponsors were anything to go by) and it’s become a massive commercial conference, set to celebrate its tenth anniversary next year in Melbourne.

There was a big selection of talks, with each day being kicked off by three back-to-back forty-minute keynotes before splitting into multiple tracks (with one track comprised of so-called “sponsored content”).

The keynotes on both days were of high quality and certainly some of the best talks of the conference for me. Barry O’Reilly was entertaining and engaging in his talk on lessons learned in trying to deploy lean in enterprise environments, while Jez Humble busted a few myths about the deployability of continuous delivery in various organizations. He won me over when he mentioned Exploratory Testing as part of the CD pipeline, the only time I heard mention of ET during the entire event. Neal Ford did a good job in his keynote on how best practices turn into anti-patterns, and Sami Honkonen’s talk about the building blocks required to build a responsive organization was a highlight of the conference.

In terms of track sessions, there wasn’t a single session dedicated to testing – maybe everyone with a good testing story to tell has simply given up submitting to this conference now (my last two submissions haven’t got up) – but there was plenty to keep me occupied. Highlights were John Contad’s passionately delivered talk about mentoring at REA Group, Dr Lisa Harvey-Smith’s fascinating presentation on dark matter, and Estie & Anthony Boteler’s talk about working with an intern software tester on the autism spectrum, also at REA Group. This talk resonated strongly with me thanks to my recent work with Paul Seaman and EPIC Recruit Assist in delivering the EPIC TestAbility Academy software testing training programme for young adults on the autism spectrum.

My takeaways were:

  • The focus in the agile community has moved away from “doing Scrum better” to looking at the human factors in successful projects.
  • Talks on psychological safety, neurodiversity, mentorship and such were great to see here, as the importance of people in project success becomes better understood.
  • Testing as a skilled craft is still not being valued by this community, with the crucial role of exploratory testing being mentioned only once in all the talks I attended.

Out of the thousand or so official photos from this conference, there’s only one to provide evidence of my attendance – waiting in line at the coffee cart, kind of says it all really.

Some firsts at the LAST conference (Melbourne)

My next conference speaking gig has just come in – the LAST conference in Melbourne at the end of June 2017. This event will mark a series of “firsts” for me.

Firstly (pun intended), this will be my first time attending a LAST conference so I’m looking forward to the huge variety of speakers they have and being part of a community-driven event.

Secondly, this will be the first time I’ve co-presented a talk at a conference. I expect this to be quite a different experience to “going solo” but, given that I’m doing it with my good mate Paul Seaman, I’m comfortable it will go very well.

Finally, this will be the first time I’ve given a conference talk about my involvement with the EPIC TestAbility Academy. Both Paul and I are excited about this project to teach software testing to young adults on the autism spectrum (and we’ve both blogged about it previously – Paul’s blog, Lee’s blog) and we’re pleased to have the opportunity to share our story at this conference. Working together to create a slide deck is another first for both of us and it’s an interesting & enjoyable challenge, for which we’ve found effective new ways of collaborating.

Thanks to LAST for selecting our talk, I’ll blog about the experience of delivering it after the event.

Program Chair for CASTx18 in Melbourne

It was to my great surprise that I was recently asked to be Program Chair for the AST’s second conference venture in Australia, CASTx18 in Melbourne next year.

The first CAST outside of North America was held in Sydney in February 2017 and was so successful that the AST have opted to give Australia another go, moving south to Melbourne. Although my main responsibility as Program Chair revolves around the conference content, as a local I can also help out with various tasks “on the ground” to assist the rest of the AST folks who are based outside of Australia.

Coming up with a theme was my first challenge and I’ve opted to give it some Aussie flavour, with “Testing in the Spirit of Burke & Wills” to evoke the ideas of pioneering and exploration.

I’m excited by this opportunity to put together a conference of great content for the local and international testing community – and also humbled by the AST’s faith in me to do so.

Keep an eye on the AST CASTx18 website for more details – the date and venue shouldn’t be too far away, with a CFP to follow.

Attending the CASTx conference in Sydney (21st February, 2017)

The annual conference of the Association for Software Testing (AST) took its first step outside of North America in 2017, with the CASTx conference in Sydney on February 20 & 21. Since I align myself with the context-driven principles advocated by the AST, I decided to attend the event’s conference day on the 21st (disclaimer: I submitted a track session proposal for this conference but it was not accepted.)

The conference was held in the stunning Art Deco surrounds of the Grace Hotel in the Sydney CBD and drew a crowd of about 90, mainly from Australia and New Zealand but also with a decent international contingent (including representatives of the AST). The Twitter hashtag for the event was #castx17 and this was fairly active across the conference and in the days since.

The full event consisted of a first day of tutorials (a choice of three, by Michael Bolton, Goranka Bjedov and Abigail Bangser & Mark Winteringham) followed by a single conference day formed of book-ending keynotes sandwiching one-hour track sessions. The track sessions were in typical peer conference style, with forty minutes for the presentation followed by twenty minutes of “open season” (facilitated question and answer time, following the K-cards approach).

My conference day turned out to include:

  • Conference opening by Ilari Henrik Aegerter (board member of the AST), Anne-Marie Charrett (conference program chair) and Eric Proegler (board member and treasurer of the AST)
  • Opening keynote from Goranka Bjedov (of Facebook): “Managing Capacity and Performance in a Large Scale Production Environment”
  • Track session “Rise of the Machine (Learning)” by Stephanie Wilson (of Xero)
  • Track session “Testing with Humans: How Atlassian Validates Its Products With Customers” by Georgie Bottomley (of Atlassian)
  • Track session “To Boldly Go: Taking the Enterprise to SBTM” by Aaron Hodder (of Assurity Consulting NZ)
  • Track session “Auditing Agile Projects” by Michelle Moffat (of Tyro Payments)
  • Closing keynote by Michael Bolton (of Developsense): “The Secret Life of Automation”

The opening keynote was fantastic.  I last heard Goranka speak when she keynoted the STANZ conference here in 2011. She started off by saying how well Facebook had prepared for the US elections in terms of handling load (and the coincidental additional load arising from India’s ban on large currency notes), but then told the story of how around half of all Facebook users had been declared dead just a few days after the election (an unfortunate by-product of releasing their new “memorial” feature that didn’t actually bother to check that the member was dead before showing the memorial!). This was an example of her theme that Facebook doesn’t care about quality and such changes can be made by developers without being discovered, but their resolution times are fast when such problems immediately start being reported by their users. The stats she provided about Facebook load were incredible – 1.7 billion monthly active users for the main site, around 1 billion for each of WhatsApp and Messenger, plus around 0.5 billion for Instagram. Facebook now has the largest photo storage in the world and already holds more video content than YouTube. Her 2013 stats showed, per 30 minutes, their infrastructure handled 108 billion MySQL queries, the upload of 10 million photos and scanned 105TB with Hive! This load is handled by Facebook’s private cloud built in ten locations across the US and Europe. Servers are all Linux and all data centres are powered using green power (and it was interesting to note that they rely on evaporative cooling to keep power usage down). The reasons for a lack of an Australian data centre became obvious when Goranka talked about the big long-term power contracts they require and also “world class internet” (at which point the room burst into laughter). 
Details of all the server specifications can be found at OpenCompute.org. Her objectives in managing capacity and performance are: low latency for users, the ability to launch things quickly (succeed or fail quickly, don’t worry about efficiency, don’t care about quality) and conservation (in terms of power, money, computers, network and developer time). Her goals are: right things running on the right gear, running efficiently, knowing if something is broken or about to break, and knowing why something is growing. She also talked through their load testing approach – which runs every second of every day – and their testing around shutting down an entire region to be ready for disasters. Although this wasn’t really a pure testing talk, it was fascinating to learn more about the Facebook infrastructure and how it is managed and evolving. It was made all the more interesting by Goranka’s irreverent style – she openly admitted to not being a Facebook user and cannot understand why people want to post photos of cats and their lunches on the internet!
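Those per-30-minute figures are easier to grasp as per-second rates. A quick back-of-the-envelope conversion (a sketch using only the numbers quoted above; the input figures are as reported in the talk):

```python
# Convert the quoted per-30-minute Facebook stats (2013) into
# per-second rates. The input figures are as quoted in the talk;
# the arithmetic is purely illustrative.

HALF_HOUR_SECONDS = 30 * 60  # 1800 seconds

stats_per_half_hour = {
    "MySQL queries": 108e9,   # 108 billion
    "photo uploads": 10e6,    # 10 million
    "TB scanned with Hive": 105,
}

for name, value in stats_per_half_hour.items():
    rate = value / HALF_HOUR_SECONDS
    print(f"{name}: {rate:,.1f} per second")
```

Even allowing for rounding in the quoted figures, that works out to roughly 60 million MySQL queries and over 5,000 photo uploads every second.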

From the tracks, it was interesting to hear about Xero’s QA mission statement, viz. “Influence Xero culture to be more quality oriented and transform software from ‘good’ to ‘wow’” (Stephanie Wilson’s talk), and it was surprising to me to learn that Atlassian was not doing any decent sort of UX research until so recently (from Georgie Bottomley’s talk) – but maybe that explains some of the quirky interactions we’ve all come to know and love in JIRA!

I’ve seen Aaron Hodder present a few times before and he always delivers real experiences with a unique insight – and this session was no exception. His talk was a fascinating insight into dysfunctional client/vendor contract-heavy enterprise IT environments. The novel approach he came up with at Assurity was session-based test management in a light disguise in order to make it palatable in its terminology and reporting, but it was very cleverly done and the project sounds like it’s in much better shape than it was as a result. A really good talk with handy takeaways, and not just for a tester finding themselves in the unfortunate position of being in a project like the one Aaron experienced.

Michelle Moffat presented the idea that agile practices are, in audit terms, controls, and that it is the way evidence is gathered in this environment that is so different – she gathers photos and videos, attends meetings, and makes use of automated controls (for example, from the build system) rather than relying on the creation of documents. This was a really interesting talk and it was great to see someone from well outside of our sphere taking on the ideas of agile and finding ways to meet her auditing responsibility without imposing any additional work on the teams doing the development and testing.

Michael Bolton’s closing keynote was a highlight of my day and he used his time well, offering us his usual thought-provoking content delivered with theatre. Michael’s first “secret” was that a test cannot be automated and automated testing does not exist. He made the excellent point that if we keep talking about automated testing, then people will continue to believe that it does exist. He has also observed that people focus on the How and What of automated button-pushing, but rarely the Why. He identified some common automation (anti)patterns and noted that “tools are helping us to do more lousy, shallow testing faster and worse than ever before”! He revealed a few more secrets along the way (such as there being no such thing as “flaky” checks) before his time ran out all too soon.

There were a few takeaways for me from this conference:

  • There is a shift in focus for testing as SaaS and continuous delivery make it possible to respond to problems in production much more quickly and easily than ever before.
  • The “open season” discussion time after each presentation was, as usual, a great success and is a really good way of getting some deeper Q&A going than in more traditionally-run conferences.
  • It’s great to have a context-driven testing conference on Australian soil and the AST are to be commended for taking the chance on running such an event (that said, the awareness of what context-driven testing means in practice seemed surprisingly low in the audience).
  • The AST still seems to struggle with meeting its mission (viz. “to advance the understanding of the science and practice of software testing according to context-driven principles”) and I personally didn’t see how some of the track sessions on offer in this conference (interesting though they were) worked towards achieving that mission.

In summary, I’m glad I attended CASTx and it was good to see the level of support for AST’s first international conference event – hopefully the first of many to help broaden the appeal and reach of the AST’s efforts in advocating for context-driven testing.

An excellent set of summary photos has been put together from Twitter, at https://twitter.com/i/moments/833978066607050752

A worthwhile 40-minute roundtable discussion with five CASTx speakers/organizers (viz. Abigail Bangser, Mark Winteringham, Aaron Hodder, Anne-Marie Charrett and Ilari Henrik Aegerter) can also be heard at https://t.co/A0CuXGAdd7

ER: Attending the Cambridge Exploratory Workshop on Testing (CEWT)

One of the great things about being out of Australia for a while is the ability to experience testing community events in other parts of the world.

I recently attended a Belfast Testers meetup and, shortly afterwards, received an invitation from James Thomas to take part in the third Cambridge Exploratory Workshop on Testing – an invitation I readily accepted!

This peer workshop took place on Sunday 6th November and was held in the offices of games developer Jagex on the (enormous) Cambridge Science Park, with a total of 12 participants (the perfect size for such an event), as follows:

  • Michael Ambrose (Jagex)
  • James Thomas, Karo Stoltzenburg, Sneha Bhat, Aleksandar Simic (all from Linguamatics)
  • Alan Wallace (GMSL)
  • James Coombes (Nokia)
  • Neil Younger (DisplayLink)
  • Chris Kelly (Redgate)
  • Iuliana Silvasan
  • Chris George (Cambridge Consultants)
  • Lee Hawkins (Quest)

The workshop theme was “Why do we Test, and What is Testing (Anyway)?” and, after some introductions and housekeeping about how the workshop would be run, it was time for the first ten-minute talk, from Michael Ambrose with “Teach Them to Fish”. He talked about teaching developers to test at Jagex, as well as upskilling testers to be pseudo-developers. He said there was a technical need to cover more and more as well as a desire to get testers learning more (as a different approach to, say, pushing developers to do automation). Michael noted that there were a number of implications of these changes, including the perception of testers, working out what’s unique about what testers do, and knowing how far to go (getting testers to the level of junior developers might be enough). This was an interesting take on the current “testers need to be more technical” commentary in the industry and the twenty-minute discussion period was easily filled up.

Next up was James Coombes with “Who should do testing and what can they test?” He talked about the “I own quality” culture within Nokia and how he sees different roles being responsible for different aspects of quality. James suggested that developers should find most of the bugs and fix them, while QA then find the next highest number of bugs. Security testers act as specialists with generally few (but important) bugs being found. Documenters/trainers are well placed to find usability bugs, while customer support staff have good knowledge of how customers actually use their products and so can provide good testing tours. Alpha test engineers are responsible for integration/end-to-end testing and catch the low frequency bugs. Finally, customers are hopefully finding the very low frequency bugs. This was an interesting talk about getting everyone involved in the testing activity (and highlighted the “testing is an activity, not a role” idea). I particularly liked what James said about unit testing – “if someone changes your code and they don’t know they’ve broken it, it’s not their problem, it’s yours for not writing a good enough unit test”.
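James’s point about unit tests being your own safety net is worth a small illustration. The following is a hypothetical sketch (the function, names and values are invented for illustration, not from his talk): the test encodes your assumptions about the code, so whoever changes it later finds out immediately that they’ve broken something, rather than it being “their problem”.

```python
# Hypothetical example: a unit test that protects an assumption about
# discount_price(), so a future change that breaks the rule fails loudly
# instead of silently shipping. Names and values are illustrative only.

def discount_price(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_price():
    # If someone later changes the formula or the rounding,
    # these assertions fail at their desk, not in production.
    assert discount_price(100.0, 10) == 90.0
    assert discount_price(19.99, 0) == 19.99

test_discount_price()
```

In James’s framing, if a colleague changes `discount_price` and no test fails, the gap is in the tests you wrote, not in their change.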

After a short break, I was up next with my talk “What is Testing? It depends …” I decided to tackle the latter half of the theme (i.e. the “what” rather than the “why”) and my idea was to discuss what testing means depending on the perspective of the stakeholder. We spend a lot of time and effort in the community on refining a definition of testing (and I favour the James Bach & Michael Bolton definition given towards the end of the Exploratory Testing 3.0 blog post) but this (or any other) definition is probably not very helpful to some stakeholders. I covered a number of perspectives such as “Testing is a way to make money” (if you’re a testing tools vendor or a testing outsourcing services provider), “Testing is a cost centre” (if you’re a CFO) and “Testing is dead” (if you’re a CxO type reading some of the headline IT magazines & websites). There was a good discussion after my talk, mainly focused on the cost centre perspective and how this has impacted people in their day-to-day work. I was pleased with how my talk went (especially given the short time I had to prepare) and received some good feedback, particularly on the concise nature of the slides and the confidence with which it was presented. My slide deck can be seen at What is Testing? It depends…

The last session before lunch saw Aleksandar Simic with “A Two-day Testing Story”. He did a fine job of breaking down a two-day period in his work into various different activities, some testing-related (e.g. pairing on test design) and some not (e.g. working with IT support on a networking issue). Aleksandar’s level of self-inspection was impressive, as was his naming of the various activities, learning opportunities and challenges along the way. His “testing diary” seems to be working well for him in identifying and naming his testing activities and this would make an interesting conference talk with some further development.

Lunch provided a good chance for us all to chat and unwind a little after the intensive morning spent talking testing.

First up after the lunch break was Karo Stoltzenburg with “I test, therefore I am”. She had adopted the idea of substitution in preparing her talk so looked to answer the question “Why do I test?” and see where that took her. Karo’s answer was “Because I like it” and then she explored why she liked it, identifying investigation, learning, exploring, use of the scientific method, collaborating, thinking in different contexts and diversity as aspects of testing that appealed to her. I liked Karo’s closing remarks in which she said “I test because it makes me happy, because it’s interesting, challenging and varied work”. We really need more positive messages like Karo’s being expressed in the testing community (and wider still), so I’d love to see this become a full conference talk one day. She did a good job of communicating her passion for testing and there were some interesting discussions in the group following her talk, with a degree of agreement about why testing is so engaging for some of us.

The sixth and final talk of the day came from James Thomas with “Testing All the Way Down, and Other Directions”. He walked through an in-depth analysis of Elisabeth Hendrickson’s “Tested = Checked + Explored” from her book, Explore It! James decided to explore this definition of testing using techniques from that definition which wouldn’t classify his actions as testing. He described how he’d interacted with Elisabeth on some of his questions after exploring the idea in this way and finally presented his proposed alternative definition of testing as “the pursuit of actual or potential incongruity”. (Note that James describes this talk more fully in his blog post, Testing All the Way Down, and Other Directions.) The main focus of discussion after James’s talk was his proposed definition of testing and I’ll be following the broader community’s response to his proposal with interest.

A few discussion points arose during the day for which we didn’t have time to go deep between talks, so we dedicated ten minutes to each of the following topics to close out the workshop content:

  • Quality – what does it mean? (Weinberg definition, but are others more helpful?)
  • Domain knowledge (can bias you, can empathy with the end user be a disadvantage? How do we best adjust strategy to mitigate for any lack of domain knowledge?)
  • Evaluating success (how do we measure the success of spreading testing into development and other disciplines?)
  • Is testing just “the stuff that testers do”? (probably not!)
  • How do we make a difference? (blogging, workshops in our own workplaces, brown bag sessions, broader invitation list to peer conferences)

To wrap up, a short retrospective was held where we were all encouraged to note good things to continue, anything we’d like to stop doing, and suggestions for what we should start to do. There were some good ideas, briefly discussed by the group, and I’d expect to see some of them taken up by the CEWT organizers as part of their fourth incarnation.

The CEWT group standing outside number 10 Downing Street (or inside the Jagex office, maybe):

cewt2

This was a really good day of deep-diving with a passionate group of testers, exactly what a peer conference should be all about. Thanks again to James for the invitation and thanks to all the participants for making me so welcome.

For reflections on the event from others, keep an eye on the CEWT blog at http://cewtblog.blogspot.co.uk/2016/11/cewt-3-reflections.html

My ER of attending and presenting at STARWest 2016

I recently had the pleasure of heading to Southern California to attend and present at the long-running STARWest conference. Although the event is always held at the Disneyland Resort, it’s a serious conference and attracted a record delegation of over 1200 participants. For a testing conference, this is just about as big as it gets and was probably on a par with some recent EuroSTARs that I’ve attended.

My conference experience consisted of attending two full days of tutorials then two conference days, plus presenting one track session and doing an interview for the Virtual Conference event. It was an exhausting few days but also a very engaging & enjoyable time.

Rather than going through every presentation, I’ll talk to a few highlights:

  • Michael Bolton tutorial “Critical Thinking for Software Testers”
    The prospect of again spending a full day with Michael was an exciting one – and he didn’t disappoint. His tutorial drew heavily from the content of Rapid Software Testing (as expected), but this was not a big issue as hardly anyone in his audience of about 50 was familiar with RST, his work with James Bach, Jerry Weinberg, etc. Michael defined “critical thinking” as “thinking about thinking with the aim of not getting fooled” and he illustrated this many times with interesting examples. The usual “checking vs. testing”, critical distance, models of testing, system 1 vs. system 2 thinking, and “Huh? Really? And? So?” heuristics familiar to those of us who follow RST and Bolton/Bach’s work were all covered, and it seemed that Michael converted a few early skeptics during this class. An enjoyable and stimulating day’s class.
  • Rob Sabourin tutorial “Test Estimation in the Face of Uncertainty”
    I was equally excited to be spending half a day in the company of someone who has given me great support and encouragement – and without whose support I probably wouldn’t have made the leap into presenting at conferences. Whenever Rob Sabourin presents or teaches, you’re guaranteed passion and engagement and he did a fine job of covering what can be a pretty dry subject. His audience of about 40 was split 50/50 between those on agile and waterfall projects, and some of the estimation techniques he outlined suited one or other SDLC model better, while others were generic. He covered most of the common estimation techniques and often expressed his opinion on their usefulness! For example, using “% of project effort/spend” as a way of estimating the testing required was seen as ignoring many factors that influence how much testing we need to do, and it also ignores the fact that small development efforts can result in big testing efforts. Rob said this technique “belittles the cognitive aspects of testing”, with which I heartily agreed! Rob also cited the work of Steve McConnell on developer:tester ratios, in which he found wide variability in this ratio depending on the organization and environment (e.g. NASA has 10 testers to each developer for flight control software systems, while in business systems he found ratios of between 3:1 and 20:1), making talk of an “industry standard” for this measurement seem futile. More agile-friendly techniques such as Wisdom of the Crowd, planning poker and T-shirt sizing were also covered. Rob finished off with his favourite technique, Hadden’s Size/Complexity Technique (from Rita Hadden), and this seemed like a simple way to arrive at decent estimates to iterate on over time.
  • Mary Thorn keynote “Optimize Your Test Automation to Deliver More Value”
    The second conference day kicked off with a keynote from Mary Thorn (of Ipreo). She based her talk around various experiences of implementing automation during her consulting work and, as such, it was really good practical content. I wasn’t familiar with Mary before this keynote but I enjoyed her presentation style and pragmatic approach.
  • Jared Richardson keynote “Take Charge of Your Testing Career: Bring Your Skills to the Next Level”
    The conference was closed out by another keynote, from Jared Richardson (of Agile Artisans). Jared is best known as one of the authors of the GROWS methodology and he had some good ideas around skills development in line with that methodology. He argued that experiments lead to experience and that we gain experience both by accident and intentionally. He also mentioned the Dreyfus model of skills acquisition. He questioned why we so often compare ourselves to other “building” industries with hundreds or thousands of years of experience when software is such a young discipline by comparison. He implored us to adopt a learner mentality (rather than an expert mentality) and to become “habitual experimenters”. This was an engaging keynote, delivered very well by Jared and packed full of great ideas.

Moving onto my track session presentation, my topic was “A Day in the Life of a Test Architect” and I was up immediately after lunch on the second day of the conference (and pitted directly against the legendary – and incredibly entertaining – Isabel Evans):

Room signage for Lee's talk at STARWest

I was very pleased to get essentially a full house for my talk and my initial worries about the talk being a little short for the one hour slot were unfounded as I ended up going for a good 45 minutes:

starwest1.jpg

There was a good Q&A session after my talk too, though I had to cut it short to make way for the next speaker to set up in the same room. It was good to meet some other people in my audience with the title of “Test Architect” and compare notes.

Shortly after my talk, I had the pleasure of giving a short speaker interview as part of the event’s “Virtual Conference” (a free way to remotely see the keynotes and some other talks from the event), with Jennifer Bonine:

Lee being interviewed by Jennifer Bonine for the STARWest Virtual Conference

Looking at some of the good and not so good aspects of the event overall:

Good 

  • The whole show was very well-organized, everything worked seamlessly based on years of experience of running this and similar conferences.
  • There was a broad range of talks to choose from and they were generally of a good standard.
  • The keynotes were all excellent.

Not so good

  • The sheer size of the event was quite overwhelming, with so much going on all the time that it was hard for me to choose what to see when (with the resulting FOMO).
  • As a speaker, I was surprised not to have a dedicated facilitator for my room, to introduce me, facilitate Q&A, etc. (I had made the assumption that track talks – at such a large and mature event – would be facilitated, but there was nothing in the conference speaker pack to indicate that this would be the case.)
  • I’ve never received so much sponsor email spam after registering for a conference.
  • I generally stuck to my conference attendance heuristic of “don’t attend talks given by anyone who works for a conference sponsor”, which immediately restricted my programme quite considerably. There were just too many sponsor talks for my liking.

In terms of takeaways:
  • Continuous Delivery and DevOps were hot topics, with a whole theme of track sessions dedicated to them – there seemed to be a common thread of fear about testers losing their jobs in such environments, but also some good talks about how testing changes – rather than disappears – in these environments.
  • Agile is mainstream (informal polls in some talks indicated 50-70% of the audience were in agile projects) and many testers are still not embracing it. There seems to be some leading edge work from (some of) the true CD companies and some very traditional work in enterprise environments, with a big middle ground of agile/hybrid adoption rife with poor process, confusion and learning challenges.
  • The topic of “schools of testing” again came up, perhaps due to the recent James Bach “Slide Gate” incident. STARWest is a broad church and the idea of a “school of schools” (proposed by Julie Gardiner during her lightning keynote talk) seemed to be well received.
  • There is plenty of life left in big commercial testing conferences with the big vendors as sponsors – this was the biggest STARWest yet and the Expo was huge and full of the big names in testing tools, all getting plenty of interest. The size of the task in challenging these big players shouldn’t be underestimated by anyone trying to move towards more pragmatic and people-oriented approaches to testing.

Thanks again to Lee Copeland and all at TechWell for this amazing opportunity, I really appreciated it and had a great time attending & presenting at this event.

Making the most of conference attendance

I attend a lot of testing conferences (and present at a few too), most recently the massive STARWest held at Disneyland in Anaheim, California. I’ve been regularly attending such conferences for about ten years now and have noticed some big changes in the behaviour of people during these events.

Back in the day, most conferences dished out printed copies of the presentation slides and audience members generally seemed to follow along in the hard copy, making notes as the presentation unfolded. It was rare to see anyone checking emails on a laptop or phone during talks. The level of engagement with the speaker generally seemed quite high.

Fast forward ten years and it’s a very different story. Thankfully, most conferences no longer feel the need to demolish a forest to print out the slides for everyone in attendance. However, I have noticed a dramatic decrease in note taking during talks (whether on paper or electronically) and a dramatic increase in electronic distractions (such as checking email, internet surfing, and tweeting). The level of engagement with the presentation content seems much lower (to me) than it used to be.

I’m probably old school in that I like to take notes – on paper – during every presentation I attend, not only to give me a reference for what was of interest to me during the talk, but also to practice the key testing skill of note taking. Taking good notes is an under-rated element of the testing toolbox and so important for those practicing session-based exploratory testing.

Given that conference speakers put huge effort into preparing & giving their talks and employers spend large amounts of money for their employees to attend a conference, I’d encourage conference attendees to make every effort to be “in the moment” for each talk, take some notes, and then catch up on those important emails in the many breaks on offer. (Employers, please give your conference attendees the opportunity to engage more by letting them know that those “urgent” emails can probably wait till the end of each talk before getting a response.)

Conferences are a great opportunity to learn, network and share experiences. Remember how fortunate you are to be able to attend them and engage deeply while you have the chance.

(And, yes, I will blog about my experiences of attending and presenting at STARWest separately.)

Testers and Twitter

I was lucky enough to attend and present at the massive STARWest conference, held at Disneyland in Anaheim, last week. I’ll blog separately about the experience but I wanted to answer a question I got after my presentation right here on my blog.

Part of my presentation was discussing my decision to join Twitter and how it has become my “go to” place for keeping up-to-date with the various goings on in the world of testing. (If you’re interested, I was persuaded to join Twitter when I attended the Kiwi Workshop on Software Testing in Wellington in 2013 – and very glad I made the leap!)

I think I made a good case for joining Twitter as a tester and hence the question after my talk, “Who should I follow then?” Looking through my list, I think the following relatively small set would give a Twitter newbie a good flavour of what’s going on in testing (feel free to comment with your ideas too).

Ilari Henrik Aegerter: @ilarihenrik

James Marcus Bach: @jamesmarcusbach

Jon Bach: @jbtestpilot

Michael Bolton: @michaelbolton

Richard Bradshaw: @FriendlyTester

Alexandra Casapu: @coveredincloth

Fiona Charles: @FionaCCharles

Anne-Marie Charrett: @charrett

James Christie: @james_christie

Katrina Clokie: @katrina_tester

David Greenlees: @DMGreenlees

Aaron Hodder: @AWGHodder

Martin Hynie: @vds4

Stephen Janaway: @stephenjanaway

Helena Jeret-Mäe: @HelenaJ_M

Keith Klain: @KeithKlain

Nick Pass: @SlatS

Erik Petersen: @erik_petersen

Richard Robinson: @richrichnz

Rich Rogers: @richrtesting

Robert Sabourin: @RobertASabourin

Paul Seaman: @beaglesays

Testing Trapeze: @TestingTrapeze

Santhosh Tuppad: @santhoshst & @TestInsane