
A year off giving conference presentations

Having just received a rejection for my only pending CFP submission, it looks like 2019 will be the first year since 2013 in which I don’t give a conference presentation.

It’s always disappointing when the effort of crafting a talk in response to a CFP doesn’t result in the opportunity to give the talk, but my strike rate over the last few years has been pretty good and I’m grateful for the awesome opportunities I’ve been afforded by events in New Zealand, Sweden, Estonia, Vietnam, the US and Australia.

As anyone who’s prepared and given a conference talk will know, there’s a lot of time and effort involved – from crafting a CFP submission, to refining the story, building a slide deck, performing some practice runs, travelling to the event (especially from somewhere as remote as Australia!), and actually delivering the talk. In the absence of this work, I’m looking forward to putting more effort into my community projects as well as kicking off a new testing-related personal project very soon.

In the short term, though, my focus is on the Testing in Context Conference Australia coming up in Melbourne at the end of February. It’s great to be working with Paul Seaman and the Association for Software Testing on this event and I’m really looking forward to putting on a great show, as well as meeting up with old friends from the testing community and hopefully making some new ones as we come together to learn, share and enjoy the company of great testers from around the world.

(There’s still plenty of time to register for the conference and the pre-conference workshops; all the details can be found at http://ticca19.org.)

 


2018 in review

I’ll briefly look back on 2018 to close out my blogging for the year. I published 19 blog posts in 2018, down a little from 2017 (with 22 posts). My target cadence remains one post per month so I feel like I’ve done “enough” over the year and hopefully provided some valuable and interesting content along the way. The stats indicate almost exactly the same number of views of my blog as during the previous year, but with a slight increase in the number of visitors. If there are topics you’d like to see me talking about here (especially to encourage more new readers), please just let me know.

Conferences & meetups

It was my quietest year in a long time in terms of conference attendance. I made it to just two conferences (both specific testing events), co-organizing one and co-presenting at the other.

My first conference of 2018 came in February with the Association for Software Testing‘s second Australian conference, CASTx18 in Melbourne, for which I was Programme Chair and local organizer. The conference went really well, with a great programme (well, I would say that!) and lots of good vibes from the delegates. The Langham Hotel was a fine venue for the event and the success of the conference led the AST to commit to the 2019 conference (and beyond) – more on that below!

My only speaking gig of the year came in October up in Sydney, co-presenting with Paul Seaman at the inaugural TestBash Australia conference. This sold-out conference featured a good single-track programme and it was great to meet up with so many friends from the testing community there. Our presentation went well and the topic (our volunteer work running a software testing training course for young adults on the autism spectrum) seemed to resonate with many people in the audience. It was an enjoyable gig all round and we appreciated the opportunity to broaden awareness of the EPIC TestAbility Academy.

In terms of meetups, I only made it to those running alongside conferences. I organized a meetup before the CASTx18 conference, at which Katrina Clokie drew a good crowd and the Langham provided fantastic hospitality. The pre-TestBash Sydney Testers meetup saw a presentation from Trish Koo, with a decent bunch of testers turning up at the impressive Gumtree offices in the CBD.

Work stuff

Quest under private equity ownership continues to do well. I again managed to visit our major Engineering locations during the year, namely in China, California and the Czech Republic (all three within about two months, actually!), and the opportunity to travel and work with people from different cultures remains one of the most enjoyable (and challenging) aspects of my role.

I was promoted during the year, to “Director of Software Craft” (previously “Principal Test Architect”), giving me a broad remit to help the Engineering teams across the world improve the way they build, test and deploy their software.

Community work

My community efforts through 2018 were directed at two main things, viz. the EPIC TestAbility Academy (ETA) and the AST’s conference.

ETA – a software testing training course for young adults on the autism spectrum (in association with the not-for-profit disability organization, EPIC Assist) that I present together with Paul Seaman – continued in 2018 after the good start we made in 2017. Although we originally planned to run the course twice during the year, we only managed to run it once and I was absent for a large portion of it due to work and personal travel commitments (with Michele Playfair doing an outstanding job of covering for me). For the first time, we had a couple of students find placements actually doing software testing at the end of the course, which was incredibly rewarding. We hope to continue with ETA in 2019 if EPIC Assist can find a way to staff and fund the programme.

At the CASTx18 conference, I was asked by the AST to take on more formal responsibility for the ongoing organization of their Australian conference. It was not an easy decision, but I was honoured to be asked and decided to accept on the basis that Paul Seaman and I would jointly organize the conference from 2019 onwards. Paul and I decided to rebrand the conference and so “Testing in Context Conference Australia” (TiCCA) was born. We enjoyed coming up with a theme, inviting our keynote speakers (viz. Lynne Cazaly and Ben Simo), running a call for proposals, and selecting our speakers. Registrations are ticking along and we’re looking forward to running the two-day conference at the end of February at the Jasper Hotel. (More details on the conference and registration packages can be found at the conference website, http://ticca19.org.)

Other stuff

I got the opportunity to appear on two different podcasts during the year, something I’d never done before. The first one was for the New Zealand-based SuperTestingBros podcast where I talked about neurodiversity and ETA with Paul Seaman.

The second one was a long-distance affair, chatting with Johan Steyn from South Africa for his Careers in Software Testing podcast.

These were both good experiences, quite different in flavour but hopefully of general interest and I look forward to opportunities to do more podcasts in the future.

I feel like the year has been a good mix in terms of developing professionally while also giving back via a couple of community-focused projects in ETA and TiCCA. I’m sure 2019 has challenges in store and I have a new (personal) testing-related project hopefully kicking off early in the New Year, so watch this space for more details on that!

In the meantime, all that remains for me to do is wish you all a very Merry Christmas & Happy New Year, and I hope you enjoy my posts to come through 2019.

In response to “context-driven testing” is the “don’t do stupid stuff” school of testing

I recently blogged about the Twitter conversation that ensued from a tweet by Katrina Clokie.

One of the threads that came out of this conversation narrowed the focus down to “schools of testing” and, in particular, the context-driven testing community.

There’s a bit to unpack here, so let me address these replies piece by piece.

“Divisive rhetoric from some of the thought leaders in that camp”

I can only assume that Rex was referring to the more vocal members of the CDT community, such as James Bach. I haven’t personally experienced anyone trying to be deliberately divisive in the CDT community, but I acknowledge that passion sometimes manifests itself in some strongly-worded comments. Even then, I wouldn’t see this as “rhetoric” as that implies a lack of sincerity or meaningful content. The CDT community, in my experience, attracts those who are sincere about improving software testing, the way it’s done, and the value it delivers.

The use of the term “thought leaders” is also interesting as I don’t see anyone within this community referring to themselves or anyone else as thought leaders. There are obviously more prominent members of the CDT community but also many doing great work in advancing the craft of software testing in line with the principles of CDT behind the scenes (i.e. not so vocally via avenues such as social media).

“CDT is more accurately called the “pay attention” or the “don’t do stupid stuff” school of testing”

I’m not sure whether Matt Griscom’s response was designed to provoke CDT community members or stemmed from a genuine misunderstanding of the seven principles of CDT, which are:

  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
  6. Good software testing is a challenging intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

I agree that we should all be paying attention as testers (or as any other contributor to a project). Paying attention to the broader project context is really important if we are to do a great job of testing, yet it is still overlooked – too many testers seem to think the software in front of them is the most important (or, worse, the only) aspect of the context they need to care about.

The seven principles of CDT may well also help to decrease the chances of testers spending their time doing “stupid stuff”, but that seems like a good thing to me. Working in alignment with these principles is, to me, a better approach than following standards or “best practices” that fail to account for the unique context of the project I’m working in. I’d argue that many best practices or recommendations from other “schools” actively promote what would in fact be “stupid stuff” in many contexts.

“the value of the phrase “context-driven””

I don’t see “context-driven” as merely a phrase – we have a clear statement of the seven principles backing what “context-driven testing” is (see above) and the value comes from understanding what those principles mean and performing testing in alignment with them. Rex replied to Matt’s request for enlightenment, saying “‘Marketing’ is the value enjoyed by a small few testers. ‘Schism’ is the price paid by all other testers.” I don’t agree with this, and the use of the term “schism” is exactly the kind of divisive language Rex was accusing CDT community members of using. Does anyone “outside” the CDT community really “pay a price” for the existence of that community? I just don’t see it.

(The domain that Matt refers to is http://context-driven-testing.com/ and it’s not being actively maintained as far as I’m aware, but it does at least give us a reference point for the principles.)

There – obviously – remain challenges for the context-driven testing community in communicating the very real value and benefits that come from testing viewed via the lens of the CDT principles. It’s great to see the continued efforts of the Association for Software Testing in this regard, with their most recent CAST conference having the theme of “bridging between communities”. I’m also proud to co-organize the AST’s Australian conference, TiCCA19, and look forward to delivering a great programme to a broad representation of the local testing community, with a focus on CDT and the value that approaches built around CDT principles offer.

On the testing community merry-go-round

A tweet from Katrina Clokie started a long and interesting discussion on Twitter.

I was a little surprised to see Katrina saying this as she’s been a very active and significant contributor to the testing community for many years and is an organizer for the highly-regarded WeTest conferences in New Zealand. It seems that her tweet was motivated by her recent experiences at non-testing conferences and it’s been great to see such a key member of the testing community taking opportunities to present at non-testing events.

The replies to this tweet were plentiful and largely supportive of the position that (a) the testing community has been talking about the same things for a decade or more, and (b) it does not reach out to learn from & help educate other IT communities.

Groundhog Day?

Are we, as a testing community, really talking about the same things over and over again? I actually think we both are and aren’t – it really depends on the lens through which you look at it.

As Maria Kedemo replied on the Twitter thread, “What is old to you and me might be new to others”, and it’s certainly the case that many conference topics repeat the same subject matter year on year – but this is not necessarily a bad thing. A show of hands in answer to “who’s a first-timer?” at a conference usually sees a large proportion of hands go up, so there is always a new audience for the same messages. Provided these messages are sound and valuable, why not repeat them for new entrants to the community? A talk that looks identical from its title on a programme could also be very different in content from what it was a decade ago. While I’m not familiar with developer conference content, I imagine the picture there is similar, with some foundational developer topics being mainstays of conference programmes year on year.

I’ve been a regular testing conference delegate since 2007 (and a speaker since 2014) and have noticed significant changes in the “topics du jour” over this period. I’ve seen a move away from a focus on testing techniques and “testing as an independent thing” towards topics like quality coaching, testing as part of a whole-team approach to quality (thanks, agile), and the human factors involved in being successful as a tester. At developer-centric conferences, I imagine topics shift frequently with changes in technology and language, and likely also with agile adoption.

As you may know, I’m involved with organizing the Association for Software Testing conferences in Australia and I do this for a number of reasons. One is to offer a genuine context-driven testing community conference in this geography (because I see that as tremendously valuable in itself) and another is to build conference programmes offering something different from other testing events in Australia. The recently-released TiCCA19 conference programme, for example, features a keynote presentation from Lynne Cazaly; she is not directly connected with software testing, but will deliver very relevant messages to an audience drawn mainly from the testing community.

Reach out

I think most disciplines – be they IT, testing or otherwise – fail to capitalize on the potential to learn from others; maybe it’s just human nature.

At least in the context-driven part of the testing world, though, I’ve seen genuine progress in taking learnings from a broader range of disciplines including social science, systems thinking, psychology and philosophy. I personally thank Michael Bolton for introducing me to many interesting topics from these broader disciplines that have helped me greatly in understanding the human aspects involved in testing.

In terms of broadening our message about what we believe good testing looks like, I agree that it’s generally the case that the more public members of the testing community are not presenting at, for example, developer-centric conferences. I have recently seen Katrina and others (e.g. Anne-Marie Charrett) taking the initiative to do so, though, and hopefully more non-testing conferences will see the benefit of including testing/quality talks on their programmes. (I have so far been completely unsuccessful in securing a presentation slot at non-testing conferences via their usual CFP routes.)

So I think it’s a two-way street here – we as testing conference organizers need to be more open to including content from “other” communities and also vice versa.

I hope Katrina continues to contribute to the testing community – her voice would be sorely missed.

PS: I will blog separately about some of the replies to Katrina’s thread that were specifically aimed at the context-driven testing community.

The TiCCA19 conference programme is live!

After a successful 2018 event in the shape of CASTx18, the Association for Software Testing were keen to continue Australian conferences and so Paul Seaman and I took on the job of organizing the 2019 event on the AST’s behalf. We opted to rebrand the conference to avoid confusion with the AST’s well-known CAST event (held annually in North America) and so Testing in Context Conference Australia was born.

It’s been a busy few months in getting to the point where the full line-up for this conference is now live.

Paul and I decided to go for a theme with an Australian bent again, this time “From Little Things, Big Things Grow”. It’s always a milestone in planning a conference when the call for proposals opens, and we watched the proposals flow in, with the usual surge towards the CFP closing date of 31st October.

The response to the CFP was excellent, with 95 proposals coming in from all around the globe. We had ideas from first-time presenters and also some from very seasoned campaigners on the testing conference circuit. My thanks go to everyone who took the time and effort to put forward a proposal.

We were joined by Michele Playfair to help us select a programme from the CFP responses. This was an interesting process (as usual), involving some hard decisions to build what we considered the best conference programme from what was submitted. With only eight track session slots to fill, we unfortunately couldn’t choose all of the excellent talks we were offered.

The tracks we have chosen are hopefully broad enough in topic to be interesting to many testers. Our keynotes come from Ben Simo (making his first trip and conference appearance in Australia!) and local legend, Lynne Cazaly. Rounding out our programme are three full-day workshops showcasing top Melbourne talent, in the shape of Neil Killick, Scott Miles and Toby Thompson. I’m proud of the programme we have on offer and thank all the speakers who’ve accepted our invitation to help us deliver an awesome event.

The complete TiCCA19 line-up is:

Keynotes (March 1st)

  • Ben Simo with “Is There A Problem Here?”
  • Lynne Cazaly with “Try to see it my way: How developers, technicians, managers and leaders can better understand each other”

Tracks (March 1st)

  • “From Prototype to Product: Building a VR Testing Effort”
  • “Tales of Fail – How I failed a Quality Coach role”
  • “Test Reporting in the Hallway”
  • “The Automation Gum Tree”
  • “Old Dog, New Tricks: How Traditional Testers Can Embrace Code”
  • “The Uncertain Future of Non-Technical Testing”
  • Adam Howard with “Exploratory Testing: LIVE!”
  • “The Little Agile Testing Manifesto”

Workshops (February 28th)

  • Neil Killick with “From “Quality Assurance” to “Quality Champion” – How to be a successful tester in an agile team”
  • Scott Miles with “Leveraging the Power of API Testing”
  • Toby Thompson with “Applied Exploratory Testing”

For more details about TiCCA19, including the full schedule and the chance to benefit from significant discounts during the “Little Ripper” period of registration, visit ticca19.org

I hope to see you in Melbourne next year!

Another survey on the state of testing worldwide, the “ISTQB Worldwide Software Testing Practices Report 2017-18”

I blogged recently about the Capgemini/Micro Focus/Sogeti “World Quality Report 2018/19” (WQR) and, shortly afterwards, another report from a worldwide survey around software testing appeared, this time in the shape of the ISTQB Worldwide Software Testing Practices Report 2017-18. The publication of this report felt like another opportunity to review the findings and conclusions, as well as comparing and contrasting with the WQR.

The survey size is stated as “more than 2000” so it’s similar in reach to the WQR, but it’s good to see that the responses to the ISTQB survey are much more heavily weighted to testers than managers/executives (with 43% of responses from people identifying themselves as a “Tester” and 77% being “technical” vs. 23% “managers”). Organizational size information is not provided in this report, whereas the WQR data showed it was heavily skewed towards the largest companies.

The ISTQB report comes in at a light forty pages compared to the WQR’s seventy, in part due to its different presentation style. This report mainly consists of data with some commentary on it, without the lengthy conclusions drawn in the WQR.

Main findings (pages 4-5)

The report’s “main findings” are listed as:

  1. More than 2000 people from 92 countries contributed to the … report. In this year’s report, respondents’ geographic distribution is quite well balanced.
  2. The outcome of the 2017-2018 report is mostly in parallel with the results of the one done in 2015-16.
  3. Test analyst, test manager and technical test analyst titles are the top three titles used in a typical tester’s career path.
  4. Main improvement areas in software testing are test automation, knowledge about test processes, and communication between development and testing.
  5. Top five test design techniques utilized by software testing teams are use case testing, exploratory testing, boundary value analysis, checklist based, and error guessing.
  6. New technologies or subjects that are expected to affect software testing in near future are security, artificial intelligence, and big data.
  7. Trending topics for software testing profession in near future will be test automation, agile testing, and security testing.
  8. Non-testing skills expected from a typical tester are soft skills, business/domain knowledge, and business analysis skills.

The first “finding” is not really a finding; it’s data about the survey and its respondents. There is nothing particularly surprising in the other findings. Finding number 5 is interesting, though, as I wouldn’t expect to see “exploratory testing” considered a test technique alongside the likes of boundary value analysis. For me, exploratory testing is an approach to testing during which we can employ a variety of techniques (such as BVA).

Background of respondents (pages 8-9)

The geographical distribution of responses is probably indicative of ISTQB strongholds, with 33% from Asia, 27% from North America and 27% from Europe.

More than half of the respondents come from “Information Technology” organizations, so this is again a difference from the WQR and indicates a different target demographic. Three quarters of the responses here are from just four organization types, viz. IT, Financial Services, Healthcare & Medical, and Telecom, Media & Entertainment.

Organizational and economic aspects of testing (pages 11-14)

In answering “Who is responsible for software testing in your company?”, a huge 79% said “In-house test team” but this isn’t the whole story as just 30% said “Only in-house test team”, so most are also using some other supplemental source of testers (e.g. 19% said “off-shore test team”). When it comes to improving testers’ competency, 50% responded with “Certification of competencies” which is again probably due to the ISTQB slant on the targets for the survey. It’s good to see a hefty 27% of respondents saying “Participation at conferences”, though.

The classic “What percent of a typical IT/R&D project budget is allocated to software testing?” question comes next. This continues to baffle me as a meaningful question, especially in agile environments where what constitutes testing as opposed to development is not easy to determine. The most common answer here (41% of responses) was “11-25%” while only 8.5% said “>40%”. You might recall that the WQR finding in this area was 26%, so this report is broadly consistent. But it still doesn’t make sense as something to measure, at least not in the agile context of my organization.

When asked about their expectations of testing budget for the year ahead, 61% indicated some growth, 31% expected it to be stable and just 8% expected a decline.

Processes (pages 15-23)

As you’d probably expect, the Processes section is the chunkiest of the whole report.

It kicks off by asking “What are the main objectives of your testing activities?” with the top three responses being “To detect bugs”, “To show the system is working properly” and “To gain confidence”. While finding bugs is an important part of our job as testers, it is but one part of the job and arguably not the most important one. The idea that testing can show the system “is working properly” concerns me, as does the idea that we can give other people confidence by testing. What we need to focus on is testing to reveal information about the product and communicating that information in ways that help others decide whether we have the product they want and whether the risks to its value that we identify are acceptable. A worrying 15% of responses to this question were “To have zero defects”.

A set of 17 testing types and 18 testing topics form the basis for the next question, “Which of the below testing types and/or topics are important for your organization?” Functional testing easily won the testing types competition (at 83%) while user acceptance testing took the gong in the topics race (at 66%). Thinking of testing in this “types” sort of breakdown is a feature of the ISTQB syllabus, but I’m not convinced it has much relevance in day-to-day testing work, though I appreciate other contexts might see this differently. 53% of respondents said exploratory testing was an important topic, but later responses (see “Testing techniques & levels”) make me uneasy about what people are thinking of as ET here.

When it comes to improvement, 64% of respondents said “Test automation” was the main improvement area in their testing activities. I’m not sure whether the question was asking which areas they see as having improved the most or the areas that still have the most room for improvement, but either way, it’s not surprising to see automation heading this list.

The final question in this section asks “What are the top testing challenges in your agile projects?”, with “Test automation”, “Documentation” and “Collaboration” heading the answers. The report suggests: “The root cause behind these challenges may be continuously evolving nature of software in Agile projects, cultural challenges/resistance to Agile ways of working”. While these are possible causes, another is the mistaken application of “traditional” approaches to software testing (as still very much highlighted by the ISTQB syllabus) in agile environments.

Skills & career paths (pages 24-31)

The second largest portion of the report kicks off by looking at the career path of testers in the surveyed organizations. The most commonly reported career path is “Tester -> Test Analyst”, closely followed by “Tester -> Test Manager”. I don’t find titles like those used here very relevant or informative; they mean quite different things in different organizations, so this data is of questionable value. Similarly, the next question – “What could be the next level in the career path for a test manager?” (with the top response being a dead heat between “Test Department Director” and “Project Manager”) – doesn’t really tell me very much.

More interesting are the results of the next question, “Which testing skills do you expect from testers?” with the answers: Test Execution (70%), Bug Reporting (68%), Test Design (67%), Test Analysis (67%), Test Automation (62%), Test Planning (60%), Test Strategy (52%), Test Implementation (50%), Test Monitoring (38%), Bug Advocacy (29%) and Other (2%). This indicates, as the report itself concludes, that today’s tester is expected to have a broad range of skills – testing is no longer about “running tests and reporting bugs”.

The last two questions in this section are around “non-testing skills” expected of testers, firstly in an agile context and then in a non-agile context. The answers are surprisingly similar, with “Soft skills”, “Business/domain knowledge” and “Business Analysis” forming the top three in both cases (albeit with the second two skills reversed in order). It troubles me to think in terms of “non-testing skills” when we really should be encouraging testers to build skills in the areas that add most value to their teams, in whatever context that happens to be. In drawing distinctions between what is and isn’t a testing skill, I think we diminish the incredibly varied skills that a great tester can bring to a team.

Tools & automation (pages 32-33)

On tool usage, the majority of respondents indicated use of defect tracking, test automation, test execution, test management, and performance testing tools. This is unsurprising as raw statistics, but it would be nice to know how those tools are being used to improve outcomes in the respondents’ environments.

The other question in this section is “What is the percentage of automated test cases you use with respect to your overall test cases?” Maybe you can hear my sighs. How anyone can honestly answer this question is beyond me but, anyway, 19% of respondents said more than 50%, while close to half of them said less than 10%. The report makes the mistake of interpreting these numbers as coverage percentages, when that is not what the question asked: “Almost half of respondents that implemented automated tests reported that their coverage is up to 20%”. The question in itself is meaningless and reinforces the common misconceptions that all tests are equal and that you can compare automated tests to “other” tests in a meaningful way.

Testing techniques & levels (pages 34-35)

It’s interesting to see the list of “test techniques” on offer in answering “Which test techniques are utilized by your testing team?”. The top five responses were Use Case Testing (73%), Exploratory Testing (67.2%), Boundary Value Analysis (52.3%), Checklist-Based (49.7%) and Error Guessing (36%). I’m assuming respondents answered here in accordance with the definitions of these techniques from the ISTQB syllabus. I find it almost impossible to believe that two-thirds of the sample are really doing what those of us in the context-driven testing world would recognize as exploratory testing. The list of techniques doesn’t contain comparable things for me anyway; again, I see ET as an approach rather than a technique comparable to boundary value analysis, equivalence partitioning, decision tables, etc. A sketch of what I mean follows.
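To make that distinction concrete, here’s a minimal sketch (my own hypothetical example, not something from the report) of what a scripted technique like boundary value analysis produces for an imaginary input field accepting integers from 1 to 100: a fixed set of inputs derived mechanically from the specification. Exploratory testing can’t be enumerated this way, which is why listing the two as peer “techniques” sits oddly with me.

```python
# Boundary value analysis (BVA) for a hypothetical field that accepts
# integer quantities from 1 to 100 inclusive. The technique mechanically
# derives test inputs at and around each boundary of the valid range.
import pytest

def accepts_quantity(value: int) -> bool:
    """Stand-in for the system under test: valid quantities are 1..100."""
    return 1 <= value <= 100

@pytest.mark.parametrize("value,expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # on the lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # on the upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(value, expected):
    assert accepts_quantity(value) == expected
```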

When it comes to test “levels”, system and integration testing are indicated as consuming the most of the testing budget, unsurprisingly. It’s not clear where spend on automated testing fits into these levels.

Future of testing (pages 36-39)

In answering “Which new technologies or subjects will be important to the software testing industry in the following 5 years?”, around half of the respondents said Security, Artificial Intelligence, Big Data and Cloud. Answering “What will be the most trending topic for software testing profession in near future”, the top responses were Test Automation, Agile Testing and Security Testing. This second question doesn’t seem very useful – what does “most trending topic” really mean? The two questions in this section of the survey were unlikely to result in revelations – and they didn’t.

Wrapping up

With less wordy conclusion-drawing in the ISTQB report than in the World Quality Report, there is more room for the reader to look at the data and form their own opinions of what it is telling them. For me, the questions and possible answers generally don’t tell me a great deal about what testers are really doing, what challenges they are facing, or how we grow both testers and testing in the future.

ER of attending and presenting at the inaugural TestBash Australia conference (Sydney)

The first TestBash conference to be held in Australia/New Zealand took place in Sydney on October 19. The well-established conference brand of the Ministry of Testing ensured a sell-out crowd (of around 130) for this inaugural event, quite an achievement in the tough Australian market for testing conferences. The conference was held in the Aerial function centre at the University of Technology Sydney.

The Twitter hashtag for the event was #testbash (from which I’ve borrowed the photos in this post) and this was very active across the conference and in the days after.

I was there to both attend and present at the conference. In fact, I would be co-presenting with Paul Seaman on our volunteer work teaching software testing to young adults on the autism spectrum. It was great to have this opportunity and we were humbled to be selected from the vast response the conference had to its call for papers.

The event followed the normal TestBash format, viz. a single day conference consisting of a single track with an opening and closing keynote plus a session of “99 second talks” (the TestBash version of lightning talks). Track sessions were 30 or 45 minutes in duration, generally with very little time after each talk for questions from the audience (especially in the case of the 30-minute slots).

Early arrivals were rewarded with the opportunity to participate in a Lean Coffee session out on the balcony at the Aerial function centre, a nice way to start the day in the morning sunshine (and with pretty good barista coffee too!).

The conference proper kicked off at 8.50am with a short opening address from the event MC, Trish Koo. She welcomed everyone, gave some background about the Ministry of Testing and also gave a shout out to all of the sponsors (viz. Enov8, Applitools, Gumtree, Tyro and Testing Times).

The opening keynote came from Maaret Pyhajarvi (from Finland) with “Next Level Teamwork: Pairing and Mobbing”. Maaret is very well-known for her work around mobbing and this was a good introductory talk on the topic. She mentioned that mobbing involves everyone in the team working together around one computer, which helps learning as everyone knows something that the others don’t. By way of contrast, she outlined strong-style pairing, in which “I have an idea, you take the keyboard to drive”. In this style, different levels of skill help – being unequal at the task is actually a good thing. Maaret said she now only uses pairing as a way to train people, not to actually test software. In a mobbing scenario, there is always one driver on the keyboard who is only following instructions and not thinking, while a designated navigator makes decisions on behalf of the group. The roles are rotated every four minutes and a retro is held at the end of every session. Maaret also noted the importance of mixing roles in the mob (e.g. testers, developers, automation engineers). This was a strong opening keynote with content pitched at just the right level for it to be of general interest.


Next up was a 30-minute talk from Alister Scott (from Automattic) with “How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com”. He introduced his talk by giving some context about the way the company is organized – 800 people across 69 countries, with everyone remote (i.e. no offices!), and all internal communications facilitated by WordPress (dogfooding). Alister structured his talk as a series of problems and their solutions, starting with the problem of broken customer flows in production (when they moved to continuous delivery). Their solution to this problem was to add automated end-to-end testing of signup flows in production (and only in production). This solution led to the next problem, non-deterministic end-to-end tests due to ever-changing A/B tests, which was solved by overriding A/B tests during testing. The next problem was these new tests being too slow, too late (only in production) and too hidden, so they moved to parallel tests and added “canaries” on merge (before deployment) – simple tests of key features (signing up and publishing a page) designed to give fast feedback on major breaking changes. This led to the next problem, having to revert merges and slow local runs, to which the solution was live branch tests with canaries on every pull request. Of course, canaries don’t find all the problems, so the solution then was to add optional full test suites on live branches. Even then, a problem persisted with Internet Explorer 11 and Safari 10 specific issues, so IE11 and Safari 10 canaries were added. The final problem is still current, in that people still break end-to-end tests! This was a nicely structured short talk about a journey of end-to-end testing and how solving one problem led to another (ultimately putting them in the position of having no manual regression testing) – good content.
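For readers unfamiliar with the “canary” idea Alister described, here’s a minimal sketch of what such a test might look like – my own illustration (using Selenium in Python, with hypothetical URLs and selectors), not Automattic’s actual implementation: a single fast check of one critical user flow, run on every merge so that major breakages surface quickly.

```python
# A minimal "canary" end-to-end test: one quick check of a key user flow
# (signup), intended to run on every merge for fast feedback on major
# breaking changes. All URLs and selectors here are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BASE_URL = "https://example.wordpress.com"  # hypothetical target environment

def test_signup_canary():
    driver = webdriver.Chrome()
    try:
        driver.get(BASE_URL + "/start")
        # Exercise only the critical path: can a new user complete signup?
        driver.find_element(By.ID, "email").send_keys("canary@example.com")
        driver.find_element(By.ID, "username").send_keys("canary-user")
        driver.find_element(By.ID, "submit").click()
        # Fail fast if the signup confirmation never appears.
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, ".signup-success"))
        )
    finally:
        driver.quit()
```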


A welcome break for morning tea and a chance to catch up with familiar faces came next before the delegates reconvened, with Enov8 getting the chance for a 99-second sponsor talk before sessions resumed.

First up after the break was a 30-minute session thanks to Michele Playfair (of YOW!) with “A Tester’s Guide to Changing Hearts and Minds”. Her key message was that changing people’s opinions about testing is essentially a marketing exercise, and she introduced the “4 P’s of marketing”, viz. Product, Price, Promotion and Placement. She argued that, as testers, we need to be better at defining our product (we should be able to answer questions like “what do you do here?”) and also at promoting ourselves (by building relationships and networks, and revealing our value). This was a good short talk from Michele, a different angle on the topic of testers describing and showing their value.


Next up was Peter Bartlett (of Campaign Monitor) with a 45-minute talk on “Advancing Quality by Turning Developers into Quality Champions”. He defined a “quality champion” as “a developer who actively promotes quality in their team”, with this being a temporary role (typically lasting six months or so) which is rotated amongst the team. He generally selects someone who already displays a quality mindset or is an influencer within the team to take on the role initially, then trains them via one-on-one meetings, contextual training and set goals. He encourages them to ask questions like “what areas are hard to test and why?”, “what can I do to make it easier for you to develop your code and be confident in its quality?”, and “what’s the riskiest piece of what you’re working on?”. Pete holds regular group meetings with all of the quality champions; these might be demo meetings, lean coffees or workshops/activities (e.g. how to write good acceptance criteria, dealing with automation flakiness, playing the dice game, introducing a new tool, how to use heuristics, live group testing). He has noted some positive changes as a result of using this quality champions model, including increased testability, a growth in knowledge and understanding around quality, new automation tests and performance tool testing research. Pete wrapped up with some tips, including starting small, taking time to explain and listen (across all project stakeholders), and continuing to review. This was a similar talk to Pete’s at the CASTx18 conference earlier in the year, but it felt more fully developed here, no doubt as a result of another six months or so of trying this approach at Campaign Monitor.


As the clock struck noon, it was time for Paul Seaman (of Travelport Locomote) and me to take the big stage for our 30-minute talk, “A Spectrum of Difference – Creating EPIC Software Testers”. We outlined the volunteer work we’ve been doing with EPIC Assist to teach software testing to young adults on the autism spectrum (a topic on which I’ve already blogged extensively) and we were pleased with how our co-presenting effort went – and we thought we looked pretty cool in our EPIC polo shirts! We managed to finish up just about on time and the content seemed to resonate with this audience.


With our talk commitment completed, it was lunch hour (albeit with very limited vegan options despite pre-ordering) and it was good to get some fresh air and sunshine out on the venue’s balcony. Paul and I received lots of great feedback about our talk during lunch – it’s always so nice when people make the effort to express their thanks or interest.

Returning from lunch, Applitools got their 99 seconds of fame as a sponsor before presentations resumed with a 45-minute session by Adam Howard (of TradeMe), “Exploratory Testing: LIVE”. This was a really brave presentation, with Adam performing exploratory testing of a feature in the TradeMe website (New Zealand’s eBay) that had been deliberately altered by a developer in ways Adam was not aware of (via an A/B deployment in production). It was brave in many ways: he relied on internet connectivity and a stable VPN connection back to his office in New Zealand, and also exposed himself to testing a feature for the first time in front of 130 eagle-eyed testers! He applied some classic ET techniques and talked through everything he was doing in very credible terms, so this session served as an object lesson to anyone unfamiliar with what genuine exploratory testing looks like and how valuable it can be (Adam unearthed many issues, some of which probably weren’t deliberately introduced for the purposes of his session!). Great work from a solid presenter.


The following 30-minute talk was Paul Maxwell-Walters with “Avoid Sleepwalking to Failure! On Abstractions and Keeping it Real in Software Teams”. This was a really serious talk, high on well-researched content, and it was a struggle to give it all the coverage it deserved in such a short slot. He introduced the ideas of hyper-normalization and hyper-reality before getting into abstractions, viz. “quality” and “measurement”. I particularly liked this quote from his talk: “bad metrics and abstractions are delusional propaganda”! This might have been a better talk had he tried to cover less content, but nevertheless it was really engaging and interesting stuff.


The final break came next before we reconvened for the push to the finish. First up after the break was another 99-second sponsor talk, this time Anne-Marie Charrett (conference co-organizer) on her consultancy business, Testing Times.

The last 30-minute slot went to first-time conference presenter, Georgia de Pont (of Tyro), with “Test Representatives – An Alternative Approach to Test Practice Management” and she presented very confidently and calmly on her first outing. She outlined how Tyro moved to having testers embedded in agile teams and, while there were lots of positives from doing this, there was also a lack of consistency in test practice across the teams and no way to consider practice-wide improvements. She went on to talk about the move to “test representatives” (who are themselves embedded testers in teams), one from each tribe, who have a mission to provide a community for testers and act as points of contact for initiatives impacting testing. Each representative then shares the outputs of the representatives group with their team. Initiatives the representatives have covered so far include clarifying the role of the embedded tester, improving the test recruitment process (via a pair testing exercise), onboarding new test engineers, performance criteria for test engineers, upskilling test engineers, co-ordinating test engineers across the engineering organization and developing a Quality Engineering strategy. There is also a stretch goal for testers to operate across teams. Georgia’s recommended steps to implement such a model were to start small, look for volunteers over selection, communicate the work of the representatives across the organization, survey to get feedback, hold retros within the representatives group and foster support from engineering leadership. This was a solid talk, especially impressive considering Georgia’s lack of experience in this environment.


The final presentation of the day was a closing keynote thanks to Parimala Hariprasad (of Amadeus) with “Enchanting Experiences – The Future of Mobile Apps”. Her time on stage was pretty brief (using only a little over half of her 45-minute slot before Q&A) but very engaging. She argued that designing great products isn’t about good screens, it’s about great – enchanting – experiences. She said we should think more about ecosystems than apps and screens as systems become more complex and interconnected. Her neat slides and confident presentation style made her messaging very clear and she also handled Q&A pretty well.


The last session of the conference was dedicated to “99 second talks”, the TestBash version of lightning talks in which each speaker gets just 99 seconds to present on a topic of their choice. There were plenty of volunteers, so the time freed up by the short keynote was filled with more 99-second talks, some 18 in total, as follows:

  • Sam Connelly – on depression (and introducing “spoon theory”)
  • Amanda Dean – on why she believes testing is not a craft and should be thought of as a profession
  • Maaret Pyhajarvi – live exploratory testing of an API (using the Gilded Rose example, as per her recent webinar on the same topic)
  • Cameron Bradley – on why a common automation framework is a good thing (based on his experience of implementing one at Tabcorp)
  • Dany Matthias – on experimenting with coffee!
  • Melissa Ngau – on giving and receiving feedback
  • Geoff Dunn – on conflict and how testers can help to resolve it
  • Catherine Karena – on mentoring
  • Nicky West – what is good strategy?
  • Kim Nepata – Blockchain 101
  • Sunil Kumar – mobile application testing: how, what and why?
  • Said – on rotations and why they’re essential in development teams
  • Melissa (Editor Boss at Ministry of Testing) – living a dream as a writer
  • Leela – on transitioning from a small to a large company
  • Haramut – demo of a codeless automation framework
  • Trish Koo – promoting her test automation training course
  • Anne-Marie Charrett – “Audience-Driven Speaking”
  • Maaret Pyhajarvi – promoting the Speak Easy mentoring programme


After a brief closing speech from Trish Koo, the conference closed out. The action then moved to the nearby Knox Street Bar for a post-conference “meetup” with free drinks courtesy of sponsorship from Innodev. This was a fun evening, relaxing with old friends from the testing community and talking conference organizing with others involved in this, erm, fun activity!


I’ll finish off this blog post with some general thoughts on this conference.

The standard of presentations was excellent, as you might expect from a TestBash given the massive response (around 250 submissions) to their call for papers. The mix of topics was also very good, from live exploratory testing (I would love to see something like this at every testing conference) to automation to coaching/training/interpersonal talks.

The single track format of all TestBash conferences means there is no fear of missing out, but the desire to pack as many talks as possible into the single day means very limited opportunity for Q&A (which is often where the really interesting discussions are). I personally missed the deep questioning that occurs post-presentations at conferences like CAST.

Although the sponsor talks were kept to short 99-second formats, I still find sponsor talks of any kind uncomfortable, especially at a relatively expensive conference.

Paul and I enjoyed presenting to this audience and the Ministry of Testing do an excellent job in terms of pre-gig information and speaker compensation (expensing literally door-to-door). We appreciated the opportunity to share our story and broaden awareness of our programme with EPIC Assist.