Experience report of presenting at the LAST conference (and observations on the rise of “QA”)

As I’ve blogged previously, I was set to experience three “firsts” at the recent LAST conference held in Melbourne. Now on the other side of the experience, it’s worth reviewing each of those firsts.

It was my first time attending a LAST conference and it was certainly quite a different experience to any other conference I’ve attended. Most of my experience is in attending testing-related conferences (of both commercial and community varieties) and LAST was a much broader church, but still with a few testing talks to be found on the programme.

With about a dozen concurrent tracks, choosing talks was a tough job – having so many tracks just seems a bit OTT to me. As is usually the case, it was the first-person experience reports that made for the highlights of this conference. The Seek guys, Brian Rankin and Norman Noble, presented Seek’s agile transformation story in “Building quality products as a team” and this was a compelling and honest account of their journey. In “Agile @ Uni: patience young grasshopper”, Toby Durden and Tim Hetherington (both of Deakin University) talked about a similar journey at their university and the challenges of adopting more agile approaches at program rather than project level – again, good open, honest and genuine storytelling.

(I also made an effort to attend the talks specifically on testing, see later in this blog post for my general thoughts around those.)

The quality of information provided by the LAST organizers in the lead-up to the conference was second to none, so hats off to them for preparing so well and giving genuinely useful information to presenters. Having said that, the experience “on the day” wasn’t great in my opinion. It still amazes me that conferences think it’s OK not to have a room helper for each and every session, especially conferences like this one that encourage lots of new or inexperienced presenters. A room helper can cover introductions, facilitate Q&A, keep things on track timewise, and assist with any AV issues – and their presence alone can be a comfort to a nervous presenter.

Secondly, this was the first time I’d co-presented a talk at a conference and it turned out to be a very good experience. Paul Seaman and I practiced our talk a few times, both via Skype calls and also in front of an audience, so we were confident in our content and timing as we went into the “live” situation. It was great to have some company up there and sharing the load felt very natural & comfortable. Paul and I are already discussing future joint presentations now that we know we can make a decent job of it. (The only negatives surrounding the actual delivery of the talk related to the awful room we had been given, with the AV connection being at the back of the room meaning we couldn’t see our soft-copy speaker notes while presenting – but neither of us thought this held us back from delivering a good presentation.)

Lee and Paul kicking off their presentation at LAST

Thirdly, this was the first time I’d given a conference talk about my involvement with the EPIC TestAbility Academy. The first run of this 12-week software testing training programme for young adults on the autism spectrum has just finished and Paul & I are both delighted with the way it’s gone. We’ve had amazing support from EPIC Recruit Assist and learned a lot along the way, so the next run of the programme should be even better. My huge thanks to the students who stuck with us and hopefully they can use some of the skills we’ve passed on to secure themselves meaningful employment in the IT sector. The feedback from our talk on this topic at LAST was incredible, with people offering their (free) help during future runs of the training, describing what we’re doing as “heartwarming” and organizations reaching out to have us give the same talk in their offices to spread the word. This was a very rewarding talk and experience – and a big “thank you” to Paul for being such a great bloke to work with on this journey.

Turning to the testing talks at LAST (and also the way testing was being discussed at Agile Australia the week before), I am concerned about the way “QA” has become a thing again in the agile community. I got the impression that agile teams are looking for a way to describe the sort of contributions I’d expect a good tester to make to a team, but are unwilling to refer to that person as a “tester”. Choosing the term “QA” appeared to be seen as a way to talk about the broader responsibilities a tester might have apart from “just testing stuff”. The danger here is in the loading of the term “QA” – as in “Quality Assurance” – and using it seems to go against the whole team approach to quality that agile teams strive for. What’s suddenly wrong with calling someone a “tester”? Does that very title limit them to such an extent that they can’t “shift left”, be involved in risk analysis, help out with automation, coach others on how to do better testing, etc.? I’d much rather we refer to specialist testers as testers and let them show their potentially huge value in agile teams as they apply those testing skills to more than “just testing stuff”.

Attending the Agile Australia conference (June 22 & 23, 2017)

Although the Agile Australia conference has been running for nine years, I attended it for the first time recently when it took place in Sydney. It was again sold out (and oversold if the “standing room only” keynotes and rumours of mass late registrations from one of the larger sponsors were anything to go by) and it’s become a massive commercial conference, set to celebrate its tenth anniversary next year in Melbourne.

There was a big selection of talks, with each day kicked off by three back-to-back forty-minute keynotes before splitting into multiple tracks (one of which consisted of so-called “sponsored content”).

The keynotes on both days were of high quality and certainly some of the best talks of the conference for me. Barry O’Reilly was entertaining and engaging in his talk on lessons learned in trying to deploy lean in enterprise environments, while Jez Humble busted a few myths about adopting continuous delivery in various kinds of organizations. He won me over when he mentioned Exploratory Testing as part of the CD pipeline – the only mention of ET I heard during the entire event. Neal Ford did a good job in his keynote on how best practices turn into anti-patterns, and Sami Honkonen’s talk on the building blocks required to build a responsive organization was a highlight of the conference.

In terms of track sessions, there wasn’t a single session dedicated to testing – maybe everyone with a good testing story to tell has simply given up submitting to this conference (my last two submissions haven’t been accepted) – but there was plenty to keep me occupied. Highlights were John Contad’s passionately delivered talk about mentoring at REA Group, Dr Lisa Harvey-Smith’s fascinating presentation on dark matter, and Estie & Anthony Boteler’s talk about working with an intern software tester on the autism spectrum, also at REA Group. This last talk resonated strongly with me, thanks to my recent work with Paul Seaman and EPIC Recruit Assist in delivering the EPIC TestAbility Academy software testing training programme for young adults on the autism spectrum.

My takeaways were:

  • The focus in the agile community has moved away from “doing Scrum better” to looking at the human factors in successful projects.
  • Talks on psychological safety, neurodiversity, mentorship and such were great to see here, as the importance of people in project success becomes better understood.
  • Testing as a skilled craft is still not being valued by this community, with the crucial role of exploratory testing being mentioned only once in all the talks I attended.

Out of the thousand or so official photos from this conference, there’s only one to provide evidence of my attendance – waiting in line at the coffee cart, kind of says it all really.

Some firsts at the LAST conference (Melbourne)

My next conference speaking gig has just come in – the LAST conference in Melbourne at the end of June 2017. This event will mark a series of “firsts” for me.

Firstly (pun intended), this will be my first time attending a LAST conference so I’m looking forward to the huge variety of speakers they have and being part of a community-driven event.

Secondly, this will be the first time I’ve co-presented a talk at a conference. I expect this to be quite a different experience to “going solo” but, given that I’m doing it with my good mate Paul Seaman, I’m comfortable it will go very well.

Finally, this will be the first time I’ve given a conference talk about my involvement with the EPIC TestAbility Academy. Both Paul and I are excited about this project to teach software testing to young adults on the autism spectrum (and we’ve both blogged about it previously – Paul’s blog, Lee’s blog) and we’re pleased to have the opportunity to share our story at this conference. Working together to create a slide deck is another first for both of us and it’s an interesting & enjoyable challenge, for which we’ve found effective new ways of collaborating.

Thanks to LAST for selecting our talk. I’ll blog about the experience of delivering it after the event.

Program Chair for CASTx18 in Melbourne

It was to my great surprise that I was recently asked to be Program Chair for the AST’s second conference venture in Australia, CASTx18 in Melbourne next year.

The first CAST outside of North America was held in Sydney in February 2017 and was so successful that the AST have opted to give Australia another go, moving south to Melbourne. Although my main responsibility as Program Chair revolves around the conference content, as a local I can also help out with various tasks “on the ground” to assist the rest of the AST folks, who are based outside of Australia.

Coming up with a theme was my first challenge and I’ve opted to give it some Aussie flavour, with “Testing in the Spirit of Burke & Wills” to evoke the ideas of pioneering and exploration.

I’m excited by this opportunity to put together a conference of great content for the local and international testing community – and also humbled by the AST’s faith in me to do so.

Keep an eye on the AST CASTx18 website for more details; the date and venue shouldn’t be too far away, with a CFP to follow.

We’re the voice

A few things have crossed my feeds in the last couple of weeks around the context-driven testing community, so I thought I’d post my thoughts on them here.

It’s always good to see a new edition of Testing Trapeze magazine and the April edition was no exception in providing some very readable and thought-provoking content. In the first article, Hamish Tedeschi wrote on “Value in Testing” and made this claim:

Testing communities bickering about definitions of inane words, certification and whether automation is actually testing has held the testing community back

I don’t agree with Hamish’s opinion here and wonder what basis there is for claiming that these things (or indeed any others) have “held the testing community back” – held it back from what, compared to some unknown state of where it might have been otherwise?

Michael Bolton tweeted shortly after this publication went live (but not in response to it) that:

Some symptoms [of testers who don’t actually like testing] include fixation on tools (but not business risk); reluctance to discuss semantics and why chosen words matter in context.

It seems to be a common – and increasingly common – criticism of those of us in the context-driven testing community that we’re overly focused on “semantics” (or “bickering about definitions of inane words”). But we’re not just talking about the meaning of words for the sake of it; rather, we aim to “make certain distinctions clear, with the goal of reducing the risk that someone will misunderstand—or miss—something important” (Michael Bolton again, [1]).

I believe these distinctions have led to less ambiguity in the way we talk about testing (at least within this community), and that doesn’t feel like something that would hold us back – rather the opposite. As an example, the introduction (and later refinement) of “testing” and “checking” (see [2]) was an important one: it allows for much easier conversations with many different kinds of stakeholders about the differences, in a way that the terminology of “validation” and “verification”, for example, really didn’t.
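
For readers who haven’t come across the distinction, here’s a minimal sketch of what a “check” looks like – this is my own illustration in Python, with every name invented, not an example from [2]:

```python
# A minimal, hypothetical illustration of a "check" in the testing vs.
# checking sense: an algorithmic decision rule applied to a specific
# observation, producing a binary result. (All names here are invented.)

def total_price(items):
    """Toy product code under test."""
    return sum(price for _, price in items)

def check_cart_total():
    # The check itself: machine-decidable, pass or fail, no judgement.
    cart = [("book", 20.00), ("pen", 2.50)]
    assert total_price(cart) == 22.50

if __name__ == "__main__":
    check_cart_total()
    print("check passed")

# The *testing* is the human work around checks like this: deciding which
# totals are worth checking, wondering about empty carts, refunds or
# rounding, and interpreting what a failure would actually mean.
```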

While writing this blog post, Michael posted a blog in which he mentions this subject again (see [3]):

Speaking more precisely costs very little, helps us establish our credibility, and affords deeper thinking about testing

Thanks to Twitter, I then stumbled across an interview between Rex Black and Joe Colantonio, titled “Best Practices Vs Good Practices – Ranting with Rex Black” (see [4]). In this interview, there are some less than subtle swipes at the CDT community, e.g. “Rex often sees members of the testing community take a common phrase and somehow impart attributes to it that no one else does.” The example used for the “common phrase” throughout the interview is “best practices” and, of course, the very tenets of CDT call the use of this phrase into question.

From the same write-up: “Rex offered up an awesome rebuttal to use the next time you find yourself attempting to explain best practices to people, which is: Think pattern, not recipe.” And later:

“How can some people have such an amazingly violent reaction to such an anodyne phrase? And why do they think it means ‘recipe’ when it’s clearly not meant that way?”

In case you’re unfamiliar with the word, “anodyne” is defined in the Oxford English Dictionary as meaning “not likely to cause offence or disagreement and somewhat dull”. So the suggestion is that the term “best practices” is unlikely to cause disagreement – and therein lies the exact problem with using it. Rex suggests that we “take a common phrase [best practices] and somehow impart attributes to it that no one else does” (emphasis is mine). The fact that he goes on to offer a rebuttal to misuse of the term suggests to me that the common understanding of what it means is not so common. Surely it’s not too much of a stretch to see that some people might read “best” as meaning “there are no better”, thus taking so-called “best practices” and applying them in contexts where they simply don’t make any sense.

Still in my Twitter feed, it was good to see James Christie continuing his work in standing against the ISO 29119 software testing standard. You might remember that James spoke about this at CAST 2014 (see [5]), and his talk started something of a movement against the imposition of a pointless and potentially damaging standard on software testing – the resulting “Stop 29119” campaign was the first time I’d seen the CDT community come together so strongly and voice its opposition to something in such a united way (I blogged about it too, see [6]).

It appears that some of our concerns were warranted with the first job advertisements now starting to appear that demand experience in applying ISO 29119.

James recently tweeted a link to a blog post (see [7]):

Has this author spoken to any #stop29119 campaigners? There’s little evidence of understanding the issues.
http://intland.com/blog/agile/test-management/iso-29119-testing-standard-why-the-controversy/ … #testing

Read the blog post and make of it what you will. This part stood out to me:

Innitally there was controversy over the content of the ISO 29119 standard, with several organizations in opposition to the content (2014).  Several individuals in particular from the Context-Driven School of testing were vocal in their opposition, even beginning a petition against the new testing standards, they gained over a thousand signatures to it.  The opposition seems to have been the result of a few individuals who were ill – informed about the new standards as well as those that felt excluded from the standards creation process

An interesting take on our community’s opposition to the standard!

To end on a wonderfully positive note, I’m looking forward to attending and presenting at CAST 2017 in Nashville later in the year – a gathering of our community is always something special and the chance to exchange experiences & opinions with the engaged folks of CDT is an opportunity not to be missed.

We’re the voices in support of a context-driven approach to testing; let’s not be afraid to use them.

References

[1] Michael Bolton “The Rapid Software Testing Namespace” http://www.developsense.com/blog/2015/02/the-rapid-software-testing-namespace/

[2] James Bach & Michael Bolton “Testing and Checking Refined” http://www.satisfice.com/blog/archives/856

[3] Michael Bolton “Deeper Testing (2): Automating the Testing” http://www.developsense.com/blog/2017/04/deeper-testing-2-automating-the-testing/

[4] Rex Black and Joe Colantonio “Best Practices Vs Good Practices – Ranting with Rex Black” https://www.joecolantonio.com/2017/04/13/best-practices-rant/

[5] James Christie “Standards – Promoting Quality or Restricting Competition” (CAST 2014)

[6] Lee Hawkins “A Turning Point for the Context-driven Testing Community” https://therockertester.wordpress.com/2014/08/21/a-turning-point-for-the-context-driven-testing-community/

[7] Eva Johnson “ISO 29119 Testing Standard – Why the controversy?” https://intland.com/blog/agile/test-management/iso-29119-testing-standard-why-the-controversy/

Creativity and testing

I’ve just finished reading Scott Berkun’s new book, The Dance of the Possible – “The Mostly Honest, Completely Irreverent Guide to Creativity”. As with his previous books, it makes for easy reading and he makes his points clearly and honestly. I picked this one up after enjoying a couple of his other works, Confessions of a Public Speaker and The Ghost of My Father, and wasn’t anticipating the amount of testing-related goodness I’d find in this new one!

In just the second chapter, Scott tackles the tricky topic of where to begin when starting a piece of creative work. He talks about taking an exploratory approach:

The primary goal when you’re starting creative work is to explore, and to explore demands you do things where you are not sure of the outcome. There will be false starts, twists, turns and pivots. These should be welcomed as natural parts of the experience, rather than resisted as mistakes or failures.

Exploratory testing, anyone?!  One of the joys of taking a session-based exploratory testing approach in my experience is the uncertainty of what information we’ll learn about our product in each session – this is so much more rewarding for the tester than knowing they’ll just report a “pass” or “fail” at the end of following a test case, for example.
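
For anyone unfamiliar with the session-based approach, here’s a rough sketch of the kind of record that frames such a session – every detail below is invented for illustration, and real session sheets vary from team to team:

```python
# A rough, invented sketch of a session-based test management (SBTM)
# session record. All details are hypothetical; real session sheets
# vary by team. The charter frames a time-boxed exploration rather than
# scripting its steps, which is why each session can surface information
# nobody predicted up front.
session = {
    "charter": "Explore the report export feature with malformed and "
               "oversized inputs to discover how errors are handled",
    "time_box_minutes": 90,
    "tester": "J. Example",
    "notes": [],   # observations recorded as the session unfolds
    "bugs": [],    # problems found, not predicted in advance
    "issues": [],  # questions and obstacles to raise at the debrief
}
```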

As Scott moves on to methods for finding ideas (in chapter 4), one of my favourite tools for test planning and reporting makes an appearance:

Another approach to finding interesting combinations is called a mind map. On a large piece of paper write your main goal, subject or idea down in the center and circle it. Then think of an attribute, or an idea, related to the main one and write it down, drawing a line back to the main idea. Then think of another and another, connecting each one to any previous idea that seems most related.

Keep drawing lines and making associations. Soon you’ll have a page full of circles and lines capturing different ways to think about your main thought.

Exploratory testing puts the onus on the tester to come up with test ideas, and this seems to be one of the biggest challenges for testers moving from a scripted approach: “how will I know what to test?” The skill of coming up with test ideas is one that requires practice, and mind maps are a great way to organize those ideas, visualize them, and start to combine (or separate) them.
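
As a rough illustration (the feature and the test ideas below are all invented), a test-idea mind map is essentially a tree radiating out from a central subject, which makes it easy to capture and grow incrementally:

```python
# A rough, invented illustration: a test-idea mind map is essentially a
# tree radiating out from a central subject. (The feature and the ideas
# here are hypothetical.)
test_ideas = {
    "login form": {
        "inputs": ["empty fields", "very long passwords", "unicode names"],
        "states": ["locked account", "expired password", "first login"],
        "environment": ["slow network", "mobile browser"],
    }
}

def walk(node, depth=0):
    """Print the map as an indented outline."""
    for key, value in node.items():
        print("  " * depth + "- " + key)
        if isinstance(value, dict):
            walk(value, depth + 1)
        else:
            for leaf in value:
                print("  " * (depth + 1) + "- " + leaf)

walk(test_ideas)
```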

In chapter 5, Scott talks about creative projects being “a dance between two forces, expanding to consider more ideas and shrinking to narrow things down enough to finish” and how this very idea can be challenging for those who focus on efficiency:

Looking back on a finished project, you might think the time spent exploring ideas that didn’t get used was wasted. It’s easy to believe you should have known from the beginning which ideas would work best and which wouldn’t. This is an illusion. Creativity is exploration. You are going into the unknown on purpose. You can only learn about ideas as you develop them, and there’s no reliable predictor of which ones will pay off and which ones won’t. Certainly the more conservative you are in the ideas you pick, the more predictable the process will be, but by being more conservative you are likely being less creative and will discover fewer insights. Arguably the more creations you make the better your intuition gets, but you won’t find any successful creator, even the legends, who gets it right all the time.

People obsessed with efficiency have a hard time accepting this truth. They would also have a very hard time being at sea with Magellan, working with Edison in his lab or with Frida Kahlo in her art studio. They’d be stunned to see the “waste” of prototypes and sketches, and mystified by how many days Magellan had to spend at sea without discovering a single thing. Discovery is never efficient.

I see this “dance” a lot and it’s the natural tester dance between doing more testing (“consider more ideas”) and calling “good enough” (“narrow things down enough to finish”). I think these words from Scott are a great way to express the benefit of more creative testing in helping us better assess risks in our products:

Certainly the more conservative you are in the ideas you pick, the more predictable the process will be, but by being more conservative you are likely being less creative and will discover fewer insights.

Scott’s new book was a fairly short and enjoyable read. I always say that testing is a creative endeavour – seeking to counter the idea that testing is repetitive, boring work whenever I come across it – so Scott’s words on creativity will be a handy reference in this regard.

Attending the CASTx conference in Sydney (21st February, 2017)

The annual conference of the Association for Software Testing (AST) took its first step outside of North America in 2017, with the CASTx conference in Sydney on February 20 & 21. Since I align myself with the context-driven principles advocated by the AST, I decided to attend the event’s conference day on the 21st (disclaimer: I submitted a track session proposal for this conference, but it was not accepted).

The conference was held in the stunning Art Deco surrounds of the Grace Hotel in the Sydney CBD and drew a crowd of about 90, mainly from Australia and New Zealand but also with a decent international contingent (including representatives of the AST). The Twitter hashtag for the event was #castx17 and this was fairly active across the conference and in the days since.

The full event consisted of a first day of tutorials (a choice of three, by Michael Bolton, Goranka Bjedov, and Abigail Bangser & Mark Winteringham) followed by a single conference day, with opening and closing keynotes book-ending one-hour track sessions. The track sessions followed typical peer conference style, with forty minutes for the presentation followed by twenty minutes of “open season” (facilitated question-and-answer time, using the K-cards approach).

My conference day turned out to include:

  • Conference opening by Ilari Henrik Aegerter (board member of the AST), Anne-Marie Charrett (conference program chair) and Eric Proegler (board member and treasurer of the AST)
  • Opening keynote “Managing Capacity and Performance in a Large Scale Production Environment” by Goranka Bjedov (of Facebook)
  • Track session “Rise of the Machine (Learning)” by Stephanie Wilson (of Xero)
  • Track session “Testing with Humans: How Atlassian Validates Its Products With Customers” by Georgie Bottomley (of Atlassian)
  • Track session “To Boldly Go: Taking the Enterprise to SBTM” by Aaron Hodder (of Assurity Consulting NZ)
  • Track session “Auditing Agile Projects” by Michelle Moffat (of Tyro Payments)
  • Closing keynote “The Secret Life of Automation” by Michael Bolton (of DevelopSense)

The opening keynote was fantastic. I last heard Goranka speak when she keynoted the STANZ conference here in 2011. She started off by saying how well Facebook had prepared for the US elections in terms of handling load (plus the coincidental additional load arising from India’s ban on large currency notes), but then told the story of how around half of all Facebook users had been declared dead just a few days after the election – an unfortunate by-product of releasing their new “memorial” feature, which didn’t actually check that a member was dead before showing the memorial! This was an example of her theme that Facebook doesn’t care about quality: such changes can be made by developers without being discovered beforehand, but resolution times are fast when problems like this immediately start being reported by users.

The stats she provided about Facebook load were incredible – 1.7 billion monthly active users for the main site, around 1 billion for each of WhatsApp and Messenger, plus around 0.5 billion for Instagram. Facebook now has the largest photo storage in the world and already holds more video content than YouTube. Her 2013 stats showed that, per 30 minutes, their infrastructure handled 108 billion MySQL queries, the upload of 10 million photos, and 105TB scanned with Hive! This load is handled by Facebook’s private cloud, built in ten locations across the US and Europe. Servers are all Linux and all data centres are powered using green power (and it was interesting to note that they rely on evaporative cooling to keep power usage down). The reasons for the lack of an Australian data centre became obvious when Goranka talked about the big long-term power contracts they require and also “world class internet” (at which point the room burst into laughter). Details of all the server specifications can be found at OpenCompute.org.

Her objectives in managing capacity and performance are: low latency for users, the ability to launch things quickly (succeed or fail quickly, don’t worry about efficiency, don’t care about quality) and conservation (in terms of power, money, computers, network and developer time). Her goals are: the right things running on the right gear, running efficiently, knowing if something is broken or about to break, and knowing why something is growing. She also talked through their load testing approach – which runs every second of every day – and their testing around shutting down an entire region to be ready for disasters. Although this wasn’t really a pure testing talk, it was fascinating to learn more about the Facebook infrastructure and how it is managed and evolving. It was made all the more interesting by Goranka’s irreverent style – she openly admitted to not being a Facebook user and cannot understand why people want to post photos of cats and their lunches on the internet!
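
To put those per-30-minute figures into more familiar per-second terms, here’s a quick back-of-envelope conversion – just arithmetic on the numbers quoted above:

```python
# Back-of-envelope conversion of the 2013 per-30-minute figures quoted
# above into per-second rates (arithmetic only).
HALF_HOUR_SECONDS = 30 * 60  # 1,800 seconds

mysql_qps = 108e9 / HALF_HOUR_SECONDS    # ~60 million MySQL queries/sec
photos_ps = 10e6 / HALF_HOUR_SECONDS     # ~5,500 photo uploads/sec
hive_gb_ps = 105_000 / HALF_HOUR_SECONDS # 105 TB = 105,000 GB -> ~58 GB/sec

print(f"{mysql_qps:,.0f} MySQL queries/sec")
print(f"{photos_ps:,.0f} photo uploads/sec")
print(f"{hive_gb_ps:,.0f} GB scanned by Hive/sec")
```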

From the tracks, it was interesting to hear about Xero’s QA mission statement, viz. “Influence Xero culture to be more quality oriented and transform software from ‘good’ to ‘wow’” (Stephanie Wilson’s talk), and it was surprising to me to learn that Atlassian wasn’t doing any decent sort of UX research until quite recently (from Georgie Bottomley’s talk) – but maybe that explains some of the quirky interactions we’ve all come to know and love in JIRA!

I’ve seen Aaron Hodder present a few times before and he always delivers real experiences with unique insight – and this session was no exception. His talk was a fascinating look into dysfunctional, contract-heavy client/vendor enterprise IT environments. The novel approach he came up with at Assurity was session-based test management in light disguise, with the terminology and reporting adjusted to make it palatable in that environment – it was very cleverly done and, as a result, the project sounds like it’s in much better shape than it was. A really good talk with handy takeaways, and not just for testers finding themselves in the unfortunate position of being on a project like the one Aaron experienced.

Michelle Moffat presented the idea that agile practices are, in audit terms, controls, and that it’s the way evidence is gathered in this environment that is so different – she uses photos and videos, attends meetings, and draws on automated controls (for example, from the build system) rather than relying on the creation of documents. This was a really interesting talk and it was great to see someone from well outside our sphere taking on the ideas of agile and finding ways to meet her auditing responsibilities without imposing any additional work on the teams doing the development and testing.

Michael Bolton’s closing keynote was a highlight of my day and he used his time well, offering us his usual thought-provoking content delivered with theatre. Michael’s first “secret” was that a test cannot be automated and automated testing does not exist. He made the excellent point that if we keep talking about automated testing, then people will continue to believe that it does exist. He has also observed that people focus on the How and What of automated button-pushing, but rarely the Why. He identified some common automation (anti)patterns and noted that “tools are helping us to do more lousy, shallow testing faster and worse than ever before”! He revealed a few more secrets along the way (such as there being no such thing as “flaky” checks) before his time ran out all too soon.

There were a few takeaways for me from this conference:

  • There is a shift in focus for testing, as SaaS and continuous delivery make it possible to respond to problems in production much more quickly and easily than ever before.
  • The “open season” discussion time after each presentation was, as usual, a great success and is a really good way of getting some deeper Q&A going than in more traditionally-run conferences.
  • It’s great to have a context-driven testing conference on Australian soil and the AST are to be commended for taking the chance on running such an event (that said, the awareness of what context-driven testing means in practice seemed surprisingly low in the audience).
  • The AST still seems to struggle with meeting its mission (viz. “to advance the understanding of the science and practice of software testing according to context-driven principles”) and I personally didn’t see how some of the track sessions on offer in this conference (interesting though they were) worked towards achieving that mission.

In summary, I’m glad I attended CASTx and it was good to see the level of support for the AST’s first international conference event – hopefully the first of many to help broaden the appeal and reach of the AST’s efforts in advocating for context-driven testing.

An excellent set of summary photos has been put together from Twitter, at https://twitter.com/i/moments/833978066607050752

A worthwhile 40-minute roundtable discussion with five CASTx speakers/organizers (viz. Abigail Bangser, Mark Winteringham, Aaron Hodder, Anne-Marie Charrett and Ilari Henrik Aegerter) can also be heard at https://t.co/A0CuXGAdd7