Attending a non-testing conference

I have recently found myself enjoying the latter stages of a fine British Summer (really, no sarcasm intended) and headed down to Cornwall to attend the Agile On The Beach conference. This was the first non-testing conference I’ve attended in a very long time, so it was certainly an interesting experience and a chance to compare and contrast what I see on the testing “circuit”.

This was the sixth running of this not-for-profit agile conference and it was sold out with 350 participants. It was traditional in its structure, with an opening keynote each day followed by five tracks of 45-minute sessions punctuated by morning tea, lunch and afternoon tea. One idea I’d never seen before was inviting all the speakers to give elevator pitches for their talks immediately following each morning’s keynote. This gave the speakers a good chance to promote their slot and also gave the audience the chance to hear an up-to-date description of their content.

The highlight for me was the opening keynote, from well-known agilist Linda Rising (perhaps best known for her book Fearless Change), with “Better Decision Making: Science or Stories?”. Her talk discussed whether the adoption of agile is based on stories rather than scientific evidence. Do we even need evidence when agile practices are seen as “common sense”? Linda argued that we’re reluctant to believe science/data and find stories (experience reports) much more compelling. Science validates but doesn’t always convince people (and scientists suffer from confirmation bias). Could agile be a placebo? Does it work because we believe in it? Linda noted that organizations don’t really encourage the scientific method, as decision makers want action rather than investigation. Does any of this sound familiar from the testing world, particularly the context-driven part of it? I’m a big fan of the idea of experience reports as evidence; they make for compelling presentations and provide me with stories that I can relate to the challenges I may be facing in my testing. Interestingly, Linda also mentioned Thinking, Fast and Slow (Daniel Kahneman), a much-cited reference in the testing community these days.

Maybe I shouldn’t be so surprised, but there was very little talk about the role of human testing in software delivery in agile teams. Only one presentation I attended mentioned exploratory testing, with all the others talking only about automation – automating all the tests, automating away manual testing. Some of the continuous delivery talks made claims along the lines of “you can only do CD by automating everything”, so I’m sure we’ll be seeing more frequently delivered bad software if this mindset prevails.


In terms of takeaways, I noted:

  • The agile movement is mainstream, but still with little consensus about many aspects of it.
  • Testing as a specialization is not widely discussed, with most experience reports talking up automation but failing to recognize the requirement for human testers to help teams build quality in. “Code quality” (as defined by various static code analysis techniques) was also commonly mentioned.
  • There was a lot of mention of metrics in various talks – avoiding vanity metrics and identifying useful metrics that drive your desired behaviours (rather than choosing “industry standard” metrics that are often prone to encouraging bad behaviours).
  • Continuous delivery is a hot topic, moving agility up from the CI level to deployment and release. CD is again being used as a reason for automating all testing when it should be the opposite – building better quality in, with the help of people who specialize in helping the team to do that, is an obvious way to reduce the risk of simply deploying bad product more frequently.
  • “Business agility” is also a hot topic, moving other parts of the business – not just software development/IT – to a more agile means of working is a big challenge especially in larger organizations.
  • Speaker elevator pitches are a great conference idea (as is a conference party on the beach – take note, Aussies!).

(The conference organizers are kindly collating photos, blog posts, presentations, etc. if you’re looking for more detail.)

Next stop, STARWEST in Anaheim, where I’m presenting A Day In The Life Of A Test Architect – say g’day if you’ll be there too!

ER: guest editor for Testing Trapeze magazine

I’ve recently had the pleasure of acting as guest editor for Testing Trapeze magazine and thought it would be worth briefly writing about this experience.

If you’re not familiar with this magazine, it started in February 2014 and is published online bi-monthly. It is normally edited by a well-known and respected member of the context-driven testing community, Katrina Clokie, from New Zealand. From the magazine’s website “About” page:

We want to see a small, simple, quality magazine that amplifies the voices of testers from Australia and New Zealand, presenting a clear, consistent message of good testing practice from existing and emerging leaders. We want to demonstrate the caliber of our community and encourage new testers to join us by engaging in their work at a different level. We want to create a publication that we feel proud of, that truly represents Australia and New Zealand on the international stage; a magazine that you want to read, share and contribute to.

Over the last two and a half years, the magazine has consistently provided a high-quality experience for its readers by focusing on a relatively small number of articles per issue and wrapping them up in a beautifully presented publication. Each edition typically comprises four articles from local (i.e. Australian and New Zealand) authors plus another from an international author. There is no set theme per edition and new writers are actively encouraged, so the sixteen editions to date have given plenty of opportunities to new voices from the testing community, particularly across Australia and New Zealand.

The main tasks for the editor are logistical and organizational in nature – communicating with authors to get their articles in for review, organizing reviewers for each article, finalizing the content, ordering the articles in the magazine, and writing the editorial. The ease or difficulty of the job is largely dictated by the other people involved and, in my brief experience, everyone was on the same page (no pun intended) in terms of getting good content ready in time to publish to our deadline. Luckily for me, Katrina wrote a blog post in 2015, Behind the Scenes: Editor of Testing Trapeze, which helped me work out the various tasks I needed to check off along the way.

It was interesting to see the draft articles coming in from the various authors and the different amounts of review feedback that needed to be incorporated to get to “final” versions for the magazine. Thanks to the reviewers for doing such timely and diligent jobs in providing constructive feedback which was taken on board by the authors.

The magazine is free to download (as a PDF) from the Testing Trapeze website; I strongly encourage you to become a regular reader and also to consider expressing an interest in writing an article – you will be warmly welcomed and provided with practical and helpful feedback from the reviewers. There’s nothing to be afraid of!

Thanks again to Katrina for the opportunity to briefly edit the magazine, to Adam for the amazing work with layout and the website, and to all of the authors and reviewers without whom we’d have no content to share. I hope you enjoy the edition I was lucky enough to have the opportunity to bring together.

Being part of a community

There has been a lot of Twitter activity about the CDT community in the last couple of weeks. Katrina Clokie also penned an excellent blog post, A community discussion, and there seem to be a lot of unresolved disputes between different folks representing different parts of the testing community. Some of this just feels like the normal level of background noise spiking for a short time. It’s not the first time a storm of this type has blown up around the CDT community and it won’t be the last.

I particularly liked Katrina’s statement that “I strive to be approachable, humble and open to questions”, as this is also my own approach to both being a member of a testing community and also helping to bring others into it.

I have been heavily involved in the TEAM meetup to build a new testing community in Melbourne and have also helped to make the Australian Testing Days conference happen (though I will not be involved in the future of the event). I write this blog in the hope of sharing my ideas and opinions and maybe bringing readers into my community as a result.

I’ve chosen not to add to the noise by responding to the Twitter commentary around the CDT community right now, but I don’t feel that my lack of contribution to the discussion reflects either approval or disapproval of the behaviour of any member of any of the communities that consider themselves CDT.

As I’ve blogged before, our Values and principles define us.


“Testing is dead” (again) thanks to DevOps

I’ve been coming across talk of DevOps a lot recently, both in my work and also in the broader software development community. The continuous delivery & deployment folks all seem very excited about the frequent releases that can now be achieved, thanks to great tooling and “all of that automated testing” before deployment. From what I’ve been hearing, this has the potential to be the perfect storm of poor agile practices meeting a lack of understanding of what can and cannot be automated when it comes to “testing”.

The following quote comes from John Ferguson Smart in his recent article, The Role of QA in a DevOps World:

there is a lot more to DevOps than simply automating a build pipeline. DevOps is not an activity, it is a culture, and a culture that goes much deeper than what appears to the naked eye. You could say that DevOps involves creating a culture where software development teams works seamlessly with IT operations so that they can work together to build, test, release and update applications more frequently and efficiently.

I think this quote says a lot about DevOps and makes it clear that it’s not just about automating the stuff around building and deploying the software. With the big focus on automation, though, it is somewhat inevitable that the same misunderstandings are being made about the role of human testers as were common during the early stages of agile adoption:

Some organisations also seem to believe that they can get away with fewer testers when they adopt a DevOps. After all, if all the tests are automated, who needs manual testers?

In reading more about the culture of DevOps, there are two obvious limitations we need to be talking about and making people aware of, viz. “acceptance criteria” and “automated testing”.

Acceptance criteria

Let’s be clear from the start that meeting all of the acceptance criteria does not mean the story/product is acceptable to your customer. There is a real danger of acceptance criteria being used in the same fallacious way that we previously used “requirements”, as though we can completely specify all the dimensions of acceptability by producing a set of criteria. Meeting all of the specified acceptance criteria might mean the story/product is acceptable, but when the specified acceptance criteria are not met, the story/product is definitely not acceptable. So, we’d be better off thinking of them as “rejection criteria”. Michael Bolton wrote an excellent blog post on this, Acceptance Tests: Let’s Change the Title, Too, and he says (the bold emphasis is mine):

The idea that we’re done when the acceptance tests pass is a myth. As a tester, I can assure you that a suite of passing acceptance tests doesn’t mean that the product is acceptable to the customer, nor does it mean that the customer should accept it. It means that the product is ready for serious exploration, discovery, investigation, and learning—that is, for testing—so that we can find problems that we didn’t anticipate with those tests but that would nonetheless destroy value in the product.

Have a think about what that means for automated acceptance tests…
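To make that concrete, here’s a minimal sketch in Python – using an entirely hypothetical discount story of my own invention – of a green acceptance check sitting right next to an unanticipated, value-destroying problem:

```python
def order_total(unit_price, quantity):
    """Toy implementation of the story under test:
    orders of 10 or more items get a 10% discount."""
    total = unit_price * quantity
    if quantity >= 10:
        total *= 0.9
    return total


def test_bulk_discount_applied():
    # The agreed acceptance criterion passes...
    assert order_total(5.0, 10) == 45.0


test_bulk_discount_applied()

# ...yet the product happily "sells" negative quantities, a
# value-destroying problem no written criterion anticipated:
print(order_total(5.0, -3))  # -15.0: a refund for nothing
```

The check above is “green”, but only exploration beyond the stated criteria reveals the negative-quantity problem – which is exactly the kind of discovery Bolton is pointing at.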

“Automated testing”

Although I prefer the term “automated checks” over “automated tests” (to highlight the fact that “testing” requires human thinking skills), I’ll indulge the common parlance for the purposes of this topic. It feels like increasingly greater reliance is being placed on automated tests to signify that all is OK with the products we build, especially in the world of DevOps where deployments of code change without any further human interaction are seen as normal as long as all the automated tests are “green”.

Let’s reflect for a moment on why we write automated tests. In another excellent blog post, s/automation/programming/, Michael Bolton says:

people do programming, and that good programming can be hard, and … good programming requires skill.  And even good programming is vulnerable to errors and other problems.

We acknowledge that writing programs is difficult and prone to human error. Now, suppose that instead of calling our critical checks “automated tests” we referred to them as “programmed tests” – this would make it very clear that we’re:

writing more programs when we’re already rationally uncertain about the programs we’ve got.

Michael suggests similar substitutions:

Let’s automate to do all the testing? “Let’s write programs to do all the testing.”

Testing will be faster and cheaper if we automate. “Testing will be faster and cheaper if we write programs.”

Automation will replace human testers. “Writing programs will replace human testers.”

I think this makes it very clear that we cannot automate all of our testing on our way to quality products, be it in a DevOps environment or otherwise.
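As a toy illustration of “writing more programs when we’re already uncertain about the programs we’ve got”, here’s a deliberately buggy, hypothetical Python sketch – a programmed check that is itself a program with a defect, so it stays green no matter how broken the product is:

```python
def apply_discount(price):
    # The program we're rationally uncertain about: it's meant to
    # take 10% off, but it mistakenly keeps only 10% of the price.
    return price * 0.1


def check_discount():
    # The "programmed test" written to gain confidence... which has
    # its own bug: it derives the expected value from the very
    # function under check, so it can never fail.
    expected = apply_discount(100)  # oops: should be the literal 90
    return apply_discount(100) == expected


print(check_discount())  # prints True: a green check, a broken product
```

A human tester looking at an actual receipt would spot the problem immediately; the programmed check, being just another program, never will.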

(For a great reference piece on the use of automation in testing, I recommend Michael Bolton & James Bach’s A Context-Driven Approach to Automation in Testing.)

What then of the role for “manual” testers? John Ferguson Smart notes the changing role of testing in such environments and these changes again mirror the kind of role I’ve been advocating for testers within agile teams:

It is true that you don’t have time for a separate, dedicated testing phase in a DevOps environment. But that doesn’t mean there is no manual testing going on. On the contrary, manual testing, in the form of exploratory testing, performance testing, usability testing and so on, is done continuously throughout the project, not just at the end… The role of the tester in a DevOps project involves exploring, discovering, and providing feedback about the product quality and design, as early as it is feasible to do so, and not just at the end of the process.

I’ll quote John Ferguson Smart again in conclusion – and hopefully I’ve made it clear why I agree with his opinion (the emphasis is again mine):

So testers are not made redundant because you have the technical capability to deploy ten times a day. On the contrary, testers play a vital role in ensuring that the application that gets deployed ten times a day is worth deploying.

TEAM meetup number 11 – Michael Bolton!

The eleventh TEAM meetup was held on 26th May and we returned to a regular venue for us, Aconex’s great space on Flinders Street in the Melbourne CBD. Thanks again to Aconex for their support in providing the venue, pizzas and drinks.

Our membership had increased to around 470 and we expected a big response when we announced this meetup as Michael Bolton was our headline act! Michael had been in town to run his Rapid Software Testing class (through TEAM) and also to give the opening keynote presentation at the Australian Testing Days 2016 conference.

On a cold and rainy evening, we were pleased to see around 35 of our members make the effort to attend and we kicked off proceedings at about 6pm (with early birds taking the lion’s share of the pizzas).

Michael kicked off by talking about a game he’s been playing recently, Keep Talking and Nobody Explodes, and shared some observations about playing it in relation to software testing. He also gave us a 15-minute “lightning talk” on automation in testing – a fascinating example of how to talk about testing using clear language and make a compelling case for using automation to enhance the superpowers of humans as testers (rather than trying to replace them).


The group was then formed into a number of teams, most playing Keep Talking and Nobody Explodes plus a few playing the famous Dice Game. An hour passed quickly, and then the bomb game players shared some observations about their game playing while Michael shared more from his considerably greater experience of playing it. Maybe check the game out for yourself and see what lessons you learn that you can apply to software testing along the way.


The meetup wrapped up just after 8pm and it was – as always – great to see a bunch of such passionate and engaged testers in the one room. Thanks to Michael for taking the time to attend the meetup and provide such an entertaining evening for us all.

For further meetup announcements, remember to follow our meetup page.

Also keep an eye on our website, and you can follow TEAM on social media.

Australian Testing Days 2016

So, I’ve co-organized my first software testing conference – the inaugural Australian Testing Days event is over already and what a great event it was!

Conference day on May 20th saw about a hundred enthusiastic locals, interstaters and internationals gather for a big day of talks, networking and socializing. Workshop day on May 21st meant about sixty people were willing to give up their Saturday to learn some great new skills, across four well-attended workshops (one of which I co-presented).

Let’s just backtrack a little, though. About a year ago, this event was not even a twinkle in anyone’s eye. Major testing conferences in Melbourne had dried up in recent years. There were a few small testing meetup groups and the infrequent ANZTB SIGISTs, but no obvious sense of testing community seemed to exist in this big city.

Rajesh Mathur and I had been discussing the oddity that was the lack of a testing community here for a while and, around this time last year, we decided to start a testing meetup group, to focus on building a community of testers with a strong leaning towards our context-driven testing tendencies. So it was that TEAM (Test Engineering Alliance Melbourne) was formed as the banner organization under which to run these meetups. We gathered some strong support and the core group of myself, Rajesh, Paul Seaman, Paul Crimmins and Scott Miles soon gelled to the point where we could run our first event.

The first meetup happened on 3rd July 2015 and we were very pleased to see about 30 people show up. We kept up a regular cadence, with a meetup every month for the rest of the year. As of today, we’ve run ten meetups and also a free workshop under TEAM. Our membership stands at over 450 and we regularly attract 30-40 testers per event. It’s been a great privilege to be involved in forming this community and the buzz around the meetups is invigorating and rewarding. Thanks to everyone who has participated in making the meetups a success – generous sponsors offering rooms, food & drinks, willing volunteers, and of course the many speakers who’ve graced the TEAM meetup stage over the last year.

Towards the end of 2015, the TEAM committee let the sense of meetup success go to their heads a bit and so it was that the idea “let’s do a conference” came to pass… The next six to seven months were a steep learning curve as we put together what would become known as “Australian Testing Days 2016”. As a committee, we worked well together, bringing different opinions and perspectives to the table, and somehow it all came together to produce what has been a very well-received event. I’m particularly pleased that we created a diverse programme (including new speakers, thanks to Speak Easy) and also that we tried our best to make it a conference where speakers didn’t have to pay to speak. Without speakers, we have no conference, so my thanks to our great bunch of speakers – hopefully the ATD experience was a good one for you. It was my pleasure to arrange the speakers’ dinner for you all and it seemed that everyone enjoyed meeting each other (many for the first time), as well as giving the conference committee a chance to meet all the speakers in advance of the event.

The big day arrived on Friday 20th May and we made an early start at Karstens to get set up before the crowds started to arrive.

The calm before the storm!


It all went very smoothly and we kicked off on time with an opening address by Rajesh on behalf of the ATD2K16 committee before Michael Bolton’s keynote, “The Rapid Software Testing Guide to What You Meant to Say”. As always, it was an engaging talk from Michael and it was particularly enjoyed by the very recent RST graduates in the room (as TEAM had organized RST with Michael in the days leading up to the conference).

The break for morning tea saw the conference attendees enjoying their first opportunity to network and meet new people. The level of noise during the breaks at a conference is a good heuristic for how engaged & passionate people are about why they’re there, so I took the first break as a very positive sign that we were on the right track.

The conference split into two tracks after the morning tea and I facilitated Oliver Erlewein’s talk on “Reaching Beyond Performance in an Agile World”.

Oliver Erlewein talking performance


It was good to see Oliver again, having met him at the Kiwi Workshop on Software Testing in 2013 – he’s a good CDT guy from Wellington in New Zealand. His talk was interesting and raised many questions. Meanwhile, the other track saw conference newbie presenter Michele Cross giving her talk, “Transformation of a QA Department”, and she did a fine job by all accounts. (Michele came to us via Speak Easy; I volunteer for this organization, which aims to increase the diversity of speakers at tech conferences.)

For the next track session, I was co-facilitating Catherine Karena’s talk, “It takes a village to raise a tester”. This was an excellent presentation, following the journey of some young people trained in software testing via WorkVentures, a social enterprise focused on helping unemployed youth get a career start in technology. Catherine’s passion for what she does shone through and she’s rightly proud of the great work and outcomes being achieved. Meanwhile, the other track session saw Hamish Tedeschi (all the way from Perth) with “Testing your Driven Development”.

We broke for lunch on time and the hour passed quickly, both for the attendees enjoying lunch & conversation and for us organizers making sure all our ducks were in a row for the afternoon’s proceedings.

I kicked off my afternoon by attending Brian Osman’s talk on “The School of Rock – CDT Uncut”. Brian is an old mate in the testing game and I was delighted he was on our programme – such a genuine guy and one of the most passionate testing presenters you’ll ever see. As I expected, his talk was a gem, full of great insight and presented with passion and a good dose of humour too. Meanwhile, the other track session saw another Kiwi bro, in the shape of Aaron Hodder, taking the stage with a very well-received talk on “Software Cartography (or how to build multidimensional information radiators)”.

The last two track talks of the day came next and I co-facilitated Katrina Clokie’s talk on “Testing web services and microservices”. Although Katrina had almost lost her voice, she did a great job of giving her talk (microphone in hand) and the content was – as always from her – honest, relevant and genuinely useful. Meanwhile, the other track offered the very hot topic of “How to Build a Guild” with James Kitney.

The afternoon tea break again got noisy as attendees exchanged stories about their day, but the break was cut short by an extended session of lightning talks. Originally scheduled for thirty minutes, the session grew as our willing volunteer Rich Robinson put together an excellent series of six five-minute talks. This was an amazing session, facilitated brilliantly by Rich, with some first-timers taking the stage (much kudos to them), some old-timers giving us surprises (like Erik Petersen actually finishing a lightning talk on time!), and rounded out by the one and only Santhosh Tuppad, who clearly regarded the five-minute cutoff as some sort of guideline rather than a hard and fast rule! I’m really pleased we decided to include a lightning talks session and even more pleased that it worked so well.

There was just one talk left to wrap up the day, that being Anne-Marie Charrett with “Test Management Revisited”.

Anne-Marie Charrett during her closing keynote


I was lucky enough to introduce her talk and she spoke well about her time building a testing team at Tyro (and this was a very popular talk according to the feedback forms we got back).

Before I knew it, I was giving the closing address, where had the day gone?!


Time to close out the conference day

The post-conference drinks reception was well-attended and it was a chance for me and the other organizers to wind down. The overwhelmingly positive feedback about the event that we all received during the drinks really meant a lot!

Saturday 21st was another early start to set up for four full-day workshops: Coaching Testers (with Anne-Marie Charrett), Exploratory Testing (with Paul Seaman and me), Ruby for Testers (with Scott Miles) and Web & Mobile Security Testing (with Santhosh Tuppad). We were pleased by the response to the workshop day, seeing about sixty attendees spread across the four workshops on offer.

My day was spent with Paul Seaman and a group of ten enthusiastic testers talking about Exploratory Testing. This was the first time Paul and I had presented together; we seemed to split the load effectively, and the frequent exercises and videos went down well with the group.

Other workshops were enjoyed too, according to the feedback we’ve received so far.

After the workshop day wrapped up, a decent-sized group decamped just around the corner to the Irish Times pub for a few celebratory drinks, and this was the committee’s chance to finally call time on the ATD2K16 event. It was a great feeling to have run the event well, hear the positive feedback and feel the sense of community building it has already created. I felt great pride, and great humility too.


My personal thanks go to all of our sponsors; we certainly couldn’t have got the event off the ground without your support. Catch Software, as platinum sponsor, couldn’t have been more helpful and supportive – hats off to Bryce and Brent for coming along and being all-round nice blokes all weekend. Our gold sponsors also provided great support, so thanks to Attribute Group, Cigniti, Association for Software Testing, and Test Insane. Thanks also to our media partners for their support and promotion of the event.

Special thanks to our speakers – I was very proud of the line-up we put together (especially for an inaugural event), so many thanks for supporting us, taking the time to prepare presentations and spending time away from family to visit us in Melbourne.

The TEAM/ATD2K16 committee have become close friends over the last year and our friendship has survived the rigorous test of putting together a conference, so my thanks to Rajesh, Paul S, Paul C and Scott for their diverse opinions, commitment and enthusiasm for building a great community and event.

As I said in my closing speech, “see you next year”!

More on the “commoditisation of testing”

I recently saw an article about UBS centralizing its “QA”.

A few parts of the article stood out for me:

…One major issue that vendors of outsourced testing and quality assurance will have to grapple with … is that they too will have to show that automation will lower their headcounts and the costs they pass on to UBS.

It’s an interesting idea that automation will lead to lower headcounts and a reduction in cost. It seems more likely that overall headcount would remain much the same, with additional effort spent on automation and less effort spent on “manual” testing. Or maybe the goal here is to replace manual testing with automation? The article reads as though cost reduction is the goal, but automation is not a direct means of cost reduction (though it does give you lots of other great benefits when done well).

Automation … means testing is becoming commoditised and cheaper, and that in turn means vendors should be able to pass on their lower costs.

Really? Testing has become commoditized in the sense that it’s become a race to the bottom in terms of cost, at the expense of real testing skills. Certification has helped the commoditization cause to some extent too: why wouldn’t you hire tester A with certification X in a low-cost location over tester B with the same certification X in a high-cost location? You’re getting the same thing for less money, right? Well, though they come with the same X Factor, you’re probably not getting the same skills, and there are other costs associated with running testing teams in geographically distant locations.

I don’t see the relationship between increased levels of automation and commoditization, as I see the benefits of automation as being different from “testing”. We can automate a lot of “checking” work and we can leverage automation to enhance our abilities to perform testing, but it doesn’t follow that more automation leads to commoditization of what I think of as “testing”.

Quality assurance staff currently represents approximately 20% of the total UBS headcount of staff employed in software engineering and it’s a proportion that Adams expects to fall, and the reason is simple. The automation of app testing and development means fewer human testers, in proportion to the development team as a whole.

OK, so now we’re getting there: the expectation is that they can replace what humans are doing with automation on machines. If all the humans are doing is checking work, and not adding any of the other value that we know great testers can add, then maybe they’re right. But if they’re right, surely it would be worth looking at having more skilled humans doing the value-add work?

“If we were running a music streaming business and our website went down, then we wouldn’t be in the same kind of trouble that a bank would be in if the same thing happened. In QA, we are the last stop in the process of making sure that doesn’t happen.”

Another shop where “QA” are seen as the “last line of defence”, in 2016. This centralized QA team can look forward to lots of blame being directed their way in the future – and most of it will be blame for things they had little or no control over. A great model!

the restructuring has established that skills in test engineering are fungible across the major business lines of UBS, and that will accelerate our automation and performance engineering.

Yes, I had to look up what “fungible” means (definition: “being of such nature or kind as to be freely exchangeable or replaceable, in whole or in part, for another of like nature or kind”). The idea appears to be that all the major business lines of UBS in which these testers will be responsible for testing apps are so similar that testing skills are transferable between all of them. This is a best/common practices notion that either doesn’t believe – or simply refuses to acknowledge – that individual project context is one of the most important factors in determining how a tester should operate in order to be of the most value. I strongly believe that test approaches and skills are context dependent – and maybe that’s why I had no idea what “fungible” meant.

This move to centralize will no doubt be followed by a move to decentralize as the circle of testing life goes around. It’s much like the test outsourcing push of recent years, but that tide is already turning as companies like Doran Jones show how local high quality testing talent can be more effective (both in terms of doing good testing and costing less money) in what’s becoming known as “reshoring”.

In some ways, articles like this one from another big company like UBS are just noise and we shouldn’t worry too much about them. But in other ways, this kind of article gets picked up by other big companies and seen as promoting good ideas; big companies tend to follow the lead of other big companies. So maybe we’ll see more “centralized QA” departments following the UBS lead, but it’s just a matter of time before we start reading about the “decentralization of QA”. For those of us with a passion for good context-aware software testing that genuinely adds value to projects, all we can do is publicly state our cases, critique articles like this one, and connect with others of a similar mindset to influence the decision makers within our organizations.