Monthly Archives: April 2016

More on the “commoditisation of testing”

I recently saw an article about UBS centralizing its “QA”:

http://qa-financial.com/knowledge-bank/qa-testing/ubs-centralises-qa-raises-expectations-vendors

A few parts of the article stood out for me:

…One major issue that vendors of outsourced testing and quality assurance will have to grapple with … is that they too will have to show that automation will lower their headcounts and the costs they pass on to UBS.

It’s an interesting idea that automation will lead to lower headcounts and reduced costs. It seems more likely that overall headcount would remain much the same, with additional effort spent on automation and less effort spent on “manual” testing. Or maybe the goal here is to replace manual testing with automation altogether? The article reads as though cost reduction is the goal, but automation is not a direct means of cost reduction (though it does give you lots of other great benefits when done well).

Automation … means testing is becoming commoditised and cheaper, and that in turn means vendors should be able to pass on their lower costs.

Really? Testing has become commoditized in the sense that it’s become a race to the bottom on cost, at the expense of real testing skills. Certification has helped the commoditization cause to some extent too: why wouldn’t you hire tester A with certification X in a low-cost location over tester B with the same certification X in a high-cost location? You’re getting the same thing for less money, right? Well, though they come with the same X Factor, you’re probably not getting the same skills, and there are other costs associated with running testing teams in geographically distant locations.

I don’t see the relationship between increased levels of automation and commoditization, because the benefits of automation are different from “testing” as I understand it. We can automate a lot of “checking” work, and we can leverage automation to enhance our ability to perform testing, but it doesn’t follow that more automation leads to commoditization of what I think of as “testing”.
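
To make the checking/testing distinction a bit more concrete, here’s a minimal sketch of what an automated “check” looks like (my own illustration in Python; the banking-flavoured function and its expected behaviour are invented for the example, not taken from the UBS article):

```python
# A toy function under test (hypothetical, for illustration only).
def transfer_funds(balance: float, amount: float) -> float:
    """Debit an amount from a balance, rejecting invalid transfers."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount


def check_transfer_debits_balance() -> None:
    # An automated check encodes one predetermined expectation and
    # reports a binary pass/fail, nothing more.
    result = transfer_funds(balance=100.0, amount=30.0)
    assert result == 70.0, "expected 70.0, got {}".format(result)


if __name__ == "__main__":
    check_transfer_debits_balance()
    print("check passed")
```

A check like this will happily pass forever, but it can only confirm the one expectation it was given; it won’t notice a confusing error message, a rounding surprise, or any of the other problems a thinking tester might uncover while exploring. That, for me, is the difference between automated checking and testing.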

Quality assurance staff currently represents approximately 20% of the total UBS headcount of staff employed in software engineering and it’s a proportion that Adams expects to fall, and the reason is simple. The automation of app testing and development means fewer human testers, in proportion to the development team as a whole.

OK, so now we’re getting there: the expectation is that what humans are doing can be replaced by automation on machines. If all the humans are doing is checking work, and not adding any of the other value that we know great testers can add, then maybe they’re right. But if they’re right, surely it would be worth looking at having more skilled humans doing the value-add work?

“If we were running a music streaming business and our website went down, then we wouldn’t be in the same kind of trouble that a bank would be in if the same thing happened. In QA, we are the last stop in the process of making sure that doesn’t happen.”

Another shop where “QA” are seen as the “last line of defence”, in 2016. This centralized QA team can look forward to lots of blame being directed their way in the future – and most of it will be blame for things they had little or no control over. A great model!

the restructuring has established that skills in test engineering are fungible across the major business lines of UBS, and that will accelerate our automation and performance engineering.

Yes, I had to look up what “fungible” means (dictionary.com definition: “being of such nature or kind as to be freely exchangeable or replaceable, in whole or in part, for another of like nature or kind”). The idea appears to be that the major business lines of UBS are so similar that testing skills are transferable between all of the apps these testers will be responsible for testing. This is a best/common practices notion that either doesn’t believe – or simply refuses to acknowledge – that individual project context is one of the most important factors in determining how a tester should operate in order to be of the most value on that project. I strongly believe that test approaches and skills are context-dependent – and maybe that’s why I had no idea what “fungible” meant.

This move to centralize will no doubt be followed by a move to decentralize as the circle of testing life goes around. It’s much like the test outsourcing push of recent years, but that tide is already turning as companies like Doran Jones show how local, high-quality testing talent can be more effective (both in terms of doing good testing and costing less money) in what’s becoming known as “reshoring”.

In some ways, articles like this one about a big company like UBS are just noise and we shouldn’t worry too much about them. But in other ways, this kind of article gets picked up by other big companies and seen as promoting good ideas; big companies tend to follow the lead of other big companies. So maybe we’ll see more “centralized QA” departments following the UBS lead, but it’s just a matter of time before we start reading about the “decentralization of QA”. For those of us with a passion for good context-aware software testing that genuinely adds value to projects, all we can do is publicly state our case, critique articles like this one, and connect with others of a similar mindset to influence the decision makers within our organizations.

TEAM meetup number 10

The tenth TEAM meetup was held on 13th April, in the great space kindly offered to us by Nintex (thanks to them for also providing nibbles and drinks, while TEAM itself bought the pizzas).

Our membership had increased to around 415 and we had a big response in terms of RSVPs, so it’s great to see the group becoming more and more popular, with many repeat attendees.

After my short introduction, we kicked off with Michele Cross and Paul Seaman giving a short talk about “testing idea generators”, highlighting a couple of tools for coming up with new test ideas. Paul focused on the Oblique Testing cards produced by Mike Talks, while Michele talked more about Adam Howard’s Heuristic Testing Dice. The ability to come up with test ideas is particularly relevant in exploratory testing, where the tester has more freedom to follow their own ideas about what should be tested than in a scripted testing scenario. This was useful content from Michele and Paul (and many thanks to them for stepping up at short notice to cover Colin’s spot after he had to pull out due to sickness).

Next up was David Bell with his lightning talk, “Commonly Held Beliefs about Test Cases”. David explored some of the common “best practice” beliefs around test cases (such as “testing can only be performed using test cases” and “test cases help you achieve 100% coverage”) and added his own thoughts and counter-arguments for many of them. It was great to see David giving his first public testing presentation, and he presented his ideas clearly and compellingly. Well done, David, and we’re looking forward to your next contribution to the meetup group.

For the great debate, we split the group of about 40 into two, with one group building an argument that you cannot test without test cases and the other arguing that you can. Each group was given some time to define what they meant by “test case” and then build their argument around whether you could test with such test cases or not. Bringing the two groups “head to head” resulted in a pretty heated – but entirely good-natured – debate, expertly moderated by Rajesh Mathur. The group arguing that test cases were a necessity had smartly come up with a very broad definition of “test case” (covering test ideas, oracles, any thoughts leading to a test, etc.), so it was hard for them to lose the argument, and the debate veered off in several different directions before we had to call time at 7.30pm.

It’s great to see such passion in the group and the debate format brings that out very well. The guys from Nintex were surprised by the passion on display, and it makes the job of putting these events on so rewarding when it results in such strong debate about the true meaning of good testing.

For further meetup announcements, remember to follow our meetup.com page at:

 http://www.meetup.com/Test-Engineering-Alliance-Melbourne/

Also keep an eye on our website, http://www.testengineeringalliance.com, where you will find all of our different offerings – including the opportunity to take the Rapid Software Testing course with the one and only Michael Bolton, and a brand new testing conference, Australian Testing Days, both happening in May 2016. (Note that members of the TEAM meetup group are entitled to a 15% discount off the conference registration fees.)

You can also follow TEAM on social media.

Software testing: craft or engineering?

I like the fact that Rex Black shares his thoughts every month through his free webinar series, even if I often don’t agree with his content. Hearing what other people think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good software testing looks like.

I recently attended Rex’s webinar titled “Why does software quality still suck?” and his premise was that software quality is abysmal and always has been.

This was one of the webinars where his content was very far away from my own ideas about software testing. Let’s start with the premise that software quality is bad and that’s the way it’s always been. Is it really still bad? Is it as bad as it was 20 years ago? Is it better than it was 5 years ago? I don’t know a way of measuring quality such that these questions could be meaningfully answered. What I do know is that the way software is developed, deployed and consumed has changed a great deal, but much of the teaching around how to test that software has its roots in the past. Maybe software quality still sucks because the testing industry (in general) has failed to adapt to the changes in the way software is developed, deployed and consumed?

Rex noted that manufacturing industries are capable of six sigma levels of quality (conventionally taken to mean about 3.4 defects per million opportunities), yet fairly recent Capers Jones research suggests that C++ and Java code typically contains around 13,000 defects per million lines – so “software quality has not matured to true engineering yet”. There is the implicit suggestion here that building software is like building widgets, so we in the software business should be able to achieve six sigma levels of quality in the code we write and deliver as software to our customers. In repeatable production-line manufacturing processes, it’s not too hard to see how you could whittle down the problems during production to achieve very low levels of defects. However, building software is not a repeatable production-line process: every piece of software is different. It’s also harder to define what a defect means in software, and it’s not clear that the presence of more defects necessarily means poorer quality in the opinion of the customer.
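
Just to put those two figures side by side (my own back-of-envelope arithmetic, and note that it leans on the questionable equivalence between a line of code and a defect “opportunity”):

\[
\frac{13{,}000 \ \text{defects per million lines}}{3.4 \ \text{defects per million opportunities}} \approx 3{,}800
\]

Taken at face value, code sits more than three orders of magnitude away from six sigma, but the units on either side of that ratio aren’t really comparable, which rather underlines how strained the manufacturing analogy is.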

Let’s suppose for argument’s sake that software quality does still suck: what are the causes of that? Rex offered a few broad categories of causes, a couple of which I will mention here, viz. under-qualified software professionals and a failure to follow best practices.

In terms of under-qualified software professionals being a cause of bad software, he said “certifications are a start, especially if we make them omnipresent” and he noted that such certifications need to be credible & valuable, and also need to be seen to be credible & valuable. When it comes to testing, there is no omnipresent certification (though perhaps ISTQB is coming frighteningly close) and I remain unconvinced that there should be. The link from software testers not being certified to software sucking is a seriously tenuous one as far as I’m concerned. Bad software is not just a product of bad testing, and the best testers on earth can’t make a bad piece of software good if the environment isn’t right for them to do so. What would help – in a general sense – is highly skilled software testers, and there are many ways of acquiring skills outside of any certification scheme. Let’s not confuse qualification with skill.

Making the link between sucky software and a failure to follow best practices was one of Rex’s main points in this webinar. His claim was that “if we applied best practices, software would suck a lot less” and he capped it off with the bold statement that “Failure to follow best practices [in software development and testing] is negligence” (in the legal sense). This was again supported by references to manufacturing industries and the idea that if we could move software development to being true engineering, then we’d be in a position where following best practices was not only the norm, but a legal requirement. As is common knowledge, I associate myself with the context-driven testing (CDT) school and one of its principles is “There are good practices in context, but there are no best practices.” So does this mean testers following context-driven principles are contributing to bad quality in the software they help produce? I see no evidence of that, and my experience suggests that the exact opposite happens when testers move to more CDT styles of thinking, focusing on skills and applying approaches and techniques that make sense in the context of the project they’re contributing to.

Rex commented a few times that we’re still in the “craft” stage in terms of quality when it comes to building software, and that we need to strive to reach the “true engineering” stage. When I think of a “craftsman”, I imagine a person who is very skilled at doing something (words like “bespoke”, “excellence” and “experience” all come to mind) and software testing is such a craft – the difference between a tester who is truly skilled in it and one who is inexperienced or lacks the right skills is enormous, in terms of the contribution they can make to projects and specifically to helping software suck less. There are great benefits to taking an engineering approach to our work too, of course, but I don’t see it as a continuum from craft to engineering; I see one complementing the other.

(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/ and the one I refer to in the above post can be listened to in full at http://rbcs-us.com/resources/webinars/why-does-software-quality-still-suck/)