I like the fact that Rex Black shares his thoughts every month through his free webinar series, even if I often don’t agree with his content. Hearing what other people think about software testing helps me to both question and cement my own thoughts and refine my arguments about what I believe good software testing looks like.
I recently attended Rex’s webinar titled “Why does software quality still suck?” and his premise was that software quality is abysmal and always has been.
This was one of the webinars where his content was very far away from my own ideas about software testing. Let’s start with the premise that software quality is bad and that’s the way it’s always been. Is it really still bad? Is it as bad as it was 20 years ago? Is it better than it was 5 years ago? I don’t know a way of measuring quality such that these questions could be meaningfully answered. What I do know is that the way software is developed, deployed and consumed has changed a great deal but much of the teaching around how to test that software has its roots in the past. Maybe software quality still sucks because the testing industry (in general) has failed to adapt to the changes in the way it is built, deployed and consumed?
Rex noted that manufacturing industries are capable of six sigma levels of quality (roughly 3.4 defects per million opportunities), yet fairly recent Capers Jones research suggests that C++ and Java code typically contains around 13,000 defects per million lines of code – so "software quality has not matured to true engineering yet". The implicit suggestion here is that building software is like building widgets, so we in the software business should be able to achieve six sigma levels of quality in the code we write and deliver to our customers. In repeatable production-line manufacturing processes, it's not too hard to see how you could whittle down the problems during production to achieve very low levels of defects. Building software, however, is not a repeatable production-line process – every piece of software is different. It's also harder to define what a defect even means in software, and it's not clear that the presence of more defects necessarily means poorer quality in the opinion of the customer.
Let's suppose, for argument's sake, that software quality does still suck – what are the causes of that? Rex offered a few broad categories of causes, a couple of which I will mention here: under-qualified software professionals and a failure to follow best practices.
In terms of under-qualified software professionals being a cause of bad software, he said “certifications are a start, especially if we make them omnipresent” and he noted that such certifications need to be credible & valuable and also need to be seen to be credible & valuable. When it comes to testing, there is no omnipresent certification (though perhaps ISTQB is coming frighteningly close) and I remain unconvinced that there should be. The link from software testers not being certified to software sucking is a seriously tenuous one as far as I’m concerned. Bad software is not just a product of bad testing and the best testers on earth can’t make a bad piece of software good if the environment isn’t right for them to help in doing so. What would help – in a general sense – is highly skilled software testers and there are many ways of acquiring skills outside of any certification scheme. Let’s not confuse qualification with skill.
Making the link between sucky software and a failure to follow best practices was one of Rex's main points in this webinar. His claim was that "if we applied best practices, software would suck a lot less" and he capped it off with the bold statement that "Failure to follow best practices [in software development and testing] is negligence" (in the legal sense). This was again supported by references to manufacturing industries and the idea that if we could move software development to being true engineering, then following best practices would be not only the norm, but a legal requirement. It's no secret that I associate myself with the context-driven school of testing, and one of its principles is "There are good practices in context, but there are no best practices." So does this mean that testers following context-driven principles are contributing to the software they produce being of bad quality? I see no evidence of that – my experience suggests that the exact opposite happens when testers move to more context-driven styles of thinking, focusing on skills and applying appropriate approaches and techniques that make sense in the context of the project they're contributing to.
Rex made the comment a few times that, in terms of quality, we're still in the "craft" stage of building software and need to strive to reach the "true engineering" stage. When I think of a "craftsman", I imagine a person who is very skilled at doing something (words like "bespoke", "excellence" and "experience" all come to mind) and software testing is such a thing – the difference between a tester who is truly skilled in this craft and one who is inexperienced or lacks the right skills is enormous, in terms of the contribution they can make to projects and specifically to helping software suck less. There are also great benefits to taking an engineering approach to our work, of course, but I don't see it as a continuum from craft to engineering – I see each complementing the other.
(For reference, Rex publishes all his webinars on the RBCS website at http://rbcs-us.com/resources/webinars/ and the one I refer to in the above post can be listened to in full at http://rbcs-us.com/resources/webinars/why-does-software-quality-still-suck/)
That was definitely an interesting webinar. I won the drawing for the free e-learning course, which I'm excited about. This debate is starting to sound to me like using standardized testing of students to evaluate teacher quality here in the US – as a metric, it can be interesting to look at in the context of an overall evaluation of how good teachers are at their profession, but there is a lot that cannot be measured, and being able to tell a "good" teacher from a "bad" teacher comes from years of experience and building of tacit knowledge. It takes a dedicated hiring manager to look at the sum of a person's skill, drive, and knowledge, and hiring a tester just because of a certification is as risky as hiring a teacher based solely on the test scores of their students, without evaluating their passion and how well they will fit with the job they're being hired to do. An over-reliance on certification can lead to the same disastrous consequences we're seeing here in the States, where teachers are "teaching to the test" – teaching only what is going to be covered on the standardized tests and not diverting into more interesting territory as their students show interest. For us, it becomes about checking boxes instead of engaging with the software to find things that there are not checkboxes for. And the incentive would then be for our stakeholders to give us just enough time for the box-checking tasks.
Thanks for the comment, Amanda. I like your statement “it becomes about checking boxes instead of engaging with the software to find things that there are not checkboxes for”. It reminds me of Elisabeth Hendrickson’s “Tested = Checked + Explored” idea (from the “Explore It!” book) – we can check against known things but we need exploration to unearth other risks.
Thanks again for taking the time to comment.