I’ve read a number of books on similar topics this year around artificial intelligence, machine learning, algorithms, etc. Coming to this topic with little in the way of prior knowledge, I feel like I’ve learned a great deal.
Our increasing reliance on decisions made by machines instead of humans is having significant – and sometimes truly frightening – consequences. Despite the supposed objectivity of algorithmic decision making, there is plenty of evidence of human biases being encoded into these algorithms. The proprietary nature of some of these systems also means that many people are left powerless in their search for explanations of the decisions these algorithms make.
Each of these books tackles the subject from a different perspective and I recommend them all:
- Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (by Virginia Eubanks)
- AI Superpowers: China, Silicon Valley & the New World Order (by Dr Kai-Fu Lee)
- The Formula: How Algorithms Solve All Our Problems…And Create More (by Luke Dormehl)
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (by Cathy O’Neil)
It feels like “AI in testing” is becoming a thing, with my feeds populated by articles, blog posts and ads about the increasingly large role AI is playing – or will play – in software testing. It strikes me that we would be wise to learn from the mistakes discussed in these books before trying to fully replace human decision making in testing with decisions made by machines. The biases encoded into these algorithms should also be acknowledged: it seems likely that confirmatory biases will be present in algorithmic testing, and we neglect the power of human ingenuity and exploration at our peril when it comes to delivering software that both solves problems for and makes sense to (dare I say “delights”) our customers.