AI needs to be ‘explainable’ but is that possible?

AI Insights

Amazon's AI made the news early in October after it was revealed that the company had scrapped a recruitment engine because it was 'sexist'. Private Eye, the UK's satirical news magazine, described it as "a reminder to take an extra big pinch of salt whenever you hear that AI will improve the world". However, the reality is more complicated...

Since 2014, Amazon has been trying to automate the process of sifting through the vast number of CVs it receives for every vacancy. In 2015, according to a Reuters report, the company realised that the AI it had been using was not "gender neutral". That's because it had been trained using CVs submitted to Amazon over a period of 10 years and, since those were dominated by men, its decisions were skewed.

It's a reminder that AI is only as good as the data on which it has been trained. If that data is biased, then the AI will reflect it. This isn't a new argument and it isn't specific to machine learning - any kind of big data automation runs the risk of entrenching unfairness, something explored by data scientist Cathy O'Neil in her 2016 book Weapons of Math Destruction and, more recently, by political scientist Virginia Eubanks in Automating Inequality, published in January 2018.

Where AI adds a new dimension is that it is meant to learn as it goes through the process and improve over time. Therefore, if the data contains some kind of bias, those biases will be magnified over time. And they might not be visible, even to the people running the system.
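That compounding effect can be shown with a deliberately simplified toy simulation. The numbers and the update rule below are invented for illustration and don't describe any real system: a screening model is periodically retrained on the candidates it previously accepted, so a small initial skew between two equally capable groups widens with every retraining cycle.

```python
# Toy feedback loop (illustrative numbers only): a screening model is
# retrained each "generation" on the candidates it previously accepted,
# so an initial 55/45 skew between two equally capable groups compounds.
a, b = 0.55, 0.45  # initial acceptance shares for groups A and B
history = [(a, b)]
for generation in range(5):
    # Squaring each share before renormalising stands in for the
    # retrained model leaning further towards whichever group already
    # dominates its training data.
    a, b = a * a, b * b
    a, b = a / (a + b), b / (a + b)
    history.append((round(a, 3), round(b, 3)))
print(history)  # the gap widens every generation
```

After five cycles the toy model accepts group A almost exclusively, even though the starting difference was only ten percentage points - and nothing in the system's output flags that this has happened.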

It's fairly easy to spot that an AI is favouring men over women. It should be possible to spot discrimination based on ethnicity. But it's much harder to tell that someone was rejected for a job interview because of their first name, or the postcode where they grew up, or a particular verb they used on their application. When biases are automated and hard to spot, they can do a lot of damage.
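To make the proxy problem concrete, here is a minimal, entirely hypothetical sketch - the CVs and the scoring rule are invented for illustration and bear no relation to Amazon's actual system - of how a word-frequency model trained on a male-dominated hiring history ends up penalising a gender-correlated word. (Reuters reported that Amazon's model did downgrade CVs containing the word "women's".)

```python
from collections import Counter

# Hypothetical training data: words from CVs of past hires vs. rejections.
# The historical pool is male-dominated, so words that merely correlate
# with gender become correlated with hiring outcomes.
hired = [
    "rugby captain software engineer python",
    "software engineer chess club python",
    "python developer rugby software",
]
rejected = [
    "women's chess club software engineer python",
    "netball captain python developer",
]

def word_scores(hired, rejected):
    """Score each word by how much more often it appears in hired CVs."""
    h = Counter(w for cv in hired for w in cv.split())
    r = Counter(w for cv in rejected for w in cv.split())
    vocab = set(h) | set(r)
    # Laplace-smoothed ratio of hire frequency to rejection frequency.
    return {w: (h[w] + 1) / (r[w] + 1) for w in vocab}

def score_cv(cv, scores):
    """Average the per-word scores; unknown words are neutral (1.0)."""
    words = cv.split()
    return sum(scores.get(w, 1.0) for w in words) / len(words)

scores = word_scores(hired, rejected)

# Two equally qualified candidates differ only in one proxy word:
a = score_cv("software engineer python chess club", scores)
b = score_cv("software engineer python women's chess club", scores)
print(a > b)  # the second CV scores lower purely because of the proxy word
```

No field in this toy model is labelled "gender", which is exactly why the bias would be hard to spot from the outside.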

That's starting to worry people. A study by IBM's Institute for Business Value, published in September, asked 5,000 executives whether they were concerned about how AI uses data and makes decisions. About 60 per cent said that they were - up from 29 per cent in the 2016 version of the study. Their main worry is falling foul of regulatory and compliance standards.

Meanwhile, in PwC's 2017 Global CEO survey, two thirds of business leaders said they believe that AI and automation will have a negative impact on stakeholder trust in their industry over the next five years.

Suppose a patient sues a hospital over their cancer treatment plan and the hospital says it was following a plan proposed by an AI. Perhaps the AI selected the best possible plan in every case except this one. Is it possible to untangle its web of decisions to determine how it gave bad advice on this occasion? And what would publicity about such a story do to trust in automation in other hospitals and other areas of medicine?

Such an example isn't likely today - even hospitals using AI for cancer treatment use it to aid doctors' decisions, not to replace them - but it isn't unimaginable that 'blame the AI' will become a common defensive manoeuvre. Vendors such as IBM and Google are responding by adding "explainability tools" to their offerings.

There are good reasons for business leaders to demand explainability, but is it really possible for vendors to provide it? Venture capitalist Rudina Seseri wrote on TechCrunch earlier this year:

"Part of the advantage of some of the current approaches (most notably deep learning), is that the model identifies (some) relevant variables that are better than the ones we can define, so part of the reason why their performance is better relates to that very complexity that is hard to explain because the system identifies variables and relationships that humans have not identified or articulated. If we could, we would program it and call it software."

In other words, true machine learning is almost inexplicable by definition. She adds that compelling firms to reveal how a proprietary AI system works is effectively making it possible for their rivals to copy them: "That’s why, generally, a push for those requirements favour incumbents that have big budgets and dominance in the market and would stifle innovation in the start-up ecosystem."

AI is estimated to represent a $15trn economic opportunity. It's coming. But it won't be perfect. We have to hope there are incentives in place for people throughout the system - vendors, business customers and consumers - to question AI processes and scrutinise decision making.

Written by Shane Richmond (Guest)


Shane Richmond is a freelance technology writer and former Technology Editor of The Daily Telegraph. You can follow him at @shanerichmond

