Explainable AI

AI Tech Trends


SC18 here in Dallas is proving once again to be a fascinating melting pot of HPC insights and observations, and it's intriguing to see the continuing convergence of AI with the supercomputing ecosystem. Along these lines, I started to think about the movement towards 'Explainable AI'. Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science. Whether solving equations in a dynamic system for precise answers or using statistical analysis to examine a distribution of events, the results sought from these methods are intended to increase our clarity and knowledge of how the world works.


Too much detail?

Human researchers have historically been biased towards models and tools that yield to our intuition. Nonlinear systems are seen as more chaotic and harder to understand. In recent decades, iterative methods that use computers to perform repetitive steps have helped address some of these challenges, although how they actually obtain their results can be more difficult for humans to follow. This has in part led to the boom in data visualisation techniques designed to overcome some of these challenges.

As AI becomes more widely deployed, the importance of having explainable models will increase. With AI being used in tasks where incidents may arise that result in legal action, it will be essential not only that models and their associated training data are archived and subject to version control, but also that the actions of the model are explainable.

Deep Learning adds significant complexity to the form of the models used. Deep Learning models may be constructed from many interconnected nonlinear layers supporting feature extraction and transformation. This high level of coupling between very large numbers of nonlinear functions drives the need for extremely complex, highly parallel computations. This complexity is leveraged in Deep Learning to provide models that can address fine details and identify features within a problem that cannot be addressed by traditional means, but at the cost of sacrificing simplicity of insight.
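To make that point concrete, here is a minimal, purely illustrative sketch in NumPy (random, untrained weights, arbitrary layer sizes) of why such models resist simple inspection: the prediction is a nested composition of nonlinear layers rather than a single human-readable formula.

```python
import numpy as np

# Purely illustrative: random, untrained weights standing in for a real model.
rng = np.random.default_rng(0)

def layer(x, w, b):
    """One fully connected layer followed by a nonlinearity (ReLU)."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 8))  # a single input with 8 features
weights = [
    (rng.normal(size=(8, 16)), rng.normal(size=16)),
    (rng.normal(size=(16, 16)), rng.normal(size=16)),
    (rng.normal(size=(16, 1)), rng.normal(size=1)),
]

# The prediction is a nested composition f3(f2(f1(x))) of nonlinear functions,
# so no single weight or term offers a simple, human-readable explanation.
h = x
for w, b in weights:
    h = layer(h, w, b)
print(h)
```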

Explainable AI (XAI)

Explainable AI is a movement focused on the interpretability of AI models. This is not just about simplifying models, which can often remove the benefits achieved from complexity. Instead, XAI can and does focus on delivering techniques to support human interpretability. A range of approaches can be used, for example, simple methods such as:

  • 2D or 3D projections (this involves taking a larger multi-dimensional space and presenting it in a lower-dimensional form, i.e. 2D or 3D)
  • Correlation graphs (2D graphs where the nodes represent variables and the thickness of the lines between them represents the strength of the correlation), as sketched in the example below.
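As a rough illustration of both ideas, the sketch below uses scikit-learn's built-in wine dataset as a stand-in for real project data; the PCA projection, the 0.6 correlation threshold and the plotting choices are assumptions made purely for illustration.

```python
# A minimal sketch, assuming scikit-learn, pandas and matplotlib are available;
# scikit-learn's wine dataset stands in for real project data.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = load_wine()
X = pd.DataFrame(data.data, columns=data.feature_names)

# 2D projection: compress the 13-dimensional feature space into 2 components
proj = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
plt.scatter(proj[:, 0], proj[:, 1], c=data.target)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("2D projection of a 13-dimensional feature space")
plt.show()

# Correlation "graph": list variable pairs with strong correlations; if drawn
# as a graph, the absolute correlation would set the thickness of each edge.
corr = X.corr()
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if abs(corr.loc[a, b]) > 0.6:  # threshold chosen arbitrarily
            print(f"{a} -- {b}: edge weight {corr.loc[a, b]:.2f}")
```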

But, with XAI, there is often a decision point at the start of the modelling process as to how interpretable the data scientist wants the model to be. Machine Learning techniques such as Decision Trees, Monotonic Gradient Boosted Machines and rules-based systems do lead to good results, but in cases where accuracy is more important than interpretability it often falls to visualisation techniques to support human insight. There is a range of tools that can support these objectives (sketched in the code example after this list), such as:

  • Decision tree surrogates: essentially a simple-to-understand model, used to explain a more complex one by means of a simplified decision flow
  • Partial dependence plots: these provide a view of how the machine learning model behaves on average. This gives a coarse, high-level overview that inevitably lacks detail
  • Individual conditional expectation (ICE): these focus on local relationships and are often a good complement to partial dependence plots – in effect, ICE provides a drill-down from them.
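As a hedged sketch of how these tools might look in practice, the example below trains a gradient boosted classifier on synthetic data, fits a shallow decision tree surrogate to its predictions, and draws partial dependence and ICE curves with scikit-learn; the dataset, model settings and feature choices are assumptions made for illustration, not a recommended setup.

```python
# A minimal sketch, assuming scikit-learn (>= 1.0) and matplotlib are available.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The "black box": a gradient boosted model we want to explain
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Decision tree surrogate: a shallow tree trained to mimic the black box's
# predictions, giving a simplified, human-readable decision flow
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print("Surrogate fidelity:",
      accuracy_score(black_box.predict(X), surrogate.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))

# Partial dependence (average behaviour) and ICE (per-sample behaviour)
# for the first two features of the black-box model
PartialDependenceDisplay.from_estimator(
    black_box, X, features=[0, 1], kind="both")
plt.show()
```

The surrogate's fidelity score indicates how faithfully the simple tree mimics the complex model, which is a useful caveat to attach to any explanation it produces.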

These techniques can aid clarity. They may not represent the full complexity of the data, but instead serve to provide a better feel for it in human terms. These capabilities are going to be key as we advance Deep Learning and AI, and in particular there will be intense demand for expert-witness skills to help articulate understanding to non-data-scientist and non-technical audiences. Part of this process will rely on good visualisation of large data sets, leveraging powerful GPU technology to support these representations. So, in a sense, whilst our ability to use GPUs for AI has in part created these challenges of complexity, GPUs will undoubtedly also be part of the solution to enhancing understanding. It is therefore likely that one outcome of the explainable AI movement will be AIs that can help humans with the tricky task of model interpretation.

Back to SC18 - if you are at the show, please come and visit us on the Arm booth in the centre of the show-floor where my colleagues and I will be delighted to meet you. Today is an exciting day for us having announced our new partnership with Arm and Marvell - pop over to find out how you can trial an Arm cluster in Iceland!


Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He comes with a wealth of experience from the global technology sector, with detailed knowledge in Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

