Explainable AI

AI Tech Trends


SC18 here in Dallas is proving once again to be a fascinating melting pot of HPC insights and observations, and it's intriguing to see the continuing convergence of AI with the supercomputing ecosystem. Along these lines I started to think about the movement towards 'Explainable AI'. Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science. Whether solving the equations of a dynamic system for precise answers or using statistical analysis to examine a distribution of events, the results sought from these methods are intended to increase our clarity and knowledge of how the world works.


Too much detail?

Human researchers have historically been biased towards models and tools that are amenable to our intuition. Nonlinear systems are seen as more chaotic and harder to understand. In recent decades, iterative methods using computers to perform repetitive steps have helped address some of these challenges, although how they actually obtain their results can be more difficult for humans to understand. This has, in part, driven the boom in data visualisation techniques designed to overcome these challenges.

As AI gets more widely deployed, the importance of having explainable models will increase. With AI being used in tasks where incidents may result in legal action, it will be essential not only that models and their associated training data are archived and subject to version control, but also that the actions of the model are explainable.

Deep Learning adds significant complexity to the form of the models used. Deep Learning models may be constructed from many interconnected nonlinear layers supporting feature extraction and transformation. This high level of coupling between very large numbers of nonlinear functions drives the need for extremely complex, highly parallel computations. This complexity is leveraged in Deep Learning to provide models that can address fine details and identify features within a problem that cannot be addressed by traditional means, but it comes at the cost of simplicity of insight.
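To make the point concrete, here is a minimal, illustrative sketch (plain NumPy, with layer widths and random weights chosen arbitrarily for the example) of how a prediction emerges from a stack of coupled nonlinear layers rather than from a single readable formula:

```python
# Minimal sketch: a stack of fully connected nonlinear layers. The widths and
# random weights are arbitrary; the point is that the output is a composition
# of many entangled nonlinear functions, not a formula a human can read off.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [64, 128, 128, 10]            # illustrative sizes only
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Every layer couples every input to every output through a nonlinearity.
    for W in weights:
        x = np.tanh(x @ W)
    return x

print(forward(rng.standard_normal(64)).shape)   # -> (10,)
```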

Explainable AI (XAI)

Explainable AI is a movement focused on the interpretability of AI models. This is not just about simplifying models, which can often remove the benefits achieved from complexity. Instead, XAI focuses on delivering techniques that support human interpretability. A range of approaches can be used, starting with simple methods such as:

  • 2D or 3D projections: taking a larger multi-dimensional space and projecting it into a lower-dimensional representation (2D or 3D)
  • Correlation graphs: 2D graphs where the nodes represent variables and the thickness of the line between each pair represents the strength of their correlation (a minimal sketch of both approaches follows this list).
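As a rough illustration of these two methods, the sketch below projects a small tabular dataset into 2D with PCA and draws a correlation graph whose edge widths track correlation strength. It assumes scikit-learn, matplotlib, NumPy and networkx are available; the Iris dataset and the edge-width scaling factor are arbitrary choices for the example.

```python
# Minimal sketch: a 2D PCA projection plus a correlation graph for a small
# tabular dataset (Iris here, purely as a stand-in for real data).
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()
X, y, names = iris.data, iris.target, iris.feature_names

# 2D projection: compress the 4-dimensional feature space down to 2 components.
proj = PCA(n_components=2).fit_transform(X)
plt.figure()
plt.scatter(proj[:, 0], proj[:, 1], c=y, s=20)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("2D PCA projection")

# Correlation graph: nodes are variables, edge width reflects |correlation|.
corr = np.corrcoef(X, rowvar=False)
G = nx.Graph()
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        G.add_edge(names[i], names[j], weight=abs(corr[i, j]))

pos = nx.circular_layout(G)
widths = [5 * G[u][v]["weight"] for u, v in G.edges()]  # arbitrary scaling
plt.figure()
nx.draw(G, pos, with_labels=True, width=widths, node_color="lightsteelblue")
plt.title("Correlation graph (edge width = correlation strength)")
plt.show()
```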

But, with XAI, there is often a decision point at the start of the modelling process as to how interpretable the data scientist wants the model to be. Machine Learning techniques such as Decision Trees, Monotonic Gradient Boosted Machines and rule-based systems do lead to good results, but in cases where accuracy is more important than interpretability it often falls to visualisation techniques to support human insight. A range of tools exists to support these objectives, such as:

  • Decision tree surrogates: essentially a simple-to-understand model used to explain a more complex one via a simplified decision flow
  • Partial dependence plots: these show how the machine learning model behaves on average, providing a coarse, high-level overview that lacks detail
  • Individual conditional expectation (ICE): these focus on local relationships and are often a good complement to partial dependence plots – in effect, ICE provides a drill-down from partial dependence plots (a sketch of a surrogate tree alongside PDP/ICE plots follows this list).
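Here is a hedged sketch of these ideas, assuming scikit-learn (a reasonably recent version, roughly 1.0+, for PartialDependenceDisplay) and a synthetic dataset standing in for real data; the model choices, tree depth and feature indices are arbitrary for the example.

```python
# Minimal sketch: a shallow decision-tree surrogate for a more complex model,
# plus partial dependence (average) and ICE (individual) curves.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic regression problem standing in for a real dataset.
X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=0)
feature_names = [f"x{i}" for i in range(X.shape[1])]

# The "complex" model whose behaviour we want to explain.
complex_model = GradientBoostingRegressor(random_state=0).fit(X, y)
predictions = complex_model.predict(X)

# Decision tree surrogate: fit a shallow, readable tree to the complex model's
# predictions (not the true labels), then check how faithfully it mimics them.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, predictions)
fidelity = r2_score(predictions, surrogate.predict(X))
print(f"Surrogate fidelity (R^2 vs the complex model): {fidelity:.2f}")
print(export_text(surrogate, feature_names=feature_names))

# Partial dependence and ICE curves for two (arbitrarily chosen) features.
PartialDependenceDisplay.from_estimator(
    complex_model, X, features=[0, 1], kind="both", feature_names=feature_names
)
plt.show()
```

The surrogate's printed rules are only worth reading when its fidelity to the complex model's predictions is high; otherwise the simplified decision flow is explaining something other than the model it stands in for.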

These techniques help aid clarity. They may not represent the full complexity of the data, but instead serve to provide a better feel for the data in human terms. These capabilities are going to be key as we advance Deep Learning and AI, and in particular, there will also be intense demand for expert witness skills to help articulate understanding to non-data-scientist and non-technical audiences. Part of this process will rely on good visualisation of large data sets, leveraging powerful GPU technology to support these representations. So, in a sense, whilst our ability to use GPUs for AI has in part created challenges of complexity, GPUs will undoubtedly also be part of the solution to enhancing understanding. It is therefore likely that one outcome of the explainable AI movement will be AIs that can help humans with the tricky task of model interpretation.

Back to SC18 - if you are at the show, please come and visit us on the Arm booth in the centre of the show floor, where my colleagues and I will be delighted to meet you. Today is an exciting day for us, having announced our new partnership with Arm and Marvell - pop over to find out how you can trial an Arm cluster in Iceland!


Written by Vasilis Kapsalis

See Vasilis Kapsalis's blog

Vas is Verne Global's Director of Deep Learning and HPC Solutions. He comes with a wealth of experience from the global technology sector, with detailed knowledge in Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

Related blogs

Deep Learning at Scale

Deep learning is a current hot topic in HPC and I am sure it will be one of the key themes at SC18. In many cases, we have been hearing about artificial intelligence, machine learning, and deep learning discussed together, though in reality, machine learning is a type of AI, and deep learning is a subset of machine learning. In this article, we will try to define deep learning and its industrial applications and, more specifically, the benefits of scale - from big data to high-performance computing - to the successful implementation of deep learning.



Classic HPC v New HPC – Rumours from the trade show floor

Sometimes the combination of networking at a trade show and catching an insightful presentation provides a valuable insight into market dynamics. Earlier this year I attended the HPC and Quantum Computing show in London and, following this, watched Addison Snell’s (CEO of Intersect360 Research) “The New HPC” presentation from the Stanford HPC Conference. Both confirmed my HPC suspicions garnered over the last year.



HPC and AI collusion – Rumours from the trade show floor

Recently I’ve garnered much of my blog inspiration from industry events. February has been no exception and I benefited from a fascinating day earlier in the month at the HPC and Big Data conference in London, just a stone’s throw from the Houses of Parliament. Here are some of my observations from my discussions there...

