Integration of AI systems at scale

AI Tech Trends

Today, data scientists and machine learning engineers can implement systems that tackle discrete tasks with a very high degree of success. In fields such as image processing to support cancer diagnosis, results can be extremely accurate, with the best-performing algorithms even exceeding experienced clinicians for certain categories of diagnosis.

These successes are down to the ability of deep learning systems to learn from the features of large but finite training data sets, identifying the parameters that are key to producing an accurate categorisation and an associated probability.
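At inference time, such a system typically reduces to a function that maps class scores to a category and an associated probability, usually via a softmax over the scores. A minimal sketch (the labels and raw scores below are illustrative, not from any real diagnostic model):

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores, labels):
    """Return the most probable label and its probability."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Illustrative raw scores a trained network might emit for one image
label, prob = classify([0.8, 2.3], ["benign", "malignant"])
# label is "malignant", prob ≈ 0.82
```

The point is that the categorisation is never certain: the system always reports a probability, and downstream decisions have to account for that.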

The repetitive training processes needed to achieve this level of accuracy lend themselves well to tasks that are highly focused and can operate almost exclusively within the confines of a computer, where repetition carries almost zero penalty. The opposite can be true for systems that must, by necessity, interface with physical processes, for example, training a robot to walk, or creating a more advanced vehicle braking system that combines AI and traction control into a 'system of systems'. In such situations we often find that mechanical failures arise before training is complete, the result of the frequent repetition needed to develop the desired capability.

As well as the difficulties with certain mechanical systems, another dynamic will emerge as organisations attempt to achieve more advanced AI requiring cooperation between AI agents. This complexity is already evident inside autonomous vehicles, where it is necessary to create such a 'system of systems' and have various elements, including Deep Learning modules, working together effectively. Then take this up a level of abstraction and consider how autonomous vehicles might interact with one another in the context of a smart city with AI-assisted traffic lights that optimise traffic flow.

The challenge for many organisations is that Deep Learning algorithms can, in fact, be quite fragile, in the sense that they don't respond well to unforeseen inputs. This was recently demonstrated by researchers at Kyushu University, who showed that well-placed pixel changes (often just one, three or five brightly lit pixels) could fool an AI system into failing to recognise an object. This effect might have consequences in the future in areas such as military camouflage (perhaps beneficial) or for cyclists with illuminating LEDs (possibly causing accidents). As a consequence, there is greater emphasis on the need to test AI systems not just within the confines of limited scenarios, but in the context of how they operate in the real world, connected into more complex overall systems, or across an organisation where they are expected to collaborate.
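The fragility the Kyushu work exposes can be probed with a simple robustness check: change a handful of pixels and see whether the model's prediction flips. The sketch below uses a toy threshold 'model' purely for illustration (the real one-pixel attack searches for perturbations with differential evolution against an actual neural network); every name and threshold here is an assumption for the example.

```python
import random

def toy_model(image):
    """Toy stand-in for a classifier: 'bright' if the mean pixel exceeds 0.5."""
    flat = [p for row in image for p in row]
    return "bright" if sum(flat) / len(flat) > 0.5 else "dark"

def perturb_pixels(image, k, value=1.0, rng=None):
    """Return a copy of the image with k randomly chosen pixels set to `value`."""
    rng = rng or random.Random(0)
    out = [row[:] for row in image]
    coords = [(r, c) for r in range(len(out)) for c in range(len(out[0]))]
    for r, c in rng.sample(coords, k):
        out[r][c] = value
    return out

def is_robust(model, image, k, trials=100):
    """Check whether the prediction survives `trials` random k-pixel changes."""
    base = model(image)
    rng = random.Random(42)
    return all(model(perturb_pixels(image, k, rng=rng)) == base
               for _ in range(trials))

# An image near the decision boundary is flipped by a single bright pixel,
# while one far from the boundary is not.
fragile = [[0.45] * 3 for _ in range(3)]   # mean 0.45 → "dark"
solid = [[0.0] * 3 for _ in range(3)]      # mean 0.0  → "dark"
```

Random probing like this is a weak lower bound on fragility; a targeted search, as in the Kyushu paper, finds failures far more efficiently, which is exactly why testing in realistic, connected contexts matters.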

This has implications for organisations starting to deploy multiple coordinated AI systems at large scale, as it places increased emphasis on validation and testing within the context of the environment in which the AI operates. Not only will individual systems need to be updated and trained continuously, but ensembles of connected systems will need to be trained and tested together.

This need for continuous testing and validation will place significant emphasis on the running costs of such processes. This is where organisations like Verne Global can help. By delivering Deep Learning platforms at scale, powered entirely by low-cost, 100% renewable energy, organisations can be confident that they are deploying fully tested and validated systems of systems without breaking the bank.

Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He brings a wealth of experience from the global technology sector, with detailed knowledge of Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

Related blogs

AI is a great opportunity, but banks must innovate faster

The impact of AI on the lives of consumers and the operation of businesses is slowly growing. Whether it’s the increasing visibility of autonomous vehicles or the small conveniences of a voice assistant such as Amazon’s Alexa, we’re beginning to get a sense of what AI can do. However, we’re still at the beginning. The truly significant changes are yet to come.


Explainable AI

SC18 here in Dallas is proving once again to be a fascinating melting pot of HPC insights and observations, and it's intriguing to see the continuing convergence of AI into the supercomputing ecosystem. Along these lines I started to think about the movement towards 'Explainable AI'. Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science. Whether solving equations in a dynamic system for precise answers or using statistical analysis to examine a distribution of events, the results sought from these methods are intended to increase our clarity and knowledge of how the world works.


HPC and AI collusion – Rumours from the trade show floor

Recently I’ve garnered much of my blog inspiration from industry events. February has been no exception and I benefited from a fascinating day earlier in the month at the HPC and Big Data conference in London, just a stone’s throw from the Houses of Parliament. Here are some of my observations from my discussions there...

