Today, data scientists and machine learning engineers can implement systems that tackle discrete tasks with a very high degree of success. Results in fields such as image processing to support cancer diagnosis can be extremely accurate, with the best-performing algorithms even exceeding experienced clinicians for certain categories of cancer.
These successes are down to the ability of deep learning systems to work with the features of large but finite training data sets and identify the parameters that are key to producing an accurate categorisation and an associated probability.
The repetitive training processes needed to achieve this level of accuracy lend themselves well to tasks that are highly focused and can operate almost exclusively within the confines of a computer, where repetition carries almost zero penalty. The opposite can be true for systems that must interface with actual physical processes, for example, training a robot to walk or creating a more advanced vehicle braking system that combines AI and traction control into a 'system of systems'. In such situations we often find that mechanical failures arise before training is complete, the result of the frequent repetition needed to develop the desired capability.
As well as the difficulties with certain mechanical systems, there is another dynamic that will emerge as organisations attempt to achieve more advanced AI requiring cooperation between AI agents. This complexity is already evident inside autonomous vehicles, where it is necessary to create this 'system of systems' and have various elements, including Deep Learning modules, working together effectively. Then take this up a level of abstraction and consider how autonomous vehicles might interact with each other in the context of a smart city with AI-assisted traffic lights to better optimise traffic flow.
The challenge for many organisations is that Deep Learning algorithms can, in fact, be quite fragile, in the sense that they don't respond well to unforeseen inputs. This was recently demonstrated by researchers from Kyushu University, who showed that well-placed pixel changes (usually just one, three or five brightly lit pixels) could be used to fool an AI system into not recognising an object. This is an effect that might have consequences in the future in areas such as military camouflage (perhaps beneficial) or for cyclists with illuminating LEDs (possibly causing accidents). As a consequence, this puts more emphasis on the need to test AI systems not just within the confines of limited scenarios, but in the context of how they operate in the real world, connected into more complex overall systems or across an organisation where they are expected to collaborate.
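To make the fragility concrete, here is a minimal sketch of the idea behind a one-pixel attack. It is a toy illustration, not the Kyushu researchers' method (their work targeted deep networks using differential evolution): the 'classifier' below is a hypothetical hand-crafted linear score map, and the attack is a brute-force search for a single pixel change that flips the prediction.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: a fixed 4x4 weight map
# scoring a grayscale image. Note the strongly negative weight at (1, 3),
# playing the role of a blind spot the attack can exploit.
WEIGHTS = np.array([
    [ 0.2,  0.1, -0.3,  0.4],
    [-0.1,  0.5,  0.2, -2.0],
    [ 0.3, -0.2,  0.1,  0.2],
    [ 0.1,  0.4, -0.1,  0.3],
])

def predict(image):
    """Classify: 'object' when the weighted pixel sum is positive."""
    return "object" if float((WEIGHTS * image).sum()) > 0.0 else "background"

def one_pixel_attack(image, low=0.0, high=1.0):
    """Brute-force search: change a single pixel to an extreme value
    and return the first candidate that flips the label."""
    baseline = predict(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            for value in (low, high):
                candidate = image.copy()
                candidate[i, j] = value
                if predict(candidate) != baseline:
                    return (i, j), candidate
    return None, None

# An image the model labels "object" (score = 0.5 + 0.4 = 0.9).
image = np.zeros((4, 4))
image[1, 1] = 1.0
image[3, 1] = 1.0

pixel, adversarial = one_pixel_attack(image)
print(predict(image), "->", predict(adversarial), "by changing pixel", pixel)
# -> object -> background by changing pixel (1, 3)
```

A real deep network has millions of parameters rather than sixteen weights, but the failure mode is the same in miniature: a single bright pixel, invisible as a change to a human, can push the score across a decision boundary.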
This has implications for organisations starting to deploy large-scale and multiple coordinated AI systems, as it places increased emphasis on validation and testing within the context of the environment in which the AI operates. Not only will individual systems need to be updated and trained continuously, but ensembles of connected systems will need to be trained and tested together.
This need for continuous testing and validation will place significant emphasis on the running costs of such processes. This is where organisations like Verne Global can help. With Deep Learning platforms delivered at scale and powered entirely by low-cost, 100% renewable energy, organisations can be confident that they are deploying fully tested and validated systems of systems without breaking the bank.