A couple of weeks ago Frankfurt hosted the purveyors of the fastest machines on the planet at the International Supercomputing Conference (ISC19), but it was also the location of a really fascinating meetup focused on innovation in the field of Artificial Intelligence.
Verne Global joined up with hosts WeWork to help sponsor the June Artificial Intelligence Meetup Frankfurt. As I joked with the standing-room-only crowd of over 170 participants, there are just so many options when choosing a topic for a 20-minute talk on Artificial Intelligence (AI). My decision came later than the invitations, so we stuck with the placeholder name: ‘Something Cool About AI’. And, in a room filled with entrepreneurs and developers who are embarking on a journey in a new and burgeoning field, in an exciting, vibrant European city - we were all set to chew on all things cool in AI.
After introductions, we started with the hype cycle diagram. A talk is always best when you feel some interaction with your audience, and I was legitimately curious to know how far our meet-up colleagues felt we had traveled along the often-clichéd hype cycle. One of the main reasons I was curious about people’s perspectives is that AI has been around as a concept for over 50 years, so in some regards the hype cycle as it relates to AI is tightly coupled to the personal perspective of each entrepreneur and the particular product and technological subset of AI that they invoke.
So, with the hype cycle up on the screen, I asked for a show of hands as we walked through the various stages. Is AI at the Technology Trigger? Do we see a proliferation of start-up companies accompanied by mass media hype and first generation products? About 20 hands went up. What about the Peak of Inflated Expectations? Have we gone beyond the early adopters and reached 5% of the potential audience? About the same number of hands went up, but nothing dramatic. And then on to the slide into the Trough of Disillusionment. Are we seeing negative press, consolidation of technology providers, and the second and third rounds of funding? I think the mention of negative press caused a stir in the crowd and about 40 hands went up across the room. Half of the room felt that we are still in the early stages of AI as an economic force. And half of the room felt that we were working our way up towards the Plateau of Productivity with high growth adoption and a growing base of adopters that is well into the double digits if not a quarter of the potential marketplace.
I wasn’t surprised to see the widely varying perspectives. But, regardless of which stage each entrepreneur felt AI had reached, all attendees agreed with my point that trust in AI is paramount to our ability to continue the high-growth adoption of a technology that we, as a society, sorely need to be applied ethically and for our common good.
Across our planet, there are sociological and physical-world challenges that require great minds and new inspiration for large-scale solutions. AI will be a central theme not only in ending the debate on climate change, but also in presenting the world with perhaps some uncomfortable options for solving our biggest problems. I am optimistic that technology will play its part, but if history has shown us one thing, it is that without trust, there will be revolution.
So I suggested that we consider some examples of AI that can engender trust. For a positive example of AI, specifically machine learning, applied successfully, I went to a story that I heard at a data center meet-up from a very talented engineer, Jim Gao of DeepMind. Jim described to the riveted audience how Google used DeepMind’s technology to study the control systems at the heart of the cooling plants for Google’s massive data centers.
You might imagine that a room full of data center leaders with 10, 20, and 30 years of experience running highly resilient, mission-critical systems were a bit aghast at the prospect of handing the keys to a many-megawatt cooling plant over to an AI algorithm. And Jim made it clear that this reservation came from the Google engineers as well, and it came in spades. In fact, after the algorithm did its analysis, the revised control scheme it proposed was something that none of the engineers on the team would have recommended.
While this didn’t immediately improve trust in the algorithm, Google had taken the time to fully model their mechanical systems, and they were able to verify that the AI-proposed scheme did in fact have merit. And so, hand-in-hand with the robot, the engineers placed the machine learning algorithm into service and immediately witnessed a noteworthy drop in the amount of energy required to power the cooling systems. In this case, trust was established because humans were involved, and those humans had done their homework: they could model the potential impacts of the changes the AI proposed.
On the other hand, we can all just look to stories within the last few weeks to find examples where AI has breached our trust. In fact, all I needed was one image to remind people that AI has shaken our trust in lots of things related to the companies that use AI and machine learning to deliver ads - political or otherwise…
Source: CBS News
But is this a lack of trust in AI? No. This is a lack of trust in the humans who control the AI, not a lack of trust in the technology itself. The technology is trusted as effective to the point of concern. So we are left with a troubling situation in which the actions or inactions of companies put a promising form of technology at the public guillotine, instead of the corporate or governmental practices that should be eliminated to improve trust in systems that use AI. It feels like a grim situation in which the public does not have a good grasp of an optimistic way forward.
Moving past the doom and gloom, I was at least able to show a slide suggesting the UK holds a fond place in its heart for the future of robots as friends, and given the shadow of Brexit I joked with the room that maybe our British neighbours currently prefer robots to their fellow continental Europeans at this point...! Even in a German, technically minded meet-up this one got a genuine laugh from the group.
But joking aside, trust itself is manifested within our bodies as chemistry. While researching for my talk, I stumbled across a fascinating article in the Harvard Business Review entitled The Neuroscience of Trust. Its author, Paul Zak, is the founding director of the Center for Neuroeconomics Studies and a professor of economics, psychology, and management at Claremont Graduate University. He is also the author of Trust Factor: The Science of Creating High-Performance Companies.
Dr. Zak has devoted a great deal of research to the relationship between a chemical called oxytocin and the perception of trust in animals, including humans. In the study that I highlighted, Zak’s team divided participants into two independent groups. The first group was given a sum of money and the promise that any amount transferred to an unknown recipient would triple in value once transmitted. The second group received the cash and had the option to share any amount back with the sender. The team administered a nasal spray of either synthetic oxytocin or a placebo to see who shared the most. It turned out that both the senders and the recipients were more generous in sharing when dosed with the oxytocin nasal spray.
Source: Harvard Business Review
All of this is certainly interesting to create the link between oxytocin and the essence of trust, but beyond spritzing your team with oxytocin (not recommended!) - how is this applicable to the real world? Well, it turns out that Dr. Zak’s group studied the levels of oxytocin in employees after a business leader showed vulnerability. By asking for help, for example, a business leader increased the levels of oxytocin in other members of their team.
The same approach can and should be taken with AI. The majority of today’s algorithms only succeed after tens if not hundreds of thousands of failed predictions, through a process commonly known as model training. The crux of learning in AI, as with learning in animals, is trial and error. And whether the failures occur at the beginning, or later once the world the model operates in has changed and made it outdated, AI needs human help. This should be celebrated, not hidden within black-box algorithms and excuses when biases creep in and hyperparameters are misapplied. We have a physical, chemical opportunity to engender trust within our own brains if we, as the purveyors of the technology, are willing to share its vulnerability with the greater public and, most importantly, to encourage further trust by inviting people to participate in the development and training of the technology. The opportunities to be gained with AI are simply too great to allow black-box thinking to wipe out our progress.
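The trial-and-error heart of model training can be sketched in a few lines of Python. This is a toy illustration of my own (not from the talk, and far simpler than anything DeepMind ran): a model that starts out knowing nothing learns to fit the line y = 2x + 1 purely by measuring its errors and correcting them, step after step.

```python
def train(steps=2000, lr=0.01):
    """Fit y = 2x + 1 by gradient descent - pure trial and error."""
    data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # tiny toy dataset
    w, b = 0.0, 0.0  # start with a model that knows nothing
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y          # how wrong is the current guess?
            grad_w += 2 * err * x / len(data)
            grad_b += 2 * err / len(data)
        w -= lr * grad_w                   # correct the model by its error
        b -= lr * grad_b
    return w, b

w, b = train()
print(round(w, 2), round(b, 2))  # converges toward the true values 2 and 1
```

Every one of those 2000 steps begins with a wrong prediction; the failures are not a flaw in the process, they are the process.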
The final point for those at the beginning of their AI journey is that the field is growing and regulation is coming. As we turn over more and more decisions to this technology, we need assurance that testing is continuous and progress is verified. The key to preparing for regulated industries is to establish clear objectives, clear goals, and sensible controls as you develop your algorithms for the future. But this journey is not one to take alone. Industry groups are important, and so are professional partners who can help create predictable processes and who will be there when you need help. That will help you build trust the right way - from the inside out.
So, as you can tell I thoroughly enjoyed my time in Frankfurt and discussing all things cool in AI with a gathering of the city’s most inspiring AI-focused entrepreneurs. I’m expecting similar discussions when I stop into Amsterdam on the 10th October for the WorldAI Summit. I’m looking forward to moderating a panel of AI’s “hottest disruptors” at that event, and if you’re interested in all that is either cool or hot in AI...come and join me!