HPC and AI collusion – Rumours from the trade show floor



Recently I’ve garnered much of my blog inspiration from industry events. February has been no exception, and I benefited from a fascinating day earlier in the month at the HPC and Big Data conference in London, just a stone’s throw from the Houses of Parliament. Here are some of my observations from my discussions there...

At the conference (at which our CTO Tate Cantrell spoke - image above) I was surprised to spot delegates from Ofcom, the UK communications regulator: “Ofcom is the communications regulator in the UK. They regulate the TV, radio and video-on-demand sectors, fixed-line telecoms (phones), mobiles and postal services, plus the airwaves over which wireless devices operate.” None of which covers HPC, big data, or the data centers that support them.

By chance, on separate occasions, I chatted with two of the Ofcom delegates, and both gave a similar explanation for their attendance: they were familiarising themselves with the HPC industry and researching whether any facets of it would benefit from policy or regulatory help. For sure, there are aspects of data privacy and artificial intelligence (AI) ethics that could benefit from non-commercial oversight, but I suspect HPC and big data technology is better left alone.

By coincidence, this month Elon Musk quit the board of OpenAI, the research group he co-founded in 2015 to investigate the ethics of AI. OpenAI said the decision had been taken to avoid any conflict of interest as Elon’s electric car company, Tesla, becomes more focused on AI. Surprisingly for an AI developer, he has been one of the technology’s most vocal critics, stressing its potential harms. Before founding OpenAI, Elon called AI “humanity’s biggest existential threat”, and in 2017 he said that the United Nations needed to act to prevent a killer robot arms race. I suspect that using some of the brightest minds in the business is a better approach than repurposing a dusty government regulator – I could be wrong.

As interesting as policy can be, I found another area of government involvement far more stimulating. A combination of pure-science research grants from both the UK and USA funded a very bright team from Imperial College London (ICL) to investigate the fuel efficiency of commercial aircraft. Having a pilot’s license, and having spent many a fair-weather weekend guiding a sailplane around the skies, I found this subject even more intriguing.

The project aims to improve jet fuel efficiency by more accurately modeling the rear-most rotor blades of the engine. Yes, yes, I know they have already been modeled. However, since the beginning of modern Computational Fluid Dynamics (CFD) in the 70s and 80s, many assumptions were required to simplify the computing task to match the available computers. One such assumption was that all the jet engine blades operate in laminar or near-laminar airflow, which is a stretch.

A modern by-pass jet engine has a large set of blades in front, pushing volumes of air around the more conventional jet engine inside it. Perhaps this first by-pass rotor can be modeled accurately with a laminar flow model, but the remainder of the internal jet engine most likely can’t. Using today’s HPC clusters, the ICL team has evolved the CFD Navier-Stokes equations to improve their turbulent-air modeling, improving the predicted jet engine blade efficiency. I can already feel the burden of my international travel carbon footprint easing!
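For readers who want the underlying math, these solvers integrate forms of the Navier-Stokes equations. The textbook incompressible version shown here is a simplification (flow through a jet engine is compressible, so production codes use more elaborate formulations):

$$
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
= -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0,
$$

where $\mathbf{u}$ is the velocity field, $p$ the pressure, $\rho$ the density and $\nu$ the kinematic viscosity. Laminar flow can be solved almost directly; turbulence forces you either to resolve an enormous range of eddy scales or to bolt on closure models (RANS, LES), which is where the simplifying assumptions creep in.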

In a footnote to their talk, they described a potentially “leap-frogging” technology combination. With some of the new HPC and GPU resources becoming available to ICL, they want to up their game yet again. The CFD limitations often stem from the homogeneous modeling grid needing to be very fine to match reality. This fine grid multiplies the necessary compute power and computation time, so the grid spacing is often compromised to match the available compute resources.
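To see why resolution is so expensive, here is a back-of-envelope scaling sketch (my own rough model, not a statement about the ICL team’s solver): refining a 3D grid multiplies the cell count cubically, and an explicit solver’s time step typically shrinks with the spacing too (the CFL condition), adding another linear factor.

```python
def refinement_cost_factor(refine: float, dims: int = 3) -> float:
    """Rough cost multiplier for refining a uniform CFD grid.

    Cell count grows as refine**dims; an explicit solver's time step
    shrinks roughly linearly with grid spacing (CFL condition),
    contributing one more factor of `refine`. A hypothetical,
    back-of-envelope model only.
    """
    return refine ** dims * refine

print(refinement_cost_factor(2))   # halve the spacing in 3D: ~16x the work
print(refinement_cost_factor(4))   # quarter the spacing: ~256x the work
```

Hence the constant temptation to coarsen the grid until the job fits the machine.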

Their inspiration is to take a test environment and model it using abundant CFD computational resources until it provides accurate results. Thereafter, they take the homogeneous grid and evolve it using Machine Learning/Deep Neural Network techniques to find a non-linear, non-homogeneous grid which correlates strongly with the original CFD results. They can then use the condensed, more efficient grid to rapidly develop the solution with their available computational resources, and finally confirm the results against the original fine-grid model. If only I were so innovative, I wouldn’t be sitting in seat 36A crossing another ocean every few weeks!
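Here is a minimal one-dimensional sketch of that idea, with everything hypothetical: the toy “flow” profile, the power-law grid stretching, and the brute-force parameter search (which a DNN would replace by learning from many fine-grid training cases). The point is the workflow: score candidate non-uniform grids by how well a coarse sampling reconstructs the expensive fine-grid reference.

```python
import numpy as np

# Toy "fine-grid CFD result": a boundary-layer-like profile on [0, 1].
# (Hypothetical stand-in for an expensive fine-grid solution.)
fine_x = np.linspace(0.0, 1.0, 2001)
fine_u = 1.0 - np.exp(-50.0 * fine_x)         # steep gradient near x = 0

def stretched_grid(n: int, beta: float) -> np.ndarray:
    """Non-homogeneous grid: beta > 1 clusters points near x = 0."""
    s = np.linspace(0.0, 1.0, n)
    return s ** beta

def reconstruction_error(beta: float, n: int = 21) -> float:
    """RMS error of the coarse grid's linear reconstruction
    against the fine-grid reference."""
    x = stretched_grid(n, beta)
    u = 1.0 - np.exp(-50.0 * x)                # sample the "solution"
    u_hat = np.interp(fine_x, x, u)            # coarse -> fine reconstruction
    return float(np.sqrt(np.mean((u_hat - fine_u) ** 2)))

# Crude search over the stretching parameter; the ML step would learn
# such grid adaptations instead of sweeping them by hand.
betas = np.linspace(1.0, 8.0, 71)
errors = [reconstruction_error(b) for b in betas]
best = betas[int(np.argmin(errors))]
print(f"uniform grid (beta=1) error: {reconstruction_error(1.0):.4f}")
print(f"best beta = {best:.2f}, error: {min(errors):.4f}")
```

Running it shows the stretched 21-point grid beating the uniform one by concentrating points where the solution changes fastest - the same payoff the ICL team is chasing in far higher dimensions.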

This collusion between traditional HPC applications and Machine Learning (ML) is going to accelerate the impact of AI way beyond its current data mining, natural language and machine vision hotspots. It’s just as well that we have ample industrial computing facilities at our Icelandic data center to perform this HPC/ML collusion without busting your budget. Join our most recent AI client win, Analytic Engineering, and come and see how all AI training roads lead to Iceland!

Let’s chat at GTC Silicon Valley in San Jose in March - bob.fletcher@verneglobal.com


Written by Bob Fletcher

See Bob Fletcher's blog

Bob, a veteran of the telecommunications and technology industries, is Verne Global's VP of Strategy. He has a keen interest in HPC and the continuing evolution of AI and deep neural networks (DNN). He's based in Boston, Massachusetts.

