Qubits North America - Rumours from the Trade Show Floor


It's lunchtime here in the beautiful city of Amsterdam, where I am enjoying my time at the excellent World Summit AI, at which Verne Global are exhibiting, and earlier today our CTO Tate Cantrell moderated an engaging panel on today's hottest AI disruptors. As I take a breather from the networking, I thought I would write some reflections on another excellent event I attended a couple of weeks ago - Qubits North America.

Run by D-Wave, Qubits was a fabulous event, held in a superb venue in Newport, RI without the summer crowds but with the summer weather. It’s a real pity that the sessions were so interesting that I only got a couple of hours to check out the local area.

My goals for the event were to better understand quantum annealing technology and how to program it. They were almost met - my rusty college matrix maths limited my ability to follow the translation of puzzles into their quadratic unconstrained binary optimisation (QUBO) matrix form for digestion by the quantum computer. I'll be hunting in the attic for my college maths books soon. Nevertheless, I garnered and understood a wealth of information about the hardware before the QUBO definition and the Python environment thereafter.
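For anyone else whose matrix maths is rusty, here is a minimal sketch of what a QUBO actually is - my own toy illustration, not D-Wave's API. A tiny max-cut problem is encoded as a matrix Q, and the annealer's job of finding the binary vector x that minimises xᵀQx is stood in for by brute force at this size:

```python
import itertools
import numpy as np

# Max-cut on a triangle graph (nodes 0, 1, 2), expressed as a QUBO.
# Minimising x^T Q x over binary x maximises the number of cut edges.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] += -1   # linear terms live on the diagonal
    Q[j, j] += -1
    Q[i, j] += 2    # quadratic coupling between the pair

# Exhaustive search stands in for the annealer on this toy size.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
energy = int(np.array(best) @ Q @ np.array(best))
print(best, energy)  # any 1- or 2-node partition cuts 2 edges, energy -2
```

On real hardware the same Q would be handed to the annealer, which searches the 2ⁿ binary assignments physically rather than by enumeration.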

At the start of the event, D-Wave announced that their 2,000-qubit computer will be superseded by their new "Advantage" 5,000-qubit machine next summer, which created quite a buzz. IBM also recently announced a 53-qubit computer, and Google a 53-qubit one of its own (the Sycamore processor).

Clearly these are different qubits: D-Wave are the only company pursuing quantum annealing, while most of the other players are focused on 'universal quantum gate' computers. Quantum annealers and universal quantum gate computers are not competitors. While they rely on the same underlying quantum concepts, they are useful for different tasks and different sorts of problems.

Quantum annealers are designed to optimise solutions to problems by quickly searching over a space and finding a minimum (or "solution"). They work best on problems with a great many potential solutions where a "good enough" answer - a local minimum - suffices, such as optimising flight routes or schedules. Finding the low spot on a terrain map was an often-used visualisation at the event.
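As a purely classical analogy - a sketch of the idea, not of D-Wave's hardware - simulated annealing shows the same "find a low spot on the landscape" behaviour: always accept downhill moves, and accept uphill moves with a shrinking probability so the search can escape shallow dips early on:

```python
import math
import random

def anneal(energy, neighbour, x, t0=2.0, cooling=0.995, steps=5000):
    """Classical simulated annealing: a rough analogy for how an
    annealer settles into a low-energy ('good enough') solution."""
    random.seed(0)  # fixed seed so the toy run is repeatable
    t = t0
    for _ in range(steps):
        x_new = neighbour(x)
        delta = energy(x_new) - energy(x)
        # Downhill moves are always taken; uphill moves are taken
        # with a temperature-dependent probability that fades as
        # the system "cools", locking it into a basin.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = x_new
        t *= cooling
    return x

# A bumpy 1-D "map": many local dips, global minimum near x = -0.3.
f = lambda x: x * x + 3 * math.sin(5 * x)
x_best = anneal(f, lambda x: x + random.uniform(-0.5, 0.5), x=4.0)
```

The quantum version replaces the thermal hop with quantum tunnelling through the barriers, but the picture of rolling down a bumpy landscape is the same.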

A universal quantum gate computing system relies on building reliable qubits where basic quantum circuit operations - like the classical CPU/GPU operations we all know - can be put together in any sequence, running increasingly complex algorithms such as Shor's (which can break RSA cryptography) and Grover's (faster unstructured search).
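To make the contrast concrete, here is a minimal state-vector sketch of a single gate-model building block - a Hadamard gate applied to |0⟩ - written in plain NumPy purely for illustration:

```python
import numpy as np

# Minimal state-vector simulation of one gate-model operation:
# a Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.array([1.0, 0.0])   # the qubit starts in |0>
state = H @ state              # apply the gate as a matrix product
probs = np.abs(state) ** 2     # measurement probabilities
print(probs)  # [0.5 0.5]
```

Gate-model algorithms chain thousands of such small unitaries, which is precisely why those machines need far more reliable qubits than an annealer does.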

For now, quantum annealers are much closer to quantum supremacy - the point where a quantum computer performs tasks that a traditional CPU/GPU can't practically do. The computer science academics presenting suggested that somewhere between 10,000 and 15,000 annealing qubits would provide a meaningful commercial computing device. So I expect large corporate labs, starting next summer, to increasingly exploit the D-Wave Advantage as they develop their prototypes.

Much like GPUs and FPGAs, quantum computers are going to be part of a compute cluster that also includes CPUs and GPUs for the tasks they excel at - the applications a quantum computer is well suited for aren't the whole puzzle. Except for very well-funded labs, most programmers will access their quantum computer via a cloud service, which may be "continent local" once quantum applications go into production. I remember from my internet backbone days that ad serving, gaming technology and similar applications need to be within 50 ms round-trip delay (RTD) of their users.

With the availability of the D-Wave Advantage, quantum programmers will spend far less of their time breaking their problems into parts that fit the current 2,000 qubits and then reassembling the answer after each part has been annealed. Today, 2,000 qubits can work on 7x7-pixel MNIST greyscale images but not 8x8 ones. So programmers and data scientists have become quite adept at data simplification, whether it be averaging, medians, 1-in-n selection, etc.
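A quick sketch of the averaging style of simplification mentioned above (the sizes are mine, chosen to match the 7x7 example): shrinking a 28x28 greyscale image down to 7x7 by 4x4 block averaging:

```python
import numpy as np

# Shrink a 28x28 greyscale image to 7x7 by 4x4 block averaging -
# the kind of simplification needed to fit an image onto ~2,000 qubits.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(28, 28)).astype(float)  # stand-in for a digit image

def block_average(img, factor):
    h, w = img.shape
    # Split each axis into (blocks, factor) and average within blocks.
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

small = block_average(img, 4)
print(small.shape)  # (7, 7)
```

Medians or 1-in-n subsampling drop into the same shape of pipeline; each trades a different kind of detail for a problem small enough to embed on the hardware.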

The most interesting research applications presented at Qubits were:

Firstly, a NASA automated weather observation project at the University of Baltimore, where cloud height measurements - typically very inaccurate during rain - are modelled with quantum computers to explore methods of augmenting rain-impaired data.

And secondly, validation of a 5G cellular multiple-input and multiple-output (MIMO) radio propagation technique. Here, quantum computing could meet the need for ever more processing power as data rates increase, keeping bit error rates (BER) low while the convergence time to a BER solution remains around 1 ms. I don't see a cryogenically cooled quantum computer in a cell tower soon, but it's likely room-temperature quantum computing is just over the horizon.

The interesting commercial applications at Qubits were:

Menten.AI, who are using quantum annealing to help speed up the selection of target proteins for use cases in protein therapeutics and biocatalysis, with candidates subsequently synthesised and tested in their wet lab. So far, they have designed 32:

Representative designs produced by the Menten.ai QPacker

Next, CogniFrame are searching for small improvements in financial service algorithms which will yield huge returns when applied to the volume of funds transacted. Credit decisioning and quantum-based asset liability optimisation under a hybrid model are their first targets, and their initial results look very encouraging.

I can only imagine the quantum ecosystem energy increasing ten-fold over the next few years as quantum supremacy is achieved and tremendous ROIs result from the early adopters.

In the interim, bring your conventional CPU, GPU clusters and your NVIDIA DGX-2s to Iceland and garner significant operational cost savings to facilitate your researchers becoming quantum computing ready.

And back to Amsterdam: if you are at the World Summit AI today, we're shortly going to be giving out cold Icelandic Einstök beer from our stand. You don't need a quantum computer to tell you that's the perfect environment to come and network in. Let's talk!

Bob Fletcher, VP Strategy, Verne Global (Email: bob.fletcher@verneglobal.com)


Bob, a veteran of the telecommunications and technology industries, is Verne Global's VP of Strategy. He has a keen interest in HPC and the continuing evolution of AI and deep neural networks (DNN). He's based in Boston, Massachusetts.

