There can be no question that artificial intelligence (AI) is spreading across all types of compute. Enterprise use of AI is growing at an exponential rate. Fuelled by the need to deliver better customer experiences, strengthen financial performance, streamline operations, improve clinical outcomes or push the boundaries of research and development, organisations are investing in AI infrastructure to get insights faster.
In financial services, companies may be harnessing AI for alpha generation, risk management or fraud detection. In engineering and manufacturing, businesses may be utilising AI for computer-aided design, computational fluid dynamics or image processing. In research and science, AI is helping unlock next-generation discoveries, as scientists and researchers turn to simulations to better understand the world around us: weather forecasting, seismic modelling, genomic sequencing, protein analysis and more.
An increasingly common theme across these applications is that the hardware at the heart of the infrastructure enabling this AI compute is now most likely an NVIDIA GPU rather than a CPU. This is due to the huge leap in parallel-processing performance that GPUs deliver for these workloads.
NVIDIA DGX A100 systems provide 5 petaFLOPS of compute power. This is a huge leap forward in terms of capability. However, it is easy to think simply about the compute and forget about all the infrastructure that supports it. It is essential to think ‘beyond the box’.
Data centre design is highly complex, and the higher the intensity and density of compute, the more complex it gets. For the latest GPU technology, the demands on data centre infrastructure are greater than ever. Importantly, not all data centres are built equal, and often the invisible is as important as the visible. What sits behind the scenes is as critical as the environment in which compute hardware physically sits, perhaps even more so. The NVIDIA DGX A100 is the world’s leading system for AI, and DGX A100 systems make up the winning entry on the most recent GREEN500 list. Each system draws up to 6.5 kW of power, occupies 6U of rack space, weighs 271 lbs (123 kg) and requires multiple high-speed network connections. This is not the type of system that you just put anywhere.
Few data centres are designed from the ground up for high intensity, high performance compute. Let’s focus on some of the fundamental considerations beyond the box for data centre selection, with a particular focus on the needs of high intensity compute:
1. Power source and reliability are crucial for high intensity compute, and power sustainability is ever more important. Yet not all renewable power sources are well suited to data centres that consume power 24 hours a day. Hydroelectric and geothermal sources make for a much more suitable generation mix than intermittent wind and solar power. Is your data centre really using the right kind of power?
2. The latest generation of AI supercomputers such as the NVIDIA DGX A100 deliver an incredible leap in performance over prior generations, but with that comes the need to carefully plan data centre resources like power and cooling to support state-of-the-art AI infrastructure. Ensuring the data centre infrastructure you choose is built for purpose is critical. Only a few years ago, the average rack density was 5 kW; four of these high-performance systems in a single rack could consume 26 kW. Is your data centre really capable of that kind of density?
3. With the exponential growth of AI across all industries, your compute requirements can change and grow rapidly. Ensuring the scalability of the infrastructure that supports your hardware at high density is essential. Is your data centre really able to scale the way your compute might?
4. Security is paramount to all data centre decisions. But if the power of AI is driving a competitive advantage for your business, are your compute and data as secure as you want them to be?
5. Expertise is essential to success, and the key is ensuring that your hardware is supported by experts who understand your AI. Working with partners that understand your GPU systems, your AI applications and your high density data centre requirements will underpin everything you do.
6. Finally, the total cost of ownership matters more than ever. Data centres built for high intensity compute from the ground up are more efficient and more cost effective. If you are investing in the best hardware, invest in the best data centre.
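The density arithmetic in point 2 can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration using the per-system figures quoted above (6.5 kW and 6U per DGX A100); the 42U rack height and the 5 kW legacy rack budget are assumptions for comparison, not specifications from any particular facility.

```python
# Back-of-the-envelope rack planning for DGX A100-class systems.
# Figures from the article: up to 6.5 kW and 6U per system.
SYSTEM_POWER_KW = 6.5
SYSTEM_HEIGHT_U = 6

# Assumptions for illustration only:
RACK_HEIGHT_U = 42           # a common standard rack height
LEGACY_RACK_BUDGET_KW = 5.0  # typical average rack density of a few years ago

def rack_power_kw(n_systems: int) -> float:
    """Total power drawn by n_systems DGX A100-class systems in one rack."""
    return n_systems * SYSTEM_POWER_KW

n = 4
power = rack_power_kw(n)      # 4 x 6.5 kW = 26.0 kW
space = n * SYSTEM_HEIGHT_U   # 24U of a 42U rack

print(f"{n} systems: {power:.1f} kW in {space}U of {RACK_HEIGHT_U}U, "
      f"{power / LEGACY_RACK_BUDGET_KW:.1f}x a legacy {LEGACY_RACK_BUDGET_KW:.0f} kW rack")
```

In other words, a rack that is only a little over half full by space can already draw more than five times the power budget an older facility was designed around, which is why density, not floor area, is the binding constraint.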
Verne Global’s data centre campus harnesses the 100% renewable mix of hydroelectric and geothermal power found only in Iceland to deliver infrastructure engineered and optimised for high intensity compute. We are one of the most advanced NVIDIA DGX-Ready data centres in the world, dedicated to supporting the world’s most innovative and demanding industries. Satisfying your high intensity compute needs, effortlessly. With zero impact on the planet, but a big impact on your business. No other place on earth offers the same sustainable infrastructure for AI as we do.