
30.05.2025

Flexibility, scalability, and reliability in the AI era: It takes an ecosystem

Our CTO, Tate Cantrell, looks at how data center design is evolving to meet the demands of AI workloads. 


While the principles of data center design have held steady for years, the realities of AI, power demand, and sustainability are now forcing a shift. These evolving dynamics are redefining expectations and prompting a reassessment of what data center flexibility, scalability, and reliability really mean today. 

The growing demand for AI and high-performance computing is not simply adding capacity requirements. It’s reshaping the core principles of how we design, build, and operate data centers.  

Ahead of the panel discussion I’ll be joining at Datacloud Global Congress 2025 in Cannes, I wanted to share a few of the factors driving the next era of infrastructure, and how we, as an industry, might respond. 

A focus on data center density, not sprawl  

AI demands are driving a fundamental change in infrastructure strategy. While data center facilities continue to sprawl in absolute terms, each unit of compute now occupies a paradoxically smaller footprint as density increases. The future favours concentrated computing power, with specialised hardware delivering ever more processing capability per rack, driven by the demanding applications at the heart of the Fourth Industrial Revolution.

This density-focused approach delivers multiple advantages: it uses capital more efficiently by maximising the return on physical infrastructure investment, and the tighter integration with customer workloads it demands opens the door to better operational predictability.

While AI will not reduce overall data center footprints, it is already transforming how that space is engineered and optimised. The focus is no longer on building bigger, but on extracting greater performance from every square metre. The move towards density-led design marks an inflection point: delivering dramatically more computing power without a corresponding increase in physical scale. In doing so, it sets a new standard for efficiency and cost-effectiveness, realising the vision of the data center as a streamlined information factory.
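To put that shift in perspective, here’s a rough back-of-envelope sketch. Every figure in it – rack densities, floor-area allowance per rack, total IT load – is an illustrative assumption chosen for the arithmetic, not a specification of any particular facility:

```python
# Illustrative only: rack densities, floor-area allowance, and IT load are
# assumed figures for the sake of the arithmetic, not real facility specs.

def floor_area_m2(it_load_kw: float, kw_per_rack: float, m2_per_rack: float = 2.5) -> float:
    """Gross white space needed for a given IT load.

    m2_per_rack is a rough per-rack allowance including aisles and access space.
    """
    racks = it_load_kw / kw_per_rack
    return racks * m2_per_rack

IT_LOAD_KW = 10_000  # a hypothetical 10 MW IT load

legacy = floor_area_m2(IT_LOAD_KW, kw_per_rack=8)    # air-cooled enterprise era
ai_era = floor_area_m2(IT_LOAD_KW, kw_per_rack=120)  # liquid-cooled AI era

print(f"Legacy density: {legacy:,.0f} m² of white space")   # -> 3,125 m²
print(f"AI-era density: {ai_era:,.0f} m² of white space")   # -> 208 m²
```

The exact numbers matter less than the shape of the trade-off: an order of magnitude more power per rack means an order of magnitude less floor space for the same compute.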

Cooling smarter 

As data centers evolve to meet the demands of AI and high-performance computing, traditional cooling methods are reaching their limits. The rise of liquid cooling reflects a pivotal change in how facilities manage heat.  

Unlike air-based systems that rely on large volumes of chilled air and energy-intensive equipment, liquid cooling uses water or other fluids to absorb heat directly from the source. This approach not only improves thermal efficiency but also allows for higher operating water temperatures.  
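Why is liquid so much more effective? The heat a coolant stream carries away is Q = ṁ × cp × ΔT, and water’s high specific heat and density mean a modest flow can absorb a very large load. A minimal sketch, using illustrative numbers rather than any vendor’s specifications:

```python
# Back-of-envelope heat removal: Q = m_dot * c_p * delta_T.
# All figures are illustrative assumptions, not vendor specifications.

WATER_CP = 4186.0    # specific heat of water, J/(kg*K)
WATER_RHO = 1000.0   # density of water, kg/m^3

def water_flow_lps(heat_kw: float, delta_t_k: float) -> float:
    """Litres per second of water needed to absorb heat_kw with a delta_t_k rise."""
    mass_flow_kg_s = (heat_kw * 1000.0) / (WATER_CP * delta_t_k)
    return mass_flow_kg_s / WATER_RHO * 1000.0

# A hypothetical 100 kW AI rack with a 10 K coolant temperature rise:
print(f"{water_flow_lps(100, 10):.2f} L/s")  # -> 2.39 L/s

# Air has roughly a quarter of water's specific heat and about 1/800th of
# its density, which is why removing the same 100 kW with air takes
# enormous volumes of chilled airflow.
```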

By running warmer water through the system, data centers can significantly reduce their reliance on chillers and compressors, cutting down on both energy consumption and physical infrastructure. In this way, cooling strategy becomes not only an engineering decision but a design differentiator.  

At GTC, we heard NVIDIA outline a vision for equipment running at peak performance using 45°C liquid cooling, opening the door for even greater efficiency across the industry. At those temperatures, it's even possible to eliminate compressors and their associated F-gases in all but the most extreme climates. Running warmer isn’t a compromise – it’s a breakthrough. Like tossing out the air con altogether and still keeping the room cool, it’s smarter, simpler, and far more sustainable. 
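A simple threshold model shows why. A dry cooler can reject heat unaided whenever the ambient temperature stays below the supply temperature minus the cooler’s approach. In this minimal sketch, the 10 K approach and the monthly temperatures are assumptions picked purely for illustration:

```python
# Rough free-cooling check: dry coolers alone suffice whenever ambient
# temperature < supply temperature - approach. The approach figure and
# the monthly temperatures below are illustrative assumptions.

SUPPLY_C = 45.0    # warm-water supply temperature from the 45°C example
APPROACH_K = 10.0  # assumed dry-cooler approach temperature

# Hypothetical mean monthly highs for a temperate climate, Jan..Dec (°C):
monthly_highs_c = [3, 4, 8, 12, 17, 21, 24, 23, 19, 13, 7, 4]

threshold_c = SUPPLY_C - APPROACH_K
free_months = sum(1 for t in monthly_highs_c if t < threshold_c)
print(f"Compressor-free heat rejection possible {free_months}/12 months")
# -> 12/12: at a 45°C supply, even peak-summer ambients clear the 35°C
#    threshold, so compressors survive only in the most extreme climates.
```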

Offsite manufacturing 

To meet demand while maintaining reliability, it’s vital to rethink how facilities are built. Offsite manufacturing, where components are prefabricated and tested before arriving on-site, is enabling greater speed, consistency, and quality control. This shift toward a streamlined, manufacturing-led approach starts with engineering. By involving constructors early in the design process, operators can refine solutions and uncover opportunities to move work offsite, thinking beyond the traditional jobsite from the outset. 

“Ready to run” systems that undergo robust factory acceptance testing (FAT) before deployment reduce the need for time-intensive on-site commissioning. This not only accelerates time-to-market but also reduces risk during delivery, particularly when entering new or underserved regions. 

At the same time, leveraging local supply chains brings critical advantages, including regional knowledge, skilled labour, and long-term operational support. Far from being at odds, these approaches work best in tandem. Offsite manufacturing delivers speed and standardisation, while local sourcing ensures adaptability and community integration.   

Convergence of IT and facility systems 

We’re also seeing a shift in how IT and facility systems relate to one another. Historically, data center infrastructure was designed and operated independently from the IT equipment it supports.

But as new technologies push the limits of compute, that separation no longer works. A new era is emerging – one where IT and facility systems operate as a single, coordinated engine to support the next generation of compute. Integrating servers, power, cooling, and controls into a unified, intelligent infrastructure is now essential for delivering the performance, efficiency, and reliability that high-intensity compute demands.

This means treating IT hardware less like general-purpose tools and more like special-purpose factory equipment – highly specialised, precisely engineered, and tightly integrated with the systems that support it. That requires new thinking in how we coordinate space, power, cooling, and orchestration. This convergence enables smarter energy use, faster problem resolution, and scalable, high-density deployments.  
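As a conceptual illustration of that convergence, here’s a minimal sketch of a facility control loop driven directly by IT telemetry. Every name in it is hypothetical – it stands in for the kind of integration a real BMS or DCIM stack would provide, not any specific product’s API:

```python
# Hypothetical sketch of IT/facility convergence: cooling setpoints derived
# directly from live IT telemetry. Names are invented for illustration and
# do not correspond to any real BMS/DCIM API.

from dataclasses import dataclass

@dataclass
class RackTelemetry:
    rack_id: str
    it_power_kw: float      # live power draw reported by the IT stack
    inlet_coolant_c: float  # coolant temperature entering the rack

def coolant_flow_setpoint_lps(t: RackTelemetry, target_delta_t_k: float = 10.0) -> float:
    """Per-rack coolant flow setpoint sized to the rack's actual power draw."""
    cp_water = 4186.0  # J/(kg*K); for water, kg/s is roughly L/s
    return (t.it_power_kw * 1000.0) / (cp_water * target_delta_t_k)

# With shared telemetry, the facility ramps cooling with the workload
# instead of overprovisioning for the worst case:
for t in [RackTelemetry("R1", 35.0, 40.0), RackTelemetry("R2", 110.0, 41.5)]:
    print(t.rack_id, f"{coolant_flow_setpoint_lps(t):.2f} L/s")  # 0.84, 2.63
```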

Sustainable by design 

As AI workloads grow in scale and intensity, so too does their energy footprint. Meeting these demands sustainably means thinking carefully about where compute happens. Locating AI infrastructure in data centers powered by renewable energy, like Verne’s Nordic facilities, offers a smarter, lower-carbon path forward. By aligning high-density compute with regions rich in clean energy, organisations can significantly reduce the carbon cost of innovation.
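The arithmetic behind that claim is simple: emissions scale linearly with the carbon intensity of the grid a workload runs on. A minimal sketch – the intensity figures are rough, illustrative values, not official statistics:

```python
# Emissions = energy consumed x grid carbon intensity. The intensity
# figures below are rough, illustrative values (gCO2/kWh), not statistics.

GRID_INTENSITY_G_PER_KWH = {
    "renewables-rich Nordic grid": 30,
    "average European grid": 250,
    "coal-heavy grid": 700,
}

TRAINING_RUN_MWH = 1_000  # a hypothetical 1 GWh AI training run

for grid, intensity in GRID_INTENSITY_G_PER_KWH.items():
    tonnes_co2 = TRAINING_RUN_MWH * 1_000 * intensity / 1e6
    print(f"{grid}: {tonnes_co2:,.0f} tCO2")
# -> 30 t vs 250 t vs 700 t: the same workload carries an order-of-magnitude
#    different carbon cost depending purely on where it runs.
```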

This location-aware approach is just one part of a broader shift toward low-carbon AI. Innovations in liquid cooling, energy optimisation, and efficient hardware design are helping to further reduce environmental impact. But these advancements can’t reach their full potential if the underlying energy source remains carbon intensive.

Thoughtful infrastructure choices, grounded in both performance and environmental responsibility, will define the next era of AI.  

Rethinking our role in the community 

As data center campuses grow in size and prominence, so does the scrutiny they attract. Communities are asking hard questions, not just about energy and land use, but also about noise pollution, ecological disruption, and the visual impact on their surroundings. If we don't clearly articulate the value we bring, we leave space for others to define us. We can’t just build infrastructure; we have to build credibility.

The reality is that data centers contribute meaningfully to local economies – through upfront capital investment, long-term operational jobs, skills development, and the digital infrastructure that powers modern business and society. But we need to do more to make those benefits visible and lasting. Whether through partnerships with local universities, heat reuse programmes, or sustainable development practices, we must take an active role in the communities we serve.

Looking ahead – Design as a strategic advantage 

Ultimately, the next wave of data center design will be defined not just by technology, but by adaptability. We are building in a time of rapid change – technologically, economically, and socially. The ability to pivot, to optimise quickly, and to meet both customer and community needs will separate the good from the great. 

That means embracing innovation in build processes, designing infrastructure for flexibility at scale, and integrating sustainability not as an afterthought, but as a foundational principle. 

This is the kind of conversation I’m looking forward to continuing with my fellow panellists at Datacloud Global Congress in Cannes. The session, Data Centre Design: Flexibility, Scalability, and Reliability, will explore how we, as an industry, can evolve our models to meet the challenges and opportunities ahead. 

Join the debate at the Keynote Theatre in the Discover Zone, Datacloud Global Congress, at 16:15 on 4 June 2025.