24.09.2025
From gift horse to racehorse: partnering for a new generation of AI data center
It’s time to rethink how we plan and provision data center capacity, says Verne head of design and product development, Sam Wicks.
Blog

The gift horse that drove cloud growth is a Trojan horse for AI. The cloud scale-out model of old paved the way for today’s epic deployments. But that model succeeded in a world defined by stable technology advancements and commodity server architectures.
Server technology today is moving much faster, and vendors are pushing the envelope in radical new directions. Server racks now draw in excess of 100 kilowatts, while deployment scale has grown: clusters of more than 8,000 GPUs are business as usual. And that’s only the beginning. Megawatt-scale racks packing many more GPUs will soon become standard.
But here’s the challenge: provisioning enough power and cooling to feed AI’s engine. You can’t simply roll that into an existing data hall. Now multiply the problem of powering and cooling those racks across every hall on a campus and you’ll appreciate its magnitude.
Capacity planning has been a key component of successful cloud data center delivery. Data center designers planned for growth in standard, multi-megawatt blocks.
The challenge now? The demand signals for AI are volatile, which is making it difficult for data center architects to predict demand and future-proof capacity. This is a problem because data center construction is a multi-year process that takes up-front commitment on planning, development and investment.
Demand-signal volatility has led to a phenomenon called right-sizing: hyperscalers are scaling back the power and cooling capacity of facilities they are building or leasing because future demand is uncertain. As Moody’s, which called out this trend, put it: “The hyperscalers that have been spurring the market's expansion continuously right-size their newly-leased and owned capacity under development because much of this new capacity is being built in anticipation of future needs.”
How do we square this circle? Grid connections, supply chains and construction are physical constraints that lack the elastic response times of AI workloads.
I don’t believe we can. I believe we need a fresh approach, one founded on partnership with chip- and platform-makers to build optimized data centers together so that, when the demand signal locks in, the facilities are in place and ready to go.
Liquid cooling is a great example of how the partnership model can position you for the future. The industry has been talking about liquid cooling for nearly two decades, but now it's here. With liquid blocks on GPUs, CPUs and MOSFETs, the ratio of liquid to air cooling is shifting. Liquid cooling is vital because it allows for the rack-level density that AI demands - which means facilities that planned for liquid cooling are now primed for AI.
We have pursued a strategy of engineering partnerships at Verne. Using industry relationships and an open supplier ecosystem, we built advanced water cooling into our data centers from an early stage. This allowed us to jump forward as an engine for AI, staying ahead of the curve. Partnering is a sound bet because our partners’ roadmaps are guaranteed by solid financial investment: we know what to expect - and can build it.
Building data centers capable of delivering best-in-class AI based on today’s demand signals means thinking about more than building for scale or speed - it means planning. It’s like a prize-winning racehorse: success doesn’t come with an animal delivered minutes before the race. It comes through a program of breeding, feeding and training.
It’s time to prepare the thoroughbred data centers of tomorrow through careful collaboration, rather than waiting for the future to come ready-wrapped.
I’ll be debating this topic, among other developments, at Datacentre Expo Europe in Amsterdam this month. You can find out more here.