Building the foundations for AI at scale

By Darren Watkins, Chief Revenue Officer, VIRTUS Data Centres

Artificial intelligence is now a central theme in boardrooms across the UK. In its January update, the government reported that since July 2024, private investment into the sector has averaged around £200 million a day. That kind of momentum shows how serious both investors and enterprises are about AI's potential.

Yet many organisations are hitting the same barrier. The enthusiasm for data science, advanced algorithms and specialist talent is not enough on its own. What too many leaders are discovering is that AI is a physical discipline as well as a digital one. Without the right IT environments in which to run these workloads, projects stall.

This mismatch is already evident in the data. According to S&P Global Market Intelligence, the proportion of companies shelving most of their AI projects rose to 42% over the past year, up from just 17% previously. These failures rarely come down to flawed models or poor teams. The problem is more fundamental: the infrastructure was never designed to host AI at scale.

Why legacy data centres are not enough

For decades, enterprises ran their IT in on-premises server rooms or colocation facilities designed for traditional business applications. The demands were relatively steady. Finance systems, payroll, email and websites required predictable amounts of compute, power and cooling. Even when demand spiked, the infrastructure was able to cope because the underlying profile of the workload remained the same.

AI upends that balance. Training a frontier-scale model can mean running thousands of graphics processing units (GPUs) simultaneously for weeks on end, with each rack consuming tens of kilowatts or more. Inference is just as demanding in a different way, with real-time services creating a constant, unpredictable load. The shift is not only about more power; it is about a different kind of demand entirely.

Older facilities are simply not equipped for this kind of usage. Racks that once drew 2–4 kW may now need 50–80 kW or more. Cooling systems designed to serve office IT cannot handle the thermal output of modern AI hardware. Power distribution networks need to be redesigned from the ground up. Even the physical structure of a hall, from floor loading to airflow containment, is suddenly a limiting factor. Retrofitting is possible, but it is expensive, disruptive and rarely sufficient. For most enterprises, the realistic option is to rely on facilities that were engineered for AI from the outset.
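To put those densities in perspective, here is a minimal back-of-envelope sketch; the 2 MW hall budget and the mid-range rack figures are illustrative assumptions, not specifications from any particular operator:

```python
# Illustrative arithmetic only: the hall power budget and rack densities
# are assumed example values, not figures from any specific facility.

HALL_POWER_KW = 2_000        # assume a 2 MW data hall
LEGACY_RACK_KW = 3           # mid-point of the 2-4 kW legacy range
AI_RACK_KW = 65              # mid-point of the 50-80 kW AI range

legacy_racks = HALL_POWER_KW // LEGACY_RACK_KW
ai_racks = HALL_POWER_KW // AI_RACK_KW

print(f"Legacy racks supported: {legacy_racks}")   # ~666 racks
print(f"AI racks supported:     {ai_racks}")       # ~30 racks
print(f"One AI rack draws as much as ~{AI_RACK_KW // LEGACY_RACK_KW} legacy racks")
```

On these assumptions, a hall that once housed several hundred cabinets supports only a few dozen AI racks, which is why power distribution, floor loading and cooling have to be re-engineered together rather than patched individually.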

Designing for AI workloads

When the conversation turns to AI, it is often couched in abstract terms: algorithms, data pipelines, cloud platforms. Yet whether an organisation can deploy AI at scale depends on the concrete decisions made during the construction of the data centre itself.

Cooling is the most obvious example. Above about 50 kW per rack, air cooling alone ceases to be effective. Liquid cooling, whether direct-to-chip or immersion-based, becomes a necessity. That means embedding pipework, pumps and containment into the very fabric of the building. It is far easier and more efficient to deploy this from day one than to retrofit later.
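A rough heat-removal calculation shows why that ~50 kW threshold exists. The sketch below uses standard textbook values for air (density ~1.2 kg/m³, specific heat ~1005 J/kg·K) and an assumed 15 °C inlet-to-outlet temperature rise:

```python
# Back-of-envelope airflow needed to carry away rack heat with air alone.
# Q = P / (rho * cp * dT); constants are standard values for air at ~20 C,
# and the 15 K temperature rise is an assumed design allowance.

RHO_AIR = 1.2        # kg/m^3, air density
CP_AIR = 1005        # J/(kg*K), specific heat of air
DELTA_T = 15         # K, assumed inlet-to-outlet temperature rise

def airflow_m3_per_s(rack_kw: float) -> float:
    """Volumetric airflow required to remove rack_kw of heat."""
    return rack_kw * 1000 / (RHO_AIR * CP_AIR * DELTA_T)

for kw in (4, 25, 50, 80):
    m3s = airflow_m3_per_s(kw)
    cfm = m3s * 2118.88  # convert m^3/s to cubic feet per minute
    print(f"{kw:>3} kW rack -> {m3s:5.2f} m^3/s (~{cfm:,.0f} CFM)")
```

Moving nearly 6,000 CFM through a single cabinet is at the practical limit of fans and containment, which is why direct-to-chip and immersion cooling take over at these densities.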

There are structural considerations too. Fully populated AI racks weigh several tonnes, especially when coupled with liquid cooling equipment. Many facilities were never built with this kind of weight in mind. Electrical systems also need to be rethought. Redundant distribution paths and intelligent uninterruptible power supply (UPS) systems are not luxuries but requirements in halls dedicated to AI.
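A quick floor-loading check makes the structural point concrete. The figures below are illustrative assumptions (a 2-tonne populated rack on a standard 600 mm × 1200 mm footprint, against a nominal 1,500 kg/m² raised-floor rating), not measurements from any specific building:

```python
# Illustrative floor-loading check; rack weight and floor rating are
# assumed ballpark values, not data from any particular facility.

RACK_WEIGHT_KG = 2_000            # populated AI rack plus liquid-cooling kit
FOOTPRINT_M2 = 0.6 * 1.2          # standard 600 mm x 1200 mm cabinet
FLOOR_RATING_KG_M2 = 1_500        # nominal raised-floor load rating

load = RACK_WEIGHT_KG / FOOTPRINT_M2
print(f"Imposed load: {load:,.0f} kg/m^2")               # ~2,778 kg/m^2
print(f"Exceeds rating by {load / FLOOR_RATING_KG_M2:.1f}x")
```

Under these assumptions the load comfortably exceeds a conventional raised floor's rating, pointing towards slab-on-grade halls or purpose-reinforced structures.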

Scalability further complicates the picture. A data centre designed to deliver 30 MW of capacity may already be outgrown by modern AI workloads. Across Europe, operators are developing sites of 200–500 MW, while in North America some are planning campuses at the gigawatt level. At the same time, sustainability is no longer optional. With the environmental cost of AI under scrutiny, data centres are being built with renewable integration, waste-heat reuse and advanced monitoring baked in from the start.

The role of data proximity

Performance is shaped not only by compute capacity but also by where the data sits. AI systems depend on information that may be distributed across clouds, enterprise servers and edge devices. If the processing is too far from the data, latency increases and performance declines.

This matters for use cases where time is critical. In finance, trading algorithms depend on decisions made in milliseconds; in healthcare, diagnostic tools must return results instantly to be clinically useful; and in consumer markets, personalised recommendations are judged on responsiveness as much as accuracy. If compute resources are located hundreds of miles away, the user experience deteriorates rapidly.

As a result, proximity is becoming a factor in site selection. Enterprises are starting to choose facilities based not just on cost or total capacity but on closeness to key datasets and user populations. The data centre is shifting from being a neutral storage environment to an optimisation layer in the AI value chain.

Why latency is a business issue

Latency is no longer an IT problem to be tolerated. In the age of AI, it is a commercial issue because customers are unwilling to wait. A delay of half a second might have been acceptable for an internal batch process, but it feels jarring in a real-time fraud detection system or conversational interface.

Physics makes this unavoidable. A single centralised site may deliver economies of scale, but if it is too far from users or data, delay is inevitable. Increasingly, the solution is distributed infrastructure. Operators are deploying facilities closer to financial hubs, creative clusters and population centres, reducing the physical distance between compute and the people or systems relying on it. Location strategy is becoming a form of competitive differentiation.
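To make the physics concrete: light in optical fibre travels at roughly two-thirds of its vacuum speed, about 200,000 km per second. The sketch below estimates the best-case round-trip delay over distance, counting propagation only; real networks add routing, switching and queuing delay on top of these floor values:

```python
# Best-case propagation delay in optical fibre; real-world round trips
# add switching, routing and queuing delay on top of these floors.

FIBRE_SPEED_KM_S = 200_000   # ~2/3 of c, typical for optical fibre

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time for a request over fibre, in ms."""
    return (2 * distance_km / FIBRE_SPEED_KM_S) * 1000

for km in (50, 300, 1500, 5000):
    print(f"{km:>5} km -> {round_trip_ms(km):5.2f} ms minimum RTT")
```

A trading system or conversational agent working to a millisecond-level budget therefore cannot sit thousands of kilometres from its users, however fast the hardware at the other end.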

Building flexibility into infrastructure

AI projects evolve quickly. Models are retrained, datasets expand, and regulation changes course. Static infrastructure cannot keep up with this pace. To remain relevant, data centres need flexibility at their core.

That flexibility is reflected in multiple ways. Facilities must be able to scale from 20 kW per rack to well over 100 kW without wholesale redesign. They must allow workloads to shift between sites to comply with regulation or to improve performance. Expansion has to be modular, enabling operators to add capacity or new cooling systems without taking workloads offline.

In short, adaptability is what transforms a data centre from a fixed asset into a strategic partner. Without it, organisations risk building infrastructure that is obsolete before it has paid back its initial investment.

Responding to sustainability pressures

The energy footprint of AI is already under scrutiny. The International Energy Agency (IEA) has noted that training a single large model can consume as much energy as hundreds of homes over a year (IEA, 2024). As AI adoption grows, these numbers will only attract greater attention from regulators, investors and the public.
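That comparison is easy to sanity-check with order-of-magnitude arithmetic. The sketch below uses assumed, illustrative figures (4,000 GPUs drawing ~700 W each for 30 days, a facility PUE of 1.2, and ~2,700 kWh annual electricity use for a typical UK home) rather than measured data from any specific training run:

```python
# Order-of-magnitude check on training energy vs household consumption.
# All inputs are assumptions for illustration, not measured figures.

GPUS = 4_000
GPU_POWER_KW = 0.7           # ~700 W per accelerator under load
DAYS = 30
PUE = 1.2                    # facility overhead multiplier
HOME_KWH_PER_YEAR = 2_700    # rough annual electricity use, UK household

it_energy_mwh = GPUS * GPU_POWER_KW * 24 * DAYS / 1000
total_energy_mwh = it_energy_mwh * PUE
homes_equivalent = total_energy_mwh * 1000 / HOME_KWH_PER_YEAR

print(f"IT energy:    {it_energy_mwh:,.0f} MWh")        # ~2,016 MWh
print(f"With cooling: {total_energy_mwh:,.0f} MWh")     # ~2,419 MWh
print(f"Equivalent to ~{homes_equivalent:,.0f} homes' annual electricity")
```

Even with conservative inputs, a single month-long run lands in the region of the IEA's "hundreds of homes" comparison.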

Leading operators are responding by connecting directly to renewable energy grids, investing in systems to reuse waste heat for district heating, and applying AI tools to optimise cooling efficiency. Just as importantly, they are reporting transparently against recognised ESG frameworks. What was once viewed as good practice is now increasingly a licence to operate.

Infrastructure as the differentiator

The narrative around AI often focuses on data science and algorithms. These are vital, but they cannot deliver value without the physical environments in which to run them. Enterprises that attempt to host advanced workloads in legacy facilities are quickly confronted with limits on power, cooling, resilience and location.

The organisations that are progressing the fastest are those that view data centre strategy as a core element of their AI strategy. They are investing in facilities built for density, proximity, flexibility and sustainability. In the AI race, it is not only expertise in coding or modelling that sets leaders apart. The decisive factor is whether the infrastructure is ready to support the ambitions placed upon it.
