
Artificial intelligence is now a central theme in boardrooms across the UK. In its January update, the government reported that since July 2024, private investment in the sector has averaged around £200 million a day. That kind of momentum shows how serious both investors and enterprises are about AI’s potential.
Yet many organisations are hitting the same barrier. Enthusiasm for data science, advanced algorithms and specialist talent is not enough on its own. What too many leaders are discovering is that AI is a physical discipline as well as a digital one: without the right IT environments in which to run these workloads, projects stall.
This mismatch is already evident in the data. According to S&P Global Market Intelligence, the proportion of companies shelving most of their AI projects rose to 42% over the past year, up from just 17% previously. These failures rarely come down to flawed models or poor teams. The problem is more fundamental: the infrastructure was never designed to host AI at scale.
Why legacy data centres are not enough
For decades, enterprises ran their IT in on-premises server rooms or colocation facilities designed for traditional business applications. The demands were relatively steady. Finance systems, payroll, email and websites required predictable amounts of compute, power and cooling. Even when demand spiked, the infrastructure coped because the underlying profile of the workload remained the same.
AI upends that balance. Training a frontier-scale model can mean running thousands of graphics processing units (GPUs) simultaneously for weeks on end, with each rack consuming tens of kilowatts or more. Inference is just as demanding in a different way, with real-time services creating a continuous but unpredictable load. The shift is not only about more power; it is about a different kind of demand entirely.
Older facilities are simply not equipped for this kind of usage. Racks that once drew 2–4 kW may now need 50–80 kW or more. Cooling systems designed to serve office IT cannot handle the thermal output of modern AI hardware. Power distribution networks need to be redesigned from the ground up. Even the physical structure of a hall, from floor loading to airflow containment, is suddenly a limiting factor. Retrofitting is possible, but it is expensive, disruptive and rarely sufficient. For most enterprises, the realistic option is to rely on facilities that were engineered for AI from the outset.
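The scale of that jump is easy to put in numbers. Below is a minimal back-of-the-envelope sketch in Python; the 100-rack hall is a hypothetical size, and the per-rack figures are simply the densities quoted above. Because almost every watt a rack draws is rejected into the hall as heat, power draw is a reasonable proxy for the cooling load.

# Rough comparison of hall-level power and heat load (illustrative figures).
# Nearly all electrical power drawn by IT equipment ends up as heat,
# so power draw approximates the cooling load the facility must remove.

RACKS = 100                # hypothetical hall size

legacy_kw_per_rack = 4     # upper end of the 2-4 kW range cited above
ai_kw_per_rack = 80        # upper end of the 50-80 kW range cited above

legacy_load_kw = RACKS * legacy_kw_per_rack
ai_load_kw = RACKS * ai_kw_per_rack

print(f"Legacy hall: {legacy_load_kw:,} kW (~{legacy_load_kw / 1000:.1f} MW)")
print(f"AI hall:     {ai_load_kw:,} kW (~{ai_load_kw / 1000:.1f} MW)")
print(f"Power and cooling must scale roughly {ai_load_kw // legacy_load_kw}x")

On these assumptions, the same hall moves from a 0.4 MW cooling problem to an 8 MW one, which is why a retrofit rarely ends at swapping out air-conditioning units.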
Designing for AI workloads
When the conversation turns to AI, it is often couched in abstract terms: algorithms, data pipelines, cloud platforms. Yet whether an organisation can deploy AI at scale depends on the concrete decisions made during the construction of the data centre itself.
Cooling is the most obvious example. Above about 50 kW per rack, air cooling alone ceases to be effective. Liquid cooling, whether direct-to-chip or immersion-based, becomes a necessity. That means embedding pipework, pumps and containment into the very fabric of the building. It is far easier and more efficient to deploy this from day one than to retrofit later.
There are structural considerations too. Fully populated AI racks weigh several tonnes, especially when coupled with liquid cooling equipment. Many facilities were never built with this kind of weight in mind. Electrical systems also need to be rethought. Redundant distribution paths and intelligent uninterruptible power supply (UPS) systems are not luxuries but requirements in halls dedicated to AI.
Scalability further complicates the picture. A data centre designed to deliver 30 MW of capacity may already be outgrown by modern AI workloads. Across Europe, operators are developing sites of 200–500 MW, while in North America some are planning campuses at the gigawatt level. At the same time, sustainability is no longer optional. With the environmental cost of AI under scrutiny, data centres are being built with renewable integration, waste-heat reuse and advanced monitoring baked in from the start.
The role of data proximity
Performance is shaped not only by compute capacity but also by where the data sits. AI systems depend on information that may be distributed across clouds, enterprise servers and edge devices. If the processing is too far from the data, latency increases and performance declines.
This matters for use cases where time is critical. In finance, trading algorithms depend on decisions made in milliseconds; in healthcare, diagnostic tools must return results instantly to be clinically useful; and in consumer markets, personalised recommendations are judged on responsiveness as much as accuracy. If compute resources are located hundreds of miles away, the user experience deteriorates rapidly.
As a result, proximity is becoming a factor in site selection. Enterprises are starting to choose facilities based not just on cost or total capacity but on closeness to key datasets and user populations. The data centre is shifting from being a neutral storage environment to an optimisation layer in the AI value chain.
Why latency is a business issue
Latency is no longer an IT problem to be tolerated. In the age of AI it is a commercial issue because customers are unwilling to wait. A delay of half a second might have been acceptable for an internal batch process, but it feels jarring in a real-time fraud detection system or conversational interface.
Physics makes this unavoidable. A single centralised site may deliver economies of scale, but if it is too far from users or data, delay is inevitable. Increasingly, the solution is distributed infrastructure. Operators are deploying facilities closer to financial hubs, creative clusters and population centres, reducing the physical distance between compute and the people or systems relying on it. Location strategy is becoming a form of competitive differentiation.
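The floor on that delay can be estimated directly. Light in optical fibre travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, and no amount of server performance can buy that time back. The sketch below uses illustrative distances; real network paths are longer than the straight-line figure, so these are best-case numbers.

# Propagation delay alone sets a hard floor on response time.
# Light in optical fibre covers roughly 200 km per millisecond (~2/3 c).

FIBRE_KM_PER_MS = 200  # approximate signal speed in fibre

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time over fibre, ignoring routing and processing."""
    return 2 * distance_km / FIBRE_KM_PER_MS

for km in (50, 500, 5000):  # edge site, national, intercontinental scale
    print(f"{km:>5} km away -> at least {round_trip_ms(km):.1f} ms round trip")

A site 5,000 km from its users can never respond in under about 50 ms, before a single byte has been processed, which is exactly why operators are pushing capacity closer to the hubs and populations they serve.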
Building flexibility into infrastructure
AI projects evolve quickly. Models are retrained, datasets expand, and regulation changes course. Static infrastructure cannot keep up with this pace. To remain relevant, data centres need flexibility at their core.
That flexibility is reflected in multiple ways. Facilities must be able to scale from 20 kW per rack to well over 100 kW without wholesale redesign. They must allow workloads to shift between sites to comply with regulation or to improve performance. Expansion has to be modular, enabling operators to add capacity or new cooling systems without taking workloads offline.
In short, adaptability is what transforms a data centre from a fixed asset into a strategic partner. Without it, organisations risk building infrastructure that is obsolete before it has paid back its initial investment.
Responding to sustainability pressures
The energy footprint of AI is already under scrutiny. The International Energy Agency (IEA) has noted that training a single large model can consume as much energy as hundreds of homes use in a year (IEA, 2024). As AI adoption grows, these figures will only attract greater attention from regulators, investors and the public.
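A quick calculation shows why the comparison is plausible. The figures below are not from the IEA report; they are widely cited public estimates, used here purely for illustration: around 1,300 MWh for a GPT-3-scale training run and around 2,700 kWh for a typical UK household’s annual electricity use.

# Illustrative figures (assumptions, not data from this article):
# ~1,300 MWh is a widely cited estimate for one GPT-3-scale training run;
# ~2,700 kWh is a typical annual UK household electricity consumption.

training_energy_mwh = 1_300
household_kwh_per_year = 2_700

homes_equivalent = training_energy_mwh * 1_000 / household_kwh_per_year
print(f"One training run ~= {homes_equivalent:.0f} homes' electricity for a year")

On those assumptions, a single run equates to several hundred homes, consistent with the order of magnitude the IEA describes.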
Leading operators are responding by connecting directly to renewable energy grids, investing in systems to reuse waste heat for district heating, and applying AI tools to optimise cooling efficiency. Just as importantly, they are reporting transparently against recognised ESG frameworks. What was once viewed as good practice is now increasingly a licence to operate.
Infrastructure as the differentiator
The narrative around AI often focuses on data science and algorithms. These are vital, but they cannot deliver value without the physical environments in which to run them. Enterprises that attempt to host advanced workloads in legacy facilities are quickly confronted with limits on power, cooling, resilience and location.
The organisations that are progressing the fastest are those that view data centre strategy as a core element of their AI strategy. They are investing in facilities built for density, proximity, flexibility and sustainability. In the AI race, it is not only expertise in coding or modelling that sets leaders apart. The decisive factor is whether the infrastructure is ready to support the ambitions placed upon it.