
It’s no secret that AI’s incredible capabilities are creating new demands for compute power. And as AI tools continue to be rolled out across new industries at a rapid pace, ever greater pressure is being placed on the infrastructure that supports them.
This momentum has begun to drive change in the data center space, because the infrastructure required to power AI workloads is vastly different from traditional data center setups. AI tools bring new requirements in the form of specialized hardware, greater energy efficiency and more advanced networking capabilities, which are not only reshaping existing data centers but also changing how new facilities are designed, built and operated.
The shift from CPUs to GPUs
One of the greatest differences between AI-driven and traditional workloads is the shift from CPU-based processing to GPU-centric architectures. While CPUs are highly efficient at handling sequential tasks, AI models require massive parallel processing power, with many calculations carried out simultaneously. This is where GPUs excel, which is why they are so crucial for training and running AI models effectively.
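To make that difference concrete, here is a minimal, illustrative Python sketch: a row-by-row loop stands in for sequential, CPU-style processing, while a single batched matrix multiply stands in for the massively parallel style of computation GPUs are built to accelerate. (NumPy itself runs on the CPU; real AI frameworks dispatch the same operation to GPU kernels.)

```python
import time
import numpy as np

# Illustrative only: a row-by-row loop stands in for sequential processing,
# while one batched matrix multiply stands in for the massively parallel
# style of computation that GPUs accelerate.
a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)

# Sequential style: compute one row of the result at a time.
start = time.perf_counter()
rows = [a[i] @ b for i in range(a.shape[0])]
loop_time = time.perf_counter() - start

# Parallel style: hand the entire operation over in a single call.
start = time.perf_counter()
c = a @ b
batched_time = time.perf_counter() - start

print(f"row-by-row: {loop_time:.2f}s, batched: {batched_time:.2f}s")
```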
However, this increased reliance on GPUs brings a new set of challenges, particularly when it comes to power consumption. AI-enabled racks consume significantly more power than standard workloads. A case in point: at the 2025 OCP EMEA Summit in April, Google projected that machine learning deployments will start to demand more than 500 kW per IT rack by 2030.
To support growing AI workloads, Google has also announced plans to roll out new AI data center infrastructure with +/- 400 VDC power and liquid cooling, capable of handling 1 MW data center racks and increasing thermal loads. To put that into context, a traditional data center rack consumes between 5 and 30 kW on average. The result is a data center that generates far greater quantities of heat, which has prompted infrastructure experts to rethink power allocation and cooling strategies for new AI data centers, all to ensure optimal performance without jeopardizing their bottom line.
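A rough back-of-the-envelope comparison using the figures quoted above (a sketch, not an engineering calculation) shows just how wide the gap is:

```python
# Back-of-the-envelope comparison using the figures quoted above.
ai_rack_kw = 500                 # Google's projected per-rack demand by 2030
traditional_rack_kw = (5, 30)    # typical range for a conventional rack

low_ratio = ai_rack_kw / traditional_rack_kw[1]    # vs a high-end traditional rack
high_ratio = ai_rack_kw / traditional_rack_kw[0]   # vs a low-end traditional rack
print(f"One 500 kW AI rack draws roughly {low_ratio:.0f}x to {high_ratio:.0f}x "
      "the power of a traditional rack")

# Virtually all of that electrical power is ultimately dissipated as heat,
# which is what forces the rethink of cooling strategies discussed below.
```

In other words, a single projected AI rack draws roughly 17 to 100 times the power of a conventional one, and nearly all of that power ends up as heat the facility has to remove.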
Power and cooling
AI data centers need infrastructure that is capable of sustaining high-voltage GPU clusters. They also need advanced cooling systems that go beyond traditional air-based cooling solutions.
This has led data center developers to select locations that offer natural advantages like cooler climates and access to renewable energy. Regions like Canada and Iceland are attractive given their abundance of hydropower and geothermal energy, both of which provide cost-effective and sustainable options for managing high-density AI workloads. However, the trade-off is often an increased distance from end users, which can impact the latency of AI systems.
For data centers in warmer climates, innovative cooling technologies like liquid cooling and direct-to-chip cooling are increasingly common, offering a far more efficient way of cooling AI-driven racks. But the reality is that many data center developers are placing new sites in locations that sit somewhere in the middle – a temperate climate closer to end users, with some access to a renewable energy source, all while utilizing advanced cooling technologies.
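A rough worked example helps explain why air cooling runs out of headroom at these rack densities. The sketch below assumes standard textbook values for the specific heat of air and water and a 10 °C coolant temperature rise; both are illustrative assumptions rather than figures from any particular facility.

```python
# Rough sketch of the coolant flow needed to remove a given heat load.
# Assumptions: textbook specific heats and a 10 degC coolant temperature rise.
heat_load_w = 500_000       # a 500 kW rack, per the projection above
delta_t = 10                # coolant temperature rise in degC

cp_air, rho_air = 1005, 1.2       # J/(kg*K), kg/m^3
cp_water, rho_water = 4186, 1000  # J/(kg*K), kg/m^3

air_flow = heat_load_w / (cp_air * delta_t) / rho_air        # m^3/s of air
water_flow = heat_load_w / (cp_water * delta_t) / rho_water  # m^3/s of water

print(f"Air:   {air_flow:.1f} m^3/s (~{air_flow * 2119:.0f} CFM) per rack")
print(f"Water: {water_flow * 1000:.1f} L/s per rack")
```

Moving over 40 cubic meters of air per second through a single rack is not practical, while the same heat load can be carried away by roughly 12 liters of water per second, which is why liquid and direct-to-chip approaches take over at these densities.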
New network innovations
AI’s huge need for data processing also demands significant upgrades to data center networking. Traditional sites, designed for CPU workloads, typically operate with data transfer speeds of 10 to 20 gigabits per second. In contrast, AI-driven applications require far higher bandwidth to move massive datasets between computing nodes efficiently.
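A simple transfer-time calculation puts the bandwidth gap in perspective. The 1 TB payload and the 400 Gb/s interconnect speed below are illustrative assumptions chosen for the example, not figures from the article.

```python
# Illustrative transfer-time arithmetic. The 1 TB payload size and the
# 400 Gb/s interconnect speed are assumptions chosen for the example.
payload_bits = 1e12 * 8   # 1 TB of model weights or training data, in bits

for gbps in (10, 20, 400):
    seconds = payload_bits / (gbps * 1e9)
    print(f"{gbps:>3} Gb/s link: {seconds:>5,.0f} s to move 1 TB")
```

At 10 Gb/s, moving a terabyte between nodes takes over 13 minutes; on a much faster interconnect it takes seconds, which is the difference between GPUs sitting idle and GPUs staying busy.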
This has led data center developers to invest in high-performance networking solutions like high-speed interconnects. These facilitate rapid data transfer between GPU clusters and specialized AI processing units like Tensor Processing Units (TPUs).
Investment in the networking that supports these AI data centers is essential. It’s key to achieving higher throughput, greater reliability and reduced latency. Without the right networking in place, AI tools would deliver nowhere near the speed and responsiveness users have come to expect.
The AI ‘race’
It’s understandable that the AI ‘race’ has also driven a race in the data center industry. While some operators anticipated this shift years ago and are well positioned to accommodate growing AI workloads, others are scrambling to catch up – a difficult task, given that building a new data center can take years, to say nothing of the upfront financial investment required.
While AI workloads can technically run in any data center, it’s clear that not all environments can handle their high energy and cooling demands in a cost-efficient way. This means that as AI adoption accelerates, data center operators need to balance operational costs carefully, placing new pressure on providers to rethink their pricing models. How the relationship between data center provider and end customer evolves over the coming years will be interesting to watch, particularly as competition between tech providers continues to accelerate.