
Designing AI-ready infrastructure: how edge computing is evolving to meet the challenge

By Martin Ryder, channel sales director, Northern Europe, at Vertiv

Artificial intelligence (AI) adoption is moving faster than some infrastructure plans can accommodate. While organisations experiment with models and build proofs of concept, the physical systems that support these tools are often left underprepared.

Edge computing is emerging as a critical part of the solution. It offers a way to support AI workloads closer to where data is created and decisions are made. But designing infrastructure that can scale with AI requires a new level of coordination across power, cooling, and connectivity. 

Why edge matters more in the AI era 

Edge computing refers to the placement of compute and storage infrastructure closer to end users, devices, or operational systems. This model reduces latency, increases bandwidth efficiency, and improves response times. 

These benefits are especially important for real-time AI applications. Examples include fraud detection, computer vision, language inference, and industrial automation. 

In many of these use cases, delays of even a few hundred milliseconds can reduce performance or cause system failures. By processing data locally, edge infrastructure improves reliability and responsiveness. 

Critical digital infrastructure is now a strategic question

As AI becomes more embedded in business operations, digital infrastructure planning becomes more complex. Workloads are more power-hungry, produce more heat, and rely on stable low-latency connections. 

Standard air-cooled racks that once supported 5-10 kilowatts are now being pushed to handle 50 kilowatts or more per rack in some AI deployments. These conditions put strain on older power systems and traditional cooling designs.
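To see why this strains traditional designs, a rough back-of-envelope calculation helps: almost all of the electrical power a rack draws is released as heat, so the cooling load of a row scales directly with rack density. The short Python sketch below illustrates this using the densities mentioned above; the rack count and per-rack figures are illustrative assumptions, not measurements from any particular site.

```python
# Illustrative back-of-envelope calculation (assumed figures): virtually all
# power drawn by IT equipment is dissipated as heat, so rack density maps
# directly onto the heat a cooling system must reject.

def row_heat_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total heat (kW) to remove for one row of racks."""
    return racks * kw_per_rack

legacy = row_heat_load_kw(racks=10, kw_per_rack=7.5)   # mid-point of a 5-10 kW rack
ai_era = row_heat_load_kw(racks=10, kw_per_rack=50.0)  # dense AI deployment

print(f"Legacy row: {legacy:.0f} kW of heat to remove")
print(f"AI-era row: {ai_era:.0f} kW of heat to remove ({ai_era / legacy:.1f}x)")
```

On these assumed numbers, the same ten-rack row goes from roughly 75 kW to 500 kW of heat, which is why air cooling alone increasingly falls short.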

IT infrastructure design teams must now account for liquid cooling, electrical load balancing, physical layout, and network planning in a single integrated model. 

The power challenge 

AI hardware demands high-density power delivery and reliable performance. Processing clusters built around graphics processing units (GPUs), tensor processing units (TPUs), or custom accelerators draw significantly more power than typical enterprise workloads. 

At the same time, data centre operators are facing pressure to decarbonise. Regulatory requirements across the UK and EU are pushing for greater energy efficiency, lower carbon intensity, and improved reporting. 

To meet these goals, some infrastructure teams are turning to dynamic power systems that can support grid-balancing, on-site energy integration, and intelligent load distribution. These approaches support growing AI demand while addressing grid capacity and environmental factors. 

Cooling that keeps up 

Thermal output from high-density AI workloads is forcing a rethink of cooling strategies. In many cases, traditional air-cooling systems alone are no longer sufficient to keep equipment within safe operating temperatures. 

Liquid cooling, once reserved for niche or high-performance computing (HPC) environments, is now being adopted more broadly. It enables improved heat transfer, including waste heat reuse, and supports the thermal demands of tightly packed racks. 

Data centre operators are also exploring hybrid systems that use both air and liquid in stages, depending on workload intensity. These solutions can reduce energy consumption while maintaining operational stability. 

Cabling as a performance enabler 

Network design and cabling are often underappreciated in AI infrastructure discussions. However, poor cable planning can reduce throughput, increase latency, and create long-term maintenance challenges. 

Congested layouts restrict airflow and complicate upgrades. Poor-quality materials degrade under the heat and load of high-performance systems, introducing data errors or signal loss. 

Standardising on high-performance cable specifications, clear containment layouts, and rigorous testing protocols improves both uptime and scalability. 

Designing for purpose 

Critical digital infrastructure cannot be designed in a vacuum. The AI workload being deployed should shape the power profile, cooling plan, and physical footprint of the data centre. 

Running inference on camera feeds in a logistics centre will have different requirements from training a model on financial data or deploying a language interface in a branch network.

The best outcomes are achieved when infrastructure and AI teams collaborate from the start. Early alignment avoids costly redesigns and enables systems to be right-sized for performance and efficiency.

Edge vs core: finding the balance 

There is no single model for AI deployment. Some workloads will continue to run best in core data centres or public cloud environments, particularly large-scale training tasks. 

Others, such as decision automation, content filtering, or voice response, benefit from proximity to end users. These use cases are helping to define the next generation of edge facilities. 

The goal is not to replace the core, but to complement it. Hybrid architectures that combine centralised and decentralised capacity are likely to become the norm. 

Sustainability is a design issue 

As AI grows, so does its environmental footprint. Data centre energy consumption is expected to rise substantially by the end of the decade, with AI playing a major role. Goldman Sachs forecasts that AI will drive a 165% increase in data centre power demand by 2030.

Operators are responding by adopting heat reuse strategies, improving power usage effectiveness, and implementing water-saving cooling systems. Some data centre providers have pledged to return more water to the system than they consume by 2030. 
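One of the levers mentioned above, power usage effectiveness (PUE), is simply the ratio of total facility energy to the energy delivered to IT equipment, with values closer to 1.0 indicating less overhead. The short sketch below shows the calculation with purely illustrative figures; the example values are assumptions, not reported results.

```python
# Power usage effectiveness (PUE): total facility energy divided by the
# energy delivered to IT equipment. The closer to 1.0, the less energy is
# spent on cooling, power conversion, and other overhead.
# Figures below are illustrative assumptions only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return PUE; the ratio approaches 1.0 as non-IT overhead shrinks."""
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5: typical legacy site
print(pue(total_facility_kwh=1_150_000, it_equipment_kwh=1_000_000))  # 1.15: efficient modern design
```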

Sustainability is no longer an optional consideration. It is becoming central to long-term infrastructure planning and regulatory compliance. 

Standards and repeatability 

To meet demand across geographies, many infrastructure teams are turning to standardised, pre-engineered solutions. Repeatable reference designs make it easier to roll out edge capacity consistently across multiple regions or partners. 

Standardisation also supports maintainability. It simplifies training, documentation, and supply chain logistics. However, flexibility must be preserved to adapt to local conditions such as power availability, climate, or building constraints. 

What success looks like 

AI-ready infrastructure is a continuous capability that must adapt as models evolve, data volumes grow, and regulatory expectations shift. 

Success means systems that perform reliably under load, that scale without major redesign, and that align with the broader goals of the organisation. 

Achieving that requires early collaboration, thoughtful design, and an understanding that critical digital infrastructure is no longer just a technical detail.  

 
