Future of AI

How smarter cabling can cool the AI boom

By Jan Honig, VP Sales, Data Centre Solutions for Europe at CommScope

Over the past year, artificial intelligence (AI) has advanced at an incredible pace, pushing the boundaries of what’s possible in technology. As a result, machine learning, deep learning and natural language processing have become increasingly integrated into our daily lives. Today, data centres account for roughly 1% of global electricity consumption, a figure projected to rise to 3-4% by 2030, driven largely by AI. By 2027, the AI industry alone is predicted to consume as much electricity annually as the Netherlands, an estimated 85 to 134 terawatt hours (TWh).

To meet the demands of AI, network infrastructures will need to evolve, impacting everything from cabling and connectivity to architecture, resilience and flexibility. Major shifts are already underway, with bold plans and transformative cabling designs ready to usher in more energy-efficient data centres.

The never-ending need for energy  

The International Energy Agency (IEA) has sounded the alarm over the escalating energy footprint of data centres, which is complicating the sustainability targets of leading tech companies. Microsoft, for example, has seen its emissions rise by nearly 30% since announcing plans in 2020 to become carbon negative, largely due to its extensive AI data centre buildout. Company leaders have described their sustainability targets as a “moonshot”, acknowledging that progress has been more difficult than anticipated.

All of this places data centre operators under huge pressure: petabytes of data flowing through their networks must be balanced against the requirement for ultra-low latency. At the same time, operators must adapt to the surging power demands and higher fibre counts that are reshaping how data centres are designed. It is their task to make strategic investments to future-proof their facilities, including scalable, efficient cabling solutions able to meet evolving AI demands.

Cabling for high connectivity and low latency 

Processing large AI models requires numerous interconnected GPUs spread across multiple servers and racks, creating unique cabling challenges and making cooling a critical sustainability concern. GPU clusters demand significantly higher connectivity between servers than traditional systems, yet the immense power and heat they generate means fewer servers can be housed in each rack.

The result is a sharp increase in inter-rack cabling, as GPUs must connect back to switches within the same row or room. To support this bandwidth-intensive infrastructure, 400G and 800G links are essential, often over distances beyond the reach of conventional copper connections such as DACs, AECs or ACCs. In addition, each server must connect to the switch fabric, storage systems and out-of-band management layers.

As mentioned, latency remains a critical factor. Studies suggest that up to 30% of AI model training time is lost to network latency, with the remaining 70% spent on compute. Ideally, GPUs should therefore be kept within 100 metres of one another. However, because a single GPU rack can consume more than 40 kW, far more than a typical server rack, data centre operators often have no choice but to spread them out. In today’s space-constrained data centres, that makes managing the physical footprint of AI infrastructure difficult. Yet solutions such as rollable ribbon fibre can help make the most of these narrow pathways.
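To make those figures concrete, here is a minimal back-of-the-envelope sketch in Python. The 30/70 split between network and compute time and the 40 kW rack figure come from the article; the network speed-up factor and the row power budget are hypothetical assumptions chosen purely for illustration.

# Back-of-the-envelope sketch. The 30% network / 70% compute split and the
# 40 kW GPU rack figure are from the article; the speed-up factor and row
# power budget below are hypothetical assumptions for illustration only.

NETWORK_SHARE = 0.30   # share of training wall-clock time lost to the network
COMPUTE_SHARE = 0.70   # share spent on compute

def training_time_saving(network_speedup: float) -> float:
    """Amdahl-style estimate of total time saved when only the network
    portion of a training run is sped up by `network_speedup` times."""
    new_total = COMPUTE_SHARE + NETWORK_SHARE / network_speedup
    return 1.0 - new_total

# Halving network stall time shortens the whole training run by about 15%.
print(f"2x faster network -> {training_time_saving(2.0):.0%} shorter run")

# Rack spreading: with 40 kW per GPU rack and a hypothetical 120 kW row
# power budget, only three GPU racks fit per row, which is why clusters
# sprawl and cable runs start pushing towards the 100-metre limit.
GPU_RACK_KW = 40
ROW_POWER_BUDGET_KW = 120   # assumption; varies widely between facilities
print(f"GPU racks per row: {ROW_POWER_BUDGET_KW // GPU_RACK_KW}")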

Rollable ribbon cables can pack six 3,456-fibre bundles into a single four-inch duct, effectively doubling the capacity of flat ribbon designs. Because the fibres are bonded intermittently in a loose web, the cable can be rolled into a compact cylinder, allowing each fibre to flex independently and conform to tight spaces. This not only makes the most of the available space but also makes the cables easier to handle, as installers can position the fibre more naturally into a smaller cross-section, which is ideal for splicing.
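The duct-capacity arithmetic is simple enough to check, as in the short sketch below. The bundle and fibre counts are taken from the figures above; the flat-ribbon number is simply half, per the doubling comparison, rather than a measured value.

# Duct-capacity arithmetic behind the claim above. Bundle and fibre counts
# are from the article; the flat-ribbon figure is assumed to be half, per
# the "doubling" comparison, not a measured value.

FIBRES_PER_CABLE = 3_456
CABLES_PER_FOUR_INCH_DUCT = 6

rollable_total = FIBRES_PER_CABLE * CABLES_PER_FOUR_INCH_DUCT
flat_ribbon_total = rollable_total // 2

print(f"Rollable ribbon: {rollable_total:,} fibres per four-inch duct")
print(f"Flat ribbon (approx.): {flat_ribbon_total:,} fibres")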

Operators are turning to optical transceivers and fibre cabling 

Looking ahead, the value proposition of data centres may well depend on their ability to deliver extensive processing and storage capabilities. To support this, operators should carefully consider their choice of optical transceivers and fibre cabling for AI clusters. Because the links in an AI cluster are short, optics cost is driven largely by the transceivers rather than the fibre. Transceivers that use parallel fibres offer a clear advantage, as they eliminate the optical multiplexers and demultiplexers required for wavelength division multiplexing (WDM). This not only reduces infrastructure costs but also lowers overall power consumption.
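As a rough illustration of the trade-off, the sketch below compares fibre counts for a common parallel 400G optic (400G-DR4, four lanes over eight fibres) and a WDM-based one (400G-FR4, four wavelengths over a duplex pair). The cluster size is a hypothetical assumption, and cost and power are deliberately not modelled.

# Fibre-count comparison for a hypothetical cluster of 400G links. Lane and
# fibre counts reflect common 400G-DR4 (parallel) and 400G-FR4 (WDM) optics;
# the port count is an assumption, and cost/power are not modelled here.

GPU_PORTS = 512                # hypothetical cluster size

PARALLEL_FIBRES_PER_LINK = 8   # 400G-DR4: four lanes, one fibre pair each
WDM_FIBRES_PER_LINK = 2        # 400G-FR4: four wavelengths on one duplex pair

print(f"Parallel optics: {GPU_PORTS * PARALLEL_FIBRES_PER_LINK:,} fibres")
print(f"WDM optics:      {GPU_PORTS * WDM_FIBRES_PER_LINK:,} fibres")

# WDM needs fewer fibres, but every transceiver carries mux/demux stages,
# which is the cost and power overhead the article argues against for
# short AI-cluster links despite the higher parallel fibre count.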

Strategic cabling design   

In summary, as AI continues to transform industries, data centres must move quickly to meet soaring demands for speed, capacity, and energy efficiency. 

Infrastructure planners should prioritise forward-looking, scalable and sustainable designs that can adapt to the pace of innovation. Advanced cabling systems will play a crucial role in cutting costs, reducing power consumption and streamlining deployment, ensuring that today’s facilities are equipped not only for current AI workloads but also for the breakthroughs still to come.

  
