
The Hidden Power Problem Threatening AI Adoption

By Ryne Friedman, Associate, hi-tequity

Artificial intelligence (AI) is driving the next era of enterprise transformation. Predictive analytics, automated decisioning, and generative capabilities herald new products and customer experiences. Yet a critical challenge looms over this AI revolution: the physical infrastructure required to support it. Data center power availability, cooling capacity, and resiliency, which have typically been operational concerns, are now strategic barriers to AI adoption.

As such, infrastructure resilience has become a boardroom issue. Questions such as “Can you deliver enough power, cooling, and grid capacity on time?” are no longer left to facility managers; they are now central variables in corporate strategy. Grid stress and long lead times for mission-critical equipment are hot topics in executive sessions as companies assess whether their AI plans are even feasible.

Enterprises seeking to capitalize on the promise of AI risk being stuck in a holding pattern as data center builders battle over power and over the best locations to house the most powerful chips created to date. These factors now dictate when AI servers come online and how soon they generate revenue.

Power and Cooling: Barriers to AI Adoption

AI workloads are fundamentally different from the enterprise computing that came before them. Training large language models (LLMs) and deploying inference at scale can consume 30-60 kW per cabinet, double or even triple the load of legacy CPU racks. Conventional cooling models were never designed to handle the thermal output of these high-density GPU clusters.
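To put those densities in perspective, the short Python sketch below converts a rack's electrical draw into the heat its cooling system must remove, since essentially every watt drawn by IT equipment is rejected as heat. The 12 kW legacy rack and 45 kW AI rack figures are illustrative assumptions, the latter simply the midpoint of the range cited above.

```python
# Illustrative comparison of rack heat loads (assumed, representative figures only).
# Nearly all electrical power drawn by IT equipment ends up as heat, so a rack's
# cooling burden tracks its power draw almost one-to-one.

KW_TO_BTU_PER_HR = 3412.14  # 1 kW of heat is roughly 3,412 BTU/hr

def cooling_load_btu_hr(rack_kw: float) -> float:
    """Approximate heat a cooling system must remove for a rack drawing rack_kw."""
    return rack_kw * KW_TO_BTU_PER_HR

legacy_cpu_rack_kw = 12.0  # assumed typical legacy CPU rack
ai_gpu_rack_kw = 45.0      # midpoint of the 30-60 kW range cited above

for label, kw in [("Legacy CPU rack", legacy_cpu_rack_kw), ("AI GPU rack", ai_gpu_rack_kw)]:
    print(f"{label}: {kw:.0f} kW -> {cooling_load_btu_hr(kw):,.0f} BTU/hr to remove")

print(f"Density multiple: {ai_gpu_rack_kw / legacy_cpu_rack_kw:.1f}x")
```

At the assumed midpoint, a single AI cabinet rejects roughly 150,000 BTU/hr, nearly four times the legacy figure, which is why airflow designs sized for the latter break down.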

Utility companies and data center executives confirm that grid stress is the leading challenge to infrastructure development. A 2025 Deloitte survey focusing on the infrastructure demands of artificial intelligence found that 92% of data center operators view power capacity as a key point of resource competition, compared to 71% of power companies. Deloitte also projects that U.S. AI-driven data center demand could grow from 4 GW in 2024 to 123 GW by 2035, a thirtyfold increase.
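As a rough sanity check on that projection, the sketch below computes the compound annual growth rate implied by moving from 4 GW in 2024 to 123 GW in 2035. The year-by-year path it prints assumes steady compound growth purely for illustration; it is not part of the Deloitte projection itself.

```python
# Back-of-the-envelope view of the projection cited above:
# U.S. AI-driven data center demand growing from 4 GW (2024) to 123 GW (2035).

start_gw, end_gw = 4.0, 123.0
start_year, end_year = 2024, 2035
years = end_year - start_year  # 11 years

growth_multiple = end_gw / start_gw            # roughly 30x overall
cagr = (end_gw / start_gw) ** (1 / years) - 1  # implied compound annual growth rate

print(f"Overall growth: {growth_multiple:.1f}x over {years} years")
print(f"Implied CAGR: {cagr:.1%} per year")

# Year-by-year trajectory under steady compound growth (an assumption for
# illustration, not part of the published projection).
for year in range(start_year, end_year + 1):
    print(f"{year}: ~{start_gw * (1 + cagr) ** (year - start_year):5.1f} GW")
```

The implied rate works out to roughly 36.5% per year sustained for more than a decade, which helps explain why interconnection queues and equipment lead times are becoming binding constraints.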

From Infrastructure Bottlenecks to Business Risk

Infrastructure constraints are not just engineering problems; they are now a business risk with direct impact on outcomes. If power and cooling lag, AI projects will suffer:

  • Slower analytics deployment: Model training timelines extend, delaying insights that could drive revenue or operational savings.
  • Eroded ROI: Budgets designed around rapid adoption face overruns when facilities take longer, or cost more, to build.
  • Competitive disadvantage: Rivals with resilient infrastructure gain a faster time-to-value and capture opportunities first.

In short, power and cooling bottlenecks can undermine even the best-laid AI plans.

Grid Build-Out Realities

The grid itself is another complex matter to consider. While data centers can be built in 18-24 months, new large-scale power generation and transmission projects can take a decade or more to complete. Gas power plants that have not yet contracted equipment are unlikely to come online before the 2030s, and renewable energy projects face transmission bottlenecks and permitting cycles that stretch beyond a decade.

Even in locations where renewable energy is available, delivering that power to the load centers where AI data centers reside remains a long-term challenge. With 92% of new capacity additions in 2025 expected to come from renewables and battery storage, grid bottlenecks are likely to intensify.

Supply Chain Fragility Compounds the Challenge

Power isn’t the only concern. Delays in critical equipment must also be expected. Transformers, switchgear, UPS systems, and cooling distribution units can have lead times stretching to 12-18 months or longer. Against the swift pace of AI’s innovation cycle, these delays, combined with grid challenges, are pushing project schedules out of alignment.

Demand surges from hyperscalers and colocation providers are also straining global supply chains. AI workloads are widely reported to have increased dramatically over the last five years, driving a tenfold increase in demand for GPUs and the infrastructure to support them.

Cooling as the Silent Constraint

Just as power is constrained, so is cooling. Dense GPU racks create concentrated thermal zones that overwhelm legacy airflow models. Traditional UPS systems can become failure points rather than safeguards when placed in thermally compromised environments.

Emerging solutions such as liquid cooling, hot-aisle containment, and even direct-to-chip cooling are becoming mandatory. Hyperscalers are piloting 800VDC cabinet-level power distribution mainly to support high-density liquid cooling and reduce energy conversion losses. Cooling strategy is now inseparable from AI readiness.
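A simple heat-transfer sketch shows why liquid is displacing air at these densities: the heat a coolant loop can carry away is set by its mass flow, its specific heat, and its temperature rise. The 45 kW rack and 10 °C coolant temperature rise used below are illustrative assumptions, not vendor specifications.

```python
# A minimal sketch of why liquid cooling scales to high-density racks:
# heat removed = mass flow * specific heat * coolant temperature rise.
# Figures are assumptions for illustration, not vendor specifications.

WATER_CP_J_PER_KG_K = 4186.0   # specific heat of water
WATER_DENSITY_KG_PER_L = 1.0   # approximate density of water

def required_flow_l_per_min(heat_kw: float, delta_t_c: float) -> float:
    """Coolant flow needed to remove heat_kw with a delta_t_c rise across the rack."""
    mass_flow_kg_s = (heat_kw * 1000.0) / (WATER_CP_J_PER_KG_K * delta_t_c)
    return mass_flow_kg_s / WATER_DENSITY_KG_PER_L * 60.0

rack_heat_kw = 45.0  # assumed AI rack in the 30-60 kW range
delta_t_c = 10.0     # assumed coolant temperature rise

flow = required_flow_l_per_min(rack_heat_kw, delta_t_c)
print(f"~{flow:.0f} L/min of water removes {rack_heat_kw:.0f} kW at a {delta_t_c:.0f} C rise")
```

Roughly 65 liters per minute of water carries away the full 45 kW; moving the same heat with air would take orders of magnitude more volumetric flow, since air holds roughly 1/3,500th as much heat per unit volume as water.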

The Boardroom Imperative

Power and cooling are no longer technical details relegated to the operations team; they are now essential components of a comprehensive IT strategy, as well as boardroom concerns. Executives should consider:

  1. Investment Strategy Alignment: AI budgets must incorporate infrastructure resilience as a first-order priority.
  2. Digital Transformation Governance: Transformation roadmaps should integrate infrastructure readiness checkpoints.
  3. Regulatory Navigation: Enterprises should leverage reforms such as FERC Order 2023’s “first-ready, first-served” cluster studies to speed interconnection.
  4. Risk Management: The Uptime Institute reports that unplanned outages now average over $100,000 in costs, much of it tied to cascading power or cooling failures.

Resilience as a Competitive Differentiator

Enterprise boardrooms that prioritize infrastructure resilience will gain a competitive advantage. Proactive organizations are diversifying supplier bases, locking in equipment orders early, and evaluating modular builds. Some are co-planning grid expansions with utilities, while others are developing hybrid strategies that combine on-site generation with renewables. HVDC cabinet power and advanced liquid cooling could transform efficiency and density while reducing operational risk.

AI’s success will be determined by both the algorithms and the infrastructure behind them. Resilient power and cooling infrastructure is a must. Without it, enterprises face slower AI adoption, eroded ROI, and lost competitive advantage. For directors and executives, AI strategy is ultimately about ensuring that the infrastructure exists to handle AI’s demands. Solving power and cooling problems will decide whether AI initiatives deliver on their promise or stall in their infancy.
