
Rethinking storage for the AI era

By Alex Segeda, Business Development Manager, EMEAI at WD

AI is reshaping digital IT infrastructure at a pace that few anticipated. Analysts have predicted that AI will drive up to 70% of global data centre demand by 2030, and according to IDC, annual data volumes will more than double to 527.5 zettabytes by 2029 (Worldwide IDC Global DataSphere Forecast, 2025-2029, May 2025, Doc #US53363625).

However, in many boardrooms, the conversation remains focused on GPUs, model performance, and talent. Storage is often treated as a passive layer that simply absorbs the output of AI systems. In reality, storage has become one of the most strategic components for an AI-ready infrastructure.

If organisations want to scale AI sustainably and economically, they must rethink storage as an active, flexible foundation, rather than an afterthought.

Scaling capacity for the zettabyte era without disruption

AI workloads generate enormous volumes of structured and unstructured data as models are trained, refined and run for inference at scale. Video analytics, edge deployments, smart city systems and generative AI applications all depend on retraining against and re-accessing these vast datasets.

Future-proofing infrastructure today means planning around the highest-capacity drives. The industry already has 40TB HDDs in qualification, and roadmaps point clearly towards 100TB+ within this decade. But reactively adding capacity alone won’t answer this demand; businesses need predictable, disruption-free scaling to keep pace with AI workloads.

Historically, major technology shifts have forced customers into complex migrations or architectural redesigns that are clunky, slow and expensive. That kind of disruption simply won’t cut it if businesses want to keep up in the AI era: AI models evolve quickly, so infrastructure needs to move at the same speed rather than playing catch-up.

A dual-path approach to technology adoption can help overcome this challenge. Developing industry-proven designs such as energy-assisted perpendicular magnetic recording (ePMR) alongside newer technologies such as heat-assisted magnetic recording (HAMR) allows hyperscalers and enterprises to adopt HDD technologies on their own timelines, with predictable capacity planning and seamless scaling. In other words, they can accelerate capacity growth on architecture they already trust.

Challenging the ‘flash-only’ narrative

Another common misconception in AI infrastructure design is that “performance” automatically translates to “flash storage”. Of course, SSDs play a crucial role in high-performance tiers. However, going the ‘flash-only’ route can dramatically increase cost and complexity. This is because SSD technology comes with significantly higher costs – IDC reports a 5x-10x $/TB premium for SSDs versus HDDs (IDC Worldwide Enterprise HDD Market Overview, 2025-2029, July 2025) – and scaling it across petabyte and exabyte environments can quickly become economically unsustainable.
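To put that premium into perspective, consider a simple back-of-envelope comparison for a hypothetical one-exabyte tier. The sketch below uses an illustrative HDD price per terabyte rather than a quoted figure; only the 5x-10x premium comes from the IDC data above.

# Back-of-envelope acquisition cost for a 1 EB (1,000,000 TB) storage tier.
# The HDD $/TB figure is purely illustrative; the 5x-10x SSD premium is the
# range cited by IDC above. Power, cooling and refresh costs are excluded.

CAPACITY_TB = 1_000_000          # 1 exabyte expressed in terabytes
HDD_COST_PER_TB = 15.0           # assumed illustrative HDD cost, $/TB
SSD_PREMIUM_RANGE = (5, 10)      # SSD $/TB premium over HDD, per IDC

hdd_cost = CAPACITY_TB * HDD_COST_PER_TB
ssd_cost_low = hdd_cost * SSD_PREMIUM_RANGE[0]
ssd_cost_high = hdd_cost * SSD_PREMIUM_RANGE[1]

print(f"HDD tier cost:        ${hdd_cost / 1e6:,.0f}M")
print(f"All-flash tier cost:  ${ssd_cost_low / 1e6:,.0f}M to ${ssd_cost_high / 1e6:,.0f}M")
print(f"Flash premium:        ${(ssd_cost_low - hdd_cost) / 1e6:,.0f}M to "
      f"${(ssd_cost_high - hdd_cost) / 1e6:,.0f}M")

Even at the low end of the premium, and with these illustrative prices, the all-flash option adds tens of millions of dollars in acquisition cost before power, cooling and refresh cycles are counted.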

Recent architectural innovations in HDD technology are closing the performance gap. New high-bandwidth drive technology enables multiple tracks to be read or written simultaneously, delivering up to double the overall bandwidth of previous solutions. Meanwhile, dual-pivot actuator technology increases sequential IO by adding a second independent pivot and actuator, without sacrificing capacity or requiring major software reconfiguration.

The result is a new performance tier – one that supports AI workloads previously considered exclusive to flash, but at HDD price points.

Importantly, AI infrastructure must be tiered and workload-specific. The bulk of AI data, from training datasets and historical records to model refinement archives, requires scalable, cost-efficient storage that can deliver high-performance access without the expense of all-flash architectures. Balancing performance and capacity is now a strategic decision that fast-tracks innovation, not a purely technical one.

Sustainability as a design non-negotiable

AI’s rapid expansion is also creating another challenge: vast energy consumption. Data centres are already significant energy consumers, responsible for 1.5 per cent of the world’s electricity consumption, a share that is on course to double by 2030. AI workloads, with their appetite for massive, continuous power to train and run complex, large-scale models, intensify this pressure further. As a result, organisations risk rising electricity costs, regulatory scrutiny and increasingly demanding environmental targets.

New power-optimised hard drives are demonstrating that meaningful gains are possible: they are expected to reduce energy consumption by up to 20% whilst maintaining split-second accessibility. These designs intentionally trade a small amount of random IO performance for significantly lower power use and higher capacity, making them ideal for warm AI data, i.e., information that must be accessible in seconds rather than hours.

This helps bridge the gap between the warm and cold tiers. Tape remains too slow for many AI use cases, while SSDs are often too costly to operate at scale. Power-optimised HDDs create a sustainable middle ground.

At hyperscale, a 20% reduction in power consumption translates into substantial operational savings and carbon reductions. As AI infrastructure expands through 2026 and beyond, total cost of ownership and energy efficiency will become central to infrastructure decisions. Sustainability can now give enterprises a competitive edge, rather than being an incidental benefit.
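As a rough illustration of that point, the sketch below scales the 20% figure across a hypothetical fleet. Every other input, including the fleet size, per-drive wattage, electricity price and grid carbon intensity, is an assumption chosen for the example, and real-world savings would also be amplified by cooling overheads.

# Illustrative estimate of what a 20% per-drive power reduction means at fleet
# scale. All inputs except the 20% reduction are assumptions for the example,
# not measured figures, and cooling overhead (PUE) is ignored.

FLEET_DRIVES = 1_000_000         # hypothetical hyperscale HDD fleet size
BASELINE_WATTS = 9.0             # assumed average operating power per drive
POWER_REDUCTION = 0.20           # power-optimised drives: up to 20% lower
ELECTRICITY_PRICE = 0.10         # assumed $ per kWh
CARBON_INTENSITY = 0.4           # assumed kg CO2 per kWh

HOURS_PER_YEAR = 24 * 365
saved_kwh = FLEET_DRIVES * BASELINE_WATTS * POWER_REDUCTION * HOURS_PER_YEAR / 1000

print(f"Energy saved per year:  {saved_kwh / 1e6:,.1f} GWh")
print(f"Cost saved per year:    ${saved_kwh * ELECTRICITY_PRICE / 1e6:,.1f}M")
print(f"CO2 avoided per year:   {saved_kwh * CARBON_INTENSITY / 1e6:,.1f} kt")

The absolute numbers matter less than the shape of the result: at fleet scale, a per-drive efficiency gain compounds into gigawatt-hours of energy, millions of dollars and thousands of tonnes of CO2 every year.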

Designing storage for the next decade of AI

Future-proofing infrastructure in the AI era is not defined by speed alone. It is defined by balance – capacity that scales predictably, performance deployed intelligently, and power efficiency engineered into the architecture from the beginning.

Storage must be designed as an active participant in AI innovation, as it feeds into training pipelines, enables model iteration and supports long-term analytics. Without a resilient storage foundation, even the most advanced AI initiatives can struggle to deliver consistent value.

As AI continues to drive data centre demand, organisations that treat storage strategically, rather than tactically, will be best positioned to compete. The winners of the AI race will likely not simply have the fastest models; they will have built storage architectures capable of sustaining the data growth and intelligence demands of the next decade.
