
The AI industry is facing a moment of reckoning. Across the world, tech giants have been pouring billions into massive data centres and model training in a bid to win the ‘AI race’. They have been moving at speed to release new and competitive copilots, LLMs and agents, and to ensure they capitalise on customer interest in AI-led innovation.
And yet, last month, a small Chinese startup named DeepSeek triggered a $1 trillion-plus sell-off in global equities markets as it quietly demonstrated that the “bigger is better” AI philosophy is fundamentally flawed.
Poor timing
Just days before DeepSeek went viral, President Trump stood alongside tech titans Sam Altman, Masayoshi Son and Larry Ellison to unveil Stargate, a $500 billion plan to maintain U.S. dominance in AI infrastructure. At the same time, the UK announced that it was attracting £200 million a day in private AI investment to power its national economic renewal. The timing of these announcements couldn’t have been more ironic.
As news emerges that DeepSeek will likely launch a new model sooner than expected, further undercutting Western progress in the AI race, we are faced with an uncomfortable question: What if we’re building tomorrow’s AI Rust Belt?
Exposing the efficiency gap
DeepSeek’s model achieves what seemed impossible: capabilities comparable to leading models while using significantly fewer resources. Its API costs $0.55 per million input tokens, compared with OpenAI’s $15, a price reduction of more than 96 percent. That’s not just an efficiency gain; it’s a fundamental challenge to how we think about AI development.
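For readers who want to sanity-check that figure, here is a quick back-of-the-envelope calculation using only the two list prices quoted above; the 10-million-token monthly workload is a hypothetical illustration, not a figure from either provider.

```python
# Back-of-the-envelope comparison using the per-million-input-token list
# prices quoted in the article. The 10M-token monthly workload is hypothetical.
DEEPSEEK_PRICE = 0.55   # USD per million input tokens
OPENAI_PRICE = 15.00    # USD per million input tokens

reduction = (OPENAI_PRICE - DEEPSEEK_PRICE) / OPENAI_PRICE
print(f"Price reduction: {reduction:.1%}")  # ~96.3%

monthly_tokens_millions = 10  # hypothetical workload: 10M input tokens/month
print(f"DeepSeek: ${DEEPSEEK_PRICE * monthly_tokens_millions:.2f}/month")
print(f"OpenAI:   ${OPENAI_PRICE * monthly_tokens_millions:.2f}/month")
```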
This efficiency gap becomes even more striking when we consider the open source strategies at play. As Meta’s chief AI scientist, Yann LeCun, notes, it’s not that China’s AI is “surpassing the U.S.,” but rather that “open source models are surpassing proprietary ones.”
This opens up a new possibility: what if massive compute spending isn’t the price of progress after all? DeepSeek’s approach suggests that when you combine transparency with efficiency, you create something powerful — a pathway to more sustainable and accessible AI development.
History repeats itself
There is a clear parallel in U.S. industrial history: American steel companies kept building massive mills even as more efficient approaches emerged elsewhere.
What happens if China repeats with AI what it did with steel: underprice the competition, win the market and leave expensive infrastructure sitting idle? DeepSeek’s success suggests that the future of AI might not belong to those who build the biggest models, but to those who build the most transparent and efficient ones.
Chinese firms aren’t the only ones prioritising efficiency either. While the UK and U.S. bet on big data centres, France is quietly embracing a more nuanced, lifecycle-centric model.
Dubbed “frugal AI,” it emphasises:
- Energy-efficient design: Benchmarking AI systems with green algorithms and eco-design principles;
- Specialised (not generic) models: Favouring task-specific AI over sprawling monoliths;
- Lifecycle approach: From hardware manufacturing to heat recovery, ensuring each phase is sustainable;
- Open reuse of existing models: Aligning with the Association Française de Normalisation’s BP29 on “reusing trained algorithms,” so developers can iterate on proven AI kernels rather than re-inventing them.
France also benefits from a relatively low-carbon energy mix, courtesy of its existing nuclear infrastructure. Rather than pour billions into entirely new power grids, French organisations can focus on demand-side optimisation: trimming AI’s resource needs via smaller, carefully targeted models. This echoes several AFNOR recommendations — such as compressing algorithms, reducing data sprawl and systematically reusing pre-trained models — that collectively maximise efficiency, much like what made DeepSeek so successful.
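To make the “reuse rather than retrain” idea concrete, here is one illustrative sketch in that spirit: loading a small pre-trained model, freezing its backbone and fine-tuning only a lightweight classification head. The model choice (DistilBERT) and the Hugging Face `transformers` library are assumptions for the example, not part of the AFNOR guidance itself.

```python
# Illustrative sketch of reusing a trained model rather than training from
# scratch: load a compact pre-trained checkpoint, freeze its backbone, and
# fine-tune only the new classification head.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # compact, widely reused checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the pre-trained backbone so only the classifier head is updated.
for param in model.distilbert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Training {trainable:,} of {total:,} parameters ({trainable / total:.1%})")
```

The point of the sketch is the ratio it prints: most of the compute-hungry work has already been done once, upstream, and every downstream reuse amortises it.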
AI investment needs to reflect both today’s and tomorrow’s societal demands
It is hard to say what happens next. But one thing is clear: transparency and efficiency, in terms of both cost and energy, must be front and centre in how we develop and deploy AI.
Large-scale infrastructure spending can bring long-term benefits. Just as the dot-com boom and bust left behind an overbuilt fibre network that later fuelled the rise of cloud computing and streaming, today’s AI investments could eventually pay off. However, those benefits are not guaranteed. And, as in the dot-com aftermath, the layoffs, lost pensions and devastated communities came first. We cannot afford to ignore the human costs.
While AI scaling is inevitable, today’s infrastructure establishes the architectural frameworks that will shape how AI systems evolve. Our investments therefore need to align with both today’s demands and tomorrow’s possibilities. The real challenge is as much about how much we build as about how we build it: creating flexible frameworks that can support evolving AI innovations.
The race for AI supremacy won’t be won by whoever builds the biggest data centres, but by whoever builds the smartest, most transparent and most efficient ones. The question is: with the U.S. committing $500 billion to Stargate and the UK prioritising AI as part of its national growth agenda, will these nations take up a secure and sustainable leadership position, or will continued excess ultimately breed waste?