AI Leadership & Perspective

The Next Phase of AI: Smarter Investment, Stronger Sustainability

By Richard Davis, CEO & Co-founder, 51toCarbonZero

For the past two years, artificial intelligence has moved at extraordinary speed. Organisations rushed to integrate and experiment with generative AI across functions, often driven by competitive pressure rather than clear strategy. That period of rapid expansion is now giving way to something more deliberate.

As we move into 2026, AI is entering a more accountable phase. Businesses are beginning to ask harder questions – not just about what AI can do, but whether it is delivering measurable value at a cost they understand, and with an environmental footprint they can justify.

This is not a retreat from AI. It is a necessary maturation.

From experimentation to evaluation

Early adoption was defined by urgency and FOMO. Teams trialled multiple tools in parallel, layered AI into workflows, and tested new use cases without fully understanding the long-term implications. That approach made sense at the time. With such a fast-evolving technology, the perceived risk of inaction outweighed the risk of inefficiency.

But experimentation without evaluation has consequences. Many organisations now find themselves with overlapping tools, unclear ownership, inconsistent usage, and rising compute costs. AI portfolios have grown organically, not strategically.

This is where the scrutiny begins.

CFOs are now stepping in to assess AI investment in the same way they would any other major infrastructure decision. The focus is no longer just on whether productivity has improved, but whether it has improved enough to justify ongoing spend. Attention is turning to hidden costs such as cloud consumption, data storage, retraining cycles, and operational overhead.

Increasingly – and importantly – energy use, emissions, and the environmental impact of running AI at scale are also becoming part of that assessment.

The physical footprint of AI

Generative AI is resource-intensive by design. Training and running large language models (LLMs) requires vast amounts of compute power, and that demand translates directly into energy consumption. While an individual query may appear insignificant, usage at scale tells a different story.

This is most visible in the rapid expansion of data centres. Across the UK, US and Europe, proposed developments are raising questions about electricity demand, water use and grid capacity. What was once viewed as a distant infrastructure concern is becoming a local and national issue, with communities, regulators and policymakers scrutinising the environmental implications of AI-led growth.

For businesses, this matters because AI’s footprint increasingly sits within their procurement emissions – typically the largest and least controlled part of a company’s carbon profile. As sustainability reporting becomes more rigorous, digital infrastructure can no longer be treated as immaterial or external. AI is now a measurable contributor to emissions, and one that organisations will be expected to understand and manage.

Embedding sustainability into AI decision-making

As a result, environmental impact is now becoming part of the core business case for AI, rather than a separate consideration. Decisions about model selection, deployment architecture, and usage patterns are beginning to be assessed through both financial and environmental lenses.

One of the first steps to more sustainable and cost-effective AI is measurement. The first AI wave saw tools being deployed with broad ambitions – “efficiency”, “innovation”, “speed” – but without agreed benchmarks. This has made performance difficult to assess and obscured the true cost of AI use.

In many cases, AI has added layers of compute and complexity without clearly replacing manual work or delivering proportional value. As AI investment comes under review, outcomes will matter more than potential, and that means businesses must start introducing clear KPIs to measure AI performance and impact.

What’s required across the board is clearer ownership, tighter governance and metrics that capture not only productivity and cost, but also energy consumption and carbon impact. Sustainability can no longer sit adjacent to AI strategy; it must be part of how performance is defined.

It’s important to note here that the most sustainable choices are generally also the most commercially sensible. Lower compute intensity reduces cloud costs. Better governance reduces redundancy. Clearer usage guidelines improve consistency and outcomes.

From “AI everywhere” to “AI where it matters”

A critical shift is underway: a move away from defaulting to large, general-purpose models for every task. In most cases, these models are unnecessary. Smaller, task-specific systems can deliver equivalent results for defined use cases with significantly lower compute requirements. They are often faster, easier to control and cheaper to run, with a much lighter environmental footprint.

This is not about limiting capability. It is about precision. As AI estates mature, organisations are becoming more selective and matching models to purpose, which involves refining prompts to reduce waste, and designing systems that minimise unnecessary processing.

Efficiency, in this context, is both a sustainability lever and a commercial one.

The role of leadership

Responsible AI deployment means moving beyond enthusiasm for new technology and applying the same rigour used for any critical business infrastructure. Leaders need to be clear about the problems AI is being deployed to solve and the value it is expected to create, and how these align with the trade-offs involved in its use.

Accurately determining this will require cross-functional collaboration. AI strategy cannot sit solely with technology teams – finance, sustainability, operations and procurement all have a role to play in shaping how AI is utilised responsibly, efficiently and at scale. Decisions about models, vendors and deployment increasingly cut across cost control, emissions management and risk governance.

The bottom line is that AI needs to be treated as infrastructure rather than novelty. It absolutely needs to be optimised, but it also needs to be governed to ensure its performance is measured across every relevant part of the business.

A more mature relationship with AI

AI is not losing importance; it’s becoming better understood. As adoption moves beyond experimentation, businesses are starting to see AI for what it really is: a powerful capability with real costs, real constraints, and real responsibilities attached.

The organisations that handle this transition well won’t be the ones chasing the latest tools, but those bringing discipline to how AI is used. They’ll focus on fewer, clearer use cases backed by evidence, and treat sustainability as a practical signal of efficiency rather than an abstract ideal.

The question for 2026 is no longer whether AI should be used: it’s how well it is being used, and at what cost. In that sense, the move from “more AI” to “smarter AI” is not a step back. It is progress.
