
Making AI Measurement Meaningful

Now that the hype is settling, 2026 is the year organisations must prove that AI will deliver ROI. Matt Fuller, Co-founder and Vice President of AI/ML Products at Starburst, explains why measurement is only meaningful if organisations embed AI into core business processes and workflows.
Over the past 18 months, many organisations have experimented with AI through pilots and isolated initiatives. While those experiments have generated excitement and much speculation, they have also created a measurement challenge. The question is no longer whether AI works, but whether it will deliver the measurable return on the significant investments companies have made in data infrastructure, skills, and systems.

Today, AI’s impact is often assessed through activity metrics – usage rates, prompts generated, or time saved – rather than whether it improves business performance. Yet, activity does not equal impact. If organisations want to measure AI’s real contribution, they must anchor it to business outcomes that define competitive position: win rate, customer retention, time-to-market, risk exposure, cost per case, or revenue growth. Therefore, measuring AI meaningfully requires a shift in thinking – from treating AI as a standalone tool to embedding it directly into how the business operates.
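The distinction between activity and impact can be made concrete with a toy calculation – an activity metric (prompt volume) beside an outcome metric (win-rate lift before versus after an AI-assisted workflow). All figures below are invented for illustration, and a real analysis would of course need to control for other factors before attributing the lift to AI.

```python
# Toy illustration: activity metric vs. outcome metric.
# All figures are invented for the example.

deals_before = {"won": 42, "lost": 58}   # quarter before AI-assisted workflow
deals_after  = {"won": 55, "lost": 45}   # quarter after

def win_rate(deals: dict) -> float:
    """Share of deals won out of all closed deals."""
    return deals["won"] / (deals["won"] + deals["lost"])

prompts_generated = 12_000               # activity: says nothing about impact
win_rate_lift = win_rate(deals_after) - win_rate(deals_before)

print(f"prompts: {prompts_generated}, win-rate lift: {win_rate_lift:.1%}")
```

The point of the sketch is simply that the second number, not the first, is the one that maps to competitive position.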

Redesigning workflows for the AI era

The main reason AI initiatives fail to scale – and, consequently, to prove business value – is that they are introduced into existing processes. AI tends to be added as a “bolt-on” step in a legacy workflow rather than being part of a redesigned process.

To unlock value, organisations must therefore rethink and redesign end-to-end workflows around AI rather than inserting it after the fact. I believe that the real opportunity lies in improving decision-making across the whole process – from data architecture to insight generation to operational execution.

When AI is embedded in business operations from the start, its impact becomes measurable through outcomes such as faster product launches, improved forecasting accuracy, reduced operational losses, or stronger customer retention. However, redesigning workflows in this way quickly exposes another challenge: ensuring reliable access to trusted enterprise data.

Building the data foundation for scalable AI

As I’ve outlined, AI pilots often rely on fragmented, project-based datasets whose data sit outside the organisation’s data architecture. While those datasets may be useful for experimentation, they rarely provide the reliability or governance required for enterprise-wide deployment, compromising scalability.

To embed AI into core business processes, organisations must therefore adopt a ‘data product’ mindset. This means creating curated, domain-owned datasets with clear ownership, quality standards, and built-in governance. The data products then become reusable assets capable of supporting analytics, operational systems, and AI models across the organisation.
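One minimal way to picture a data product is as a dataset paired with an explicit contract: a named owner, quality checks that gate publication, and access rules enforced at the product boundary. The sketch below is purely illustrative – the class, field names, and checks are assumptions for the example, not any specific platform’s API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    """Hypothetical 'data product' contract: dataset + ownership + governance."""
    name: str
    domain: str
    owner: str                       # accountable team, not 'whoever made the extract'
    allowed_roles: set               # built-in access governance
    quality_checks: list = field(default_factory=list)  # Callable[[list], bool]

    def publish(self, rows: list) -> list:
        # The product is only published if every quality check passes.
        for check in self.quality_checks:
            if not check(rows):
                raise ValueError(f"{self.name}: quality check failed")
        return rows

    def read(self, role: str, rows: list) -> list:
        # Access control lives at the product boundary, not in each consumer.
        if role not in self.allowed_roles:
            raise PermissionError(f"role '{role}' may not read {self.name}")
        return rows

no_null_ids = lambda rows: all(r.get("customer_id") is not None for r in rows)

churn_features = DataProduct(
    name="churn_features",
    domain="customer",
    owner="crm-analytics-team",
    allowed_roles={"analyst", "ml-engineer"},
    quality_checks=[no_null_ids],
)

rows = [{"customer_id": 1, "tenure_months": 18}]
published = churn_features.publish(rows)
print(churn_features.read("analyst", published))
```

Because the contract travels with the dataset, the same product can serve analytics, operational systems, and AI models without each consumer re-implementing quality or access logic.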

It is only when AI operates on trusted data products, instead of one-off extracts, that it can be reliably integrated into operational workflows, scaled across the enterprise and measured meaningfully. At that point, AI stops being an experimental capability and becomes a business asset.

But getting to that point requires another shift. Data must be treated as a governed, enterprise-wide strategic asset rather than a by-product of IT systems or a collection of disconnected silos. The shift here isn’t about better AI models; it’s about building a unified data foundation that enables AI to drive durable competitive advantage.

Scaling up security in the AI era

As organisations start embedding AI into decision-making processes, data governance becomes critical. Scaling AI without robust upstream governance structures introduces significant legal, operational, and reputational risks. Before AI influences business-critical decisions, it is imperative that organisations can ensure clear data ownership, enforceable access controls, lineage visibility, and compliance with regional regulatory requirements.

Importantly, governance cannot simply exist as policy documentation. It must be technically enforced across the entire data estate so that organisations can confidently scale AI while maintaining control over what data is accessed and how it is used. Without that level of governance, AI risks amplifying existing data fragmentation and compliance challenges rather than delivering enterprise value.
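“Technically enforced” can be as simple as routing every data access through a single code-level policy check, so that rules are evaluated on each request rather than documented in a PDF. The sketch below is a deliberately minimal, deny-by-default illustration; the rule fields and dataset names are assumptions for the example, not a real product’s policy language.

```python
# Illustrative policy-as-code sketch: access rules evaluated on every request.
# Deny by default; allow only when an explicit rule matches.

RULES = [
    {"role": "analyst",  "dataset": "sales", "regions": {"EU", "US"}, "action": "read"},
    {"role": "ml-model", "dataset": "sales", "regions": {"EU"},       "action": "read"},
]

def is_allowed(role: str, dataset: str, region: str, action: str) -> bool:
    """True only if an explicit rule permits this exact access."""
    return any(
        r["role"] == role and r["dataset"] == dataset
        and region in r["regions"] and r["action"] == action
        for r in RULES
    )

def query(role: str, dataset: str, region: str) -> str:
    # Every access passes through one enforcement point, which also gives
    # a natural place to log who accessed what (lineage and audit).
    if not is_allowed(role, dataset, region, "read"):
        raise PermissionError(f"{role} denied read on {dataset} in {region}")
    return f"rows from {dataset} ({region})"

print(query("analyst", "sales", "EU"))
```

Production systems would use a dedicated policy engine rather than a hard-coded list, but the principle is the same: the policy is executable and applied uniformly, not advisory.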

People first, every time

AI adoption also raises the important question of the relationship between human expertise and machine intelligence. Much of an organisation’s competitive advantage exists not in data, but in tribal knowledge held by humans – judgment, context and business understanding built over years of activity.

AI systems are only as strong as the data and the context they are given. While organisations should progressively codify that institutional knowledge into governed data products, metadata, and documented business rules, we are still far from a point where AI can fully replicate that depth of human expertise.

Today, AI is enhancing human capabilities rather than replacing them – surfacing insights, accelerating analysis, and providing decision support – while experts remain in the loop to validate, refine, apply judgment, and ensure decisions reflect the broader business context.

As institutional knowledge becomes more structured and accessible, AI’s role will naturally expand. But in the near term, as I’ve outlined, the most successful organisations will be those that use AI to augment their workforce rather than attempt to automate expertise prematurely.

From experimentation to enterprise value

Organisations are now moving beyond AI experimentation, shifting the focus from novelty to measurable business impact. Achieving true AI business value requires embedding AI into how the business actually operates – supported by trusted data foundations, strong governance, and workflows redesigned around better decision-making. It is my view, then, that AI should ultimately be judged not by how often it’s used, but by whether it delivers meaningful business results.

