Generative AI’s potential seems limitless, but business leaders’ patience for a return on investment is wearing thin.
Consider the scale of investment. American venture capitalists have poured a staggering $49 billion into genAI in the first half of 2025 alone, according to an EY study released in August. This figure has already exceeded the total spending for the entirety of 2024, with no signs of slowing down.
Yet, despite these astronomical investments, genAI’s promise remains largely unrealised. An MIT report published this summer showed that 95% of genAI pilots at companies are failing to drive revenue. Flawed enterprise integration, rather than model performance, is the primary reason identified for the technology failing to meet its promises of streamlining business processes, reducing costs, and accelerating product development in a meaningful way.
As the chasm between capital expenditure and return on investment continues to widen, it raises the question: are businesses betting billions of dollars on a future that may never materialise?
The Data and Memory Problem
One of the primary reasons most enterprises struggle to make AI practically useful is that their data is disorganised and fragmented. Company data is typically scattered across dozens of tools, trapped in information silos and legacy systems that AI models can’t easily access. As a result, even the most sophisticated models can only provide surface-level responses.
The widely accepted ‘scaling laws’ support this view: model performance improves predictably as the quantity and quality of data, parameters, and compute grow. Or, as the old computing adage goes, ‘garbage in, garbage out’: an AI system is only as good as the information you feed it. Businesses, particularly larger ones with decades-old databases scattered across their operations, find it hard to channel that wealth of corporate information into their LLMs. And therein lies the opportunity…
It would be remiss to overlook the role AI agents play in enterprise investment as well. Kevin Scott, Microsoft’s chief technology officer, said in May of this year that for AI agents to reach their full potential, they must be able to collaborate with agents from other firms and have more accurate memories of their interactions. That suggests data access is only part of the problem: an agent’s memory of past interactions matters just as much.
The Connectivity Solution
To overcome the obstacles of competing systems and forgetful chatbots, Microsoft has joined companies like OpenAI in backing a technology called Model Context Protocol (MCP), an open standard introduced by Google-backed Anthropic last November.
MCP provides a universal, open standard for connecting AI systems to data sources; in simple terms, it is often described as a ‘USB-C port for AI applications’. Instead of building separate, custom integrations for each business tool, companies can use a single protocol to link their AI systems to the information they need for better outputs.
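To make the ‘single protocol’ idea concrete, the sketch below imitates the shape of MCP’s JSON-RPC messages in plain Python. The real protocol runs over stdio or HTTP via official SDKs; the tool name, its handler, and the in-process `handle` function here are invented for illustration, though the `tools/list` and `tools/call` method names follow the published specification.

```python
import json

# Hypothetical in-memory "server": maps MCP-style JSON-RPC methods to handlers.
# Only the message shapes are faithful to the spec; the tool itself is made up.
TOOLS = {
    "get_invoice": {
        "description": "Fetch an invoice record by ID (illustrative only)",
        "handler": lambda args: {"id": args["invoice_id"], "amount_gbp": 120.0},
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request the way an MCP server would."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [
            {"name": name, "description": t["description"]}
            for name, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        params = request["params"]
        result = {"content": TOOLS[params["name"]]["handler"](params["arguments"])}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# A client first discovers what the server offers, then invokes a tool --
# the same two-step pattern regardless of which AI model sits on the other side.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "get_invoice",
                          "arguments": {"invoice_id": "INV-42"}}})
print(json.dumps(call["result"]))
```

The point of the pattern is that a client never needs to know how `get_invoice` is implemented; it only speaks the discovery-then-call protocol, which is what lets one integration serve many AI systems.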
MCP In Practice
Industries ranging from retail to financial services are already embedding MCP into their day-to-day operations. UK bank NatWest, for instance, reports that finance teams now use AI agents with MCP to retrieve transaction records and merchant information through simple queries, thereby reducing time spent on reporting and audits. They also mention that compliance teams can set up intelligent assistants to check regulations and customer data, making risk management much quicker and more reliable.
Walmart’s enterprise-wide MCP deployment presents an even more compelling example, as exclusively reported by the Wall Street Journal this summer. The American retail giant revamped its approach to AI agents by using MCP to establish a standardised method for agents to interact with existing services. This foundation led to its June launch of Sparky, a genAI shopping assistant that offers enhanced customer recommendations and support.
Speaking at a VentureBeat event this year, Walmart’s VP of Emerging Technology Desirée Gosby explained how they achieved success by dividing their business areas into domains, wrapping each domain with MCP, and then enabling them to orchestrate various automated agents. Her point highlights MCP’s inherent value: it enhances existing infrastructure instead of replacing it.
This distinction matters. What sets genuine AI-driven business transformation apart from simple deployment is the level of integration. Successful implementations don’t just add AI to existing processes but embed it within workflows. This requires AI systems that can access and understand all relevant data sources, which is why protocols like MCP are so essential.
ROI Implications
The most compelling business reason for an enterprise to invest in exposing its services through MCP servers today is the immediate reduction in operational costs achieved through infrastructure consolidation.
Enterprises typically maintain separate integration layers for each AI provider: one for OpenAI, another for Anthropic, and another for their internal models. This redundancy is both expensive and inefficient.
By exposing services through MCP, enterprises can build once and connect to any AI system, reducing both development costs and ongoing maintenance overhead. Early adopters report cutting their AI integration costs by more than 25%. They’re also getting to market much faster, shipping in weeks instead of months.
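The ‘build once’ claim is easiest to see at the tool-definition level. In the sketch below, one canonical, MCP-style tool schema is mechanically translated into the function-calling formats of two providers; the canonical form and the `lookup_merchant` tool are assumptions for illustration, while the output shapes follow the public OpenAI and Anthropic APIs.

```python
# One canonical tool definition (MCP-style, JSON Schema input), written once.
CANONICAL = {
    "name": "lookup_merchant",
    "description": "Return merchant details for a transaction",
    "input_schema": {
        "type": "object",
        "properties": {"transaction_id": {"type": "string"}},
        "required": ["transaction_id"],
    },
}

def to_openai(tool: dict) -> dict:
    """Translate into OpenAI's function-calling tool format."""
    return {"type": "function", "function": {
        "name": tool["name"],
        "description": tool["description"],
        "parameters": tool["input_schema"],
    }}

def to_anthropic(tool: dict) -> dict:
    """Translate into Anthropic's tool-use format."""
    return {"name": tool["name"],
            "description": tool["description"],
            "input_schema": tool["input_schema"]}

openai_tool = to_openai(CANONICAL)
anthropic_tool = to_anthropic(CANONICAL)
```

Swapping providers then means swapping a translation function, not rebuilding the integration layer, which is where the maintenance savings come from.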
More than cost savings, MCP’s real value lies in future-proofing the business. The AI landscape is changing at breakneck speed: new models emerge constantly, and being locked into a single provider is becoming a significant business risk. MCP-enabled infrastructure allows enterprises to switch between AI providers seamlessly, negotiate better terms, and adopt best-in-class models as they emerge. It helps ensure enterprises remain agile rather than vendor-dependent.
Growing Pains: Security, Technical Hurdles, and Accountability
That said, MCP is an emerging standard still experiencing growing pains. One of the main barriers to adoption is trust: how do we build confidence in a system where an AI agent might connect to dozens of independently owned MCP servers, each with different security standards and data handling practices? In a traditional system, a business can control every endpoint. In an MCP ecosystem, this control is distributed across multiple external providers.
The solution likely lies in a combination of cryptographic attestation and distributed ledger technologies. Every MCP server should be able to prove its identity, demonstrate compliance with security standards, and maintain an immutable audit trail of interactions.
Beyond trust, significant technical challenges remain. Ensuring different AI models can share context via MCP without compatibility issues presents ongoing difficulties. Current implementations exhibit bugs and incompatibilities between protocol versions. Meanwhile, sharing context across independently owned servers introduces latency and computation costs that organisations must carefully evaluate.
Furthermore, the governance model must address liability and accountability. When an AI agent chains together services from multiple providers to execute a critical business process, who’s responsible if something goes wrong? We need clear frameworks for service-level agreements that cascade across MCP connections, as well as insurance models that reflect this new distributed reality.
Without addressing these trust, technical interoperability, and governance challenges, enterprises will not adopt MCP at scale, regardless of its technical merits. Frankly, they’d be right to be cautious; the technology is only as good as the trust framework supporting it.
Open is the Way
While concerns about trust and security persist, MCP is beginning to catch a favourable tide. The protocol rides a broader wave toward open, interoperable AI systems that are already reshaping the market.
China’s relentless release of high-performance open-source models, catalysed by DeepSeek’s January thunderbolt, is forcing a reckoning with the closed, gated business models of the West. Even OpenAI, long committed to proprietary systems, recently bowed to this pressure by releasing two open-weight reasoning models.
Companies that embrace open, interoperable standards now may find themselves ahead of the curve when the dust settles. Startups and developers are well-positioned to capitalise on this opportunity to champion MCP, while larger corporations may lag as they get held back by the constraints of their legacy systems and bureaucracy.
The choice is clear: experiment (albeit carefully) with today’s emerging AI tools and chart your own route to AI-first operations, or risk falling quickly behind in an era where business clockspeeds have accelerated beyond recognition.