Why AI needs a common language to work across enterprise systems

By Mark Dando, General Manager, EMEA North, SUSE

Enterprises are currently engaged in an AI arms race, adopting large language models (LLMs) at scale in an effort to deliver operational and competitive advantages as quickly as possible. A big part of the challenge, however, is that most models can’t fully interact with the systems that run day-to-day business activities.

For instance, AI tools often operate in isolation, with different tools used for different requirements and some tied to specific LLMs that limit flexibility. Enterprise environments compound the problem: they are typically fragmented across cloud, on-premises, legacy and edge platforms.

Without access to live operational data, models are limited to advisory roles rather than practical support. This is far from ideal: organizations want AI to help with everything from troubleshooting and optimization to automation, yet current architectures leave models disconnected from real-time system context.

The Model Context Protocol 

To close the gap, enterprises need secure, extensible ways to connect AI models to external tools and data sources.

Enter the Model Context Protocol (MCP), a valuable standard for connecting fragmented data, systems, tools and workflows into a unified context for AI. Designed for interoperability, it can work across diverse architectures without relying on any single platform. In doing so, it serves as a bridge between LLMs and the external tools, workflows and systems that drive business activity and store operational data, allowing models to retrieve context from live sources and act on real-time information rather than static inputs. 
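Under the hood, MCP messages are JSON-RPC 2.0, exchanged between the model’s host application and the servers that wrap each tool or data source. As a rough sketch of the pattern, the snippet below shows the shape of a standard tools/call request and the response a server might return; the tool name get_service_status and its arguments are hypothetical, chosen purely for illustration.

```python
import json

# Shape of an MCP tool invocation as JSON-RPC 2.0 messages.
# "tools/call" is a standard MCP method; the tool name and
# arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_service_status",          # hypothetical server-side tool
        "arguments": {"service": "checkout-api"},
    },
}

# The server replies with content the model can reason over directly.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "checkout-api: degraded (3/5 replicas ready)"}
        ],
        "isError": False,
    },
}

print(json.dumps(request, indent=2))
```

Because every server speaks this same wire format, a host can treat a CRM, a ticketing system and a Kubernetes cluster as interchangeable sources of context.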

The architecture follows a client/server model in which any tool can expose capabilities via an MCP server, and LLM applications, acting as hosts, consume those capabilities with controlled permissions. Crucially, this removes the need for bespoke integrations and avoids vendor lock-in by providing a common standard for model-system interaction. By giving models access to tools and their real operational context, MCP lays the groundwork for AI tools to become more embedded in business processes.
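To make the server side concrete, here is a minimal sketch of an MCP server exposing a single capability, assuming the official MCP Python SDK (the mcp package) and its FastMCP helper; the tool, its name and the log path are all hypothetical.

```python
# Minimal MCP server sketch, assuming the official MCP Python SDK's
# FastMCP helper. The tool below is hypothetical; any internal system
# could be wrapped the same way.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ops-tools")  # server name advertised to connecting hosts

@mcp.tool()
def tail_error_log(lines: int = 20) -> str:
    """Return the last N lines of the application error log."""
    with open("/var/log/app/error.log") as f:  # hypothetical log location
        return "".join(f.readlines()[-lines:])

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; hosts connect and call the tool
```

The host never needs to know how the log is read; it only sees a declared capability that it may or may not be permitted to call.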

With this foundation in place, the next step is to understand what MCP actually enables when models can operate in a live operational context. 

MCP makes it possible for models to pull specific, real-time data from operational systems (service status, error logs, configuration details, workflow results and more) and to interact directly with infrastructure tools. This gives LLMs the same visibility and control that an operator would typically gather manually, allowing them to identify issues and, ultimately, resolve them more quickly. It marks a shift from static, advisory outputs to models that participate directly in operational decision-making.
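On the consuming side, a host connects to a server, discovers its tools and calls them on the model’s behalf. The sketch below assumes the same MCP Python SDK and the hypothetical server from the previous example, saved as ops_server.py.

```python
import asyncio

# Client-side sketch, assuming the official MCP Python SDK. The server
# script name and tool are the hypothetical ones from the sketch above.
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn the hypothetical server as a subprocess over stdio.
    params = StdioServerParameters(command="python", args=["ops_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()                    # MCP handshake
            tools = await session.list_tools()            # discover capabilities
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("tail_error_log", {"lines": 20})
            print(result.content)                         # live data for the model

asyncio.run(main())
```

In practice, the host feeds the returned content into the model’s context, so the model reasons over what the system looks like right now.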

With MCP, reasoning improves because models work from the current system state rather than outdated or inferred information. For enterprises, this supports faster problem identification and enables automation guided by real operational insight.

The importance of open standards  

Clearly, MCP is not the only way to tackle these challenges, but enterprises would be ill-advised to rely on proprietary or one-off integrations if they want AI to operate consistently across diverse systems and environments.  

Instead, open standards like MCP offer a stable way to connect different models with tools and services without being tied to a specific platform or vendor ecosystem. Using a standardized protocol also supports transparent governance, making it easier to define what models can access and what actions they are permitted to take. 
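In practice, that governance can take the form of a host-side policy gate, checked before any tool call is forwarded to a server. The sketch below is one possible pattern, not something defined by MCP itself; the agent names and tool names are hypothetical.

```python
# One possible governance pattern (not defined by MCP itself): a host-side
# allowlist mapping each agent to the tools it may invoke. All names here
# are hypothetical.
ALLOWED_TOOLS = {
    "support-assistant": {"get_service_status", "tail_error_log"},   # read-only
    "ops-automation":    {"get_service_status", "restart_service"},  # may act
}

def is_permitted(agent: str, tool: str) -> bool:
    """Return True if the given agent may call the given tool."""
    return tool in ALLOWED_TOOLS.get(agent, set())

# The host checks the gate before forwarding a tools/call request.
assert is_permitted("support-assistant", "tail_error_log")
assert not is_permitted("support-assistant", "restart_service")
```

Because every tool call flows through the same protocol, a single checkpoint like this covers all of them, which is far harder to achieve with bespoke integrations.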

What’s more, because MCP can sit on top of existing systems, users don’t need to replace tooling or re-architect infrastructure to gain AI-driven operational benefits. This approach also encourages broader ecosystem participation, with different tools exposing MCP capabilities in a consistent way that models can consume. 

Looking ahead, as adoption grows, MCP provides a strong foundation for AI-assisted operations to develop more quickly, supported by shared patterns and interoperable tooling. The long-term value lies in constructing AI-native operations that remain flexible as models evolve, tools improve and automation becomes more advanced.  

For those organizations looking to future-proof their AI strategies (which should be all of them), applying open, extensible standards offers an effective route to reducing integration overheads and keeping control over how and where models are deployed.
