
Building AI Agents Where They Belong: Inside the Data

By Oren Eini, CEO, RavenDB

AI agents are everywhere in conversation, yet still absent in practice. Gartner even named them in its 2025 Hype Cycle report on emerging technologies, but most organizations are still stuck with scripted chatbots, brittle integrations, and projects that fail to launch. The technology is not at fault. LLMs are capable and improving rapidly. The real barrier lies in how these agents are built, deployed, and connected to the systems they are meant to operate in and enhance.

As consumers, many of us already use generative AI to plan vacations, design workouts, or summarize research. Yet inside most companies, even a simple support request still bounces between static menus and canned replies. The gap between what AI can do and what enterprises deliver is widening, and the root of the issue is where the agents live.

What Exactly Is an AI Agent?

There are many ways to define AI agents, but here is one way to look at them. They are not the kind that carry badges and knock on doors, but rather a generation of software built on top of LLMs such as ChatGPT or Llama. These agents go beyond chat interfaces that return answers; they are layers of software able to make decisions and execute actions autonomously.

They interact with humans through natural language, but their real power is in what happens behind the scenes: pulling data, coordinating across systems, or triggering workflows. In e-commerce applications, for example, an agent might navigate a website on behalf of a customer, while on the back end, others can process requests, enforce business rules, or connect with operational systems. Connected to enterprise environments, this ability goes far beyond scripted flows, enabling context-aware automation and decision making.

The Reason So Many Projects Struggle

A recent, widely discussed MIT study found that 95% of enterprise AI initiatives struggle before achieving meaningful business impact. The study emphasized that the challenge is rarely the model itself. Instead, the real difficulty lies in embedding models into workflows, ensuring secure data access, and respecting business logic. That’s where agents come into play: organizations need to create an agent layer that bridges models and operational systems, giving shape to how AI can actually operate inside the enterprise.

Making Agents Real Is Complex

But setting up such agents is where complexity creeps in. Many developers experience this directly: a maze of APIs, pipelines, and glue code just to make an AI agent useful in production. Weeks or months vanish in integration work, only for the end result to feel fragile and incomplete. Across industries, AI prototypes may impress in demos, but production systems demand something more solid.

Lessons From the Junior Developer

One way to understand this problem is through the analogy of a junior developer. When a new developer joins a team, they don’t need to know everything to be productive. Instead, they rely on the frameworks, validation layers, and safety checks already built into the system. They can experiment, make suggestions, and even automate small tasks, but only within the guardrails set by more experienced engineers.

AI agents should operate the same way. Instead of being handed raw access to an enterprise’s data or siloed into a disconnected interface, they need scoped, structured contexts where they can act productively without compromising safety. Just as a junior developer grows in capability by working inside a well-architected system, AI agents become more reliable when embedded within the right environment.
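The guardrail idea above can be sketched in a few lines of Python. This is a minimal illustration, not any product's API: the agent may only propose actions, and a validation layer decides what actually runs. The names `ALLOWED_ACTIONS` and `execute` are hypothetical.

```python
# Sketch of the "junior developer" guardrail pattern: the agent proposes
# actions; the system executes only what is on an explicit allow list.

ALLOWED_ACTIONS = {
    # Each permitted action maps to a handler owned by the host system.
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "summarize_ticket": lambda text: text[:100],
}

def execute(proposal: dict):
    """Run an agent-proposed action only if it is explicitly permitted."""
    name = proposal.get("action")
    handler = ALLOWED_ACTIONS.get(name)
    if handler is None:
        # Out-of-scope request: refuse rather than improvise.
        return {"error": f"action '{name}' is not permitted"}
    return handler(*proposal.get("args", []))

print(execute({"action": "lookup_order", "args": ["A-1001"]}))
print(execute({"action": "drop_table", "args": ["users"]}))
```

The agent can be creative about *which* allowed action to pick and with what arguments, but it cannot step outside the scope the engineers defined, mirroring how a junior developer works inside an established codebase.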

When the Model Lets You Down

It’s tempting to think that scaling models will solve these challenges, but bigger does not always mean better. LLMs are prone to mistakes, gaps in reasoning, or hallucinations. In consumer settings, those failures are somewhat tolerable because users can double-check or rephrase a query. In enterprise settings, they can be catastrophic.

That’s why fallback mechanisms matter. When an AI agent doesn’t know the answer, it should be able to defer to existing systems, such as databases, search engines, or validation logic, rather than fabricating one. Just as we still "Google it" when ChatGPT or Claude gets something wrong, enterprise AI must integrate tightly with reliable sources of truth.
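A minimal sketch of that fallback order, under the assumption of a simple keyword knowledge base: the agent consults the authoritative store first, and only then the model, escalating instead of fabricating when the model admits uncertainty. `call_llm` is a stand-in stub, not a real model API.

```python
# Fallback sketch: trusted source of record first, model second,
# escalation instead of fabrication last.

KNOWLEDGE_BASE = {
    # Authoritative answers owned by the business, not the model.
    "return window": "30 days from delivery",
}

def call_llm(question: str) -> str:
    # Stub for a real model call, which may answer, hallucinate,
    # or admit it does not know.
    return "I don't know"

def answer(question: str) -> str:
    # 1. Check the system of record before asking the model.
    for key, value in KNOWLEDGE_BASE.items():
        if key in question.lower():
            return value
    # 2. Fall back to the model, but never invent an answer on its behalf.
    reply = call_llm(question)
    if "don't know" in reply.lower():
        return "Escalating to a human agent."
    return reply
```

For example, `answer("What is the return window?")` comes straight from the knowledge base, while an unanswerable question is escalated rather than guessed at.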

Keeping Agents Close to the Data

The common thread across all these lessons is proximity. Agents need to live close to the data they use and the logic they must respect. Instead of shuttling information across networks or building endless connectors, the more powerful approach is to bring the agent inside the system itself.

When agents live inside the database, three things change in practice. Queries are explicitly defined and constrained, so the model knows exactly which tools it can use, what data it can access, and how to apply them, which helps the model understand requests with better accuracy. Privacy is improved because the data remains within your domain rather than being copied across networks. And developers can mix and match different models for different jobs without having to rebuild complex integration paths each time.

By embedding agents directly into the data layer, developers can define strict operational scopes, enforce authorization policies, and connect seamlessly to existing business processes. The result is not a generic chatbot but a true digital colleague capable of acting on behalf of users within well-defined boundaries, learning from context, and avoiding costly mistakes.
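The scoped-query and authorization ideas can be sketched as follows. This is an illustration of the pattern using SQLite, not any specific database's agent feature; the query registry, role model, and function names are all hypothetical.

```python
# Sketch: an agent restricted to named, parameterized queries, with a
# role check enforced next to the data itself.
import sqlite3

DEFINED_QUERIES = {
    # name -> (parameterized SQL, role allowed to run it)
    "open_orders": (
        "SELECT id, status FROM orders WHERE customer = ? AND status = 'open'",
        "support",
    ),
}

def run_agent_query(conn, role: str, name: str, params: tuple):
    """Execute an agent request only via a predefined, authorized query."""
    entry = DEFINED_QUERIES.get(name)
    if entry is None:
        raise PermissionError(f"query '{name}' is not defined for agents")
    sql, required_role = entry
    if role != required_role:
        raise PermissionError(f"role '{role}' may not run '{name}'")
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'alice', 'open'), (2, 'alice', 'closed')")
print(run_agent_query(conn, "support", "open_orders", ("alice",)))
```

Because the agent never composes raw SQL, the blast radius of a bad model output is limited to queries the developers already reviewed, which is one concrete reading of "strict operational scopes."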

From Hype to Practice

The real story of AI agents is less about raw model power and more about placement. Put them on the outside, and you get brittle glue code. Place them at the heart of your data, and suddenly the messy work of integration starts to feel natural. That shift doesn’t promise perfection, but it reframes what’s possible. What lingers is not a conclusion but a curiosity: maybe the future of enterprise AI won’t be decided by bigger models, but by the quiet choices of where agents are allowed to take root.

