
Identity Is Not Enough: Why Your Current Security Stack Is Preventing You From Trusting Agentic AI

By Oren Michels, Founder & CEO of Barndoor

A recent Washington Post article rightly highlighted the risks of agentic AI creating "silent errors" in consumer applications: hallucinations in healthcare advice, mistakes in legal drafts, or booking the wrong flight. These are valid concerns. But focusing solely on consumer applications misses the far more acute version of this problem, which is happening right now inside the enterprise.

The danger is that we are attempting to manage the new digital workforce with governance infrastructure designed for humans, not agents. And unlike humans, these agents are scaling faster than our ability to supervise them. That is why we created Barndoor.

While there is debate about whether AI should be allowed to book a vacation, major global companies are quietly deploying agents that update Salesforce records, modify financial systems, and access production environments. We are rapidly approaching what I call the "100,000 Agent Problem." Consider a mid-sized enterprise with 20,000 employees. If each employee uses just five AI agents during their workday (one for scheduling, one for CRM, one for coding, and so on), that organization is suddenly managing 100,000 autonomous entities accessing internal systems.

Yet for the past year, corporate AI has been stuck in "Advisor Mode": dutifully summarizing meetings, rewriting emails, and generating slide decks. This is safe, but it isn't transformative. Summaries don't move the needle on revenue. The shift we are seeing now is toward "Action Mode," where AI stops suggesting what to do and starts actually doing it.

When you move from chat to action, the risk profile changes fundamentally. AI agents behave differently from traditional software. They are probabilistic, not deterministic. I often describe them as "enthusiastic interns". Like a new intern, an AI agent is incredibly eager to help, moves very fast, and wants to clear its task list. But also like an intern, it lacks the institutional context to understand the collateral damage of its actions.

If you ask a human employee to "clean up the customer database," they know that means fixing typos and merging duplicates. If you ask an "enthusiastic intern" agent to do the same, it might delete ten years of historical sales data because it viewed those inactive records as "clutter". It did exactly what you asked, with zero malice, and caused a catastrophe.

This probabilistic behavior exposes a fatal flaw in our current security stack. For decades, we have relied on Identity and Access Management (IAM) to keep us safe. These systems answer one question: Who are you? IAM works for humans because humans have judgment. If a Sales Director has permission to delete a deal in Salesforce, we trust them not to delete a million-dollar opportunity on a whim. But an AI agent inherits those same permissions without inheriting the judgment. If that Sales Director's agent decides to "help" by deleting a record, the IAM system sees a valid user with valid credentials making a valid call. It waves the agent through the front door.

Traditional security is necessary but not sufficient for the agentic era. We need a new layer of infrastructure that governs behavior, not just identity. We need the ability to enforce granular, conditional permissions: allowing an agent to create a new opportunity, for example, while explicitly blocking it from deleting or editing an existing one. Until we have controls that can distinguish between a safe read action and a destructive write action, we are flying blind.
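
To make that concrete, here is a minimal sketch of what an action-level policy check could look like. Every name and structure here is hypothetical and purely illustrative (this is not Barndoor's actual API); the point is that the gate sits on the verb, not the credential:

```python
from dataclasses import dataclass

# Hypothetical verb buckets. A real gateway would map every tool call
# an agent makes onto one of these categories before letting it through.
READ_ACTIONS = {"get", "list", "search"}
CREATE_ACTIONS = {"create"}
DESTRUCTIVE_ACTIONS = {"delete", "update", "merge"}

@dataclass
class AgentPolicy:
    agent_id: str
    allow_reads: bool = True
    allow_creates: bool = False
    allow_destructive: bool = False  # off by default: agents inherit identity, not judgment

def is_permitted(policy: AgentPolicy, action: str) -> bool:
    """Gate a tool call on what it does, not just on who is calling."""
    if action in READ_ACTIONS:
        return policy.allow_reads
    if action in CREATE_ACTIONS:
        return policy.allow_creates
    if action in DESTRUCTIVE_ACTIONS:
        return policy.allow_destructive
    return False  # unknown verbs are denied by default

# The Sales Director's agent may create an opportunity but never delete one.
policy = AgentPolicy(agent_id="sales-director-agent", allow_creates=True)
assert is_permitted(policy, "create")
assert not is_permitted(policy, "delete")
```

An IAM system, seeing only the Sales Director's valid credentials, would approve both calls; a behavioral layer like this one approves the create and refuses the delete.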

Beyond security, there is a massive economic barrier to the 100,000-agent reality: the "Context Window Exhaustion" crisis. The standard for connecting these agents to data is the Model Context Protocol (MCP). It is a brilliant innovation, acting like a menu that tells the AI what tools are available. But when you connect an agent to a full enterprise stack (Salesforce, Slack, Google Drive, Jira), that "menu" becomes a telephone book.
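
The "menu" is quite literal: each connected MCP server advertises its tools, and every tool's name, description, and input schema is injected into the model's context. The field names below follow the shape of an MCP tools/list response; the specific tool and the token figures are invented for illustration:

```python
# One entry from a hypothetical MCP tool catalog. Multiply this by every
# tool on every connected server and the menu becomes a telephone book.
example_tool = {
    "name": "salesforce_update_record",
    "description": "Update a field on an existing Salesforce record.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "recordId": {"type": "string"},
            "field": {"type": "string"},
            "value": {"type": "string"},
        },
        "required": ["recordId", "field", "value"],
    },
}

# Back-of-the-envelope context cost (assumed figures, for illustration only):
AVG_TOKENS_PER_TOOL = 150
TOOL_COUNT = 500
print(TOOL_COUNT * AVG_TOKENS_PER_TOOL)  # 75000 tokens spent before the agent reads your question
```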

Currently, an AI agent wastes 80-90% of its processing power (and your budget) reading the descriptions of every tool in your company before it answers a single question. It is the corporate equivalent of hiring a consultant and paying them to read the entire employee directory and every procedure manual before allowing them to answer a simple question about Q3 sales.

This context exhaustion doesn't just spike costs by 95%; it destroys accuracy. When an AI is forced to choose among 500 similar-sounding tools, it gets confused. It starts hallucinating, searching Google Drive for data that lives in Snowflake. Without an intelligent context layer to filter this noise, the economics of enterprise AI simply do not work at scale.
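
What might such a context layer look like? Here is a rough sketch under deliberately simple assumptions: it scores each tool against the request and exposes only the best matches. A production system would use embeddings or a routing model rather than the keyword overlap used here to keep the example self-contained:

```python
def relevance(query: str, description: str) -> int:
    """Crude relevance score: count of words the query and description share."""
    return len(set(query.lower().split()) & set(description.lower().split()))

def select_tools(query: str, tools: list[dict], k: int = 5) -> list[dict]:
    """Expose only the k most relevant tools instead of the full catalog."""
    return sorted(tools, key=lambda t: relevance(query, t["description"]), reverse=True)[:k]

# Hypothetical tool catalog entries, invented for illustration.
tools = [
    {"name": "snowflake_query", "description": "Run a SQL query against Snowflake sales data"},
    {"name": "gdrive_search", "description": "Search documents in Google Drive"},
    {"name": "jira_create_issue", "description": "Create a new issue in Jira"},
]

# The agent asking about Q3 sales sees the Snowflake tool, not Google Drive.
print(select_tools("summarize Q3 sales data", tools, k=1))
```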

We have seen this movie before. In the early days of the web, we had Adobe Flash. It was messy, it crashed browsers, and it had security holes, but it was utterly necessary to bridge the gap between the static web and the dynamic multimedia future.

MCP as it exists today is a transitional technology that allows us to bridge our legacy systems to this new agentic world. In its short life, it's already evolved a lot and will continue to do so. It may someday even be superseded by new and more powerful protocols. But in the meantime, it is the only game in town.

CIOs cannot afford to wait for the perfect standard. Just as employees brought iPhones to work during the Bring Your Own Device (BYOD) revolution regardless of IT policy, employees are now engaging in Bring Your Own AI (BYOAI). They are spinning up unvetted MCP servers and connecting them to corporate data because they need to get their jobs done. Blocking this is futile; it just drives the activity into the shadows.

As we look ahead to 2026, enterprises face a stark choice. They can keep their AI agents read-only: safe, neutered, and ultimately useless. Or they can embrace "write" access, unlocking the massive productivity gains of agents that can actually execute work.

To do the latter, we must stop treating governance as a brake and start treating it as a launchpad. IT departments must evolve into the HR Department for AI, responsible for onboarding, monitoring, and, when necessary, firing these "digital interns".

The real risk isn't that an AI books the wrong flight for a consumer. The real risk is that enterprises will deploy these powerful agents without the infrastructure to control and manage them, or conversely, that they will be too paralyzed by fear to deploy them at all. The technology to govern this workforce exists. It is time we started using it.
