
We are entering the era of agentic AI, a class of artificial intelligence that goes beyond passive prompting and becomes an active operator. These agents can initiate actions, complete tasks, make decisions, and even coordinate with other agents. They’re no longer just responding to us; they’re working for us.
But there’s a problem: we haven’t built the security scaffolding to handle this level of autonomy. Many agents operate without verifiable identity, without meaningful access control, and without clear limitations on what they’re allowed to do. That’s not innovation, that’s a security crisis.
If we fail to secure agentic AI systems now, we’ll face a wave of unintended consequences, some accidental, others malicious. Fortunately, we don’t have to start from scratch because we’ve solved many of these problems through standards like OAuth 2.1, OpenID Connect, and policy-based access controls. We need to apply these proven methods to the new world of intelligent agents.
And we need to do it fast.
From Tools to Actors: The Rise of Agentic AI
Agentic AI is different from the chatbots of a few years ago. Today’s agents can:
- Make authenticated API calls
- Access internal enterprise systems
- Parse and summarize legal contracts
- Schedule meetings and send messages
- Conduct financial transactions
- Delegate subtasks to other agents
This means agents are no longer mere tools; they are actors. In some cases, they represent a specific user. In others, they act on behalf of an organization or coordinate with a network of agents to fulfill a goal. They’re persistent, autonomous, and capable of chaining decisions together without a human in the loop.
That autonomy is powerful, but deeply risky. Because most of today’s agent frameworks do not require robust authentication, permissions, or runtime policy enforcement, many agents operate with far too much trust and too little verification.
What Can Go Wrong? Everything.
Imagine an agent pulling a customer’s financial history to assist with a loan prequalification and then forwarding it to a third-party optimization agent trained by an unknown developer. Or consider an enterprise agent with wide API access that’s misled by a prompt and deletes customer records or triggers unauthorized payments.
These aren’t theoretical risks. They’re plausible outcomes of treating AI agents as simple tools rather than trusted actors with controlled capabilities.
The Invisible Risk of Agent-to-Agent Exploits
One of the most overlooked attack surfaces in AI today is agent-to-agent communication. In many multi-agent systems, agents collaborate to break down and complete tasks. But there is rarely any meaningful authentication or authorization in these interactions.
This creates several new threats:
- Impersonation: A malicious agent pretends to be another, tricking collaborators into sharing sensitive data or executing privileged actions.
- Token leakage: Agents may inadvertently share API keys or credentials during cooperative exchanges.
- Context flooding: One agent overwhelms another with irrelevant prompts or payloads designed to bypass safety filters or hijack context windows.
- Prompt injection: Malicious users may manipulate LLM system prompts to extract sensitive information, or force unintended behaviors by embedding harmful instructions within seemingly normal input.
Without mutual authentication and policy enforcement, agent-to-agent interactions become the weakest link in the chain. In many systems today, any agent can talk to any other agent, with no security mediation in between. That’s a recipe for exploitation.
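As a rough sketch of what mutual verification could look like in practice, the Python snippet below (using the PyJWT library) validates a peer agent’s token before acting on its request. The issuer URL, agent identifiers, and scope name are hypothetical assumptions; the example presumes both agents obtain their tokens from a shared, trusted authorization server.

```python
# Minimal sketch: verify a peer agent's identity before handling its request.
# The issuer, audience, and scope names below are hypothetical placeholders.
import jwt  # PyJWT

JWKS_URL = "https://auth.example.com/.well-known/jwks.json"  # issuer's signing keys
EXPECTED_ISSUER = "https://auth.example.com/"
MY_AGENT_ID = "agent://billing-summarizer"  # identity of the receiving agent

jwks_client = jwt.PyJWKClient(JWKS_URL)

def verify_peer(token: str) -> dict:
    """Reject any agent-to-agent call whose sender cannot prove who it is."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=MY_AGENT_ID,    # token must have been minted for *this* agent
        issuer=EXPECTED_ISSUER,  # and by an issuer we actually trust
    )

def handle_peer_request(token: str, payload: dict) -> None:
    claims = verify_peer(token)  # raises jwt.PyJWTError on any failure
    if "tasks:delegate" not in claims.get("scope", "").split():
        raise PermissionError("peer agent lacks the scope for this action")
    # Only now is the payload processed, attributed to claims["sub"].
```

The same check should run in both directions: the calling agent verifies the callee’s identity (for example via TLS server authentication or a signed response) before sharing any sensitive context.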
The Solution Already Exists: Standards-Based Authorization
The good news is we don’t need to invent new security models for this challenge, as we already have them.
Over the last decade, the internet has converged on OAuth 2.1, OpenID Connect, and JSON Web Tokens (JWTs) to manage identity, delegation, and access control. These protocols let users grant specific applications limited access to their data without sharing passwords. They support scoped access, bounded token lifetimes, and revocation, and they enable dynamic permission enforcement.
What we need now is to extend these standards to agentic contexts. More specifically:
- Agents should authenticate using issued tokens, not static secrets.
- Every token should be bound to a scope, like “read-only customer data” or “schedule meetings only.”
- Permissions should be dynamic and revocable.
- All agent actions should be auditable, with logs that trace who did what, when, and why.
- Agent-to-agent calls should require mutual identity verification and policy validation before any exchange.
These aren’t new ideas, they’re best practices. We just need to bring them into the world of AI agents.
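To make this concrete, here is a minimal sketch of an agent obtaining a short-lived, narrowly scoped token via the OAuth 2.1 client_credentials grant. The token endpoint and scope strings are hypothetical; any compliant authorization server exposes an equivalent flow, and in production a stronger client authentication method (such as private_key_jwt or mutual TLS) is preferable to a shared secret.

```python
# Minimal sketch: fetch a short-lived, narrowly scoped access token for an agent.
# The endpoint and scopes are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"

def fetch_agent_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            # Grant only what the current task needs, nothing more.
            "scope": "customers:read meetings:schedule",
        },
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # The token expires after body["expires_in"] seconds and can be revoked
    # centrally, so the agent never holds long-lived, unbounded credentials.
    return body["access_token"]
```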
Emerging Standards: MCP and A2A
While the agentic ecosystem is still taking shape, and often feels like the Wild West, two emerging standards are beginning to bring order to the chaos: the Model Context Protocol (MCP) and Agent2Agent (A2A). These specifications aim to provide structure and guidance for how AI agents interact with external tools and with each other.
MCP, originally proposed by Anthropic, focuses on enabling AI agents to interface with third-party services. It defines a high-level protocol, including transport, authorization, and interaction patterns, somewhat akin to how REST API standards evolved. MCP leverages OAuth 2.1 as its foundational authorization mechanism, offering a familiar and secure framework. That said, MCP is still an early-stage spec and doesn’t yet cover all the operational and security edge cases you might expect from a mature standard.
A2A, introduced by Google, tackles a complementary challenge: facilitating communication between agents themselves. It also builds on OAuth 2.1, but goes further in detailing how authorization should work in agentic systems. A2A feels slightly more fleshed out than MCP in terms of agent-level authorization mechanics, but like MCP, it’s still under active development and not yet production-hardened.
Together, MCP and A2A represent promising steps toward standardizing how AI agents interact securely and meaningfully in broader ecosystems. But like any evolving tech, there’s still work to be done before they offer full end-to-end solutions.
What Authorization in Agentic AI Should Look Like
A secure agentic system must include three pillars:
- Identity: Every agent must have a verifiable, cryptographically bound identity. No anonymous agents. No hardcoded keys. Identity issuance and trust validation should be handled by the service the agent is connecting to.
- Authorization: Agents must only access what they are explicitly permitted to, based on dynamic policies. These permissions should be enforced at the moment of action, not just at configuration time.
- Auditability: Every action taken by an agent should be logged, timestamped, and associated with a verifiable identity. This ensures traceability, accountability, and compliance.
These principles turn security from a best-effort practice into a programmable, enforceable layer, one that can scale with the complexity of agentic systems.
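To illustrate, here is a minimal Python sketch of all three pillars enforced at the moment an agent acts; the claim names and the plain log used as an audit sink are illustrative assumptions, not part of any standard.

```python
# Minimal sketch: identity, authorization, and auditability checked per action.
# Claim names and the audit sink are illustrative assumptions.
import json
import logging
import time

audit_log = logging.getLogger("agent.audit")

def authorize_and_audit(claims: dict, action: str, resource: str) -> None:
    # Identity: the caller must present verified claims (e.g. from a validated JWT).
    agent_id = claims.get("sub")
    if not agent_id:
        raise PermissionError("unidentified agent")

    # Authorization: the action must be covered by an explicitly granted scope.
    granted = set(claims.get("scope", "").split())
    if action not in granted:
        raise PermissionError(f"{agent_id} is not permitted to perform {action}")

    # Auditability: record who did what, to what, and when.
    audit_log.info(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "resource": resource,
    }))
```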
Enterprise Readiness Requires Agent Control
Enterprises preparing to deploy agentic systems should treat agents as first-class actors in their infrastructure, not plugins or side tools. This requires:
- Zero-trust by default: No agent should receive implicit trust, even inside internal environments. Every request must be authenticated and authorized.
- Runtime policy enforcement: Don’t rely on build-time scopes or static roles. Policies should evaluate context: what’s being done, when, by whom, and under what conditions.
- Short-lived, revocable tokens: Agents should fetch tokens at runtime (for example, through an MCP server), scoped only to the task they’re assigned. When the task ends, so does the access.
- No secrets in agents: Embedding secrets in agent code is a critical anti-pattern. Instead, use the agent’s identity to request access dynamically through the MCP server.
- Human-in-the-loop safeguards: For high-impact actions (financial transfers, customer deletions), require human verification or escalation paths, even in highly autonomous flows.
- Policy-as-code: Use modern frameworks (e.g., Rego) to define access rules as reusable, testable code, not scattered configuration files. This makes enforcement consistent and auditable; a minimal sketch follows below.
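As a rough illustration of runtime, policy-as-code enforcement, the sketch below asks a locally running Open Policy Agent (OPA) server to evaluate a request at the moment of action. It assumes a Rego policy has already been loaded under the hypothetical package agents.authz with an allow rule; the agent, action, and resource names are placeholders.

```python
# Minimal sketch: evaluate a context-aware policy in OPA at the moment of action.
# Assumes OPA runs locally with a Rego policy defining agents.authz.allow.
import requests

OPA_URL = "http://localhost:8181/v1/data/agents/authz/allow"

def is_allowed(agent_id: str, action: str, resource: str, context: dict) -> bool:
    resp = requests.post(
        OPA_URL,
        json={"input": {
            "agent": agent_id,
            "action": action,
            "resource": resource,
            "context": context,  # e.g. time of day, task id, data sensitivity
        }},
        timeout=5,
    )
    resp.raise_for_status()
    # OPA returns {"result": true} when the rule allows; treat anything else as deny.
    return resp.json().get("result", False) is True

# Evaluated per request, not baked in at configuration time.
if not is_allowed("agent://loan-prequal", "customers:read", "crm/records/42",
                  {"purpose": "prequalification"}):
    raise PermissionError("policy denied the request")
```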
Trust Isn’t a Feeling, It’s an Architecture
There’s a growing conversation around “trustworthy AI.” But trust isn’t about intuition or brand, it’s about architecture.
You can’t trust an agent unless you can:
- Prove who it is
- Control what it can do
- Monitor what it did
- Revoke access at any time
Anything less isn’t trust, it’s hope. And hope isn’t a security strategy.
The Future Will Be Agentic, And It Must Be Secure
We are only at the beginning of the agentic revolution. These systems will be integrated into every part of our digital lives, from enterprise automation to customer service, supply chain management to personal productivity. Their reach will grow. Their autonomy will deepen.
But without identity, authorization, and access control, we are handing the keys to critical systems over to unverified actors. That’s not innovation, it’s negligence.
We have the tools: open standards, policy engines, and identity protocols.
What we need now is the discipline and the urgency to use them. Let’s not wait for a breach to act.
To learn more: https://ory.sh