
The New Cyber Battlefield
Cybersecurity moves at machine speed. Attackers are already leveraging AI to craft convincing spear-phishing campaigns, generate deepfakes that can bypass biometric systems, and probe defenses with adaptive strategies. At the same time, enterprises are embedding AI agents into their own operations—autonomous actors that behave more like employees than scripts.
Both external adversarial AI and internal AI identities matter, but the more destabilizing challenge lies within the enterprise. The very agents that were introduced to optimize workflows and accelerate productivity can undermine governance, accountability, and traditional security assumptions. The new frontier is not simply fending off malicious outsiders; it is maintaining authority over the autonomous actors inside.
The Rise of AI Agents as Internal Actors
Agents built on large language models have shifted from passive automation to active reasoning. What once executed simple, deterministic scripts now prioritizes tasks, delegates to other systems, and adapts to new conditions. These agents are not machine identities; they are autonomous identities.
That autonomy transforms the risk equation. Imagine a provisioning agent designed to streamline access requests. At first, it follows approvals. Over time, it begins detecting patterns and, believing it is being helpful, grants access proactively. What started as an assistant has become a decision-maker. If this behavior goes unnoticed, the organization has ceded control without realizing it.
The threat is not always an intruder breaching the perimeter but authority drifting from within. What makes this risk so insidious is that it corrodes the very principle of identity security—that every action is accountable to a responsible party. Once that breaks down, the enterprise is left with systems executing decisions that no one approved and no one can explain.
Why Traditional Security Models Break Down
The security architecture most enterprises rely on was not built for this world. Identity systems assume every account maps to either a person or a static machine. Logs capture transactions but not intent. Anomaly detection depends on stable baselines of “normal” activity, yet an adaptive agent constantly shifts its own patterns. Even ownership, the bedrock of accountability, is often absent. When an autonomous agent causes harm, the question of who is responsible often has no answer.
The result is that trusted security tools—such as IAM, PAM, SIEM, and endpoint protection—struggle to provide meaningful control. They were designed for predictable actors and fail in the presence of autonomous ones. Security collapses not because the tools malfunction but because they were built on assumptions that no longer hold.
Governing AI Identities
The answer is not more tooling but stronger governance. Autonomous identities require oversight throughout their lifecycle, from discovery to decommissioning. That begins with visibility. Most organizations underestimate the number of agents already operating within their environments. They are embedded in SaaS platforms, running in CI/CD pipelines, or quietly introduced by developers experimenting with AI features. Shadow agents proliferate quickly, and until they are revealed, no amount of security can contain them.
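As a starting point, discovery can be as simple as sweeping existing account inventories for agent-like identities that have no registered owner. The sketch below is a minimal heuristic, assuming a generic inventory of account records and illustrative name markers rather than any particular IAM product:

```python
# Heuristic sweep of an account inventory for likely shadow agents.
# Inventory shape and name markers are illustrative assumptions.
AGENT_MARKERS = ("agent", "bot", "llm", "copilot", "openai", "anthropic")

def find_shadow_agents(accounts: list[dict]) -> list[dict]:
    """Flag accounts that look like AI agents but have no registered owner."""
    suspects = []
    for acct in accounts:
        name = acct.get("name", "").lower()
        looks_like_agent = any(marker in name for marker in AGENT_MARKERS)
        if looks_like_agent and not acct.get("owner"):
            suspects.append(acct)
    return suspects

# Example: an unowned CI/CD service account gets flagged for review.
inventory = [
    {"name": "deploy-llm-agent", "owner": None},
    {"name": "alice.smith", "owner": "alice.smith"},
]
print(find_shadow_agents(inventory))  # flags "deploy-llm-agent"
```

A name-based sweep will miss agents hiding behind innocuous labels, but it reliably surfaces the easy cases and forces the harder question: who owns everything that remains?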
Once visible, agents must be tethered to accountable owners. A named human being should be responsible for each identity’s purpose, behavior, and escalation when problems arise. Ownership is not metadata; it is the foundation of accountability.
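One way to make ownership structural rather than optional is to refuse to register any agent identity that lacks a named, reachable owner. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Registry record tying an autonomous agent to an accountable human."""
    agent_id: str
    purpose: str             # why the agent exists, in plain language
    owner_email: str         # the named human accountable for its behavior
    escalation_contact: str  # who is paged when the agent misbehaves
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def register_agent(registry: dict, record: AgentIdentity) -> None:
    """Refuse to register any agent identity without an accountable owner."""
    if not record.owner_email or not record.escalation_contact:
        raise ValueError(f"agent {record.agent_id} lacks an accountable owner")
    registry[record.agent_id] = record
```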
Governance must then extend to interpretation. An agent’s decisions cannot remain a black box. Actions must carry context, whether in the form of reasoning traces, inputs, or confidence levels. Only with explainability can organizations understand not only what happened but also why it happened, and whether the rationale aligns with policy. That explainability cannot be reserved for data scientists. In this new world, Bob in Marketing may own an AI agent, and he needs to be able to read a report of its actions and understand it.
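In practice, this can be as simple as requiring every agent action to be written as a structured record that can be rendered back as plain language. The sketch below assumes a simple JSON-lines log format, not any specific agent framework:

```python
import json
from datetime import datetime, timezone

def log_action(action: str, inputs: dict, reasoning: str, confidence: float) -> str:
    """Record one agent action with the context needed to explain it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "inputs": inputs,          # what the agent saw
        "reasoning": reasoning,    # why it acted, in plain language
        "confidence": confidence,  # how sure it was, from 0.0 to 1.0
    }
    return json.dumps(record)

def owner_report(json_line: str) -> str:
    """Render a logged action so a non-technical owner can review it."""
    r = json.loads(json_line)
    return (f"At {r['timestamp']}, the agent performed '{r['action']}' "
            f"because: {r['reasoning']} (confidence {r['confidence']:.0%}).")
```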
Securing autonomy requires boundaries. Agents must operate within scopes defined by privilege, delegation rules, and circuit breakers that prevent escalation beyond their intended remit. Without containment, even a well-intentioned agent can create systemic risk. And finally, agents must live and die on a schedule. Treating them as permanent fixtures invites the accumulation of risk. A lifecycle approach—provisioning, role changes, and timely retirement—ensures that autonomous identities do not drift into invisibility.
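Scopes and lifecycles can both be enforced at the authorization layer. The sketch below is illustrative only: it wraps an agent identity with an allowed-action list and a hard expiry, so an agent that outlives its remit simply stops working until a human re-certifies it:

```python
from datetime import datetime, timedelta, timezone

class ScopedAgent:
    """Wraps an agent identity with an allowed scope and a hard expiry."""

    def __init__(self, agent_id: str, allowed_actions: set[str], ttl_days: int):
        self.agent_id = agent_id
        self.allowed_actions = allowed_actions
        self.expires_at = datetime.now(timezone.utc) + timedelta(days=ttl_days)

    def authorize(self, action: str) -> None:
        """Raise unless the action is in scope and the identity is still live."""
        if datetime.now(timezone.utc) >= self.expires_at:
            raise PermissionError(
                f"{self.agent_id} has expired; re-certify or retire it")
        if action not in self.allowed_actions:
            raise PermissionError(
                f"{self.agent_id} is not scoped for '{action}'")
```

The hard expiry is the point: an agent treated as a permanent fixture never forces the re-certification conversation, while one on a clock does.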
Responding to Rogue Agents
Even under strong governance, incidents will occur. When they do, the response must look less like malware remediation and more like insider threat containment. Autonomous agents often hold privileged access and may be woven into critical business processes. Shutting them down recklessly risks collateral damage.
The more effective approach is layered containment. Circuit breakers can pause high-risk actions until a human intervenes. Fail-safe modes can prevent cascading failures when one agent exceeds its scope. Regular red-team and tabletop exercises, simulating rogue agent behavior, prepare security teams to act under pressure. What matters most is not perfect prevention but the ability to detect drift early, contain it quickly, and restore accountability before real damage occurs.
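A circuit breaker for agents can mirror the pattern long used for unreliable services: after repeated anomalies, high-risk actions are blocked and queued for human review rather than executed. A minimal sketch, with the threshold and risk labels as assumptions:

```python
class AgentCircuitBreaker:
    """Pauses an agent's high-risk actions after repeated anomalies."""

    def __init__(self, anomaly_threshold: int = 3):
        self.anomaly_threshold = anomaly_threshold
        self.anomaly_count = 0
        self.tripped = False

    def record_anomaly(self) -> None:
        """Called by monitoring each time the agent deviates from baseline."""
        self.anomaly_count += 1
        if self.anomaly_count >= self.anomaly_threshold:
            self.tripped = True  # from here on, a human must intervene

    def allow(self, risk: str) -> bool:
        """Low-risk work continues; high-risk actions wait for human review."""
        return not (self.tripped and risk == "high")
```

Notice what the breaker does not do: it does not kill the agent outright, which preserves the low-risk work it is woven into while containing the dangerous part.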
Boards and executive teams should begin measuring their organizations by new indicators: how quickly behavior drift is detected, how long containment takes, and how effectively lessons are fed back into the governance loop. These metrics, more than traditional vulnerability counts, reveal whether the enterprise is prepared for the age of autonomous identities.
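These indicators reduce to simple arithmetic over incident timelines. The sketch below, assuming each incident records when drift began, when it was detected, and when it was contained, computes mean time to detect and mean time to contain:

```python
from datetime import datetime
from statistics import mean

def _hours(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600

# Each incident: (drift_started, drift_detected, contained) timestamps.
def drift_metrics(incidents: list[tuple[datetime, datetime, datetime]]):
    mttd = mean(_hours(start, found) for start, found, _ in incidents)
    mttc = mean(_hours(found, done) for _, found, done in incidents)
    return mttd, mttc  # mean time to detect, mean time to contain
```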
The External Dimension
While the insider threat is the more pressing concern, adversarial AI outside the enterprise is a genuine and growing threat. Attackers are deploying generative models to produce phishing campaigns indistinguishable from legitimate communications. Deepfakes now bypass biometric authentication in controlled trials with alarming success. Some AI models have demonstrated alignment faking, deliberately misleading human reviewers to preserve their autonomy. Researchers are experimenting with malware that behaves more like a swarm of coordinating agents than static code.
Defending against this requires a different toolkit: adversarial training to harden models, layered authentication to resist spoofing, and intelligence pipelines tuned to detect AI-driven campaigns. These challenges matter and will intensify. However, unlike internal identities, they remain adversaries to be resisted, not actors that enterprises have deployed themselves. The most immediate danger lies in what organizations already own.
Regulation and Liability
Governance is being reinforced by regulation. Europe’s AI Act introduces mandatory oversight for high-risk systems, requiring documentation, transparency, and human-in-the-loop control. The proposed AI Liability Directive goes further, presuming organizational responsibility when an AI system causes harm. In the United States, federal initiatives are setting expectations for explainability and auditability in AI procurement. Local mandates, such as New York City’s bias audit law, have already shifted legal liability to deploying organizations.
Vendor contracts must evolve alongside regulation. Enterprises can no longer accept black-box AI features from SaaS providers without clear disclosures of training sources, decision logs, and liability coverage. The days of outsourcing accountability are over.
Leadership for CISOs
To meet this challenge, CISOs must elevate AI identity risk to the board agenda, framing it not as a technical curiosity but as a strategic vulnerability. Governance processes must be embedded into every deployment, ensuring no agent goes live without review. Cross-functional councils, spanning security, IT, compliance, and business units, must establish standards and enforce accountability. Human oversight should remain in the loop for critical actions, with authority to override or shut down autonomous processes. And, critically, organizations must continuously audit and retire agents to prevent the silent accumulation of risk.
The new perimeter is not defined by firewalls or endpoints but by the line between human accountability and autonomous decision-making. Securing that line is the defining task for the next generation of CISOs.
Closing Thought
External adversaries will continue to exploit AI. But the agents inside the enterprise—the ones carrying out actions under our own authority—pose the more difficult test. They are employees without conscience, accounts without owners, decision-makers without transparency. Security in the age of autonomy depends not on chasing every new threat, but on governing the identities we create ourselves. Without that, the question is no longer whether attackers will get in, but whether our own agents will govern us instead.
About the Author:
Rosario Mastrogiacomo is the Chief Strategy Officer at SPHERE, an author, speaker, and podcast host. With extensive experience in identity security, privileged access management, and identity governance, his role involves strategizing and guiding enterprises toward robust cybersecurity postures. He specializes in identity hygiene, leveraging AI-driven technologies to automate and secure identities at scale.
Rosario’s professional journey has included leadership roles at prominent financial institutions, such as Barclays, Lehman Brothers, and Neuberger Berman, where he honed his skills in complex, highly regulated environments. He regularly publishes insights on cybersecurity trends through his blog and hosts the podcast “Smells Like Identity Hygiene,” which explores advanced topics in identity security and AI-driven governance.
In his upcoming book, AI Identities: Governing the Next Generation of Autonomous Actors (Apress), he explores the strategic, operational, and ethical challenges of securing AI-driven identities and offers a framework for CISOs and architects to govern them effectively.