
Protecting the Machines: Why Access Control is Critical to Securing AI Agents

By Jeremy London, Director of Engineering, AI and Threat Analytics, Keeper Security

Artificial Intelligence (AI) agents have evolved far beyond their early role as digital assistants for booking meetings or answering simple customer queries. Today, they handle sensitive data, trigger workflows, make financial commitments and communicate with external systems in real time. This growth in capability offers significant efficiency gains, but it also introduces new avenues of risk. 

Gartner forecasts that by 2027, AI agents will augment or automate 50% of business decisions, many of which will involve accessing privileged systems and sensitive data. That represents a fundamental shift in the way organisations operate and a clear signal that autonomy is the future. The opportunity is obvious, yet the risks may be less apparent in the rush to leverage these technologies. An AI agent that leaks confidential data or is tricked into granting unauthorised access can cause real harm. The critical question is not simply what these systems can do, but how they can be managed securely. 

For many organisations, the key to implementing and managing agentic AI safely is strong access control. Without clear boundaries around what an AI agent can see, do and decide, organisations cannot claim to have secured it. This challenge is becoming both the top blocker for cautious organisations hesitant to deploy AI widely and the top threat vector for those that move forward without guardrails.

When AI autonomy meets access risk 

Agentic AI is not bound by the same limitations as human users. A person may have role-based access privileges tied to a specific set of systems. An AI agent, however, can traverse multiple environments at speed – linking databases, Application Programming Interfaces (APIs), cloud services and third-party platforms in a single action. 

Traditional perimeter-based security models fail catastrophically in this environment. Unlike human users who typically access systems through managed endpoints and established session boundaries, AI agents can simultaneously authenticate to dozens of APIs, databases, and cloud services within milliseconds. Firewalls and network boundaries cannot contain autonomous entities that operate across multiple domains. Instead, organisations must adopt identity-first security, built on zero-trust architecture. Every action taken by an AI agent should be authenticated, authorised and auditable. Trust must be earned transaction by transaction, not granted by default. 
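
To illustrate what "transaction by transaction" trust can look like in practice, the sketch below shows a per-action authorisation gate for agent tool calls. It is a minimal Python illustration only: the AgentIdentity class, the POLICY table and the in-memory audit list are assumptions made for the example, not a description of any particular product's API.

```python
import datetime
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    agent_id: str
    credential_expiry: datetime.datetime  # short-lived credential, re-issued per session


# Explicit allowlist: which actions each agent identity may perform, and on what.
POLICY = {
    ("support-agent", "read", "kb:product-docs"): True,
    ("support-agent", "read", "hr:records"): False,
}

audit_log: list[dict] = []


def authorize_action(identity: AgentIdentity, action: str, resource: str) -> bool:
    """Authenticate, authorise and audit one agent action. Nothing is inherited
    from earlier calls: every action is evaluated on its own."""
    now = datetime.datetime.now(datetime.timezone.utc)
    authenticated = now < identity.credential_expiry
    allowed = authenticated and POLICY.get((identity.agent_id, action, resource), False)
    audit_log.append({
        "time": now.isoformat(),
        "agent": identity.agent_id,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    })
    return allowed


# Usage: the same agent is allowed one action and refused another, and both are logged.
agent = AgentIdentity(
    "support-agent",
    datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15),
)
print(authorize_action(agent, "read", "kb:product-docs"))  # True
print(authorize_action(agent, "read", "hr:records"))       # False
```

The important property is that nothing carries over between calls: each action is authenticated against a short-lived credential, checked against an explicit policy and written to the audit record.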

The risks are far from theoretical. Consider the potential for privilege escalation: an AI agent may begin with narrow permissions but could be manipulated into requesting more. Once granted, those permissions may be exploited by attackers who find ways to influence the system. Similarly, when an agent integrates across multiple services, the compound risk of each connection becomes significant. One weak link can expose an entire workflow. 

Real-world vulnerabilities 

The most visible risks with agentic AI include prompt manipulation and unintended authorisations. Attackers have already demonstrated that, by crafting inputs carefully, they can steer AI models into revealing data or performing actions outside of their intended scope. In some cases, this resembles a traditional injection attack, where an attacker inserts malicious input to change system behaviour. The difference with AI is that models are designed to adapt and follow instructions flexibly, which makes them easier for attackers to push beyond those boundaries.

We’ve observed cases where AI agents granted administrative access to development environments were manipulated through carefully crafted prompts into executing database queries beyond their intended scope, effectively bypassing role-based access controls through natural language instructions rather than traditional SQL injection.
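
One way to limit this class of attack is to enforce scope at the tool boundary rather than trusting the model's output. The sketch below is a simplified Python illustration: it validates an agent-generated query against a read-only rule and an allowlist of tables before execution. The table names are hypothetical, and a real deployment would pair a check like this with a proper SQL parser and database-level permissions.

```python
import re

# Tables this agent's workflow is allowed to read (hypothetical names).
ALLOWED_TABLES = {"tickets", "product_docs"}


def guard_agent_query(sql: str) -> str:
    """Reject anything that is not a plain SELECT against approved tables,
    regardless of how persuasive the prompt that produced it was."""
    statement = sql.strip().rstrip(";")
    if not re.match(r"(?is)^select\b", statement):
        raise PermissionError("Only read-only SELECT statements are permitted")
    referenced = {
        name.lower()
        for name in re.findall(r"(?i)\b(?:from|join)\s+([a-z_][a-z0-9_]*)", statement)
    }
    outside_scope = referenced - ALLOWED_TABLES
    if outside_scope:
        raise PermissionError(f"Query touches tables outside the agent's scope: {outside_scope}")
    return statement  # only now handed to a read-only database connection


# A prompt-injected request for credential data is blocked here, not by the model.
try:
    guard_agent_query("SELECT password_hash FROM users")
except PermissionError as err:
    print(err)
```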

Another scenario involves unintended data access. An AI agent might be connected to an internal knowledge base to support employees. If the boundaries are not properly defined, the same agent might also expose privileged Human Resources (HR) records or financial data when queried. Unlike human error, which is sporadic, AI agents can replicate and amplify missteps at machine speed. 
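
One practical control is to enforce that boundary in the retrieval layer, before any document reaches the model. The Python sketch below assumes a simple classification label on each document and a clearance set for the agent; both are illustrative and would map onto whatever labelling scheme the knowledge base already uses.

```python
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    classification: str  # e.g. "public", "internal", "hr-restricted"
    text: str


# What the support agent is cleared to see (illustrative labels).
AGENT_CLEARANCE = {"public", "internal"}


def filter_for_agent(retrieved: list[Document]) -> list[Document]:
    """Drop anything outside the agent's clearance before the prompt is built,
    so the boundary lives in the retrieval layer rather than in the model."""
    return [doc for doc in retrieved if doc.classification in AGENT_CLEARANCE]


docs = [
    Document("kb-1", "internal", "How to reset a customer password"),
    Document("hr-7", "hr-restricted", "Salary review notes"),
]
print([doc.doc_id for doc in filter_for_agent(docs)])  # ['kb-1']
```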

Privilege escalation is a particular concern as well. For example, an AI agent tasked with onboarding new employees may start with access to basic workflow tools. Over time, it may request permission to generate user credentials, reset passwords or assign roles. Each access extension must be tightly monitored and justified. Without rigorous oversight, permissions accumulate and create new attack vectors. 

Regulatory and compliance pressures 

Governments and regulators are beginning to focus on these risks. In the UK, the recent £15 million Alignment Project, led by the AI Security Institute, is designed to ensure advanced AI systems behave as intended. Partnering with organisations such as AWS and Anthropic, the project illustrates the priority being placed on safety and alignment, not just capability. 

At the European level, the EU AI Act is establishing new expectations for governance and accountability. For enterprise leaders, this means demonstrating not only that AI systems deliver results, but that they do so in ways that are safe, explainable and compliant. A key regulatory question is how organisations log, monitor and audit the decisions made by autonomous agents. This includes maintaining immutable audit trails that capture not just what actions were taken, but the reasoning chains, data sources accessed and decision points that led to each action. These are requirements that traditional user activity logs simply cannot satisfy. Businesses need session recording, detailed audit trails and mechanisms for reconstructing decision-making. This requirement for traceability is both a compliance obligation and a security necessity.
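
One way to picture such a trail is as an append-only, hash-chained log of decision records. The Python sketch below is illustrative only: the fields it captures (prompt, data sources, a reasoning summary and the resulting action) are assumptions about useful decision context rather than a mandated schema, and the hash chaining simply makes silent edits to past records detectable.

```python
import datetime
import hashlib
import json


class AuditTrail:
    """Append-only trail in which each record is chained to the previous one,
    so that tampering with past entries becomes detectable."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, agent_id: str, prompt: str, sources: list[str],
               reasoning_summary: str, action: str) -> dict:
        record = {
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id,
            "prompt": prompt,                        # input that triggered the decision
            "sources": sources,                      # data the agent accessed
            "reasoning_summary": reasoning_summary,  # why it acted
            "action": action,                        # what it actually did
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._last_hash
        self._records.append(record)
        return record


trail = AuditTrail()
trail.append("onboarding-agent", "Create an account for the new hire",
             ["hr:new-starters"], "Matched onboarding workflow step 2",
             "created user account")
```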

What does meaningful control look like? 

Securing agentic AI requires moving beyond traditional perimeter defences and embedding identity-first, zero-trust security into every interaction. The following principles can help leaders establish practical boundaries: 

  • Anchor security in zero-trust architecture
    Every AI action must be explicitly authenticated, authorised and auditable. Access should be granted only for the specific task at hand, with no implicit trust carried across systems. 
  • Define and enforce permissions
    Apply least privilege principles rigorously and with precision. AI agents should only ever have the minimum access required to complete a workflow. Dynamic access should be time-limited, approved and logged. Without these guardrails, the likelihood of unauthorised privilege escalation grows.  
  • Continuously monitor and detect anomalies
    Traditional user behaviour analytics are ineffective for autonomous systems that process data at scale. Organisations must deploy AI-specific behavioural analytics that can detect anomalous patterns such as unexpected privilege escalation requests, unusual cross-system data correlations or deviations from established agent workflows – capabilities that require real-time analysis of both the agent’s actions and the contextual data that influenced those decisions.
  • Audit every session
    Auditability is non-negotiable. Logs must capture not only what AI agents executed, but also the complete decision context: input prompts, accessed datasets, retrieved context, intermediate reasoning steps and external API responses that influenced each action. Session recording and detailed audit trails are vital for compliance, accountability and investigation.
  • Secure every integration point
    Every API endpoint, SaaS integration, and third-party service represents not just a potential vulnerability, but a multiplier of risk when AI agents can chain these connections together autonomously. Organisations must assess and secure each integration point, apply stringent API security controls and track the compound risk created when agents span multiple systems. 
  • Balance automation with human oversight
    For tasks that involve compliance or business risk, organisations should adopt a graduated model of autonomy. AI can act independently on low-risk tasks, but human oversight and approval should be mandatory for high-risk actions such as creating credentials or assigning privileged access. A minimal sketch of this graduated model follows the list.
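
The sketch below shows how least-privilege, time-limited grants and a human approval gate can fit together. The risk tiers, grant lifetime and approver field are illustrative assumptions; in practice, approvals would route through an organisation's existing privileged access management and workflow tooling.

```python
import datetime
from dataclasses import dataclass

# Actions that always require a named human approver (illustrative tiers).
HIGH_RISK_ACTIONS = {"create_credentials", "assign_role", "reset_password"}


@dataclass
class Grant:
    agent_id: str
    action: str
    expires_at: datetime.datetime
    approved_by: str | None  # a named human for high-risk grants


def request_grant(agent_id: str, action: str,
                  approver: str | None = None,
                  ttl_minutes: int = 15) -> Grant:
    """Issue a short-lived grant; refuse high-risk grants without a human approver."""
    if action in HIGH_RISK_ACTIONS and approver is None:
        raise PermissionError(f"'{action}' requires explicit human approval")
    expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=ttl_minutes)
    return Grant(agent_id, action, expires, approver)


def is_valid(grant: Grant) -> bool:
    """Grants expire automatically, so permissions cannot silently accumulate."""
    return datetime.datetime.now(datetime.timezone.utc) < grant.expires_at


# Low-risk work proceeds autonomously; credential creation needs a named approver.
routine = request_grant("onboarding-agent", "create_workspace")
privileged = request_grant("onboarding-agent", "create_credentials",
                           approver="it-admin@example.com")
print(is_valid(routine), privileged.approved_by)
```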

Striking the balance between innovation and safety 

There is no denying the transformative potential of AI agents. Automation may free people to focus on higher-value work, but it also creates significant new risks.  

The challenge for organisations is to embrace innovation without surrendering control. That means embedding security into every AI deployment – not treating it as an afterthought. Access control, identity-first security and meaningful audit trails are not bureaucratic hurdles. They are the foundations of safe and sustainable AI use. 

The organisations that succeed will be those that recognise AI agents as a fundamentally new class of digital identity – one that requires purpose-built access controls, continuous behavioural monitoring and comprehensive session recording. Success means moving beyond retrofitting existing security tools and instead implementing identity-first architectures designed for autonomous systems from the ground up.
