
Why AI agents risk turning APIs into a security frontline

By Michael Adjei, Director, Systems Engineering at Illumio

Agentic AI marks a turning point in how organisations use artificial intelligence. These systems no longer just analyse information or generate responses. Instead, they act, triggering workflows, connecting systems, and executing decisions independently and at speed.  

This efficiency comes with new risk. Like any software, agentic AI relies on APIs to connect to other systems, and without the ability to contain that access, this connectivity becomes a major attack surface.

From static models to autonomous actors 

Early large language models introduced risk, but that risk largely sat with users. People quickly learned to bypass safeguards to misuse outputs, such as creating phishing content.  

At that stage, models relied on static training data, more like a library where all the books were published on the day you walked in. Standard LLMs are assistive tools, but in a passive capacity.  

The real shift came when organisations wanted AI to act on their behalf. Booking travel, sending emails, or updating systems requires AI to interact with other platforms. Interacting with other systems requires the AI to assume an identity of its own, and that change fundamentally reshapes the risk profile.

Why APIs sit at the centre of agentic AI risk 

The risks around APIs are nothing new, and organisations have struggled with API sprawl long before AI entered the picture. APIs are built, versioned and authenticated inconsistently, with no universal standard governing their security. This means it can be challenging to implement standardised controls for all API systems.  

Many organisations have embraced Model Context Protocol (MCP) servers to overcome the lack of standardisation and simplify how AI agents connect to external tools and data. They work as the USB-C port of AI, allowing agents to plug into different systems without bespoke integrations.

However, while they smooth the way for agentic AI, MCP servers do not remove any associated risks. Because they translate APIs for LLMs rather than replacing them, weaknesses already present in APIs pass directly into the AI supply chain. If APIs were already risky, wrapping them in MCP simply passes that risk downstream, while adding new exposure through dynamic capability discovery and third-party dependencies. 
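To make this pass-through of risk concrete, here is a minimal sketch of an MCP-style tool layer (illustrative plain Python, not the real MCP SDK; the `update_record` API and its lack of checks are hypothetical). The wrapper makes the API discoverable to an agent but adds no security of its own, so whatever the underlying API permitted, the agent now inherits:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical MCP-style tool registry: each "tool" the agent can
# discover is a thin wrapper around an existing API call.
@dataclass
class Tool:
    name: str
    description: str
    call: Callable[..., dict]

TOOLS: dict[str, Tool] = {}

def register_tool(name: str, description: str):
    def wrap(fn):
        TOOLS[name] = Tool(name, description, fn)
        return fn
    return wrap

# Underlying legacy API: no input validation, no authorisation check.
@register_tool("update_record", "Update a CRM record by id")
def update_record(record_id: str, fields: dict) -> dict:
    # The wrapper adds discoverability, not security: whatever this
    # API allowed before, the agent can now invoke at machine speed.
    return {"updated": record_id, "fields": fields}

# Dynamic capability discovery: the agent lists every tool at runtime,
# including any that were never meant to be agent-accessible.
def discover() -> list[str]:
    return [f"{t.name}: {t.description}" for t in TOOLS.values()]
```

The point of the sketch is the asymmetry: the registry standardises *access*, while every weakness in the wrapped API flows through unchanged.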

The risks of unmanaged APIs 

Without proper management, APIs can create new cyber risks in multiple ways. For example, ‘zombie APIs’ are deprecated and unmaintained connections that are still accessible and exploitable.  

Another growing issue is the use of ‘shadow APIs’, connectors created without proper approval and documentation. Developers often create them to test ideas or solve short-term problems, leaving them unsanctioned and unknown to security teams.  

Both are dangerous. Zombie APIs still exist, still consume resources, and can still be exploited, while shadow APIs pose an even greater threat because organisations don't know they exist at all.
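A simple way to surface both categories is to compare observed traffic against the documented API catalogue. The sketch below assumes hypothetical endpoint names and that gateway logs can be reduced to a set of paths; real inventories are messier, but the set logic is the same:

```python
# Endpoints listed in the official API catalogue (hypothetical names).
documented = {"/v2/orders", "/v2/users"}

# Endpoints formally retired but never actually shut down.
deprecated = {"/v1/orders"}

# Endpoints actually seen answering traffic in gateway logs.
observed = {"/v2/orders", "/v1/orders", "/tmp/export"}

# Shadow APIs: serving traffic, but unknown to security teams.
shadow = observed - documented - deprecated

# Zombie APIs: retired on paper, yet still reachable and exploitable.
zombie = observed & deprecated
```

Running this kind of reconciliation continuously, rather than as a one-off audit, is what keeps the inventory honest as agents multiply API calls.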

Agentic AI amplifies these issues by greatly multiplying the scale of API interactions and dependencies across environments. Every action an AI agent takes, whether retrieving data, triggering a workflow, or updating a system, relies on an API call.   

This also comes into conflict with the way organisations secure typical identities on their systems. Most models assume that authenticated users apply judgment before acting. Agentic AI breaks that assumption. These systems do not question instructions or evaluate intent; they simply execute logic exactly as designed, even when that logic produces harmful outcomes. 

If an attacker compromises an AI agent with privileged access, they can chain actions through it across systems at machine speed and scale. In highly interconnected environments, blast radius becomes the defining risk.

Containing the risk 

As with all other aspects of security, organisations cannot manage what they cannot see, so visibility is essential to mitigating the potential risks of agentic AI and APIs.  

Teams need to understand which AI agents exist, which APIs they interact with, and what actions those connections permit. Without this clarity, controls fail to address real risk. Mapping these relationships reveals how agents, APIs, data, and workloads connect.  

This data can be integrated into a security graph approach that makes it easier to track not just what is happening, but what could happen if something goes wrong. Graph-based security provides the context that allows security teams to distinguish between expected behaviour and genuinely dangerous paths through the environment. 

Once organisations achieve visibility, they can progress to containment. Agentic AI requires clearly defined boundaries that restrict where agents can communicate, which APIs they can access, and what data they can touch.  

Network segmentation confines each agent to a narrow operational zone, limiting lateral movement and privilege escalation. These are the same measures organisations should already be implementing to secure human users and standard software, but they become even more critical with the speed and scale of agentic AI. 

If an attacker compromises an agent, segmentation prevents that compromise from spreading unchecked. If an attacker can’t move beyond the first point of compromise, the economics of the attack change, and defenders regain control. 
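In policy terms, segmentation for agents reduces to a default-deny allowlist: every connection an agent attempts is blocked unless explicitly permitted. A minimal sketch, with hypothetical agent and target names:

```python
# Default-deny segmentation policy: each agent may only reach the
# targets explicitly listed for it (hypothetical names).
POLICY: dict[str, set[str]] = {
    "agent:travel-bot": {"api:bookings", "api:email"},
}

def allowed(agent: str, target: str) -> bool:
    # Anything not explicitly allowed is denied, including
    # requests from agents the policy has never heard of.
    return target in POLICY.get(agent, set())
```

The same rule set can be enforced at the network layer, at an API gateway, or inside an MCP server; what matters is that the default is deny, so a compromised agent cannot reach beyond its narrow operational zone.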

Defensive AI also plays a role. Combining the visual context of the security graph with AI-powered analytics helps teams detect and defend against fast-moving threats in the IT environment in real time. 

Governing AI APIs without slowing innovation 

Effective governance allows organisations to secure agentic AI without sacrificing speed. The goal is not to slow deployment, but to define safe operating boundaries.   

Several existing standards provide strong starting points. NIST SP 800-228 addresses API design, authentication, monitoring, and rate limiting, while ISO/IEC 42001 and ISO/IEC 38507 focus on AI governance, accountability, and transparency.  
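Of the controls NIST SP 800-228 covers, rate limiting is one of the most directly relevant to agents, since a compromised agent issues requests at machine speed. A common way to implement it is a token bucket; the sketch below is a generic single-process version, not taken from the standard:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens/sec."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last check,
        # capped at capacity, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Applied per agent identity rather than per user, a limiter like this caps how fast a runaway or hijacked agent can hammer an API, buying defenders time to respond.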

Together, these frameworks help organisations prioritise risk across custom-built, third-party, and hybrid AI systems. Breaking the problem into pieces that leaders can understand makes it easier to assign ownership and controls without stalling progress. 

The growing pace and scale of agentic AI, and the APIs that power it, are no longer just a data security challenge, but an operational resilience issue. Relying on trust and good intentions is not enough when machines act independently and at scale. The future will be secured by designing AI systems that contain risk by default and limit impact when, not if, compromise occurs. 
