Security pitfalls to consider when implementing Agentic AI

By Martyn Ditchburn, CTO in Residence, EMEA, Zscaler

Despite assertions from various AI leaders that artificial general intelligence (AGI) is just around the corner, we're not yet there. Where organizations have made substantial progress, however, is in agentic AI. With the AI landscape constantly evolving, it's useful to define the benefits of agentic AI first. Essentially, it's an AI model that has been designed with a particular orientation and intent, so that it has purpose. It can also be described as having set qualities such as autonomy, adaptability, and persistence in pursuing pre-set goals. And while it's still relatively narrow intelligence, its ability to retain memory and adapt means that it differs significantly from generative AI, which generally waits for prompts.

Alongside the rapid development of agentic AI models, two security-related factors have to be taken into consideration to avoid data loss in organizations. Firstly, there is a growing threat that adversaries are using agentic AI to extract sensitive data. In the wrong hands, such models can be used to actively probe systems for vulnerabilities rather than just relying on static knowledge. With this artificial support, bad actors can detect and exploit the weaknesses of organizations much more effectively and increase the pace and volume of attacks.

The second risk is in the adoption of agentic AI itself, when businesses start using those models to connect to and automate their business processes. Organizations are striving for fast development of innovative products and services to gain a competitive advantage, and AI plays a vital role in accelerating go-to-market cycles. That speed might result in the unintended consequence of data loss if there are no appropriate security measures in place.

Here are four considerations to keep in mind if an organization is looking to leverage agentic AI in its tech stack. 

Ensuring an organization remains in control 

AI agents are designed to pursue goals with intent and purpose. While this freedom can yield great results, it can also allow unintended behaviors to creep in. As businesses continue to use agentic AI in areas such as call centers, copywriting, and customer support, it's critical that they avoid the pitfall of assuming any agent can be broadly trusted with data access rights. To combat this, it's useful to treat the agent like a powerful and tireless user: constrain what it can see and do, and ultimately assume that at some point it's going to surprise the IT department in charge and act out of 'character'.

The CISO and security team have built up a bit of a reputation as the 'Office of No', and while it might be tempting to shut down any use of agentic AI, there are ways that agents can make a real difference. The key is setting the right parameters for them to do so. The same properties that amplify the risk (speed, memory, and autonomy) also hold the key to unlocking great benefits. To unlock these outcomes, it's critical to engineer the blast radius as best as possible by applying Zero Trust principles to agentic AI, much as one would scope what a user has access to: time-bound access, least-privilege permissions, and segmentation, so any mistake doesn't ripple across a system. Additionally, by adding behavioral baselines and guardrail controls, IT teams can ensure that they're scoping agents' actions rather than slowing agents, or the organization, down.
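As a minimal sketch of what time-bound, least-privilege scoping for an agent might look like in practice (the credential fields, scope names, and TTL values here are illustrative assumptions, not any specific product's API):

import time
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    # Hypothetical short-lived credential issued per agent task
    token: str
    scopes: frozenset      # e.g. {"crm:read"} -- never blanket access
    segment: str           # the data/network segment the agent is pinned to
    expires_at: float      # absolute expiry, epoch seconds

def issue_agent_credential(scopes, segment, ttl_seconds=900):
    """Issue a time-bound, least-privilege credential for one task."""
    return AgentCredential(
        token=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        segment=segment,
        expires_at=time.time() + ttl_seconds,
    )

def authorize(cred, action, segment):
    """Deny by default: the action must be in scope, in segment, in time."""
    if time.time() > cred.expires_at:
        return False   # expired -- the agent must re-request access
    if segment != cred.segment:
        return False   # segmentation limits the blast radius of a mistake
    return action in cred.scopes

# Usage: an agent scoped to read CRM data cannot suddenly write to billing.
cred = issue_agent_credential({"crm:read"}, segment="crm", ttl_seconds=600)
assert authorize(cred, "crm:read", "crm")
assert not authorize(cred, "billing:write", "billing")

The design choice worth noting is the deny-by-default posture: an expired, out-of-segment, or out-of-scope request simply fails, which is exactly how the blast radius of a misbehaving agent stays contained.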

Safeguarding against manipulation 

One of the things that makes agentic AI so useful is its memory. Unlike generative AI, which responds to prompts without retaining context, agentic AI retains information across interactions. This can conflict with data protection frameworks: how the results of agentic AI are stored, processed, and purged are all considerations that IT teams should be aware of. Additionally, there is a double-edged sword hidden in this process, because while the memory helps the agent pursue goals and build outcomes through multi-step reasoning, it also makes it an attractive source of information for threat actors.

This means that organizations now face the challenge of ensuring their agentic AI models are set up so that they can't be turned against them. By this, we mean that threat actors, or people within organizations, can't use carefully crafted language (prompt injection) to subvert the agent's intent and circumvent safeguards. In line with this, one common mistake that enterprises make is to assume that a single policy layer inside the agent is enough to prevent this.

To protect against this, businesses must use a checks-and-balances pattern and pair each task-oriented agent with a separate validator agent whose sole job is to review actions, confirm policy compliance, and block or quarantine anomalous behavior. However, it's important to keep the two logically and operationally separate, so that any threat actor is faced with the challenge of compromising both to succeed. This level of operational rigor not only mitigates such attacks but also improves reliability, increasing the likelihood of cleaner and more auditable outcomes.
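A minimal sketch of this pairing follows; the agent classes and policy limits are hypothetical stand-ins, and the point is only that the task agent proposes actions while a separately operated validator is the sole path to execution:

from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    payload: dict

class TaskAgent:
    """Pursues the goal and proposes actions; it never executes them."""
    def propose(self, goal: str) -> ProposedAction:
        # In reality this would call a model; here it is a stub.
        return ProposedAction(name="export_report", payload={"rows": 50})

class ValidatorAgent:
    """Separately deployed reviewer: its only job is policy compliance."""
    MAX_ROWS = 1000
    ALLOWED = {"export_report", "send_summary"}

    def review(self, action: ProposedAction) -> bool:
        if action.name not in self.ALLOWED:
            return False   # block and quarantine anomalous actions
        return action.payload.get("rows", 0) <= self.MAX_ROWS

def run(goal: str) -> str:
    action = TaskAgent().propose(goal)
    if not ValidatorAgent().review(action):
        return f"quarantined: {action.name}"
    return f"executed: {action.name}"

print(run("summarize Q3 pipeline"))   # -> executed: export_report

In a real deployment the two agents would run in separate processes, with separate credentials, so compromising the task agent's prompt does not also compromise the reviewer.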

Supercharging coordination in the supply chain 

As adoption of AI agents increases, and agents increasingly work with each other, additional risks emerge. Whether inside an organization or outside it, these agents are handing data off to each other. And it's here that the traditional supply chain risks we're all familiar with can become supercharged by automation.

To shore up against these supply chain risks, it's critical that employees understand that the internal guardrails which have been set up don't transfer when data leaves the estate. Essentially, enterprises should apply the same rigor used for SaaS and other vendors by making data-sharing boundaries contractual. One approach to strengthening defenses is to put a proxy-based architecture in front of APIs, taking them off the internet and using zero trust principles to segment data according to its criticality. Additionally, for businesses in industries where data is extremely sensitive, such as finance, health, or defense, it would be sensible to consider insurance for consequential agent behavior.
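One way to picture that contractual boundary is an egress check in front of every outbound agent handoff. The sketch below is an assumption-laden illustration (the partner names, classification labels, and contract map are invented for the example), not a specific vendor's implementation:

# Hypothetical egress policy applied at a proxy before any agent handoff.
PARTNER_CONTRACTS = {
    # partner -> highest data classification the contract permits
    "analytics-vendor.example": "internal",
    "payments-partner.example": "confidential",
}
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def may_share(partner: str, classification: str) -> bool:
    """Data leaves the estate only if the contract covers its sensitivity."""
    ceiling = PARTNER_CONTRACTS.get(partner)
    if ceiling is None:
        return False   # no contract, no handoff -- deny by default
    return SENSITIVITY[classification] <= SENSITIVITY[ceiling]

assert may_share("payments-partner.example", "confidential")
assert not may_share("analytics-vendor.example", "restricted")
assert not may_share("unknown-broker.example", "public")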

Accountability gaps 

Everyone has at some point in their career got something wrong or made a mistake. The likelihood is that the culprit holds their hands up, admits to it, and endeavors not to do it again. However, with agents increasingly taking on aspects of human roles, it raises the question: who is responsible when an autonomous workflow makes a bad call?

Legislation doesn't have a great track record of keeping up with innovation, nor does it show any sign of catching up in the AI era, which is why it's vital for organizations to build accountability in from day one. This can be achieved by enabling replay modes for critical flows and keeping tamper-evident action logs to ensure that an agent isn't breaching legislative compliance. It's also important to define memory lifecycle controls so that they are aligned with GDPR and similar data protection regimes. Here, it can be useful to use AI to safeguard AI, by having one agent perform the task and another prevent unauthorized use and outcomes. Doing so enables trust at scale, and with proper auditability, governance problems become engineering problems that can be measured and improved.
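One common way to make an action log tamper-evident is hash chaining, where each entry commits to the one before it, so editing any past record breaks every later hash. A minimal sketch, with illustrative field names:

import hashlib
import json
import time

def _digest(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, agent: str, action: str) -> None:
    """Each record commits to the previous record's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"ts": time.time(), "agent": agent, "action": action, "prev": prev}
    entry["hash"] = _digest(entry)   # hashed before the hash field is added
    log.append(entry)

def verify(log: list) -> bool:
    """Any edit to an earlier entry breaks every later link."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != _digest(body):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "task-agent-7", "exported customer report")
append(log, "validator-3", "approved export")
assert verify(log)
log[0]["action"] = "deleted audit trail"   # tampering...
assert not verify(log)                     # ...is detected

A log like this also underpins replay: because every action is recorded in order with its inputs, a critical flow can be re-run step by step when an outcome needs to be explained.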

Looking ahead 

Of course, as is often the case across the cyber landscape, if security teams are using AI to protect their businesses, then threat actors are leveraging it as part of their arsenal as well. Already, these bad actors are using agentic systems to continuously probe defenses and generate bespoke exploit code.

The truth is that traditional flat networks and box-bound controls can't keep up, and there's limited compute for any proper analysis, so organizations end up fighting tomorrow's attacks with yesterday's architecture. By applying zero trust principles and thinking, segmenting systems and data aggressively, and leveraging a platform-based approach, enterprises will be able to reap the benefits of agentic AI while remaining secure against its adversarial use.
