
Consumer Trust and Agentic AI: Why Securing APIs is the Key to Successful and Secure AI Innovation

By Glyn Morgan, Country Manager UK&I, Salt Security

Agentic AI is taking the world by storm – and it’s becoming unavoidable. In 2025, agentic AI underpins many of the digital functions we carry out professionally, but also personally. For businesses, agentic AI can save time, improve adaptability and scale intelligence in time-consuming yet critical areas such as customer experience and sales. Ultimately, agentic AI helps businesses operate with more speed and precision, whilst leaving space for innovation.

Many organisations have already capitalised on the benefits of agentic AI, integrating it into several business functions: over half (53%) say they are already deploying it, or plan to, in customer-facing roles. However, recent research has found consumer resistance to engaging with and trusting agentic AI, with only 22% of consumers saying they are comfortable sharing data with AI agents. So, how can organisations innovate and grow securely using agentic AI whilst consumer trust hangs in the balance?

APIs: Underpinning AI Agents (In)securely  

APIs (Application Programming Interfaces) are the digital foundation for agentic AI, providing the essential tools that grant agents autonomy and real-world capability. Acting as the agent’s action layer, APIs enable the AI to access live data and execute tasks across external systems, such as databases or booking services, transforming a thinking machine into a doing machine. For example, a chatbot may use APIs to access a customer’s order history or to initiate a return request. This ability to request data and interact across different platforms is critical for the function of an AI agent.
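
A minimal sketch of what this “action layer” looks like in practice: the agent maps an intent to a registered, API-backed tool and dispatches it. The `ToolRegistry`, tool names and handler here are illustrative assumptions, not any specific vendor’s API; real handlers would wrap authenticated HTTP calls to internal services.

```python
# Illustrative sketch of an agent action layer: intents are dispatched
# to registered handlers, each of which would normally wrap an API call.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ToolCall:
    name: str
    args: dict

class ToolRegistry:
    """Maps tool names to the API-backed handlers an agent may invoke."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., dict]] = {}

    def register(self, name: str, handler: Callable[..., dict]) -> None:
        self._tools[name] = handler

    def dispatch(self, call: ToolCall) -> dict:
        # Refusing unregistered tools is the first, coarsest access control.
        if call.name not in self._tools:
            raise PermissionError(f"unknown tool: {call.name}")
        return self._tools[call.name](**call.args)

# Stand-in for a handler that would call an internal orders API over HTTPS.
def get_order_history(customer_id: str) -> dict:
    return {"customer_id": customer_id, "orders": ["A123", "B456"]}

registry = ToolRegistry()
registry.register("get_order_history", get_order_history)
result = registry.dispatch(ToolCall("get_order_history", {"customer_id": "c-42"}))
```

The registry pattern matters for security precisely because every agent action funnels through one dispatch point, which is where the monitoring and authorisation checks discussed below can be enforced.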

However, this crucial reliance introduces a significant risk: without robust API security, governance, and discovery, agentic AI could inadvertently create pathways for cyberattacks or data leakage. As AI agents handle sensitive information and automate complex tasks, weaknesses in API authentication or access control become serious vulnerabilities. This puts business and customer data (and trust) at significant risk. Strengthening API security is a critical step to building the consumer trust necessary to unlock the full business potential of agentic AI. 

Despite the growing risks associated with APIs, organisations often lack consistent security practices and tools to monitor and secure them. Research has found that only around a third of organisations conduct daily API risk assessments, while 7% report doing so monthly or less. Meanwhile, only 37% say they use dedicated API security solutions. With so much hanging in the balance, these gaps in security present a complex cybersecurity challenge. So, how do organisations begin to secure their APIs?

Defending Agentic AI Tools (and the APIs That Power Them) Against Threats  

Business leaders must start by monitoring their organisations’ APIs. The key is to continuously monitor, so anomalies can be detected easily – and quickly. Tools, especially those powered by AI, can establish baseline behaviours for machine-to-machine interactions. This is crucial because an agent’s requests can be highly dynamic and unpredictable. Detecting an anomalous sequence of API calls is a key early warning sign of compromise or a flawed agent logic loop. 
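
As a rough sketch of the baselining idea (not any particular product’s detection logic), one simple approach is to record which call-to-call transitions an agent normally makes, then flag sequences containing transitions never seen during the baseline window:

```python
# Toy baseline for machine-to-machine API behaviour: learn observed
# transitions between consecutive API calls, flag unseen transitions.
from collections import defaultdict

class SequenceBaseline:
    def __init__(self) -> None:
        # For each API call, the set of calls observed to follow it.
        self.transitions = defaultdict(set)

    def learn(self, sequence: list) -> None:
        for a, b in zip(sequence, sequence[1:]):
            self.transitions[a].add(b)

    def anomalies(self, sequence: list) -> list:
        # Return every transition in the sequence that the baseline
        # has never observed.
        return [(a, b) for a, b in zip(sequence, sequence[1:])
                if b not in self.transitions[a]]

baseline = SequenceBaseline()
baseline.learn(["auth", "get_order_history", "create_return"])

# A compromised agent jumping straight from auth to a bulk export
# produces a transition the baseline has never seen.
flagged = baseline.anomalies(["auth", "export_all_customers"])
```

Production systems use far richer signals (rates, payloads, identities), but the principle is the same: the baseline defines “normal”, and deviation is the early warning.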

Additionally, implementing strong authentication frameworks and least-privilege access is non-negotiable. Every API call made by an agent must be authenticated and agents should only be granted the minimum permissions necessary for their specific tasks. Granular access controls, such as using OAuth tokens with tightly scoped permissions, prevent a compromised agent from gaining unchecked access to the entire enterprise system. 
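
The least-privilege check can be sketched as a small gate in front of every dispatched call: the endpoint-to-scope mapping and scope names below are illustrative assumptions, and the example presumes the OAuth token has already been validated upstream.

```python
# Hedged sketch of least-privilege enforcement: each endpoint demands a
# specific OAuth scope, and a call is refused unless the agent's token
# carries exactly that scope. Scope and endpoint names are illustrative.

REQUIRED_SCOPES = {
    "GET /orders": "orders:read",
    "POST /returns": "returns:write",
}

def authorize(endpoint: str, token_scopes: set) -> None:
    required = REQUIRED_SCOPES.get(endpoint)
    if required is None or required not in token_scopes:
        raise PermissionError(f"missing scope for {endpoint}")

# An agent scoped only for reads can fetch orders...
authorize("GET /orders", {"orders:read"})

# ...but cannot initiate writes, even if compromised.
try:
    authorize("POST /returns", {"orders:read"})
    write_allowed = True
except PermissionError:
    write_allowed = False
```

The design point is that a stolen or misused token is only as dangerous as the scopes it carries, so scoping tightly per task caps the blast radius of a compromised agent.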

Protecting sensitive data also requires comprehensive encryption for data in transit and at rest. Using TLS/SSL ensures data is indecipherable if intercepted during an API call, while encrypting data at rest protects the information an agent stores in its memory or knowledge base. This creates a critical layer of defence for customer data and proprietary business logic. 
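
In Python, for instance, the in-transit half of this can be made explicit with the standard-library `ssl` module, whose default context already verifies server certificates and hostnames; the sketch below simply pins that policy and refuses legacy protocol versions:

```python
# Enforcing TLS for outbound agent API calls using Python's stdlib.
# ssl.create_default_context() verifies certificates and hostnames by
# default; asserting that here keeps the policy from being relaxed.
import ssl

context = ssl.create_default_context()

# Refuse legacy protocol versions for API traffic.
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Pass `context` to e.g. urllib.request.urlopen(url, context=context)
# so every API call the agent makes is encrypted and verified in transit.
```

Encryption at rest for an agent’s memory or knowledge base is handled separately, typically by the datastore’s own encryption features or disk-level encryption.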

Finally, maintaining security requires human oversight and proactive measures. Companies should enforce regular security testing and developer training. Frequent penetration testing and bug bounty programmes, specifically targeting the API layer, can uncover vulnerabilities that automated tools miss. Equally important is training developers on secure API design, ensuring they understand the unique risks presented by autonomous agent consumption.  

The Future of Agentic AI 

One thing’s for certain: agentic AI is going to be used by ever more organisations. It’s therefore critical that they use it securely to scale and innovate, or risk losing consumer trust and, potentially, revenue. Because APIs underpin agentic AI, adopting strong API security practices and governance is essential for building a safe and trustworthy agentic AI ecosystem.
