AI agents are here to stay: three tips for seamless integration

By Lou Blatt, Ph.D., Head of Product at Vonage

At first, agentic AI might have seemed like a short-lived gimmick. But sophisticated, autonomous and far more capable of handling complex tasks than their chatbot ancestors, AI agents can grow and adapt to fit any business.

2025 has already seen a quick-fire barrage of new high-profile AI agent products, like Amazon Nova Act, NTT Data’s Smart AI Agent, OpenAI Operator and more. Demand is growing rapidly among both businesses and consumers, with agentic AI streamlining communications and smoothing interactions between customers and employees. With 85% of businesses hoping to deploy AI agents of their own this year, these solutions are becoming the new normal.

Many businesses are realising the agentic approach is the most viable way for them to achieve the ROI on AI they’ve long been hoping for. How can businesses make sure agentic AI has a positive impact on the business, rather than being another example of ‘AI for AI’s sake’? Let’s take a look at three key principles to follow.

AI agents can’t replace human teams yet

First and foremost, businesses must overcome the myth that all AI agents can act completely autonomously – otherwise they risk pinning unrealistic expectations on a project and setting it up to fail.

Outside of precision-built vertical AI agents that excel in specific industry tasks, these solutions aren’t silver bullets: they require careful forethought, dedicated infrastructure and, most importantly, informed human supervisors. 95% of IT leaders agree that AI solutions are more likely to fail without dedicated, trained human teams. Without this supervision in place, the reputational risks of a rogue AI agent can far outweigh the benefits of a good one.

Moreover, the error rate of AI agents increases with the complexity of the task, because mistakes compound step by step. For example, an agent with even a 1% per-step error rate has a better than 60% chance of getting something wrong at least once over a 100-step task. This is an unacceptable risk: even the smallest errors can snowball into catastrophes. Businesses are better off considering a multi-agent system (MAS) approach, where tasks are distributed amongst multiple AI agents that verify each other’s work. This is an involved process and might not be the quick fix that tech leaders are envisioning, but comprehensive, foolproof solutions are a must when onboarding emerging technology.
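
To illustrate how quickly per-step errors compound, here is a minimal back-of-the-envelope sketch in Python (assuming, for simplicity, that each step fails independently at the same rate):

```python
# Illustrative arithmetic only: probability that a multi-step task
# completes without a single error, assuming each step fails
# independently with the same per-step error rate.

def chance_of_at_least_one_error(per_step_error_rate: float, steps: int) -> float:
    """Return the probability that at least one step goes wrong."""
    return 1 - (1 - per_step_error_rate) ** steps

# A 1% per-step error rate compounds to roughly a 63% chance of
# at least one mistake across a 100-step task.
print(f"{chance_of_at_least_one_error(0.01, 100):.0%}")  # -> 63%
```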

‘Off-the-shelf’ AI agents, by comparison, are more like assistants than autonomous project managers. Automating routine administrative tasks is the technology’s biggest strength – while this is not a flashy prospect, it’s a realistic one that can return a solid ROI.

In a customer service setting, for example, AI agents can assist human operators by automatically pulling up resources to assist them with the conversation, or recommending a ‘next best action’, which can be useful in a training context too. This is the key difference between an AI agent and an AI assistant: agents operate autonomously to achieve specific goals with minimal human input, while assistants are reactive systems that only perform tasks when prompted by users.

Deploying hybrid teams of human agents and AI assistants rules out the risk of infuriating ‘doom loops’: customers being sent around from chatbot to AI agent and back again without any real resolution. This can occur when an AI agent lacks the necessary resources to resolve or escalate a complex query, reinforcing the need for complex tasks to be managed by a human.

Businesses could also consider implementing Model Context Protocol (MCP) in their agentic AI tools – this universal protocol allows AI agents to interact with diverse datasets without the need for custom integrations. With MCP, AI agents access real-time, context-rich information so that they (and the human agents they’re assisting) can work to the best of their ability.
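
As a rough illustration of what an MCP integration can look like, here is a minimal sketch using the open-source MCP Python SDK; the server name, tool and hard-coded data below are hypothetical examples, not a description of any vendor’s implementation:

```python
# Minimal MCP server sketch using the open-source "mcp" Python SDK.
# The server name, tool and data below are hypothetical illustrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-context")  # hypothetical server name

@mcp.tool()
def lookup_order_status(order_id: str) -> str:
    """Expose an internal data source as a tool any MCP-aware agent can call."""
    # In practice this would query a real order system; hard-coded for the sketch.
    return f"Order {order_id}: shipped, arriving in 2 days"

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP so agents can discover and call it
```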

Out-of-the-box solutions will fail to impress

A one-size-fits-all approach rarely delivers meaningful results. Generic AI agents lack the depth of understanding required to handle nuanced industry-specific tasks, like specialised workflows across multiple platforms, to a good standard without creating more work for human teams – defeating the purpose of automation.

A successful AI agent doesn’t just respond to queries: it must be designed to perfectly fit vertical-specific tasks and functions, integrate seamlessly with existing business systems, and continuously learn from interactions. For instance, a financial services AI agent needs to comply with strict regulatory guidelines, while a healthcare AI assistant must process sensitive patient data securely and accurately. A generic solution will do nothing but create risk in these critical processes, and should be kept well away from them.

Any savings that businesses might make with generic solutions will be lost to constant fine-tuning and troubleshooting. This is doubly true for those wanting to build their own solution in-house: without human expertise spanning the many cybersecurity and compliance risks involved, it’s simply not worth it.

There are far more attractive options on the market now: there’s no shortage of AI-as-a-Service (AIaaS) products from trusted cloud providers like Google and AWS that help businesses build their own specialised agentic solutions. This approach is less precisely tailored than a comprehensive managed service, with dedicated human teams to create and maintain agents, but it’s a good starting point for businesses to test the waters and inform strategic decision-making.

Preserving trust is paramount

By 2028, AI-driven breaches are projected to account for 25% of enterprise security incidents. As AI agents handle more and more sensitive data, businesses must prioritise transparency and accountability before anything else, especially in high-stakes sectors like healthcare and finance.

With enterprise genAI spending projected to hit $644 billion this year (a 76% YoY increase), businesses must avoid tunnel-vision and instead prioritise a measured approach amidst the growing hype.

In the UK, AI regulation is being deprioritised to encourage innovation, with no specific legislation in place at this time – but it will come, sooner or later. Businesses that self-govern their AI agents can prove to clients that they’re taking the tech seriously and putting them first every step of the way. This could involve regularly updating datasets, setting strict rules on internet scraping and copyright infringement, and running regular audits to assess and continuously improve data security. Just as AI agents constantly learn and grow as they operate, so should your business as it embraces this new solution.

Getting the basics right first

By combining a tailored, trustworthy, and collaborative approach with thorough training and supervision from human teams, organisations can reap the benefits of agentic AI. The process of testing, refining, and integrating should not be rushed – agentic AI isn’t going anywhere anytime soon, so it’s better to see what works for your business than to rush into a solution that isn’t fit for purpose. Slow and steady wins the race.
