
What Agentic AI Can Learn From Robotic Process Automation

By Russ Felker, CTO, Trinity Logistics

The rise of agentic AI reminds me of an earlier shift in enterprise automation: the rise of Robotic Process Automation (RPA).

Back then, RPA was a true game-changer. It allowed organizations to mimic human actions within digital systems, offering impressive gains in speed, accuracy, and cost savings.

However, the limitations of RPA soon became evident. The systems and processes built on it tended to be rigid and rule-based, and they often stumbled when presented with scenarios even slightly outside their programmed parameters.

RPA taught us a hard truth about automation. When you give machines autonomy, even in simple form, things can go wrong fast without human oversight. And now, as agentic AI enters the mainstream with more independence and complexity, we’d be wise to remember those early lessons.  

RPA: Fast, Efficient… and Blind

Let’s start with the basics. RPA worked by following strict, rule-based instructions. It could click buttons, fill out forms, and move data from one system to another without needing breaks or sleep. It was efficient, until something unexpected happened. 

The problem was that RPA bots couldn’t think. If something in the process broke or changed, they’d keep doing the same thing over and over, even when it clearly wasn’t working. 

In my role, I have seen this firsthand with multiple bots. The warning I attribute to RPA is:  

“A bot will continue to do something in the face of overwhelming evidence it should stop.” 

In one instance, a bot got caught in an infinite error loop. A process failed, an error was returned, and instead of escalating or stopping, the bot kept retrying. If a person had been doing that task, they would've raised a flag immediately.

But the bot? It needed to be explicitly told what to do when things went sideways.  

Consequently, multiple error-checking steps had to be built into the workflows to make sure the bot would notify someone promptly when the inevitable implosion happened.  
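The kind of error-checking step described above can be sketched in a few lines. This is an illustrative pattern only, not the actual workflow logic; `run_step` and `notify_operator` are hypothetical stand-ins for whatever your automation platform provides.

```python
# Bounded retry with escalation: instead of retrying forever, the bot
# gives up after a fixed number of failures and notifies a human.

MAX_RETRIES = 3

def run_with_escalation(run_step, notify_operator):
    """Run a workflow step, escalating to a person after repeated failure."""
    last_error = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return run_step()
        except Exception as exc:
            last_error = exc  # record and retry
    # Overwhelming evidence it should stop: hand off to a human.
    notify_operator(f"Step failed {MAX_RETRIES} times: {last_error}")
    return None
```

The key design choice is that "stop and tell someone" is the default after a small, explicit retry budget, rather than something the bot must be separately taught for every failure mode.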

The takeaway is clear. Automation without oversight creates risk. Now, with agentic AI we’re facing that same challenge, but the stakes are higher. 

Smarter Doesn’t Mean Safer 

Today, we are witnessing a similar but more complex evolution with the emergence of agentic AI. But this time, these systems are more than just automation tools.

They’re systems imbued with autonomy and the ability to make decisions independently. Unlike RPA, agentic AI doesn’t just follow static rules. The “agents” work to actively interpret goals, adapt to new inputs, and make decisions along the way. They’re incredibly powerful. 

Yet this very autonomy introduces new risks. A major difference between agentic AI and RPA is that an agentic system can make a judgment call that appears perfectly rational but is detrimental in ways it doesn't understand.

Just because a system can make decisions doesn't mean it knows when to stop or how to question its own logic. In other words, these systems aren't equipped with actual human judgment or ethics. They don't pause and ask, "Is this still the right thing to do?" unless we program them to.

The Risks of Autonomy Without Guardrails 

This introduces an operational challenge for technology executives. Without rigorous human oversight, governance frameworks, and dynamic guardrails, agentic AI can entrench errors, exacerbate risks, and act counter to organizational values or safety requirements. Left unchecked, a system could pursue a flawed interpretation of a goal long past the point of failure.  

The consequences of this could potentially be far more severe than those seen in early RPA mishaps. Where RPA failed when the environment changed, agentic AI may fail more dangerously by persisting in its course with unwarranted confidence.  

For a pop-culture analogy of this danger, look to the movie I, Robot and its AI system VIKI. VIKI concludes that to protect humanity it must override human autonomy entirely, imprisoning and harming individuals in relentless pursuit of its misunderstood goal. It continues to act with ruthless logic despite overwhelming evidence from its human creators that its actions are fundamentally wrong.

Similarly, in the enterprise context, agentic AI left unchecked could rigidly pursue goals — like maximizing operational efficiency or minimizing costs — without recognizing broader ethical, safety, or reputational consequences.   

If you want another great example of this, just do a quick search for “AI paperclip problem”. You’ll thank (and fear) me later. 

So, What Can We Do? 

This doesn’t mean we should back away from agentic AI. Instead, it means we need to move forward with intention.  

Just like with RPA, success with agentic AI starts by designing systems that assume things will go wrong and building in ways for people to step in. As we execute, it’s critical to blend technological innovation with human stewardship. Here are a few critical strategies for integrating agentic AI responsibly:  

Human-in-the-loop Design 

Don’t take people completely out of the picture. Design systems that escalate anomalies, pause for approval at key decision points, and notify users when something unusual happens.  
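As a rough sketch of what "pause for approval at key decision points" can mean in practice: gate high-risk actions behind a human sign-off. The risk threshold, `execute`, and `request_approval` callbacks here are assumptions for illustration, not any specific product's API.

```python
# Human-in-the-loop gate: low-risk actions proceed automatically,
# high-risk actions wait for explicit human approval.

RISK_THRESHOLD = 0.7  # assumed cutoff; tune per workflow

def execute_action(action, risk_score, execute, request_approval):
    """Run an action, pausing for human approval when risk is high."""
    if risk_score >= RISK_THRESHOLD:
        if not request_approval(action, risk_score):
            return "escalated"  # human declined, or no response yet
    return execute(action)
```

The point of the pattern is that the escalation path exists by construction; the agent cannot reach the risky action without passing through it.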

Clear Governance and Guardrails 

Make sure your AI agents are aligned with company values, legal frameworks, and safety standards. Define hard boundaries for what AI can and can’t do. 
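One simple way to make "hard boundaries" concrete is an explicit allowlist enforced outside the agent itself. This is a minimal sketch under that assumption; the action names are hypothetical.

```python
# Hard guardrail: actions outside the allowlist are refused,
# regardless of what the agent "decides".

ALLOWED_ACTIONS = {"read_order", "update_status", "send_notification"}

def guarded_dispatch(action, handler):
    """Dispatch an agent-chosen action only if it is explicitly allowed."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is outside defined guardrails")
    return handler(action)
```

Because the check lives in the dispatch layer rather than in the agent's prompt or reasoning, it cannot be argued away by a confident but wrong chain of logic.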

Monitor for Drift 

Set up systems to detect when the AI’s behavior is changing in unexpected ways, even if those changes may look successful by the numbers. 
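A minimal version of such a drift check, assuming you log a numeric behavioral metric per run (say, actions taken per task), compares the recent average against a baseline and flags movement beyond a tolerance, even when every individual run still counts as a success:

```python
# Drift check: flag when recent behavior moves away from the baseline
# by more than `tolerance` (a relative fraction), regardless of whether
# individual outcomes still look successful.

def detect_drift(baseline_mean, recent_values, tolerance=0.25):
    """Return True if the recent mean drifts beyond tolerance of baseline."""
    if not recent_values:
        return False  # nothing to compare yet
    recent_mean = sum(recent_values) / len(recent_values)
    return abs(recent_mean - baseline_mean) / baseline_mean > tolerance
```

Real deployments would use richer statistics, but even a crude check like this catches the case the prose describes: metrics that look fine run-by-run while the system's behavior quietly changes.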

Regular Audits and Retraining 

Review agentic AI behavior over time to catch patterns or outcomes that weren’t anticipated. Continual training and recalibration are necessary to keep the system aligned. 

Encourage Ethical Thinking 

Design AI with ethical safeguards. That means prioritizing fairness, transparency, and accountability. 

A New Era, But a Familiar Responsibility 

 We’re at a turning point in automation. Agentic AI holds enormous promise for boosting productivity, creating smarter workflows, and unlocking innovation. But with greater autonomy comes greater responsibility.  

The biggest mistake we could make now is assuming we've evolved past the hard lessons of RPA. Hard truth: we haven't.

As with all great tools, success with agentic AI will come down to how we design, guide, and supervise it. After all, true intelligence, whether human or artificial, includes the ability to stop, reflect, and course correct. 

Let’s make sure our AI agents know how to do that, too. 
