
A Generational Shift: How AI Agents are Redefining Work at Banks 

By Adam Famularo, CEO, WorkFusion, a UiPath company

Across almost every industry, leaders are confronting the same realization: the next phase of transformation is not about adopting new technology. It is about rethinking how work itself is done. 

For years, most initiatives have focused on making people more productive with better tools, faster processes, and more data. What is emerging now is something more fundamental. More organizations are redesigning workflows around the idea that software, in the form of agentic AI, can take responsibility for substantial portions of work, with humans providing judgment, oversight, and accountability.  

This shift is not happening all at once. In many companies, it is experimental. But in a few demanding environments, it is already very real.  

AI Agents in Financial Crime Compliance 

A prime example is financial crime compliance in banking.  

Banking operates under constant scrutiny and mounting regulatory and operational pressure. Over the past few years, I have spent a lot of time with bank leaders who all describe the same tension. Financial crime compliance responsibilities keep expanding. Alert volumes keep rising. Investigations are getting more time-consuming. And yet the time, talent, and budget to manage all of it never seem to keep pace. In many financial institutions, more than 90 percent of alerts are false positives. Even so, every alert must be reviewed, documented, and defensible.  

For a long time, the response was predictable. Hire more people, outsource more work, and add more technology. The result was not a step change in effectiveness, but larger, more expensive, and more complex operations that were still under strain.  

For years that operating model didn’t change — until now. We are experiencing a generational shift in how financial crime compliance work gets done.    

More leaders are starting to see that AI is not simply another tool to help people work faster. It is becoming something that can take responsibility for defined pieces of work. In financial crime compliance, for example, this shows up as AI Agents that take on the repetitive, time-consuming work traditionally handled by Level 1 and Level 2 analysts and investigators. They gather information, apply policy logic, assemble evidence, and document what was reviewed and why. Humans remain responsible for judgment and final decisions.

This may sound like a subtle shift. In practice, it is not.  

Most large organizations have been built around the assumption that people do the work and systems support them. AI Agents invert that model for a growing class of structured, repeatable work. The system does the preparation. A human then reviews, decides, and escalates.  

Financial crime compliance is one of the first places where this model has proven itself at scale. I see this every day in my work: banks are using AI Agents across anti-money laundering (AML) and fraud, including sanctions screening alert review, transaction monitoring alerts and investigations, fraud investigations, Know Your Customer (KYC), enhanced due diligence, and adverse media monitoring, at volumes that would have been unthinkable to handle manually just a few years ago.

This is not an easy environment. The cost of mistakes is high. If this approach can work here, it is a strong signal that it can work in many other complex and regulated settings.  

We are already seeing what this looks like in practice. In large banks, AI Agents now handle more than a million alerts every day. This represents tens of thousands of hours of manual preparation work shifted away from people and toward machines.  

Increasing Quality, Consistency and Capacity in Regulated Environments 

The most important impact is not just speed or cost, although both matter. Investigations finish faster. Backlogs shrink. Documentation becomes more consistent. Teams can absorb growth without constantly adding headcount. Just as importantly, the compliance function becomes easier to govern and easier to explain to regulators. AI Agents function like your best analyst operating at scale, ensuring every review is done with precision.

There is a broader lesson here that extends well beyond financial services.  

In many regulated and operationally complex parts of the enterprise, the real bottleneck is no longer decision-making. It’s preparation, coordination, and documentation. These are the exact areas where AI agents are most effective. Trying to replace human judgment is neither necessary nor wise.   

Using AI to industrialize the work around judgment is where the real leverage lies.  

This also changes how leaders should think about trust and transparency. In regulated environments, it is not enough to know that a system is accurate. You have to be able to show how outcomes were produced. Systems designed as agents, operating within explicit rules and recording every step, are often easier to govern than tools that simply produce recommendations.  

I believe the next phase of enterprise AI will be defined by fleets of specialized agents that own specific workflows, operate inside control frameworks, and are measured by business outcomes. The winning model will not be 100% automation. It will be a clearer and more intentional division of labor between humans and machines.  

None of this makes people less important. It makes their time more valuable. When machines take on repeatable work, humans can focus on risk, interpretation, and accountability. This is where experience and judgment still matter most.  

Alert volumes will keep rising. Expectations will keep increasing. Cost pressure isn’t going away. Meeting these realities requires more than doing the same work with better tools.  

It demands rethinking how work itself gets done.  

AI Agents aren’t just another technology trend. They’re the start of a new way of organizing work. And in financial crime compliance, that future is already taking shape.    
