
How Agentic AI is Helping Financial Services Organisations Fight Fraud

By Federico Valentini, Head of Threat Intelligence at Cleafy

In 2024, fraud losses across UK banking reached £722 million. It’s a massive problem that needs addressing, but these types of attacks have been hard to protect against using traditional defences that lack scalable, intelligent automation. Now it’s possible, thanks to agentic AI. 

Agentic AI: adding to the defences for financial fraud teams  

Most fraud doesn’t break banking systems; it uses them perfectly. The session is authenticated. The transaction is approved. Everything looks fine until the money moves. That’s the problem. 

Traditional defences focus on the outer edge – firewalls, access controls, and multi-factor authentication (MFA). But these offer little protection once an attacker is authenticated. From that point, attackers often have a clear path, and much of today’s fraud happens inside the session, long before payment. 

Unlike traditional AI models, agentic AI builds on advances in intelligent automation to work autonomously and proactively, with only limited human supervision. It is designed to achieve goals – not just follow rules – and to carry out complex sequences of activities, all of which makes it ideal for delivering automation in fast-moving situations like scam prevention and threat detection in banking and financial services. 

When designed with purpose, agentic AI introduces a new layer of defence by enhancing fraud teams with intelligent automation.  

Agentic AI coordinates signals across systems, adapts rules to emerging threats in real-time, and streamlines analyst workflows. Rather than replacing core detection methods like in-session monitoring or telemetry capture, agentic AI acts as a force multiplier. It helps fraud teams manage scale, complexity, and decision fatigue with greater precision. 

Agentic AI: a new solution for a significant problem 

UK Finance, the trade association for the UK banking and financial services sector, recorded 3.13 million cases of unauthorised fraud in 2024, where money is taken from someone’s account by a third party, without the account holder’s consent. It totalled £722 million – a 2% increase on 2023. 

Scams like Authorised Push Payment (APP) fraud – where people are tricked into transferring money to fraudsters – remain high too. APP losses were over £450 million in 2023. And with new rules requiring banks to repay up to £85,000 to a consumer affected by this type of scam, this is a pressing issue for the financial services sector.  

To many businesses, APP scams seem unsolvable. But they’re not. Although a phone call is a central feature of an APP attack, the call isn’t the start. The scam begins long before, with reconnaissance and setup, and this creates signals that can be detected in the session – or across prior ones. 

So why agentic AI? 

APP scams don’t prove that the system is broken – just that it’s blind. Many of the steps in a fraud attack, like reconnaissance, session manipulation, and remote access, leave detectable signals. The scammer can pass through perimeter security, but with high-quality telemetry and well-placed detection models, these signals become visible – and actionable. 

Real-time session intelligence is key. By capturing and analysing telemetry from the full user journey, banks can spot fraud signals before the payment is made, giving them time to intervene.  
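As a purely illustrative sketch of that idea – the event names, risk weights, and threshold here are hypothetical, not a description of any vendor's product – in-session telemetry analysis can be as simple as scanning the ordered event stream for known risk markers before the payment step is reached:

```python
from dataclasses import dataclass

# Hypothetical risk weights for in-session signals that often precede
# an APP scam: remote-access tooling running during the session, a
# brand-new payee added just before a transfer, rushed navigation.
RISK_WEIGHTS = {
    "remote_access_detected": 0.6,
    "new_payee_added": 0.3,
    "unusual_navigation_speed": 0.2,
}

@dataclass
class SessionEvent:
    kind: str        # e.g. "login", "new_payee_added", "payment_initiated"
    timestamp: float  # seconds since session start

def session_risk(events: list[SessionEvent]) -> float:
    """Accumulate risk from signals seen *before* payment is initiated."""
    score = 0.0
    for event in events:
        if event.kind == "payment_initiated":
            break  # only pre-payment signals count toward intervention
        score += RISK_WEIGHTS.get(event.kind, 0.0)
    return round(min(score, 1.0), 2)

events = [
    SessionEvent("login", 0.0),
    SessionEvent("remote_access_detected", 12.5),
    SessionEvent("new_payee_added", 40.1),
    SessionEvent("payment_initiated", 55.0),
]
print(session_risk(events))  # 0.9 -> high enough to pause the payment
```

The point of the sketch is the ordering: because the risky signals appear before `payment_initiated`, the bank has a window in which to intervene rather than reimburse.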

In reality, this means that banks need automation that understands context, orchestrates decisions, and prioritises where humans should focus. Until recently, this has been hard to achieve. But agentic AI now makes this possible. 

Agentic AI helps fraud analysts swiftly identify fraud patterns by continuously monitoring and managing large datasets, and combining them with other sources of intelligence and insight. It can reduce noise, pinpoint anomalies, and deliver enhanced data analysis.  

Agentic AI learns from each new threat. And it works autonomously, freeing up people to manage the critical tasks that need human input. 

Supporting real-time session intelligence with agentic AI 

New regulations place a greater duty of care on the financial institution to protect its customers, many of whom feel the lasting impact of a scam. They also mean a greater financial risk to the institution, as compensation payouts soar. 

Central to the future of fraud prevention is real-time session reconstruction – understanding what’s happening as the fraud unfolds, not after the fact. 

Using modular AI models, or micro-models, enables banks to spot specific signs of fraud as they appear. These signals combine into a full picture of the session, enabling early intervention. Agentic AI makes that happen faster and in more detail, with fewer false positives – the unwarranted blocks that halt legitimate transactions and damage the customer experience. 
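A minimal sketch of the micro-model pattern – the detectors and combination rule below are assumptions for illustration, not Cleafy's implementation – is a set of small, single-purpose scorers whose outputs are fused into one session view, with the per-model breakdown preserved for analysts:

```python
from typing import Callable

# Each micro-model scores one narrow signal over session data.
# These detectors are hypothetical examples, not a real product's models.
def detect_remote_access(session: dict) -> float:
    return 0.9 if session.get("remote_tool_running") else 0.0

def detect_velocity_anomaly(session: dict) -> float:
    # Flag sessions that reach the payment screen implausibly fast.
    return 0.7 if session.get("seconds_to_payment", 999) < 20 else 0.0

def detect_new_device(session: dict) -> float:
    return 0.4 if session.get("device_first_seen") else 0.0

MICRO_MODELS: list[Callable[[dict], float]] = [
    detect_remote_access,
    detect_velocity_anomaly,
    detect_new_device,
]

def assess_session(session: dict) -> tuple[float, list[str]]:
    """Fuse micro-model scores; keep the breakdown so the decision is explainable."""
    fired = [(model.__name__, model(session)) for model in MICRO_MODELS]
    score = max(s for _, s in fired)  # worst single signal drives the decision
    reasons = [name for name, s in fired if s > 0]
    return score, reasons

score, reasons = assess_session(
    {"remote_tool_running": True, "device_first_seen": True}
)
print(score, reasons)  # 0.9 ['detect_remote_access', 'detect_new_device']
```

Keeping the `reasons` list alongside the fused score is what lets an analyst (or an auditor) see *why* a session was flagged, rather than just that it was.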

Helping fraud teams move faster, with less noise 

Good fraud teams know what to do. The challenge is scale: case volumes, edge cases, and pressure to respond quickly. When overwhelmed with alerts or manual reviews, the important signals can be missed. That’s where agentic AI helps. It doesn’t take over, but knows when to step in and when to step back. 

Unlike basic automation, which accelerates rote tasks, agentic AI is goal-driven. It understands intent, reacts in real-time, and logs its decisions so its actions are explainable and auditable. But more automation doesn’t guarantee better outcomes. If misapplied, it can misjudge intent, act opaquely, and damage trust. 

The solution isn’t less automation; it’s smarter automation. Agentic systems must be built with tight guardrails, transparent logs, and escalation protocols that hand control back when uncertainty rises. That’s how you scale trust, not just output. 
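One common way to implement that hand-back rule – sketched here under assumed thresholds, not as any specific product's logic – is a confidence band: the agent acts autonomously only when the risk score is decisive, escalates to a human otherwise, and writes an auditable record either way:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("fraud-agent")

# Illustrative thresholds: scores between the two bands are treated
# as uncertain, and the agent hands control back to an analyst.
BLOCK_ABOVE = 0.85
ALLOW_BELOW = 0.15

def decide(session_id: str, risk_score: float) -> str:
    """Act only at high confidence; otherwise escalate. Always log."""
    if risk_score >= BLOCK_ABOVE:
        action = "block"
    elif risk_score <= ALLOW_BELOW:
        action = "allow"
    else:
        action = "escalate_to_analyst"  # uncertainty -> human takes over
    # Transparent, auditable decision trail for every outcome.
    log.info(json.dumps(
        {"session": session_id, "risk": risk_score, "action": action}
    ))
    return action

print(decide("s-001", 0.92))  # block
print(decide("s-002", 0.05))  # allow
print(decide("s-003", 0.50))  # escalate_to_analyst
```

The design choice is that the middle band defaults to a human, not to action: widening or narrowing it is how an institution tunes the trade-off between autonomy and oversight.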

Fraudsters have embraced automation 

Agentic AI excels at orchestrating actions in fast-moving situations, helping speed up decisions while ensuring humans remain involved when their expertise matters most. And this is key to fighting financial fraud. 

But even as banks embrace agentic AI, attackers are doing the same. Social engineering is scaling, not just through scripts and playbooks, but with agentic AI that adapts tone, timing, and tactics in real-time. What was once a one-to-one scam is becoming a one-to-many operation. 

Fraudsters are scaling with AI. The only viable defence is to match them – speed with speed, signal with signal – with intelligent systems that can see what they see and act before the damage is done. 
