
Building resilient fraud prevention systems in the age of AI-powered threats

By Paul Weathersby, Chief Product Officer for Identity & Fraud at Experian UK&I

The fraud landscape has evolved into a sophisticated battleground between businesses and threat actors using advanced techniques. Fraud prevention measures that worked even a few years ago, particularly during busy trading months, are now inadequate against a new generation of threats powered by artificial intelligence (AI) and synthetic identity creation.

With third-party fraud rising by 9.2% over the past three years, and synthetic and AI-enabled identities accounting for 42% of all identity fraud cases, the problem is clearly growing: bad actors are becoming more sophisticated, making the fraud landscape ever more complex and difficult to navigate.

The high-volume pressure problem 

Traditional identity verification was built for a different era of fraud. Document checks, basic data matching, and simple authentication questions worked well when criminals operated with stolen physical documents and limited technical sophistication, but today’s fraud landscape bears little resemblance to that world. 

During peak trading periods, from November to January, traditional identity systems face a perfect storm of pressures that expose their limitations. Transaction volumes increase significantly, and operational teams are stretched. Traditional fraud detection systems force a binary accept-or-reject choice because they were not designed for the graded, dynamic risk assessment that is needed now.

Historically, a single set of rules applied the same scrutiny to every application, regardless of context, behaviour, or the signals that separate genuine customers from fraudulent ones.

The new AI-powered threats 

Generative AI is playing a central role in supporting this shift to AI-based threat techniques. Synthetic fraud, deepfakes, and AI-generated imagery are becoming more widely accessible, allowing threat actors to create highly convincing identity assets at speed. 

The rise of synthetic fraud represents a fundamental shift in how threat actors approach identity theft. Rather than stealing someone’s complete identity, bad actors now construct entirely new personas by blending real and fabricated information. These synthetic identities are built deliberately, establishing online footprints and behavioural patterns that make them appear entirely legitimate. This makes these identities difficult to detect through traditional means alone. 

Going further, modern deepfake technology can now create convincing identity documents, synthetic voice recordings, and video verification content. For businesses relying on document uploads or basic biometric checks, this is a significant threat. A bad actor can generate a realistic-looking passport or driving licence in minutes, complete with consistent details and appropriate security features. Voice-based authentication can be bypassed with short audio samples, and video verification can be spoofed with increasingly accessible technology. 

What once required significant technical expertise and resources is now available through user-friendly platforms and services. Over a third (35%) of UK businesses reported being targeted by AI-related fraud in the first quarter of 2025, compared to just 23% the previous year. Deepfake technology enables sophisticated social engineering attacks, with threat actors using AI-generated voices or cloned video to conduct fraudulent video calls.  

Building a multi-layered defence system 

A significant issue with traditional identity verification is its static nature. Rules-based systems apply predetermined logic to each application: the system checks whether the document matches certain criteria, verifies whether data points align, and confirms whether the applicant exists in certain databases. When threat actors use new techniques, the system cannot adapt until the rules are manually updated by a human. This creates a permanent lag between evolving fraud threats and the protection strategies meant to counter them.
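The static pattern described above can be sketched in a few lines of Python. This is purely illustrative; the rule names and checks are hypothetical, not any vendor's actual logic. The point is that every application faces the same fixed checklist, so an identity crafted to satisfy each rule passes untouched.

```python
# Minimal sketch of a static, rules-based identity check.
# All field names and rules here are hypothetical examples.

STATIC_RULES = [
    ("document_valid", lambda app: app.get("document_valid") is True),
    ("data_match", lambda app: app.get("name") == app.get("bureau_name")),
    ("on_register", lambda app: app.get("on_electoral_roll") is True),
]

def static_verify(application: dict) -> bool:
    """Pass only if every predetermined rule holds; nothing adapts to new tactics."""
    return all(rule(application) for _, rule in STATIC_RULES)

# A synthetic identity deliberately built to tick each box sails through:
synthetic = {"document_valid": True, "name": "A. N. Other",
             "bureau_name": "A. N. Other", "on_electoral_roll": True}
```

Because the logic is frozen until a human edits `STATIC_RULES`, any new fraud technique that satisfies the existing checklist goes undetected, which is precisely the lag the text describes.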

By the time a traditional system has been updated to catch one fraud technique, bad actors have already moved to the next. During high-pressure periods when system updates and rule changes are deliberately frozen to maintain stability, this lag leads to significant exposure. 

These vulnerabilities lead to substantial losses and measurable fraud increases. Facility takeover fraud has risen by 76%, SIM swap fraud grew 1,055%, and online retail experienced a 75% increase in account takeovers. This problem only intensifies during high-pressure periods, which is why multi-layered defence systems, built to stay resilient under stress, are needed.

A multi-layered defence system must be automated and intelligent, without adding customer friction. The first layer of defence analyses device, location, and behavioural signals, whilst modern biometric checks detect deepfakes and liveness detection ensures that a real person is present.
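One way to picture that first layer is as a risk score combining several weak signals into one decision. The sketch below is an assumption-laden illustration: the signal names, weights, and thresholds are invented for clarity, not drawn from any real product.

```python
# Hypothetical first-layer risk score over device, location, behavioural,
# and liveness signals. Weights and thresholds are illustrative only.

def first_layer_score(signals: dict) -> float:
    score = 0.0
    if signals.get("device_seen_before"):
        score -= 0.2          # a known device lowers risk
    if signals.get("geo_mismatch"):
        score += 0.4          # location inconsistent with history
    if signals.get("typing_cadence_anomalous"):
        score += 0.3          # behavioural biometrics flag
    if not signals.get("liveness_passed", True):
        score += 0.6          # no live person detected
    return max(0.0, min(1.0, score))

def route(signals: dict, block_at: float = 0.7, review_at: float = 0.4) -> str:
    s = first_layer_score(signals)
    if s >= block_at:
        return "block"
    if s >= review_at:
        return "step-up"      # extra check rather than a hard block
    return "allow"
```

The "step-up" outcome is what makes this layered rather than binary: most genuine customers pass silently, and only the ambiguous middle band is asked for more evidence.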

Continuous monitoring looks for signals that an account has been compromised, such as unusual transaction patterns or unexpected changes to contact details. This approach is particularly crucial for detecting facility takeover fraud, ensuring that even if credentials are compromised, fraudulent activity can be blocked before losses occur. 
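A monitoring layer of the kind described can be sketched as a function over an account's recent event stream. The event names and the 24-hour window below are assumptions chosen to illustrate the pattern, not a specific product's rules.

```python
# Illustrative post-login monitoring for takeover signals: a contact-detail
# change shortly followed by a large transfer, or repeated failed logins.
from datetime import datetime, timedelta

def takeover_signals(events: list) -> list:
    """Return flags for event combinations that commonly precede takeover."""
    flags = []
    kinds = [e["kind"] for e in events]
    if "contact_change" in kinds and "large_transfer" in kinds:
        change = next(e["at"] for e in events if e["kind"] == "contact_change")
        transfer = next(e["at"] for e in events if e["kind"] == "large_transfer")
        # Transfer within 24 hours of a contact change is a classic pattern
        if timedelta(0) <= transfer - change <= timedelta(hours=24):
            flags.append("contact-change-then-transfer")
    if kinds.count("failed_login") >= 3:
        flags.append("repeated-failed-logins")
    return flags
```

In practice such checks run continuously, so even if credentials are already compromised, the fraudulent activity itself trips a flag before funds leave the account.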

Human expertise remains critical. Not all fraud can be detected automatically, so clear escalation paths allow both automated systems and customer-facing staff to flag suspicious activity for human review.
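The escalation path itself can be as simple as a shared review queue that both automated detections and staff reports feed into. The sketch below is a hypothetical minimal version; case identifiers and reasons are invented.

```python
# Hypothetical escalation queue: automated systems and customer-facing staff
# both feed one review list for a human analyst.
import queue

review_queue = queue.Queue()

def escalate(case_id: str, source: str, reason: str) -> None:
    """Queue a case for human review, recording who raised it and why."""
    review_queue.put({"case": case_id, "source": source, "reason": reason})

escalate("C-1001", "automated", "velocity anomaly on new device")
escalate("C-1002", "staff", "caller could not answer security questions")
```

Keeping one queue for both sources matters: the analyst sees machine flags and human suspicions side by side, rather than in separate tools.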

The convergence of sophisticated fraud techniques and elevated operational pressure creates an increasingly challenging risk environment for modern businesses. Traditional identity verification and single-point authentication are inadequate against new threats like synthetic identities, deepfakes, and AI-enabled fraud. Organisations must build resilient, multi-layered defence systems that remain effective under pressure whilst delivering seamless experiences to customers. 
