The fraud landscape has evolved into a sophisticated battleground between businesses and threat actors using advanced techniques. During busy months, fraud prevention measures that worked even a few years ago are now inadequate against a new generation of threats powered by artificial intelligence (AI) and synthetic identity creation.
With third-party fraud rising by 9.2% over the past three years, and synthetic and AI-enabled identities accounting for 42% of all identity fraud cases, the problem is clearly growing. Bad actors are becoming more sophisticated, making the fraud landscape ever more complex and difficult to navigate.
The high-volume pressure problem
Traditional identity verification was built for a different era of fraud. Document checks, basic data matching, and simple authentication questions worked well when criminals operated with stolen physical documents and limited technical sophistication, but today's fraud landscape bears little resemblance to that world.
During peak trading periods, from November to January, traditional identity systems face a perfect storm of pressures that expose their limitations. Transaction volumes increase significantly and operational teams are stretched. Traditional fraud detection systems force a binary choice, approve or decline, because they were not designed for the dynamic risk assessment that is now needed.
Historically, a single set of rules applied the same scrutiny to every application, regardless of context, behaviour, or the signals that separate genuine customers from fraudulent ones.
The new AI-powered threats
Generative AI is playing a central role in this shift to AI-based threat techniques. Synthetic fraud, deepfakes, and AI-generated imagery are becoming more widely accessible, allowing threat actors to create highly convincing identity assets at speed.
The rise of synthetic fraud represents a fundamental shift in how threat actors approach identity theft. Rather than stealing someone's complete identity, bad actors now construct entirely new personas by blending real and fabricated information. These synthetic identities are built deliberately, establishing online footprints and behavioural patterns that make them appear entirely legitimate and difficult to detect through traditional means alone.
Going further, modern deepfake technology can now create convincing identity documents, synthetic voice recordings, and video verification content. For businesses relying on document uploads or basic biometric checks, this is a significant threat. A bad actor can generate a realistic-looking passport or driving licence in minutes, complete with consistent details and appropriate security features. Voice-based authentication can be bypassed with short audio samples, and video verification can be spoofed with increasingly accessible technology.
What once required significant technical expertise and resources is now available through user-friendly platforms and services. Over a third (35%) of UK businesses reported being targeted by AI-related fraud in the first quarter of 2025, compared to just 23% the previous year. Deepfake technology enables sophisticated social engineering attacks, with threat actors using AI-generated voices or cloned video to conduct fraudulent video calls.
Building a multi-layered defence system
A significant issue with traditional identity verification is its static nature. Rules-based systems apply predetermined logic to each application: the system checks whether the document matches certain criteria, verifies whether data points align, and confirms whether the applicant exists in certain databases. When threat actors use new techniques, the system cannot adapt until the rules are manually updated by a human. This creates a permanent lag between how quickly fraud threats evolve and how quickly protection strategies respond.
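The static, binary nature of such a system can be illustrated with a minimal sketch. The rule names, fields, and decision logic below are illustrative assumptions, not any real vendor's API:

```python
# Hypothetical sketch of a static, rules-based identity check.
# Every rule must pass or the application is declined: a binary decision
# with no notion of context, behaviour, or graded risk.

STATIC_RULES = [
    lambda app: app["document_valid"],     # document matches expected criteria
    lambda app: app["data_points_match"],  # name/address/DOB align across sources
    lambda app: app["found_in_database"],  # applicant exists in reference data
]

def verify(application: dict) -> str:
    """Binary choice: approve only if all predetermined rules pass."""
    if all(rule(application) for rule in STATIC_RULES):
        return "approve"
    return "decline"

# A synthetic identity deliberately built to satisfy each static check
# sails straight through, because the rules cannot adapt on their own:
synthetic = {"document_valid": True, "data_points_match": True, "found_in_database": True}
print(verify(synthetic))  # "approve"
```

Until a human edits `STATIC_RULES`, a new fraud technique that satisfies the existing checks is invisible to the system.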
By the time a traditional system has been updated to catch one fraud technique, bad actors have already moved on to the next. During high-pressure periods, when system updates and rule changes are deliberately frozen to maintain stability, this lag creates significant exposure.
These vulnerabilities lead to substantial losses and measurable fraud increases. Facility takeover fraud has risen by 76%, SIM swap fraud has grown by 1,055%, and online retail has experienced a 75% increase in account takeovers. The problem only intensifies during high-pressure periods, which is why multi-layered defence systems, which remain resilient under stress, are essential.
A multi-layered defence system must be automated and intelligent, without adding customer friction. The first layer analyses device and location signals and behavioural patterns, whilst modern biometric identity checks detect deepfakes and confirm liveness, ensuring that a real person is present.
Continuous monitoring looks for signals that an account has been compromised, such as unusual transaction patterns or unexpected changes to contact details. This approach is particularly crucial for detecting facility takeover fraud, ensuring that even if credentials are compromised, fraudulent activity can be blocked before losses occur.
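A minimal sketch of this kind of monitoring, assuming a simple statistical baseline (the three-standard-deviation threshold and the contact-change rule are illustrative, not a real product's logic):

```python
# Hedged sketch of continuous account monitoring: flag activity that
# deviates sharply from the account's historical spending baseline, or
# that follows an unexpected contact-detail change (a takeover signal).

from statistics import mean, stdev

def is_suspicious(history: list, amount: float, contact_changed: bool) -> bool:
    if contact_changed:
        # Unexpected changes to contact details often precede a takeover.
        return True
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    # Flag transactions more than three standard deviations from normal.
    return abs(amount - mu) > 3 * (sigma or 1.0)

history = [20.0, 35.0, 25.0, 30.0]       # typical spend for this account
print(is_suspicious(history, 950.0, contact_changed=False))  # True
print(is_suspicious(history, 28.0, contact_changed=False))   # False
```

In practice the baseline would cover far richer features (merchant, geography, device), but the principle is the same: compromised credentials alone are not enough if the resulting behaviour does not match the account's history.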
Human expertise and escalation remain critical. Not all fraud can be detected automatically, so clear escalation paths allow both automated systems and customer-facing staff to flag suspicious activity for human review.
The convergence of sophisticated fraud techniques and elevated operational pressure creates an increasingly challenging risk environment for modern businesses. Traditional identity verification and single-point authentication are inadequate against new threats like synthetic identities, deepfakes, and AI-enabled fraud. Organisations must build resilient, multi-layered defence systems that remain effective under pressure whilst delivering seamless experiences to customers.

