
As in many industries, artificial intelligence (AI) in the world of candidate and employee screening represents a paradox: it is both a great catalyst for identity fraud and a potent tool to combat it. Bad actors now have access to generative platforms that can create entirely fabricated and falsified identities, but the very same technology can be leveraged by employers to detect the deception.
Few organisations are fully prepared to manage the scale of attacks they now face, and a worrying number are still reliant on largely manual processes. A manual approach to tackling AI-driven fraud will not work.
Consequently, adopting tech-driven compliance strategies is now essential for all employers looking to ward off the next generation of candidate and employee identity fraud. But how can they leverage AI and other tools effectively to protect themselves in this increasingly hostile environment?
Rising threat levels
The exponential development and rise of deepfake technology is reshaping every stage of the hiring journey, from initial outreach to final onboarding, creating new threats for employers to manage. AI offers a wide range of potential benefits to firms, but it is also driving much of this criminal activity. In fact, research shows a staggering 2,137% increase in deepfake-related fraud over the past three years among banks, insurers and payment services across the UK and Europe, with deepfakes now representing approximately 6.5% of all fraud cases.
Employers are facing significant challenges as a result. A 2025 report from CIFAS, the UK fraud prevention service, found a 28% increase in "insider threats" and employee fraud over the last two years. In addition, in 2024, over a third (38%) of fraud attempts took place within the first three months of employment. Fraudulent candidates, backed by criminal groups and, in some cases, even hostile states, are actively looking to secure jobs at companies of all sizes with the sole intent of leaking information, attacking sensitive databases and committing a range of illegal activities.
Most businesses are underprepared to tackle these challenges at a time when threat levels are rising. More than half of firms in both the US and UK say they have been targeted by AI-enabled or deepfake fraud, yet only 10% spotted the threat before it had an impact, highlighting the ease with which criminals can exploit processes when there is a lack of preparation.
Manual vs digital
A core challenge facing businesses is the relative infancy of AI: most firms are still reliant on legacy tools and processes that were not designed for a world where technology has advanced so far and highly effective forgeries can be produced in minutes. While organisations are manually reviewing verbal references or credentials, fraudsters are stitching together data fragments from social media profiles, old CVs and stolen voice samples to create highly convincing composite identities with the capacity to slip through superficial screening. With skills shortages and other factors placing pressure on hiring teams to fill roles quickly, the time available to thoroughly vet applicants is shrinking.
A combination of these factors means that potentially fraudulent candidates are streets ahead of the vast majority of employers. Many businesses are fighting a digital war with analogue tools, and the onus is on them to catch up and level the playing field.
Dual nature
The solution is staring many firms in the face: the technology and algorithms behind these attacks can themselves be leveraged to strengthen defences and fight off fraud. AI-driven liveness tests, for example, now require candidates to respond to random prompts on camera to ensure they are human, while facial recognition models confirm that the person on video matches official identity documents. In addition, digital scanning tools can examine passports, driving licences and certificates for microscopic inconsistencies, such as altered fonts, irregular holograms or manipulated PDF metadata, that would be invisible to the human eye, and in a fraction of the time. Equally, voice-biometric systems can analyse acoustic patterns and cadence to spot speech generated by text-to-speech engines or deepfake platforms. But the potential of this technology should not be seen as an invitation for employers to offload their HR teams and invest entirely in ChatGPT; a layered approach, combining people and technology with clear governance policies, is the best way to develop a truly complete compliance framework.
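To make the metadata checks described above concrete, the sketch below flags simple red flags in a PDF's document information, such as a modification date that precedes the creation date, or a producer string left behind by a general-purpose editing tool. It is a minimal illustration using the open-source pypdf library; the file name and the watchlist of editor strings are assumptions for the example only, and a real screening platform would apply far deeper forensic analysis.

```python
# Minimal sketch: flag suspicious metadata on a submitted PDF credential.
# Assumes the pypdf library (pip install pypdf); the file path and the
# producer watchlist below are illustrative, not a definitive rule set.
from pypdf import PdfReader

# Producer strings often left behind by general-purpose editing tools
# (hypothetical watchlist for illustration only).
EDITOR_HINTS = ["photoshop", "word", "canva", "ilovepdf"]

def metadata_red_flags(path: str) -> list[str]:
    """Return human-readable warnings about a candidate's document."""
    flags = []
    meta = PdfReader(path).metadata
    if meta is None:
        return ["document carries no metadata at all"]

    created, modified = meta.creation_date, meta.modification_date
    # A modification timestamp earlier than creation suggests tampering.
    if created and modified and modified < created:
        flags.append(f"modified ({modified}) before created ({created})")

    producer = (meta.producer or "").lower()
    if any(hint in producer for hint in EDITOR_HINTS):
        flags.append(f"produced by a general-purpose editor: {meta.producer!r}")
    return flags

if __name__ == "__main__":
    for warning in metadata_red_flags("candidate_certificate.pdf"):
        print("WARNING:", warning)
```

Checks like these are cheap to run on every uploaded document, which is why they sit well as an automated first pass ahead of human review rather than a replacement for it.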
Keep pace with the market
Even firms that feel they are perfectly set up to identify fraudulent candidates or employees can still face challenges, and all employers must continually review their policies and risk thresholds to protect themselves in the future. The pace of development in artificial intelligence and other emerging technologies means that continuous monitoring of the broader AI ecosystem, including emerging generative models, decentralised identity solutions and zero-knowledge proofs, is now critical, preparing organisations to adopt innovative defence strategies before they become mainstream and less effective.
The race between fraudsters and businesses will only continue to accelerate, but those employers that recognise the potential of AI and other technology to protect themselves stand to prosper and will be able to turn their preparation into a proactive competitive advantage. By weaving digital tools into their vetting and background screening programmes, businesses can safeguard themselves from potential fraud, invest with confidence and mitigate the risks in their recruitment activity.



