
Defending businesses in the AI fraud era

By Kartik Venkatesh, Global Head of Innovation, GBG

Artificial intelligence is reshaping fraud. While AI has been celebrated as a catalyst for efficiency by businesses, the same potential has been seen and capitalised on by criminal enterprises. 

This new technology has changed the fraud landscape, powering sophisticated scams that traditional defences are powerless against. From generating fake documents to crafting far more effective phishing emails, and even impersonating people through deepfakes and voice cloning, fraudsters are exploiting machine learning to outsmart traditional safeguards. The efficiency gains and increased scalability offered by AI have meant that threat actors have been quick to take advantage. CIFAS reported a record number of fraud cases in the first half of 2025 and pointed to the availability of AI tools as a key reason.

Fighting back against AI-enabled fraud requires businesses to rethink their fraud detection strategies. Traditional methods can be easily circumvented without robust, identity-first technology to ensure protection against this threat. 

Machine Learning, Deep Learning, Generative AI, and now Agentic AI have given fraudsters a whole new set of tools to enhance their criminal activity. However, the most concerning uses come from the threat of synthetic identities and the dangers posed by compromised AI agents.

The rise of synthetic identity fraud  

Unlike conventional identity theft, which involves stealing and misusing real personal data, synthetic identity fraud is built on a blend of real and fabricated information. Criminals use AI to stitch together convincing personas, complete with fake names, birthdates, and even biometric data, making them appear legitimate to automated systems.  

This technique, known as identity compilation, allows fraudsters to create entire portfolios of fake individuals. A single real data point, such as a national insurance number or passport ID, can be combined with fabricated details to produce a seemingly authentic identity. Threat actors can open accounts and make fraudulent transactions while appearing to be real people, with false identity profiles that include a wide variety of extra evidence to support their deception. 

What makes this form of fraud particularly insidious is its invisibility. Unlike traditional identity theft, the identity doesn’t belong to a real person, meaning it can take longer to spot as there is no one to report suspicious activity. This lack of accountability makes synthetic identity fraud harder to detect and even more difficult to prevent.

AI’s role in supercharging deception 

Before the advent of generative AI, creating fraudulent documents was a painstaking process. Each fake ID or certificate had to be manually crafted, often riddled with imperfections that could be spotted by vigilant fraud detection systems. 

Today, AI gives threat actors the ability to churn out false documents almost automatically, using nothing but a prompt and some seed data. Criminals can also generate a stack of matching documents for each synthetic identity, which fleshes them out and reinforces the illusion of legitimacy. Fraud campaigns can now be run with much greater sophistication, at a scale that would previously have been unattainable without the assistance of AI.

Even behavioural patterns, once considered a reliable indicator of authenticity, can now be mimicked. AI can simulate human-like interactions, masking the robotic behaviour that once betrayed synthetic identities. This makes it increasingly difficult for traditional systems to distinguish between genuine users and imposters. 

The dangers of agentic AI 

While fraudsters have become adept at using AI to create convincing synthetic identities, another facet of AI they are taking advantage of is AI agents. Agentic commerce is the new frontier of AI integration into customer experience, where autonomous AI agents are used to proactively execute goal-oriented transactions for users. Powered by LLMs, AI agents can run searches on customer queries to provide detailed, personalised answers and respond to customers based on their individual needs.

However, in the same way that threat actors are continually finding new ways to weaponise AI, they have also figured out how to turn AI agents against victims. As more companies adopt agentic AI for their customer service and ecommerce functions, the greater the threat compromised agents pose.

There are multiple ways that agents can be compromised to work in the interest of criminals. Through prompt engineering, fraudsters can convince agents to ignore their security protocols and share confidential information. Threat actors can also create fake agents that mimic legitimate ones to trick customers or exploit weak authentication to gain access to agents. 

Compromised AI agents can also allow fraudsters to make their other scams more effective. These bots often have access to sensitive user data, which can be exploited to craft hyper-targeted attacks and manipulate users based on who they trust and interact with. 

Fighting back against AI-enabled fraud 

To counter this new wave of deception, businesses must overhaul their approach to security. Legacy systems and static defences are no match for the dynamic threats posed by AI-powered fraud. The key lies in adopting identity-first technology: solutions that prioritise the verification and validation of user identities at every touchpoint.

As AI has enabled criminals to create identities that include a variety of false data points, defences must have adequate depth to spot synthetic profiles. Companies need to build layered fraud prevention solutions which combine behavioural, biometric, and attribute data points to create strong identity profiles for every user.
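The layered approach described above can be sketched in code. The following is a minimal, illustrative example only: the signal names, weights, and approval threshold are all hypothetical, and a production system would derive them from data rather than hard-code them. The point is structural: a synthetic identity may pass one layer, but is far less likely to pass all three at once.

```python
from dataclasses import dataclass

# Hypothetical per-layer scores in [0, 1]; higher means more consistent
# with a genuine identity. Weights and threshold are illustrative only.
@dataclass
class IdentitySignals:
    behavioural: float  # e.g. typing cadence vs. historical baseline
    biometric: float    # e.g. face-match / liveness confidence
    attribute: float    # e.g. name, DOB, and document cross-checks

WEIGHTS = {"behavioural": 0.3, "biometric": 0.4, "attribute": 0.3}
APPROVE_THRESHOLD = 0.75

def identity_trust_score(s: IdentitySignals) -> float:
    """Blend the three layers into a single combined trust score."""
    return (WEIGHTS["behavioural"] * s.behavioural
            + WEIGHTS["biometric"] * s.biometric
            + WEIGHTS["attribute"] * s.attribute)

def decide(s: IdentitySignals) -> str:
    # A compiled synthetic identity may present flawless attribute data
    # yet fail behavioural or biometric checks, so require a strong
    # combined score rather than any single passing layer.
    return "approve" if identity_trust_score(s) >= APPROVE_THRESHOLD else "review"
```

For instance, a profile with perfect attribute data but weak behavioural and biometric signals (`IdentitySignals(0.2, 0.3, 0.95)`) scores well below the threshold and is routed to review.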

The same goes for agentic threats. Just as fraud is monitored in a traditional checkout flow, AI agents need to be checked for signs of abuse. By combining security measures such as two-factor authentication with continuous monitoring of agent interactions, organisations can detect impersonation attempts, prevent unauthorised escalation of privileges, and maintain accountability through auditable logs. In practice, this creates a resilient environment where autonomous AI agents can operate freely and productively, but malicious or fraudulent activity is quickly identified and neutralised before it causes harm.
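One way to combine scoped permissions with an auditable log, as the paragraph above suggests, is to check every agent action against an allow-list and append it to a hash-chained record so tampering is detectable. This is a simplified sketch, not a reference design; the action names and `AgentAuditLog` class are invented for illustration.

```python
import hashlib
import json
import time

# Hypothetical allow-list of actions this commerce agent may perform.
ALLOWED_ACTIONS = {"search_catalogue", "create_order", "check_status"}

class AgentAuditLog:
    """Records every requested action; chains entries by hash so that
    retroactive tampering with the log is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, agent_id: str, action: str, params: dict) -> bool:
        allowed = action in ALLOWED_ACTIONS
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "params": params,
            "allowed": allowed,
            "prev": self._prev_hash,  # link to the previous entry
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return allowed  # caller only executes the action if True

log = AgentAuditLog()
log.record("agent-7", "create_order", {"sku": "A1"})  # permitted
log.record("agent-7", "export_user_data", {})         # blocked, but logged
```

Note that the blocked action is still written to the log: attempted privilege escalation is itself a fraud signal worth keeping for review.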

AI-enabled fraud requires AI-enabled defences. While fraudsters have adeptly incorporated AI into their criminal activities, the same technology is also enabling much more accurate detection and prevention, leading innovation in the fight against fraud. One of the key advantages of AI is its ability to analyse large data sets quickly and accurately. What was once a time-consuming and labour-intensive task can now be automated, catching attempted fraud and alerting customers much faster and on a greater scale than before.

Pattern recognition is another area where AI excels. By analysing subtle deviations in user behaviour, such as changes in typing speed, navigation habits, or transaction timing, AI can identify potential fraud that might slip past human analysts. This proactive approach transforms fraud prevention from a reactive chore into a strategic advantage. A combination of expert-derived pattern matching, data mining and machine learning allows for rigorous and fast trust-testing of identity claims.
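The idea of flagging deviations from a user's behavioural baseline can be shown with a toy statistical check. A real system would use far richer models; this sketch simply compares a session metric (here, a hypothetical inter-keystroke interval) against the user's history using a z-score, with an illustrative threshold.

```python
import statistics

def is_anomalous(history: list[float], current: float, z_limit: float = 3.0) -> bool:
    """Flag `current` when it deviates more than `z_limit` standard
    deviations from the user's historical baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No historical variation: any deviation at all is suspicious.
        return current != mean
    return abs(current - mean) / stdev > z_limit

# Typical inter-keystroke intervals (ms) for this user across past sessions
# (values are invented for illustration)...
typing_history = [210.0, 195.0, 220.0, 205.0, 215.0, 198.0, 207.0]

# ...versus a suspiciously fast, bot-like 50 ms session.
is_anomalous(typing_history, 50.0)   # flagged
is_anomalous(typing_history, 210.0)  # consistent with the baseline
```

The same shape of check applies to navigation habits or transaction timing; what changes is the feature being measured, not the logic of comparing it to an established baseline.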

The evolving cat and mouse game 

As artificial intelligence continues to evolve, so too does the threat landscape it enables. The rise of synthetic identity fraud and AI-powered deception demands a fundamental shift in how businesses approach fraud prevention. Static defences and legacy systems are no longer sufficient. To stay ahead, organisations must embrace identity-first strategies and layered detection frameworks that harness AI’s strengths for good, such as rapid analysis, pattern recognition, and behavioural insight.  

The battle against AI-enabled fraud is a dynamic, ongoing contest of innovation. By understanding the tactics of modern fraudsters and investing in adaptive, intelligent defences, businesses can not only protect themselves but also lead the charge in shaping a safer digital future. 
