
Over the past decade, artificial intelligence has transformed day-to-day operations across industries, from healthcare to retail, streamlining processes and boosting efficiency.
However, for all its benefits, AI also has a dark side – one that fraudsters can’t get enough of. Because it can process information and produce content in a matter of seconds, generative AI in particular has inadvertently enabled fraudsters to ramp up both the scale and impact of their schemes, and to pursue new types of scams built on deepfake audio and video.
AI has also lowered fraud’s barrier to entry. Thanks to “off the shelf” AI models that handle the hard research and deployment work, technical expertise is no longer a prerequisite for becoming a fraudster.
AI-enabled fraud is growing fast, with Deloitte predicting that generative AI tools could drive fraud losses of up to $40 billion in the US alone by 2027.
As with any threat, education is the first step. To have any chance of winning the fight against fraud, we must first understand how fraudsters are exploiting AI to their advantage, and then look at how to tackle it head-on.
Hyper-customised and disturbingly real
Fraudsters are using AI in multiple ways, from deepfakes to voice cloning. What is often overlooked, however, is how AI is also bringing greater efficiency and pace to the most common digital scams.
Thanks to AI, scams have become industrialised: thousands of people can now be targeted with hyper-customised scams every day.
This new generation of scams utilises highly sophisticated and grammatically perfect messaging, tailored to information gleaned from victims’ profiles, social media activity, and even leaked private conversations. Moreover, these messages can be created in seconds and programmed to respond in real time, giving victims the impression they are talking to a real person or organisation.
It’s clear that fraud has moved beyond the humble beginnings of typo-riddled phishing emails sent from dodgy-looking email addresses.
Account takeovers on an industrial scale
On average, people reuse the same password for at least four accounts. For fraudsters, this is a gold mine.
When it comes to accessing existing accounts, fraudsters typically make use of email and password pairs they have obtained from data breaches or through phishing.
Then, relying on the fact that many of us use the same credentials for different accounts, they test these pairs across multiple user profiles and services.
This was once a manual process, but it is now practically “off the shelf”: pre-built fraud kits – equipped with AI-driven credential-stuffing tools, botnets, and phishing frameworks – are available to buy on the dark web.
This fraud-as-a-service (FaaS) model has drastically lowered the barrier to entry for cybercriminals, as little technical expertise is now required.
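For defenders, the same pattern is also a signal. The snippet below is a minimal, rules-based sketch of how a login service might flag credential stuffing; the event fields and thresholds are illustrative assumptions, not a production rule set.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative thresholds: one source IP hitting many distinct accounts
# with failed logins in a short window is a classic stuffing signature.
WINDOW = timedelta(minutes=10)
MAX_DISTINCT_USERS = 20
MAX_FAILURES = 50

def flag_credential_stuffing(events):
    """events: iterable of dicts with 'ip', 'username', 'success', 'timestamp' (datetime).
    Returns the set of source IPs whose recent failure pattern looks automated."""
    failures = defaultdict(list)  # ip -> list of (timestamp, username)
    flagged = set()

    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["success"]:
            continue
        ip = e["ip"]
        failures[ip].append((e["timestamp"], e["username"]))

        # Keep only failures inside the sliding window.
        cutoff = e["timestamp"] - WINDOW
        failures[ip] = [(t, u) for t, u in failures[ip] if t >= cutoff]

        distinct_users = {u for _, u in failures[ip]}
        if len(failures[ip]) > MAX_FAILURES or len(distinct_users) > MAX_DISTINCT_USERS:
            flagged.add(ip)

    return flagged
```

In practice, rules like this feed a machine-learning model alongside device fingerprints and breached-credential lists, but the underlying signal – one source trying many accounts – is the same.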
Fabricated identities
With AI, fraudsters can also create fake identities with ease.
Using AI-powered tools to source and generate realistic combinations of names, addresses, VAT or Social Security numbers, and financial histories, fraudsters can blend real and fake data to create ‘synthetic identities’ that pass identity verification checks.
AI improves the credibility of synthetic profiles over time by simulating normal consumer behaviour, such as building a credit history. As a result, fraudsters can secure loans, open credit lines, and engage in money laundering without triggering traditional fraud detection systems.
Also, since these synthetic identities don’t belong to any specific individual, there’s no single victim to report the fraud.
Alarmingly accurate deepfakes
While deepfake videos can take a long time to create, even with the help of AI, the potential rewards make them particularly attractive for fraudsters.
In these cases, criminals use AI-powered deep learning models to generate highly realistic videos of executives, politicians, or public figures – allowing them to impersonate key decision-makers with alarming accuracy. These manipulated videos are then used to request large money transfers, spread misinformation, or manipulate stock prices.
These deepfakes are disturbingly accurate. In February 2024, a corporate finance employee in Hong Kong received an urgent video call from their CFO. The face on the screen was familiar and the voice unmistakable. On instruction from the CFO, the employee authorised a $25 million wire transfer – only to discover later that the CFO had never made the call.
Elaborate voice cloning
Ever picked up a call from an unfamiliar number, only to be met with silence? There’s a chance a fraudster was on the other end, recording your voice. With just a few seconds of audio, AI can clone voices with near-perfect accuracy.
These voices can then be used to bypass voice authentication systems, create personalised messages to manipulate loved ones, orchestrate romance scams, or trick employees into transferring large sums of money directly into fraudsters’ hands.
A high-profile example of this type of scam took place in February 2025, when a prominent entrepreneur in Italy answered a call from what sounded exactly like the country’s Defence Minister. The request was urgent: send millions of euros to a foreign account to secure the release of kidnapped Italian journalists. But the real Minister never made the call.
The weapon and the shield
AI-driven fraud is becoming an international crisis, but it is possible to fight back. By combining the latest detection tactics with the same type of AI technology used by fraudsters, we can outsmart scammers and protect ourselves.
For example, one of the most effective countermeasures is AI-based Open Source Intelligence (OSINT), which automates the collection and analysis of vast amounts of publicly available data. These types of tools can identify deepfake anomalies, trace fraudulent transactions, and expose synthetic identities before it’s too late.
When it comes to deepfake audio and video scams specifically, AI-driven detection solutions can already analyse voice timbre, facial micro-expressions, and metadata inconsistencies. OSINT adds another critical layer of protection by analysing the email addresses, phone numbers and devices used by fraudsters, flagging contact details that do not match the person supposedly opening a new account or sending a request to unsuspecting victims.
By looking at social media activity, IP information and biometric data, AI-based OSINT can reveal inconsistencies in online behaviour and digital history. The same intelligence can also prevent account takeovers by assessing subtle user behaviours to distinguish real users from bots.
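The checks described above can be pictured as a simple risk score. The sketch below is a deliberately simplified, rules-based illustration of the kinds of signals an OSINT-driven system might weigh; the field names, weights and thresholds are hypothetical, not those of any particular product.

```python
def osint_risk_score(request):
    """Score a new account or payment request from 0 (low risk) to 1 (high risk).
    `request` is assumed to be a dict of pre-computed OSINT lookups."""
    score = 0.0

    # Contact details that don't line up with the claimed identity.
    if not request.get("email_matches_name", True):
        score += 0.3
    if request.get("phone_country") != request.get("claimed_country"):
        score += 0.2

    # A thin or brand-new digital footprint is typical of synthetic identities.
    if request.get("email_age_days", 0) < 30:
        score += 0.2
    if request.get("social_profiles_found", 0) == 0:
        score += 0.1

    # A device or IP already seen on other flagged accounts.
    if request.get("device_seen_on_flagged_accounts", False):
        score += 0.2

    return min(score, 1.0)


# Example: a week-old email address, a phone number from a different country
# and a device linked to earlier fraud push the score to the maximum.
print(osint_risk_score({
    "email_matches_name": False,
    "phone_country": "NG",
    "claimed_country": "GB",
    "email_age_days": 7,
    "social_profiles_found": 0,
    "device_seen_on_flagged_accounts": True,
}))  # -> 1.0
```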
Meanwhile, AI-driven chat analysis can detect subtle but suspicious linguistic patterns, allowing AI-powered social engineering scams to be intercepted before they reach potential victims.
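As a rough illustration of the linguistic signals such systems look for, the toy heuristic below checks a message for a handful of classic social-engineering cues. Real detection relies on trained language models rather than keyword lists; the patterns here are assumptions chosen purely for the example.

```python
import re

# Illustrative cues only: urgency, secrecy, payment pressure and credential
# requests are recurring features of social-engineering scripts.
CUES = {
    "urgency": r"\b(urgent|immediately|right away|within the hour)\b",
    "secrecy": r"\b(keep this (between us|confidential)|tell no one)\b",
    "payment": r"\b(wire transfer|gift cards?|crypto(currency)?|bank details)\b",
    "credentials": r"\b(password|verification code|one[- ]time code)\b",
}

def scam_cues(message: str) -> list[str]:
    """Return the names of the social-engineering cues present in a message."""
    text = message.lower()
    return [name for name, pattern in CUES.items() if re.search(pattern, text)]

print(scam_cues("This is urgent – send the wire transfer now and keep this between us."))
# -> ['urgency', 'secrecy', 'payment']
```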
Fighting AI with AI
AI has industrialised fraud, increasing its scale and impact and enabling fraudsters to swap previously laborious tactics for quick wins.
Rather than burying our heads in the sand, we should tackle the rise in AI fraud by looking at how AI itself can be built into fraud detection strategies and applying it creatively.
When it comes to fraud prevention, OSINT offers many capabilities that AI can accelerate at scale, flagging suspicious behaviour and user information that suggest bad actors are at play.
Fail to do so, and fraudsters will continue to exploit AI and get the better of us.