AI is developing at an astounding rate. In the past year alone we’ve seen multiple leaps forward in applications of generative AI, and future uses seem limitless – 92% of Fortune 500 companies already use ChatGPT in some capacity. Unfortunately, though, with the good comes the bad: criminals are now capitalising on the rollout of AI technologies too, driving a convergence of fraud and cybercrime.
To set the scene: in 2023, criminals stole £1.17bn through online fraud. The rampant use of AI in such scams – particularly chatbots and deepfakes – will continue to plague businesses as the technology evolves.
Businesses must understand the landscape in which they’re operating to stop crime in its tracks.
The evolution of online crime
In the early days of the internet, online crime was relatively simple and sat firmly at the fraud end of the scale.
Fraud, as seen in the days of “Nigerian prince”-esque scams, was typically opportunistic and reactive, and scammers were relatively limited in the damage they could cause. Now, however, we’re seeing the rise of cybercrime that is much more serious and far harder to stop. Rather than chasing a quick buck, fraudsters now treat crime as a business, fighting tooth and nail to find and exploit weaknesses.
Online retail fraud is no longer the province of hoodie-wearing lone wolves working in isolation. In recent years, large, sophisticated, and global criminal rings with expertise in e-commerce, fraud prevention, fulfilment, and logistics have set up shop in Eastern Europe and Southeast Asia to launch attacks around the world.
Retail is a great example of the industrialisation of fraud. We’re seeing a higher concentration of non-payment forms of fraud: cases in which a person impersonates a legitimate customer, claims their product was lost or damaged, and pockets the refund while keeping the product, leaving the retailer to cover the cost. Return fraud works similarly, with criminal rings, or even wayward consumers, returning knock-offs or items other than the original product and securing a refund before the deception is discovered.
As these types of scams become more commonplace, they also become easier to detect and address, which in turn pushes fraudsters to look for other means of attack.
Like air in a squeezed balloon, fraud has far from disappeared; it has simply shifted to other points across businesses’ systems. It’s this environment that has spurred the convergence of fraud and cybercrime into a single threat.
Historically, fraud has been opportunistic while cybercrime has been more proactive and strategic. However, as fraudsters organise into structured attack teams, the lines between these two categories have blurred.
When fraud becomes cybercrime
While serious cyber breaches are an obvious threat to businesses (and the wider global economy), the prevalence and impact of fraud should not be underestimated.
These scams often revolve around fraudsters posing as a real person or organisation. For example, a fraudster might send an email pretending to be from His Majesty’s Revenue and Customs (HMRC), claiming the recipient is owed a tax rebate. The scam tempts the target with money but insists they act immediately and enter their card details to receive payment; as a result, people willingly surrender those details to fraudsters. Another example is a person using a stolen or lost card to make a claim from an online retailer, demonstrating the reactive nature of early fraud.
By today’s standards, the crimes above are relatively easy to spot, and in the early days of online fraud, it was easy enough to block the email or phone number of a suspected scammer. However, now that cybercrime has become an organised enterprise, we’re seeing full syndicates whose livelihood depends on the continual deception of consumers and businesses. This is what is meant by the move from fraud to cybercrime – it’s not just opportunistic, but highly planned.
One recent example is the emergence of a Southeast Asian fraud ring, a syndicate that stole $660mn in laptops, cell phones, computer chips, gaming devices, and other goods in November 2022 alone. Despite being identified and having many of its attempts stopped, the group continues to evolve, adopting new technologies and approaches.
An example closer to home is Russia-based Qilin, a criminal organisation that claimed credit for the attack on the NHS’s partner Synnovis in early June 2024, which severely disrupted blood testing and blood transfusion services. This event, and others like it, illustrates how severe cybercrime can become if not stopped.
As fraud becomes more efficient, agile, and proactive, criminals will only continue to seek out weaknesses in security systems, data repositories, or customer pools. With all that to contend with, it’s clear that businesses need to get (and stay) ahead to defend themselves and prepare for attacks.
Shining a light on the dark side of AI
The realism of AI-enabled fraud is the central conundrum in countering such attacks. AI chatbots, for example, can mimic text and email patterns almost perfectly. They are also fed huge volumes of data, allowing them to “learn” far more quickly and effectively than humans, meaning weaknesses in a scam are identified and corrected faster than defenders can spot them.
Even more advanced than chatbots are deepfakes: technology that recreates a person’s likeness and voice. The technology is extremely impressive and has myriad potential future uses. Unfortunately, though, fraudsters are using it today to create convincing doubles of real people within businesses.
These schemes work by playing on the most common flaw in cybersecurity: human error.
While cybercriminals can’t easily guess passwords or get hold of other sensitive information, they can rely on customers’ or employees’ lack of security knowledge to catch them unawares. With AI empowering fraudsters to impersonate friends, loved ones, colleagues, or bosses, gaining access to personal details and data is easier than ever before.
In these examples we see the evolving nature of cybercrime reflected. Just as reactive fraud has given way to proactive crime, blanket scattergun emails are being replaced by targeted attacks on individual people or companies. These techniques are alarming precisely because they are likely to be different for every target, rather than something generic and preventable.
AI helps fraudsters overcome the shortcomings of previous fraud schemes, and the technology is so affordable that almost anyone could use it to set up a criminal enterprise. Add to that the speed at which the technology is improving, and the ‘good guys’ are at a clear disadvantage.
Fighting back with AI
Dealing with the issue of AI fraud requires a two-pronged solution: one, using AI to fight back; and two, using an intelligence team to support the AI by feeding back relevant, accurate, and timely data.
Using AI to counter AI-led fraud is a simple but effective approach, for all the same reasons the technology works for fraudsters: it is affordable, it advances quickly, and its range of use cases is limitless. In much the same way AI can learn to correct mistakes from the data it is given, it can learn to identify fraud from data. AI as a fraud solution is fighting fire with fire; it can identify fraud trends far faster than humans can, and deal with them more efficiently.
The best part about AI as a solution is its capacity to learn. Fed the right data, it becomes faster at identifying the common traits of scams and scammers, which in turn increases its ability to spot crime before it happens. This creates a feedback loop in which counter-fraud AI tools constantly improve, with each confirmed case of fraud sharpening the next round of detection.
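To make that loop concrete, here is a minimal sketch of how such a system might work. It assumes a scikit-learn-style classifier, and the transaction features (order value, account age, prior refund count) are hypothetical stand-ins for the far richer signals a real deployment would use.

```python
# Minimal sketch of a fraud-detection feedback loop.
# Assumptions: scikit-learn is available, and the three features
# (order_value, account_age_days, refund_count) are hypothetical
# stand-ins for whatever signals a real retailer would track.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Historical transactions with confirmed outcomes (1 = fraud, 0 = legitimate).
X_train = np.array([
    [120.0, 900, 0],   # long-standing account, no refund history: legitimate
    [950.0,   2, 4],   # brand-new account, many refunds: confirmed fraud
    [ 35.0, 400, 1],   # legitimate
    [780.0,   5, 6],   # confirmed fraud
])
y_train = np.array([0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an incoming transaction and flag it if the estimated risk is high.
incoming = np.array([[890.0, 3, 5]])
risk = model.predict_proba(incoming)[0, 1]
if risk > 0.8:
    print(f"Flag for review (estimated fraud risk: {risk:.2f})")

# The feedback loop: once the outcome is confirmed, the new example joins
# the training data and the model is retrained, so the next version has
# already "seen" this pattern.
X_train = np.vstack([X_train, incoming])
y_train = np.append(y_train, 1)  # outcome confirmed as fraud
model.fit(X_train, y_train)
```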
However, as good as AI is, it still needs humans to operate and manage its application. Much as fraud syndicates use AI to scam businesses, businesses need intelligence teams to fight back. These teams’ analysts can interpret the data AI surfaces, giving businesses the wider context in which fraud is happening: confirming the form it takes (chargebacks, returns fraud, and so on) and what can be done to protect against it.
Machine learning and AI can recognise patterns indicating fraud incredibly quickly and at impressive scale. But keeping a human in the loop allows experts to apply intelligence and intuition to detect new iterations and tactics when fraud rings counter preventative strategies. These context clues can then be fed back into the AI alongside the data it already uses, extending the feedback loop of learning and improvement, as sketched below.
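Continuing the hypothetical sketch above, a human-in-the-loop step might look like the following; the review-queue structure and fraud-type labels are illustrative assumptions, not a prescribed design, but they show how analyst verdicts and context can flow back into the training data.

```python
# Hypothetical human-in-the-loop step: the model fills a review queue,
# an analyst attaches a verdict plus context, and confirmed cases become
# new labelled training rows for the retraining loop shown earlier.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewCase:
    features: list                    # same features the model scores
    risk: float                       # model-estimated fraud probability
    verdict: Optional[str] = None     # analyst decision: "fraud" or "legitimate"
    fraud_type: Optional[str] = None  # e.g. "refund_abuse", "chargeback"

# The model flagged this transaction; it now waits for analyst review.
queue = [ReviewCase(features=[890.0, 3, 5], risk=0.91)]

# An analyst confirms the fraud and records which scheme it belongs to,
# context the model cannot infer on its own.
case = queue[0]
case.verdict, case.fraud_type = "fraud", "refund_abuse"

# Confirmed verdicts become labelled training rows; fraud_type is tracked
# separately so the business can see which schemes are growing.
new_row = case.features
new_label = 1 if case.verdict == "fraud" else 0
```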
Using this two-pronged solution, businesses can defend themselves against cyber-attacks far more effectively, allowing them to focus on what’s important: providing an excellent customer experience that is both simple and streamlined.
Fraud may be evolving, but the solutions are too. There is a tendency for businesses to assume that fraud is an inevitability, and simply hope they get lucky and avoid it, or cut their losses when it occurs. This attitude is unproductive.
Instead, we should focus on the positives: data, AI, and intelligence teams can fight back, powered by the ever-evolving algorithms and data sets underpinning modern AI solutions. In that light, the future looks bright for businesses.