
Forget the Hollywood version of AI-fuelled cyberattacks. Hackers aren't unleashing hyper-advanced AI to break into systems; they're using it to make everyday attacks more credible. It's more akin to a factory than a film set, with AI churning out tried-and-tested attacks at scale.
Think ransomware, phishing and social engineering on a much bigger scale, continuing to breach organisational defences and trick individuals into sharing personal information. At the same time, many businesses are adopting AI tools in their own operations to achieve efficiencies, with little thought for how these tools could be exploited by bad actors for their own nefarious ends.
The World Economic Forum's Global Cybersecurity Outlook 2025 reports that 66% of businesses identified AI and machine learning technologies as the most significant cybersecurity vulnerability in 2025. So, what do businesses need to do to secure their data against this evolving threat?
Simpler and cheaper than people assume
Most hackers use AI in much simpler, cheaper ways than people might assume. The 'new age' of AI-driven attacks, such as deepfakes used in social engineering, is often too outlandish and not convincing enough to fool its targets. Right now, an AI-generated video still carries too many hallmarks of having been doctored.
It is the rapid analysis and generative capabilities of AI that attackers are really making use of. As an example, if a file containing sensitive information finds its way onto the dark web, a nefarious company or individual can simply use an AI bot to extract valuable information in seconds, something that used to take hours of manual effort.
And while large language model providers have trained their models not to help criminals formulate phishing emails, bad actors are finding workarounds. For example, they might ask the model to demonstrate what a good phishing email looks like, ostensibly to train people to spot them, and then use the result in their own phishing attacks. Criminals are becoming adept at prompting AI models to respond the way they want, even when those models have been trained to act safely.
Generative AI is also eradicating typos and errors from emails, making it easier for criminals to pose as vendors sending cold emails, or even as the perfect candidate applying for a role. And AI's capabilities mean that one person can now achieve what previously took 20. But this is only scratching the surface of modern cyberattacks.
The steps needed to protect business operations and AI models
Another key way hackers are leveraging AI is via the existing models in use within target organisations. If threat actors gain access to AI models and manipulate their data, particularly models that staff trust and use daily, they can gradually inject false or misleading information. This could encourage a user to share further sensitive data or even convince them to complete a fraudulent financial transfer.
It's a multi-faceted threat, so what can organisations do to battle it? The first place to start is a clear AI usage policy. Without one, staff can be prone to using AI in ways that inadvertently open the door to threat actors. Say, for example, a company hasn't advised which documents should never be uploaded to a model; a spreadsheet of highly sensitive information might then be added, and later stolen.
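To make that concrete, a policy like this can be backed by a simple automated check before anything reaches an AI tool. The sketch below is a minimal, hypothetical Python example: it scans a file for patterns that commonly signal sensitive data before permitting an upload. The patterns and the upload_allowed helper are illustrative assumptions, not a production data loss prevention engine.

```python
import re
from pathlib import Path

# Illustrative patterns only; real DLP tooling uses far richer detection.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK national insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def upload_allowed(path: str) -> bool:
    """Return False if the file appears to contain sensitive data."""
    text = Path(path).read_text(errors="ignore")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            print(f"Blocked: file appears to contain a {label}.")
            return False
    return True

if __name__ == "__main__":
    # Hypothetical file name; in practice this hooks into the upload flow.
    if upload_allowed("quarterly_figures.csv"):
        print("File passed the policy check.")
```

Even a crude gate like this turns a written policy into something enforceable, and it can be tightened over time as the organisation learns what staff actually try to upload.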
Policies need to make clear which kinds of documents should be kept well away from AI. Alongside that is the need for controls over which AI models can be used: are users blocked from accessing particular models, with access limited to a single approved one? AI models aren't cheap to deploy, and organisations need to consider how they protect that investment with policies and controls.
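On the controls side, one lightweight enforcement point is an egress allowlist, so requests can only reach an approved model endpoint. The Python sketch below is a simplified illustration under that assumption; the APPROVED_AI_HOSTS set and the internal hostname are hypothetical, and a real deployment would enforce this at a proxy or firewall rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only the organisation's approved model endpoint.
APPROVED_AI_HOSTS = {"ai.internal.example.com"}

def is_request_allowed(url: str) -> bool:
    """Permit traffic only to approved AI model endpoints."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

print(is_request_allowed("https://ai.internal.example.com/v1/chat"))  # True
print(is_request_allowed("https://public-model.example.net/api"))     # False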
Decision-makers also need to know exactly where their most critical data is, and that's no longer just the responsibility of the technology or IT team. It needs to involve business decision-makers as well, because it represents a business risk. Collaborative decisions should be made on how data is classified, with the most sensitive information encrypted, restricted to certain users and protected by rights management.
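As a rough illustration of that layered approach, the sketch below encrypts documents in the most sensitive tier at rest. The classification labels are hypothetical, and it assumes the open-source cryptography library; in practice, keys would sit in a managed key store and access would be tied to rights management.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hypothetical classification tiers; real schemes are agreed with the business.
PUBLIC, INTERNAL, RESTRICTED = "public", "internal", "restricted"

key = Fernet.generate_key()  # in practice, keys live in a managed key store
cipher = Fernet(key)

def store_document(content: bytes, classification: str) -> bytes:
    """Encrypt restricted documents at rest; lower tiers pass through."""
    if classification == RESTRICTED:
        return cipher.encrypt(content)
    return content

record = store_document(b"M&A negotiation notes", RESTRICTED)
print(cipher.decrypt(record))  # only holders of the key can read it back
```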
Looking ahead, we'll likely see organisations using their own AI tools to spot well-crafted, AI-generated phishing emails, which humans are increasingly unable to do. That points to an 'AI arms race', with models on both sides competing to be the smartest. AI will also take over security analysis, scanning data for anomalies, enabling continuous monitoring and auditing, and helping protect businesses from increasingly sophisticated attacks.
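As a toy illustration of that kind of anomaly scanning, the sketch below trains scikit-learn's IsolationForest on synthetic 'normal' login features and flags an out-of-pattern event. The features and numbers are invented for the example; a real system would draw on far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

rng = np.random.default_rng(0)

# Invented features per login event: [hour of day, megabytes transferred].
normal_events = rng.normal(loc=[10.0, 50.0], scale=[2.0, 10.0], size=(500, 2))
odd_event = np.array([[3.0, 900.0]])  # a 3am login moving unusual volumes

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)
print(detector.predict(odd_event))  # [-1] marks the event as anomalous
```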
Viewing security as an ongoing strategy
AI isn't giving hackers futuristic powers, but it is making familiar threats harder to detect and easier to scale. The reality is less cinematic than assumed but far more damaging: smarter phishing, faster data analysis and more convincing impersonations. That means businesses cannot treat AI security as just a technical problem.
Clear policies, visibility of critical data and governance around which tools can be used are now essentials for businesses. With attackers and defenders turning to AI in equal measure, the organisations that stay ahead will be those that view security as an ongoing strategy.



