AI & Technology

AI is helping hackers, just not how you think

By Michael Gray, Chief Technology Officer, Thrive

Forget the Hollywood version of AI-fuelled cyberattacks. Hackers aren’t unleashing hyper-advanced AI to break into systems; they’re using it to make everyday attacks more credible. It’s less a film set and more a factory, with AI churning out tried-and-tested attacks at scale.

Think ransomware, phishing and social engineering on a much bigger scale, continuing to breach organisational defences and trick individuals into sharing personal information. At the same time, many businesses are adopting AI tools in their own operations to achieve efficiencies, with little thought for how these could be turned against them by bad actors.

The World Economic Forum’s Global Cybersecurity Outlook 2025 reports that 66% of businesses identified AI and machine learning as the most significant cybersecurity vulnerability in 2025. So, what do businesses need to do to secure their data against this evolving threat?

Simpler and cheaper than people assume 

Most hackers use AI in much simpler, cheaper ways than people might assume. “New age” AI-driven attacks, such as deepfakes used in social engineering, are often too outlandish to be convincing. Right now, an AI-generated video still carries too many hallmarks of doctoring to fool most viewers.

It’s the rapid analysis and generative capabilities of AI that attackers are really exploiting. If a file containing sensitive information finds its way onto the dark web, for example, a nefarious company or individual can use an AI bot to extract the valuable details in seconds – something that used to take hours of manual effort.

And while large language model providers have trained their models not to help criminals formulate phishing emails, bad actors are finding workarounds. They might, for example, ask the model to demonstrate what a good phishing email looks like, ostensibly to train people to spot them – then use the result in their own attacks. Criminals are becoming adept at prompting AI models to respond in the way they want, even when those models have been trained to behave safely.

Generative AI is also eradicating the typos and errors that once gave scam emails away, making it easier for criminals to pose as vendors sending cold emails, or even as the perfect candidate applying for a role. And AI’s capabilities mean one person can now achieve what previously took 20. Yet this is only scratching the surface of modern cyberattacks.

The steps needed to protect business operations and AI models  

Another key way hackers are leveraging AI is via existing models already in use within target organisations. If threat actors gain access to and manipulate the data behind an AI model – particularly one that staff trust and use daily – they can gradually inject false or misleading information. This could encourage a user to share further sensitive data, or even convince them to complete a fraudulent financial transfer.

It’s a multi-faceted threat to organisations, so what can they do to battle it? The first place to start is a clear AI usage policy. Without one, staff can inadvertently open the door to threat actors. Say, for example, a company hasn’t specified which documents should never be uploaded to a model; an employee might add a spreadsheet of highly sensitive information that could then be stolen.

Policies need to make clear which kinds of documents should be kept well away from AI. Alongside that sit controls over which AI models can be used: are users blocked from accessing particular models, with access limited to a single approved one? AI models aren’t cheap to deploy, and organisations need to protect that investment with policies and controls.

Decision-makers also need to know exactly where their most critical data is, and that’s no longer just the responsibility of the technology or IT team. Because it represents a business risk, business decision-makers need to be involved too. Collaborative decisions should be made on how data is classified, with the most sensitive information encrypted, restricted to certain users and protected by rights management.

Looking ahead, we’ll likely see organisations deploying their own AI tools to spot well-crafted, AI-generated phishing emails, which humans are increasingly unable to detect. This will likely fuel an ‘AI arms race’, with models on both sides competing to be the smartest. AI will also take over much of security analysis, scanning data for anomalies, enabling continuous monitoring and auditing, and helping protect businesses from increasingly sophisticated attacks.

Viewing security as an ongoing strategy 

AI isn’t giving hackers futuristic powers, but it is making familiar threats harder to detect and easier to scale. The reality is a little less cinematic than assumed, but far more damaging: smarter phishing, faster data analysis and more convincing impersonations. That means businesses cannot treat AI security as just a technical problem.

Clear policies, visibility of critical data and governance around which tools can be used are now essentials for businesses. With attackers and defenders turning to AI in equal measure, the organisations that stay ahead will be those that view security as an ongoing strategy. 

 
