Generative AI allows people and organisations to do many things better and faster. Businesses are using AI to transform productivity, efficiency and security. Cyber attackers are also using AI, only theyāre using it to create and distribute spam emails and craft highly persuasive, targeted phishing attacks.Ā Ā
These AI-enabled cyber threats continue to evolve ā but they are not the only ways in which attackers interact with AI.āÆāÆĀ
Security researchers at Barracuda and elsewhere are now seeing threat actors target and manipulate companiesā AI tools and tamper with their AI security features in order to steal and compromise information and weaken a targetās defences.āÆāÆāÆĀ Ā
There are many opportunities to do harm. It is estimated that nearly eight in ten (78%) organizations worldwide now use AI to support and enable least one business function and just under half (45%) use it for three or more. This growing business dependence on AI has made AI-based tools and applications increasingly attractive targets for threat actors.āÆāÆāÆĀ
Email attacks are targeting AI assistantsāÆāÆĀ
AI assistants and the large language models (LLMs) that support their functionality are vulnerable to abuse.āÆāÆResearchers are seeing incidents where AI instructions, known as prompts, are hidden inside legitimate-looking emails. The attackers assume, often correctly, that these malicious prompts will eventually be scanned and processed by the targetās AI information tools, which then becomes infected and vulnerable to manipulation.Ā Ā
This approach was recently reported in Microsoft 365ās AI assistant, Copilot. The vulnerability has been fixed, but it could allow anyone to extract information from a network without needing to go through standard authorisation processes.Ā Ā
Threat actors can exploit the bug to collect and exfiltrate sensitive information from a target.āÆThis is how they do it.Ā
First, the attackers send one or more employees a seemingly harmless email containing concealed and embedded malicious instructions aimed at the AI assistant. This email needs no interaction from the user and lives quietly in their inbox.āÆĀ
When the employee asks the AI assistant for help with a task or query, the assistant scans through older emails, files and data to provide context for its response. As a result, the AI assistant unwittingly infects itself with the malicious prompt.āÆThe malicious prompt could then ask the AI assistant to silently exfiltrate sensitive information, to execute malicious commands or to alter data.āÆĀ
There are also other ways in which toxic emails try to manipulate AI assistants.Ā Ā
One of these involves corrupting the AIās underlying āmemoryā or the way in which it finds and processes data. For example, some AIs can access and use information from sources outside their LLM training model, an approach known as RAG (Retrieval-Augmented Generation).Ā RAG deployments are vulnerable to distortion. This can lead to AI assistants making incorrect decisions, providing false information, or performing unintended actions based on corrupted data.āÆāÆĀ
Tampering with AI-based protectionāÆāÆĀ
Attackers are also learning how to manipulate the AI components of defensive technologies.āÆĀ
Many email security platforms are enhanced with AI-powered features that make them easier to use and more efficient, including features such as auto-replies, āsmartā forwarding, the automated detection and removal of spam, automated ticket creation for issues, and more. This is expanding the potential attack surface that threat actors can target.āÆāÆĀ
If an attacker successfully tampers with these security features, they could manipulate intelligent email security tools to autoreply with sensitive data ā in effect stealing the data.Ā Ā Ā
They could abuse AI security features to escalate helpdesk tickets without verification, which can lead to unauthorized access to systems or data.āÆāÆAnd they could trigger harmful automated activities such as deploying malware, altering critical data, or disrupting business operations.āÆāÆāÆĀ
AI systems are increasingly designed to operate with high levels of autonomy. This means they can be tricked into either impersonating users or trusting impersonators. If successful, this could enable attackers to do things only a few, highly trusted, employees should be able to do or convince the system to leak sensitive data or send fraudulent emails.āÆĀ
How email defenses need to adaptāÆĀ
Traditional email security measures such as legacy email gateways, traditional email authentication protocols and standard IP blocklists are not enough to defend against these threats.Ā
Organizations need an email security platform that is generative-AI resilient.Ā Ā Ā
To defend against increasingly sophisticated AI-powered attacks, the security platform should be able to understand the context of emails (topic, target, type, etc.), tone and behavioural patterns in addition to the email content.āÆItās also best to have AI-based filters in place that donāt just detect and block suspicious emails but can learn over time to prevent manipulation.āÆĀ
AI assistants need to operate in isolation, with measures in place to stop them from acting on any instruction that hasnāt been properly checked out.Ā
For example, just because an email claims to be āfrom the CEOā demanding details of confidential strategy plans, doesnāt mean the AI should automatically act on it. Tools should be set to verify everything before execution.āÆĀ
What comes next? The future of email securityĀ Ā
The AI tools being used within organisations are increasingly built on what is known as āagenticā AI. These are AI systems capable of independent decision-making and autonomous behavior. They can reason, plan and perform actions, adapting in real time to achieve specific goals.āÆāÆĀ
This powerful capability can be manipulated by attackers, and security measures must shift from passive filtering to proactive threat modelling for AI agents.āÆĀ
Email is a great example. Email is becoming an AI-augmented workspace, but it remains one of the top attack vectors. Security strategies need to stop seeing email as a channel. Instead, they need to approach it as an execution environment requiring zero-trust principles and constant AI-aware validation.āÆĀ
āÆĀ