As artificial intelligence continues to embed itself into organisations seeking efficiency gains, a key challenge looms over business leaders: governance. As enterprises debate frameworks and evaluate vendor solutions, their employees are already taking matters into their own hands.
Our recent research reveals that nearly half of employees (49%) are using AI tools at work that haven’t been sanctioned by their employer and, in most cases, their motivation is obvious: they want to get their job done more efficiently and effectively. Yet the rise of this hidden new threat vector represents one of the most significant security gaps organisations face today.
The new face of shadow IT
Security teams have dealt with shadow IT for years. Employees download applications, share files through unapproved services, and find workarounds when corporate systems prove too cumbersome. In fact, our research found that 63% of respondents believe it’s acceptable to use AI tools without IT oversight if their company isn’t providing a sanctioned option.
Historically, those risks could be largely mitigated because data was typically confined to predefined boundaries. While shared documents or unauthorised messaging apps posed compliance challenges, the potential damage was limited.
LLMs have rewritten the rules. Employees paste proprietary code into the likes of ChatGPT, upload customer data to analyse trends and share strategic plans with an LLM to refine their writing. In doing so, they’re bypassing IT policies and exposing their organisation’s most valuable assets to systems that may retain, learn from and regurgitate that information.
This problem is so widespread partly because employees think they are helping the company by making use of AI tools. Our research also found that 71% of employees believe the efficiency benefits of using unapproved AI tools outweigh any privacy risks. They're trying to work smarter, but the catch is that their efficiency drive could compromise the business in ways they may not have foreseen or considered.
A problem hiding in plain sight
The limits of employees’ awareness of how AI tools handle data should concern any security leader. Just over half of workers (53%) understand how the data they input into AI tools is saved, analysed or stored; the rest do not, a clear knowledge gap that organisations have failed to address.
Consider what this means in practice. An employee uploads a spreadsheet containing customer information to an AI tool to help create a presentation. Do they know whether that data is now part of the tool’s training set? Can it be reconstructed from the model? Will it appear in responses to other users? For most workers, these questions don’t register as concerns.
LLMs introduce a new category of insider risk that traditional security controls weren’t designed to handle. Unlike conventional applications where data flows can be monitored and controlled at the network level, interactions with AI tools can circumvent corporate security entirely. An employee working from a coffee shop on a personal laptop can expose company secrets without triggering a single alert.
The information doesn’t pass through these systems and disappear: prompt histories might be logged, and even when providers claim not to use customer data for training, the data still resides outside corporate governance. Adding to the complexity, that data can be subject to terms of service that change and to legal jurisdictions that may compel disclosure.
The compliance dimension
For regulated industries, the compliance implications are particularly stark. Financial services firms handling customer data under GDPR, healthcare organisations bound by data protection requirements, and any company managing confidential information face serious exposure when employees route that data through unvetted AI systems.
The problem has already materialised. We’ve seen instances where AI tools have inadvertently exposed confidential information through their responses to other users. Security researchers have demonstrated that, with the right prompting techniques, it’s possible to extract training data from language models. When that training data includes your proprietary information, the consequences extend far beyond an awkward conversation with regulators.
Moving beyond detection to prevention
Traditional approaches to shadow IT relied heavily on detection: spotting unauthorised applications and blocking access. This never worked perfectly, and it’s even less well suited to the realities of the AI era. Browser-based AI tools are increasingly indistinguishable from legitimate web traffic, and employees can access them from personal devices, using personal accounts, on networks organisations don’t control.
Detection alone also misses the point. By the time you’ve identified that an employee has used an unauthorised AI tool, the damage is done. The sensitive data has already left your environment, and you can’t recall information that has been fed into an LLM’s context window.
The answer instead lies in prevention. Anti-data exfiltration (ADX) technology can identify sensitive data at the endpoint before it leaves the device, regardless of which application the employee is using. Real-time detection, automated policy enforcement and the ability to block unauthorised data movement without disrupting legitimate work are the key requirements.
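To make the prevention idea concrete, the sketch below shows, in simplified Python, how an endpoint agent might screen text for obviously sensitive patterns before it is allowed to leave the device. The pattern list, function name and blocking logic are illustrative assumptions for this article, not a description of any particular ADX product.

```python
import re

# Illustrative patterns for common categories of sensitive data.
# A real ADX product would use far richer classifiers and policy engines.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def screen_outbound_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Check text on the endpoint before it is sent to an external AI tool.

    Returns (allowed, matched_categories). If any sensitive category is
    matched, the prompt is blocked and the categories are reported so the
    user can be told why, rather than the request silently failing.
    """
    matched = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    return (len(matched) == 0, matched)

if __name__ == "__main__":
    allowed, reasons = screen_outbound_prompt(
        "Summarise this: jane.doe@example.com renewed card 4111 1111 1111 1111"
    )
    print("allowed" if allowed else f"blocked: {', '.join(reasons)}")
```

The point of the sketch is the placement of the check: it runs on the device, before any data reaches an AI service, which is what distinguishes prevention from after-the-fact detection.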
The human element matters too, not least because most workers genuinely don’t grasp the security implications of their AI usage. So, rather than expecting every employee to become a security expert, organisations need systems that provide the guardrails automatically.
The governance challenge
Beyond technology, organisations need robust policies that acknowledge the reality of AI usage while protecting against data loss. This starts with accepting that employees will use AI tools because they find them valuable, rather than adopting blanket bans which are both ineffective and counterproductive. The goal should be the responsible use of AI with appropriate guardrails in place.
Indeed, companies should provide approved alternatives that meet genuine productivity needs while maintaining security controls. If employees are using ChatGPT to improve their writing, offer them an enterprise AI solution with appropriate data handling guarantees. If they’re analysing data with unapproved tools, give them sanctioned options that don’t expose proprietary information. At the same time, organisations need to adopt AI security tools that can monitor and control this access. Trust but verify.
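As a simple illustration of "trust but verify", the sketch below pairs an allowlist of sanctioned AI services with an audit log of every access decision. The host names and helper function are hypothetical placeholders, assumed for this example rather than taken from any specific product or policy.

```python
from datetime import datetime, timezone

# Hypothetical allowlist of sanctioned AI endpoints; a real deployment would
# manage this centrally and enforce it at the endpoint or gateway.
SANCTIONED_AI_HOSTS = {
    "ai.internal.example.com",   # enterprise AI deployment
    "approved-vendor.example",   # vetted third-party tool
}

def check_ai_destination(host: str, user: str) -> bool:
    """Allow traffic only to sanctioned AI services, and log every decision
    so that usage can be audited later ("trust but verify")."""
    allowed = host in SANCTIONED_AI_HOSTS
    print(f"{datetime.now(timezone.utc).isoformat()} user={user} "
          f"host={host} decision={'allow' if allowed else 'block'}")
    return allowed

check_ai_destination("chat.example-ai.com", "a.smith")       # blocked, logged
check_ai_destination("ai.internal.example.com", "a.smith")   # allowed, logged
```

The design choice here is that blocking and logging go together: employees are steered towards sanctioned tools, while security teams retain visibility into what was attempted and by whom.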
Equally important is education, as employees can’t be expected to make informed decisions about AI usage if they don’t understand the risks. Training programmes need to move beyond generic warnings about data security to provide concrete examples of what can go wrong when sensitive information enters AI systems.
This requires honesty about the threat landscape. The conversation should centre on protecting the organisation from risks employees may not fully appreciate. When staff understand that their efficiency gains could lead to data breaches, damage customer relationships or trigger regulatory penalties, the mindset changes.
The gap between what employees are doing and what security teams can see is growing, and it will only widen if organisations do not act quickly. Shadow AI demands immediate action: endpoint protection, clear policies and education programmes that help employees understand the risks.


