
When Samsung engineers leaked sensitive corporate data through ChatGPT in 2023, it was a wake-up call for many executives: their employees were already using AI tools, just not in the ways leadership had planned. And this wasn’t a one-off incident. It’s happening everywhere now.
Here’s what makes it interesting. Only 40% of companies have purchased official LLM subscriptions, yet employees at over 90% of companies regularly use AI tools for work. Are these employees being rebellious? Or are they simply keeping up with mounting workloads as AI becomes commonplace across industries?
This gap has created what many now call ‘shadow AI’: employees using AI to get ahead, often without their managers knowing. With 57% of employees entering sensitive data into personal AI accounts, the hidden (and not so hidden) costs surface as breaches, compliance failures, and cultural fragmentation.
The cost of staying in the shadows
Let’s start with the very visible financial impact of shadow AI: according to global data from IBM, these incidents cost companies nearly £500,000 on average. Only 13% of organisations report breaches involving AI tools, but of those that do, 97% lacked proper AI access controls. The pattern is clear: companies that don’t address shadow AI proactively end up addressing it reactively, usually after being burned by an expensive incident.
It’s unsurprising, then, that 69% of organisations cite AI-powered data leaks as their top security concern, yet 47% have no AI-specific controls in place. They struggle to pinpoint where shadow AI usage is coming from, and lack the governance needed to manage it.
Companies also need to think about regulatory frameworks like the EU AI Act, which introduces new requirements for how companies handle AI and data. When employees use personal AI tools for work, companies lose the paper trail that regulators expect to see. Things get even more complex when customer data is involved, given the privacy and security risks of LLMs: a single incident of customer data passing through an unauthorised AI tool could trigger investigations and fines that surpass even the immediate breach costs.
The hidden costs beyond security
Then there are the less visible problems, particularly culture. When employees feel the need to hide productivity tools, the workplace can split between those quietly using AI effectively and those who aren’t, especially once a performance gap emerges. If employers don’t recognise this, trust breaks down: employees lose confidence in leadership, while management loses visibility into how work actually gets done. Left unmanaged, the problem compounds, and the divide between official company processes and actual workplace behaviour only grows.
Ideas also become less creative when employees default to generic, public tools rather than industry-specific solutions that integrate with the business’s own systems. AI use in isolation prevents collective learning: best practices stay trapped in individual workflows instead of becoming organisational capabilities that benefit entire teams. A marketing team secretly using ChatGPT for campaigns, for example, misses out on AI tools that could access customer data, integrate with CRM systems, and provide industry-specific insights. Individual experimentation doesn’t scale into organisational capability, leaving companies with scattered personal gains rather than systematic competitive advantages.
Building governance that works
But the solution can’t be restricting AI access. That approach consistently backfires: it drives behaviour further underground and removes the last traces of corporate visibility into AI usage. More importantly, blanket restrictions forfeit the genuine productivity benefits that attracted employees to these tools in the first place.
Forward-thinking companies are instead building governance frameworks that acknowledge AI use whilst managing enterprise risk through clear, practical policies. This starts with transparent AI usage policies that separate encouraged, permitted, and prohibited uses. Rather than blanket bans, these policies give employees clear guidance on what’s acceptable. Employers should also audit tooling regularly and let teams suggest new tools without penalty, bringing shadow activity into the open where it can be properly managed.
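To make the encouraged/permitted/prohibited distinction concrete, here is a minimal sketch of how such a tiered policy could be expressed as data rather than buried in a PDF, so it can be published to staff and checked by internal tooling. The tool names, tiers, and review behaviour below are hypothetical illustrations, not part of any specific framework.

```python
# Illustrative sketch only: a tiered AI usage policy expressed as data.
# Tool names and tier assignments are hypothetical examples.

from enum import Enum


class UsageTier(Enum):
    ENCOURAGED = "encouraged"      # approved, integrated tools
    PERMITTED = "permitted"        # allowed with conditions (e.g. no customer data)
    PROHIBITED = "prohibited"      # banned for any work content
    UNDER_REVIEW = "under review"  # suggested by a team, awaiting assessment


# Hypothetical policy register; in practice this would sit alongside the
# written policy and be updated during regular tool audits.
AI_TOOL_POLICY = {
    "internal-copilot": UsageTier.ENCOURAGED,
    "vendor-llm-enterprise": UsageTier.PERMITTED,
    "personal-chatbot-account": UsageTier.PROHIBITED,
}


def check_tool(tool_name: str) -> UsageTier:
    """Return the policy tier for a tool; unknown tools go to a review queue."""
    tier = AI_TOOL_POLICY.get(tool_name)
    if tier is None:
        # Unknown tools aren't silently blocked: they are flagged for review,
        # so teams can suggest new tools without penalty.
        print(f"'{tool_name}' is not yet classified; submitting for review.")
        return UsageTier.UNDER_REVIEW
    return tier


if __name__ == "__main__":
    print(check_tool("internal-copilot").value)          # encouraged
    print(check_tool("personal-chatbot-account").value)  # prohibited
    print(check_tool("new-transcription-app").value)     # under review
```

The point of the sketch is that the policy is explicit, queryable, and has a built-in path for new tools, which is what keeps suggestions flowing through official channels rather than back into the shadows.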
Next, you need secure, integrated solutions that employees actually want to use: AI systems that work with the software people already rely on every day, meet security standards, and don’t require learning entirely new interfaces. When employees can access AI through approved channels that work better than their personal tools, shadow usage naturally drops off.
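As a sketch of what an approved channel could look like under the hood, the example below assumes a single internal gateway that redacts obviously sensitive values and records an audit trail (the paper trail regulators expect) before forwarding a prompt to whichever enterprise model the company has licensed. The redaction patterns, field names, and the forward_to_approved_llm stand-in are all hypothetical; a real deployment would use proper DLP and identity tooling.

```python
# Illustrative sketch, not a production design: an internal AI gateway that
# adds value (redaction + audit logging) so the approved route beats personal accounts.

import json
import re
from datetime import datetime, timezone

# Naive example patterns; real deployments would rely on dedicated DLP tools.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
}


def redact(prompt: str) -> str:
    """Mask obviously sensitive values before the prompt leaves the company."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt


def log_request(user: str, tool: str, prompt: str) -> None:
    """Append an audit record: metadata only, not the full prompt."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt_chars": len(prompt),
    }
    print(json.dumps(record))  # in practice: write to an audit store


def forward_to_approved_llm(prompt: str) -> str:
    """Stand-in for a call to the company's approved enterprise LLM."""
    return f"(model response to: {prompt[:40]}...)"


def gateway(user: str, prompt: str) -> str:
    safe_prompt = redact(prompt)
    log_request(user, tool="approved-llm", prompt=safe_prompt)
    return forward_to_approved_llm(safe_prompt)


if __name__ == "__main__":
    print(gateway("analyst@example.com",
                  "Summarise this complaint from jane.doe@example.com, tel 07123456789"))
```

Because the gateway sits inside existing sign-on and logging infrastructure, employees get a better tool and the company gets back the visibility that shadow usage had taken away.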
Leadership matters a lot here. Executives need to talk openly about their own AI tool usage, share stories of how good governance prevented problems, and celebrate teams that move from shadow to approved AI use. They should also acknowledge that younger employees are typically more confident with AI tools, and encourage that confidence.
At the end of the day, companies don’t want to be stuck playing whack-a-mole with shadow AI. It’s worth building governance frameworks now that unlock AI’s benefits: innovation, talent retention, and improved productivity. Companies can turn their biggest AI risk into their best opportunity, as long as they don’t fear AI’s potential.



