
The Shadow AI Paradox: Why Your Security Training Is Failing and Your Best Employees Are Your Biggest Risk

By Greg Pollock, Head of Research and Insights, UpGuard 

The use of unapproved AI in the workplace, or “Shadow AI,” is an undeniable fact. Dozens of studies show it’s not a niche problem but a near-universal practice.  

In 2023, early in the adoption of AI, Salesforce surveyed 14,000 people and found that over half of generative AI users relied on unapproved tools. Subsequent studies have documented the growth of shadow AI, with research from 2025 finding that 80 to 98% of employees were using it. Just like AI itself, it’s everywhere.

In response, most organizations have defaulted to a familiar security playbook: block more tools and deploy more security awareness training. This is the conventional wisdom. 

But what if that wisdom is fundamentally wrong? The latest research from UpGuard reveals a startling paradox: the more your employees know about AI security, the more likely they are to use unapproved tools. We’ve been treating a problem of enthusiasm and confidence as one of ignorance, and it’s time to change our entire approach. 

The “AI Power User”: Your Most Confident Offenders 

The conventional assumption is that a “knowledge gap” drives risky behavior. The data, however, tells a different and more interesting story. 

We found a strong positive correlation between employees who had received AI safety training and regular use of shadow AI. This points to a new persona: the “AI power user.” These are not ignorant employees; they are employees who feel they understand the risks and are confident enough to make their own, independent judgments.

That correlation extends across multiple measures of AI literacy. Workers regularly using shadow AI were also the most likely to recall their company’s AI usage policies and to understand how AI tools use input data to train models. These users are not just informed; they are sophisticated adopters at the leading edge of the AI adoption curve.

In other words: your “best” employees—those who are confident, curious, motivated—may also be your highest risk, because they feel capable of working without guardrails.  

Your Security Team Is Leading, But Not in the Way You Think

This attitude isn’t confined to individual contributors skirting the rules put in place by management. Executives and security leaders are the organizational groups most likely to report using unapproved AI tools. The common thread is that they, too, have high degrees of knowledge and autonomy, which gives them the sense of empowerment to make those decisions for themselves.

In fact, this trend predates generative AI. A decade ago, a McAfee report on shadow IT noted that 80% of employees were using unapproved software. Among them, IT professionals were the worst offenders, with the likely explanation being “overconfidence in their ability to assess risks.”

Today, the pattern continues. Our data shows that security leaders are even more likely than other employees to use unapproved AI—a full 88% of them do, compared to 81% of other workers. They are also 67% more likely to use it as part of their regular workflow. Executives are the most likely culprits; one study found that 93% of executives and senior managers use unapproved AI tools, creating a “paradox of poor example-setting.” 

A Deeper Problem: The Workplace Trust Crisis 

When the people who set policy ignore it, the rest of the organization feels free to follow suit. This isn’t merely a governance or technical problem; it’s a profoundly human one. AI tools are quickly evolving from assistants into trusted advisors, displacing trust in people along the way.

The data on this is startling: 24% of employees now report trusting their preferred AI tools more than their own manager or colleagues. This digital-first trust erodes the very human relationships that corporate governance and security culture are built upon. 

Why They’re Using Unapproved Tools 

When this trust in the corporate structure is fractured, employees are left to make their own risk-reward calculations. Their motivation isn’t malicious; it’s practical. 

The number one reason for using shadow AI is simply that the unapproved tools are “easier.” The other top reasons are that they are “faster” (64%) and “better” (60%) than the cumbersome, company-provided options. This is a clear sign of a procurement and implementation failure. As one Cybernews report bluntly puts it, only 33% of employees find that their company-approved tools “fully meet their work needs,” leaving the other 67% to find their own solutions.

“Security Theater”: Why Blocking and Training Fail

Attempting to block our way out of this problem is a losing battle. 

When employees encounter a blocked tool they actually want, 45% report they simply find a workaround. Worse, security teams report blocking far more apps than employees ever even notice. This suggests that most of these efforts are “security theater” against irrelevant tools, while determined users simply find another path. 

Training for the Wrong Problem 

Our other go-to solution, security training, is also misaligned. Traditional security training is designed to fix ignorance. But this problem, as the data clearly shows, is driven by confidence and enthusiasm. 

Training may therefore simply be shifting users from tentative, occasional use of shadow AI to confident, regular use. By treating our most engaged employees like disobedient children, we only push their behavior further into the shadows.

The Only Way Out Is Through: Harnessing Enthusiasm 

This challenge may feel new, but its underlying pattern is not. “Shadow IT” has been a consistent issue for decades, ever since employees could bring their own software to work. As a Forrester article argued back in 2013, calling it “rogue” is a parochial view that misses the point: employees are just trying to get their jobs done. 

The solution is the same as it was then. As a 2007 CIO.com article wisely noted, the answer is to “make users feel comfortable about bringing their underground behavior into the light.” The goal, then, cannot be to stop shadow AI; it must be to manage it. This requires a two-pronged strategy.

First, a cultural shift: We must stop treating employee curiosity as a threat and instead provide safe, approved channels for exploration and innovation. 

Second, a technological shift: You cannot govern what you cannot see. The priority must shift from “blocking” to “visibility”—monitoring AI usage in real-time to guide employees, apply smart policies, and protect data without resorting to the hard-block tactics that drove them away in the first place. 

Stop Asking “If,” Start Asking “Why” 

Shadow AI is not a fleeting trend to be stamped out. It is a permanent new reality of work to be governed. 

The curiosity and enthusiasm driving it are a powerful business asset, not a liability to be trained away. It’s time to stop asking “Are my employees using shadow AI?” The answer, backed by overwhelming data, is unequivocally yes. 

The real question, the one that matters for the future, is “Why?” Only then can we build a culture that is both profoundly innovative and truly secure. 
