
The use of unapproved AI in the workplace, or “Shadow AI,” is an undeniable fact. Dozens of studies show it’s not a niche problem but a near-universal practice.
In 2023, early in the adoption of AI, Salesforce conducted a survey of 14,000 people and found that over half of generative AI users were using unapproved tools. Since then, studies have documented the continued rise of shadow AI, with similar research from 2025 finding that 80 to 98% of employees were using it. Just like AI itself, it’s everywhere.
In response, most organizations have defaulted to a familiar security playbook: block more tools and deploy more security awareness training. This is the conventional wisdom.
But what if that wisdom is fundamentally wrong? The latest research from UpGuard reveals a startling paradox: the more your employees know about AI security, the more likely they are to use unapproved tools. We’ve been treating a problem of enthusiasm and confidence as one of ignorance, and it’s time to change our entire approach.
The “AI Power User”: Your Most Confident Offenders
The conventional assumption is that a “knowledge gap” drives risky behavior. The data, however, tells a different and more interesting story.
We found a strong positive correlation between employees who had AI safety training and their regular use of shadow AI. This points to a new persona: the “AI power user”. These are not ignorant employees; they are employees who feel they understand the risks and have the confidence to make their own, independent judgments.
That correlation extends across multiple measures of AI literacy. Workers regularly using shadow AI were also the most likely to recall their company’s AI usage policies and to understand how AI tools use data inputs to train models. These users are not just informed; they are sophisticated AI users at the leading edge of the adoption curve.
In other words: your “best” employees (those who are confident, curious, motivated) may also be your highest risk, because they feel capable of working without guardrails.
Your Security Team Is Leading, but Not in the Way You Think
This attitude isn’t confined to individual contributors skirting the rules put in place by management. Executives and security leaders are the organizational groups most likely to report using unapproved AI tools. The common thread is that they, too, have high degrees of knowledge and autonomy, giving them the sense of empowerment to make those decisions for themselves.
In fact, this trend predates generative AI. A decade ago, a McAfee report on shadow IT noted that 80% of employees were using unapproved software. Among them, IT professionals were the worst offenders, the likely explanation being "overconfidence in their ability to assess risks."
Today, the pattern continues. Our data shows that security leaders are even more likely than other employees to use unapproved AI: a full 88% of them do, compared to 81% of other workers. They are also 67% more likely to use it as part of their regular workflow. Executives are the most likely culprits; one study found that 93% of executives and senior managers use unapproved AI tools, creating a "paradox of poor example-setting."
A Deeper Problem: The Workplace Trust Crisis
When the people who set policy ignore it, the rest of the organization feels free to follow suit. This isn’t merely a governance or technical problem; it’s a profoundly human one. AI tools are evolving quickly from assistants into trusted advisors, displacing trust in people along the way.
The data on this is startling: 24% of employees now report trusting their preferred AI tools more than their own manager or colleagues. This digital-first trust erodes the very human relationships that corporate governance and security culture are built upon.
Why They’re Using Unapproved Tools
When this trust in the corporate structure is fractured, employees are left to make their own risk-reward calculations. Their motivation isn't malicious; it's practical.
The number one reason for using shadow AI is simply that the unapproved tools are "easier". The other top reasons are that they are "faster" (64%) and "better" (60%) than the cumbersome, company-provided options. This is a clear sign of a procurement and implementation failure. As one i40-Cybernews report bluntly puts it, only 33% of employees find their company-approved tools "fully meet their work needs," leaving the other 67% to find their own solutions.
“Security Theater”: Why Blocking and Training Fail
Attempting to block our way out of this problem is a losing battle.
When employees encounter a blocked tool they actually want, 45% report they simply find a workaround. Worse, security teams report blocking far more apps than employees ever even notice. This suggests that most of these efforts are "security theater" against irrelevant tools, while determined users simply find another path.
Training for the Wrong Problem
Our other go-to solution, security training, is also misaligned. Traditional security training is designed to fix ignorance. But this problem, as the data clearly shows, is driven by confidence and enthusiasm.
Training might therefore be simply shifting users from uncertainly using shadow AI to confidently doing so on a regular basis. By treating our most engaged employees like disobedient children, we are only pushing their behavior further into the shadows.
The Only Way Out Is Through: Harnessing Enthusiasm
This challenge may feel new, but its underlying pattern is not. “Shadow IT” has been a consistent issue for decades, ever since employees could bring their own software to work. As a Forrester article argued back in 2013, calling it “rogue” is a parochial view that misses the point: employees are just trying to get their jobs done.
The solution is the same as it was then. As a 2007 CIO.com article wisely noted, it is to “make users feel comfortable about bringing their underground behavior into the light.” The goal, then, cannot be to stop shadow AI; it must be to manage it. This requires a two-pronged strategy.
First, a cultural shift: We must stop treating employee curiosity as a threat and instead provide safe, approved channels for exploration and innovation.
Second, a technological shift: You cannot govern what you cannot see. The priority must shift from "blocking" to "visibility": monitoring AI usage in real time to guide employees, apply smart policies, and protect data without resorting to the hard-block tactics that drove them away in the first place.
Stop Asking “If,” Start Asking “Why”
Shadow AI is not a fleeting trend to be stamped out. It is a permanent new reality of work to be governed.
The curiosity and enthusiasm driving it are a powerful business asset, not a liability to be trained away. It’s time to stop asking "Are my employees using shadow AI?" The answer, backed by overwhelming data, is unequivocally yes.
The real question, the one that matters for the future, is "Why?" Only by answering it can we build a culture that is both profoundly innovative and truly secure.



