
It’s the best of times. New billion-dollar investments in AI are announced almost every week.
It’s the worst of times. A recent MIT study found that as many as 95% of AI pilots have failed to generate any return on investment for enterprises.
It’s the best of times. Models continue to improve at a remarkable rate, with task performance doubling every seven months.
It’s the worst of times. AI adoption rates declined for the first time in years this summer.
What’s behind the gap between potential and expectations on the one hand, and real-world results on the other? It’s never just one thing, but the single biggest factor is shadow AI.
Defining Shadow AI
To talk about why that is, let’s first establish some common ground. What is shadow AI? It’s when employees use AI tools to do their jobs without the awareness or permission of company leadership or IT.
This is no small issue; according to a recent study, 70% of employees are using free public tools, compared to 42% using tools provided by their employer. And that usage is hidden from employers, because 57% of employees are actively concealing their use of AI tools and presenting AI-assisted work as their own.
Employees are doing this because too many businesses have been unable to keep up with the pace of change. While leaders debate in endless meetings whether to adopt AI, which tools to pilot, what departments to test them in, how many licenses to purchase…employees see the tools that are available to the public and simply start using the ones they think will help them do their jobs better and faster. When we push our teams to be agile problem solvers, we shouldn’t be surprised that they go out and use the tools they think will work!
The Problems Caused by Shadow AI
So, if people are being resourceful to do their jobs, what’s the big deal? If only it were that simple! Shadow AI introduces several problems of its own, and then becomes a huge obstacle to successful business AI pilots.
Kicking Privacy Out the Door
Among the most important assets any business has is its own data. That might include intellectual property, confidential communications and strategy documents, and employee and customer information. But shadow AI creates exposure risks with every new tool.
Consider this: Nearly half (46%) of American workers admit to uploading sensitive data to public AI platforms. Once that data is sent to an AI tool, it is out of the original owner’s control: files and chats live on the AI provider’s servers, can become part of the model’s training data, and are at risk of a hack or reverse prompt engineering.
That is an unacceptable risk to most businesses, even more so for any business governed by regulations like HIPAA. Yet shadow AI foists these risks on businesses without them even realizing it. And the more tools that employees bring into the workplace, the more points of possible exposure they unwittingly create, as sensitive data gets uploaded to multiple platforms from different providers, onto multiple servers with differing levels of protection.
The Hidden Drain on Productivity
So, employees are introducing privacy risks, but at least they’re getting productivity benefits from their unsanctioned AI use. They are generating productivity benefits, right?
I don’t dispute that there are benefits for individual employees—after all, that’s why people are going to use these tools in the workplace. But collectively, the business is not seeing nearly as much benefit from this disparate AI usage as we would expect. The total is less than the sum of its parts.
Three issues I often see are lack of accuracy, lack of specificity, and lack of consistency.
You may have seen the embarrassing situation Deloitte found itself in earlier this year, when it had to refund the Australian government for a report it had produced. The report was riddled with errors, fake references, and hallucinated data, generated by AI rather than genuine research. What likely started as a desire to work faster and more efficiently turned into a PR nightmare.
The flip side of that error is when AI tools avoid providing inaccurate information by hardly saying anything at all. Because an AI tool rarely has the deep knowledge of a true expert in a field, its output often comes out looking generic; it can lack the level of insight clients demand, and it doesn’t have the specific voice and style people expect.
Even if you manage to avoid those dual errors, shadow AI still risks a lack of consistency. Employees using different tools (since no one is coordinating their usage or sharing resources) will get different outputs. Some tools will be better suited to the task than others, and they’ll all have different tones and structures. Nobody is staying on brand for the company, just on brand for half a dozen different AI models.
In the end, using the tool saved the employee some time on the initial task. But then someone else has to come along behind them and clean up the issues with accuracy, specificity, and consistency. That eats into whatever time and money was saved in the first place. And if no one follows up, you’re at risk of becoming the next headline.
Shadow AI Persists Through AI Pilots
You might think that official AI pilots are the solution here. If people bringing their own AI to work is causing problems, then we just need to introduce the official company AI tool, and problem solved! Unfortunately, not so fast. Shadow AI still has a disruptive role to play.
First of all, employees are resourceful, and they will find ways to keep using the tools they’ve been using all along. If all you have is a polite request not to use other tools, they’ll smile and nod while logging into their personal accounts. If you block access to other tools on the network, they can just log in through their phones or on computers at home. Access to AI tools is easy, for good or for ill.
Second, employees know better than anyone which tools are actually useful in their day-to-day work. Many AI pilots are imposed from above, based on a combination of what’s financially or technically convenient and what leaders think will work.
But it’s the people with their hands in the soil, doing the work, who know what will help them best. If a tool is ill-suited to the task required, people will avoid it. I’m sure we all have friends and connections who’ve complained about their company’s designated AI tool, without naming any names.
Third, most pilots are inherently limited in scope. It might only be a few employees, a limited number of departments, or just specific workflows, while everybody else is still doing what they’ve always done (bringing their own AI tools to work!). The AI pilot isn’t introducing AI into the workplace; AI is already there. The pilot might just be yet another tool that competes and conflicts with current shadow AI tools, rather than a true controlled experiment.
In that context, it’s no surprise that so many pilots have disappointed, and that so many companies are throwing up their hands in frustration. There’s clearly value in AI as a productivity tool, an always-on assistant—but the hurdles can seem insurmountable. What’s to be done?
Overcoming the Shadow AI Problem
Fortunately, there are strategies that combat and defeat shadow AI. They’re not easy—there are no silver bullets—but they work.
The first is something that is always easier to say than to do: education. You have to start with making sure employees understand the problem.
It’s not sufficient to simply declare that employees must use the designated tools and must not use anything else; that’s not education. Instead, get into the why. Preach the gospel of privacy, and be clear about the productivity breakdowns that shadow AI causes. Show the consequences and be honest about real examples of AI failures.
It might not happen overnight, but this is the path to getting full team alignment. Make your team members champions for responsible AI use, and it will become part of your culture.
The second strategy is bottom-up implementation. We mentioned before that many AI pilots are top-down. But the users are the ones who know best what they need. Make it easy for employees to share what tools they’d like to use and what would be most useful for their work.
That’s not to say that you have to blindly agree to any request! But by understanding real AI usage among employees, leaders can give full vetting to different tools and find the most appropriate solution. Rather than starting from a broad mandate to “try AI,” start from a place of understanding what problems employees need AI to solve.
Finally, stay nimble. We all realize that AI is changing fast, but that means businesses must be prepared to change just as fast in how they approve and use AI. The best tool for the job today might not be the best tool tomorrow. The privacy-approved model you signed up for could change its policy and no longer comply with your standards; it’s incumbent on leaders to stay informed and ready to move.
Implementing AI in business isn’t always easy. But it is necessary. Those who ignore it are likely to be left behind, and those who implement it poorly could face serious problems. Those who get it right have a chance to empower their workforce and deliver unprecedented value to their clients and customers.
Choose wisely.
