
It's the best of times. There are new billion-dollar investments into AI being announced almost every week.
It's the worst of times. A recent MIT study found that as many as 95% of AI pilots have failed to generate any return on investment for enterprises.
It's the best of times. Models continue to improve at a remarkable rate, doubling task performance every seven months.
It's the worst of times. AI adoption rates declined this summer for the first time in years.
What's behind the gap between potential and expectations on the one hand, and real-world results on the other? It's never just one thing, but the single biggest factor is shadow AI.
Defining Shadow AI
To talk about why that is, let's first establish some common ground. What is shadow AI? It's when employees are using AI tools to do their jobs without the awareness or permission of company leadership or IT.
This is no small issue; according to a recent study, 70% of employees are using free public AI tools, compared to 42% using tools provided by their employer. And that usage is hidden from employers, because 57% of employees are actively concealing their use of AI tools and presenting AI-generated work as their own.
Employees are doing this because too many businesses have been unable to keep up with the pace of change. While leaders debate in endless meetings about whether to adopt AI, which tools to pilot, which departments to test them in, how many licenses to purchase… employees see the tools that are publicly available and just start using the ones they think will help them do their jobs better and faster. When we push our teams to be agile problem solvers, we shouldn't be surprised that they go out and use the tools they think will work!
The Problems Caused by Shadow AI
So, if people are being resourceful to do their jobs, what's the big deal? If only it were that simple! Shadow AI introduces several problems of its own, and then becomes a huge obstacle to successful business AI pilots.
Kicking Privacy Out the Door
A business's own data is among its most important assets. That might include intellectual property, confidential communications and strategy documents, and employee and customer information. But shadow AI creates exposure risks with every new tool.
Consider this: Nearly half (46%) of American workers admit to uploading sensitive data to public AI platforms. Once that data is sent to an AI tool, it is out of the original owner's control; files and chats live on the AI provider's servers, eligible to become part of that model's training data, and at risk of a hack or reverse prompt engineering.
That is an unacceptable risk to most businesses, even more so for any business governed by regulations like HIPAA. Yet shadow AI foists these risks on businesses without them even realizing it. And the more tools that employees bring into the workplace, the more points of possible exposure they unwittingly create, as sensitive data gets uploaded to multiple platforms from different providers, onto multiple servers with differing levels of protection.
The Hidden Drain on Productivity
So, employees are introducing privacy risks, but at least they're getting productivity benefits from their unsanctioned AI use. They are generating productivity benefits, right?
I don't dispute that there are benefits for individual employees; after all, that's why people are turning to these tools in the workplace. But collectively, the business is not seeing nearly as much benefit from this disparate AI usage as we would expect. The total is less than the sum of its parts.
Three issues I often see are lack of accuracy, lack of specificity, and lack of consistency.
You may have seen the embarrassing situation Deloitte found itself in earlier this year, when it had to refund the Australian government for a report it had produced. The report was riddled with errors, fake references, and hallucinated data, generated by AI rather than genuine research. What likely started as a desire to work faster and more efficiently turned into a PR nightmare.
The flip side of that error is when AI tools avoid providing inaccurate information by avoiding saying much at all. An AI tool likely doesn't have the deep knowledge of a true expert in a field, so we've all seen AI work that comes out looking generic; it can lack the level of insight that clients demand, and it doesn't have the specific voice and style people expect.
Even if you manage to avoid those dual errors, shadow AI still risks a lack of consistency. Employees using different tools (since no one is coordinating their usage or sharing resources) will get different outputs. Some tools will be better suited to the task than others, and they'll all have different tones and structures. Nobody is staying on brand for the company, just on brand for half a dozen different AI models.
In the end, using the tool saved the employee some time on their initial task. But then someone else has to follow behind, cleaning up the issues with accuracy, specificity, and consistency. That eats into whatever time and money was saved in the first place. And if no one follows up, then you're at risk of being the next headline.
Shadow AI Persists Through AI Pilots
You might think that official AI pilots are the solution here. If people bringing their own AI to work is causing problems, then we just need to introduce the official company AI tool, and problem solved! Unfortunately, not so fast. Shadow AI still has a disruptive role to play.
First of all, employees are resourceful, and they will find ways to keep using the tools they've been using all along. If all you have is a polite request not to use other tools, they'll smile and nod while logging into their personal accounts. If you block access to other tools on the network, they can just log in through their phones or on computers at home. Access to AI tools is easy, for good or for ill.
Second, employees know better than anyone which tools are actually useful in their day-to-day work. Many AI pilots are imposed from above, based on a combination of what's financially and/or technically convenient and what leaders think will work.
But it's the people with their hands in the soil, doing the work, who know what will help them best. If a tool is ill-suited to the task required, people will avoid it. I'm sure we all have friends and connections who've complained about their company's designated AI tool, without naming any names.
Third, most pilots are inherently limited in scope. It might only be a few employees, a limited number of departments, or just specific workflows, while everybody else is still doing what they've always done (bringing their own AI tools to work!). The AI pilot isn't introducing AI into the workplace; AI is already there. The pilot might just be yet another tool that competes and conflicts with current shadow AI tools, rather than a true controlled experiment.
In that context, it's no surprise that so many pilots have disappointed, and that so many companies are throwing up their hands in frustration. There's clearly value in AI as a productivity tool, an always-on assistant, but the hurdles can seem insurmountable. What's to be done?
Overcoming the Shadow AI Problem
Fortunately, there are strategies that combat and defeat shadow AI. They're not easy (there are no silver bullets), but they work.
The first is something that is always easier to say than to do: education. You have to start by making sure employees understand the problem.
It's not sufficient to simply declare that employees must use these designated tools and must not use those other tools; that's not education. Instead, get into the why. Preach the gospel of privacy, and be clear about the productivity breakdowns that come from shadow AI usage. Show the consequences, and be honest about examples of AI failures.
It might not happen overnight, but this is the path to getting full team alignment. Make your team members champions for responsible AI use, and it will become part of your culture.
The second strategy is bottom-up implementation. We mentioned before that many AI pilots are top-down. But the users are the ones who know best what they need. Make it easy for employees to share what tools they'd like to use and what would be most useful for their work.
That's not to say that you have to blindly agree to every request! But by understanding real AI usage among employees, leaders can fully vet different tools and find the most appropriate solution. Rather than starting from a broad mandate to "try AI," start from a place of understanding what problems employees need AI to solve.
Finally, stay nimble. We all realize that AI is changing fast, but that means businesses must also be prepared to change fast in how they approve and use AI. The best tool for the job today might not be the best tool tomorrow. The privacy-approved model you signed up for could change its policy and no longer be compliant with your standards; it's incumbent on leaders to stay informed and ready to move.
Implementing AI in business isn't always easy. But it is necessary. Those who ignore it are likely to be left behind, and those who implement it poorly could face serious problems. Those who get it right have a chance to empower their workforce and deliver unprecedented value to their clients and customers.
Choose wisely.
Author
Will Adams
President, Tarkenton and pipIQ
Will Adams is an accomplished business executive who believes the best companies are built around exceptional people solving meaningful problems. With more than two decades of leadership experience, he has launched and guided ventures across technology, professional services, and social impact, consistently building organizations that align purpose with performance.
As President of Tarkenton, Will works alongside NFL Hall of Famer Fran Tarkenton to help companies innovate, scale, and better serve their customers. He has directed initiatives in SaaS and AI development, market strategy, and enterprise partnerships with Fortune 500 firms. His leadership has shaped teams that deliver transformative digital solutions for small and mid-sized businesses across global markets.
An entrepreneur at heart, Will co-founded a SaaS company serving museums, private collectors, and family offices, combining technology and craftsmanship to preserve cultural history.
Will serves on the boards of Billfold, a point-of-sale company supporting live entertainment and venues, and the Atlanta Children's Foundation, which serves and advocates for youth in foster care. He also mentors early-stage founders through gener8tor and leads global community development efforts as a 410 Bridge mission leader.
Throughout his career, Will has remained committed to helping people grow: empowering teammates to lead, partners to discover new opportunities, founders to gain clarity, and children to realize their potential.
At his core, he is driven by a simple conviction: business done right improves lives. His work centers on creating enduring impact through people, purpose, and the relentless pursuit of what's possible, always starting with moving the movable.