
The rapid normalisation of AI in everyday life has blurred the lines between personal and professional use. Once exclusive to tech experts, generative AI is now accessible to anyone with a smartphone, seamlessly integrating into daily tasks.
Employees, accustomed to AI’s convenience outside of work, naturally expect the same efficiency in their roles, which has led to the rise of “shadow AI”: the unsanctioned use of AI tools to enhance productivity, from drafting emails to generating reports.
Unlike traditional shadow IT, which revolves around unauthorised hardware or software, shadow AI exists in a more intellectual space, making it harder to detect and control. Without clear governance or training, businesses risk data security issues, compliance violations, and unreliable outputs. As AI continues to embed itself into digital workflows, organisations must proactively address these challenges, ensuring employees harness AI’s benefits while mitigating its risks.
Cultural normalisation and accessibility of AI
Once the domain of Python data scientists with expensive arrays of GPUs, the Gen AI revolution has given anyone with a smartphone access to powerful AI resources. It’s now very common to use GenAI in many areas of life, and quite often it’s very effective. After the first release of ChatGPT in 2022, I was amazed at how quickly usage spread to non-tech users. I remember, a week after the release, overhearing personal trainers in my gym talking about how they’d got ChatGPT to create social media posts, saving them time and gaining them digital followers. When employees have access to such powerful tools on their phones outside of work, they’ll naturally expect that same power to make their lives easier at work.
Pressure to perform
The economic climate and corporate competition demand growth. AI offers quick solutions with minimal effort. But the shadow AI risk is not just the big things, like using an unauthorised AI model to create a new drug; it is the much more common cases of employees using unauthorised AI to perform mundane tasks: using AI services to create a presentation, format a spreadsheet, rewrite an email, or generate a SQL query or regular expression, all in the name of productivity. There’s so much information out there promising a way to get ahead with “this one simple AI trick”.
Information overload
We live in a world of information excess, and it’s becoming harder for a human to sift through it all and find the information or answer they are looking for. Nowadays people don’t want a list of search results that might be relevant, from which to draw their own conclusions; they want the answer to their exact question, and it’s now almost as common for people to ask ChatGPT something as to Google it. GenAI, however, can quickly summarise large amounts of information, whether proprietary documents or search engine results, and give humans answers very quickly. Some even say the age of the search engine is over.
Aggressive AI everywhere
It’s now very hard to avoid AI, even if you don’t go looking for it. The AI boom of the last 5 years has seen AI embedded everywhere, whether you want it or not. AI augmentation is very common in SaaS services, through chatbots or other assistive accelerators (think of Salesforce). Even if you’re using a service from a 3rd-party vendor that’s not primarily AI, the chances are there’s some AI embedded somewhere, the use of which may or may not be in line with your AI governance. Likewise, in recent weeks companies like Microsoft and Apple have taken a more aggressive AI stance, making their respective assistive AI on by default: you now have to opt out rather than opt in.
Furthermore, there is very little resistance to these drivers. Most organisations (and most are SMEs) probably don’t have comprehensive AI governance or AI awareness training in place. Most will not yet have a culture of AI risk awareness, or at least that cultural awareness is still in its infancy. Contrast this with cyber security, where today even the next-door neighbour’s dog knows not to click on a link in an unsolicited email. But how many of us really understand how to safely use AI? What should and shouldn’t we do? Which services are OK to use and which are not?
Finally, it could be argued that the problem of shadow AI is also intellectual. Unlike shadow IT, shadow AI is harder to control because it exists in an intellectual context rather than one of physical hardware. I dare say a great many of us ponder our work problems outside of work, and maybe have the odd flash of inspiration in the shower. That leads to a bit of Saturday-morning research: maybe a Google AI overview, maybe ChatGPT, maybe an AI-generated SEO landing page or video that promises to answer our questions. Bingo: we’re using shadow AI.
So by now it should be fairly reasonable to assume that shadow AI could be operating at some level in your organisation.

