Future of AI

Lurking in the Shadows: The Rise of Shadow AI in Your Organisation?

By Alex Brown, Chief Technical Architect at Datactics

The rapid normalisation of AI in everyday life has blurred the lines between personal and professional use. Once exclusive to tech experts, generative AI is now accessible to anyone with a smartphone, seamlessly integrating into daily tasks.

Employees, accustomed to AI's convenience outside of work, naturally expect the same efficiency in their roles. This expectation has led to the rise of "shadow AI": the unsanctioned use of AI tools to enhance productivity, from drafting emails to generating reports.

Unlike traditional shadow IT, which revolves around unauthorised hardware or software, shadow AI exists in a more intellectual space, making it harder to detect and control. Without clear governance or training, businesses risk data security issues, compliance violations, and unreliable outputs. As AI continues to embed itself into digital workflows, organisations must proactively address these challenges, ensuring employees harness AI's benefits while mitigating its risks.

Cultural normalisation and accessibility of AI

Once the domain of Python data scientists with expensive arrays of GPUs, the Gen AI revolution has given anyone with a smartphone access to powerful AI resources. It's now very common to use Gen AI in many areas of life, and quite often it's very effective. After the first release of ChatGPT in 2022, I was amazed at how quickly usage spread to non-technical users. I remember, a week after the release, overhearing personal trainers in my gym talking about how they'd got ChatGPT to create social media posts, saving them time and gaining them digital followers. When employees have access to such powerful tools on their phones outside of work, they'll naturally expect that same power to make their lives easier at work.

Pressure to perform

The economic climate and corporate competition demand growth, and AI offers quick solutions with minimal effort. But the shadow AI risk is not just the big things, like using an unauthorised AI model to create a new drug; it is the far more common cases of employees using unauthorised AI to perform mundane tasks, like using AI services to create a presentation, format a spreadsheet, rewrite an email, or write a SQL query or regular expression, all in the name of productivity. There's so much information out there promising a way to get ahead with "this one simple AI trick".

Information overload

We live in a world of information excess, and it's becoming harder for a human to sift through it all and find the information or answer they are looking for. Nowadays people don't want a list of search results that might be relevant, from which to draw their own conclusions; they want the answer to their exact question, and it's now almost as common for people to ask ChatGPT something as to Google it. Gen AI, however, can quickly summarise large amounts of information, whether proprietary documents or search engine results, and give humans answers very quickly. Some even say the age of the search engine is over.

Aggressive AI everywhere

It's now very hard to avoid AI, even if you don't go looking for it. The AI boom of the last five years has seen AI embedded everywhere, whether you want it or not. AI augmentation is very common in SaaS services, through chatbots or other assistive accelerators (think of Salesforce). Even if you're using a service from a third-party vendor that's not primarily AI, the chances are there's some AI embedded somewhere, the use of which may or may not be in line with your AI governance. Likewise, in recent weeks companies like Microsoft and Apple have taken a more aggressive AI stance, making their respective assistive AIs on by default: you now have to opt out rather than opt in.

Furthermore, there is very little resistance to these drivers. Most organisations (and especially SMEs) probably don't have comprehensive AI governance or AI awareness training in place. Most will not yet have a culture of AI risk awareness, or at least that culture is still in its infancy. This is unlike cyber security, where today even the next-door neighbour's dog knows not to click on a link in an unsolicited email. But how many of us really understand how to safely use AI? What we should and shouldn't do, and which services are OK to use and which are not?

Finally, it could be argued that the problem of shadow AI is also intellectual. Unlike shadow IT, shadow AI is harder to control because it exists in an intellectual context rather than one of physical hardware. For example, I dare say a great many of us ponder our work problems outside of work. Maybe we have the odd flash of inspiration in the shower. That leads to a bit of Saturday morning research: maybe a Google AI overview, maybe ChatGPT, maybe an AI-generated SEO landing page or video that promises to answer our question. Bingo, we're using shadow AI.

So by now it should be fairly reasonable to assume that shadow AI could be operating at some level in your organisation.
