Lurking in the Shadows: The Rise of Shadow AI in Your Organisation

By Alex Brown, Chief Technical Architect at Datactics

The rapid normalisation of AI in everyday life has blurred the lines between personal and professional use. Once exclusive to tech experts, generative AI is now accessible to anyone with a smartphone, seamlessly integrating into daily tasks.

Employees, accustomed to AI’s convenience outside of work, naturally expect the same efficiency in their roles. That expectation has led to the rise of “shadow AI”: the unsanctioned use of AI tools to enhance productivity, from drafting emails to generating reports.

Unlike traditional shadow IT, which revolves around unauthorised hardware or software, shadow AI exists in a more intellectual space, making it harder to detect and control. Without clear governance or training, businesses risk data security issues, compliance violations, and unreliable outputs. As AI continues to embed itself into digital workflows, organisations must proactively address these challenges, ensuring employees harness AI’s benefits while mitigating its risks.

Cultural normalisation and accessibility of AI

Once the domain of Python data scientists with expensive arrays of GPUs, powerful AI resources are now available to anyone with a smartphone, thanks to the GenAI revolution. It’s now very common to use GenAI in many areas of life, and quite often it’s very effective. After the first release of ChatGPT in 2022, I was amazed at how quickly usage spread to non-technical users. A week after the release, I remember overhearing personal trainers in my gym talking about how they’d got ChatGPT to create social media posts, saving them time and gaining them digital followers. When employees have access to such powerful tools on their phones outside of work, they’ll naturally expect that same power to make their lives easier at work.

Pressure to perform 

The economic climate and corporate competition demand growth, and AI offers quick solutions with minimal effort. But the shadow AI risk is not just headline cases like using an unauthorised AI model to create a new drug; it’s the far more common cases of employees using unauthorised AI for mundane tasks – creating a presentation, formatting a spreadsheet, rewriting an email, generating a SQL query or regular expression – all in the name of productivity. There’s so much information out there promising a way to get ahead with “this one simple AI trick”.

Information overload

We live in a world of information excess, and it’s becoming harder for a human to sift through it all and find the information or answer they are looking for. Nowadays people don’t want a list of possibly relevant search results from which to draw their own conclusions – they want a direct answer to their exact question, and it’s now almost as common to ask ChatGPT something as it is to Google it. GenAI, however, can rapidly summarise large amounts of information, whether proprietary documents or search engine results, and hand people an answer in seconds. Some even say the age of the search engine is over.

Aggressive AI everywhere

It’s now very hard to avoid AI, even if you don’t go looking for it. The AI boom of the last five years has seen AI embedded everywhere – whether you want it or not. AI augmentation is very common in SaaS services, through chatbots or other assistive accelerators (think of Salesforce). Even if you’re using a service from a third-party vendor that isn’t primarily AI, the chances are there’s some AI embedded somewhere, the use of which may or may not be in line with your AI governance. Likewise, in recent weeks companies like Microsoft and Apple have taken a more aggressive AI stance, enabling their respective assistive AI by default – you now have to opt out rather than opt in.

Furthermore, there is very little resistance to these drivers. Most organisations (and most organisations are SMEs) probably don’t have comprehensive AI governance or AI awareness training in place. A culture of AI risk awareness, if it exists at all, is still in its infancy. Contrast that with cyber security, where today even the next-door neighbour’s dog knows not to click on a link in an unsolicited email. How many of us really understand how to use AI safely – what we should and shouldn’t do, and which services are acceptable to use and which are not?

Finally, it could be argued that the problem of shadow AI is also intellectual. Unlike shadow IT, shadow AI is harder to control because it exists in an intellectual context rather than one of physical hardware. I dare say a great many of us ponder our work problems outside of work, perhaps with the odd flash of inspiration in the shower. That leads to a bit of Saturday morning research: maybe a Google AI Overview, maybe ChatGPT, maybe an AI-generated SEO landing page or video that promises to answer our questions – bingo, we’re using shadow AI.

So by now it should be fairly reasonable to assume that shadow AI could be operating at some level in your organisation.
