
The new AI governance model that puts Shadow IT on notice

By Art Gilliland, CEO, Delinea

Artificial intelligence (AI) tools are spreading rapidly across workplaces, reshaping how everyday tasks get done. From marketing teams drafting campaigns in ChatGPT to software engineers experimenting with code generators, AI is quietly creeping into every corner of business operations. The problem? Much of this adoption is happening under the radar, without any oversight or governance.

As a result, shadow AI has emerged as a new security blind spot. Instances of unmanaged and unauthorised AI use will continue to rise until organisations rethink their approach to AI policy.

For CIOs, the answer isn't to prohibit AI tools outright, but to implement flexible guardrails that strike a balance between innovation and risk management. The urgency is undeniable: 93% of organisations have experienced at least one incident of unauthorised shadow AI use, with 36% reporting multiple instances. These figures reveal a stark disconnect between formal AI policies and the way employees are actually engaging with AI tools in their day-to-day work.

Here's how organisations can begin to address the challenge:

Establishing governance and guardrails

To get ahead of AI risks, organisations need AI policies that encourage AI usage within reason – and in line with their risk appetite. However, they can't do that with outdated governance models and tools that aren't purpose-built to detect and monitor AI usage across their business.

Identify the right framework

There are already a number of frameworks and resources available – including guidance from the Department for Science, Innovation and Technology (DSIT), the AI Playbook for Government, the Information Commissioner's Office (ICO), and the AI Standards Hub (led by BSI, NPL and The Alan Turing Institute). These can help organisations build a responsible and robust framework for AI adoption, and they complement international standards from bodies such as the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) and the Organisation for Economic Co-operation and Development (OECD).

Invest in visibility toolsย 

As a business establishes its roadmap for AI risk management, it's crucial that the security leadership team starts assessing what AI usage really looks like in their organisation. This means investing in visibility tools that can analyse access and behavioural patterns to find generative AI usage in every nook and cranny of the organisation.
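As a minimal illustration of the idea – not a substitute for dedicated visibility tooling – a security team might start by scanning proxy or gateway logs for traffic to well-known generative AI services. The domain list and log format below are hypothetical; real tools draw on far richer access and behavioural signals.

```python
from collections import Counter

# Hypothetical list of domains associated with generative AI tools.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def find_ai_usage(log_lines):
    """Count requests per AI domain from simple 'user domain' log lines."""
    hits = Counter()
    for line in log_lines:
        user, _, domain = line.partition(" ")
        if domain in AI_DOMAINS:
            hits[domain] += 1
    return hits

logs = [
    "alice chat.openai.com",
    "bob intranet.example.com",
    "carol claude.ai",
    "alice chat.openai.com",
]
print(find_ai_usage(logs))  # Counter({'chat.openai.com': 2, 'claude.ai': 1})
```

Even a crude inventory like this gives the AI council concrete data on which tools have quietly taken hold, and where policy and safer alternatives are needed first.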

Establish an AI council

With that information in hand, the CISO should consider establishing an AI council made up of stakeholders from across the organisation – including IT, security, legal and the C-suite – to talk about the risks, the compliance issues, and the benefits arising from both unauthorised and authorised tools that are already permeating their business environments. This council can start to mould policies that meet business needs in a risk-managed way.

For example, the council may notice that an unsafe shadow AI tool has taken off, but that a safer alternative exists. A policy may then be established that explicitly bans the unsafe tool and recommends the alternative. Often these policies will need to be paired with investment not only in security controls, but also in those alternative AI tools. The council can also create a process for employees to submit new AI tooling for vetting and approval as advancements come to market.

By creating this direct, transparent line of communication, employees can feel reassured that they are adhering to company AI policies, empowered to ask questions, and encouraged to explore new tools and methods that could support growth down the line.

Update AI policy training

Engaging and training employees will play a crucial role in gaining organisational buy-in to keep shadow AI at bay. With better policies in place, employees will need guidance on the nuances of responsible AI use, the reasons certain policies exist, and the risks of mishandling data. This training can help them become active partners in innovating safely.

In some sectors, the use of AI in the workplace has been a taboo topic. Clearly outlining best practice for responsible AI usage, and the rationale behind an organisation's policies and processes, can reduce uncertainty and mitigate risk.

Governing the future of AI

Shadow AI isn't going away. As generative tools become more deeply embedded in everyday work, the challenge will only grow. Leaders must decide whether to see shadow AI as an uncontrollable threat or as an opportunity to rethink governance for the AI era. The organisations that thrive will be those that embrace innovation with clear guardrails, making AI both safe and transformative.
