
AI is the “Wild West” of cybersecurity that needs to be tamed

By Martin Greenfield, CEO of Quod Orbis

Criminals lurking in the shadows is an image we’re all familiar with. For businesses today, those shadows are cast by unsanctioned tools and technology deployed by employees without formal oversight, a practice now known as shadow AI. It is becoming prolific across global businesses, which means company security is constantly at risk. Leaders with poorly communicated or lax policies around the use of AI are failing to make governance discipline a foundation of workplace culture.

Given how much of an issue shadow AI is becoming, the responsibility falls to roles beyond the CISO to drive practices like control mapping, continuous monitoring and overall accountability before regulatory and operational risks spiral. The UK’s National Cyber Security Centre has recently reinforced that security is far more than a technical concern, calling for business-wide responsibility. AI sits firmly within that remit. Shadow AI is creating a new Wild West of security risk, and the leaders who draw first are the most likely to walk away unscathed.

Real governance is more than policies on paper 

We are starting to see businesses evolve how they manage AI risk, embedding it into their existing control and governance frameworks and treating AI controls in the same way as financial or cyber controls, rather than as something separate.

Yet a gap remains between ambition and execution. Research from EY shows that while 72% of executives say their organisations have integrated and scaled AI across most initiatives, only 33% believe they have adequate protocols in place to cover all aspects of responsible AI, including accountability and security. 

There’s a shift underway. Instead of relying on annual reviews or static documentation, leading businesses are monitoring AI data inputs, outputs and behaviour in near real time, applying the same discipline long used in cybersecurity. AI demands its own risk mapping exercise against established frameworks like the NIST AI Risk Management Framework, with clear ownership shared among core teams beyond just risk and compliance.
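In practice, near-real-time monitoring often starts with a simple wrapper around every model call. The sketch below is illustrative only: the `monitored_call` wrapper, the stand-in model function and the regex patterns are assumptions for this article, not a description of any specific product; a real deployment would use proper DLP classifiers and a central logging pipeline.

```python
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

# Illustrative patterns only; a real deployment would use proper
# DLP classifiers rather than simple regexes.
SENSITIVE = [
    re.compile(r"\b\d{16}\b"),            # possible payment card number
    re.compile(r"(?i)\bconfidential\b"),  # crude document-marking check
]

def monitored_call(model_fn, prompt: str, user: str) -> str:
    """Wrap a model call so every input and output is logged and screened."""
    ts = datetime.now(timezone.utc).isoformat()
    for pattern in SENSITIVE:
        if pattern.search(prompt):
            log.warning("%s blocked prompt from %s: sensitive content", ts, user)
            raise ValueError("prompt contains potentially sensitive data")
    output = model_fn(prompt)  # the actual AI call happens here
    log.info("%s user=%s prompt_len=%d output_len=%d",
             ts, user, len(prompt), len(output))
    return output

if __name__ == "__main__":
    echo_model = lambda p: p.upper()  # stand-in for a real model endpoint
    print(monitored_call(echo_model, "summarise this quarter's report", "analyst-01"))
```

The point of the pattern is that every interaction produces evidence by default, rather than relying on periodic manual review.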

The steps leaders should take 

Shadow AI is the new shadow IT, causing havoc across businesses without guardrails or approval. The first thing organisations need to do is get a handle on system visibility: because AI tools are so accessible, most businesses still don’t know where the technology sits within their workflows. From there, AI governance should be aligned with existing risk and compliance structures like DORA, ISO/IEC 42001 or NIST.
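Visibility work often begins with data the organisation already has, such as web proxy logs. The sketch below is a minimal, hypothetical example: the log format, the `AI_DOMAINS` watchlist and the `find_shadow_ai` helper are assumptions for illustration, not a reference implementation.

```python
from collections import Counter

# Illustrative watchlist; a real programme would maintain a curated,
# regularly updated catalogue of AI service domains.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def find_shadow_ai(proxy_log_lines):
    """Count requests to known AI services per user.

    Assumes a simple space-separated format: timestamp user domain path.
    """
    usage = Counter()
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain in AI_DOMAINS:
            usage[(user, domain)] += 1
    return usage

sample = [
    "2024-06-01T09:14Z alice chat.openai.com /c/abc",
    "2024-06-01T09:15Z bob intranet.example.com /home",
    "2024-06-01T09:16Z alice claude.ai /new",
]
for (user, domain), hits in find_shadow_ai(sample).items():
    print(f"{user} -> {domain}: {hits} request(s)")
```

Even a crude inventory like this tells leaders which teams are already using which tools, which is the starting point for any sanctioning decision.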

Above all else, assurance needs to be continuous. Controls should be monitored automatically and consistently, not just when the annual assessment comes around. A crucial element is making sure human oversight is defined from the outset: regulators will continue to ask who is accountable when AI makes a decision, so businesses need to be able to answer.
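One way to picture continuous assurance is as a set of small, automated control checks that run on a schedule and emit timestamped evidence tied to a named owner. The control IDs, owners and checks below are invented for illustration; in practice the checks would query real inventories, identity providers or model registries.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable

@dataclass
class ControlResult:
    control_id: str
    owner: str       # the named human accountable for the control
    passed: bool
    checked_at: str

def run_control(control_id: str, owner: str, check: Callable[[], bool]) -> ControlResult:
    """Evaluate one control and return timestamped evidence."""
    return ControlResult(
        control_id=control_id,
        owner=owner,
        passed=check(),
        checked_at=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical checks; real ones would query live systems, not lambdas.
checks = [
    ("AI-001", "head-of-data", lambda: True),   # e.g. approved-tool register is current
    ("AI-002", "ciso",         lambda: False),  # e.g. all AI traffic routed via gateway
]

evidence = [asdict(run_control(cid, owner, fn)) for cid, owner, fn in checks]
print(json.dumps(evidence, indent=2))  # feed into an assurance dashboard or scheduler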

Now, there is a persistent myth that stronger governance will slow down innovation, but a myth is all it is. In reality, the right controls give organisations the confidence to innovate quickly and responsibly. When transparency is built into the control systems from the beginning, teams can move faster because they know the AI is being monitored for performance and compliance in real time. Continuous monitoring steps in where manual processes never could, bringing automated assurance and allowing developers to maintain their pace while risk teams retain visibility. 

As a final point, AI risk cannot be managed in isolation. Vendors, consultants and internal governance bodies all play a role in a broader ecosystem of accountability. It’s up to technology vendors to provide complete transparency across the board, so that organisations don’t resort to blind trust but adopt a trust-but-verify mindset. Established internal AI ethics boards need to evolve as well: there is no longer a place for passive advisory boards when more active oversight functions are in demand, supported by real evidence from continuous monitoring tools.

What’s next?  

Looking ahead, there are a number of AI risks that could still cause headaches for boards if – and more likely when – they’re underestimated. The most prolific include supply chain risk, as organisations increasingly rely on third-party APIs and pre-trained models without fully auditing their integrity, and data leakage through generative AI tools, especially as enterprise use accelerates.
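Auditing the integrity of a pre-trained model can start with something as simple as pinning and re-verifying its checksum. The sketch below is a hypothetical illustration using a stand-in file; in practice the expected hash would come from the vendor’s signed release notes or an internal model registry.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large model artefacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo with a stand-in "model" file; the expected hash would normally be
# pinned at approval time and re-checked on every deployment.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(b"pretend model weights")
    model_path = Path(tmp.name)

expected = sha256_of(model_path)  # value pinned when the model was approved
print("integrity ok:", sha256_of(model_path) == expected)
```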

When dealing with the Wild West of cybersecurity, reaction speed determines the winners and losers. Sitting and waiting only opens the door wider for shadow AI to turn business systems into unruly and lawless environments. Leaders need real-time visibility and continuous monitoring to bring order and control.
