As artificial intelligence (AI) becomes deeply embedded across industries – from finance and healthcare to national security and critical infrastructure – its security risks are accelerating. Yet our approach to mitigating these risks remains stuck in the past.
For years, software security has been guided by something called the Common Vulnerabilities and Exposures (CVE) process. This framework effectively catalogues and prioritises known issues based on the impact or damage they could cause, enabling coordinated disclosure and patching. This system allows for the weakness to be reported promptly and accurately. However, AI doesn’t operate like traditional software – and neither do its vulnerabilities.
AI systems, particularly those powered by large language models (LLMs), are not governed by static lines of code alone. They are shaped by training data, emergent behaviours, and often opaque internal architectures. As a result, security flaws in AI aren’t always traceable to a single bug or hardcoded secret. They may emerge as unpredictable behaviours, exploitation of training blind spots, or subtle manipulations that go undetected by conventional systems. This exposes one of the significant flaws of businesses viewing AI security as traditional software.
Attempting to apply the CVE system to AI threats is therefore not only inadequate – it’s misguided.
APIs and the illusion of control
AI models are typically accessed via familiar Application Programming Interfaces (APIs). On the surface, this makes them appear controllable using standard software security techniques. In reality, APIs can be a critical vulnerability vector.
According to the US Cybersecurity and Infrastructure Security Agency, API-related weaknesses account for a significant share of reported AI issues. The ‘Open Worldwide Application Security Project’ reinforces this concern by highlighting various AI vulnerabilities. However, six of the top ten vulnerabilities affecting LLMs are only exploitable via compromised APIs. As a result, researchers from Truffle Security revealed that common API security concerns, such as hard coded credentials or security keys, can expose access to an LLM.
This is partly a reflection of how models are trained. Many LLMs draw from datasets like Common Crawl, a sweeping archive of the internet that includes decades of insecure coding practices. Unsurprisingly, these practices have seeped into the models themselves, further exacerbating the risk.
Barriers to reporting and disclosure
Adding insult to injury, a major issue worsening AI insecurity is the lack of clear reporting mechanisms. With models often built from a mixture of open-source software, third-party tools, and proprietary datasets, accountability becomes diffuse. Who is responsible when something goes wrong?
Even when researchers discover vulnerabilities, vendors frequently deny their legitimacy – arguing that such flaws don’t conform to established definitions. Without a standardised AI vulnerability reporting framework, many risks are ignored, leaving the door open for exploitation while insisting on using a CVE framework that isn’t able to recognise these new AI weaknesses.
A fragile, invisible supply chain
The AI ecosystem functions as a black box. Unlike traditional software, which can help track dependencies, AI tools rarely offer this level of transparency. An AI bill of materials, known as AIBOM – detailing datasets, model architectures, and embedded dependencies – remains a rare exception rather than the rule.
This lack of visibility in the AI supply chain makes it nearly impossible for security professionals to determine whether systems are affected by known threats. And because AI models evolve dynamically through continual input, they introduce an ever-shifting attack surface.
The industry’s reliance on legacy thinking that systems are either secure or vulnerable only exacerbates the issue. In reality, security in AI means defining what ‘safe’ looks like for systems that learn and change over time.
The skills and tools gap
The security gap isn’t just technical – it’s human. Many organisations lack the expertise to assess AI-specific threats like model inversion, adversarial prompts, or data poisoning. Traditional cybersecurity teams of scanners, software engineers, data architects, and testers, armed with conventional tools, are ill-equipped for this new battlefield.
Even with initiatives like the UK’s AI Security Code of Practice (which I believe should be mandatory for critical industries), progress remains slow. Without enforceable standards, shared threat intelligence, and modernised frameworks adapted from existing CVE reporting to this new reality, I fear the risk of systemic failure is high.
Time for a radical shift
Securing AI requires more than patches and protocol updates – it calls for a complete rethinking of how we build and monitor intelligent systems. That means:
- Independent security testing must become standard practice.
- Legal and procedural barriers to vulnerability disclosure need to be removed.
- Security should be built into the AI lifecycle, not bolted on afterward.
- AIBOMs should be mandated for all AI products.
- AI-specific threat models and tools must be developed and widely adopted.
- Cross-industry collaboration should focus on defining what ‘secure AI’ actually means.
This is not just a technical challenge, but it’s also a strategic imperative. Without transparency, robust frameworks, and collective vigilance, AI will become the next zero-day frontier: undetectable, untraceable, and devastatingly real.