AI systems are now part of core business operations. From customer-facing applications to internal decision-making workflows, organizations increasingly rely on AI models to drive strategic insights and automate actions. As a result, risk is shifting from infrastructure to models, data flows, and the logic that drives AI behavior.
Many of these systems are built on open-source components. According to McKinsey, the majority of organizations leveraging AI rely on open-source models or tools as part of their stacks, further expanding the dependency surface and introducing risks outside their direct control.
Yet most security tools still operate at the infrastructure layer, creating a significant visibility gap. And as organizations adopt AI at scale, this gap is becoming hard to ignore.
Main points to note:
- As companies become increasingly reliant on custom AI solutions, end-to-end visibility into the security of these systems is paramount but elusive.
- Many developers incorporate open-source AI models and components into their builds, which makes cybersecurity even harder: open-source AI components have repeatedly been shown to harbor vulnerabilities, yet software engineering teams often lack the resources to properly vet the libraries they use.
- Older security platforms tend to protect AI systems based on where they run rather than how they behave and what resources they access. Microsoft, for example, emphasizes access control and monitoring.
- Wiz, a market leader, takes a newer, more comprehensive approach to AI system security by dynamically scanning and mapping how infrastructure, models, data, and source code repositories connect.
The Emergence of ‘Invisible Risk’
One of the main risk drivers comes from the AI supply chain. AI models don’t operate in isolation. They rely on external data, third-party integrations, and interconnected systems. One weak link in the chain can compromise the integrity of the entire system without any visible indicators.
A recent study found that over 100 models on Hugging Face, a widely used platform for hosting and distributing ML models and datasets, contained malicious code. Developers often pull such components directly into production, with no mechanism to alert them that something is wrong.
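As a concrete illustration, the sketch below shows a minimal pre-load check for pickle-based model files, the serialization format behind many of the malicious Hugging Face uploads. It is a best-effort heuristic built only on Python's standard pickletools module; the file path and the list of suspicious modules are illustrative assumptions, and dedicated scanners go considerably further.

```python
# Minimal sketch: inspect a pickled model artifact for suspicious imports
# WITHOUT unpickling it (unpickling is what executes attacker code).
# Heuristic only; the path and module list below are illustrative.
import pickletools

MODEL_PATH = "downloaded_model.bin"  # hypothetical artifact to vet

# Modules a model checkpoint has no business importing at load time.
SUSPICIOUS = {"os", "posix", "nt", "subprocess", "socket",
              "shutil", "runpy", "builtins"}

def referenced_globals(path):
    """Best-effort listing of module.name pairs the pickle would import.

    GLOBAL opcodes carry the pair directly; STACK_GLOBAL (protocol 4)
    is approximated by pairing it with the two most recent string
    constants, which matches how real-world pickles are laid out.
    """
    with open(path, "rb") as f:
        data = f.read()
    strings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            yield arg.replace(" ", ".")          # arg is "module name"
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE",
                             "BINUNICODE8", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            yield f"{strings[-2]}.{strings[-1]}"

flagged = sorted({g for g in referenced_globals(MODEL_PATH)
                  if g.split(".")[0] in SUSPICIOUS})
if flagged:
    raise RuntimeError(f"refusing to load {MODEL_PATH}: references {flagged}")
```

Checks like this belong in CI or at artifact ingestion, before a model ever reaches a production loader; formats that cannot carry code, such as safetensors, sidestep the problem entirely.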
Attack methods like prompt injection, data poisoning, and model manipulation don’t rely on exploiting vulnerabilities in the conventional sense. There are no obvious indicators like malware execution or suspicious network traffic. From a system perspective, everything appears to be functioning normally.
Where Do Most Security Providers Fall Short?
Most security providers still focus on where AI runs, rather than how it behaves. They monitor the cloud instance, container, or API endpoint, but lack visibility into the model itself: there is no insight into who interacts with the model, how, or when.
And without that insight, security teams have no way of detecting manipulation, data leakage, or misuse. Everything may appear normal at the infrastructure level, while the model is producing incorrect or unsafe outcomes that the business continues to trust and act on.
Dependency tracking is another challenge. The AI supply chain is newer and less transparent than its traditional software counterpart, falling outside the scope of security tools built to track code libraries and known CVEs.
Finally, many security solutions rely on static detection methods, despite the dynamic nature of AI systems. AI behavior changes based on input, context, and dependencies, which means that organizations need more behavior-driven monitoring to catch real risks.
The Modern Approach to AI Security
Modern AI security starts with deep visibility into the AI lifecycle, not just the surrounding infrastructure. It requires understanding where models come from, what data they are trained on or retrieve, how they are used within workflows, and what systems they connect to.
This also includes mapping the full dependency chain, from models and datasets to third-party APIs, to understand how risk travels through the AI supply chain. To make that happen, organizations must turn to runtime and interaction-level monitoring.
Behavior-based AI monitoring detects anomalous inputs, risky outputs, and unusual interaction patterns, allowing security teams to observe how the model behaves in real time and identify risks that would otherwise go unnoticed.
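As a rough sketch of what this can look like in practice, the wrapper below records every model interaction and raises a few crude flags. The patterns, thresholds, and names are illustrative assumptions, not a vendor's implementation; real systems baseline normal behavior statistically rather than matching fixed strings.

```python
# Sketch of interaction-level monitoring: wrap each model call with an
# audit record (who, when, what) plus simple behavioral flags.
# All patterns and thresholds here are placeholders for illustration.
import json
import re
import time

INJECTION_PATTERNS = re.compile(
    r"ignore (all )?previous instructions|reveal (the )?system prompt",
    re.IGNORECASE)
SECRET_PATTERNS = re.compile(
    r"AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----")

def monitored_call(model_fn, user_id, prompt, audit_log="ai_audit.jsonl"):
    """Invoke model_fn(prompt), logging the interaction and any flags."""
    flags = []
    if INJECTION_PATTERNS.search(prompt):
        flags.append("possible_prompt_injection")
    output = model_fn(prompt)
    if SECRET_PATTERNS.search(output):
        flags.append("possible_secret_leak")
    if len(output) > 20_000:  # arbitrary demo threshold
        flags.append("anomalous_output_length")
    with open(audit_log, "a") as f:
        f.write(json.dumps({"ts": time.time(), "user": user_id,
                            "prompt_len": len(prompt),
                            "flags": flags}) + "\n")
    if flags:
        print(f"[ai-monitor] flagged call from {user_id}: {flags}")
    return output

# Usage (hypothetical model function):
# monitored_call(my_model.generate, "user-42", "Summarize this ticket...")
```

The value is less in any single check than in the audit trail itself: once every interaction is recorded with identity and flags attached, security teams can baseline normal behavior and investigate deviations.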
Finally, effective AI security depends on correlating risk across layers. Model behavior cannot be evaluated in isolation. It needs to be connected with identity (who is interacting with the system), data (what is being accessed or exposed), dependencies (what external components are involved), and infrastructure and APIs (where actions are executed).
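The toy example below illustrates that correlation as a graph problem: components, identities, and data stores become nodes, reachability becomes edges, and attack paths are simply paths from untrusted sources to sensitive assets. The nodes and edges are invented for illustration; this is the general idea behind security graphs, not any specific vendor's model.

```python
# Toy security graph: an edge A -> B means "A can reach or affect B".
# All node names below are fabricated for the example.
from collections import deque

EDGES = {
    "hf:unvetted-model":   ["svc:inference-api"],
    "user:contractor":     ["svc:inference-api"],
    "svc:inference-api":   ["db:customer-records", "api:payment-gateway"],
    "db:customer-records": [],
    "api:payment-gateway": [],
}
UNTRUSTED = {"hf:unvetted-model", "user:contractor"}
SENSITIVE = {"db:customer-records", "api:payment-gateway"}

def attack_paths(edges, sources, targets):
    """Return every cycle-free path from an untrusted source to a
    sensitive asset, shortest first (breadth-first search)."""
    found = []
    for src in sources:
        queue = deque([[src]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in targets:
                found.append(path)
                continue
            for nxt in edges.get(node, []):
                if nxt not in path:  # skip cycles
                    queue.append(path + [nxt])
    return found

for p in attack_paths(EDGES, UNTRUSTED, SENSITIVE):
    print(" -> ".join(p))
# e.g. hf:unvetted-model -> svc:inference-api -> db:customer-records
```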
By bringing these elements together, organizations can gain a clear understanding of real business risk.
How Security Platforms Are Evolving to Address AI Risk
Security platforms are evolving to address the shift from infrastructure risk to model and dependency risk. Many vendors are extending existing capabilities to cover AI systems, while a new wave of startups is focusing specifically on AI-native threats.
Wiz, a security leader, now offers end-to-end visibility across infrastructure, models, data, and AI pipelines. By mapping how these components connect through its security graph, Wiz helps identify real attack paths and exposure risks across the AI stack and the AI supply chain.
Cybersecurity powerhouses like Palo Alto Networks and Microsoft are also integrating AI security into their broader platforms. Their main focus is on introducing AI governance, access control, and monitoring within existing enterprise security and cloud ecosystems.
A new wave of startups also brings unique approaches to AI security. For example, Protect AI specializes in securing the AI supply chain by scanning models and datasets for hidden risks. Lakera, on the other hand, focuses on detecting and preventing runtime threats like prompt injection and unsafe model behavior.
Conclusion
AI is changing not just how businesses operate, but where risk lives. The challenge is not a lack of security controls, but a lack of visibility into how AI systems actually behave and how risk moves through the broader ecosystem.
Luckily, security platforms are evolving to close this gap. By evolving their approach and adopting the right tools, organizations can confidently enter the AI era without sacrificing security or control.