
GenAI is cementing its place at the core of enterprise transformation. What began as experimental use cases has evolved into business-critical deployments that are redefining how organisations innovate, operate, and protect their data. However, this unprecedented speed of adoption is not without risk. As GenAI goes mainstream, so too do its associated security challenges — and the consequences are already here.
Palo Alto Networks’ ‘State of GenAI in 2025’ research recently revealed that enterprise GenAI traffic has skyrocketed by 890% over the past year, a clear sign that GenAI has shifted from a novelty to an essential utility. But the security implications of this adoption wave are escalating at an even faster pace. Novel threats, including data leakage, intellectual property exposure, misuse of AI agents, and the proliferation of malicious code, all demand urgent attention from CISOs, CIOs, and IT security professionals.
Data security incidents and the risks of shadow AI
The increased adoption of GenAI is accompanied by substantial growth in data security incidents relating to its use. The research shows that data loss prevention (DLP) incidents linked to GenAI more than doubled between January and March 2025. On average, organisations now face a 2.5-fold increase in monthly GenAI-related data incidents, which account for 14% of all data security incidents detected in SaaS traffic.
Perhaps the most urgent concern is shadow AI: the use of GenAI tools by employees without the IT team’s knowledge or approval. On average, organisations are now using 66 GenAI applications, with 10% of these classified as high risk.
Many of these tools are introduced informally, used through personal accounts or free trials, and lack even the most basic security oversight. This decentralised individual use of AI may seem harmless, but it creates an enormous governance gap. Sensitive data, including intellectual property, source code, and customer information, is being shared with external platforms, often without any visibility from IT teams. This puts the business at heightened risk of data exposure and misuse, regulatory violations, and loss of control over intellectual property.
Governance isn’t optional
Unvetted GenAI tools also carry the risk of poisoned outputs, phishing scams, and malware disguised as legitimate AI responses. Compounding this risk, fast-evolving regulations for AI and data usage are being established worldwide, and noncompliance can result in severe penalties.
Governments and regulatory bodies are working to catch up with AI’s rapid deployment, which creates uncertainty for businesses. Around the world, laws demand extreme caution in the handling and sharing of personal data with GenAI applications. While the UK doesn’t yet have AI-specific legislation, the Department for Science, Innovation and Technology (DSIT) has published an AI Code of Practice that guides organisations on the adoption, use, and lifecycle management of AI technologies.
To respond to these challenges, organisations must shift from reactive oversight to proactive governance. Security leaders need comprehensive visibility into GenAI application traffic within the organisation and must ensure policy enforcement at scale. This can include conditional access controls that govern who can use GenAI apps, aligning access with role-based permissions, device compliance, and application risk. Advanced DLP tools can inspect content flowing to and from GenAI platforms to detect and block sensitive information before it leaves the network.
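To make this concrete, here is a minimal sketch, in Python, of how an outbound gate might combine role-based app approval with pattern-based content checks before a prompt leaves the network. The patterns, role table, and check_prompt helper are illustrative assumptions, not any particular product’s implementation.

```python
import re

# Illustrative patterns only; production DLP engines add exact-data
# matching, ML classifiers, and file inspection on top of simple regexes.
SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

# Hypothetical role-based allowlist of approved GenAI apps.
ROLE_ALLOWED_APPS = {
    "engineer": {"approved-code-assistant"},
    "marketing": {"approved-writing-assistant"},
}

def check_prompt(role: str, app: str, prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block unapproved apps or sensitive content."""
    reasons = []
    if app not in ROLE_ALLOWED_APPS.get(role, set()):
        reasons.append(f"app '{app}' is not approved for role '{role}'")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"possible {label} detected in prompt")
    return (not reasons, reasons)

if __name__ == "__main__":
    allowed, reasons = check_prompt(
        "marketing",
        "approved-writing-assistant",
        "Draft a note to jane.doe@example.com about the Q3 launch.",
    )
    print("allowed" if allowed else f"blocked: {reasons}")
```

Here the prompt is blocked because it contains an email address, even though the app itself is approved for the role; both checks must pass before anything leaves the network.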
Defence against modern cyberthreats calls for a zero-trust foundation for GenAI interactions. Assuming no AI tool or plugin is inherently safe, continuous inspection can help identify and block highly sophisticated, stealthy malware hidden within GenAI responses.
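As an illustration of that assume-nothing-is-safe posture, the sketch below inspects inbound GenAI responses for a few crude indicators: known-bad patterns, links to untrusted domains, and long base64-like blobs that could smuggle encoded payloads. The indicator list and domain allowlist are hypothetical; a real deployment would lean on sandboxing and threat-intelligence feeds rather than hand-written rules.

```python
import re

# Hypothetical indicators for inbound GenAI responses; illustrative only.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),              # inline script injection
    re.compile(r"\bpowershell\s+-enc\b", re.IGNORECASE),  # encoded PowerShell
]
URL_RE = re.compile(r"https?://([\w.-]+)")
TRUSTED_DOMAINS = {"docs.python.org", "example.com"}      # assumed allowlist

def inspect_response(text: str) -> list[str]:
    """Return findings for a GenAI response; an empty list means none flagged."""
    findings = [f"matched pattern: {p.pattern}"
                for p in SUSPICIOUS_PATTERNS if p.search(text)]
    for domain in URL_RE.findall(text):
        if domain not in TRUSTED_DOMAINS:
            findings.append(f"link to untrusted domain: {domain}")
    # Long base64-like runs can smuggle encoded binary payloads.
    if re.search(r"[A-Za-z0-9+/]{80,}={0,2}", text):
        findings.append("unusually long base64-like blob")
    return findings

print(inspect_response("See https://evil.example.net/payload and run it."))
```

Under a zero-trust model, a flagged response would be quarantined or stripped rather than delivered to the user.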
Lastly, businesses should consider establishing a dedicated AI and data oversight committee responsible for tasks such as classifying data (e.g., public, confidential, restricted) and ensuring its appropriate use within AI initiatives.
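Such a committee’s classification scheme can also be encoded directly in tooling so that handling rules travel with the labels. The tiers below mirror the example labels above; the attached GenAI handling rules are illustrative assumptions.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

# Assumed handling rules an oversight committee might attach to each tier.
GENAI_HANDLING = {
    Classification.PUBLIC: "may be used with any approved GenAI app",
    Classification.CONFIDENTIAL: "enterprise tenant only, with DLP inspection",
    Classification.RESTRICTED: "must never be sent to external GenAI platforms",
}

def handling_rule(label: Classification) -> str:
    """Look up the GenAI handling rule for a data classification label."""
    return GENAI_HANDLING[label]

print(handling_rule(Classification.RESTRICTED))
```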
The explosive growth of GenAI has fundamentally altered the digital landscape for enterprise organisations. It is changing how we work, compete, and create, and doing so at warp speed. As adoption accelerates, so do the threats: the risks of data leakage, compliance failures, and security breaches are greater than ever. It’s up to today’s security leaders to set the guardrails that will define a safe and resilient AI-powered future. Prioritising strong data controls, access management, and employee training will be crucial for technology leaders laying the foundation for secure GenAI adoption.
Ultimately, securing GenAI isn’t about slowing down but about innovating safely. The businesses that will thrive are those that can leverage the full potential of AI while keeping data, systems, and people protected. That means security must be built into every layer of GenAI adoption, not bolted on after the fact.