Future of AI

How enterprises can balance AI’s promise with the security risks

By Chris Harris, EMEA Technical Director, Cybersecurity Products at Thales

AI’s continuing development holds great promise for a wide range of business functions, from customer support and document and data analysis through to management and administration tasks at scale. Under pressure to drive efficiencies and increase productivity, some businesses are rapidly integrating AI into core functions without fully understanding the implications, creating room for growing security risks.

As agentic AI emerges – making decisions and performing tasks without human intervention – the quality of the data that AI works from becomes a growing concern, as does keeping pace with the speed at which the technology is evolving. According to recent data, nearly 70% of organisations view the rapid pace of AI development as the leading security concern related to its adoption.

While the rapid adoption of GenAI without safeguards is clearly a security concern, the same research suggests that organisations in the more advanced stages of AI adoption aren’t waiting to fully secure their systems or optimise their tech stacks before forging ahead. In the race to invest and implement AI, there is a risk that enterprises may be inadvertently creating their own biggest security vulnerabilities. 

Data Dependency and the Risk of Exposure 

GenAI thrives on data, requiring access to vast datasets that are often sensitive, proprietary, or regulated. This dependency introduces new risks, especially when data governance and access controls are not fully aligned with AI deployment strategies. For instance, employees might inadvertently input confidential information into publicly accessible AI tools, risking exposure of personal or business data. Similarly, integrating GenAI into workflows without clear oversight can result in models training on data they weren’t meant to access, potentially breaching contractual or regulatory boundaries.

Moreover, the integration of GenAI into business workflows can often happen in silos, with individual departments experimenting independently. This fragmented approach can lead to inconsistent data handling practices and weaknesses in security. Without a centralised framework for managing AI-related data flows, organisations risk losing control over where and how sensitive information is processed, stored, or shared.  

As GenAI becomes embedded in platforms and enterprise systems, visibility into how data is used and protected diminishes. This lack of transparency can lead to inadvertent data leakage, model misuse, or exposure of confidential information – risks that are often underestimated in the race to innovate. One-third of enterprises, for example, now say they are in the “integration” or “transformation” phases of their GenAI journey. 

Regulatory Challenges 

Another area of risk with AI adoption lies in the regulatory environment, as AI systems can inadvertently amplify biases, leading to unintended outcomes such as discriminatory decision-making. These, in turn, can carry significant legal and reputational consequences, undermining trust and damaging enterprise reputations. By embedding regulatory awareness – as well as human oversight – into every stage of AI development, deployment and decision-making, enterprises can mitigate the risk of a data leak or ethical failure.

Additionally, AI systems typically draw on large volumes of data to generate results or recommendations. Given the number of enterprises now using these systems, it is likely that much of this data will be sensitive.

This data collection brings security and privacy concerns, so businesses must understand the importance of complying with data privacy requirements and take the necessary steps to ringfence and safeguard the data they want to use. They may, for example, run AI capabilities in a completely sandboxed environment rather than the public cloud to avoid data leakage.
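As a minimal, purely illustrative sketch of that kind of ringfencing, the snippet below masks a few obvious sensitive values before a prompt is passed to any AI tool; the regex patterns and placeholder labels are assumptions chosen for demonstration, not a production-grade control:

```python
import re

# Illustrative patterns only; a real deployment would rely on a dedicated
# data-classification or loss-prevention service rather than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d \-()]{8,}\d"),
}

def ringfence(text: str) -> str:
    """Replace obvious sensitive values with placeholders before the text
    is allowed to leave the controlled environment."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com (card 4111 1111 1111 1111)."
print(ringfence(prompt))
# Summarise the complaint from [EMAIL REDACTED] (card [CARD REDACTED]).
```

In practice a check like this would sit alongside dedicated classification and access controls, but the principle is the same: sensitive values are stripped before a prompt crosses a trust boundary.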

All these risks emphasise the importance of having a carefully defined strategy in place around the usage of AI, the data it runs on and the decisions it is informing.  

Security Measures Catch Up 

In a more positive sign, investment in AI-specific security tools is on the rise, with recent data finding that 73% of enterprises are funding these tools either through new budgets or by reallocating existing resources.

However, having these tools in place does not equate to security readiness on its own. Alongside the technology, organisations need a clear strategy that includes governance, training, and continuous oversight. This will enable businesses to fully understand how AI models are operating within their IT infrastructure, the applications they’re interacting with, and the data they’re pulling from.

Only then will they stand a good chance of fully protecting themselves against AI-related risks such as hallucinations and data leakage.

Bridging the Readiness Gap 

To close the gap between GenAI adoption and security maturity, organisations must work closely with their providers to ensure the solutions they choose are right-sized for them. This could mean running the AI model on the company’s own servers in a secure local environment, rather than having requests bounced out into the public Internet, and ringfencing the data that a given AI model operates on.
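As a rough illustration, keeping requests local can be as simple as pointing internal applications at a model server hosted on the organisation’s own infrastructure rather than a public API. The endpoint URL, model name and response schema below are assumptions (an OpenAI-compatible interface is common to several self-hosted serving tools), not a reference to any specific product:

```python
import requests

# Assumption: a self-hosted inference server exposing an OpenAI-compatible
# chat endpoint inside the corporate network, so prompts and responses never
# leave the organisation's own environment. URL and model name are hypothetical.
LOCAL_ENDPOINT = "http://ai.internal.example:8000/v1/chat/completions"

def ask_local_model(prompt: str) -> str:
    payload = {
        "model": "internal-llm",  # hypothetical internal model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarise our Q3 incident report in three bullet points."))
```

Because the server only ever sees data the organisation has chosen to expose to it, the same pattern supports ringfencing: access controls, logging and retention stay within the existing security perimeter.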

Security should not be a speed bump on the road to innovation – it should be a foundational element that enables safe, scalable AI adoption. Enterprises that recognise this will be better positioned to harness the full potential of GenAI without compromising trust or resilience.  
