
While the path to successful adoption is rife with challenges, it’s clear that artificial intelligence is here to stay, and the companies that leverage it successfully will be the ones pulling ahead. In fact, 78 percent of enterprise leaders expect to increase their overall spending on AI in the next fiscal year. Businesses want to move quickly when it comes to game changers like AI. The benefits are manifold, and emerging trends often carry a sense of urgency, driven by the perception that early adopters reap the greatest rewards.
Generative AI can uncover insights, drive efficiency, and transform the way we collaborate. However, it also introduces unique security and compliance challenges, particularly when deploying third-party AI applications, which makes a thorough security evaluation key to a successful implementation.
Identifying high-risk applications
Innovation always carries some inherent risks. It is the role of CISOs and internal security teams to evaluate those risks and then make recommendations on how to mitigate them effectively. In particular, commercial large language models (LLMs) that aren’t proprietary to the company require careful consideration so they can be implemented in a way that doesn’t leave sensitive business data vulnerable.
Not knowing exactly which model underpins the AI tool an organisation has implemented can easily become a costly mistake. Every public model out there has weaknesses, and if the security team is blind to them, threat actors can exploit them to gain access to valuable company data. The fundamental questions should always be: “What data is being used? How is it being used? Who has access? Where is it going? How is it secured?”
This type of preliminary evaluation does not stop with identifying the model, however. Another key part of evaluating AI systems is prioritising those that are “explainable”, so that the way a model makes decisions can be understood. This transparency includes clear insight into algorithmic methodologies and mechanisms for addressing potential bias, and it enables audits of decision-making processes and compliance validation.
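As a rough illustration of how these evaluation criteria can be made concrete, the sketch below records each third-party model as a structured, reviewable assessment. Everything in it is hypothetical (the field names, the escalation rule, the example vendor); real programmes would use their own governance tooling, but the point is that every one of the fundamental questions gets an explicit, auditable answer.

```python
from dataclasses import dataclass, field

@dataclass
class ModelAssessment:
    """Hypothetical due-diligence record for a third-party AI model."""
    vendor: str
    model_name: str                       # exactly which model is being harnessed
    data_categories_used: list[str]       # what data is being used?
    purpose: str                          # how is it being used?
    who_has_access: list[str]             # who has access?
    data_destinations: list[str]          # where is it going?
    security_controls: list[str]          # how is it secured?
    explainability_notes: str = ""        # how does the model reach its decisions?
    open_questions: list[str] = field(default_factory=list)

    def needs_escalation(self) -> bool:
        """Flag the model for security review if sensitive data categories are
        involved or any of the fundamental questions remain unanswered."""
        sensitive = {"customer PII", "financials", "source code"}
        return bool(sensitive & set(self.data_categories_used)) or bool(self.open_questions)

# Example: an assessment of a hypothetical vendor chatbot.
assessment = ModelAssessment(
    vendor="ExampleVendor",
    model_name="hosted GPT-class LLM (version unconfirmed)",
    data_categories_used=["support tickets", "customer PII"],
    purpose="summarising support conversations",
    who_has_access=["support team", "vendor operations staff"],
    data_destinations=["vendor cloud, US region"],
    security_controls=["encryption in transit", "SSO"],
    explainability_notes="vendor provides no insight into how answers are ranked",
    open_questions=["Is our data used to train the vendor's models?"],
)
print(assessment.needs_escalation())  # True -> escalate before rollout
```

Keeping such records versioned alongside other risk documentation also gives auditors a clear trail of what was known about each model at the time it was approved.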
Implementation dos and don’ts
The level of risk appetite will vary from business to business, but organisations need to be conscious of what type of data they intend to feed the AI, and this is increasingly true of the public models on the market. While it might be tempting to use the public version of ChatGPT or DeepSeek’s open-source model to boost efficiency within the organisation in a fast and cost-effective manner, doing so carries huge potential for risk.
For example, the finance department might have some huge spreadsheets they want quick insights from, so they feed them to the AI. Those spreadsheets, containing valuable and sensitive information, are now being used to train a public AI model that is widely accessible on the internet.
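A practical safeguard for exactly this scenario is to screen content before it ever leaves the organisation for a public model. The sketch below is a minimal illustration only: the regular expressions are deliberately crude placeholders, and a real deployment would sit behind a proper data-loss-prevention or classification service rather than a handful of patterns.

```python
import re

# Deliberately simple, hypothetical patterns; real deployments would rely on a
# dedicated DLP/classification service instead of a handful of regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban":          re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_before_submission(text: str) -> list[str]:
    """Return the sensitive categories detected in text destined for a public LLM."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = (
    "Summarise Q3 revenue per client. "
    "Contact jane.doe@example.com, card 4111 1111 1111 1111."
)
findings = screen_before_submission(prompt)
if findings:
    print(f"Blocked before submission: {', '.join(findings)} detected")
else:
    print("No obvious sensitive data found; forwarding to the external service")
```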
With data exposure events an inevitable reality, providing public AI models access to confidential company information introduces a massive risk. AI models are a treasure trove of data and therefore have bullseyes on their backs. Public models become especially vulnerable when threat actors learn how to manipulate the system via carefully crafted prompts and are able to corrupt the model’s behaviour.
For example, in 2023, OpenAI disclosed a bug that allowed some ChatGPT users to see the titles of other users’ conversation histories, underscoring how improper security can lead to accidental disclosure. Mitigating this risk falls outside the purview of the internal IT team, leaving it reliant on the security of public systems with known vulnerabilities to protect company data – a position no CISO ever wants to be in.
Improving AI literacy
The EU AI Act has most recently cemented the importance of AI literacy in its compliance framework. Article 4 requires both providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff, taking into account their technical knowledge, experience, education and training.
Training employees to use AI correctly is just as important as the considerations around the type of AI a business decides to deploy. Employees with access need to be aware of the policies and guiding principles that set the boundaries for responsible AI use within the company. Especially if the use of public LLMs is allowed or encouraged, employees need to stop and ask questions before they feed the AI any information.
Training is necessary even for businesses that have no plans to invest in their own AI solution, because employees will use AI whether it’s sanctioned or not. Shadow IT has been a long-standing issue regardless of the size of the business, with one study finding that 80% of employees use software not sanctioned by their organisation.
This practice exposes companies to massive security risks, and the only way to guard against it is to monitor vigilantly and to empower employees to use AI in a safe and responsible way. Companies that ban AI use are not making the problem go away.
In fact, they might just make their security posture worse by failing to educate their staff on the dangers of AI use, driving shadow IT usage even higher. Empowering employees, working together with them, and giving them the tools they need to succeed in their roles will reduce shadow IT, promote safe and responsible AI use and ultimately benefit everyone.
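The vigilant monitoring mentioned above can start with something as simple as reviewing outbound traffic for unsanctioned AI services. The sketch below assumes a CSV export of web-proxy logs with user and destination_host columns and a hand-maintained domain list; both are hypothetical stand-ins for what a secure web gateway or CASB would provide out of the box.

```python
import csv
from collections import Counter

# Hypothetical lists, maintained as part of the company's AI use policy.
SANCTIONED_AI_DOMAINS = {"copilot.internal.example.com"}
PUBLIC_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "chat.deepseek.com"}

def find_shadow_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests per (user, host) to public AI services that are not sanctioned.

    Assumes the export has 'user' and 'destination_host' columns (a made-up schema).
    """
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].strip().lower()
            if host in PUBLIC_AI_DOMAINS and host not in SANCTIONED_AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

# Surface the heaviest unsanctioned usage for a follow-up conversation,
# paired with training and a sanctioned alternative rather than punishment.
for (user, host), count in find_shadow_ai_usage("proxy_export.csv").most_common(10):
    print(f"{user} -> {host}: {count} requests")
```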
Risks can be mitigated by asking the right questions and establishing the appropriate frameworks to guide employees. Businesses that dedicate time and resources to finding the right balance in their AI use – those that embrace it and figure out how to use it to their advantage – will be the ones that thrive in the coming years.