
The role of the Chief Security Officer (CSO) is rapidly changing. Today, they must go beyond technical oversight and actively shape strategic business decisions, ensuring security remains integral to every initiative.
Executive teams now recognise that technology security is a business-critical priority. As CSOs, we ensure that security considerations are embedded in every decision, driving operational efficiency and protecting our assets. But there is also a growing mindset shift (or at least a need for one) that’s heavily driven by artificial intelligence.
AI introduces unpredictable risks that demand a more balanced approach. CSOs must now also focus on realistic risk management for the technology while enabling innovation and fostering collaboration across teams.
Cloud history repeats itself
The rapid adoption of AI mirrors the early days of cloud computing: fast deployment, scalability, and immediate business benefits. However, the security challenges are more complex and require proactive leadership.
Fast time to value, however, isn’t the only parallel. Now, just as then, there are thorny questions around how best to enable the broad adoption of a new technology while maintaining robust protections for systems and sensitive data.
As a result, most CSOs are currently grappling with the familiar challenge of managing new technology they don’t fully understand. This task is further compounded by the non-deterministic nature of AI advances. Designed to process vast volumes of data and produce novel outputs, large language models (LLMs) fuel autonomous decision-making that makes them near-impossible to monitor via traditional methods built around mitigating defined risks.
The security versus innovation cycle
Multiple groups and bodies are already striving to meet the need for a clear AI playbook. At a broad level, the European Union’s AI Act has set general obligations for producing and running smart tools, while the intergovernmental Organisation for Economic Co-operation and Development (OECD) has established core principles for responsible use.
Providing more actionable detail, the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC) have also published the first cross-border benchmark for AI system management. Intended to help shape the way tools are handled, the standard includes advice for creating and maintaining self-regulation structures.
These guidelines are a strong step in the right direction, and as policies continue to evolve, compliance will remain essential. But compliance alone is not sufficient: one-off checks cannot keep pace with threat actors who adapt faster than regulations, continuously targeting vulnerabilities in AI systems. As CSOs, we must advocate for ongoing vigilance and adaptive security controls, and security must be proactive, not reactive. As shown by recent jailbreaks of models such as DeepSeek, the unrelenting determination to find ways around restrictions means no AI model is ever “jailbreak proof”, even with regular updates.
Adapting to the age of uncertainty
So, where does this leave CSOs? In mindset terms, they must accept the potential risks of AI, along with its numerous positive effects. This is especially true as multi-market studies show 92% of companies plan to increase their generative AI investment over the next few years, fuelled by the push to capitalise on tools that mirror ever-higher levels of human intelligence.
Practically, however, it will still be important to find a middle road between facilitating AI use and minimising threats, which will require a change of tack. Security leaders must move from issuing mandates to building consensus, and prioritise clear communication and team engagement to ensure security policies are understood and adopted.
As part of this, they’ll need to expand their wider business knowledge to understand which AI tools drive business value and customer impact. This knowledge is essential for advocating for necessary defences and demonstrating the business risks of inadequate security.
Exactly what these protections look like will vary, but once again, all will need to prioritise balance. Overly restrictive security measures are impractical and impede operational efficiency. CSOs should focus on access frameworks that balance openness with accountability, such as utilising unique signature keys for user validation. Additionally, training must go beyond procedures and explain the rationale behind security policies. Employees need to understand not just how, but why, security measures are in place, especially when using AI.
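To make the idea of unique signature keys concrete, here is a minimal sketch of per-user request signing using HMAC. The function names and flow are illustrative assumptions, not any specific product’s API: each user is issued a unique key at onboarding, signs each request to an AI service with it, and the service validates the signature before processing.

```python
import hashlib
import hmac
import secrets

def issue_key() -> bytes:
    """Generate a unique signing key for a user at onboarding."""
    return secrets.token_bytes(32)

def sign_request(key: bytes, payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the request payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def validate_request(key: bytes, payload: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A tampered payload or a signature produced with another user’s key fails validation, which gives the accountability half of the balance: requests remain easy to make, but each one is attributable to a specific key holder.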
The rise of AI has expanded the CSO’s responsibilities. They will remain engaged in the well-worn game of risk roulette—trying to guess where the biggest threats are going to come from and figure out how to deflect them—but with new challenges and opportunities. Our role as security leaders is to anticipate these emerging threats, implement effective defences, and enable the business to thrive.