Artificial intelligence is rapidly transforming the cybersecurity landscape. There’s no doubt that it has become one of the most powerful tools available to businesses, with the potential to significantly improve threat detection, streamline processes, and boost productivity.
At the same time, AI is equipping malicious actors with new and more sophisticated capabilities, reshaping the threat environment in ways that organisations are still racing to understand.
ISACA’s latest findings on the use of AI in the workplace highlight this issue. More than four in five (83%) IT and business professionals in Europe say that employees in their organisation are already using AI at work. Yet just 31% report that their workplace has a formal, comprehensive policy to guide that use. This widening gap between adoption and oversight leaves many businesses exposed to emerging risks, including misinformation, deepfakes, and AI-generated malicious code.
Without the right safeguards in place, AI can quickly go from being an innovation accelerator to a serious liability. As AI’s role in the enterprise continues to expand, organisations must build the structures and skills needed to use it responsibly – before the risks outpace the rewards.
The Promise of AI in Cybersecurity
AI is already playing a growing role in how organisations strengthen their cyber defences. Its ability to rapidly process large volumes of data is enhancing the speed and precision of threat detection, enabling teams to identify patterns, flag anomalies, and automate elements of their response in near real time.
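To make that concrete, the sketch below shows one simple way a security team might score login telemetry for anomalies with an unsupervised model. It is a minimal illustration rather than a production pipeline: the features, thresholds and synthetic data are assumptions made for the example.

```python
# Minimal sketch: flagging anomalous login events with an unsupervised model.
# Assumptions: telemetry has already been reduced to numeric features
# (login hour, failed attempts, data transferred); all values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behaviour: office-hours logins, few failures, modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),      # login hour
    rng.poisson(1, 500),         # failed attempts before success
    rng.normal(50, 15, 500),     # MB transferred in session
])

# A handful of suspicious sessions: 3 a.m. logins, many failures, large transfers.
suspicious = np.array([
    [3, 12, 900],
    [2, 8, 650],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
events = np.vstack([normal[:5], suspicious])
for event, label in zip(events, model.predict(events)):
    flag = "ANOMALY" if label == -1 else "ok"
    print(f"hour={event[0]:5.1f} failures={event[1]:4.0f} MB={event[2]:6.1f} -> {flag}")
```

In practice this kind of statistical scoring sits alongside rule-based detections and analyst review; the point is simply that it can run continuously over streaming telemetry.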
These capabilities are particularly relevant in areas like threat modelling, where AI can help simulate attack paths, evaluate system vulnerabilities, and inform more strategic decision-making. By streamlining risk prioritisation, supporting faster response, and improving alignment between security and IT leadership, AI can help businesses embed these practices into day-to-day operations.
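As a toy illustration of that attack-path analysis, the sketch below enumerates routes from an internet-facing service to a sensitive database over a hand-built asset graph and ranks them by estimated effort. The nodes, edges and weights are invented for the example; AI-assisted threat-modelling tools automate this kind of exploration at far greater scale and with richer data.

```python
# Toy attack-path enumeration over a hand-built asset graph.
# All node names and edge "effort" weights are hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("internet", "web_server", 1),      # exposed service
    ("web_server", "app_server", 2),    # lateral movement
    ("internet", "vpn_gateway", 3),     # phished credentials
    ("vpn_gateway", "app_server", 1),
    ("app_server", "database", 2),      # privilege escalation
])

# Enumerate every simple path an attacker could take to the database,
# then rank by total estimated effort.
paths = list(nx.all_simple_paths(g, "internet", "database"))
ranked = sorted(paths, key=lambda p: nx.path_weight(g, p, weight="weight"))

for path in ranked:
    cost = nx.path_weight(g, path, weight="weight")
    print(f"effort={cost}: " + " -> ".join(path))
```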
Elsewhere, AI is also being integrated into the software development lifecycle. When built into code editors, it can support developers by flagging security issues during the build phase, helping to reduce the risk of common vulnerabilities making it into production.
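The snippet below is a deliberately simplified, rule-based stand-in for that idea: a pre-commit-style scan that flags likely hardcoded credentials before code reaches a build. AI-assisted editor tooling goes much further than pattern matching, but the hook into the development lifecycle is the same. The patterns and invocation are illustrative assumptions.

```python
# Simplified, rule-based stand-in for build-phase security checks:
# scan source files for likely hardcoded credentials.
# Patterns and paths are illustrative; real tooling goes much further.
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "possible AWS access key"),
    (re.compile(r"""(?i)(password|secret|api_key)\s*=\s*['"][^'"]+['"]"""),
     "possible hardcoded credential"),
]

def scan(paths):
    findings = []
    for path in paths:
        lines = Path(path).read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            for pattern, message in SUSPICIOUS:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {message}")
    return findings

if __name__ == "__main__":
    issues = scan(sys.argv[1:])   # e.g. invoked by a pre-commit hook on staged files
    print("\n".join(issues) or "no issues found")
    sys.exit(1 if issues else 0)
```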
The benefits are already being felt. More than half (56%) of European professionals say that AI has improved productivity, while 71% report gains in efficiency and time savings. As these tools become more widespread, they’re expected to play a growing role in helping security teams work smarter and stay ahead of emerging threats.
The Evolving Threat Landscape
Despite the benefits AI brings to cybersecurity, it’s also being weaponised by bad actors in ways that are rapidly expanding and reshaping the threat landscape. The same capabilities that make AI useful for defenders – scale, automation and adaptability – are now enabling attackers to breach systems with far greater efficiency. Nearly two-thirds (64%) of European professionals are extremely or very concerned that generative AI could be turned against them – a sign of how quickly once-hypothetical risks have become tangible, imminent threats.
Phishing campaigns are becoming harder to detect, with generative tools producing highly personalised messages that mimic tone, formatting, and context. AI is also making it easier to distort information and manipulate perception. From fabricated images to realistic synthetic audio, deepfakes and other forms of synthetic media are now being used to erode trust in digital content. Almost three-quarters (71%) of professionals expect these tactics to become more widespread and harder to detect in the year ahead, yet only 18% say their organisation is investing in tools to counter them.
This growing mismatch between the sophistication of AI-enabled threats and the tools available to counter them is leaving many organisations on the back foot. Traditional security models are struggling to keep pace, and without a more agile, forward-looking approach, the gap will only continue to widen.
The Governance Gap
AI adoption is accelerating, but governance has not kept pace, leaving a widening gap between capability and control. Although AI is now used extensively across workplaces, many organisations still lack the internal structures needed to guide its use safely.
This is reflected in the numbers. As noted, 83% of professionals say employees in their organisation are already using AI, yet only 31% report having a formal, comprehensive policy in place. Without clear oversight, organisations are more likely to encounter security lapses, ethical missteps, or inconsistent implementation.
In Europe, regulation designed to tackle this issue is beginning to take shape. The EU’s AI Act came into force in August 2024, setting clear expectations around transparency, accountability and responsible AI use. It signals where international policy is heading – and the kind of standards organisations will increasingly be expected to meet.
By contrast, the approach so far in the UK has been more principles-based and innovation-friendly. But without firmer guidance or enforceable standards, many businesses are left to interpret best practice for themselves. As AI use continues to evolve and scale, the UK risks falling behind international peers if it does not move faster to close this regulatory gap. A lack of alignment with global frameworks could also undermine competitiveness for UK businesses operating in international markets.
While policy catches up, organisations must take the lead internally. Effective governance means more than having a policy on paper. It requires role-specific guidance, clear escalation pathways, and a workforce trained to spot and respond to AI-related risks. Without this foundation, even well-intentioned use of AI can leave organisations vulnerable.
Closing the Skills Gap
While governance frameworks provide a foundation for responsible AI use, they are only as effective as the people implementing them. The rapid integration of AI into everyday business functions means employees at all levels are now engaging with the technology – often without formal training, clear guidelines, or even a full understanding of the risks involved.
The resulting skills gap is becoming one of the most pressing challenges for organisations looking to manage AI safely. Indeed, 42% of professionals in Europe believe they will need to significantly improve their AI knowledge in the next six months just to remain effective in their roles.
Without investment in upskilling, employees may use AI tools in the wrong context or fail to spot emerging threats like data leakage, deepfakes, and automated phishing. They may also avoid AI altogether because they lack the confidence or training to use it safely, missing out on the productivity and efficiency gains it can bring.
Organisations must take a proactive, role-specific approach to training. Technical teams need a clear understanding of how to evaluate AI systems for security and compliance risks, while non-technical staff should be equipped to recognise AI-generated content, assess output critically, and flag concerns. Training should not be limited to one-off sessions, but built into ongoing professional development, with support from certification programmes, workshops and internal experts who can embed AI awareness into the culture of the organisation.
As AI use continues to develop, so too must the workforce. Bridging the skills gap isn’t just about reducing risk; it’s about unlocking the full value of the technology and ensuring that innovation is matched by understanding, safety and control.
Securing the Future of AI in Cybersecurity
AI is not a future challenge; it’s a present cybersecurity reality. From strengthening threat detection to amplifying phishing attacks, its impact is already reshaping the way organisations defend themselves. The tools are changing fast, and so are the tactics used to exploit them.
The gap between adoption and preparedness is one of the most pressing risks facing cybersecurity teams today. But it’s also where the greatest opportunities lie. Organisations that move early to embed AI governance, upskill their people, and align with emerging regulatory expectations will be better equipped to defend against evolving threats – and to lead with confidence in an increasingly complex digital environment.
The stakes are high, but so is the potential. For AI to strengthen, not weaken, cyber resilience, it must be approached with structure, foresight and trust.