
Artificial intelligence is no longer a future ambition. Over the past couple of years, it has become embedded across every area of modern business operations and essential for organisations seeking to deliver value and maintain competitiveness. However, each advancement in capability simultaneously provides threat actors with a fresh avenue for attack.
Recent, high-profile breaches have shown that the UK remains vulnerable to increasingly sophisticated cyber threats. CyberArk research revealed that AI is creating a threefold danger: it is a weapon for attackers, a tool for defenders, and a rapidly growing source of new vulnerabilities. The UK’s cybersecurity agency, the NCSC, recently declared that AI “will almost certainly continue to make elements of cyber intrusion operations more effective”. To navigate this new reality and build future resilience, organisations must make identity security the foundation of their corporate AI strategies.
Old Scams, New Disguises
Cybercriminals have always preyed on human trust, but AI makes their job easier. Phishing, the most common entry point for identity breaches, has evolved beyond poorly worded emails into sophisticated deception: life-like voice clones, photorealistic deepfakes, and perfectly worded messages that mimic colleagues or suppliers. Last year, 84% of UK organisations experienced phishing attacks, which caused more breaches than any other kind of cyberattack. This shows that training and technical safeguards alone can’t neutralise AI-enhanced phishing, especially when attackers use AI to pose as trusted people and turn human psychology to their advantage.
Conventional perimeter defences are no longer enough to stop such threats. Organisations need to implement more stringent identity security processes, foster a culture of “verify before trust”, and make it second nature to question anything that feels suspicious.
AI as a Security Force Multiplier
Although AI is bolstering the cybercriminal’s arsenal, it is also completely changing how security teams operate. Just under nine in ten UK organisations now deploy AI to monitor network behaviour, identify emerging threats and automate manual, time-intensive tasks that used to take hours to complete. AI has become an essential force amplifier that gives smaller security teams the capacity to manage an increasing workload.
More than 70% of cybersecurity buyers at large organisations are “highly willing” to invest in AI cybersecurity tools, reflecting a growing understanding that human analysts alone can’t keep up with modern attacks. However, trusting AI algorithms without human oversight is a trap: overreliance leads to blind spots and false reassurance. To avoid this pitfall, security teams must employ disciplined AI governance, ensuring that AI tools are trained on quality data, tested and audited regularly, and kept under the supervision of skilled human analysts.
The Machine Identity Explosion
Perhaps the least visible element of the triple threat is the sheer volume of machine identities and AI agents in today’s IT estates. As employees plug in more AI tools to boost productivity, the number of machine identities that can access critical data has surged, to the point where they outnumber human identities by 82 to one. A significant portion of these machine identities holds elevated privileges without the necessary oversight. Compounding the problem, machine identities often have weak credentials and inconsistent lifecycle management, leaving tempting gaps for intruders.
The rise of shadow AI makes matters worse. Some 57% of employees report using unapproved AI services to speed up work, usually to automate tasks or generate content quickly. While this may boost efficiency, the security consequences are significant. Unauthorised tools can process data without essential safeguards, risking data leakage, regulatory non-compliance, and reputational damage.
For organisations embracing AI, this risk requires more than technical controls. Businesses must establish clear policies on acceptable AI use and educate their staff on the risks of bypassing security. And as workers turn to AI to improve their productivity, organisations should provide approved, secure alternatives that meet their needs without creating vulnerabilities.
Making Identity Security the Backbone of AI Strategies
Identity security can’t be an afterthought for the AI-driven enterprise. It must be built into an organisation’s digital strategy. This starts with having complete, real-time visibility over every identity in an environment, whether it be human, machine, or AI agent. Access should follow the principle of least privilege, and any unusual behaviour must be detected and investigated without delay.
Companies are already reshaping their identity and access management to reflect AI’s unique risks. This includes granting temporary, on-demand permissions to machine accounts, keeping a close watch on privilege escalation and holding AI-powered systems to the same trust and verification standards as human users.
AI has the potential to bring massive benefits to organisations that embrace it responsibly. But without strong identity controls, its advantages can quickly turn into vulnerabilities. The organisations that lead the next era will be those that understand resilience is no longer optional; it is the basis of sustainable growth.
When both defenders and attackers are supercharged by AI, one truth remains clear: securing AI starts with securing identity.