The cybersecurity landscape continues to experience a profound transformation at the hands of AI. While AI is making its mark on many sectors, the cybersecurity industry has a far smaller margin of error for getting it right. With cybercriminals rushing to leverage AI to drive traditional and novel attacks, applying AI to bolster defences is imperative. The stakes in cybersecurity are uniquely high, and the sector cannot afford to wait for technology to mature or risks to diminish before adopting AI.
AI-Enhanced Cyber Threats
Generative AI technologies, such as Large Language Models (LLMs), are significantly enhancing the effectiveness and sophistication of traditional cyber threats like phishing. These AI tools enable cybercriminals to create highly convincing phishing emails by leveraging publicly available information.
For example, by scraping social media profiles, attackers can personalise messages to targets based on their interests, affiliations, or recent activities, making the emails appear almost indistinguishable from legitimate communications. This method greatly increases the likelihood of recipients engaging with the phishing attempt. The UK’s National Cyber Security Centre (NCSC) has cited the AI-driven “evolution and enhancement of existing tactics, techniques and procedures” as the number one cyber risk for 2025.
However, technology is also making newer, more complex threats increasingly accessible. Take deepfakes, for instance. These AI-generated fake photographs, videos or audio recordings of public figures or corporate leaders have the potential to drastically manipulate public perception, erode trust in digital communications, and facilitate identity theft. Beyond enhancing phishing campaigns, deepfakes can also be used for creating fraudulent instructions for financial transactions, spreading disinformation to manipulate stock markets or public elections, or even impersonating officials to gain unauthorised access to secure information.
This escalating cyber risk poses a significant threat across all sectors, especially those like government, banking, and financial services, which manage sensitive data for billions worldwide. As technology continues to become more accessible, the proliferation of these AI-powered threats is expected to accelerate, challenging traditional security measures and demanding more sophisticated defences to protect against the evolving landscape.
AI and the Evolution of Biometric Security
One cornerstone of combating AI-driven threats lies in enhancing identity security. Biometrics-based verification such as facial recognition can significantly reduce phishing, especially when used as a step-up authentication mechanism for sensitive actions such as password resets or performing privileged actions.
Decentralised identity systems stand out as a powerful method to empower individuals by granting them full ownership and control over their identity. Personally Identifiable Information (PII) is encrypted, signed, and safeguarded with digital keys, ensuring an individual’s identity is confirmed without exposing sensitive details to external parties. In such systems, organisations do not store critical identity attributes; instead, users maintain all encrypted identity-related information in a digital wallet, securely located in a hardware-protected area of their mobile device. The European Digital Identity (EUDI) Wallet is an example of an initiative for better control of PII, whereby governments in the EU will be required to offer a digital ID wallet that citizens can use for authentication and e-signatures, with an emphasis on “selective disclosure” of data.
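A minimal sketch of the “selective disclosure” idea, using salted hash commitments in plain Python. Real wallets use standardised credential formats and issuer signatures over the digest list, which are omitted here; all attribute names and values are hypothetical:

```python
import hashlib
import secrets

def commit(attr, value, salt):
    """Salted hash commitment to a single identity attribute."""
    return hashlib.sha256(f"{salt}:{attr}:{value}".encode()).hexdigest()

# Issuer: commit to each attribute. In a real system this digest list
# would also be signed with the issuer's private key (omitted here).
attributes = {"name": "Alice Example", "dob": "1990-01-01", "nationality": "IE"}
salts = {k: secrets.token_hex(16) for k in attributes}
signed_digests = {k: commit(k, v, salts[k]) for k, v in attributes.items()}

# Holder's wallet: disclose only nationality, keeping name and dob private.
disclosure = {"nationality": (attributes["nationality"], salts["nationality"])}

# Verifier: recompute the commitment for the disclosed attribute only.
value, salt = disclosure["nationality"]
assert commit("nationality", value, salt) == signed_digests["nationality"]
print("nationality verified; name and dob were never revealed")
```

The salts prevent a verifier from brute-forcing undisclosed attributes from their digests, which is why each commitment gets its own random salt.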
However, while decentralised identity systems empower individuals with control over their personal information, they should be coupled with equally robust systems for ensuring the authenticity of digital media. To bolster the integrity of media in the face of AI-generated content and deepfakes, the concept of digitally signing media emerges as a vital solution.
Creators could digitally sign their videos, images, or documents, allowing the audience to verify, through PKI, the authenticity of the content. This method ensures that the media is indeed produced by its purported creator, adding a layer of trust and security in digital communications. Automated PKI as a service offering could be particularly instrumental in enabling this verification process, providing a robust framework for authenticating digital media in an era increasingly dominated by AI-generated content.
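To illustrate the principle, here is a toy sketch of signing and verifying media: the creator signs a hash of the content with a private key, and anyone holding the public key can check it. The RSA key below is built from fixed small primes purely for illustration and is nowhere near a secure size; a real deployment would use a vetted crypto library and CA-issued certificates, as the PKI-as-a-service model described above implies:

```python
import hashlib

# Toy RSA keypair from fixed Mersenne primes -- far too small to be secure,
# and hand-rolled RSA should never be used in production.
p, q = 2**61 - 1, 2**31 - 1
n = p * q                           # public modulus
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private (signing) exponent

def sign(media: bytes) -> int:
    """Creator signs a SHA-256 digest of the media with the private key."""
    digest = int.from_bytes(hashlib.sha256(media).digest(), "big") % n
    return pow(digest, d, n)

def verify(media: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check authenticity."""
    digest = int.from_bytes(hashlib.sha256(media).digest(), "big") % n
    return pow(signature, e, n) == digest

video = b"original footage bytes"
sig = sign(video)
assert verify(video, sig)                 # untampered media verifies
assert not verify(b"altered frame", sig)  # any modification breaks it
```

Because the signature covers a digest of the full content, even a one-byte edit to the media invalidates it, which is exactly the property needed to expose tampered or synthetic derivatives of signed originals.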
To adopt this user-centric approach, organisations must develop robust verification methods. Thanks to technological advancements, this paradigm shift is now feasible. Biometrics, especially facial recognition technology, has seen significant advancements. Facial biometrics are praised for their high accuracy and ease of use, adeptly performing tasks like matching faces to identity documents and detecting liveness, thus ensuring the authenticity of individuals. The banking sector, in particular, has leveraged biometrics to securely facilitate transactions and verify customer identities, all while ensuring a frictionless user experience.
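Under the hood, face matching typically reduces to comparing embedding vectors produced by a neural network. A minimal sketch with made-up four-dimensional embeddings and an assumed similarity threshold (real models output hundreds of dimensions and tune the threshold per deployment):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the embeddings are sufficiently aligned."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Hypothetical embeddings: enrolled reference vs. two login attempts.
enrolled = [0.12, 0.85, -0.33, 0.41]       # captured at onboarding
login_attempt = [0.10, 0.88, -0.30, 0.39]  # same face, different shot
impostor = [-0.70, 0.05, 0.62, -0.20]

print(same_person(enrolled, login_attempt))  # True
print(same_person(enrolled, impostor))       # False
```

Liveness detection is a separate check layered on top of this comparison, ensuring the presented face is a live person rather than a photo, replay, or deepfake.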
AI’s integration into biometric security systems marks a significant enhancement, bridging the gap between identity verification and proactive threat detection. AI algorithms can now analyse biometric data with remarkable precision, constantly learning from new inputs to improve accuracy and security measures. By continuously updating and adapting to new threat patterns, AI ensures that biometric systems remain at the forefront of secure and user-friendly authentication methods.
AI Defending Against AI
Enhancing biometrics is just one of the many ways AI is impacting cybersecurity. AI’s true prowess lies in its deep analytical capabilities, sifting through vast datasets to spot anomalies that could indicate a security breach. This capacity for early detection is invaluable against sophisticated threats like zero-day exploits and advanced persistent threats (APTs), which bypass traditional security measures and can go undetected for extended periods of time.
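At its simplest, this kind of anomaly detection is statistical: flag data points that deviate sharply from a learned baseline. A toy sketch over hypothetical traffic volumes (production platforms use far richer behavioural models, but the principle is the same):

```python
import statistics

def flag_anomalies(values, threshold=2.5):
    """Return indices of points more than `threshold` standard deviations
    from the mean -- a stand-in for the richer statistical and ML models
    real detection platforms apply."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical daily outbound-traffic volumes (MB) for one host; the
# final spike is the kind of deviation that could signal exfiltration.
traffic_mb = [102, 98, 110, 95, 101, 99, 104, 97, 100, 2050]
print(flag_anomalies(traffic_mb))  # [9]
```

The value of AI here is learning what "normal" looks like per user, host, or service, so that subtler deviations than this exaggerated spike still stand out.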
A significant part of this is fuelled by the emergence of generative AI in creating specific models, chatbots, or AI assistants tailored for cybersecurity, as seen in Microsoft’s Security Copilot and Google’s SEC Pub. These tools, informed by extensive internal and external threat data, security best practices, and knowledge of secure software configurations, help users improve attack analysis and malware defence, and create automated security measures. Generative AI can also be used to generate diverse data sets for training machine learning models in scenarios where data is scarce or lacks diversity.
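As a simple illustration of augmenting scarce training data, interpolating between real samples (in the spirit of SMOTE) is one lightweight stand-in for a full generative model; the feature values below are invented:

```python
import random

def synthesise(samples, k=5, seed=42):
    """Create k synthetic feature vectors by interpolating between random
    pairs of real samples (SMOTE-style augmentation)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(k):
        a, b = rng.sample(samples, 2)
        t = rng.random()  # blend factor in [0, 1)
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic

# Hypothetical features of rare malicious sessions: [requests/min, payload KB]
malicious = [[120.0, 4.2], [135.0, 3.8], [110.0, 5.1]]
augmented = synthesise(malicious)
print(len(malicious) + len(augmented))  # 8 training samples instead of 3
```

Because each synthetic point lies between two observed malicious sessions, the augmented set stays plausible while giving a classifier more of the minority class to learn from.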
Finally, the impact of AI extends into the realm of training and awareness. Organisations can use AI to craft realistic simulation environments, where security professionals sharpen their threat detection and response skills. These simulations offer an immersive learning experience that prepares teams for real-world challenges.
Proactivity is Key
Ultimately, the interplay between AI and cybersecurity will only become more intricate as the technology improves. The landscape of cyber threats is ever-changing, with AI both escalating risks and offering new defence methods. In this environment, proactivity is essential.
One of the key areas where organisations must take proactive measures is user onboarding and authentication, ensuring that only authorised individuals can access sensitive data and systems.
By integrating advanced biometrics, behaviour analytics, and automated threat detection and response systems into their cybersecurity frameworks, organisations can enhance their identity security and mitigate the risk of AI-powered attacks. This proactive approach extends beyond technology adoption; it requires a mindset shift towards continuous innovation, collaboration, and adaptation in the face of new challenges.