At its recent Security & Risk Management Summit, Gartner announced its top cybersecurity trends for 2025. Unsurprisingly, Gartner expects the evolution of generative AI (GenAI) to heavily influence this year’s trends, particularly within cybersecurity. Analysts noted that while most cybersecurity efforts and budgets have traditionally focused on protecting structured data, such as databases, the rise of GenAI is transforming data security programmes and shifting their focus to protecting unstructured data such as text, images and videos.
The artificial intelligence (AI) boom has brought organisations and their employees a number of benefits, including cost reduction, improved data security and enhanced workplace productivity through swift content generation, pattern recognition and summarisation. However, as businesses have recognised AI’s immense benefits and adopted it more rapidly in their day-to-day processes, cybersecurity concerns about the technology have come to the forefront, with threat actors exploiting its capabilities to improve the success of their attacks.
AI boosting phishing attack success rates
GenAI tools provide attackers with increasingly credible methods of executing successful social engineering attacks. For instance, large language models (LLMs), a form of GenAI, can instantly craft flawless messages, each tailored to a specific individual.
Until recently, a targeted ‘spear-phishing’ attack required time-consuming research and effort on the attacker’s part. Now, cyber criminals can automate these attacks by harvesting personal information that is often publicly visible on social media profiles. Attackers are using tools like ChatGPT to fix the tell-tale flaws of most phishing campaigns, such as poor grammar and obviously inaccurate details. With AI lowering the barrier to entry, less skill is needed to carry out a successful attack, and we can expect even entry-level cyber criminals with limited knowledge to see increased success in future.
As these emerging attack methods become better understood, fears will continue to grow around the threat posed by advanced technologies like AI. According to Yubico’s recent State of Global Authentication survey of 20,000 employees, respondents believe online scams and phishing attacks have become both more sophisticated (72 percent) and more successful (66 percent). These concerns will only be exacerbated as AI capabilities increase and threat actors use the technology more widely.
A deeper look at the evolving AI threat on businesses
Attackers can use AI to enhance their phishing attacks in a multitude of ways. A typical AI-assisted phishing attack might produce an email purporting to come from a business the victim has purchased from or interacted with, requesting a one-off payment that requires credit card information. By referencing specific details and flawlessly reproducing the expected tone and writing style, these phishing emails are extremely difficult to identify as scams.
Perhaps the most concerning advancement in AI is its ability to clone voices and likenesses from audio and video clips or images found online, a capability that fuels voice phishing, or ‘vishing’. Combined with tools that spoof caller ID, cyber criminals can fool targets by calling them and claiming to be a family member, friend or loved one in urgent need of assistance. These technologies are already being widely used by attackers and, with cyber criminals becoming better educated and more comfortable with AI, we can expect to see innovative new uses of the technology to power cyber attacks in the near future.
AI-powered cyber attacks are especially effective in today’s business landscape of geographically distributed workforces, where employees are used to receiving countless authentication requests when signing into their work accounts on different devices. Remote employees often work from their own, less secure networks and devices, giving cyber criminals more entry points for successful social engineering attacks. Adding to the threat, controls that were once difficult for cyber criminals to circumvent, such as voice verification during a password reset, can now be defeated by AI voice cloning.
Bolstering protection against AI-driven cyber attacks for businesses and their employees
When it comes to AI-powered cyber attacks, an organisation’s user access and authentication processes are especially at risk of being compromised – highlighting the need for enterprises to secure them before it’s too late. If they are not secured, businesses risk having sensitive company and employee data exposed to cyber criminals, who may share it or demand a ransom for its safe return.
To avoid the potentially disastrous effects of falling victim to these scams, companies must ensure they are implementing phishing-resistant multi-factor authentication (MFA), such as passkeys stored on hardware security keys, to protect critical data and assets. Passkeys authenticate users with cryptographic key pairs: the private key stays on the user’s device and is used to sign a server-issued challenge, so no shared secret ever travels over the network. Passkeys are considered a superior alternative to passwords and other legacy MFA methods since users are not required to recall or manually enter long sequences of characters that can be forgotten, stolen or intercepted by hackers. This also means that only the key holder can gain access to their accounts.
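The phishing resistance of passkeys comes from origin binding: the authenticator signs over the web origin it actually sees, so a credential presented to a look-alike phishing domain cannot be replayed against the real site. The following is a minimal, illustrative Python sketch of that idea; it uses an HMAC as a stand-in for the device-held private key (real WebAuthn passkeys use asymmetric signatures), and all names and origins here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Stand-in for the secret material held on the user's device.
# Real passkeys hold a private key; the server stores only the public key.
device_secret = secrets.token_bytes(32)

# The relying party issues a fresh random challenge per sign-in.
challenge = secrets.token_bytes(16)

def sign_assertion(secret: bytes, challenge: bytes, origin: str) -> bytes:
    """The authenticator signs over the origin it sees plus the challenge,
    binding the assertion to that specific site."""
    payload = origin.encode() + challenge
    return hmac.new(secret, payload, hashlib.sha256).digest()

def verify_assertion(secret: bytes, challenge: bytes,
                     expected_origin: str, assertion: bytes) -> bool:
    """The relying party verifies against its own origin, not the
    origin the user happened to visit."""
    expected = hmac.new(secret, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

# Legitimate sign-in: the origins match, verification succeeds.
good = sign_assertion(device_secret, challenge, "https://bank.example")
assert verify_assertion(device_secret, challenge, "https://bank.example", good)

# Phishing attempt: the authenticator signs over the attacker's origin,
# so verification at the real site fails even with a valid challenge.
bad = sign_assertion(device_secret, challenge, "https://bank-example.evil")
assert not verify_assertion(device_secret, challenge, "https://bank.example", bad)
```

Because the signed payload includes the origin, there is nothing a user can be tricked into typing or approving on a fraudulent domain that would verify at the genuine one, which is precisely the property legacy one-time codes lack.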
Additionally, to achieve the highest level of security and mitigate AI-driven cyber threats more comprehensively, businesses must implement measures beyond merely investing in phishing-resistant authentication: they must prioritise developing phishing-resistant users. Rather than a reactive measure, this is a proactive strategy that removes the risk of phishing by eliminating phishable events from the entire user lifecycle. To accomplish this, organisations must equip employees with phishing-resistant MFA and establish phishing-resistant account registration and user recovery procedures for every user, underpinned by purpose-built, portable hardware security keys as the foundation for the highest-assurance security.
Implementing a business strategy centred around phishing-resistant MFA and establishing phishing-resistant users is critical, but it must be supported by ongoing security education in the workforce to bolster defences. This ensures that businesses are in the best possible position for success when it comes to mitigating emerging AI-powered cyber threats.
Although bad actors have added AI to their arsenal, businesses do not have to remain defenceless against the evolving threats it has created. Instead, they can bolster their defences for every single employee by providing phishing-resistant MFA and helping them become phishing-resistant users, in turn limiting the entry points available to attackers.