Future of AI

AI is arming both hackers and defenders, so where does that leave businesses in the cybersecurity battle?

By Ade Taylor, Head of Security Services, Roc Technologies

AI is transforming both sides of the cyber security battlefield. On one side we have automated threat detection, real-time analysis of vast datasets and faster response times for frontline analysts. On the opposing side we have attackers adopting generative AI to craft more convincing phishing emails, develop malware and support cybercrime-as-a-service models. Criminals are even using AI-powered real-time translation and sentence-structure support to help them negotiate with their ransomware victims in online chat.

The very technology helping businesses defend themselves is also levelling the playing field for bad actors pursuing their own nefarious ends. Capabilities that once required advanced technical knowledge are now accessible to anyone with an internet connection and a budget, making the threat all the more alarming.

Cyber security is one of the strongest use cases for AI, but that applies to both sides.

With open-source AI developments reaching attackers and defenders at exactly the same time and at exactly the same rate, the old truism of staying “one step ahead” of threat actors is quickly becoming redundant. To keep pace with the rate of change, businesses need to take a layered, pragmatic approach to ensure resilience against this growing risk.

Sophisticated tools in the hands of cyber criminals

For many years now, corporate employees have been seen as the softest target by cyber criminals. Staff regularly click on links or open files in phishing emails, opening the way for bad actors to steal sensitive data. Incident responders continue to document that, for many ransomware victims, the smoking gun is a carefully crafted malicious email. It’s a popular attack vector because it’s cheap and easy to do at scale, and attackers only need to get lucky once in hundreds of thousands of attempts.

There have been concerted efforts by organisations to enhance training for employees on how to spot the hallmarks of a phishing email, such as suspicious-looking links, poor grammar and spelling or urgent language. However, the increasing sophistication of AI means organisations must now go further, and the tools they use need to adapt faster.

Attackers are utilising advanced language models to create phishing messages that are both convincing to the recipient and very hard to detect with traditional security measures. The growth of AI-powered voice and video cloning has also made social engineering attacks far more likely to succeed. Algorithms can increasingly replicate the voices and images of trusted people and organisations, meaning victims are more easily fooled into acting on phishing requests.

AI is even evolving to the point where bad actors can access tools that simply create malware on the fly, negating the need for sophisticated cyber knowledge. Even if a built-in security solution detects the malware, it can easily be tweaked to bypass that defence the next time.

At the moment, AI-generated malware is largely part of the growing cybercrime-as-a-service business, where criminals can simply purchase a bespoke solution that can be deployed quickly. But as advances in accessibility open up the arcane world of AI development, and the tools become easier to run at home or on small virtual private servers, amateur cyber criminals will soon no longer be dependent on anyone else.

With criminal groups ready for war, armed with an arsenal of ready-made tools, the defence against these attacks needs to be multifaceted: several layers working together to strengthen business infrastructure and close off vulnerabilities.

Businesses must go further

Luckily, it’s almost impossible to implement a cyber security technology today without benefiting from the AI capability the manufacturer has built into it, whether that’s local, on-board functionality or related cloud-based analysis. If you’re using a modern firewall, a web or email filtering tool or anti-malware on your endpoints, chances are you’re already using AI to help defend your digital assets.

Desktop tools are also being more widely adopted to work alongside users and provide real-time warnings about suspicious content and activity across all of the tools they use to do their job. There’s significant value in encouraging staff to take a moment to pause and consider whether the email in their inbox is genuine. But mistakes do happen, so leaders must also foster a culture where employees feel able to speak up if they accidentally click on a malicious link. This helps to mitigate risk should an incident occur.
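To make that concrete, here is a minimal, illustrative sketch in Python of the kind of heuristic checks such desktop tools layer together. The trusted-domain list, keyword list and function names are hypothetical examples, not any particular vendor’s product, and real tools combine far more signals than this.

```python
from urllib.parse import urlparse

# Toy heuristics only: real tools combine many more signals
# (sender reputation, attachment sandboxing, language models, and so on).
URGENCY_PHRASES = {"urgent", "immediately", "verify your account", "password expires"}
TRUSTED_DOMAINS = {"example.com"}  # hypothetical list of domains the organisation trusts
LOOKALIKE_MAP = str.maketrans("10", "lo")  # digits commonly swapped in for letters


def phishing_indicators(sender: str, subject: str, body: str, links: list[str]) -> list[str]:
    """Return human-readable warnings for a single email."""
    warnings = []

    # 1. Urgent or pressuring language is a classic phishing hallmark.
    text = f"{subject} {body}".lower()
    if any(phrase in text for phrase in URGENCY_PHRASES):
        warnings.append("Uses urgent or pressuring language")

    # 2. Links that point at domains outside the trusted list.
    for link in links:
        host = (urlparse(link).hostname or "").lower()
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            warnings.append(f"Link to unfamiliar domain: {host}")

    # 3. A sender domain that merely resembles a trusted one (e.g. examp1e.com).
    sender_domain = sender.split("@")[-1].lower()
    for d in TRUSTED_DOMAINS:
        if sender_domain != d and sender_domain.translate(LOOKALIKE_MAP) == d:
            warnings.append(f"Sender domain imitates {d}: {sender_domain}")

    return warnings


if __name__ == "__main__":
    for warning in phishing_indicators(
        sender="it-support@examp1e.com",
        subject="URGENT: verify your account",
        body="Your password expires today. Click the link below immediately.",
        links=["https://examp1e.com/reset"],
    ):
        print(warning)
```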

These solutions, combined with ongoing training on spotting potential phishing emails, should make a positive difference. But because AI-driven innovation is in the hands of both the good guys and the bad, a strong security posture must involve resilience as well.

If we look beyond the defences that businesses can put up to protect their operations, what processes are in place to recover data should it be tampered with, deleted or stolen? Preventing attacks is just one element of cyber security. Another is ensuring rapid, efficient recovery processes should any damage be done.

Risk management is so important because, unfortunately, it is near enough impossible to stop every single cyber incident from occurring. One element of reducing risk is the implementation of containment techniques, which limit the opportunities open to criminals once they have crossed the security perimeter.
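As a simple illustration of the containment idea, the toy sketch below (in Python, with hypothetical zone names and rules) treats network segmentation as a default-deny policy between zones, so that a compromised workstation cannot reach the data tier directly. It is a sketch of the principle, not a substitute for real firewall or micro-segmentation tooling.

```python
# A toy illustration of network segmentation as a containment control:
# traffic between zones is denied unless a rule explicitly allows it.
# Zone names and rules here are hypothetical examples, not a recommendation.

ALLOWED_FLOWS = {
    ("user-workstations", "web-proxy"),
    ("web-proxy", "internet"),
    ("app-servers", "database"),
    # Deliberately absent: ("user-workstations", "database"),
    # so a compromised workstation cannot reach the data tier directly.
}


def is_flow_allowed(source_zone: str, dest_zone: str) -> bool:
    """Default-deny: only explicitly listed zone-to-zone flows are permitted."""
    return (source_zone, dest_zone) in ALLOWED_FLOWS


if __name__ == "__main__":
    print(is_flow_allowed("app-servers", "database"))        # True
    print(is_flow_allowed("user-workstations", "database"))  # False: contained
```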

Traditional, old-fashioned provisions such as backups also remain vital. Nefarious individuals can have all the AI-driven sophistication in the world, but an offline backup of your important data provides resilience if ransomware has had a significant impact on an active dataset.
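By way of example, the short Python sketch below shows one simple way a backup job might package data and record a checksum before the archive is copied to offline or immutable storage. The paths and names are hypothetical, and production backup tooling would do far more (encryption, incremental copies, retention policies).

```python
import hashlib
import tarfile
from datetime import datetime, timezone
from pathlib import Path


def create_backup(source_dir: str, backup_dir: str) -> Path:
    """Archive source_dir and write a SHA-256 checksum alongside the archive.

    The resulting pair of files is what you would copy to offline or
    immutable storage (tape, removable media, object storage with a
    retention lock) so ransomware on the live network cannot reach it.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive_path = Path(backup_dir) / f"backup-{stamp}.tar.gz"
    archive_path.parent.mkdir(parents=True, exist_ok=True)

    with tarfile.open(archive_path, "w:gz") as archive:
        archive.add(source_dir, arcname=Path(source_dir).name)

    # Record a checksum so corruption or tampering is detectable at restore time.
    digest = hashlib.sha256(archive_path.read_bytes()).hexdigest()
    archive_path.with_name(archive_path.name + ".sha256").write_text(
        f"{digest}  {archive_path.name}\n"
    )
    return archive_path


if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    print(create_backup("important-data", "offline-staging"))
```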

Even with backups in place, it can often take days to restore from bare metal in a worst-case scenario. For some businesses, that could be long enough for a damaging long-term financial hit or even the end of operations altogether. The focus should be on implementing more efficient restoration processes, such as cloud-native redundancy and disaster recovery solutions, alongside those backups to help minimise business downtime and speed up restores.
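Restore testing can be made routine in the same spirit. The sketch below, a companion to the backup example above and assuming the same hypothetical file layout, verifies the recorded checksum and times a test restore; the point is to measure recovery time regularly rather than discover it during an incident.

```python
import hashlib
import tarfile
import time
from pathlib import Path


def timed_restore(archive_path: str, restore_dir: str) -> float:
    """Verify the archive's recorded checksum, extract it and return elapsed seconds.

    Running this regularly against real backups checks that restores actually
    work and that recovery time stays within the business's tolerance.
    """
    archive = Path(archive_path)
    start = time.monotonic()

    # 1. Integrity check against the checksum written at backup time.
    recorded = Path(str(archive) + ".sha256").read_text().split()[0]
    actual = hashlib.sha256(archive.read_bytes()).hexdigest()
    if actual != recorded:
        raise RuntimeError("Checksum mismatch: backup may be corrupted or tampered with")

    # 2. Extract into a clean directory and measure how long it takes.
    Path(restore_dir).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(restore_dir)

    return time.monotonic() - start


if __name__ == "__main__":
    # Hypothetical archive name, for illustration only.
    elapsed = timed_restore("offline-staging/backup-20250101T000000Z.tar.gz", "restore-test")
    print(f"Restore completed in {elapsed:.1f} seconds")
```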

Crucially, routine business continuity planning and testing must include a constant review of the process: making sure we don’t back up what we don’t need, and that we always know what our critical data is and where it’s stored and processed, so that we can prioritise getting major systems back online should the worst happen.

Reflecting the current reality

The contest between attackers and defenders is escalating as AI becomes a weapon wielded by both sides. While it offers organisations powerful tools to detect and respond to threats, it’s also giving cyber criminals the power to bypass traditional barriers more easily. Security strategies need to evolve into a layered, multi-faceted approach to reflect that reality.

Success lies in combining AI-enabled detection with smart containment strategies, strengthened backup systems and faster recovery provisions. AI isn’t going anywhere, so preparation, continuity and the ability to adapt quickly to the unexpected are more important than ever.
