Combatting the rise of malicious AI-powered malware


The ability of large language models (LLMs) to analyse vast amounts of data enables cybercriminals to target the weak spots in an organisation’s externally facing systems faster and more efficiently, and to create code that exploits those weaknesses. At the same time, LLMs can quickly identify and monitor misconfigurations and vulnerabilities, helping to combat malicious attacks faster than humans ever could. It therefore makes good sense for organisations to seriously consider how they can best deploy AI tools to beat cybercriminals at their own game.

How AI increases the ransomware threat

Today, the most potent methods for spreading ransomware remain rooted in social engineering techniques such as phishing, and in the exploitation of vulnerabilities or weak authentication factors in externally accessible systems, such as Remote Desktop Protocol (RDP) and Virtual Private Network (VPN) endpoints, as well as application zero-days. With the assistance of AI, cybercriminals can now craft deceptive documents with unprecedented sophistication, minimising the usual telltale signs of phishing attempts and making them even more enticing.

Furthermore, as suggested by the NCSC report, cybercriminals can harness AI to enhance various aspects of their operations, including reconnaissance and coding, which directly bolsters the exploitation vector. Leveraging AI, threat actors can rapidly sift through vast amounts of data to pinpoint vulnerabilities within an organisation’s external systems and devise customised exploits, whether exploiting known vulnerabilities or discovering new ones.

AI also presents an opportunity to surpass the conventional and often ineffective methods of guarding against insider threats by enabling behaviour-based monitoring capabilities and policies. These can effectively identify instances where team members, succumbing to pressure or temptation, attempt to pilfer or leak company information. A common scenario involves the forwarding or sharing of sensitive data with unauthorised individuals. While various preventive measures can be implemented to curb such actions, many existing detection mechanisms lack the sophistication required to effectively identify, halt, and prevent such breaches.
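As a simplified illustration of what behaviour-based monitoring can look like, the sketch below flags outbound transfers that deviate sharply from a user’s own baseline. The event fields, values, and thresholds are hypothetical; a production system would learn far richer behavioural models from real telemetry.

```python
# Illustrative sketch of behaviour-based monitoring for data exfiltration.
# All event fields, values, and thresholds are hypothetical examples.
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical outbound-transfer events: (user, megabytes sent, external recipient?)
events = [
    ("alice", 2.1, False), ("alice", 1.8, False), ("alice", 2.4, False),
    ("bob", 5.0, False), ("bob", 4.6, False),
    ("bob", 350.0, True),  # large transfer to an external recipient
]

history = defaultdict(list)
alerts = []

for user, mb_sent, external in events:
    baseline = history[user]
    if len(baseline) >= 2:
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag transfers far above the user's own baseline, especially when they
        # go to external recipients -- a crude stand-in for a learned model.
        if external and mb_sent > mu + 3 * max(sigma, 1.0):
            alerts.append((user, mb_sent))
    history[user].append(mb_sent)

print(alerts)  # [('bob', 350.0)]
```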

By leveraging AI to augment these attack vectors, the probability of successfully deploying ransomware through these conventional means is significantly heightened, posing a greater threat to organisations’ cybersecurity posture. On the other hand, AI also offers the potential to address this gap by providing innovative solutions that surpass the capabilities of current tools.

Actionable short-term and long-term countermeasures

At this point, every organisation should have incorporated guidelines for the “Use of AI” into its existing Acceptable Use Policy, complemented by ongoing security awareness training for all personnel. This ensures that employees feel equipped to use AI in a manner that safeguards both themselves and the organisation. AI technology significantly enhances productivity and speeds up tasks that would otherwise take considerable time to complete.

However, with this enhanced capability comes a heightened responsibility, particularly in the realm of AI usage. It is therefore imperative to educate organisational staff on the boundaries governing the appropriate use of AI and to actively encourage them to leverage AI in their day-to-day tasks. By doing so, organisations can ultimately amplify their output and efficiency.

It is also worth considering how AI can enable automated governance and compliance with regulations and industry standards. AI tools can continuously monitor systems, identifying anomalies and reacting to security breaches that would otherwise push the organisation into a non-compliant state. By keeping track of evolving governance rules, these tools can help ensure organisations are always up to date.
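As a rough sketch of the continuous-checking scaffolding such tools build on, the example below compares observed settings against a policy baseline and reports drift. The control names and values are invented for illustration and do not come from any specific standard; an AI layer would typically sit on top of checks like these to spot anomalous patterns across them.

```python
# Minimal sketch of continuous compliance checking: compare observed system
# settings against a policy baseline and flag drift. Control names and values
# are hypothetical examples, not taken from any specific framework.
policy = {
    "mfa_enforced": True,
    "encryption_at_rest": True,
    "max_session_timeout_minutes": 30,
}

observed = {
    "mfa_enforced": True,
    "encryption_at_rest": False,          # drifted out of compliance
    "max_session_timeout_minutes": 120,   # exceeds the policy limit
}

def check_compliance(policy, observed):
    findings = []
    for control, expected in policy.items():
        actual = observed.get(control)
        if isinstance(expected, bool):
            if actual is not expected:
                findings.append(f"{control}: expected {expected}, found {actual}")
        elif actual is None or actual > expected:
            findings.append(f"{control}: limit {expected}, found {actual}")
    return findings

for finding in check_compliance(policy, observed):
    print("NON-COMPLIANT:", finding)
```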

Short-term measures for protection include:

● Educating cyber security teams comprehensively about AI: even the most technically proficient teams need to understand not just the applications but also the underlying technology powering AI capabilities.

● Implementing phishing-resistant authentication methods (for example, FIDO2/WebAuthn-based passkeys or hardware security keys) to safeguard organisations against phishing attacks aimed at acquiring authentication tokens for accessing environments.

● Establishing policies and automated mechanisms to empower team members with knowledge to defend against social engineering attacks.

● Continuously fortifying the organisation’s internet-facing perimeters and internal networks to mitigate the effectiveness of such attacks.

Long-term measures for protection include:

● Executing all the short-term actions listed above, recognising that for many organisations, these tasks require significant time and effort to implement effectively.

● Collaborating with cybersecurity service providers and incentivising them to use their data and AI infrastructure to develop models that detect and automatically respond to attacks more effectively, for example by blocking or mitigating them, and to prepare for the potential onslaught of diverse attack types in the future.

What the future holds for AI threats

The future of AI threats is likely to be characterised by increasingly sophisticated and pervasive attacks that exploit the capabilities of the technology. As AI becomes more accessible and powerful, malicious actors will doubtless leverage it to launch more targeted, automated, and evasive cyberattacks across various domains.

One major concern is the emergence of AI-driven cyberattacks that can adapt in real time, making them highly challenging to detect and mitigate using traditional security measures. For example, AI-powered malware could continuously evolve its tactics and techniques to evade detection by security systems, leading to longer dwell times and greater damage to targeted systems.

The proliferation of AI-generated deepfakes and misinformation campaigns also poses a significant threat to individuals, organisations, and even democratic processes. Deepfake technology can be used to create convincing but false audio, video, and text content, making it increasingly difficult to discern truth from fiction. This has profound implications for public trust, privacy, and national security.

In response to these evolving threats, we must develop advanced AI-driven security solutions to bolster cyber-defence capabilities. These include AI-powered threat detection and response systems that can analyse vast amounts of data in real-time to identify anomalies and potential security breaches. Additionally, AI algorithms can be used to automate incident response processes, enabling faster and more effective mitigation of cyberattacks.
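As a minimal illustration of the anomaly-detection building block behind such systems, the sketch below trains an Isolation Forest (via scikit-learn) on synthetic “normal” activity features and flags outliers. The features and values are invented for the example; a real deployment would derive them from live telemetry.

```python
# Illustrative sketch of ML-based anomaly detection over log-derived features
# (requests per minute, bytes out, distinct destinations). Feature values are
# synthetic and chosen purely for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Normal" activity: modest request rates and data volumes.
normal = rng.normal(loc=[50, 200, 5], scale=[10, 50, 2], size=(500, 3))

# A few anomalous observations, e.g. a burst of outbound traffic.
anomalies = np.array([[400, 9000, 60], [350, 8000, 45]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for normal behaviour.
print(model.predict(anomalies))   # expected: [-1 -1]
print(model.predict(normal[:3]))  # mostly 1s
```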

Furthermore, research is underway to explore the potential of AI in enhancing cybersecurity resilience through proactive threat intelligence, predictive analytics, and adaptive security controls. By leveraging AI to anticipate and adapt to emerging threats, organisations can stay one step ahead of cyber adversaries and minimise the impact of attacks. Ongoing research and collaboration are critical to ensuring that AI remains a force for good in the fight against cyber-threats.

Author

  • Matt Hillary

    Matt Hillary currently serves as VP of Security and Chief Information Security Officer at Drata. With more than 15 years of security experience, Matt has a track record of building exceptional security programs. He most recently served as SVP, Systems and Security, and CISO at Lumio, and he’s also held CISO and lead security roles at Weave and Workfront, Instructure, Adobe, MX, and Amazon Web Services.
