INSIGHTS FROM THE SOC: DETECTING MALWARE, INCLUDING AI VARIANTS

It is widely recognised that, along with its benefits to the way we do business, AI can also be used for malicious purposes and poses a real security risk. Since GenAI’s mainstream adoption in 2023, the use of deepfakes has doubled every six months, while ChatGPT has fuelled an enormous 1,265% surge in phishing in Q4 2023 compared with Q4 2022. And as AI becomes increasingly sophisticated, these threats are likely to grow.

According to research carried out by Sapio Research for Deep Instinct, almost half (46%) of security professionals surveyed fear that generative AI will heighten their organisation’s vulnerability to attacks. The backdrop to this growing fear is a rising number of malicious AI tools, made possible by AI jailbreaking. These include Worm-GPT, Malware-GPT, Evil-GPT and, most recently, DarkGemini.

As an example of how popular these malicious AI tools are, Worm-GPT, a powerful, unethical AI chatbot designed to help hackers, gained over 5,000 followers on its Telegram channel within days of its launch.

We are now in an era where quick and easy malware authoring is available to everyone: non-technical amateurs, curious individuals, script kiddies, disgruntled employees and, of course, hackers. Cyber security experts face a daily barrage of unknown and unconventional malware variants, spawned by a far larger pool of authors. This is making it harder than ever to identify patterns – at least by conventional means.

State sponsored malicious AI

At Obrela, we are already aware of several state-sponsored hacking groups using malicious AI for nefarious ends. These include the China-based Aquatic Panda, which exploited Log4Shell to attack universities – highlighting both the risks posed by zero-day vulnerabilities and the increasingly sophisticated tactics such groups now use.

Fancy Bear, a Russian cyber espionage group generally attributed to Russian military intelligence (the GRU), has also incorporated malicious AI into its operations. In doing so, the group has enhanced its ability to carry out sophisticated cyberattacks, such as automatically generating phishing emails, developing evasive malware and analysing large volumes of data to identify high-value targets.

Another state-sponsored adversary using AI to enhance its cyber espionage activities is Iran’s Imperial Kitten. By using AI to generate sophisticated code snippets and phishing emails, the group is crafting increasingly convincing and effective social engineering attacks. Imperial Kitten has also used evasive code capable of bypassing traditional security measures, making its malware harder to detect and mitigate.

Finally, Kimsuky is a North Korean state-backed group that targets South Korea for espionage purposes. It uses malicious AI to generate spearphishing campaigns, gleaning information on targets such as think tanks and nuclear power operators.

Exploiting LLMs

Large language models (LLMs) such as ChatGPT are increasingly being exploited. A key example is BlackMamba, an AI-synthesised polymorphic keylogger developed as a proof of concept.

Perhaps BlackMamba’s key feature is its ability to modify its own code on the fly, changing its appearance and behaviour dynamically to evade the signature-based detection methods used by antivirus software. This adaptability makes it particularly challenging for conventional security measures to identify and neutralise, further highlighting the potential dangers of AI-enhanced malware.
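
To see why polymorphism defeats signature matching, consider the minimal sketch below (the payload bytes and blocklist are invented for illustration): two functionally identical payloads that differ by a single appended mutation produce completely different hashes, so a hash blocklist catches one and misses the other.

```python
import hashlib

# Two functionally identical payloads: the second has junk bytes appended,
# as a polymorphic engine might do on every generation.
payload_v1 = b"keylog(); exfiltrate('https://collector.invalid');"
payload_v2 = payload_v1 + b"  # junk-mutation-9f2a"

# A signature blocklist that only knows about the first variant.
signature_blocklist = {hashlib.sha256(payload_v1).hexdigest()}

for name, payload in (("v1", payload_v1), ("v2", payload_v2)):
    digest = hashlib.sha256(payload).hexdigest()
    verdict = "BLOCKED" if digest in signature_blocklist else "missed"
    print(f"{name}: sha256={digest[:16]}...  -> {verdict}")

# v1 is blocked, v2 slips through: a one-byte change defeats the hash
# signature even though the behaviour is unchanged.
```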

This proof of concept not only demonstrates that LLMs can be exploited, but also underscores the importance of continuous research into the capabilities and limitations of current detection and prevention tools, while alerting the community to the potential misuse of such technologies.

The Malware Paradox

The question you need to ask is this: can AI-powered malware defeat your current security capabilities? To answer it, it is important to consider the malware paradox: no matter how stealthy or sophisticated a piece of malware may be, it remains ineffective unless it is actually executed.

Malware needs to run to fulfil its purpose, and that very need presents cyber security teams and security operations centres (SOCs) with numerous opportunities for detection – which is, of course, the paradox.
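
This is what behaviour-based detection exploits: however a sample mutates, it must still perform tell-tale actions to achieve its goal. The sketch below runs a simple rule – a process that reads a browser credential store and then connects outbound – over a hypothetical event stream; the schema, process names and paths are invented, not any specific EDR’s telemetry format.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Event:
    process: str
    action: str   # e.g. "file_read" or "network_connect"
    target: str

# Hypothetical endpoint telemetry; every name, path and address is invented.
events = [
    Event("svchost.exe", "file_read", r"C:\Windows\System32\drivers\etc\hosts"),
    Event("updater.exe", "file_read",
          r"C:\Users\alice\AppData\Local\Google\Chrome\User Data\Default\Login Data"),
    Event("updater.exe", "network_connect", "203.0.113.7:443"),
]

def infostealer_indicator(stream: Iterable[Event]) -> Iterator[str]:
    """Flag any process that reads a browser credential store, then talks out."""
    read_creds: set[str] = set()
    for ev in stream:
        if ev.action == "file_read" and ev.target.endswith("Login Data"):
            read_creds.add(ev.process)
        elif ev.action == "network_connect" and ev.process in read_creds:
            yield f"ALERT: {ev.process} read credentials, then connected to {ev.target}"

for alert in infostealer_indicator(events):
    print(alert)
```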

It is easy to use a malicious AI such as ‘MalwareGPT’ to create basic infostealer code in seconds, and then, just as quickly, to make it more sophisticated by adding encryption, anti-analysis techniques, code obfuscation, polymorphic behaviour and anti-forensic techniques.

While debugging or modification might be required, this too can be performed by AI in seconds, as can the final step: creating a phishing email or other malicious tool to gain access to the target. The malware, however, still needs to avoid detection.

Testing the SOC’s detection and triage

When we ran malware that had been created using MalwareGPT, its abnormal behaviour was immediately flagged, triggering an alert for SOC analysts to investigate. They were then able to verify that the suspect file was indeed infostealer malware, particularly as an exfiltration request was also detected.
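
That exfiltration attempt is exactly the kind of signal a simple network-side rule can surface. In the sketch below, the domains, byte counts and threshold are all invented for illustration: it flags outbound POSTs with unusually large payloads to domains the environment has not contacted before.

```python
# Hypothetical outbound HTTP records: (process, method, domain, bytes_sent).
outbound = [
    ("chrome.exe",  "GET",  "news.example.com",       1_200),
    ("updater.exe", "POST", "cdn-sync.invalid",   2_400_000),
    ("teams.exe",   "POST", "graph.microsoft.com",   48_000),
]

# Domains this environment is already known to talk to.
known_domains = {"news.example.com", "graph.microsoft.com"}
SIZE_THRESHOLD = 1_000_000  # bytes; tune against the environment's baseline

for process, method, domain, sent in outbound:
    if method == "POST" and domain not in known_domains and sent > SIZE_THRESHOLD:
        print(f"ALERT: possible exfiltration – {process} "
              f"sent {sent:,} bytes to {domain}")
```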

The SOC was then able to proceed with immediate remediation actions to prevent further spread or damage. These included banning the malicious file hash and isolating the infected device.
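
Those two remediation steps map directly onto an EDR playbook. In the minimal sketch below, the EDRClient class and its method names are invented stand-ins for whatever your platform actually exposes:

```python
import hashlib
from pathlib import Path

class EDRClient:
    """Stand-in for a real EDR/XDR API client; these method names are invented."""

    def ban_hash(self, sha256: str) -> None:
        print(f"[edr] banned hash {sha256[:16]}... fleet-wide")

    def isolate_device(self, device_id: str) -> None:
        print(f"[edr] isolated {device_id} from the network")

def remediate(sample: Path, device_id: str, edr: EDRClient) -> None:
    """Containment after triage: block the file everywhere, cut off the host."""
    sha256 = hashlib.sha256(sample.read_bytes()).hexdigest()
    edr.ban_hash(sha256)           # the SOC's 'ban malicious hash' step
    edr.isolate_device(device_id)  # the 'isolate infected device' step

# Example: remediate(Path("suspect.bin"), "HOST-042", EDRClient())
```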

In a real-world incident, after informing the customer, the SOC team would normally continue the investigation: looking for further instances of the malware in the environment, trying to discover its origins, and searching for persistence mechanisms or other suspicious findings.
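
The persistence hunt can be partly scripted. The sketch below (Windows-only, and deliberately minimal – it checks just the classic autorun Run keys, one of many persistence locations) enumerates registry entries an analyst would review for anything unexpected.

```python
import winreg  # standard library, Windows only

# Classic autorun locations; real hunts cover many more persistence points.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_run_entries():
    """Yield (name, command) pairs from the autorun registry keys."""
    for hive, path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                value_count = winreg.QueryInfoKey(key)[1]
                for i in range(value_count):
                    name, command, _type = winreg.EnumValue(key, i)
                    yield name, command
        except OSError:
            continue  # key absent or access denied

for name, command in list_run_entries():
    print(f"{name}: {command}")  # review anything unexpected by hand
```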

How AI can enhance a SOC

Despite the increased security risk posed by AI, there are five main ways in which AI can actually enhance a SOC, and therefore an organisation’s cyber resilience: improved threat detection and response, richer threat intelligence, reduced alert fatigue, advanced behavioural analysis, and greater overall efficiency and scalability.

Collectively, these benefits help any organisation build a more resilient and proactive cyber security posture.
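
As one concrete illustration of the alert-fatigue point, the sketch below groups near-duplicate alerts so analysts triage one representative per cluster rather than every alert individually. It uses scikit-learn; the alert messages and the eps threshold are invented for illustration and would need tuning on real data.

```python
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

# A handful of illustrative alert messages; a real SOC sees thousands a day.
alerts = [
    "Suspicious outbound connection from HOST-01 to 203.0.113.9",
    "Suspicious outbound connection from HOST-02 to 203.0.113.9",
    "Suspicious outbound connection from HOST-07 to 203.0.113.9",
    "Mimikatz-like credential access detected on HOST-03",
    "Multiple failed logins for user admin on HOST-05",
]

# Vectorise the alert text and group near-duplicates; eps is a cosine-distance
# threshold that would be tuned against real alert volumes.
vectors = TfidfVectorizer().fit_transform(alerts)
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(vectors)

for label, alert in zip(labels, alerts):
    tag = f"cluster {label}" if label != -1 else "unique"
    print(f"[{tag}] {alert}")
# Analysts review one representative per cluster instead of every raw alert.
```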
