Early last year, government researchers at the UK’s National Cyber Security Centre (NCSC) warned that AI was set to increase the volume and heighten the impact of cyber-attacks dramatically. With pronouncements like this, it’s easy to believe that the advantage is shifting decisively towards adversaries. However, the arrival of generative AI (GenAI) has created opportunities on both sides. The key for security operations centre (SOC) teams will be to find the balance between human and machine, to boost analyst productivity, surface insight, and ultimately keep their organisation safer.
Adversaries automate to accelerate
It’s important to note that the NCSC’s researchers did not say that AI is set to enable new forms of attack. Although there has been plenty of fear and uncertainty about the kinds of AI-powered attacks that might emerge, for now most of them remain a distant prospect. The threat today is more about AI optimising the productivity of adversaries, enabling them to carry out malicious campaigns at a speed and scale that were previously impossible.
For example, threat actors can use a GenAI chatbot to build highly convincing social engineering content in multiple languages, with faultless grammar. They could also use one to quickly research high-value victim assets such as ICS/SCADA systems, or to refine scripts for software exploits. These use cases are about optimising existing threats rather than devising new ways of breaching defences.

The reason we aren’t seeing more sophisticated use cases is that it’s still difficult for adversaries to integrate large language models (LLMs) into agent frameworks or plug them into external data sources to complete malicious tasks. Access to state-of-the-art AI models is also tightly controlled, but that is beginning to change. New standards like the Model Context Protocol (MCP), which acts as a kind of USB port for AI, are making it easier to link models with tools and data. We’re already seeing early examples of MCP being used to automate command and control, and even to run hacking tools autonomously. Over the next year, there will likely be an uptick in adversaries using LLMs in this way.
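To make the MCP point concrete, here is a minimal sketch of how a model can be linked to a tool over the protocol, assuming the open-source Python MCP SDK and its FastMCP interface; the server name and the reputation-lookup tool are illustrative placeholders rather than any real product’s API.

```python
# Minimal sketch: exposing a tool to an LLM over MCP.
# Assumes the Python "mcp" SDK; the server name and the
# ip_reputation tool are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("soc-enrichment")

@mcp.tool()
def ip_reputation(ip: str) -> dict:
    """Return a (stubbed) reputation verdict for an IP address."""
    # In practice this would call a threat-intelligence service;
    # it is hard-coded here so the example stays self-contained.
    known_bad = {"203.0.113.45", "198.51.100.7"}
    return {"ip": ip, "verdict": "malicious" if ip in known_bad else "unknown"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-aware client can call it
```

The same plumbing that lets a defender wire a reputation lookup into an assistant is what makes it easier for an adversary to wire in reconnaissance or attack tooling, which is why the standard cuts both ways.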
Many adversaries are also effectively jailbreaking tools like ChatGPT to do their dirty work for them. OpenAI claims that, in the first 10 months of 2024, it disrupted more than 20 operations and deceptive networks trying to use its platform for malicious purposes. Some researchers have dubbed this category of threats “promptware”, a reference to the prompt engineering techniques that let malicious actors sidestep the safety guardrails built in by LLM developers. Specialised, self-hosted AI models based on open-source LLMs are also starting to appear on the dark web, allowing adversaries to bypass legitimate GenAI tools altogether.
Fighting fire with fire
Such threats have the potential to overwhelm already stretched SOC teams if they go unchecked. Research tells us that, over a 12-month period, 87 percent of organisations experienced a security incident in which they were unable to detect and neutralise a threat before it affected the business. Security analysts are drowning in alerts, which increases the risk of threats being overlooked and in turn leads to stress and burnout. Some 60 percent of security leaders say there is too much noise in the SOC, and a similar number report high rates of staff churn due to overwork.
If network defenders can start to use GenAI as productively as their adversaries, there are many reasons to be optimistic. LLM-powered GenAI agents can do more of the heavy lifting for human SOC teams, automating away manual toil to enhance their productivity.
Take a typical event that might trigger detection analytics, such as a suspicious employee login. A GenAI agent could automatically gather all the relevant contextual information, so that the detection engineer can get on with the most important part: determining whether the activity was actually malicious. In this scenario, the AI could gather data on IP addresses the employee has used before and any recent risky logins, alongside their VPN and ISP usage patterns and device characteristics. The human analyst can then use their experience and critical thinking, attributes AI does not possess, to decide the best course of action. This could cut investigation time from around 40 minutes to less than five.
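As a rough illustration of that triage pattern, the sketch below shows the sort of context-gathering step an agent might automate before handing the decision to an analyst. Every data source, record, and function name here is a hypothetical placeholder, not a specific vendor’s API.

```python
# Hypothetical sketch of the context-gathering an agent could automate
# for a suspicious-login alert. All data sources are stand-ins so the
# example runs on its own; none of this reflects a real product.
from dataclasses import dataclass, field

PRIOR_IPS = {"alice": ["203.0.113.10", "203.0.113.11"]}
RISKY_LOGINS_30D = {"alice": 2}
USUAL_ISP = {"alice": "ExampleNet Broadband"}
KNOWN_DEVICES = {("alice", "203.0.113.10")}

@dataclass
class LoginContext:
    user: str
    source_ip: str
    prior_ips: list = field(default_factory=list)
    recent_risky_logins: int = 0
    usual_isp: str = ""
    known_device: bool = False

def enrich_login(user: str, source_ip: str) -> LoginContext:
    """Pull together the context an analyst needs to judge the alert."""
    return LoginContext(
        user=user,
        source_ip=source_ip,
        prior_ips=PRIOR_IPS.get(user, []),
        recent_risky_logins=RISKY_LOGINS_30D.get(user, 0),
        usual_isp=USUAL_ISP.get(user, "unknown"),
        known_device=(user, source_ip) in KNOWN_DEVICES,
    )

if __name__ == "__main__":
    # The analyst, not the agent, makes the final call on this summary.
    print(enrich_login("alice", "198.51.100.7"))
```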
A problem shared is a problem halved
GenAI also makes for a great SOC co-pilot. By drawing on training data related to historical detection and response events, it can make suggestions to an analyst on what to do next—whether that’s threat hunting, system hardening, or remediation. Once again, the AI is doing the heavy lifting in the background to ease some of the pressure and boost the productivity of the human expert.
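A simplified sketch of that co-pilot pattern is shown below, assuming the suggestion is grounded in a handful of similar past incidents. The incident records are invented for illustration, and the actual model call is left out because it depends on whichever LLM the SOC uses.

```python
# Illustrative sketch of a co-pilot prompt grounded in past incidents.
# The incident records are invented; the prompt would be sent to
# whatever model the SOC has chosen.
PAST_INCIDENTS = [
    "2024-03: impossible-travel login; action: reset credentials, revoke sessions",
    "2024-07: OAuth consent phishing; action: block app, hunt for similar grants",
]

def build_copilot_prompt(alert_summary: str) -> str:
    history = "\n".join(f"- {incident}" for incident in PAST_INCIDENTS)
    return (
        "You are assisting a SOC analyst. Similar past incidents:\n"
        f"{history}\n\n"
        f"Current alert: {alert_summary}\n"
        "Suggest the next investigative or remediation step, with reasoning."
    )

if __name__ == "__main__":
    print(build_copilot_prompt("Suspicious login for alice from an unfamiliar ISP"))
```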
There are of course some caveats. While entirely autonomous AI agents offer potentially eye-catching advances in security analytics and other use cases, their reliability isn’t proven. That’s why it’s essential to always have a human in the loop. It’s also important to remember that the quality of a GenAI tool’s output is only as good as the data the underlying LLM was trained on. Organisations that outsource detection and response should therefore look for a provider that builds its own foundation models, based on thousands of carefully mapped detection analytics, threat profiles, and response and remediation recommendations.
Supercharging the SOC
By harnessing GenAI in these ways, organisations can unleash the strategic and creative problem-solving skills of their SOC teams—to improve detection, reduce alert fatigue and churn, and minimise cyber risk. Underscoring the sense of optimism, nearly half (46 percent) of security professionals believe GenAI will be a positive for their industry, versus just 6 percent who say it will be a negative.
Organisations should take heart from that and make sure they’re poised to reap the benefits of AI in their own security operations. The threat landscape is evolving constantly, and the organisations best placed to stay ahead of it will be those that pair human expertise with well-applied AI.