Addressing the Staffing Gap in Cybersecurity Security Operations with AI Agents

By Ambuj Kumar, Co-Founder and CEO, Simbian

Time and again, we see security experts share a blind spot that can have damaging results.

Take, for example, when a new threat comes on the radar and is first reported by the media. The next thing we know, it is trending on Twitter/X and other social media channels, raising the alarm. The CISO, CEO, and board jump into action, asking the security team for a game plan and adding to its already full plate of day-to-day security tasks.

While the security team is busy researching that threat, hundreds of other threats and vulnerabilities get reported – and the cycle continues. As it turns out, many of these under-the-radar threats could be more damaging to the organization than the original one, given the organization's specific exposure.

The more time security teams spend chasing an irrelevant threat, the less time they have for other potentially damaging ones. What makes a threat damaging is very much dependent on the specifics of the organization. So, it’s not necessarily the most talked-about threat that will harm the organization, but something that exploits a critical weakness. 

Since there are always 10x or even 100x more threats to analyze than there are people available to analyze them, the first step of threat response needs to be automated. That automation should answer a simple but key question: “Is my organization impacted by this threat?”

Additionally, security teams should weigh any existing protection. For example, a vulnerable application that sits deep inside the firewall and is accessible only to employees may matter less than a threatened, customer-facing application that holds healthcare data.

Only the relevant threats for a specific organization should warrant human attention – deciding what not to focus on is just as important as deciding what to focus on. 

Security teams cannot keep up with the operational tasks they must do each day, despite years of investment in in-house automation and tools meant to make them more effective. The automated check for threat relevancy should consider factors such as software footprint, business profile, and type of application (customer data, payment data, customer-facing).
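
To make this concrete, here is a minimal sketch of such a relevancy check, assuming a simple asset inventory. The class names, fields, and scoring weights are illustrative assumptions, not a reference to any particular product:

    from dataclasses import dataclass

    @dataclass
    class ThreatReport:
        cve_id: str
        affected_products: set[str]   # e.g. {"openssl", "log4j"}

    @dataclass
    class Asset:
        name: str
        software: set[str]            # software footprint of this asset
        internet_facing: bool         # reachable beyond the firewall?
        sensitive_data: bool          # customer, payment, or healthcare data

    def is_impacted(threat: ThreatReport, assets: list[Asset]) -> bool:
        """The key question: is my organization impacted by this threat?"""
        return any(threat.affected_products & a.software for a in assets)

    def priority(threat: ThreatReport, assets: list[Asset]) -> int:
        """Weight exposed, sensitive assets above internal-only ones."""
        score = 0
        for a in assets:
            if not (threat.affected_products & a.software):
                continue  # this asset does not run the vulnerable software
            score += 1                               # impacted at all
            score += 2 if a.internet_facing else 0   # exposed to outsiders
            score += 3 if a.sensitive_data else 0    # holds regulated data
        return score

In practice, the inventory would come from an asset-management system and the threat data from intel feeds; the point is that a machine answers the relevancy question before a human ever picks up the threat.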

Traditional approaches to security automation no longer suffice in today's dynamic environments. Talent is scarce, while threat vectors are getting more complex. Most businesses run a mix of software from multiple vendors alongside in-house software, and each business and each member of a security team has unique, ever-changing security needs. Security is a domain of ever-increasing complexity: every day brings incidents with new variables, and hackers will always find new gaps in security, which defenders rush to fix. This dynamic nature means security teams are perpetually struggling to find people trained in the “latest thing.”

AI has the potential to augment humans in security in a fundamental way. AI-driven security solutions can greatly improve threat detection, speed remediation, and reduce complexity. But while security vendors are increasingly adopting GenAI, off-the-shelf GenAI models come with security risks of their own, including hallucinations, prompt injection, and exposure of PII and other confidential data.
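
One common mitigation for the data-exposure risk is to redact obvious PII before anything is sent to an off-the-shelf model. A minimal sketch, with illustrative patterns that are by no means exhaustive:

    import re

    # Replace common PII patterns before text ever leaves the organization.
    REDACTIONS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
        (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    ]

    def redact(text: str) -> str:
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text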

AI Agents in Cybersecurity Operations

Traditionally, the default approach in cybersecurity has been to hire more staff for security operations. The problem is that there simply are not enough trained security personnel left to hire: the U.S. alone is short about 500,000 security professionals, while globally the deficit is estimated at over 3 million.

The other approach organizations have taken is to buy more security tools off the shelf, in the hope that automation will relieve the staffing pressure. But while such tools are useful for surfacing security insights, every tool needs additional staff to maintain it, operate it, and sort the signal from the noise in its output. So this approach exacerbates the staffing gap.

Traditional approaches to automation have also had limited success because security decisions depend on organizational context that lives only in people's heads or, at best, in unstructured documents. Moreover, the security landscape evolves rapidly, so decisions involve correlating and pattern-matching across diverse data points in real time.

Enter AI Agents. There are many SecOps tasks that AI Agents are beginning to take on today: triaging alerts like a SOC Analyst, hunting threat actors by their behavior like a Threat Hunter, answering security questionnaires, and assessing vendor risk. As LLM generation costs fall and speeds improve, it will become feasible to run network environments in which blue and red agent teams continuously hone their skills. Blue team AI agents could become ubiquitous, delivering cost-effective defense against increasingly complex red team attacks. Agentic flows will continue to enhance red teaming, particularly in areas such as jailbreaking, as regulations scrutinize offensive uses of LLMs. Security experts who can leverage insights from blue team AI agents across multiple environments will be invaluable, especially as the ease of use of agentic tools erodes hands-on individual knowledge.
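
As a rough illustration of the alert-triage case, here is a minimal sketch of an agent classifying one alert with organization-specific context. call_llm is a hypothetical placeholder for whatever model an organization actually uses; the prompt and fields are assumptions, not any vendor's agent:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for the organization's chosen LLM."""
        # A real agent would call a model here; a canned verdict keeps
        # the sketch runnable end to end.
        return '{"verdict": "needs_human", "reason": "insufficient context"}'

    def triage_alert(alert: dict, org_context: dict) -> dict:
        """Classify one SOC alert, grounded in organizational context."""
        prompt = (
            "You are a SOC analyst. Classify this alert as true_positive, "
            "false_positive, or needs_human, and explain why.\n"
            f"Alert: {json.dumps(alert)}\n"
            f"Organizational context: {json.dumps(org_context)}\n"
            'Reply as JSON: {"verdict": "...", "reason": "..."}'
        )
        return json.loads(call_llm(prompt))

    verdict = triage_alert(
        {"rule": "impossible_travel", "user": "jdoe"},
        {"user_travels_often": True, "vpn_in_use": True},
    )

Grounding the prompt in context like this is exactly where traditional automation fell short: the “only in people's heads” knowledge has to be captured somewhere the agent can read it.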

Conclusion

GenAI has tremendous potential for automating laborious Security Operations, but it takes expertise to get predictable results and contain the risks. The most promising path forward is AI Agents built by security experts that use LLMs internally. Agents are not just an interesting technology; they are rapidly becoming a critical necessity in Security Operations. As the rate and complexity of attacks continue to rise and staffing remains limited, AI Agents are the only way to close the gap. Security Operations teams would be wise to evaluate and pilot AI Agents in 2025.
