
Agentic AI Might Just Be the Cybersecurity Tool That Today’s Threats Require

Ever since the COVID-19 pandemic accelerated digital transformation and connectivity in 2020, cyber threats have surged in both scale and sophistication. Ransomware, phishing attacks and data breaches have become increasingly prevalent, targeting everything from remote work infrastructure to renowned organizations like the World Health Organization (WHO) in Switzerland, the Centers for Disease Control and Prevention (CDC) in the United States, Software AG in Germany, and the Gates Foundation.

But hackers are not only targeting high-profile victims. Although 2025 has seen companies like The North Face and governments such as the Czech Republic’s report cyber attacks, small and medium-sized businesses (SMBs) remain far more vulnerable.

“SMBs are highly vulnerable to the threat of cyber intrusion and tempting gateways to bigger prizes such as large enterprises, global supply chains, and critical infrastructure, representing the prime targets of bad actors,” noted Karen S. Evans, managing director at the Cyber Readiness Institute, a non-profit.

Cybersecurity has thus become more essential than ever, tasked with accurately detecting threats, automating responses, and flagging risks while keeping pace with how quickly threats transform across every sector. In parallel, artificial intelligence (AI) has emerged as a vital force in an increasingly interconnected world, where leveraging emerging technologies is no longer optional but necessary for business security.

According to McKinsey & Company, AI is redefining cybersecurity, for better and for worse. On one hand, it is accelerating the speed of cyber attacks, with enterprising hackers leveraging AI tools to create convincing phishing emails, fake websites, and deepfake videos that bypass traditional detection mechanisms.

On the other hand, AI is also powering a new generation of cybersecurity defenses. Organizations using AI in their defense systems see improved efficiency in detecting, responding to and recovering from attacks, as well as in automating lower-risk tasks like routine system monitoring.

But as both technologies and attacks evolve, new security tools keep flowing into the market to address new threats, generating a flood of data that is difficult to tell apart. That influx leaves security professionals overwhelmed and struggling to discern what matters most to their businesses.

Enter agentic AI: a new intelligent system designed not just to detect threats, but to act on them.

Because AI can simulate human intelligence and behavior, its integration into the cybersecurity industry has pushed automation beyond human capabilities, too. And while AI has been used for this purpose since the late 1980s, when the first anomaly detection systems appeared, hackers have kept adjusting their attacks to AI defenses in what a 2021 study deemed “the game of cat and mouse.”


In this technology’s fast-paced evolution, agentic AI has carved out its place within the cybersecurity space. Built on the foundations of large language models (LLMs) and multi-agent architectures, agentic AI leverages large and diverse data, including institutional knowledge, asset databases, threat intelligence feeds and configuration data, to quickly offer insights and make decisions.

Simply put, agentic AI does not merely process inputs according to pre-defined rules; it takes autonomous action based on the information its LLM-based infrastructure deems alarming, important, or worth prioritizing. In an industry that suffers from a talent shortage, growing alert volumes, and a persistent need for redefinition and redirection, this technology could not come at a better time.
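To make that distinction concrete, here is a minimal, hypothetical sketch of an agentic triage loop, not any vendor’s actual implementation: findings from several tools flow into an agent that decides, per finding, whether to remediate autonomously, escalate to a human, or suppress. The `llm_triage` function is a stand-in for the LLM-based reasoning step, and all field names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str            # tool that produced the alert, e.g. "EDR" or "scanner"
    asset: str              # affected host or service
    severity: str           # "low" | "medium" | "high" | "critical"
    exploit_available: bool
    business_critical: bool

def llm_triage(finding: Finding) -> str:
    """Stand-in for the LLM-based reasoning step (hypothetical).

    A real agent would pass the finding, plus institutional knowledge and
    threat-intelligence context, to a language model and parse its decision.
    Simple rules imitate that judgement here so the sketch stays runnable.
    """
    if finding.severity in ("high", "critical") and finding.exploit_available:
        return "remediate"   # act autonomously: isolate the host, apply a patch, etc.
    if finding.business_critical:
        return "escalate"    # hand to a human analyst with full context
    return "suppress"        # log it, but keep it out of the analyst queue

def run_agent(findings: list[Finding]) -> None:
    # The agent loops over incoming findings and acts on each decision.
    for f in findings:
        action = llm_triage(f)
        print(f"[{f.source}] {f.asset}: severity={f.severity} -> {action}")

if __name__ == "__main__":
    run_agent([
        Finding("EDR", "payments-db", "critical", True, True),
        Finding("scanner", "dev-wiki", "medium", False, False),
        Finding("cloud-audit", "billing-api", "high", False, True),
    ])
```

The point of the sketch is the division of labor: rule-free prioritization happens in the reasoning step, while the surrounding loop simply executes whatever action that step selects.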

In just the past decade, we have seen an increase in IoT vulnerabilities, state-sponsored cyberattacks, a boost in cybercrime driven by the rise of cryptocurrency, and geopolitical tensions that have destabilized the established security order. The introduction of agentic AI, however, will prove “a fundamental pivot in the foundations of the cybersecurity ecosystem,” according to Nvidia.

Modern digital environments are fragmented, with cloud services, third-party apps, remote devices and hybrid networks complicating the job of security teams that aim to maintain full visibility of potential exposures. Last year alone, 82% of companies reported a widening gap between their security exposures and their ability to mitigate them.

“Existing tools provide endless technical details, many of them misleading or inconsequential, leaving security teams able to see ‘trees’ but not the ‘forest’,” noted Sharon Isaaci, CEO of cybersecurity startup Tonic Security, in conversation with The AI Journal.

Tonic Security launched from stealth on July 28 with $7 million USD in seed funding. Tasked with cutting through the noise and complexity of the current cybersecurity space, the company’s agentic AI platform aligns and interprets data from IT and security tools, contextualizing information and quickly adapting to the environment.

“Much of the vulnerabilities and alerts generated by existing tools suffer from significant ‘noise’: they lack meaningful context, are tainted with false positives and duplicate findings, and present minor issues that masquerade as major ones,” Isaaci continued. 

The shift from vulnerability management to exposure management represents a necessary evolution of cybersecurity operations, raising the questions that matter in today’s risk landscape: which assets are exposed right now? Are they business-critical? Is there an exploit available? Do compensating controls exist?
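Those four questions lend themselves to a simple prioritization pass. The sketch below is purely illustrative, with assumed field names, made-up CVE identifiers and arbitrary weights rather than any product’s scoring model: duplicate findings reported by different tools for the same asset and CVE are merged, then each remaining exposure is ranked by the answers to the four questions above.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    asset: str
    cve: str
    internet_exposed: bool       # is the asset reachable right now?
    business_critical: bool      # does the business depend on it?
    exploit_available: bool      # is a working exploit public?
    compensating_controls: bool  # WAF rule, segmentation, virtual patch, ...

def dedupe(exposures: list[Exposure]) -> list[Exposure]:
    """Collapse duplicate findings from different tools for the same asset/CVE pair."""
    seen: dict[tuple[str, str], Exposure] = {}
    for e in exposures:
        seen.setdefault((e.asset, e.cve), e)
    return list(seen.values())

def risk_score(e: Exposure) -> int:
    """Toy scoring: each 'yes' to the four questions adds weight,
    and compensating controls subtract it. Weights are illustrative only."""
    score = 0
    score += 3 if e.internet_exposed else 0
    score += 3 if e.business_critical else 0
    score += 2 if e.exploit_available else 0
    score -= 2 if e.compensating_controls else 0
    return score

findings = [
    Exposure("billing-api", "CVE-2024-0001", True, True, True, False),
    Exposure("billing-api", "CVE-2024-0001", True, True, True, False),  # duplicate from a second scanner
    Exposure("dev-wiki", "CVE-2023-9999", False, False, False, True),
]

# Highest-risk exposures surface first; duplicates and low-context noise fall away.
for e in sorted(dedupe(findings), key=risk_score, reverse=True):
    print(f"{e.asset} {e.cve}: score={risk_score(e)}")
```

However simplified, this is the shape of the exposure-management argument: deduplicate, add context, and rank by what is actually exploitable and business-critical rather than by raw finding counts.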

Agentic AI is thus reshaping security operations through its capacity for automation and learning. Organizations adopting agentic AI report benefits like faster decision-making and improved risk management, a 2025 study found. But concerns have also been raised, such as cybercriminals turning the same technology to malicious ends.

The rise of this technology might mark a paradigm shift in which attackers “autonomously strategize, reason, and execute multi-stage operations.” These concerns will only grow more pressing as AI agents mature and security teams adapt to a future where attacks are faster, cheaper, and self-optimizing. At the current rate of innovation, however, security operations teams should be able to establish proper safeguards, continuous monitoring mechanisms and transparent AI designs that mitigate such risks.

Such is the scenario that Isaaci proposes. “Rather than focusing solely on identifying software flaws, exposure management broadens scope to cover new areas of the attack surface, encompasses disparate types of findings, and continuously enriches them with multi-dimensional context.” 

Article co-authored by Salomé Beyer Vélez 
