
Agentic AI, autonomous systems that can plan, act and adapt with minimal human oversight, is redefining cybersecurity as both a powerful shield and a potent weapon. This shift presents a dual challenge: attackers now wield AI to launch hyper-targeted, autonomous campaigns at scale, while defenders race to deploy AI agents that can match or exceed that speed and complexity, adopting tools that independently hunt for vulnerabilities, analyze logs and data, prioritize threats in real time, and even execute multi-stage operations.
Cybersecurity professionals must confront a new truth: only AI can realistically prevent the majority of today’s and tomorrow’s cyberattacks, and yet the same technology is arming the adversaries. AI attacks versus AI defense is the new battleground.
Lowering the Barrier for Attackers
Let’s consider what agentic AI can already do for attackers: it enables threat actors to become more effective with far less effort.
Attackers are using agentic AI in the wild to:
- Automate reconnaissance and target selection: By scraping data from the public, deep and dark web, attackers fuel hyper-targeted phishing and social engineering campaigns. Tools like Cobalt Strike are being enhanced with AI, allowing them to autonomously select tactics based on the defensive measures they encounter, making them harder to detect and stop.
- Accelerate campaigns: Recent data from Palo Alto Networks shows the mean time to exfiltrate data (MTTE) has dropped from nine days in 2021 to just two days in 2024, with one in five attacks now reaching exfiltration in under an hour, thanks to AI-driven automation. BlackMatter ransomware, for example, now employs AI to analyze and adapt to victims’ defenses in real time, refining encryption strategies and evading endpoint detection and response (EDR) tools. Beyond ransomware, AI-generated phishing scams can adapt in real time to a target’s behavior, language and context; as a result, AI’s phishing performance relative to elite human red teams improved by 55% between 2023 and 2025.
- Scale identity fraud: Using stolen PII, autonomous AI bots can complete Know Your Customer (KYC) checks, open fraudulent accounts and launder funds at scale and speed. Deepfake voice assistants can even impersonate executives to bypass biometric security. One Indonesian financial institution was targeted with more than 1,100 deepfake fraud attempts, with potential losses of $138.5 million. Forrester anticipates biometrics vendors will allocate 20–30% of R&D budgets to enhancing deepfake detection by 2025.
- Streamline operations: AI now handles everything from ransomware negotiations, breaking language barriers and maximizing extortion payouts, to replicating successful attack playbooks across thousands of targets. By hitting more victims with less effort, attackers are using AI to significantly increase their return on investment (ROI).
Economic Incentives: Agentic AI = Higher ROI for Attackers
For attackers, AI has made cybercrime more profitable and less risky. With self-directed systems, they can target a greater volume of victims with minimal incremental effort, increase payouts by choosing targets intelligently and maximizing ransom demands, and act faster, reducing operational risk while giving defenders less time to detect, disrupt and respond.
Additionally, companies should consider the risk created by their own in-house use of AI; every deployed model is an opportunity for the bad guys. Just as AI can protect systems, it can be subverted. Attackers can exploit machine learning (ML) models by poisoning training datasets, manipulating inputs with adversarial examples, or reverse-engineering deployed models. As AI becomes central to cybersecurity infrastructure, it also becomes a high-value target. And because AI is still in its infancy, its protections and safeguards are not yet fully formed, often making it an easy target, too.
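To make the poisoning risk concrete, here is a minimal sketch assuming a scikit-learn toolchain; the synthetic dataset, logistic regression model and 20% flip rate are illustrative assumptions, not drawn from any specific attack:

```python
# Minimal sketch of a label-flipping poisoning attack against a
# classifier that might sit inside a detection pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips labels on a fraction of the training set (20% assumed).
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.2 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```

Even a modest fraction of flipped labels can measurably degrade a model that downstream systems trust, which is why training data provenance matters as much as model performance.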
The Defensive Arms Race: Vendors Respond
Let’s change perspective. As attackers leverage agentic AI, solution providers are in an arms race to embed similar capabilities into defensive tools. Alongside this shift, I can see the focus naturally moving from reactive, alert-driven defense to the kind of proactive, threat-led approaches that only AI can enable. AI incorporated into defensive tools, whether open source or embedded into products, looks like:
- End-to-end automation: AI agents that independently uncover, analyze, contain and remediate threats in real time or near real time.
- Proactive threat hunting: Systems that actively hunt, reason and act, continuously scanning for indicators of compromise (IOCs) and vulnerabilities to enable early detection of threats.
- Real-time collaboration: Immediate sharing and coordination of intelligence increases the speed and accuracy of any defensive action.
The Shift to Threat-Led Security
As vendors compete to implement AI that can fight AI, traditional risk-based approaches are no longer sufficient. Instead, the future of cybersecurity defense is threat-led: organizations need to design detection and response strategies informed by real-world adversary behavior, not just compliance frameworks. Businesses must be able to understand and anticipate adversarial tactics, techniques and procedures (TTPs), and build defenses that adapt as quickly as the tools on the other side.
This is broadly consistent with what we are hearing from analysts, and it lines up with a recent Forrester Research report: “AI is increasingly used to enhance threat intelligence but is also a source of new threats. AI-generated attacks will become more complex and lower the barrier for new threat actors. Convincing deepfakes and complex narrative attacks are harbingers of more sophisticated threats, driving the need to fight fire with fire.”
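In practice, threat-led design can start as simply as expressing both your deployed detections and observed adversary behavior in a common vocabulary, such as MITRE ATT&CK technique IDs, and checking for gaps. A minimal sketch follows; the rule names and the adversary TTP list are hypothetical placeholders, not from any specific product or report:

```python
# Map deployed detection rules to the ATT&CK techniques they cover,
# then diff against the TTPs observed in recent adversary activity.
deployed_detections = {
    "phishing-link-click": "T1566",  # Phishing
    "powershell-spawn":    "T1059",  # Command and Scripting Interpreter
}
observed_adversary_ttps = ["T1566", "T1059", "T1041"]  # T1041: Exfiltration Over C2

covered = set(deployed_detections.values())
gaps = [t for t in observed_adversary_ttps if t not in covered]
print("coverage gaps:", gaps)  # -> ['T1041']
```

Real programs layer far more nuance on top (sub-techniques, detection quality, telemetry coverage), but the principle is the same: let observed adversary behavior, not a checklist, drive what you build next.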
Key recommendations for defenders include:
- Actively test against agentic attacks: AI-driven threats are already a reality. Incorporate red teaming and simulation frameworks to evaluate your readiness.
- Invest in threat intelligence that uses AI: As Forrester says, fight fire with fire. Your threat intelligence should leverage AI to aggregate and analyze threat data, enabling it to adapt to emerging tactics.
- Consider AI tools across the cybersecurity lifecycle: Autonomous AI agents are already embedded in commercial security tools and open source projects, and can operate across detection, triage, response, remediation and forensics. Leverage them.
- Continuously monitor and audit AI systems: Regularly review logs, monitor AI agent behavior, and perform security audits to detect anomalies, adversarial inputs, or model drift (see the drift-check sketch after this list).
- Protect training data and AI pipelines: Secure training data, models and pipelines against poisoning, inversion and tampering attacks, and implement robust data governance and encryption practices to safeguard the integrity of your AI systems (see the integrity sketch after this list).
- Maintain human oversight: While AI can automate much of the response, human expertise is essential for tuning systems and making critical decisions. A balanced approach with both smart technology and the human element will remain key.
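On monitoring: a lightweight way to flag drift or adversarial input shift is to compare a model’s recent score distribution against a validation-time baseline. The sketch below assumes scipy is available; the Gaussian sample data, window sizes and alert threshold are all illustrative assumptions.

```python
# Minimal drift check: compare live model scores against a baseline
# distribution with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

# Stand-ins for real score streams; in production these would come from
# the validation set and a recent window of inference traffic.
baseline_scores = np.random.default_rng(0).normal(0.20, 0.05, 5000)
live_scores = np.random.default_rng(1).normal(0.35, 0.05, 5000)

stat, p_value = ks_2samp(baseline_scores, live_scores)
if p_value < 0.01:  # alert threshold is an assumption; tune per system
    print(f"possible drift or adversarial input shift (KS={stat:.3f})")
```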
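On pipeline integrity: one simple, widely applicable control is tamper evidence, recording cryptographic digests of training artifacts at build time and verifying them before each run. A minimal sketch, with a hypothetical manifest location and hypothetical file names in the usage comment:

```python
# Record SHA-256 digests of training artifacts, then verify them
# before training to detect tampering or silent substitution.
import hashlib
import json
import pathlib

MANIFEST = pathlib.Path("manifest.json")  # hypothetical manifest location

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record(paths: list[str]) -> None:
    MANIFEST.write_text(json.dumps({p: sha256_of(pathlib.Path(p)) for p in paths}))

def verify() -> bool:
    manifest = json.loads(MANIFEST.read_text())
    return all(sha256_of(pathlib.Path(p)) == digest for p, digest in manifest.items())

# Usage (hypothetical artifact names):
# record(["train.csv", "model_config.yaml"]) at build time,
# then require verify() to pass before every training run.
```

Digest checks are not a substitute for access controls or signed provenance, but they make silent tampering visible, which is often the first step attackers count on defenders missing.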
Agentic AI is not just a technological leap; it is a paradigm shift and a strategic inflection point in both attack and defense. The adoption of threat-led, AI-driven security will define the leaders and the laggards. Organizations must architect defenses that are as adaptive, fast and autonomous as the threats they face, because in this race, standing still means falling behind.