
When AI Meets Insecurity: Why Reactive Cyber Defense Needs a Reset

By Shai Mendel, Co-Founder & CPO, Nagomi Security

Headlines in the cybersecurity media often highlight surprisingly basic security failures: weak passwords, misconfigured access controls, or unpatched systems exposing sensitive data. On the surface, these look like rookie mistakes that no mature, global brand should still be making. But that’s precisely the point: even the largest organizations continue to stumble over foundational cybersecurity practices, all while racing to deploy advanced AI technologies.

AI deployment clearly creates risk, but it also creates opportunity. AI is already reshaping how defenders and attackers alike operate. It is helping defenders analyze data at scale, automate repetitive work, and respond more efficiently. At the same time, attackers are moving faster, using AI to exploit missteps in minutes. The difference between falling behind and staying ahead is not whether you use AI, but how you apply it.

According to IBM’s 2025 Cost of a Data Breach Report, 97% of companies that experienced an AI-related security incident lacked proper AI access controls. That number highlights the reality: layering new technology on top of weak foundations does not improve security. But when AI is implemented with the right guardrails, it becomes a force multiplier, giving defenders the speed and precision they need to match the scale attackers are operating at. 

AI Needs Context to Deliver 

The question every board asks is simple: are we secure? But AI cannot answer that on its own. Security teams do not just drown in threats; they drown in decisions: which alerts matter, which gaps to fix first, which exposures to accept.

AI adds speed and efficiency, but without the right context, it does not add clarity. And without clarity, there is no control. This is exactly the challenge that Continuous Threat Exposure Management (CTEM) is built to solve: giving organizations a structured way to discover exposures, prioritize them, validate defenses, and drive the right fixes. 

That is also where the next wave, agentic AI, comes into play. CTEM has given organizations a structured way to identify exposures, but too often it stops short of producing actionable outcomes. Lists of vulnerabilities or dashboards of posture metrics do little to reduce real risk if there is no clear path to fix them. Agentic AI has the potential to close this gap by accelerating each stage of the CTEM cycle, but only if it is grounded in context. Agents need to know which assets are at risk, which controls are already in place, which threats are relevant, and whether an exposure is actually exploitable. Without that context, AI agents risk creating more noise. With it, they can move from suggestion to action: validating defenses, mapping fixes to live threats, and driving resolution across existing tools.    
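The difference context makes can be shown in a few lines of code. The sketch below is purely illustrative: the Exposure fields and scoring weights are invented for this article, not drawn from any specific product. It shows how an agent might rank findings only after it knows asset criticality, existing control coverage, threat relevance, and exploitability, rather than treating every finding as equally urgent.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """A single finding, enriched with the context an agent needs (hypothetical fields)."""
    asset: str
    asset_is_critical: bool      # is the affected asset business-critical?
    compensating_control: bool   # is a working control already covering it?
    actively_exploited: bool     # is the weakness tied to a live threat campaign?
    exploitable: bool            # did validation confirm it is actually reachable?

def prioritize(exposures: list[Exposure]) -> list[Exposure]:
    """Rank exposures so the agent acts on real, unmitigated, reachable risk first."""
    def score(e: Exposure) -> int:
        s = 0
        s += 3 if e.actively_exploited else 0
        s += 2 if e.exploitable else 0
        s += 2 if e.asset_is_critical else 0
        s -= 2 if e.compensating_control else 0  # already covered, so lower urgency
        return s
    return sorted(exposures, key=score, reverse=True)

if __name__ == "__main__":
    findings = [
        Exposure("build-server", True, False, True, True),
        Exposure("test-vm", False, True, False, False),
    ]
    for e in prioritize(findings):
        print(e.asset)
```

Even a toy model like this makes the point: without the context fields, every finding scores the same and the agent is just generating noise faster.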

AI as an Amplifier 

There is no shortage of vendors promising AI agents to defend the enterprise. But if you do not understand what you are automating, you are only accelerating your own mistakes. 

Misconfigured controls are already one of the top drivers of breaches. Gartner reports that 61% of security leaders suffered a breach in the last year due to misconfigured defenses. Simply automating those same misconfigurations does not strengthen resilience; it magnifies the problem. 

That is why AI must be treated as an amplifier. It will make what is working better, and what is broken worse. The opportunity lies in using AI to validate and improve security posture before layering on automation. When directed this way, AI is not just a faster way to react; it becomes a multiplier for discipline and a tool that can tip the balance back in favor of defenders. 

From Illusion to Insight 

Many organizations believe they are protected because they have invested in dozens of tools. On paper, the stack looks strong. In practice, it is riddled with gaps. 

That false confidence is one reason why more than 80% of breaches could have been prevented with tools organizations already had, if only those controls were configured, tuned, and validated. 

This is where AI can help in a measurable way. Instead of showing vanity metrics, AI-powered analysis can: 

  • Measure real exposure reduction. 
  • Validate whether controls are actually working. 
  • Map controls to known threats. 
  • Provide security leaders with proof that risk is going down and not just that tools are running. 

These are the same outcomes CTEM was built to deliver, but in practice many programs stall at visibility. Dashboards and reports can show where exposures exist, yet too often they stop short of producing a clear, actionable path to resolution. This is where the combination of context and execution matters. With the right foundation, agentic AI can extend CTEM from identifying issues to helping fix them. It can make cycles faster, more consistent, and less dependent on already stretched-thin human analysts. The difference is moving from knowing about exposures to proving they are being reduced, turning CTEM from a framework into an engine for measurable outcomes.      
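To make those outcomes concrete, here is a minimal, hypothetical sketch of what “proof that risk is going down” can look like in data terms: comparing how many validated, uncovered exposures each control left open across two assessment cycles. The control names and numbers are invented for illustration.

```python
# Hypothetical snapshots: each maps a security control to the number of
# validated, exploitable exposures it failed to cover at assessment time.
last_quarter = {"EDR": 14, "Email gateway": 9, "MFA": 6}
this_quarter = {"EDR": 5, "Email gateway": 7, "MFA": 0}

def exposure_reduction(before: dict[str, int], after: dict[str, int]) -> dict[str, float]:
    """Percent reduction in uncovered exposures per control between two snapshots."""
    reduction = {}
    for control, prior in before.items():
        current = after.get(control, 0)
        reduction[control] = 100.0 * (prior - current) / prior if prior else 0.0
    return reduction

for control, pct in exposure_reduction(last_quarter, this_quarter).items():
    print(f"{control}: {pct:.0f}% fewer uncovered exposures")
```

A trend line like this, tied to live threats rather than tool uptime, is the kind of evidence a board can actually act on.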

The Path Forward: AI as a Catalyst 

The companies that stay ahead won’t be the ones using the most AI. They’ll be the ones that know where it can be safely and efficiently deployed, and where it cannot. That starts with the basics: validate what you already have, map every control to a real threat, and fix what matters most. Only then does AI become a real catalyst, helping teams spot gaps faster, shorten response times, and stay ahead of risk. 

This is where CTEM and agentic AI work best: together. CTEM gives security teams the structure to focus on real exposures. Agentic AI brings the execution power to act on them without slowing the team down. One provides direction. The other delivers speed. 

If you’re not sure where to start, start small. Choose a use case that eats up your team’s time, like validating misconfigurations, mapping controls to threats, or confirming whether a fix actually worked. Use CTEM to guide the process and identify the fixes that matter. Let agents take on the repeatable work.
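As one hypothetical example of that repeatable work, the sketch below checks an exported control configuration against a policy baseline and reports drift. The setting names and baseline values are assumptions made for illustration, not a reference to any particular tool.

```python
# A minimal sketch of a check an agent could own: comparing an exported
# configuration against the policy baseline and reporting drift.
BASELINE = {
    "mfa_required": True,
    "public_bucket_access": False,
    "tls_min_version": "1.2",
}

def find_drift(current_config: dict) -> list[str]:
    """Return human-readable findings for any setting that departs from baseline."""
    findings = []
    for key, expected in BASELINE.items():
        actual = current_config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# Example run against a (hypothetical) exported configuration.
exported = {"mfa_required": True, "public_bucket_access": True, "tls_min_version": "1.0"}
for finding in find_drift(exported):
    print(finding)
```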

Whether you are a mature, global brand or a 20-person startup, the truth is the same: no one is immune. Overconfidence in AI may be your biggest vulnerability. Confidence in how you apply AI, however, can become your greatest advantage. 
