AI & Technology

Magic, Math, and Mayhem: The Real Story of AI in Cyber Defense

By Aaron R. Warner, CEO, ProCircular, Inc.

The headline sounds fantastic: drop AI into your security stack, cut breach response times by 80 days, and save almost $2 million. IBM’s 2025 Cost of a Data Breach Report promises those kinds of turnkey returns, and security leaders everywhere are hungry for real ROI numbers to back it up.

If you peel back the layers, the performance data are still compelling, but those numbers rest on shaky ground. The reality of breach economics is complex, and no one layer or vendor can solve your cybersecurity challenges. Arthur C. Clarke famously observed that “any sufficiently advanced technology is indistinguishable from magic,” and the same applies to AI in cybersecurity. It may appear to be magic, but that’s primarily due to the systems’ complexity. There’s no magic in AI or cybersecurity, but it is complicated enough to make understanding its performance difficult.

Rather than relying solely on IBM’s excellent research, we compared the results to similar industry-leading reports from Verizon and Mandiant. We’ve taken the extra step of including reports from all three organizations over the last decade, comparing their estimates in 2015 to the 2025 work to check both progress and accuracy.

While the dollar figures and some of the performance results might be debatable, once the data are normalized and reviewed, the trends are worth noting. AI is a worthy investment if implemented correctly and strategically. It’s also an excellent place to throw money down a hole if you aren’t applying it carefully.

The $10 Million Question  

According to the same IBM report, the U.S. average breach in 2025 costs $10.22 million, up 9% from last year and more than double the global average of $4.44 million. That’s 15 years in a row that the U.S. has reported the most expensive breaches.

Why the U.S. costs more:  

  • Tighter regulations increase workload for each breach (reporting, legal fees, etc.)  
  • Higher wages for skilled labor (there is a skillset shortage, and this talent is expensive)  
  • High concentrations of high-value data and financial resources in one country  

Healthcare still takes the top spot with $9.77 million per breach, and even the “cheaper” public-sector breaches come in at $2.55 million. Across the Atlantic, the U.K. looks almost reasonable by comparison at £3.29M ($4.12M). Heavy users of AI and automation seem to have shaved off about £670,000 on average, an excellent bit of progress but far from magical.

What the Headlines Don’t Tell You 

Breach cost figures from vendors are often built on shaky ground. The CISA Office of the Chief Economist found that median cost estimates vary wildly depending on the source, from $56,000 to over $40 million. One of the most cited studies, the Ponemon/IBM report, is based on survey responses from just 600 organizations globally. It includes speculative “opportunity costs” and relies on a cost-per-record model that explains only a fraction of the actual variance in breach costs, just 2% to 13%.

That 2–13% figure tells us the cost-per-record model doesn’t hold water. It assumes breach costs scale neatly with the number of records lost, but they don’t. A breach involving a few thousand sensitive healthcare records can cost more than one leaking millions of usernames from a marketing database. The model ignores what truly drives cost: downtime, legal exposure, business impact, and the complexity of response. When a metric that underpins headlines and budget slides explains only a sliver of real-world outcomes, it stops being a shortcut and becomes a distraction.

Insurance data paint a different picture. NetDiligence reports median breach costs around $56,000, while Advisen’s updated figures put the number closer to $196,000. Even factoring in policy limits and coverage gaps, we’re still nowhere near the $10M+ figures that vendor-funded surveys suggest. Academic work by Sasha Romanosky backs this up: the mean breach loss may be close to $6 million, but the median is just $170,000, another example of how outliers skew the story.
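The mean-versus-median gap is easy to demonstrate: a single mega-breach can drag the average far above the typical loss. The sketch below uses an invented loss distribution (illustrative only, not real breach data) to show the effect:

```python
import statistics

# Hypothetical breach losses in dollars -- invented for illustration.
# Six modest incidents plus one outlier mega-breach.
losses = [40_000, 75_000, 110_000, 170_000, 230_000, 300_000, 55_000_000]

mean_loss = statistics.mean(losses)      # pulled far up by the one outlier
median_loss = statistics.median(losses)  # the "typical" breach

print(f"mean:   ${mean_loss:,.0f}")    # roughly $8M
print(f"median: ${median_loss:,.0f}")  # $170,000
```

With one outlier in seven incidents, the mean lands near $8 million while the median sits at $170,000, the same shape Romanosky observed in real loss data.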

These surveys aren’t ideal methodologies, but that doesn’t mean they aren’t helpful in decision-making. We needed more data, from a larger population, with different collection methods, over a longer period.

AI’s Real Impact — and Its Limits  

AI’s practical benefits are consistent across multiple independent reports. No single dataset tells the  whole story, but when you combine the findings from reliable sources, a few powerful patterns emerge.  

For example, organizations that fully embrace AI and automation are seeing a significant impact on their breach timelines, reducing the average breach lifecycle by 80 days, from 241 to 161. That translates to $1.9 million in savings per breach globally, and even more for U.S. organizations, where the stakes are higher.

Looking at the long game, Mandiant reports a 92.5% drop in median dwell time, from 146 days in 2015 to just 11 in 2025. Internal detection improved even more dramatically, from 320 days to 10. That kind of progress doesn’t happen by accident, and it suggests that organizations investing in automation and AI are keeping pace while traditional approaches are starting to lag. The slight year-over-year uptick in dwell time, from 10 to 11 days, could be a sign that threat actors are now leveraging generative AI themselves, getting better at it, and able to hide for longer.

Figure 1: A Decade of Change — Breach Cost, Dwell Time, and Detection Improvements (2015–2025).

Normalized comparison of key breach metrics from IBM, Mandiant, and Verizon reports. The data reveal  dramatic reductions in dwell time and detection timelines, alongside inflation-adjusted improvements in  breach cost control for AI-adopting organizations.

In the U.K., the data show that AI adopters saved roughly £670,000 per breach, and IBM’s decade-long lens shows a 20% increase in nominal breach costs but an 8.6% decrease when adjusted for inflation. The cost of getting hacked is rising, but the organizations that manage it using AI are showing measurable improvement over their peers.
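The nominal-versus-real distinction is simple arithmetic: deflate the nominal growth by cumulative inflation over the same window. A minimal sketch, where the ~31% cumulative inflation figure is an assumption chosen to be consistent with the report’s stated numbers rather than taken from it:

```python
# Nominal growth in breach costs over 2015-2025, per IBM's decade comparison.
nominal_growth = 0.20

# Assumed cumulative U.S. inflation over the same decade (illustrative,
# roughly in line with CPI for the period).
cumulative_inflation = 0.313

# Real (inflation-adjusted) change: nominal growth deflated by inflation.
real_change = (1 + nominal_growth) / (1 + cumulative_inflation) - 1

print(f"nominal: {nominal_growth:+.1%}")  # +20.0%
print(f"real:    {real_change:+.1%}")     # about -8.6%: a decline in real terms
```

In other words, a 20% sticker-price increase can still be a real-terms decrease once a decade of inflation is factored out.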

Of course, it’s not all upside, and AI adoption introduces new risks. IBM found that 13% of organizations reported breaches involving their AI systems, 97% of those due to poor access controls. And where “shadow AI” (unauthorized tools used by employees) is widespread, breach costs jump by $670,000 on average. Any CIO can tell you that AI risks bubble up from the user community daily when SaaS and online solutions sell directly to departments and users rather than through a traditional IT buying process. The inmates tend to run the asylum, and they’re now armed with their own brand of unapproved AI-enabled tools.

The bottom line? AI helps us respond faster and contain more damage—but it’s also creating new  openings. Like any tool, it’s only as effective as the hands—and policies—that guide it.

The Skillset Gap Is the Real Crisis  

We’re facing a global shortfall of more than 4.8 million cybersecurity professionals, a 19% jump from last year (ISC2). The U.S. is short more than half a million cybersecurity FTEs, and the U.K. is down another 93,000 (ISC2). But the bigger problem isn’t just how many people are missing; it’s what the people already in the field can (and can’t) do.

Severe staffing shortages raise breach costs by $1.76 million on average (IBM), but skills gaps are even more damaging. Ninety percent of organizations report lacking critical expertise; nearly two-thirds say skills shortages hurt more than pure headcount deficits (ISC2). The most urgent needs, cloud security, zero trust design, and emerging tech defense, aren’t just technical boxes to check. They reflect strategic shifts many security programs haven’t yet made, leaving teams nearly twice as likely to face a material breach (IBM).

These risks and shortcomings become immediately apparent during a breach. Large and small companies  often ask, “Who do we even call?” when no one has the relevant experience, and each minute becomes  more expensive.  

Hiring more people won’t fix this alone. Closing the skills gap requires targeted training for existing staff and new talent. Partnering with educators to align curricula with real-world threats and industry input helps ensure that tomorrow’s hires are ready from day one. Whether through Ph.D. cybersecurity programs at schools like Carnegie Mellon and Iowa State or certificate programs at a local community college, cybersecurity and AI education is an essential security layer.

At the same time, economic pressures are pulling organizations in the wrong direction. In 2024, a quarter of organizations cut security staff, over a third faced budget reductions, and hiring freezes hit 38% (ISC2). The result is a burnout loop: fewer people, more pressure, lower satisfaction, and rising turnover, draining talent and capability. That’s why many companies are turning to private cybersecurity firms to run or augment their limited programs, bringing in specialized expertise that would be too costly or time-consuming to build in-house. If we want a resilient cybersecurity workforce, we must build it, not just staff it.

Third Parties: Accountability Without Authority  

One of the most troubling shifts in the threat landscape is the surge in breaches involving third parties.  According to Verizon’s DBIR (2025), the share of incidents tied to vendors, partners, or other external  providers has doubled from 15% to 30% in a single year. The headline cases—Snowflake, Change  Healthcare, CDK Global—show how one compromised provider can ripple across an entire industry.  When a single vendor holds a dominant position in a market, the blast radius of a breach can be  enormous. Nowhere is this more dangerous than in healthcare, where interconnected provider networks  make these some of the costliest breaches in history.  

The technical side of this shift is just as alarming. Edge devices and VPNs, once a niche entry point, now account for 22% of exploitation targets, up from just 3% last year (Verizon DBIR 2025). Median time to mass exploitation is effectively zero; in many cases, attackers begin exploiting vulnerabilities the same day a CVE is published. Much of this can be traced to the pandemic’s rapid shift to remote work. VPNs were rolled out at scale, often without the hardening they’d get under normal timelines, and attackers have been capitalizing on that widened attack surface ever since. Many weaknesses can still be found with a simple Shodan.io search of a company’s domain name.

If we zoom out and look at the past decade, the speed race between attackers and defenders has been staggering. In 2015, top-performing organizations could detect a breach in just over a month; now, leaders spot threats in 30 minutes to four hours, a 99% improvement (Mandiant, Verizon DBIR). Response has accelerated even more dramatically, with mean time to recovery dropping from two to three weeks down to just a few hours, a 98% improvement.

Unfortunately, attackers have also been evolving and improving. In 2015, about 60% of compromises occurred within minutes; by 2025, that figure had risen to 87%. Both good guys and bad guys are running faster than ever, but defenders are still behind their adversaries. Blue teams must be right all day, every day, whereas a hacker only needs to be right once.

The Human Factor Is Still the Greatest Challenge  

For all the technology we throw at the problem, people remain the wild card. Verizon’s DBIR (2025) shows that 60% of breaches still involve a human element: social engineering, simple mistakes, or insider actions. That’s down a bit from the historical 68–74% range (Verizon DBIR), but it proves you can’t firewall away human behavior.

Ransomware is where the human and technical worlds collide most visibly. It now appears in 44% of breaches, up from 32% (Verizon DBIR). The good news is that the economics are shifting: the median ransom payment has dropped from $150,000 to $115,000, and 64% of victims now refuse to pay, compared to 50% two years ago. That’s a sign that better backups and sharper incident response plans are making a difference. But the picture changes with scale; small businesses are still hit hardest. Ransomware appears in 88% of SMB breaches, compared to just 39% of enterprise breaches (Verizon DBIR).

While 45% of security teams report that they now use GenAI tools, 72% of employees are signing up for AI accounts with personal email addresses, bypassing corporate controls and creating fresh “shadow IT” exposure (FBI, IBM). The bad actors have noticed, too: malicious emails built with AI have doubled in just two years, rising from 5% to 10% (Verizon DBIR). The tools that make defenders faster are the same ones making attackers more efficient.

Recommendations for Security Leaders  

  1. Deploy AI with strong guardrails from the start. 

An 80-day reduction in breach lifecycle and 30–40% cost savings (IBM) are compelling results, but only if the foundation is solid. Establish clear governance, define approved uses, and lock down access to prevent misuse. Begin with detection and response automation, applying AI in well-tested areas where the benefits are proven and the risks are more manageable, which also reduces the risks of the GenAI technology itself. And don’t forget the less sexy work of hardening and continually monitoring your edge assets, thereby reducing that post-COVID VPN exposure and other commonly exploited issues. (AI scanning can help with that, too.)

  2. Expand capability through education, not just hiring. 

With a global shortage of 4.8 million cybersecurity professionals (ISC2), hiring alone can’t close the gap. Invite technical and non-technical staff alike to try new tools within your program. You’ll never eliminate shadow IT, but you can bring those users into the IT fold. Internal development conferences, training, mentoring, and similar programs improve your visibility into risks, protect users and the organization from themselves, and make projects and tool usage easier to monitor. Pair these with targeted training to close critical skills gaps, especially in cloud security, zero trust design, and AI governance, and manage workloads to prevent burnout.

  3. Modernize third-party risk programs. 

With 30% of breaches linked to vendors and partners (Verizon DBIR), the old approach of static questionnaires is outdated. Implement continuous monitoring, require least privilege and multi-factor authentication for all vendor access, and include supply chain scenarios in your incident planning.

  4. Build with the assumption of compromise. 

Adversaries can move laterally in under 90 seconds, while detection still takes too long in many organizations (Mandiant). Design detection and response as if the attacker is already inside. Test playbooks quarterly, and ensure recovery systems reduce the value of anything intruders can access.

  5. Engage early with law enforcement and trusted networks. 

Organizations that work with law enforcement recover an average of $1 million more per incident and have a 66% success rate in fund recovery (FBI). The FBI has a vast network of resources, and its guidance during a breach can help prevent costly mistakes. Build these relationships well ahead of any issues, and participate in information-sharing groups such as InfraGard. You don’t want to be looking for the FBI’s phone number at 2:30 a.m. during a breach.

Conclusion  

AI delivers measurable gains for organizations that use it well—faster detection, shorter containment,  and reduced breach impact. At the same time, technology alone isn’t the answer. Your modern  cybersecurity program should combine AI with governance, skilled people, strong processes, and realistic  third-party risk strategies.  

Vendors will always make lofty claims about return on investment. In the case of GenAI, though, the risk of inaction far outweighs the potential downsides. In an environment where cost estimates can swing by orders of magnitude and dwell times remain a persistent challenge, even modest gains in speed and containment can tip the balance.

The most successful leaders will be those who acknowledge the new reality of GenAI, adapt early, involve their employees and teams in the security program, and treat AI as a core security technology.
