
Fighting AI with AI: A Practical View of the New Cybersecurity Reality

By Nick Mo, co-founder & CEO, Ridge Security

This year, attackers began using AI in a far more aggressive and systematic way. The threat is real and will continue to grow in 2026. We have already seen breaches in which attackers used advanced AI models to automate most of their steps, and we are now seeing malware that rewrites itself in real time to evade detection.

While attackers are advancing their techniques, defenders still face significant constraints on employing AI in cybersecurity. Internal corporate policies must be followed, including compliance requirements and privacy protections. Furthermore, agreement and alignment across many teams are required before any new technology can be adopted into regular practice. Attackers face none of these limitations.

This imbalance is becoming increasingly obvious, and it creates new problems as traditional approaches to security fall behind. Periodic testing is no longer meaningful when attackers adapt continuously. Detection alone is not enough.

How Attackers Are Using AI Today 

A clear example of this new threat is the recent Anthropic report describing how a Chinese state-backed group used the Claude model to automate up to ninety percent of a cyber-espionage operation. The hackers provided only high-level guidance, while the AI handled reconnaissance, vulnerability discovery, exploitation, and credential harvesting. The AI broke tasks into smaller pieces, framed them as “legitimate testing”, and bypassed the model’s guardrails. This is the first known case in which AI essentially acted as the operator, not just a helper.

Another example is Google’s PROMPTFLUX case, in which malware queried an LLM during execution to rewrite its own code and evade detection. This represents a new class of adaptive malware that changes itself in the middle of an attack.

There are also deepfake-enabled fraud cases and AI-powered phishing engines such as WormGPT and FraudGPT sold on underground markets. These tools let attackers scale social engineering and credential theft at a pace that human security training simply cannot match.

All these examples share a common thread: attackers are not waiting for rules or regulations. They adopt the newest AI capabilities immediately, leaving defenders trailing behind.

Why Defenders Are Falling Behind 

For defenders, everything is slower. Before adopting any AI capability, they must clear many hurdles: compliance, privacy requirements, risk management concerns, and internal policies all have to be checked. Customer data must be secured so that no exposure or leaks occur, and any AI model they deploy must behave predictably. All these concerns are valid, but they also limit speed and efficiency when combating attackers.

Environments today depend on many third-party cloud platforms that are not under direct control. The Salesforce breach and the recent Slack incident are reminders that even when internal controls are strong, weaknesses in these external platforms still leave the organization exposed. This is very different from the era when most systems were on-premises.

There is also the human factor. Employees inside the organization are still the biggest source of security risk, whether intentional or accidental. Even with AI defense tools, a single compromised device or a simple mistake can open a door that attackers can easily exploit with automated tools. 

Traditional detection-focused security simply cannot keep up with this reality. Attackers are moving at AI-powered speed, while defenders still rely on manual processes and reactive thinking.

The New Direction: Autonomous Security Validation 

To defend against AI-enabled attackers, we need to start using AI differently. The most important direction is autonomous security validation. This means using AI to continuously think like an attacker, simulate real attack paths, and check our environment for weaknesses before attackers find them. 

This is not the same as anomaly detection or running a vulnerability scanner. It means adopting the attacker’s mindset and letting AI automatically test identity systems, cloud configurations, privileges, access paths, collaboration platforms, and other areas of the environment. Instead of waiting for alerts, defenders discover exposures proactively. A simplified sketch of such a validation loop follows.
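
To make the idea concrete, here is a minimal Python sketch of a continuous validation loop. Everything in it is hypothetical: the check functions are placeholders standing in for real attacker-style tests, not any particular product’s API.

# A minimal sketch of a continuous validation loop. Every check function
# below is a hypothetical placeholder standing in for a real attacker-style
# test (credential abuse, privilege review, cloud misconfiguration, etc.).
import time
from typing import Callable

def weak_admin_credentials() -> list[str]:
    # Placeholder: would attempt logins with default or reused passwords.
    return []

def over_privileged_accounts() -> list[str]:
    # Placeholder: would compare granted privileges against actual usage.
    return []

def exposed_cloud_storage() -> list[str]:
    # Placeholder: would probe storage buckets for unauthenticated access.
    return []

CHECKS: dict[str, Callable[[], list[str]]] = {
    "identity": weak_admin_credentials,
    "privileges": over_privileged_accounts,
    "cloud configuration": exposed_cloud_storage,
}

def validate_once() -> dict[str, list[str]]:
    # Run every attacker-style check and collect the exposures it finds.
    return {area: check() for area, check in CHECKS.items()}

if __name__ == "__main__":
    # Continuous rather than periodic: re-test on a tight loop so a new
    # exposure surfaces in hours, not at the next quarterly pentest.
    while True:
        for area, findings in validate_once().items():
            for finding in findings:
                print(f"[exposure] {area}: {finding}")
        time.sleep(3600)  # re-validate hourly; tune to your rate of change

The point is the loop, not the individual checks: validation runs continuously, so a new exposure surfaces in hours rather than at the next scheduled test.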

Future AI-integrated defense should function continuously and be proactive rather than reactive.

What Security Leaders Need to Focus On Now 

First, shift the mindset from “Are we protected?” to “Can an attacker break in right now?” That question is the starting point for autonomous validation.

Second, focus on identity and credential hygiene. Almost every AI-supported attack starts with credential theft or privilege escalation. 

Third, improve visibility across all the cloud platforms and collaboration systems your teams depend on. You cannot protect what you cannot see, and these platforms are outside your control. 

Fourth, adopt a zero-trust mindset. Assume no user, device, or platform is trustworthy without verification; a small sketch of this check order follows this list.

Finally, keep in mind that AI will never completely remove the human factor. Human behavior, mistakes, and internal processes still matter considerably. AI can help, but it cannot compensate for bad hygiene or careless actions. 
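
As promised under the fourth point, here is a toy Python sketch of zero-trust request handling. The verify helpers and the sample users, devices, and resources are all hypothetical stubs; a real deployment would call an identity provider, a device-posture service, and a policy engine.

# A toy illustration of zero-trust request handling. The verify_* helpers
# are hypothetical stubs; real deployments would call an identity provider,
# a device-posture service, and a policy engine.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_id: str
    resource: str
    token: str

def verify_identity(token: str) -> bool:
    # Stub: a real check would validate the token with the identity provider.
    return token == "valid-token"

def verify_device(device_id: str) -> bool:
    # Stub: a real check would confirm the device is managed and patched.
    return device_id.startswith("managed-")

def authorized(user: str, resource: str) -> bool:
    # Stub: a real check would consult a least-privilege policy engine.
    return (user, resource) in {("alice", "payroll-db")}

def allow(req: Request) -> bool:
    # No implicit trust: every request re-proves identity, device health,
    # and authorization, even when it originates inside the network.
    return (verify_identity(req.token)
            and verify_device(req.device_id)
            and authorized(req.user, req.resource))

print(allow(Request("alice", "managed-laptop-7", "payroll-db", "valid-token")))  # True
print(allow(Request("alice", "personal-phone", "payroll-db", "valid-token")))    # False

The ordering matters less than the principle: no check is skipped just because a request originates from a “trusted” network segment.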

What Comes Next 

AI has already changed the balance between attackers and defenders. Unconstrained by rules, processes, or governance, attackers now hold a major advantage. Defenders can catch up, but only if we start using AI not just for detection but for continuous validation: thinking like the attacker and identifying exposures before they are exploited.

The future of cybersecurity will depend on how quickly we can move from reactive detection to proactive validation. In short, we have to fight AI with AI. 
