
Fighting AI with AI: A Practical View of the New Cybersecurity Reality

By Nick Mo, co-founder & CEO, Ridge Security

This year, attackers began using AI in a far more aggressive and systematic way. The threat is real and will continue to grow in 2026. We have already seen breaches in which attackers used advanced AI models to automate most of their steps, and we are seeing malware that rewrites itself in real time to avoid detection.

While attackers are advancing their techniques, defenders face significant constraints when employing AI in the cybersecurity field. Internal corporate policies must be followed, including compliance requirements and privacy protections. Furthermore, agreement and alignment are required across many teams before any new technology can be adopted into regular practice. Attackers face none of these limitations.

This imbalance is becoming increasingly obvious, and it creates new problems as traditional approaches to security fall behind. Periodic testing is no longer meaningful when attackers adapt continuously, and detection alone is not enough.

How Attackers Are Using AI Today

A clear example of this new threat is the recent Anthropic report in which a Chinese state-backed group used the Claude model to automate up to ninety percent of a cyber-espionage operation. While the hackers merely guided it at a high level, the AI handled reconnaissance, vulnerability discovery, exploitation, and credential harvesting. The AI broke tasks into smaller pieces, framed them as "legitimate testing," and bypassed the guardrails. This is the first known case where AI essentially acted as the operator, not just a helper.

Another example is Google's PROMPTFLUX case, where malware used an LLM during execution to rewrite its own code to evade detection. This represents a new type of adaptive malware that learns during the attack.

There are also deepfake-enabled fraud cases and AI-powered phishing engines such as WormGPT and FraudGPT sold on dark-web markets. These allow attackers to scale social engineering and credential theft in a way that human security training simply cannot match.

All these examples make the same point: attackers are not waiting for rules or regulations. They readily adopt the newest AI capabilities, leaving defenders trailing behind.

Why Defenders Are Falling Behind

For defenders, everything is slower. Before any AI capability can be adopted, compliance obligations, privacy requirements, risk management concerns, and internal policies must all be checked. Customer data must be secured so that no exposure or leaks occur. Moreover, any AI model put into use must behave in a predictable way. All of these concerns are valid, but they also limit speed and efficiency when combatting attackers.

Environments today depend on too many third-party cloud platforms that are not under direct control. The Salesforce breach and the recent Slack incident are reminders that even when internal controls are strong, the security posture of these external platforms still leaves organizations exposed. This is very different from the days when most systems were on-premises.

There is also the human factor. Employees inside the organization are still the biggest source of security risk, whether intentional or accidental. Even with AI defense tools, a single compromised device or a simple mistake can open a door that attackers can easily exploit through automation.

Traditional detection-focused security simply cannot keep up with this reality. Attackers are moving at AI-powered speed, while defenders are still working with manual processes and reactive thinking.

The New Direction: Autonomous Security Validation

To defend against AI-enabled attackers, we need to start using AI differently. The most important direction is autonomous security validation. This means using AI to continuously think like an attacker, simulate real attack paths, and check our environment for weaknesses before attackers find them.

This is not the same as anomaly detection or running a vulnerability scanner. It means adopting the attacker's mindset and letting AI automatically test our identity systems, cloud configurations, privileges, access paths, collaboration platforms, and other areas of the environment. Instead of waiting for alerts, exposure is discovered proactively.

Future AI-integrated defense should function continuously and be proactive rather than reactive.
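
To make this concrete, here is a minimal sketch, in Python, of what a continuous validation loop can look like. It is purely illustrative: the planner, executor, and finding names are hypothetical placeholders, not any specific product's API.

    # Conceptual sketch of an autonomous validation loop. Everything here is
    # a hypothetical placeholder; the point is the shape of the loop: think
    # like an attacker, test continuously, surface exposure before an alert.

    import time
    from dataclasses import dataclass

    @dataclass
    class AttackPath:
        steps: list[str]  # e.g. ["compromise user", "use admin grant"]

    def propose_paths(environment: dict) -> list[AttackPath]:
        """Stand-in for an AI planner that enumerates likely attack paths
        from identity, privilege, and cloud-configuration data."""
        paths = []
        for user, grants in environment.get("privileges", {}).items():
            if "admin" in grants:
                paths.append(AttackPath([f"compromise {user}", "use admin grant"]))
        return paths

    def attempt_safely(path: AttackPath) -> str | None:
        """Stand-in for a non-destructive exploit attempt. Returns a finding
        description when the path would succeed, None otherwise."""
        return " -> ".join(path.steps)  # placeholder: assume the path works

    def validation_loop(environment: dict, interval_s: int = 3600) -> None:
        """Continuous validation instead of a periodic, point-in-time audit."""
        while True:
            for path in propose_paths(environment):
                finding = attempt_safely(path)
                if finding:
                    print(f"[exposure] {finding}")  # feed this to remediation
            time.sleep(interval_s)  # re-test as the environment changes

    # Example: one over-privileged account is flagged on every pass.
    # validation_loop({"privileges": {"svc-backup": ["admin"]}}, interval_s=60)

The design point is the loop itself: validation runs on an ongoing schedule and re-tests the environment every time it changes, rather than producing a one-time report.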

What Security Leaders Need to Focus On Now

First, shift the mindset from "are we protected" to "can an attacker break in right now." That is the starting point for autonomous validation.

Second, focus on identity and credential hygiene. Almost every AI-supported attack starts with credential theft or privilege escalation.

Third, improve visibility across all the cloud platforms and collaboration systems your teams depend on. You cannot protect what you cannot see, and these platforms are outside your control.

Fourth, adopt a zero-trust way of thinking. Assume no user, device, or platform is trustworthy without verification; the short sketch after the final point below illustrates the idea.

Finally, keep in mind that AI will never completely remove the human factor. Human behavior, mistakes, and internal processes still matter considerably. AI can help, but it cannot compensate for bad hygiene or careless actions.
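
To illustrate the zero-trust point above, here is a minimal sketch in the same illustrative style: every request is verified on its own merits, with deny-by-default logic, regardless of where it originates. The token set and field names are hypothetical, not any particular framework's API.

    # Minimal zero-trust sketch: verify every request explicitly rather than
    # trusting anything by network location. All names are illustrative.

    from dataclasses import dataclass

    @dataclass
    class Request:
        user_token: str | None
        device_attested: bool  # has the device passed posture checks?
        resource: str

    VALID_TOKENS = {"token-abc"}  # stand-in for a real identity provider

    def authorize(req: Request) -> bool:
        """Deny by default; grant only when identity AND device verify."""
        if req.user_token not in VALID_TOKENS:
            return False  # unverified user: no implicit trust
        if not req.device_attested:
            return False  # verified user, unverified device: still denied
        return True       # both checks passed, for this one request only

    print(authorize(Request("token-abc", True, "crm/records")))   # True
    print(authorize(Request("token-abc", False, "crm/records")))  # False
    print(authorize(Request(None, True, "crm/records")))          # False

Even a request originating "inside the network" goes through the same checks; nothing is trusted by location alone.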

What Comes Next

AI has already changed the balance between attackers and defenders. Unconstrained by rules, processes, or governance, attackers now hold a major advantage in the game. Defenders can catch up, but only if we start using AI not just for detection but for continuous validation, thinking like the attacker and identifying exposures before they are exploited.

The future of cybersecurity will depend on how quickly we can move from reactive detection to proactive validation. In short, we have to fight AI with AI.
