
Artificial intelligence (AI) is transforming cybersecurity for both attackers and defenders. While AI has opened doors to efficiency and innovation for organizations, it has also supercharged the capabilities of malicious actors. The result is a threat landscape that evolves faster than many organizations can adapt.
From Floods to Precision Strikes
For years, distributed denial-of-service (DDoS) attacks were defined by their brute force. A flood of traffic would overwhelm servers and knock services offline. But in 2025, the rules are different. Attackers can deploy AI to identify weak points in real time, mimic legitimate user behavior, and launch targeted “precision strikes.” These attacks are harder to detect, costlier to mitigate, and often arrive with very little warning.
This isn’t just theoretical. Availability – once taken for granted – is now viewed as a top risk, with the average minute of application downtime costing thousands of dollars and causing far greater harm in critical service sectors such as healthcare.
How We Got Here
The rapid rise of AI in cyberattacks didn’t happen in isolation. Geopolitical conflicts have normalized cyber warfare as a legitimate tool, and global tensions have amplified its frequency and impact. Events like Russia’s invasion of Ukraine showcased how cyber weapons can accompany physical conflict – and then ripple outward to affect allies and private industry.
At the same time, AI tools have become accessible to anyone with a laptop and an internet connection. Techniques that once required advanced expertise – think reconnaissance, phishing, or traffic obfuscation – can now be automated and personalized at scale. Attackers share scripts, collaborate, and iterate rapidly – and AI makes them faster still.
This is in stark contrast to how cybersecurity professionals usually work – in silos, with little to no publicly shared information or collaboration. There are many reasons for this, but the bottom line is the same: defenders operate on quarterly roadmaps while attackers move in seconds.
AI in the Hands of Attackers
Attack surfaces are growing with the expansion of cloud infrastructure, IoT devices, and hybrid work. Meanwhile, AI gives attackers the ability to probe those surfaces autonomously, identify weaknesses, and strike without rest.
In short, AI enables attackers to:
● Conduct automated reconnaissance of networks and systems.
● Identify vulnerabilities in business logic, not just infrastructure.
● Generate polymorphic traffic and phishing lures that evade traditional filters.
● Bypass defenses like CAPTCHAs, or even exploit multi-factor authentication workflows.
AI can be used to craft messages that are indistinguishable from legitimate communications – making phishing even more scalable and impactful. It can even be used to automate “one-time password” scams that trick users into giving attackers the very codes meant to protect them.
As AI models become integral to business operations, they too are becoming targets. Emerging research has demonstrated “data poisoning” and “no-click” exploits aimed at manipulating AI systems themselves. For organizations that rely on AI for core workflows, an attack on their models could prove more damaging than a temporary outage of traditional IT systems.
OpenAI’s Sam Altman, one of the leading voices in the AI industry, illustrates the stakes: he has said that one of his biggest fears is that a “bad guy” gets superintelligence first and uses it to steal money or cause havoc before the rest of the world has a version powerful enough to defend itself.
How to Defend Better
Defenders face structural disadvantages. Regulatory and business processes slow implementation cycles, while attackers iterate freely. Security teams are often siloed, with separate tools and teams for application, network, and endpoint threats, whereas attackers view an organization holistically.
Given this reality, trying to “keep up” with every new attack technique is a losing battle. Instead, defenders need to focus on the advantages they have that attackers can’t replicate. These include:
● Deep Understanding of Legitimate Behavior – While attackers can mimic, they can’t perfectly replicate the patterns of real users and systems over time. AI-powered defenses should be set up to model “peacetime” behavior, including user journeys, transaction flows, and geographic norms – and then to flag anomalies. This behavior-based detection can catch even new zero-day attacks.
● Unified Visibility – Fragmented defenses are easier to bypass. Organizations benefit from security architectures where detection and mitigation tools share data and context across application, network, and cloud environments. If a botnet probes an application login page, that intelligence should inform DDoS defenses and fraud detection systems automatically.
● Leveraging AI for Efficiency – Just as attackers use AI to scale, defenders can use it to accelerate detection, automate responses, and bridge gaps between siloed systems. AI can surface weak signals buried in telemetry and orchestrate cross-platform mitigations faster than human analysts alone.
● Anticipating New Targets – As reliance on AI models deepens, they will require their own protections and best practices. Security standards for LLMs and other AI systems are still emerging. Organizations should proactively assess how their AI dependencies could be manipulated or disrupted – and build in resilience, even before regulations mandate it.
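The “peacetime baseline” idea behind behavior-based detection can be sketched in a few lines. This is an illustrative toy only – the metric (login attempts per minute), the sample data, and the three-sigma threshold are all assumptions, and a production system would model many signals (user journeys, transaction flows, geographic norms) rather than one:

```python
import statistics

def build_baseline(samples):
    """Model 'peacetime' behavior as the mean and standard deviation
    of an observed metric (here: login attempts per minute)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    peacetime mean. Because this compares against learned normal
    behavior rather than known attack signatures, it can surface
    even never-before-seen (zero-day) attack traffic."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Peacetime: login attempts per minute observed during normal operation
peacetime = [52, 48, 50, 55, 47, 53, 49, 51]
baseline = build_baseline(peacetime)

print(is_anomalous(50, baseline))   # ordinary traffic → False
print(is_anomalous(400, baseline))  # sudden spike → True
```

The design choice matters: a signature-based filter only catches what it has seen before, while a baseline-based detector flags any departure from normal – which is exactly the defender’s advantage, since attackers can mimic legitimate behavior but cannot perfectly replicate it over time.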
What’s Next
AI is often compared to the internet revolution, but its pace of adoption is far faster and its potential impact far greater. Within weeks of new tools becoming available, attackers and defenders alike are experimenting with them. Organizations that don’t incorporate AI are making an active choice to fall behind.
The goal isn’t to eliminate all risk – that’s impossible – but instead to shift the balance of power. Organizations that understand their operations intimately, unify their defenses, and leverage AI intelligently have a fighting chance against adversaries moving as fast as AI will let them.
AI has changed the game. The next wave of attacks won’t just be bigger; they’ll be smarter. The question for organizations is whether their defenses will be too.