AI in cybersecurity is a double-edged sword

By Adam Winston, Field CTO for Managed Services at WatchGuard Technologies

Artificial intelligence (AI) has already transformed the cybersecurity landscape, acting as a powerful defence mechanism and a threat vector. Its dual nature presents organisations with a growing paradox. While AI promises faster, smarter protection, it also enables increasingly sophisticated attacks. But in this double-edged dynamic, the greatest risk may not lie in AI itself but rather in how we choose to integrate or depend upon it.

Over the past decade, security teams have faced exponential data growth, evolving threats and a chronic skills shortage. AI has stepped in as a critical enabler, automating many of the repetitive, high-volume tasks that previously overwhelmed analysts. Pattern recognition, anomaly detection, event correlation and even alert triage are now often delegated to machine learning models. For many businesses, these tools provide the only viable means of maintaining visibility across sprawling digital environments.
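To make this concrete, here is a minimal sketch of the kind of anomaly detection being delegated to models, using scikit-learn's isolation forest on synthetic telemetry. The features (events per minute, distinct destination ports, gigabytes out) and the contamination setting are illustrative assumptions, not a production design.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The per-host features are illustrative assumptions, not a recommended
# production feature set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: one row per host observation window
# (events/min, distinct destination ports, GB out).
normal = rng.normal(loc=[120, 5, 2.0], scale=[20, 2, 0.5], size=(1000, 3))

# A few windows that deviate sharply (e.g., port-scan-like behaviour).
suspicious = np.array([[450, 80, 9.0], [300, 60, 7.5]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for inliers and -1 for anomalies.
for row, label in zip(suspicious, model.predict(suspicious)):
    print(row, "anomalous" if label == -1 else "normal")
```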

However, human capabilities still provide a broader perspective, enabling analysts to interpret historical data or locate the key information needed to resolve an incident. Functions like threat hunting or interpreting anomalous patterns require technical skills, analytical judgment and situational awareness. So, companies that integrate AI with human capabilities can build stronger, more resilient cybersecurity models: a hybrid approach that allows them to anticipate, detect and respond effectively.

Certainly, one of the clearest benefits of AI is its speed in processing and interpreting vast data streams. AI systems can parse millions of events in real time, flagging potential threats with greater efficiency than any human could hope to match. For example, in incident response, AI can initiate predefined containment actions before an analyst even logs in. In these ways, AI is no longer an experimental layer in cybersecurity architecture; it's foundational.
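As a hedged illustration of such predefined containment, the sketch below auto-isolates a host only when model confidence clears a policy threshold and queues everything else for a human. The Detection fields, the isolate_host call and the 0.95 threshold are hypothetical, standing in for whatever EDR or SOAR API an organisation actually uses.

```python
# Hypothetical auto-containment hook: everything here (Detection fields,
# isolate_host, the 0.95 threshold) is an illustrative assumption, not a
# real product API.
from dataclasses import dataclass

@dataclass
class Detection:
    host: str
    score: float  # model confidence that this is malicious, 0.0-1.0
    rule: str

AUTO_CONTAIN_THRESHOLD = 0.95  # assumed policy value

def isolate_host(host: str) -> None:
    """Placeholder for an EDR/network isolation call."""
    print(f"[containment] network-isolating {host}")

def handle(detection: Detection) -> str:
    # High-confidence hits are contained at machine speed; everything
    # else waits for an analyst, keeping a human in the loop.
    if detection.score >= AUTO_CONTAIN_THRESHOLD:
        isolate_host(detection.host)
        return "contained"
    return "queued_for_analyst"

print(handle(Detection("ws-042", 0.98, "ransomware-beacon")))
```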

Of course, foundational does not mean infallible. A core limitation of AI in cybersecurity is its reliance on training data. Machine learning models are only as good as the data they are fed. Garbage in, garbage out, as they say. If a model is trained on historical logs that fail to include novel or evolving threats, it will lack the insight to detect them. And if the data contains bias or mislabelled inputs, that bias will be reproduced at speed and scale.

As the UK’s National Cyber Security Centre noted, adversarial actors can exploit these vulnerabilities through techniques like data poisoning and evasion attacks, manipulating inputs to deceive models and undermine their effectiveness (NCSC, 2023). This has led to growing concern about the weaponisation of AI, particularly generative models like large language models (LLMs), which can produce human-like text that bypasses traditional phishing filters or mimics legitimate users and entities with alarming fidelity.
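A toy demonstration of the data-poisoning point, assuming scikit-learn and synthetic data: flipping even a modest fraction of training labels measurably degrades a simple classifier. Real poisoning attacks are far more subtle than random label flips, but the failure mode is the same.

```python
# Toy label-flipping (data poisoning) demo on synthetic data.
# Purely illustrative of the failure mode, not any real-world attack.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip a fraction of labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return clf.score(X_te, y_te)  # accuracy on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} labels flipped -> "
          f"test accuracy {accuracy_with_poisoning(frac):.2f}")
```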

These capabilities are already being leveraged in the wild. In underground forums, cybercriminals are actively sharing techniques for using AI tools to create malware, phishing scripts and social engineering content.

AI also poses risks in less obvious ways. When decision-making becomes opaque, such as when algorithms classify an event as a threat without explanation, security teams are left navigating a black box. Trust becomes an open question. And in highly regulated industries like finance or healthcare, this lack of interpretability can hinder compliance and lead to costly oversight failures.
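One partial remedy, sketched below under the assumption of a tree-based model, is to surface what the model is keying on alongside its verdicts. The feature names here are invented for illustration, and per-alert attributions (such as SHAP values) would go further than these global importances.

```python
# Attach a rough "why" to a detection model's output using a random
# forest's feature importances. Feature names are invented for
# illustration; per-alert explanations would be more precise.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

FEATURES = ["failed_logins", "bytes_out", "new_process_count", "dns_entropy"]

X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)
clf = RandomForestClassifier(random_state=1).fit(X, y)

# Global importances give analysts a first-pass answer to
# "what is this model actually keying on?"
ranked = sorted(zip(FEATURES, clf.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
for name, weight in ranked:
    print(f"{name:20s} {weight:.2f}")
```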

Machines, however capable, still lack context. They do not understand nuance, business impact, or ethical ramifications. They cannot interpret whether a login at an odd hour is a threat or simply an executive travelling internationally. They cannot distinguish between an unusual network spike and a poorly configured update. They cannot weigh the legal implications of blocking a transaction or shutting down a service.

This is where human expertise remains irreplaceable. Security professionals bring a wealth of tacit knowledge, intuition, and critical thinking that AI cannot replicate. They can draw on experience, consider environmental context, and make decisions that reflect ethical, legal, and business concerns.

Clearly, the most effective cybersecurity strategies are those that combine AI’s speed and scale with human judgement and oversight. This hybrid model is already being adopted by leading managed detection and response (MDR) providers, where automated threat detection is paired with expert human analysts. These teams can investigate ambiguous activity, validate alerts, and tune systems over time, ensuring AI tools remain aligned with operational reality.
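A minimal sketch of that tuning loop, with invented field names and an assumed precision target: analyst verdicts on reviewed alerts become labelled data from which a new alerting threshold is derived.

```python
# Illustrative analyst-feedback loop: verdicts on reviewed alerts are used
# to re-tune the model's alerting threshold. The data, field layout and
# tuning rule are assumptions for illustration.

# (model_score, analyst_verdict) pairs from reviewed alerts;
# verdict is True when the analyst confirmed a real threat.
reviewed = [(0.62, False), (0.71, True), (0.55, False),
            (0.83, True), (0.67, False), (0.91, True)]

def retune_threshold(pairs, target_precision=0.9):
    """Pick the lowest score threshold whose confirmed-threat rate
    among alerts at or above it meets the target precision."""
    scores = sorted({s for s, _ in pairs})
    for t in scores:
        above = [verdict for s, verdict in pairs if s >= t]
        if above and sum(above) / len(above) >= target_precision:
            return t
    return max(scores)  # fall back to alerting only on the top score

print("new alert threshold:", retune_threshold(reviewed))
```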

This alliance is especially crucial given the regulatory trajectory around AI and cybersecurity. The EU’s Artificial Intelligence Act, adopted in 2024, places significant obligations on organisations using AI in high-risk areas, including cyber defence. It mandates transparency, human oversight, and accountability for decisions made by AI systems (EUR-Lex, 2024). Similar frameworks are emerging globally, from Singapore’s Model AI Governance Framework to the UK government’s pro-innovation regulatory approach.

Security teams must build systems that not only defend against threats but also comply with evolving legal standards. This demands a strategic approach to AI adoption: one that understands its capabilities, anticipates its weaknesses, and never loses sight of the human judgment it cannot replace.

As cybercriminals continue to leverage AI to create more believable scams, generate polymorphic malware, and overwhelm systems with synthetic traffic, defenders must respond in kind. But automation alone is not a silver bullet. What’s needed is a mindset shift: one that sees AI not as a replacement, but as a force multiplier.

To borrow a phrase from security technologist Bruce Schneier, security is not a product; it’s a process. AI may accelerate that process, but it is the people behind the tools who ultimately determine the outcome. And in this age of intelligent machines, human intelligence has never mattered more.
