Future of AI

AI in Cybersecurity: Keeping Up with Evolving Threats

By Spencer Summons, Multi-award-winning Cyber Strategy Leader and Mentor, Founder of Opliciti

Artificial Intelligence is fundamentally reshaping the cybersecurity landscape, empowering both cyber defenders and cyber criminals with unprecedented capabilities. On the defensive side, AI enables faster threat detection, automated incident response, and predictive analytics. However, cyber criminals are also leveraging AI to develop more deceptive, adaptive, and scalable threats—raising the critical question: who is winning the AI cyber arms race? 

In a U.S. Congressional hearing held in June 2025, Representative Andrew Garbarino, Chairman of the Subcommittee on Cybersecurity and Infrastructure Protection, emphasized the transformative potential of AI in strengthening U.S. national cybersecurity. He stated:

“AI is upscaling cybersecurity teams’ ability to manage vulnerabilities, detect and analyze threats, track regulatory compliance, and automate responses to security incidents.” 

Garbarino also highlighted that these capabilities are especially vital in light of the ongoing shortage of skilled cybersecurity professionals. By reducing workloads and enhancing outcomes, AI offers a powerful force multiplier for security teams. 

Yet, while defenders are cautiously integrating AI within regulatory and ethical frameworks, cyber criminals face no such constraints. This allows them to adopt and weaponize AI technologies more rapidly, potentially outpacing defenders in agility and innovation. Some of the perceived benefits for both cyber attackers and cyber defenders include:

For Cyber Defenders

Threat Detection and Anomaly Recognition – AI-integrated threat detection systems can analyse even greater amounts of network traffic and user behaviour to identify anomalies that may indicate a breach. These systems can learn from historical data, enabling them to detect zero-day exploits and previously unknown malware.
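The anomaly-recognition idea can be sketched in a few lines: train an unsupervised model on a baseline of "normal" traffic, then flag flows that deviate from it. This is a minimal illustration using scikit-learn's IsolationForest; the flow features and their values are hypothetical, not taken from any real detection product.

```python
# Minimal anomaly-detection sketch: fit an unsupervised model on
# baseline traffic, then score a new flow. Features are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated baseline flows: [bytes_sent, duration_s, dest_port_entropy]
normal_flows = rng.normal(loc=[5_000, 2.0, 1.5],
                          scale=[1_000, 0.5, 0.2],
                          size=(500, 3))
# A burst of data to unusual ports, far outside the learned baseline
suspicious_flow = np.array([[250_000, 0.1, 4.8]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns -1 for outliers, +1 for inliers (scikit-learn convention)
print(model.predict(suspicious_flow))
```

In practice the baseline would be built from historical telemetry rather than simulated values, which is what allows such systems to flag previously unseen malware behaviour.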

Automated Incident Response – AI-driven security platforms can automate responses to certain threats, reducing the time between detection and mitigation. For instance, if ransomware is detected on a network, AI can isolate the affected systems, halt the spread, and initiate recovery protocols without human intervention.  
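As a toy illustration of that detect-then-respond loop, the playbook logic might look like the sketch below. The alert schema and action names are invented for the example; real security platforms expose their own APIs for isolation and recovery.

```python
# Hypothetical response playbook: map a detection alert to an ordered
# list of containment and recovery actions, no human in the loop for
# known threat types, escalation for everything else.
def respond(alert: dict) -> list[str]:
    """Return the response actions for a detection alert."""
    actions = []
    if alert.get("type") == "ransomware":
        actions.append(f"isolate_host:{alert['host']}")         # cut network access
        actions.append(f"block_hash:{alert['file_hash']}")      # halt the spread
        actions.append(f"restore_from_backup:{alert['host']}")  # recovery protocol
    else:
        actions.append("escalate_to_analyst")                   # human review
    return actions

alert = {"type": "ransomware", "host": "ws-042", "file_hash": "abc123"}
print(respond(alert))  # isolation first, then blocking, then recovery
```

The ordering matters: isolation comes before recovery so that restored systems are not immediately re-infected.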

Predictive Analytics – By analysing patterns and trends, AI can forecast potential vulnerabilities and attack vectors. This proactive approach allows organizations to patch weaknesses before they are exploited. 

Security Orchestration, Automation, and Response (SOAR) – AI enhances SOAR platforms by integrating threat intelligence, automating workflows, and prioritizing alerts. This reduces alert fatigue among security analysts and ensures that critical threats receive immediate attention.
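Alert prioritization, one of the SOAR functions mentioned above, reduces to a scoring-and-sorting problem: weight each alert by detection severity and by how critical the affected asset is, then surface the highest-risk alerts first. The weights and fields below are assumptions for illustration, not any product's actual scheme.

```python
# Illustrative triage scoring for a SOAR-style pipeline.
# Severity weights and alert fields are assumed, not product-specific.
SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def priority(alert: dict) -> int:
    # Combine detection severity with the criticality of the affected asset,
    # so a "high" alert on a crown-jewel server outranks a "critical" on a lab box.
    return SEVERITY[alert["severity"]] * alert.get("asset_criticality", 1)

alerts = [
    {"id": 1, "severity": "low", "asset_criticality": 1},       # score 1
    {"id": 2, "severity": "critical", "asset_criticality": 5},  # score 50
    {"id": 3, "severity": "high", "asset_criticality": 2},      # score 14
]
triaged = sorted(alerts, key=priority, reverse=True)
print([a["id"] for a in triaged])  # highest-risk first: [2, 3, 1]
```

Even this crude scheme shows why triage cuts alert fatigue: analysts see a ranked queue instead of a flat stream of equally-weighted alerts.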

For Cyber Attackers 

While defenders use AI to build smarter defences, attackers are equally adept at exploiting AI to enhance their capabilities.  

AI-Powered Phishing and Social Engineering – Generative AI models can craft highly convincing phishing emails, mimicking the writing style of trusted contacts or tailoring messages based on social media data. These personalised attacks are more likely to deceive recipients, increasing the success rate of phishing campaigns. 

Malware Evasion and Polymorphism – AI can be used to create malware that adapts its behaviour to avoid detection. By analysing how antivirus software works, attackers can train AI models to generate code that bypasses security filters. This results in polymorphic malware that changes its signature with each iteration. 

Automated Vulnerability Discovery – Just as defenders use AI to find weaknesses in their systems, cyber criminals use it to scan for vulnerabilities in software and networks. AI can rapidly analyse codebases, identify exploitable bugs, and even suggest payloads for exploitation. 

Deepfake and Voice Spoofing Attacks – AI-generated deepfakes and synthetic voices are being used in impersonation attacks. For example, a deepfake video or voice message from a CEO could trick employees into transferring funds or revealing sensitive information. 

Is anyone winning? 

While it may be tempting to measure success in cybersecurity by looking at high-profile, publicised attacks, this presents only part of the picture. In reality, we have limited visibility into how many attacks are successfully thwarted, making it difficult to assess who is truly “winning.” 

Instead, one useful lens is to examine the relative speed of AI adoption. By comparing how quickly cyber criminals and defenders are able to integrate and leverage emerging technologies like artificial intelligence, we gain insight into their agility and capacity to innovate. This perspective helps us understand not just outcomes, but the evolving capabilities on both sides. 

Adoption of AI by Cyber Criminals  

No governance or ethical oversight – Cyber criminals experiment freely with generative AI, deepfakes, and automated attack tools without worrying about data protection, regulation, or reputational risk. Criminal groups are also free of the governance, compliance, and ethical oversight that slow down corporate adoption.

Fewer and lower barriers to entry – According to the World Economic Forum (February 2024), AI is “evolving and enhancing existing tactics, techniques, and procedures, while lowering the access barrier for cyber criminals by reducing the technical expertise required to launch attacks.” This democratisation of cybercrime means more actors can participate with minimal skill. 

Greater incentive and ROI – AI-powered attacks offer a high return on investment. Many tools are freely available, easy to repurpose, and require no approval cycles. This enables criminals to deploy more convincing and effective attacks—such as deepfakes and AI-generated phishing—faster and with greater financial reward.

Readiness vs. Potential – While the landscape may seem primed for a surge in AI-driven cybercrime, current capabilities are still evolving. The UK’s National Cyber Security Centre (NCSC) notes that “cyber threat actors are almost certainly already using AI to enhance existing tactics, techniques, and procedures (TTPs), including victim reconnaissance, vulnerability research, exploit development, social engineering, basic malware generation, and data processing.” 

However, the report also emphasizes that, in the near term, only highly capable state actors are likely to harness the full potential of AI in advanced cyber operations.

Adoption of AI by Cyber Defenders 

The integration of AI into cybersecurity presents significant opportunities to improve threat detection, incident response, and overall operational efficiency. However, the success of AI in this domain is not guaranteed—it depends heavily on several foundational factors. 

AI Potential vs. Organizational Readiness 

AI tools offer powerful capabilities, but their effectiveness hinges on the presence of robust infrastructure, comprehensive visibility, and skilled personnel. Many organisations lack these essentials, particularly clean, centralised data and integrated systems, which limits the real-world impact of AI in cybersecurity. 

Despite these challenges, recent research from ISC2 reveals strong momentum toward adoption. Nearly 30% of cybersecurity teams have already integrated AI tools—such as AI-enabled operations and generative AI for automated actions—into their workflows. An additional 42% are actively evaluating or testing these technologies, while only 10% have no current plans to adopt AI. 

Maturity as a Limiting Factor 

Organisations with lower cybersecurity maturity often struggle to realize the full benefits of AI. In these environments, poor data quality, fragmented systems, and limited oversight can lead to unreliable AI outputs. This creates a critical gap between AI’s theoretical potential and its practical effectiveness. 

The Effectiveness Challenge 

While adoption is accelerating, operational effectiveness remains a major hurdle. Without the right foundations—particularly infrastructure, data hygiene, and skilled personnel—AI tools may underperform or even introduce new risks. This disconnect is especially pronounced in less mature organisations, where enthusiasm for AI may outpace readiness. 

Skilled Resources Are Essential 

Successful AI integration requires professionals who understand both cybersecurity and AI. These experts are vital for interpreting results, validating alerts, and guiding AI learning. In an already under-resourced industry, this demand adds pressure. AI is not replacing cybersecurity roles—it’s reshaping them, necessitating a shift in education and training. 

Trust and Assurance Challenges 

AI can now automate tasks traditionally handled by entry-level analysts, such as alerting, triage, and basic correlation. However, its reliability depends on high-quality, assured data. In environments with limited visibility or conflicting tools, AI may struggle to deliver accurate insights. Human validation remains essential to maintain trust in AI-driven decisions. 

Strategic Use in Low-Maturity Environments 

Even in less digitally mature organisations, AI can deliver value in targeted areas such as anomaly detection, knowledge enhancement, and documentation generation. These use cases can serve as stepping stones toward broader AI integration. A carefully planned roadmap is essential to align AI potential with organisational readiness and maturity. 

Conclusion 

The accelerating adoption of AI is reshaping the cybersecurity landscape—on both sides of the battlefield. Cyber criminals, unburdened by regulation and ethics, are leveraging AI to scale attacks with unprecedented speed and sophistication. Meanwhile, defenders face a more complex path, constrained by organisational readiness, data quality, and resource limitations. Although AI offers transformative potential for improving threat detection and response, its success depends on strategic implementation, skilled personnel, and trust in its outputs. As the gap between cyber criminal agility and defender maturity narrows, the race to harness AI effectively will be a defining factor in the future of cybersecurity. The stakes are high, and the urgency to act is clear: those who fail to adapt risk falling behind in an increasingly AI-driven threat environment. 
