
For years, discussions around artificial intelligence in cybersecurity have focused on how AI could make sophisticated attacks even more advanced. However, a far more transformative shift is taking shape – AI is giving everyone the capabilities once reserved for highly skilled adversaries, making cybercrime today more about access than expertise.
Low-skill actors, high-impact threats
In the defence sector, where operational networks, supply chains, and mission-critical systems form sprawling digital ecosystems, the implications of this shift are profound. Adversaries no longer require deep technical knowledge or the resources once needed to conduct advanced intrusions.
What many in the defence sector once considered “Hollywood-level” cyber capability has now been reduced to something anyone can access and deploy. Pre-built AI frameworks, jailbroken models, and user-friendly offensive toolkits allow even low-skilled individuals to orchestrate campaigns that previously required coordinated, well-funded teams.
This collapse in the skill barrier is perhaps the most consequential development in modern cyber risk. The threat landscape is no longer defined by a small number of highly capable actors, but by a rapidly expanding base of inexperienced individuals who can execute high-impact AI attacks without necessarily understanding how or why they work.
The ethical gap: early talent and early risk
AI is accelerating capability development at a pace that traditional education and career pathways cannot match, while introducing new ethical challenges that the defence sector must be prepared to address.
Young people are now able to acquire advanced cyber skills, or at least the ability to execute advanced techniques, long before they enter formal training environments. This creates a dilemma: if organisations don’t actively guide and channel this emerging talent into the right pathways, some may drift into harmful or adversarial activity.
Organisations must take a more active role in identifying early talent, nurturing it ethically, and preventing AI-accelerated capability from being misused. What was previously a pipeline challenge is now a safeguarding challenge. We’ve moved from needing more talent to needing to protect and guide that talent, as increasingly powerful skills appear earlier and outside of formal pathways.
The explosion of insecure systems
While this trend unfolds across all sectors, organisations in the defence industry are also rushing to integrate AI technologies to remain competitive. This urgency is understandable, with AI permeating workflows from logistics to intelligence analysis. However, rapid adoption brings equally rapid exposure.
In practice, this means organisations are introducing new attack surfaces at a pace they cannot yet fully secure. IoT devices, cloud platforms, and AI-enabled tools now make it possible for applications to be generated and deployed within minutes. However, these tools increasingly rely on opaque or unvetted third-party components. For example, AI-generated code – often produced through so-called “vibe coding” – may appear clean and functional while still containing severe vulnerabilities.
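The risk described above can be made concrete with a small, hypothetical sketch (the function names and database schema are invented for illustration). The first function is the kind of code an AI assistant can plausibly produce: it runs, looks clean, and passes a casual review, yet builds SQL by string interpolation and is therefore open to injection. The second shows the parameterised alternative.

```python
import sqlite3

# Hypothetical lookup in the style of generated code: functional and
# tidy-looking, but the username is interpolated directly into the SQL,
# so a crafted input can rewrite the query (SQL injection).
def find_user_insecure(conn, username):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Safe equivalent: a parameterised query keeps user data out of the SQL
# text entirely, so the same input is treated as a literal string.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    payload = "' OR '1'='1"  # classic injection payload
    print(find_user_insecure(conn, payload))  # leaks every row
    print(find_user_safe(conn, payload))      # returns nothing
```

The point is not this specific bug, but that the two versions are nearly indistinguishable to a reviewer who only checks that the code works, which is exactly the gap that rapid, unvetted AI-assisted development widens.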
The push to “AIenable everything”, from training to command and control, risks embedding vulnerabilities faster than they can be identified or mitigated. The result is a rapidly widening attack surface, created not by malicious actors but by wellintentioned organisations racing to innovate.
AI will overwhelm defenders before it outsmarts them
Another poorly understood shift is how AI will disrupt operational stability, not just security. AI enables attackers to flood systems, security operations centre (SOC) teams, and even public-sector services with machine-generated alerts, queries, or noise at a scale no human team can manage.
The defence sector is already witnessing this: automated phishing waves, AI-generated spam campaigns designed to exhaust public processes, and SOCs receiving vast volumes of meaningless alerts designed to obscure genuinely malicious activity. Machine-scale chaos is becoming a strategy in itself.
Training will define resilience
As AI becomes embedded in both attack and defence, cybersecurity within the defence sector is evolving into something new: the discipline of managing and ‘taming’ AI. This requires organisations to adopt a dual mindset: AI is both the greatest threat and the greatest opportunity in cybersecurity.
It is essential to understand not only how AI amplifies adversarial capability, but also how it can enhance defensive decision making, accelerate triage, and strengthen organisational resilience.
Cybersecurity today extends far beyond protecting networks. It now includes safeguarding people, processes, and even organisational judgment from the unintended consequences of rapid AI adoption. While AI-driven security tools are important, relying on them alone will only take organisations so far. True capability comes from people who are trained to operate effectively in AI-contested environments.
This is why cyber training is no longer optional but essential. Teams must be exposed early to “the uncomfortable things”, from AI-driven red-team attacks and machine-speed reconnaissance to rapid-fire exploitation, within controlled environments so they can think and respond at the pace of automated threats.
Without this depth of readiness, organisations will continue to fall behind adversaries. In this new AI-defined era, the organisations that thrive will be those that prioritise capability over convenience, training over tooling, and preparedness over assumptions.



