
A decisive decade for cyber‑risk
Artificial intelligence is no longer a research novelty; like every truly revolutionary technology, its effects are complex, with extreme pros and cons. Few can dispute that it is now the nucleus of both software creation and software exploitation. Whether we arrive at a future in which AI automatically defends the infrastructure we rely on, or one in which autonomous malware tears through supply chains at machine speed, hinges on how quickly defenders modernize today. Recent data points already provide insight into how both paths are developing.
Proofs of concept now compile themselves
Software contains vulnerabilities that need to be identified and then patched; that is the nature of software's iterative process. Patching typically begins with a proof of concept (PoC): a minimal, often non-malicious demonstration that proves a vulnerability exists, establishes its scope, and shows how it can be triggered. Only then can a comprehensive patch be designed.
The PoC is like a blueprint or a test case: enough to prove the flaw is real and to provide insight into how it can be fixed, but not necessarily harmful on its own. Previously, developing a PoC required days or weeks of manual reverse-engineering. Today, a large language model can ingest a CVE description, pull in the salient code fragments, and suggest a viable PoC and patch in minutes. In aggregate, this means code can be patched faster, a net positive for defenders.
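To make that workflow concrete, here is a minimal sketch in Python of how a security team might wire a model into patch triage. Everything here is illustrative: llm_complete is a hypothetical stand-in for whatever model client an organization actually uses, and the prompt structure is just one plausible shape, not a prescribed method.

```python
def llm_complete(prompt: str) -> str:
    # Hypothetical helper: call your model provider of choice here and
    # return its text response. Stubbed out for illustration only.
    raise NotImplementedError("wire up a real model client")


def suggest_patch(cve_description: str, code_fragment: str) -> str:
    """Ask the model for a root-cause summary, a candidate fix, and a PoC test."""
    prompt = (
        "You are assisting a security team.\n\n"
        f"CVE description:\n{cve_description}\n\n"
        f"Affected code:\n{code_fragment}\n\n"
        "1. Explain the root cause in two sentences.\n"
        "2. Propose a minimal patch as a unified diff.\n"
        "3. Propose a non-destructive proof-of-concept test that fails "
        "before the patch and passes after it.\n"
    )
    return llm_complete(prompt)
```

The same scaffold, pointed at the same CVE text, could just as easily be prompted for an attack payload instead of a patch, which is exactly the dual-use tension the next section describes.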
Weaponized exploits now compile themselves
However, the same technology can be used to modify a PoC not to patch the flaw but to exploit it. A weaponized exploit takes that PoC and turns it into a fully functional attack tool, engineered for reliability, stealth, and impact, often with payloads that enable remote code execution, privilege escalation, lateral movement, and other attacker capabilities. These are the versions threat actors use in real-world attacks, in a process called weaponizing. The ability to transpose PoCs into exploits rather than patches is a net positive for attackers.
In other words, AI is accelerating both PoC creation and patching for defenders and exploit weaponization for attackers. A clear example of the complexity: the very PoC an AI develops to guide a patch can itself be weaponized. Researchers recently showed an AI assistant building a working exploit for an Erlang SSH bug (CVE‑2025‑32433) the same afternoon the patch shipped.
That speed is not a single anecdote. Multiple threat‑intelligence reports note a steep rise in attacks that weaponize public vulnerabilities almost as soon as they are disclosed. One summary found that the use of exploited vulnerabilities as an initial breach vector nearly tripled year over year. Meanwhile, Deloitte logged a 400% surge in IoT‑focused malware, underscoring how automation scales beyond traditional IT into every connected asset. Well-known reports such as Verizon's DBIR and Mandiant's M-Trends echo the same pattern. In short, the once‑comfortable patch window, the few days organizations used to have between a vulnerability's disclosure and the need to patch it, is closing.
Vibe code volume is exploding faster than humans can secure it
Generative coding tools such as GitHub Copilot, Amazon Q, and Replit Ghostwriter have changed how code is developed. So-called "vibe coding" is a new paradigm in software development in which you describe what you want in natural language and AI generates the code for you. The volume of AI-generated code is staggering. GitHub says Copilot already writes 46 percent of the code in the average repository and more than 60 percent in Java projects. Venture capital is following: start‑ups building AI coding assistants attracted nearly $1 billion in fresh funding in the past 12 months.
That productivity story, the pro of faster development, has an infosec con subplot. More code ships every day, but the global pool of experienced security professionals is not growing at the same rate. Security teams are under pressure, and the backlog of code awaiting security review keeps growing. Each AI‑generated module expands the attack surface: even if individual snippets are no less secure than hand‑written code and the ratio of vulnerabilities to lines of code holds steady, the sheer volume guarantees more latent vulnerabilities. Without commensurate automation on the defensive side, organizations are widening the vulnerability footprint that adversaries already exploit.
The fragile integrated software supply chain
If velocity alone were not enough, the integrated software supply chain raises the stakes. The 2024 XZ Utils backdoor, which took years of social‑engineering groundwork, was a near‑miss that could have granted remote‑root access to countless Linux servers. Five years after the SolarWinds breach, experts still rate supply‑chain attacks among the top existential threats, and the latest ReversingLabs/ISACA report warns that such campaigns are “accelerating and rapidly evolving” in 2025.
Why is the chain so fragile? Modern applications are assembled from thousands of sub-components, typically written by different parties, including open‑source dependencies, container images, and infrastructure‑as‑code templates. AI accelerates that composability: a prompt can add a dozen new libraries to a project in seconds, each one a potential Trojan horse. Once again, speed and scale are both the enemy and the opportunity.
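One pragmatic countermeasure to that composability risk is to refuse any dependency that is not pinned to a known-good digest before it enters the build. Below is a minimal, hypothetical sketch of such a pre-install check; the lockfile format and file names are assumptions for illustration, not the interface of any real tool.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a downloaded package file."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def vet_artifact(artifact: Path, lockfile: Path) -> bool:
    """Refuse any package whose digest is not pinned in the team's lockfile.

    Assumed lockfile format: {"pkg-1.2.3.tar.gz": "<sha256 hex>", ...}
    """
    pins = json.loads(lockfile.read_text())
    expected = pins.get(artifact.name)
    if expected is None:
        print(f"BLOCK {artifact.name}: not in the approved lockfile")
        return False
    if sha256_of(artifact) != expected:
        print(f"BLOCK {artifact.name}: digest mismatch, possible tampering")
        return False
    return True
```

A gate like this does not stop a backdoor that ships inside an approved release, as XZ Utils showed, but it does stop silent substitution of artifacts and forces every new library an AI prompt pulls in through an explicit approval step.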
What a dystopian trajectory looks like
Software is now practically everywhere: in our homes, our cars, and in our transportation, energy, healthcare, and financial ecosystems. Combine AI‑authored code, AI‑driven exploit generation, and a fragile supply chain, and you get a recipe for semi‑autonomous campaigns that strike faster than any human can triage them. Picture ransomware crews chaining a fresh CVE to compromise a build system, swapping in a tainted dependency, and distributing malicious updates to tens of thousands of downstream customers, all before breakfast. Extensive critical-infrastructure outages, poisoned data pipelines, and physical-safety incidents shift from hypothetical to routine.
The uncomfortable truth is that pieces of this future are already visible. The Erlang exploit above was created the same day as disclosure. Proof‑of‑concept malware targeting AI‑enabled industrial devices piggybacks on the IoT surge Deloitte flagged. And when an attacker quietly shepherded malicious code through the open‑source review process to land in XZ Utils, the community caught it only thanks to an engineer's curiosity about an odd benchmark result, hardly a repeatable or assured defense.
The utopian counter‑narrative: Flow Defending
Utopia is not guaranteed, but it is technically achievable. The same algorithms that slash exploit-development time can also be incorporated into the flow of software development, shrinking both the number of unpatched vulnerabilities and defenders' mean time to remediate, but only if organizations embrace true automation and embed these capabilities into their SDLC. Flow-defending technologies include:
- Autonomous remediation agents that correlate scanner output with configuration-management data, then file pull requests or push patches directly to CI/CD pipelines, are moving from prototype to production. While a modest degree of human intervention is still needed, early deployments report backlog elimination "in the blink of an eye," freeing analysts for higher‑order tasks.
- Intelligent software bills of materials (SBOMs) give security teams continuous line of sight into risks, transitive dependencies, and licensing obligations. The OpenSSF's 2025 guidance stresses choosing generators that integrate into the flow of developer workflows so that SBOM information is actionable and transparency is the default, not an after‑the‑fact audit (see the sketch after this list).
- Hardened build ecosystems verify every package and sign every artifact, offering a tamper‑evident chain of custody from source to production. When paired with in‑line AI code‑reviewers that flag insecure patterns before merge, the loop from creation to validation becomes as swift as the threat cycle.
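As a concrete illustration of the SBOM bullet above, here is a minimal sketch that makes an SBOM actionable by flagging components on a known-vulnerable list. It assumes a CycloneDX-style JSON document with a top-level components array; the inline advisory feed is a hypothetical stand-in for a real vulnerability source such as OSV or an internal feed.

```python
import json
from pathlib import Path

# Hypothetical advisory feed: package name -> set of known-bad versions.
# The XZ Utils entries reflect the backdoored 5.6.0/5.6.1 releases.
KNOWN_VULNERABLE = {
    "xz-utils": {"5.6.0", "5.6.1"},
}


def flag_risky_components(sbom_path: Path) -> list[str]:
    """Return 'name@version' for every SBOM component on the bad list."""
    sbom = json.loads(sbom_path.read_text())
    findings = []
    for component in sbom.get("components", []):
        name, version = component.get("name"), component.get("version")
        if version in KNOWN_VULNERABLE.get(name, set()):
            findings.append(f"{name}@{version}")
    return findings


if __name__ == "__main__":
    # Assumed file name for a CycloneDX JSON SBOM produced in CI.
    for hit in flag_risky_components(Path("sbom.cdx.json")):
        print(f"ALERT: {hit} matches a known-vulnerable release")
```

Run continuously in a pipeline against every build's SBOM, even a check this simple turns the inventory from a compliance artifact into a live tripwire.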
Together, these moves flip the narrative: speed and scale work for defenders, not against them. Every new component is catalogued automatically; every critical patch is synthesized and tested by an agent long before an attacker finishes a phishing email.
Governance—and the human element
Technology alone will fail without policy guardrails and the incentives to enforce them. Boards should consider balancing their significant investment in automated code generation with a dedicated percentage for automated defensive security tooling. Regulators can accelerate change by mapping liability to the absence of basic controls such as SBOM publication or exploit‑ready mean-time-to-patch metrics. And enterprise procurement teams must factor "secure‑by‑design" proof points into vendor selection: signed artifacts, reproducible builds, and machine‑readable assurance reports.
Deterrence: an important missing component
The deterrence debate is complex. At a simplistic level, on the attacking side, the fundamental reason a vast industry of bad actors exists is that it pays; follow the money and the motive is plain. The risk-reward equation is asymmetrical: systems are probed thousands of times a day, and if you attack and fail there are almost no consequences, while if you succeed you reap the financial benefit. Raising the cost of being a malicious actor needs to be right-sized and actually implemented.
On the defensive side, holding companies more accountable when vulnerabilities in their AI-generated code are exploited and cause material harm might force them to prioritize investment in AI defenses. Currently, most of the momentum is behind AI code generation while AI-assisted code defense lags.
In general, and perhaps something we can all agree on, it is more realistic and practical to impose meaningful penalties on bad actors than on their victims.
A narrow window for choice
History suggests breakthrough technologies amplify existing incentives. AI will not magically produce either a cybersecurity utopia or dystopia; it will intensify whatever environment we create. We already see AI tipping the scales toward offense; the question is whether defenders will match that curve before the next catastrophic incident slips through.
The defensive playbook is being developed today: codify the supply chain, automate remediation, and implement AI co‑pilots that look for security debt as aggressively as they suggest new code. Organizations that treat these capabilities as cost centers may soon find themselves funding recovery crises instead.
The longer we wait, the more the "response window" collapses into a permanent, unbridgeable gap. But if we act with the same ingenuity attackers are showing, AI's remarkable velocity can just as readily carry us toward a safer, more resilient digital ecosystem.
The future is not pre‑written; it is being compiled right now. Let’s choose to secure our software supply chain with every secure build we ship.