Future of AI

AI’s fork in the road: Will software development using Vibe Coding or supply-chain security using Flow Defending lead us to utopia or dystopia?

By Russ Andersson, COO & Co-Founder, RapidFort

A decisive decade for cyber risk

Artificial intelligence is no longer a research novelty; like any truly revolutionary technology, its effects are complex, with extreme pros and cons. Few can dispute that it is now the nucleus of both software creation and software exploitation. Whether we arrive at a future in which AI automatically defends the infrastructure we rely on, or one in which autonomous malware tears through supply chains at machine speed, hinges on how quickly defenders modernize today. Recent data points already show how both paths are developing.

Proofs of concept now compile themselves

Software contains vulnerabilities that need to be identified and then patched; that is the nature of software’s iterative development process. Patching typically happens in two steps: first a proof of concept (PoC) is developed, and then a comprehensive patch is designed from it. The PoC is a minimal, often non-malicious demonstration that proves a vulnerability exists, establishes its scope, and shows how it can be triggered.

It’s like a blueprint or a test case: enough to prove the flaw is real and provide insight into how it can be fixed, but not necessarily harmful on its own. Previously, developing a PoC required days or weeks of manual reverse engineering. Today, large language models can ingest a CVE description, pull in the salient code fragments, and suggest a viable PoC and patch in minutes. In aggregate, this means code can be patched faster, a net positive for defenders.
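To make the workflow concrete, here is a minimal sketch of that CVE-to-draft pipeline in Python. The `llm_complete` function is a hypothetical stand-in for whatever LLM API you use, and the `Advisory` structure is illustrative; nothing here targets a real vulnerability.

```python
# Minimal sketch of the CVE-to-PoC/patch pipeline described above.
# `llm_complete` is a hypothetical stand-in for any LLM provider's API.

from dataclasses import dataclass

@dataclass
class Advisory:
    cve_id: str          # e.g. a placeholder identifier, not a real CVE
    description: str     # text from the public advisory
    code_fragment: str   # the suspect function, pulled from the repo

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    return "[model output would appear here]"

def draft_poc_and_patch(adv: Advisory) -> dict:
    # One prompt asks for a benign trigger (the PoC), the other for a fix.
    poc = llm_complete(
        f"Given {adv.cve_id}: {adv.description}\n"
        f"and this code:\n{adv.code_fragment}\n"
        "Describe a minimal, non-destructive input that triggers the flaw."
    )
    patch = llm_complete(
        f"Propose a unified diff that fixes the flaw in:\n{adv.code_fragment}"
    )
    # Human review stays in the loop: these are drafts, not trusted output.
    return {"poc_draft": poc, "patch_draft": patch}

drafts = draft_poc_and_patch(
    Advisory("CVE-0000-0000", "illustrative overflow", "int buf[8]; ...")
)
print(drafts["patch_draft"])
```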

Weaponized exploits now compile themselves

However, the same technology can take a PoC in the other direction: not toward a patch, but toward an exploit. A weaponized exploit takes the PoC and turns it into a fully functional attack tool, engineered for reliability, stealth, and impact, often with payloads that enable remote code execution, privilege escalation, lateral movement, and other attacker capabilities. These are the versions threat actors use in real-world attacks, in a process called weaponizing. The ability to turn PoCs into exploits rather than patches is a net positive for attackers.

In other words, AI is accelerating both the creation of PoCs and patches for defenders and the creation of weaponized exploits for attackers. A clear example of the complexity: a patching PoC developed by AI for good can then be weaponized and used for bad. Researchers recently showed an AI assistant building a working exploit for an Erlang SSH bug (CVE-2025-32433) the same afternoon the patch shipped.

That speed is not a single anecdote. Multiple threat-intelligence reports note a steep rise in attacks that weaponize public vulnerabilities almost as soon as they are disclosed. One summary found that the use of exploited vulnerabilities as an initial breach vector nearly tripled year over year. Meanwhile, Deloitte logged a 400% surge in IoT-focused malware, underscoring how automation scales beyond traditional IT into every connected asset. Well-known reports such as Verizon’s DBIR and Mandiant’s M-Trends echo the same trend. In short, the once-comfortable patch window, the few days between a vulnerability being disclosed and it needing to be patched, is closing.

Vibe Code volume is exploding faster than humans can secure it

Generative coding tools, such as GitHub Copilot, Amazon Q, and Replit Ghostwriter, have changed how code is developed. So-called Vibe Coding is a new paradigm in software development in which you describe what you want in natural language and AI generates the code for you. The volume of AI-generated code is staggering: GitHub says Copilot already writes 46 percent of the code in the average repository and more than 60 percent in Java projects. Venture capital is following; startups building AI coding assistants attracted nearly $1 billion in fresh funding in the past 12 months.

That productivity story, the pro of being able to develop code faster, has an infosec subplot. More code ships every day, but the global pool of experienced security professionals isn’t growing at the same rate, putting security teams under pressure while the backlog of code awaiting security approval grows. Each AI-generated module expands the attack surface: even if individual snippets are no less secure than handwritten code, and even if the ratio of vulnerabilities to lines of code stays the same, the sheer volume guarantees more latent vulnerabilities. Without commensurate automation on the defensive side, vulnerability counts keep rising and organizations keep widening the footprint that adversaries already exploit.
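The arithmetic behind that claim is simple enough to sketch. The following back-of-envelope model assumes a constant vulnerability density and a fixed triage capacity; every number in it is an illustrative assumption, not a measured figure.

```python
# Back-of-envelope math: even at a constant defect density, growing
# code volume plus fixed triage capacity yields a growing backlog.
# All numbers are illustrative assumptions.

DEFECTS_PER_KLOC = 0.5      # assumed vulnerability density (held constant)
TRIAGE_PER_WEEK = 40        # findings a security team can clear weekly

backlog = 0.0
kloc_shipped = 100.0        # starting weekly code output
for week in range(1, 9):
    kloc_shipped *= 1.10    # assume AI assistants grow output 10%/week
    new_findings = kloc_shipped * DEFECTS_PER_KLOC
    backlog = max(0.0, backlog + new_findings - TRIAGE_PER_WEEK)
    print(f"week {week}: {new_findings:5.1f} new findings, "
          f"backlog {backlog:6.1f}")
# The vulnerability ratio never changes; the fixed-capacity team
# still falls further behind every week.
```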

The fragile integrated software supply chain  

If velocity alone were not enough, the integrated software supply chain raises the stakes. The 2024 XZ Utils backdoor, which took years of social-engineering groundwork, was a near miss that could have granted remote root access to countless Linux servers. Five years after the SolarWinds breach, experts still rate supply-chain attacks among the top existential threats, and the latest ReversingLabs/ISACA report warns that such campaigns are “accelerating and rapidly evolving” in 2025.

Why is the chain so fragile? Modern applications are built from thousands of subcomponents, typically written by different parties, including open-source dependencies, container images, and infrastructure-as-code templates. AI accelerates that composability: a prompt can add a dozen new libraries to a project in seconds, each one a potential Trojan horse. Once again, speed and scale are both the enemy and the opportunity.
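You can see the composability problem firsthand by walking the transitive dependency tree of a single installed package. This sketch uses Python’s standard `importlib.metadata` module; the package name at the bottom is just an example.

```python
# Walk the transitive dependency tree of one installed Python package.
# Every node printed is code someone else wrote that now ships inside
# your application.

import re
from importlib.metadata import requires, PackageNotFoundError

def dep_tree(package: str, seen=None, depth=0):
    seen = set() if seen is None else seen
    if package.lower() in seen:
        return
    seen.add(package.lower())
    print("  " * depth + package)
    for req in requires(package) or []:
        if ";" in req and "extra" in req.split(";", 1)[1]:
            continue  # skip optional extras
        name = re.match(r"[A-Za-z0-9._-]+", req).group(0)
        try:
            dep_tree(name, seen, depth + 1)
        except PackageNotFoundError:
            pass  # declared but not installed in this environment

try:
    dep_tree("requests")  # any installed package name works here
except PackageNotFoundError:
    print("pick a package that is installed in this environment")
```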

What a dystopian trajectory looks like  

Software is now practically everywhere: in our homes, our cars, and our transportation, energy, healthcare, and financial ecosystems. Combine AI-authored code, AI-driven exploit generation, and a fragile supply chain, and you get a recipe for semi-autonomous campaigns that strike faster than any human can triage them. Picture ransomware crews chaining a fresh CVE to compromise a build system, swapping in a tainted dependency, and distributing malicious updates to tens of thousands of downstream customers, all before breakfast. Extensive critical-infrastructure outages, poisoned data pipelines, and physical safety incidents shift from hypothetical to routine.

The uncomfortable truth is that pieces of this future are already visible. The Erlang PoC above was created the same day as disclosure. Proof-of-concept malware targeting AI-enabled industrial devices piggybacks on the IoT surge Deloitte flagged. And when an attacker quietly shepherded malicious code through the open-source review process and into XZ Utils, the community caught it only thanks to an engineer’s curiosity about an odd benchmark result, hardly a repeatable or assured safeguard.

The utopian counter-narrative: Flow Defending

Utopia is not guaranteed, but it is technically achievable. The same algorithms that slash exploit development time can also be incorporated into the flow of software development, shrinking the number of unpatched vulnerabilities and defenders’ mean time to remediate, but only if organizations embrace true automation and embed these capabilities into their SDLC. Flow-defending technologies include:

  • Autonomous remediation agents that correlate scanner output with configuration-management data and file pull requests or push patches directly to CI/CD pipelines are moving from prototype to production (a minimal sketch of the pattern follows this list). While a modest degree of human intervention is still needed, early deployments report backlog elimination “in the blink of an eye,” freeing analysts for higher-order tasks.
  • Intelligent software bills of materials (SBOMs) give security teams continuous line of sight into risks, transitive dependencies, and licensing obligations. The OpenSSF’s 2025 guidance stresses choosing generators that integrate into developer workflows so the SBOM information is actionable and transparency is the default, not an after-the-fact audit.
  • Hardened build ecosystems verify every package and sign every artifact, offering a tamper-evident chain of custody from source to production. When paired with inline AI code reviewers that flag insecure patterns before merge, the loop from creation to validation becomes as swift as the threat cycle.
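As promised above, here is a minimal sketch of the autonomous-remediation pattern from the first bullet. The scanner, deployment-lookup, and pull-request functions are hypothetical stubs standing in for real tools (a scanner such as Grype or Trivy, your CMDB, your forge’s API), and the finding they return is fabricated for illustration.

```python
# Sketch of an autonomous remediation agent: correlate scanner output
# with what is actually deployed, then file a fix for human review.

def run_scanner(image: str) -> list[dict]:
    """Stub: a real scanner (Grype, Trivy, ...) would run here."""
    return [{"pkg": "openssl", "fixed_in": "3.0.9"}]  # fabricated finding

def lookup_deployments(pkg: str) -> list[str]:
    """Stub: ask your CMDB which services actually ship this package."""
    return ["example-org/payments-service"]  # hypothetical repo

def open_pull_request(repo: str, title: str, body: str) -> None:
    """Stub: call your forge's API (GitHub, GitLab, ...) here."""
    print(f"PR -> {repo}: {title}")

def remediate(image: str) -> None:
    for finding in run_scanner(image):
        affected = lookup_deployments(finding["pkg"])
        if not affected:
            continue  # vulnerable package is not actually deployed
        for repo in affected:
            open_pull_request(
                repo,
                title=f"chore(security): bump {finding['pkg']} "
                      f"to {finding['fixed_in']}",
                body="Automated remediation; CI must pass before merge.",
            )

remediate("registry.example.com/app:latest")  # placeholder image name
```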

Together, these moves flip the narrative: speed and scale work for defenders, not against them. Every new component is catalogued automatically; every critical patch is synthesized and tested by an agent long before an attacker finishes a phishing email. 

Governance—and the human element  

Technology alone will fail without policy guardrails and the incentives to enforce them. Boards should consider balancing their significant investment in automated code generation with a dedicated percentage for automated defensive security tooling. Regulators can accelerate change by mapping liability to the absence of basic controls such as SBOM publication or exploit-ready Mean Time to Patch metrics. And enterprise procurement teams must factor “secure-by-design proof points” into vendor selection: signed artifacts, reproducible builds, and machine-readable assurance reports.
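As one concrete reading of such a metric, an exploit-ready mean time to patch can be computed by restricting MTTP to the findings that have a public exploit. The dates below are fabricated placeholders for illustration.

```python
# Exploit-ready mean time to patch (MTTP): average patch latency
# computed only over findings with a known public exploit.

from datetime import date
from statistics import mean

findings = [  # (disclosed, patched_in_prod, exploit_public?)
    (date(2025, 3, 1), date(2025, 3, 4), True),
    (date(2025, 3, 2), date(2025, 4, 1), True),
    (date(2025, 3, 5), date(2025, 3, 20), False),
]

exploit_ready = [
    (patched - disclosed).days
    for disclosed, patched, exploited in findings
    if exploited
]
print(f"exploit-ready MTTP: {mean(exploit_ready):.1f} days")  # -> 16.5
```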

Deterrence: an important missing component

The deterrence debate is complex. At a simplistic level, on the attacking side, the fundamental reason a vast industry of bad actors exists is that it pays; follow the money and it’s there. The risk-reward equation is asymmetrical: systems are probed thousands of times a day, an attack that fails carries almost no consequences, and an attack that succeeds yields financial benefit. The cost of being a malicious actor needs to be raised, in a right-sized and enforceable way.
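That asymmetry is easy to express as expected-value arithmetic. Every number in this sketch is an illustrative assumption, chosen only to show how credible deterrence flips the sign of the attacker’s equation.

```python
# The attacker's risk-reward equation as expected value per attempt.
# All probabilities and dollar figures are illustrative assumptions.

def attacker_ev(p_success: float, payoff: float,
                p_penalty: float, penalty: float) -> float:
    return p_success * payoff - p_penalty * penalty

# Today: success is rare but failure is nearly free.
print(attacker_ev(0.01, 500_000, 0.001, 10_000))   # 4990.0 per attempt

# With credible deterrence, the same campaign goes underwater.
print(attacker_ev(0.01, 500_000, 0.05, 200_000))   # -5000.0 per attempt
```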

On the defensive side, holding companies more accountable when vulnerabilities in their AI-generated code are exploited and cause material harm might force them to prioritize more investment in AI defenses. Currently, most of the momentum is behind AI code generation, while AI code defense lags.

In general, and perhaps something we can all agree on, it is more realistic and practical to impose meaningful penalties on bad actors than on their victims.  

A narrow window for choice  

History suggests breakthrough technologies amplify existing incentives. AI will not magically produce either a cybersecurity utopia or a dystopia; it will intensify whatever environment we create. We already see AI tipping the scales toward offense; the question is whether defenders will match that curve before the next catastrophic incident slips through.

The defensive playbook is being developed today: codify the supply chain, automate remediation, and implement AI copilots that look for security debt as aggressively as they suggest new code. Organizations that treat these capabilities as cost centers may soon find themselves funding recovery crises instead.

The longer we wait, the further the “response window” collapses toward a permanent gap. But if we act with the same ingenuity attackers are showing, AI’s remarkable velocity can just as readily carry us toward a safer, more resilient digital ecosystem.

The future is not prewritten; it is being compiled right now. Let’s choose to secure our software supply chain with every secure build we ship. 
