
In the age of AI, the way we write code is changing, and fast. Developers are turning to AI to help them generate code on the fly, prototype faster, and experiment freely. The hyperscalers are racing to keep pace, rushing AI coding tools to market through previews such as Amazon’s Kiro. This cultural shift has given rise to what’s being dubbed “vibe coding”: an approach that prizes momentum, iteration, and experimentation over the traditional rigour of design-first engineering.
At its best, vibe coding lowers barriers to innovation. With a few prompts, developers can produce functional code in seconds. The productivity gains are undeniable, especially for teams under pressure to ship fast. But as with any disruption, there’s a catch. Beneath the surface of this new movement lies a significant and growing security challenge.
Fast Code, Fragile Foundations
AI-assisted development tools have transformed how quickly software gets built, but in doing so, they’ve introduced new layers of risk. In many cases, AI models reproduce patterns from publicly available code, including code that contains known vulnerabilities. Those vulnerabilities are then compounded when they’re incorporated, often unknowingly, into new applications.
According to recent reports, software vulnerabilities are rising at an alarming rate, with some estimates pointing to a 30% year-over-year increase in known CVEs. This isn’t simply a byproduct of AI hype. It reflects a deeper issue: modern software development is moving faster than security practices can keep pace.
Outdated functions, flawed open source packages, and poorly integrated AI-generated code are now making their way into production environments. In many organisations, security teams are playing catch-up, trying to retrofit protections around systems that were built more for speed than resilience.
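To make the risk concrete, consider the kind of well-worn insecure pattern that circulates widely in public code and is therefore easy for a model to reproduce verbatim. The sketch below is illustrative only; the table, column, and function names are hypothetical, and the point is the contrast between string interpolation and a parameterised query.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is interpolated straight into the SQL string,
    # the classic injection pattern (CWE-89) found all over public code.
    cursor = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cursor.fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterised query lets the driver handle untrusted input.
    cursor = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```

A model trained on enough examples of the first function will happily produce more of them; catching that before it ships is exactly the job of the practices discussed below.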
The Context Conundrum
One of the major challenges of securing AI-generated code lies in understanding context and authenticity. Did a human write this code or a machine? Was it reviewed, load-tested, or simply dropped into production after a successful prototype run? These are not academic questions; they directly impact how we assess risk.
Traditional AppSec tools are struggling to make sense of these new realities. They weren’t built to differentiate between machine-generated and human-written code, or to evaluate the subtle risks that come from combining the two. As a result, security teams are often overwhelmed, forced to prioritise some threats while hoping others don’t escalate into real-world breaches.
Think of it like holding back a flood. You can’t block every wave, so you build barriers around the ones that will cause the most damage. But in an ocean of AI-generated code, identifying the riskiest waves is no small task.
DevSecOps: Still Our Best Defence
In this era of high-speed development, DevSecOps practices remain one of the most effective ways to balance innovation with responsibility. By integrating security checks early into the development pipeline, teams can catch critical issues before code ever sees production.
Automation plays a key role. Autofix tools can quickly resolve known issues, freeing up human experts to focus on complex or novel threats. But AI isn’t a silver bullet. It still takes skilled engineers to interpret results, fine-tune fixes, and make judgment calls. There’s no shortcut here. As the saying goes, “there’s no compression algorithm for experience”, and experience is so often derived from errors.
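In practice, that early integration often takes the shape of a gate in the CI pipeline: run a scanner, fail the build on serious findings. The sketch below is a minimal illustration, not any particular product’s interface; it assumes a hypothetical scanner CLI named scan-tool that emits JSON findings with severity and id fields, so substitute whatever SCA or SAST tooling your pipeline actually uses.

```python
import json
import subprocess
import sys

# Severities that should block a merge; tune to your own risk appetite.
BLOCKING_SEVERITIES = {"critical", "high"}

def run_scan() -> list[dict]:
    # Hypothetical scanner invocation; replace with your real tool's CLI.
    result = subprocess.run(
        ["scan-tool", "--output-format", "json", "."],
        capture_output=True,
        text=True,
        check=False,
    )
    return json.loads(result.stdout or "[]")

def main() -> int:
    findings = run_scan()
    blockers = [
        f for f in findings
        if f.get("severity", "").lower() in BLOCKING_SEVERITIES
    ]
    for finding in blockers:
        print(f"BLOCKING: {finding.get('id', 'unknown')} ({finding.get('severity')})")
    # A non-zero exit code fails the CI job, stopping the merge.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the script exits non-zero when blocking findings exist, the CI job fails and the issue is addressed before merge rather than after deployment.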
Regulation Can Help, but It’s Not the Whole Answer
Governments and regulatory bodies are beginning to recognise the risks that come with rapid AI adoption. The EU’s Cyber Resilience Act and the AI Act are just two examples of emerging frameworks designed to promote safer development practices. These regulations will help raise the bar, but they won’t solve the problem alone. Ultimately, it will fall to the industry to adopt responsible approaches to AI-augmented development. This includes investing in secure-by-design principles, continuous education for developers, and transparency around the origin and intent of machine-generated code.
When Vibes Become Technical Debt
The danger with vibe coding isn’t that it encourages experimentation; it’s that the experiments are increasingly ending up in production. What starts as a quick prototype can become the backbone of a critical service. Without validation, testing, and hardening, this creates brittle systems, regressions, and security gaps. When code is written purely for momentum, when the goal is to “just ship it”, we risk accumulating technical debt that’s costly to unwind later. Security scans start flagging issues. Compliance audits raise concerns. Systems buckle under pressure. And what once felt like innovation starts to feel like a mess.
The lesson here is simple: it’s easy to write code that runs. It’s much harder to build code that lasts.
A Call for Intentionality
Vibe coding, like many grassroots movements in tech, is not inherently bad. It reflects a shift in how people want to work: faster, freer, and with fewer constraints. But if we want to turn short-term wins into long-term success, we need to ground that freedom in a framework of secure engineering. That means equipping teams with the right tools to test and validate code before it hits production. It means enabling developers to experiment, but not at the expense of maintainability, compliance, or user safety. And it means building a culture where velocity is supported by vision.
Building the Future with Both Speed and Safety
Consider how Levi Strauss, who arrived in San Francisco during the California Gold Rush, went on to revolutionise workwear with durable riveted jeans, giving workers something built to last. His innovation met the moment. We’re facing a similar inflection point in software development. AI offers a transformative opportunity, but it must be paired with durability, security, and foresight.
The future of software may well be shaped by vibe coding. But if we want that future to be sustainable, secure, and successful, we must look beyond the vibes and focus on building with intention.
To learn more about Black Duck, please click here.