
AI-generated code: Innovation, built-in security risk, or both?

By Crystal Morin, Sysdig Senior Cybersecurity Strategist

Collins Dictionary named “vibe coding” its Word of the Year, defined as “the use of artificial intelligence prompted by natural language to assist with the writing of computer code.” The term and its usefulness have already captivated many. Companies like Replit and Lovable claim tens of millions of users are taking advantage of their AI systems to build apps without any traditional programming expertise.

The appeal is obvious. Vibe coding lowers the barrier to entry and empowers non-developers to turn ideas into software, accelerating experimentation and creativity. However, the excitement around AI-generated code arrived hand in hand with questions about its long-term security, reliability, and sustainability. So, is vibe coding a shortcut to innovation or a fast track to more threats?

The not-so-hidden risks of vibe coding  

One of the most important things to remember when adopting AI-generated code is that the resulting application is not inherently secure. Models may be trained on large volumes of existing software and security best practices, but there is no guarantee the code they produce is safe. Code that works can still contain serious flaws.

For example, an AI-generated tool may struggle when it comes time to integrate with pre-existing applications or infrastructure, like databases. This can result in insecure defaults, public exposure, excessive permissions, or skipped authentication checks during the debugging process. That might allow the app to work, but it opens up your environment to undue risk and potential attacks.  
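
To make that concrete, here is a minimal, hypothetical sketch (Flask, a local SQLite file, and a users table are assumptions for illustration, not details from any real project) contrasting the kind of unauthenticated, wide-open endpoint that often survives a “debug until it works” session with a slightly hardened version of the same route.

```python
# Hypothetical sketch: shortcuts that tend to appear when AI-generated code is
# tweaked until it "just works" against an existing database.
import hmac
import os
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)

# Risky pattern: no authentication check -- the endpoint works, but anyone who
# can reach it can read every record.
@app.route("/users")
def list_users():
    rows = sqlite3.connect("app.db").execute("SELECT id, email FROM users").fetchall()
    return jsonify(rows)

# Safer pattern: require a token (read from the environment, not hardcoded in
# source) before returning data, even in a prototype.
API_TOKEN = os.environ.get("API_TOKEN", "")

@app.route("/users-safe")
def list_users_safe():
    supplied = request.headers.get("Authorization", "")
    if not API_TOKEN or not hmac.compare_digest(supplied, f"Bearer {API_TOKEN}"):
        return jsonify({"error": "unauthorized"}), 401
    rows = sqlite3.connect("app.db").execute("SELECT id, email FROM users").fetchall()
    return jsonify(rows)

if __name__ == "__main__":
    # Insecure defaults to watch for: binding to every interface with debug
    # mode on exposes the app (and its debugger) to the whole network.
    app.run(host="127.0.0.1", debug=False)
```

None of this is hard to fix once someone looks for it; the risk is that, in a vibe-coded project, no one looks.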

Experienced developers may spot these risks offhand in their own projects, but vibe coding is also creating large backlogs of AI-generated code for practitioners to review, test, and maintain on top of their own tasks. Without guardrails, the speed vibe coding grants often comes at the expense of quality.

Human oversight isn’t optional  

To use AI-generated code effectively, it is important to be intentional about why you are using it. Are you using it for a quick prototype or mockup to test a theory? Then the risks are manageable, and you can budget for an additional build-and-harden phase in which that prototype is rebuilt for production use. If, instead, you are looking to vibe code a production-ready system, you must think about security from the start.

Striking the right balance between vibe-coding efficiency and security requires structure: frameworks and processes, followed systematically, that reduce the risks AI-generated code can introduce. One example is Microsoft’s STRIDE threat model; applied early, it helps teams reason about spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege in the generated code. Alongside it, checklists such as the OWASP Top 10 for LLM Applications help teams avoid common weaknesses. These frameworks aren’t meant to slow development. They are about ensuring speed doesn’t undermine security and trust.
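
As one concrete illustration, an item on the OWASP list, insecure output handling, amounts to treating model output as untrusted input. The sketch below is a hypothetical pre-review gate, not part of any particular toolchain: a few cheap pattern checks run over generated code before it ever reaches a human reviewer. The patterns and labels are illustrative assumptions, not a complete scanner.

```python
# Hypothetical pre-review gate for AI-generated code, in the spirit of the
# OWASP guidance to treat model output as untrusted. The regexes below are
# illustrative, not exhaustive.
import re

RED_FLAGS = {
    "hardcoded secret": re.compile(r"(password|api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "shell/eval execution": re.compile(r"\b(os\.system|subprocess\.call|eval|exec)\s*\("),
    "binds all interfaces": re.compile(r"0\.0\.0\.0"),
    "TLS verification disabled": re.compile(r"verify\s*=\s*False"),
}

def flag_generated_code(code: str) -> list[str]:
    """Return human-readable findings for a reviewer to triage."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for label, pattern in RED_FLAGS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: possible {label}: {line.strip()}")
    return findings

if __name__ == "__main__":
    snippet = 'db_password = "admin123"\nrequests.get(url, verify=False)\n'
    for finding in flag_generated_code(snippet):
        print(finding)
```

A gate like this never replaces threat modelling or human review; it simply makes sure the most obvious weaknesses are flagged before anyone spends time on deeper analysis.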

A hybrid approach to safer AI use 

To vibe code more effectively, organisations should consider implementing a hybrid approach to usage and security. Non-developers can vibe code to get services built quickly, but experienced developers should review the code, test the app, and approve the output before it reaches production. This human oversight ensures that AI-generated code aligns with the company’s standards, requirements, and operational realities.  

AI can also play a role in a hybrid risk-reduction model. Ask models to review their own output by prompting them to act as a security engineer or senior developer. This surfaces obvious issues early and can be done before the code is handed off for human review.
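
A minimal sketch of what that self-review step might look like, assuming an OpenAI-style chat completions API; the model name, prompt wording, and file name are placeholders, not a prescribed setup.

```python
# Hypothetical self-review step: ask a model to critique code it just produced,
# acting as a security engineer, before the output goes to a human reviewer.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def security_self_review(generated_code: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's security critique of the generated code."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a senior security engineer. Review the following code "
                    "for insecure defaults, missing authentication, excessive "
                    "permissions, and injection risks. List concrete findings."
                ),
            },
            {"role": "user", "content": generated_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("generated_app.py") as f:
        print(security_self_review(f.read()))
```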

Planning ahead is a vibe 

Security doesn’t end at review. Once an app exists, it is important to take accountability, maintain visibility, track changes, and assign ownership. That sense of responsibility makes it easier to catch problems before they reach users.

While AI makes software development more accessible, the responsibility to educate the community also grows, especially in the open source ecosystem. Veteran developers hold a key role in advocating for safer defaults, better scanning, and continuous monitoring of code contributions. New developers, whether human or AI-assisted, need guidance on secure coding practices, dependency management, and the realities of software maintenance. For those without direct access to senior expertise, validating their work with peers and AI tools should be standard practice.

Innovation with intent 

As technology continues to evolve, users will increasingly leverage AI to add more value to software development. AI-generated code is neither a silver bullet nor an inherent liability; it is a powerful tool whose outcomes reflect the intent and discipline of the user.

By combining proven security frameworks, human oversight, and thoughtful planning, individuals and organisations can harness the benefits without compromising trust. Innovation, speed, and security do not need to be at odds. Meeting all three simply requires intention, not just automation. 
