
AI-generated code: Innovation, built-in security risk, or both?

By Crystal Morin, Sysdig Senior Cybersecurity Strategist

Collins Dictionary named “vibe coding” its Word of the Year, defined as “the use of artificial intelligence prompted by natural language to assist with the writing of computer code.” The term and its usefulness have already captivated many. Companies like Replit and Lovable claim tens of millions of users are taking advantage of their AI systems to build apps without having any traditional programming expertise.

The appeal is obvious. Vibe coding lowers the barrier to entry and empowers non-developers to turn ideas into software, accelerating experimentation and creativity. But the excitement around AI-generated code arrived quickly, and so did questions about its long-term security, reliability, and sustainability. So, is vibe coding a shortcut to innovation or a fast track to more threats?

The not-so-hidden risks of vibe coding

One of the most important things to remember while implementing AI-generated code is that the resulting application is not inherently secure. While the code may work, and some models are trained on large volumes of existing software code and security best practices, there is no guarantee that your code will be safe. Code that works can still contain serious flaws.

For example, an AI-generated tool may struggle when it comes time to integrate with pre-existing applications or infrastructure, such as databases. This can result in insecure defaults, public exposure, excessive permissions, or skipped authentication checks during the debugging process. The app might still work, but it opens your environment to undue risk and potential attacks.
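To make those failure modes concrete, here is a minimal sketch of the kinds of automated checks a reviewer might run over an app's settings before deployment. The configuration keys (`debug`, `bind_host`, and so on) are hypothetical illustrations, not a real schema from any framework.

```python
# Minimal sketch: flag common insecure defaults in an app configuration.
# The config keys below are hypothetical examples, not a real schema.

INSECURE_FINDINGS = {
    "debug": lambda v: v is True,            # debug mode left on
    "bind_host": lambda v: v == "0.0.0.0",   # listener exposed publicly
    "auth_required": lambda v: v is False,   # authentication skipped
    "db_role": lambda v: v == "admin",       # excessive database permissions
}

def audit(config: dict) -> list[str]:
    """Return the names of settings that match a known insecure default."""
    return [key for key, is_insecure in INSECURE_FINDINGS.items()
            if key in config and is_insecure(config[key])]

# An AI-generated "it works" configuration often looks like this:
generated = {"debug": True, "bind_host": "0.0.0.0",
             "auth_required": False, "db_role": "admin"}
print(audit(generated))  # → ['debug', 'bind_host', 'auth_required', 'db_role']
```

A check like this catches the obvious cases; it does not replace a human review of how the code actually handles data and identity.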

Experienced developers may be able to identify these risks offhand in their own projects, but vibe coding is also creating a large backlog of AI-generated code for practitioners to review, test, and maintain on top of their own tasks. Without guardrails, the speed vibe coding grants often comes at the expense of quality.

Human oversight isn't optional

To use AI-generated code effectively, it is important to be intentional about why you are using it. Are you using it for a quick prototype mockup to check a theory? Then the risks are manageable, and you can budget for an additional build-and-harden phase in which that prototype is rebuilt for production use. If, instead, you are looking at vibe coding to build a production-ready system, you must think about security from the start.

Striking the right balance between vibe-coding efficiency and security requires structure. This can be done by systematically following frameworks and processes that reduce the risks spawned by AI-generated code. One example is Microsoft's STRIDE threat model; applied early, it helps teams anticipate known threat categories in AI-generated code. Alongside this model, checklists like the OWASP Top 10 for LLM Applications will help teams avoid common weaknesses. These frameworks aren't meant to slow development. They are about ensuring speed doesn't undermine security and trust.
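For teams new to STRIDE, the model is simply six threat categories to walk through for each component. The sketch below turns those categories into a per-component review checklist; the phrasing of each question is illustrative, not Microsoft's official wording.

```python
# The six STRIDE threat categories, each paired with a review question
# a team might ask of AI-generated code. The questions are illustrative.
STRIDE = {
    "Spoofing": "Can a caller pretend to be another user or service?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Are actions logged so they cannot be denied later?",
    "Information disclosure": "Could secrets or user data leak?",
    "Denial of service": "Can the service be overwhelmed or crashed?",
    "Elevation of privilege": "Can a user gain rights they should not have?",
}

def review_checklist(component: str) -> list[str]:
    """Expand the STRIDE categories into a checklist for one component."""
    return [f"[{component}] {category}: {question}"
            for category, question in STRIDE.items()]

for item in review_checklist("payments-api"):
    print(item)
```

Running the checklist per component, rather than once for the whole app, keeps the questions concrete enough to answer.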

A hybrid approach to safer AI use

To vibe code more effectively, organisations should consider implementing a hybrid approach to usage and security. Non-developers can vibe code to get services built quickly, but experienced developers should review the code, test the app, and approve the output before it reaches production. This human oversight ensures that AI-generated code aligns with the company's standards, requirements, and operational realities.

AI can also play a role in a hybrid risk reduction model. Ask models to review their own output by prompting them to act as a security engineer or senior developer. This process should surface obvious issues earlier in the review process and can be done before the code is passed off for human review.
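One way to make that self-review step repeatable is to template the prompt rather than writing it fresh each time. The sketch below shows one possible prompt wrapper; the wording, the function names, and the commented-out `send_to_model` call are all placeholders to be swapped for whatever model client a team actually uses.

```python
# Hedged sketch: wrap code in a self-review prompt before human review.
# The prompt wording and send_to_model() are placeholders, not a real API.

REVIEW_PROMPT = (
    "You are a senior security engineer reviewing code before release.\n"
    "Flag insecure defaults, missing authentication, excessive permissions,\n"
    "and injection risks. Reply with a numbered list of findings.\n\n"
    "Code to review:\n{code}"
)

def build_review_prompt(code: str) -> str:
    """Embed a code snippet in the security self-review prompt."""
    return REVIEW_PROMPT.format(code=code)

snippet = 'app.run(host="0.0.0.0", debug=True)'
prompt = build_review_prompt(snippet)
# send_to_model(prompt)  # placeholder for your team's model client
print(prompt)
```

Treat the model's findings as a first pass that narrows what the human reviewer looks at, not as sign-off.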

Planning ahead is a vibe

Security doesn't end once the code is reviewed. It is important to take accountability, maintain visibility, track key details around changes, and assign ownership once an app exists. That sense of responsibility makes it easier to catch problems before they reach users.

While AI makes software development more accessible, the responsibility to educate the community also grows, especially in the open source ecosystem. Veteran developers hold a key role in advocating for safer defaults, better scanning, and continuous monitoring of code contributions. New developers, whether human or AI-assisted, need guidance on secure coding practices, dependency management, and the realities of software maintenance. For those without direct access to senior expertise, using peers and AI tools to validate your work should be standard practice.

Innovation with intent

As technology continues to evolve, users will increasingly leverage AI to add more value to software development. AI-generated code is neither a silver bullet nor an inherent liability; it is a powerful tool whose outcome reflects the intent and discipline of the user.

By combining proven security frameworks, human oversight, and thoughtful planning, individuals and organisations can harness the benefits without compromising trust. Innovation, speed, and security do not need to be at odds. Meeting all three simply requires intention, not just automation.
