
AI is shaping the future of software development – but is it secure?

By Eran Kinsbruner, VP of Product Marketing, Checkmarx 

AI coding assistants have become a cornerstone of modern software development. Most developers now use these tools to some degree, and in many enterprises the majority of new development projects involve AI assistance as teams race to meet rising demands and faster release cycles. 

The boosts to speed and productivity that tools such as Claude Code and GitHub Copilot offer are undeniable benefits in an industry always under pressure to do more, and do it faster. Yet without the right safeguards, AI can cause more problems than it solves, producing low-quality code and introducing critical security vulnerabilities.  

So, what are the biggest challenges, and how can organisations reap the benefits of AI securely?  

The growing risk of shadow AI 

Our research recently found that over half of organisations now use AI tools in their coding processes. In many cases, AI has moved on from a useful support tool to a critical element of the software development lifecycle (SDLC). We found that over a third (34%) of teams now generate more than 60% of their code with AI. 

Yet, as more developers lean into AI to accelerate their SDLCs and hit deadlines, oversight has often fallen behind. Just 18% of respondents said they had AI governance policies in place. One result is a lack of checks on the provenance of AI-generated code, making it more likely that vulnerabilities will be introduced. Another issue is “shadow AI,” where unapproved or unmonitored tools start shaping critical code without effective human oversight.  

This added layer of risk comes at a time when code security is already a widespread issue. An overwhelming 98% of respondents in our research suffered some form of breach in the last 12 months due to vulnerable code.  

Relying on unchecked AI-generated code can result in catastrophic failures, opening a new attack surface that cyber criminals are quick to exploit, from supply chain compromises to misconfigured cloud environments. Without guardrails, the tools intended to accelerate progress can instead accelerate insecurity. 

Added to this, AI platforms themselves are also at risk from malicious actors through tactics such as prompt injections, LLM poisoning, and model hijacking. Attackers may seek to exploit the AI tool’s access to data and systems, or interfere with its output. Organisations will be exposed to even greater risk if these threats aren’t accounted for.   

Why is governance lagging? 

If the risks of shadow AI are so significant, why are organisations behind the curve when it comes to establishing governance? The answer lies in the collision of business pressure, cultural habits, and tool fatigue. 

First, speed often trumps caution. Security frequently falls by the wayside as organisations pursue shorter SDLCs, and AI feeds into this mindset. This was strongly evident in our research, with an alarming 81% of respondents knowingly releasing vulnerable code over the last year.  

Delving into why, we found that meeting deadlines was the most common reason. Many developers also deployed risky code with the intention of fixing it later through patching, while others simply hoped the flaws wouldn’t be discovered. With cybercriminal groups actively searching for vulnerable code to exploit, this mindset poses a significant risk to end users.  

Awareness gaps are another major issue, and many developers may assume that AI-generated code is secure by default. We found a lack of awareness of both the risks that AI-generated code can introduce and the guardrails needed to use it safely.  

Finally, security fatigue plays a role. With limited time and mounting backlogs, developers may disengage from security processes that feel like barriers to productivity. AI coding governance needs to be implemented in a way that integrates into SDLCs without creating blocks or additional steps, with streamlined workflows and clear prioritisation. 

Three priority actions to keep AI code secure 

These pressures expose a deeper issue: development is moving too quickly for security practices and traditional AppSec processes to keep up. There is an even greater need – and urgency – for development teams to rethink how security is embedded in the development lifecycle in the AI era.   

There are three main areas that are critical for making full use of AI-supported coding without creating unnecessary risk.  

  1. Prepare for agentic AI in AppSec
    Traditional review cycles cannot keep up with the volume of AI-generated code, so organisations need to ensure their quality and security control processes have the same agility. While AI can introduce more risk into coding, it is also part of the solution for applying strong AppSec principles at the speed and scale required. Agentic AI tools, also known as AI Coding Security Assistants (ACSAs), are one of the most important capabilities here, facilitating real-time code analysis, automated policy enforcement, and proactive risk mitigation. This efficiency will also help developers make the crucial switch from detection to prevention, shutting down issues long before they can cause problems in production (a simple illustration of this kind of pre-commit gate follows this list).  
  2. Fuel cultural change with developer empowerment
    Creating a culture of security is a complex issue, but one of the most important factors is giving developers a sense of control. Providing dev teams with contextual feedback delivered directly in their IDEs will help deliver this without slowing them down.   

Establishing clear prioritisation of critical vulnerabilities and providing just-in-time training will help developers identify and remediate issues quickly. Setting up ‘security champions’ who also have expertise with AI can help to further bridge the gap and shape a secure AI culture.  

The roles of development and security are changing: siloed approaches are no longer effective and practices must adapt to reflect this. When secure coding is fully integrated into the SDLC, handling AI-generated code will be no different. This not only facilitates closer collaboration between development and security teams but also improves the focus on comprehensive product security. 

  3. Govern AI use in development
    You cannot manage what you cannot see. To eliminate shadow AI, leaders must establish visibility into which AI coding assistants and IDEs are approved, keep the list of sanctioned tools up to date, and enforce audit trails for AI-generated code. 

Our research has found a wide split between different AI security approaches currently being deployed, thanks in part to how new the practice is. AI-specific code reviews, code protection controls and audit trail requirements are all popular choices, and developers need to find the mix that works for them. 

Governance should also differentiate between legacy monolithic code, modern microservices built with open source and proprietary components, and new AI-generated code. Each carries unique risks and requires tailored policies. Success should be tracked through metrics such as mean time to remediate vulnerabilities, reduction of the backlog, and the share of vulnerabilities caught pre-commit versus in production. Finally, organisations must account for any risks inherent in the AI platform itself as well as in its output.  
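
To make the switch from detection to prevention described in the first action more concrete, here is a minimal sketch of the kind of pre-commit gate an automated or agentic tool could enforce. It assumes a hypothetical `scan --json` command that reports findings with severity levels; that command and its output format are placeholders for whichever approved scanning tool a team actually uses, not the interface of a specific product.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: block commits that introduce critical findings.

The `scan` command and its JSON output shape are placeholders for whichever
approved code-scanning tool a team actually uses; they are not a real CLI.
"""
import json
import subprocess
import sys


def main() -> int:
    # Collect the files staged for this commit.
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    if not staged:
        return 0

    # Run the (hypothetical) scanner over the staged files and parse its findings.
    result = subprocess.run(
        ["scan", "--json", *staged],  # placeholder scanner CLI, not a real tool
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout or "[]")

    # Fail the hook if anything critical was reported, so the issue is fixed
    # before the code ever reaches the repository.
    critical = [f for f in findings if f.get("severity") == "critical"]
    if critical:
        print(f"Blocking commit: {len(critical)} critical finding(s) to fix first.")
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice the same check would usually run again in CI, so that nothing bypasses the local hook.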
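
As a rough illustration of how the metrics mentioned above might be tracked, the sketch below computes mean time to remediate, the open backlog, and the share of vulnerabilities caught pre-commit from a simple list of finding records. The record fields and stage labels are assumptions made for the example, not the schema of any particular AppSec platform.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Finding:
    opened: datetime                # when the vulnerability was detected
    remediated: Optional[datetime]  # None while it sits in the backlog
    stage: str                      # assumed labels: "pre-commit" or "production"


def mean_time_to_remediate(findings: list[Finding]) -> float:
    """Average days from detection to fix, over remediated findings only."""
    closed = [f for f in findings if f.remediated is not None]
    if not closed:
        return 0.0
    return sum((f.remediated - f.opened).days for f in closed) / len(closed)


def pre_commit_catch_share(findings: list[Finding]) -> float:
    """Fraction of all findings caught pre-commit rather than in production."""
    if not findings:
        return 0.0
    return sum(f.stage == "pre-commit" for f in findings) / len(findings)


def open_backlog(findings: list[Finding]) -> int:
    """Number of findings still awaiting remediation."""
    return sum(f.remediated is None for f in findings)
```

Tracked over time rather than as one-off snapshots, these numbers show whether governance changes are actually reducing risk.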

Turning risky norms into secure advantage 

AI-generated code is here to stay, but the rising risks of insecure, ungoverned code don’t have to be an inevitable consequence.  

The organisations that will thrive are those that treat security as a shared responsibility between development and security teams, and embed it directly into development workflows. By preparing for the AI age with the right tools and processes, business leaders can continue to reap the advantages of AI-supported coding without introducing unnecessary risks.   
