Future of AI

Mind the gap: Aligning AI beliefs with secure realities

By Danny Allan, Chief Technology Officer at Snyk

Artificial intelligence has quickly become the go-to co-pilot for developers, promising faster coding, smarter recommendations and more secure software. At least, that’s the theory. In reality, there’s a stark mismatch between how secure we think AI is making code and the actual number of vulnerabilities it may be quietly introducing.

The latest Snyk State of Open Source Security report shows just how widely AI has been embraced across development workflows: in 2024, 77.9% of developers and security professionals told us they believe AI has improved code security. That’s up slightly from the previous year and reflects an optimistic, perhaps even enthusiastic, view of AI’s role in modern software development.

However, more than half of respondents (56.1%) also admitted they’re concerned about vulnerabilities introduced by AI coding tools. This cognitive dissonance – believing that AI is making things better, while acknowledging it may be making things worse – should be a warning sign.

What we’re seeing is a growing blind trust in AI. Teams love the speed, convenience and perceived quality of AI-generated code, but they’re not always scrutinising it with the same rigour they’d apply to human-written code. And that’s where potential dangers lie. 

The mirage of security

In theory, AI tools should speed up secure development. They can enforce best practices, catch common mistakes and even write security-conscious boilerplate code. In practice, however, AI-generated code is only as good as the data the underlying model was trained on, and it lacks the critical judgement of a human developer. That means it can easily propagate outdated patterns, insecure configurations or flawed logic.
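
To make that concrete, here is a deliberately simple, hypothetical illustration (not taken from the report) of the kind of outdated pattern an assistant trained on older public code might reproduce, alongside the safer alternative a reviewer should insist on:

import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern common in older training data: the query is built by string
    # concatenation, so crafted input can change the statement (SQL injection).
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safer(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver binds the value separately, so user
    # input cannot alter the structure of the statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()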

Snyk’s research continues to find frequent, serious vulnerabilities in AI-generated code. Despite this, belief in its security benefits persists. This contradiction suggests a significant education gap. Organisations are often not adequately trained or informed on how to assess the real risks associated with AI-assisted development, leading to a false sense of security.

This isn’t just a theoretical problem. The data also shows that 52% of teams fail to meet their own service-level agreements (SLAs) for fixing high-severity vulnerabilities. Even as AI promises to make things faster, teams are falling behind on basic security hygiene – a symptom of the wider AppSec exhaustion we’re seeing across the industry.

A plateau in progress

Despite all the talk of AI, automation and DevSecOps, many organisations have hit a wall. Tooling improvements and better developer experiences haven’t translated into faster or more secure code. Teams are overwhelmed, AppSec adoption is stalling, and investment in proactive security is slipping.

From 2023 to 2024, the number of organisations implementing new tools to address supply chain vulnerabilities dropped by over 11%. Training investment fell even further – from 53.2% to just 35.4%. These aren’t just budget decisions; they reflect burnout, with the report suggesting that organisations may feel overwhelmed or fatigued by the continuous pressure of supply chain security demands. AI is being leaned on as a shortcut – a convenient way to shift left without doing the hard work of securing the pipeline.

A silver lining: Sensible scrutiny

The research did offer a hopeful note: 84.1% of respondents said they apply the same scrutiny to open source components suggested by AI tools as they would to those recommended by human developers. That’s exactly the right mindset. We need to treat AI as we would any other member of the development team. Think of it as an eager intern – useful and full of potential, for sure, but also fallible.

This approach shows that many in the industry do recognise that AI suggestions are not inherently safe. But there’s still work to be done in closing the gap between awareness and action. The security maturity of open source supply chains remains low, with many basic practices, from artifact signing to SBOM verification, still underutilised.
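
As a rough illustration of how basic some of these steps are, the sketch below (the file names and paths are hypothetical assumptions, not drawn from the report) checks whether every dependency declared in a Python requirements file actually appears in a CycloneDX-format SBOM:

import json

def load_sbom_components(sbom_path: str) -> set[str]:
    # CycloneDX JSON lists dependencies under a top-level "components" array.
    with open(sbom_path) as f:
        sbom = json.load(f)
    return {c["name"].lower() for c in sbom.get("components", [])}

def missing_from_sbom(requirements_path: str, sbom_path: str) -> list[str]:
    # Flag any declared dependency that does not appear in the SBOM.
    components = load_sbom_components(sbom_path)
    missing = []
    with open(requirements_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Keep only the package name (drop extras and version specifiers).
            name = line.split(";")[0].split("[")[0]
            for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
                name = name.split(sep)[0]
            name = name.strip().lower()
            if name and name not in components:
                missing.append(name)
    return missing

if __name__ == "__main__":
    # Hypothetical paths; in practice these would come from the build pipeline.
    print(missing_from_sbom("requirements.txt", "sbom.cyclonedx.json"))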

Rebalancing the equation

So what can we do to close the gap? The promise of AI in development isn’t going away, and that’s a potential benefit for all of us. But it is time to recalibrate expectations. AI is no silver bullet for software security. It’s a tool, and like any tool, it must be tested, validated and monitored.

To realign belief with reality, organisations need to educate teams on the risks of AI-generated code, not just its benefits. It’s critical to establish clear policies for validating and testing code, regardless of whether the author is human or machine.
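
What such a policy looks like in practice will vary, but as a minimal sketch – with illustrative finding names and an assumed severity threshold, not a prescribed setup – a pre-merge gate might apply the same blocking rule to every change, recording whether code was AI-suggested for reporting purposes only:

from dataclasses import dataclass

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass
class Finding:
    identifier: str
    severity: str        # "low" | "medium" | "high" | "critical"
    ai_suggested: bool   # recorded for reporting; must not relax the policy

def gate(findings: list[Finding], fail_at: str = "high") -> bool:
    # Return True if the change may merge. The same threshold applies to
    # every change; whether code was written by a person or suggested by an
    # assistant is reported but never used to weaken the decision.
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings if SEVERITY_ORDER[f.severity] >= threshold]
    for f in blocking:
        origin = "AI-suggested" if f.ai_suggested else "human-written"
        print(f"BLOCKED: {f.identifier} ({f.severity}, {origin})")
    return not blocking

if __name__ == "__main__":
    # Hypothetical findings, as a scanner might report them.
    sample = [
        Finding("SQL injection in search endpoint", "high", ai_suggested=True),
        Finding("Verbose error message in login flow", "low", ai_suggested=False),
    ]
    print("merge allowed:", gate(sample))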

Organisations also need to invest in foundational security practices, especially in the open source and supply chain ecosystems. Treat AI suggestions like pull requests – useful, but not immune from error. Perhaps most importantly, avoid overreliance on automation. Remember, for all the benefits of AI, critical thinking can’t be outsourced.

The blind trust we’re seeing today is reminiscent of past cycles in software development – anyone who’s been around for a while has seen waves of excitement lead to overconfidence and, ultimately, security incidents. But this time, we can get ahead of it. By acknowledging the limits of AI, and by educating teams on how to properly evaluate and secure AI-generated code, we can harness its strengths without compromising our defences.

AI needs oversight, not assumptions

AI can and should be a powerful force for good in software development. But blind trust is not a sound security strategy. Our research shows a growing disconnect between belief and reality. It’s a gap that could leave organisations vulnerable if not addressed. Security will always need discipline, and while AI may be changing how code is written, it doesn’t change the fact that nothing in development secures itself.
