
AI coding agents are no longer experimental. According to a recent Stack Overflow survey, 81% of developers are using AI tools to develop software. From AI code editors such as Cursor, Windsurf, and VS Code to Model Context Protocol (MCP) servers, enterprises are adopting these systems to accelerate delivery and meet rising expectations for speed and scale.
AI promises to make software development dramatically more efficient, but it also amplifies security risk. Conventional weaknesses—input validation gaps, command injections, hard-coded secrets, vulnerable dependencies—are now joined by design-level flaws that alter security posture and architecture. Together, they’re forcing a rethink of how security must operate in the age of AI-assisted development.
AI introduces familiar and novel risks
Large language models (LLMs) are trained on vast repositories of open source code, and they absorb both the good and the bad. When these models generate code, they can replicate the insecure patterns they learned from. While exact percentages vary, academic studies suggest that roughly a third of AI-generated code contains known vulnerabilities.
The challenge with AI is that not everything is as it seems. Endor Labs found that only 1 in 5 open source dependencies imported by AI coding agents were safe. The rest included hallucinated dependencies—packages that don’t exist but sound plausible—or dependencies with known security vulnerabilities. Attackers have already begun exploiting this behavior by creating malicious packages that match hallucinated names, a new twist on typosquatting known as slopsquatting.
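One practical guardrail is to confirm that every dependency an agent proposes actually exists on the public registry before it lands in a lockfile. The sketch below assumes Python and the public PyPI JSON API; the proposed package names are hypothetical.

```python
# Check agent-proposed dependencies against PyPI before installing them.
# A 404 means the package does not exist: a possible hallucination, or a
# name an attacker could register later (slopsquatting).
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Hypothetical list of dependencies suggested by an AI agent.
proposed = ["requests", "flask-security-utils-pro"]

for pkg in proposed:
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: found on PyPI")
    else:
        print(f"{pkg}: NOT FOUND, review before installing")
```

Existence alone is not a safety signal: once an attacker registers a hallucinated name, it will pass this check, so it complements rather than replaces vulnerability and malware scanning.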
Where AI truly diverges from human developers, however, is in design reasoning. A developer considers architecture and intent; a model predicts the next token. That’s why AI often introduces subtle design flaws that weaken security—swapping cryptographic libraries, altering token lifetimes, or modifying authentication logic. Research also shows that each successive prompt to an AI agent can increase vulnerability count—highlighting how iterative prompting compounds risk.
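A hypothetical before-and-after shows the kind of drift described above: a refactor that quietly extends a token lifetime and swaps a constant-time comparison for plain equality, weakening the design without breaking any tests. The names and values are invented for illustration.

```python
import hmac

# Original design intent: short-lived tokens, timing-safe comparison.
TOKEN_TTL_SECONDS = 15 * 60  # 15 minutes

def verify_token(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied, expected)

# A plausible AI "refactor": behavior looks the same, but the posture changed.
TOKEN_TTL_SECONDS_AFTER = 30 * 24 * 60 * 60  # 30 days

def verify_token_after(supplied: str, expected: str) -> bool:
    # Plain equality short-circuits, leaking timing information to an attacker.
    return supplied == expected
```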
Expanding the attack surface
The code itself is only part of the problem. The AI development ecosystem introduces a new supply chain: models and MCP servers that connect assistants to live environments. Each component can expose sensitive data or inject unvetted logic into the build process.
In other words, you’re no longer just securing the application—you’re securing the model that wrote it, the integrations it used, and the context it was given. This layered interdependence makes visibility and policy enforcement exponentially harder.
Why traditional tools fall short
Traditional AppSec tooling isn’t built to catch these complex flaws. Most static analysis or dependency scanners assume human authorship or predictable code patterns. They struggle to identify the new classes of risk that AI can introduce into the software supply chain. As AI adoption continues to accelerate, this gap between traditional security tooling and AI-generated risk will only widen.
Security can’t remain an afterthought. The solution is not to slow innovation, but to make security intrinsic—to build systems that produce secure-by-default code. That means embedding protection into every phase of the AI-assisted SDLC.
Building secure-by-default code
It begins at design, where teams encode security requirements directly into prompts and define tests the model must pass before code is accepted. Organizations should also formalize their unique security policies—like enforcing a specific library for input sanitization—into rules consumable by AI agents.
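As a sketch of what such a rule can look like when expressed as a test the model must pass, the check below fails if generated modules build SQL with string formatting instead of parameterized queries. The directory path and the regex heuristic are assumptions for the example, not a standard.

```python
# Illustrative acceptance gate for agent-generated code (pytest style):
# reject modules that assemble SQL via f-strings, % formatting, or concatenation.
import pathlib
import re

# Hypothetical location where agent-generated code lands before review.
GENERATED_DIR = pathlib.Path("src/generated")

# Heuristic: execute(...) called with an f-string, or a string literal
# followed by %, +, or .format(.
RAW_SQL_PATTERN = re.compile(
    r"""execute(many)?\(\s*(f["']|["'][^"']*["']\s*(%|\+|\.format\())"""
)

def test_generated_code_uses_parameterized_queries():
    offenders = []
    for path in GENERATED_DIR.rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if RAW_SQL_PATTERN.search(line):
                offenders.append(f"{path}:{lineno}")
    assert not offenders, f"String-built SQL found in: {offenders}"
```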
We also need a new class of tools. During generation, MCP servers can validate code in real time, using security intelligence to enforce guardrails automatically and require agents to verify their work. Multi-agent security review systems can then reason across files to surface logic flaws and drift from secure design patterns, the kind of issues human reviewers often miss under time pressure.
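To make this concrete, here is a minimal sketch of the kind of check such a guardrail could run against a proposed change before accepting it; the rules and patterns are illustrative assumptions, not a complete policy.

```python
# Minimal guardrail sketch: scan an agent-proposed change before accepting it.
import re

GUARDRAILS = {
    "hard-coded secret": re.compile(
        r"""(api[_-]?key|secret|password|token)\s*=\s*["'][A-Za-z0-9+/_-]{8,}["']""",
        re.IGNORECASE,
    ),
    "shell injection risk": re.compile(
        r"""subprocess\.(run|call|Popen)\([^)]*shell\s*=\s*True"""
    ),
    "weak hash for credentials": re.compile(r"""hashlib\.(md5|sha1)\("""),
}

def review_generated_code(source: str) -> list[str]:
    """Return guardrail violations found in the proposed source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in GUARDRAILS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {rule}")
    return findings

# Hypothetical agent output that should be sent back for rework.
proposed = 'password = "hunter2hunter2"\nsubprocess.run(cmd, shell=True)\n'
print(review_generated_code(proposed))
```

In practice this logic would sit behind an MCP tool or pre-commit hook, so the agent receives the findings and must resolve them before the change proceeds.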
Finally, CI/CD must maintain a full audit trail, tracking the provenance of every model, agent, and dependency to ensure each component entering the system is known, verified, and policy-compliant.
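One lightweight way to start is to have the pipeline emit a provenance record for every AI-assisted change, capturing which model and agent produced it and the hashes of the dependency state it was built against. The field names below are illustrative, not a formal attestation format such as SLSA provenance.

```python
# Sketch: write a provenance record for an AI-assisted change from CI.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Hash a file so the recorded dependency state is verifiable later."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model": "example-code-model-v1",         # which model generated the change
    "agent": "example-coding-agent/2.3",      # which agent or editor drove it
    "mcp_servers": ["example-security-mcp"],  # integrations the agent had access to
    "lockfile_sha256": sha256_of("requirements.txt"),  # dependency state at build time
    "policy_checks_passed": ["no_string_built_sql", "no_hardcoded_secrets"],
}

pathlib.Path("provenance.json").write_text(json.dumps(record, indent=2))
```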
AI is reshaping how software is written. To keep pace, our security model has to evolve just as fast. That starts with treating AI-generated code as untrusted input, subject to the same scrutiny and evaluation as any other unverified dependency. We need to build systems that make “secure by default” not an aspiration, but an outcome.



