Resilience and Developer Risk Management: Two Pillars of Success in the Era of Secure by Design and AI Coding

By Pieter Danhieux, CEO & Co-Founder, Secure Code Warrior

Change is afoot in cybersecurity governance, and it couldn’t come at a more transformative time for security leaders worldwide. The White House recently issued an Executive Order (EO) designed to “reprioritize cybersecurity efforts to protect America”, and with it, reduce the friction of overzealous government oversight in favor of protecting critical digital assets and enhancing secure technology practices. Coupled with significant cuts to CISA, one could be forgiven for being apprehensive about the right approach going forward, especially in the wake of rapid AI technology progression and Secure by Design initiatives.

It appears that we are entering a new era, characterized by a risk-based approach to cyber strategy and a deep focus on proactive resilience. For some, it may feel like free fall compared to the heavy mandates previously seen in the United States and elsewhere around the globe, but it presents a unique opportunity to strengthen enterprise software with high-impact, customizable guardrails that actively reduce several areas of digital risk. 

What does “resilience” mean in the modern security program? 

As is true of our health, prevention is far better than a cure, especially from a cost and time perspective. As such, a “defense-in-depth” cyber strategy (a multi-layered approach that accounts for diverse threat vectors and avoids a single point of failure) is common in enterprise security programs and has been for some time. The issue is that, in 2025 and beyond, the software development and cyber landscape is evolving faster than most leaders can adapt to and mitigate, especially when factoring in the risks of AI coding: both emerging new vulnerabilities and misuse by developers with low security proficiency. To that end, security leaders would be well placed to build both technological and human resilience into their programs.

Generally, you can’t fix what you can’t measure, and the “people” aspect of a security program is notoriously difficult to analyze. This presents a problem, given that the human factor accounts for 95% of all data breaches, as revealed in Mimecast’s 2025 State of Human Risk report. The same report states that while 95% of organizations are implementing AI to assist with threat detection and defense against cyberattacks, 55% of respondents feel ill-equipped to deal with AI-driven vulnerabilities. With agentic AI now creating code at head-spinning speeds (with generally poor security outcomes, mind you), resistance to cyberattacks will largely depend on how well developers are equipped to identify and mitigate vulnerabilities, as well as review AI code output with a highly proficient security lens.

Additionally, Bedrock Security’s 2025 Enterprise Data Security Confidence Index indicates that 82% of security leaders report glaring gaps in overall visibility. These gaps not only hinder risk management and compliance efforts; they can also leave security programs without the data and insights needed to target genuine problems, such as effectively managing sensitive data and access control, while focusing too heavily on meeting regulatory requirements without actually reducing risk.

Ultimately, security leaders must modernize their visibility and governance tooling, with special consideration made towards developer risk management. Security-skilled developers can effectively manage code-level vulnerabilities, but there must be granular visibility into each developer’s security skills, knowledge gaps, and the security accuracy of the code they commit. Without this dataset, people-driven resilience will remain elusive in the enterprise.  

Why Secure by Design is more relevant than ever 

The EO has a sharp focus on reducing regulatory red tape for the enterprise sector but, refreshingly, does not diminish security accountability for software vendors: they will have to stand by the safety of the software they release into the wild. On the one hand, there is the relief of less rigidity in government regulatory efforts; on the other, the onus is on security leaders to demonstrate meaningful compliance with Secure by Design principles.

The realities of the cyber landscape, especially the complications posed by AI coding assistants, which both introduce code-level issues and create AI-borne threat vectors like tool poisoning, have made meaningful Secure by Design initiatives imperative for organizations that want to protect their data, systems, business operations and reputations. In large part, creating secure software is in the hands of developers, but they need assistance in the form of a thorough upskilling program that provides precise, just-in-time knowledge and insights, and allows for hands-on experience with the issues they are most likely to encounter in their jobs.

Software that is “secure by design and default” doesn’t emerge from general compliance exercises and one-time video training; it requires a fundamental shift towards vendor security responsibility, where developers must possess verified skills in security best practices and the ability to implement and manage strong, secure code. Despite the apparent leniency on offer as part of the recent EO, this framework is not going away, and it’s by far the most potent (and cheapest) way of improving software quality and security to the standards demanded by the current threat environment. 

Developer risk management is the key to safer use of AI agents and tools 

With the focus on resilience, critical infrastructure, and risk-based approaches, software-driven organizations (which is basically all of them) will need to demonstrate they are proactively reducing software risk, especially in the absence of intricate, widespread government controls. This is made exponentially more difficult by the constantly changing landscape of AI coding tools. 

Ongoing experimentation and the resulting data from BaxBench suggest that, on average, around half of the correct coding solutions generated by the LLMs tested are insecure, raising concerns about current metrics and any third-party evaluations that focus solely on code correctness. With agentic AI’s ability to work autonomously, this is a concerning situation. These tools, in the wrong hands, will supercharge the production of potentially vulnerable code: developers with low security proficiency will not be able to detect insecure components, let alone fix them, as they lack the applied security skills and business context required to maintain secure functionality.

Developer risk management, in the form of governance, repository gatekeeping, and meaningful on-the-job upskilling, is the missing ingredient in many security programs, but it is the key to resilience and Secure by Design compliance. These elements cannot remain in the “too hard” basket; CISOs and their CEO counterparts must act as a united front in bringing these issues to light in the boardroom. The time to modernize is now.