
Artificial Intelligence continues to change the world, and recent years have seen an unprecedented rate of advancement across machine learning, natural language processing, and deep learning. Unsurprisingly, these new capabilities have been adopted by organizations across almost every sector, offering decision-makers an accessible and cost-effective way to improve day-to-day operations. Developers, who sit at the forefront of innovation, are among the earliest adopters, employing AI to shorten the software development lifecycle and to cope with mounting pressure to produce large volumes of code in short timeframes.
However, for all of the efficiencies and benefits AI introduces, it also presents risk, with AI-based cyber-attacks on the rise. Studies reveal that 87% of organizations globally have been impacted by such attacks, and fresh research from an IDC report finds that half of developers spend 19% of their weekly hours on security-related tasks, often outside working hours, adding stress, reducing velocity, and costing organizations as much as $28,000 per developer each year. It is clear that organizations face a real challenge in defending and utilizing AI securely. This raises the question: what critical foundations need to be in place before adopting AI across an organization?
Policy as the frontline
Keeping organizations safe does not mean refraining from using these technologies; the benefits they offer are genuinely transformative. However, any AI integration must go hand in hand with a clear, robust AI/LLM security policy. AI applications are no longer esoteric tools that only developers or specialists can use; employees are adopting them company-wide, and policy must reflect this. All employees should be educated and guided on which LLMs and agent applications they are permitted to use and, just as importantly, what type of data they can input into these systems. In doing so, policy works as a firm foundation and first line of defence against cyber-attacks while still allowing the benefits of the technology to be leveraged.
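As a purely illustrative sketch, such guidance can even be expressed in a machine-readable form so that it is enforceable as well as readable. The tool names, data classes, and policy structure below are hypothetical assumptions, not a prescribed standard:

```python
# Hypothetical example: encoding an AI usage policy as data so it can be
# checked automatically as well as read by employees. Tool names and data
# classifications are illustrative only.
AI_USAGE_POLICY = {
    "approved_tools": {"internal-llm-gateway", "vendor-copilot-enterprise"},
    "permitted_data_classes": {"public", "internal"},  # never e.g. "customer-pii" or "secrets"
}

def is_request_allowed(tool: str, data_class: str) -> bool:
    """Return True only if both the tool and the data classification are permitted."""
    return (
        tool in AI_USAGE_POLICY["approved_tools"]
        and data_class in AI_USAGE_POLICY["permitted_data_classes"]
    )

# Usage: a prompt containing customer PII sent to an unapproved tool is blocked.
print(is_request_allowed("public-chatbot", "customer-pii"))       # False
print(is_request_allowed("internal-llm-gateway", "internal"))     # True
```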
However, a successful security policy goes beyond defining what can and cannot be used and how; it also considers what level of risk an individual or organization can tolerate. Errors and false positives will always be present, and every organization will arrive at its own acceptable risk rating, shaped by its industry, the operations it conducts, and the sensitivity of its data. These security policies exist to keep organizations and their customers safe, and the more coherent and accessible they are, the less likely it is that gaps in AI system and application security will be the cause of a breach.
Education fuels security
A second fundamental pillar for successful and secure AI use is relevant knowledge and education. For developers within software organizations, using AI tools and agents has clear advantages in producing more code at a faster rate. Nevertheless, using this technology competently hinges on tools being assessed and developers being trained to write code securely. This knowledge must be consistently updated, reinforced, and comprehensive, covering common vulnerabilities, secure design principles, and the secure implementation of software features. Doing so can make the difference between AI being used effectively and securely and AI introducing a vulnerability that could prove detrimental to the organization.
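To make this concrete, here is a minimal, hypothetical example of the kind of pattern secure-code training covers: building a database query from untrusted input by string concatenation (a classic SQL injection flaw) versus using a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: untrusted input is concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning (SQL injection).
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Secure pattern: a parameterized query keeps data separate from SQL code,
    # so the input is always treated as a value, never as part of the statement.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Whether such a flaw is typed by a developer or suggested by an AI assistant, training that explains why the second form is safe is what allows it to be caught before it ships.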
Despite the importance of secure code training, very few developers are formally taught secure coding or application security, with none of the top fifty undergraduate computer science programs in the US requiring it for majors. Basic coding knowledge is simply not enough; developers must treat secure code as a mindset and ensure all code is written clearly, elegantly, and securely to minimize the risk of an attack. The need for this is underlined by the ongoing application security (AppSec) dilemma and continued tech layoffs. Training should therefore satisfy three key criteria:
- Reflect skill level:
Training must be tailored to specific organizations and their daily operations. It must be relevant to an individual's role, because a crucial element of this training is business flow and context: teaching developers not only what the solution is, but also why it matters. It must also be delivered in their respective coding languages and frameworks, since code that is too complex is fragile and detrimental to the organization.
- Go beyond the surface:
Taking part in a single training session does not provide sufficient understanding or depth of knowledge; education must be relevant, consistent, and thorough so that developers gain an architectural and technological understanding and become confident in decision-making. Programs should also adapt to reflect the rapid pace of advancement, such as ever-changing malicious AI agents and increasingly complex supply chains.
- Have realistic milestones:
The progress of participants must be measured in order to provide insight into the success of training programs. One basic way to do this is to compare the number of vulnerabilities present in a developer’s code before and after training, as sketched below. This information can then be translated into positive feedback, incentivizing developers to engage with security best practices day-to-day. Success stories can also be shared to secure buy-in from stakeholders, helping training remain consistent and, therefore, effective.
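A minimal sketch of what that comparison might look like, assuming per-developer vulnerability counts are available from code scans run before and after training; the figures and function below are illustrative only:

```python
def vulnerability_reduction(before: int, after: int) -> float:
    """Percentage reduction in vulnerabilities found in a developer's code after training."""
    if before == 0:
        return 0.0  # nothing to reduce; avoid division by zero
    return (before - after) / before * 100

# Illustrative figures only: 12 findings in pre-training scans, 4 afterwards.
print(f"{vulnerability_reduction(12, 4):.0f}% fewer vulnerabilities after training")  # 67%
```

Even a simple metric like this gives training programs a measurable milestone and gives developers concrete evidence that their secure-coding habits are improving.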