
Securing the future: building trust in AI through standards

By Scott Cadzow, Chair of Technical Committee, ETSI

AI is rapidly transforming how businesses and consumers interact with all forms of technology. Machines are no longer limited to fixed tasks: equipped with AI they can now iteratively learn, adapt and even make decisions that influence critical systems. This brings enormous potential for telcos, from optimising networks to powering customer experience through AI chatbots and ever more personalised services. But the same qualities that make AI valuable also create new risks, with cyber-attackers exploiting AI for malicious purposes.

AI has been extensively integrated into new businesses, creating avenues for attackers to exploit business software and serving as a tool for attackers to improve their own techniques. Although companies are developing new defensive tools and integrating AI into their cybersecurity strategy and response, industries must collaborate to develop a unified approach to securing AI.

New standards and frameworks will guide the integration of AI within cybersecurity, creating security from the ‘ground up’ and ensuring businesses have the right safeguards to protect their organisation from the next generation of threats. But these standards need to happen now: with AI growing more complex, agreeing on the right standards and building them into business practice and law will protect the industry as a whole.

A new generation of threats 

In traditional cybersecurity, attackers and defenders were stuck in a back-and-forth cycle. Every time there was an attack, there was a rush to defend. Tools such as anti-virus software and firewalls provided strong protection for a time, but were always followed by new attack methods. With AI, that cycle is accelerating and becoming more complex. Attackers can now train systems to learn from failures and continuously adapt their strategies.

This is also a challenge of scale. An AI system can process thousands of scenarios far faster than a human operator, making it possible for attackers to launch automated campaigns that probe for weaknesses until they succeed. In this environment, defending with static rules or one-off patches is no longer sufficient. Security protections must evolve to be just as adaptive as the threats they face.

On top of this, we’re also seeing new kinds of attacks. With the rise of AI and many businesses using it across devices, attackers are increasingly targeting the models themselves. Techniques such as model poisoning, which feeds malicious data into a system to alter its behaviour, can undermine trust in automated decision making. These attacks can be subtle, evolving silently over time: true Trojan horses. To counter AI-driven risks, industries need clear, consistent rules and guidance for security.
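To make the idea more concrete, here is a deliberately simple sketch, in Python, of one kind of check a team might run on incoming training data: flagging records whose labels disagree with near-identical records. Everything in it (the records, the distance measure, the threshold) is invented for illustration, and real poisoning defences go far beyond this, but it shows where poisoned data could be caught before it shapes a model’s behaviour.

```python
# Illustrative sketch only: a crude screen for suspicious labels in an
# incoming training batch, run before any data reaches a model.
# Records, distance measure and threshold are invented for this example.

# Each record: (feature_vector, label) supplied by an external data feed.
training_batch = [
    ((0.1, 0.2), "benign"),
    ((0.1, 0.3), "benign"),
    ((0.2, 0.2), "benign"),
    ((0.1, 0.2), "malicious"),   # same region of feature space, opposite label
]

def flag_label_conflicts(batch, tolerance=0.15):
    """Flag records whose label disagrees with a near-identical record."""
    flagged = []
    for i, (features, label) in enumerate(batch):
        for j, (other_features, other_label) in enumerate(batch):
            if i != j and label != other_label:
                distance = sum(abs(a - b) for a, b in zip(features, other_features))
                if distance < tolerance:
                    flagged.append((i, features, label))
                    break
    return flagged

for index, features, label in flag_label_conflicts(training_batch):
    print(f"Record {index} {features} labelled '{label}' conflicts with a near neighbour - review before training")
```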

This is why standards to secure AI are critical: they help organisations build defences that can match the scale and sophistication of AI-driven threats. At the same time, work is also underway to define how AI can defend itself, harnessing its capacity for learning to strengthen cybersecurity and setting rules for the trusted use of machine learning in detection, monitoring and automated response. By establishing guardrails on both sides, standards provide a foundation for resilience in the years ahead.
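As a flavour of what machine-assisted monitoring can look like in practice, the minimal Python sketch below flags an hourly count of failed logins that sits far outside its recent baseline, the kind of signal that could feed an automated response. The figures and the three-sigma threshold are illustrative choices, not recommendations drawn from any standard.

```python
# Minimal sketch of anomaly-based monitoring: flag a login-failure count
# that sits far outside the recent baseline. Data, window size and the
# 3-sigma rule are illustrative only.
from statistics import mean, stdev

hourly_failed_logins = [12, 9, 15, 11, 10, 13, 14, 96]  # last value is the new observation

baseline, latest = hourly_failed_logins[:-1], hourly_failed_logins[-1]
mu, sigma = mean(baseline), stdev(baseline)

if latest > mu + 3 * sigma:
    print(f"Alert: {latest} failed logins vs baseline {mu:.1f} (+/- {sigma:.1f}) - trigger automated response")
else:
    print("Within normal range")
```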

Why standards are essential now 

Standards provide baseline protections that everyone can follow, while giving developers, operators and regulators a shared language for addressing risks. They set out practical measures such as safeguarding data, documenting processes and monitoring models once deployed. 

In April 2025, a technical specification set an international benchmark for securing AI systems and provided a framework to demonstrate how AI can improve cybersecurity. It outlined requirements across the AI supply chain, including data integrity, transparency and lifecycle management. These measures help ensure that models are trained on trusted data, documented properly and monitored once deployed.
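One of those measures, data integrity, can be illustrated with a small, hypothetical sketch: hashing a training dataset and recording a provenance manifest that can be re-checked later in the model’s lifecycle. The file name, model name and manifest fields below are invented for the example rather than taken from the specification.

```python
# Hedged sketch of a data-integrity practice: hash the training data and
# record a small provenance manifest that can be verified later in the
# model's lifecycle. Names and fields are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def build_manifest(dataset_path: Path, model_name: str) -> dict:
    digest = hashlib.sha256(dataset_path.read_bytes()).hexdigest()
    return {
        "model": model_name,
        "dataset": dataset_path.name,
        "dataset_sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_dataset(dataset_path: Path, manifest: dict) -> bool:
    """Re-hash the dataset and compare against the recorded value."""
    return hashlib.sha256(dataset_path.read_bytes()).hexdigest() == manifest["dataset_sha256"]

data = Path("training_data.csv")          # illustrative file, created here for the demo
data.write_text("feature,label\n0.1,benign\n")
manifest = build_manifest(data, "fraud-detector-v1")
print(json.dumps(manifest, indent=2))
print("Dataset unchanged:", verify_dataset(data, manifest))
```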

Equally important are the ethical principles that build trust. Standards bodies have highlighted transparency, accountability and human oversight as essential. These principles ensure organisations can explain decisions, trace system behaviour and keep people responsible for critical outcomes. 

The importance of standardisation at this stage shouldn’t be underestimated. AI is spreading quickly into healthcare, finance, telecoms and public services. If each industry builds its own defences in isolation, the result will be uneven levels of security. Standards make it possible to embed security consistently, avoiding gaps that attackers can exploit. They also help smaller organisations by providing accessible frameworks that reduce the burden of developing defences from scratch.

AI standards in practice 

AI is still a novel technology, but it is being integrated across machines at an extreme pace. This means we need to know how to protect it early and anticipate how it is going to be exploited, which is why it is important that governments encourage the adoption of standards.

The recent publication of ETSI TS 104 223, derived from the UK Government’s Code of Practice for the Cyber Security of AI, is one example of standards being used to defend AI for the greater good of the industry. It outlines practical principles for developers and operators, from protecting training data to testing systems against common attack techniques. Businesses operating in the UK and across the world will, in time, be expected to demonstrate compliance with these principles as part of their security assurance, a shift that makes early adoption not just good practice but a necessity to avoid future regulatory penalties.
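As a loose illustration of what “testing systems against common attack techniques” can mean in engineering terms, the Python sketch below probes a toy stand-in for a deployed classifier with small random input perturbations and reports how often its prediction flips. The model, noise level and trial count are all invented for the example and are not drawn from the standard itself.

```python
# Hypothetical robustness probe: measure how often small random input
# perturbations flip a classifier's decision. The "model" here is a toy
# linear scorer standing in for a deployed system.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=8)  # stand-in for a trained model's parameters

def predict(x: np.ndarray) -> int:
    return int(x @ weights > 0)

def perturbation_flip_rate(x: np.ndarray, epsilon: float = 0.05, trials: int = 200) -> float:
    """Fraction of small random perturbations that change the prediction."""
    baseline = predict(x)
    flips = sum(
        predict(x + rng.uniform(-epsilon, epsilon, size=x.shape)) != baseline
        for _ in range(trials)
    )
    return flips / trials

sample = rng.normal(size=8)
print(f"Prediction flip rate under +/-0.05 noise: {perturbation_flip_rate(sample):.2%}")
```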

Securing the future together 

AI will continue to reshape digital systems, making them more powerful but also more exposed. Protecting against these threats requires not only technical innovation but also agreed standards, ethical safeguards and regulatory alignment. At the same time, AI can be a powerful tool for cybersecurity itself, enabling faster detection, smarter monitoring and adaptive responses. 

When industry, government and standards bodies work together, they can unlock this potential safely, creating defences that are stronger than any one organisation could build on its own. Standards are the foundation for this cooperation, helping to ensure that AI remains a source of progress rather than vulnerability.  
