
Integrating ethical AI into the UK public sector

By Tony Holmes, Practice Lead for Solutions Architects in the Public Sector, Pluralsight

As the UK attempts to turbocharge lagging productivity, the public sector is integrating AI tools. Government departments are piloting their own AI assistant, Humphrey, and expanding digital skills training for civil servants, while long-term strategies like the NHS 10-Year Plan put AI and technology at the heart of transformation. Yet while adoption moves quickly, regulation is not keeping pace.

There is currently no overarching UK law governing AI. Following controversy around data use, copyright and creators’ rights, a UK AI Act is expected in 2026. The EU’s AI Act and Code of Practice offer a preview of what UK rules might look like, with a strong focus on accountability and transparency.

However, regulation is not just about drafting laws; it also depends on people. Without the skills and culture to apply regulation, even robust legislation risks falling flat. As the UK develops both its digital workforce and its regulatory framework, ethics and responsible AI must sit at the heart of public service.

Why ethics matter in the public sector

The risks of limited AI literacy among civil servants are manifold. Public services have a direct impact on citizens’ lives, often in high-stakes areas like healthcare, policing, and social benefits, so it is essential that the public sector leads in ethical AI practice and, eventually, regulatory compliance.

A lack of ethical AI literacy among civil servants could amplify biases. In healthcare, AI trained on skewed datasets could misdiagnose patients from minority backgrounds. In policing, predictive algorithms can disproportionately target certain communities. And in welfare, automated decision-making can deny support to eligible claimants. Unlike in the private sector, errors or biases in public sector AI can undermine trust in government, create inequities, or even harm vulnerable populations.

Learning from the EU’s regulation

The EU AI Act is the most comprehensive legal framework on AI in the world, regulating AI systems operating within the EU. The accompanying Code of Practice is a set of non-legally binding guidelines designed to help companies demonstrate compliance in areas like transparency, copyright and safety.

The development of this Code was controversial, with some tech companies warning that it would stifle innovation. Meta refused to sign, with its Chief Global Affairs Officer, Joel Kaplan, saying the Code’s ‘over-reach will throttle the development and deployment of frontier AI models in Europe’. Although Google signed the Code, it similarly argued that the rules went too far.

Despite this contentious reaction, the Code is likely to shape global standards. The UK, which has yet to set out its own AI regulations, could adopt many of its principles to maintain public trust while fostering innovation. The EU’s approach demonstrates that governments can convene industry around common standards that put responsible use at the centre.

In contrast, the US’ recent AI Action Plan sets out a starkly different approach, prioritising speed and innovation over responsibility.

The UK should avoid both extremes. Too much regulation risks stalling progress; too little risks bias, harm, and loss of trust. The real opportunity lies in a middle ground: regulation that enables innovation while embedding ethics, transparency, and accountability from the outset.

Understanding AI ethics is vital

Before formal regulations come into effect, the UK has the opportunity to start building the right culture by embedding ethics into every digital upskilling programme. Civil servants should learn more than how to use AI; they also need to understand when to use it, how to govern it, and why responsibility matters. This will position the public sector to comply when regulation does arrive, because even the best frameworks will fail if staff lack the skills to implement them.

The government has recognised the urgency and committed to plans to alleviate digital skills gaps. For example, it introduced an upskilling programme for 7,000 Senior Civil Servants and launched the NHS Digital Academy to educate NHS staff in basic digital and data competence.

However, as it stands, only 21% of Senior Civil Servants feel confident in digital and data essentials, suggesting that progress is uneven. The public sector also still relies heavily on external contractors for digital skills, with 55% of digital and data spending in 2023 going to external providers, which hinders the build-up of institutional knowledge.

As AI becomes embedded in essential services, we need a workforce capable of spotting AI errors or ethical risks. Without that capability, we could embed discrimination and erode trust at the very moment AI adoption accelerates. Integrating ethical literacy now will allow the public sector to adapt quickly when regulation arrives, rather than rush to retrofit new behaviours later.

Embedding ethics in AI adoption

AI regulation and ethical skills must be seen as two sides of the same coin. Civil servants should be model users of AI, upholding transparency, accountability, and fairness.

The UK’s current “light-touch” regulatory stance offers a chance to lead globally, showing that rapid adoption can still be responsible. To seize this moment, ethics and capability must be treated as core infrastructure, not optional extras.

If the UK can combine the EU’s accountability-first approach with serious investment in skills, it has the chance to lead the world in public sector AI, in both speed of adoption and strength of public trust.
