On January 13, 2025, the UK Government published its long-awaited AI Action Plan, which aims to strengthen the UK’s global competitiveness as a leader in AI development and adoption. AI and technology have become hotly contested topics, both with regard to the form regulation should take and to how legislators keep pace with the technology’s rapid development.
While the UK has no overarching framework governing AI, a range of existing legislation, including data protection, product safety and equality law, applies to its use. The plan nevertheless marks a distinct shift in the UK’s approach to AI, including a clear ambition for public sector development of, and investment in, the technology. This new approach is a pivotal step in developing the UK AI regulatory landscape.
The UK has moved from discussions predominantly focused on “safety and risk” to an “opportunity and growth” agenda, with the government positioning the UK as open to innovation and a key contender in the global AI arms race. Exploring and implementing AI offers the UK an abundance of opportunities, but it also brings challenges. Regulatory and legislative challenges are among the most critical, and it is here that the Government must focus its attention.
What opportunities does the UK AI Action Plan represent?
The recommendations focus heavily on investing in world-class computing and data infrastructure, outlining “a long-term plan for the UK’s AI infrastructure needs backed by a 10-year investment commitment”. The plan acts as a foundation to enable AI: by increasing access to talent and building a world-class AI compute ecosystem, the UK will solidify its position at the forefront of AI development, with impact both in the public sector and in fostering home-grown AI technologies and talent.
Public sector to rapidly adopt and pilot AI products and services
Previous AI ‘pro-innovation’ initiatives have influenced the action plan, which urges the public sector to adopt AI products rapidly so that it can help shape the AI market.
“Regulatory sandboxes” are a pivotal element in encouraging AI development and live consumer testing in a controlled environment, allowing AI developers to test their innovative AI models under regulatory oversight. This concept has proven successful in other industries such as fintech and the development of the UK’s cybersecurity industry.
The aim is to boost the domestic ecosystem by expanding the array of AI products the UK market offers. The plan also envisages the public sector benefitting by adopting trustworthy, high-performing AI at scale to boost efficiency and productivity in the delivery of public services.
Secure our future with domestic sovereign AI
The proposal recognises that “privately owned and operated AI will position the UK as a leading AI economy”. One of the more ambitious aims of the action plan is the Government’s goal of influencing the governance of frontier AI for the UK by positioning the UK’s private sector as the next global Big Tech leader.
The Government will launch a new unit, UK Sovereign AI, to strengthen the UK’s position in frontier AI development by proactively interacting with the private sector. The unit will have a powerful mandate to partake in international collaboration and facilitate the growth of AI companies by removing barriers to AI and making deals to enable development of competitive AI champions from within the UK.
The Government must consider how it can elevate key suppliers to the global stage whilst ensuring that such support and direct investment will comply with the Subsidy Control Act 2022.
What does the future regulatory landscape look like?
The Government has maintained a light-touch approach to regulation. However, Matt Clifford, the Prime Minister’s Adviser on AI Opportunities, claims: “Regulation, safety and assurance have the power to drive innovation and economic growth.” This appears to mark a shift towards regulation, and the plan acknowledges that well-designed and well-implemented regulation can fuel the safe development and adoption of AI.
Despite the emphasis on growth and innovation, the action plan recommends preserving the current pro-innovation, sector-based approach to AI regulation, to continue fostering public trust in the technology and prioritising consumer safety.
Regulators are expected to fulfil their regulatory functions with due regard to the five cross-sector principles set out in the previous Conservative government’s 2023 AI White Paper. These principles address safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
There are concerns that strong AI regulation could hinder innovation by reducing regulatory freedom. The action plan acknowledges this and, in response, cautions against ineffective regulation that could hold back adoption in crucial sectors such as healthcare.
Increased accountability for regulators
The action plan outlines a number of measures to enhance the regulatory landscape for AI. One radical recommendation is to place greater accountability on existing regulatory bodies to secure both growth and safety, in line with the Government’s ambitions.
Certain regulators will need to identify their future AI capability needs and set out how they intend to mitigate AI risks whilst balancing the promotion of growth. Notably, bodies with significant AI activity will be required to publish annual reports setting out how they have enabled AI-driven innovation and growth in their sector.
If evidence demonstrates that the existing regulators are failing to match the Government’s ambition for AI growth and innovation, Clifford recommends introducing a new central body with statutory powers and higher risk tolerance to drive innovation – a major proposal which will shake up the current UK regulatory framework.
Will the UK and Ireland have a defined regulatory landscape?
The Government has previously stated that it intends to regulate the providers of the larger ‘frontier models’. The action plan is silent as to whether any such regulation should be introduced. Instead, it focuses on adopting an agile approach in response to the rapid pace of technological advancement.
Whilst the plan invites predictions about the future landscape of AI regulation, we still await the AI legislation announced last year. That legislation is expected to remain light touch and narrow in scope, targeting only ‘frontier’ AI risks. This would avoid disadvantaging and hindering AI development and investment, since broader regulation would directly contradict what the action plan aims to achieve.
Potential Impact
The UK AI Action Plan represents an opportunity for the UK Government to act as a catalyst for bringing AI technological advancements to commercial and consumer markets on a wider scale. The plan is ambitious, and its sudden push for innovation comes at a time when other jurisdictions are becoming increasingly attractive to potential investors.
By expanding AI infrastructure, providing a regulatory framework and actively supporting AI development, the Government is presenting the UK as a compelling investment opportunity and a potential customer for AI solutions.
However, there is a balance to be found between the UK retaining its competitive edge in the “AI arms race” and maintaining its self-imposed regulatory obligations to safeguard individuals.
Whilst the plan recognises the importance of AI safety and transparency, it remains to be seen whether it can balance championing innovation with preserving fundamental rights such as privacy and data protection, as well as addressing ethical considerations and risk issues.
The plan represents a step forward for developing the UK AI regulatory landscape, but businesses will need to be alert to legislative and regulatory updates.