AI Regulatory Crossroads: Can the UK Forge its Own Way?

By Roch Glowacki, Managing Associate, Lewis Silkin LLP

The UK has reached a critical juncture in its approach to AI regulation. Historically, it has adopted a largely reactive and non-committal stance, observing from the sidelines first the drafting and now the implementation of the EU’s AI Act, while delaying pressing AI and intellectual property issues at home. Initially, this may have appeared to be a conscious and sensible hedge. However, time is running out, and the government’s next steps will set the tone for years to come.

The UK’s Approach

In March 2023, the previous UK government published its AI White Paper, outlining initial proposals for a “pro-innovation” regulatory framework for AI. By that time, work on the EU’s AI Act was already well advanced. It seemed as though the UK was a couple of years behind the EU in its thinking about AI regulation, perhaps distracted by Brexit, numerous changes of prime minister, the pandemic and the war in Ukraine. The concepts and principles (transparency, explainability, fairness, trustworthiness, societal well-being and so on) in the UK AI White Paper closely resembled those in the EU High-Level Expert Group’s Ethics Guidelines for Trustworthy AI (2019) and the EU’s own white paper on AI (2020). The content of the UK’s paper was, therefore, not particularly novel.

When the EU first advocated an ethics-first, principle-based approach to regulation, many sceptics argued that it was too “fluffy” a basis for developing a legislative framework. Yet the EU’s white paper was explicit that, by creating a regulatory framework grounded in its fundamental values, Europe aspired to become a global leader in innovation. The UK’s approach, also principle-driven, has been expressly branded as “pro-innovation”, even though the UK appeared simply to have reached a milestone in its thinking about AI regulation that EU experts had passed a few years earlier.

Recent Developments

A lot has happened in the past couple of years. President Biden’s 2023 Executive Order on AI has recently been revoked by the re-elected President Donald Trump. Last year, the UK’s new Labour government expressed its intention to regulate AI in its manifesto, and the King’s Speech (which sets out the government’s legislative agenda) mentioned plans for appropriate legislation to regulate the most powerful AI models. However, in a possible policy shift aimed at aligning with the Trump administration, those plans are now expected to face further delays. Meanwhile, China’s often-overlooked regulatory landscape for AI is also evolving rapidly. After a brief period of potential regulatory convergence between the UK, the EU and the US, we are now at risk of serious market fragmentation, with conflicting national policies standing in the way of progress.

The UK has recently published its AI Opportunities Action Plan, which calls for improving data access, reforming regulation, developing AI talent, and driving adoption across both the public and private sectors. Technology stakeholders have been advocating many of these actions for years, so the ideas are not ground-breaking. Two years on from the UK’s AI White Paper, little has happened. Government ministers are trying to position the new plan as an opportunity for the UK to forge its own path in the AI landscape. But what does such a path look like, particularly in regulatory terms?

Existing Approaches

The EU, the US and China have each opted for distinct and fundamentally different stances on AI regulation, often characterised as “rights-driven”, “market-driven” and “state-driven” approaches respectively. The EU’s approach prioritises protecting fundamental rights, the US focuses on unleashing market forces, while China’s centres on reinforcing government authority.

Among the three, the EU receives the most criticism. The argument against its approach often rests on the belief that regulation stifles innovation. However, even in the UK, Lord Holmes, who has recently re-introduced the Artificial Intelligence (Regulation) Bill (which enjoyed broad support during the last Parliament), speaks of the need for right-sized regulation to support, not stifle, innovation. The argument goes that regulated markets perform better, and that right-sized regulation fosters innovation and attracts inward investment.

Does Regulation Really Stifle Innovation?

One does not need to look far to see how regulation can foster innovation. Take the healthcare and finance sectors as examples. Consider the last time you transferred money, had a medical scan, or were prescribed medication. Behind those moments are entire regulatory frameworks that ensure those systems are safe, secure and reliable. In finance, rules on open banking and e-money fostered a wave of innovation and allowed apps like Monzo, Wise and Revolut to flourish.

Consumers can now move money instantly, manage spending, or invest from their phones. That is not because innovation was left unchecked, but because regulation provided a stable foundation for systems that people could trust. The same applies in healthcare. When a doctor prescribes a new medication, many of us don’t question its safety or effectiveness. That’s because we trust, perhaps only subconsciously, that the drug has passed rigorous clinical trials and is subject to regulatory scrutiny. The roll-out of COVID-19 vaccines during the pandemic is a case in point.

Trust is a prerequisite for adoption. When we see risk or unfairness, we often pull back. Air travel is a good example. When powered flight was first achieved in 1903, flying was perceived as, and indeed was, incredibly risky, but a combination of technological progress and rigorous safety regulation transformed aviation into the safest and most trusted mode of transport. That, ultimately, is also the purpose of the EU’s AI Act, which has been drafted as a piece of product safety legislation.

It is also worth noting that the AI Act is estimated to cover only a fraction (approximately 10-20%) of all AI use cases, focusing on high-risk AI systems. These rules are not so much about banning innovation as about ensuring that, where we interact with AI systems, we can trust that those systems have been deployed in a safe and responsible manner. It’s akin to requiring that every car has a seatbelt, not prescribing what kind of car can be built.

Those who argue that Europe is stifling innovation through regulation often fail to appreciate that a much larger factor affecting Europe’s ability to innovate is the absence of a truly single digital market. The European market remains heavily fragmented by language and cultural differences, capital market fragmentation, and diverging national rules.

Falling Behind in the AI Race?

Another argument frequently levelled against regulation is the fear that it might exclude a country from the so-called “AI race.” But what does winning this race actually mean? Is it about developing the most powerful general-purpose model, even if that model remains opaque and deployable only by a handful of tech giants? Or is it about generating the most revenue, even if that wealth accrues to the few, deepening existing inequalities?

Imagine the following scenario: you’re working from home, and during every video call an AI system monitors and analyses your facial expressions, tone of voice and spoken contributions. Without your knowledge, it then feeds this data into a performance management algorithm that ranks you against your colleagues. When your manager consults the tool, it reports that you were “engaged” in just 20% of your meetings, and you’re flagged as underperforming. Shortly after, you’re fired. While this may sound dystopian, under the EU’s AI Act such emotional surveillance is generally prohibited, with very limited exceptions. In jurisdictions lacking equivalent safeguards, however, there may be little to prevent the widespread deployment of such systems. Is the ability to deploy these systems at scale what winning the AI race truly entails?

Value of International Collaboration

International treaties and initiatives are important for fostering trust and coordination between nations, but to be meaningful they must translate into concrete national action. In the past, international collaboration has been instrumental in curbing the proliferation of nuclear weapons and in driving the debate on human cloning.

Recently, the UK, alongside the US, refused to sign the Paris AI declaration. However, this is neither the first nor the last AI-related international initiative. The UK has previously signed up to the 2019 OECD AI Principles (later endorsed by the G20) and the 2024 Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The consequences of not endorsing one particular declaration should therefore not be overstated.

It is important to remember that many of these international AI initiatives take the form of non-binding pledges. Because they impose no legal obligations on signatories, they rarely result in the implementation of specific laws. Whether or not the UK signed this particular declaration is therefore unlikely to have any tangible impact on its regulatory approach to AI.

What ultimately matters is the action taken at the national level. Effective AI regulation requires a dual approach: a top-down framework driven by international cooperation to shape the global direction of AI governance, and a bottom-up effort focused on implementing detailed, well-crafted legislation within individual countries. The EU’s AI Act is somewhat unique in this regard, as it does both. If the UK wants to realise its AI ambitions, it must now take concrete steps at the domestic level to match its international rhetoric with action.

Conclusions

The UK’s “wait and see” approach has arguably provided its legislators with an opportunity to learn from other jurisdictions. However, this strategy positions the UK more as a follower than a leader in the AI debate, and time is gradually running out. The risk is that there is no “fourth” way that the UK can carve out for itself. The longer it takes for the UK to establish its stance on AI regulation, the more likely it is that it will need to align with one of the three existing approaches.

Given past experience in sectors such as healthcare, finance, and aviation, it is hard to envision a scenario in which AI remains unregulated if we are serious about fostering trust in this transformative technology. There is a genuine concern that, for the UK to take a definitive stance, it may require an AI equivalent of the Cambridge Analytica scandal to catalyse regulatory action, much as that event prompted a re-evaluation of data privacy regulations.
