Future of AI

The EU AI Act: Which parts can UK lawmakers copy and paste?

By James Fox, AI Lawyer at Gardner Leader

The EU’s AI Act introduces regulation aimed at standardising the development, deployment, and use of AI systems across the European Union. This includes a focus on General-Purpose AI (GPAI), with a Code of Practice (Code) to guide compliance during the transition to formal standards. 

The EU AI Act, amongst other aspects, categorises the use of AI systems based on their risk and potential impact on people’s rights and safety, with stricter laws for those posing higher risks. 

These new EU guidelines will help providers of AI models comply with the Act’s obligations, bridging the gap between the Act’s initial implementation and the adoption of formal standards. While the Code is not legally binding, adhering to it provides a presumption of conformity with the Act. 

In the UK, the approach to AI regulation has so far been more flexible and sector-specific, with a white paper released in 2023 and plans to introduce more formal legislation, potentially in 2026. The intention is to reduce regulatory burdens and position the UK as a global hub for AI development, with less ‘red tape’ than the EU. 

Data protection and the legal use of copyrighted materials for training AI systems are key concerns. The UK’s Information Commissioner’s Office (ICO) has already published guidance on AI and data protection, and these topics are expected to be central to future AI legislation in the UK, with relevant legislation already passed this year. 

The scale of the issue 

The AI market is vast and growing rapidly. Spending on AI technology is expected to reach around $244bn in 2025 and is forecast to rise to more than $800bn over the next five years. 

In the UK, the AI industry stands out (by some metrics) as the third largest globally, trailing only the United States and China, and holds the title of the largest AI market in Europe. The CBI estimates that UK AI firms (directly and through their supply chain) support £9.1 billion in gross value added (GVA) and over 120,000 full-time equivalent (FTE) jobs. 

Because the technology is so ground-breaking, there is little existing regulation that is truly fit for purpose to govern AI development and use, particularly in the UK. 

This is why getting our AI legislation right is so important. As a world leader in this technology, it is our responsibility to provide a global hub for artificial intelligence development without sacrificing safety or security. 

The EU vs UK: Getting it right  

Getting AI regulation ‘right’ means, in summary, balancing innovation with user safety and copyright protection, while considering ethical concerns, safety and reliability, accountability and inclusivity, and promoting global cooperation and flexibility. 

1. A risk-based approach 

The EU categorises AI systems into unacceptable, high, limited and minimal risk (plus a general-purpose AI overlay). The higher the potential harm (as with AI used in hiring or medical devices), the tighter the regulation.  

This is a good thing, as it ensures that regulation is proportionate rather than one-size-fits-all. It reflects the variety of AI models already on the market, and those which might arrive in the future. UK lawmakers can learn from this structure and adopt a similar multi-tier risk framework. UK law could go further by combining it with a more flexible, sector-by-sector approach; for example, healthcare AI might require different rules from financial AI. This would give businesses clear guardrails without stifling innovation. Together, these approaches could create a more nuanced and effective regulatory environment, provided the challenges (such as, but not limited to, complexity) are managed well.  

2. Strict prohibition of real-time biometric surveillance 

The EU’s legislation prohibits the use of real-time biometric surveillance in public, as well as social-scoring AI, in order to protect the fundamental rights of people at the highest risk level, unless used in very limited circumstances (such as law enforcement under strict conditions). Many readers will have seen the recent controversy over the Metropolitan Police trialling its use. 

This sets out clear boundaries on uses of AI models which threaten civil liberties. It would be a good step forward for UK law to reflect the EU AI Act’s unambiguous stance on these issues. 

3. Transparency & conformity for GPAI 

GPAI providers must disclose their training data and copyright compliance policies, and explain how their models operate, setting a credible baseline for transparency. 

When GPAI providers are required to disclose what data their models were trained on, how they’re handling copyright, and how the systems actually work, it helps level the playing field. Smaller developers and startups gain confidence that they aren’t being undercut by companies using massive but opaque data sets, while investors and customers can more easily assess legal and reputational risks. 

In short, these rules make it easier to compete fairly, reduce the risk of sudden litigation disrupting markets (as we’re seeing with growing IP claims), and encourage AI development that’s legally sound from the outset. That kind of predictability is good for long-term growth, responsible scaling, and attracting global investment into the UK’s AI ecosystem. 

Areas for UK lawmakers to steer away from: 

1. A fixed list of “high-risk” systems  

The EU Act lists areas such as education, recruitment and border control as high risk. The issue is that a fixed list could become outdated or excessive (although it does have some built-in flexibility). The UK may be better served by focusing on how the AI is used, not just the sector it sits in, keeping flexibility in play. 

2. High-risk systems being signed off without independent review  

Under the EU Act, some AI systems can be signed off by the companies themselves, although others require third-party conformity assessment. This risks businesses ‘policing’ themselves, or in some areas not policing at all; the fox guarding the hen house analogy seems apt here, not least for my namesake. The UK (as has already been suggested) could establish an independent body to regulate AI systems, and make this applicable to all. 

3. Defining AI’s manipulative behaviour  

The EU’s definitions of what counts as “AI” or “manipulative” behaviour are often unclear or too broad; this has been an area of real criticism. Without going into detail here, it could lead to confusion, delays and spikes in litigation. The UK should aim for precision and plain language. 

4. Environmental impact

The EU Act doesn’t say much about how energy-intensive AI systems are (especially the large language models). The UK could lead the way by requiring sustainability reporting for large-scale AI training and use, particularly as the UK has a well-developed and respected framework for calculating energy use and carbon emissions. 

The reality is that AI is advancing thick and fast on all fronts. More AI is being developed, more AI is being integrated, and more regulation is incoming. Striking the right balance between innovation and regulation will be key for the UK to maintain its status as a global leader in AI development, while ensuring public trust, legal compliance and ethical integrity. As the UK shapes its own AI laws, it has a unique opportunity to learn from the EU’s approach while forging a path that supports innovation, protects rights, and leads on sustainability. Let’s get it right. 
