AI Copyright at a Crossroads: Why International Perspectives Matter

By Nick Breen, Partner, Reed Smith

In the UK, right holders and AI companies must navigate the fast-developing international regulatory landscape, particularly with respect to copyright law. So far, the UK has been reluctant to show its hand as to how it intends to tackle the question of AI training and copyright, struggling to find the correct balance between the EU’s more prescriptive regime and the U.S.’s comparatively AI-friendly fair use model. 

The U.S. Approach and the Anthropic Settlement: A Landmark

While U.S. courts have so far favoured a broad interpretation of the fair use exception in the context of AI training, finding that the training process generally amounts to a “transformative” use, it is not all plain sailing for AI operators: courts have diverged on the scope of the exception and applied notable limitations, particularly where training data was not legitimately sourced.

In September 2025, a U.S. federal court preliminarily approved a US $1.5 billion class action settlement in Bartz v Anthropic, resolving claims that Anthropic had used pirated books in its training process. The settlement allocates approximately US $3,000 per misused work and obliges Anthropic to destroy the pirated content in its “central library”. It is the first settlement in a line of similar copyright cases and serves as a benchmark for what is to come for AI operators.

In reaction to the Anthropic settlement, record labels swiftly amended their U.S. lawsuit against Suno (an AI music generator), placing more emphasis on the manner in which Suno obtained the recordings used for training and claiming that Suno illegally “stream-ripped” sound recordings from YouTube. The labels are seeking statutory damages not only per work but per act of circumvention, which could dramatically increase the damages claimed. The effect is to frame Suno’s conduct as piracy rather than simply as a disputed use of content.

UK-Based Operators Require an International Outlook

As mentioned, the UK has been slow off the mark, adopting a ‘wait-and-see’ approach to regulation. Each time the government appears to signal a preferred approach, it receives immediate and vocal backlash from stakeholders on both sides. In particular, the government’s consultation at the start of the year drew thousands of responses from a broad range of industries.

The UK courts have not yet had the opportunity to provide much clarity on the treatment of AI training under UK copyright law. The recent Getty Images v Stability AI litigation was a missed opportunity to demystify the issue, as the primary copyright infringement claims were dropped for lack of evidence and because of the added complexity that the AI model had been trained outside the UK. However, the UK court’s upcoming ruling on secondary infringement is expected to have broader implications for how AI models are provided to UK users, even where training takes place entirely outside the UK.

This uncertainty on the domestic plane is further complicated by the EU’s attempt to give extraterritorial effect to the controversial EU AI Act, a move that threatens the very idea of copyright territoriality.

Ensuring Compliance  

The challenges for operators, whether based in the UK and seeking to operate globally or based internationally and seeking to operate in the UK, largely overlap, because training and operating AI models is an inherently cross-border endeavour.

In a global economy, with employees, customers, suppliers and data centres hosted across multiple jurisdictions, it is critical for operators to keep abreast of key developments across the world. 

Good practice for any operator would include: 

  • Data governance: Maintaining an audit trail of all data and resources used to train an AI model. Records should be kept of any licences, permissions or assignments covering copyright-protected works (an illustrative sketch of such a record follows this list).
  • Model testing: Risk management should be proactive. AI models should be monitored and tested to ensure that outputs do not infringe copyright-protected works, and should be programmed to reject prompts that specifically direct the model to infringe (i.e. reproduce) copyright-protected works.
  • Opt outs: Amidst the uncertainty in the UK, it would be prudent for operators seeking to operate in the UK or EU to offer right holders an ‘opt out’.  
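By way of illustration only, the sketch below shows one way a data-governance audit trail entry might be structured in Python. The ProvenanceRecord fields, the log_record helper and the JSON Lines log file are hypothetical examples chosen for this sketch, not a prescribed or industry-standard format.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ProvenanceRecord:
    """One entry in a training-data audit trail (illustrative only)."""
    source_id: str           # internal identifier for the dataset or work
    title: str               # title of the work or collection
    rights_holder: str       # known rights holder, if any
    licence: str             # e.g. "CC-BY-4.0", "commercial licence", "public domain"
    licence_reference: str   # contract number, URL or file path evidencing the permission
    acquired_on: date        # when the data was obtained
    opt_out_respected: bool  # whether any machine-readable opt-out was checked and honoured

def log_record(record: ProvenanceRecord, path: str = "training_audit_log.jsonl") -> None:
    """Append a record to a JSON Lines audit log."""
    entry = asdict(record)
    entry["acquired_on"] = record.acquired_on.isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage (hypothetical dataset)
log_record(ProvenanceRecord(
    source_id="DS-0042",
    title="Example licensed text corpus",
    rights_holder="Example Publisher Ltd",
    licence="commercial licence",
    licence_reference="contracts/2025/example-publisher.pdf",
    acquired_on=date(2025, 6, 1),
    opt_out_respected=True,
))
```

Keeping each entry alongside the underlying licence or permission document makes it easier to evidence legitimate sourcing if a right holder or regulator later asks how a given work entered the training set.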

Closing Remarks

As the UK’s position on AI and copyright remains unsettled, UK-based operators are encouraged to adopt a proactive approach to meet the standards imposed not only at UK level, but also at EU and U.S. level.

Recent litigation demonstrates that enforcement in one market inevitably spills over into others. Operators seeking to ensure compliance should follow international developments closely and be quick to adapt and respond.
