Deepfakes: The legal reality behind the unreal


Did this really happen? Did they really say that? Questions like these will become increasingly tricky as video and audio deepfakes (content manipulated or generated using artificial intelligence (AI)) permeate every aspect of our lives.

While the use of AI to power facial recognition technologies has been subject to increased scrutiny from regulators and authorities both in Europe and in the US, deepfakes have largely flown under the radar of policymakers.

Until recently, deepfakes were easily detectable and of clunky quality. However, the technology has matured incredibly fast, resulting in fake content being practically indistinguishable from authentic media and giving rise to a fresh wave of concerns.

Deepfakes, the good and the bad

Many associate synthetic media with harmless, often whimsical parodies of real-life people or playful filters on social media apps. Deepfakes also have many benevolent or genuinely practical uses, such as creating digital voices for those who have lost their ability to speak.

In the creative industries, they facilitate editing and recording of new content without having to re-shoot. However, a recent report from University College London ranked fake audio and video content as the most worrying deployment of AI technology.

Fake content can be utilised for a variety of purposes including discrediting public figures, fabricating news reports, or manufacturing plausible evidence. It could also be used for scams such as staging a fake kidnapping and extracting funds by impersonating a couple’s son or daughter in a video call.

Applying the law to deepfakes

The existing law provides a number of tools that may assist in tackling deepfakes. For example, the material used to generate a deepfake might be protected by copyright, so using it may infringe third-party rights. The use of a synthetic voice that resembles an identifiable individual is likely to raise image rights issues. Given that the recording of a voice involves the processing of personal data, rights under the General Data Protection Regulation will also be implicated. The synthetic content could also be defamatory, trigger obligations on online platforms to take down illicit content, or amount to a criminal offence.

Unfortunately, none of these legal tools provides a perfect solution. In an era when information is reposted, retweeted and shared 24/7, stopping the wildfire of fake content is not easy. Policing synthetic media in one jurisdiction can have a limited effect on content that is being used or disseminated beyond national boundaries, as is the case with fake news stories on social media.

Although steps can be taken to block material in a particular jurisdiction, leakage often occurs. Enforcing national court judgments abroad can be difficult and if the defendant does not have deep pockets or cannot be located, seeking vindication in this way can be an unsatisfactory and expensive exercise.

Many of the above-mentioned legal solutions provide individuals or companies with retrospective vindication only. By the time judgment has been given, the reputational damage or erosion of people’s trust caused by deepfakes has often already been done and may be irreversible.  

More pre-emptive tools are required to prevent harmful synthetic content from spreading or being deployed in the first place. There are currently no European laws that specifically tackle the issue of deepfakes. The latest set of European proposals for regulating AI only targets sectors where risks are both most likely to occur and most likely to be significant.

These proposals are, for example, likely to capture autonomous robots that pose an imminent threat to human life or safety. Therefore, deepfakes, which have never been a specific policy area, are very unlikely to fall within the scope of the EU’s evolving AI regulatory landscape.

Rising to the challenge

That being said, the EU Commission is working to tackle online disinformation in Europe (including the use of deepfakes) by developing the first worldwide self-regulatory set of standards.

Last month, the UK government also published a set of recommendations, one of which is that social media platforms should have clear policies in place for the removal of deepfakes. In fact, many platforms are already doing this and have a wide-ranging variety of content restrictions in their terms of service. Several US states have gone a step further and, for example, banned distribution of political deepfakes during the election period altogether.

The challenge in tackling deepfakes is as much legal as it is technological.

Governments all over the globe are bolstering the capabilities of their digital forensic units. The Pentagon's Defense Advanced Research Projects Agency (DARPA) has led the way by establishing its Media Forensics programme, which focuses on developing technologies that automatically detect manipulation and assess the integrity of digital media.

As deepfakes become increasingly sophisticated, so will the technologies used to detect them and to authenticate genuine media. A deepfakes arms race is, therefore, inevitable.
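One building block of media authentication is worth illustrating. This is not DARPA's method, merely a minimal sketch of provenance-style verification: if a publisher releases a cryptographic fingerprint of the authentic file alongside it, anyone can later check whether the copy they received has been altered. All names and the sample byte strings below are hypothetical.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 hex digest acting as a tamper-evident fingerprint."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_authentic(media_bytes: bytes, published_digest: str) -> bool:
    """Compare a received copy against the digest published by the source."""
    return fingerprint(media_bytes) == published_digest

# Hypothetical stand-ins for a real video file and a doctored copy
original = b"frame-data-from-the-original-broadcast"
tampered = b"frame-data-with-a-synthetic-face-swap"

digest = fingerprint(original)
print(is_authentic(original, digest))  # True
print(is_authentic(tampered, digest))  # False
```

A digest only proves a file is bit-identical to what was published; it cannot judge whether content that never had a published fingerprint is fake. That harder problem, detecting manipulation from the media itself, is what forensic research programmes pursue.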

Confronting reality

Synthetic content can be deployed at scale and in no time – whether to perpetrate crimes or amplify the reach of fake media. As with COVID-19, the pandemic of deception has no respect for national borders.

Given how extensive the existing legal frameworks that may apply to deepfakes already are, a regulatory overhaul is unlikely and, in any event, would not arrive in time. Instead, a prompt, concerted international response supported by quasi-legal measures may be the best option.

Author

  • Roch Glowacki

    Roch is an associate in the Entertainment and Media Industry Group, and a transactional lawyer with experience in both advisory and contentious matters. He advises companies on a wide range of projects involving emerging technologies (such as artificial intelligence, cloud computing and virtual reality), software and IT solutions, content distribution and licensing projects, digital media and e-commerce issues, advertising, media transparency, gaming and telecoms.
