
Deepfake Regulation: A Double-Edged Sword?

By Graeme Murray, Senior Associate at Marks & Clerk, and Michael Shaw, Partner at Marks & Clerk

Deepfake technology has emerged as AI's latest 'Pandora's box'. No longer limited to producing parodic content of politicians (who'll ever forget the Pope sporting Moncler?), generative AI is now being actively weaponised, from misleading political deepfakes and clickbait celebrity advertisements to schoolchildren deep-faking explicit pictures of classmates.

As the capabilities of AI technology race ahead of regulation, many are growing concerned about the very real threat it poses. New legislation is coming in – but much of it is too narrow, or too vague, to protect people comprehensively. And on the flip side, these new rules have implications that could easily catch out professionals trying to use generative AI in legitimate ways.

So, what legal protection currently exists around deepfake technologies, and what behaviours are prohibited?

The Varieties of Visage

First, it's important to define what makes a deepfake a deepfake. After all, similarities exist in nature – there's the old adage that seven people in the world look like you – but at what degree of similarity does regulation protect you, and where can you slip up as a business?

A useful example is the 2019 ruling against vape company Diamond Mist. The business's adverts included one with the strapline "Mo's mad for menthol", accompanied by imagery of a male model with a bald head and thick eyebrows.

Mo Farah took to Twitter to complain about the potential confusion, concerned people would think he had endorsed the product. Ultimately, the Advertising Standards Authority (ASA) ruled that the advert did indeed give a "misleading impression": while 'Mo' is a common moniker, the model's head and eyebrows were "reminiscent" enough of the athlete that viewers would associate the advert with Mo Farah, one of the most well-known figures in the UK by that name.

Herein lies the crux: while the image wasn't a deepfake, it was similar enough to confuse viewers, and the same principle applies to deepfakes. If it's misleading enough to confuse someone else, you have grounds to consider litigation.

Conversely, as a business, you need to consider all potential interpretations of imagery to ensure you can use generative AI without getting caught up in legal complications. Just because the stock gen-AI photo you're using to head up a LinkedIn article seems generic doesn't mean it is. Voice, gestures, and context are all factors taken into consideration, but ultimately the question is: did it confuse viewers?

Current Legislation around Deepfakes

To date, there is no single piece of legislation within the UK that provides blanket protection against deepfakes. Instead, individuals are protected under an assortment of regulations depending on the nature of the deepfake.

Online Safety Act

The Online Safety Act has one main provision against deepfakes. While it has been illegal to share intimate or explicit images of someone without their consent since 2015, the Online Safety Act extends this, making it illegal also to share intimate AI-generated images of someone without their consent. Crucially, unlike with genuine intimate content, you do not need to prove that the creator intended to cause distress in the case of deepfake imagery, although it is considered a further serious offence if a sexual intention can be demonstrated.

It's vital to note that this ruling does not criminalise the creation of an explicit deepfake, only the sharing. The Online Safety Act is also primarily focused on removing offensive content; many are concerned that its provisions will prove ineffective while the creation of intimate deepfakes remains unregulated and perpetrators escape punishment.

Advertising Standards Authority

The ASA steps in when advertisements contain misleading content. In terms of deepfakes, this mostly arises with scam adverts or clickbait; it's unlikely to affect everyday people, and those running businesses should know not to use celebrities, who have usually trademarked their likeness, gestures, and voice.

More interesting, however, is the grey area of similarity that deepfakes are set to exacerbate. One thing the Mo Farah case particularly highlighted was that a likeness doesn't need to be identical; it just needs to confuse the viewer enough to create reasonable doubt.

With generative AI drawing from copyrighted material, there is now a danger that businesses could inadvertently infringe ASA rules by using gen-AI output that happens to be similar enough to a real-life celebrity to cause confusion. Intent in this case isn't relevant: all that matters is whether viewers have been misled, and it could land businesses in hot water with the ASA.

Civil Law

The final recourse for UK citizens is under civil law. While there is no specific legislation addressing deepfakes, individuals could seek recourse in the following situations:

  • Privacy: a deepfake could be considered a violation of one's right to privacy, especially if the victim can prove the creator used personal data – protected under UK GDPR and the Data Protection Act 2018 – to create it.
  • Harassment: multiple deepfakes made with intent to cause alarm or distress could form the basis of a harassment suit.
  • Defamation: if a deepfake has an adverse effect on one's reputation by portraying them in a false or damaging way, there is the potential for a defamation case.

In such cases, an individual would be best advised to seek legal guidance on how to proceed.

Future of Deepfake Legislation

So, where does legislation go from here? Hopefully, forward. The UK government took a considerable step back from the issue in the run-up to the election, but with the EU AI Act leading the way, it's likely we'll see new regulation coming down the track soon.

The greater issue, however, is enforcement. The three routes we've discussed above – the Online Safety Act, the Advertising Standards Authority, and UK civil law – all centre on regulating output on a case-by-case basis. Currently, the UK has no regulation in place, or proposals, to impose greater safety measures around the programmes themselves. In fact, many are celebrating the lack of regulation in the UK following the EU AI Act, hoping it leads to a boom in AI industries.

Current strategies, however, remain inefficient. Victims require legal support to make any headway in cases, and creators continue to escape repercussions. Widespread control of the technology is similarly impractical – one need only look at GDPR to get a sense of that. Even efforts to do so, such as the EU AI Act, fail to tackle the problem, with open-source generative technologies remaining completely unregulated.

It appears that an independent adjudicator will be required – an Ofcom for AI – but how independent, or effective, this will prove remains to be seen. Let's just hope that the new Government manages to strike some kind of balance between industry, personal protection, and business innovation.
