
Deepfake technology has emerged as AI's latest 'Pandora's box'. No longer limited to parodic content (who could forget the Pope sporting Moncler?), generative AI is now being actively weaponised, from misleading political deepfakes and clickbait celebrity advertisements to schoolchildren creating explicit deepfake images of classmates.
As the capabilities of AI technology race ahead of regulation, many are growing concerned about the very real threat it poses. New legislation is coming in, but much of it is too narrow or too vague to protect people comprehensively. On the flip side, these new rules have implications that could easily catch out professionals trying to use generative AI in legitimate ways.
So, what legal protection currently exists around deepfake technologies, and what behaviours are prohibited?
The Varieties of Visage
First, it's important to define what makes a deepfake a deepfake. After all, similarities exist in nature (there's the old adage that seven people in the world look like you), but at what degree of similarity are you protected by regulation, and where can you slip up as a business?
A useful example is the 2019 ruling against vape company Diamond Mist. The business's adverts included one with the strapline 'Mo's mad for menthol', accompanied by imagery of a male model with a bald head and thick eyebrows.
Mo Farah took to Twitter to complain about the potential confusion, concerned people would think he had endorsed the product. Ultimately, the Advertising Standards Authority (ASA) ruled that the advert did indeed give a 'misleading impression': while 'Mo' is a common moniker, the model's head and eyebrows were 'reminiscent' enough of the athlete that viewers would associate the advert with Mo Farah, one of the most well-known figures in the UK by that name.
Herein lies the crux: while the image wasn't a deepfake, it was similar enough to confuse viewers, and the same principle applies to deepfakes. If an image is misleading enough to confuse someone else, you have grounds to consider litigation.
Conversely, as a business, you need to consider all potential interpretations of imagery to ensure you can use generative AI without getting caught up in legal complications. Just because the stock gen-AI photo you're using to head up a LinkedIn article seems generic doesn't mean it is. Voice, gestures, and context are all factors taken into consideration, but ultimately the question is: did it confuse viewers?
Current Legislation around Deepfakes
To date, there is no single piece of legislation within the UK that provides blanket protection against deepfakes. Instead, individuals are protected under an assortment of regulations depending on the nature of the deepfake.
Online Safety Act
The Online Safety Act has one main provision against deepfakes. While it has been illegal to share intimate or explicit images of someone without their consent since 2015, the Online Safety Act extends this to make it illegal to share intimate AI-generated images of someone without their consent. Crucially, unlike with genuine intimate content, you do not need to prove that the creator intended to cause distress in the case of deepfake imagery, although it is considered a further, more serious offence if a sexual intention can be demonstrated.
It's vital to note that this ruling does not criminalise the creation of an explicit deepfake, only the sharing. The Online Safety Act is also primarily focused on removing offensive content; many are concerned that its provisions will prove ineffective while the creation of intimate deepfakes remains unregulated and perpetrators escape punishment.
Advertising Standards Authority
The ASA steps in when advertisements contain misleading content. In terms of deepfakes, this mostly arises in the case of scam adverts or clickbait; it's unlikely to affect everyday people, and those running businesses should already know not to use celebrities, who have usually trademarked their likeness, gestures, and voice, in their advertising.
More interesting, however, is the grey area of similarity that deepfakes are set to exacerbate. One thing the Mo Farah case particularly highlighted is that a likeness doesn't need to be identical; it just needs to confuse the viewer enough to create reasonable doubt.
With generative AI drawing from copyrighted material, there is now a danger that businesses could accidentally infringe ASA regulations by using gen-AI output that is similar enough to real-life celebrities to cause confusion. Intent in this case isn't relevant: all that matters is whether viewers have been misled, and it could land businesses in hot water with the ASA.
Civil Law
The final recourse for UK citizens is under civil law. While there is no specific legislation addressing deepfakes, individuals could seek recourse in the following situations:
- Privacy: a deepfake could be considered a violation of one's right to privacy, especially if the victim can prove the creator used personal data, protected under UK GDPR and the Data Protection Act 2018, to create it.
- Harassment: multiple deepfakes made with intent to cause alarm or distress could form the basis of a harassment claim.
- Defamation: if a deepfake has an adverse effect on one's reputation by portraying them in a false or damaging way, there is the potential for a defamation case.
In such cases, an individual would be best advised to seek legal guidance on how to proceed.
The Future of Deepfake Legislation
So, where does legislation go from here? Hopefully, forward. The UK government took a considerable step back from the issue in the run-up to the election, but with the EU AI Act leading the way, it's likely we'll see new regulation coming down the track soon.
The greater issue, however, is enforcement. The three routes discussed above (the Online Safety Act, the Advertising Standards Authority, and UK civil law) are all centred on regulating output on a case-by-case basis. Currently, the UK has no regulation in place, or proposals, to introduce greater safety measures around the programmes themselves. In fact, many are celebrating the comparative lack of regulation in the UK following the EU AI Act, hoping it leads to a boom in AI industries.
Current strategies, however, remain ineffective. Victims require legal support to make any headway in cases, and creators continue to escape repercussions. Widespread control of the technology is similarly impractical; one only need look at GDPR to get a sense of that. Efforts to do so, such as the EU AI Act, still fail to tackle the problem, with open-source generative technologies remaining completely unregulated.
It appears that an independent adjudicator will be required, an 'Ofcom for AI', but how independent, or effective, this will prove remains to be seen. Let's just hope that the new Government manages to strike some kind of balance between industry, personal protection, and business innovation.



