Cyber Security

Why the deepfake problem is not going away anytime soon

How do we eliminate the deepfake threat? Short answer: get rid of Generative AI.

Don't worry, Generative AI isn't going anywhere – but in my view, neither are deepfakes.

Tackling the threat of deepfakes has been a priority for businesses, a persistent headache for policymakers, and a rising concern amongst the general public ever since ChatGPT took the world by storm at the end of 2022.

Since that time, we have seen the development of a whole array of deepfake detection tools and an increasingly fortified line of defence against cybersecurity threats. Yet the deepfake threat continues to grow in both volume and sophistication, haunting the newly unleashed world of AI-powered content creation like some back-to-front digital retelling of Pandora's box.

With the UK's general election taking place today (4th July 2024), the deepfake threat is once again front of mind as citizens go out to vote for their favoured candidates in an atmosphere newly fraught with concerns about the potential impact of deepfake-driven misinformation on public opinion.

Lewis Shields, Director of Dark Ops at ZeroFox, warns of the risk of increasingly sophisticated misinformation campaigns and propaganda on social media channels being used to shape public opinion, influence social trends, and cast real-life events in a particular light.

“Organisations should be on guard for mass disinformation campaigns deployed across social media to spread false information. Although not likely to directly disrupt voting, these campaigns push geopolitical propaganda and disinformation that are likely to influence opinion. As part of this, threat actors are expected to leverage GenAI to create more effective and persuasive content, including highly realistic synthetically-generated images and deepfakes of politicians to discredit and undermine opposition candidates.”

Lewis Shields, Director of Dark Ops at ZeroFox

This is a reality that not just the UK but every political system in the world will have to face and deal with in the coming years, given that deepfakes are essentially the unavoidable dark side of the shiny new coin that is Generative AI.

In this article, we consider the evolving and persistent threat of deepfakes, their impact on the future of public trust, and how we can best handle the permanent presence of deepfakes in our day-to-day lives.

The threat evolves: the power of multimodal AI tools

The use of multimodal content in deepfakes appears to be the latest fad for adversaries, who are utilizing AI tools to generate not only text but also image, audio, and video content to further refine their tactics.

In fact, a recent study by Google DeepMind found that the use of AI to create fake images, videos, and audio content imitating people is almost twice as common as the next most frequent misuse of GenAI tools (the use of chatbots and language-based applications to generate misinformation).

Amir Sadon, Director of IR Research at Sygnia, recognizes the use of multimodal content in deepfakes as a key emerging trend in social engineering attacks, highlighting that scams which utilize a more diverse range of media (i.e. image, video, audio) are significantly more convincing than those involving text only.

“Deep fakes are emerging as a significant new vector for social engineering attacks. Historically, social engineering-based phishing has evolved alongside technological advancements such as social media and mobile phones. AI applications and tools have already proven effective in executing both targeted and broad phishing attacks by generating convincing text that aligns with the target's language, culture and profession. The advancement and widespread availability of deepfake tools enable the creation of synthetic images, videos and audio, which can significantly increase the likelihood of targets clicking on malicious links, downloading harmful files or sharing sensitive information.”

Amir Sadon, Director of IR Research at Sygnia

Similarly, Tim Callan, Chief Experience Officer at Sectigo, warns that the advances in multimodal GenAI tools, which are facilitating increasingly convincing imitations of practically any aspect of someone's persona, may bring about a fundamental shift in how we interact with media and consume content.

“It’s alarming to see the rise ofĀ deepfakeĀ technology now being used to mimic news anchors to spread misinformation. People donā€™t realise how far AI deepfake technology has come and how democratised the technology is. Unfortunately, anything about your physical appearance can be replicated, i.e. eyes, face, voice. This is no longer something that only exists in films, as more people are now capable of creating convincingĀ deepfakes. As the landscape has dramatically changed, people’s mindset when consuming media must shift with it. They must now exercise more caution than ever in what they watch and reconsider the validity of the source and its trustworthiness.ā€

Tim Callan, Chief Experience Officer at Sectigo

This points to one of the most significant long-term dangers of deepfakes, namely, their power to erode public trust in the very organisations and social structures that are there to serve them. In fact, this paradigm has already started to play out, with the proliferation of fake news on social channels over the last few years leading to falling levels of trust in the media industry as a whole.

Now, with multimodal GenAI tools enabling adversaries to unleash deepfakes into the audio-visual realms of content creation more easily than ever before, we must face up to the reality that no form of digital content is immune from the deepfake threat. As such, finding convenient and reliable ways to authenticate, validate, and trace online content is going to become an increasingly important way for organisations to gain public trust and maintain their public image.

Fighting fire with fire

Some experts propose that AI detection tools are our greatest weapon against deepfakes, although the efficacy of such tools is notoriously short-lived due to the rapid development of AI technologies.

Nevertheless, there is a range of tools available to choose from, many of which specialize in a particular type of content or detection method. Some of the most powerful tools to date include:

  • Microsoft's Video Authenticator: this tool was released back in 2020 ahead of the US presidential election and helps users to identify whether images and videos are likely to have been artificially manipulated. It does this by detecting subtle fading and/or greyscale elements within the content that are invisible to the naked eye but indicate a synthetic origin.
  • XceptionNet: released in 2018, this tool utilizes a Deep Learning algorithm trained on over 1,000 videos and specializes in identifying facial manipulation in video content.
  • Sensity.ai (formerly Deeptrace): released earlier this year, this tool is currently one of the most comprehensive deepfake detection tools available, providing real-time, in-depth analysis of image, audio, and video content. It utilizes AI-powered pixel-level analysis, voice analysis, and file forensics techniques to produce an automated report on the authenticity of the content. Targeted at professional and corporate users, it also details which parts of the content are suspicious and why, giving greater visibility to its users.

While these tools can provide fast, convenient, and relatively accurate deepfake detection (Sensity.ai, for example, claims an accuracy rate of 98%), it remains to be seen whether using AI tools to detect deepfake content generated by other AI tools is actually a productive way to deal with the threat. Arguably, this method of ‘fighting fire with fire’ could simply fan the flames of the problem, giving deepfake creators further incentive to keep advancing their technology to evade the capabilities of detection software.

Indeed, a key problem with AI-powered deepfake detection is that it seems to entail a never-ending, machine-versus-machine battle on the cybersecurity battleground. This puts constant pressure on software developers and security teams to keep bringing out better deepfake detection products in order to remain one step ahead of their opponents. It also means that the work of security software developers ultimately plays into the hands of those opponents, who essentially profit from that labour.

Nevertheless, for the moment, detection tools remain one of the most effective means for organisations and individuals to tackle the misinformation threat posed by deepfakes. As such, Sadon argues that the development of such tools remains a priority for cybersecurity teams.

“As AI continues to advance, it is anticipated that threat actors will systematically enhance their capabilities, employing scalable methods as part of what could be considered an ‘industrial revolution’ in cybercrime. A significant challenge for the cybersecurity community will be to develop prevention and detection methods that leverage the same AI technologies, ensuring robust defences against these sophisticated and evolving threats.”

Amir Sadon, Sygnia's Director of IR Research

Meanwhile, Audra Streetman, Security Strategist at Splunk SURGe, suggests that AI deepfake detection tools are not a solution to the deepfake problem, but can provide an additional line of defence against deepfake scams, helping to slow adversaries down and decrease their chances of success.

“I think [that the deepfake threat] is going to entail a cat and mouse game of [security teams] creating technology to detect deepfakes, and then on the flip side, [opponents] advancing technologies to evade that detection. I can see that continuing for the indefinite future. I don't think it's the type of problem where there's a solution to get rid of deep fakes.”

Audra Streetman, Security Strategist at Splunk SURGe

Proving the authenticity of content: how effective is watermarking?

The AI detection tools we looked at above can certainly provide a useful form of defence against deepfake attacks, particularly when combined with human critical thinking. But is the ability to detect deepfakes enough?

Maybe in the short term. However, given that deepfakes keep evolving along with emerging technologies, organisations are going to need to adopt more proactive measures in order to tackle the longer-term threats that deepfakes pose to society, such as the erosion of public trust in the media, and the exacerbation of fake news culture.

Organisations can do this by using digital authentication techniques to source, trace, and verify their content online. These techniques help not only to safeguard intellectual property from artificial manipulation, but also to reassure customers and followers of their content's authenticity.

One of the best-known ways to authenticate content online is to use watermarks. These are essentially a type of digital label that can be embedded into images and videos to indicate the source of that content. Traditionally, watermarks are visible additions to an image, such as a transparent logo placed over the image, or a small logo or sign in its corner.

But as Streetman explains, these traditional types of watermarking do not provide much protection against artificial manipulation, because they can simply be cropped or edited out.

“[These forms of watermarking] are a lot easier to manipulate or remove, because you could just crop the image with a screenshot and then you'll have completely different metadata for that image. So there are workarounds for them. They're not as secure as something that's cryptographic and embedded in the media.”

Audra Streetman, Security Strategist at Splunk SURGe

There are also dozens of watermark editing tools online, which are effective at both removing and adding these more traditional and visible types of watermarks.

Digital signatures have emerged as a more secure form of watermarking. They authenticate content by marking it with a unique digital signature (a bit like an invisible barcode), which can then be cross-checked against an encrypted version of that same signature stored in a tamper-proof digital ledger.
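
To make the idea concrete, here is a minimal Python sketch of the "hash plus ledger" concept. It is not the implementation of C2PA or any commercial product: a plain dictionary stands in for the tamper-proof ledger, a SHA-256 hash stands in for the embedded signature, and the content identifier and file bytes are purely illustrative.

```python
# Toy illustration of ledger-based content verification (standard library only).
import hashlib

ledger = {}  # maps a content identifier to the hash recorded at publication time


def register_content(content_id: str, data: bytes) -> str:
    """Hash the original media and record that hash in the 'ledger'."""
    digest = hashlib.sha256(data).hexdigest()
    ledger[content_id] = digest
    return digest


def verify_content(content_id: str, data: bytes) -> bool:
    """Re-hash a copy of the media and cross-check it against the ledger."""
    return ledger.get(content_id) == hashlib.sha256(data).hexdigest()


original = b"...raw bytes of a published image or video..."
register_content("news-photo-001", original)

print(verify_content("news-photo-001", original))                 # True: untouched copy
print(verify_content("news-photo-001", original + b" tampered"))  # False: content was altered
```

Any edit to the file, however small, changes the hash, so the cross-check fails; real standards such as C2PA additionally sign the hash and embed the resulting manifest in the media itself.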

Streetman highlights one of the emerging standards for digital signatures, C2PA, which utilizes cryptographic asset hashing technology to mark media content. The emergence of this standard has also paved the way for initiatives such as Project Origin, a collaboration between Microsoft and the BBC that aims to tackle disinformation in the news, and the Content Authenticity Initiative, which Adobe launched in order to help systems provide history and context for digital media.

“[The C2PA] is a standard that's really early in its development. It essentially uses cryptographic asset hashing to add a digital signature embedded into media, and so it is tamper-proof. It's similar to the concept of blockchain where you have the signature that's held in a tamper-proof ledger, and then the signature is also held in the media. This provides a way for people to easily trace the origin and authenticity of an image or video. They can use a browser extension that checks the digital signature of media content, and then match that against the ledger to verify its authenticity. This technique can also be used to keep track of where that media originated, and whether or not it was changed or altered over time. It can be really helpful in tackling concerns over fake news and misinformation, especially for media organisations that are concerned about credibility.”

Audra Streetman, Security Strategist at Splunk SURGe

Another similarly secure digital verification system is public key infrastructure (PKI), a well-established security framework used extensively in internet encryption services. It is based on a system of digital certificates, which are used to authenticate a user's identity. These certificates enable users to both encrypt content (using a public key) and decrypt it (using a private key), essentially creating a user access control system for content.
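
As a rough illustration of how a key pair underpins the authentication side of PKI, the sketch below uses the third-party Python 'cryptography' package (an assumption of this example, not something named in the article) to sign a piece of content with a private key and verify it with the matching public key. In a real deployment the public key would be distributed inside a certificate issued by a trusted authority; that step is omitted, and the content bytes are placeholders.

```python
# Sketch: prove a piece of content came from the holder of a private key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

content = b"...bytes of the media file being published..."

# The publisher signs the content with their private key.
signature = private_key.sign(
    content,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the public key (e.g. from the publisher's certificate) can
# confirm the content is unmodified and really came from that publisher.
try:
    public_key.verify(
        signature,
        content,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: content is authentic and unmodified")
except InvalidSignature:
    print("Signature invalid: content may have been tampered with")
```

Because verification relies on keys rather than on anything about a person's face or voice, it is unaffected by how convincing a deepfake looks, which is the point Callan makes below.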

According to Callan, the PKI system has remained an effective and reliable method of authenticating digital identity amidst the increasingly sophisticated wave of deepfakes and the disinformation threat they carry.

“We must look at better and smarter ways to validate the authenticity of what we see. One of the best solutions that can evade the fraudulent use of AI deepfakes is PKI-based authentication. PKI does not rely on biometric data that can be spoofed or faked. By using public and private keys, PKI ensures a high level of security that can withstand threats of disinformation.”

Tim Callan, Chief Experience Officer at Sectigo

However, one of the current limitations of both digital signature watermarking and PKI-based authentication is that they are not yet mainstream enough to really tackle the threat of misinformation that deepfakes pose to everyday social media users and the general public.

They are more useful from an organisational perspective, as they allow businesses and media outlets to prove the authenticity of the content they publish online, and to give the general public assurance that this content is more or less immune to the misinformation threat (unless, for example, their encryption software were compromised in a targeted attack).

The future of content authenticity in the age of deepfakes

If used on a more mainstream level, the authentication methods discussed above could prove particularly effective in the political sphere, helping to minimize the degree of social disruption that deepfakes have the potential to stir up. This is because they can essentially provide the general public with a way to cross-check the political campaign content they might be seeing on social channels against the verified information provided by trusted media organisations and official government websites.

This could become particularly important as deepfakes become a more widely used form of content that does not necessarily indicate misinformation or fake news. For example, India's recent general election saw one of the candidates use deepfake technology in a promotional video for his political campaign. The video contained footage of the candidate's deceased father endorsing him with a virtual blessing from beyond the grave, thus making the fictional nature of the video explicit.

Similarly, a pair of AI-generated videos were used to promote the campaigns of two of India's biggest politicians, Mr. Modi and Mamata Banerjee, showing footage of the two opponents emulating a viral YouTube video of the American rapper Lil Yachty. The move raised some controversy, though election officer Mr. Sen viewed it as harmless political satire, given that “people know this is fake”.

Dubbed ‘soft fakes’, such applications of GenAI tools are blurring the lines of what counts as a deepfake, and demonstrating the long-term potential of the technology for legitimate use-cases, whether in political campaigns, advertising, or educational videos.

“We're seeing examples of candidates using deepfakes in their campaigns, not necessarily to deceive anyone, but just to help get their message out. In some cases, they're creating deep fakes of themselves, almost to not have to record themselves. They can just create a deep fake of themselves doing something, maybe to cut down on their workload a bit as well. I've also seen new examples of companies using text to video generation for their ad campaigns. We could see marketing use-cases for this as well that would save potentially budget on marketing and advertising. So there are use-cases that aren't necessarily malicious where we're seeing this same technology being used.”

Audra Streetman, Security Strategist at Splunk SURGe

Overall, such use-cases indicate that we need to expand the ways in which we think about deepfakes, recognizing them not just as a misinformation threat, but as a product of GenAI's creative potential. However, embracing them as such requires us to also draw clearer lines between what is fake and what is real. This is where content authentication tools such as digital signatures and PKI verification will come to play an increasingly key role in the moderation of digital content going forward.

Author

  • Hannah Algar

    I write about developments in technology and AI, with a focus on its impact on society, and our perception of ourselves and the world around us. I am particularly interested in how AI is transforming the healthcare, environmental, and education sectors. My background is in Linguistics and Classical literature, which has equipped me with skills in critical analysis, research and writing, and in-depth knowledge of language development and linguistic structures. Alongside writing about AI, my passions include history, philosophy, modern art, music, and creative writing.
