Future of AI

Authenticity’s fragile frontier in the AI era

“How we move forward in the age of information is going to be the difference between whether we survive or whether we become some kind of f*cked-up dystopia,” Obama told viewers in a BuzzFeed video. Or did he?

Back in 2018, Jordan Peele harnessed deepfakes – computer-generated replications of individuals saying and doing things they never did – to produce an uncannily accurate public service announcement that appeared to come from the former president. The message and the medium unearth a deep-rooted paradox: in acknowledging the existence of such deceptions, does society inadvertently cast doubt upon the authenticity of all that we perceive, even when it aligns with objective reality?

As deepfakes continue to evolve in sophistication, all consumers of information – from individuals to businesses – are being forced to grapple with the ethical implications of such advancements. In China, for instance, the world’s first AI news anchor was modeled on a human newsreader, Xinhua’s Qiu Hao.

Qiu’s digital personage can deliver news around the clock and will read whatever text it is fed. Meanwhile, in 2020, landmark research indicated that 96% of all deepfakes online were pornographic, and that close to 100% of those cases featured women. It’s little wonder that researchers at University College London have ranked deepfakes as the most dangerous form of artificial intelligence (AI) crime.

In the interests of balance, not all applications of deepfakes are nefarious. New use cases are emerging daily, driving accessibility and engagement in education, for instance. In 2019, a UK-based health charity harnessed deepfake technology to have the highly influential David Beckham deliver an anti-malaria message in nine languages. For all of Beckham’s talents, a campaign of such global reach could only have been achieved through technology.

Regardless of how deepfakes are harnessed and by whom, they pose new questions that must be urgently addressed. Truth has become a malleable commodity in the digital age, shaped by keystrokes and algorithms rather than by fact.

The rising prevalence of deepfakes

Navigating the blurred lines between truth and fiction is both a burden and a challenge. Our polling of UK residents via the Prolific platform found that over half of UK adults (51%) regularly encounter deepfakes on social media. This rose to almost three-quarters (73%) of Gen Z adults (aged 18 to 24), a gap that could reflect younger users’ greater confidence in telling fiction from fact, or perhaps simply more time spent online.

As deepfake images of politicians, celebrities, and even members of the general public become more realistic, the risks to personal safety and privacy online grow, while trust in news diminishes. Across all UK adults, the top concerns about deepfakes were scams (34%), the spread of disinformation (34%), and election interference (17%). However, concerns varied significantly between age groups. Those aged over 65 were the most worried about scams enabled by deepfake technologies, while Gen Z adults said that reputational damage to celebrities, including actors and artists, concerned them more than interference with election outcomes.

Seeing is no longer believing

With deepfake news anchors delivering authentic reports and esteemed politicians appearing to propagate disinformation, the reverberations from deepfakes could shake the foundations of civilization. The ability to tell truth from fiction is a non-negotiable of a healthy society.

For consumers, deepfakes can enable highly convincing scams by making it appear that trusted figures – celebrities, politicians, or company executives – are endorsing fraudulent investments or requesting money transfers. Deepfakes can also be used to create fake videos that defame or humiliate people, spreading harmful misinformation about their actions or statements. And with a pivotal year of elections ahead, deepfake videos could depict political leaders or public figures saying or doing things they never actually said or did, misleading voters and eroding trust in institutions.

Both consumers and businesses must adopt a heightened sense of skepticism and invest in technologies to detect and combat deepfakes, as the proliferation of these manipulations can undermine the integrity of communication, commerce, and democratic processes. Failure to address this challenge could have severe ramifications for personal safety, privacy, and the overall fabric of society.
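What might that detection investment look like in practice? Below is a minimal sketch of automated deepfake triage using the Hugging Face `transformers` image-classification pipeline. The model identifier is a placeholder rather than a specific endorsed detector, and any production system would need evaluation well beyond a single score threshold.

```python
# Minimal sketch: screening an image with an off-the-shelf classifier.
# Assumes `pip install transformers pillow torch`. The model id below is
# a placeholder -- swap in any published deepfake-detection checkpoint.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/deepfake-image-detector",  # hypothetical model id
)

predictions = detector("suspect_frame.jpg")
# Typical output shape: [{"label": "fake", "score": 0.97},
#                        {"label": "real", "score": 0.03}]
for p in predictions:
    print(f"{p['label']}: {p['score']:.1%}")

# Flag for human review rather than auto-blocking: detector scores are
# probabilistic and degrade on media unlike their training data.
top = max(predictions, key=lambda p: p["score"])
if top["label"] == "fake" and top["score"] > 0.9:
    print("High-confidence synthetic media -- escalate to a human reviewer.")
```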

Building ethics into the core of AI

Deepfakes are the latest technological advance to raise important questions about transparency and accountability in the development and deployment of AI. The answer is to bring ethics to the forefront.

Building ethics into the core of AI requires a holistic approach that goes beyond the initial training data. Such a complex task demands robust governance, with principles guiding responsible practices across data sourcing, model development, fine-tuning, deployment, and monitoring. For fine-tuning, this means prioritizing consent, privacy, fair labor, and the psychological safety and well-being of the human AI taskers involved.

This is a key step in mitigating harmful biases and ensuring safety constraints and transparency. Cultivating societally beneficial AI is a multi-stakeholder duty requiring collaboration among ethicists, domain experts, impacted groups, policymakers, and tech providers. In short, ethics must be embedded at every stage – from data collection to model deployment – to unlock AI’s potential while avoiding risks like deepfakes and retaining public trust. Only by institutionalizing ethics can transformative AI be achieved responsibly.

But as the history books show, relying on such self-regulation is not enough, and there is a growing need for transparent and impartial oversight. The EU’s AI Act has sought not to ban deepfakes entirely, but instead to demand transparency from creators. This measure aims to empower consumers with knowledge about the nature of the content they encounter, potentially rendering them less susceptible to deceptive manipulations.
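As an illustration of what machine-readable disclosure could look like – the AI Act does not mandate a specific format, so the key names here are purely hypothetical – a creator might attach a provenance label directly to a file’s metadata:

```python
# Minimal sketch: embedding a disclosure label in a PNG text chunk with
# Pillow. The "ai-generated" and "generator" keys are illustrative only;
# real-world schemes (e.g. C2PA content credentials) are more elaborate.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.open("synthetic_portrait.png")

disclosure = PngInfo()
disclosure.add_text("ai-generated", "true")           # hypothetical key
disclosure.add_text("generator", "example-model-v1")  # hypothetical key
image.save("synthetic_portrait_labeled.png", pnginfo=disclosure)

# A platform or browser plug-in could then surface the label to viewers:
labeled = Image.open("synthetic_portrait_labeled.png")
print(labeled.text.get("ai-generated"))  # -> "true"
```

Of course, metadata like this is trivially stripped or forged, which is precisely the circumvention risk at issue.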

While well-intentioned, transparency alone may prove insufficient in curbing the nefarious potential of deepfakes. If creators find ways to circumvent disclosure requirements, the risks remain stubborn and unresolved. In short, uncertainty persists, particularly regarding legal liability and whether the current regulatory framework can adequately address the ever-evolving threats posed by these digital artifices.

As deepfakes blur boundaries, we’ve reached a societal crossroads. Do we accept deceptive synthetic realities or double down on our commitment to authenticity and integrity? Addressing such a challenge requires redefining our relationship with truth through heightened awareness, continual scrutiny and a pledge to cement ethics into the core of all AI applications.
