Questions such as the one appearing as the title of this reflection are inherently difficult to answer, as they presuppose clarity on a host of thorny and poorly defined issues. I’m not talking about explaining what “a problem” means in this context or offering a rarefied definition of an AI deepfake, although both no doubt merit extensive consideration.
No – I’m talking about root questions surrounding our understanding of truth, reality, and authenticity – all of which intersect with “the problem” of deepfakes.
The initial difficulty in addressing AI deepfakes, then, is finding our footing: What stance is one taking in articulating the question itself, given the myriad possibilities and disputes regarding what we mean by more basic notions such as truth and reality?
From a practical standpoint, focusing on such abstruse topics at the start is frustrating. Clearly, there are a number of pressing issues related to the use of AI platforms to produce convincing but false renditions of reality, from creating non-consensual pornography and impersonating individuals to disrupting elections and entire economies. Whether it is the use of AI deepfakes to defraud people of their money or to undermine the institutions of democracy, it seems obvious and irrefutable that the capacity to use AI to manipulate and control public perception presents not just one problem but many.
And indeed, it does.
It does so in part because of what the philosopher Donald Davidson refers to as the principle of charity, that is, the idea that we can only make sense of what others say by interpreting them as largely rational and truthful.
In other words, AI deepfakes are a problem because we tend to give others the benefit of the doubt, believing that people typically tell the truth. If this were not the case – if people were to embrace what is commonly referred to as a “zero-trust mindset” – the resulting suspicion would bog communication down in questions of truth and justification to such an extent that shared understanding would be nearly impossible.
Doubting everything puts one in the position of believing only what can be proven directly, an idea that René Descartes revealed as conceptually impracticable three hundred years before Davidson put pen to paper.
For practicality’s sake, let us grant that AI deepfakes create a host of problems by exploiting our propensity to believe what we see and hear from and about others. It should also be granted that the degree to which deepfakes are indistinguishable from trusted-but-mediated representations of reality (e.g., photographs and recorded speech) makes them unique in the long and sordid history of human deception. Although attempts to deceive and control others are as old as language itself, AI can render falsehoods in ways that make them far more difficult to identify than other forms of deceit and manipulation.
Yet not impossible. While adopting a zero-trust mindset is impractical as a worldview, it can be helpful in targeted contexts. In 2010, John Kindervag introduced the idea of putting cultivated distrust to work for the common good as a way of reorienting cybersecurity toward a new model – one based on the premise that trust must never be taken for granted and must be continuously evaluated.
For cybersecurity, this meant a shift away from more traditional “perimeter-based” defenses of networks and data in favor of the repeated verification of a user’s identity. As stated in a memorandum from the US Office of Management and Budget, this approach represents “… a dramatic paradigm shift in [the] philosophy of how we secure our infrastructure, networks, and data, from verify once at the perimeter to continual verification of each user, device, application, and transaction.”
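To make the shift concrete, a minimal sketch of the “never trust, always verify” pattern might look like the following. This is an illustration only, not any agency’s or vendor’s actual implementation; names such as verify_token, device_is_healthy, and is_authorized are hypothetical stand-ins for an identity provider, a device-posture service, and a policy engine.

```python
# Illustrative sketch of zero trust: every request is re-checked,
# rather than trusted once it is inside the network perimeter.
# All function and field names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Request:
    user_token: str
    device_id: str
    resource: str


def verify_token(token: str) -> bool:
    # In practice: validate signature, expiry, and revocation with an identity provider.
    return token.startswith("valid:")


def device_is_healthy(device_id: str) -> bool:
    # In practice: consult a device-posture service (patch level, encryption, etc.).
    return device_id in {"laptop-042", "phone-117"}


def is_authorized(token: str, resource: str) -> bool:
    # In practice: evaluate fine-grained policy for this user on this resource.
    return resource != "admin-console" or token == "valid:admin"


def handle(request: Request) -> str:
    # Identity, device, and authorization are evaluated on every request.
    if not verify_token(request.user_token):
        return "denied: identity not verified"
    if not device_is_healthy(request.device_id):
        return "denied: device posture failed"
    if not is_authorized(request.user_token, request.resource):
        return "denied: not authorized for this resource"
    return f"granted: {request.resource}"


print(handle(Request("valid:admin", "laptop-042", "admin-console")))
```

The design point is simply that no request is trusted because of where it comes from; each one is re-evaluated on its own terms.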
A variant of this philosophy is familiar to educators through the work of Peter Elbow, whose 1973 book Writing Without Teachers encouraged dialectical engagement with text in the form of the “doubting game” and the “believing game” – orientations that invite scrutiny and skepticism, on the one hand, and empathetic engagement with new perspectives, on the other.
The value of adapting Elbow’s methods to AI-generated content is that it cultivates the ability to discriminate between the pernicious deepfake and the use of AI facsimile for genuinely creative or artistic purposes. Tentatively, we might refer to this adapted version of Elbow’s methodology as the deepfake game.
Whatever the necessary attunements in education, a comparable shift in philosophy is advisable where the broader public’s engagement with media is concerned, given the capacity for AI deepfakes to find their way into the information ecosystem through social media, news, and politics. It is not difficult to imagine how the philosophy of a zero-trust mindset or the deepfake game might be incorporated – in limited ways – into the set of dispositions and skills needed to use AI ethically and effectively, what is commonly referred to as “AI Literacy.”
Whether the context is traditional education or industry upskilling, cultivating the ability to identify and evaluate AI deepfakes is an extension of more traditional critical thinking skills, such as examining evidence, assessing context, and weighing information against an actor’s likely intentions. Adjusting these skills to the AI landscape and combining them with new AI tools for deepfake detection is essential to our collective response to the AI deepfake problem.
While there is a need for individual due diligence in response to AI deepfakes, there is also a strong incentive for companies and public institutions to combat AI deception, namely, the collective value of truth.
Implicitly at least, we know that the ability to distinguish fact from fiction, truth from lies, makes the transactions of daily life possible (this is Davidson’s point writ large): consumers must be able to trust information about the products they buy; businesses must be able to trust the regulatory, financial, and legal frameworks that make transactions possible; governments must be able to trust intelligence information, diplomacy, and the systems that allow society to run – none of which is possible if there is universal cynicism about the distinction between what’s real and what isn’t.
AI deepfakes strike at the heart of the knowledge necessary for the management of daily life, individually and collectively, which places a premium on the preservation and cultivation of truth and accuracy for society.
Fortunately, there are approaches to addressing AI deepfakes that may go a long way toward preserving the integrity of the information we consume – some public, some private. For example, the Federal Trade Commission is in the process of expanding its legal toolkit to protect consumers from AI fraud, which will provide resources to confront aspects of the deepfake problem nationally, and the World Economic Forum’s Global Coalition for Digital Safety is developing a framework for tackling AI deepfakes along with other types of disinformation.
Cross-industry efforts such as the Content Authenticity Initiative (founded by Adobe in 2019) also promise to shore up the values and practices necessary for the pursuit and maintenance of knowledge in an AI-infused society, and there are mechanisms for ensuring that AI-generated content is authenticated and traceable using distributed ledger technology (e.g., blockchain) and decentralized cloud storage. While none of these potential remedies is foolproof, they suggest the possibility of developing a robust, public-private matrix for combating the nefarious uses of AI deepfakes.
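For readers who want a more concrete picture of what “authenticated and traceable” could mean in practice, here is a minimal sketch of content provenance: fingerprint a piece of media, bundle the fingerprint with metadata about its origin, and verify the content against that record later. This is an illustrative assumption, not the Content Authenticity Initiative’s actual specification; in a real system the record would also be cryptographically signed and anchored in a tamper-evident ledger.

```python
# Illustrative sketch of content provenance: fingerprint content and record
# metadata that could later be anchored in a distributed ledger.
# Names and values here are hypothetical placeholders.

import hashlib
import json
from datetime import datetime, timezone


def fingerprint(content: bytes) -> str:
    """SHA-256 digest of the content's raw bytes."""
    return hashlib.sha256(content).hexdigest()


def provenance_record(content: bytes, creator: str, generator: str) -> dict:
    """Bundle the fingerprint with basic provenance metadata."""
    return {
        "sha256": fingerprint(content),
        "creator": creator,
        "generator": generator,  # e.g., the camera or AI model that produced the content
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def verify(content: bytes, record: dict) -> bool:
    """Check that the content still matches its recorded fingerprint."""
    return fingerprint(content) == record["sha256"]


# Placeholder bytes standing in for a media file.
original = b"frame data of an authentic video clip"
record = provenance_record(original, creator="News Desk", generator="camera-original")
print(json.dumps(record, indent=2))

tampered = b"frame data of a manipulated video clip"
print("original verifies:", verify(original, record))   # True
print("tampered verifies:", verify(tampered, record))   # False
```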
Which brings us to a pivotal aspect of the AI deepfake conversation, namely, its potential to revive and invigorate a communal interest in truth and representation. As abstract as these terms are, a consideration of AI deepfakes must be grounded in a shared understanding of how technology shapes the ways we represent the world and each other. We are steeped in simulacra, to be sure, but our continuously mediated engagement with the world has largely obscured the social value of veracity, objectivity, consensus, and fact.
These days, a far smaller premium is placed on the truth and accuracy of information than on its social value, where “social value” is measured by the number of shares and clicks the information receives as a post on social media. In a recent study of how members of Gen Z evaluate information online, the authors note: “Rather than information literacy, our participants sought what we term information sensibility: a socially-informed awareness of the value of information encountered online, relying on folk heuristics of credibility.”
The social value of truth and representation has also been eclipsed by the hard-won epistemic pluralism of our day, i.e., the recognition that these concepts, borne of Enlightenment ideals, have obscured diverse perspectives essential to a more robust and ethical vision of reality. What is in danger of being overlooked, however, is that such ideals remain essential to communal life as grounding or regulatory concepts that encourage healthy social engagement. Without a shared understanding of some aspects of the human condition, social fragmentation hardens into fault lines, factions, and eventual dissolution.
Differently put, our collective response to AI deepfakes may (and perhaps should) invite us to reconsider where the contours of our diverse perspectives on the world intersect, and where we can agree about our interests, values, and aspirations. This is a moral and political point as much as it is an epistemological one: our mediated, digitized social engagements have contributed to unsettling social fragmentation, encouraging feelings of alienation and exacerbating the impression that there is no ground – no reality – on which to rebuild a social contract suitable to the age. Consequently, it may be worth leveraging our worry over AI deepfakes to reconsider foundational questions about the necessity of mediating our commitment to epistemic pluralism with the ideal of e pluribus unum (out of many, one).
It is easy to get lost in such political and moral thickets, and, arguably, more practical considerations should take precedence in the short term. Fortunately, measures for addressing AI deepfakes are already available, and we are likely to see progress toward more discriminating detection technologies before long.
Undoubtedly, such tools will inspire bad actors to greater levels of ingenuity, and the ongoing tug-of-war between truth and deception will persist. In this context, it is the role of education to deepen our collective understanding of how an AI-mediated engagement with the world will shift or usurp conventional practices and norms, just as it is the role of the consumer and citizen to cultivate healthy skepticism and AI literacy. Ultimately, the response to deepfakes will depend on the convergence of interests, both public and private, to ensure that a social premium is placed on honesty over deception and truth over falsity.
This is a perennial challenge for society. AI deepfakes only alter the landscape.