Tighter regulations are essential to protect against AI impersonations

By Alain Goudey, Associate Dean for Digital at NEOMA Business School

Improperly regulated AI tools now pose a real threat to people’s reputations and identities. In recent years, several high-profile actors and musicians, including Scarlett Johansson and Taylor Swift, have been subjected to fake AI-generated images, videos, and audio clips. 

The implication is that AI has turned personal data into a cloneable asset. A person’s voice, face, expressions, speech patterns, and mannerisms can now be replicated, monetized, and misused in a matter of seconds. 

No wonder then that Matthew McConaughey captured headlines in January for filing trademark applications to restrict unauthorized AI replications of his voice, image, and signature catchphrases, all of which form the bedrock of his personal brand as an actor and celebrity. 

But while Hollywood stars may be the group most visibly affected by this issue, it has the potential to impact anyone and everyone. In fact, the problem is most serious for the many people who are not in a position to take the kind of legal action McConaughey has pursued. 

Addressing the societal challenge  

Robust legislation is needed to protect people’s identities from being cloned without their consent. In Europe, regulations currently cover systems, data governance, and official digital identity, but there is no truly comprehensive framework in place to protect a person’s “sensory identity” – referring to their voice, face, style, and other physical traits. 

For instance, the European Union (EU) AI Act entered into force in 2024, controlling high-risk uses of the technology and introducing transparency obligations for synthetic content and deepfakes. But it is not without flaws. Weak points include lighter obligations for deepfakes classified as “limited risk” and few mechanisms for individuals harmed by AI systems to seek redress. 

Similarly, the Data Act has applied since 2025 and focuses on data sharing and access, but includes a glaring blind spot: what happens when an AI model has already been trained on your visual or vocal footprint? 

An update to the EU’s electronic identification regulation – eIDAS 2.0 – mandates that member states provide secure, interoperable European Digital Identity Wallets (EUDI Wallets) by 2026, allowing citizens to store and share IDs, licenses, and credentials securely across borders. This would strengthen administrative identity verification and user control, but still falls short of a blanket protection on individuals’ sensory identities.  

The market for identity 

It’s essential that countries start implementing legal protections that safeguard people’s sensory identities. Otherwise, a gap could emerge between those who can protect themselves by filing trademark applications or patents and the rest who cannot. 

Everyone has a stake in ensuring that all people have this basic level of protection, as the consequences of AI cloning are potentially serious. 

In a business context, AI deepfakes could pose a cybersecurity risk by fraudulently impersonating the voice or appearance of a CEO or employee; reputations could be damaged by false statements and public appearances; and the spread of misinformation could erode stakeholder and consumer trust, impacting investment and sales revenue. 

Firms would also face the ethical implications of using digital twin technology to create AI avatars of employees. HR complaints could arise if these chatbots exhibit behaviours or sentiments that cross an employee’s personal boundaries. Overarching legislation could prevent many such cases from arising by providing companies with a clear framework for how these digital tools can and should be used.  

Moving beyond the right to be forgotten 

Existing data protection laws often centre the conversation about AI and data security on the right to be forgotten, also called the right to erasure. In essence, this means personal data should be deleted from AI systems under particular circumstances – for instance, when it is no longer necessary for the system to keep it or when an individual’s consent is withdrawn. 

While sound in principle, this is challenging to enforce in practice. Because generative AI systems absorb vast quantities of data during the training process, removing a person’s information from the system typically requires complex “machine unlearning” or retraining. 

Instead of continuing to focus on this approach, regulations should pivot towards the right to non-replication. 

The questions that many people are asking are “who controls my digital doppelganger?” and “what might it be used for?” The right to non-replication would ensure that everyone retains control over their own voice and physical likeness, with sufficient penalties for content that duplicates these qualities without their express consent. 

Some countries, such as Spain, are already moving in this direction. In January, the Spanish government approved draft legislation to restrict AI deepfakes and tighten consent rules around images. The decision was made alongside a concerted EU-wide effort to criminalise non-consensual AI deepfakes containing sexual content by 2027.  

These steps show that governments are moving in the right direction. Now, they need to move faster and broaden the scope of regulation. 
