
EU Legislators and Tech Industry Leaders Take Steps to Facilitate Compliance Under EU AI Act

By Diane Moss, Lowenstein Sandler LLP

Deepfakes can be created and used for many purposes. If a deepfake is properly disclosed as such and its creation does not otherwise violate applicable law, its existence does not generally raise red flags for lawmakers. According to Fortinet, "The guiding principle is whether the existence of the deepfake violates someone's privacy or intellectual property rights, or if the deepfake misleads consumers, the general public, or notable individuals who have not been properly warned."

However, the technology raises red flags around security and privacy, given its power to spread misinformation and influence behavior through deception. Its ability to impersonate people, inflict corporate and personal reputational harm, and serve as a vehicle for scammers to extract large sums of money from individuals and institutions makes it a prime tool for destabilizing public trust.

The number of deepfake files increased dramatically, from roughly 500,000 in 2023 to a projected 8 million in 2025. Along with that proliferation came an alarming 3,000 percent increase in fraud attempts in 2023. In 2024, deepfake incidents occurred at a rate of one every five minutes, and financial losses in the U.S. alone facilitated by generative AI are projected to rise from "$12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32 percent."
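The growth rate implied by those loss projections can be sanity-checked with a quick calculation (treating 2023 to 2027 as four compounding periods):

```python
# Sanity check of the projected generative-AI fraud losses cited above:
# $12.3 billion in 2023 rising to $40 billion by 2027.
start, end = 12.3, 40.0   # USD billions
years = 2027 - 2023       # four compounding periods

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

The figures imply a compound annual growth rate of roughly 34 percent, in line with the approximately 32 percent rate quoted.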

The growing misuse of deepfakes, and the accompanying risk to consumer, privacy, and intellectual property rights, has elicited concern from the international community. Legislators around the globe have responded, instituting laws addressing the rights they have identified as most vulnerable to violation by deepfakes. Enacted and proposed laws regulating deepfakes currently center on:

  • transparency (labeling of AI-generated content);
  • consent (permission from the person whose image was published, replicated, or manipulated by the AI tool); and
  • takedown requirements (imposed on companies and platforms publishing deepfake content).

The EU AI Act (the "Act"), the first comprehensive legal framework addressing the risks associated with AI, became effective in August 2024 and calls for a phased approach to implementation. Since its promulgation, the Act has garnered attention and provoked debate, ranging from its potential to set a precedent for future AI laws around the world to criticism that it overregulates and threatens innovation in the tech space. In response to tech-industry concerns that the Act imposes cumbersome responsibilities, offers vague compliance guidance, and will prevent the EU from keeping competitive pace with countries such as the United States and China, the EU Commission (the "Commission") proposed a "Digital Simplification Package" aimed at streamlining the rules under the Act. The "Digital Omnibus" portion of the package modifies the Act to afford more predictable and efficient application of its requirements.

In addition, the Commission has proactively scheduled meetings with tech industry groups such as Digital Europe and the Information Technology Industry Council to discuss the labeling requirements under the Act, which take effect in August 2026. To ensure the platforms responsible for compliance understand their obligations, regulators are working with tech industry insiders to establish a set of protocols, known as a code of practice, that will guide companies on how to comply with the law. The two sides will negotiate the terms of the voluntary code over the coming months, with a first draft published in December 2025, a second draft in June 2026, and the final version in August 2026.

Since regulatory frameworks vary from country to country, companies that operate internationally must align with the requirements of every jurisdiction in which they have a business presence. For businesses operating in any EU country, attention to the requirements of the EU AI Act is of utmost importance. Even for companies not operating in those locations, the Act is frequently referenced and has the potential to serve as a model for legislation in other jurisdictions, much as the GDPR has exerted a tremendous rippling influence worldwide.

Highlighting the importance all sides place on a smooth pathway to compliance under the Act, competing tech companies such as OpenAI, Microsoft, Google, and Meta have joined the Coalition for Content Provenance and Authenticity ("C2PA"), which has developed a technique for encoding in a file's metadata information about the origin of content and whether it is AI-generated. As AI technology advances, it is likely we will see more innovators using technology itself to simplify regulatory compliance.
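The C2PA approach attaches a signed "manifest" to a media file recording how the content was made. As a rough illustration only (the structure below is simplified and not the normative C2PA manifest format, which is CBOR-based and cryptographically signed), such a provenance record might look like:

```python
import json

# Illustrative sketch of a C2PA-style provenance record. Simplified for
# readability; real manifests are binary (CBOR), signed, and embedded in
# the media file itself. "ExampleTool/1.0" is a hypothetical generator.
manifest = {
    "claim_generator": "ExampleTool/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",  # C2PA assertion recording edit history
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC term used to flag generative-AI output:
                        "digitalSourceType":
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Because the record travels with the file's metadata, downstream platforms can verify whether content is AI-generated, which is the kind of machine-readable labeling the Act's transparency rules contemplate.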
