
Insight


Which Companies Must Comply with the EU AI Act's Deepfake Requirements?

Scott Steinhardt

The EU AI Act's deepfake labeling requirements, taking effect on August 2, 2026, cast a much wider net than many organizations realize. While tech companies grab headlines, the regulation's transparency rules apply to any company that creates, uses, or distributes AI-generated content—regardless of industry or size.

Understanding who must comply isn't just about avoiding penalties—up to €15 million or 3% of global annual turnover for transparency violations under the Act's tiered fine structure. It's about recognizing that deepfake regulation touches virtually every sector of the modern economy.

The Two Categories of Obligated Companies

The EU AI Act creates obligations for two distinct groups handling synthetic content.

AI Providers are companies that develop or offer AI systems capable of generating synthetic images, audio, or video. This includes generative AI startups, major tech companies, and AI research labs that create the foundational technology. These organizations must ensure their systems mark outputs in a machine-readable format so the content is detectable as artificially generated.

Deployers and Distributors represent the broader category that catches many organizations off guard. Any business that uses, publishes, or distributes AI-generated content to the public falls under these requirements. This means the regulation extends far beyond the companies that create AI technology to encompass virtually any organization that uses it.

Industries Facing Compliance Requirements

Technology and Social Media companies face the most obvious obligations. Platforms like Facebook, Instagram, TikTok, and YouTube must implement systems to identify and label deepfakes shared across their services. These platforms must develop technical infrastructure to detect unlabeled synthetic content while ensuring properly labeled content maintains its identification markers.

Advertising and Marketing represents one of the most overlooked sectors. Agencies creating AI-generated advertisements, promotional videos, or marketing materials must label this content appropriately. Marketing networks distributing AI-enhanced campaigns across multiple channels need systems to maintain labeling integrity throughout the distribution process.

Media and Entertainment organizations using AI for content creation face comprehensive requirements. News outlets employing AI for graphics, video enhancement, or synthetic journalism must clearly identify such content. Film studios and streaming services using AI for visual effects, voice synthesis, or content modification need labeling systems that work across their production and distribution pipelines.

Gaming companies producing titles with AI-generated characters, environments, or narrative elements must consider how synthetic content labeling applies to their products. This includes not just obvious cases like AI-generated cutscenes, but also dynamic content creation where AI generates game elements in real-time.

Telecommunications providers operating messaging platforms, video calling services, or content distribution networks need systems to identify and maintain labels on AI-generated content passing through their infrastructure.

Beyond Obvious Industries

The regulation's broad language captures organizations that might not immediately consider themselves subject to deepfake requirements. Corporate Communications teams using AI for internal videos, training materials, or executive presentations must label synthetic content. Educational Institutions employing AI for instructional videos, simulated scenarios, or digital learning content face similar obligations.

Financial Services firms using AI for customer communication, training simulations, or marketing materials must ensure proper labeling. Even Healthcare Organizations employing AI for patient education materials, training simulations, or public health communications need compliance strategies.

Geographic Reach and Third-Country Impact

The EU AI Act's territorial scope extends beyond European borders. Companies based outside the EU must comply if their AI systems or synthetic content reach EU users. This means global organizations cannot simply ignore the regulation based on their headquarters location.

Third-country providers whose AI-generated content appears in EU markets—whether through direct distribution, platform sharing, or indirect circulation—face the same labeling obligations as EU-based companies.

Preparing for Comprehensive Compliance

Organizations across these industries must recognize that deepfake compliance isn't a one-time technical implementation. The regulation requires ongoing vigilance as AI technology evolves and new forms of synthetic content emerge.

Successful compliance strategies address the full lifecycle of AI-generated content within an organization. This includes creation processes, distribution channels, and long-term content management. Companies must prepare for scenarios where synthetic content appears in their operations regardless of its original source or labeling status.

As the August 2026 deadline approaches, organizations in every sector should evaluate their potential exposure to AI-generated content requirements. The EU AI Act's deepfake provisions represent one of the most broadly applicable aspects of the regulation, touching companies far beyond the technology sector that many assumed would be primarily affected.

Understanding your organization's compliance obligations today prevents costly scrambles tomorrow and positions your company for success in an increasingly AI-powered business environment.
