Gabe Regan
VP of Human Engagement
Deepfake laws and regulations are gaining momentum, growing from a handful of bills to hundreds of active proposals around the world in just a couple of years.
The European Union is leading the way with its AI Act, and U.S. states are introducing more than 50 AI-related bills per week, including 25 deepfake bills, according to BSA research. Asia-Pacific countries are also developing their own regulatory frameworks.
The fragmented nature of regulatory and legislative frameworks leaves companies with a heavy compliance burden. Given the deep financial risk associated with breaches — with businesses losing nearly $450,000 to deepfakes — companies are under pressure to play offense and defense, including through targeted compliance strategies.
The EU AI Act defines a deepfake as AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful. Anyone deploying an AI system to generate deepfake content must clearly and distinguishably disclose that the content has been artificially generated or manipulated. Providers also need to ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. If the deepfake content is part of art, satire, fiction, or similar creative work, limited disclosure requirements still apply.
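To make the machine-readable marking requirement concrete, here is a minimal sketch of embedding an AI-generation flag in a PNG's text metadata using Python's Pillow library. The field names are illustrative assumptions, not a prescribed standard; the AI Act does not mandate a specific format, and production systems are more likely to rely on provenance standards such as C2PA content credentials or robust watermarking.

```python
# Minimal sketch: embed and read back a machine-readable "AI-generated" flag
# in PNG text metadata. Field names ("ai_generated", "generator") are
# illustrative assumptions, not a regulatory or industry standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with text chunks declaring it AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical flag
    metadata.add_text("generator", "example-model-v1")  # hypothetical provenance field
    image.save(dst_path, pnginfo=metadata)

def is_labeled_ai_generated(path: str) -> bool:
    """Check whether a PNG carries the illustrative AI-generated flag."""
    return Image.open(path).text.get("ai_generated") == "true"
```

Plain metadata tags are easy to strip through re-encoding or screenshots, which is one reason provenance and watermarking approaches are gaining traction alongside simple labels.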
Beyond Europe, other jurisdictions are expanding deepfake regulations. Japan has criminalized non-consensual intimate imagery and protects personality rights under its laws governing private sexual content. The U.K. Online Safety Act requires platforms to remove or disable access to illegal pornographic content, including non-consensual intimate images and deepfake pornography, as soon as they are notified. Meanwhile, China's regulations require all AI-generated content to be clearly labeled, both visibly and in metadata. The use of AI to generate news content is also restricted, with unlicensed providers prohibited from publishing AI-generated news.
These global trends underscore clear priorities: transparency, consent, and rapid content takedown. Businesses operating internationally must prepare for overlapping but distinct regulatory frameworks. The cost of noncompliance is significant: penalties under the EU AI Act can reach €35 million or, for companies, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
Federal legislation addresses certain aspects of AI harm, but ultimately falls short of comprehensive protection.
The TAKE IT DOWN Act, signed into law on May 19, 2025, criminalizes knowingly publishing or threatening to publish non-consensual intimate imagery, including AI-generated deepfakes. Penalties include fines and up to three years in prison. Covered platforms must remove such content within 48 hours upon notice and make efforts to delete copies.
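To illustrate the 48-hour obligation in operational terms, the short sketch below tracks removal deadlines for incoming notices. The Notice structure and its fields are assumptions made for illustration; the statute does not define any particular data model.

```python
# Minimal sketch: track the 48-hour removal window for takedown notices.
# The Notice structure and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_WINDOW = timedelta(hours=48)  # window after a valid removal request

@dataclass
class Notice:
    content_id: str
    received_at: datetime                  # when the valid notice was received
    removed_at: Optional[datetime] = None  # when the content was taken down

    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline()

# Example: a notice received 50 hours ago with no removal is past the window.
stale = Notice("img-123", datetime.now(timezone.utc) - timedelta(hours=50))
print(stale.is_overdue())  # True
```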
Five additional federal bills addressing deepfakes are pending in Congress.
The Department of Homeland Security’s AI framework, released in November 2024, offers recommendations for AI safety and security across all U.S. critical infrastructure sectors, including financial services, healthcare and energy. While it does not explicitly designate sectors for enhanced deepfake protections, it establishes baseline expectations for responsible AI deployment and highlights the importance of addressing a broad range of AI-related risks, including potential threats from deepfakes. It reflects growing concern about AI risks to essential services and encourages voluntary adoption of best practices.
As of mid-2025, nearly every U.S. state has active AI-related bills, with hundreds of proposed measures introduced in state legislatures this year.
California Governor Gavin Newsom signed multiple deepfake-related laws last year, including mandates for disclaimers on AI-generated political ads, requirements for platforms to remove deceptive political content, and stronger protections against non-consensual AI-generated sexual imagery involving both minors and adults. SB 11, the Artificial Intelligence Abuse Act, would require consumer warnings on AI systems capable of creating deepfake content and strengthen civil and criminal liability for unauthorized or harmful deepfakes; the bill is currently pending.
Among recently passed New York laws, the state's digital replica law requires written consent, clear contracts and compensation for using a person's likeness created with AI. The state's law requiring disclosures for AI-generated political content, passed in April 2024, mandates clear labeling of any AI-altered political material that could be mistaken for authentic. The Hinchey law, enacted in 2023, makes it a crime to create or share sexually explicit deepfakes of real people without their consent and gives victims the right to sue. The Stop Deepfakes Act, introduced in March 2025, would require AI-generated content to carry traceable metadata and is pending in committee.
Several other states have enacted deepfake and AI laws. For example, Tennessee's ELVIS Act, effective July 1, 2024, provides civil remedies for unauthorized use of a person's voice or likeness in AI-generated content. Minnesota's updated criminal code now penalizes non-consensual deepfakes, including intimate and political ones, with gross misdemeanor or felony charges. Colorado's AI Act mandates risk and impact assessments for high-risk AI systems starting February 1, 2026.
Clear patterns are emerging across state legislation. Election protection measures include mandatory disclaimers on political content and blackout periods before elections. Criminal code updates create felony classifications for deepfake-enabled fraud, harassment and child exploitation. Consumer protection provisions establish "right to know" requirements when interacting with AI and opt-out mechanisms. Platform liability rules impose content moderation requirements and specific takedown procedures.
Businesses face unique challenges under emerging AI deepfake laws. Financial services firms must implement enhanced KYC requirements and multi-factor authentication mandates, and may bear liability for deepfake fraud losses, according to FS-ISAC guidance. Healthcare organizations must ensure HIPAA compliance when handling synthetic medical content and obtain patient consent when using AI-generated communications.
Technology platforms face content detection obligations, user verification requirements, and potential liability for hosted deepfakes. Media and entertainment industries are grappling with expanding personality rights, union negotiations over digital replicas, and watermarking requirements. Retail and e-commerce companies are expected to disclose when customers are interacting with AI-supported services and to clearly label AI-generated influencers or synthetic models in marketing content. Educational institutions face new restrictions on deepfakes involving minors, are updating academic and disciplinary policies to address AI-generated content, and are boosting protections for educators against synthetic harassment.
Across all sectors, emerging business AI regulations create common requirements: incident response planning specific to deepfake threats, employee training on detection methods, and vendor due diligence for any AI tools that could generate synthetic content.
The regulatory timeline is accelerating. Enforcement of the EU AI Act began this year, a wave of U.S. state laws is taking effect, and additional federal legislation is under consideration. This compressed schedule makes immediate action essential.
As regulatory pressure around deepfakes grows, companies need to focus on three core compliance priorities: detection capabilities, clear disclosure policies and incident response plans tailored to synthetic media threats. Staying ahead means using tools like industry Information Sharing and Analysis Centers (ISACs), regulatory monitoring platforms and enterprise AI governance frameworks. Early movers gain a competitive edge by reducing risk, influencing emerging standards and strengthening brand trust.
The wave of deepfake regulations is here, and businesses need expert guidance to navigate complex requirements. Reality Defender helps companies address deepfake compliance through advanced detection technology and regulatory expertise. Contact us today for a regulatory readiness assessment.