Insight

The State of Deepfake Regulations: What Businesses Need to Know

Gabe Regan

VP of Human Engagement

Updated January 2026

In just a few years, deepfake regulation has accelerated from a handful of proposals to hundreds of active bills and laws worldwide.

The European Union is setting the pace with its AI Act, while U.S. states and countries worldwide continue to introduce new AI bills and regulatory frameworks.

The fragmented nature of these regulatory and legislative frameworks places a heavy burden on companies, which are under pressure to play both offense and defense through targeted compliance strategies.

How Europe is Setting the Standard in AI Regulation

Europe is taking a structured approach to AI regulation, pairing enforceable obligations with experimentation to address risk while supporting innovation.

Clear Definitions and Disclosure Requirements Under the EU AI Act

The EU AI Act defines a deepfake as AI-generated or manipulated image, audio, or video content that resembles real people, objects, places, entities, or events, and falsely appears authentic or truthful. Anyone deploying an AI system to develop deepfake content must clearly and distinguishably disclose that the content is artificially generated or manipulated.

Providers must also mark AI system outputs in a machine-readable format so they are detectable as artificially generated or manipulated. Limited disclosure requirements still apply when deepfake content is for art, satire, fiction, or similar creative works.
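
The Act does not prescribe a single technical standard for this marking. As a purely illustrative sketch, the Python snippet below embeds a machine-readable disclosure in a PNG's metadata using Pillow; the field names are hypothetical, and real deployments typically rely on provenance standards such as C2PA (Content Credentials) combined with watermarking.

```python
# Minimal sketch: attaching a machine-readable "AI-generated" disclosure to a PNG
# via Pillow. The metadata keys below are hypothetical illustrations, not a format
# mandated by the EU AI Act; production systems typically use provenance standards
# such as C2PA (Content Credentials) plus robust watermarking.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image and embed a machine-readable provenance label."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical key
    metadata.add_text("generator", generator)  # hypothetical key
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=metadata)

if __name__ == "__main__":
    label_as_ai_generated("output.png", "output_labeled.png", "example-model-v1")
```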

The consequences of noncompliance under the EU Artificial Intelligence Act are significant. Penalties can reach as much as €35 million or 7 percent of a company’s total worldwide annual turnover from the previous financial year, whichever is higher.

Platform Accountability Under the UK Online Safety Act

The UK Online Safety Act establishes a baseline of responsibility for online services that host, recommend, or distribute user-generated content. Platforms are required to remove illegal pornographic content, including non-consensual intimate images and AI-generated sexual imagery, once notified. These requirements reinforce expectations for rapid takedown and place responsibility squarely on platforms rather than users.

Regulatory Experimentation Through the UK AI Growth Lab

Alongside enforcement-focused regulation, the UK is exploring adaptive regulatory models. On October 21, 2025, the Department for Science, Innovation, and Technology launched a consultation on a proposed UK AI Growth Lab. The initiative would introduce regulatory sandboxes to test AI systems under controlled conditions. Findings from these pilots could inform future guidance or legislation, while preserving core safeguards.

Public Awareness and Consumer Protection Initiatives

Regulators increasingly pair enforcement actions with public education. In 2025, UK policing bodies and mobile operator EE launched joint guidance and education campaigns to help families identify AI-generated content and AI-enabled scams. This effort reflects how deepfakes are now treated as a mainstream consumer protection and public safety issue, requiring coordination between regulators, law enforcement, and trusted consumer-facing organizations.

How Asia Is Approaching AI Governance

Beyond Europe, other jurisdictions are expanding deepfake regulations. Japan has criminalized non‑consensual intimate images, whether real or AI-generated, and protects personality rights under laws governing private sexual content, with criminal penalties for violators. Meanwhile, China’s regulations require all AI-generated content to be clearly labeled, both visibly and in metadata. Regulations also restrict the use of AI to generate news content and prohibit unlicensed providers from publishing AI-generated material.

In December 2025, Taiwan enacted the Artificial Intelligence Basic Act. It prohibits AI applications that infringe on personal safety, freedom, property rights, or privacy, or that threaten social order, national security, or environmental sustainability.

These developments underscore shared priorities across Asia: transparency, consent, and rapid content takedown requirements. Businesses operating internationally must prepare for overlapping but distinct regulatory frameworks. 

Legislative Efforts at the U.S. Federal Level

In the United States, AI regulation remains fragmented. While federal laws address specific forms of AI-related harm, there is still no comprehensive framework governing deepfakes or synthetic media.

The TAKE IT DOWN Act, signed into law on May 19, 2025, makes it a crime to knowingly publish or threaten to publish non-consensual intimate imagery, including AI-generated deepfakes. Penalties include fines and up to three years in prison. Online services that host or distribute user-generated content must remove such content within 48 hours of receiving notice and make reasonable efforts to delete copies.

The Department of Homeland Security’s AI framework, released in November 2024, offers recommendations for AI safety and security across all U.S. critical infrastructure sectors, including financial services, healthcare, and energy. While it does not explicitly designate sectors for enhanced deepfake protections, it sets baseline expectations for the responsible deployment of AI and underscores the need to address a broad range of AI-related risks, including deepfake threats. The framework reflects growing concern about AI risks to essential services and encourages voluntary adoption of best practices.

Beyond enacted laws, several federal bills signal how U.S. policymakers are approaching deepfake risks across privacy, elections, fraud, and consumer protection. While none have yet established a comprehensive framework, together they reflect growing bipartisan concern over manipulated and generated media.

Bills currently under consideration include:

  • DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits) would allow victims of non-consensual deepfake pornography to sue perpetrators in civil court and pursue damages tied to violations of consent and privacy. The U.S. Senate passed the bill unanimously in July 2024 and reintroduced it in May 2025. It now awaits presidential action.
  • Protect Elections from Deceptive AI Act: Introduced in the Senate on March 31, 2025, this bill would prohibit the distribution of materially deceptive AI-generated audio or visual content about federal candidates with the intent to influence elections or solicit funds.
  • AI Fraud Deterrence Act: Introduced in the U.S. House of Representatives during the 119th Congress on November 25, 2025, the bill would increase penalties for existing financial crimes and for impersonating federal officials when AI is used to facilitate those offenses.
  • NO FAKES Act: Introduced in the Senate on April 9, 2025, this proposal would make it illegal to create or distribute unauthorized AI-generated replicas of a person’s voice or likeness, with exceptions for satire, news, and commentary.
  • DEEP FAKES Accountability Act would require creators of AI-generated deepfake audio, video, or images to clearly label or watermark such content. The bill was introduced in the U.S. House of Representatives on September 20, 2023, but has not advanced beyond the committee referral stage.
  • Protecting Consumers from Deceptive AI Act was introduced in the U.S. House of Representatives on March 21, 2024. It directs the National Institute of Standards and Technology to develop standards for labeling AI-generated content and would require generative AI platforms and online services to apply machine-readable disclosures to AI-generated audio and visual content.

AI Legislation and Policy Developments at the State Level 

While approaches vary, state activity is beginning to cluster around a small number of recurring policy priorities.

AI Regulation for Elections and Political Content

States are moving quickly to regulate AI-generated political content, with a strong focus on transparency. California has enacted laws requiring disclaimers on AI-generated political advertisements and mandating that platforms remove deceptive political content, while New York passed a similar disclosure law in April 2024.

However, these efforts face significant legal headwinds. In August 2025, a federal judge struck down one of the California laws (AB 2655), citing conflicts with Section 230, and signaled constitutional concerns with a second measure following legal challenges from major tech platforms. Opposition may continue into 2026: The Wall Street Journal reports that major technology investors and executives are assembling more than $100 million to oppose stricter state-level AI rules.

Non-Consensual AI-Generated Content

Another primary focus is non-consensual AI-generated imagery, particularly sexually explicit content. California expanded protections against non-consensual AI-generated sexual content involving both minors and adults. Although the legislature passed SB 11, the Artificial Intelligence Abuse Act, Governor Gavin Newsom vetoed it in October 2025. The bill would have introduced consumer warnings for AI systems capable of creating deepfake content and strengthened civil and criminal liability.

New York’s Hinchey law, enacted in 2023, criminalizes the creation or distribution of sexually explicit AI-generated content without consent and provides victims with a private right of action. Minnesota has updated its criminal code to penalize non-consensual AI-generated political and intimate content, with offenses classified as misdemeanors or felonies.

Likeness, Voice, and Digital Replica Protections

Several states are addressing the use of AI to replicate a person’s identity. New York’s digital replica law requires written consent, contractual clarity, and compensation when AI is used to recreate an individual’s likeness. Tennessee’s ELVIS Act, effective July 1, 2024, establishes civil remedies for the unauthorized use of a person’s voice or likeness in AI-generated content. These laws reflect growing concern about identity misuse as voice and likeness replication becomes more accessible.

Risk-Based AI and Consumer Protection Laws

Some states are beginning to regulate AI systems based on risk and impact rather than specific use cases. Colorado’s AI Act requires risk and impact assessments for high-risk AI systems, with enforcement beginning February 1, 2026. In June 2025, Texas Gov. Greg Abbott signed H.B. 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), into law. The law could have wide-ranging effects on the use of biometric technologies across the security industry.

Other states are introducing consumer-facing requirements, including disclosures when interacting with AI systems and opt-out mechanisms for certain automated decisions. The Stop Deepfakes Act, introduced in New York in March 2025, would require AI-generated content to carry traceable metadata and is pending in committee.
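
Requirements like traceable metadata also imply a verification step for services receiving content. Under the same illustrative assumptions as the labeling sketch above (hypothetical metadata keys in PNG text chunks, not a standard defined by any of these bills), a platform might check uploads for a disclosure label as one signal in its review workflow; the absence of a label does not establish that content is authentic.

```python
# Illustrative platform-side check for the hypothetical "ai_generated" metadata key
# from the earlier labeling sketch. A missing label is only a signal, not proof
# that the content is authentic or human-made.
from PIL import Image

def has_ai_disclosure(path: str) -> bool:
    """Return True if the image carries the hypothetical machine-readable label."""
    with Image.open(path) as img:
        text_chunks = getattr(img, "text", {})  # PNG tEXt/iTXt chunks, if present
        return text_chunks.get("ai_generated", "").lower() == "true"

if __name__ == "__main__":
    uploads = ["upload1.png", "upload2.png"]  # placeholder file names
    unlabeled = [p for p in uploads if not has_ai_disclosure(p)]
    print("Uploads without a machine-readable disclosure:", unlabeled)
```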

Key Patterns Emerging Across State AI Legislation

Despite differences in scope and enforcement, state-level AI laws show several consistent patterns:

  • Election integrity: disclosure requirements and timing restrictions
  • Criminal liability: AI-enabled harassment, exploitation, and impersonation
  • Consumer protection: transparency and consent obligations
  • Platform duties: content moderation and takedown requirements

Taken together, these efforts reflect a broader shift toward regulating AI based on real-world impact and harm, rather than the underlying technology itself.

AI Regulation Implications Across Industries

Emerging deepfake regulations are creating sector-specific obligations, while also introducing common compliance expectations across industries:

  • Financial services: Enhanced KYC controls, stronger authentication requirements, and potential liability for losses tied to deepfake-enabled fraud.
  • Healthcare: HIPAA compliance for synthetic medical content and consent requirements for AI-generated patient communications.
  • Technology platforms: Content detection and takedown obligations, user verification requirements, and increased liability exposure for hosting deepfakes.
  • Media and entertainment: Expanding personality rights, union negotiations over digital replicas, and emerging labeling or watermarking requirements.
  • Retail and e-commerce: Disclosure obligations when customers interact with AI-supported services and labeling requirements for AI-generated influencers or synthetic models.
  • Education: Restrictions on deepfakes involving minors, updated academic and disciplinary policies, and stronger protections against synthetic harassment.

Across all sectors, emerging AI regulations create common requirements: incident response planning for deepfake threats, employee training on detection methods, and vendor due diligence for AI tools that could generate synthetic content. 

How to Prepare for What Comes Next in AI Regulation

As regulatory pressure around deepfakes grows, companies need to focus on three core compliance priorities: detection capabilities, clear disclosure policies, and incident response plans tailored to synthetic media threats. Staying ahead means leveraging tools such as industry information-sharing networks (ISACs), regulatory monitoring platforms, and enterprise AI governance frameworks. Early movers gain a competitive edge by reducing risk and influencing emerging standards. These efforts also strengthen brand trust.

Reality Defender helps companies address deepfake compliance through advanced detection technology and regulatory expertise. Contact us today to learn more.