Gabe Regan
VP of Human Engagement
Updated January 2026
In just a few years, deepfake regulation has accelerated from a handful of proposals to hundreds of active bills and laws worldwide.
The European Union is setting the pace with its AI Act, while U.S. states and countries worldwide continue to introduce new AI bills and regulatory frameworks.
This fragmented regulatory and legislative landscape places a heavy burden on companies, which are under pressure to play both offense and defense, including through targeted compliance strategies.
Europe is taking a structured approach to AI regulation, pairing enforceable obligations with experimentation to address risk while supporting innovation.
The EU AI Act defines a deepfake as AI-generated or manipulated image, audio, or video content that resembles real people, objects, places, entities, or events, and falsely appears authentic or truthful. Anyone deploying an AI system to develop deepfake content must clearly and distinguishably disclose that the content is artificially generated or manipulated.
Providers must also mark AI system outputs in a machine-readable format so they are detectable as artificially generated or manipulated. Limited disclosure requirements still apply when deepfake content is for art, satire, fiction, or similar creative works.
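The Act does not prescribe a single technical mechanism for machine-readable marking; in practice, providers often turn to provenance metadata standards such as C2PA content credentials. As a minimal, purely illustrative sketch (not an official or standardized scheme), the Python snippet below embeds and reads a simple provenance tag in a PNG text chunk using Pillow. The key and value names are assumptions for illustration only.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    # Embed a simple, machine-readable provenance tag as a PNG text chunk.
    # The key/value names here are hypothetical, for illustration only;
    # they are not an official or standardized labeling scheme.
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")            # hypothetical key
    meta.add_text("generator", "example-model-v1")   # hypothetical identifier
    img.save(dst_path, pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    # Pillow surfaces PNG text chunks in the image's info dictionary.
    return Image.open(path).info.get("ai_generated") == "true"
```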
The consequences of noncompliance under the EU Artificial Intelligence Act are significant. Penalties can reach up to €35 million or 7 percent of a company’s total worldwide annual turnover from the previous financial year, whichever is higher.
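To make the "whichever is higher" ceiling concrete, the short sketch below computes the theoretical maximum fine for a hypothetical turnover figure; the formula simply mirrors the thresholds described above, and the turnover value is invented for illustration.

```python
def eu_ai_act_max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    # Ceiling described above: EUR 35 million or 7% of worldwide annual
    # turnover from the previous financial year, whichever is higher.
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in annual turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor.
print(eu_ai_act_max_penalty_eur(2_000_000_000))  # 140000000.0
```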
The UK Online Safety Act establishes a baseline of responsibility for online services that host, recommend, or distribute user-generated content. Platforms are required to remove illegal pornographic content, including non-consensual intimate images and AI-generated sexual imagery, once notified. These requirements reinforce expectations for rapid takedown and place responsibility squarely on platforms rather than users.
Alongside enforcement-focused regulation, the UK is exploring adaptive regulatory models. On October 21, 2025, the Department for Science, Innovation, and Technology launched a consultation on a proposed UK AI Growth Lab. The initiative would introduce regulatory sandboxes to test AI systems under controlled conditions. Findings from these pilots could inform future guidance or legislation, while preserving core safeguards.
Regulators increasingly pair enforcement actions with public education. In 2025, UK policing bodies and mobile operator EE launched joint guidance and education campaigns to help families identify AI-generated content and AI-enabled scams. This effort reflects how deepfakes are now treated as a mainstream consumer protection and public safety issue, requiring coordination between regulators, law enforcement, and trusted consumer-facing organizations.
Beyond Europe, other jurisdictions are expanding deepfake regulations. Japan has criminalized non‑consensual intimate images, whether real or AI-generated, and protects personality rights under laws governing private sexual content, with criminal penalties for violators. Meanwhile, China’s regulations require all AI-generated content to be clearly labeled, both visibly and in metadata. Regulations also restrict the use of AI to generate news content and prohibit unlicensed providers from publishing AI-generated material.
In December 2025, Taiwan enacted the Artificial Intelligence Basic Act. The law prohibits AI applications that infringe on personal safety, freedom, property rights, or privacy, or that threaten social order, national security, or environmental sustainability.
These developments underscore shared priorities across Asia: transparency, consent, and rapid content takedown requirements. Businesses operating internationally must prepare for overlapping but distinct regulatory frameworks.
In the United States, AI regulation remains fragmented. While federal laws address specific forms of AI-related harm, there is still no comprehensive framework governing deepfakes or synthetic media.
The TAKE IT DOWN Act, signed into law on May 19, 2025, makes it a crime to knowingly publish or threaten to publish non-consensual intimate imagery, including AI-generated deepfakes. Penalties include fines and up to three years in prison. Online services that host or distribute user-generated content must remove such content within 48 hours of receiving notice and make efforts to delete copies.
The Department of Homeland Security’s AI framework, released in November 2024, offers recommendations for AI safety and security across all U.S. critical infrastructure sectors, including financial services, healthcare, and energy. While it does not explicitly designate sectors for enhanced deepfake protections, it sets baseline expectations for the responsible deployment of AI and underscores the need to address a broad range of AI-related risks, including deepfake threats. The framework reflects growing concern about AI risks to essential services and encourages voluntary adoption of best practices.
Beyond enacted laws, several federal bills signal how U.S. policymakers are approaching deepfake risks across privacy, elections, fraud, and consumer protection. While none have yet established a comprehensive framework, together they reflect growing bipartisan concern over manipulated and generated media.
Bills currently under consideration include:
While approaches vary, state activity is beginning to cluster around a small number of recurring policy priorities.
States are moving quickly to regulate AI-generated political content, with a strong focus on transparency. California has enacted laws requiring disclaimers on AI-generated political advertisements and mandating that platforms remove deceptive political content. New York passed a similar disclosure law in April 2024.
However, these efforts face significant legal headwinds. In August 2025, a federal judge struck down one of the California laws (AB 2655), citing conflicts with Section 230, and signaled constitutional concerns with a second measure following legal challenges from major tech platforms. Opposition may continue into 2026, as The Wall Street Journal reports that major technology investors and executives are assembling more than $100 million to oppose stricter state-level AI rules.
Another primary focus is non-consensual AI-generated imagery, particularly sexually explicit content. California expanded protections against non-consensual AI-generated sexual content involving both minors and adults. Although the legislature passed SB 11, the Artificial Intelligence Abuse Act, Governor Gavin Newsom vetoed it in October 2025. The bill would have introduced consumer warnings for AI systems capable of creating deepfake content and strengthened civil and criminal liability.
New York’s Hinchey law, enacted in 2023, criminalizes the creation or distribution of sexually explicit AI-generated content without consent and provides victims with a private right of action. Minnesota has updated its criminal code to penalize non-consensual AI-generated political and intimate content, with offenses classified as misdemeanors or felonies.
Likeness, Voice, and Digital Replica Protections
Several states are addressing the use of AI to replicate a person’s identity. New York’s digital replica law requires written consent, contractual clarity, and compensation when AI is used to recreate an individual’s likeness. Tennessee’s ELVIS Act, effective July 1, 2024, establishes civil remedies for the unauthorized use of a person’s voice or likeness in AI-generated content. These laws reflect growing concern about identity misuse as voice and likeness replication becomes more accessible.
Some states are beginning to regulate AI systems based on risk and impact rather than specific use cases. Colorado’s AI Act requires risk and impact assessments for high-risk AI systems, with enforcement beginning February 1, 2026. In June 2025, Texas Gov. Greg Abbott signed H.B. 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), into law. This move could have wide-ranging effects on the use of biometric technologies across the security industry.
Other states are introducing consumer-facing requirements, including disclosures when interacting with AI systems and opt-out mechanisms for certain automated decisions. The Stop Deepfakes Act, introduced in New York in March 2025, would require AI-generated content to carry traceable metadata and is pending in committee.
Despite differences in scope and enforcement, state-level AI laws show several consistent patterns:
Taken together, these efforts reflect a broader shift toward regulating AI based on real-world impact and harm, rather than the underlying technology itself.
Emerging deepfake regulations are creating sector-specific obligations, while also introducing common compliance expectations across industries:
Across all sectors, emerging AI regulations create common requirements: incident response planning for deepfake threats, employee training on detection methods, and vendor due diligence for AI tools that could generate synthetic content.
As regulatory pressure around deepfakes grows, companies need to focus on three core compliance priorities: detection capabilities, clear disclosure policies, and incident response plans tailored to synthetic media threats. Staying ahead means leveraging tools such as industry information-sharing networks (ISACs), regulatory monitoring platforms, and enterprise AI governance frameworks. Early movers gain a competitive edge by reducing risk and influencing emerging standards. These efforts also strengthen brand trust.
Reality Defender helps companies address deepfake compliance through advanced detection technology and regulatory expertise. Contact us today to learn more.