Insight

The DEFIANCE Act Just Passed the Senate. Here’s What It Means for Survivors of Deepfake Abuse.

Ben Colman

Co-Founder and CEO

Yesterday, the U.S. Senate passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2025 (known as the DEFIANCE Act) with unanimous consent.

For advocates of digital safety and survivors of image-based sexual abuse — which increasingly involves deepfakes — this is a milestone moment. The bill, which now moves to the House of Representatives, represents a significant federal effort to give survivors of non-consensual deepfakes a legal avenue to fight back.

Yet while the legislation is a vital step toward accountability, it is important to understand exactly what the bill does — and where the gaps in protection still lie.

What the DEFIANCE Act Actually Does

The core function of the DEFIANCE Act is to establish a federal civil right of action for survivors.

Currently, if someone creates a sexually explicit AI-generated image of you, your legal recourse is a patchwork of state laws that vary wildly in effectiveness. The DEFIANCE Act changes this by allowing survivors to sue, in federal court, the creators and distributors of these "intimate digital forgeries," as well as those who knowingly possess them with intent to distribute.

Key provisions include:

  • Defining the Threat: The bill codifies the term "intimate digital forgery," covering visual depictions created by software, AI, or machine learning that are "indistinguishable from an authentic visual depiction" to a reasonable person.
  • Financial Damages: Survivors can sue for liquidated damages of up to $150,000, or $250,000 if the deepfake is linked to sexual assault, stalking, or harassment.
  • Privacy Protections: Recognizing the sensitive nature of these cases, the bill allows courts to let plaintiffs use pseudonyms (e.g., "Jane Doe") to protect their identity during proceedings.
  • Statute of Limitations: Survivors have 10 years to file a suit from the moment they discover the violation or turn 18.

This legislation was spurred by high-profile incidents, including the flood of non-consensual deepfakes targeting Taylor Swift on X and the recent controversies surrounding AI chatbots like Grok being used to generate non-consensual imagery.

The "After-the-Fact" Problem

While the DEFIANCE Act is a necessary deterrent, it is, by definition, a reactive measure. It offers a remedy after the harm has occurred, after the images have been created, and after they have likely been shared.

For a survivor, the existence of a lawsuit does not erase the trauma of the event, nor does it immediately scrub the content from the internet. The Take It Down Act, which complements this bill, focuses on criminalizing distribution and mandating removal, but the speed of the internet often outpaces the speed of the law.

Furthermore, a civil lawsuit requires a known defendant. The anonymity of the internet remains a massive hurdle. If a deepfake is created by an anonymous user on a decentralized platform or an encrypted messaging app, "suing for damages" becomes a logistical nightmare. You cannot sue a defendant you cannot identify.

The Role of Prevention

Legal frameworks are one pillar of safety, but they must be supported by technical infrastructure that prevents these images from being generated or shared in the first place.

We have long argued that deepfake detection is vital in fighting AI-generated exploitation. The burden shouldn't fall on survivors to litigate, but on platforms to prevent their tools from being weaponized and to detect this type of content at the point of upload.

As Senate Democratic Whip Dick Durbin noted regarding the Grok controversy: "Even after these terrible deepfake, harming images are pointed out to Grok and to X, formerly Twitter, they do not respond. They don't take the images off of the internet. They don't come to the rescue of people who are victims." This highlights the urgent need for Trust and Safety teams to integrate real-time detection that can flag and block intimate digital forgeries before they go viral.
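To make "detection at the point of upload" concrete, here is a minimal sketch of what such a screening hook could look like. It assumes a hypothetical detection endpoint (`DETECTION_API_URL`), credential, response fields (`manipulated`, `score`), and threshold — these are illustrative placeholders, not a real Reality Defender API contract.

```python
# Minimal sketch of an upload-time screening hook. The endpoint, auth header,
# response schema, and threshold below are assumptions for illustration only.
import requests

DETECTION_API_URL = "https://detection.example.com/v1/analyze"  # hypothetical
API_KEY = "YOUR_API_KEY"   # hypothetical credential
BLOCK_THRESHOLD = 0.90     # illustrative confidence cutoff


def screen_upload(image_bytes: bytes, filename: str) -> bool:
    """Return True if the upload may proceed, False if it should be blocked
    and routed to human Trust & Safety review."""
    response = requests.post(
        DETECTION_API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": (filename, image_bytes)},
        timeout=10,
    )
    response.raise_for_status()
    verdict = response.json()

    # Block synchronously when the detector is confident the image is
    # synthetic; borderline scores could instead be queued for async review.
    if verdict.get("manipulated") and verdict.get("score", 0.0) >= BLOCK_THRESHOLD:
        return False
    return True


if __name__ == "__main__":
    with open("user_upload.jpg", "rb") as f:
        allowed = screen_upload(f.read(), "user_upload.jpg")
    print("accepted" if allowed else "blocked pending review")
```

The key design choice is that the check runs before content is published, so a flagged image never reaches other users — the same logic could also back a takedown queue for content already in circulation.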

What Comes Next?

The bill now heads to the House of Representatives. While it stalled there in previous sessions, the unanimous support in the Senate and the rising public pressure regarding AI safety make its passage more likely.

While lawmakers work to provide legal recourse, we are focused on giving platforms the tools to stop these attacks at the source. By working with key partners on combating deepfake-driven NCII and CSAM, by speaking with elected officials, and by empowering developers to build enterprise-grade detection directly into their applications, we can move closer to a world where non-consensual deepfakes are caught — well before they ever need to be caught by a court.