Ben Colman
Co-Founder and CEO
Meta announced today that it has filed a lawsuit against Joy Timeline HK Limited, the entity behind the CrushAI apps, which create non-consensual intimate imagery (commonly known as "nudify" apps). While this legal action might seem like a step forward, it actually exposes a much deeper problem with how tech platforms approach AI-enabled harm in 2025.
Meta's lawsuit against CrushAI raises an uncomfortable question: if these apps are so harmful, why did Meta wait so long to take action? The answer is simple and troubling: because it could.
Meta has known about nudify apps since they first appeared on the internet. It has known about the violating images these apps create because those AI-generated images end up on Meta's own platforms. And once those images are on the internet, the damage is irreversible.
Meta's approach to AI-generated harm follows a predictable pattern: wait for victims to report content, then remove it after the damage is done. This reactive model fundamentally misunderstands how digital harm works in our AI-powered world.
When someone's likeness is used to create non-consensual intimate imagery, the violation isn't just in the creation — it's in the distribution, the permanence, and the psychological impact on the victim. By the time Meta removes the content, screenshots have been taken, links have been shared, and the victim's life has already been altered.
Meta's message to victims is essentially: "We'll clean up the mess after your life is destroyed." That's not protection. That's negligence dressed up as action.
What makes Meta's announcement particularly concerning is its framing of the tools used to detect these apps as "new technology." The reality is that robust AI detection capabilities have existed for years. Companies like Reality Defender have been developing and deploying real-time deepfake detection technology that can identify manipulated content at the point of upload, not after it's already caused harm.
The fact that Meta is treating AI detection as "new technology" in 2025 reveals how far behind they are in addressing threats they've known about for years. Meanwhile, bad actors are already finding ways to circumvent these reactive measures, as Meta's own announcement acknowledges.
Nudify apps represent just one facet of a much larger problem. Meta's own Oversight Board has acknowledged the growing threat of impersonation attacks across their platforms. At Reality Defender, we see a significant portion of detected deepfakes originating from social media platforms, including Facebook and Instagram.
This isn't just about one type of harmful app. It's about a systemic failure to protect users from AI-enabled deception at the speed and scale it's occurring. When governments worldwide are predicting massive increases in deepfake attacks this year, reactive band-aid solutions simply won't cut it.
The technology exists today to detect AI-generated content in real-time, before it can cause harm. Proactive deepfake detection systems can identify deepfakes, voice clones, and other forms of synthetic media at the point of upload, preventing distribution rather than reacting to it.
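To make the contrast with report-and-remove concrete, here is a minimal sketch of what screening at the point of upload could look like. The detector interface, threshold, and function names below are illustrative assumptions for the sake of the example, not Meta's or Reality Defender's actual APIs.

```python
# Hypothetical sketch of a point-of-upload screening hook.
# The DeepfakeDetector interface, threshold, and outcomes are illustrative
# assumptions, not any platform's or vendor's actual API.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Upload:
    user_id: str
    media_bytes: bytes
    media_type: str  # e.g. "image", "audio", "video"


class DeepfakeDetector(Protocol):
    def score(self, media_bytes: bytes, media_type: str) -> float:
        """Return a 0-to-1 likelihood that the media is AI-generated."""
        ...


def handle_upload(upload: Upload, detector: DeepfakeDetector,
                  block_threshold: float = 0.9) -> str:
    """Screen media before it is distributed, not after a victim reports it."""
    likelihood = detector.score(upload.media_bytes, upload.media_type)
    if likelihood >= block_threshold:
        # Block distribution at the source and queue for human review,
        # rather than removing content after the harm has already spread.
        return "blocked_pending_review"
    return "published"
```

The point is architectural: the check runs before distribution, so a takedown request is no longer the victim's only recourse.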
Meta's approach puts the burden on victims: report the content, and maybe they'll remove it. But "maybe" isn't good enough for users whose lives and reputations are at stake. It's not good enough for companies trying to protect their employees from impersonation attacks. And it's certainly not good enough for governments working to maintain trust in critical communications.
Meta's lawsuit against CrushAI, while potentially beneficial in this specific case, serves as a stark reminder of what's wrong with our current approach to AI safety. We're treating symptoms instead of preventing the disease.
The technology community has a responsibility to implement proactive protection against AI deception — not after victims report harm, but before that harm can occur. This means deploying real-time detection capabilities, integrating them seamlessly into existing platforms and communication channels, and continuously updating them to stay ahead of evolving threats.
As AI-generated content becomes more sophisticated and accessible, the window for implementing effective safeguards is rapidly closing. The question isn't whether we can build better protection; it's whether we will, before reactive measures become completely inadequate for the scale of harm we're facing.