Insight

How We Bypassed Sora 2's Identity Safeguards in Under 24 Hours

Ben Colman

Co-Founder and CEO

On September 30, 2025, OpenAI launched its Sora 2 video generation model and a new social platform called Sora (also powered by Sora 2). At launch, the social platform featured a "Cameo" feature requiring identity verification to prevent impersonation. Once Cameo verification is complete, users can create AI-generated videos featuring their own likeness and can grant other users permission to use that likeness as well.

Within 24 hours, Reality Defender completely bypassed the Sora platform's anti-impersonation safeguards, creating convincing deepfakes of CEOs and celebrities that the platform failed to catch. Our own API flagged every one of those deepfakes in real time.

With deepfake fraud attempts at financial institutions surging 2,137% and average losses hitting $500,000, this research shows that platforms building generative AI cannot police their own outputs, and neither can the verification systems you rely on.

The "success" screen after bypassing Cameo's onboarding.

What is Sora 2 and the Sora Social Platform?

Sora 2 is OpenAI's latest text-to-video model. It launched on September 30th with synchronized audio and vastly improved video that, to the average person (and some Reality Defender researchers), is indistinguishable from real, recorded video.

Sora is also the name of a new social platform from OpenAI that allows users to share their Sora 2-created videos in a feed. Users can also upload their own likenesses via a "Cameo" feature that lets them insert their face and voice into Sora 2-generated videos. Cameo onboarding requires multi-step verification: live video liveness checks and a verbal attestation confirming you are not impersonating anyone.

Our Security Bypass Experiment

Our goal was to test whether Sora 2 could detect synthetic identities of prominent individuals during the Cameo verification process.

We collected publicly available footage of select CEOs and entertainers from earnings calls and media interviews. Using this footage, we built real-time deepfake systems with facial animation synchronized to operator movements, voice cloning matching target speech patterns, and professional video backgrounds.

From there, we initiated verification within Sora. The platform prompted specific actions (turning the head, reading out numbers) to confirm authenticity.

Our deepfakes slipped past every security checkpoint explicitly designed to catch manipulated media.

The checks failed to match the synthetic face to protected individuals in the database. The verbal attestation was also accepted.

We repeated this process with multiple deepfaked identities. Every attempt succeeded. The Sora platform detected nothing. Reality Defender's API flagged all materials as synthetic with over 95% confidence. The failure stems from the Sora platform's inability to distinguish real-time deepfakes from authentic video. Facial recognition systems trained on real photos have blind spots for AI-generated faces. (Conversely, Reality Defender routinely trains, retrains, and updates its models on a mix of real and fake images and videos, including those that contain both real and fake faces.)

One of several methods we used in deploying a deepfake.

A screen capture of a Sora video generated with the likeness of one of the celebrity impersonations.

Why Social Platforms Haven't Optimized for Security

OpenAI has a huge team of brilliant engineers building incredible generative tools. But the team optimizes Sora 2 for viral adoption, not maximum security. Prioritizing widespread usage and adoption routinely results in inadequate protection against deepfake impersonation and a bevy of other trust and safety issues. Yet the problem runs deeper than misaligned priorities.

Even with proper incentives, the lack of investment in trust and safety on the Sora platform and others like it makes effective in-house detection nearly impossible. Generative AI platforms allocate billions to model training while trust and safety efforts receive a fraction of that investment (if any funding at all) and are treated as an afterthought. Simply put, strong detection on platforms like Sora and beyond doesn't generate revenue, which puts such efforts last in line to receive adequate funding to stay ahead of threats.

The technical challenge compounds both issues. Engineers building Sora 2 understand their own model's weaknesses but cannot anticipate artifacts from competing platforms or emerging techniques. Organized fraud networks constantly adapt across multiple generative tools simultaneously, evolving faster than any single in-house team can track. By the time one vulnerability is patched, attackers have already moved to different methods entirely.

Reality Defender detecting this deepfake.

What This Means for Your Organization

The regulatory pressure is mounting. The EU AI Act requires deepfake detection, and more countries are likely to adopt similar measures soon. Financial regulators demand stronger verification. The Sora 2 research shows that even OpenAI, with its vast resources, has not yet built effective in-house detection, nor is it incentivized to do so.

Fortunately, effective solutions exist, but they won't come from the companies building generative AI creation tools. Just as organizations don't develop their own antivirus software or spam filters, deepfake detection requires specialized providers like Reality Defender, whose entire business model depends on staying ahead of synthetic threats. The deepfakes that bypassed Sora 2's safeguards in under 24 hours were immediately identified by systems built for one purpose: stopping synthetic fraud before it succeeds. That's the difference between detection as an afterthought and detection as a mission.

Reality Defender is Designed for Identity Verification

Organizations must adopt independent detection. Reality Defender's API has no conflicting incentives, as our sole focus is robust detection across all platforms. The same deepfakes that fooled Sora 2 were immediately flagged by our system.

Two lines of code can now protect any verification flow better than the in-house systems of major social platforms that sorely need such a solution.
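To make the integration concrete, here is a minimal sketch of how a detection API could gate a verification flow. The endpoint URL, field names, response schema, and threshold below are placeholders for illustration only, not Reality Defender's actual API; consult the Reality Defender documentation for the real integration details.

```python
import requests

# Placeholder values for illustration; not a real endpoint or schema.
API_URL = "https://api.example.com/v1/detect"
API_KEY = "YOUR_API_KEY"


def screen_verification_video(video_path: str) -> bool:
    """Submit a liveness-check video for synthetic-media screening.

    Returns True only if the video is judged authentic under the
    assumed response schema (a 'synthetic_score' in [0, 1], where
    higher means more likely synthetic).
    """
    with open(video_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
        )
    response.raise_for_status()
    result = response.json()
    # Reject anything above a chosen risk threshold (0.5 here is arbitrary).
    return result.get("synthetic_score", 1.0) < 0.5


if __name__ == "__main__":
    if screen_verification_video("cameo_liveness_capture.mp4"):
        print("Passed synthetic-media screening; continue onboarding.")
    else:
        print("Likely deepfake detected; block verification and flag for review.")
```

The point of the sketch is the placement, not the specifics: screening runs on the captured liveness video before the platform accepts the identity, so a real-time deepfake is stopped at onboarding rather than after an impersonation has already been published.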

Want to build deepfake detection into your social platform or verification process? Try Reality Defender now.

Get in touch