
Feb 19, 2024

Detecting the Future of Video Deepfakes Today


Last week, OpenAI announced Sora, its next-generation generative video model. Though still a research preview that has yet to be productized, Sora can generate up to sixty seconds of hyper-realistic video from a simple text prompt.

If you haven't seen the demo, you owe it to yourself to watch it. The media used to demonstrate Sora is as lifelike as deepfakes get, and unlike anything the Reality Defender team and I have seen before. It is a truly impressive technical feat, one that will undoubtedly change the landscape, scope, and workforce of media in positive and negative ways we have yet to consider.

Because Sora's outputs are so realistic, and because OpenAI's competitors are likely not far from reaching similar results, we are deeply concerned about the many ways Sora and similar technologies will inevitably be abused. OpenAI has detailed the great lengths it will go to in order to prevent abuse of Sora, including watermarking each video, but past attempts to do the same with the company's existing technologies have been circumvented by unsophisticated individuals with malicious intent. Couple that with the inevitable breakthroughs in, and subsequent availability of, open source "clones" of Sora that will empower bad actors and fraudsters, and you have a technological climate that will make the recent $25 million heist seem tame by comparison.

Detecting Sora Today

The Reality Defender team works proactively, building detection capabilities with theoretical new breakthroughs like Sora in mind. We are not only building tools to detect the models of today, but also looking ahead to what tomorrow's models may bring.

Sora's output is revolutionary, but the underlying technology behind it is not entirely new. Because of our work detecting existing deepfake technologies, combined with our work detecting "theoretical" deepfakes that could exist in the future, our preliminary tests show that we are, in fact, able to detect videos created by Sora.

This does not come as a surprise to our team. Instead, it is proof positive that by consistently staying several steps ahead of where deepfake technology may go — and where bad actors may flock to in the future — we are able to provide the very best deepfake detection on the market for all media types. This is what drives and will continue to drive our team: the detection of present and future generative AI and related technologies to preemptively prevent their misuse and abuse.

We look forward to the amazing things people will inevitably create with Sora. We also put our faith in eventual legislation that enforces detection of its videos (and videos created elsewhere), detection that relies on inference rather than provenance. Provenance signals such as watermarks and embedded metadata are stripped by something as simple as a screen capture, while inference examines the content itself. Failure to implement laws requiring platforms and entities to detect such content could result in unthinkable scenarios that dwarf the crimes and abuses we are witnessing today.
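To make that distinction concrete, here is a minimal Python sketch of why a provenance check fails after a screen capture while an inference-based check can still operate. This is an illustration only, not Reality Defender's pipeline: the Video class, the metadata key, and the toy detector are all hypothetical stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    frames: list                                   # raw pixel data per frame
    metadata: dict = field(default_factory=dict)   # embedded provenance tags

def provenance_check(video: Video) -> bool:
    # Provenance: trust an embedded tag (e.g., a watermark or manifest).
    # A screen capture re-encodes the pixels and drops this metadata.
    return video.metadata.get("generator") is not None

def inference_check(video: Video, classifier) -> bool:
    # Inference: analyze the pixels themselves with a trained detector.
    # Generative artifacts live in the content, not in the container.
    return classifier(video.frames) > 0.5

# A generated video loses its provenance when screen-captured:
original = Video(frames=["frame0", "frame1"], metadata={"generator": "sora"})
captured = Video(frames=original.frames, metadata={})  # tags stripped

assert provenance_check(original)
assert not provenance_check(captured)        # provenance is fooled

toy_detector = lambda frames: 0.9            # stand-in for a trained model
assert inference_check(captured, toy_detector)  # inference still flags it
```

In practice the classifier would be a trained model scoring the likelihood that content is synthetic, but the asymmetry holds: metadata can be removed by trivial re-encoding, while the content itself cannot.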

As always, we stand at the ready to help others prevent such scenarios from happening.
