Policy

Jan 17, 2024

Reality Defender’s Approach to the 2024 Election Season


With high-stakes elections set to take place globally in 2024 — including the 2024 U.S. presidential election — this year is poised to reveal the true potential of malicious deepfakes to disrupt the democratic process, sow mistrust among voters, and widen the rifts at the core of our society.

At Reality Defender, we are committed to doing our part in assisting governments, public institutions, elections officials, and news and social media platforms in protecting the integrity of elections and the democratic process. 

Staying Ahead of AI Advancements

Reality Defender’s approach to the 2024 election season is centered around our mission to provide universal, accessible tools that can keep up with the rapid pace of generative AI evolution. Our detection suite is platform-agnostic, capable of scanning thousands of types of AI-generated content produced by thousands of distinct generation models.


For instance, should a malicious actor create an audio deepfake of a surprise candidate endorsement, our models can analyze the suspicious audio file for traces of popular text-to-speech generative methods while simultaneously scanning for voice conversion, replay-based methods, and the countless other techniques currently in use. These analyses run concurrently, the moment the target file is uploaded to our platform via our Web App or API, which can be easily integrated into any existing workflow.
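To make the integration pattern concrete, here is a minimal sketch of how a platform might gate uploads on a detection verdict before publishing. All names here (the `ScanResult` fields, the `detect_deepfake` stub, the threshold) are hypothetical illustrations, not Reality Defender's actual API; a real integration would replace the stub with a call to the vendor's endpoint.

```python
# Illustrative sketch: wiring a deepfake detector into an upload workflow.
# The detector below is a stand-in with hypothetical names; a real
# integration would POST the file to the detection API and parse the
# JSON response into a result object.
from dataclasses import dataclass


@dataclass
class ScanResult:
    filename: str
    verdict: str   # e.g. "authentic", "suspicious", "manipulated"
    score: float   # detector confidence in the verdict, 0.0-1.0


def detect_deepfake(filename: str, data: bytes) -> ScanResult:
    """Stand-in for the API call; returns a fixed placeholder verdict."""
    return ScanResult(filename=filename, verdict="suspicious", score=0.87)


def moderate_upload(filename: str, data: bytes, threshold: float = 0.8):
    """Hold an upload for human review when the detector flags it."""
    result = detect_deepfake(filename, data)
    if result.verdict != "authentic" and result.score >= threshold:
        return ("hold_for_review", result)
    return ("publish", result)


action, result = moderate_upload("endorsement.wav", b"\x00fake-audio-bytes")
print(action)  # -> hold_for_review
```

The key design point the sketch reflects is that detection runs inline, before content reaches an audience, rather than asking end users to verify media after the fact.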

In the same vein, our approach is proactive and inference-based. In our commitment to stay a few steps ahead of malicious actors, Reality Defender’s research experts keep abreast of the latest cutting-edge generative AI research, building detection for generative models that have only been hypothesized, not yet productized. This way, our detection technology is prepared for theoretical generative models as well as existing ones, keeping in lockstep with the latest advances in deepfake creation. This is crucial because powerful, well-funded actors are likely to deploy highly advanced deepfake generation tools during the 2024 elections.

Focusing on the Right Approach

While we appreciate efforts to make the identification of AI-generated content easier, we do not actively support watermarking methods and similar approaches, such as the oft-discussed “nutrition labels” for digital media. As we have seen too often, malicious actors have no interest in volunteering for standardized identification methods and can go as far as to emulate or manipulate watermarks to make their creations appear even more authentic. At this point, watermarking technologies remain too easy to sidestep or exploit.  

Because deepfake creators constantly strive to catch up with and surpass detection technology, we take serious precautions to prevent reverse-engineering and abuse of our tools. Our team is ready to respond should someone with access to Reality Defender attempt to manipulate the platform to trick or break a detection model, or to reverse-engineer it by uploading multiple identical files or other suspicious content. At the same time, we value the privacy and security of our clients above all else. Our team can never view the contents of the files that our clients upload and scan, so any and all sensitive data and media remain clients’ secure property.
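One signal mentioned above — repeated identical uploads, a pattern consistent with probing a detector — can be sketched as follows. This is a generic illustration of the technique (content hashing per user), not Reality Defender's actual safeguard; the class name and threshold are invented for the example.

```python
# Illustrative sketch: flag accounts that repeatedly upload the same
# file, a pattern consistent with probing a detection model. Hashing
# the content means the monitor never needs to retain the file itself.
import hashlib
from collections import defaultdict


class UploadMonitor:
    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.seen = defaultdict(int)  # (user, content hash) -> upload count

    def record(self, user: str, data: bytes) -> bool:
        """Return True when this upload looks like repeated probing."""
        digest = hashlib.sha256(data).hexdigest()
        self.seen[(user, digest)] += 1
        return self.seen[(user, digest)] > self.max_repeats


monitor = UploadMonitor(max_repeats=3)
flags = [monitor.record("user-42", b"same-audio-file") for _ in range(5)]
print(flags)  # -> [False, False, False, True, True]
```

Note that storing only hashes, not file contents, is consistent with the privacy stance described above: abuse patterns can be detected without anyone viewing the uploaded media.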


A Single-Minded Focus on Best-In-Class Detection

Reality Defender’s purpose is to create the most effective deepfake detection technology on the market. This is our sole mission and point of focus. We believe that to develop effective tools that keep up with the rapid pace of new AI technologies, a company cannot be expected to develop powerful AI platforms that make synthetic content indistinguishable from the real thing, while also developing detection tools to foil that same technology. Such contradictions can create an environment where — even purely by accident — incentive can swing toward whichever approach is more lucrative or beneficial to special interests.

This point is illustrated by the numerous failures of provenance classifiers developed by some of the largest AI companies. To assure the public that tools flagging synthetic media have integrity and can be relied upon, we believe in a firm separation between companies who make it their work to enable AI-generated content, and the companies that create the tools to reliably detect and flag such content. 

We also do not believe in placing the burden of detecting deepfakes on individual citizens and voters. While we advocate for widespread awareness of the potential misuse of generative AI as part of everyone's voter literacy education, we also believe that the average citizen should not be burdened with the responsibility of constantly pondering and verifying the authenticity of media they encounter on their chosen platforms.

This is why we collaborate with some of the world's largest organizations, governments, media platforms, and institutions to implement Reality Defender's deepfake detection tools at the highest levels. By integrating deepfake detection into content moderation streams, news verification backends, and other information workflows, we can ensure individuals don’t need to worry that their chosen platforms are serving up bogus, misleading media that contributes to voter confusion, political disinformation, and anti-democratic sentiments.

As things heat up during this crucial election year, our research and development experts will be ever more vigilant in identifying new models of deepfake creation and updating our platform to respond in kind. The rest of the team will continue providing support to clients and feedback to public officials and platform moderators as they face the onslaught of AI-generated content in real time.  

Preventing malicious deepfakes from bringing chaos into elections benefits all of us. With the right tools and collaboration across private platforms and public institutions, we can ensure that our democracy survives the first true AI election year unscathed and set an example for years to come. 
