Policy

Nov 2, 2023

Dangerous Limitations in the AI Executive Order


On Monday, President Biden issued a new Executive Order, drafted in part to manage the risks of AI-generated content. Among the objectives of the Order is to “Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.” The Order also indicates that the “Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.”

While we feel that the above guidelines are presented with the best of intentions, we must express our concerns regarding the administration’s belief that watermarking is an effective way of labeling AI-generated content to protect the public from disinformation, deception, and fraud. 

The provenance watermarking proposed in this Executive Order has been developed and promoted by some of the largest tech companies and creators of the most popular AI content platforms, all with the aim of providing accountability for potential abuse of their own products. In the competitive realm of world-changing technology, it is unreasonable to expect that the companies developing these products would also create effective tools for their own oversight. The risk of a self-serving approach, intentional or not, is too significant. Companies poised to reap substantial profits from AI-generated content shouldn’t be relied upon to openly admit how susceptible their products are to manipulation and misuse. As the interests of these companies and the public diverge, the government should prioritize reliable methods of deepfake detection rather than placing implicit trust in unreliable safety measures offered by the biggest players in the industry.

Watermarking as a Security Measure

Conflict of interest isn’t the only issue with this approach. The most worrisome aspect of placing our trust in watermarking is how easily the method can be circumvented or manipulated by malicious actors. Simply put, watermarking isn’t effective. In alignment with the conclusions reached by multiple research teams, a recent study conducted at the University of Maryland found that watermarks can easily be washed out, removed, manipulated, or even added to human-generated images to trigger false detections, further eroding users’ faith in what can and cannot be viewed as real. If a watermark can be easily counterfeited, added, or eliminated (services that existing companies already offer for a fee), it cannot be relied upon as a primary mitigation system to shield users from malicious content, deepfakes, and disinformation.
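To make this fragility concrete, consider the minimal sketch below, which embeds a deliberately naive least-significant-bit (LSB) watermark and then destroys it with a single ordinary JPEG re-encode. This toy scheme is illustrative only; it is not the method used by any particular vendor, nor the attacks studied by the Maryland team. Production watermarks are more robust, but the same principle applies: a mark riding on the content must survive every transformation, and every adversary, the content meets in the wild.

```python
# A deliberately naive illustration (not any vendor's actual scheme):
# an LSB watermark destroyed by one benign JPEG re-encode.
# Requires numpy and Pillow.
import io

import numpy as np
from PIL import Image

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each pixel."""
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits watermark bits back out."""
    return pixels.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=1024, dtype=np.uint8)           # 1024-bit mark

marked = embed_lsb(image, mark)
assert np.array_equal(extract_lsb(marked, mark.size), mark)    # mark detected

# "Wash out" the mark with a single ordinary lossy re-encode.
buf = io.BytesIO()
Image.fromarray(marked, mode="L").save(buf, format="JPEG", quality=90)
attacked = np.asarray(Image.open(buf))

accuracy = (extract_lsb(attacked, mark.size) == mark).mean()
print(f"Bit accuracy after re-encode: {accuracy:.0%}")  # ~50%, i.e. random chance
```

Robust schemes spread the mark across structure that survives re-encoding, but the research cited above reports that stronger transformations defeat those as well. The asymmetry favors the attacker.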

Any standardized watermarking system inevitably depends on good faith and good intentions within the AI space; relying on malicious actors to participate in such a system is not a viable approach. It is our view that the government should focus its finite resources on more proactive, inference-based methods of detection that operate in real time, rather than hoping that watermarked content will retain its warning label as it circulates widely among users. Given the inherent flaws in this technology, how can any user trust watermarks to ensure the content they engage with is reliable?

In contrast to the passive method of labeling and watermarking, Reality Defender employs state-of-the-art artificial intelligence to actively recognize and flag deepfakes and generative content in real time. Our platform does not rely on methods that can be easily manipulated, nor does it depend on end users to interpret contextual information. Instead, it leverages sophisticated detection models, created without conflict of interest, to detect deepfakes with high accuracy, enabling the largest entities serving media and content to billions of users to safeguard those users from misinformation and deception.
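For contrast, the sketch below shows the general shape of an inference-based detector. The classifier is hypothetical and generic, not Reality Defender’s proprietary architecture; the point it illustrates is that the verdict comes from analyzing the content itself, so there is no embedded label for an adversary to strip, forge, or simply decline to include.

```python
# Generic sketch of inference-based detection (hypothetical model,
# not Reality Defender's actual system): score the content itself
# instead of looking for an embedded mark.
import numpy as np

class InferenceDetector:
    """Toy logistic classifier standing in for a trained detection model."""

    def __init__(self, weights: np.ndarray, bias: float):
        self.weights = weights
        self.bias = bias

    def score(self, features: np.ndarray) -> float:
        """Probability in [0, 1] that the input is AI-generated."""
        z = float(features @ self.weights) + self.bias
        return 1.0 / (1.0 + np.exp(-z))

    def flag(self, features: np.ndarray, threshold: float = 0.5) -> bool:
        """Flag content whose score crosses the decision threshold."""
        return self.score(features) >= threshold

# Usage: extract features from the media under review (in practice,
# with a neural encoder), then score them. No cooperation from the
# content's creator is required.
rng = np.random.default_rng(1)
detector = InferenceDetector(weights=rng.normal(size=128), bias=0.0)
features = rng.normal(size=128)  # placeholder for real extracted features
print(detector.flag(features))
```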

Should the government pursue watermarking tools developed by major AI companies as the primary method for identifying malicious content and protecting the public, the harm done to users, public trust in institutions and emerging technologies, and the reliability of digital information will be incalculable.
