Insight
Ben Colman
Co-Founder and CEO
In a development that could fundamentally alter the online and offline world, language has been added to the House Energy and Commerce Committee's budget reconciliation bill that introduces a sweeping 10-year moratorium on state and local AI regulation.
This proposed legislation would prohibit state and local governments from enforcing "any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."
This comes at a critical juncture: deepfake incidents worldwide have skyrocketed, increasing by a staggering 245% year-over-year according to Sumsub, with 25.9% of executives reporting one or more deepfake incidents targeting their financial and accounting data in the past year alone. Coupled with easier access to and falling costs of advanced AI models, it's no wonder that deepfake-powered voice, video, and image scams are impacting a greater number of people at a faster clip, something current U.S. legislation has barely, if at all, combated.
While technological advancement requires cooperation with regulators and elected officials, this moratorium raises important questions about our collective ability to protect citizens and businesses from emerging AI threats. The proposal effectively sidelines state regulators at a time when deepfake attacks are rising and dealing lasting, irreparable damage to enterprises and citizens alike.
Reality Defender firmly believes in responsible AI growth and innovation. Yet we also recognize that without appropriate security controls to mitigate and criminalize the misuse of these powerful tools, said tools and the companies making them risk exposing individuals and organizations to increasingly sophisticated attacks. Recent legislation like the Take It Down Act demonstrates that we can address specific harmful uses of AI, in this case reactive takedowns of non-consensual intimate imagery (NCII), without stifling innovation.
Critically, the impact of deepfakes transcends all demographic boundaries. These sophisticated digital deceptions affect all people, regardless of political affiliation, socioeconomic status, or background. From business executives facing fraudulent impersonations to private citizens whose likenesses are misappropriated, the threat landscape is indiscriminate in its reach. This universal vulnerability underscores the need for comprehensive protections that safeguard everyone in our increasingly AI-mediated society.
This federal preemption approach presents a different direction from international frameworks like the EU AI Act, which implements transparency and accountability measures in AI development and deployment. Without robust detection capabilities for deepfakes in real time, enterprises and governments face substantial risks of reputational damage, breach of confidential data, and theft of assets.
The budget reconciliation bill's language indicates that federal AI preemption remains an active policy discussion in Congress. Representative Jan Schakowsky, ranking member of the Commerce, Manufacturing and Trade Subcommittee, has expressed concerns about potential implications for consumer protection.
Securing companies, governments, customers, and citizens against dangerous deepfake impersonations and malicious AI attacks is not just about compliance — it's about preserving the fundamental trust that underlies our relationships and digital interactions. As AI technology continues its rapid evolution, organizations must implement robust solutions that serve as the last line of defense against an emerging wave of AI-powered fraud, regardless of the regulatory environment.
While the proposed moratorium faces procedural hurdles in the Senate, its inclusion in the House bill signals a significant policy redirection. This development underscores why proactive technological countermeasures are becoming increasingly critical as AI threats grow in sophistication.
At Reality Defender, we support thoughtful AI regulation that balances innovation with necessary protections. Any comprehensive preemption of state and local regulation creates potential security gaps that may leave citizens and organizations vulnerable to emerging threats. The collaborative development of balanced regulatory frameworks — alongside technological solutions — represents the most promising path toward an AI landscape that fosters both innovation and trust.