Insight
Ben Colman
Co-Founder and CEO
When we announced the founding of Reality Defender’s Ethics Committee, we said the committee would advise us on the policy, governance, and accountability questions that come with building detection at enterprise scale. Its founding members are Keith Enright (chief strategy officer at Harvey AI, formerly chief privacy officer at Google), Luciano Floridi (founding director of the Yale Digital Ethics Center), and Yoel Roth (SVP of trust and safety at Match Group, formerly head of trust and safety at Twitter).
I interviewed Keith Enright, who spent 14 years at Google, including a long run as its chief privacy officer, before joining Gibson, Dunn & Crutcher as a partner advising public companies and boards on privacy, cybersecurity, and AI governance, and then moving to Harvey AI as chief strategy officer. He argues that the question most boards are asking about AI is the wrong one, and that the right question is far more existential. He thinks about the collapse of verifiable reality from inside the legal and regulatory frameworks meant to bound it. His answers are precise, sharp, and worth reading carefully.
Platforms like Reality Defender turn authenticity from an ad-hoc worry into a persistent control, which is what actually lets an organization responsibly deploy AI at scale. Risk management isn’t the brake on AI adoption; it’s the steering.
Synthetic media breaks any presumption that privacy was somehow limited to data about real people and real events. The harm is no longer “your information was exposed,” it can now include “a version of you was fabricated and distributed.” That might require evolving privacy law from a narrow data-protection regime into something closer to an identity-protection regime, with rights in one’s likeness, voice, and behavioral signature that survive the synthetic context. On the consumer side, the center of gravity will shift from “is this information accurate” to “is this information authenticated,” with provenance becoming as fundamental as encryption.
Most regulatory attention is concentrated on three high-salience problems: the protection of children, election interference, and non-consensual intimate imagery. All three are critical. But another key threat vector is enterprise and financial fraud: synthetic audio of executives authorizing wire transfers, cloned voices used against call-center authentication, fabricated earnings calls, and falsified investor relations content. This is already a multi-billion-dollar problem, and almost no jurisdiction has a coherent framework for it. A further challenge is cross-border enforcement: synthetic media is borderless by design, and our legal tools largely still aren't.
Most boards ask, "How do we prevent our people from misusing AI?" The more urgent question is the inverse: how will we know when someone is using AI to impersonate us to our employees, our customers, our vendors, or our investors? The first question is a policy problem. The second is an existential one, and it's the one most organizations haven't operationalized.
Liability will redistribute across all three tiers, but not evenly. Bad actors remain primarily liable in theory but are often judgment-proof or jurisdictionally out of reach, which means plaintiffs and regulators will keep pressing upstream. Model creators will face increasing pressure under product-liability and negligence theories. Distribution platforms may see the sharpest shift: once reliable detection exists commercially, the “we couldn’t have known” defense erodes, legacy legal defenses may be stripped away, and duty-of-care standards could rise accordingly.
Keith Enright is chief strategy officer at Harvey, the leading AI platform for legal and professional services. He was previously a partner at Gibson, Dunn & Crutcher, where he advised public companies and boards on privacy, cybersecurity, AI governance, and technology regulation. Earlier in his career, Keith spent 14 years at Google, serving as its chief privacy officer and leading global privacy, consumer protection, and regulatory engagement across the company’s consumer and enterprise products. He also serves on the board of directors of ZoomInfo (NASDAQ: GTM).
Read the announcement of Reality Defender’s Ethics Committee.