Ben Colman
Co-Founder and CEO
We recently announced the formation of Reality Defender’s Ethics Committee. Its founding members are Keith Enright (chief strategy officer at Harvey AI, formerly chief privacy officer at Google), Luciano Floridi (founding director of the Yale Digital Ethics Center), and Yoel Roth (SVP of trust and safety at Match Group, formerly head of trust and safety at Twitter). The committee will advise us on the policy, governance, and accountability questions that come with building detection at enterprise scale.
To mark the announcement of the Ethics Committee, I interviewed each of the members. Below is my conversation with Luciano Floridi, one of the most cited living philosophers in the world and an architect of the EU AI Act’s ethical framework. He argues that detection and verification are not secondary to AI governance but foundational to it. His answers are direct, demanding, and worth reading carefully.
The industry needs to stop treating the harms of generative AI as a future risk and start treating them as a present cost. We have spent years producing governance frameworks, ethical principles, and voluntary commitments (what I have called normative inflation) while the technology has raced ahead of every safeguard. The necessary step back is simple in principle: accept that the ability to generate synthetic content at scale is a new form of agency, and that new forms of agency require new forms of accountability, not just new forms of aspiration. Detection, verification, and enforceable standards are not optional extras. They are infrastructure.
Accountability presupposes that you can trace an action to an agent and verify what actually happened. Deepfakes attack both conditions simultaneously: they make it trivial to fabricate evidence of events that never occurred and to deny evidence of events that did. The result is not just misinformation but an erosion of the very epistemic ground on which accountability stands. When anyone can plausibly claim that a video is fake, or plausibly produce a fake video, the cost of lying drops and the cost of truth-telling rises. That asymmetry is the accountability gap, and it widens with every improvement in generative models.
Democracy depends on a shared factual baseline: not agreement about values, but agreement about what actually happened. Synthetic media dissolves that baseline. Detection platforms are, in this sense, part of the epistemic infrastructure of democratic life: they help preserve the conditions under which citizens can form judgements based on evidence rather than fabrication. This does not mean detection alone is sufficient: regulation, media literacy, and provenance standards all matter. But without reliable detection, the other defences operate in the dark. You cannot regulate what you cannot identify.
Responsibility is not a single burden to be assigned to one party: it is distributed across the chain, and the distribution should reflect the degree of control each actor exercises. Model creators bear responsibility for foreseeable misuse and for building safeguards into their systems. Distribution platforms bear responsibility for what they host and amplify. Bad actors bear responsibility for their actions. Even users bear some responsibility for approaching what they see and hear with greater critical awareness. But the most dangerous gap is the assumption that responsibility can be passed along the chain until it lands nowhere. The principle should be: wherever there is some capacity to prevent or detect harm, there is a corresponding obligation to do so.
The under-discussed dilemma is the verifier’s power. As detection becomes essential infrastructure, the organisations that certify what is real and what is fake acquire enormous epistemic authority. They have the power to determine, in contested cases, what counts as authentic. That authority must itself be subject to oversight, transparency, and accountability. The risk is not that detection fails but that it succeeds so well that we outsource our judgement about reality to systems whose criteria we do not scrutinise. Detection companies must build in the governance structures that prevent them from becoming the very kind of unaccountable gatekeeper their industry was designed to check.
Luciano Floridi is the John K. Castle Professor in the Practice of Cognitive Science and Founding Director of the Digital Ethics Center at Yale University. He is also Professor of Sociology of Culture and Communication in the Department of Legal Studies at the University of Bologna, where he directs the Centre for Digital Ethics. One of the most cited living philosophers in the world, he has published over 400 works on the philosophy of information, digital ethics, the ethics of AI, and the philosophy of technology. His most recent books are The Ethics of Artificial Intelligence (Oxford University Press, 2023) and The Green and The Blue (Wiley, 2023). In 2022, the President of the Italian Republic appointed him Knight of the Grand Cross of the Order of Merit, Italy’s highest honour, for his foundational work in philosophy. He is the editor-in-chief of Philosophy & Technology (Springer).
Read the announcement of Reality Defender’s Ethics Committee.