Deepfake Threats Demand Action from Law Enforcement Agencies

Katie Tumurbat

Marketing Programs

In a recent webinar moderated by LTG (Ret.) Bob Ashley, we explored one of the most pressing challenges facing intelligence, government, and security leaders today: deepfake threats. A 35-year U.S. Army intelligence veteran and former Director of the Defense Intelligence Agency, Bob knows the realities of national security and the pace of change agencies face.

Joining the discussion were Alex Lisle, CTO of Reality Defender, and Catherine Ordun, Vice President at Booz Allen Hamilton, a global consultancy advising the U.S. Government, Department of Defense, and Fortune 500 companies on cyber, AI, national security, and intelligence strategy.

Part of the conversation focused on what’s happening in the field right now. In this post, we’ll look at where deepfake detection is already being used successfully, what leading agencies are doing differently, and the first step every organization should take to be ready. Because deepfakes are already here.

How to evaluate deepfake detection tools

Deepfake detection is already being tested in law enforcement and government environments, but only where it integrates seamlessly into existing workflows. The technology works best when it operates behind the scenes, not as a new tool analysts have to learn, but as part of the systems they already use.

That point came up early in the discussion when Bob Ashley challenged the panel on how agencies should evaluate tools before signing off. Catherine Ordun explained that effective detection starts with transparency. A good system doesn’t just label content as real or fake; it shows how confident it is in that result, allowing investigators to assess the reliability of each output.

She added that agencies should also look beyond a single accuracy score. Models perform differently depending on the data they’re trained on, so metrics like precision, recall, and F1 scores provide a fuller picture of performance. And because no algorithm is perfect, leaders must plan for false positives and false negatives when results are used in sensitive investigations.
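To make Catherine’s point concrete, the short sketch below computes precision, recall, and F1 from raw prediction counts. The labels and data are hypothetical, but they illustrate why a single accuracy-style number can mislead: a detector that flags everything as manipulated scores perfect recall while its precision collapses.

```python
# Illustrative sketch: how precision, recall, and F1 relate to the raw
# counts behind a headline accuracy number. Labels are hypothetical:
# 1 = manipulated media, 0 = authentic.

def evaluation_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many flags were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many fakes were caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # balance of the two
    return precision, recall, f1

# A detector that flags every item catches the one real fake (recall = 1.0)
# but buries analysts in false positives (precision = 0.10).
truth = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
flags = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
p, r, f = evaluation_metrics(truth, flags)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

In an investigative setting, that trade-off is exactly what Catherine’s warning about false positives and false negatives is driving at: the metric that matters depends on the cost of each error type.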

Today, detection tools are being piloted in areas such as border security, digital forensics, and public safety communications: environments where verifying media quickly can directly impact operational decisions. These pilots focus on embedding detection within existing systems so agencies can triage suspect content without disrupting evidence workflows. Reality Defender helps agencies verify evidence and confirm media authenticity, working with partners including NATO StratCom.

Catherine’s message was clear: evaluating deepfake detection is not like buying antivirus software. It’s about evidentiary intelligence, and it must be treated with the rigor and accountability of something that could directly impact prosecutions, public safety, and trust. For a closer look at how deepfakes are already reshaping evidence handling, Police1 explores the growing challenge for investigators.

The first move every agency must make: establish procedures

The discussion shifted from technology evaluation to operational readiness, specifically what agency leaders should be doing right now. When Bob Ashley asked where to start, Alex Lisle made it clear that the first step isn’t technical. It’s procedural.

Agencies must begin by building and updating deepfake response playbooks: clear processes that define who acts, how results are verified, and what communication steps to take when manipulated media is suspected. Testing a tool in a pilot is not the same as making it a reliable part of a production environment.

As Alex explained, deepfake incidents unfold with speed that breaks traditional cyber-response assumptions. “This isn’t someone doing east-west data migration. This is an immediacy attack. Part of the playbook is telling the person they’re being actively attacked.”

He also highlighted blind spots most organizations overlook. “You think about your emails, but you probably haven’t thought about your phone vector for a while — your VoIP, your Teams integration, all these inputs and outputs. You must understand your attack surfaces and build from there. Not as a science experiment, but how do I productionize this in a meaningful manner?”

The direction for agencies is clear: awareness is not readiness. Building deepfake resilience starts with procedural discipline, codifying how to identify, contain, and escalate manipulation attempts before any technology can be fully effective. It requires production-grade operational integration, not theoretical concern. INTERPOL’s Beyond Illusions report echoes this need for structured response frameworks, urging agencies to embed synthetic-media awareness into daily operations rather than treat it as a one-off training issue.

Integration is non-negotiable 

Most failures in deepfake defense don’t stem from a lack of technology. They happen when agencies can’t operationalize what they already know.

The discussion turned to what separates successful adoption from stalled pilots: making detection work within the systems analysts already use. Bob Ashley asked about this challenge directly: “How easily can detection tools fit into the systems agencies already have in place?”

Catherine Ordun noted that integration shouldn’t mean rebuilding infrastructure. Agencies don’t need to re-engineer IT systems to use detection. It should connect through simple back-end integrations, not clunky software that changes how analysts work.

Alex Lisle agreed, stressing that tools must live where analysts already operate. “You have to have APIs. If you’re sending results somewhere else, it’s not a usable tool.”

The takeaway is clear: detection must be embedded, not added. If it sits outside core workflows, it won’t be fast enough or effective when it matters most. Reality Defender’s approach reflects that principle, delivering detection through flexible API-based integrations and partnerships with organizations such as Primer Technologies.
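As a rough illustration of the "embedded, not added" principle, the sketch below shows how a detection API might be wrapped inside an existing triage pipeline so results route straight into the analyst’s workflow. The endpoint, field names, and thresholds here are assumptions for illustration only, not Reality Defender’s actual interface.

```python
import json

# Hypothetical wrapper: detection is called in-line from an existing
# evidence-triage pipeline rather than checked in a separate portal.
# Endpoint, payload fields, and thresholds are illustrative only.
DETECTION_ENDPOINT = "https://detector.example/api/v1/analyze"  # placeholder

def build_detection_request(case_id: str, media_uri: str) -> str:
    """Package a triage item as a JSON request for the detection service."""
    return json.dumps({
        "case_id": case_id,         # ties the result back to the evidence record
        "media_uri": media_uri,     # media stays in the agency's own store
        "return_confidence": True,  # ask for a score, not just real/fake
    })

def triage_decision(result: dict, threshold: float = 0.8) -> str:
    """Route based on the detector's confidence, mirroring a playbook step."""
    score = result["manipulation_score"]
    if score >= threshold:
        return "escalate"   # likely manipulated: send to forensics
    if score >= 0.5:
        return "review"     # uncertain: a human analyst looks first
    return "proceed"        # likely authentic: normal workflow continues

print(triage_decision({"manipulation_score": 0.92}))  # escalate
```

The design choice worth noting is that the playbook logic (escalate, review, proceed) lives in the agency’s own pipeline; the detection service only supplies a confidence score, which is what makes back-end integration possible without changing how analysts work.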

What agency leaders (and citizens) need to know right now

If you’re a Chief Information Security Officer, senior executive, or agency director, your first move isn’t technology procurement. It is ensuring your operational playbooks are ready for real-time decisions. This begins with a critical question: Do your people know exactly what to do within five seconds of suspecting a deepfake?

Preparation must go beyond impersonation scenarios. It should also account for false repudiation: situations where an individual claims that authentic audio or video is fake.

Detection technology must also live inside the tools analysts already use, not in a separate portal that no one will remember to check during an active incident.

For citizens, the guidance is simpler: slow down. Any unexpected request involving urgency, authority, and money should be treated as a red flag. Stop and verify it through a separate, trusted channel before taking action.

Deepfake detection isn’t a “future problem.” It’s a live-fire battlefield condition. Or, as Alex put it: “awareness is not readiness.”

To hear directly from the experts, watch the webinar on demand.

Get in touch