
Inside Our Live Demo: Catching Deepfake Job Candidates on Zoom

Katie Tumurbat

Marketing Programs

In our latest deminar (demo + webinar), I joined my colleagues Jacob Seidman (Senior Staff Scientist) and Javier Cuan-Martinez (Senior Software Engineer) to show just how convincing deepfake job candidates can look and how to stop them in real time.

We wanted to move beyond talking about the threat. Instead, we demonstrated how Reality Defender works inside Zoom to protect recruiters, HR teams, and enterprises from one of the fastest-growing attack vectors today: meeting impersonation.

Why this matters

Deepfakes are no longer experimental. They're being used in real interviews and business meetings. Gartner predicts that by 2028, one in four job candidates globally could be fake. Enterprises already report thousands of deepfake applications each year, with the same bad actor reappearing under multiple identities, and each attempt grows more polished as attackers learn from failed interviews.

This isn't just wasted interview time. Once inside an organization, impostors have been caught stealing customer data, planting malware, and even redirecting salaries or funds overseas. For recruiters and HR leaders, the stakes are high: a fake hire can become a full-scale security incident.

What we showed: a "fake" interview

To bring this to life, we staged a Zoom interview with a candidate we called "Gary." He looked professional, spoke confidently about past projects, and seemed completely authentic.

But Gary wasn’t real. He was a deepfake of my colleague Jacob, created in under ten minutes with widely available tools.

To the human eye and ear, he passed every test. But when scanned with Reality Defender inside Zoom, the platform flagged him as manipulated in seconds.

It was a clear reminder: in live meetings, you can’t trust appearances alone. Without real-time detection, even seasoned interviewers can be fooled.

Watch the demo here

Why humans can't spot the difference

As Jacob explained during the session, today’s generative models are trained to be photorealistic and natural-sounding. The “tells” we once looked for, like six fingers, blurry edges, or robotic voices, are disappearing.

“Humans have a very hard time distinguishing between what might have come from a generative model and what might not have…especially at scale.” — Jacob Seidman

Reality Defender looks beyond what the eye or ear can perceive. Our models analyze pixel-level traces in video and frequency patterns in audio to find signals invisible to humans. With massive datasets of both authentic and generated media, we’ve trained detectors that can pick up even the most subtle artifacts, across multiple modalities.
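As a toy illustration of what a "frequency pattern" signal can look like (a sketch for intuition only, not Reality Defender's actual detectors), the snippet below splits an audio clip's spectrum into coarse bands and measures where the energy sits. Real detectors learn far subtler features, but the core idea is the same: inspect structure the ear never notices.

```python
import math

def spectral_energy_profile(samples, bands=4):
    """Naive DFT magnitude spectrum split into coarse bands; returns the
    fraction of total signal energy in each band. Synthetic audio can
    show atypical energy in bands a human listener would never notice."""
    n = len(samples)
    half = n // 2
    energy = []
    for k in range(half):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        energy.append(re * re + im * im)
    band_width = half // bands
    band_energy = [sum(energy[b * band_width:(b + 1) * band_width]) for b in range(bands)]
    total = sum(band_energy) or 1.0
    return [e / total for e in band_energy]

# A low-frequency tone concentrates almost all its energy in the first band.
n = 256
tone = [math.sin(2 * math.pi * 4 * i / n) for i in range(n)]
profile = spectral_energy_profile(tone)
```

A learned detector replaces the hand-picked bands with features trained on large corpora of real and generated audio, but the input it reasons over is this kind of frequency-domain view rather than the waveform a listener hears.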

How Reality Defender works in Zoom 

Here’s what happens behind the scenes when you run a Reality Defender scan during a Zoom call:

  • A secure Zoom bot streams video and audio to Reality Defender’s detection pipeline.
  • Multiple models analyze each feed in parallel, trained to identify unique signals left by generative AI.
  • The system ensembles these results into a confidence score, delivered instantly to the host dashboard.
  • HR or security teams can review full reports after the meeting, complete with flagged participants and playback options.
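To make the "ensembles these results into a confidence score" step concrete, here is a minimal sketch of weighted-average ensembling. The model names, scores, and weights are invented for illustration; Reality Defender's actual scoring logic is not public.

```python
from dataclasses import dataclass

@dataclass
class ModelResult:
    name: str      # illustrative names, e.g. "video_artifacts", "voice_clone"
    score: float   # per-model manipulation probability in [0, 1]
    weight: float  # trust placed in this model for the current modality

def ensemble_confidence(results: list[ModelResult]) -> float:
    """Combine per-model scores into one confidence value via a weighted
    average -- one simple way to ensemble parallel detector outputs."""
    total_weight = sum(r.weight for r in results)
    if total_weight == 0:
        return 0.0
    return sum(r.score * r.weight for r in results) / total_weight

# Hypothetical per-feed results from the parallel models above.
feeds = [
    ModelResult("video_artifacts", 0.92, 1.0),
    ModelResult("voice_clone", 0.85, 0.8),
    ModelResult("lip_sync", 0.70, 0.5),
]
confidence = ensemble_confidence(feeds)
```

Weighting lets one strong modality (say, video) carry more influence than a noisier one, while agreement across modalities pushes the overall confidence higher than any single model alone.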

Javier pulled back the curtain on the engineering work that makes this possible. The platform is built on a microservices architecture, allowing us to process millions of chunks of audio and video data simultaneously. That design means we can scale to thousands of calls without lag, and deploy new detection models quickly as generative AI evolves.
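The fan-out pattern Javier described, where independent workers score chunks of a stream in parallel, can be sketched with Python's standard thread pool. This is a toy stand-in for a microservices pipeline; `analyze_chunk` is a hypothetical placeholder for a real model call.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk: bytes) -> float:
    # Placeholder for a detection-model call; returns a dummy score so
    # the parallel fan-out pattern is runnable on its own.
    return len(chunk) / 1000.0

# Stand-ins for short segments of an audio/video stream.
chunks = [bytes(100)] * 8

# Fan each chunk out to a worker, mirroring how independent services
# can score segments of a live stream concurrently instead of serially.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = list(pool.map(analyze_chunk, chunks))
```

In a production architecture each worker would be a separate service behind a queue, which is what lets new detection models be deployed or swapped without touching the rest of the pipeline.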

Key takeaways from the session

Several questions came up repeatedly during the Q&A. Here are a few highlights worth sharing:

  • Accuracy: Detection is not about spotting “obvious glitches.” It’s about forensic signals. By training multiple models and combining their results, we maintain high confidence across modalities.
  • Keeping pace with new models: Reality Defender continuously generates new synthetic media for training, while monitoring research communities to update detectors as new techniques appear.
  • Audio-only scenarios: Even if a candidate joins with the camera off, our models analyze voice streams in real time to detect AI-cloned audio.
  • Scale: Whether you’re conducting a handful of interviews or tens of thousands per year, the system is designed to keep pace without disrupting workflows.

Looking ahead

Deepfake impersonation has shifted from a novelty to an organized, repeated attack method. Enterprises can no longer assume that a polished face and smooth voice on Zoom mean the person is real.

Reality Defender was built for exactly this challenge. Solely focused on deepfake detection, our platform works across video, audio, and images, delivering results in seconds and integrating directly into tools like Zoom and Teams.

Watch the full recording here

Because the truth is, your eyes and ears aren't enough anymore. But with the right defenses in place, catching impersonators before they cause harm is not only possible, it's practical.

Get in touch