Katie Tumurbat
Marketing Programs
In our latest deminar (demo + webinar), I joined my colleagues Jacob Seidman (Senior Staff Scientist) and Javier Cuan-Martinez (Senior Software Engineer) to show just how convincing deepfake job candidates can look and how to stop them in real time.
We wanted to move beyond talking about the threat. Instead, we demonstrated how Reality Defender works inside Zoom to protect recruiters, HR teams, and enterprises from one of the fastest-growing attack vectors today: meeting impersonation.
Deepfakes are no longer experimental. They’re being used in real interviews and business meetings. Gartner predicts that by 2028, one in four job candidates globally could be fake. We have also heard from enterprises that receive thousands of deepfake applications each year, with the same bad actor reappearing under multiple identities, and each attempt growing more polished as attackers learn from failed interviews. This isn’t just wasted interview time. Once inside an organization, impostors have been caught stealing customer data, planting malware, and even redirecting salaries or funds overseas. For recruiters and HR leaders, the stakes are high: a fake hire can become a full-scale security incident.
To bring this to life, we staged a Zoom interview with a candidate named “Gary.” He looked professional, spoke confidently about past projects, and seemed completely authentic.
But Gary wasn’t real. He was a deepfake of my colleague Jacob, created in under ten minutes with widely available tools.
To the human eye and ear, he passed every test. But when scanned with Reality Defender inside Zoom, the platform flagged him as manipulated in seconds.
It was a clear reminder: in live meetings, you can’t trust appearances alone. Without real-time detection, even seasoned interviewers can be fooled.
As Jacob explained during the session, today’s generative models are trained to be photorealistic and natural-sounding. The “tells” we once looked for, like six fingers, blurry edges, or robotic voices, are disappearing.
“Humans have a very hard time distinguishing between what might have come from a generative model and what might not have…especially at scale.” — Jacob Seidman
Reality Defender looks beyond what the eye or ear can perceive. Our models analyze pixel-level traces in video and frequency patterns in audio to find signals invisible to humans. With massive datasets of both authentic and generated media, we’ve trained detectors that can pick up even the most subtle artifacts, across multiple modalities.
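To make the frequency-pattern idea concrete, here is a toy illustration of inspecting an audio chunk’s spectral content with a short-time FFT. This is only a sketch: Reality Defender’s detectors are trained models, and the `high_band_energy_ratio` statistic, the threshold-free output, and the synthetic audio below are invented purely for illustration.

```python
# Toy illustration only: examine an audio chunk's frequency content with a
# short-time FFT. Real detectors are trained models; this hand-picked
# statistic is a stand-in for the features such models might learn to weigh.
import numpy as np

def spectrogram(samples: np.ndarray, frame_size: int = 1024, hop: int = 512) -> np.ndarray:
    """Log-magnitude spectrogram: one row per frame, one column per frequency bin."""
    window = np.hanning(frame_size)
    frames = [
        samples[start:start + frame_size] * window
        for start in range(0, len(samples) - frame_size, hop)
    ]
    magnitudes = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(magnitudes)

def high_band_energy_ratio(spec: np.ndarray) -> float:
    """Fraction of spectral energy above the midpoint frequency bin."""
    mid = spec.shape[1] // 2
    return float(spec[:, mid:].sum() / spec.sum())

# Example: one second of synthetic audio at 16 kHz (a 220 Hz tone plus noise).
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
chunk = np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(sr)
print(f"high-band energy ratio: {high_band_energy_ratio(spectrogram(chunk)):.3f}")
```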
Javier then pulled back the curtain on what happens behind the scenes when you run a Reality Defender scan during a Zoom call. The platform is built on a microservices architecture that lets us process millions of chunks of audio and video data simultaneously. That design means we can scale to thousands of calls without lag and deploy new detection models quickly as generative AI evolves.
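As a rough sketch of that fan-out pattern, consider the following. Everything here is hypothetical: the `Chunk` shape, the `score_chunk` service call, and the max-score aggregation are stand-ins for illustration, not Reality Defender’s actual implementation.

```python
# Hypothetical sketch of fanning media chunks out to detector services.
import asyncio
from dataclasses import dataclass

@dataclass
class Chunk:
    call_id: str
    modality: str   # "audio" or "video"
    index: int
    data: bytes

async def score_chunk(chunk: Chunk) -> float:
    """Stand-in for a request to a modality-specific detector microservice."""
    await asyncio.sleep(0.01)  # simulated network + inference latency
    return 0.5                 # placeholder manipulation score

async def scan_call(chunks: list[Chunk]) -> float:
    # Fan out: every chunk is scored concurrently, so the slowest chunk,
    # not the number of chunks, bounds the wall-clock time of a scan.
    scores = await asyncio.gather(*(score_chunk(c) for c in chunks))
    return max(scores)  # surface the most suspicious chunk

chunks = [Chunk("demo-call", "audio", i, b"...") for i in range(100)]
print(asyncio.run(scan_call(chunks)))
```

The appeal of this pattern is that throughput scales horizontally: handling more calls means running more detector instances, not making any single service faster.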
Several questions came up repeatedly during the Q&A. Here are a few highlights worth sharing:
Deepfake impersonation has shifted from a novelty to an organized, repeated attack method. Enterprises can no longer assume that a polished face and smooth voice on Zoom means the person is real.
Reality Defender was built for exactly this challenge. Solely focused on deepfake detection, our platform works across video, audio, and image modalities, delivering results in seconds and integrating directly into tools like Zoom and Teams.
Because the truth is, your eyes and ears aren’t enough anymore. But with the right defenses in place, catching impersonators before they cause harm is not only possible, it’s practical.