Katie Tumurbat
Marketing Programs
During Cybersecurity Awareness Month, thousands of professionals tested their instincts in our “Can You Spot the Deepfake?” challenge, and the results revealed a critical truth: awareness does not equal readiness. This blog and infographic break down what the data tells us and what every organization must do next.
Participants in our deepfake challenge correctly identified real versus manipulated media just over half the time. In a binary task with 50 percent odds, that's barely better than chance. In other words, even informed professionals struggled when the pressure was real.
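One quick way to see why "just over half" is barely better than chance is to compare it against pure coin-flip guessing. The sketch below is illustrative only: the participant count and accuracy figure are assumptions, not the challenge's actual data.

```python
# Illustrative sketch: how much better than coin-flip guessing is "just over half"?
# The counts below are assumed for illustration, not the challenge's actual results.
from scipy.stats import binomtest

n_judgments = 1000   # hypothetical number of real-vs-fake judgments made
n_correct = 540      # hypothetical correct calls, i.e. 54% accuracy

result = binomtest(n_correct, n_judgments, p=0.5, alternative="greater")
print(f"Observed accuracy: {n_correct / n_judgments:.1%}")
print(f"p-value against pure 50/50 guessing: {result.pvalue:.4f}")

# Even if a result like this clears statistical significance, a few points above
# coin-flip accuracy offers little practical protection on a live call.
```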
This becomes especially dangerous when social engineering enters the picture. Attackers do not hack systems first. They hack people. They exploit psychological triggers like authority, urgency, familiarity, or scarcity to prompt action before critical thinking can engage. A request that sounds like the Chief Financial Officer. A friendly vendor following up on an “outstanding item.” A warning that access expires within minutes. These tactics are highly effective and only require one person to slip.
Attackers now use AI to generate convincing voice calls and live video deepfakes in real time, turning familiar social-engineering cues into high-pressure moments that bypass critical thinking.
Nearly half of all organizations experienced an attempted deepfake or AI impersonation attack in the past year (Gartner, 2025). Deloitte projects that AI-enabled fraud could cost U.S. businesses $40 billion annually by 2027, driven not by high-profile political events, but by everyday business interactions.
The real threat is not the viral deepfake or sensational headline. It is the convincing “CEO” who joins a live Zoom, the vendor who sounds exactly right, the candidate who interviews flawlessly but does not exist. These are operational failure points inside finance approvals, HR screening, customer support, investor communications, and vendor workflows, where trust is assumed by default. Deepfakes have moved beyond cybersecurity into business continuity.
In a recent internal exercise, we staged a live Zoom interview with a deepfake candidate named "Gary," created in under ten minutes using publicly available tools. He looked credible, sounded experienced, and fooled the interviewer. Reality Defender, running through a simple Zoom plug-in, flagged him as synthetic within seconds. That's the risk today: any workflow that relies on visual or verbal identity verification is now exposed, from finance approvals and HR screening to customer support and vendor management.
Most leaders today understand that deepfakes are a threat, but very few are operationally ready to respond in real time. Only 14 percent of companies have formal policies for identifying or responding to deepfakes. Just one in five provides employee training on recognizing manipulated audio or video. Almost none run simulations to test how their teams would react under pressure.
That distinction matters because deepfake attacks do not unfold over days or hours; they unfold in seconds. A cloned CEO voice on a live call does not leave time for committee review. A manipulated investor message does not come with a warning label. Decisions have to be made immediately, and most organizations are not structurally equipped for that moment.
Awareness is theoretical. Preparedness is operational. And right now, most companies are stuck in the first category while attackers are already operating in the second.
This was reinforced in a recent webinar moderated by LTG (Ret.) Bob Ashley, former Director of the U.S. Defense Intelligence Agency. One takeaway stood out: If you are a Chief Information Security Officer, executive, or agency leader, your first move is not buying technology. It is preparing your playbooks. Do your people know exactly what to do within five seconds of suspecting a deepfake? With playbooks rehearsed and verification embedded, the next step is investing in systems that operationalize these practices in real time.
Deepfake defense is not a single product or policy; it is an ecosystem strategy that connects human awareness with intelligent technology. The most effective organizations are investing across three interlocking areas: human awareness and training, rehearsed response playbooks, and real-time detection technology.
Human awareness is essential, but true resilience comes from pairing instinct with intelligent technology. Organizations investing in readiness today will be the ones that protect their reputation, maintain compliance, and preserve public trust in the years ahead.
If you are assessing your readiness or exploring where detection fits into your workflows, our team is here to help you take the next step, wherever you are starting from.