Gabe Regan
VP of Human Engagement
Businesses are facing a surge in deepfake fraud attempts, yet executives are unprepared. More than half of firms say they've been targeted by attempted deepfake scams. Despite the magnitude of the threat, 6 in 10 companies lack protocols to fight them.
A deepfake is synthetic media — or fake content generated by AI — that replaces a person's appearance, voice, or likeness with someone else's. The term combines "deep learning" and "fake," and refers to content that can be nearly indistinguishable from the real thing.
Think of the Taylor Swift deepfakes that made the rounds on social media. Or the Hong Kong finance worker tricked into sending $25 million to scammers impersonating a CFO. The deepfake definition goes beyond simple photo or video editing, and can make anyone appear to say or do things they never actually did.
Deepfakes are created using Generative Adversarial Networks (GANs), a type of AI used to create realistic fake content.
GANs work like two AI systems competing against each other.
The Generator creates fake content by analyzing thousands of images, videos, or audio samples of the target person. It learns facial expressions, voice patterns and mannerisms to create convincing replicas.
The Discriminator acts as a detective, trying to identify fake content. As these two systems battle, the generator gets better at creating realistic fakes that can fool the discriminator — and eventually, human eyes and ears.
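To make that generator-versus-discriminator loop concrete, here is a minimal, illustrative sketch in PyTorch. It trains on a toy two-dimensional dataset rather than real images or audio of a person, and every name in it (generator, discriminator, latent_dim, real_batch) along with the layer sizes and learning rates is an assumption chosen for clarity, not Reality Defender's technology or any real deepfake pipeline.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator.
# Toy 2-D data stands in for the "thousands of images, videos, or audio samples."
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: turns random noise into candidate "fake" samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# Discriminator ("the detective"): scores how likely a sample is real (1) vs. fake (0).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for genuine training data: points from a fixed Gaussian.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # 1. The discriminator learns to separate real samples from fakes.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    d_opt.step()

    # 2. The generator learns to produce samples the discriminator labels as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    g_opt.step()
```

As the two networks trade updates, the generator's outputs drift closer to the real distribution; scaled up to faces and voices, the same dynamic is what makes deepfakes convincing.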
Deepfakes fall into three main categories: video, image, and audio. Each presents unique challenges for detection and prevention, and recent real-world incidents show how damaging they can be.
CFO Zoom Scam (2024): As referenced above, a multinational company fell victim to a deepfake video conference in which criminals impersonated its CFO and other executives, resulting in a $25 million fraudulent transfer. Because the attack was embedded in a routine business workflow, it was far harder to spot than a standalone scam.
Pentagon Explosion Fake (2023): A deepfake image showing a fake explosion at the Pentagon briefly sent the Dow Jones down 85 points, showing how synthetic media can manipulate financial markets.
Hong Kong Heist (2020): Criminals used AI voice cloning to impersonate a company director over the phone, convincing a bank branch manager to transfer $35 million.
Deepfakes pose personal risks including identity theft, reputation damage and targeted harassment. For businesses, the threats include financial fraud, corporate espionage and market manipulation.
Business risks are particularly severe: deepfakes enable sophisticated social engineering attacks that bypass traditional security measures, exploit trust, and cause significant financial and brand damage. The financial impact can be substantial, with deepfake fraud costing businesses an average of nearly $500,000 per incident, rising to $680,000 for the world's largest businesses.
Here are five simple checks that don’t require advanced tools. For comprehensive detection strategies, check out our Deepfake Detection Guide.
1. Watch the face and listen. Look for blurring or unusual lighting, and notice awkward pacing or rhythm.
2. Look for unusual movements and tone. Check whether emotions match the conversation context, watch for flat or inappropriate emotional responses, and notice unnatural movements or missed blinks.
3. Monitor the audio quality. Listen for odd pauses or breaks in speech and for flawed pronunciation of specific sounds.
4. Background noise might seem off. Notice inconsistent or artificial ambient sounds, and check whether the background audio seems disconnected from what you see.
5. Verify the source. Always confirm requests through an alternative communication channel, and ask personal questions only the real person would know.
To explore how Reality Defender secures critical communications against deepfake attacks, schedule a demo with our team.