Can You Spot the Deepfake? Take the Cybersecurity Awareness Month Challenge

Marie Hoffman

Head of Marketing

October is Cybersecurity Awareness Month, and in this piece, we explore one of today’s fastest-growing threats: deepfakes. We’ll look at how they’ve evolved, why even experts struggle to detect them, and the risks they pose when combined with social engineering tactics. To bring this challenge to life, we’ve also created an interactive Deepfake Detection Game that lets you test your ability to spot the fakes and see just how hard it is to separate real from AI.

A Fast-Moving, Ever-Evolving Threat

Cybersecurity Awareness Month is a global initiative designed to help individuals and organizations learn about new and evolving online threats and how to stay safe. When the National Cybersecurity Alliance and the U.S. Department of Homeland Security launched the initiative in 2004, the internet and digital landscape were incredibly different. Social media was in its infancy, smartphones were rudimentary, and deepfakes and AI-powered impersonations simply didn’t exist.

Today, we’re facing a threat that is only a few years old and is evolving faster than most of us can keep up with. Not long ago, you could spot a deepfake or AI-generated person by looking for obvious errors like extra fingers, strange facial glitches, or unnatural blinking. Those days are over. Generative AI platforms now make it possible for anyone — not just experts — to create highly convincing fake videos, voices, and images in minutes, with very little data and for just a few dollars.

To show just how realistic this technology has become, we even created a deepfake of our CEO on stage at Web Summit — generated entirely from a single image. While the talk itself was real, the video you’ll see is 100% fake.

Bad actors have taken notice of these advancements. Deepfakes are now being used to impersonate CEOs, scam family members, commit interview fraud, influence elections, and erode public trust on a massive scale.

Why Everyone Is Vulnerable

One of the most dangerous aspects of deepfakes is that no one can spot them consistently, regardless of their age, role, or experience level. Research underscores just how wide this gap is:

  • Even the experts struggle: 59% of security professionals say it is “very or highly difficult” to detect deepfake attacks (BlackCloak/Ponemon Institute, 2025).
  • A confidence gap at work: 70% of security leaders are confident that their employees can identify deepfakes of organizational leadership — but only 34% of workers agree. This overconfidence is largely driven by C-level executives, creating a dangerous blind spot (CyberArk).
  • Low confidence across the public: Fewer than one in ten people (9%) aged 16+ feel confident in their ability to identify a deepfake (Ofcom).

The data paints a clear picture: deepfakes affect everyone. From security professionals to everyday users, we’re all navigating a world where ‘seeing is believing’ no longer holds true. The risk rises when deepfakes are paired with social engineering, blending convincing impersonations with stolen information and urgent, time-boxed requests. These attacks can take many forms: a friend or family member in need, a supposed love interest, a customer or colleague, or even a public official asking for help.

Raising Awareness Through Play

To illustrate how difficult deepfakes are to identify, we have launched an interactive deepfake detection game designed to test your instincts under pressure — just like a real social engineering attack. Here's how it works:

  • You’ll be shown 30 media files: a mix of images and videos of people, some real and some AI-manipulated.
  • You’ll have a few seconds to decide whether each person is real or AI.
  • When you’re finished, check the leaderboard to see how you rank against others and how difficult it is for the public to detect deepfakes with high confidence.

By putting yourself in a realistic detection scenario, you’ll see firsthand how challenging these threats are to spot and why relying on gut instinct isn’t enough.

Play. Share. Spread Awareness.

Deepfakes are recognized as one of the fastest-growing cybersecurity threats today, and the problem will only become more serious as the technology and the bad actors who use it grow more sophisticated. To protect yourself and raise awareness, you can:

  1. Play the Deepfake Detection Game and challenge your friends, colleagues, and networks to do the same.
  2. Share your score on social media using the hashtag #SpotTheDeepfake and help spread the word this Cybersecurity Awareness Month.