When Deepfakes Meet Identity Verification: Lessons from North Korea’s Hiring Fraud Campaign

Gabe Regan

VP of Human Engagement

Synthetic identity fraud has moved beyond fake documents and stolen Social Security numbers. Today, threat actors use AI-generated faces, manipulated IDs, and automated persona-building to create believable digital humans at scale.

Since 2022, North Korean state-backed groups have demonstrated how industrialized this process can become. They combined hiring fraud, AI-generated headshots, doctored identity documents, and malware to place operatives inside Western companies. One eight-person cell earned $1.64 million over 3.5 years. A single synthetic identity pipeline created 135 personas and targeted more than 73,000 individuals.

This is not just a cybersecurity issue. It is a failure point for identity verification (IDV), and it exposes a gap in how most organizations think about authentication in hiring and onboarding.

How Threat Actors Generate Synthetic Identities at Scale

Modern synthetic identity pipelines follow a structured process. First, attackers harvest real identity data from breaches, dark web markets, or even public records. Next, they generate convincing profile photos, scraping images from social platforms or using AI image generators. In some cases, they use legitimate face-swapping tools to composite a real face onto a new headshot suitable for an identity document.

Then, they create fake passports and IDs using illicit document-generation services. Even when these documents contain watermarks, attackers automate Photoshop routines to remove visible traces.

With a fabricated passport and an AI-generated headshot in hand, they create email accounts and professional networking profiles. They build social proof at scale. Some operators have reportedly passed enhanced identity verification on professional platforms at rates above 40 percent.

Finally, they use local “laptop farms” to route traffic through residential IP addresses in the U.S. or U.K., making their presence appear legitimate. Once hired, they escalate privileges and access sensitive systems.

Every stage depends on convincing visual authenticity. If the image passes, the identity passes.

Why Traditional IDV Controls Are No Longer Enough

Most identity verification systems focus on validating documents, detecting liveness, and comparing biometrics. They determine whether a face matches the presented document, whether a person appears live during capture, and whether submitted data aligns with official records. These are important and necessary checks.

However, they do not answer a newer and increasingly critical question: did someone manipulate or generate the face, voice, or document using artificial intelligence?

Two high-risk scenarios are increasingly common:

  1. Face swap or AI-enhanced image edits: Attackers insert a real face onto a synthetic or modified image to create a new identity. Liveness checks may pass, but the attacker has already altered the media.
  2. Injection of a real face onto a fake ID: Fraudsters composite an authentic-looking portrait onto a fabricated or altered passport. The face is real. The document appears valid. The manipulation sits between the two.

Legacy IDV tools do not analyze whether media has been AI-generated or synthetically manipulated. As generative tools improve, this blind spot continues to widen.

Synthetic Identity Fraud Is Moving Into Live Workflows

The risk does not end once a document is uploaded or a selfie passes a liveness check. Synthetic identities increasingly move beyond static verification and into live, high-stakes interactions where decisions are made in real time. 

What begins as a manipulated headshot or altered ID can carry through an entire relationship with an organization. Identity fraud is not confined to a single touchpoint. It follows the individual into every workflow that relies on human perception.

Account Onboarding and KYC

At registration and enrollment, fraudsters submit AI-generated faces, stolen documents, and synthetic identities designed to pass standard liveness and document checks. Ghost student schemes, synthetic account openings, and KYC fraud all begin at this stage. Verification tools that authenticate documents and confirm biometric presence stop fraud at the point of entry, before a relationship is established under false pretenses.

Hiring Fraud

Fraudsters use stolen or AI-generated identities to apply for jobs, pass initial screenings, and show up to interviews as someone else entirely. Recruiters have no reliable way to confirm that the person on a video call matches the identity behind the application, or that the same person appears consistently across screening, interview, and onboarding. When hiring determines access to internal systems, sensitive data, or financial controls, an unverified hire is a direct entry point into the organization. Verified identity at the application stage creates an audit trail that follows the candidate through every step.

Account Recovery

Recovery workflows are a common target because they rely on credentials rather than the person behind them. An impersonator with the right information can pass a knowledge-based check and regain access to an account that is not theirs. Verification at the recovery stage confirms the person, not just the password, closing the gap that credential-based checks leave open.

Contact Centers and Customer Interactions

In contact centers, synthetic speech can navigate interactive voice response systems and even pass biometric authentication checks. These systems recognize known customers, but they do not determine whether the voice itself was generated by AI. As a result, fraudsters can reach agents under the guise of legitimate callers, increasing operational cost and exposure before suspicion arises.

Document Verification and Signing

Global document verification ensures that customers can be identified accurately regardless of where they are or what documents they carry. For signing workflows, deepfake detection adds a layer of analysis during the session itself, confirming that the person present is authentic before and during the signing event. A risk score generated in real time gives the host or notary immediate confidence to proceed or stop the session.

In each of these scenarios, people exercise trust in the moment. They rely on what they see and hear to make judgment calls, often under pressure. When synthetic media enters these workflows, it does not just create confusion; it changes outcomes. By the time someone questions authenticity, someone may have granted access, transferred funds, or shared sensitive information. The window for prevention shrinks to seconds, which is why teams must assess authenticity during the interaction, not after it ends.
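To make the in-session decision concrete, here is a minimal sketch of how a real-time risk score might be mapped to a proceed / step-up / stop action during a live session. The data shapes, thresholds, and names (`FrameSignal`, `decide`) are illustrative assumptions for this article, not a real detection SDK.

```python
# Hypothetical sketch: acting on a real-time authenticity risk score
# during a live session. Score source and thresholds are assumptions,
# not a real vendor API.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    PROCEED = "proceed"  # media looks authentic
    STEP_UP = "step_up"  # request additional verification
    STOP = "stop"        # halt the session for review


@dataclass
class FrameSignal:
    timestamp: float
    risk_score: float  # 0.0 = likely authentic, 1.0 = likely synthetic


def decide(signal: FrameSignal,
           step_up_threshold: float = 0.5,
           stop_threshold: float = 0.8) -> Action:
    """Map a per-frame risk score to a session decision."""
    if signal.risk_score >= stop_threshold:
        return Action.STOP
    if signal.risk_score >= step_up_threshold:
        return Action.STEP_UP
    return Action.PROCEED


# A host- or notary-facing loop would evaluate each incoming frame:
assert decide(FrameSignal(0.0, 0.12)) is Action.PROCEED
assert decide(FrameSignal(1.5, 0.63)) is Action.STEP_UP
assert decide(FrameSignal(3.0, 0.91)) is Action.STOP
```

The point of the sketch is the timing: the decision happens per frame, inside the interaction, rather than in an after-the-fact review queue.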

Where Deepfake Detection Fits Into IDV

Deepfake detection adds a missing layer to identity verification. Instead of validating only identity data or a biometric match, it analyzes whether the audio, video, or image itself has been generated or manipulated using AI.

Reality Defender addresses synthetic identity fraud in three key ways:

1. Real-Time Synthetic Voice Detection 

  • Detects synthetic voice during live calls
  • Provides live analysis and alerts
  • Integrates directly with call center and fraud operations platforms
  • Flags suspicious activity before routing or agent engagement

This helps organizations stop AI-generated callers at the point of contact, rather than investigating after fraud occurs.

2. Live Audio and Video Protection in Meetings

  • Detects manipulated voices and video in Zoom and Microsoft Teams
  • Flags impersonators even when cameras are off
  • Protects sensitive sessions, including executive briefings and M&A calls
  • Supports hiring fraud prevention in remote interview workflows

This provides authenticity signals during live decision-making, not after damage spreads.

3. Image and Document Authenticity Signals for IDV

  • Detects AI-generated or face-swapped images
  • Identifies manipulated identity documents
  • Adds authenticity checks alongside biometric and liveness controls
  • Supports step-up verification before access is granted

This strengthens IDV by analyzing the integrity of the media itself.
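The layering described above can be sketched in pseudocode-style Python. This is an illustrative sketch only: the field names, threshold, and `verify` function are assumptions made for this article, and no vendor API is implied. It shows why an authenticity check must be a separate gate: a face swap can pass document and liveness checks yet still fail the media-integrity layer.

```python
# Hypothetical sketch: layering a media-authenticity check on top of
# traditional document and liveness checks. Names and thresholds are
# illustrative assumptions, not a real IDV API.
from dataclasses import dataclass


@dataclass
class IDVResult:
    document_valid: bool       # document checks passed
    liveness_passed: bool      # biometric presence confirmed
    authenticity_score: float  # 0.0 authentic .. 1.0 likely AI-manipulated


def verify(result: IDVResult, authenticity_threshold: float = 0.5) -> str:
    # Legacy gates: document and liveness must both pass.
    if not (result.document_valid and result.liveness_passed):
        return "reject"
    # Added layer: a face swap can pass liveness, so inspect the media itself.
    if result.authenticity_score >= authenticity_threshold:
        return "step_up"  # escalate before granting access
    return "approve"


assert verify(IDVResult(True, True, 0.1)) == "approve"
assert verify(IDVResult(True, True, 0.7)) == "step_up"  # passes liveness, fails authenticity
assert verify(IDVResult(False, True, 0.1)) == "reject"
```

The second assertion is the blind spot this article describes: both legacy gates pass, and only the authenticity layer triggers the step-up.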

Checklist: Does Your IDV Program Cover Synthetic Identity Risk?

If you oversee identity verification, fraud prevention, or security operations, ask:

  • Can we detect AI-generated faces during onboarding?
  • Can we identify face swaps or composited ID images?
  • Do we analyze voice authenticity during live calls?
  • Can we detect manipulated video during remote interviews?
  • Do we receive early signals before approval or access decisions are finalized?
  • Are detection alerts integrated into our fraud or SOC workflows?
  • Can our controls operate in real time, not just after upload?

If the answer to most of these questions is no, your IDV stack likely verifies identity data, but not media authenticity.

The Shift: From Identity Matching to Authenticity Verification

Synthetic identity fraud at scale demonstrates a structural shift. Threat actors no longer need to steal a complete identity when they can generate one. They no longer need a perfect fake document; they just need a convincing one. When AI-generated and manipulated media sit at the core of the attack chain, authenticity becomes the control point.

Deepfake detection does not replace identity verification. It strengthens it. It ensures that the face, voice, or document being evaluated is not itself synthetic or altered. As industrialized identity fraud continues to scale, the organizations that embed authenticity detection directly into IDV and live workflows will be better positioned to prevent infiltration, financial loss, and reputational damage.

Synthetic identity fraud is not slowing down. The question is whether your identity verification program was built for this generation of threat.

Frequently Asked Questions

What is a synthetic identity?

A synthetic identity is a fabricated persona created by combining real and fabricated information, often supported by AI-generated or manipulated images, audio, or documents to make it appear legitimate.

How is synthetic identity fraud different from traditional identity theft?

Traditional identity theft relies on stealing a real person’s full identity. Synthetic identity fraud builds a new identity using partial real data and fabricated elements, often enhanced with AI-generated media.

Can liveness detection stop synthetic identities?

Liveness checks confirm that a person is present during capture, but they do not reliably detect whether someone altered or generated the face or voice using AI.

Why is real-time detection important?

Teams make many high-risk decisions during live calls, meetings, and interviews. Real-time authenticity signals allow organizations to pause, verify, and intervene before someone grants access, approves a transfer, or shares sensitive information.

Does deepfake detection replace identity verification?

No. Deepfake detection strengthens identity verification by assessing whether the media itself is authentic, adding protection where traditional document and biometric checks may fall short.