

A Brief History of Deepfakes

Gabe Regan

VP of Human Engagement

Updated December 2025

Deepfakes have evolved from early CGI research into a global risk as generative AI has made creating synthetic media fast, inexpensive, and highly convincing.

The term “deepfake” was coined in 2017 by a Reddit moderator posting under the same moniker, who founded a subreddit where users exchanged deepfake pornography they had created using photos of celebrities and open-source face-swapping technology. Although the unsavory forum has since been deleted, the word deepfake has persisted as the label for this type of AI-generated media.

While the origin of the word is clear, the history of what we consider “deepfakes” is more complicated. 

The Origin of Deepfakes

The concept of deepfakes (or deepfaking) can be traced back to the 1990s, when researchers used CGI in attempts to create realistic images of humans. The technology gained traction in the 2010s, when the availability of large datasets, developments in machine learning, and the power of new computing resources led to major advances in the field.

A true turning point arrived in 2014, when Ian Goodfellow and his team introduced the machine learning concept known as the Generative Adversarial Network (GAN). GANs set the stage for a new era of synthetic media by enabling machines to generate images, video, and audio far more realistic than anything before. Nearly every early deepfake system drew on this breakthrough.
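The adversarial idea behind GANs is simple enough to sketch in a few lines. The toy example below is a minimal, illustrative sketch, not Goodfellow’s original implementation; all layer sizes and hyperparameters are arbitrary choices. It trains a generator to mimic a simple two-dimensional distribution while a discriminator learns to tell real samples from generated ones:

```python
# Minimal GAN sketch: a generator learns to mimic a 2-D Gaussian
# while a discriminator learns to separate real from generated samples.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 8, 2  # arbitrary sizes for this toy example

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM)
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" data: samples from the target distribution we want to imitate.
    real = torch.randn(64, DATA_DIM) * 0.5 + torch.tensor([2.0, -1.0])
    fake = generator(torch.randn(64, NOISE_DIM))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The same adversarial principle, scaled up from toy distributions to high-resolution imagery and audio, is what made early synthetic media so convincing.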

It was the participation of everyday users, beginning in 2017, that pushed deepfake technology beyond research settings. Open-source tools such as DeepFaceLab enabled rapid experimentation, fueling both creative uses and harmful misuse, including non-consensual deepfake pornography.

This rapid democratization revealed the need for countermeasures that can detect manipulation before it spreads. Today, deepfake technology powers legitimate applications in media, accessibility, and education, while also enabling fraud, disinformation, and impersonation attacks.

A Timeline of the Rise of Deepfakes 

From early research to everyday reality, deepfakes have evolved faster than most defenses.

2018: Public Concern and Platform Policies

By 2018, experts began sounding alarms about the rapid pace of deepfake development and the risks associated with widespread access. Major technology platforms introduced their first policies to moderate manipulated media.

This was also the year Reality Defender’s original nonprofit initiative was founded, launching our earliest research into detecting synthetic media. That work later evolved into the deepfake detection company we are today.

2019: Early Legislation 

Governments around the world, including the United States, began exploring legislation to regulate the creation and distribution of deepfakes. Results have been mixed, but the year marked the beginning of formal regulatory interest, including the introduction of the Deepfake Report Act of 2019.

2020: Breakthroughs in Synthetic Text

OpenAI released GPT-3, a powerful language model capable of producing human-like text that ranged from poetry to computer code. While not a visual deepfake tool, GPT-3 marked a significant expansion of generative AI into written language, setting the stage for more advanced multimodal systems.

2021: Acceleration Across Modalities

Deepfake technology matured across audio, video, and image synthesis. Tools became more accessible and more convincing, and researchers focused on improving identity preservation, lip-sync accuracy, and full-body motion transfer. High-quality synthetic voices also became significantly easier to produce.

2022: Deepfakes Become Easy to Make

The public release of Stable Diffusion in August 2022 marked a turning point for synthetic media. For the first time, anyone could generate high-quality, photorealistic images on consumer hardware. This moved deepfakes from specialist labs into the hands of the general public and bad actors alike. Later that year, OpenAI’s release of ChatGPT brought generative AI into the mainstream. 

By this point, deepfakes were becoming everyday digital artifacts.

2023: Rapid Growth and Government Response

Generative technologies continued to spread across social media; deepfake content grew more than 550 percent between 2019 and 2023.

By 2023, regulators began treating deepfakes as a national and economic risk. In the United States, the White House issued its October 2023 Executive Order on Safe, Secure, and Trustworthy AI, explicitly calling out synthetic media and impersonation fraud as emerging threats. In the European Union, lawmakers advanced the EU AI Act, including transparency requirements for deepfakes and synthetic content labeling. In the United Kingdom, deepfakes were folded into the Online Safety Act and broader election integrity discussions, with officials warning about AI-generated impersonation and misinformation ahead of national votes.

2024: Deepfakes Become a Global Risk

In 2024, the World Economic Forum identified deepfakes and AI-driven disinformation as one of the top global risks. Real-world harm became increasingly visible. A McAfee report found that 26 percent of individuals encountered a deepfake scam in 2024, and 9 percent fell victim to one. 

In one high-profile incident, a Hong Kong finance worker transferred $25 million after criminals used deepfaked executives in a live video meeting. This year made clear that deepfakes had become active tools for fraud, manipulation, and large-scale deception.

2025: Deepfakes in Daily Life

By 2025, deepfakes actively shaped everyday digital interactions. People could create convincing synthetic media with almost no effort. A Business Insider journalist proved this by cloning their own voice using a low-cost online tool and then using that clone to navigate a bank’s phone system. The voice sounded slightly robotic, but it still passed as real and raised no suspicion. What once required specialized expertise now sits in the hands of anyone willing to spend a few dollars.

At the same time, demand for generative AI skyrocketed. Platforms like OpenAI’s Sora and Google’s Gemini 3 Pro started limiting free generations as global usage spiked and compute costs surged. These restrictions signal a world where synthetic media tools are not fringe experiments but everyday utilities that millions rely on, for both legitimate and malicious purposes.

Regulators responded by accelerating legislative action. A small cluster of early deepfake laws quickly grew into hundreds of active proposals worldwide. The European Union advanced comprehensive protections through the AI Act. In the United States, state lawmakers introduced more than 50 AI-related bills each week, including roughly 25 bills focused specifically on deepfakes. Governments across the Asia-Pacific region also began shaping their own regulatory frameworks to address escalating risks.

In 2025, deepfakes stopped being an emerging threat and became an active, disruptive force that governments, businesses, and individuals could no longer ignore.

The New Frontier of Synthetic Media

Deepfakes have moved far beyond face-swapping. What began as an experimental curiosity is now a complete synthetic media ecosystem capable of manipulating or fabricating nearly every dimension of human communication. Faces, voices, bodies, and environments can now be generated or altered with remarkable realism and speed. This evolution unlocks powerful creative and commercial opportunities, but it also introduces profound risks.

These advances span several forms of manipulation, each representing a new way synthetic media can be created, altered, or misused.

Hyperreal Audio Cloning

Modern voice-cloning systems can replicate a person’s voice using only a few seconds of audio, capturing tone, pacing, and emotional nuance with striking fidelity. These capabilities enhance accessibility tools, support multilingual dubbing, and preserve the voices of public figures and loved ones. Yet the same technology fuels impersonation scams, fabricated audio statements, and non-consensual voice replicas. Voice, once considered a trustworthy biometric, can now be forged with little effort.

Full-Body Manipulation

Synthetic media has expanded beyond facial reenactment to full-body manipulation. Systems can transfer one person’s entire movement pattern onto another, enabling breakthroughs in film production, virtual training, and athletic analysis. At the same time, these tools can fabricate surveillance footage, create realistic explicit content without consent, and stage political or social events that never occurred. As these models improve, the assumption that video provides objective evidence continues to erode.

Text-to-Video and Text-to-Image Generation

Diffusion models, which generate images by transforming random noise into coherent visuals, have introduced a new category of deepfakes by creating synthetic scenes and characters entirely from text prompts. This capability accelerates creative ideation, supports education and research, and enables rapid visualization for design and storytelling. 

However, it also allows the invention of political events, misinformation campaigns, and hyper-realistic fictional content that can spread widely before verification is possible. The ability to generate believable imagery from language alone democratizes powerful illusion at a level once reserved for professional VFX studios.
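For readers curious about the mechanics, the sketch below shows a simplified reverse-diffusion sampling loop, the process by which such models turn random noise into an image. Everything here is illustrative: `predict_noise` is a hypothetical placeholder for a trained noise-prediction network, and real systems such as Stable Diffusion additionally condition that prediction on a text prompt.

```python
# Conceptual sketch of reverse-diffusion (DDPM-style) sampling:
# start from pure noise and iteratively remove predicted noise.
import torch

STEPS = 50
betas = torch.linspace(1e-4, 0.02, STEPS)   # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative signal retention

def predict_noise(x, t):
    """Placeholder for a trained noise-prediction network (e.g., a U-Net)."""
    return torch.zeros_like(x)  # a real model returns its noise estimate here

x = torch.randn(1, 3, 64, 64)  # begin with a random 64x64 RGB "image"
for t in reversed(range(STEPS)):
    eps = predict_noise(x, t)
    # Remove the noise the model predicts is present at step t.
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / torch.sqrt(alphas[t])
    if t > 0:
        # Re-inject a small amount of noise, as the sampling procedure requires.
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
# x now approximates a sample from the distribution the model learned.
```

Text conditioning is what turns this generic denoising loop into text-to-image generation: the prompt steers each noise prediction toward scenes that match the description.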

Democratization and Accessibility

The most significant shift in deepfakes is the dramatic reduction in the skill and resources required to produce them. Open-source software, web-based tools, and low-cost cloud services make high-quality synthetic media creation accessible to nearly anyone. 

This democratization empowers creators and educators, but it also enables mass production of explicit deepfakes, widespread manipulation, and a faster cycle of disinformation. As creation becomes easier, detection must scale at the same pace.

Real-Time Generation and Live Interaction

The newest frontier is real-time deepfakes. These systems can generate synthetic faces and voices during live video calls, enabling advanced features like real-time dubbing and virtual avatars. The same technology also makes impersonation attacks far easier. Fraudsters can appear and sound like someone else in the moment, allowing them to bypass identity checks. Real-time deepfakes shift the threat from static misinformation to active, interactive deception.

Deepfakes are reshaping communication, trust, and security across every sector. Understanding their capabilities, and the opportunities and risks they create, is essential for any organization preparing for the next era of synthetic media.

The Urgent Need for Deepfake Defense

Synthetic media is now a dominant force reshaping how society perceives truth, identity, and authenticity. Each new capability brings powerful benefits, but also new attack surfaces for fraud, disinformation, and coercion.

Organizations cannot rely on manual review, human intuition, or outdated safeguards. Detection must operate at the same speed and scale as generation. Verification must become as routine as spam filtering or malware scanning.

Automated deepfake detection, built into the systems organizations already use, is rapidly becoming a core requirement for institutions, governments, and platforms. Reality Defender was founded in 2021 to provide scalable deepfake detection and help organizations protect identity and trust.