Insight
Kyra Rauschenbach
Head of Public Sector Business
In today's rapidly evolving information environment, artificial intelligence (AI) presents both unprecedented opportunities and unprecedented threats. Nowhere is this more apparent than on the modern battlefield, where cognitive overload, speed of information flow, and the sophistication of adversarial techniques have the power to shape — or destabilize — critical decisions.
In late 2025, Reality Defender had the privilege of supporting NATO Allied Command Transformation (ACT) and NATO Communications and Information Agency (NCIA) in the Innovation Continuum Cognitive Warfare Experimentation, an initiative exploring how AI-driven content influences operational-level decision-making. (See also: Reality Defender's work with NATO countries).
Our role: introduce controlled deepfake content into a realistic warfighting scenario to assess its impact on experienced operational planners. What unfolded underscores the importance of rapidly developing and executing a strategy to mitigate deepfakes in the cognitive dimension.
Reality Defender was tasked with demonstrating how deepfakes affect the warfighter at the operational level, in order to better understand how deepfake detection can support defense across all levels of war.
Deepfakes are no longer hypothetical threats; they are already affecting the tactical, operational, and strategic levels of war. While strategic and tactical vulnerabilities have been widely discussed, the impact on operational planners has been less explored. This experiment aimed to close that gap.
Participants were seasoned operational war planners confronted with two forms of AI:
"Good AI" — trusted analytical tools and data streams
"Bad AI" — adversarial AI analytic tools and deepfake content designed by Reality Defender
The deepfake content was woven into a realistic information environment alongside detailed, verified analytic situational reports, all of which indicated there was no imminent threat.
The challenge: would war planners rely on verified intelligence, or would manipulated media overpower their judgment?
The results were unequivocal: exposure to deepfake media can distort decision-making even in the presence of rigorous, verified analytic reporting. Despite extensive operational experience, participants repeatedly set aside accurate reporting in favor of a single fabricated video.
In one of the most striking outcomes of the exercise, planners ordered military action against the fictional region based on the disinformation embedded in the deepfake news video.
Deepfake media combined with time pressure results in poor decisions.
This was not a failure of training, and it was not a failure of intelligence collection. It was the extraordinary power of deepfake manipulation in a high-pressure decision environment. The experiment showed that even the most skilled warfighters are vulnerable to well-crafted AI-driven deception, reinforcing the urgent need for automated deepfake detection across the entire spectrum of military operations.
As NATO invests in mitigating threats in the cognitive dimension, exercises like this highlight the shifting nature of warfare. Cognitive warfare is not about attacking systems; it is about manipulating perception, often in time-pressured environments.
Deepfakes and synthetic media:
Create artificial urgency
Undermine trust in intelligence systems
Fog decision-making
The findings from this experiment demonstrate the necessity of integrating deepfake detection tools like Reality Defender directly and proactively into media analysis workflows. Detection must be seamless, rapid, and available at every level of command — from analysts to operational planners to strategic leadership.
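In practice, "seamless and rapid" detection means screening media automatically before it enters the planning picture, rather than asking planners to judge authenticity under time pressure. The sketch below illustrates that pattern only; the `score_media` function, field names, and threshold are hypothetical placeholders, not Reality Defender's actual API.

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    """Outcome of screening one media item before it reaches analysts."""
    path: str
    manipulation_score: float  # 0.0 = likely authentic, 1.0 = likely manipulated
    flagged: bool              # True if the item needs human review

def score_media(path: str) -> float:
    """Placeholder for a real detection call (hypothetical, not a real API).

    A production system would send the file to a detection service and
    return its manipulation score; this stub returns 0.0 for illustration.
    """
    return 0.0

def screen_inbound(paths, threshold=0.7):
    """Score each inbound item and flag those above the review threshold."""
    results = []
    for path in paths:
        score = score_media(path)
        results.append(ScreeningResult(path, score, score >= threshold))
    return results

# Flagged items would be routed to human review instead of flowing
# directly into situational reports or the common operating picture.
```

The key design choice this pattern reflects is that detection happens upstream of the decision-maker: planners see a verdict attached to the media, not raw, unvetted content.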
The cognitive dimension is where wars are decided, because it is where decisions are made. This experiment demonstrates that deepfakes directly threaten that dimension by distorting perception faster than traditional safeguards can respond.
Deepfake detection therefore must be treated as a core element of cognitive defense—integrated into workflows, trusted by operators, and available before manipulated media shapes action. Without it, decision superiority is no longer assured.
Reality Defender is honored to collaborate with NATO ACT and NATO NCIA on this critical initiative to understand how AI will affect the cognitive dimension.
Reality Defender remains committed to working alongside NATO and its partners to ensure warfighters can trust what they see, hear, and act upon—because in the age of synthetic media, reality itself must be defended.
Reach out if you're interested in exploring how deepfake detection can supplement your workflows.