Aphrodite Brinsmead
Product Marketing Lead
A fraud analyst submits an image. The result comes back: Manipulated. If a manager, regulator, or auditor asks why, what's the answer?
A probability score doesn't tell you why the image was flagged. Neither does a verdict with no rationale behind it. A result needs to be actionable and explainable, and in fraud and compliance workflows, the difference between a bare verdict and one you can explain can determine whether a finding holds up.
This post walks through how Reality Defender arrives at a detection result. We'll cover model-based detection, contextual analysis, and metadata signals like C2PA provenance records, plus how to use what you're seeing, whether you need to act fast or defend a conclusion.
Accuracy is the first question most teams ask when evaluating a detection tool, and it's a reasonable one. But accuracy measures how often a model is right across a test set; it doesn't tell you what any individual result means in practice, or what drove the verdict on the file in front of you. A model that's 97% accurate can still return a result you can't act on, explain, or defend.
Most detection tools compound this problem by returning either a raw confidence score or a binary verdict. Both formats are difficult to interpret without knowing what the model was trained to detect, what data it used, and when it was last updated. Detection is a continuous discipline, and any single accuracy number captures one moment in a threat landscape that shifts constantly. In fraud, compliance, and legal contexts, an explanation of what drove the result isn't optional. It's what makes a result usable downstream.
Reality Defender starts from a different premise. A verdict without a rationale isn't operationally useful.
RealScan analyzes images using Reality Defender's proprietary detection models. Admins can enable two additional methods, Context Aware analysis and metadata detection, in settings to surface further context alongside the model results.
Every scan returns an Overall Results Score, a Final Conclusion of Authentic, Suspicious, or Manipulated, and the detailed findings from each enabled detection method.
The sections below cover what models, Context Aware analysis, and metadata each contribute to the final result.
RealScan results view showing Overall Results Score, Final Conclusion of Manipulated, and three tabs (RD Models, Context, Metadata).
Detection models are the core of Reality Defender's products. They analyze the image itself for artifacts of generation and manipulation that aren't visible to the human eye.
Reality Defender runs multiple specialized models analyzing the same file independently, each trained to detect different manipulation techniques. For images, that means two groups. Face-focused models, which cover GANs, diffusion, faceswaps, and visual noise analysis, look for inconsistencies in how a face was generated or swapped. They flag unnatural blending at the edges, diffusion artifacts in skin texture, or noise patterns inconsistent with a real camera capture. Full-frame models analyze the entire image, not just the face. They look for generation signatures that appear outside facial regions, in the background, the body, or the surrounding context.
Each model returns its own signal. When models disagree, that's expected behavior. They're designed to catch different things. A weighted ensemble combines those signals, accounting for how reliable each model tends to be on a given type of content. The result is a single calibrated Overall Results Score that reflects the combined evidence across all models, not a simple majority vote or the loudest signal.
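As a rough sketch of how a weighted ensemble differs from a majority vote, the snippet below combines per-model scores using reliability weights. The model names, weights, and rounding are hypothetical illustrations, not Reality Defender's actual models or calibration.

```python
# Hypothetical sketch of a weighted ensemble; model names, weights, and
# calibration are illustrative, not Reality Defender's implementation.
from dataclasses import dataclass

@dataclass
class ModelSignal:
    name: str      # e.g. a face-focused or full-frame detector
    score: float   # per-model manipulation likelihood, 0.0 to 1.0
    weight: float  # how reliable this model tends to be on this content type

def overall_results_score(signals: list[ModelSignal]) -> float:
    """Combine independent model signals into a single combined score."""
    total_weight = sum(s.weight for s in signals)
    # Weighted average rather than a majority vote or the loudest signal.
    raw = sum(s.score * s.weight for s in signals) / total_weight
    return round(raw, 2)

signals = [
    ModelSignal("face_gan", 0.91, 0.4),
    ModelSignal("face_diffusion", 0.62, 0.3),  # disagreement is expected
    ModelSignal("full_frame", 0.88, 0.3),
]
print(overall_results_score(signals))  # one score reflecting the combined evidence
```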
Context Aware analysis looks beyond the image itself. It evaluates the context around the image rather than the content within it.
For images, a vision-language model assesses whether the image is real or AI-generated, informed by the results of a Reverse Image Search. Both signals appear separately in the results under General Context and Reverse Image Search.
If Context Aware identifies manipulation and the models do not, that result takes precedence. If both identify manipulation, the Final Conclusion uses the model score. If Context Aware finds nothing, the conclusion falls back to model analysis.
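Read as pseudocode, that precedence looks something like the sketch below. The function shape is an assumption for illustration; only the precedence rules come from the description above.

```python
# Illustrative sketch of the precedence rules described above; the structure
# is an assumption, not the product's published decision logic.
def final_conclusion(model_verdict: str, context_flags_manipulation: bool) -> str:
    """model_verdict is the verdict implied by the model ensemble score."""
    if context_flags_manipulation and model_verdict == "Authentic":
        # Context Aware identifies manipulation, the models do not: Context Aware wins.
        return "Manipulated"
    # If both identify manipulation, the conclusion uses the model score;
    # if Context Aware finds nothing, it falls back to model analysis.
    return model_verdict
```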
Manipulated image showing General Context and Reverse Image Search results
Metadata is the third signal. Unlike models or Context Aware analysis, it doesn't interpret the image at all. It reads the file.
When an AI generation tool creates a file, it often leaves traces. These include embedded tags from tools like Stable Diffusion, Midjourney, or DALL-E, C2PA provenance records that track an image's origin and edit history, and encoding patterns inconsistent with a camera-captured image. These signals don't require model analysis to interpret. They're either present or they're not.
Metadata analysis appears in its own tab in RealScan, separate from model output. When metadata reveals signals, they factor into the Final Conclusion. When metadata reveals nothing, the conclusion rests on model and Context Aware analysis.
It's a fast, deterministic signal. In cases where generative tools have left clear markers, it can be the most straightforward indicator of AI generation in the result.
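To make "present or not" concrete, here is a simplified, hypothetical check for embedded generator tags using Pillow. It is not Reality Defender's metadata parser, the markers listed are only examples, and reading C2PA provenance records would require a dedicated C2PA library not shown here.

```python
# Simplified illustration of reading generator traces from image metadata with
# Pillow; the markers checked are examples of the kinds of traces listed above.
from PIL import Image
from PIL.ExifTags import TAGS

GENERATOR_MARKERS = ("stable diffusion", "midjourney", "dall-e")

def metadata_markers(path: str) -> list[str]:
    img = Image.open(path)
    findings = []

    # PNG text chunks: some generators embed a "parameters" entry or tool name.
    for key, value in img.info.items():
        text = str(value).lower()
        if key == "parameters" or any(m in text for m in GENERATOR_MARKERS):
            findings.append(f"embedded tag: {key}")

    # EXIF Software/Make fields sometimes name the generation tool.
    for tag_id, value in img.getexif().items():
        tag = TAGS.get(tag_id, str(tag_id))
        if tag in ("Software", "Make") and any(m in str(value).lower() for m in GENERATOR_MARKERS):
            findings.append(f"EXIF {tag}: {value}")

    return findings  # the markers are either present or they're not
```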
Metadata reveals AI generation markers in the image file. When metadata is the primary signal, no Overall Results Score appears.
How you use a result depends on what you need to do with it.
For automated and API-driven workflows, the Final Conclusion is the signal. Authentic, Suspicious, or Manipulated gives your team what they need to trigger a next step. That might be surfacing a notification, applying rules for how each verdict is handled, or feeding results directly into existing platforms to automate a response. Your organization's policies determine those actions.
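In code, that kind of automated routing might look like the sketch below. The response field and the actions are hypothetical placeholders standing in for your organization's policies, not the Reality Defender API schema.

```python
# Hypothetical routing sketch; field names and actions are assumptions.
def route_result(result: dict) -> str:
    conclusion = result["final_conclusion"]  # "Authentic", "Suspicious", or "Manipulated"

    if conclusion == "Manipulated":
        return "block_and_notify_fraud_team"  # e.g. surface an alert, open a case
    if conclusion == "Suspicious":
        return "queue_for_manual_review"      # closer look before a final decision
    return "continue_workflow"                # Authentic: no action needed

print(route_result({"final_conclusion": "Suspicious"}))
```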
For investigations, compliance reviews, or legal proceedings, the Final Conclusion, Overall Results Score, and the detail underneath all matter. Which manipulation techniques the models detected, any Context Aware findings, and whether metadata revealed AI generation markers: these insights are what you'd use to brief a wider team, document a decision for audit, or support a legal or compliance case.
Suspicious is worth calling out specifically. It represents a narrow range of the Overall Results Score where the likelihood of manipulation is present but not conclusive. Enough to warrant a closer look before your team makes a final decision, but not a finding on its own.
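A minimal sketch of that banding follows, with hypothetical thresholds; only the existence of a narrow Suspicious range comes from the product description.

```python
# Hypothetical score-to-verdict bands; the thresholds are assumptions, and only
# the idea of a narrow Suspicious range comes from the description above.
def verdict_from_score(score: float) -> str:
    if score >= 0.80:
        return "Manipulated"
    if score >= 0.60:  # narrow, inconclusive band worth a closer look
        return "Suspicious"
    return "Authentic"
```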
The Final Conclusion gives you something to act on immediately. The detail underneath gives you something to stand behind.
Schedule a RealScan demo to see detection results in your workflow.