Aphrodite Brinsmead
Product Marketing Lead
Somewhere in your contact center right now, an AI agent is probably on a call. Not a human using an AI tool — an autonomous agent that dialed in, navigated your IVR, and is working through your authentication workflow. I sat down with Matt Smallman of SymNex Consulting and Brian Levin, Chief Customer Officer at Reality Defender, to talk about what that actually means for the organizations on the receiving end.
We covered why the voice channel has become the most exposed surface in enterprise security, how to think about the different types of agentic threat, and what a practical detection and response strategy actually looks like.
A useful framework that came out of the conversation is a taxonomy of agentic callers: malicious, parasitic, and shadow. The right response to each is different.
Malicious agents
Most people are already familiar with this category: credential harvesting, IVR reconnaissance, large-scale probing of customer databases. What's changed is the scale. Bad actors can now spin up hundreds of simultaneous calls for pennies, turning a manual attack into an automated one.
Parasitic agents
Less discussed but increasingly common. These are automated callers using your infrastructure — your IVRs, your agents, your business processes — as a free training dataset for third-party models. They're not obviously fraudulent. But they're consuming your resources and extracting your operational data.
Shadow agents
Perhaps the most significant emerging risk. These are legitimate-use agents acting on behalf of real customers — disputing charges, checking balances, switching to better rates. The customer authorized them. But they're operating outside your security perimeter, often holding your customer's credentials, and are entirely indifferent to your cost model. They'll wait in a queue forever. They don't care whether they reach a human or an IVR. And in industries with low switching costs, they can silently erode your customer relationships and margins over time.
The malicious category is familiar territory for fraud and security teams. The parasitic and shadow categories are where most organizations are least prepared — and where the volume is growing fastest.
There's a lot of noise in the market about high-profile deepfake attacks — voice cloning of executives, targeted fraud against VIPs. These are real, and they matter. But for the majority of contact center operations, the more immediate risk is high-volume, generic synthetic bots hitting IVRs at scale.
These don't require a sophisticated voice clone. They use off-the-shelf text-to-speech. They're cheap, fast to deploy, and most contact centers have no way to detect them.
The investment data tells you where this is heading: spending on agentic AI is projected to reach $155 billion by 2030. These are legitimate businesses selling voice agents to your customers — agents that will then call your contact center on their behalf.
We're already seeing this play out. Some B2B contact centers are seeing 15–20% of inbound call volume come from agentic callers at peak times. When a contact center opens at 9am and every agentic caller starts dialing simultaneously, the human callers, who won't wait, lose out immediately.
As Matt Smallman put it: the bots will wait forever. The humans won't.
Regulatory pressure is also building. The Monetary Authority of Singapore has already issued guidance specifically addressing deepfake threats to financial institutions, with recommendations spanning biometric authentication, staff awareness, and incident response. It's an early signal of where regulators in other markets are heading.
"The conversation has moved from a question purely about impersonation and fraud to one about the efficiency and effectiveness of the contact center. We're no longer just talking to the CISO — we're talking to the people responsible for running the operation." — Brian Levin, Chief Customer Officer, Reality Defender
The scale of the problem is clear. The harder question is what to do about it.
The thread that ran through the entire conversation was this: if you don't know that agentic AI is in your call traffic, you can't do anything about it.
That sounds obvious. But most contact center application stacks weren't built to answer the question of whether a caller is human or synthetic. IVRs route on intent. Metadata — caller ID, ANI, carrier headers — tells you nothing reliable about whether a voice is real. Behavior-based approaches fall short because modern agentic callers are explicitly designed to behave like cooperative human callers.
What's missing is a caller-type signal: a reliable way to classify inbound traffic as human or synthetic before routing, authentication, or escalation decisions are made.
"If you don't know, you can't do anything about it. The right treatments will vary significantly — but the key to adaptation is detection." — Matt Smallman, Founder, SymNex Consulting
For that signal to be operationally useful, it has to arrive in real time, at the earliest point in the call flow.
By the time a call reaches your queue, you've already committed the capacity and cost. Detection at that stage may be informative. It's no longer actionable.
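To make that placement concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the function names, the labels, and the stub detector. It shows where the check sits, not a real API.

```python
# A minimal sketch of where caller-type detection sits in the call flow.
# All names are hypothetical; a real integration would stream live audio
# from the telephony platform to a detection service.

def detect_caller_type(first_seconds_of_audio: bytes) -> str:
    """Stand-in for a real-time detection call.

    Returns one of "manipulated", "suspicious", or "clear". This stub
    always answers "clear"; the point is the placement, not the model.
    """
    return "clear"

def on_ivr_entry(audio: bytes) -> str:
    # Classify at IVR entry, before the call consumes queue capacity.
    # A signal that arrives after queueing is informative, not actionable.
    return detect_caller_type(audio)

if __name__ == "__main__":
    print(on_ivr_entry(b"\x00" * 16000))  # "clear"
```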
Getting the detection signal is step one. What you do with it is a policy decision — and that's yours to make.
At Reality Defender, we return a classification — manipulated, suspicious, or clear — within seconds of audio capture, before routing decisions are made. What happens next depends on the type of caller and your organization's policy:
One of our customers routes all synthetic traffic to a pre-recorded line explaining that their policy is not to conduct business with AI agents, deliberately choosing a channel that prevents an AI agent from following up autonomously. Authorized agents are then selectively whitelisted.
That's one approach. Others are using detection to catch and observe: understanding which accounts agentic callers are accessing and for what purpose before committing to a harder response. Matt made the point that denying service too early tips off bad actors that you can detect them, which just accelerates their iteration.
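As a sketch of what that policy layer can look like in code, assuming the classification labels above; the routing targets and the allowlist are illustrative, not prescriptive:

```python
# A minimal sketch of a policy layer keyed on the detection signal.
# The classification labels follow the article ("manipulated",
# "suspicious", "clear"); routing targets and the allowlist are made up.

AUTHORIZED_AGENT_IDS = {"agent-vendor-123"}  # hypothetical allowlist

def route_call(classification: str, caller_id: str) -> str:
    """Return a routing decision for one inbound call."""
    if caller_id in AUTHORIZED_AGENT_IDS:
        return "agent_queue"              # selectively whitelisted agents
    if classification == "manipulated":
        return "prerecorded_policy_line"  # a channel an AI agent can't follow up on
    if classification == "suspicious":
        return "observe_and_log"          # catch and observe before a harder response
    return "normal_queue"

print(route_call("manipulated", "unknown-caller"))  # prerecorded_policy_line
```

The allowlist check runs first so that authorized agents bypass the synthetic-traffic treatment, mirroring the selective whitelisting described above.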
The right policy varies by industry, by caller type, and by how mature your detection capability is. But the starting point is always the same: get the signal first.
If you're a contact center, fraud, or security leader thinking about where to start, here's how we'd frame it:
The starting point is assuming it's already happening. If you're not seeing agentic traffic in your current data, that's more likely a visibility gap than an absence of the problem.
From there, map where live audio becomes available in your call flow. The earliest point — typically IVR entry — is where detection needs to operate, and where you'll get the most actionable signal.
Don't design your response before you have your baseline. Understand what share of your inbound traffic is synthetic, what type, and where in the call flow it's showing up. The data shapes the policy, not the other way around.
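As a sketch of that baselining step, with made-up records (the field names and call-flow stages are illustrative):

```python
# A minimal sketch of baselining synthetic traffic before setting policy.
# Each record is one call, tagged with its detection classification and
# the call-flow stage where the audio was captured. Data is illustrative.

from collections import Counter

calls = [
    {"classification": "clear", "stage": "ivr_entry"},
    {"classification": "manipulated", "stage": "ivr_entry"},
    {"classification": "suspicious", "stage": "agent_handoff"},
]

def synthetic_share(records) -> float:
    """Share of inbound calls flagged as anything other than clear."""
    if not records:
        return 0.0
    flagged = sum(1 for r in records if r["classification"] != "clear")
    return flagged / len(records)

# Break the baseline down by type and by where it shows up in the call flow.
by_type = Counter(r["classification"] for r in calls)
by_stage = Counter(r["stage"] for r in calls if r["classification"] != "clear")

print(f"synthetic share: {synthetic_share(calls):.0%}")  # 67%
print(dict(by_type))   # {'clear': 1, 'manipulated': 1, 'suspicious': 1}
print(dict(by_stage))  # {'ivr_entry': 1, 'agent_handoff': 1}
```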
Finally, build shared ownership now. Inbound AI traffic sits at the intersection of operations, platform engineering, fraud, and security. Detection works best when those teams have already agreed on how synthetic callers should be handled — before the volume forces the conversation.
We recorded the webinar, including Matt and Brian's live Q&A. Watch it on demand. And, if you're exploring how to identify and handle AI-generated callers in real time, we'd love to talk.