Insight

Reality Defender Submits Comments to FINRA on Deepfakes

Ben Colman

Co-Founder and CEO

Today's financial services industry faces an unprecedented threat that traditional security measures weren't designed to handle: AI-powered deepfakes. That is why we felt compelled to submit formal comments on FINRA's Regulatory Notice 25-07, which seeks input on modernizing rules for the digital workplace.

(You can read our comments here.)

The stakes could not be higher. Just last year, FINRA fined SoFi Securities $1.1 million for fraud prevention failures that enabled criminals to create 800 fraudulent accounts and steal millions. If basic identity theft can cause such damage, imagine the havoc that sophisticated deepfake impersonations could wreak.

The Reality of the Threat

Every day, FINRA members face increasingly sophisticated attacks. Criminals now use AI-generated content to bypass biometric verification systems during customer onboarding, impersonate registered representatives in client communications, authorize fraudulent transactions through deepfaked executive calls, and manipulate vulnerable investors by impersonating trusted contacts.

The numbers are sobering: AI-enabled fraud losses in the U.S. financial sector could reach $25 billion by 2027. FINRA members already report difficulty identifying customers, and a significant amount of fraud occurs during onboarding — precisely where deepfakes prove most effective.

Our Call to Action

Our comment letter is not just about identifying problems; it's about providing solutions. To that end, we recommended specific amendments to FINRA rules, including:

  • Rule 3110 (Supervision): Requiring supervisory procedures that can detect AI-generated content in communications
  • Rule 2165 (Financial Exploitation): Recognizing deepfake impersonation as a form of financial exploitation
  • Rule 4512 (Customer Account Information): Establishing authentication protocols for digital communications
  • Rule 3310 (Anti-Money Laundering): Requiring technological measures to detect synthetic media during verification

Building Trust in an AI-Powered World

At Reality Defender, our mission is to enable trust in an AI-powered world. We believe financial institutions shouldn't have to choose between innovation and security. This is why we developed patented real-time multimodal detection technology that integrates seamlessly with existing systems.

FINRA's modernization initiative presents a critical opportunity to strengthen the regulatory framework against deepfake threats while enabling members to deploy necessary detection technologies. By explicitly recognizing real-time deepfake detection as a reasonable and necessary component of fraud prevention programs, FINRA can provide the clarity firms need to protect their clients effectively.

The future of financial services depends on our ability to adapt to technological threats as quickly as they emerge. We're committed to working with regulators, financial institutions, and the broader industry to ensure that trust remains the foundation of every financial interaction — even and especially in an age of synthetic media.
