Insight
Brian Levin
CRO
The Monetary Authority of Singapore (MAS) released a comprehensive analysis that should concern every organization. The September 2025 report reveals how deepfake technology is actively compromising financial institutions, from defeated biometric systems to fraud operations scaling at unprecedented speed, as AI-generated media exploits critical vulnerabilities across the financial sector.
The scale is alarming, and the threat landscape is accelerating. The MAS report details three distinct threat vectors that demand immediate attention — each backed by real incidents that have already cost institutions millions and continue to evolve in complexity. From Singapore to Hong Kong to Indonesia, financial institutions are experiencing successful deepfake breaches across authentication systems, executive communications, and brand reputation.
The findings align with what we have observed at Reality Defender: deepfake attacks are growing in both sophistication and frequency, targeting everything from customer onboarding to video conferencing to public-facing content. What's particularly concerning is how quickly the barrier to entry is falling: the window for proactive defense is closing as these attacks become easier to execute and harder to detect without specialized infrastructure.
Fraudsters are using AI-generated deepfakes to bypass biometric authentication systems during both account creation and login attempts. The MAS report documents multiple successful attacks.
In August 2024, an Indonesian financial institution discovered fraudsters using virtual camera technology to present AI-generated deepfake photos to their digital KYC process for loan applications. The synthetic images fooled facial recognition systems by making pre-prepared media appear as live, legitimate inputs.
That same year in Vietnam and Thailand, attackers deployed malware that extracted victims' photos and videos from mobile phones, along with banking credentials. These stolen assets were used to create deepfakes that successfully defeated facial biometric authentication at multiple banks.
The implications extend beyond consumer banking. In August 2023, Hong Kong institutions approved $25,000 in fraudulent loans after attackers used GenAI tools to create doctored images for loan applications targeting financial institutions and moneylenders.
MAS recommends implementing liveness detection techniques that analyze motion, texture, and 3D depth during authentication. Organizations should prompt users to perform specific actions during verification rather than accepting static images. For non-facial biometrics like fingerprint or palm vein recognition, detection techniques must be tailored to identify synthetic reproductions of those specific characteristics.
Deepfake technology amplifies traditional social engineering by creating hyper-realistic impersonations of executives, colleagues, and trusted contacts during video calls and voice communications.
The most striking example comes from Hong Kong in January 2024, where fraudsters targeted an employee in a multinational company's finance department. Using publicly available video footage, attackers created deepfake videos of the CFO and colleagues for a video conference. The employee was convinced to transfer $25 million to the fraudsters.
Singapore itself wasn't spared. In March 2025, a company finance director was tricked into transferring over $499,000 after attending a fake Zoom video conference featuring an impersonation of the company's CEO and other colleagues. The victim only became suspicious when a second transfer request for $1.4 million was made. Fortunately, collaboration between Singapore and Hong Kong authorities recovered the initial funds.
Beyond financial fraud, deepfakes are infiltrating recruitment processes. In July 2024, a firm unknowingly hired a North Korean hacker who used a stolen U.S. identity and an AI-generated photograph to pass multiple interviews and background checks. Upon receiving a company workstation, the hacker began loading malware before security teams detected the activity.
MAS emphasizes that organizations must implement multi-factor authentication for high-privilege accounts and high-risk activities, including wire transfers and access to sensitive data. When video or audio-based authentication is used in sensitive business processes, additional verification mechanisms like code words and one-time passwords become critical. Separation of duties for critical financial transactions provides another essential defense layer.
Deepfakes enable coordinated campaigns that can trigger immediate market reactions before institutions even know the content exists. In May 2023, a deepfake image of an explosion at the Pentagon circulated on social media. Despite being entirely fabricated, it briefly triggered panic selling and a noticeable dip in the S&P 500.
Scams targeting public figures are increasingly common. In March 2025, Singapore's Prime Minister Lawrence Wong warned about scams using deepfakes of him to promote cryptocurrency and other fraudulent schemes. Similar attacks targeted Senior Minister Lee Hsien Loong in December 2023.
MAS recommends implementing monitoring tools to detect deepfake-based brand abuse and impersonation attempts across digital channels, including social media, websites, video platforms, and news sources. These tools should incorporate anomaly detection and deepfake-specific algorithms to identify inconsistencies like manipulated visuals, synthetic audio, or misleading narratives.
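The monitoring pipeline MAS describes reduces to a triage rule: does the content mention the brand, and does a synthetic-media classifier score it as likely manipulated? The sketch below assumes a hypothetical `media_score` field standing in for that classifier's output; the data model and 0.8 threshold are illustrative, not part of any real product.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    source: str         # e.g. "social_media", "video_platform", "news"
    text: str           # caption, transcript, or surrounding copy
    media_score: float  # 0..1 output of a deepfake classifier (stubbed here)

def flag_brand_abuse(items: list, brand_terms: list, threshold: float = 0.8) -> list:
    """Flag items that mention the brand and carry a high synthetic-media score."""
    flagged = []
    for item in items:
        mentions_brand = any(t.lower() in item.text.lower() for t in brand_terms)
        if mentions_brand and item.media_score >= threshold:
            flagged.append(item)
    return flagged
```

Combining a cheap text filter with a model-based media score keeps the expensive deepfake analysis focused on content that actually implicates the brand, which matters when monitoring spans social media, websites, video platforms, and news sources at once.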
The MAS report makes one thing clear: deepfake defense cannot be reactive. Organizations must implement detection infrastructure now, integrate it into authentication workflows and communication systems, and establish incident response protocols before attacks occur.
Reality Defender's multimodal detection platform addresses all three threat vectors identified by MAS, providing real-time analysis across audio, video, and image content. Our API enables financial institutions to build deepfake detection into existing systems, implementing the technical controls MAS recommends as foundational security infrastructure.
Just as antivirus and spam filtering became foundational security layers decades ago, deepfake detection must now become part of your core infrastructure. The financial institutions documented in the MAS report learned this lesson after multimillion-dollar losses. Your organization can learn it before the first attack succeeds.
You can read the report in full here.
Reality Defender secures communication channels and digital content against AI-generated threats through patented multimodal detection. Learn how organizations implement deepfake defense at scale.