$25 Million Deepfake Video Call Fraud
An employee joined a video call with what appeared to be the company's CFO and senior executives. Every other participant on the call was an AI-generated deepfake. The company wired $25 million before discovering the fraud.
The Attack
In early 2024, a finance employee at a multinational firm's Hong Kong office received a message requesting they join an urgent video conference. The meeting was supposedly called by the company's UK-based Chief Financial Officer to discuss a confidential transaction.
When the employee joined the call, they saw and heard the CFO — along with several other senior executives they recognized. The video quality was good. The voices sounded right. The executives discussed a sensitive business matter and instructed the employee to wire funds to specific accounts as part of the confidential deal.
The employee followed the instructions and initiated 15 separate wire transfers totaling approximately US$25 million.
It was all fake. Every other person on that video call was an AI-generated deepfake, created using publicly available video and audio of the real executives. The attackers had scraped enough material from earnings calls, interviews, and corporate videos to train convincing synthetic versions.
$25,000,000
Lost to AI-generated deepfake impersonation in a single video call
How Deepfakes Work
Deepfake technology uses machine learning to create synthetic video and audio that mimics real people. To create a convincing deepfake, attackers need:
- Training material — Videos and audio recordings of the target. These are often publicly available from corporate presentations, interviews, social media, and earnings calls.
- Processing power — GPUs to train the model. Cloud services make this accessible to anyone.
- Real-time rendering — Software that generates the fake video and audio on the fly during a live call.
The technology has advanced rapidly. What once required Hollywood-level resources is now accessible to sophisticated criminal groups with modest budgets.
Why This Attack Worked
- Visual confirmation bias — The employee saw familiar faces and heard familiar voices. This bypassed normal skepticism.
- Authority pressure — Requests appeared to come from the CFO and senior leadership. Few employees question direct orders from executives.
- Urgency and secrecy — The "confidential" nature of the transaction discouraged the employee from seeking verification through normal channels.
- Lack of verification protocols — No out-of-band confirmation process existed for large wire transfers.
The lesson: In the AI era, seeing is no longer believing. Video and audio can be faked convincingly. Organizations need verification processes that don't rely on visual or audio confirmation alone.
What Could Have Prevented This
- Out-of-band verification — Any request for large transfers should be verified through a separate, pre-established channel (e.g., callback to a known phone number, not one provided in the request).
- Multi-person authorization — Large transactions should require approval from multiple people who independently verify the request.
- Code words or challenge phrases — Pre-established verification phrases that attackers wouldn't know.
- Transaction limits and delays — Automatic holds on large or unusual transfers that allow time for verification (a sketch of how these controls can fit together appears after this list).
- Security awareness training — Education about deepfake threats and social engineering tactics.
- AI-detection tools — Emerging technology that can identify synthetic video and audio.
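To make the process controls above concrete, here is a minimal sketch of a payment-release check that combines out-of-band verification, multi-person authorization, and an automatic hold. It is an illustration only; the names (`TransferRequest`, `APPROVAL_THRESHOLD`, `KNOWN_CONTACTS`) and thresholds are hypothetical and not drawn from any specific system or from the incident described here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical policy values -- a real deployment would take these from
# the organization's own payment policy.
APPROVAL_THRESHOLD = 100_000        # transfers above this need extra controls
REQUIRED_APPROVERS = 2              # independent sign-offs for large transfers
HOLD_PERIOD = timedelta(hours=24)   # automatic delay before funds move

# Pre-established contact directory for out-of-band callbacks.
# Numbers come from internal records, never from the request itself.
KNOWN_CONTACTS = {"cfo": "+44-20-0000-0000"}

@dataclass
class TransferRequest:
    requested_by: str                # who claims to have ordered the transfer
    amount: float
    destination_account: str
    created_at: datetime
    approvals: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed via a KNOWN_CONTACTS number

def add_approval(request: TransferRequest, approver: str) -> None:
    """Record an independent approval; requesters cannot approve their own transfers."""
    if approver != request.requested_by:
        request.approvals.add(approver)

def record_callback(request: TransferRequest, contact_key: str) -> None:
    """Mark the request verified only if the callback used a pre-registered contact."""
    if contact_key in KNOWN_CONTACTS:
        request.callback_verified = True

def may_release(request: TransferRequest, now: datetime) -> bool:
    """Release funds only when every control for a large transfer is satisfied."""
    if request.amount < APPROVAL_THRESHOLD:
        return True  # small transfers follow the normal process
    return (
        len(request.approvals) >= REQUIRED_APPROVERS    # multi-person authorization
        and request.callback_verified                   # out-of-band verification
        and now - request.created_at >= HOLD_PERIOD     # automatic hold / delay
    )

# Example: an "urgent" multi-million-dollar request stays blocked until
# approvals, the callback, and the hold period have all been satisfied.
req = TransferRequest("cfo", 2_500_000, "ACCT-001", created_at=datetime.now())
add_approval(req, "treasury_manager")
add_approval(req, "controller")
record_callback(req, "cfo")
print(may_release(req, datetime.now()))                        # False: hold period not over
print(may_release(req, datetime.now() + timedelta(hours=25)))  # True: all controls satisfied
```

The design point is that none of these checks depends on how convincing the requester looks or sounds on a call: release hinges on independent approvals, a callback to a pre-registered number, and a mandatory waiting period.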
The Bigger Picture
This isn't an isolated incident. Deepfake-enabled fraud is growing rapidly:
- Deepfake fraud attempts are projected to reach 8 million incidents in 2025
- 83% of SMBs believe AI has raised their threat level
- 15% of employees paste sensitive company data into AI tools without oversight
- Business Email Compromise (BEC) has generated $55 billion in global losses — deepfakes are the next evolution
How RMA Helps
- Security awareness training — We train employees to recognize social engineering, including AI-powered attacks. Our training includes deepfake examples and verification protocols.
- Policy development — We help create and implement verification procedures for sensitive transactions that don't rely on visual confirmation.
- Incident response planning — Procedures for responding when fraud is suspected, including escalation paths and law enforcement coordination.
- Executive protection — Guidance on limiting publicly available video and audio that could be used to create deepfakes.
Source
This case study is based on reporting by CNN and documented in the SensCy 2025 Threat Intelligence Report on AI-enabled fraud.
Is your organization ready for AI-powered threats?
Free 30-minute call. We'll discuss your current verification procedures and identify gaps before attackers do.
Schedule Assessment