The Attack

In early 2024, an employee at a multinational finance firm's Hong Kong office received a message requesting they join an urgent video conference. The meeting was supposedly called by the company's UK-based Chief Financial Officer to discuss a confidential transaction.

When the employee joined the call, they saw and heard the CFO — along with several other senior executives they recognized. The video quality was good. The voices sounded right. The executives discussed a sensitive business matter and instructed the employee to wire funds to specific accounts as part of the confidential deal.

The employee followed instructions and initiated 15 separate wire transfers totaling approximately $25 million USD.

It was all fake. Every person on that video call was an AI-generated deepfake, created using publicly available video and audio of the real executives. The attackers had scraped enough material from earnings calls, interviews, and corporate videos to train convincing synthetic versions.

$25,000,000

Lost to AI-generated deepfake impersonation in a single video call

How Deepfakes Work

Deepfake technology uses machine learning to create synthetic video and audio that mimics real people. To build a convincing deepfake, attackers primarily need enough video and audio footage of the target to train a model — exactly the kind of material public figures leave behind in earnings calls, interviews, and corporate videos.

The technology has advanced rapidly. What once required Hollywood-level resources is now accessible to sophisticated criminal groups with modest budgets.
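To make the mechanics concrete, here is a toy sketch of the classic face-swap architecture: a single shared encoder learns identity-independent features, and each person gets their own decoder trained to reconstruct their face. The "swap" is encoding person A's frame and decoding it with person B's decoder. This is a deliberately simplified linear stand-in with made-up dimensions and random data; real systems use deep convolutional networks trained on thousands of frames.

```python
import numpy as np

# Toy illustration of the shared-encoder / per-identity-decoder design
# behind classic face-swap deepfakes. Linear maps stand in for neural
# networks; random vectors stand in for video frames.

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8            # "frame" size and bottleneck size (illustrative)

encoder   = rng.normal(size=(LATENT, DIM)) * 0.1   # shared across identities
decoder_a = rng.normal(size=(DIM, LATENT)) * 0.1   # reconstructs person A
decoder_b = rng.normal(size=(DIM, LATENT)) * 0.1   # reconstructs person B

def train_step(frames, decoder, lr=1e-2):
    """One reconstruction step: encode, decode, and nudge the decoder
    toward reproducing its own person's frames (encoder held fixed
    here for brevity)."""
    latent = frames @ encoder.T            # (n, LATENT)
    recon  = latent @ decoder.T            # (n, DIM)
    grad   = (recon - frames).T @ latent / len(frames)
    return decoder - lr * grad

frames_a = rng.normal(size=(32, DIM))      # stand-ins for person A's footage
frames_b = rng.normal(size=(32, DIM))      # stand-ins for person B's footage

for _ in range(200):
    decoder_a = train_step(frames_a, decoder_a)
    decoder_b = train_step(frames_b, decoder_b)

# The "deepfake" step: person A's frame rendered through B's decoder,
# producing B's appearance driven by A's input.
fake_frame = (frames_a[0] @ encoder.T) @ decoder_b.T
print(fake_frame.shape)
```

The key design point is the shared encoder: because both decoders read the same latent space, features extracted from one person's footage can drive the other person's likeness.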

Why This Attack Worked

The attack succeeded because every cue the employee relied on was counterfeit: familiar faces, familiar voices, the authority of senior executives, and the urgency and secrecy of a "confidential" deal. The lesson: in the AI era, seeing is no longer believing. Video and audio can be faked convincingly, so organizations need verification processes that don't rely on visual or audio confirmation alone.

What Could Have Prevented This
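The core safeguard is out-of-band verification: any high-value payment instruction must be confirmed through an independent channel the requester did not supply, such as a callback to a number from the corporate directory. Below is a minimal sketch of such a policy check. The thresholds, channel names, and data model are hypothetical, chosen only to illustrate the rule; they are not a real product API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical out-of-band verification policy: instructions that
# arrive over video, email, or chat must be confirmed via a separate,
# attacker-independent channel before funds move. All names and
# thresholds below are illustrative.

APPROVED_CHANNELS = {"directory_callback", "in_person"}
CALLBACK_THRESHOLD_USD = 10_000

@dataclass
class PaymentRequest:
    amount_usd: float
    request_channel: str              # how the instruction arrived
    confirmed_via: Optional[str]      # independent confirmation, if any

def may_execute(req: PaymentRequest) -> bool:
    """Allow a transfer only if it is low-value, or confirmed through
    an approved channel different from the one the request came in on."""
    if req.amount_usd < CALLBACK_THRESHOLD_USD:
        return True
    if req.confirmed_via is None:
        return False
    return (req.confirmed_via in APPROVED_CHANNELS
            and req.confirmed_via != req.request_channel)

# The Hong Kong scenario: a multimillion-dollar instruction delivered
# over a video call and never independently confirmed is blocked.
print(may_execute(PaymentRequest(25_000_000, "video_call", None)))
```

Under this policy, the deepfake video call alone could never have authorized the transfers; the attackers would also have had to intercept a callback to a directory-listed number, a far harder attack.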

The Bigger Picture

This isn't an isolated incident. Deepfake-enabled fraud is growing rapidly as the tools become cheaper and easier to use.

How RMA Helps

Source

This case study is based on reporting by CNN and documented in the SensCy 2025 Threat Intelligence Report on AI-enabled fraud.

Read more about the SensCy 2025 Report →

Is your organization ready for AI-powered threats?

Free 30-minute call. We'll discuss your current verification procedures and identify gaps before attackers do.

Schedule Assessment