The Attack
In early 2024, a finance worker at the Hong Kong office of a multinational firm received a message to join an urgent video call. The meeting was supposedly called by the company's UK-based CFO to discuss a confidential transaction.
When the employee joined, they saw and heard the CFO, along with several other executives they recognized. The video quality was good. The voices sounded right. The supposed executives discussed a sensitive deal and instructed the employee to wire funds.
The employee initiated 15 separate wire transfers totaling US$25 million.
It was all fake. Every person on that call was an AI-generated deepfake, created using publicly available video from earnings calls and interviews.
Why It Worked
- Visual confirmation bias — They saw familiar faces and heard familiar voices
- Authority pressure — Orders appeared to come from the CFO
- Urgency and secrecy — The "confidential" framing discouraged the employee from verifying through normal channels
- No verification process — No out-of-band confirmation for large transfers
The lesson: In the AI era, seeing is no longer believing. Video and audio can be faked convincingly. Organizations need verification processes that don't rely on visual confirmation alone.
What Could Have Prevented This
- Out-of-band verification — Callback to a known number, not one provided in the request
- Multi-person authorization — Large transfers require multiple independent approvals (see the sketch after this list)
- Code words — Pre-established verification phrases
- Transaction limits and delays — Automatic holds on unusual transfers
- Security awareness training — Education about deepfake threats
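Several of these controls can be enforced in software rather than left to individual judgment. Below is a minimal sketch in Python of a release check for outgoing wires. Everything here is illustrative: the threshold, approver count, hold period, and names like WireTransfer and may_release are hypothetical, not any real payment system's API. The point is that the release decision is mechanical and never depends on what anyone saw or heard on a call.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative policy values; real thresholds would come from the
# organization's treasury and risk teams.
APPROVAL_THRESHOLD = 50_000        # transfers above this need extra controls
REQUIRED_APPROVERS = 2             # independent approvals for large transfers
HOLD_PERIOD = timedelta(hours=24)  # automatic delay on large transfers

@dataclass
class WireTransfer:
    amount: float
    requested_at: datetime
    out_of_band_verified: bool = False     # callback to a known number done
    approvers: set = field(default_factory=set)

def approve(transfer: WireTransfer, approver_id: str) -> None:
    # A set means duplicate approvals from one person don't count twice.
    transfer.approvers.add(approver_id)

def may_release(transfer: WireTransfer, now: datetime) -> bool:
    """Release funds only when every control is satisfied."""
    if transfer.amount <= APPROVAL_THRESHOLD:
        return True                                  # normal process applies
    if not transfer.out_of_band_verified:
        return False                                 # no callback confirmation
    if len(transfer.approvers) < REQUIRED_APPROVERS:
        return False                                 # not enough approvers
    if now - transfer.requested_at < HOLD_PERIOD:
        return False                                 # hold still in effect
    return True

# Example: a large transfer with one approver and no callback is blocked.
transfer = WireTransfer(amount=1_700_000, requested_at=datetime.now())
approve(transfer, "approver-1")
print(may_release(transfer, datetime.now()))  # False
```

Under a policy like this, the Hong Kong transfers would have stalled at the first gate: no out-of-band callback was ever made, so no amount of convincing video could have released the funds.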
Source: SensCy 2025 Threat Intelligence Report / CNN