The Deepfake Trick That Used to Work—And Why It’s Failing Now
In this episode of GetReal Talks, Hany Farid, Co-Founder and Chief Science Officer at GetReal Security and Professor at UC Berkeley, and Emmanuelle Saliba, Chief Investigative Officer at GetReal Security, explain why the old advice fails. The "put your hand over your face" test once broke live face-swap attacks. Today's deepfake engines are occlusion-aware: they track faces in 3D, infer hidden features, adapt to lighting and accessories, and keep the mask intact. Visual cues that worked months ago often fail now, and the shelf life of any manual tip keeps shrinking.
What Changed
- Occlusion-aware tracking: models infer mouth, eyes, and brows even when partially covered.
- 3D facial models and lighting adaptation: more natural texture, fewer glitches.
- Commodity hardware: modern laptops and GPUs generate high-quality fakes in real time.
- Noise and scale: in a 10-person grid, tiny artifacts vanish, and humans cannot reliably spot them.
Why This Matters for Security
- Social engineering is supercharged: familiar faces and voices raise compliance pressure in crises, wire transfers, password resets, and late-night "urgent" calls.
- Hiring fraud is rising: attackers can impersonate remote candidates or employees and bypass basic checks.
- One mistake is enough: a single employee believing an impostor can trigger loss, exposure, or data theft.
The New Playbook
- Assume visual inspection is insufficient: start with forensic detection, then review what the tools flag.
- Verify identities continuously, not just at login: learn the real person's voice, movement, camera, and room acoustics to detect anomalies over time.
- Harden high-risk workflows: approvals, payments, and access resets need verified calls, strong step-up checks, and out-of-band confirmations.
- Train for modern threats: retire outdated tips and teach response to real-time deepfake scenarios.
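To make the out-of-band confirmation idea concrete, here is a minimal sketch of how a high-risk workflow might gate an approval on a one-time code delivered over a separate, pre-registered channel rather than the channel the request arrived on. All class and method names here are illustrative assumptions, not a real API or the approach GetReal Security ships; the point is only the shape of the check: never act on the video call alone.

```python
import hmac
import secrets

class OutOfBandVerifier:
    """Hypothetical sketch: gate a high-risk action (e.g. a wire transfer
    requested on a video call) behind a one-time code sent over a second,
    pre-enrolled channel."""

    def __init__(self):
        self._pending = {}  # request_id -> expected one-time code

    def start(self, request_id: str) -> str:
        """Generate a one-time 6-digit code. In practice this would be
        delivered out-of-band (SMS, push to an enrolled device), never
        echoed back over the channel the request came in on."""
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[request_id] = code
        return code

    def confirm(self, request_id: str, submitted: str) -> bool:
        """Approve only if the submitted code matches; each code is
        single-use, so a wrong guess consumes it."""
        expected = self._pending.pop(request_id, None)
        if expected is None:
            return False
        # Constant-time comparison avoids leaking the code via timing.
        return hmac.compare_digest(expected, submitted)

verifier = OutOfBandVerifier()
code = verifier.start("wire-2041")
print(verifier.confirm("wire-2041", code))   # first, matching attempt succeeds
print(verifier.confirm("wire-2041", code))   # code is single-use: now fails
```

The single-use, constant-time design reflects the article's point that one mistake is enough: even if an impostor is convincing on camera, the workflow still requires proof of control over a channel the real person enrolled in advance.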