More fake applicants are trying to trick HR, thanks to the rise of deepfakes

19/5/2025


In my decades of working in cybersecurity, I have never seen a threat quite like the one we face today. Anyone’s image, likeness, and voice can be replicated photorealistically, cheaply, and quickly. Malicious actors are using this novel technology to weaponize our personhood in attacks against our organizations, livelihoods, and loved ones. As generative AI advances and the line between real and synthetic content blurs further, the risk to companies, governments, and everyday people grows.

Businesses are especially vulnerable to the rise of applicant fraud, in which a phony candidate seeks to be interviewed or hired in order to breach an organization for financial gain or even nation-state espionage. Gartner predicts that by 2028, 25% of job candidates globally will be fake, driven largely by AI-generated profiles. Recruiters are already encountering this mounting threat, noticing unnatural movements when speaking with candidates over videoconference.

For many companies, the proverbial front door is wide open to these attacks: the HR interview process has no adequate protection against deepfake candidates or “look-alike” candidate swaps. It’s no longer enough to protect against vulnerabilities in our tech stacks and internal infrastructure. We must take security a step further to address today’s uncharted, AI-driven threat landscape, protecting our people and organizations from fraud and extortion before trust erodes beyond repair.

Fraud isn’t new, but it is taking a new form

Here’s the thing: Synthetic identity fraud happens in the real world every day, and has for years. Think of the financial industry, where stolen Social Security numbers and other government identifiers allow fraudsters to open and close accounts in other people’s names and ransack savings and retirement funds.