Meet the deepfake fraudster who applied to work at a deepfake specialist

21 July 2025

Last year, security company KnowBe4 helped spark a wave of interest in fraudulent workers when it revealed, in extensive detail, how it had uncovered a rogue North Korean operative the company had unwittingly hired.

The rogue North Korean IT worker scam has piqued the interest of security experts and human resources (HR) professionals across the UK, the US and around the world. And it seems to be spreading, at least according to Sandra Joyce, vice-president of Google Threat Intelligence, who recently warned that the scam was going global.

“It’s not just about the US anymore. We’re seeing it expand to Europe and, in some cases, we’re seeing some real abuses,” she said, speaking to reporters on the fringes of Google Cloud Next back in April 2025. “We saw one individual who was running nine different personas and providing references for each of the personas for the others.”

This expansion in scope is being accompanied by an expansion in tactics, with the fraudulent North Korean workers also observed running extortion operations on top of drawing their salaries to fill the isolated regime’s coffers, which is usually their most basic objective.

But before the North Koreans, or whoever else may be seeking to defraud a company in this way, can begin to do so, they must first get hired. To aid in this, fraudsters and other threat actors are now turning to generative artificial intelligence (GenAI), using large language models (LLMs) and deepfake videos to create plausible candidates who can easily slip through a recruiter’s net.

Meet Pindrop’s deepfake candidate

In many cases they are successful, or almost successful, as Pindrop, a supplier of voice security and fraud detection solutions, discovered when its recruiters found themselves face-to-face with a deepfake candidate who “applied” not just once, but twice.

According to Pindrop, one job posting alone drew more than 800 applications in a matter of days. When the firm applied deeper analysis to 300 of the candidate profiles, it found that more than 100 were entirely fabricated identities, many using AI-generated resumes, manipulated credentials and even deepfake technology to simulate live interviews.

The Pindrop team put its Pindrop Pulse deepfake detection technology to use in an interview with an “individual” it has since given the pseudonym Ivan X. Ivan applied for a job at Pindrop for which, at first glance, he seemed a great fit.

However, during Ivan’s first interview, the Pindrop Pulse software identified three red flags that told the team immediately it was in danger of hiring a deepfake candidate.

First, Ivan’s facial movements seemed unnatural and slightly out of sync with the words he was saying, likely indicating the video had been manipulated. Second, the interview was dogged by audio-visual lag, and Ivan’s voice occasionally dropped out or failed to align with his lip movements. Finally, when the interviewer asked an unexpected technical question, Pindrop Pulse identified an “unnatural” pause, as if the system was processing a response before playing it back.
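
For the curious, the lip-sync signal described here can be approximated with a simple cross-correlation between an audio loudness envelope and a per-frame mouth-openness score from a face tracker. The Python sketch below is a toy illustration under those assumptions, not Pindrop Pulse’s actual method; the estimate_av_lag function, its inputs and the 25 frames-per-second toy data are all hypothetical.

import numpy as np

# Toy sketch of an audio-visual sync check. Assumes two pre-extracted
# signals sampled at the video frame rate: an audio loudness envelope
# and a per-frame mouth-openness score. Not Pindrop Pulse's method.
def estimate_av_lag(audio_env, mouth_open, fps=25.0, max_lag_s=1.0):
    """Return the lag in seconds at which mouth movement best matches
    speech loudness; a large offset suggests the streams are out of sync."""
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
    m = (mouth_open - mouth_open.mean()) / (mouth_open.std() + 1e-9)
    max_lag = int(max_lag_s * fps)
    lags = list(range(-max_lag, max_lag + 1))
    # Slide one signal over the other within a +/- 1 second window.
    scores = [np.dot(a[:len(a) - k], m[k:]) if k >= 0
              else np.dot(a[-k:], m[:len(m) + k])
              for k in lags]
    return lags[int(np.argmax(scores))] / fps

# Usage on fabricated data: a 10-second clip in which the mouth signal
# trails the audio by five frames (0.2s at 25 frames per second).
rng = np.random.default_rng(0)
audio = rng.random(250)
mouth = np.roll(audio, 5)
print(f"estimated lag: {estimate_av_lag(audio, mouth):.2f}s")  # ~0.20s

A real pipeline would derive the two signals from a voice activity detector and a facial landmark model; a large or drifting offset between them is exactly the kind of desynchronisation flagged above.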

Vijay Balasubramaniyan, Pindrop CEO, says: “When this happened, the crazy thing is the recruiter was psyched because she got an alert that she was talking to a deepfake. The deepfake candidate obviously didn’t know it, but the position ‘he’ was applying for was not just a software engineer, it was a software engineer on the deepfake detection team, which is just super meta.”

Seconds out… round two

Pindrop had had a lucky escape. However, eight days later, Ivan resurfaced with a new application that arrived through a different recruiter. Curiosity aroused, the team decided to let him get through to the interview stage.

The second time round, it was immediately obvious that the candidate joining the interview was visually a completely different person, but with the same identity and credentials as the first. Within minutes, Ivan X 2.0 encountered connection issues, dropped the call and rejoined, in what was likely an attempt to recalibrate the deepfake software. When the interview finally proceeded, the same issues as before cropped up, although the deepfake itself seemed to have improved slightly.

This backed up the team’s suspicion that it was dealing not with an isolated incident, but with a deliberate, coordinated attack on Pindrop’s hiring process using deepfake technology.

Balasubramaniyan says he has since tasked many of his hiring team with interviewing deepfake candidates on the side, and he is genuinely enthusiastic about testing the company’s rapidly developing deepfake detection technology on them.

“The cool thing about Pindrop is we pull on a thread and we go deep – that’s how our products got created – so we’ve gone deep down this rabbit hole and we’re now seeing clearly documented proxy relays from North Korea. And we’ve interviewed all of them – we’re now setting up honeypots to interview them,” he says.

We are not prepared for what’s coming

Pindrop’s experience makes for a funny story, but according to Matt Moynahan, CEO of GetReal Security, another startup making waves in the expanding field of deepfake detection, it’s deadly serious. He is incredibly worried about what’s coming and tells Computer Weekly that we have no idea how bad this problem might get.

“The history of security is all about impersonation and always has been,” he says. “That’s been going on forever. But what’s happening now is you’ve got these incredibly sophisticated capabilities.

“When you think about this world with GenAI where I can steal not just your credentials but your name, image and likeness, what’s the difference between somebody who you think you know and see every day and turns against you, versus an adversary who’s got your credentials and your name, image and likeness on the Zoom call that you think is real and you trust, and they turn against you? It’s almost worse.

“So, when you think about this notion of trickery and impersonation, it’s going to be out of control. I don’t know where this thing stops. It’s a complete mess,” he says. “And it’s not just North Koreans – they’re the ones who have been caught.”

Balasubramaniyan adds: “Fraud is a percentage-driven game, and even the best fraud campaigns run at a 0.1% rate of success. One in a thousand work. But the point is when they work, they work big. Sometimes you win the jackpot – certainly enough for someone in a developing country to make a very nice living out of this. And that’s the point – what deepfake AI technology allows these fraudsters to do is scale the operation. We have seen a lot of candidate fraud, and I don’t personally think it’s because we’re special, I think it’s because we’re looking.”