Why Process Controls Alone Won’t Protect IT Service Desks Against Social Engineering and Deepfakes

10-minute read

Recent trends in cybercrime have forced a new responsibility upon IT Service Desk staff. They are now required to authenticate reality during live voice and video interactions with employees and customers. This means determining whether the voice they hear or the face they see actually belongs to the person making an identity claim on the other end of the call.

This was not a real-world concern 10 years ago. But today, generative AI has advanced to the point where most humans can’t consistently distinguish real from AI-generated voices, and tools like Kling’s motion-controlled deepfakes show that AI-generated video is approaching the same level of realism.

These capabilities aren’t theoretical. The FBI has warned that adversaries are already using AI-powered impersonation of senior U.S. officials in vishing (voice phishing) campaigns, deploying voice calls and voice memos to deceive targets into acting against their own or their organization’s interests.

Adding to the challenge, IT help desk and customer support staff are primarily assessed and incentivized based on caller satisfaction and time-to-resolution, metrics that don’t lend themselves to deliberation and scrutiny in interactions with callers.

In the past, organizations have counted on multi-factor authentication (MFA), biometrics, caller ID, training, scripts, escalation, and policies to prevent social engineering of the help desk. But when employee and customer identity can be impersonated at scale, those approaches are no longer sufficient.

IT help desk and customer support staff need additional support in a remote-first work world where AI has helped adversaries become more convincing in their schemes. Just as adversaries can quickly and inexpensively scale their deception operations, IT help desks need technological support to identify and stop that deception at the same pace.

Scattered Spider and the Weaponization of IT Help Desks

Recent examples of cybercrime groups targeting IT help desks directly with voice-based social engineering attacks demonstrate the impact of this shift. 

In a complaint filed late last year with the United States District Court for the District of New Jersey, the FBI stated that the cyber threat group variously referred to as Scattered Spider, Octo Tempest, UNC3944, and Oktapus had participated in 120 network intrusions, collected $155 million in ransom payments, and caused millions of dollars in damages to enterprise victims.

The complaint details eight incidents in which the cybercriminal group gained unauthorized access to victim companies between June and November 2023. At least two of those incidents, involving victim companies six and seven, saw the criminals use social engineering to contact the victim companies’ help desks and convince a representative to reset the password of another user at the company. Once completed, the criminal(s) were able to exfiltrate data from the compromised networks. The same point of initial access was used in a ninth incident in January 2025, which targeted the U.S. Courts’ IT help desk.

The FBI also updated a public service announcement in July 2025 explaining that Scattered Spider operatives both impersonate IT help desk staff when targeting employees and pose as employees when calling into the IT service desk. In some cases, they pose as IT to trick employees into divulging credentials or sensitive information; in others, they impersonate employees to convince help desk staff to reset passwords or transfer multi-factor authentication to attacker-controlled devices. To support these efforts, the group conducts extensive reconnaissance on employees via platforms like LinkedIn.

Gaps in Many Organizations’ IT Service Desk Security

What’s missing from IT help desk security posture is not diligence, but verification methods that operate beyond human perception, which deepfakes are specifically designed to defeat. Help desk staff are asked to judge identity using knowledge-based questions (information attackers can gather in advance), credentials that can be stolen, multi-factor authentication susceptible to person-in-the-middle attacks, or biometric verification such as voice that is now vulnerable to AI-generated impersonation.

Each of these controls is bypassed by attackers at scale. Expecting representatives to detect AI manipulation in real time is unrealistic. Off-loading identity verification and deepfake detection to purpose-built technology shifts the burden off the human. This improves both security and caller experience by reducing friction for legitimate users while applying zero-trust principles to the human layer, where identity is assessed and decisions are made in digital interactions.

This challenge extends beyond IT service desks to the broader human capital supply chain, which we explore in more depth in our Securing the Human Capital Supply Chain whitepaper.

Questions to Ask as You Assess Your Own Exposure

To begin to understand where your IT service desk may be vulnerable to Scattered-Spider-like attacks, explore the following:

Which IT service desk workflows rely on a human alone to determine whether the caller is who they claim to be?

Be sure to include situations involving time pressure, authority pressure (e.g., a CEO demanding access), or particularly frustrated customers.

What high-risk service desk actions warrant independent, auditable identity verification and deepfake detection?

Password resets, multi-factor authentication re-enrollment, new device enrollment, and access restoration can all become initial access points for breaches.

How does your culture influence a representative’s confidence in slowing down or questioning urgency and authority expressed by a caller?

Are agents incentivized only to serve customers quickly and satisfactorily? Or are they also rewarded for security and caution?
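The policy thinking behind these questions can be sketched as a simple gate: high-risk actions proceed only when independent checks pass, and everything else escalates. This is a hypothetical illustration with made-up action names and signal fields, not a description of any vendor’s implementation:

```python
# Hypothetical sketch: a policy gate that routes high-risk service desk
# actions to escalation unless independent verification checks pass.
# Action names and signal fields are illustrative assumptions.

from dataclasses import dataclass

HIGH_RISK_ACTIONS = {
    "password_reset",
    "mfa_reenrollment",
    "new_device_enrollment",
    "access_restoration",
}

@dataclass
class CallContext:
    action: str
    identity_verified: bool       # e.g., outcome of an out-of-band identity check
    deepfake_screen_passed: bool  # e.g., outcome of media-authenticity analysis

def requires_escalation(ctx: CallContext) -> bool:
    """Return True if the request should be escalated rather than handled live."""
    if ctx.action not in HIGH_RISK_ACTIONS:
        return False  # low-risk requests follow the normal workflow
    # High-risk actions proceed only when both independent checks pass.
    return not (ctx.identity_verified and ctx.deepfake_screen_passed)
```

The point of the sketch is that the decision to slow down is made by policy, not by an agent’s judgment under pressure: a password reset with a failed deepfake screen escalates automatically, regardless of how convincing or insistent the caller is.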

Why the “Human Layer” Needs Its Own Security Controls

Social engineering, deepfakes, and recent Scattered Spider tradecraft make one thing clear: knowledge-based answers, MFA codes, or even a familiar voice or face are no longer sufficient proof of a caller’s identity. This is not a question of IT service desk staff competence. They’re up against continuously improving tools designed to scale deception and target existing verification processes, all engineered to exploit human perception and thinking under pressure. Training and process remain critical aspects of any security program, but this is no longer a fair fight. Attackers can fabricate identity faster and more convincingly than service desk staff can question it.

Organizations that adapt to this new environment will equip staff with technological support that verifies identity independently of human perception. Deepfake detection becomes a core requirement, as does automated policy enforcement to rapidly filter out fraudulent callers and ensure time and attention are spent serving legitimate users.

To see how GetReal Security’s Digital Trust and Authenticity Platform (DTAP) automates deepfake detection and identity verification for IT service desks, contact us to schedule a demo.