Adversarial thinking and threat research in a deepfake world

New developments in generative AI (GenAI) broadly impact every facet of technology. Like most tools, GenAI can be used in both constructive and malicious ways, depending on the user’s intent.

It's clear that deepfake tools are becoming more convincing and easier to use over time. Threat actors are gaining experience with them and finding new avenues to employ them, and the fakes they produce are becoming harder to detect.

Combating these threats will involve collecting new kinds of threat intelligence, which is why I recently joined GetReal Security as its new Head of Threat Research.

Understanding the new threat landscape

Since joining the team just a few weeks ago, I've been exposed to fascinating research and innovative engineering projects. Even so, a few questions keep coming up for us: What are we seeing in the wild? How are threat actors actually using deepfakes? What do enterprise leaders need to know to prepare for these threats?

It all starts with understanding the threat landscape. This week, we learned that a member of Congress, a U.S. governor, and several foreign ministers all received fake voice messages on Signal impersonating U.S. Secretary of State Marco Rubio, sent with the "goal of gaining access to information or accounts." We've seen similar attacks throughout the year, impersonating White House Chief of Staff Susie Wiles and executives in the cryptocurrency industry. In some cases the threat actors aim to persuade their targets to authorize fund transfers; in others, they seek access credentials or intend to install malware.

These attacks are easy to pull off. There are plenty of AI voice cloning applications that can model a person's voice from just a short audio sample, which is easy to obtain for any target who has spoken publicly. Messaging apps can deliver recorded voice messages, so the attacker never needs to hold an interactive conversation in the voice they are impersonating.

There is also a tremendous amount of deepfake-based fraud targeting financial institutions and individual consumers. Poorly written online guides circulate simple tricks for evading the automated know-your-customer (KYC) identity checks that banks and cryptocurrency exchanges employ. The resulting "verified" accounts can be sold on dark web forums to money laundering operations for €150 or more, and a targeted version of the same attack can "recover" access to a specific person's account. Romance scams that impersonate celebrities and cryptocurrency scams built on fake celebrity endorsements are hauling in millions of dollars from consumers.

In attacks against enterprises, we've seen malicious actors upgrade their interactions from text messages to real-time deepfake videoconference calls, fully convincing the target that they are speaking with the right people.

When I think about the enterprise weak spots a threat actor might target with real-time deepfakes, I focus on calls with people who are expected to be external or unauthenticated: third-party suppliers who receive wire transfers, or remote employees who claim to have lost their phone and need their credentials reset to get back to work. Deepfakes could also be used to trick physical security teams into authorizing access to a facility.

One example of an external, unauthenticated caller is a job candidate you are interviewing. The infiltration of hundreds of U.S. companies by fake IT workers from North Korea has been one of the biggest cybersecurity stories of the past 12 months. It's an issue we spend a considerable amount of time thinking about at GetReal, and we're developing a set of security capabilities to address it. Often these remote workers use their real voices and faces, but Microsoft reports that it has begun to see deepfake technology adopted in some of these cases.

Another challenge enterprises will face as these tools proliferate is how their own employees use them. Most uses are benign or just fun: people have been applying goofy AI filters on calls for years, and Maybelline offers a virtual makeup plugin for Teams calls. Others are less harmless. Some employees have AI avatars sit in on calls for them, appearing attentive while they do other things, and it would not be surprising to see cases of mockery, harassment, or fraud carried out with the same tools.

Joining a world-class team of digital forensic experts

GetReal Security has assembled a world-class team of experts in digital media forensics, machine learning, and threat investigations to build state-of-the-art capabilities that help organizations combat threats from the malicious use of generative AI and deepfakes. This is one of the smartest groups of people working on one of the most important problems facing society today.

Our founder, Hany Farid, is one of the world's foremost academic experts in digital media forensics. Emmanuelle Saliba, our Chief Investigations Officer, is sought after by news media organizations and governments worldwide for her expertise in identifying manipulated media and uncovering its true origins and motives. And I bring decades of experience in cybersecurity threat and vulnerability research. 

Together, we're building new models of the threats and threat actors that target enterprises with deceptive digital media, the tools and techniques they employ, and the indicators of compromise associated with them.
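To make that last idea concrete, here is a rough sketch, in Python, of what a single record in such a catalog might look like. The schema, field names, and example values below are my own illustration of the concept, not GetReal's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeepfakeIndicator:
    """One observed artifact or behavior tied to a deceptive-media campaign.

    Purely illustrative: the fields and categories are assumptions for the
    sake of example, not a real product schema.
    """
    indicator_type: str                  # e.g. "cloned_voice_sample" or "burner_messaging_account"
    value: str                           # file hash, account handle, phone number, etc.
    campaign: str                        # campaign or activity cluster where it was observed
    suspected_tool: str = "unknown"      # generation tool or model family, if attributable
    first_seen: str = ""                 # ISO 8601 timestamp
    related_techniques: List[str] = field(default_factory=list)

# Hypothetical record for the kind of voice-phishing activity described above.
example = DeepfakeIndicator(
    indicator_type="cloned_voice_sample",
    value="sha256:<hash of the delivered audio message>",
    campaign="executive-impersonation-voicemail",
    related_techniques=["voice cloning", "recorded voice messages over Signal"],
)
print(example)
```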