Why AI Means Identity Is The New Security Perimeter

Date: 7/8/2025


BLACK HAT 2025, LAS VEGAS – Identity has the potential to become the most important security battleground as AI threats evolve.

“Identity has been the forgotten part of cyber for a long time,” Deepak Jeevankumar, Managing Director at Dell Technologies Capital, tells Expert Insights.

“I think there’s going to be a huge resurgence in identity because it’s kind of the forgotten perimeter of cybersecurity. And with AI, identity becomes more important.”

Expert Insights has been at BHUSA25 this week, chatting to experts in the identity space about where the puck is headed and how you can prepare your business.

Phishing On AI Steroids

One of the most common ways that cybercriminals target identity is via social engineering attacks and phishing.

We don’t know the full extent of how threat actors are using AI—but we do know they are using AI tools, says Brett Stone-Gross, Senior Director of Threat Intelligence at Zscaler. When ransomware gang Black Basta had their chat logs leaked, we could see ransomware developers recommending the use of AI to write code. “AI makes us more efficient, but also makes them more efficient,” he says.

Recent research from Zscaler’s ThreatLabz team highlights a good example—they spotted threat actors using GenAI tools to produce phishing templates mimicking the Brazilian government. The pages ranked on Google via SEO poisoning, potentially reaching millions of users.

AI can definitely help cybercriminals break into accounts, says Matt Mullins, a former banking red teamer now at Reveal Security, a company that protects against identity risks by detecting identity attacks after an attacker has successfully authenticated.

Increasingly, cybercriminals use social engineering to bypass MFA controls by going directly to IT departments and asking for password resets. AI tools that can clone voices or disguise accents make it even easier for threat actors to gain access.

“There are now AIs that can take your accent away in real time. I could be calling somebody in a foreign nation, now my voice sounds just like you. Those things are just accelerants to the adversary,” Mullins says.

Deepfake Disasters

Matt Moynahan is a veteran CEO in the cybersecurity space, having previously led OneSpan, Forcepoint, and Veracode.

He now runs GetReal, an AI deepfake detection startup, and warns that the risk of deepfakes is massively underappreciated in the cybersecurity world.

“This is one of the greatest—and I mean this without hyperbole—one of the greatest unmanaged risks that I’ve ever seen in the history of cyber,” he tells Expert Insights.  “You can see it coming. And it’s sprinkling, it’s not raining; there’s not enough people actually talking about it now.”

“Deepfakes are everywhere. The problems we’re trying to solve are ubiquitous. If you’re having a telemedicine session, you better make damn sure that the person prescribing medicine is somebody who can prescribe it and not a deepfake trying to poison someone. And really, it sounds like James Bond, right?”

We have already begun to see attacks in this space. Famously, British engineering giant Arup was conned out of $25 million after falling victim to a deepfake scam that impersonated senior executives on a video call.

“Hackers and criminal organizations go to where the money is: fraud. And they’re starting to move from outside-in attacks to internal business processes, which means Zoom and Teams,” says Moynahan.

Protecting against these attacks is hard. We can’t stop using Zoom and Teams, and asking call participants to wave to prove they are real people will only work for so long as AI capabilities improve. Robust protection will mean investing in technology and processes that give users both the tools and the knowledge to detect deepfake threats. This could be a new frontier of identity security.

Machine Identities

An increasingly important trend in the identity space is the need to protect machine identities, not just human identities. And in the world of AI, this could become more important than ever before.  

“Every AI entity is a machine entity,” Jeevankumar says. “An agent is a machine identity, an agent identity. And the problem with human entities versus machine entities or AI entities is that, firstly, the quantity of AI entities and machine entities is like 100x or 1,000x more than human entities.

“The second thing is they have much shorter half-life […] they could exist today, they could not exist tomorrow, or it could be a different version with no continuity between these different identities. So, it’s very ephemeral.

“And the entire identity ecosystem was built around the concept of human identities, which are not ephemeral, and which last a much longer period of time.”

What does this mean? We need a revolution in identity security, Jeevankumar argues.

“The entire IT infrastructure stack has been built for programs or applications making deterministic decisions based on sets of rules. But artificial intelligence changes the game. The results from generative AI are usually not deterministic; they’re probabilistic and stochastic.  

“And when you have an entire infrastructure stack built on the fundamental assumption of determinism, but your new technologies are not deterministic—they’re probabilistic and stochastic—[a] lot of fundamental things change.

“Because of all of this, the challenges of [the] identity ecosystem in the AI world are going to be far more complex.”
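To make “ephemeral” concrete, here is a minimal, hypothetical sketch of one common pattern: each agent run is issued its own short-lived, narrowly scoped credential that simply expires rather than being deprovisioned. Nothing here comes from Jeevankumar or any company mentioned in this article; the function names, the signing key, and the five-minute TTL are illustrative assumptions, and the example uses the open-source PyJWT library.

```python
# Hypothetical sketch: short-lived, narrowly scoped credentials for AI agents.
# Assumes the PyJWT library (pip install pyjwt); all names and the 5-minute TTL
# are illustrative, not a reference to any product discussed in this article.
import time
import uuid

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-key-from-your-secrets-manager"  # assumption


def issue_agent_token(agent_name: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Mint a credential that identifies one ephemeral agent run and expires on its own."""
    now = int(time.time())
    claims = {
        "sub": f"agent:{agent_name}:{uuid.uuid4()}",  # unique per run: no continuity between versions
        "scope": scopes,                              # least privilege: only what this run needs
        "iat": now,
        "exp": now + ttl_seconds,                     # short half-life: the identity simply lapses
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")


def verify_agent_token(token: str) -> dict:
    """Reject expired or tampered agent credentials; PyJWT enforces 'exp' automatically."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])


if __name__ == "__main__":
    token = issue_agent_token("invoice-summarizer", ["read:invoices"])
    print(verify_agent_token(token)["sub"])
```

The point of the sketch is the shape of the problem: when identities appear and disappear this quickly, and at 100x or 1,000x the volume of human accounts, issuance, scoping, and expiry have to be automated rather than managed like long-lived human identities.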

Don’t Miss The Fundamentals

Some of the ideas we’ve explored here are controversial and may not be applicable to all enterprises, let alone all SMBs.  

For most businesses today, the priority will be to address the “basics”, like ensuring all users have MFA deployed, which is not as easy as anyone would like.
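As a purely illustrative example of what checking that basic can look like, the short sketch below scans a CSV export from an identity provider and lists accounts with no MFA method registered. The file name and column headings are assumptions; most identity providers can produce an equivalent report.

```python
# Hypothetical sketch: flag accounts with no MFA method registered, using a CSV
# export from your identity provider. The file name and column names
# ("user", "mfa_registered") are assumptions; adjust them to your provider's report.
import csv


def users_without_mfa(report_path: str = "user_mfa_report.csv") -> list[str]:
    """Return usernames whose export row does not show MFA as registered."""
    missing = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            # Treat anything other than an explicit "true" as unregistered.
            if row.get("mfa_registered", "").strip().lower() != "true":
                missing.append(row["user"])
    return missing


if __name__ == "__main__":
    for user in users_without_mfa():
        print(f"No MFA registered: {user}")
```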

But the trends are clear: AI-powered identity risks are coming down the track, and it may well be that we are entering a new era of cybersecurity, where identity will be pushed to the forefront like never before.