5 AI threats that companies might not see coming
The artificial intelligence train is traveling at full speed in 2025, with company leaders struggling to keep up and counter rising operational risks.
Consider Stanford University’s 2025 AI Index Report, which shows AI-linked privacy and security incidents grew by 56.4% in 2024. The study identified data breaches and algorithmic failures that compromise sensitive information as the most common types of AI incidents.
Even more alarming is the gap between what management knows about AI risk and what companies actually do to curb it, even as adverse outcomes such as regulatory penalties and irreparable damage to consumer trust loom large.
“Our research shows that 73% of enterprises experienced an AI-related security incident last year, often without realizing the vulnerabilities until it was too late,” said Dr. Ja-Naé Duane, a behavioral scientist at Brown University and co-author of the upcoming book “SuperShifts: Transforming How We Live, Learn, and Work in the Age of Intelligence.” “The real issue is that many C-suite leaders equate AI risk with technical bugs, missing the broader human, operational, and reputational downsides, such as plummeting staff morale, misinformation, and adversarial manipulation.”
Duane points to several AI incidents that have damaged big-name brands:
· Amazon’s AI recruiting tool began downgrading female applicants because of biased training data.
· Microsoft’s Tay chatbot posted offensive content within hours of launch due to poor safeguards.
· A New York lawyer submitted fake legal cases generated by ChatGPT and was fined.
· A Chevrolet dealership’s chatbot was tricked into offering a new SUV for $1.
All too often, corporate leadership lacks situational awareness of these AI threats.
“Most recently, nearly half of companies have abandoned AI initiatives due to poor implementation, lack of trust, and governance failures,” Duane added. “These examples show that the most significant AI risks stem from leadership blind spots, not just technological ones.”
Step one in getting ahead of AI risks is to know the threats that present the biggest challenges. Quartz spoke with management experts who highlighted specific AI dangers for which all companies should prepare.
1. Coding issues that lead to security threats
As AI grows more pervasive inside companies, “overcoding” is leading to cybersecurity breaches.
“Software engineering teams have been using AI to generate software incredibly quickly, but the code always comes with a dramatic increase in bloat, particularly unused, unnecessary lines of code,” said George Manuelian, chief strategist at RapidFort, a cybersecurity firm in Atherton, California.
What many company teams don’t realize is that software code bloat directly expands their organization's attack surface, leading to more software vulnerabilities that hackers can exploit. “We’ve seen firsthand how software generated or accelerated by AI includes unnecessary things that just increase risk without adding value, let alone having a specific function,” Manuelian said.
To make matters worse, security teams are now struggling to keep up with the rapid pace of software development. “This causes a serious bottleneck, severely weakening the security posture of an organization,” Manuelian added.
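For a concrete picture of what that bloat looks like, here is a minimal, hypothetical sketch in Python (not RapidFort’s tooling) of one crude check a team might run: flag installed packages that the codebase never actually imports. The “src” directory name and the heuristic itself are illustrative assumptions.

```python
# Hypothetical sketch: flag installed Python packages the codebase never imports,
# a crude proxy for the unused code that widens an application's attack surface.
import ast
import pathlib
from importlib import metadata

def imported_modules(src_dir: str) -> set[str]:
    """Collect the top-level module names imported anywhere under src_dir."""
    names: set[str] = set()
    for path in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.add(node.module.split(".")[0])
    return names

def unused_packages(src_dir: str) -> list[str]:
    """List installed distributions whose modules are never imported."""
    used = imported_modules(src_dir)
    unused = set()
    for dist in metadata.distributions():
        top_level = dist.read_text("top_level.txt") or dist.metadata["Name"] or ""
        modules = {m.strip() for m in top_level.splitlines() if m.strip()}
        if modules and modules.isdisjoint(used):
            unused.add(dist.metadata["Name"])
    return sorted(unused)

if __name__ == "__main__":
    # "src" is an assumed project layout; every package printed is one more
    # dependency to patch, scan, and defend that the application never needed.
    print(unused_packages("src"))
```

Every dependency that shows up in a report like this is code that must still be scanned, patched, and defended, which is exactly the risk-without-value trade-off Manuelian describes.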
2. Deepfake scams
One of the most pressing and underappreciated risks of AI implementation is the rise of deepfake-enabled impostor threats.
“We’re now seeing adversaries, like North Korean threat actors, successfully infiltrate U.S. companies by posing as legitimate job candidates,” said Matt Moynihan, CEO of GetReal Security in Boston, Massachusetts. “Using generative AI, they can forge documents, falsify credentials, and even simulate live video interviews with convincingly realistic personas.”
The deepfake issue isn’t hypothetical, and it’s not limited to just one industry or company size.
“Everyone from Fortune 500s to mid-sized businesses is vulnerable, and the consequences are far-reaching,” Moynihan said. “These actors aren’t just targeting jobs, they’re targeting access. Once inside, they can exfiltrate sensitive intellectual property, compromise customer data, or gather information that has national security implications.”
What makes this threat particularly dangerous is how easily it can bypass traditional vetting processes. Without specialized tools, these deepfake identities are nearly impossible to detect.
“That’s why companies need to shift their approach, investing in cross-functional detection capabilities that combine cybersecurity, HR, and digital forensics to validate identity in real time,” Moynihan added.
3. Unexpected behaviors
One of the biggest risks is how easily AI systems can spiral into behaviors that senior executives never anticipated.
“Without the right controls, AI begins to interact with other tools, make assumptions, and create consequences no one planned for,” said Tim Armandpour, chief technology officer at PagerDuty, an operations performance management company in San Francisco, California. “We’re seeing AI agents that trigger alerts, escalate incidents, or touch sensitive data with little human involvement.”
Employees are the first to feel the effects when workflows are disrupted or trust breaks down. Shareholders, meanwhile, feel the impact when those incidents lead to downtime, security gaps, or public fallout, Armandpour noted. “These are not edge cases. This is happening now, and the complexity is only growing,” he said.
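One control along the lines Armandpour describes is a human approval gate for agent actions that touch sensitive systems. The sketch below is a minimal, hypothetical illustration; the action names and approval mechanism are assumptions, not PagerDuty’s product.

```python
# Hypothetical guardrail: let an AI agent run low-risk actions automatically,
# but hold anything touching sensitive data or escalations for human sign-off.
from dataclasses import dataclass

SENSITIVE_ACTIONS = {"escalate_incident", "read_customer_data", "delete_record"}

@dataclass
class AgentAction:
    name: str
    target: str

def execute(action: AgentAction, human_approved: bool = False) -> str:
    """Run low-risk actions; hold sensitive ones until a person approves."""
    if action.name in SENSITIVE_ACTIONS and not human_approved:
        return f"HELD for review: {action.name} on {action.target}"
    return f"EXECUTED: {action.name} on {action.target}"

print(execute(AgentAction("send_status_update", "status-page")))
print(execute(AgentAction("read_customer_data", "crm")))  # held until a human signs off
```

The point of the gate is not to slow every action down, but to make sure the consequential ones still pass through a person before they reach customers or production data.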
4. Treating AI as a software upgrade
While AI fundamentally changes how humans work, communicate, and relate to each other, most organizations implement AI as if it were merely another software upgrade.
“AI changes how we find information, how we make decisions, how we collaborate, and even how we build trust with customers and our own teams,” said Aaron Perkins, founder at Market-Proven AI, an AI workplace training firm in Dover, Delaware.
When organizations ignore these human dynamics, they create what Perkins calls “connection debt.” “In this context, connection debt is this growing gap between what the technology can do and what people are willing or able to do with it,” Perkins noted.
AI is creating an entirely new generation of work, which requires new competencies in human-AI collaboration, yet many companies approach training with the wrong perspective. “Most organizations focus on teaching people how to use the tools rather than how to work effectively alongside and with AI while maintaining human connections,” Perkins said.
5. AI hallucination syndrome
Another issue is that AI can fabricate information, even when trained on specific company data, and companies can never be 100% certain that it hasn't invented something out of thin air.
“We all know those internet memes where you ask AI to draw an image without an elephant, and it draws one with an elephant anyway,” said Jeff Tilley, founder and CEO at Muncly, a Salesforce solutions company. “That’s because elephants exist in the world, and the AI falls into logical fallacies based on how you prompt it.”
The hallucination issue highlights the core problem that AI lacks common sense and a contextual understanding of what's happening, even though C-suite executives may not prioritize this line of thinking.
“It's a statistical model trying to fake real human conversation,” Tilley said. “When it gives you false information and you make decisions based on that, then build more decisions on top of those false foundations, you're extrapolating massive amounts of error that can lead to catastrophic business decisions.”
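One hedge against stacking decisions on fabricated numbers is to verify the model’s figures against the company’s own records before acting on them. The sketch below is illustrative only; the data and the check are assumptions, not Muncly’s approach.

```python
# Hypothetical guardrail: before acting on an AI-generated answer, check that
# every figure it cites appears in trusted source records. Data is illustrative.
import re

source_records = {"last quarter revenue": 4.2, "prior quarter revenue": 3.9}  # trusted figures ($M)

def unverified_figures(ai_answer: str) -> list[float]:
    """Return numbers cited in the AI's answer that match no trusted record."""
    cited = [float(x) for x in re.findall(r"\d+(?:\.\d+)?", ai_answer)]
    trusted = set(source_records.values())
    return [x for x in cited if x not in trusted]

answer = "Revenue last quarter was 5.1, up from 4.2 the quarter before."
flags = unverified_figures(answer)
if flags:
    print(f"Hold the decision -- these figures are not in the source data: {flags}")
```

A check this simple will not catch every fabrication, but it forces the question Tilley raises: can this number actually be traced back to something the company knows to be true?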
The way forward on managing AI risks
On the upside, there are safeguards that CEOs, CROs, and CTOs can put in place to significantly curb the risks of AI deployment and use. Start by training and tasking staff to serve as the first line of defense, then build from there.
“Empower employees, not just to execute, but to question, raise flags, and shape what AI means for their work,” said Andrea Schnepf, founder at Nepf LLC, a management consulting firm in Irvine, California. “Also, establish AI 'surgeries' or open forums where people can surface concerns early.”
A safe culture can be an executive's best early-warning system for AI threats on the job.
“When uncertainty looms, over-communicate the need for vigilance,” Schnepf said. “A CEO who’s transparent about AI’s purpose, scope, and limits earns trust, even if the tools aren’t perfect yet.”