Future of AI

The New Insider: How AI Is Redefining Insider Threats

By Dr. Margaret Cunningham, Director, Security & AI Strategy 

OpenAI recently revealed it had banned ChatGPT accounts reportedly linked to North Korean IT operatives who used the tool to generate fake resumes, solve coding challenges, and configure spoofing tools as part of a broader effort to infiltrate US organizations by posing as legitimate workers. 

These operatives weren’t hacking their way in. They were acting their way in, behaving like insiders with the help of AI.  

Traditionally, an insider threat referred to an employee, contractor, or partner with legitimate access who misused that access, whether through negligence, rule-bending, or malicious intent. But in an era defined by global uncertainty, rapid AI adoption, and increasingly fragmented digital identities, that definition no longer captures the full scope of risk.  

Today, an insider threat might not be a person at all. It might be an AI-generated voice message, a deepfake video, or an LLM-powered agent impersonating your coworker. The game has changed. 

The Rise of Synthetic Insiders  

AI has introduced a new class of insider threat: outsiders who convincingly look like insiders. Deepfakes, voice cloning, and other synthetic media enable threat actors to impersonate executives, IT administrators, or trusted employees with unprecedented realism.   

A voice clone of a CFO can pressure an employee to transfer funds. A deepfake video message of a CEO can bypass ordinary skepticism. In a world where trust is often established based on a familiar voice, face, or job title, attackers now have the tools to forge all three. The traditional definition of an “insider” simply can’t keep up with this evolution.  

When synthetic personas infiltrate networks and exploit human trust, the line between internal and external becomes dangerously blurry. Humans aren’t good at consistently spotting synthetic media, and organizations shouldn’t rely on them as the last line of defense.  

Instead of relying on gut instinct or visual cues, organizations should equip employees with structured response protocols: verify through secondary channels, escalate suspicious requests, and never act on urgency alone. Trust should be earned through process, not appearance.  
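
One way to make such a protocol concrete is to write it down as an explicit decision rule rather than leaving it to in-the-moment judgment. The following minimal sketch is purely illustrative (the request fields, risk categories, and function names are assumptions, not any organization's actual policy): it routes urgent or high-impact requests to out-of-band verification before anyone acts.

from dataclasses import dataclass

# Hypothetical request record; field names are illustrative only.
@dataclass
class Request:
    requester: str    # claimed identity, e.g. "CFO"
    channel: str      # "email", "voice", "video", "chat"
    action: str       # e.g. "wire_transfer", "credential_reset"
    urgent: bool      # requester is pressing for immediate action

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

def next_step(req: Request) -> str:
    """Return the required handling step under a 'verify before acting' policy."""
    if req.action in HIGH_RISK_ACTIONS or req.urgent:
        # Urgency or high impact always triggers an out-of-band check:
        # call back on a known number, confirm via a second channel, or escalate.
        return "verify via secondary channel, then escalate if unconfirmed"
    return "proceed with standard handling"

print(next_step(Request("CFO", "voice", "wire_transfer", urgent=True)))

The point is not the code itself but the shift it represents: the decision to act is driven by the nature of the request, not by how convincing the requester looks or sounds.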

AI Is Supercharging Traditional Insider Threats Too  

While synthetic media expands who can be considered an insider, AI is also transforming more traditional threats: those originating from legitimate employees, contractors, or partners. Threat actors are now using AI to conduct faster, more targeted reconnaissance on employees by scraping social media, blogs, job listings, and even public organization charts to tailor social engineering campaigns. This means insiders, especially those under stress, working in high-pressure roles, or already predisposed to risky behavior, are easier to target and manipulate than ever before.   

Inside organizations, generative AI can unintentionally create risk. Employees trying to work faster and smarter may accidentally expose sensitive data to public-facing AI tools by uploading confidential content or intellectual property. Others may rely too heavily on overly agreeable chatbots that reinforce bad decisions or normalize security shortcuts. These behaviors aren’t just technical risks; they’re signals of how people are adapting to pressure, complexity, and the limitations of sanctioned tools.   

Meanwhile, security teams are often in the dark. Most organizations have limited visibility into which tools use AI, who is interacting with them, and what information is being shared or processed. The rise of “shadow AI” (unsanctioned tools introduced without IT oversight) is accelerating the sprawl of enterprise data, making it harder to know where sensitive information lives and who has access to it.  
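
A practical first step toward that visibility is simply inventorying which AI services corporate traffic already reaches. The sketch below assumes a CSV web-proxy log with "user" and "destination_host" columns and a hand-maintained list of AI service domains; both are assumptions made for illustration, not features of any particular product.

import csv
from collections import Counter

# Illustrative, deliberately incomplete list of generative-AI service domains.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests per (user, AI domain) in an assumed CSV proxy-log schema."""
    counts = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                counts[(row["user"], host)] += 1
    return counts

# Example: surface the heaviest users of unsanctioned AI tools for follow-up.
# for (user, host), n in shadow_ai_usage("proxy.csv").most_common(10):
#     print(user, host, n)

Even a rough inventory like this turns shadow AI from an unknown into a starting point for understanding why people reached for those tools in the first place.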

Shadow AI and unauthorized workarounds don’t stem from malice, but from a disconnect between organizational policy and the reality of how people work. To truly understand insider risk, security teams need to understand those human systems: what tools people trust, where they feel friction, and how they adapt. This requires a shift from rule enforcement to behavioral insight.  

Detection Must Shift from Static Controls to Dynamic Understanding   

The current landscape reflects a broader collapse of boundaries—between people and technology, internal and external actors, and intention and automation. As these lines blur, our ability to rely on static rules or rigid classifications erodes. We need new strategies, ones that account for the complexity of human-machine systems and the unpredictability of behavior in dynamic environments. 

Managing insider risk today requires a reimagined approach to detecting and responding to these threats. Static role-based controls and access models no longer reflect the fluid, tech-enabled reality of modern work. What’s needed is behavioral intelligence: systems that learn what “normal” looks like for individuals and teams, and that can flag subtle deviations—especially among users in sensitive roles.   

This starts with establishing baselines: How do employees typically interact with systems, data, and colleagues? What’s normal for one person may be anomalous for another, even in the same role. By building peer group comparisons, tracking behavioral deviations, and flagging unusual patterns—especially among those in sensitive roles or with elevated access—security teams can detect subtle warning signs before damage occurs.  
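
As a simple illustration of what baselining and peer comparison can look like, the sketch below flags activity only when it deviates sharply from both a user’s own history and their peer group. The single metric, thresholds, and sample numbers are assumptions chosen for illustration; a production system would combine many signals and use more robust statistics.

import statistics

def deviation(value: float, baseline: list[float]) -> float:
    """Standard score of today's value against a baseline of past observations."""
    mean = statistics.fmean(baseline)
    spread = statistics.pstdev(baseline) or 1.0  # guard against zero variance
    return (value - mean) / spread

def flag(user_today: float, user_history: list[float],
         peers_today: list[float], threshold: float = 3.0) -> bool:
    """Flag only when the user is an outlier against self AND against peers."""
    return (abs(deviation(user_today, user_history)) > threshold
            and abs(deviation(user_today, peers_today)) > threshold)

# Example: files accessed per day by one analyst vs. their own week and their team.
history = [12, 9, 14, 11, 10, 13, 12]
peers = [11, 13, 10, 12, 14, 9]
print(flag(220, history, peers))   # True: far outside both baselines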

This aligns with a more human view of security—one that acknowledges the gap between how we think people work and how they actually work. By embracing unsupervised behavioral modeling, we can detect not just known threats, but emerging patterns that signal stress, friction, or risk.  
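
One common unsupervised technique (among many) is an isolation forest, which learns the shape of routine activity without any labeled incidents and scores how unusual a new observation is. The sketch below uses synthetic data and a made-up three-feature view of behavior (off-hours logins, data downloaded, systems touched); it is an assumption-laden illustration, not a recommendation of a specific model.

# Requires NumPy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" activity: [off-hours logins, MB downloaded, systems touched].
baseline = rng.normal(loc=[1, 200, 5], scale=[0.5, 40, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_observations = np.array([
    [6, 4000, 30],   # large off-hours download across many systems
    [1, 210, 5],     # routine activity
])
print(model.predict(new_observations))   # -1 = anomalous, 1 = normal

The value of the unsupervised framing is exactly what the paragraph above describes: no one has to define “malicious” in advance, only supply enough history for the model to learn what ordinary looks like.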

Toward Resilience in the Age of AI  

Resilience in this new era requires more than better tools. It requires a shift in mindset.  

We must accept that our existing assumptions and schemas for understanding behavior, identity, and risk may no longer apply. We must embrace ambiguity, adapt to fluid boundaries, and use technology not just to enforce rules, but to understand behavior in context.  

AI is not just a threat vector—it’s also a potential ally. With the right safeguards, AI can help us detect anomalies, surface hidden risks, and support employees in making better decisions. But to harness that potential, we must first acknowledge the complexity of the world we now operate in.  

As the lines between insider and external threats continue to dissolve, the threat itself is getting harder for security teams to identify. In many cases, it no longer wears a badge or carries a keycard. It may speak with a familiar voice, appear in a trusted inbox, or emerge from a tool we didn’t even know we were using. 

 

 
