
The New Insider: How AI Is Redefining Insider Threats

By Dr. Margaret Cunningham, Director, Security & AI Strategy

OpenAI recently revealed it had banned ChatGPT accounts reportedly linked to North Korean IT operatives who used the tool to generate fake resumes, solve coding challenges, and configure spoofing tools, part of a broader effort to infiltrate US organizations by posing as legitimate workers.

These operatives weren't hacking their way in. They were acting their way in, behaving like insiders with the help of AI.

Traditionally, an insider threat referred to an employee, contractor, or partner with legitimate access who misused that access, whether through negligence, rule-bending, or malicious intent. But in an era defined by global uncertainty, rapid AI adoption, and increasingly fragmented digital identities, that definition no longer captures the full scope of risk.

Today, an insider threat might not be a person at all. It might be an AI-generated voice message, a deepfake video, or an LLM-powered agent impersonating your coworker. The game has changed.

The Rise of Synthetic Insiders

AI has introduced a new class of insider threat: outsiders who convincingly look like insiders. Deepfakes, voice cloning, and other synthetic media enable threat actors to impersonate executives, IT administrators, or trusted employees with unprecedented realism.

A voice clone of a CFO can pressure an employee to transfer funds. A deepfake video message of a CEO can bypass ordinary skepticism. In a world where trust is often established based on a familiar voice, face, or job title, attackers now have the tools to forge all three. The traditional definition of an "insider" simply can't keep up with this evolution.

When synthetic personas infiltrate networks and exploit human trust, the line between internal and external becomes dangerously blurry. Humans aren't good at consistently spotting synthetic media, and organizations shouldn't rely on them as the last line of defense.

Instead of relying on gut instinct or visual cues, organizations should equip employees with structured response protocols: verify through secondary channels, escalate suspicious requests, and never act on urgency alone. Trust should be earned through process, not appearance.
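To make that concrete, here is a minimal sketch of what such a protocol might look like in code. It is purely illustrative: the action names, channel labels, and routing strings are assumptions for the example, not a prescribed standard.

```python
# Illustrative sketch of a structured response protocol for high-risk
# requests. Action names, channels, and routing are hypothetical.

from dataclasses import dataclass

# Channels where synthetic media (voice clones, deepfake video) can
# convincingly forge identity, so appearance alone proves nothing.
IMPERSONATION_PRONE_CHANNELS = {"voice", "video", "chat"}
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}

@dataclass
class Request:
    requester: str       # claimed identity, e.g. "CFO"
    action: str          # e.g. "wire_transfer"
    channel: str         # how the request arrived, e.g. "voice"
    marked_urgent: bool  # urgency is a classic social-engineering lever

def handle_request(req: Request, verified_out_of_band: bool) -> str:
    """Route a request through process, not appearance."""
    if req.action not in HIGH_RISK_ACTIONS:
        return "proceed"
    # Verify through a secondary channel: a callback to a known number
    # or an in-person check, never the channel the request arrived on.
    if req.channel in IMPERSONATION_PRONE_CHANNELS and not verified_out_of_band:
        return "hold: verify identity via a known secondary channel"
    # Never act on urgency alone; urgency raises the bar, not lowers it.
    if req.marked_urgent:
        return "escalate: urgent high-risk request sent to security review"
    return "proceed"

# Example: an urgent "CFO" voice call demanding a transfer gets held.
print(handle_request(Request("CFO", "wire_transfer", "voice", True), False))
```

The design choice worth noting is that identity signals (voice, face, title) never appear in the decision logic at all; only the verification step and the process rules do.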

AI Is Supercharging Traditional Insider Threats Too

While synthetic media expands who can be considered an insider, AI is also transforming more traditional threats: those originating from legitimate employees, contractors, or partners. Threat actors are now using AI to conduct faster, more targeted reconnaissance on employees by scraping social media, blogs, job listings, and even public organization charts to tailor social engineering campaigns. This means insiders, especially those under stress, working in high-pressure roles, or already predisposed to risky behavior, are easier to target and manipulate than ever before.

Inside organizations, generative AI can unintentionally create risk. Employees trying to work faster and smarter may accidentally expose sensitive data to public-facing AI tools by uploading confidential content or intellectual property. Others may rely too heavily on overly agreeable chatbots that reinforce bad decisions or normalize security shortcuts. These behaviors aren't just technical risks; they're signals of how people are adapting to pressure, complexity, and the limitations of sanctioned tools.

Meanwhile, security teams are often in the dark. Most organizations have limited visibility into what tools use AI, who is interacting with them, and what information is being shared or processed. The rise of "shadow AI," unsanctioned tools introduced without IT oversight, is accelerating the sprawl of enterprise data, making it harder to know where sensitive information lives and who has access to it.
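One pragmatic first step toward that visibility, sketched below, is to inventory outbound traffic to known generative-AI endpoints. This assumes you can export proxy or DNS logs as CSV; the domain list, column names, and file path are illustrative, and a real deployment would use a maintained domain feed and your SIEM's schema.

```python
# Rough sketch: surface "shadow AI" usage from egress proxy/DNS logs.
# Domain list and log format are illustrative assumptions.

import csv
from collections import Counter

# Hypothetical sample of domains associated with public AI tools.
AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai",
              "gemini.google.com", "api.anthropic.com"}

def shadow_ai_report(log_path: str) -> Counter:
    """Count per-user requests to known AI endpoints, given a CSV log
    with columns: timestamp, user, domain."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Usage: print(shadow_ai_report("proxy_logs.csv").most_common(10))
```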

Shadow AI and unauthorized workarounds don't stem from malice, but from a disconnect between organizational policy and the reality of how people work. To truly understand insider risk, security teams need to understand those human systems: what tools people trust, where they feel friction, and how they adapt. This requires a shift from rule enforcement to behavioral insight.

Detection Must Shift from Static Controls to Dynamic Understanding

The current landscape reflects a broader collapse of boundaries: between people and technology, internal and external actors, and intention and automation. As these lines blur, our ability to rely on static rules or rigid classifications erodes. We need new strategies, ones that account for the complexity of human-machine systems and the unpredictability of behavior in dynamic environments.

Managing insider risk today requires a reimagined approach to detection and response. Static role-based controls and access models no longer reflect the fluid, tech-enabled reality of modern work. What's needed is behavioral intelligence: systems that learn what "normal" looks like for individuals and teams, and that can flag subtle deviations, especially among users in sensitive roles.

This starts with establishing baselines: How do employees typically interact with systems, data, and colleagues? What's normal for one person may be anomalous for another, even in the same role. By building peer group comparisons, tracking behavioral deviations, and flagging unusual patterns, especially among those in sensitive roles or with elevated access, security teams can detect subtle warning signs before damage occurs.
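A minimal sketch of that idea, assuming a single numeric activity feature per user-day: flag a user only when today's behavior deviates sharply from both their own history and their peer group. The feature choice and the 3-sigma threshold are assumptions for illustration.

```python
# Sketch of peer-group baselining: flag users whose activity deviates
# from their own baseline AND their peer group's. Threshold assumed.

import statistics

def deviation_score(value: float, history: list[float]) -> float:
    """How many standard deviations the value sits from the baseline."""
    if len(history) < 2:
        return 0.0  # not enough data to establish a baseline yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return abs(value - mean) / stdev

def flag_user(today: float, own_history: list[float],
              peer_values: list[float], threshold: float = 3.0) -> bool:
    """Require anomaly against BOTH baselines, since what's normal for
    one person may be anomalous for another, even in the same role."""
    own = deviation_score(today, own_history)
    peer = deviation_score(today, peer_values)
    return own > threshold and peer > threshold

# Example: 900 MB downloaded today vs. a quiet personal and peer history.
print(flag_user(900.0, [110.0, 95.0, 120.0], [100.0, 130.0, 90.0, 115.0]))
```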

This aligns with a more human view of security, one that acknowledges the gap between how we think people work and how they actually work. By embracing unsupervised behavioral modeling, we can detect not just known threats, but emerging patterns that signal stress, friction, or risk.
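As one illustration of unsupervised modeling (assuming scikit-learn is available), an Isolation Forest can score unlabeled behavioral features without predefined rules. The features and the contamination rate below are invented for the example.

```python
# Sketch: unsupervised anomaly scoring over behavioral features.
# Feature columns and contamination rate are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Rows = user-days; columns = [after_hours_logins, mb_downloaded, new_hosts]
X_train = np.array([[0, 120, 1], [1, 90, 0], [0, 150, 2], [0, 110, 1]])
X_today = np.array([[6, 4200, 14]])  # a sharp deviation from baseline

model = IsolationForest(contamination=0.05, random_state=42).fit(X_train)
score = model.decision_function(X_today)  # lower = more anomalous
flagged = model.predict(X_today) == -1    # -1 marks an outlier
print(f"anomaly score={score[0]:.3f}, flagged={bool(flagged[0])}")
```

The appeal of this family of models is exactly what the paragraph above describes: they learn structure from behavior as it actually occurs, rather than from rules about how we think people work.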

Toward Resilience in the Age of AI

Resilience in this new era requires more than better tools. It requires a shift in mindset.

We must accept that our existing assumptions and schemas for understanding behavior, identity, and risk may no longer apply. We must embrace ambiguity, adapt to fluid boundaries, and use technology not just to enforce rules, but to understand behavior in context.

AI is not just a threat vector; it's also a potential ally. With the right safeguards, AI can help us detect anomalies, surface hidden risks, and support employees in making better decisions. But to harness that potential, we must first acknowledge the complexity of the world we now operate in.

As the lines between insider and external threats continue to dissolve, the threat itself is getting harder for security teams to identify. In many cases, it no longer wears a badge or carries a keycard. It may speak with a familiar voice, appear in a trusted inbox, or emerge from a tool we didn't even know we were using.
