
Why AI Anxiety Might Be Your Biggest Security Risk Yet

By James Robinson, CISO, Netskope

AI is reshaping the enterprise, but the biggest risk isn’t always the technology—it can be the people reacting to it. AI adoption figures create the impression that everyone is happily embracing AI, with new research reporting 89% of organizations are actively using at least one SaaS genAI app. However, AI enthusiasm isn’t a given within any organization.  

Research from Writer finds that 31% of employees are already engaging in behaviors that undermine AI adoption. And within the more resistant user groups, we're seeing a new category of insider threat emerge, rooted in fear and uncertainty. This isn't about malicious actors. It's about individuals who are unsure what AI means for their job or their future, and whose behavior changes because of that uncertainty.

We already know that uncertainty, distraction and a lack of confidence increase the likelihood of human error, with 68% of breaches globally now involving the human element. When these behaviors show up around AI, even subtle actions can widen the human attack surface—something security teams need to plan for as AI adoption accelerates.

The rise of anxious insiders 

Insider risk is changing because our relationship with AI is constantly evolving. And in organizations adopting AI at pace, we are seeing three potentially problematic personas emerging as early sources of friction and risk:

1. The Silent Saboteur – negative behavior driven by mistrust and resistance 

Resistant employees who intentionally and actively slow or disrupt AI adoption in subtle ways—delaying projects, reverting to manual processes, or quietly influencing others to hold back. 

2. The Silent Detractor – avoidant behavior driven by fear and insecurity

Employees who quietly avoid AI tools because they worry about what AI adoption means for their role. Their avoidance creates workflow gaps and visibility blind spots as they try to work around systems. 

3. The Overwhelmed Insider – erroneous behavior driven by cognitive overload

Employees who want to keep up but can’t. They’re drowning in complexity, misusing AI unintentionally, or making mistakes that agents later amplify under their identity. 

Fear is a real security problem. Fear is predictable—and anything predictable becomes something attackers can exploit. A workforce operating under anxiety about the changes AI brings is more likely to make mistakes, simply because people are avoiding the tools, not completing the training, and filling gaps with guesswork. Stress has a direct impact on an employee's ability to perform. So when someone isn't confident in how an AI agent works, they don't just work more slowly; they work less safely, and are far more likely to misconfigure an agent, fall for manipulation, or accidentally widen the blast radius of a bad instruction.

Once an anxious employee triggers an agent into taking the wrong action, the attacker doesn't need the human at all. Instead of phishing a person, attackers can now target the agent itself, getting it to schedule a meeting with a malicious link or to alter a summary with a harmful URL. In other words, one slip from a hesitant user can hand an attacker control of an autonomous system.

Agentic systems also introduce new failure modes: agents acting on bad instructions, drifting outside scope or taking actions because they had too much access or misunderstood a prompt. These behaviors don’t map cleanly to traditional insider models, yet they appear insider-like when they unfold. 

Together, human-driven uncertainty and agent-driven autonomy create an expanded attack surface that continues to grow faster than organizations can keep up with. 

What CISOs need to do now  

We can’t solve AI insider risk with technical controls alone. We have to support the people navigating this change.  

Firstly, we need to upskill and support people before fear becomes avoidance. When employees understand how AI works and what’s expected of them, they stop seeing it as a threat. 

We must also be clear about what good looks like when it comes to cybersecurity hygiene in the era of AI. In a period of uncertainty, people want straightforward, consistent expectations. 

Next, governance must be distributed. Traditional governance committees are too slow to keep up with the pace of AI adoption, and they can’t be the only control point. One of the most effective things we did was launch an AI ambassador program. These ambassadors sit inside the business—outside of the CISO team—understand both the technology and the guardrails, and act as trusted peers who can answer questions and guide decisions long before anything ever reaches a governance committee. It creates a community-led model of oversight—people policing their own environments in a healthy, informed way—so AI decisions don’t bottleneck or drift simply because security wasn’t close enough to the work. 

Finally, controls need to assume human error, not human competence. Agentic systems will amplify mistakes. We need safeguards that account for drift, over-permissioned agents, and bad instructions. We cannot just hope they won’t happen.  
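
To make "assume human error" concrete, here is a minimal sketch of what a default-deny control for agent actions might look like. Everything in it (the AgentAction type, the guard function, the agent and action names) is a hypothetical illustration rather than any particular framework's API; the point is simply that an agent's permissions are enumerated explicitly and anything outside them fails closed.

```python
# Minimal sketch of a default-deny guard for agent actions.
# All names here (AgentAction, ALLOWED_ACTIONS, guard) are hypothetical
# illustrations, not any specific agent framework's API.

from dataclasses import dataclass

# Explicit allowlist: each agent identity maps to the only actions it may
# take. Anything absent is denied, so an over-permissioned or drifting
# agent fails closed instead of acting on a bad instruction.
ALLOWED_ACTIONS: dict[str, set[str]] = {
    "calendar-agent": {"schedule_meeting", "read_calendar"},
    "summary-agent": {"summarize_document"},
}

@dataclass
class AgentAction:
    agent_id: str   # identity the agent is acting under
    action: str     # what the agent is attempting to do
    target: str     # resource the action touches

def guard(request: AgentAction) -> bool:
    """Return True only if the action is explicitly permitted."""
    allowed = ALLOWED_ACTIONS.get(request.agent_id, set())
    if request.action not in allowed:
        # Log and block: assume error, not competence.
        print(f"DENIED: {request.agent_id} attempted "
              f"{request.action} on {request.target}")
        return False
    return True

# A mistaken or manipulated instruction is stopped at the control,
# not after the agent has already acted under the user's identity.
assert guard(AgentAction("summary-agent", "schedule_meeting", "team-call")) is False
assert guard(AgentAction("calendar-agent", "schedule_meeting", "team-call")) is True
```

The design choice worth noting is the default: an unknown agent or unlisted action is blocked automatically, so a hesitant user's slip has a bounded blast radius rather than an open one.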

Tackling AI anxiety 

AI is changing how organizations operate, but the real shift is happening in the people trying to keep up with it. Anxiety, uncertainty and hesitation don’t stay contained. They influence how employees behave, how decisions get made, and where security gaps quietly open. These patterns won’t fix themselves, and they won’t wait for slower governance cycles to catch up. 

That’s why CISOs need to take a more intentional, human-centered approach. The technical controls will evolve as time goes on, but the cultural impact of AI requires active leadership. It’s not that different from teaching someone to drive. An untrained driver behind the wheel is inevitably dangerous, but we don’t stop people from driving. We train them, build their confidence and help them manage the anxiety that comes with learning something new. That’s what makes them safe on the roads. 

It is the same with AI. People need clear expectations, informative training, practical guidance and an environment where they feel supported rather than threatened as AI becomes part of their daily work.  

Ignoring the human side of AI doesn’t make the risk go away. It just ensures we’re surprised by it later. 
