
How AI agents are creating a security crisis in SaaS environments

AI agents are spreading through enterprise SaaS environments faster than security teams can track — and most organizations have no idea how much access they’ve already handed over.

In August 2025, attackers gained access to Salesforce environments at more than 700 organizations — including Cloudflare, Palo Alto Networks, and Zscaler — without exploiting a single vulnerability. They didn’t phish anyone, and they didn’t break in directly. Instead, they used OAuth tokens belonging to Drift, an AI-powered chatbot that hundreds of enterprises had connected to their Salesforce installations. When threat actors compromised Salesloft’s internal systems and extracted those tokens, every downstream connection became a doorway. The activity looked like normal software behavior because, from the system’s perspective, it was.

The problem with how AI enters the enterprise

The governance question is becoming impossible to avoid. A March 2026 survey of 500 U.S. CISOs by security firm Vorlon found that 99.4% of organizations experienced at least one SaaS or AI ecosystem security incident in 2025. Only three of the 500 reported zero incidents — and yet 89.2% of those same CISOs claimed strong OAuth governance. The gap between confidence and outcomes, the report concluded, was not a problem of awareness but a problem of architecture.

Part of what makes the situation so difficult to contain is how unremarkable it looks at the start. Employees connect AI writing assistants to inboxes, link scheduling tools to calendars, and grant coding agents access to internal repositories — each decision framed as a productivity call rather than a security one. The access is never formally reviewed. And the AI agent, unlike a dormant shadow IT application, begins acting immediately.

“The most dangerous dynamic is that, unlike a shadow IT app that just sits there, an AI agent is active,” says Gal Nakash, co-founder and Chief Product Officer at Reco, a SaaS and AI security platform provider. “It reads, writes, summarizes, and acts. The risk is not static.”

Why existing security tools are struggling to keep up

The tools most enterprises rely on were not designed for this problem. Cloud Access Security Brokers — CASBs — were built for a world where the primary threat was a human employee accessing an unauthorized cloud application. They enforce policy at the network layer and watch for behavioral anomalies that look like a person doing something they shouldn’t.

Unfortunately, AI agents behave nothing like that. They authenticate through OAuth tokens and API keys — digital credentials that grant third-party applications permission to access data on a user’s behalf, often without requiring the user to log in again — rather than browser sessions, and they operate continuously, often overnight, touching dozens of systems simultaneously. A security tool built to detect human behavioral anomalies will not catch an AI agent quietly accumulating access far beyond its intended scope.
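To make that distinction concrete, here is a minimal sketch of the kind of signal a human-centric anomaly detector misses and an identity-focused one catches: identities that are active overnight and touch many systems at once. The log entries, identity names, and thresholds are illustrative assumptions, not any vendor’s actual detection logic.

```python
from datetime import datetime

# Hypothetical access log: (identity, timestamp, system touched).
# Real data would come from SaaS audit APIs; these entries are made up.
events = [
    ("alice@corp.com", datetime(2025, 8, 1, 9, 15), "salesforce"),
    ("alice@corp.com", datetime(2025, 8, 1, 14, 2), "sharepoint"),
    ("drift-bot", datetime(2025, 8, 1, 2, 5), "salesforce"),
    ("drift-bot", datetime(2025, 8, 1, 2, 6), "gmail"),
    ("drift-bot", datetime(2025, 8, 1, 3, 44), "jira"),
    ("drift-bot", datetime(2025, 8, 1, 23, 58), "slack"),
]

def looks_non_human(identity, events, min_systems=3):
    """Crude heuristic: flag identities that act overnight AND touch
    several distinct systems -- agent-like, non-browser behavior."""
    mine = [(ts, system) for who, ts, system in events if who == identity]
    overnight = any(ts.hour < 6 or ts.hour >= 22 for ts, _ in mine)
    systems = {system for _, system in mine}
    return overnight and len(systems) >= min_systems

flagged = {who for who, _, _ in events if looks_non_human(who, events)}
print(sorted(flagged))  # ['drift-bot']
```

The human account is never flagged: her activity falls in working hours and spans few systems, which is exactly why tooling tuned to human anomalies stays silent on the agent.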

Nakash explains that a fundamentally different approach is needed, one reflected in the company’s platform. Rather than watching the perimeter, Reco maps the full landscape of human and non-human identities across an organization’s SaaS environment and establishes a behavioral baseline for each one. When an agent begins interacting with systems or data outside its expected scope, the platform flags it. “CASB watches the front door,” Nakash says. “Reco watches what’s already inside.”
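The baselining idea can be sketched in a few lines. This is an illustrative simplification, not Reco’s implementation; the identity names, action labels, and baseline sets are assumed for the example.

```python
# Per-identity baseline of expected actions, assumed to be learned
# from historical activity (hypothetical identities and labels).
baseline = {
    "meeting-assistant": {"calendar:read", "mail:read"},
    "ci-bot": {"repo:read", "repo:write"},
}

def out_of_scope(identity, observed_actions):
    """Return the actions an identity performed outside its baseline.
    An unknown identity has an empty baseline, so everything it does
    is flagged."""
    allowed = baseline.get(identity, set())
    return set(observed_actions) - allowed

alerts = out_of_scope("meeting-assistant",
                      ["calendar:read", "mail:read", "files:read"])
print(alerts)  # {'files:read'}
```

The point of the sketch is the inversion: instead of asking whether traffic crossed the perimeter, it asks whether an identity already inside is drifting beyond what it was connected to do.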

What exposure looks like in practice

What that visibility reveals tends to catch organizations off guard. In one scenario Nakash says Reco encounters regularly, an AI meeting assistant — independently connected by several employees to their Microsoft 365 accounts — had accumulated read access to the inboxes and calendars of more than 40 people, including members of the executive leadership and legal teams.

While the tool was not malicious, the vendor’s data retention policy was ambiguous, the access extended to full email content, and the firm had data-handling obligations that made the entire arrangement an unreviewed compliance exposure. The security team had no idea the exposure existed until Reco mapped the full picture.

Once they did, the remediation was straightforward: the broad OAuth grants were revoked, access was re-established under a restricted IT-managed configuration, and an approval process was put in place so future AI tool connections required a security review before going live. “Within the first month, the firm had cut its third-party AI agent exposure by more than 60%,” explains Nakash.
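The approval gate in that remediation can be sketched as a simple policy check. The tool names and scope strings below are illustrative assumptions (the scopes resemble Microsoft Graph permission names), not the firm’s actual policy.

```python
# Hypothetical policy: only pre-reviewed, IT-managed tool configurations
# may connect, and even those may not request broad data scopes.
APPROVED_TOOLS = {"meeting-assistant-v2"}
BROAD_SCOPES = {"Mail.ReadWrite", "Files.Read.All", "Directory.Read.All"}

def review_grant(tool, requested_scopes):
    """Gate a new OAuth connection before it goes live."""
    if tool not in APPROVED_TOOLS:
        return "pending-security-review"   # unknown tool: hold for review
    if BROAD_SCOPES & set(requested_scopes):
        return "rejected-overbroad"        # known tool, excessive scopes
    return "approved"

print(review_grant("random-notetaker", ["Calendars.Read"]))
print(review_grant("meeting-assistant-v2", ["Mail.ReadWrite"]))
print(review_grant("meeting-assistant-v2", ["Calendars.Read"]))
```

Even a check this simple changes the default: connections are held until someone decides, rather than going live the moment an employee clicks consent.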

The scale of what’s coming

Cases like these are becoming harder to treat as exceptions. By the end of 2026, Gartner projects that 40% of enterprise applications will integrate with task-specific AI agents, a significant increase from less than 5% today. IBM’s 2025 Cost of a Data Breach Report found that organizations with high levels of shadow AI paid an average of $670,000 more per breach than those without it. And Reco’s own research shows that 91% of AI tools currently operate without IT oversight or approval.

The agents now running inside enterprise SaaS environments are not rogue. They are doing exactly what they were connected to do. The problem is that in most organizations, nobody decided who should be watching them — and that decision is now long overdue.
