
Widespread “YOLO Mode” risks in AI coding tools are creating significant supply chain and data breach exposure
MOUNTAIN VIEW, Calif., Feb. 4, 2026 /PRNewswire/ — UpGuard, a leader in cybersecurity and risk management, released new research highlighting a critical security vulnerability in developer workflows. UpGuard’s analysis of more than 18,000 AI agent configuration files from public GitHub repositories identified a concerning pattern: one in five developers has granted AI code agents unrestricted access to perform high-risk actions without human oversight.
In the push to use AI to improve efficiency, developers are granting AI agents extensive permissions to download content from the web and to read, write, and delete files on their machines without requiring developer approval. These shortcuts come at the cost of essential security guardrails, exposing organizations to major supply chain and data security risks.
“Security teams lack visibility into what AI agents are touching, exposing, or leaking when developers grant vibe coding tools broad access without oversight,” said Greg Pollock, Director of Research and Insights at UpGuard. “Despite the best intentions, developers are increasing the potential for security vulnerabilities and exploitation. This is how small workflow shortcuts can escalate into major supply chain and credential exposure problems.”
Key Findings:
- Widespread Potential for Damage: 1 in 5 developers granted AI agents permission to delete files without restriction, allowing a small error or a prompt injection attack to recursively wipe a project or an entire system (a minimal, illustrative scan for this kind of configuration appears after this list).
- Risk from Unchecked AI Development: Almost 20% of developers let the AI automatically commit changes to the project’s main code repository, skipping human review. This setup creates a serious security gap: it allows an attacker to insert malicious code directly into production systems or open-source projects, which could lead to widespread security compromises.
- High-Risk Execution Permissions: A significant share of configuration files granted permission to execute arbitrary code, including 14.5% for Python and 14.4% for Node.js; a successful prompt injection could therefore give an attacker full control of the developer’s environment.
- MCP Typosquatting Threat: Analysis of the Model Context Protocol (MCP) ecosystem revealed widespread lookalike servers, creating ripe conditions for attackers to impersonate trusted technology brands. In the registries where users find these AI tools, there were up to 15 lookalikes from untrusted sources for every server provided by a verified technology vendor.
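To make the pattern concrete, the following is a minimal, illustrative sketch of the kind of configuration audit the findings suggest. It is not UpGuard’s methodology: the JSON file layout, the “permissions.allow” list, the “auto_approve” flag, and the risky patterns below are assumptions chosen to mirror the behaviors described above, and would need to be adapted to a given agent’s real settings schema.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: flag overly permissive AI coding agent configs.

Assumptions (hypothetical, not taken from the UpGuard report): agent settings
are JSON files, a "permissions.allow" list holds approved tool patterns, and
an "auto_approve" flag skips human review. Adapt to your agent's real schema.
"""
import json
import sys
from pathlib import Path

# Patterns mirroring the high-risk behaviors in the findings above:
# unrestricted deletion, unreviewed pushes, and arbitrary code execution.
RISKY_PATTERNS = ("rm -rf", "rm:*", "git push", "python:*", "node:*", "*")


def audit(config_dir: str) -> int:
    """Print a line per risky setting found; return the number of findings."""
    findings = 0
    for path in Path(config_dir).rglob("*.json"):
        try:
            config = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or not valid JSON; skip it
        if not isinstance(config, dict):
            continue
        if config.get("auto_approve"):
            findings += 1
            print(f"{path}: auto-approval enabled (changes skip human review)")
        permissions = config.get("permissions")
        allow = permissions.get("allow", []) if isinstance(permissions, dict) else []
        for entry in allow if isinstance(allow, list) else []:
            if isinstance(entry, str) and any(p in entry for p in RISKY_PATTERNS):
                findings += 1
                print(f"{path}: broad permission granted: {entry!r}")
    return findings


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(1 if audit(target) else 0)
```

Run against a directory of repositories, a scan like this offers security teams one low-effort way to surface “YOLO mode” settings of the kind described above before they reach production.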
These risks highlight a critical governance gap that slows incident response and increases the likelihood of credential and data exposure. To read UpGuard’s recent research on vibe coding, visit the following:
- YOLO Mode: Hidden Risks in AI Coding Agents: https://hubs.li/Q041Cb0H0
- Emerging Risks: Typosquatting in the MCP Ecosystem: https://hubs.li/Q041CCpq0
About UpGuard’s Breach Risk
UpGuard’s Breach Risk solution is designed to turn everything from hidden shortcuts, such as misconfigurations and overly broad permissions, to early threat signals, like dark web chatter, into clear, actionable visibility. By providing deep insight into AI-generated changes, access, and data flows, UpGuard’s Breach Risk solution helps security teams enforce a strict governance framework.
About UpGuard
Founded in 2012, UpGuard is a leader in cybersecurity and risk management. The company’s AI-powered platform for cyber risk posture management (CRPM) provides a centralized, actionable view of cyber risk across an organization’s vendors, attack surface, and workforce. Trusted by thousands of companies, UpGuard’s platform is designed to help security teams manage cyber risk with confidence and efficiency. To learn more, visit www.upguard.com.
View original content to download multimedia: https://www.prnewswire.com/news-releases/new-research-from-upguard-1-in-5-developers-grant-ai-vibe-coding-tools-unrestricted-workstation-access-302678210.html
SOURCE UpGuard

