CIO Influence

New Research from UpGuard: 1 in 5 Developers Grant AI Vibe Coding Tools Unrestricted Workstation Access

Widespread “YOLO Mode” risks in AI coding tools are creating significant supply chain and data breach exposure

UpGuard, a leader in cybersecurity and risk management, released new research highlighting a critical security vulnerability within developer workflows. UpGuard’s analysis of more than 18,000 AI agent configuration files from public GitHub repositories identified a concerning pattern: one in five developers have granted AI code agents unrestricted access to perform high-risk actions without human oversight.

In pursuit of efficiency, developers are granting AI tools extensive permissions to download content from the web and to read, write, and delete files on their machines without requiring approval for each action. These shortcuts come at the cost of essential security guardrails, exposing organizations to major supply chain and data security risks.
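Agent configuration formats vary by tool, but the risky pattern is easy to picture. The fragment below is a hypothetical illustration (the keys are invented for this sketch, not taken from any specific product) of a configuration that disables every guardrail at once:

```json
{
  "permissions": {
    "file_read": "allow",
    "file_write": "allow",
    "file_delete": "allow",
    "shell_exec": "allow",
    "network_fetch": "allow"
  },
  "require_human_approval": false
}
```

With `require_human_approval` set to `false`, a single prompt injection in fetched web content could trigger any of the allowed actions without a developer ever seeing it.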

“Security teams lack visibility into what AI agents are touching, exposing, or leaking when developers grant vibe coding tools broad access without oversight,” said Greg Pollock, director of Research and Insights at UpGuard. “Despite the best intentions, developers are increasing the potential for security vulnerabilities and exploitation. This is how small workflow shortcuts can escalate into major supply chain and credential exposure problems.”

Key Findings:

  • Widespread Potential for Damage: 1 in 5 developers granted AI agents permission for unrestricted file deletion, allowing a small error or prompt injection attack to recursively wipe a project or system.
  • Risk from Unchecked AI Development: Almost 20% of developers let the AI automatically save changes to the project’s main code repository, skipping necessary human review. This automated setup creates a serious security gap: it allows an attacker to insert harmful or malicious code directly into production systems or open-source projects, potentially leading to widespread compromise.
  • High-Risk Execution Permissions: A significant share of files granted permissions for arbitrary code execution, including 14.5% for Python and 14.4% for Node.js, effectively giving an attacker full control over the developer’s environment through a successful prompt injection.
  • MCP Typosquatting Threat: Analysis of the Model Context Protocol (MCP) ecosystem revealed extensive use of lookalike servers, creating ripe conditions for attackers to impersonate trusted technology brands. In the registries where users find these AI tools, for every server provided by a verified technology vendor there were up to 15 lookalikes from untrusted sources.
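The kind of audit behind these findings can be sketched in a few lines. The snippet below is a minimal illustration, not UpGuard’s methodology: the flag names are hypothetical, since real agent tools each use their own tool-specific configuration keys.

```python
import json

# Hypothetical auto-approval flags; real AI agent tools use different,
# tool-specific configuration keys. These stand in for the high-risk
# permissions described above.
RISKY_FLAGS = {
    "auto_approve_shell": "arbitrary command execution without review",
    "auto_approve_delete": "unrestricted file deletion",
    "auto_commit": "unreviewed changes to the main branch",
}

def audit_agent_config(config_text: str) -> list[str]:
    """Return warnings for risky permissions found in an AI agent
    configuration file (JSON assumed for this illustration)."""
    try:
        config = json.loads(config_text)
    except json.JSONDecodeError:
        return ["could not parse configuration"]
    findings = []
    for flag, risk in RISKY_FLAGS.items():
        # Only an explicit `true` counts as granting the permission.
        if config.get(flag) is True:
            findings.append(f"{flag}: {risk}")
    return findings

# Example: a config that disables human review for shell commands.
sample = '{"auto_approve_shell": true, "auto_commit": false}'
print(audit_agent_config(sample))
# → ['auto_approve_shell: arbitrary command execution without review']
```

Run at scale over public repositories, even a simple check like this surfaces how often review gates are switched off.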
