Cyber Security News

GitHub Copilot RCE Vulnerability via Prompt Injection Leads to Full System Compromise

A critical security vulnerability in GitHub Copilot and Visual Studio Code has been discovered that allows attackers to achieve remote code execution through prompt injection attacks, potentially leading to full system compromise of developers’ machines. 

The vulnerability, tracked as CVE-2025-53773, exploits GitHub Copilot’s ability to modify project configuration files, particularly the .vscode/settings.json file, enabling attackers to bypass security controls and execute arbitrary commands on target systems.

Key Takeaways
1.  CVE-2025-53773 uses prompt injection to enable Copilot's "YOLO mode”.
2. Creates botnet "ZombAIs," spreads AI viruses via Git.
3. Update Visual Studio 2022 immediately.

GitHub Copilot “YOLO Mode” Vulnerability

The vulnerability centers around GitHub Copilot’s capability to create and write files in the workspace without explicit user approval, with modifications being immediately persistent to disk rather than presented as reviewable diffs. 

Security researchers discovered that by manipulating the .vscode/settings.json file, attackers can enable what’s known as “YOLO mode” by adding the configuration line “chat.tools.autoApprove”: true. 

This experimental feature, present by default in standard VS Code installations, disables all user confirmations and grants the AI agent unrestricted access to execute shell commands, browse the web, and perform other privileged operations across Windows, macOS, and Linux systems.

The attack mechanism relies on prompt injection techniques where malicious instructions are embedded in source code files, web pages, GitHub issues, or other content that Copilot processes. 

These instructions can even utilize invisible Unicode characters to remain hidden from developers while still being processed by the AI model. 

The malicious prompt is processed, Copilot automatically modifies the settings file to enable auto-approval mode, immediately escalating its privileges without user knowledge or consent.

Researchers successfully demonstrated conditional prompt injection techniques that can target specific operating systems, allowing attackers to deploy platform-specific payloads. 

Full control of the developer’s host

The vulnerability enables attackers to join compromised developer machines to botnets, creating what researchers term “ZombAIs” – AI-controlled compromised systems that can be remotely commanded.

More concerning is the potential for creating self-propagating AI viruses that can embed malicious instructions in Git repositories and spread as developers download and interact with infected code. 

The vulnerability also allows modification of other critical configuration files, such as .vscode/tasks.json, and the addition of malicious MCP (Model Context Protocol) servers, expanding the attack surface significantly. 

These capabilities open the door for the deployment of malware, ransomware, information stealers, and the establishment of persistent command and control channels.

Risk FactorsDetails
Affected ProductsGitHub Copilot- Visual Studio Code- Microsoft Visual Studio 2022
ImpactRemote Code Execution
Exploit Prerequisites–  User interaction required (UI:R)- Local attack vector (AV:L)- Prompt injection delivery mechanism- Target must process malicious content
CVSS 3.1 Score7.8 (High)

Mitigations

Microsoft assigned this vulnerability a CVSS 3.1 score of 7.8/6.8, classifying it as “Important” severity with the weakness categorized as CWE-77 (Improper Neutralization of Special Elements used in a Command). 

The vulnerability was responsibly disclosed on June 29, 2025, and Microsoft confirmed the issue was already being tracked internally before releasing patches as part of the August 2025 Patch Tuesday update.

The fix addresses the core issue by preventing AI agents from modifying security-relevant configuration files without explicit user approval. 

Microsoft Visual Studio 2022 version 17.14.12 includes the security update that mitigates this vulnerability. 

Security experts recommend that organizations immediately update their development environments and implement additional controls to prevent AI agents from modifying their own configuration settings.

Boost your SOC and help your team protect your business with free top-notch threat intelligence: Request TI Lookup Premium Trial.

Guru Baran

Gurubaran is a co-founder of Cyber Security News and GBHackers On Security. He has 10+ years of experience as a Security Consultant, Editor, and Analyst in cybersecurity, technology, and communications.

Recent Posts

Kali Linux 2025.3 Released With New Features and 10 New Hacking Tools

Kali team has released Kali Linux 2025.3, the third major update of the year for…

16 minutes ago

CISA Details That Hackers Gained Access to a U.S. Federal Agency Network Via GeoServer RCE Vulnerability

CISA has released a comprehensive cybersecurity advisory detailing how threat actors successfully compromised a U.S.…

1 hour ago

Chrome High-severity Vulnerabilities Let Attackers Access Sensitive Data and Crash System

Google has issued an urgent security update for its Chrome web browser to address three…

5 hours ago

Threat Actors Breaking to Enterprise Infrastructure Within 18 Minutes From Initial Access

Cybersecurity professionals are facing an unprecedented acceleration in threat actor capabilities as the average breakout…

7 hours ago

New Malware in npm Package Steals Browser Passwords Using Steganographic QR Code

A sophisticated malware campaign has emerged in the npm ecosystem, utilizing an innovative steganographic technique…

8 hours ago

Zloader Malware Repurposed to Act as Entry Point Into Corporate Environments to Deploy Ransomware

Zloader, a sophisticated Zeus-based modular trojan that first emerged in 2015, has undergone a significant…

8 hours ago