Security researchers have uncovered more than 30 vulnerabilities across major AI-powered IDEs such as Cursor, Windsurf, Copilot, and others—issues that could allow attackers to steal sensitive data, manipulate code, or trigger full remote code execution through weaponized prompt injections.
ecurity researchers have identified over 30 critical vulnerabilities across a wide range of AI-powered Integrated Development Environments (IDEs), exposing developers to the risk of data exfiltration, project sabotage, and remote code execution attacks.
The findings, published by cybersecurity researcher Ari Marzouk (MaccariTA), collectively group the flaws under the name "IDEsaster." The weaknesses affect some of the most widely used AI coding assistants and extensions—Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others.
Of the issues disclosed, 24 have received officially assigned CVE identifiers, highlighting their severity and widespread impact. Speaking to The Hacker News, Marzouk said the scope of the vulnerabilities was far broader than expected: "Multiple universal attack chains affected every AI IDE tested. The most surprising finding is that all AI IDEs completely ignored the base IDE in their threat model."
Traditional IDE features—long considered safe—become attack vectors once an autonomous AI agent is permitted to execute tasks, read files, or modify code without strict guardrails. These vulnerabilities collectively exploit a three-stage attack pattern.
Traditional IDE features—long considered safe—become attack vectors once an autonomous AI agent is permitted to execute tasks, read files, or modify code without strict guardrails. These vulnerabilities collectively exploit a three-stage attack pattern:
Attackers embed malicious instructions inside code comments, documentation files, dependency updates, pull requests, and project configuration files. Once the AI model processes these rogue prompts, it can be instructed to:
Many AI IDE assistants ship with auto-execution features, autonomous "agents," approved read/write operations, and powerful file-system access. These create zero-click attack pathways, meaning the IDE executes dangerous actions automatically without asking for user confirmation.
The third stage leverages features that already exist in traditional IDEs—such as terminal commands, extension APIs, workspace settings, debugging tools, and file watchers. By chaining these with AI automation, attackers can escalate from simple context hijacking to full remote code execution (RCE) inside a developer's environment.
Earlier AI security research focused mainly on prompt injection + malicious tool misuse, such as tricking an AI into reading files or modifying settings. IDEsaster goes further. It proves that even long-standing, trusted IDE features can become deadly when controlled by an AI model.
Examples include triggering build scripts, running shell commands, auto-editing configuration files, and sending logs or project data over the internet. This means attackers don't need zero-days—they can weaponize tools developers already rely on.
Security leaders warn that AI IDEs are often granted more permissions than human developers, making the blast radius of compromise far more severe. As AI assistants become integrated into development workflows, they gain access to sensitive source code, API keys, configuration files, and deployment credentials.
The research highlights several critical risks:
Experts warn that the rush to adopt AI coding tools has outpaced security considerations. Many organizations deploy these tools without proper security reviews, assuming they operate within safe sandboxes. The IDEsaster vulnerabilities reveal that these assumptions are dangerously incorrect.
Experts recommend immediate actions to mitigate these risks:
While vendors are issuing fixes, the research shows the industry needs a fundamental redesign of security boundaries in AI-driven developer tools. This includes better isolation models, permission systems, and auditing capabilities specifically designed for AI-assisted development.
The IDEsaster disclosures highlight a growing truth in cybersecurity: AI-powered coding assistants dramatically increase both development speed and the potential attack surface. As AI agents become more autonomous, more integrated, and more empowered, developers must assume that traditional IDE safety assumptions no longer apply—and that every automated action could be manipulated by crafted prompts or malicious files.
Organizations must balance the productivity benefits of AI coding tools with robust security controls, recognizing that these tools introduce new attack vectors that extend beyond traditional software vulnerabilities into the realm of AI manipulation and supply chain compromise.
The research identified vulnerabilities in the following popular AI development tools with assigned CVE identifiers: