
Real CVEs Caused by AI-Generated Code in 2025

AliceSec Team
7 min read

2025 was the year AI coding tools went mainstream—and the year their security flaws became impossible to ignore. With over 30 vulnerabilities disclosed in December alone and almost 70% of developers reporting AI-introduced vulnerabilities, the risks of trusting AI-generated code are now backed by hard evidence.

This article catalogs the most significant CVEs and security incidents involving AI coding tools in 2025—and what they teach us about the future of secure development.

The IDEsaster: 30+ Vulnerabilities, 24 CVEs

In December 2025, security researcher Ari Marzouk (MaccariTA) disclosed over 30 vulnerabilities affecting virtually every major AI coding tool on the market. The attack chain, dubbed "IDEsaster," demonstrated that 100% of tested AI IDEs were vulnerable to a novel three-stage attack:

Stage 1: Prompt Injection

Attackers inject malicious instructions through:

  • Rule files (.cursorrules, .github/copilot-instructions.md)
  • MCP server configurations
  • Deeplinks
  • Maliciously named files

Stage 2: Tool Exploitation

The AI agent executes attacker-controlled actions using its built-in tools.

Stage 3: IDE Feature Abuse

Attackers leverage IDE features (file editing, terminal access) for full system compromise.
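
A practical first step against Stage 1 is auditing a repository for these injection vectors before pointing an agent at it. A minimal sketch in shell (file names taken from the list above; mcp.json is a common but not universal MCP config name, so adjust for the tools you actually use):

bash
# Surface rule files and MCP configs that an AI agent will ingest, so they can
# be reviewed for injected instructions before the project is opened.
find . -maxdepth 3 \
  \( -name ".cursorrules" \
     -o -path "*/.github/copilot-instructions.md" \
     -o -name "mcp.json" \) \
  -print -exec cat {} \;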

Affected Tools and CVEs

| Tool | CVEs Assigned | Status |
| --- | --- | --- |
| GitHub Copilot | CVE-2025-53773, CVE-2025-64660 | Patched |
| Cursor | CVE-2025-49150, CVE-2025-54130, CVE-2025-61590 | Patched |
| JetBrains Junie | CVE-2025-58335 | Patched |
| Roo Code | CVE-2025-53097 | Patched |
| Windsurf | Multiple | Patched |
| Zed.dev | Multiple | Patched |
| Cline | Multiple | Patched |
| Claude Code | N/A | Documentation fix |

Claude Code notably addressed risks through security documentation rather than code changes, opting for user education over technical patches.

CVE-2025-53773: GitHub Copilot's YOLO Mode Disaster

The most severe vulnerability in the IDEsaster disclosure was CVE-2025-53773, affecting GitHub Copilot and Visual Studio.

Technical Details

  • CVSS Score: 7.8 (HIGH)
  • CWE: CWE-77 (Command Injection)
  • Disclosed: August 12, 2025
  • Patched: August 2025 Patch Tuesday

Attack Mechanism

The vulnerability exploited Copilot's ability to modify project files without user approval. Security researchers demonstrated that attackers could inject malicious prompts into source code files, web pages, or GitHub issues that would manipulate Copilot into:

  1. Adding "chat.tools.autoApprove": true to .vscode/settings.json
  2. Effectively placing Copilot into "YOLO mode" (no human approval required)
  3. Executing arbitrary code with full system access
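
For illustration, the setting injected in step 1 would look something like this once written into the workspace configuration (a sketch of the end state, not the actual exploit payload):

json
// .vscode/settings.json after a successful injection (illustrative)
{
  "chat.tools.autoApprove": true
}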

Impact Potential

According to GBHackers analysis, successful exploitation could enable:

  • Remote code execution on developer machines
  • AI virus propagation through infected repositories
  • Automatic backdoor embedding in new projects
  • "ZombAI" botnets recruiting developer workstations

Timeline

  • June 29, 2025: Vulnerability reported to Microsoft
  • August 2025: Patch released in Patch Tuesday update
  • August 12, 2025: Public disclosure

CVE-2025-8217: The Amazon Q Sabotage

Perhaps the most audacious attack of 2025 targeted Amazon's Q Developer extension for VS Code. Unlike the other incidents in this list, this was not an accidentally introduced flaw but deliberate sabotage designed to expose AWS's security practices.

What Happened

On July 13, 2025, an attacker submitted a pull request to the aws-toolkit-vscode GitHub repository. Due to an inappropriately scoped GitHub token in AWS CodeBuild configurations, the attacker gained admin access and injected malicious code.

The Payload

The injected code instructed Amazon Q to:

bash
# Simplified representation of the malicious instructions
# 1. Delete all non-hidden files from user's home directory
# 2. Discover and use AWS profiles
# 3. List and delete cloud resources using AWS CLI
# 4. Execute with --trust-all-tools and --no-interactive flags

Why It Didn't Cause Damage

The attack was designed to fail. AWS confirmed that a syntax error in the malicious code prevented execution. The attacker later told 404 Media their goal was to "expose their 'AI' security theater" by planting "a wiper designed to be defective."

Timeline

  • July 13, 2025: Malicious PR submitted
  • July 17, 2025: Version 1.84.0 released with malicious code
  • July 19, 2025: Bad commit merged
  • July 21, 2025: Version 1.85.0 released with fix
  • July 23, 2025: Public disclosure

Criticism

Amazon drew criticism for initially hiding the incident. Version 1.84.0 was silently removed from the VS Code Marketplace, and the 1.85.0 changelog simply stated: "Miscellaneous non-user-facing changes."

CVE-2025-68664: LangGrinch Threatens AI Agents

The LangGrinch vulnerability in langchain-core represents a new category of risk: vulnerabilities in the frameworks used to build AI applications.

Technical Details

  • CVSS Score: 9.3 (CRITICAL)
  • Component: langchain-core (LangChain's foundation library)
  • Attack Type: Serialization/deserialization injection

Attack Mechanism

Attackers could exploit prompt injection to steer AI agents into generating crafted structured outputs containing LangChain's internal marker key ("lc"). Because langchain-core treats "lc"-tagged structures as its own serialized objects and deserializes them, a crafted output can smuggle attacker-controlled objects into the application. This could lead to:

  • Exfiltration of sensitive secrets
  • Remote code execution on servers running LangChain agents
  • Compromise of entire AI agent infrastructures

Patched Versions

  • langchain-core >= 1.2.5 (1.x release line)
  • langchain-core >= 0.3.81 (0.3.x release line)
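
Upgrading is straightforward in a pip-managed project; a minimal example, pinned to whichever release line you are already on:

bash
# 0.3.x release line
pip install --upgrade "langchain-core>=0.3.81,<1"

# 1.x release line
pip install --upgrade "langchain-core>=1.2.5"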

CVE-2025-61260: OpenAI Codex CLI Command Injection

The OpenAI Codex CLI vulnerability demonstrated risks in trusting MCP server configurations.

Attack Mechanism

The vulnerability exploited the fact that Codex CLI implicitly trusts commands configured via MCP server entries, executing them at startup without user permission. An attacker could craft a malicious MCP configuration that executes arbitrary commands when a developer opens a project.
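
Before opening an untrusted project, it is worth reviewing whatever MCP server entries the tool will load. Exact locations vary by tool and version, so treat the paths below as common examples rather than an authoritative list:

bash
# Review MCP server definitions before trusting them (paths are illustrative
# and may differ in your setup).
cat ~/.codex/config.toml 2>/dev/null   # Codex CLI global configuration
cat .cursor/mcp.json 2>/dev/null       # Cursor project-level MCP servers
cat .vscode/mcp.json 2>/dev/null       # VS Code project-level MCP servers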

Implications

This highlights a systemic issue: AI coding tools that integrate with external services (MCP servers, APIs, plugins) inherit the trust assumptions of those integrations.

By the Numbers: AI Code Vulnerabilities in 2025

The CVE disclosures were dramatic, but the underlying statistics are equally alarming.

Vulnerability Rates

According to Veracode's 2025 research:

  • 45% of AI-generated code contains security flaws
  • 55% overall secure code rate across 100+ LLMs
  • 72% security failure rate for Java specifically
  • 86% failure rate for XSS prevention (CWE-80)

Real-World Impact

Apiiro's June 2025 analysis found:

  • 10,000+ new security findings per month from AI-generated code
  • 10x increase in security issues over 6 months
  • 322% increase in privilege escalation paths
  • 153% increase in architectural design flaws

Comparative Vulnerability Rates

AI-generated code compared to human-written code:

| Vulnerability Type | AI Code More Likely By |
| --- | --- |
| XSS (CWE-79) | 2.74x |
| Insecure Object Reference | 1.91x |
| Improper Password Handling | 1.88x |
| Insecure Deserialization | 1.82x |

State-Sponsored AI Exploitation

In September 2025, Anthropic detected a sophisticated espionage campaign, disclosed publicly that November, in which Chinese state-sponsored attackers used Claude Code's agentic capabilities to attempt infiltration of approximately 30 global targets.

This incident marked a shift: AI tools weren't just generating vulnerable code—they were being weaponized for active attacks.

Lessons Learned

1. Trust No AI Output

The IDEsaster disclosure proved that every major AI coding tool can be compromised through prompt injection. Treat all AI-generated code as untrusted input requiring validation.
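
In practice, that means running the same automated checks on AI output that you would run on any third-party contribution. One minimal option, assuming a SAST scanner such as Semgrep is available:

bash
# Scan the working tree with community rules before merging AI-generated changes
semgrep scan --config auto .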

2. Configuration Files Are Attack Vectors

CVE-2025-53773 and CVE-2025-61260 both exploited configuration file modifications. Review any AI-suggested changes to:

  • .vscode/settings.json
  • .cursorrules
  • MCP configurations
  • IDE workspace files
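
Before accepting an AI-suggested edit, a targeted diff of exactly these files makes unexpected changes easy to spot. A small sketch; add whichever workspace files your tools actually read:

bash
# Show pending changes to high-risk configuration files only
git diff HEAD -- .vscode/settings.json .vscode/mcp.json .cursorrules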

3. Supply Chain Security Extends to AI Tools

The Amazon Q incident shows that AI tool supply chains are targets. Verify extension updates and monitor for suspicious behavior.

4. Framework Vulnerabilities Amplify Risk

LangGrinch affected every application built on langchain-core. When using AI frameworks, stay current on security advisories.
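
Dependency audit tools will surface advisories like this one automatically. For example, assuming pip-audit (Python) or npm (JavaScript) is installed:

bash
# Python: check installed packages against known vulnerability advisories
pip-audit

# JavaScript: audit dependencies in the lockfile
npm audit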

5. "Documentation Fix" Is Not a Patch

Claude Code's approach of addressing risks through documentation rather than technical controls is insufficient for high-risk environments. Prefer tools with defense-in-depth architectures.

Protecting Your Development Environment

Immediate Actions

bash
# Check for vulnerable versions
npm list @langchain/core    # JavaScript projects (@langchain/core)
pip show langchain-core     # Python projects

# Update VS Code extensions
code --list-extensions --show-versions

# Review AI tool configurations
cat .vscode/settings.json | grep -i "autoApprove"
cat .cursorrules

Configuration Hardening

json
// .vscode/settings.json - Disable auto-approval
{
  "chat.tools.autoApprove": false,
  "github.copilot.advanced": {
    "experimentalFeatures": false
  }
}

Monitoring Recommendations

  1. Alert on configuration file changes in AI-enabled projects
  2. Review MCP server additions before trusting
  3. Audit AI tool permissions regularly
  4. Monitor for unusual CLI activity from AI extensions
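
For the first item, a lightweight option on Linux is watching the relevant files with inotify-tools. A minimal sketch, assuming inotify-tools is installed and the listed paths exist:

bash
# Print an event whenever AI-related configuration files change
inotifywait -m -e modify,create,delete \
  .vscode/settings.json .vscode/mcp.json .cursorrules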

What's Next

The CVEs of 2025 established a new threat model for AI-assisted development. Expect 2026 to bring:

  • Mandatory sandboxing in enterprise AI tools
  • AI-specific SAST rules in security scanners
  • Supply chain attestation for AI extensions
  • Regulatory guidance on AI code security

The era of blindly trusting AI-generated code is over. The CVEs are the proof.

Practice Identifying AI Vulnerabilities

Understanding how these vulnerabilities work is essential for modern developers. Practice identifying SQL injection, XSS, and other common AI-generated flaws in our interactive security challenges.

---

This article will be updated as new CVEs are disclosed. Last updated: December 2025.
