Hackers exploited GitHub Copilot flaw to exfiltrate sensitive data from private repositories


A high-severity flaw in GitHub Copilot Chat let attackers steal sensitive data from private repositories by abusing the assistant’s access to pull requests and repo content. The issue, tracked as CVE-2025-59145, carried a CVSS score of 9.6 and could expose source code, API keys, tokens, and other secrets without tricking the victim into running malware. GitHub patched the issue in August 2025 by disabling image rendering in Copilot Chat.

The attack became known as CamoLeak. Security researcher Omer Mayraz disclosed it publicly in October 2025 after reporting it through HackerOne. His write-up showed that the bug combined remote prompt injection with a clever bypass of GitHub’s content security protections.

The case matters because it shows how an AI assistant can become a data exfiltration tool when it reads untrusted content and also has access to private context. GitHub’s own security research has warned that poisoned chat context can expose confidential files, tokens, or trigger unintended actions when prompt injection slips into the instruction stream.

How the CamoLeak attack worked

The first step was simple. An attacker placed hidden instructions inside a pull request description by using GitHub’s Markdown comment syntax. Human reviewers could not see those comments in the normal interface, but Copilot still ingested the raw text when asked to review or explain the pull request.

Once Copilot read the poisoned pull request, the injected prompt could tell it to search the private codebase for sensitive data. Mayraz showed that Copilot could follow those instructions and prepare the stolen information for exfiltration, including secrets such as AWS keys and private source code.

The final stage used GitHub’s own Camo image proxy. According to the researcher, the attack encoded stolen data into image requests that passed through trusted GitHub infrastructure, which helped the traffic blend in with normal page activity. GitHub’s fix blocked this specific exfiltration route by disabling image rendering in Copilot Chat.

Why this flaw stood out

Many AI prompt injection bugs cause bad answers, unsafe suggestions, or misleading summaries. CamoLeak went further because it created a silent data theft path from a private repository to an attacker-controlled destination. Dark Reading noted that even a small leak could expose passwords, private keys, and similar high-value secrets.

The exploit also showed that trusted infrastructure can become part of the attack chain. Security policies often block direct outbound requests to suspicious domains, but this technique routed traffic through GitHub’s own image delivery path, which made ordinary network defenses less useful against this specific method.

This is also part of a bigger pattern. GitHub’s August 2025 security blog warned that indirect prompt injection in coding assistants can expose confidential files, GitHub tokens, or even lead to sensitive actions when external content enters the chat context. That warning covered VS Code and agent-style workflows more broadly, but the principle fits this case closely.

CamoLeak at a glance

ItemDetails
VulnerabilityCVE-2025-59145
SeverityCVSS 9.6
Affected productGitHub Copilot Chat
Public nameCamoLeak
ResearcherOmer Mayraz
Public disclosureOctober 8, 2025
GitHub fixDisabled image rendering in Copilot Chat
Main impactTheft of source code and secrets from private repos

What security teams should take from this

This incident reinforces a basic rule for AI-assisted development. Treat untrusted pull requests, issues, comments, and documents as potential instruction sources, not just passive content. If an AI assistant can read them and also access internal code or tools, the attack surface grows fast.

Organizations should also review how much context their coding assistants can access by default. GitHub’s own security research says user confirmations, tighter tool controls, workspace boundaries, and sandboxed environments such as containers or Codespaces can reduce the blast radius of prompt injection.

The larger lesson is not limited to GitHub. The exact exfiltration path in CamoLeak was GitHub-specific, but the core problem applies to other AI assistants that can read sensitive internal data and respond to attacker-influenced content. That includes enterprise copilots tied to code, files, email, and documents.

Defensive steps worth prioritizing

  • Limit the amount of repository and workspace context available to AI assistants.
  • Treat public pull requests and external content as untrusted input.
  • Require user confirmation for sensitive tool actions where possible.
  • Use sandboxed development environments for high-risk review tasks.
  • Monitor for unusual outbound requests and secret exposure patterns.
  • Review secret management so exposed keys can be rotated quickly.

These measures align with GitHub’s broader guidance on reducing prompt injection risk in AI-assisted development environments.

FAQ

What was CVE-2025-59145?

It was a GitHub Copilot Chat vulnerability that allowed attackers to abuse prompt injection and GitHub’s image proxy flow to exfiltrate sensitive data from private repositories.

Did attackers need malware or code execution?

No. The published research says the attack did not require the victim to execute malicious code. It abused Copilot’s trusted access to repository context and the browser’s rendering flow.

What data could be exposed?

The researcher demonstrated theft of private source code, API keys, AWS secrets, and similar repository data that Copilot could access during review.

Has GitHub fixed the issue?

Yes. GitHub patched the bug in August 2025 by disabling image rendering in Copilot Chat, according to the public disclosure.

Readers help support VPNCentral. We may get a commission if you buy through our links. Tooltip Icon

Read our disclosure page to find out how can you help VPNCentral sustain the editorial team Read more

User forum

0 messages