Fake Gemini npm package steals Claude, Cursor, and other AI tool secrets
Researchers found a malicious npm package called gemini-ai-checker that pretended to verify Google Gemini AI tokens, but instead fetched and ran malware tied to the OtterCookie family, which Microsoft recently linked to the wider Contagious Interview campaign.
The package appeared on npm on March 20, 2026 under the gemini-check account. Researchers said its README copied text from an unrelated package, chai-await-async, which should have raised suspicion. They also found that the package reached out to a Vercel-hosted domain to pull a second-stage JavaScript payload.
What makes this case more serious is the target list. The malware did not stop at browser credentials or crypto wallets. It also searched for local data tied to AI coding tools, including Cursor, Claude Code, Gemini CLI, Windsurf, PearAI, and Eigent AI. That gave attackers a path to API keys, local settings, conversation history, and source code.
Why this attack matters for AI developers
This campaign shows how attackers now treat AI developer tools as high-value targets. Microsoft said the Contagious Interview operation has targeted software developers for years through fake hiring flows and malicious packages. In March 2026, Microsoft also said the latest OtterCookie variant, active since October 2025, uses heavier obfuscation to hide strings, URLs, and logic from static analysis.
Cyber and Ramen found that gemini-ai-checker acted as a loader. On install, it rebuilt its command and control details from separate variables, contacted server-check-genimi.vercel[.]app, and executed returned code in memory with Function.constructor. That choice reduced obvious indicators on disk and made the package harder to inspect with basic tools.
The same npm account also published express-flowlimit and chai-extensions-extras. Researchers said all three packages shared the same Vercel infrastructure and had surpassed 500 combined downloads at the time of publication. The main gemini-ai-checker package disappeared just before April 1, 2026, but the two other packages remained available when the research went live.
How the malware works
Once the payload ran, researchers said it launched four separate Node.js modules. One handled remote control through Socket.IO. Another stole browser data and crypto wallet contents. A third searched the victim’s home directory for sensitive files and AI tool folders. The fourth watched the clipboard every 500 milliseconds after a short delay meant to dodge sandbox checks.
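The clipboard-watcher behavior described above follows a common pattern: an initial delay to outlast short-lived sandbox analysis, then a fixed-interval poll that reacts only to new clipboard contents. A minimal sketch, with hypothetical names and an illustrative delay value:

```javascript
// One polling step: compare the current clipboard value against the
// last one seen, and hand off anything new.
function pollOnce(state, readClipboard, onNewValue) {
  const value = readClipboard();
  if (value !== state.last) {
    state.last = value;
    onNewValue(value);
  }
  return state;
}

// Wire-up: wait out sandbox analysis windows first, then sample the
// clipboard every 500 ms, matching the interval researchers reported.
function startWatcher(readClipboard, onNewValue, startDelayMs) {
  const state = { last: null };
  setTimeout(() => {
    setInterval(() => pollOnce(state, readClipboard, onNewValue), 500);
  }, startDelayMs);
}

// Demonstration with a stubbed clipboard source:
const seen = [];
pollOnce({ last: null }, () => "copied text", (v) => seen.push(v));
console.log(seen);
```

Defenders can look for exactly this fingerprint: a long startup sleep followed by tight, regular timer activity in a freshly installed package.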

This attack stands out because it treats AI tool folders like .claude, .cursor, and .gemini as sensitive targets. That lines up with official product documentation, which shows these tools store configuration and project-level settings locally. If an attacker steals those files, they may gain tokens, prompts, project instructions, and other useful context.
Developers should treat those directories the same way they treat .ssh, .aws, or password vault files. npm itself says users should report malware, audit packages, and understand the threat model around malicious packages. npm also notes that package scripts can run during install, and its CLI documents the ignore-scripts setting for reducing that risk in some workflows.

Quick breakdown
| Item | What researchers found | Why it matters |
|---|---|---|
| Initial lure | gemini-ai-checker looked like a Gemini token checker | It used a trusted developer channel, npm, to gain execution |
| Delivery | Package contacted a Vercel endpoint for more code | Attackers kept the main payload off disk at first |
| Malware family | OtterCookie variant linked to Contagious Interview | This ties the package to an active, known campaign |
| Main targets | Claude, Cursor, Gemini CLI, Windsurf, PearAI, Eigent AI | AI tooling now sits in the same risk tier as cloud and SSH credentials |
| Extra theft | Browser logins, wallets, documents, clipboard data | A single install could expose personal and enterprise secrets |
What developers and security teams should do now
- Check whether anyone installed gemini-ai-checker, express-flowlimit, or chai-extensions-extras.
- Review local AI tool folders for tokens, saved settings, and project secrets that may need rotation.
- Hunt for suspicious Node.js activity, especially outbound traffic to Vercel or unknown infrastructure. Microsoft published detection guidance and hunting queries for related behavior.
- Use npm audit where relevant, report malicious packages to npm, and consider ignore-scripts in higher-risk environments.
- Verify package names, README text, maintainer history, and install scripts before you add new AI-related tools. Researchers said the copied README text was a visible warning sign in this case.
FAQ

What did the fake package actually do?
It posed as a token checker, then downloaded and ran malware that stole credentials, files, clipboard contents, wallet data, and secrets from AI coding tools.

Which AI tools did the malware target?
Researchers named Cursor, Claude Code, Gemini CLI, Windsurf, PearAI, and Eigent AI.

Is the attack linked to a known campaign?
Yes. Researchers linked the payload to OtterCookie, and Microsoft connected OtterCookie to the broader Contagious Interview campaign.

Why are AI tool folders worth protecting?
They can hold local settings, project rules, tokens, prompts, and conversation data. That information can expose source code and help attackers move deeper into company systems.