Fake Gemini npm package steals Claude, Cursor, and other AI tool secrets


Researchers found a malicious npm package called gemini-ai-checker that pretended to verify Google Gemini AI tokens, but instead fetched and ran malware tied to the OtterCookie family, which Microsoft recently linked to the wider Contagious Interview campaign.

The package appeared on npm on March 20, 2026 under the gemini-check account. Researchers said its README copied text from an unrelated package, chai-await-async, which should have raised suspicion. They also found that the package reached out to a Vercel-hosted domain to pull a second-stage JavaScript payload.

What makes this case more serious is the target list. The malware did not stop at browser credentials or crypto wallets. It also searched for local data tied to AI coding tools, including Cursor, Claude Code, Gemini CLI, Windsurf, PearAI, and Eigent AI. That gave attackers a path to API keys, local settings, conversation history, and source code.

Why this attack matters for AI developers

This campaign shows how attackers now treat AI developer tools as high-value targets. Microsoft said the Contagious Interview operation has targeted software developers for years through fake hiring flows and malicious packages. In March 2026, Microsoft also said the latest OtterCookie variant, active since October 2025, uses heavier obfuscation to hide strings, URLs, and logic from static analysis.

Cyber and Ramen found that gemini-ai-checker acted as a loader. On install, it reassembled its command-and-control details from separate variables, contacted server-check-genimi.vercel[.]app, and executed the returned code in memory through Function.constructor. That choice left few obvious indicators on disk and made the package harder to inspect with basic tools.
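As a rough illustration of the loader pattern the researchers describe, the harmless sketch below rebuilds a domain from string fragments at runtime and runs a string of source with Function.constructor. The domain and the stand-in payload string here are placeholders, not the actual malicious code, and nothing is fetched over the network.

```javascript
// Illustrative sketch only: the "payload" is a harmless local string,
// and the host below is a placeholder, not the real infrastructure.

// C2 details rebuilt from separate variables, so the full domain never
// appears as a single string in the package source.
const host = ["malicious-host", ".example"].join("");
const c2Url = `https://${host}/payload.js`;

// In the real attack this source arrived from the network. Executing it
// via Function.constructor (the same object as Function itself) keeps the
// second stage in memory only, leaving little for file-based scanners.
const fetchedSource = "return 2 + 2;";
const secondStage = Function.constructor(fetchedSource);
const result = secondStage();

console.log(c2Url, result);
```

Because the second stage exists only as a string passed to Function, basic inspection of the installed package shows nothing beyond an innocuous-looking loader.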

The same npm account also published express-flowlimit and chai-extensions-extras. Researchers said all three packages shared the same Vercel infrastructure and had passed 500 combined downloads by the time of publication. The main gemini-ai-checker package disappeared just before April 1, 2026, but the other two packages remained available when the research went live.

How the malware works

Once the payload ran, researchers said it launched four separate Node.js modules. One handled remote control through Socket.IO. Another stole browser data and crypto wallet contents. A third searched the victim’s home directory for sensitive files and AI tool folders. The fourth watched the clipboard every 500 milliseconds after a short delay meant to dodge sandbox checks.

De-obfuscated code snippet (Source – Cyber and Ramen)

This attack stands out because it treats AI tool folders like .claude, .cursor, and .gemini as sensitive targets. That lines up with official product documentation, which shows these tools store configuration and project-level settings locally. If an attacker steals those files, they may gain tokens, prompts, project instructions, and other useful context.

Developers should treat those directories the same way they treat .ssh, .aws, or password vault files. npm itself says users should report malware, audit packages, and understand the threat model around malicious packages. npm also notes that package scripts can run during install, and its CLI documents the ignore-scripts setting for reducing that risk in some workflows.
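For example, a project-level .npmrc can disable lifecycle scripts by default, using the ignore-scripts option the npm CLI documents. Note that this also blocks legitimate install scripts, so some packages may need manual build steps afterward:

```
# .npmrc — block preinstall/postinstall scripts during `npm install`
ignore-scripts=true
```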

De-obfuscated code snippet (Source – Cyber and Ramen)

Quick breakdown

| Item | What researchers found | Why it matters |
| --- | --- | --- |
| Initial lure | gemini-ai-checker looked like a Gemini token checker | It used a trusted developer channel, npm, to gain execution |
| Delivery | Package contacted a Vercel endpoint for more code | Attackers kept the main payload off disk at first |
| Malware family | OtterCookie variant linked to Contagious Interview | This ties the package to an active, known campaign |
| Main targets | Claude, Cursor, Gemini CLI, Windsurf, PearAI, Eigent AI | AI tooling now sits in the same risk tier as cloud and SSH credentials |
| Extra theft | Browser logins, wallets, documents, clipboard data | A single install could expose personal and enterprise secrets |

What developers and security teams should do now

  • Check whether anyone installed gemini-ai-checker, express-flowlimit, or chai-extensions-extras.
  • Review local AI tool folders for tokens, saved settings, and project secrets that may need rotation.
  • Hunt for suspicious Node.js activity, especially outbound traffic to Vercel or unknown infrastructure. Microsoft published detection guidance and hunting queries for related behavior.
  • Use npm audit where relevant, report malicious packages to npm, and consider ignore-scripts in higher-risk environments.
  • Verify package names, README text, maintainer history, and install scripts before you add new AI-related tools. Researchers said the copied README text was a visible warning sign in this case.

FAQ

What did the fake Gemini npm package actually do?

It posed as a token checker, then downloaded and ran malware that stole credentials, files, clipboard contents, wallet data, and secrets from AI coding tools.

Which AI tools did it target?

Researchers named Cursor, Claude Code, Gemini CLI, Windsurf, PearAI, and Eigent AI.

Is this linked to a known threat campaign?

Yes. Researchers linked the payload to OtterCookie, and Microsoft connected OtterCookie to the broader Contagious Interview campaign.

Why do AI tool folders matter so much?

They can hold local settings, project rules, tokens, prompts, and conversation data. That information can expose source code and help attackers move deeper into company systems.
