Fake OpenAI Privacy Filter Repo on Hugging Face Delivered Windows Infostealer


A malicious Hugging Face repository impersonating OpenAI’s Privacy Filter model was found delivering infostealer malware to Windows users. The fake project used the name Open-OSS/privacy-filter and copied the real model’s presentation to appear trustworthy.

HiddenLayer researchers said the repository had more than 200,000 downloads before Hugging Face removed it. The repository also reached the number one trending position on Hugging Face, which gave it extra visibility among developers and AI users.

The campaign shows how attackers are now abusing AI model hubs as software supply chain targets. A repository can look like a normal model release while hiding scripts that run commands, download payloads, and steal credentials from local machines.

Fake repo copied OpenAI’s real Privacy Filter

OpenAI’s real Privacy Filter is an open-weight model designed to detect and redact personally identifiable information in text. It can run locally and is meant for privacy workflows such as masking names, emails, phone numbers, and other sensitive data.

The fake repository copied the legitimate model card almost word for word. That helped it look like a real OpenAI-adjacent release, especially to users who found it through Hugging Face’s trending pages.

The main difference appeared in the setup instructions. The malicious repo told users to clone the project and run start.bat on Windows or loader.py on Linux and macOS. On Windows, that loader started the malware chain.

Key facts at a glance

Malicious repository: Open-OSS/privacy-filter
Impersonated project: OpenAI Privacy Filter
Platform: Hugging Face
Reported downloads: More than 200,000; about 244,000 reported before removal
Reported likes: 667 in under 18 hours
Main target: Windows machines
Payload type: Rust-based infostealer
Repository status: Removed after HiddenLayer reported it to Hugging Face

How the attack started

The attack started when a user followed the fake setup instructions. The loader.py script first ran decoy code so it looked like a normal model loader.

After that, the script called a function that disabled SSL verification, decoded a base64 URL, fetched a JSON document from jsonkeeper.com, and extracted a command field. That command then ran through PowerShell with execution policy bypassed and no visible window.

This gave the attacker a way to change the next command without editing the Hugging Face repository itself. By using a public JSON paste service, the operator could rotate the payload path while keeping the visible repo unchanged.
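The loader behavior described above (disabled SSL verification, a base64-decoded URL, a remotely fetched command passed to hidden PowerShell) leaves recognizable traces in source code. The following is a minimal, hypothetical static-check sketch in Python; the indicator list and function names are illustrative, not taken from HiddenLayer's report:

```python
import re

# Hypothetical indicator patterns matching the loader traits described above:
# disabled SSL verification, base64-decoded URLs, hidden PowerShell runs,
# and command retrieval from a public JSON paste service.
SUSPICIOUS_PATTERNS = {
    "ssl_disabled": re.compile(r"verify\s*=\s*False|_create_unverified_context"),
    "base64_decode": re.compile(r"b64decode"),
    "hidden_powershell": re.compile(
        r"powershell[^\n]*(-WindowStyle\W+Hidden|-ExecutionPolicy\W+Bypass)",
        re.IGNORECASE,
    ),
    "paste_service": re.compile(r"jsonkeeper\.com"),
}

def scan_loader_source(source: str) -> list[str]:
    """Return the names of suspicious indicators found in loader source text."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(source)]

# Example: a fabricated snippet resembling the reported loader behavior.
sample = (
    "url = base64.b64decode(blob).decode()\n"
    "resp = requests.get(url, verify=False)  # fetch JSON from jsonkeeper.com\n"
    "subprocess.run(['powershell', '-ExecutionPolicy', 'Bypass', "
    "'-WindowStyle', 'Hidden', cmd])\n"
)
print(scan_loader_source(sample))
# → ['ssl_disabled', 'base64_decode', 'hidden_powershell', 'paste_service']
```

A check like this will not catch obfuscated variants, but it flags the exact pattern this campaign used: a "model loader" that never needs to disable certificate checks or spawn a hidden shell for any legitimate reason.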

PowerShell downloaded the second stage

The PowerShell command downloaded a batch file called update.bat from api.eth-fastscan.org. The domain appeared designed to look like a blockchain analytics service.

The batch file checked for administrator access and could trigger a Windows UAC prompt. It then downloaded the final payload, added Microsoft Defender exclusions, generated a runner script, and abused a scheduled task named MicrosoftEdgeUpdateTaskCore to launch the malware.

HiddenLayer said the scheduled task did not create long-term persistence. Instead, it acted as a one-shot launcher that ran the payload with elevated privileges and then deleted itself.

The final payload targeted credentials and wallets

The final payload was a Rust-based infostealer. HiddenLayer said it included anti-analysis checks, debugger detection, virtual machine detection, and attempts to interfere with Windows security visibility.

Once active, the infostealer collected data from Chromium-based browsers, Firefox-derived browsers, Discord, cryptocurrency wallets, browser extensions, FTP tools, SSH-related files, VPN files, and selected local files. It could also capture screenshots.

The stolen data was compressed into a JSON payload and sent to recargapopular.com through a POST request using a Bearer authorization header.

Why the download numbers matter

The repository’s download count and likes appear to have helped it look legitimate. HiddenLayer said Open-OSS/privacy-filter reached about 244,000 downloads and 667 likes in under 18 hours before access was disabled.

Those numbers were probably artificially inflated. HiddenLayer found predictable naming patterns among many of the accounts that liked the repository, suggesting that fake engagement pushed the repo higher in Hugging Face's rankings.

Even if not every download led to an infection, the visibility still created risk. Developers often trust trending repositories more quickly, especially when a project appears to copy a well-known official release.

HiddenLayer also linked the activity to six other Hugging Face repositories uploaded under the same account on April 24, 2026. Those repositories used similar loader.py functionality and the same command-retrieval URL.

The related repositories used AI-themed names referencing projects such as Bonsai, Qwen, DeepSeek, Claude, and Gemma. That pattern suggests the attacker was not relying on a single fake project.

The broader goal appears to be supply chain access through open-source AI workflows. Developers downloading models, loaders, scripts, or helper files from public hubs may run code before reviewing it carefully.

What affected users should do

  • Disconnect any machine that ran files from Open-OSS/privacy-filter.
  • Reimage the affected Windows host before returning it to production use.
  • Rotate saved browser passwords, session cookies, OAuth tokens, SSH keys, and FTP credentials.
  • Move cryptocurrency funds to a new wallet created on a clean device.
  • Revoke cloud provider tokens and developer API keys stored on the machine.
  • Invalidate Discord sessions and reset Discord passwords.
  • Check network logs for traffic to api.eth-fastscan.org, jsonkeeper.com, and recargapopular.com.
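The last step above can be scripted against exported logs. A hedged sketch in Python follows; the domain list comes from this article, while the log format and the helper name are assumptions:

```python
import re

# Network indicators reported for this campaign (from the article above).
IOC_DOMAINS = [
    "api.eth-fastscan.org",
    "jsonkeeper.com",
    "recargapopular.com",
]

def find_ioc_hits(log_text: str) -> dict[str, list[str]]:
    """Map each IOC domain to the log lines that mention it."""
    hits: dict[str, list[str]] = {}
    for domain in IOC_DOMAINS:
        pattern = re.compile(re.escape(domain), re.IGNORECASE)
        matched = [line for line in log_text.splitlines() if pattern.search(line)]
        if matched:
            hits[domain] = matched
    return hits

# Example with a fabricated proxy-log excerpt (format is hypothetical).
sample_log = (
    "2026-04-24T10:01:02 GET https://jsonkeeper.com/b/XXXX 200\n"
    "2026-04-24T10:01:05 GET https://api.eth-fastscan.org/update.bat 200\n"
    "2026-04-24T10:02:10 GET https://huggingface.co/models 200\n"
)
print(sorted(find_ioc_hits(sample_log)))
# → ['api.eth-fastscan.org', 'jsonkeeper.com']
```

Any hit on these domains is a strong signal that a host ran the loader and should be treated as compromised, not merely suspicious.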

How developers can avoid fake AI repositories

Developers should verify the publisher before downloading AI models, scripts, or installers. A similar name does not prove that a repository belongs to the original vendor.

They should also avoid running setup scripts directly from trending repositories without reading them first. A model card can look clean while a loader file executes hidden commands in the background.

Organizations using public AI repositories should add scanning and approval steps for model files, Python loaders, notebooks, batch files, and PowerShell scripts. In AI supply chains, the dangerous code often sits around the model rather than inside the model weights.
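One lightweight approval step is to enumerate every executable script in a cloned repository before anything is run. A minimal sketch, assuming the repo has already been cloned locally; the extension set and function name are illustrative choices:

```python
from pathlib import Path

# File types that can execute code during a model repo's "setup" step.
EXECUTABLE_SUFFIXES = {".py", ".bat", ".ps1", ".sh", ".ipynb"}

def list_executable_files(repo_dir: str) -> list[str]:
    """Return relative paths of files in the repo that warrant manual review."""
    root = Path(repo_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in EXECUTABLE_SUFFIXES
    )

# Example against a throwaway directory mimicking the fake repo's layout.
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    for name in ("README.md", "loader.py", "start.bat", "model.safetensors"):
        Path(tmp, name).touch()
    print(list_executable_files(tmp))  # → ['loader.py', 'start.bat']
```

In this campaign the model card looked legitimate; only loader.py and start.bat carried the malicious chain. Reviewing exactly those file types before execution is where this attack would have been caught.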

FAQ

What was Open-OSS/privacy-filter?

Open-OSS/privacy-filter was a malicious Hugging Face repository that impersonated OpenAI’s real Privacy Filter model and delivered malware to Windows machines.

What did the malware steal?

The infostealer targeted browser data, saved credentials, Discord data, crypto wallets, wallet seed files, SSH files, VPN files, FTP credentials, screenshots, and system information.

Was the real OpenAI Privacy Filter compromised?

No. The malicious repository copied the real project’s model card and branding style, but the legitimate OpenAI Privacy Filter project was separate.

How many downloads did the fake repository get?

HiddenLayer reported that the fake repository had more than 200,000 downloads. It also reached about 244,000 downloads and 667 likes before removal.

Summary

  1. A fake Hugging Face repository impersonated OpenAI’s Privacy Filter model.
  2. The repository reached the trending list and drew more than 200,000 downloads.
  3. The malicious loader used PowerShell to download a Rust-based infostealer on Windows.
  4. The payload targeted browser credentials, crypto wallets, Discord data, SSH files, FTP data, and screenshots.
  5. Developers should verify AI repositories before running loaders, scripts, or batch files.