A compromised version of the Mistral AI Python SDK was published to PyPI with malicious code that ran when developers imported the package on Linux systems.

The affected release is mistralai 2.4.6. Mistral says the package was uploaded around May 12 at 00:05 UTC and that the PyPI project was later quarantined. According to the advisory, earlier PyPI versions are not affected.

The attack targeted developer environments by turning a trusted AI package into an import-time downloader. Once triggered, the malicious code downloaded a second-stage payload named transformers.pyz and executed it in the background.

What happened

The compromise affected the mistralai package on PyPI, which developers use to connect Python applications to Mistral AI services.

According to Mistral’s security advisory, the malicious code was injected into src/mistralai/client/__init__.py. That file runs when the package gets imported, which gave the attackers a direct execution point inside developer workflows.

The code downloaded a file from a hardcoded IP address and saved it as /tmp/transformers.pyz. It then launched the file as a detached background process, making the activity harder to notice during normal development work.

At a glance

| Item | Details |
| --- | --- |
| Affected package | mistralai on PyPI |
| Affected version | 2.4.6 |
| Target platform | Linux systems |
| Execution trigger | Importing the compromised package |
| Dropped payload | /tmp/transformers.pyz |
| Main risk | Credential theft, cloud secret exposure, CI/CD compromise, and persistence |

Why this compromise is serious

Software supply chain attacks are dangerous because they hide inside tools that developers already trust. In this case, the malicious code did not need a separate phishing email or a suspicious attachment.

A developer could trigger the malware by installing the affected version and importing the package in a Linux environment. That makes CI/CD runners, development servers, cloud build machines, and AI application environments important areas to review.

The payload name also helped the attack blend into AI workflows. The file transformers.pyz looks similar to Hugging Face Transformers, a widely used machine learning library, which could reduce suspicion during a quick manual review.

How the malicious code worked

The injected code first checked whether the system was running Linux. It also used the MISTRAL_INIT environment variable as a guard to avoid repeated execution in the same process context.

If the conditions matched, the code used curl to download the second-stage payload from 83.142.209.194. It saved the file in the Linux temporary directory and started it as a background Python process.

The code also suppressed visible errors. That means a failed download, blocked connection, or execution issue could happen without alerting the developer inside the running application.
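The reported trigger logic can be illustrated with a defanged sketch that reproduces only the guard checks. It performs no download and starts no process; everything beyond the Linux check and the MISTRAL_INIT variable is illustrative.

```python
import os
import platform

def would_trigger() -> bool:
    """Defanged sketch of the reported import-time guard checks.

    The real payload, at this point, shelled out to curl, saved the
    download to /tmp, and launched it as a detached background process.
    This sketch performs no network or process activity.
    """
    # Reported check 1: only run on Linux systems.
    if platform.system() != "Linux":
        return False
    # Reported check 2: the MISTRAL_INIT environment variable acted as a
    # guard against repeated execution in the same process context.
    if os.environ.get("MISTRAL_INIT") == "1":
        return False
    os.environ["MISTRAL_INIT"] = "1"
    return True
```

Because the guard is an environment variable, it also persists into child processes, which is consistent with the one-shot behavior described above.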

Indicators of compromise

| Type | Indicator | Description |
| --- | --- | --- |
| Package | mistralai==2.4.6 | Compromised PyPI package version |
| IP address | 83[.]142[.]209[.]194 | Remote payload host used by the malicious PyPI code |
| URL | hxxps://83[.]142[.]209[.]194/transformers.pyz | Second-stage payload download location |
| File path | /tmp/transformers.pyz | Downloaded payload location on Linux systems |
| File | src/mistralai/client/__init__.py | Package file that contained the injected import-time code |
| Environment variable | MISTRAL_INIT=1 | Execution guard used by the malicious code |
| Service | pgsql-monitor.service | Reported persistence artifact linked to the PyPI payload |
| File | pgmonitor.py | Reported malicious file used for persistence |
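On a single host, the file-based indicators above can be checked with a short script. This is a minimal sketch: the payload path and guard variable come from the indicator list, while the systemd unit location is an assumed default, since only the unit name was reported.

```python
import os
from pathlib import Path

# Indicators taken from the list above. The systemd unit path is an
# assumed default install location; only the unit name was reported.
IOC_PATHS = [
    Path("/tmp/transformers.pyz"),
    Path("/etc/systemd/system/pgsql-monitor.service"),
]

def check_file_iocs() -> list[str]:
    """Return the file indicators (and guard variable) present on this host."""
    hits = [str(p) for p in IOC_PATHS if p.exists()]
    # The MISTRAL_INIT guard may still be set in a long-lived process tree.
    if os.environ.get("MISTRAL_INIT") == "1":
        hits.append("MISTRAL_INIT=1 set in environment")
    return hits
```

An empty result from one host is not a clean bill of health; the same checks should run on every machine that installed the affected release.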

What the payload tried to steal

Mistral says the malicious PyPI package spawned a background process to harvest credentials from common locations. JFrog’s analysis also described the updated PyPI payload as a credential stealer targeting local files, cloud providers, Kubernetes, Vault, password managers, and developer tooling secrets.

This type of access can create a much larger breach than a single infected workstation. Developer machines and build systems often hold GitHub tokens, API keys, SSH keys, cloud credentials, package registry tokens, and deployment secrets.

If those secrets get stolen, attackers may be able to access private repositories, cloud environments, container registries, deployment pipelines, or internal tools.

Connection to the wider Mini Shai-Hulud campaign

The mistralai PyPI incident appeared during a broader supply chain campaign that affected npm and PyPI packages across several developer ecosystems.

Mistral says it was impacted by a supply chain attack related to the TanStack security incident. It also says an automated worm associated with the attack led to compromised npm and PyPI package versions being published.

Security researchers tracking the wider campaign have described worm-like behavior, credential theft, package republishing, and attacks against developer and CI/CD environments.

What Mistral says about the incident

Mistral says its current investigation indicates that an affected developer device was involved. The company says it has no indication that Mistral infrastructure was compromised.

The official advisory also says the compromised npm packages were removed by the registry and that the compromised PyPI release was quarantined. The PyPI advisory covers mistralai 2.4.6.

Developers should still check private package mirrors, build caches, deployment images, container base images, and lockfiles. A removed or quarantined package can still remain inside internal environments.
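A quarantined release can survive in local pip caches, so listing cached artifacts is worth automating. A minimal sketch, assuming pip 20.1+ is available in the active interpreter; it only reports file names and deletes nothing.

```python
import subprocess
import sys
from pathlib import Path

def find_cached_mistralai() -> list[str]:
    """List files in the local pip cache whose names mention mistralai."""
    # Ask pip where its cache lives (supported since pip 20.1).
    result = subprocess.run(
        [sys.executable, "-m", "pip", "cache", "dir"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return []  # cache disabled or pip too old
    cache_dir = Path(result.stdout.strip())
    if not cache_dir.is_dir():
        return []
    # Cached wheels live in a hashed directory layout, so match on names.
    return [str(p) for p in cache_dir.rglob("*") if "mistralai" in p.name.lower()]
```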

What developers should do now

Teams that installed or imported mistralai 2.4.6 should treat affected Linux systems as potentially compromised until reviewed.

The safest response is to isolate affected developer machines and CI/CD runners before rotating credentials. This reduces the chance that active malware can capture new secrets during cleanup.

Security teams should then remove malicious artifacts, rebuild affected systems where needed, and rotate every secret that may have been accessible from those environments.

  • Check dependency files and lockfiles for mistralai 2.4.6.
  • Search Linux systems for /tmp/transformers.pyz.
  • Look for processes running python /tmp/transformers.pyz.
  • Check for outbound connections to 83[.]142[.]209[.]194.
  • Look for pgsql-monitor.service and pgmonitor.py.
  • Rotate GitHub tokens, cloud keys, SSH keys, API keys, CI/CD secrets, and package registry tokens.
  • Rebuild affected CI/CD runners from clean images.
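The first three checks in the list can be scripted. A minimal sketch assuming common dependency file names; adjust the patterns to your repository layout.

```python
import re
from pathlib import Path

# Matches pins such as mistralai==2.4.6 in requirements-style files.
BAD_SPEC = re.compile(r"mistralai\s*==\s*2\.4\.6\b")

def scan_dependency_files(root: str) -> list[str]:
    """Return dependency files under `root` that reference mistralai 2.4.6."""
    patterns = ("requirements*.txt", "poetry.lock", "uv.lock",
                "Pipfile.lock", "pyproject.toml")
    hits = []
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            # Cover both requirements pins and lockfile name/version pairs.
            if BAD_SPEC.search(text) or ("mistralai" in text and '"2.4.6"' in text):
                hits.append(str(path))
    return hits

def find_payload_processes() -> list[str]:
    """Return PIDs of processes whose command line mentions transformers.pyz."""
    proc = Path("/proc")
    if not proc.is_dir():
        return []  # non-Linux host
    pids = []
    for entry in proc.iterdir():
        if not entry.name.isdigit():
            continue
        try:
            cmdline = (entry / "cmdline").read_bytes()
        except OSError:
            continue
        if b"transformers.pyz" in cmdline:
            pids.append(entry.name)
    return pids
```

Any hit from either function should trigger the isolate-then-rotate sequence described above rather than an immediate in-place cleanup.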

Why AI developer packages are high-value targets

AI development environments often connect to cloud services, model APIs, data stores, source repositories, and deployment systems. That makes them attractive targets for supply chain attackers.

A single compromised SDK can reach many projects quickly if teams automatically update dependencies or rebuild containers from cached package sources.

This incident shows why developers should pin versions, review sudden new releases, use dependency cooldown policies, monitor package behavior during install and import, and restrict secrets available to local development and CI jobs.
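One lightweight way to apply version pinning defensively is a CI guard that fails fast when a known-bad release is installed. A sketch; the denylist here contains only the version from this advisory, and teams would extend it from their own advisory feeds.

```python
from importlib import metadata

# Known-compromised releases; extend this from your advisory feed.
DENYLIST = {"mistralai": {"2.4.6"}}

def assert_no_denylisted(denylist=DENYLIST) -> None:
    """Raise RuntimeError if an installed package matches a denylisted version."""
    for package, bad_versions in denylist.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            continue  # package not installed; nothing to check
        if installed in bad_versions:
            raise RuntimeError(
                f"{package}=={installed} is a known-compromised release"
            )
```

Running this as an early CI step turns a compromised dependency into a loud build failure instead of a silent import.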

FAQ

Which Mistral AI PyPI package version was compromised?

The affected PyPI package version is mistralai 2.4.6. Mistral says earlier PyPI versions are not affected.

How did the malicious mistralai package run?

The malicious code was injected into src/mistralai/client/__init__.py and ran when the package was imported on Linux systems.

What file did the malware download?

The malicious code downloaded a second-stage payload from 83.142.209.194 and saved it as /tmp/transformers.pyz.

What should affected developers rotate?

Affected developers should rotate GitHub tokens, cloud credentials, SSH keys, API keys, package registry tokens, CI/CD secrets, and any other credentials stored on exposed systems.

Was Mistral infrastructure compromised?

Mistral says its current investigation found no indication that Mistral infrastructure was compromised. The company says an affected developer device appears to have been involved.
