Hugging Face LeRobot flaw enables unauthenticated remote code execution


A critical vulnerability in Hugging Face LeRobot can let unauthenticated attackers run commands on systems that expose the framework’s async inference service to a network.

The flaw is tracked as CVE-2026-25874 and affects LeRobot versions through 0.5.1. VulnCheck rates it as critical with a 9.3 CVSS v4 score, while NVD lists a 9.8 CVSS v3.1 score.

The issue comes from unsafe Python pickle deserialization inside LeRobot’s async inference pipeline. If an attacker can reach the gRPC service, they can send a crafted serialized payload and execute arbitrary code on the server or client.

What happened

The vulnerability sits in LeRobot’s async inference components, mainly policy_server.py and robot_client.py. These components help offload robot policy computation to a separate GPU server.

The problem is that the service receives raw bytes over gRPC and passes them into pickle.loads(). Python pickle can execute code during deserialization, so it should never process data from untrusted sources.

The gRPC channel also uses insecure communication without TLS or authentication. That means a network-reachable attacker can connect directly to the service if it is exposed beyond a trusted local environment.

At a glance

  • CVE: CVE-2026-25874
  • Affected project: Hugging Face LeRobot
  • Affected versions: LeRobot through 0.5.1
  • Vulnerability type: Unsafe deserialization of untrusted data
  • CWE: CWE-502
  • CVSS score: 9.3 CVSS v4 from VulnCheck, 9.8 CVSS v3.1 from NVD
  • Main impact: Unauthenticated remote code execution
  • Affected components: PolicyServer and RobotClient
  • Network protocol: gRPC over insecure channels
  • Fix status: Fix planned for version 0.6.0

How the vulnerability works

LeRobot’s async inference mode lets a robot client send observations to a server that performs heavy policy inference, often on a GPU machine. The server then returns actions for the robot to perform.

To exchange data, the implementation uses protobuf messages with raw bytes fields. Those bytes contain Python objects serialized with pickle.

When the server or client receives that data, it calls pickle.loads() before proper validation. A malicious pickle payload can run code during that loading process.

Attack paths

  • SendPolicyInstructions (payload in PolicySetup.data): can trigger code execution on the policy server.
  • SendObservations (payload in Observation.data): can trigger code execution after the server reassembles observation data.
  • GetActions (payload in Actions.data): can trigger code execution on the robot client if a malicious server responds.

Why pickle deserialization is dangerous

Pickle is convenient for Python developers, but it was not designed for untrusted network input. A crafted pickle object can call system functions as soon as the application deserializes it.

That is what makes the bug so severe: the payload executes before LeRobot ever checks whether it received a valid policy configuration or observation object, so no validation step can catch it.
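The mechanism fits in a few lines of standard Python. This is a harmless sketch: the recorder function below stands in for the payload a real attacker would use, such as os.system or subprocess.call.

```python
import pickle

side_effects = []

def attacker_callback(msg):
    # Stand-in for a real payload such as os.system("..."). Any
    # importable callable can be named in a pickle stream.
    side_effects.append(msg)
    return msg

class MaliciousPayload:
    # __reduce__ tells pickle what to call to "rebuild" the object.
    # pickle.loads() invokes callable(*args) immediately.
    def __reduce__(self):
        return (attacker_callback, ("code ran inside pickle.loads()",))

blob = pickle.dumps(MaliciousPayload())

# The victim only deserializes bytes received over the network --
# no attribute access or method call on the result is needed.
pickle.loads(blob)
print(side_effects)  # the callback already fired during deserialization
```

Note that the attack needs nothing from the receiving application beyond the single pickle.loads() call, which is why type checks performed afterward offer no protection.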

The issue becomes more dangerous when the async inference server runs on a GPU host with access to models, datasets, robot control systems, API keys, SSH credentials, or internal networks.

Why the gRPC setup increases the risk

The vulnerable service uses insecure gRPC communication. Researchers noted the use of add_insecure_port(), which means no built-in TLS protection and no built-in authentication.

The default local setup may limit casual exposure, but real robotics deployments often place the policy server on a separate machine. In those cases, teams may bind the service to 0.0.0.0 so robots or clients can reach it across a network.

If that port becomes reachable from an untrusted network, attackers do not need credentials. They only need network access to send a malicious serialized payload.

Potential impact

  • Remote command execution on the LeRobot policy server.
  • Remote command execution on a robot client in some attack paths.
  • Theft of Hugging Face tokens, API keys, SSH keys, and model files.
  • Access to proprietary datasets and robotics research.
  • Lateral movement from a GPU server into the wider network.
  • Model corruption or sabotage of inference workflows.
  • Disruption of connected robot operations.

Why AI and robotics teams should care

LeRobot is used for real-world robotics and machine learning research. Its async inference feature exists because robot policy computation can require dedicated GPU resources.

Those GPU systems often sit inside trusted lab, research, or production networks. They may also store valuable machine learning models, training data, experiment outputs, and cloud credentials.

That makes a remote code execution flaw especially risky. A compromised inference server can become a gateway into the wider AI infrastructure.

The safetensors irony

The vulnerability has drawn attention because Hugging Face created safetensors, a format designed to avoid the security risks of pickle-based model loading.

Researchers pointed out that LeRobot still used pickle for network serialization in the async inference module. The affected lines also included # nosec comments, which suppress automated security linter warnings.

The safer approach is to use JSON or protobuf-native fields for configuration data and safetensors or another safe format for tensor data.
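As a sketch of that safer pattern, configuration data can travel as JSON, which a parser treats as inert data rather than instructions, and be validated against an allow-list on arrival. The field names here are illustrative, not LeRobot's actual schema.

```python
import json

# Hypothetical allow-list of configuration fields the server accepts.
ALLOWED_KEYS = {"policy_name", "device", "fps"}

def encode_config(config: dict) -> bytes:
    return json.dumps(config).encode("utf-8")

def decode_config(raw: bytes) -> dict:
    data = json.loads(raw.decode("utf-8"))  # parses data, never runs code
    if not isinstance(data, dict):
        raise ValueError("config must be a JSON object")
    unknown = set(data) - ALLOWED_KEYS
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    return data

blob = encode_config({"policy_name": "act", "device": "cuda", "fps": 30})
print(decode_config(blob))
```

A malformed or hostile payload can at worst raise an exception here; it cannot execute anything, which is the property pickle lacks.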

Mitigation steps for teams

Until a fixed release is deployed, teams should treat any network-exposed LeRobot async inference service as high risk.

  • Do not expose the LeRobot async inference gRPC service to the public internet.
  • Bind the service to 127.0.0.1 or another trusted internal interface where possible.
  • Use firewalls, VPNs, or API gateways to restrict access to trusted hosts only.
  • Block untrusted network access to the policy server port.
  • Monitor for unexpected gRPC traffic to LeRobot inference systems.
  • Run the inference service with the lowest privileges possible.
  • Move secrets away from GPU hosts where they are not needed.
  • Watch the LeRobot 0.6.0 release and apply the official fix when available.
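The loopback-binding advice above can also be enforced in code. This is a generic sketch, not LeRobot's API: a guard that refuses wildcard addresses before a service starts listening.

```python
import socket

def bind_safely(host: str, port: int) -> socket.socket:
    """Bind a TCP socket, refusing wildcard addresses outright."""
    if host in ("0.0.0.0", "::", ""):
        raise ValueError(
            "refusing wildcard bind; use 127.0.0.1 or a trusted interface"
        )
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    return s

# Port 0 asks the OS for any free port, keeping the example runnable anywhere.
sock = bind_safely("127.0.0.1", 0)
print(sock.getsockname()[0])
sock.close()
```

Failing fast on a wildcard bind turns a silent misconfiguration into an immediate, visible startup error.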

Hardening checklist

  • Network exposure: keep the async inference service off public networks.
  • Binding: avoid binding the server to 0.0.0.0 unless access is strictly filtered.
  • Authentication: place authentication in front of any reachable gRPC service.
  • Transport security: use TLS or a trusted encrypted tunnel where remote access is required.
  • Secrets: remove unnecessary API keys and SSH credentials from inference hosts.
  • Privilege level: run LeRobot services with restricted user permissions.
  • Monitoring: alert on unusual gRPC calls, shell execution, and outbound connections.
  • Patch planning: track the planned 0.6.0 fix and test it promptly.

What defenders should monitor

Security teams should look for unexpected inbound connections to LeRobot policy servers, especially from machines that do not normally participate in robotics workflows.

They should also monitor for shell execution spawned by Python processes running LeRobot. Commands launched from a policy server process can signal exploitation or testing activity.

Outbound traffic from inference hosts also deserves attention. A compromised GPU server may attempt to download second-stage tools, exfiltrate model files, or connect to attacker infrastructure.
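One lightweight way to spot the exposure described above is to check for TCP listeners bound to the wildcard address. This Linux-only sketch parses the kernel's /proc/net/tcp table; a production monitor would rely on proper tooling such as ss or an EDR agent instead.

```python
import os

def wildcard_listeners(proc_path="/proc/net/tcp"):
    """Return ports of TCP sockets listening on 0.0.0.0 (Linux only)."""
    exposed = []
    with open(proc_path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            addr_hex, port_hex = local_addr.split(":")
            # State 0A means LISTEN; address 00000000 is 0.0.0.0,
            # i.e. reachable from every network interface.
            if state == "0A" and addr_hex == "00000000":
                exposed.append(int(port_hex, 16))
    return exposed

if os.path.exists("/proc/net/tcp"):
    print("wildcard listeners on ports:", wildcard_listeners())
```

Any unexpected port in that list on an inference host is worth investigating, since it is exactly the kind of exposure this vulnerability requires.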

Why this matters

This vulnerability highlights a growing security problem in AI infrastructure. Research tools often become production tools before their security model matures.

Robotics frameworks create an even larger risk because software output can influence physical systems. A compromised inference pipeline can affect data, models, automation workflows, and connected robots.

The practical message is simple. AI and robotics teams should treat inference services like sensitive production services, not just research utilities.

FAQ

What is CVE-2026-25874?

CVE-2026-25874 is a critical LeRobot vulnerability that allows unauthenticated remote code execution through unsafe pickle deserialization over gRPC.

Which LeRobot versions are affected?

LeRobot versions through 0.5.1 are affected.

Is the flaw patched?

A full fix is planned for LeRobot 0.6.0. Teams should restrict network access and avoid exposing the async inference service until the fix is available and deployed.

How can attackers exploit it?

An attacker who can reach the gRPC service can send a crafted pickle payload through calls such as SendPolicyInstructions, SendObservations, or GetActions.
