Hugging Face LeRobot flaw enables unauthenticated remote code execution
A critical vulnerability in Hugging Face LeRobot can let unauthenticated attackers run commands on systems that expose the framework’s async inference service to a network.
The flaw is tracked as CVE-2026-25874 and affects LeRobot versions through 0.5.1. VulnCheck rates it as critical with a 9.3 CVSS v4 score, while NVD lists a 9.8 CVSS v3.1 score.
The issue comes from unsafe Python pickle deserialization inside LeRobot’s async inference pipeline. If an attacker can reach the gRPC service, they can send a crafted serialized payload and execute arbitrary code on the server or client.
What happened
The vulnerability sits in LeRobot’s async inference components, mainly policy_server.py and robot_client.py. These components help offload robot policy computation to a separate GPU server.
The problem is that the service receives raw bytes over gRPC and passes them into pickle.loads(). Python pickle can execute code during deserialization, so it should never process data from untrusted sources.
The gRPC channel also uses insecure communication without TLS or authentication. That means a network-reachable attacker can connect directly to the service if it is exposed beyond a trusted local environment.
At a glance
| Item | Details |
|---|---|
| CVE | CVE-2026-25874 |
| Affected project | Hugging Face LeRobot |
| Affected versions | LeRobot through 0.5.1 |
| Vulnerability type | Unsafe deserialization of untrusted data |
| CWE | CWE-502 |
| CVSS score | 9.3 CVSS v4 from VulnCheck, 9.8 CVSS v3.1 from NVD |
| Main impact | Unauthenticated remote code execution |
| Affected components | PolicyServer and RobotClient |
| Network protocol | gRPC over insecure channels |
| Fix status | Fix planned for version 0.6.0 |
How the vulnerability works
LeRobot’s async inference mode lets a robot client send observations to a server that performs heavy policy inference, often on a GPU machine. The server then returns actions for the robot to perform.
To exchange data, the implementation uses protobuf messages with raw bytes fields. Those bytes contain Python objects serialized with pickle.
When the server or client receives that data, it calls pickle.loads() before proper validation. A malicious pickle payload can run code during that loading process.
Attack paths
| gRPC call | Where the payload goes | Impact |
|---|---|---|
| SendPolicyInstructions | PolicySetup.data | Can trigger code execution on the policy server |
| SendObservations | Observation.data | Can trigger code execution after the server reassembles observation data |
| GetActions | Actions.data | Can trigger code execution on the robot client if a malicious server responds |
Why pickle deserialization is dangerous
Pickle is convenient for Python developers, but it was not designed for untrusted network input. A crafted pickle object can call system functions as soon as the application deserializes it.
That makes the bug serious even before the application checks whether the incoming data has the correct type. In this case, the payload can execute before LeRobot confirms that it received a valid policy configuration or observation object.
The issue becomes more dangerous when the async inference server runs on a GPU host with access to models, datasets, robot control systems, API keys, SSH credentials, or internal networks.
Why the gRPC setup increases the risk
The vulnerable service uses insecure gRPC communication. Researchers noted the use of add_insecure_port(), which means no built-in TLS protection and no built-in authentication.
The default local setup may limit casual exposure, but real robotics deployments often place the policy server on a separate machine. In those cases, teams may bind the service to 0.0.0.0 so robots or clients can reach it across a network.
If that port becomes reachable from an untrusted network, attackers do not need credentials. They only need network access to send a malicious serialized payload.
Potential impact
- Remote command execution on the LeRobot policy server.
- Remote command execution on a robot client in some attack paths.
- Theft of Hugging Face tokens, API keys, SSH keys, and model files.
- Access to proprietary datasets and robotics research.
- Lateral movement from a GPU server into the wider network.
- Model corruption or sabotage of inference workflows.
- Disruption of connected robot operations.
Why AI and robotics teams should care
LeRobot is used for real-world robotics and machine learning research. Its async inference feature exists because robot policy computation can require dedicated GPU resources.
Those GPU systems often sit inside trusted lab, research, or production networks. They may also store valuable machine learning models, training data, experiment outputs, and cloud credentials.
That makes a remote code execution flaw especially risky. A compromised inference server can become a gateway into the wider AI infrastructure.
The safetensors irony
The vulnerability has drawn attention because Hugging Face created safetensors, a format designed to avoid the security risks of pickle-based model loading.
Researchers pointed out that LeRobot still used pickle for network serialization in the async inference module. The affected lines also included # nosec comments, which suppress automated security linter warnings.
The safer approach is to use JSON or protobuf-native fields for configuration data and safetensors or another safe format for tensor data.
Mitigation steps for teams
Until a fixed release is deployed, teams should treat any network-exposed LeRobot async inference service as high risk.
- Do not expose the LeRobot async inference gRPC service to the public internet.
- Bind the service to 127.0.0.1 or another trusted internal interface where possible.
- Use firewalls, VPNs, or API gateways to restrict access to trusted hosts only.
- Block untrusted network access to the policy server port.
- Monitor for unexpected gRPC traffic to LeRobot inference systems.
- Run the inference service with the lowest privileges possible.
- Move secrets away from GPU hosts where they are not needed.
- Watch for the LeRobot 0.6.0 release and apply the official fix when available.
Hardening checklist
| Area | Recommended action |
|---|---|
| Network exposure | Keep the async inference service off public networks |
| Binding | Avoid binding the server to 0.0.0.0 unless access is strictly filtered |
| Authentication | Place authentication in front of any reachable gRPC service |
| Transport security | Use TLS or a trusted encrypted tunnel where remote access is required |
| Secrets | Remove unnecessary API keys and SSH credentials from inference hosts |
| Privilege level | Run LeRobot services with restricted user permissions |
| Monitoring | Alert on unusual gRPC calls, shell execution, and outbound connections |
| Patch planning | Track the planned 0.6.0 fix and test it quickly |
What defenders should monitor
Security teams should look for unexpected inbound connections to LeRobot policy servers, especially from machines that do not normally participate in robotics workflows.
They should also monitor for shell execution spawned by Python processes running LeRobot. Commands launched from a policy server process can signal exploitation or testing activity.
Outbound traffic from inference hosts also deserves attention. A compromised GPU server may attempt to download second-stage tools, exfiltrate model files, or connect to attacker infrastructure.
Why this matters
This vulnerability highlights a growing security problem in AI infrastructure. Research tools often become production tools before their security model matures.
Robotics frameworks create an even larger risk because software output can influence physical systems. A compromised inference pipeline can affect data, models, automation workflows, and connected robots.
The practical message is simple. AI and robotics teams should treat inference services like sensitive production services, not just research utilities.
FAQ
What is CVE-2026-25874?
CVE-2026-25874 is a critical LeRobot vulnerability that allows unauthenticated remote code execution through unsafe pickle deserialization over gRPC.
Which versions are affected?
LeRobot versions through 0.5.1 are affected.
Is there a fix?
A full fix is planned for LeRobot 0.6.0. Teams should restrict network access and avoid exposing the async inference service until the fix is available and deployed.
How could an attacker exploit the flaw?
An attacker who can reach the gRPC service can send a crafted pickle payload through calls such as SendPolicyInstructions, SendObservations, or GetActions.