Critical LiteLLM SQL injection flaw is already being exploited in the wild


A critical SQL injection vulnerability in LiteLLM is already being targeted in the wild, giving attackers a way to query the proxy database before authentication. The flaw is tracked as CVE-2026-42208 and affects LiteLLM versions 1.81.16 through 1.83.6. LiteLLM fixed it in version 1.83.7.

LiteLLM is an open-source gateway that routes AI requests to providers such as OpenAI, Anthropic, and other model services. That central position makes the bug especially risky: a LiteLLM proxy often stores virtual API keys, provider credentials, master keys, model settings, and proxy configuration data.

Sysdig’s Threat Research Team said it saw the first exploitation attempt 36 hours and seven minutes after the advisory entered the global GitHub Advisory Database. The activity was not a random SQL injection scan. Researchers said the attacker targeted specific LiteLLM database tables that store high-value secrets.

What makes CVE-2026-42208 dangerous

The flaw sits in LiteLLM’s proxy API key verification flow. GitHub’s advisory says a database query mixed the caller-supplied API key value into the query text instead of passing it as a separate parameter. An unauthenticated attacker could send a crafted Authorization header to an LLM API route and reach the vulnerable query through the proxy’s error-handling path.

Sysdig explains that the attack abuses the Authorization: Bearer header. A fake token containing a single quote can escape the intended SQL string and append attacker-controlled SQL. Because this happens before authentication finishes, any HTTP client that can reach the proxy can attempt the attack.
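The single-quote escape described above can be sketched in a few lines. This is an illustrative stand-in using Python's sqlite3 with a made-up table, not LiteLLM's actual code, schema, or database driver:

```python
# Illustrative sketch only -- NOT LiteLLM's actual code or schema.
# Shows how a bearer token containing a single quote can escape a
# SQL query that was built by string interpolation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token TEXT, user TEXT)")
conn.execute("INSERT INTO tokens VALUES ('sk-litellm-real-key', 'admin')")

def verify_key_vulnerable(api_key: str):
    # BAD: the caller-supplied value is mixed into the query text.
    query = f"SELECT user FROM tokens WHERE token = '{api_key}'"
    return conn.execute(query).fetchall()

# A fake token with a single quote rewrites the WHERE clause, so the
# query matches every row even though no valid key was supplied.
payload = "sk-litellm' OR '1'='1"
print(verify_key_vulnerable(payload))  # returns [('admin',)]
```

Because the quote terminates the intended string literal, everything after it becomes attacker-controlled SQL, which is exactly the behavior the advisory describes.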

The impact goes beyond normal database exposure. GitHub says attackers could read proxy database data and may also modify it, which could lead to unauthorized access to the proxy and the credentials it manages.

At a glance

Vulnerability: SQL injection in LiteLLM proxy API key verification
CVE: CVE-2026-42208
GitHub advisory: GHSA-r75f-5x8p-qvmc
Severity: Critical
CVSS score: 9.3
Affected versions: 1.81.16 through 1.83.6
Fixed version: 1.83.7
Authentication required: No
Main risk: Database access and possible credential theft
Exploitation status: Exploitation observed by Sysdig

What attackers tried to access

Sysdig said the observed attacker focused on three sensitive tables: LiteLLM_VerificationToken, litellm_credentials, and litellm_config. These tables can hold virtual API keys, stored provider credentials, and environment-variable configuration.

That focus shows a clear goal. The attacker was not simply testing whether SQL injection worked. They appeared to understand LiteLLM’s internal schema and tried to reach the data most useful for abusing AI provider accounts.

Sysdig also noted that it did not see confirmed follow-up abuse using stolen keys in the traffic it analyzed. However, the precision of the attempts shows that AI gateway databases have become a direct target for attackers.

How fast exploitation started

GitHub published the maintainer-side advisory on April 20, 2026, and the issue entered the global GitHub Advisory Database on April 24. Sysdig observed the first SQL injection attempt on April 26 at 04:24 UTC.

The first source IP was 65.111.27.132, followed by another IP, 65.111.25.67, from the same autonomous system. Sysdig said both used the same Python user-agent and appeared consistent with one operator rotating egress points.

The attack timeline is important for defenders. It shows that public advisory indexing can be enough for attackers to build targeted exploitation attempts quickly, even without waiting for a widely circulated proof of concept.

Why AI gateways are high-value targets

LiteLLM sits between applications and AI model providers. In many deployments, it handles routing, authentication, rate limits, budgets, and provider credentials.

That central role makes it attractive to attackers. A successful database extraction may expose keys that unlock OpenAI, Anthropic, AWS Bedrock, or other AI services. Sysdig compared the blast radius to a cloud account compromise because one database row can contain expensive, production-grade provider credentials.

This also creates financial risk. Attackers who obtain valid AI keys may generate large bills, access private prompts or workflows, disrupt applications, or resell working credentials.

What LiteLLM fixed

GitHub’s advisory says LiteLLM fixed the issue in version 1.83.7. The patched version passes the caller-supplied value to the database as a separate parameter instead of mixing it into the query text.

Sysdig also states that the fix replaces string interpolation with a parameterized query. Organizations using versions 1.81.16 through 1.83.6 should patch immediately.
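The class of fix described, replacing string interpolation with a parameterized query, can be sketched as follows. Again this is an illustration with Python's sqlite3 and a made-up table, not LiteLLM's actual patch:

```python
# Illustrative sketch of a parameterized query -- NOT LiteLLM's actual patch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token TEXT, user TEXT)")
conn.execute("INSERT INTO tokens VALUES ('sk-litellm-real-key', 'admin')")

def verify_key_fixed(api_key: str):
    # GOOD: the driver binds api_key as data via the ? placeholder,
    # so quotes inside the value cannot break out of the literal.
    return conn.execute(
        "SELECT user FROM tokens WHERE token = ?", (api_key,)
    ).fetchall()

print(verify_key_fixed("sk-litellm' OR '1'='1"))  # returns [] -- payload is inert
```

With the value bound as a parameter, the earlier injection payload is treated as an ordinary, non-matching token string.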

LiteLLM has already been under closer security review this month. Earlier in April, the project said it brought in Veria Labs after a March supply chain incident, fixed multiple vulnerability reports, and launched a bug bounty program.

What admins should do now

Organizations should upgrade LiteLLM to version 1.83.7 or later immediately. If an instance was reachable from the internet while running a vulnerable version, teams should treat it as potentially compromised and rotate secrets.

GitHub’s advisory also lists a temporary workaround for teams that cannot upgrade right away. Admins can set disable_error_logs: true under general_settings, which removes the path that lets unauthenticated input reach the vulnerable query. This should only serve as a short-term measure.
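Based on the advisory's description, the setting lives under general_settings in the proxy's configuration file. A minimal fragment might look like this (the file name and any surrounding keys are assumptions):

```yaml
# Hypothetical LiteLLM config.yaml fragment -- temporary mitigation only.
general_settings:
  disable_error_logs: true  # closes the error-handling path the exploit uses
```

This does not fix the underlying injection; upgrading to 1.83.7 or later remains the real remediation.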

Security teams should also move LiteLLM behind an internal network or a mutually authenticated reverse proxy. AI gateways should not remain directly exposed to the internet when they hold production model-provider credentials.

Indicators and log checks

Request path: POST /chat/completions or /v1/chat/completions
Header pattern: Authorization: Bearer sk-litellm'
SQL keywords: UNION SELECT, FROM, OR 1=1, --
Targeted tables: LiteLLM_VerificationToken, litellm_credentials, litellm_config
User-agent: Python/3.12 aiohttp/3.9.1
Source IPs reported by Sysdig: 65.111.27.132, 65.111.25.67

  • Upgrade LiteLLM to 1.83.7 or later.
  • Rotate all LiteLLM virtual API keys and master keys.
  • Rotate stored OpenAI, Anthropic, AWS Bedrock, and other provider credentials.
  • Review web server logs for sk-litellm' in the Authorization header.
  • Search logs for UNION SELECT and targeted LiteLLM table names.
  • Check upstream AI provider billing for unexpected usage.
  • Review /chat/completions, /key/generate, and /key/info requests.
  • Move internet-facing LiteLLM deployments behind an internal gateway.
  • Add reverse proxy filtering for suspicious Authorization headers.
  • Inventory unofficial or team-managed AI gateway deployments.
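The log checks above can be combined into a single grep. This is a sketch against a made-up sample log; the log path, log format, and whether your setup records the Authorization header at all are assumptions to adjust for your environment:

```shell
# Hypothetical access-log check: flag requests whose logged Authorization
# header carries the sk-litellm' injection marker or common SQL keywords.
# Sample log created here only so the command has something to run against.
cat > /tmp/sample_access.log <<'EOF'
10.0.0.5 - - [26/Apr/2026:04:10:00 +0000] "POST /chat/completions HTTP/1.1" 200 "Bearer sk-litellm-abc123"
65.111.27.132 - - [26/Apr/2026:04:24:09 +0000] "POST /chat/completions HTTP/1.1" 500 "Bearer sk-litellm' UNION SELECT token FROM LiteLLM_VerificationToken--"
EOF

# Matching lines are candidate exploitation attempts worth investigating.
grep -Ei "sk-litellm'|UNION[[:space:]]+SELECT|LiteLLM_VerificationToken" /tmp/sample_access.log
```

Any hit should be correlated with the source IPs and user-agent listed above before drawing conclusions, since benign traffic can occasionally contain SQL keywords.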

Why this matters for enterprise AI security

CVE-2026-42208 shows how AI infrastructure can become a direct credential target. Many companies treat AI gateways as developer utilities, but these systems often hold the same level of value as cloud access brokers.

The attack also shows that adversaries now understand the structure of popular AI tools. Sysdig’s findings suggest attackers used schema-aware payloads instead of generic scanning, which points to more focused exploitation of AI infrastructure.

For enterprises, the lesson is clear. AI gateways need the same controls as sensitive production services: patching, network restrictions, secrets management, audit logging, and billing monitoring.

FAQ

What is CVE-2026-42208?

CVE-2026-42208 is a critical SQL injection vulnerability in LiteLLM’s proxy API key verification flow. It allows unauthenticated attackers to reach a vulnerable database query through crafted API requests.

Which LiteLLM versions are affected?

GitHub lists affected versions as 1.81.16 up to, but not including, 1.83.7. The fixed version is 1.83.7.

Is CVE-2026-42208 being exploited?

Yes. Sysdig said it observed targeted SQL injection attempts 36 hours and seven minutes after the advisory appeared in the global GitHub Advisory Database.

What data can attackers target?

Attackers can target LiteLLM database tables containing virtual API keys, stored provider credentials, and proxy environment configuration. Sysdig saw attempts against LiteLLM_VerificationToken, litellm_credentials, and litellm_config.
