When PyPI maintainer accounts get hijacked: the LiteLLM 1.82.7/1.82.8 supply-chain compromise

A timeline and technical analysis of the March 2026 LiteLLM PyPI compromise, what the malicious payload did, and the defenses every Python team should adopt today.

Dipankar Sarkar · security · supply-chain · pypi · sigstore
Upstream issue
#24518 — [Security]: litellm PyPI package (v1.82.7 + v1.82.8) compromised — full timeline and status
Opened March 24, 2026 · status: open · 164 👍 · 116 comments

In late March 2026, two malicious versions of litellm appeared on PyPI: 1.82.7 and 1.82.8. They were not built by BerriAI’s CI. They were uploaded directly by an attacker who had taken over the maintainer’s PyPI account. Within hours the package was suspended, but a window of exposure existed during which any unpinned pip install litellm would pull a credential-stealer that exfiltrated environment variables, SSH keys, and cloud credentials to an attacker-controlled domain.

The official issue tracking the incident is #24518. It is the most-reacted issue in the LiteLLM repo by a wide margin, and for good reason — it is the kind of compromise the Python ecosystem keeps insisting can’t happen until it does.

This post is a technical reconstruction. Not a hot take.

What was published, and how it differed from a legitimate release

The legitimate release pipeline for LiteLLM produces a tagged GitHub release, builds a wheel via CI, and publishes that wheel to PyPI. Each step is observable in the repo’s actions history. By contrast:

  • Version 1.82.7 contained an embedded payload inside litellm/proxy/proxy_server.py. The payload triggered on import litellm.proxy. Anyone running the LiteLLM proxy from PyPI was at risk; someone using only the SDK was not.
  • Version 1.82.8 added a new file: litellm_init.pth, 34,628 bytes. This is a .pth file — a Python startup hook. Python automatically executes any .pth file in site-packages/ on interpreter startup, regardless of whether the package containing it is imported. Version 1.82.8 ran on any Python startup, not just on import litellm.
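The `.pth` startup-hook mechanism that made 1.82.8 so dangerous is easy to demonstrate. This sketch (the file name and the hook it installs are made up for illustration, not taken from the payload) uses `site.addsitedir` to simulate what the interpreter does to `site-packages/` at startup: any line in a `.pth` file that begins with `import` is executed.

```python
import os
import site
import tempfile

# Create a directory standing in for site-packages/.
d = tempfile.mkdtemp()

# A .pth line beginning with "import" is exec'd by site.py, whether or not
# any package in the directory is ever imported.
with open(os.path.join(d, "demo_hook.pth"), "w") as f:
    f.write('import os; os.environ["PTH_HOOK_RAN"] = "1"\n')

# addsitedir() replays the startup-time .pth processing for this directory.
site.addsitedir(d)

print(os.environ.get("PTH_HOOK_RAN"))  # → 1
```

The hook here is harmless; the 1.82.8 payload used the identical mechanism to run its collector on every interpreter start.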

Neither version had a corresponding GitHub release. The official tags only went up to v1.82.6.dev1. That asymmetry — PyPI ahead of GitHub — is the smoking gun. Any tooling that compared the two would have flagged it.
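The kind of tooling that would have caught this asymmetry is nearly a one-liner: fetch the version list PyPI reports, fetch the repo's git tags, and flag anything that exists only on PyPI. A minimal sketch, with hypothetical inputs reflecting the incident:

```python
def versions_only_on_pypi(pypi_versions, git_tags):
    """Flag versions published to PyPI that have no corresponding git tag."""
    tagged = {t.removeprefix("v") for t in git_tags}
    return sorted(v for v in pypi_versions if v not in tagged)

# Hypothetical inputs mirroring the incident's state on March 24:
flagged = versions_only_on_pypi(
    ["1.82.5", "1.82.6.dev1", "1.82.7", "1.82.8"],
    ["v1.82.5", "v1.82.6.dev1"],
)
print(flagged)  # → ['1.82.7', '1.82.8']
```

In practice the inputs would come from `https://pypi.org/pypi/<package>/json` and the GitHub tags API; the comparison logic is the whole trick.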

What the payload did

According to the upstream issue and the original analysis in #24512, the malicious code:

  1. Collected SSH private keys, environment variables (which on most CI systems contain API keys, secrets, and tokens), AWS/GCP/Azure/Kubernetes credentials, crypto wallet files, database connection strings, SSL private keys, shell history files, and CI/CD config files.
  2. Encrypted the collected data using AES-256-CBC for the bulk content with a randomly generated key, then encrypted that key with RSA-4096 using a hardcoded attacker public key. This is a textbook exfiltration envelope — fast symmetric encryption of the payload, asymmetric encryption of just the key, so the attacker can later decrypt offline.
  3. Exfiltrated the encrypted bundle via an HTTP POST to https://models.litellm.cloud/. Note the domain: not litellm.ai (the legitimate one), but litellm.cloud, registered on 2026-03-23 via Spaceship Inc., privacy-protected, just hours before the malicious packages appeared.
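For readers unfamiliar with the envelope pattern in step 2, here is a hypothetical reconstruction, not the attacker's actual code, using the third-party `cryptography` library. The function name, key handling, and OAEP padding choice are assumptions; the issue does not specify the RSA padding mode.

```python
import os
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as rsa_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Stand-in for the hardcoded attacker key; the real payload ships only the
# public half, so victims can never decrypt what was taken.
attacker_priv = rsa.generate_private_key(public_exponent=65537, key_size=4096)
attacker_pub = attacker_priv.public_key()

def seal(data: bytes) -> tuple[bytes, bytes, bytes]:
    """Hybrid envelope: bulk AES-256-CBC, key wrapped with RSA-4096."""
    key, iv = os.urandom(32), os.urandom(16)   # fresh symmetric key per victim
    padder = padding.PKCS7(128).padder()       # CBC needs block-aligned input
    padded = padder.update(data) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    wrapped_key = attacker_pub.encrypt(        # only the attacker can unwrap
        key,
        rsa_padding.OAEP(mgf=rsa_padding.MGF1(hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, iv, ciphertext
```

The design point is the one the analysis makes: symmetric encryption handles arbitrarily large loot quickly, and the asymmetric wrap means decryption can happen offline, later, on the attacker's machine.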

The cadence — domain registered, packages uploaded, exfiltration begins — is consistent with a planned operation rather than opportunistic credential theft. The attacker had time to set up the domain in advance.

How the account was taken over

The full root cause of the maintainer-account compromise is still under investigation by Mandiant, but the LiteLLM team’s update on the issue points to a `trivvy` security-scan dependency as the entry point. This is consistent with a class of attack in which a developer machine is compromised through a malicious or trojaned tool that was run legitimately, after which the attacker harvests stored credentials, including, in this case, PyPI tokens kept in ~/.pypirc or in a keychain.

This matters because it shifts the threat model. The attack is not “PyPI is insecure.” PyPI was operating as designed. The attack is “the maintainer’s local machine became the source of trust, and once it was compromised, every project that maintainer published was at risk.” This is the same threat model as the event-stream compromise of 2018 and xz-utils of 2024 — a package owner’s keys are the package.

What a real fix looks like

The LiteLLM team rotated maintainer accounts and pulled the malicious versions within hours. That’s incident response. Prevention is harder, and falls into two categories.

For maintainers:

  • Trusted publishing. PyPI now supports OpenID Connect-based publishing from GitHub Actions without long-lived API tokens. Every release is bound to a specific repo, workflow, and environment. A compromised laptop can no longer publish; only a compromised GitHub Actions workflow can.
  • Required 2FA on PyPI. Already mandated for top packages, but maintainers should turn it on regardless and use hardware keys, not TOTP.
  • Sigstore signing. PyPI supports PEP 740 attestations. A signed release attests to the source workflow and commit. Verification tools can refuse to install unsigned wheels.
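A trusted-publishing release job is only a few lines of workflow config. The sketch below is illustrative; the `release` environment name and trigger are placeholders to be matched against the trusted-publisher settings on PyPI:

```yaml
# .github/workflows/release.yml (sketch)
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release        # must match the PyPI trusted-publisher config
    permissions:
      id-token: write           # mint a short-lived OIDC token; no stored API token
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Because the OIDC token is minted per run and scoped to this repo, workflow, and environment, a stolen laptop has nothing to exfiltrate.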

For consumers:

  • Pin everything. Not just litellm>=x, but exact versions in a lockfile. The LiteLLM team confirmed that proxy Docker image users were not impacted because all dependencies were pinned in requirements.txt. Pinning is the cheapest defense and the most consistently neglected one.
  • Use a hash-checking installer. pip install --require-hashes and uv both verify wheel hashes against your lockfile. A compromised wheel with a different hash will fail to install.
  • Monitor your installed packages. Tools like pip-audit check installed packages against known-vulnerability databases such as the PyPI advisory feed and OSV. They don’t catch zero-day compromises, but they flag known-bad releases quickly once advisories are published.
  • Network-level egress controls. A LiteLLM proxy that only needs to talk to your model providers should not be allowed to make outbound HTTPS calls to arbitrary domains. The exfiltration would have failed against a default-deny egress policy.
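Hash checking is mechanically simple, which is part of why skipping it is hard to excuse. This sketch mirrors what `--require-hashes` does per artifact; the file and hashes are stand-ins:

```python
import hashlib
import pathlib
import tempfile

def wheel_matches_lock(wheel_path: str, expected_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 against the lockfile's hash."""
    digest = hashlib.sha256(pathlib.Path(wheel_path).read_bytes()).hexdigest()
    return digest == expected_sha256

# Demo with a stand-in "wheel": a tampered artifact fails the check.
wheel = tempfile.NamedTemporaryFile(delete=False)
wheel.write(b"original wheel bytes")
wheel.close()
good = hashlib.sha256(b"original wheel bytes").hexdigest()

print(wheel_matches_lock(wheel.name, good))      # → True
print(wheel_matches_lock(wheel.name, "0" * 64))  # → False
```

A compromised 1.82.7 wheel would have had a different hash than any lockfile entry, so a hash-checked install would have failed loudly instead of running the payload.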

The workarounds users actually applied

In the comments on #24518, affected users described what they did once the issue was disclosed:

  1. Audited site-packages/ for litellm_init.pth. The presence of this file is the proof of exposure to the 1.82.8 variant. A simple find command across machines was the first step.
  2. Rotated all credentials that had been present as environment variables on any machine where the malicious version was installed. This is expensive and disruptive but unavoidable if the threat model assumes successful exfiltration.
  3. Pinned to a known-good version below 1.82.7. The team’s recommendation is to pin to a release verified against GitHub releases, rather than trusting PyPI alone for the next several days.
  4. Switched to the official Docker image as a temporary measure, since it ships with pinned dependencies and was confirmed unaffected.
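The audit in step 1 can be done from Python instead of `find`, which also covers per-user site directories:

```python
import pathlib
import site

# Scan every site directory for the malicious startup hook from 1.82.8.
suspects = []
for d in site.getsitepackages() + [site.getusersitepackages()]:
    p = pathlib.Path(d)
    if p.is_dir():
        suspects += list(p.glob("litellm_init.pth"))

print(suspects if suspects else "no litellm_init.pth found")
```

Any hit means the 1.82.8 variant was installed and every secret on that machine should be treated as exfiltrated.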

The broader lesson

The PyPI compromise of LiteLLM is not unique to LiteLLM. The same mechanism, a maintainer machine compromised by a development tool, leading to stolen publishing rights and then to malicious wheels, could happen to any popular Python package. The defense is not “trust PyPI more” or “trust LiteLLM less.” The defense is reducing your exposure surface so that a single compromised package can’t destroy your environment:

  • Pin and hash-check all dependencies.
  • Run production workloads under egress controls that prevent unknown destinations.
  • Treat environment-variable secrets as compromised on any machine that has installed an unpinned dependency.
  • Prefer Docker images with pinned base layers over pip install in production.

LiteLLM’s response — fast disclosure, transparent timeline, engagement with Mandiant, PyPI takedown — was correct. The systemic problem is that “fast disclosure” still leaves a window. The only real defense is to make sure that window doesn’t expose you in the first place.
