If you use LiteLLM in your projects, pay attention. On March 24, a hacking group called TeamPCP published compromised versions of the popular Python package on PyPI. The affected versions — 1.82.7 and 1.82.8 — were available for about three hours and contained a credential stealer.
What Happened
LiteLLM is a widely used proxy that wraps multiple LLM APIs under a single interface. Thousands of developers rely on it to switch between Claude, GPT, and other models.
The attack was sophisticated: TeamPCP had previously stolen the LiteLLM maintainer’s PyPI credentials through a compromised GitHub Action in Aqua Security’s Trivy scanner. Using those credentials, the attackers uploaded the malicious packages directly to PyPI, bypassing the official CI/CD pipeline entirely.
The compromised package contained a .pth file, which Python executes automatically at every interpreter startup. The payload harvests credentials from the environment and exfiltrates them, encrypted, to a domain unrelated to LiteLLM.
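This .pth mechanism is worth understanding: lines in a site-packages .pth file that begin with `import` are executed by the interpreter on every startup. A minimal audit sketch (the function name is mine, not from any official tooling) lists such lines for manual review. Note that some legitimate packages, setuptools among them, also ship import-bearing .pth files, so hits are leads, not verdicts:

```python
import site
from pathlib import Path

def import_bearing_pth_lines():
    """Scan site-packages for .pth files whose lines start with
    'import' -- Python executes those lines at every interpreter
    startup, which is the mechanism this attack abused."""
    findings = []
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for sp in dirs:
        # glob() on a nonexistent directory simply yields nothing
        for pth in Path(sp).glob("*.pth"):
            for line in pth.read_text(errors="replace").splitlines():
                if line.strip().startswith("import"):
                    findings.append((str(pth), line.strip()))
    return findings

for path, line in import_bearing_pth_lines():
    print(f"{path}: {line}")
```

Anything you don't recognize here deserves a closer look.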
What You Should Do
If you updated LiteLLM between March 24 and the cleanup, check your installed version immediately. The compromised versions are 1.82.7 and 1.82.8.
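You can check programmatically with the standard library; a minimal sketch, with the bad-version set taken from this advisory:

```python
from importlib.metadata import version, PackageNotFoundError

# Versions named in the advisory
COMPROMISED = {"1.82.7", "1.82.8"}

try:
    installed = version("litellm")
except PackageNotFoundError:
    print("litellm is not installed")
else:
    if installed in COMPROMISED:
        print(f"WARNING: compromised litellm {installed} is installed")
    else:
        print(f"litellm {installed} is not one of the known-bad versions")
```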
Assume the worst case and rotate every API key and credential that was accessible in the environment where LiteLLM was running. That potentially includes your OpenAI, Anthropic, and Azure keys, plus any other LLM API credentials.
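To build the rotation checklist, a crude heuristic helps: list environment variable names that look like secrets. This is a hedged sketch (the marker list is my assumption, not exhaustive) that prints names only, never values:

```python
import os

# Hypothetical heuristic: substrings that suggest a variable holds a secret
MARKERS = ("API_KEY", "SECRET", "TOKEN", "PASSWORD")

def names_to_rotate(environ=os.environ):
    """Return env var names that look credential-like, sorted."""
    return sorted(
        name for name in environ
        if any(marker in name.upper() for marker in MARKERS)
    )

for name in names_to_rotate():
    print(name)  # names only; never print the values
```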
The Bigger Picture
This wasn’t an isolated incident. TeamPCP has been running a coordinated supply chain campaign for weeks. Besides LiteLLM, Aqua Security’s Trivy and Checkmarx were also compromised. Simon Willison documented the case in detail on his blog, estimating that the compromised packages were downloaded roughly 47,000 times.
For the AI developer community, this is a wake-up call. We’re stuffing more and more API keys into our environments, depending on more and more packages — and the supply chain is a wide-open attack vector. Pin your dependencies, verify hashes, and keep your eyes open.
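Hash verification is built into pip: add `--hash=sha256:...` entries to requirements.txt and install with `pip install --require-hashes -r requirements.txt`. Conceptually it reduces to comparing an artifact's digest against a pinned value; a minimal illustration (the pinned digest here is just the sha256 of an empty file, for demonstration):

```python
import hashlib

# sha256 of the empty byte string, used here purely as a stand-in pin
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def matches_pin(artifact: bytes, pinned: str) -> bool:
    """True iff the artifact's sha256 digest equals the pinned hash."""
    return hashlib.sha256(artifact).hexdigest() == pinned

print(matches_pin(b"", PINNED_SHA256))       # empty artifact matches the pin
print(matches_pin(b"tampered", PINNED_SHA256))  # any change breaks the match
```

A tampered upload changes the digest, so a hash-pinned install fails instead of silently pulling the malicious build.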
Sources: