LLMs can’t stop making up software dependencies and sabotaging everything • The Register
April 15, 2025

Exploiting hallucinated package names represents a form of typosquatting, where variations or misspellings of common terms are used to dupe people. Seth Michael Larson, security developer-in-residence at the Python Software Foundation, has dubbed it “slopsquatting” – “slop” being a common pejorative for AI model output.
“We’re in the very early days looking at this problem from an ecosystem level,” Larson told The Register. “It’s difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers. Users of LLM generated code, packages, and information should be double-checking LLM outputs against reality before putting any of that information into operation, otherwise there can be real-world consequences.”
Source: LLMs can’t stop making up software dependencies and sabotaging everything • The Register
Supply chain attacks via package managers are a well-known security risk. If you’ve used LLMs to help generate code, you’ll likely have seen them hallucinate non-existent packages for you to include. Mostly you’ll quickly discover these don’t exist, but it seems malicious actors are now registering those hallucinated names to exploit the mistake (or at the very least it’s a potential attack vector worth being mindful of).
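One practical way to follow Larson’s advice of checking LLM output against reality is to vet suggested dependency names before running `pip install`. Here’s a minimal sketch using PyPI’s public JSON endpoint (which returns 404 for unknown names); the function and package names are illustrative, and bear in mind that mere existence on PyPI doesn’t prove a package is legitimate – a slopsquatted package will exist too, so treat this as one check among several.

```python
# Minimal sketch: check whether package names LLM-suggested dependencies
# actually exist on PyPI before installing them. Existence alone is not
# proof of legitimacy -- a slopsquatted package will also exist.
import urllib.request
import urllib.error


def exists_on_pypi(package_name: str) -> bool:
    """Return True if PyPI knows about this package name."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


# Hypothetical example: vet a dependency list suggested by an LLM.
suggested = ["requests", "definitely-not-a-real-pkg-xyz"]
for name in suggested:
    print(name, "exists" if exists_on_pypi(name) else "NOT FOUND on PyPI")
```

For names that do exist, it’s still worth a quick look at the project page, maintainer, and release history before trusting it in your build.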