
Developers Beware: Slopsquatting & Vibe Coding Can Increase Risk of AI-Powered Attacks


Security researchers and developers are raising alarms over “slopsquatting,” a new form of supply chain attack that leverages AI-generated misinformation, commonly known as hallucinations. As developers increasingly rely on AI coding tools like GitHub Copilot, ChatGPT, and DeepSeek, attackers are exploiting these models’ tendency to invent software package names, tricking users into downloading malicious content.

What is slopsquatting?

The term slopsquatting was originally coined by Seth Larson, a developer with the Python Software Foundation, and later popularized by tech security researcher Andrew Nesbitt. It refers to cases where attackers register software packages that don’t actually exist but are mistakenly suggested by AI tools; once live, these fake packages can contain harmful code.

If a developer installs one of these without verifying it — simply trusting the AI — they may unknowingly introduce malicious code into their project, giving hackers backdoor access to sensitive environments.

Unlike typosquatting, where malicious actors count on human spelling mistakes, slopsquatting relies entirely on AI’s flaws and developers’ misplaced trust in automated suggestions.

AI-hallucinated software packages are on the rise

This issue is more than theoretical. A recent joint study by researchers at the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma analyzed more than 576,000 AI-generated code samples from 16 large language models (LLMs). They found that nearly 1 in 5 packages suggested by AI didn’t exist.

“The average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat,” the study revealed.

Even more concerning, these hallucinated names weren’t random. In multiple runs using the same prompts, 43% of hallucinated packages consistently reappeared, showing how predictable these hallucinations can be. As explained by the security firm Socket, this consistency gives attackers a roadmap — they can monitor AI behavior, identify repeat suggestions, and register those package names before anyone else does.

The study also noted differences across models: CodeLlama 7B and 34B had the highest hallucination rates, at over 30%, while GPT-4 Turbo had the lowest, at 3.59%.

How vibe coding might increase this security risk

A growing trend called vibe coding, a term coined by AI researcher Andrej Karpathy, may worsen the issue. It refers to a workflow where developers describe what they want, and AI tools generate the code. This approach leans heavily on trust — developers often copy and paste AI output without double-checking everything.

In this environment, hallucinated packages become easy entry points for attackers, especially when developers skip manual review steps and rely solely on AI-generated suggestions.

How developers can protect themselves

To avoid falling victim to slopsquatting, experts recommend:

  • Manually verifying all package names before installation (a verification sketch follows this list).
  • Using package security tools that scan dependencies for risks.
  • Checking for suspicious or brand-new libraries.
  • Avoiding copy-pasting install commands directly from AI suggestions.
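
To make the first and third checks concrete, here is a minimal Python sketch that queries the public PyPI JSON API (pypi.org/pypi/<name>/json) to confirm a package actually exists and to flag very young projects before anything is installed. The script name, the 90-day threshold, and the package names in the usage comment are illustrative assumptions, not part of any vetted tool.

    import json
    import sys
    import urllib.error
    import urllib.request
    from datetime import datetime, timezone

    PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI JSON API

    def check_package(name: str, min_age_days: int = 90) -> None:
        """Warn if a package is missing from PyPI or was published very recently."""
        try:
            with urllib.request.urlopen(PYPI_JSON_URL.format(name=name)) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                print(f"[!] '{name}' is not on PyPI -- possible hallucinated package.")
                return
            raise

        # Estimate project age from the earliest upload across all releases.
        upload_times = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in data["releases"].values()
            for f in files
        ]
        if not upload_times:
            print(f"[?] '{name}' exists but has no uploaded files -- treat with caution.")
            return

        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            print(f"[!] '{name}' is only {age_days} days old -- review it before installing.")
        else:
            print(f"[ok] '{name}' exists and is {age_days} days old.")

    if __name__ == "__main__":
        # Example (hypothetical names): python check_package.py requests some-ai-suggested-lib
        for pkg in sys.argv[1:]:
            check_package(pkg)

Existence alone proves little: an attacker may already have registered a hallucinated name, which is exactly why the age and reputation of a brand-new library still deserve a human look.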

Meanwhile, there is good news: some AI models are improving in self-policing. GPT-4 Turbo and DeepSeek, for instance, have shown they can detect and flag hallucinated packages in their own output with over 75% accuracy, according to early internal tests.

