
Show HN: Phantom Guard – Detect AI-hallucinated package attacks

2 points | matteo1782 | 1 month ago | github.com

Phantom Guard is a CLI tool that catches "slopsquatting" attacks before they compromise your supply chain.

The attack vector: AI assistants hallucinate package names → attackers register those names with malware → developers install malware thinking it's legit.

How it works:

1. Checks whether packages exist on registries
2. Matches against 10+ AI hallucination patterns
3. Detects typosquats of the top 3,000 packages
4. Analyzes metadata (age, downloads, maintainers)
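Step 3 (typosquat detection) amounts to an edit-distance scan against a popular-package list. A minimal sketch, assuming a Levenshtein threshold of 1 and a tiny stand-in list — the function names and sample packages are illustrative, not Phantom Guard's actual code:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TOP_PACKAGES = {"requests", "numpy", "flask"}  # stand-in for the top 3,000

def typosquat_candidates(name: str, max_dist: int = 1) -> list[str]:
    """Popular packages within max_dist edits of `name` (possible squat targets)."""
    return [p for p in TOP_PACKAGES
            if p != name and edit_distance(name, p) <= max_dist]
```

So `typosquat_candidates("requets")` flags `requests` as the likely intended package, while an exact match to a popular name returns nothing.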

```
pip install phantom-guard
phantom-guard validate flask-gpt-helper
# HIGH_RISK: Package not found, matches pattern
```
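The existence check behind that `HIGH_RISK` verdict can be sketched against PyPI's public JSON API (`https://pypi.org/pypi/<name>/json`, which returns 404 for unregistered names). The function names below are illustrative, not Phantom Guard's actual internals:

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str, timeout: float = 5.0) -> bool:
    """True if `name` is a published PyPI project (its JSON page returns 200)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # not registered: the core slopsquatting signal
            return False
        raise                 # other HTTP errors: surface them, don't guess

def existence_verdict(exists: bool) -> str:
    """Map the dominant signal to a coarse verdict, mirroring the CLI output."""
    return "LOW" if exists else "HIGH_RISK"
```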

Performance: <10ms cached, <200ms uncached.

Try the live demo: https://matte1782.github.io/phantom_guard/

GitHub: https://github.com/matte1782/phantom_guard

2 comments


dmarwicke | 1 month ago

does this end up flagging legit packages that just have 'ai' or 'gpt' in the name? feels like half of pypi would trigger at this point

matteo1782 | 1 month ago

Great question! No, Phantom Guard won't flag legit packages like openai, langchain-openai, or gpt-engineer.

The primary signal is whether the package exists on the registry. We query PyPI/npm directly:

- If a package exists → it gets a low/safe risk score
- If a package doesn't exist → that's the main red flag for slopsquatting

Pattern matching (like AI-related terms) is just one of many weighted signals, and it's far outweighed by existence. In fact, popular packages get a negative weight that actively reduces their risk score.
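The weighting idea can be sketched in a few lines. The signal names and weight values here are invented for demonstration, not Phantom Guard's real ones:

```python
WEIGHTS = {
    "not_on_registry": 0.7,   # dominant signal: the package doesn't exist
    "ai_name_pattern": 0.2,   # e.g. 'gpt'/'ai' fragments in the name
    "is_popular":     -0.5,   # popularity actively reduces risk
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of triggered signals, clamped to [0, 1]."""
    raw = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return max(0.0, min(1.0, raw))
```

With these toy weights, an existing, popular package with 'ai' in its name (0.2 − 0.5, clamped) scores 0.0, while a nonexistent AI-patterned name scores roughly 0.9.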

The attack we're detecting is when an LLM hallucinates a package name like flask-gpt-utils that sounds plausible but doesn't exist. A real attacker could then register that name and wait for developers to pip install it.

We test against the top 1000 PyPI packages and target a <5% false positive rate. If you're importing openai or transformers, you're fine.