Hey HN — I built this after getting fed up with AI-generated PRs slipping
through code review: TODOs everywhere, placeholder variables, empty except
blocks — the usual slop.
roast-my-code scans your repo with static analysis rules specifically tuned
for AI-generated code patterns, then calls an LLM (Groq free tier by default,
so $0 to try) to generate a brutal, specific roast referencing your actual
file names and issues.
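For a sense of what "static analysis rules tuned for AI-generated code patterns" can look like: here's a minimal sketch of two such checks using only Python's stdlib ast module. These are not roast-my-code's actual rules, and the function names and the sample SOURCE are mine — just an illustration of the kind of pattern the post describes.

```python
import ast

# Hypothetical sample input exhibiting two classic slop patterns.
SOURCE = '''
def fetch_data(url):
    # TODO: implement retries
    try:
        return download(url)
    except Exception:
        pass
'''

def find_empty_excepts(source: str) -> list[int]:
    """Return line numbers of except handlers whose body is a bare pass."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                hits.append(node.lineno)
    return hits

def find_todo_comments(source: str) -> list[int]:
    """Return line numbers containing TODO/FIXME comments (crude text scan)."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if "# TODO" in line or "# FIXME" in line]

print(find_empty_excepts(SOURCE))
print(find_todo_comments(SOURCE))
```

Walking the AST rather than grepping for `pass` keeps the empty-except check from firing on legitimate `pass` statements elsewhere.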
Stack: Python + Typer + Rich + Jinja2. The HTML report exports a shareable
shields.io badge with your score.
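The badge export is presumably just a shields.io static-badge URL, which takes the form `https://img.shields.io/badge/<label>-<message>-<color>`. A hedged sketch — the label text, score semantics, thresholds, and color mapping here are my guesses, not the project's:

```python
from urllib.parse import quote

def badge_url(score: int) -> str:
    """Build a shields.io static badge URL for a 0-100 score.

    Thresholds and colors are illustrative assumptions, not
    roast-my-code's actual scheme.
    """
    color = "red" if score < 50 else "yellow" if score < 80 else "brightgreen"
    label = quote("slop score")          # spaces must be URL-encoded
    message = f"{score}%2F100"           # %2F is an encoded "/"
    return f"https://img.shields.io/badge/{label}-{message}-{color}"

print(badge_url(85))
```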
Try it: pip install roast-my-code
Would love to hear what patterns you'd add — especially if you've spotted
AI slop in the wild that my analyzer doesn't catch yet.
Rohan51|1 day ago
ksaj|1 day ago
I'm terrible about placeholder variables and functions. This thing might rip me to shreds.