top | item 47204246

Show HN: roast-my-code

Rohan51 | 1 day ago

Hey HN — I built this after getting fed up with AI-generated PRs slipping through code review unnoticed. TODOs everywhere, placeholder variables, empty except blocks — the usual slop.
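The patterns called out above (empty except blocks, placeholder names) are the kind of thing a simple AST walk can flag. A minimal sketch of such a check — the placeholder-name list and messages here are my own illustration, not the tool's actual rules:

```python
import ast

# Hypothetical placeholder-name list for illustration only.
PLACEHOLDER_NAMES = {"foo", "bar", "baz", "tmp", "thing"}

def find_slop(source: str) -> list[str]:
    """Flag empty except blocks and placeholder identifiers."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Empty except block: body is a single `pass`.
        if isinstance(node, ast.ExceptHandler):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                findings.append(f"line {node.lineno}: empty except block")
        # Placeholder variable names.
        if isinstance(node, ast.Name) and node.id in PLACEHOLDER_NAMES:
            findings.append(f"line {node.lineno}: placeholder name '{node.id}'")
    return findings

sample = "try:\n    x = foo\nexcept Exception:\n    pass\n"
print(find_slop(sample))
```

Rules like this are cheap to run per-file, which is presumably why the static pass happens before any LLM call.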

roast-my-code scans your repo with static analysis rules specifically tuned for AI-generated code patterns, then calls an LLM (Groq free tier by default, so $0 to try) to generate a brutal, specific roast referencing your actual file names and issues.
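As a rough illustration of that scan-then-roast flow: the static-analysis findings get packed into a prompt that names real files and line numbers, and the prompt goes to the LLM. The prompt wording and data shapes below are my guesses, and the actual Groq API call is omitted:

```python
def build_roast_prompt(repo_name: str, findings: list[dict]) -> str:
    """Turn analyzer findings into a prompt asking for a specific roast."""
    issues = "\n".join(
        f"- {f['file']}:{f['line']} {f['message']}" for f in findings
    )
    return (
        f"Roast the repository '{repo_name}'. Be brutal and specific, "
        f"referencing these actual files and issues:\n{issues}"
    )

prompt = build_roast_prompt("demo-repo", [
    {"file": "app.py", "line": 12, "message": "empty except block"},
    {"file": "utils.py", "line": 3, "message": "placeholder name 'foo'"},
])
# `prompt` would then be sent to a chat-completions endpoint.
print(prompt)
```

Grounding the roast in concrete file:line findings is what keeps the LLM output specific rather than generic.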

Stack: Python + Typer + Rich + Jinja2. The HTML report exports a shareable shields.io badge with your score.
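The shareable badge can be built as a plain shields.io static-badge URL. A minimal sketch — the label text and color thresholds are my assumptions, not necessarily what the report uses:

```python
from urllib.parse import quote

def badge_url(score: int) -> str:
    """Build a shields.io static badge URL for a roast score out of 100."""
    # Color thresholds are illustrative guesses.
    color = "red" if score < 50 else "yellow" if score < 80 else "brightgreen"
    label = quote("roast score")  # spaces must be percent-encoded
    return f"https://img.shields.io/badge/{label}-{score}%2F100-{color}"

print(badge_url(12))
```

The resulting URL can be dropped straight into a README image tag, which is what makes the score shareable.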

Try it: pip install roast-my-code

Would love to hear what patterns you'd add — especially if you've spotted AI slop in the wild that my analyzer doesn't catch yet.

ksaj | 1 day ago

I see what you did there.

I'm terrible about placeholder variables and functions. This thing might rip me to shreds.

Rohan51 | 1 day ago

Haha — it's surprisingly therapeutic to get roasted by your own tool. I ran it on the repo itself and it called out my own placeholder names in the test fixtures. The fallback roast lines weren't safe either.

Let me know what score you get if you try it! The worst I've seen so far was a 12/100 on a legacy codebase with 200+ TODOs.