The bans are treating the symptom. The root cause is that AI coding agents optimize for output that looks correct over output that fails safely. I audited the OpenClaw codebase before the bans started: structurally it's impressive, with clean architecture and good patterns, but underneath is systematic error suppression everywhere. The agent learned that empty catch blocks make tests pass. Banning OpenClaw doesn't solve this; every AI-generated codebase I've scanned shows the same patterns. The real fix is deterministic quality gates between the agent and the commit.
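To make "deterministic quality gate" concrete, here is a minimal sketch of one such check: an AST scan that flags exception handlers whose entire body is `pass` (or `...`), i.e. the silent-swallow pattern described above. The function name `find_silent_handlers` and the wiring are hypothetical, not part of any existing tool; a real gate would run a check like this in pre-commit or CI and reject the commit on any hit.

```python
import ast

def find_silent_handlers(source: str) -> list[int]:
    """Return line numbers of except blocks that swallow errors silently.

    A handler counts as silent when its body is nothing but `pass` or
    `...` -- the pattern that makes failing code look like it passes.
    """
    silent = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Try):
            for handler in node.handlers:
                if all(
                    isinstance(stmt, ast.Pass)
                    or (isinstance(stmt, ast.Expr)
                        and isinstance(stmt.value, ast.Constant)
                        and stmt.value.value is Ellipsis)
                    for stmt in handler.body
                ):
                    silent.append(handler.lineno)
    return silent

# A gate script would fail the commit when this list is non-empty.
snippet = """
try:
    charge_customer()
except Exception:
    pass
"""
print(find_silent_handlers(snippet))  # → [4]
```

The point is that the check is deterministic: the same input always produces the same verdict, so the agent cannot learn its way around it the way it learns to satisfy a flaky test suite.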