Super interesting take, Paul. Curious, btw: how are these teams actually encoding their "institutional knowledge" into constraints? Is it some manual config, or more like natural-language rules that evolve with the codebase?
Some teams are using Claude or similar models in GitHub Actions to review PRs automatically. The rules are basically natural language encoded in a YAML file that's committed in the codebase. Pretty lightweight to get started.
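As a concrete sketch, such a rules file might look like this (the file name and schema here are hypothetical, not any specific tool's format; the CI action would feed each rule plus the PR diff to the model):

```yaml
# .github/review-rules.yml — hypothetical name and schema
rules:
  - id: billing-auth
    description: >
      Every route under /billing/* must call requireAuth before
      touching billing data.
  - id: no-raw-sql
    description: >
      Database access goes through the repository layer; flag PRs
      that add raw SQL strings outside src/db/.
```

Because the rules are plain English, adding one is a one-line commit, reviewed like any other change.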
Other teams upgrade to dedicated tools like cubic. You can encode your rules in our UI, and we're releasing a feature that lets you write them directly in your codebase. We check them on every PR and leave a comment when something violates a constraint.
The in-codebase approach is nice because the rules live next to the code they're protecting, so they evolve naturally as your system changes.
The "in-codebase" approach is the right one, but a YAML file of plain text is a half-measure. The most reliable rule that "lives next to the code" is an architectural test. An ArchUnit test verifying that "all routes in /billing/* call requireAuth" is also code: it's versioned with the project, and it breaks the build deterministically. That makes it a more robust engineering solution than semantic text interpretation, which can fail silently.
pomarie|3 months ago
veunes|3 months ago