top | item 46656759

Show HN: I built a tool to assist AI agents to know when a PR is good to go

45 points | dsifry | 1 month ago | dsifry.github.io

I've been using Claude Code heavily, and kept hitting the same issue: the agent would push changes, respond to reviews, wait for CI... but never really know when it was done.

It would poll CI in loops. Miss actionable comments buried among 15 CodeRabbit suggestions. Or declare victory while threads were still unresolved.

The core problem: no deterministic way for an agent to know a PR is ready to merge.

So I built gtg (Good To Go). One command, one answer:

    $ gtg 123
    OK PR #123: READY
       CI: success (5/5 passed)
       Threads: 3/3 resolved

It aggregates CI status, classifies review comments (actionable vs. noise), and tracks thread resolution. Returns JSON for agents or human-readable text.
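The aggregation step might look something like this minimal sketch (the field names and structure here are invented for illustration; this is not gtg's actual schema):

```python
import json

# Toy aggregation: combine CI check results and review-thread state into
# one readiness verdict, serialized as JSON for an agent to parse.
# (Illustrative only; the field names are invented, not gtg's schema.)
def readiness(ci_checks, threads):
    passed = sum(1 for s in ci_checks.values() if s == "success")
    resolved = sum(1 for t in threads if t["resolved"])
    return {
        "ready": passed == len(ci_checks) and resolved == len(threads),
        "ci": f"{passed}/{len(ci_checks)} passed",
        "threads": f"{resolved}/{len(threads)} resolved",
    }

status = readiness({"lint": "success", "tests": "success"},
                   [{"resolved": True}, {"resolved": True}])
print(json.dumps(status))  # all green -> "ready": true
```

The point of a single structured answer like this is that an agent can branch on one boolean instead of re-interpreting raw CI and review pages each loop.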

The comment classification is the interesting part — it understands CodeRabbit severity markers, Greptile patterns, Claude's blocking/approval language. "Critical: SQL injection" gets flagged; "Nice refactor!" doesn't.
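A hedged sketch of what pattern-based classification could look like (the regexes below are made-up examples, not gtg's actual rule set, which handles CodeRabbit, Greptile, and Claude markers in more depth):

```python
import re

# Invented example patterns standing in for per-reviewer severity rules.
ACTIONABLE = [re.compile(r"^(critical|blocking|security)\b", re.I),
              re.compile(r"\bmust (fix|change)\b", re.I)]
NOISE = [re.compile(r"^(nice|great|lgtm)\b", re.I),
         re.compile(r"\bnitpick\b", re.I)]

def classify(comment):
    if any(p.search(comment) for p in ACTIONABLE):
        return "actionable"
    if any(p.search(comment) for p in NOISE):
        return "noise"
    return "ambiguous"  # suggestions/questions: needs human judgment

classify("Critical: SQL injection in query builder")  # -> "actionable"
classify("Nice refactor!")                            # -> "noise"
```

A three-way split (actionable / noise / ambiguous) matters here: anything the rules can't settle deterministically gets escalated rather than guessed at.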

MIT licensed, pure Python. I use this daily in a larger agent orchestration system — would love feedback from others building similar workflows.

35 comments


rootnod3|1 month ago

Sorry, so the tool is now even circumventing human review? Is that the goal?

So the agent can now merge shit by itself?

Just let the damn thing push to prod by itself at this point.

danenania|1 month ago

I don’t think “ready to merge” necessarily means the agent actually merges. Just that it’s gone as far as it can automatically. It’s up to you whether to review at that point or merge, depending on the project and the stakes.

If there are CI failures or obvious issues that another AI can identify, why not have the agent keep going until those are resolved? This tool just makes that process more token efficient. Seems pretty useful to me.

ljm|1 month ago

Someone’s gonna think about wiring all this up to Linear or Jira, and there’ll be a whole new set of vulnerabilities created from malicious bug reports.

blutoot|1 month ago

At scale, I don't see a net negative in AI merging "shit by itself" if the developer (or the agent) ensures sufficient e2e, integration, and unit test coverage prior to every merge, and if in return I get my team cranking out features at 10x speed.

The reality is that probably 99.9999% of code bases on this earth (but this might drop soon, who knows) pre-date LLMs, and organizing them so that coding agents can produce consistent results from sprint to sprint will require big plumbing work from all dev teams. That will include refactoring, documentation improvements, building consensus on architectures, and of course reshaping the testing landscape. So SWEs will have a lot of dirty work to do before we reach the aforementioned "scale".

However, a lot of platforms are being built from the ground up today, in a post-CC (Claude Code) era. And they should be ready to hit that scale today.

glemion43|1 month ago

Man, if you are so frustrated by AI, just stop reading articles about it if you don't even take the time to read them properly.

And yes, there are plenty of use cases where AI code doesn't hurt anyone even if it gets merged automatically...

See it as an interesting new field of R&D...

literalAardvark|1 month ago

In some workflows it's helpful for the full loop to be automated so that the agent can test if what's done works.

And you can do a more exhaustive test later, after the agents are done running amok to merge various things.

squeaky-clean|1 month ago

It sounds like the goal is to get the code to human review without it being obviously broken in CI; right now the agent has no way of knowing whether that's the case.

tayo42|1 month ago

No.

The linked page explains how this fits into a development workflow, e.g.:

> A reviewer wrote “consider using X”… is that blocking or just a thought?

> AMBIGUOUS - Needs human judgment (suggestions, questions)

baxtr|1 month ago

I’m not saying this is one, but if I were a malicious state actor, that’s exactly the kind of thing I’d like to see in widespread use.

dsifry|1 month ago

No, it just prepares the PR - it doesn't automatically merge. That would be very dangerous, imho!

philipp-gayret|1 month ago

Very interesting! This has a gem in the documentation: using the tool itself as a CI check. I hadn't considered unresolved comments, whether from a person or from CodeRabbit or a similar tool, counting as a CI status failure. That's an excellent idea for AI-driven PRs.

On a personal note: I hate seeing LLM output used to advertise a project. If you have something to share, have the decency to type it out yourself, or at least redact the nonsense from it.
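A CI step could gate on gtg's machine-readable output. A rough sketch, assuming a JSON mode with a top-level "ready" boolean (both are my assumptions about the CLI, not verified):

```python
import json

def gate(gtg_json: str) -> int:
    """Exit code for a CI step, derived from gtg's JSON output.

    Assumes a top-level "ready" boolean; the real schema may differ.
    """
    status = json.loads(gtg_json)
    if status.get("ready"):
        return 0  # check passes
    print("PR not ready:", status)
    return 1      # nonzero exit fails the CI check

gate('{"ready": true}')                              # -> 0
gate('{"ready": false, "threads": "1/3 resolved"}')  # -> 1, and prints why
```

In a workflow step you'd feed it something like the output of `gtg 123 --json` (the `--json` flag is itself an assumption) and use the return value as the step's exit code, so unresolved review threads show up as a red check alongside the test suite.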

dsifry|1 month ago

Lol, I thought it did a reasonably good job, but to each their own - this was the difference between releasing the project with decent documentation so others could use it, or not releasing it and just using it internally. :)

joshribakoff|1 month ago

I dislike the idea of coupling my workflow to SaaS platforms like GitHub or CodeRabbit. The fact that you still have to create local tools is a selling point for just doing it all “locally”.

furyofantares|1 month ago

Then you had the LLM write the blog post as well as your post on HN.

forgotpwd16|1 month ago

That repo is quintessentially surreal. AI-written code, published in AI-made PRs, reviewed by multiple AI bots (one of which is the same model that wrote the code and made the PR; maybe the others are too, just accessed via a third-party vendor), merged by AI (assuming dogfooding).

aaronbrethorst|1 month ago

“The problem no one talks about” is a bit of breathless LLM spew, and an even better tell than an em dash.

nyc1983|1 month ago

I don’t understand how this provides anything beyond using GitHub status checks and branch protections to require conversations to be resolved before merging. Combined with the GitHub CLI, that gives agents everything they need to achieve the same result. More AI slop on top of AI slop. At this point, when seeing these kinds of posts, I feel like Edward Norton in front of the copy machine.

dsifry|1 month ago

Some GitHub comments are marked as actionable, some have threads and suggestions, and some are just suggestions or nitpicks. This gives you a deterministic, reliable red/green signal that you can use to enforce your policy. Give it a try and you will see how much more reliable it is than using a nondeterministic agent, especially for complex reviews!

mcolley|1 month ago

Super interesting. Any particular reason you didn't try to solve these issues prior to pushing, with hooks and subagents?

dsifry|1 month ago

I did! The issue, however, is having a clear, deterministic method of defining when the code review is 'done'. Hooks can fire off subagents, but they are non-deterministic and often miss vital code review comments - especially ones made inline, or marked as 'Out of PR Scope' or 'Out of range of the file' - which are often the MOST important comments to address!

So gtg builds all of that in and deterministically determines whether there are any actionable comments. That lets you block the agent from moving forward until every actionable comment has been thoroughly reviewed, acted upon, or acknowledged, at which point the state changes and the PR is allowed to merge.
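That gating loop might be sketched like so (`check_pr` and `fix` are placeholders standing in for "run gtg" and "let the agent address the findings"; none of this is gtg's actual API):

```python
# Placeholder loop: keep the agent iterating until the checker reports no
# outstanding actionable comments, with a bound so it can't spin forever.
def drive_until_ready(check_pr, fix, max_rounds=10):
    for _ in range(max_rounds):
        status = check_pr()        # e.g. shell out to gtg and parse JSON
        if status["ready"]:
            return True
        fix(status["actionable"])  # agent reviews / acts / acknowledges
    return False                   # still blocked: escalate to a human

# Simulated run: the checker reports ready on the second pass.
states = iter([{"ready": False, "actionable": ["SQL injection"]},
               {"ready": True, "actionable": []}])
drive_until_ready(lambda: next(states), lambda items: None)  # -> True
```

The deterministic check is what makes the loop safe to automate: the exit condition is a concrete state transition, not the agent's own judgment that it is finished.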

joshuanapoli|1 month ago

This looks nice! I like the idea of providing more deterministic feedback and more or less forcing the assistant to follow a particular development process. Do you have evidence that gtg improves the overall workflow? I think there is a trade-off between the risk of getting stuck (iterating without ever reaching gtg-green) and reaching perfect 100% completion.

dsifry|1 month ago

I found that it has improved overall code quality significantly, at the cost of somewhat slower velocity. But it has meant fewer interruptions where the AI is just waiting for me, or saying "Everything is ready!" only for me to find that CI/CD failed or there were clearly outstanding comments/issues.