
3kkdd | 1 month ago

I'm sick and tired of these empty posts.

SHOW AN EXAMPLE OF YOU ACTUALLY DOING WHAT YOU SAY!

alt187|1 month ago

There's no example because OP has never done this, and never will. People lie on the internet.

timcobb|1 month ago

I've never done this because I haven't felt compelled to: I want to review my own code. But I imagine it works okay and isn't hard to set up, just by asking Claude to set it up for you...

senordevnyc|29 days ago

What? People do this all the time. Sometimes manually, by invoking another agent with a different model and asking it to review the changes against the original spec. I just set up some reviewer/verifier subagents in Cursor that I can invoke with a slash command. I use Opus 4.5 as my daily driver, but I have reviewer subagents running Gemini 3 Pro and GPT-5.2-codex, and they each review the plan as well, and then the final implementation against the plan. Both sometimes identify issues, and Opus then integrates that feedback.
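A minimal sketch of the manual version of this, assuming each reviewer CLI accepts a prompt non-interactively (the specific commands and flags vary per tool and are assumptions here, not the commenter's actual setup):

```shell
# fan_out_review: pipe a diff (read from stdin) to each reviewer command
# passed as an argument, printing each reviewer's feedback under a header.
fan_out_review() {
  diff=$(cat)
  for reviewer in "$@"; do
    printf '== %s ==\n' "$reviewer"
    # $reviewer is deliberately unquoted so "gemini -p" splits into a
    # command plus its flag
    printf 'Review this diff against the original spec:\n%s\n' "$diff" | $reviewer
  done
}

# With real CLIs this might look like (flags are assumptions; check --help):
#   git diff main...HEAD | fan_out_review "gemini -p" "codex exec"
```

Wrapping this in a slash command or subagent definition just automates the same fan-out and hands the combined feedback back to the primary model.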

It's not perfect, so I still review the code myself, but it decreases the number of defects I then have to send back to the AI to correct.

cheema33|27 days ago

The setup is much simpler than you might think. I have 4 CLI tools I use for this: Claude Code, Codex, Copilot, and Cursor CLI. I asked Claude Code to create a code reviewer "skill" that uses the other 3 CLI tools to review changes in detail and provide feedback. I then ask Claude Code to use this skill to review any changes in code, or even to review plan documents. It is very, very effective. Is it perfect? No. Nothing is. But, as I stated before, it produces results that are better than what an average developer sends in for PR review. Far, far better, in my experience.
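For reference, a Claude Code skill is essentially a markdown file with frontmatter that the agent loads on demand. A sketch of what such a reviewer skill might contain, assuming the standard `.claude/skills/<name>/SKILL.md` layout (the file contents here are an illustration, not the commenter's actual skill):

```markdown
<!-- .claude/skills/multi-cli-review/SKILL.md (illustrative sketch) -->
---
name: multi-cli-review
description: Review code changes or plan documents with the Codex, Copilot,
  and Cursor CLIs, then summarize the combined feedback.
---

1. Collect the changes (e.g. the branch diff), or read the plan document the
   user names.
2. Send the material to each external reviewer CLI; exact invocation flags
   depend on the installed tools.
3. Deduplicate the three reports, rank issues by severity, and present one
   consolidated list.
4. Flag only concrete defects; do not gold-plate.
```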

In addition to that, we use the CodeRabbit plugin on GitHub to perform a 4th code review. And we tell all of our agents not to get into gold-plating mode.

You can choose not to use modern tools like these to write software. You can also choose to write software in binary.

Foreignborn|1 month ago

these two posts (the parent and then the OP) seem equally empty?

by level of compute spend, it might look like:

- ask an LLM in the same query/thread to write code AND tests (not good)

- ask the LLM in different threads (meh)

- ask the LLM in a separate thread to critique said tests (too brittle, not following testing guidelines, testing implementation rather than behavior, etc.). Fix those. (decent)

- ask the LLM to spawn multiple agents to review the code and tests. Fix those. Spawn agents to critique again. Fix again.

- Do the same as above, but spawn agents from different families (so Claude calls Gemini and Codex).

---

These are usually set up as slash commands like /tests or /review so you aren't doing it manually. Since a run can take some time, people often work on multiple features at once.
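In Claude Code, such a slash command is itself just a markdown prompt file. A sketch under the assumption of the standard `.claude/commands/` layout, where `$ARGUMENTS` expands to whatever the user types after the command (the prompt text is illustrative):

```markdown
<!-- .claude/commands/review.md — makes /review available in the session -->
Spawn review subagents for the current changes: $ARGUMENTS

1. Gather the branch diff and the spec it was written against.
2. Have each subagent critique the tests for brittleness and for testing
   implementation details rather than behavior.
3. Apply the fixes, then spawn a fresh round of critics on the result.
4. Stop when a round produces no new substantive findings.
```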