kneel25|6 days ago

> After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.

I feel like you just know it’s doomed. What this is saying is “I didn’t want to and cannot review the code it generated.” Asking models to find mistakes never works for me: they’ll flag obvious patterns, and a tendency toward security mistakes, but not deep logical errors.
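
For a concrete sense of what I mean by a deep logical error: something like the classic binary-search midpoint overflow sails right through that kind of review. A contrived Rust sketch, not from the article:

    // Passes review and small tests, but (lo + hi) can overflow for
    // very large slices, so the midpoint wraps and the search breaks.
    // The standard fix is lo + (hi - lo) / 2.
    fn binary_search(xs: &[i64], target: i64) -> Option<usize> {
        let (mut lo, mut hi) = (0usize, xs.len());
        while lo < hi {
            let mid = (lo + hi) / 2; // overflow lurks here
            if xs[mid] < target { lo = mid + 1 } else { hi = mid }
        }
        if lo < xs.len() && xs[lo] == target { Some(lo) } else { None }
    }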

zamadatix|6 days ago

They did somehow use this as part of their approach to get to 0 regressions across 65k tests, with no performance regressions and identical output for the AST and bytecode, though. How much manual review went into the hundreds of rounds of prompt steering isn't stated, but I don't think the process could have caught zero deep logical errors along the way and still achieved those results.
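
(That process is essentially differential testing: run both pipelines over the same corpus and require byte-identical bytecode. A minimal sketch of the idea; the compile_* helpers are made-up stand-ins, since the post doesn't show their actual harness:)

    // Differential check, roughly as described: both pipelines run
    // over the same corpus, byte-identical bytecode is required.
    // Both compile_* functions are hypothetical stand-ins here.
    fn compile_cpp(source: &str) -> Vec<u8> { source.bytes().collect() }  // legacy C++ pipeline
    fn compile_rust(source: &str) -> Vec<u8> { source.bytes().collect() } // new Rust port

    fn main() {
        let corpus = ["1 + 2;", "let x = 3;"]; // stand-in for the 65k tests
        for src in corpus {
            assert_eq!(compile_cpp(src), compile_rust(src), "bytecode diverged on {src}");
        }
        println!("0 regressions across {} tests", corpus.len());
    }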

The part that concerns me is whether this will actually come in time or not:

> The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.

Of course, it wouldn't be the first time Andreas delivered more than I expected :).

kneel25|6 days ago

That’s convincing and impressive, but I wouldn’t say it proves it can spot deep errors. If it’s incredible at porting files and comparing against the source of truth, then its ability to find complicated issues isn’t really being tested, imo.

herrkanin|6 days ago

Your argument is just as applicable to human code reviewers. Obviously, having others review the code will catch issues you would never have thought of. That includes agents as well.

kneel25|6 days ago

They’re not equal. Humans are capable of actually understanding the code and looking ahead at the consequences of the decisions made, whereas an LLM can’t. One is a review; the other is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning.)

DetroitThrow|6 days ago

>Your argument is just as applicable on human code reviewers.

The tests many of us use to gauge how capable a model or harness is are usually based on whether it can spot logical errors readily visible to humans.

Hence: https://news.ycombinator.com/item?id=47031580

Fervicus|6 days ago

With humans, though, I wouldn't have to review 20k lines of code at once.

u_sama|6 days ago

That is what the testing suite is there to check, no?

layer8|6 days ago

No. Testing generally can only falsify, not verify. It’s complementary to code review, not a substitute for it.
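
A toy example: every assert below passes, and the function is still wrong.

    // Tests sample the input space; they can't exhaust it. All of
    // these asserts pass, yet abs() is wrong for i32::MIN: negating
    // it overflows (panics in debug builds, wraps back to i32::MIN
    // in release, which is still negative).
    fn abs(x: i32) -> i32 {
        if x < 0 { -x } else { x }
    }

    #[cfg(test)]
    mod tests {
        #[test]
        fn all_green_proves_nothing() {
            assert_eq!(super::abs(-3), 3);
            assert_eq!(super::abs(0), 0);
            assert_eq!(super::abs(7), 7);
        }
    }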

kneel25|6 days ago

You mean the testing suite generated by AI?

cardanome|6 days ago

Yeah, I've lost all interest in the Ladybird project now that it's AI slop.

No one wants to work with this generated, ugly, unidiomatic ball of Rust, other than other people using AI. So your dependency on AI grows and grows. It's a vicious trap.