eunoia|1 month ago
When I investigated I found the docs and implementation are completely out of sync, but the implementation doesn’t work anyway. Then I went poking on GitHub and found a vibed fix diff that changed the behavior in a totally new direction (it did not update the documentation).
Seems like everyone over there is vibing and no one is rationalizing the whole.
klodolph|1 month ago
I can’t understand how people would run agents 24/7. The agent is producing mediocre code and is bottlenecked on my review & fixes. I think I’m only marginally faster than I was without LLMs.
gpm|1 month ago
And specifically: Lots of checks for impossible error conditions - often then supplying an incorrect "default value" in the case of those error conditions which would result in completely wrong behavior that would be really hard to debug if a future change ever makes those branches actually reachable.
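A hypothetical sketch of that pattern (names and values are made up, not from any real codebase): a branch guards an "impossible" condition and silently substitutes a default, which is exactly the kind of thing that fails invisibly once the branch becomes reachable.

```python
def lookup_discount(tier: str) -> float:
    """Map a customer tier to a discount rate."""
    rates = {"basic": 0.0, "plus": 0.05, "pro": 0.10}
    # "Impossible" branch: callers only ever pass a valid tier today.
    if tier not in rates:
        # Silently returning a made-up default hides the bug if a new
        # tier is added later without updating this table.
        return 0.0
    return rates[tier]

def lookup_discount_strict(tier: str) -> float:
    """Safer variant: fail loudly, so a formerly unreachable branch
    becoming reachable is immediately obvious instead of silently
    producing wrong numbers."""
    rates = {"basic": 0.0, "plus": 0.05, "pro": 0.10}
    if tier not in rates:
        raise ValueError(f"unknown tier: {tier!r}")
    return rates[tier]
```

The first version passes every test that exists today; the second turns the hard-to-debug silent default into an immediate, searchable error.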
heliumtera|1 month ago
Claude Code's creator literally brags about running 10 agents in parallel 24/7. It doesn't just seem like it; they've confirmed it, as if it were the most positive thing ever.
TrainedMonkey|1 month ago
Full disclosure - I am a heavy codex user and I review and understand every line of code. I manually fight the spurious tests it tries to add by pointing out that a similar one already exists and we can get the same coverage with +1 LOC instead of +50. It's exhausting, but my personal productivity is still way up.
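The "+1 LOC vs +50" point can be sketched with a hypothetical pytest example (the function and test names are illustrative only): instead of a new standalone copy of an existing test, a new case is one line in an existing parameter list.

```python
import pytest

def slugify(title: str) -> str:
    """Toy function under test: lowercase and replace spaces with dashes."""
    return title.lower().replace(" ", "-")

@pytest.mark.parametrize("title,expected", [
    ("Hello World", "hello-world"),
    ("Already-Slugged", "already-slugged"),
    # Covering a new case is one added line here,
    # not a fresh 50-line near-duplicate test function.
    ("Mixed CASE Title", "mixed-case-title"),
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```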
I think the future is bright because training / fine-tuning taste, dialing down agentic frameworks, introducing adversarial agents, and increasing model context windows all seem attainable and stackable.
einpoklum|1 month ago
That is not an uncommon occurrence in human-written code as well :-\
tobyjsullivan|1 month ago
> Automation doesn't just allow you to create/fix things faster. It also allows you to break things faster.
https://news.ycombinator.com/item?id=13775966
Edit: found the original comment from NikolaeVarius