top | item 47153364


wolfejam | 5 days ago

Well said. And it's potentially a 7-point swing when you think about it — +4% with good human-written context vs. -3% with LLM-generated noise. That's a significant delta coming purely from the quality of the information.

The real value is exactly what you described: the tribal knowledge, the "we tried X and it broke because Y" stories, the constraints that live in someone's head and nowhere in the code. LLM-generated files miss all of this because the model is just restating what it can already see in the codebase. Of course that doesn't help.
