top | item 45426537


mcoliver | 5 months ago

The counterpoint to this is that LLMs can not only write code, they can comprehend it! They are incredibly useful for getting up to speed on a new code base and transferring comprehension from machine to human. This spans all job functions. It's still immature in its accuracy, but rapidly approaching a point where people with an aptitude for learning and for asking the right questions have a decent shot at completing tasks outside their domain expertise.

int_19h | 5 months ago

This would be great if said comprehension were reliable. But I've seen tools designed to "understand" and document repos hallucinate many times, often coming up with a plausible but completely wrong explanation of how things actually work, or, even more subtly, of why they work the way they do.

And while I could catch that because I wrote the code in question and know the answers to those questions, others don't have that benefit. The notion that someone new to the codebase - especially a relatively inexperienced dev - would use AI "documentation" as a starting point is honestly quite terrifying, and I don't see how it could possibly end with anything other than garbage out.

roncesvalles | 5 months ago

I agree. Almost all of the value I get out of LLMs comes when they help me understand something, as opposed to when they help me produce something.

I'm not sure how or why the conversation shifted from LLMs helping you "consume" to LLMs helping you "produce". Maybe there's not as much money in having an Algolia-on-steroids as there is in convincing execs that LLMs will replace people's jobs?