romaniv | 5 months ago

No, this is not a pre-existing problem.

In the past, the problem was about transferring a mental model from one developer to another. This applied even when people copy-pasted poorly understood chunks of example code from StackOverflow: there was specific intent, and some sort of idea of why that particular chunk of code should work.

With LLM-generated software there can be no underlying mental model of the code at all. None. There is nothing to transfer or infer.

captainkrtek | 5 months ago

It’s even worse because, with a solution an LLM produces, it’s not obvious whether the approach was deliberately chosen by the user and favored over alternatives for some reason, or whether it was just whatever happened to be output and “works”.

I’ve had to give feedback to some junior devs who used quite a bit of LLM-generated code in a PR, but didn’t stop to question whether we really wanted that code to be “ours” versus using a library. It was apparent they hadn’t considered alternatives and just went with what the model produced.