top | item 35952454

grumpymouse | 2 years ago

It’s probably the same deal as with LLMs generating code: it can crank out something that’s probably broken, and the person using the LLM needs to know how to code to see where it’s broken. Companies might be able to reduce the headcount of programmers / copywriters / artists but certainly not replace them right now (or possibly ever).

ptdn|2 years ago

I suspect that collaboration between a human programmer and an LLM doesn't require strong programming skills, but it does require strong debugging fundamentals. A month ago I had ChatGPT write a function in Racket from just a text description: take two lists of symbols of arbitrary length (but only if both lists are the same size) and construct a new list that, at each position, selects one element at random from the two input lists. There was some other logic in there, too, based on the way I'd done the structs.
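For reference, a minimal sketch of the behavior I described. The original was Racket; this is a Python translation, and `random_merge` is a hypothetical name, not what ChatGPT produced:

```python
import random

def random_merge(xs, ys):
    """At each position, pick one element at random from xs or ys.

    Both input lists must be the same length.
    """
    if len(xs) != len(ys):
        raise ValueError("both lists must be the same length")
    return [random.choice((a, b)) for a, b in zip(xs, ys)]
```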

ChatGPT wrote the function perfectly on the first shot, but then I realized it was only working most of the time -- it turned out ChatGPT had made a really obvious off-by-one error in the loop, and it was failing on roughly 1/n attempts, where n is the size of the list.
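That failure mode, a random index occasionally landing one past the end of the list, can be sketched in Python like this (hypothetical names; the actual bug was in Racket loop logic):

```python
import random

def pick_buggy(xs):
    # Off-by-one: random.randint's upper bound is INCLUSIVE, so the index
    # can equal len(xs), raising IndexError on roughly 1 in n+1 calls.
    return xs[random.randint(0, len(xs))]

def pick_fixed(xs):
    # Correct: valid indices run from 0 through len(xs) - 1.
    return xs[random.randint(0, len(xs) - 1)]
```

The bug is invisible on any single lucky run, which is exactly why you need to test the output rather than trust that it "looked right" once.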

It's exactly the same as how ChatGPT usually knows what formulas and approaches to take when solving graduate-level mathematics, and its reasoning about the problem is pretty good, but it can't get the right answer because it can't add integers reliably.

gtirloni|2 years ago

> strong debugging fundamentals

Something that experienced (and expensive) programmers are good at, incidentally.

alfalfasprout|2 years ago

Even if the code isn't broken, the issue is that the vast majority of code isn't written in a vacuum. Refactoring, rearchitecting, etc. is quite tricky.

And writing code is the easy part. Architecting is where things get tricky and there are a lot of subjective decisions to be made. That's where soft skills become really important.

sirsinsalot|2 years ago

> it can crank out something that's probably broken, and the person using the LLM needs to be able to know how to code to see where it's broken.

Same with my junior devs

maaanu|2 years ago

I see this claim so often, and I fail to understand it every time... What kind of junior devs do you hire where this is the case? And what kind of tasks do you give them?