top | item 35843205


rapiz | 2 years ago

Every post that claimed to use ChatGPT for non-trivial tasks turned out to involve non-trivial human intervention.

> (from the original article) In fact, I found it better to let ChatGPT generate a toy-ish version of the code first, then let it add things to it step-by-step. This resulted in much better output than, say, asking ChatGPT to generate production-quality code with all features in the first go. This also gave me a way to break down my requirements and feed them one at a time - as I was also acting as a code-reviewer for the generated output, and so this method was also easier for me to work with.

It takes a human who really knows the area to instruct ChatGPT, review the output, point out silly mistakes in the generated nonsense, and start the next iteration. Curated posts like this always cut out most of the conversation and the failed attempts, then string together the successful attempts and their high-quality outputs. Sure, it will be helpful as a super-IntelliSense. But not as helpful as the post suggests.

I've tried to do something like what's in the post, but I quickly got bored with waiting for output, reviewing it, and all the iterations. One important aspect of programming is that reading code is not necessarily easier than writing it. And in my case, it's more painful.
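The generate-review-extend workflow described in the quoted article (start with a toy version, then feed one requirement per iteration, reviewing each draft) amounts to a simple driver loop. A minimal sketch of that loop, with `ask_model` as a hypothetical stand-in for whatever chat API is used (here it just echoes the request so the code runs offline):

```python
def ask_model(messages):
    """Hypothetical stand-in for a chat-completion call.
    A real version would send `messages` to an LLM API; this one
    just echoes the latest requirement so the loop is runnable."""
    return f"# draft code satisfying: {messages[-1]['content']}"

def iterate_on_code(requirements):
    """Feed requirements one at a time, extending a toy version
    step-by-step, instead of asking for production-quality code
    with all features in one go."""
    messages = [{"role": "system", "content": "You are a coding assistant."}]
    draft = ""
    for req in requirements:
        messages.append({"role": "user", "content": req})
        draft = ask_model(messages)  # generate or extend the current draft
        messages.append({"role": "assistant", "content": draft})
        # The human review step happens here, before the next requirement:
        # reject/redo the draft, or move on if it passes review.
    return draft

final = iterate_on_code([
    "write a toy URL shortener",
    "add persistence",
    "add input validation",
])
```

The loop structure makes rapiz's complaint concrete: every pass through the body needs a human review before the next requirement goes in, so the iteration speed is bounded by the reviewer, not the model.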

seunosewa|2 years ago

ChatGPT is a junior developer whose knowledge is broad but shallow.

killthebuddha|2 years ago

IMO this leaves out some salient details. For example, I'd say ChatGPT is a very, very good junior developer. The kind of junior developer that loves computer science, has been screwing around with miscellaneous algorithms and data structures its whole life, has a near-perfect memory, and is awake 24/7/365, but has never had to architect a data-intensive system, write future-proof code, or write code for other developers. Of course, these last three things are a big deal, but the rest of the list makes for a ridiculously useful teammate.

zerr|2 years ago

The worst part is that it doesn't know when it doesn't know, so it makes up garbage.

pojzon|2 years ago

Now, who is going to mentor real human junior developers? Because they won't progress by themselves (or not many will).

What's the incentive for companies to invest in junior developers now?

Sparkyte|2 years ago

Doubt it will go beyond that either. It's the equivalent of a spell checker and a calculator having a baby.

It will take the world by storm and change the way we do things. But it won't change the work that needs to be done. Just the way we consume it.

wseqyrku|2 years ago

Disagree. GPT is a senior who knows it all but doesn't know where to start unless you instruct it precisely.

visarga|2 years ago

> Every post that claimed using ChatGPT to achieve non-trivial tasks turned out to have non-trivial human intervention.

That means full autonomy has been reached in 0% of applications. How do we go from 0 to 1? By the way, until we remove the human from the loop, the iteration speed is still human speed, and the number of AI agents <= the number of human assistants.

The productivity boost from current-level AI is just 15%, as reported in some papers. The percentage of code written by Copilot is about 50%, but it just helps with writing out the easy parts and not much with debugging, designing, releasing, etc., which take the bulk of the time - so it's probably back to a 15% boost.
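The gap between "writes 50% of the code" and "only a 15% boost" is essentially Amdahl's law: accelerating only the typing part caps the overall gain by the share of time spent typing. A back-of-envelope sketch - the fractions below are illustrative assumptions, not figures from any study:

```python
def overall_speedup(fraction_accelerated, speedup):
    """Amdahl's law: total speedup when only a fraction of the
    work is accelerated and the rest runs at the old speed."""
    return 1 / ((1 - fraction_accelerated) + fraction_accelerated / speedup)

# Assume writing code is ~30% of total dev time and an assistant
# doubles the speed of that part; debugging, design, and release
# are untouched.
s = overall_speedup(0.30, 2.0)
print(f"{(s - 1) * 100:.0f}% overall boost")  # prints "18% overall boost"
```

Even an infinitely fast code generator under these assumptions tops out at 1/0.70, about a 43% boost, because the other 70% of the work is unchanged - which is consistent with the ~15% figures visarga cites.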

MattRix|2 years ago

OK, but this is extremely new tech. All of that stuff will get better over time, and the AI will require less and less intervention.

rapiz|2 years ago

I don't think so. Ultimately there's not enough information in prompts to produce "correct" code. And any attempt to deliver more information will result in a worse programming language - or, as it is now, more iterations.