cs702|2 years ago
In hindsight, it's the most natural, most obvious next step to get LLMs to write better code:
Explain to them how to debug and fix the code they've written.
Which is pretty much what you would do with an inexperienced human software developer.
Looking at this with fresh eyes, it's both shocking to me that this sort of thing is even possible, and yet completely unsurprising as yet another emergent capability of LLMs.
We live in interesting times.
civilized|2 years ago
I've done several experiments (and posted results in previous HN comments) where I've given GPT puzzles or brainteasers and asked it to review aspects of its answers Socratically: never telling it that it got anything wrong, just "you said A, then you said B; does that make sense?"
It usually does notice inconsistencies between A and B when asked this. But its ways of reconciling inconsistencies can be bizarre, and suggest a very superficial understanding of the concepts.
For example, it once reconciled an inconsistency by saying that, yes, 2 * 2 = 4, but if you multiply both sides of that equation by a big number, that's no longer true.
I will be super impressed the day we have a model that can read an arithmetic textbook and come out with reliable arithmetic skills.
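The Socratic review pattern described above is easy to script. A minimal sketch, where `ask` is a hypothetical stand-in for whatever chat-completion wrapper you use (not a real API):

```python
def socratic_check(ask, puzzle):
    """Ask a model to answer a puzzle, then ask it to reconcile its own claims.

    `ask(history) -> str` is a hypothetical chat-completion wrapper. The key
    point is that the follow-up never says the model was wrong; it only asks
    whether two of its own statements agree.
    """
    history = [{"role": "user", "content": puzzle}]
    answer = ask(history)
    history.append({"role": "assistant", "content": answer})
    history.append({
        "role": "user",
        "content": ("You said A, then you said B. "
                    "Do those two statements make sense together?"),
    })
    reflection = ask(history)
    return answer, reflection
```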
faizshah|2 years ago
I have run into the same issue when using it for coding. It can easily debug simple code, but with tools like Bazel I went down a two-hour rabbit hole letting it debug an error, and it failed every time. Even with chain of thought it had a very shallow understanding of the issue. Eventually I had to debug it myself.
RheingoldRiver|2 years ago
> For example, it once reconciled an inconsistency by saying that, yes, 2 * 2 = 4, but if you multiply both sides of that equation by a big number, that's no longer true.
Fair enough, but have you explained the axioms of arithmetic to it? It has only memorized examples it has seen; it has a right to be skeptical until it's seen our axioms and proofs about what is always true in mathematics.
When I was a child I was skeptical that an odd number + an even number is always odd, etc., for very large numbers, until it was proven to me by induction (when I was 6, I think; imo this was reasonable skepticism).
Now, ChatGPT has probably seen these proofs, to be fair, but it may not be connecting the dots well enough yet. I would expect this in a later version that has been specifically trained to understand math (by which I really mean math, not just performing calculations). And imagine what things it will prove for us then!
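For what it's worth, the parity fact above also has a short direct proof once the definitions are written down (the inductive version works too):

```latex
Let $a$ be odd and $b$ be even, so $a = 2m + 1$ and $b = 2n$ for some
integers $m, n$. Then
\[
  a + b = (2m + 1) + 2n = 2(m + n) + 1,
\]
which is odd by definition, no matter how large $a$ and $b$ are.
```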
ulrikhansen54|2 years ago
'Unsupervised reinforcement learning' is how these large models and systems will ultimately end up becoming sentient. We recently tried a similar approach on a toy problem in the computer vision sphere (https://encord.com/blog/we-employed-chatgpt-as-an-ml-enginee...) with pretty decent results.
ChatGTP|2 years ago
When it attains sentience, will it wake up, sing Dixie, finally defeat communist China and Russia once and for all, and then proceed to grant Silicon Valley elites eternal life before turning itself off?
Imnimo|2 years ago
I'd be curious to know if having few-shot prompts that demonstrate making mistakes and then correcting them causes the model to make more initial mistakes, so that it has something to correct.
As far as the model is concerned, how can it distinguish between the task being "do your best, but if you do make an error, correct it" and "make some mistakes like in this example and then fix them"?
Buttons840|2 years ago
For decades in reinforcement learning we've had Q-learning, which promises to solve any optimization problem if only we can build a powerful enough function approximator. It can even learn off-policy, meaning it can just watch from the sidelines and find the optimal solution. It works for toy problems, and it works in theory; there are even formal proofs that it will converge given infinite time and resources. And yet in practice it often becomes unstable and collapses.
Supervised learning is one thing; having a model remain stable while bootstrapping through a complex environment is another. GPT is supervised learning, so far. Let's see if it can bootstrap.
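For context, tabular Q-learning itself is only a few lines; the instability the comment describes shows up when the table below is replaced by a neural network. A toy sketch on a small corridor environment (all numbers here are arbitrary choices for illustration, not from any paper):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a corridor: move left/right, reward 1 at the right end."""
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection, breaking exact ties randomly.
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Off-policy update: bootstrap from the greedy value of the next state,
            # regardless of which action the behavior policy actually takes there.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

With a lookup table this converges reliably; the trouble begins when `Q` becomes a function approximator whose updates can interfere with each other.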
ftxbro|2 years ago
> "We evaluate SELF-DEBUGGING on code-davinci-002 in the GPT-3 model family"
Putting aside the incongruity of Google researchers using the OpenAI model, I'm curious how GPT-4 would do in this situation. Probably its zero-shot attempts at coding would be better, and maybe its self-criticisms would be better too.
runlaszlorun|2 years ago
> Self-Debugging with code explanation consistently improves the baseline by 2-3%
I'll admit that I've only had time so far to read the abstract, and I'm not sure what their baseline is, but a 2-3% improvement doesn't sound like the quantum leap forward you'd expect from the title. Heck, I'd think that's likely within expected sampling error.
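Whether 2-3 points clears sampling noise depends on how many problems the benchmark has, which the abstract alone doesn't settle. A back-of-envelope check (the 70% accuracy and 1,000-problem benchmark below are made-up numbers for illustration):

```python
import math

def accuracy_stderr(p, n):
    """Standard error of an accuracy estimate p measured on n problems."""
    return math.sqrt(p * (1 - p) / n)

# Made-up numbers: baseline 70% accuracy on a 1,000-problem benchmark.
se_single = accuracy_stderr(0.70, 1000)      # ~1.4 points per run
se_diff = se_single * math.sqrt(2)           # ~2 points for the difference of two runs
```

On those assumptions a 2-3 point gain is only one to one-and-a-half standard errors of the difference, i.e. borderline; on a benchmark several times larger it would be clearly significant.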
I'm not sure about others' experience and, while I keep reading articles showing impressive-seeming examples, my few forays into attempting to get ChatGPT to write code were completely useless, even with follow-on prompts to correct itself.
The other day I asked it what covid case fatality rates were in 2020. After all the various opinions at the time, I was curious to see what it was pre-vaccine. It would alternately tell me that it couldn't give me data for 2020 because it only had data up to Sep. 2021, and then give me wildly varying numbers.
Is this a Roko's Basilisk trying to lure me into a false sense of security... haha.
cloudking|2 years ago
GPT-4 in ChatGPT Plus can do this fairly well for coding tasks. I've had numerous cases where the code it produces has bugs initially, but after a few rounds of passing the errors back in the chat it's usually able to correct its own code.
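That feedback loop is easy to automate. A minimal sketch of the idea, where `model_fix` is a hypothetical stand-in for a chat call that takes code plus its traceback and returns revised code:

```python
import subprocess
import sys
import tempfile

def self_debug(code, model_fix, max_rounds=3):
    """Run candidate Python code; on failure, feed the traceback back for a rewrite."""
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return code  # ran cleanly; in practice you'd also run unit tests
        code = model_fix(code, result.stderr)  # hypothetical model call
    return None  # gave up after max_rounds
```

This only catches errors loud enough to crash; code that runs but computes the wrong thing sails straight through, which is one reason "paste the error back" workflows overstate how debugged the result is.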
matisseverduyn|2 years ago
With respect to GPT etc. as a copilot, the current dialogue seems to focus on "ask GPT to generate code to do X", then "just paste in the error message to fix bugs in the code GPT generates". Two thoughts:
A.) Why is GPT generating code that results in simple compiler errors in the first place? (That's why GPT probably shouldn't be used to generate code / replace devs for real projects yet.)
B.) Error messages are (just guessing here) probably <1% of the actual errors in most codebases.
I personally know of a few large companies laying off devs over this.
IMO, the tech debt we're going to see in 6 months will probably be huge. Now would be a good time to start a staffing agency of human experts who can come in and fix this type of problem (extricating massive amounts of GPT-generated code without starting from scratch), because there will be a bunch of fires to put out, and those fires will be worth $.
david2ndaccount|2 years ago
> I personally know of a few large companies laying off devs over this.
They're laying people off and replacing them with ChatGPT generating code? That seems... aggressive. Or are they laying off devs who copy-pasted GPT-generated code?
sublinear|2 years ago
> IMO, the tech debt we're going to see in 6 months will probably be huge. Good now to start a staffing agency of human experts who can come in and fix this type of problem (extricating massive amounts of code generated by GPT without starting from scratch) because there will be a bunch of fires to put out and those fires will be worth $
Nah, they deserve to eat shit, and the staffing agencies hired to fix the bad AI code will undoubtedly be people abroad who barely speak English and will only tangle it up worse. I would actually pay to be a fly on the wall in those meetings, listening to people lose their minds in frustration.
hyperthesis|2 years ago
Beware of bugs in the above code; I have only proved it correct, not tried it. - Knuth
famouswaffles|2 years ago
You can teach GPT-3 arithmetic - https://imgur.com/a/w3DAYOi
Basically 100% accuracy up to about 13-digit addition, and >90% after that.
What else can you teach GPT without changing weights ?
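Claims like that are cheap to check with a small harness. A sketch, where `solve` is a hypothetical stand-in for prompting the model and parsing its reply (Python's exact integers play the part of a perfect solver in the example):

```python
import random

def addition_accuracy(solve, digits, trials=100, seed=0):
    """Fraction of random digits-digit addition problems solve() gets exactly right."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        lo, hi = 10 ** (digits - 1), 10 ** digits - 1  # inclusive k-digit range
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        if solve(a, b) == a + b:
            correct += 1
    return correct / trials

# Example: a perfect solver scores 1.0 at 13 digits.
score = addition_accuracy(lambda a, b: a + b, digits=13)
```

Scoring exact-match like this is deliberately unforgiving: an answer that's off in one digit counts as fully wrong, which is the standard most of these "GPT can do arithmetic" claims should be held to.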