The Reflexion paper (https://arxiv.org/abs/2303.11366) that came out recently shows how this kind of mistake might be overcome: asking the model to reflect on its answer after it's generated a first draft greatly improves accuracy. Prompt-engineering tricks such as copying the generated code, pasting it into a new chat, and saying "There's a bug in this code, please find it" can also go a long way. There is so much low-hanging fruit in harnessing the power of these models that is just being ignored, because some even lower-hanging fruit (RLHF, system messages, context window size, plugins, etc.) is being released seemingly every few days.
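A rough sketch of the two-pass trick described above: generate a draft, then open a fresh conversation and ask for a bug review. The `chat` function here is a stand-in stub, not a real API client; swap in any chat-completion call.

```python
# Sketch of the "paste it into a new chat and ask for the bug" workflow.
# `chat` is a stub standing in for an LLM API call, so the control flow
# is runnable on its own.

def chat(messages):
    # Stub: a real implementation would call a chat-completion endpoint.
    last = messages[-1]["content"]
    if "There's a bug" in last:
        return "Fixed: the loop bound should use <= instead of <."
    return "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"

def draft_then_review(task):
    """Pass 1: generate a draft. Pass 2: fresh context, ask for a bug hunt."""
    draft = chat([{"role": "user", "content": task}])
    # New conversation: the model sees only the code, with no memory of
    # having written it, so it isn't anchored to its own first attempt.
    review = chat([{
        "role": "user",
        "content": f"There's a bug in this code, please find it:\n\n{draft}",
    }])
    return draft, review
```

The key design point is that the review happens in a separate context, so the model critiques the code as an outside reader rather than defending its own output.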
Vespasian|2 years ago
It only "knows" what it writes down, and if you force it to print the intermediate steps, it can get to the final answer more accurately.
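Concretely, "forcing it to print the intermediate step" is just a prompting difference. A minimal illustration with the same question phrased two ways (the example arithmetic is arbitrary):

```python
# Direct variant: the model must emit the answer immediately, with no
# written working to condition on.
DIRECT = "What is 17 * 24? Answer with just the number."

# Step-by-step variant: the model writes the intermediate steps into the
# context first, so the tokens of the final answer can condition on them.
STEP_BY_STEP = (
    "What is 17 * 24? "
    "First write out the intermediate steps "
    "(e.g. 17 * 24 = 17 * 20 + 17 * 4), then give the final answer."
)
```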
wolfi008|2 years ago
These models do not "think". This is a fundamental misunderstanding of how they work. It's not AI. It's not even language. It's just text inference.
pmx|2 years ago
I would suggest that a person saying "ask the model to think about" in this context in no way implies that they are confused about the nature of the model; it is simply a convenient piece of language that helps us achieve the desired result.
silveraxe93|2 years ago
It looks like you just pattern-matched on the word _think_ and replied with a pre-made opinion about how AIs can't think. Ironic...
themodelplumber|2 years ago
So it's still fair to say you can ask it to think