This doesn't seem like a major difference, since LLMs are also choosing the most likely token from a probability distribution, which is why they respond one token at a time. They can't "write out" the entire text at once, which is why fascinating methods like "think step by step" work at all.
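The token-at-a-time behavior described above can be sketched with a toy example. This is not how any real LLM is implemented; the lookup table standing in for the model, and the `TOY_LOGITS`, `sample_token`, and `generate` names, are all invented for illustration. The point is only the loop structure: at each step the model produces a distribution over next tokens, one token is sampled, and it is appended to the context before the next step.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    # Sample one token index; lower temperature concentrates
    # probability mass on the most likely token.
    probs = softmax([x / temperature for x in logits])
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy "model": a lookup table from context to next-token logits.
# A real LLM computes these logits with a neural network.
TOY_LOGITS = {
    (): [2.0, 0.5, 0.1],      # start of sequence: token 0 most likely
    (0,): [0.1, 3.0, 0.2],    # after token 0: token 1 most likely
    (0, 1): [0.0, 0.0, 4.0],  # after 0, 1: token 2 dominates
}

def generate(max_tokens=3, temperature=0.01):
    # Autoregressive loop: each sampled token extends the context,
    # which changes the distribution for the next token.
    context = ()
    for _ in range(max_tokens):
        logits = TOY_LOGITS.get(context)
        if logits is None:
            break
        context = context + (sample_token(logits, temperature),)
    return list(context)
```

With a very low temperature the sampling is effectively greedy, so the loop deterministically follows the highest-probability path through the table.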
Jensson|2 years ago
Such answers will be very hard for an LLM to find; instead you mostly get very verbose messages, since that is how current LLMs think.
bytefactory|2 years ago
xcv123|2 years ago
It can be instructed to study its previous answer and find ways to improve it, or to make it more concise, and that works today. This can easily be automated by having LLMs talk to each other.
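The critique-and-revise loop described above can be sketched as follows. The `critique` and `revise` stand-ins here are invented for illustration; in a real pipeline each would be a separate LLM call (e.g. "find ways to make this answer more concise" and "apply this feedback"), which is what makes the automation a matter of wiring calls together.

```python
def refine(answer, critique, revise, rounds=2):
    # Repeatedly ask one model (or prompt) to critique the current
    # answer and another to revise it based on that feedback.
    for _ in range(rounds):
        feedback = critique(answer)
        answer = revise(answer, feedback)
    return answer

# Hypothetical stand-ins for the two LLM calls, so the loop is runnable.
def critique(answer):
    # A real critic would be an LLM judging the draft.
    return "too verbose" if len(answer.split()) > 8 else "ok"

def revise(answer, feedback):
    # A real reviser would be an LLM rewriting per the feedback.
    if feedback == "too verbose":
        return " ".join(answer.split()[:8])
    return answer
```

The loop terminates after a fixed number of rounds rather than when the critic is satisfied, which is a common safeguard against two models revising endlessly.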
ewild|2 years ago