fauigerzigerk | 14 days ago
Current LLMs simply don't do reasoning by any reasonable definition of reasoning.
It's possible that this particular question is too short to trigger the "reasoning" machinery in some of the "reasoning" models. But if and when it is triggered, they just do some more pattern matching in a loop. There's never any actual reasoning.
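To make that concrete, here is a minimal sketch of what such a loop amounts to under this framing. It is an illustration of the claim, not any vendor's actual implementation; `generate` is a hypothetical stub standing in for a single LLM completion call, and the stop condition is likewise invented for the example:

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for one completion call (not a real API):
        # a real model would return a statistically likely continuation.
        return ("Step 1: match the question against familiar patterns.\n"
                "FINAL ANSWER: whatever pattern fits best")

    def reasoning_loop(question: str, max_steps: int = 8) -> str:
        """Feed the model's output back in as context until it halts."""
        context = question
        for _ in range(max_steps):
            step = generate(context)      # one more round of pattern matching
            context += "\n" + step        # output becomes input for the next pass
            if "FINAL ANSWER:" in step:   # stop condition, for illustration only
                break
        return context

    print(reasoning_loop("Is 7 prime?"))

The loop adds iteration and a halting check, but each step is still a single forward pass over the accumulated text.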