When a model "reasons through" a problem, it's just outputting text that is statistically likely to appear in the context of "reasoning through" things. There is no intent, no weighing of the available options, their implications, or possible outcomes.
However, the result often looks the same, which is neat.
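To make that concrete, here's a toy sketch of the mechanism. The token table and probabilities below are entirely made up for illustration (real models learn distributions over a huge vocabulary, conditioned on far more context), but the shape is the same: generation is just repeated sampling of whatever token is statistically likely to come next, with no plan behind it, and the output still reads like deliberate reasoning.

```python
import random

# A toy "language model": for each context token, a made-up distribution
# over next tokens. No planning lives here, just "what usually comes next".
NEXT_TOKEN_PROBS = {
    "<start>": {"Let's": 0.9, "The": 0.1},
    "Let's":   {"think": 0.8, "see": 0.2},
    "think":   {"step": 0.9, "about": 0.1},
    "step":    {"by": 1.0},
    "by":      {"step.": 1.0},
}

def sample_next(context: str) -> str:
    """Sample the next token, weighted by its probability given the last token."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate() -> str:
    """Emit tokens one at a time until the chain runs off the table."""
    token, output = "<start>", []
    while token in NEXT_TOKEN_PROBS:
        token = sample_next(token)
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "Let's think step by step."
```

The loop never considers where the sentence is going; each step only asks which token tends to follow the last one. Yet the result looks like the opening of a chain of thought.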