Note that this is not relevant for reasoning models, since they think about the problem in whatever order they want before outputting the answer. Because the model can “refer” back to its thinking when producing the final answer, the output order matters less for correctness. That relative robustness is likely why OpenAI is trying to force reasoning onto everyone.
adastra22|4 months ago
But specialized instructions to weigh alternatives still work better, as the model ends up thinking about how to think, then thinking, then making a choice.
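
A minimal sketch of what such a “weigh alternatives first” instruction might look like, assuming the OpenAI Python client; the model name and prompt wording here are placeholders, not anything specific from this thread:

    # Sketch: prepend a "weigh alternatives before choosing" instruction.
    # Assumes the OpenAI Python client; model name and wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Before answering, list 2-3 plausible alternative answers, "
        "briefly note the strongest argument for and against each, "
        "and only then commit to one final answer."
    )

    def ask(question: str, model: str = "gpt-4o") -> str:
        """Send a question with the weigh-alternatives instruction prepended."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask("Which sorting algorithm should I use for nearly-sorted data?"))

The point of the instruction is just to force the enumeration and comparison steps to happen before the commitment step, rather than letting the model answer first and rationalize afterwards.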
simianwords|4 months ago