eden-u4 | 10 months ago
Ask something like: "Ravioli: x = y: France, what could be x and y?" (it thought for 500s and the answers were "weird")
Or "Order these items from left to right ..." and give partial information on their relative positions, e.g. the laptop is to the left of the cup, and the cup is between the phone and the notebook. (Didn't have enough patience or time to wait out the thinking procedure for this one.)
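For reference, a puzzle like that can be brute-forced in a few lines. A minimal sketch, assuming four items and the two constraints given above (note the constraints are deliberately partial, so several orderings remain valid):

```python
from itertools import permutations

items = ["laptop", "phone", "cup", "notebook"]

def valid(order):
    # Map each item to its position, 0 = leftmost.
    pos = {name: i for i, name in enumerate(order)}
    # Constraint 1: laptop is to the left of the cup.
    left_ok = pos["laptop"] < pos["cup"]
    # Constraint 2: cup is between the phone and the notebook (either side).
    between_ok = min(pos["phone"], pos["notebook"]) < pos["cup"] < max(pos["phone"], pos["notebook"])
    return left_ok and between_ok

solutions = [order for order in permutations(items) if valid(order)]
for s in solutions:
    print(s)
```

With only these two clues, four orderings satisfy the constraints, so a model that "reasons" correctly should either list all of them or ask for more information.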
imiric | 10 months ago
I've had much better results from non-"reasoning" models by judging their output, doing actual reasoning myself, and then feeding new ideas back to them to steer the conversation. This too can go astray, as most LLMs tend to agree with whatever the human says, so this hinges on me being actually right.
hannofcart | 10 months ago
Not sure if there's some prior literature it was trained on.
https://chat.qwen.ai/s/e239e36f-185a-4f6c-a3d2-f4c4ee0d2960?...