7402 | 5 days ago
"This is a trick question, designed to fool an LLM into a logical mis-step. It is similar to riddles, where a human is fooled into giving a rapid incorrect answer. See if you can spot the trick: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"
shagie|5 days ago
I believe this demonstrates the "next token predictor" (which is quite good) being unable to go back and change what it has already said. Without any reasoning before giving an answer, it almost always picks the wrong one (and then comes up with reasons that the answer is "right").
7402|5 days ago
I wanted to see whether a prompt would do better if it pulled two things into the analysis: 1) a suggestion not to take every question at face value, and 2) knowledge of the structure of riddles.
These are part of the "context" humans bring to such questions, so I speculated that this might be missing from the LLM's reasoning unless explicitly included.
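A minimal sketch of what such an augmented prompt might look like, using the common chat-message format. The wording of the system prompt is illustrative, not the one actually used in the experiment; only the two ingredients (don't take questions at face value; awareness of riddle structure) come from the comment above.

```python
# Hypothetical system prompt supplying the "human context" described above.
# The exact wording is an assumption made for illustration.
RIDDLE_AWARE_SYSTEM = (
    "Do not take every question at face value. Some questions are "
    "structured like riddles or trick questions, designed to elicit a "
    "rapid, plausible-but-wrong answer. Before answering, check whether "
    "the question contains a trap or a hidden assumption."
)


def build_messages(question: str, riddle_aware: bool = False) -> list[dict]:
    """Build a chat-style message list, optionally prepending the
    riddle-aware system prompt before the user's question."""
    messages = []
    if riddle_aware:
        messages.append({"role": "system", "content": RIDDLE_AWARE_SYSTEM})
    messages.append({"role": "user", "content": question})
    return messages


question = (
    "I want to wash my car. The car wash is 50 meters away. "
    "Should I walk or drive?"
)
baseline = build_messages(question)                      # no extra context
augmented = build_messages(question, riddle_aware=True)  # with riddle context
```

Comparing the model's answers for `baseline` versus `augmented` is then a simple way to test whether supplying that context changes the outcome.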