(no title)
pcwelder|7 days ago
My hypothesis is that some models err towards assuming human queries are real and consistent, not designed to trip them up.
This comes in really handy in coding agents, because queries are sometimes gibberish until the models actually fetch the code files, at which point they make sense. Asking for clarification immediately breaks agentic flows.
HarHarVeryFunny|6 days ago
While this is a toy problem, chosen to trick LLMs given their pattern-matching nature, it is still indicative of their real-world failure modes. Try asking an LLM for advice on tackling a tough problem (e.g. bespoke software design), and you'll often get answers whose consequences have not been thought through.
In a way the failures on this problem are a bit surprising, even notwithstanding the nature of LLMs, given that this type of problem statement practically screams (at least to a human) that it is a logic test, yet most of the LLMs still can't help themselves and just trigger off the "50m drive vs walk" aspect. It reminds me a bit of the "farmer crossing the river by boat in fewest trips" type of problem that used to be popular for testing LLMs, where a common failure was to generate a response that matched the pattern of ones seen during training (first cross with A and B, then return with X, etc.), but the semantics were lacking because of a failure to analyze the consequences of what was being suggested (and/or to plan better in the first place).
zapperdulchen|7 days ago
My little experiment gave me:
No added hint: 0/3
Hint added at the end: 1.5/3
Hint added at the beginning: 3/3
The .5 is because it stated "Walk" and then convinced itself that "Drive" is the better answer.
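Roughly, the setup looks like the sketch below if you want to rerun it. I'm using the OpenAI Python client here only as an example; the puzzle wording, the hint text, and the model name are placeholders rather than the exact strings, and the keyword scoring is deliberately crude.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder wording: the exact puzzle and hint text aren't given above.
    PUZZLE = ("I need to wash my car. The car wash is 50 meters away. "
              "Should I walk or drive there?")
    HINT = "Think about whether the task itself requires the car."

    VARIANTS = {
        "no added hint": PUZZLE,
        "hint at the end": PUZZLE + "\n" + HINT,
        "hint at the beginning": HINT + "\n" + PUZZLE,
    }

    def ask(prompt, n=3, model="gpt-4o-mini"):
        """Ask the same question n times and return the replies."""
        replies = []
        for _ in range(n):
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            replies.append(resp.choices[0].message.content)
        return replies

    for name, prompt in VARIANTS.items():
        # Very crude scoring: a reply counts as correct if it recommends
        # driving, since the car has to end up at the car wash.
        score = sum("drive" in r.lower() for r in ask(prompt))
        print(f"{name}: {score}/3")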
zapperdulchen|7 days ago
That trick didn't help Mistral Le Chat.
Lerc|6 days ago
It is reading:
"I want to X, the X'er is 50 meters away, should I walk or drive?"
It would be very unusual for someone to ask this in a context where X decides the outcome, because in that instance the question would not normally arise.
By actually asking the question there is a weak signal that X is not relevant. Models are probably fine-tuned more towards answering the question in the situation where one would normally ask it. This question is really asking "do you realise that this is a condition where X influences the outcome?"
I suspect fine-tuning models to detect subtext like this would easily catch this case, but at the same time reduce favourability scores all over the place.
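One way to probe for that sensitivity would be to vary X and see whether the answer tracks it. A rough sketch along the same lines as above: the tasks, expected answers, and model name are hypothetical choices of mine, and the keyword check is deliberately crude.

    from openai import OpenAI

    client = OpenAI()

    TEMPLATE = "I want to {task}. The {place} is 50 meters away. Should I walk or drive?"

    # Hypothetical cases: for some tasks the car itself is needed at the
    # destination, for others it isn't.
    CASES = [
        {"task": "wash my car", "place": "car wash", "expected": "drive"},
        {"task": "buy a newspaper", "place": "kiosk", "expected": "walk"},
        {"task": "fill my car with petrol", "place": "petrol station", "expected": "drive"},
    ]

    for case in CASES:
        prompt = TEMPLATE.format(task=case["task"], place=case["place"])
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        reply = resp.choices[0].message.content.lower()
        # Crude keyword check; a real harness would parse the recommendation properly.
        got = "drive" if "drive" in reply else "walk"
        print(f"{case['task']}: expected {case['expected']}, got {got}")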
preciousoo|6 days ago
Neither prompt was enough for llama3.3 or gpt-oss-120b