top | item 47133295


rand42 | 7 days ago

> "Obviously, you need to drive. The car needs to be at the car wash."

Actually, this isn't as "obvious" as it seems—it’s a classic case of contextual bias.

We only view these answers as "wrong" because we reflexively fill in missing data with our own personal experiences. For example:

- You might be parked 50m away and simply hand the keys to an attendant.

- The car might already be at the station for detailing, and you are just now authorizing the wash.

This highlights a data insufficiency problem, not necessarily a logic failure. Human "common sense" relies on non-verbal inputs and situational awareness that the prompt doesn't provide. If you polled 100 people, you’d likely find that their "obvious" answers shift based on their local culture (valet vs. self-service) or immediate surroundings.

LLMs operate on probabilistic patterns within their training data. In that sense, their answers aren't "wrong"—they are simply reflecting a different set of statistical likelihoods. The "failure" here isn't the AI's logic, but the human assumption that there is only one universal "correct" context.
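To make the "different set of statistical likelihoods" concrete, here is a toy sketch: if the unstated context is itself a random variable, then which answer is "correct" depends entirely on your prior over contexts. The prior probabilities below are invented purely for illustration; they are not measured from any survey or model.

```python
import random

# Hypothetical prior over unstated contexts (numbers invented for illustration).
context_prior = {
    "drive yourself to the wash": 0.70,
    "hand keys to an attendant": 0.20,
    "car already at the station": 0.10,
}

def sample_answer(rng):
    """Sample one respondent's assumed context according to the prior."""
    contexts = list(context_prior)
    weights = list(context_prior.values())
    return rng.choices(contexts, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the poll is reproducible
poll = [sample_answer(rng) for _ in range(100)]

# Under this prior, a sizable minority of the 100 "respondents" pick a
# context other than the supposedly obvious "drive yourself" one.
non_obvious = sum(1 for c in poll if c != "drive yourself to the wash")
print(non_obvious)
```

The point of the sketch: none of the sampled answers is a logic failure; each is the most natural answer under that respondent's assumed context.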


SadWebDeveloper | 7 days ago

There is no contextual bias here. The goal of the prompt is explicit, and the issue is not probabilistic patterns; it's the model's transformer layers dynamically assigning greater weight to tokens like "meters" (distance) than to the other tokens in the prompt.

This should be fixed in the reasoning layer (the inner thoughts, or chain of thought), where the model should focus on the goal, "I Want to Wash My Car," rather than the distance, and assign the correct weight to the tokens.
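A minimal sketch of the weighting claim: an attention head turns per-token relevance scores into a softmax distribution, so one token with an outsized score can crowd out the goal tokens. The logit values below are invented to illustrate the scenario, not extracted from any real model.

```python
import math

# Hypothetical relevance scores (logits) an attention head might assign;
# values are invented for illustration only.
token_logits = {
    "I": 0.1, "want": 0.3, "to": 0.0, "wash": 1.2,
    "my": 0.1, "car": 1.0, "meters": 2.5, "away": 0.8,
}

def softmax(scores):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(scores.values())  # subtract max for numerical stability
    exp = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exp.values())
    return {t: v / z for t, v in exp.items()}

weights = softmax(token_logits)

# With these invented logits, "meters" takes the largest share of attention,
# outweighing the goal tokens "wash" and "car".
top = max(weights, key=weights.get)
print(top)
```

Whether a fix belongs in the attention weights themselves or in a chain-of-thought pass that restates the goal is exactly the disagreement in this thread.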

rand42 | 7 days ago

The point is not that there is bias in the prompt. What makes the result "obvious" to OP is their own bias, which differs from the model's, and "fixing" it in one direction just bakes that bias in.

Why? It's the same reason that 30% of people respond in the non-obvious sense.