top | item 47031759

thenoblesunfish | 13 days ago

Okay, funny. What does it prove? Is this a more general issue? How would you make the model better?

Jean-Papoulos | 13 days ago

It proves that this is not intelligence. This is autocomplete on steroids.

hugh-avherald | 13 days ago

Humans make very similar errors, possibly even the exact same error, from time to time.

cynicalsecurity | 13 days ago

It proves LLMs always need context. They have no idea where your car is. Is it already at the car wash, which you briefly left to go pay at the gas station, or is it still at your home?

It proves LLMs are not brains; they don't think. This question will be used to train them, and "magically" they'll get it right next time, creating an illusion of "thinking".

ahtihn | 13 days ago

> They have no idea where your car is.

They could either just ask first, or state their assumption before answering.
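One way to encourage that behavior is a system prompt that tells the model to surface unstated assumptions. A minimal sketch, where `build_messages` and the prompt wording are hypothetical (the message format follows the common role/content chat convention, not any specific vendor's API):

```python
# Hedged sketch: instructs a chat model to state assumptions or ask
# a clarifying question instead of silently guessing.
SYSTEM_PROMPT = (
    "Before answering, list any assumptions you are making about facts "
    "the user has not stated (e.g. where their car currently is). "
    "If an assumption materially changes the answer, ask a clarifying "
    "question instead of answering."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat request that forces assumptions to be explicit."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("How long will the car wash take?")
print(messages[0]["role"])  # system
```

Whether the model actually follows such an instruction varies by model and phrasing; this only makes the intent explicit in the request.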

gitaarik | 13 days ago

We make the model better by training it, and now that this issue has come up, we can update the training ;)

S3verin | 13 days ago

For me this is just another reminder of how careful one should be when deploying agents. They behave very unintuitively.