tnjm | 2 years ago
"Two trains on different and separate tracks, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
...it spots the trick: https://chat.openai.com/share/ee68f810-0c12-4904-8276-a4541d...
Likewise, if you add emphasis it understands too:
"Two trains on separate tracks, 30 miles from each other are approaching each other, each at a speed of 10 mph. How long before they crash into each other?"
https://chat.openai.com/share/acafbe34-8278-4cf7-80bb-76858c...
Not to anthropomorphize, but perhaps it's not necessarily missing the trick, it just assumes that you're making a mistake.
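For reference, the arithmetic the riddle baits the reader into (before spotting the separate-tracks trick) is just closing-speed division; a quick sketch, with variable names of my own choosing:

```python
# Naive setup the riddle invites: treat the trains as closing head-on.
distance_miles = 30
speed_a_mph = 10
speed_b_mph = 10

# Closing speed is the sum of the two speeds, so the "collision" time is:
time_hours = distance_miles / (speed_a_mph + speed_b_mph)
print(time_hours)  # 1.5 -- but on separate tracks, they never actually crash
```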
tyingq | 2 years ago
Enginerrrd | 2 years ago
To paraphrase XKCD: communicating badly and then acting smug when you're misunderstood is not cleverness. And falling for the mistake is not evidence of a lack of intelligence, particularly when emphasizing the trick results in its being understood and ChatGPT PASSING your "test".
The biggest irony here is that the reason I failed, and likely the reason ChatGPT failed the first prompt, is that we were both using semantic understanding: that is, people don't usually ask deliberately tricky questions.
I suspect if you told it in advance you were going to ask it a deliberately tricky question, that it might actually succeed.