Well, I think academic curiosity should be quenched by the lovely link from jaybosamiya.
I have more practical thoughts, however. "A lot" is understandable. If you're trying to suss out a bot without alerting or annoying others, I think a fancy, convoluted question will kind of give you away. Also, I think a clever bot (not even AI) would just sidestep weird or out-of-context questions. Restricting the queries to a chat room hamstrings one's attempts, but that's the basis of classic Turing tests.
But here's the rub: why do we assume the agent in question is going to answer our questions naively (sincerely and to the best of their ability) unless they're in a testing context? Isn't that what's required?
I was imagining the context of an online poker room, where an AI agent pretending to be a human would be unwelcome to the other human players, since AI that can win poker games is already here. But the human players are there to play poker, not to inspect for bots, so they need a quick way to get an idea of what's the deal with the bots around :)
kleer001|10 years ago
mitenmit|10 years ago