jaccola | 1 month ago
To pass the Turing test, the AI would have to be indistinguishable from a human to the person interrogating it in a back-and-forth conversation. Simply being fooled by some generated content does not count (if it did, the test was passed decades ago).
No LLM/AI system today can pass the Turing test.
zahlman | 1 month ago
Most of them come across to me as people who would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.
caspar | 1 month ago
That is, the main thing that makes it possible to tell LLM bots apart from humans is that many of us have, over the past three years, become highly attuned to the specific foibles and text patterns that signal LLM-generated text, much like how I can tell my close friends' writing apart by their use of vocabulary, punctuation, typical conversation topics, and evidence (or lack) of knowledge in certain domains.