The patterns picked up by the training don't seem to offer much more variation than a simple Markov chain. The author finds the generated texts similar to a conversation because that's what they're looking for, but the texts look just as random as simply selecting, from the training set, a random word that follows the current word.
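The baseline being described — pick a random word that followed the current word in the training set — is exactly a first-order Markov chain. A minimal sketch (the corpus and function names here are illustrative, not from the post):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: repeatedly pick a random successor of the current word."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word never appeared mid-corpus
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept".split()
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Repeated words get repeated entries in the successor list, so sampling with `random.choice` already weights successors by how often they occurred.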
Am I missing something? This reads like gobbledygook. I have a sneaking suspicion that this very post was made to seem legitimate/credible, though I don't necessarily believe it is. My mind is exploding a little bit. I am confused. Am I? Hmm.
Other idea: make a program that is good at detecting whether a chat user is a human or a computer (a sort of Turing Test judge bot). Then use it as a fitness function to evolve a bot.
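The idea above — a judge scoring "how human" a bot sounds, used as the fitness function in an evolutionary loop — can be sketched as follows. Everything here is a toy stand-in: a real `judge` would be a trained human-vs-bot classifier, and a real bot would be a generative model rather than a bag of words.

```python
import random

def judge(text):
    """Stand-in for a trained human-vs-bot classifier: returns a
    'looks human' score in [0, 1]. As a toy proxy, reward varied
    vocabulary (fraction of distinct words)."""
    words = text.split()
    return len(set(words)) / max(len(words), 1)

VOCAB = "well so I think maybe actually you know right hmm".split()

def random_bot(size=8):
    """A 'bot' here is just a fixed list of words it would say."""
    return [random.choice(VOCAB) for _ in range(size)]

def mutate(bot):
    """Replace one word at random."""
    child = list(bot)
    child[random.randrange(len(child))] = random.choice(VOCAB)
    return child

def evolve(generations=50, pop_size=20):
    population = [random_bot() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness = how human the judge thinks the bot's output looks.
        ranked = sorted(population, key=lambda b: judge(" ".join(b)), reverse=True)
        survivors = ranked[: pop_size // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=lambda b: judge(" ".join(b)))

best = evolve()
print(" ".join(best), judge(" ".join(best)))
```

This is essentially the adversarial setup the comment gestures at: the evolving bots are optimized against the judge, so they end up exploiting whatever the judge measures — which connects directly to the deceit complaint below.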
It's also what I don't like about the Turing Test: the core trait rewarded by the test is deceit.
Just judge it based on "whether you would be able to tell if you didn't know".