I haven't yet seen a bot that could answer questions about the specific setting it's in while you're talking to it. For example, if I ask the bot to repeat what I asked it just before this, it can't tell me, because bots like Cleverbot operate by crowdsourcing answers and then parroting them back more or less at random.
Also, when I ask it the same question multiple times, I get different answers. Maybe bots should store procedures in their database, not just crowdsourced answers, and keep the context of the current session, or something.
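To illustrate the "keep context of the current session" idea, here's a minimal sketch (hypothetical names, nothing to do with Cleverbot's actual internals) of a bot that logs the session so "repeat what I asked before this" becomes answerable:

    # Hypothetical sketch of a bot that keeps per-session context,
    # instead of only parroting crowdsourced answers.
    class SessionBot:
        def __init__(self):
            self.history = []  # ordered (question, answer) pairs for this session

        def reply(self, message):
            if "repeat what i asked" in message.lower():
                # With a session log, the earlier question is right there.
                answer = self.history[-1][0] if self.history else "You haven't asked anything yet."
            else:
                answer = self.lookup(message)
            self.history.append((message, answer))
            return answer

        def lookup(self, message):
            # Placeholder for whatever answer source the bot uses
            # (crowdsourced pairs, stored procedures, etc.).
            return "Stock answer for: " + message

    bot = SessionBot()
    bot.reply("What is the capital of France?")
    print(bot.reply("Can you repeat what I asked just before this?"))
    # -> What is the capital of France?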
They may have passed the Turing test, but that just convinced me that the Turing test really means nothing.
You can't determine anything for sure with the Turing test, only a probability. (Many humans act stupid, or are stupid; if a machine is stupid, some judges may decide the machine is human-like.)
What I would like to see, instead of a machine passing the Turing test, is a machine species that could survive in nature. If it can survive autonomously, like some kind of animal, then that machine species could be said to be intelligent.
Let's assume that humans are complex molecular machines. That is, our bodies obey the laws of physics and are made of atoms that behave in predictable ways.
If we know what we are made of (the molecules and how they are arranged) and how these molecules can be modeled (i.e. quantum mechanics) then it is only a matter of time until an entire human can be modeled on a computer. Once you can model the entire body, then you can have a computer that can pass the Turing test, because it more or less is human.
If humans can be modeled as deterministic systems that follow physical laws, then computers will simulate them at some point in the future.
First, the complexity in the physics/chemistry of our molecular machines is such that "only a matter of time" may extend longer than the time span of our species' existence.
Second, it is entirely possible that we are more than molecular machines. There may be an aspect to our functioning that is beyond the physical. That aspect of our existence may not be possible to replicate.
Cleverbot has supposedly passed the Turing test. I have my doubts, but there is something to be said for this Internet accomplishment. There will always be doubt about passing this test, as it is not a mathematical proof, which is probably why Turing proposed it.
> We haven't made much progress in building intelligent machines since the late AI winter.
We haven't?
We got the AI winter because DARPA was for a while willing to fund projects that made outrageous promises, so they got outrageous promises that nobody could possibly deliver on and eventually gave up.
I mean, they were funding people who wanted to do things like build semi-autonomous robots that could drive a car to deliver supplies, fly a plane to do air reconnaissance, or wheel around to sweep a field for mines. They funded claims that computers would be able to do speaker-independent voice recognition. Computer programs could beat the best human players at a game of chess. Or imagine a computer program that had enough common knowledge indexed and accessible that it could win contests that involve wordplay and trivia questions.
Wait, all that stuff has happened already, much of it just in the last few years.
The real problem with AI is it's defined to be "stuff we can't do yet". As soon as we manage to do one of those things, it stops being called "AI".
In short, I like Kurzweil's odds on this one. 2029 is a long time away in computer years and we've made a heck of a lot of progress. And he's right that humans think linearly. Exponential growth curves just aren't intuitive to us, so we underestimate what a reasonable amount of progress on a long-term goal looks like. (IBM's win at Jeopardy, Apple's Siri launch, and Google's self-driving cars are all things that happened since Ray made his prediction. Are these not AI progress?)
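To put a rough number on that linear-vs-exponential intuition gap, here's a toy calculation; the "capability doubles every two years" rate and the 2011 baseline are arbitrary assumptions chosen purely for illustration, not claims about actual AI progress:

    # Toy comparison: what linear intuition expects vs. an assumed
    # doubling every two years, from an arbitrary 2011 baseline.
    start_year = 2011
    for year in (2015, 2021, 2029):
        linear = 1.0 + (year - start_year) * 0.5        # steady, linear improvement
        exponential = 2 ** ((year - start_year) / 2)    # doubling every two years
        print(year, linear, exponential)
    # Prints: 2015 3.0 4.0 / 2021 6.0 32.0 / 2029 10.0 512.0
    # The two look similar early on; by the target date the gap is enormous.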
Using any functional, practical definition of "artificial intelligence" that I've ever heard, we have certainly made a lot of progress since the late AI winter.
It sounds like you're using the unfortunate definition that essentially defines any task as "not requiring intelligence" the instant a machine is able to perform it well. This has been done with voice recognition, facial recognition, music composition, etc., and is actually one of the main reasons we even had the AI winter.
While I wouldn't go so far as to call him an "idiot," I do think he should be shifting his focus towards emergent phenomena on the Internet rather than a single silicon mind in a box. He's stuck in a 20th century perspective.
We also have to remember that the Turing test isn't a real measure of intelligence. After all, a computer following rules is NOT any more intelligent than the rules themselves.
For example, if someone gave me a ton of rules on how to convert sentences from English to another language, and the output was amazing, would that mean I'm intelligent? Not likely, just that I can follow the language conversion rules.
I think the issue is that too much emphasis is being put on Turing tests as a measure of intelligence, when in fact it's really more a measure of how well a computer can follow conversations and social norms. Just because you can fool a real person doesn't mean you're intelligently interacting with that person. Just as fooling another person into thinking I can speak another language by following translation rules doesn't actually mean I can speak that language at all!
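To make the rule-following point concrete, here's a toy version of "conversion rules": a hypothetical word-for-word lookup table that the rule-follower applies mechanically, with zero understanding of either language.

    # Toy "conversion rules": a tiny, made-up English->Spanish lookup table.
    RULES = {
        "the": "el", "cat": "gato", "eats": "come", "fish": "pescado",
    }

    def translate(sentence):
        # Apply the substitution rule for each word; unknown words pass through.
        return " ".join(RULES.get(word, word) for word in sentence.lower().split())

    print(translate("The cat eats fish"))   # -> el gato come pescado

The output can look fine to an observer, but all of the "knowledge" lives in the table, not in whoever (or whatever) applies it.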
I disagree with your conclusion that such a system is inherently not intelligent.
If you accept that humans are intelligent, and that they can judge that another human is intelligent by conversing with them across a text-only channel, then you run into a big problem by stating that a Turing-test-passing algorithm is unintelligent. To do so exposes the fact that your definition of "intelligence" secretly includes the clause "...and is a human," which makes "intelligent machine" a contradiction in terms. It is essentially an example of the No True Scotsman fallacy, because you're revealing a new facet of your claim when faced with an apparent counterexample.
If you're defining "intelligence" to be a purely human trait, then come right out and say so, and everyone will agree that on your terms a machine cannot be intelligent. Of course, I would argue that such a definition isn't very useful, since it basically means that "intelligent" and "human" are synonyms.
I actually agree, but for different reasons. I don't think human judges (that is, NORMAL human judges, non-geeks) are a good measure of an entity's intelligence. I personally have managed to convince at least one person that I'm a machine.
Sufficiently good pattern matching to produce reasonable-enough-sounding sentences would probably fool most casual observers. For me, the validity of a Turing test relies heavily on how long, and under how much pressure, the AI has to keep up its illusion of humanity.
A more objective test of intelligence would be nice though.
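As a rough illustration of how little machinery "good enough pattern matching" takes, here's an ELIZA-style sketch; the rules are made up for the example:

    import re

    # ELIZA-style responder: a few regex rules that bounce the user's own
    # words back, plus a generic fallback. Crude, but this is the kind of
    # pattern matching that can sound plausible to a casual observer.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def respond(message):
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please, go on."  # generic fallback keeps the illusion alive

    print(respond("I feel like nobody listens to me"))
    # -> Why do you feel like nobody listens to me?

How long something like this survives depends entirely on how hard the judge pushes, which is the point about time and pressure above.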
I seriously doubt that Cleverbot passed. Here's a conversation I just had with it:
Me: What's your name?
CB: My name?
Me: Yes.
CB: What's your name?
Me: Suzan.
CB: Nope.
Me: Nope what?
CB: Nope, allessander is not my name.
This is sub-Eliza quality...