The argument that artificial intelligence requires understanding how intelligence works is an argument that natural intelligence requires Intelligent Design. (It's also an argument that fortuitous discoveries--such as pharmaceuticals that turn out to treat conditions they were not designed for, through mechanisms we do not understand--cannot occur.)
Obviously, understanding intelligence better would promote more effective directed research toward artificial intelligence. But if we can identify it (which the Turing Test is about), then it is quite possible that we can develop it -- and know that we have -- without understanding it. (And it may only be through developing it that we end up understanding how it works.)
I upvoted you for the insightful reasoning but I disagree with your stated premise that
> The argument that artificial intelligence requires understanding how intelligence works is an argument that natural intelligence requires Intelligent Design.
I think that statement makes sense when phrased that way, so it is an attractive idea. However, I don't think it's true. From an evolutionary standpoint, biological intelligence developed naturally because biological components are natural. Furthermore, machines do not develop when left in isolation, while biological organisms do. If you leave a large population of simple machines running in an environment, it is overwhelmingly unlikely to result in a machine intelligence millions of years later. Machines were developed by humans and do not develop in the same way; comparing the two, as the Intelligent Design argument does, doesn't make much sense.
I do not think artificial intelligence can arise naturally because its components are not natural. This could also foster discussion of two other interesting questions:
1. Does intelligence require organic components that operate in a deterministic way (i.e. the brain) in order to transcend being merely a "machine"?
2. If intelligence requires biology, where do you draw the line between creating an intelligence through natural human reproduction and creating an artificial intelligence another way?
Personally, I believe artificial intelligence does not require biological components, and I believe that under certain circumstances it could develop unintentionally from a relatively advanced computer -- but that is not the same as developing naturally.
I do think that understanding intelligence is critical for developing AI, because computers are not subject to the same evolutionary circumstances; if they were, intelligence could perhaps arise in them without that understanding. Now, if you are talking about an organic computer, and you mean to include existing animal brains, well, people already knew that, and they meant to talk about inorganic computers. Maybe after (or concurrently with) inorganic AI, people will take on the challenge of designing intelligent organisms that fit into the ecosystem.
I think intelligence may be composed of several sub-modules, the combination of which, working in concert, produces the sought-after effects people often talk about when they talk about the Turing test.
I do think tests which propose to pin AI on human characteristics are not as useful an investigative avenue as they could be. I think the first real test of AI is the identification of causal factors in a phenomenon. I think another major marker of intelligence would be whether an AI could perform arbitrary analogical mapping. I think all of math is arbitrary analogical mapping: you start with a set of capricious but useful building blocks, build a big structure, and then... analogically map the math onto a phenomenon.
I think these two ingredients make for the kind of mental abilities people have been craving in AI: abilities like a computer developing its own software to use hardware, or a computer that models and reasons about phenomena.
I don't feel like the author is familiar with modern approaches to AI. For instance, he mentions "creativity" as a stumbling block for AGI, but there is a whole class of existing algorithms that exhibit prototypical creativity, namely generative models.
Essentially, the idea is that "creativity" is the act of sampling from a distribution over the kind of thing you are trying to create. Learning algorithms like Boltzmann machines will learn a distribution over the inputs they see. One thing you can do with such a distribution is check the probability of a given input under it, which is useful for classification. Another thing you might want to do is generate representative samples, i.e. generate an example E with probability X iff P(E) = X under the model. The latter is what I would call "creativity".
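As a toy sketch of this idea (the bigram model, the example words, and every function name below are illustrative inventions, not anything from the thread or the article): one learned distribution supports both scoring inputs and generating new ones.

```python
import random
from collections import defaultdict

# A toy "generative model": a character-level bigram distribution learned
# from example words. Scoring an input under the model supports
# classification; sampling from it is the "creativity" described above.

def train_bigram(words):
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        chars = "^" + w + "$"  # start/end markers
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    # Normalize counts into conditional probabilities P(next | current).
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: c / total for b, c in nxt.items()}
    return model

def score(model, word):
    """Probability of `word` under the model (the classification use)."""
    p = 1.0
    chars = "^" + word + "$"
    for a, b in zip(chars, chars[1:]):
        p *= model.get(a, {}).get(b, 0.0)
    return p

def sample(model):
    """Generate a representative example -- the 'creative' use."""
    out, cur = [], "^"
    while True:
        nxt = model[cur]
        cur = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if cur == "$":
            return "".join(out)
        out.append(cur)

model = train_bigram(["banana", "bandana", "cabana"])
print(score(model, "banana") > 0)  # seen data gets positive probability
print(score(model, "xyz"))         # unseen transitions score 0.0
print(sample(model))               # a fresh word, e.g. "bana" or "cabanana"
```

Every sampled word is, by construction, something the model itself assigns positive probability to -- which is the "representative samples" property described above, just at miniature scale.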
Under this definition, creativity depends both on the learned distribution (which should only assign high probability to meaningful data) and, of course, on the sampling algorithm. As it turns out, it is very hard to write good sampling algorithms for non-trivial distributions (naive MCMC will often get stuck). So creativity is hard, but so are a lot of other tasks, so I don't think it's fair to single it out.
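The "naive MCMC will often get stuck" point can be demonstrated with a minimal random-walk Metropolis sampler (the target distribution, step size, and all parameter values below are arbitrary choices for the demonstration): aimed at a mixture of two well-separated modes, a small-step chain essentially never reaches the second mode, so its samples misrepresent the distribution it should be "creatively" sampling from.

```python
import math
import random

def target(x):
    # Unnormalized density: equal-weight Gaussian modes at -10 and +10, sd 1.
    return math.exp(-0.5 * (x - 10) ** 2) + math.exp(-0.5 * (x + 10) ** 2)

def metropolis(n_steps, step=0.5, x0=-10.0, seed=0):
    """Random-walk Metropolis with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
frac_right_mode = sum(s > 0 for s in samples) / len(samples)
print(frac_right_mode)  # ideally ~0.5; in practice ~0.0 -- the chain is stuck
```

Crossing from one mode to the other requires traversing a region whose density is around exp(-50), so with small steps the chain has no realistic chance of getting there in any reasonable number of iterations.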
I don't think generative models are sufficient to model the kind of creativity that the author is thinking about. See this quote from the article:
> The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible.
He is looking for solutions outside of the distribution that has been previously observed.
This author makes a good case that current approaches toward producing AGI are misguided:
> The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality.
But I don't think he's done a very good job of supporting his assertion that thinking (in the AGI sense) is a computational process. The closest he comes is:
> But that’s not a metaphor: the universality of computation follows from the known laws of physics.

That's it? Because physics?
In any case, I think that's a point very much relevant to the analogy: when you do more of the same, but at a scale a few orders of magnitude larger, you do get qualitatively different results.
That's very much applicable to AI. As an illustration, when we started to use GPUs for training neural nets with the exact same algorithms as a decade earlier but with an order of magnitude more parameters, we got dramatically better results (see Ciresan et al. 2010 [1]).
Building the same dumb skyscrapers but making them 100 times taller might in fact get them to fly.

[1] http://arxiv.org/abs/1003.0358
> The center of gravity far enough should make them satellites, if they were made from something that can withstand the forces involved?

Your "skyscraper" would need to have its top in geostationary orbit, or else it would wrap around the earth: its top would be in LEO or HEO and thus moving at a different speed than its base. I.e. your skyscraper (or cable) would come crashing down.
So that's a space elevator. But can a space elevator be said to be "flying"? It stays above a single spot the whole time.
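For scale, the geostationary figure implied in this exchange can be checked with a quick back-of-the-envelope calculation (standard textbook constants assumed below):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
T = 86164.0        # sidereal day, s
R_earth = 6.371e6  # mean Earth radius, m

# Geostationary radius: gravity balances the centripetal acceleration
# needed to circle once per sidereal day,
#   G*M / r^2 = (2*pi/T)^2 * r   =>   r = (G*M*T^2 / (4*pi^2))**(1/3)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_earth) / 1000
print(round(altitude_km))  # roughly 35,800 km above the surface
```

So a "skyscraper" with its top at geostationary height would be nearly six Earth radii tall, which is why the thread immediately reaches for the space-elevator comparison.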
tl;dr Human intelligence is special. Computer accomplishments like being good at chess are not real intelligence. In fact, anything computers ever do is not real intelligence, because they aren't like us and possibly never will be, because there is some divine truth and wisdom that only humans possess.
How many times do we have to hear these arguments to realize they are just hot air? Either computers are capable of intelligence, or they are not. The answer to that question depends entirely on how you define intelligence. If you define it as the set of things humans are capable of and computers are not, then the answer will always be no. But just like humans, computers learn and are taught new things every day, and as time goes on the set of things humans are uniquely capable of grows smaller and smaller.
The article stands in direct contradiction to your supposed summary of it. To wit:
> Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.
> [Turing] concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written. This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken.
etc etc etc. He literally spends more than 60% of the (very long) essay arguing against your "summary".
Anyway, a real tl;dr is right at the top of the page: "Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough." and then the last sentence: "it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever."
In other words, AGI is provably possible, but the author believes that we're going about it all wrong (behaviorist-inspired neural nets running on training sets, etc.) and need a philosophical (specifically: epistemological) breakthrough to move forward.
>In fact anything computers ever do is not real intelligence
No, he doesn't say that. On the contrary he names the principle ('Universality of Computation'; see paragraph 4) which guarantees that computers are capable of true intelligence, since, if programmed correctly, they can simulate the behaviour of any physical object, including human brains.
>there is some divine truth and wisdom that only humans possess.
He also explicitly repudiates supernatural explanations (see para immediately before the one mentioning John Searle).
Why is it so hard to remove ourselves from that equation? Why should intelligence be a skill only humans acquire? It is not only about machines: there is a constant stream of papers revealing that animals are not as dumb as we thought, either. We survived recognizing that we are not the center of the universe; maybe we are also not alone at the top of the intelligence pyramid.

The latest Edge question, "What do you think about machines that think?", provides a good and very broad overview of many aspects. I go with George Church: "What do you care what other machines think?"

http://edge.org/responses/what-do-you-think-about-machines-t...
Every time an AI system beats a human at a task that was previously thought hard (chess, Jeopardy, face recognition, etc.), many people immediately dismiss that task as no longer reflecting real intelligence.
> Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable.
This is where I gave up. Deutsch dismisses thousands of years of thought in a sentence. Not to mention that "justified true belief" is a phrase you find much more often in a textbook or an encyclopedia article than in a real work of philosophy.
The idea that we ought to have a better philosophical underpinning for AGI makes a lot of sense. Unfortunately the author blows past this and starts making a lot of tortured claims that don't entirely make sense.
The example of years that start with 20 seemed quite odd, given that the easiest way to understand numbers, at least for me, is inductively. Of course, if you lop off a bunch of information, such as the digits after the 20 or 19, it sounds like an impossible problem.