I see a big parallel between the predictions for advances in neuroscience and the predictions that were made prior to the sequencing of the human genome (the author touches on this a bit too). Lots of smart scientists really believed that once the human genome was sequenced, we would have the keys to the biological kingdom. What has actually happened is that we have discovered the system is probably an order of magnitude more complex than previously thought. Knowing the sequence of a gene turns out to be important, but it is a pretty minor factor in explaining the gene's function. Plus, we are learning that all sorts of simple rules we thought were true aren't always the case.
I suspect a similar thing is playing out in neuroscience. As we peel back the layers of the onion, ever more complexity will be revealed. The things Ray Kurzweil predicts may well come true. He is a brilliant guy. But the timetable is very optimistic.
The march of biological progress is very slow, in part because all the experimentation involves living things that grow, die, get contaminated, run away, don't show up for appointments, get high, etc. Lots of people from other scientific disciplines, especially engineering-related ones, underestimate just how long even the simplest biological experiments can take.
"Lots of smart scientists really believed that once the human genome was sequenced, we would have the keys to the biological kingdom."
Here's my (a computer scientist's) view on the matter:
Imagine that you have a relatively complex computer system written with object-oriented principles. Now imagine that you are looking at the binary representation of this system and trying to make sense of the whole thing. Also imagine that you have no knowledge of how computer systems work, or of the layers between the program, the programming language, possibly a virtual machine, and native code.
There are layers involved between these objects and their binary representation. I imagine that there are also layers between our genome (analogy to binary code) and the leveraged representation of ourselves (analogy to object oriented system).
I think that this is why it is hard to make much sense out of the genome, even though the human genome was sequenced.
I also imagine that this is why it is hard to make sense out of the brain by looking at the brain directly. An analogy would be that we are again looking at binary representation of information.
It would be far more useful to figure out how this stuff works. I am not sure how this is done at this time.
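To make the analogy concrete, here is a minimal, purely illustrative sketch (Python, names my own) of how opaque the lower layer looks even for a trivial program:

    import dis

    def area(radius):
        # At this layer, the design intent is obvious.
        return 3.14159 * radius * radius

    # At the compiled layer it is a flat, anonymous stream of opcodes --
    # the rough equivalent of staring at raw genome sequence or raw
    # neural recordings without knowing the layers above.
    dis.dis(area)
    print(area.__code__.co_code)  # the literal bytes of the compiled code

And even this is a friendly case: the disassembler already knows the instruction set, which is more than we can say for the genome.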
He would be correct if creation of AI depended on a thorough understanding of neuroscience. But I hope we needn't wait that long.
It's the old "Birds fly. To fly, man must fully understand bird flight." argument. Yet today we still don't completely understand bird flight but planes _do_ fly.
The analogy is not complete: we have yet to find the "air", the "turbulence", a "Bernoulli principle", etc. of intelligence. That is to be determined. But this approach is the only reasonable one.
As the author implies, waiting for neuroscience is like waiting for Godot.
Exactly. We've had airplanes for a century, but working ornithopters are still something of a black art. Like so many things in engineering, it is easier, and better, not to pay naturally occurring phenomena undue attention. We are capable of engineering better.
Exactly, and he's talking about the human practice of neuroscience. When we manage to build a sufficiently advanced AI, we can set it to work on these types of problems; that's what's exciting to me.
Not interested in arguing about his timetable, but the example of DNA sequencing only affording a linear increase in understanding is bogus, and he ought to know that. It has significantly accelerated genetics research by making mapping a matter of a browser search. As an example, take the fly lines developed by Gerry Rubin et al., which can be manipulated to express any reporter gene in any genetically defined brain locus. That would have been completely infeasible prior to complete genomic sequencing of the fly.
The OP asks reasonable technical questions about medical nanorobots. I'm not going to defend Kurzweil, but some less-sloppy thinkers have written about this kind of stuff, like Merkle, Freitas, and Drexler. E.g. http://www.merkle.com/cryo/techFeas.html and http://www.nanomedicine.com/NMIIA/15.3.6.5.htm
They do tackle questions like how do you power these things; I wish he'd read and criticize them instead.
A 7-micron-long medical nanorobot sounds pretty damned big to me, btw -- in _Nanosystems_ Drexler fits a 32-bit CPU in a 400nm cube, less than 1/300 of the volume if we're talking about a 1-micron-radius cylinder.
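The arithmetic checks out, for what it's worth. A quick sketch (the 1-micron radius is the assumption stated above, not a figure from Drexler):

    import math

    cylinder = math.pi * 1.0**2 * 7.0  # 7-micron-long, 1-micron-radius robot: ~22.0 um^3
    cube = 0.4**3                      # Drexler's 400 nm CPU cube: 0.064 um^3
    print(cylinder / cube)             # ~344 -- the CPU is less than 1/300 of the volume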
This article is very similar to ones that biologists were publishing in the mid-'80s, when Kurzweil predicted the mapping of the human genome within 15 years. It's interesting how exponential progress is counter-intuitive even for those who have been experiencing it in their fields for years.
I always thought that was Kurzweil's main and strongest point. We tend to predict linearly when a lot of progress appears to happen exponentially. The rest are embellishments.
The main call to action should be how to protect, organize, and invest in ourselves given possible developments from the above.
My big problem with Kurzweil's singularity is the massive handwaving he does between 'computers are getting exponentially faster' and 'AI will arise'.
This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form. Maybe it is, maybe it isn't, but the lack of progress in hard AI in the last 30 years isn't a good sign.
There's never any progress in AI, because once we figure out how to do something, we stop calling it "AI".
In the last 30 years, computers have won at chess, won at Jeopardy, learned to recognize spam with better than 99.5% accuracy, learned to recognize faces with better than 95% accuracy, achieved semi-readable automatic translation, figured out what movies I should add to my Netflix queue, and started to recognize speech. We've seen huge advances in computer vision and statistical natural language processing, and we're seeing a renaissance in machine learning. Most of this stuff was considered "hard AI" as recently as 1992, but the goalposts have moved.
And if intelligence can't be represented in algorithmic form, then what's the brain doing? Even if we have immaterial souls that don't obey the laws of physics, why do some brain lesions cause weirdly specific impairments to our thought process? A huge chunk of our intelligence is clearly subject to the laws of physics, and therefore can be wedged somewhere into the computational complexity hierarchy.
> This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form.
We have a working definition and know that it is algorithmically representable nowadays (http://www.hutter1.net/ai/aixigentle.htm). Now the question is how you can make the algorithm efficiently computable.
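For reference, the core of that paper is an action-selection rule of roughly the following form (reproduced from memory, so treat the details as approximate): the agent picks the action that maximizes expected future reward under a Solomonoff-style mixture over every program consistent with its interaction history,

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, \ell(q) is the length of program q, and m is the horizon. The definition is precise but incomputable, which is exactly why "make it efficiently computable" is the open problem.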
Nice to see a post on this topic from a neuroscientist, as I am very interested in this area but know little biology.
One question, though: the author says "while the fundamental insights that have emerged to date from the human genome sequence have been important, they have been far from revelatory." While not guaranteed, doesn't it seem likely that we will understand much, much more about the human genome once economies of scale come into play? The price of sequencing a genome is currently on the order of $10,000, and if prices continue to fall at the rate they have (which seems likely, based on both past price decay and in-development technologies), the cost to sequence a genome will be on the order of $100 well before the end of this decade. Once we sequence millions to billions of genomes and compare the information in them with data from the corresponding human subjects, I suspect we will learn a lot more than we would by trying to understand a single person's genome. Moreover, given that the human genome is on the order of a gigabyte, it would seem difficult, but not unreasonably so, to try to understand most of the information in our DNA.

Thanks for any insight you can provide.
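A back-of-the-envelope sketch of both numbers (the 5x-per-year price decline is my assumption, loosely based on published sequencing-cost curves, not a figure from the comment):

    # Years for sequencing cost to fall from $10,000 to $100
    cost, year = 10_000.0, 2011
    while cost > 100:
        cost /= 5            # assumed ~5x/year decline, faster than Moore's law
        year += 1
    print(year)              # 2014 under this assumption: "well before the end of this decade"

    # Genome size: ~3.1 billion base pairs at 2 bits per base
    print(3.1e9 * 2 / 8 / 1e9, "GB")  # ~0.78 GB: "on the order of a gigabyte"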
I've never been impressed by the "simulate a single human" approach to AGI.
I don't know why it appeals to people. Has Christianity infected people with a desire for personal immortality? Are people inured to flushing billions and billions down the drain on biomedical research?
Another issue is that humans aren't that great anyway. The "game of life" is really about statistical inference and people aren't that good at it -- the success of Las Vegas proves it. If you can eliminate the systematic biases that people make dealing with uncertainty, you can make intelligence which is qualitatively superhuman, not just quantitatively superhuman.
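For a concrete instance of the kind of bias a casino exploits, take the expected value of a straight-up bet in American roulette (a standard textbook figure, worked here as a sketch):

    p_win = 1 / 38                     # 38 pockets, one winning number
    ev = p_win * 35 - (1 - p_win) * 1  # pays 35:1 on a $1 bet
    print(ev)                          # about -0.053: a ~5.3% house edge per bet,
                                       # and yet the tables stay full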
It's much more believable that steady progress will be made on emulating and surpassing human faculties. This won't be based on any one particular methodology (symbol processing, neural nets, Bayesian networks) but will be based on picking and choosing what works. Progress is going to be steady here because progress means better systems each step of the way.
Sure, the Hubert Dreyfuses will be with us each step of the way and will diminish our accomplishments... and they might still be doing so long after we're living in a zoo.
> The "game of life" is really about statistical inference and people aren't that good at it -- the success of Las Vegas proves it.
Las Vegas is run by human beings. Having members of your species be worse at a task than other members of your species doesn't prove that your species as a whole is not good at the task.
Now...
> Has Christianity infected people with a desire for personal immortality? Are people inured to flushing billions and billions down the drain on biomedical research?
Why wouldn't one want personal immortality? To be fair, religious groups are those least likely to support personal immortality of many sorts (e.g. brain uploading) because of questions such as "what happens to the soul?" and "isn't this meddling in our creator's work?".

Rather, I'd think that anyone who believes that this life is all that exists would want to prolong it indefinitely. It's better to be than not to be.

Do you have an argument against that?

Where by "Christianity" you mean the Epic of Gilgamesh?

Anyway, simulating a human has the main bonus that we don't have to fully understand how intelligence works to make progress.
Singularities have happened in the past, whenever life evolved a solution to a local problem: photosynthesis, the social primate, agriculture.

Kurzweil's singularity is just one of many potential singularities, but the near future seems to hold either major innovation with great generality, or collapse.
The likelihood of Kurzweil's particular vision of the singularity doesn't say anything about the likelihood of the singularity in general, i.e. one brought about by the creation of artificial intelligence through methods that are nearer at hand than nanobots or whole-brain emulation.
For me, this complexity problem could be insurmountable. I think the best approach may be to sidestep the issue and try selective breeding of increasingly intelligent virtual beings.
Before knocking Kurzweil's predictions, review his predictions of the 1990s and the people who mocked them. Kurzweil does not have a perfect track record, but I think his accuracy in predicting the future is way above average.
Also, I find his views of the future enlightening and useful, as he illustrates lots of "just out of reach" engineering projects for me to consider tackling.
Between the years of 1990 and 2005, Kurzweil predicted the following:
* People will mainly use portable computers.
* Portable computers will be lighter and easier to transport.
* Internet access will be available almost everywhere.
* Device cables will disappear.
* Documents will have embedded moving images and sounds.
* Virtual long distance learning will be commonplace.

Mock his current predictions with care. http://www.associatedcontent.com/article/8181399/the_predict...
Those predictions are a lot less impressive if you read the text of them rather than a short summary written after 2009 came to pass. The prediction about portable computers for 2009 that he actually made back in 1999 (in the book The Age of Spiritual Machines) reads:
"Personal computers with high-resolution visual displays come in a range of sizes, from those small enough to be embedded in clothing and jewelry up to the size of a thin book."
People don't really use wearable computers now, at least not ones powerful enough to drive high-resolution displays. And the desktop is still with us, as are laptops in the same large form factors that were all that was available in 1990.
Looking at his other predictions for 2009, I'm not sure it's fair to say that cables are disappearing, since you still need them for the highest-speed connections, but the statement is ambiguous enough that I'll give him that one.
"The Majority of text is created using continuous speech recognition" is totally false.
"Most routine business transactions take place between a human and a virtual personality. Often the virtual personality includes an animated visual presence that looks like a human face." No.
"Intelligent courseware has emerged as a common means of learning". Sure, but I'd already done this back in '99 so it wasn't a hard prediction.
"Pocket sized reading machines for the blind" AFAIK these exist.
"Translating telephones are commonly used for many language pairs" well, its in development. I imagine it'll be common by 2015.
"Widespread deflation" well, both the US and Japan have had bouts of deflation recently, that wasn't due to technology but rather central bank decisions.
"The neo-Ludite movement is growing" its less of a problem than in 1999, as far as I can tell.
If you look at all his predictions, rather than cherry-picking the best ones and rewriting the rest to sound better, he doesn't come out so well.
He's not 'mocking' the predictions, he's providing a compelling, well thought out argument as a counter to a prediction, particularly interesting due to his neuro background.
I would agree he's certainly right about Kurzweil's unrealistic optimism, but I'm not sure our understanding of the brain (and other aspects of our biology, for that matter) isn't increasing exponentially. Perhaps it just seems linear compared to the turbo-charged progress of these enabling technologies? Certainly we've come a lot further since Phineas Gage than a linear trajectory would allow.
He should have thrown around some numbers while he was at it. I wonder if he'd agree with clinical immortality by the end of this century, and mind-uploading by the end of the next?
I think you might be missing the point, though. The argument is that we're collecting exponentially more data about the brain, but that data doesn't translate directly to understanding.
You mentioned Phineas Gage. That case led to the idea of regions of the brain controlling different things, which led to lobotomy as a psychiatric treatment, which was used up until the 1960s or so. Then chemical methods improved, and people came to understand that neurotransmitters played a role too, which led to antidepressants and other drugs. Those drugs have improved, but their design hasn't changed that much in the last few decades. Obviously this is over-simplified -- but it doesn't sound like an exponential growth of understanding to me.
There's something rather disingenuous about your phrase: "not sure our understanding of the brain isn't increasing exponentially".
I think the claim that our knowledge (_actual_ knowledge, mind you) of anything is increasing exponentially (with the usual implication that the exponent isn't 1.01 per decade :-) ) is the claim that requires proof.