EliAndrewC | 11 years ago
- There are something like 100,000,000,000 neurons in the human brain, each of which can have up to around 10,000 synaptic connections to other neurons. This is basically why the brain is so powerful.
- Modern CPUs have around 4,000,000,000 transistors, but Moore's law means that number keeps doubling roughly every 18 months.
- A couple of decades from now (probably in the 2030s), the number of transistors will exceed the number of synaptic connections in a brain. This doesn't automatically make computers as "smart" as people, but many of the things the human brain does well will become achievable by brute-forcing them with parallelism (see the sketch after this list).
- Once you have an AI that's effectively as "smart" as a human, you only have to wait 18 months for it to get twice as smart. And then again. And again. This is what "the singularity" means to some people.
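To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. All the inputs (the starting transistor count, the ~1e15 synapse estimate, the 18-month doubling period) are just the rough figures above, and the resulting year is very sensitive to each of them:

    import math

    # Rough figures from the bullets above (all assumptions).
    NEURONS = 100_000_000_000          # ~1e11 neurons in a human brain
    SYNAPSES_PER_NEURON = 10_000       # up to ~1e4 connections each
    TRANSISTORS_2014 = 4_000_000_000   # ~4e9 transistors in a 2014-era CPU
    DOUBLING_YEARS = 1.5               # the "18 months" Moore's-law cadence

    synapses = NEURONS * SYNAPSES_PER_NEURON            # ~1e15 connections
    doublings = math.log2(synapses / TRANSISTORS_2014)  # ~17.9 doublings
    crossover = 2014 + doublings * DOUBLING_YEARS

    print(f"synaptic connections: {synapses:.1e}")
    print(f"doublings needed:     {doublings:.1f}")
    print(f"crossover year:       ~{crossover:.0f}")  # lands around 2041

Note that with these exact inputs the crossover lands around 2041; a bigger starting chip or a faster doubling cadence pulls it into the 2030s.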
The other form of this argument I see in some places is that all you need is an AI that can increase its own intelligence, plus a lot of CPU cycles, and you'll end up with an AI that's almost arbitrarily smart and powerful.
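To see why that's called a "singularity" rather than just more exponential growth, here's a toy model (the specific numbers are arbitrary assumptions of mine, not part of the argument): if each self-improvement cycle doubles capability, and a smarter AI also finishes its next cycle faster, unbounded capability arrives in finite time:

    # Toy model of recursive self-improvement (illustrative assumptions only).
    intelligence = 1.0   # arbitrary units; 1.0 = human-level
    t = 0.0              # elapsed time in years
    cycle_time = 1.5     # the "18 months" for the first improvement cycle

    for cycle in range(10):
        t += cycle_time
        intelligence *= 2    # each cycle doubles capability...
        cycle_time /= 2      # ...and a smarter AI iterates faster
        print(f"t = {t:5.3f} yr, intelligence = {intelligence:6.0f}x human")

    # t converges to 3 years (1.5 * (1 + 1/2 + 1/4 + ...)) while
    # intelligence grows without bound: a finite-time blow-up.

If the cycle time stayed fixed instead, you'd get ordinary exponential growth, which is the weaker version of the claim.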
I don't hold these views myself, so hopefully someone with more information can step in to correct anything I've gotten wrong. (LessWrong.com seems to generally view AI as a potential extinction risk for humans, and from poking around I found a few pages such as http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/)
tptacek | 11 years ago
I do understand where the notion of hockey-stick increases in intellectual ability comes from.
I do understand the concept that it's hard to predict what would come of "superintellectual" ability in some sort of synthetic intelligence. That we're in the dark about it, because we're intellectually limited.
I don't understand the transition from synthetic superintellectual capability to actual harm to humans.
'Micaiah_Chang seems to indicate that it would result in a sort of supervillain, who would... what, trick people into helping it enslave humanity? If we were worried about that happening, wouldn't we just hit the "off" switch? Serious question.
The idea of genetic engineering being an imminent threat has instant credibility. It is getting easier and cheaper to play with that technology, and some fraction of people are both intellectually capable and psychologically defective enough to exploit it to harm people directly.
But the idea that AI will exploit genetic engineering to do that seems circular. In that scenario, it would still be insufficient controls on genetic engineering that would be the problem, right?
I'm asking because I genuinely don't understand, even if the only rhetorical tone I can manage is "snarky disbelief".
'sama seems like a pretty pragmatic person. I'm trying to get my head around specifically what's in his head when he writes about AI destroying humanity.
Micaiah_Chang | 11 years ago
But for the "off" switch question specifically, a superintelligence could also have "persuasion" and "salesmanship" among its abilities. It could start saying things like "wait, no, that's actually Russia creating that massive botnet, you should do something about them", or "you know that cancer cure you've been looking for, for your child? I may be a cat-picture AI, but with access to the internet I could find a solution in a month instead of a year and save her".
At least from my naive perspective, once it has access to the internet it gains the ability to become highly decentralized, in which case the "off" switch becomes much more difficult to hit.
JoeAltmaier | 11 years ago
The singularity will happen when digital brains start figuring out how to make themselves better. Then they will really take off, and never slow down.