item 9059251

EliAndrewC | 11 years ago

Here's the standard argument, as I understand it:

- There are something like 100,000,000,000 neurons in the human brain, each of which can have up to around 10,000 synaptic connections to other neurons. This is basically why the brain is so powerful.

- Modern CPUs have around 4,000,000,000 transistors, but Moore's law means that this number will just keep going up and up.

- Several decades from now (probably in the 2030s), the number of transistors will exceed the number of synaptic connections in a brain. This doesn't automatically make computers as "smart" as people, but many of the things the human brain does well through brute-force parallelism will become very achievable.

- Once you have an AI that's effectively as "smart" as a human, you only have to wait 18 months for it to get twice as smart. And then again. And again. This is what "the singularity" means to some people.
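Taking the figures above at face value (these are the argument's assumptions, not established numbers: ~10^15 total synaptic connections, ~4×10^9 transistors per CPU, a doubling every 18 months), the timeline can be sanity-checked with a quick doubling calculation:

```python
import math

# Assumed figures from the argument above
synapses = 100e9 * 10_000   # ~1e15 synaptic connections in a brain
transistors = 4e9           # transistors in a modern CPU
doubling_years = 1.5        # Moore's law: doubling every 18 months

# How many doublings until one chip's transistor count matches
# the brain's synapse count, and how long that takes
doublings = math.log2(synapses / transistors)  # ~17.9 doublings
years = doublings * doubling_years             # ~27 years
print(round(doublings, 1), round(years, 1))
```

Roughly 27 years from the mid-2010s lands in the early 2040s, so "probably in the 2030s" sits at the optimistic end of this back-of-envelope estimate.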

The other form of this argument which I see in some places is that all you need is an AI which can increase its own intelligence and a lot of CPU cycles, and then you'll end up with an AI that's almost arbitrarily smart and powerful.

I don't hold these views myself, so hopefully someone with more information can step in to correct anything I've gotten wrong. (LessWrong.com seems to generally view AI as a potential extinction risk for humans, and from poking around I found a few pages such as http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/)


tptacek|11 years ago

Ok, to both you and 'Micaiah_Chang cross-thread:

I do understand where the notion of hockey-stick increases in intellectual ability comes from.

I do understand the concept that it's hard to predict what would come of "superintellectual" ability in some sort of synthetic intelligence. That we're in the dark about it, because we're intellectually limited.

I don't understand the transition from synthetic superintellectual capability to actual harm to humans.

'Micaiah_Chang seems to indicate that it would result in a sort of supervillain, who would... what, trick people into helping it enslave humanity? If we were worried about that happening, wouldn't we just hit the "off" switch? Serious question.

The idea of genetic engineering being an imminent threat has instant credibility. It is getting easier and cheaper to play with that technology, and some fraction of people are both intellectually capable and psychologically defective enough to exploit it to harm people directly.

But the idea that AI will exploit genetic engineering to do that seems circular. In that scenario, it would still be insufficient controls on genetic engineering that would be the problem, right?

I'm asking because I genuinely don't understand, even if the only rhetorical tone I can manage is "snarky disbelief".

'sama seems like a pretty pragmatic person. I'm trying to get my head around specifically what's in his head when he writes about AI destroying humanity.

Micaiah_Chang|11 years ago

Er, sorry for giving the impression that it'd be a supervillain. My intention was to indicate that it'd be a weird intelligence, and that by default weird intelligences don't do what humans want. There are some other examples I could have given to clarify: e.g., telling it to "make everyone happy" could just result in it giving everyone heroin forever, and telling it to preserve people's smiles could result in it fixing everyone's face into a paralyzed smile. The reason it does those things isn't that it's evil, but that they're the quickest and simplest ways of satisfying the instruction; it doesn't have the full values that a human has.

But for the "off" switch question specifically, a superintelligence could also have "persuasion" and "salesmanship" among its abilities. It could start saying things like "wait, no, it's actually Russia that's creating that massive botnet, you should do something about them", or "you know that cancer cure you've been looking for, for your child? I may be a cat-picture AI, but if I had access to the internet I could find a solution in a month instead of a year and save her".

At least from my naive perspective, once it has access to the internet it gains the ability to become highly decentralized, in which case the "off" switch becomes much more difficult to hit.

jacquesm|11 years ago

If the math worked out that way, a cluster of 25 or so computers should be able to support a full-blown AI. But clusters of tens of thousands of computers are still simply executing relatively simplistic algorithms. So I would estimate either that the number of transistors required for AI is much higher than the number of neurons (which are not easily modeled in the digital domain), or that our programming bag of tricks needs a serious overhaul before we can consider solving the problem of hard AI.
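One plausible reading of where the "25 or so" figure comes from, using the counts quoted upthread (this is my inference, not something jacquesm states): 25 chips of 4 billion transistors match the brain's *neuron* count, but matching the *synapse* count takes four orders of magnitude more hardware:

```python
# Figures quoted earlier in the thread
neurons = 100e9                  # ~1e11 neurons
synapses = neurons * 10_000      # ~1e15 synaptic connections
transistors_per_chip = 4e9       # transistors in one modern CPU

# One transistor per neuron vs. one transistor per synapse
print(neurons / transistors_per_chip)    # 25 chips to match neurons
print(synapses / transistors_per_chip)   # 250,000 chips to match synapses
```

The gap between 25 and 250,000 machines is one concrete way to state jacquesm's point that the naive transistor-for-neuron math doesn't hold up.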

JoeAltmaier|11 years ago

That sounds about right. There's speed of thought (wetware brains currently win) and then there's speed of evolution. Digital brains definitely win that one. Because some wetware brains are spending all their time figuring out how to make the digital ones better. Nobody is doing that for the soggy kind.

The singularity will happen when the digital brains are figuring out how to make themselves better. Then they will really take off, and not slow down, ever.