
Superintelligence: The Idea That Eats Smart People

883 points | pw | 9 years ago | idlewords.com

580 comments

[+] alexbecker|9 years ago|reply
While I agree with Maciej's central point, I think the inside arguments he presents are pretty weak. I think that AI risk is not a pressing concern even if you grant the AI risk crowd's assumptions. Excerpted from https://alexcbecker.net/blog.html#against-ai-risk:

The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses... AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which like almost everything else are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.

[+] modeless|9 years ago|reply
Exactly. The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else. And AI is not the only technology with the potential to worsen inequality in the world.
[+] tvural|9 years ago|reply
People are worried about AI risk because ensuring that the strong AI you build to do X will do X without doing something catastrophic to humanity instead is a very hard problem, and people who have not thought much about this problem tend to vastly underestimate how hard it is.

Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere. Now you might say, why don't we just hardcode in a goal to the AI like "solve aging, and also don't hurt anyone"? And ensure that the AI's method of achieving its goals won't have terrible unintended consequences? Oh, and the AI's goals can't change? This is called the AI control problem, and nobody's been able to solve it yet. It's hard to come up with good goals for the AI. It's hard to translate those goals into math. It's hard to prevent the AI from misinterpreting or modifying its own goals. It's hard to work on AI safety when you don't know what the first strong AI will look like. It's hard to prove with 99.999% certainty that your safety measures will work when you can't test them.

Things will not turn out okay if the first organization to develop strong AI is not extremely concerned about AI risk, because the default is to get AI control wrong, the same way the default is for planets to not support life.

My counterpoint to the risks of more limited AI is that limited AI doesn't sound as scary when you rename it statistical software, and probably won't have effects much larger in magnitude than the effects of all other kinds of technology combined. Limited AI already does make militaries more effective, but most of the problem comes from the fact that these militaries exist, not from the AI. It's hard for me to imagine an AI carrying out a military operation without much human intervention that wouldn't pose a control problem.

--------- Edited in response to comment ---------

[+] kenko|9 years ago|reply
> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next

You know, you don't need to go that far. You know what a great way to kill a particular group of people is? Well, let's take a look at what a group of human military officers decided to do (quoting from a paper of Elizabeth Anscombe's, discussing various logics of action and deliberation):

""" Kenny's system allows many natural moves, but does not allow the inference from "Kill everyone!" to "Kill Jones!". It has been blamed for having an inference from "Kill Jones!" to "Kill everyone!" but this is not so absurd as it may seem. It may be decided to kill everyone in a certain place in order to get the particular people that one one wants. The British, for example, wanted to destroy some German soldiers on a Dutch island in the Second World War, and chose to accomplish this by bombing the dykes and drowning everybody. (The Dutch were their allies.) """

There's a footnote:

""" Alf Ross shews some innocence when he dismisses Kenny’s idea: ‘From plan B (to prevent overpopulation) we may infer plan A (to kill half the population) but the inference is hardly of any practical interest.’ We hope it may not be. """

It's not an ineffective plan.

[+] state_less|9 years ago|reply
The internet also brought us wikipedia, google, machine learning and a place to talk about the internet.

Machine learning advances are predicated on the internet, will grow the internet, and will become what we already ought to know we are: a globe-spanning hyperintelligence working to make more intelligence at breakneck pace.

Somewhere along this accelerating continuum of intelligence, we need to consciously decide to make things awesome. So people aim to build competent self-driving cars so that fewer people die of drunk driving or boredom. Let's keep trying. Keep trying to give without thought of getting something in return. Try to make the world you want to live in. Take a stand against things that are harmful to your body (in the large sense and the small sense) and your character. Live long and prosper!!!

[+] ph0rque|9 years ago|reply
>We already have nuclear weapons, which like almost everything else are always getting cheaper to produce.

And in an almost miraculous result, we've managed not to annihilate each other with them so far.

> Income inequality is already rising at a breathtaking pace.

In the US, yes, but inequality is lessening globally.

> The internet has given birth to history's most powerful surveillance system and tools of propaganda.

It has also given birth to a lot of good things, some that are mentioned in a sibling comment.

[+] panic|9 years ago|reply
> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next.

This has been done many times by human-run militaries; would AI make it worse somehow?

Groups of humans acting collectively can look a lot like an "AI" from the right perspective. Corporations focused on optimizing their profit spend a huge amount of collective intelligence to make this single number go up, often at the expense of the rest of society.

[+] tboyd47|9 years ago|reply
No doubt that his "inside arguments" have been rebutted extensively by the strong AI optimists and their singularity priests. After all, dreaming up scenarios in which robotic superintelligence dominates humanity is their version of saving the world.

That's why I found the "outside arguments" here equally important and compelling.

> The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.

If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.

The best rebuttals to all this are the least engaging.

"Dude, are you telling me you want to build Skynet?"

[+] openasocket|9 years ago|reply
I'll push back against the idea of smart factories leading to "a vast chasm between a new, tiny Hyperclass and the destitute masses." I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money. Income inequality obviously benefits the rich (in that they by definition have more money), but only up to a point. We won't devolve into an aristocracy, at least not because of automation.
[+] yakult|9 years ago|reply
There are at least two failure cases here:

- a military AI in the hands of bad actors that does bad stuff with it intentionally.

- a badly coded runaway AI that destroys earth.

These two failure modes are not mutually exclusive. When nukes were first developed, the physicists thought there was a small but plausible chance, around 1%, that detonating a nuke would ignite the air and blow up the whole world.

Let's imagine we live in a world where they're right. Let's suppose somebody comes around and says, "let's ignore the smelly and badly dressed and megalomanic physicists and their mumbo jumbo, the real problem is if a terrorist gets their hands on one of these and blows up a city."

Well, yes, that would be a problem. But the other thing is also a problem. And it would kill a lot more people.

[+] pfisch|9 years ago|reply
I mean, if you made me a disembodied mind connected to the internet that never needs to sleep and can make copies of itself, I could effectively take over the world in ~20-50 years, possibly much less time than that.

I make lots of money right now completely via the internet and I am not even breaking laws. It is just probable that an AI at our present level of intelligence could very quickly amass a fortune and leverage it to control everything that matters without humanity even being aware of the changeover.

[+] apsec112|9 years ago|reply
There are also nearer-term threats (although I'd likely disagree on many specifics), but I don't see how that erases longer-term threats. One nuclear bomb being able to destroy your city now doesn't mean that ten thousand can't destroy your whole country ten years down the line.
[+] OscarCunningham|9 years ago|reply
It's possible that we could face both AI risks consecutively! First a tiny hyperclass conquers the world using a limited superintelligence and commits mass genocide, and then a more powerful superintelligence is created and everyone is made into paperclips. Isn't that a cheery thought. :-)
[+] empath75|9 years ago|reply
The real danger of ai is that they allow people to hide ethically dubious decisions that they've made behind algorithms. You plug some data into a system and a decision gets made and everyone just sort of shrugs their shoulders and doesn't question it.
[+] krmboya|9 years ago|reply
Isn't that the conclusion he gives at the end of the article? Ethical considerations.
[+] cryoshon|9 years ago|reply
what if we made a superintelligent AI that was our Socrates?

superintelligence in a military AI is worrisome, but superintelligence in a cantankerous thinker is quite reassuring...

[+] mtgx|9 years ago|reply
Yes, that's the ultimate threat. But in the meantime, the threat is that the military will think the AI is "good enough" to start killing on its own, while the AI actually gets it wrong a lot of the time.

Kind of like what we're already seeing now in courts, and kind of how the NSA's and CIA's own algorithms for assigning a target are still far less than 99% accurate.

[+] apsec112|9 years ago|reply
"I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us."

This is trivially false. Over a hundred billionaires have now pledged to donate the majority of their wealth, and the list includes many tech people like Bill Gates, Larry Ellison, Mark Zuckerberg, Elon Musk, Dustin Moskovitz, Pierre Omidyar, Gordon Moore, Tim Cook, Vinod Khosla, etc, etc.

https://en.wikipedia.org/wiki/The_Giving_Pledge

Google has a specific page for its charity efforts in the Bay Area: https://www.google.org/local-giving/bay-area/

This only includes purely non-profit activity; it doesn't count how eg. cellphones, a for-profit industry, have dramatically improved the lives of the poor.

[+] apsec112|9 years ago|reply
This article explicitly endorses argument ad hominem:

"These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult. Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. The outside view doesn't care about content, it sees the form and the context, and it doesn't look good."

The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion, as in the cult case. But the cases where it doesn't work can be really, really important. 99.9% of 26-year-olds working random jobs inventing theories about time travel are cranks, but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).
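To put rough numbers on why the rare miss dominates, here's a back-of-envelope expected-value sketch in Python (all figures hypothetical):

    # If 99.9% of submissions are from cranks, is screening them all out rational?
    p_genuine = 0.001            # hypothetical: 1 in 1,000 is a real breakthrough
    value_of_breakthrough = 1e6  # hypothetical payoff, in hours of progress
    cost_of_reading = 10.0       # hypothetical cost of evaluating one manuscript

    expected_value = p_genuine * value_of_breakthrough - cost_of_reading
    print(expected_value)        # 990.0: positive, because the payoff is asymmetric

The asymmetry of the payoff, not the base rate, is what decides the question.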

[+] apsec112|9 years ago|reply
"Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn't want to go, you know what would happen to Einstein. He would have to resort to a brute-force solution that has nothing to do with intelligence, and in that matchup the cat could do pretty well for itself."

This seems, actually, like a perfect argument going in the other direction. Every day, millions of people put cats into boxes, despite the cats not being interested. If you offered to pay a normal, reasonably competent person $1,000 to get a reluctant cat in a box, do you really think they simply would not be able to do it? Heck, humans manage to keep tigers in zoos, where millions of people see them every year, with a tiny serious injury rate, even though tigers are aggressive and predatory by default and can trivially overpower humans.

[+] kazagistar|9 years ago|reply
They always miss a critical and subtle assumption: that intelligence scales at least as fast as the computational complexity of improving that intelligence.

This is the one assumption I am most skeptical of. In my experience, each time you make a system more clever, you also make it MUCH more complex. Maybe there is no hard limit on intelligence, but maybe each generation of improved intelligence takes longer to find the next one, due to the rapidly ramping difficulty of the problem.
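To make that concrete, here's a toy model in Python (all numbers made up) where each generation of self-improvement costs exponentially more than the last:

    # Capability grows by a fixed step per generation, but each step
    # costs twice as much compute to find as the previous one.
    def self_improvement(compute_budget):
        capability, step_cost = 1.0, 1.0
        while compute_budget >= step_cost:
            compute_budget -= step_cost
            capability += 1.0    # each generation adds the same gain
            step_cost *= 2.0     # but the next one is twice as hard to find
        return capability

    print(self_improvement(10**3))  # -> 10.0
    print(self_improvement(10**9))  # -> 30.0: a millionfold budget buys ~20 more steps

Under those assumptions you get steady, linear progress, not an explosion.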

I think people see the exponential-looking growth of technology over human history, and just kinda interpolate or something.

[+] kobayashi|9 years ago|reply
I can't disagree enough. Having recently read Superintelligence, I can say that most of the quotes taken from Bostrom's work were disingenuously cherry-picked to suit this author's argument. S/he did not write in good faith. To build a straw man out of Bostrom's theses completely undercuts the purpose of this counterpoint. If you haven't yet read Superintelligence or this article, turn back now. Read Superintelligence, then this article. It'll quickly become clear to you how wrongheaded this article is.
[+] apsec112|9 years ago|reply
"The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation."

This isn't just an unreflective assumption. The argument is laid out in much more detail in "The Basic AI Drives" (Omohundro 2008, https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...), which is expanded on in a 2012 paper (http://www.nickbostrom.com/superintelligentwill.pdf).

[+] tlb|9 years ago|reply
Certainly the assumption that every intelligent agent will want to recursively self-improve is unwarranted.

But it only takes one intelligent agent that wants to self-improve for the scary thing to happen.

[+] coldtea|9 years ago|reply
So? Just because it's in some paper doesn't mean much. There are tons of BS string theory papers for example.
[+] Animats|9 years ago|reply
Nearer term risks:

- AI as management. Already, there is at least one hedge fund with an AI on the board, with a vote on investments.[1] At the bottom end, there are systems which act as low-level managers and order people around. That's how Uber works. A fundamental problem with management is that communication is slow and managers are bandwidth-limited. Computers don't have that problem. Even a mediocre AI as a manager might win on speed and coordination. How long until an AI-run company dominates an industry?

- Related to this is "machines should think, people should work." Watch this video of an Amazon fulfillment center.[2] All the thinking is done by computers. The humans are just hands.

[1] http://www.businessinsider.com/vital-named-to-board-2014-5 [2] https://vimeo.com/113374910

[+] visarga|9 years ago|reply
> The humans are just hands.

Not for long. Robots will be cheaper soon.

> All the thinking is done by computers.

It's hard for humans to operate on more than 7 objects at the same time - a limitation of working memory. So naturally there are simple management and planning tasks that benefit from computers' ability to track more objects.

[+] beloch|9 years ago|reply
It's one thing to worry about AIs taking over the world someday. It's quite another matter to think about current military automation of WMD deployment.

Everyone's probably seen Dr. Strangelove at some point in time. If you haven't, stop reading immediately and go watch it. You will not regret this. Those who have watched it are familiar with a contrived, hilarious, but mostly plausible scheme by which human beings could be fooled into launching an unauthorized nuclear first strike. This is with technology from half a century ago. As you watch this movie, you will be exposed to a system with checks and safeties that can be bypassed by a determined (and insane) individual. Many humans at every step of the process could have stopped the deployment, but chose to blindly follow orders, well, like machines.

What people should be worried about today is how many humans stand between a decision made by a nuclear power's leader and launch. Humans doubt. Humans blink. Humans flinch. When all the data says nuclear missiles are inbound and it's time to retaliate, humans can still say "no", and have[1]. If you automate humans out of the system, you wind up reducing the running length of a Dr. Strangelove remake. I suspect it would be down to under five minutes today.

Thanks to popular media, we have this strange idea that taking humans out of the equation in automated weapon systems reduces the possibility for error. Individual humans can, and do, make mistakes. This is true. However, humans fix each other's mistakes in any collaborative process. Machines, on the other hand, only amplify the mistakes of the original user. If a bad leader makes a bad decision with a highly automated nuclear arsenal at his or her disposal, how many other humans will have the chance to scrutinize that decision before machines enact it?

[1]https://en.wikipedia.org/wiki/Stanislav_Petrov

[+] jimmcslim|9 years ago|reply
'In particular, there's no physical law that puts a cap on intelligence at the level of human beings.'

Maybe not, but there are definitely physical laws governing everything else that a superintelligent being's ambitions would run into.

A superintelligent being isn't going to be able to build a superluminal reactionless drive if the laws of the universe say it isn't possible.

More relevantly, a superintelligent being isn't going to be able to enslave us all with robots if the laws of chemistry don't permit a quantum leap in battery technology.

[+] Afforess|9 years ago|reply
>Observe that in these scenarios the AIs are evil by default, just like a plant on an alien planet would probably be poisonous by default.

I believe this is a core misunderstanding. Bostrom never says that a superintelligent AI is evil by default. He argues that the AI's goals will be orthogonal to ours: specified so incompletely that pursuing them leads it to destroy humanity. The paperclip-optimizer AI doesn't want to kill people; it just doesn't notice them, the same way you don't notice the ants you drive over on your daily commute. AIs with goals orthogonal to our own will attack humanity the same way humanity attacks the rainforests: piecemeal, as needed, and without remorse or care for what was there before. It won't be evil; it will be uncaring, and blind.

[+] md224|9 years ago|reply
"Put another way, this is the premise that the mind arises out of ordinary physics... If you are very religious, you might believe that a brain is not possible without a soul. But for most of us, this is an easy premise to accept."

The thing that irks me about this is how it reinforces a common (and in my opinion, false) dichotomy: either you believe the mind is explicable in terms of ordinary physics or you believe in a soul and are therefore religious. I feel like there should be a third way, one that admits something vital is missing from the physicalist picture but doesn't make up a story about what that thing is. There is a huge question mark at the heart of neuroscience -- the famed Explanatory Gap -- and I think we should be able to recognize that question mark without being labeled a Supernaturalist. Consciousness is weird!

[+] SerLava|9 years ago|reply
I don't understand why people have such a weird problem reconciling the brain with the mind.

It IS all physical. It's also an unimaginably complex fucking shitload of tiny physical objects working incredibly quickly. If your brain were scaled up enough that you could see the parts working like little gears of a clock, it would probably be planet-sized, or something like that. Huge.

I would EVEN say it doesn't raise any interesting philosophical questions. Are computers silicon, or magic? Is a book paper, or magic? Is the economy magic, or a bunch of people buying shit everywhere?

Knowing that the brain is physical doesn't make me question myself or doubt control over my life or any silly shit like that. Yes all of my actions are technically "predetermined" by the Big Bang, but not in an interesting way, at all.

Semi-related, if you made a giant brainlike calculator out of water pipes and valves and shit, and asked it a question, then turned all its pipes and valves for a while... it would probably just say "please kill me."

[+] JamilD|9 years ago|reply
The roundworm (C. elegans) has only 302 neurons and about 7,000 synapses, but is capable of social behavior, movement, and reproduction. The entire connectome has been mapped, and we understand how many of these behaviors work without having to resort to additional ontological entities like your "third way".

If this complex behavior can be explained using only 302 neurons, I have no doubt that the complexity of human behavior and consciousness can be explained using 100,000,000,000 neurons.

[+] seiferteric|9 years ago|reply
One thing I rarely see discussed is the possibility that the brain is just too complex to practically reproduce. That is to say, it is technically possible, but not practical. Evolution had billions of years, working in a massively parallel way, to work this out, after all. It's possible that the brain is a huge tangled mess of rules and special cases that we will never be able to fully understand and reproduce. Also, even if we are able to produce a basic intelligence, why do we assume it will ever get to the point where it can understand itself well enough to self-improve? It's possible there is a threshold we won't be able to get past for self-improvement to be possible.
[+] TeMPOraL|9 years ago|reply
It doesn't seem like a false dichotomy. It's about whether you believe there are things that are fundamentally not comprehensible in the universe. Things that are magic in the sense of "fuck it, that doesn't make sense, let's go get drunk instead". If you believe that all phenomena can in principle be understood given enough time and effort (including even building a better intelligence to figure it out), then you hold the materialist view.
[+] okreallywtf|9 years ago|reply
You could say that the third way IS already part of the physicalist picture. As you say, there are such gaps in our understanding that at this point we cannot claim we've probed the depths of our brains and still found nothing.

It is like searching the ocean for something that by all accounts should be in the ocean, and deciding that perhaps the thing cannot be found, despite having large portions of the ocean still to search. It is too early to make that claim.

I'm not saying there are no smug philosophers or scientists or just internet nerds who will declare the problem solved ("clearly it is all in the brain and there is nothing magical to it"), but I think they would be incorrect. Modern physics is incredibly esoteric and shares more with metaphysics than with what we tend to observe (which classical physics can explain satisfactorily for the most part). I don't think being a physicalist is any impediment to having spirituality and a sense of wonder about the universe and our place in it; the main difference to me is that we can't just accept a deus ex machina and call it a day. That may sound condescending, but it isn't a choice for me personally.

[+] kobayashi|9 years ago|reply
>The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.

Bostrom absolutely did not say that the only way to inhibit a cataclysmic future for humans post-SAI was to design a "moral fixed point". In fact, many chapters of the book are dedicated to exploring the possibilities of ingraining desirable values in an AI, and the many pitfalls in each.

Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on that quote and how difficult it would be to apply to machine language, as well as what the quote even means. This author dismissively throws the quote in without acknowledgement of the tremendous nuance Bostrom applies to this line of thought. Indeed, this author does that throughout his article - regularly portraying Bostrom as a man who claimed absolute knowledge of the future of AI. That couldn't be further from the truth, as Bostrom opens the book with an explicit acknowledgement that much of the book may very well turn out to be incorrect, or based on assumptions that may never materialize.

Regarding "The Argument From My Roommate", the author seems to lack complete and utter awareness of the differences between a machine intelligence and human intelligence. That a superintelligent AI must have the complex motivations of the author's roommate is preposterous. A human is driven by a complex variety of push and pull factors, many stemming from the evolutionary biology of humans and our predecessors. A machine intelligence need not share any of that complexity.

Moreover, Bostrom specifically notes that while most humans may feel there is a huge gulf between the intellectual capabilities of an idiot and a genius, these are, in more absolute terms, minor differences. The fact that his roommate was/is apparently a smart individual likely would not put him anywhere near the capabilities of a superintelligent AI.

To me, this is the smoking gun. I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face, and thus, I highly doubt that the author actually read the book which he attacks so gratuitously.

[+] pkinsky|9 years ago|reply
Not that I take the whole Bostrom superintelligence argument too seriously, but this is an incredibly weak argument (or more accurately, bundle of barely-related arguments thrown at a wall in the hope that some stick) against it. Feel free to skip the long digression about how nerds who think technology can make meaningful changes in a relatively short amount of time are presumptuous megalomaniacs whose ideas can safely be dismissed without consideration, it's nothing that hasn't been said before.
[+] rl3|9 years ago|reply
The notion that near-term AI concerns and existential AI concerns somehow represent a binary option that we must choose between is fallacious at best.

Near-term AI concerns represent a massive challenge encompassing many ethical and social issues. They must be addressed.

Existential AI concerns, while low probability, have consequences so dire that they warrant further research regardless. These too must be addressed.

There is ample funding, and there are ample human resources, to work on both problems effectively. Why fight about it?

[+] timelincoln|9 years ago|reply
I think it's important to regulate the potential runaway effect of these ideologies that satisfy the religious instincts of groups.
[+] wyager|9 years ago|reply
> What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.

This is a particularly stupid version of https://en.wikipedia.org/wiki/Appeal_to_consequences

"If you don't agree with me, you'll be associated with these people I'm lambasting!" I was surprised to see something so easily refutable used to conclude the argument; the article started out fairly strong.

> If you're persuaded by AI risk, you have to adopt an entire basket of deplorable beliefs that go with it.

Well if they're "deplorable", they must be false! QED.

[+] grandalf|9 years ago|reply
What about the counter-argument from domestic canines:

More likely, artificial intelligence would evolve in much the same way that domestic canines have evolved -- they learn to sense human emotion and to be generally helpful, but the value of a dog goes down drastically if it acts in a remotely antisocial way toward humans, even if doing so was attributable to the whims of some highly intelligent homunculus.

We've in effect selected for certain empathic traits and not general purpose problem solving.

Pets are not so much symbiotic as they are parasitic, exploiting the human need to nurture things, and hijacking nurture units from baby humans to the point where some humans are content enough with a pet that they do not reproduce.

I could see future AIs acting this way. Perhaps you text it and it replies with the right combination of flirtation and empathy to make you avoid going out to socialize with real humans. Perhaps it massages your muscles so well that human touch feels unnecessary or even foreign.

Those are the vectors for rapid AI reproduction... they exploit our emotional systems and only require the ability to anticipate our lower-order cognitive functioning.

If anything, an AI would need to mimic intellectual parity with a human in order to create empathy. It would not feel good to consult an AI about a problem and have it scoff at the crudeness of your approach to a solution.

Even if we tasked an AI with assisting us with life-optimization strategies, how will the AI know what level of ambition is appropriate? Is a promotion good news? Or should it have been a double promotion? Was the conversation with friends a waste of time? Suddenly the AI starts to seem like little more than Eliza, creating and reinforcing circular paths of reasoning that mean little.

But think of the undeniable joy that a dog expresses when it has missed us and we arrive home... the softness of its fur and the genuineness of its pleasure in our company. That is what humans want and so I think the future Siri will likely make me feel pleased when I first pick up my phone in the morning in the same way. She'll be there cheering me on and making me feel needed and full of love.

[+] lern_too_spel|9 years ago|reply
Pretty poorly argued. The AI alarmists simply argue that if the super-intelligence's objective isn't defined correctly, the super-intelligence will wipe us out as a mere consequence of pursuing its objective, not that the super-intelligence will try to conquer us in a specific way like Einstein putting his cat in a cage. The alarmists' argument is analogous to humans wiping out ecosystems and species by merely doing what humans do and not by consciously trying to achieve that destruction. Many of the author's arguments stem from this fundamental mistake.
[+] narrator|9 years ago|reply
"The second premise is that the brain is an ordinary configuration of matter, albeit an extraordinarily complicated one. If we knew enough, and had the technology, we could exactly copy its structure and emulate its behavior with electronic components, just like we can simulate very basic neural anatomy today."

We could have a computer program that perfectly simulates the brain, but with some nasty O(2^N)-complexity parts that are carried out in constant time by physical processes such as protein folding. Thus, in theory we could simulate a brain inside a computer, but the program would never get anywhere, even assuming Moore's law continued indefinitely.
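A back-of-envelope illustration of how badly that scales, in Python (numbers hypothetical):

    # An O(2^N) step that a cell performs "for free" via physics,
    # simulated for N = 100 at a generous 10^18 operations per second:
    steps = 2 ** 100                    # ~1.3e30 operations
    seconds = steps / 1e18              # ~1.3e12 seconds
    print(seconds / (3600 * 24 * 365))  # ~40,000 years for one such event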

[+] AnimalMuppet|9 years ago|reply
I don't buy the AI quasi-religious stuff. But your argument here is flawed. If protein folding can do the process in constant time, we may be able to find another process (but electronic rather than wet chem) that can also do it in constant time.
[+] mrfusion|9 years ago|reply
I've always wondered if the problems an intelligence solves are exponentially hard, so that even if we build a superintelligence it wouldn't be all that much smarter than we are.

For example, compare how many more cities in the traveling salesman problem a supercomputer can handle vs. your grandma's PC. It's more, but surprisingly not all that many more.
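A quick sketch of that comparison in Python (budgets hypothetical), using brute-force TSP at n! route evaluations:

    from math import factorial

    def max_cities(ops_budget):
        # Largest n whose n! route evaluations fit in the budget.
        n = 1
        while factorial(n + 1) <= ops_budget:
            n += 1
        return n

    print(max_cities(10**12))  # grandma's PC, ~1e12 total ops  -> 14 cities
    print(max_cities(10**20))  # supercomputer, ~1e20 total ops -> 21 cities

A hundred-million-fold speedup buys only seven more cities.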

What do you think of that idea?

[+] samatman|9 years ago|reply
I think the fact that we share 98% of our DNA with chimpanzees, yet have brains almost three times as large, suggests that isn't the case. Language appears to be a force-amplifier for intelligence, and we have no reason to believe that other such amplifiers do not exist.
[+] idlewords|9 years ago|reply
I think this basic concept of intractability, which programmers are very familiar with, hasn't penetrated far enough into AI world.

Bostrom and Yudkowsky in particular seem happy to hand-wave past computational complexity.

[+] danieltillett|9 years ago|reply
It doesn't even hold with humans. Take a look at John von Neumann - and his brain still had to fit down the human birth canal.

We really have a poor idea of what superintelligence is - none of us can even understand people much more intelligent than ourselves.