
Are the robots about to rise? Google's new director of engineering thinks so

35 points | wikiburner | 12 years ago | theguardian.com | reply

70 comments

[+] higherpurpose|12 years ago|reply
> But isn't he simply refusing to accept, on an emotional level, that everyone gets older, everybody dies?

Why? Why should we accept on an "emotional level" that we are going to die? Just because it's currently "inevitable"? Seems like a cop-out to me. I think humans are meant to be better than just "accepting their fate", and that we should always try to improve our lives and conditions.

[+] TeMPOraL|12 years ago|reply
Exactly. Probably the fact that we are told to accept it as a "natural order of things" is one of the reasons we haven't solved death yet.
[+] spindritf|12 years ago|reply
The idea of the singularity is based on extrapolating from the past (the progress of technology). If you extrapolate from past human lives, the eventuality of death seems pretty inevitable for many generations to come. Which extrapolation you choose probably says more about you than it does about the world.
[+] blumkvist|12 years ago|reply
I think that humans are meant to accept that they cannot, and should not be able to, obtain massive amounts of control over world-scale events for prolonged amounts of time.

We are a social species. I believe every individual should carve out his piece of history and then let others do the same, whether they choose to continue in your tracks or not. If we are to achieve any greatness at all, we should all participate, even people who are yet to be born.

What you are suggesting seems like breaking nature's cycle to me.

[+] dekhn|12 years ago|reply
One point the article misses: Ray is a director of engineering, not the director of engineering. There is more than one engineering director at Google.
[+] 1337biz|12 years ago|reply
Sometimes I think Ray runs the risk of becoming some sort of PR mascot for the company...
[+] flycaliguy|12 years ago|reply
I always like to remind people that the road towards immortality is going to involve a significant period in which us normals have to deal with immortal rich people. Sounds awful, like, just about the worst societal dynamic I can think of.

Don't be surprised if they realize there isn't enough room for the rest of us. These new 1% immortals may also require a special country in which they are not at risk of being tragically harmed by one of us billion mortals. Watch you don't get bit by their 2 tonne Boston Dynamics guard dog...

[+] exratione|12 years ago|reply
[Rewind to 1940s]

"I always like to remind people that effectively treating heart disease is going to involve a significant period in which us normals have to deal with long-lived rich people. Sounds awful, like, just about the worst societal dynamic I can think of."

[Back to the present]

Our age is characterized by the fact that access to medical technology is basically flat. Rich people get to hire better doctors, but there are no super-secret, ultra-restricted forms of medicine that are inaccessible to everyone else. Your chances of getting into clinical trials of the new new things are about as good as theirs, provided you are prepared to pick up a phone and put in the time.

[+] smsm42|12 years ago|reply
Why is it awful? All new technologies initially cost tons of money. It is the rich early adopters who keep them afloat long enough for them to become cheaper and within reach of the masses, once scale advantages kick in. But if nobody buys it when it's too expensive, there might just not be a chance to develop it to the point where it's affordable.

Why is it not only bad, but the worst you can think of? I, for one, can think of much worse things than an immortal Bill Gates, however annoying it may be for some. Just turn on the news and watch long enough; I'm sure you'll see some of it. Then head to the library and open any 20th century history book. I guarantee you'll find things much worse than an annoyingly long-lived billionaire, such as the genocide of millions, for example. And not once but multiple times.

> Don't be surprised if they realize there isn't enough room for the rest of us.

Why wouldn't there be enough room for the rest of us?

[+] Geee|12 years ago|reply
No harm done. I bet immortality will eventually come in the form of 'brain backups' in Google Drive, and your physical body will be merely a vessel you can use to interact with the physical world.
[+] d0|12 years ago|reply
The immortals are only immortal with respect to natural causes of death...

History tells us that people will quickly use that to their advantage.

[+] dag11|12 years ago|reply
Sounds just like Elysium.
[+] edoloughlin|12 years ago|reply
> But he's the sort of genius, it turns out, who's not very good at boiling a kettle. He offers me a cup of coffee and when I accept he heads into the kitchen to make it, filling a kettle with water, putting a teaspoon of instant coffee into a cup, and then moments later, pouring the unboiled water on top of it. He stirs the undissolving lumps and I wonder whether to say anything but instead let him add almond milk – not eating dairy is just one of his multiple dietary rules – and politely say thank you as he hands it to me. It is, by quite some way, the worst cup of coffee I have ever tasted.

Slightly off topic, but this sort of guff makes me abandon a lot of articles in the first few paragraphs. In fact, I just did exactly that to come here and complain. It's little more than the writer exercising his/her own ego. I'd much rather they get to the point, which is what their interviewee has to say.

[+] lisper|12 years ago|reply
> It's little more than the writer exercising his/her own ego.

No, it's adding what is called "human interest" to the story, which (the theory goes) makes it more interesting to non-technical people.

[+] k-mcgrady|12 years ago|reply
I found it added a little humour to the article.
[+] Killah911|12 years ago|reply
I'd suggest "The Most Human Human". It's unfortunate that those calling themselves "futurists" somehow seem to think the point is just being alive forever... it's kind of shallow.

Death may be inevitable, but hopefully those who preach from the pulpit have a little more depth to them and have examined their lives more carefully than to simply say, "I'll just live forever." In that sense, I think Jobs had the right idea. If life were "infinite" or even significantly prolonged (e.g. 10 times the current life expectancy), I think we'd have a lot of thinking to do to come to terms with such a new reality.

[+] TeMPOraL|12 years ago|reply
> If life were "infinite" or even significantly prolonged (e.g. 10 times the current life expectancy), I think we'd have a lot of thinking to do to come to terms with such a new reality.

I'd be happy to spend the next few centuries thinking about this problem.

[+] brador|12 years ago|reply
Isn't our willingness to take risks connected with our inevitable, eventual death? Without a guaranteed eventual death, few would take risks without massive compensation.

Consider the need in human societies for revolution. With a massively long lifespan, no one will revolt when the need arises, due to the risk of death, leading to a pretty shitty state with no way out.

[+] himangshuj|12 years ago|reply
Seems more like an article about the virtues of Ray's past predictions, and is rather one-sided. Ray Kurzweil, the man with the crystal ball.
[+] coldtea|12 years ago|reply
Or, you know, it's just another premature product from Google, to get news coverage as an "innovative" company by rehashing older stuff in non-marketable forms.

Like self-driving cars, computer glasses, cloud-only laptops and the like, all met with minimal success.

[+] worldsayshi|12 years ago|reply
What's with this attitude? How can you be innovative without failing 9 times out of 10? That's what innovation entails! If you haven't failed while innovating, you were either extremely lucky or you've got yourself some divine superpower.

The iPhone wasn't new either, as in it was a rehash of old ideas. But it was a new execution. These are all new executions.

[+] Geee|12 years ago|reply
I'm predicting a future where most people will live on in a virtual world with 'unlimited' lives. Just the brain will be kept alive in a box somewhere. Well, that sounds like The Matrix, but I think it's pretty inevitable.
[+] coldtea|12 years ago|reply
Or, you know, where only a handful of people will live this way. In isolated, heavily guarded areas. With tons of energy, food, toys, technology, medicine and the like.

And the majority will slave away and be harvested for work, organs, sex slaves and such.

You know, sometimes you need to provide a more realistic picture of the future (this is not totally unlike how people actually live in places like Rio or Russia for example, and even L.A. http://www.amazon.com/City-Quartz-Excavating-Future-Angeles/...).

[+] jmount|12 years ago|reply
That's his schtick: publicly speculating on this is a big part of his fame.
[+] IsaacL|12 years ago|reply
I have to admit that I'm not a huge fan of Ray Kurzweil - he's one of a large group of people who believe that accelerating change will almost certainly be good. I think the singularity could be good, but it could also be really bad, and it's important to spend some resources on making sure it goes well.

MIRI (formerly the Singularity Institute) has a mixed reputation around these parts, but after reading fairly widely over the last year I think they have the deepest thinking on the topic of AI. Here's a concise summary of their worldview: http://intelligence.org/2013/05/05/five-theses-two-lemmas-an...

As I see it, their argument goes:

1. It's tempting to think of AIs becoming either our willing servants or integrating nicely with human society. In actuality, AIs will likely be able to bootstrap themselves to superintelligence extremely rapidly; we'll soon be dealing with alien minds that we fundamentally can't understand, and there will be little stopping the AI/AIs from doing whatever they want.

2. It's tempting to think, from analogy to the smartest human beings, that superintelligent AIs would be wise and benevolent. In actuality, a superintelligent AI could easily have strange or bizarre goals. I find this makes more sense if you think of AIs as "hyperefficient optimisers", as the word "intelligence" has some misleading connotations.

3. OK, well surely we can leave the AIs with weird goals to do their thing, and build other AIs to do useful things, like cure cancer or research nuclear fusion? The trouble is that even an innocuous goal, when given to an alien superintelligence, will very likely end badly. An AI programmed to compute Pi would realise that it could compute Pi more efficiently by hacking all available computer systems on the planet and installing copies of itself, or by developing nanotechnology and converting all matter in the solar system into extra computational capacity. You have to explicitly program the AI not to do this, and defining the set of things the AI should not do is a hard problem. (Remember that 'common sense' and 'empathy' are human abilities, and there's no reason an AI would have anything like them.)

4./5. OK, well, we'll build an AI with the goal of maximising the happiness of humanity. But then the AI ends up building a Brave-New-World style dystopia, or kidnaps everyone and hooks them up to heroin drips to ensure they are in constant opiated bliss. It's really hard to come up with a good set of values to program into an AI that doesn't omit some important human value (like consciousness, or diversity, or novelty, or creativity, or whatever).

I'm glad that Peter Norvig (director of research at Google) is concerned about the issue of friendly AI. I'm curious to hear what other HN readers think of these ideas.

Anticipating some common objections I hear from friends:

How could a superintelligent AI have a stupid goal like computing Pi?/Wouldn't it be smart enough to break any controls we put on it?

I think this objection assumes an AI would be wired together like a typical intelligent human mind. If you think of an AI as a pure optimisation process, it's clear that it would have no reason to reprogram the ultimate goals it begins with.
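
A minimal toy sketch of what I mean by "pure optimisation process" (my own illustration, not anything from the article; all names are made up): the goal is a fixed function handed to the search loop, the loop only ever mutates candidate actions, and nothing in it proposes rewriting the goal itself.

    import random

    # Toy "pure optimiser": the goal is fixed data, not part of the search space.
    def objective(action):
        # Stand-in terminal goal (how close the action is to a target value).
        # Nothing anywhere rewards editing or replacing this function.
        return -(action - 42) ** 2

    def optimise(steps=1000):
        best = random.uniform(-100, 100)
        for _ in range(steps):
            candidate = best + random.uniform(-1, 1)    # search over actions only
            if objective(candidate) > objective(best):  # the goal is evaluated, never revised
                best = candidate
        return best

    print(optimise())  # converges towards 42; the objective itself never changes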

If they're smarter than us, we should just let the AI take over/AIs are like our children, ultimately we should leave them free to do whatever they want

Again, this assumes the AIs are like super-powered human minds and that they will do interesting things once they take over, like contemplate the deep mysteries of the universe. But it's clearly possible for the AIs to devote themselves to really trivial tasks, like calculating digits of Pi for all eternity.