item 10168837

Google self-driving car pioneer wants to teach people how to face the future

70 points | w1ntermute | 10 years ago | economist.com

58 comments

[+] bsaunder | 10 years ago
I agree with the premise that machines will increasingly exceed our capabilities (and are improving more quickly than we are). This seems so obvious as to be not debatable.

I also think there is a lot of merit to online education, and Udacity seems to have the right general idea. Online education needs to be more engaging and entertaining than simply videos and online collaboration.

However, this whole "people need more skills" and "we need more jobs" meme feels like a losing fight with reality. In fact, this push for increased human skills seems like it will only widen the gap between the über-skilled and the 99%. In our current economic model, this seems like increased pain for most of us.

IMHO it would be much better if we could stop this denial. We need to pivot our economic system towards a basic income model. I think a much better strategy would be for us to start focusing our best and brightest on full automation of our needs and basic wants so that we can provide most essential services to people with minimal human and resource cost.

[+] mathattack | 10 years ago
Many great scientists (Einstein among them) get into trouble when they veer into economics. Einstein predicted social unrest from massive unemployment due to productivity improvements. Productivity improvements help people in aggregate.

The question here is: will a small subset of the world accrue all the benefit from the AI productivity improvement? One answer is basic income, presumably funded by taxation of the rich. Another answer might be some kind of universal ownership scheme for the companies creating the AI. I'm not advocating government ownership of business (that rarely ends well), but perhaps the government could grant shares in index funds in lieu of basic cash payments? Essentially equity-based social security or welfare payments in lieu of some portion of the cash payments. I haven't thought this fully through yet, but it does allow some aspect of shared upside. (My sense is there will be a lot of shared upside anyway, similar to how we all benefit from free maps and search.)

[+] bduerst | 10 years ago
UBI isn't the answer - there are goods and services with income-elastic prices that will be just as out of reach with UBI as they were before.

A prime example of this is housing, which is why many government welfare programs stipulate what the delivered value can be spent on (food stamps, Medicaid, etc.) as opposed to just open-ended income.

Is it perfect? No. With every massive government program comes bureaucratic overhead - but replacing it with something that avoids the overhead while carrying potentially damaging economic side effects isn't the solution.

[+] JoshTriplett | 10 years ago
> However, this whole "people need more skills" and "we need more jobs" meme feels like a losing fight with reality.

In the long term, sure: if we can successfully build an AI smarter than humans, it's unlikely any job will survive. But at the same time, no job will need to; there's a balance there. Planning for a post-scarcity economy makes sense, but the plans that work in such an economy don't necessarily work while scarcity still exists.

[+] Mimu | 10 years ago
I agree with that. I'm not sure basic income is THE solution - I haven't thought about it too much - but I agree with everything else.

People don't need jobs, they need money. Jobs are the only legal way to get money but money is the issue, not jobs.

[+] Florin_Andrei | 10 years ago
> In fact this push for increased human skills seems like it will only widen the gap between the über-skilled and the 99%.

What happens is that pressure is simply increasing all the time, and people are on a bell curve w.r.t. how well they can keep up. We are effectively grabbing that bell curve and pulling its ends much further apart.

We need a solution, otherwise things are going to get very, very ugly.

[+] criddell | 10 years ago
> We need to pivot our economic system towards a basic income model.

So putting a floor on what people get each year seems like a pretty good idea. How about a ceiling as well? What's an equitable spread? 10x? 100x? Should the basic income given to residents of Manhattan be 2.5x what people in Wichita get?

[+] ericjang | 10 years ago
Even if the standard of living is raised for all humans, advances in AI and automation seem likely to drastically reduce the percentage of productive members of society, in spite of educational opportunity.

Improving educational opportunities (a la Udacity) is a noble idea because it levels the playing field a bit (giving the underprivileged a chance), but the end result is the same - a small number of people will inevitably own AI, and the world by extension.

Assuming that the new AI titans are compelled to share their wealth with humanity, wouldn't less productive members of society still face an existential threat? An improved standard of living only takes care of the first two or three layers of Maslow's hierarchy of needs - but "esteem" and "self-actualization" are really important as well.

[+] seiji | 10 years ago
> but "esteem" and "self-actualization" are really important as well.

Many people fill those needs with banal things like drinking contests or sports.

[+] astazangasta | 10 years ago
This is not about humans vs machines, it is STILL, after all these centuries, about labor vs capital.
[+] bduerst | 10 years ago
Pretty much. The massive automation that happened during early 20th century modernization sparked a unionizing movement with laborers. Kind of interesting that we don't see that level of organization now.
[+] arstin | 10 years ago
If Thrun is right about AI "outsmarting people in every dimension", how in the world could more entertaining job-training videos organized into "nanodegrees" help us at all? I know this article was just a free ad for Udacity, so he had to say self-serving things, but do you think the dude really believes this?

As others have pointed out, the real problem before us this century is to somehow decouple having a basic standard of living from performing work - more specifically, from "contributing to productivity". My amateur guess is that even with our current state of food and energy production, the biggest barrier isn't the massively difficult economic and organizational problem, but advancing as a culture beyond our entrenched moral assumption that the means for basic living is something to be earned rather than a human right.

[+] stonogo | 10 years ago
When this company can build a phone that won't crash, I'll believe they can build a car that won't crash. Until that day comes I feel like stern warnings of the coming economic revolution are not really in order.
[+] sandworm101 | 10 years ago
Arrogant, as is typical of those with a financial stake in a particular version of progress. This guy believes that his version of the future will happen and that it is his job to help others realise that truth. Technological progress may be inevitable, but the progress envisioned by any particular person is not.

Nanodegrees may seem a good idea, and they no doubt make sense financially to Google, but that does not make them inevitable. People have been attending today's traditional schools since long before the Aztecs were even a thing (Cambridge, 1200s). There is wisdom in those years. It may be time to change that wisdom, but it won't happen within a generation.

Self-driving cars seem likely but are not inevitable. All sorts of safety-enhancing technologies are dropped for apparently irrelevant reasons. Why do we sell cars that can break speed limits? Why do school buses not have seat belts? Why do planes still have error-prone pilots? Why are alcohol and cigarettes still a thing? Each of those has at some point been challenged by technological progressives who thought their version of the future was unavoidable. Each was proven wrong. Only the arrogant assume the future.

FYI, anyone who thinks driverless cars are inevitable should look at the Futurama exhibit of 1939. We were then going to have them by 1960. Then it was Ford. Now it is Google. I'll believe it when I see it.

https://en.wikipedia.org/wiki/Futurama_%28New_York_World%27s...

[+] moe | 10 years ago
> Self-driving cars seem likely but are not inevitable.

Yes they are inevitable.

> All sorts of safety-enhancing technologies are dropped

Self-driving cars are not only a "safety-enhancing technology".

First and foremost they are a money-saving technology.

Over 4 million people are employed in the USA transportation industry[1]. 1.7 million of them are truck drivers[2].

Global logistics companies will save billions of dollars per year by upgrading their fleets to self-driving vehicles.

[1] http://www.bls.gov/emp/ep_table_201.htm

[2] http://www.bls.gov/ooh/transportation-and-material-moving/he...
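To put a rough number on "billions of dollars per year", here's a back-of-envelope sketch. The driver count is the 1.7 million figure cited above; the per-driver annual cost is a hypothetical assumption for illustration, not a figure from the BLS links:

```python
# Back-of-envelope estimate of the annual labor cost that self-driving
# trucks could displace. Driver count is from the BLS figure cited above;
# the per-driver cost (wage + benefits) is a hypothetical assumption.
TRUCK_DRIVERS = 1_700_000
ASSUMED_ANNUAL_COST_PER_DRIVER = 45_000  # USD, assumed for illustration

total_usd = TRUCK_DRIVERS * ASSUMED_ANNUAL_COST_PER_DRIVER
print(f"~${total_usd / 1e9:.1f}B per year in driver labor cost")
```

Even if automation only captured a fraction of that, the savings would still be measured in billions per year, which is the point being made.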

[+] davidcgl | 10 years ago
> Nanodegrees may seem a good idea, and they no doubt make sense financially to Google

Google doesn't own Udacity.

[+] ihsw | 10 years ago
> To the extent we are seeing the beginning of a battle between artificial intelligence (AI) and humanity, I am 100% loyal to people.

Perhaps I'm just jaded or cynical, but fuck people.

We, as a species, are the greatest threat to ourselves, and we are our greatest asset. An individual may be brilliant, and large groups congregating into governments that empower hundreds of millions of individuals may be the greatest feat our species has accomplished, but there is just so much instability across a wide spectrum of life.

We, as a species, are approaching the limit of what 7B people on this planet are capable of. Globalization is piquing and revolutionary growth will pique with it, and maintaining the rate at which we grow will be accomplished by broad-sweeping reforms with the end-goal being tearing down inefficiencies -- starting with economic.

We, as a species, absolutely can bring a first-world quality of life to everybody, so why shouldn't we?

We, as a species, absolutely can cooperate and govern on a global scale, where such cooperation is codified into law, so why shouldn't we?

The answer is that we don't want to because we can't handle giving up control to those other people because they don't have our best interests in mind.

How do we face the future? By letting go of our ill-conceived notion that we are fit to govern ourselves in the current manner, and accepting that operational control of various aspects of humanity (e.g. supply chain management, namely natural resource allocation and food distribution) will be automated, and as such outside the purview of human judgement.

[+] StavrosK | 10 years ago
This is completely off topic, but this is the first time I saw someone say "pique" when they meant "peak".
[+] d883kd8 | 10 years ago
I disagree. You are probably being downvoted out of disagreement. (d_-1?)

I'd like to thank you for contributing your perspective in this reasoned and impassioned advocacy for authoritarianism.

[+] kaybe | 10 years ago
> We, as a species, absolutely can bring a first-world quality of life to everybody, so why shouldn't we?

Not with the current lifestyle and the given resources.