"In the short term, this could mean research into the economic effects of AI to stop smart systems putting millions of people out of work."
This seems unfortunate and somewhat challenging to do. Our current economic model encourages improving the efficiency of systems, which seems like a good thing. It's really too bad that people "need" jobs. Jobs should create value or they shouldn't exist. Artificially "creating" jobs to prop up the system feels like fighting against reality and a bad long-term plan.
We are now entering a new age in which our current form of capitalism will not work. You see, when there was enough demand for labor (as in, there was as much or more work to do than there were workers), things worked reasonably well. Now we are entering an era where there is much less demand for workers, yet we are not productive enough to just sit back, relax, and let machines handle 99.9%+ of the work.
Think about it: 50% unemployment is terrible. 100% unemployment is not unemployment. It means there are enough resources to go around that nobody has to work. There is so much food, shelter, and entertainment that we don't need to get up in the morning and go to work.
Obviously, 100% unemployment cannot happen. In all likelihood, we'll still need doctors, chefs, artists, etc. But that is the direction we are going in, and we had better figure out how to set up a new economy to adapt to the new realities.
I envision a shift in mindset. Personally, if I have more money than I could spend in a lifetime and choose to work to make myself even richer, I don't care if 10 other families are living off my wealth. People will have a choice: do something productive for society and live a slightly better lifestyle, or pursue more personal fulfillment. We'll see a lot more artists and a lot fewer custodians. In the long run, I believe this shift will be a good thing, but it will require us to stop using terms like "moocher class".
I agree that there's a benefit to making processes more efficient by replacing human labor with AI where possible, but I interpreted this to refer to systems that have wide economic effects, like robosigning foreclosures or selling company stocks.
Why do shareholders of big corporations profit from science in a grossly disproportionate way, while more than 50% of the world's population has to live on under $2 a day?
It is time for the world's greatest minds to start thinking about how to fix capitalism, because it seems to be seriously broken.
And we need it fixed more than we need e.g. iPhone 7.0, or Google Adwords 2.0.
I wonder: why should we care what Musk and Hawking think about AI? This article doesn't mention much of the bad stuff, but they have previously said that we should be afraid of AI/the singularity.
Having just written my thesis in AI, I probably know far more than those two about this. And we're sooo far away from AI being a superforce destroying mankind.
You can probably assume they are in contact with the leading experts in the field. From the letter:
The initial version of this document was drafted by Stuart Russell, Daniel Dewey & Max Tegmark, with major input from Janos Kramar & Richard Mallah, and reflects valuable feedback from Anthony Aguirre, Erik Brynjolfsson, Ryan Calo, Tom Dietterich, Dileep George, Bill Hibbard, Demis Hassabis, Eric Horvitz, Leslie Pack Kaelbling, James Manyika, Luke Muehlhauser, Michael Osborne, David Parkes, Heather Roff Perkins, Francesca Rossi, Bart Selman, Murray Shanahan, and many others.
They aren't the only ones who signed it, they just get top media billing because they always do. Other people who signed include AI researchers. Peter Norvig, for one.
I haven't read your thesis, but does it take into account things like exponential progress, deep learning, and systems gaining higher and higher privileges and control over infrastructure?
>> And we're sooo far away from this being a problem.
If we're far away from this being a problem, we may be far away from understanding how to solve it. We certainly wouldn't want the former to outpace the latter, given what is at stake.
AI, properly implemented, would and should succeed human intelligence. We are nowhere near even understanding it as a problem, much less solving it. I attended an AGI conference a couple of years ago (summer of 2012, iirc). The general feeling was that we are still a lifetime away from a solution.
I don't know much about the philosophy of AI and I'm only familiar at a basic level with modern AI algorithms. From what I have been exposed to I don't see any reason to think AI is any more than a set of statistical frameworks. Is there any reason to believe that these statistical frameworks are comparable to biological intelligence?
In this context, the term "artificial intelligence" refers to a human-built emulation of the physical process that allows humans to reason and shape the world around us in pursuit of our goals. This hasn't been achieved yet, but humans are proof that it is physically possible (and it arose through evolution, which is not even a strongly guided process).
Unless you believe there is something inherently special and unique about humans that makes it impossible for this physical phenomenon to be replicated artificially, there is absolutely something to worry about here.
It sounds like you're talking about the kinds of AI in use today? That's not what the cautions are about, since current "AI", however good at reading individual words or flying drones, is not yet capable of human-level thought. The cautions are about trans-sapient AI, which doesn't exist yet. Even if it's simply a beefed-up "set of statistical frameworks" linked in the right way to get a computer to behave like a human, consider what humans do: humans develop and use nuclear weapons, humans go on shooting sprees, humans decide to go to war...
> Research into AI, using a variety of approaches, had brought about great progress on speech recognition, image analysis, driverless cars, translation and robot motion, it said.
How much of this progress required training data generated by working humans? What would feed future statistical algorithms if this source of training data was greatly reduced?
So long as we can pull the plug or disconnect the interfaces, we'll be OK with AI. Once we can't, then we have a problem.
In effect, the scariest AI is distributed, self-propagating, and can't be unpowered: effectively a virus. I have yet to see a meaningful distributed AI, even in concept.
But it's not inconceivable that in the near future an AI could train itself to mutate (even if this means interfacing with Mechanical Turk or freelance websites and paying humans to do it).
My real concern with all of this is always the uncontrolled ecosystem of steadily evolving viruses and malware. We will never have control of that... and there is no telling what it can become in the future.
I think it will be a simple error induced by some random mutation in one of these malicious programs, not some vast artificial intelligence, that causes us problems in this arena first.
We need more AI working in the domain of computer security: systems that learn, under specific guidelines, to restrict computational behavior, given specifications of expected behavior.
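The idea of learning a specification of expected behavior and flagging deviations can be sketched, very loosely, as a statistical anomaly detector. This is a toy illustration, not a real security tool; the feature (event counts per interval) and the 3-sigma threshold are invented for the example:

```python
# Toy sketch: learn a baseline of "expected behavior" from observed
# per-interval event counts, then flag intervals that deviate sharply.
import statistics

def learn_baseline(samples):
    """Return (mean, stdev) of a list of per-interval event counts."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag a count more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > threshold

# Hypothetical training data: event counts observed during normal runs.
normal = [12, 15, 11, 14, 13, 12, 16, 14]
baseline = learn_baseline(normal)
print(is_anomalous(13, baseline))   # within the learned baseline -> False
print(is_anomalous(90, baseline))   # far outside the baseline -> True
```

Real systems in this vein learn far richer models of behavior (syscall sequences, network flows), but the shape is the same: a learned specification of "normal" plus a rule for restricting or flagging what falls outside it.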
Actually, we should start treating social fixes like we do technical rollouts. Prove it on a small scale somewhere first, and then carefully expand.
This seems like common sense. Yet for some reason changes in policies tend to be sweeping, national or even international.
http://www.economist.com/news/leaders/21578665-nearly-1-bill...
Put another way: if the financial beneficiary of increased efficiency is NOT those who invested, why would those investors make the investment at all?
It doesn't have to be conscious to be dangerous.
Am I thinking about this the wrong way?