While LeCun and Ng are real-world experts on AI and deep learning, the other two people in the article have very little technical understanding of deep learning research.
The huge triumph of DL has been figuring out that as long as you can pose a problem in a differentiable way and you can obtain a sufficient amount of data, you can efficiently tackle it with a function approximator that can be optimized with first-order methods - from that flows everything.
We have very little idea how to make really complicated problems differentiable. Maybe we will - but right now the toughest problems that we can put in a differentiable framework are those tackled by reinforcement learning, and the current approaches are incredibly inefficient.
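To make the recipe concrete, here's a minimal sketch (a toy curve-fitting setup in PyTorch; the data and architecture are invented for illustration): pose the problem as a differentiable loss, then let a first-order method grind away.

    import torch

    # Pose the problem differentiably: fit a tiny network to noisy data.
    x = torch.linspace(-3, 3, 256).unsqueeze(1)
    y = torch.sin(x) + 0.1 * torch.randn_like(x)
    model = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for step in range(2000):
        loss = ((model(x) - y) ** 2).mean()  # differentiable objective
        opt.zero_grad()
        loss.backward()  # autograd supplies the gradient
        opt.step()       # first-order update - that's the whole trick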
> The huge triumph of DL has been figuring out that as long as you can pose a problem in a differentiable way and you can obtain a sufficient amount of data, you can efficiently tackle it with a function approximator that can be optimized with first-order methods - from that flows everything.
This isn't really what is responsible for the success of deep learning. Lots and lots of machine learning algorithms existed before deep learning that essentially optimize a (sub-)differentiable objective function, most notably the LASSO. Rather, it's that the recursive/hierarchical representation used by DL is somehow much better at representing complicated functions than things like kernel methods. I say "somehow" because exactly why, and to what extent, this is true is still an active subject of research within theoretical ML. It happens in many areas of math that "working in the right basis" can dramatically improve one's ability to solve certain problems. This seems to be what is happening here, but our understanding of the phenomenon is still quite poor.
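For context, the LASSO example: its objective ||Ax - b||^2 / 2 + lambda * ||x||_1 is non-smooth but (sub-)differentiable, and a simple first-order method solves it. A minimal sketch (ISTA on synthetic data; the sizes and lambda are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    b = rng.standard_normal(100)
    lam = 0.5

    # ISTA: proximal (sub)gradient descent on the LASSO objective.
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(50)
    for _ in range(500):
        z = x - step * (A.T @ (A @ x - b))  # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding

    print(np.count_nonzero(x), "nonzero coefficients")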
Bostrom is an expert on the thing he is talking about: the control problem and the long-term future of AI. He didn't make any specific claims about the near-term future of AI, or about deep learning.
>We have very little idea how to make really complicated problems differentiable.
All of the problems that deep learning solves were once called "really complicated" and "undifferentiable". There's nothing inherently differentiable about image recognition, or Go playing, or predicting the next word in a sentence, or playing an Atari game. NNs excel at these tasks anyway, because they are really good at pattern recognition. Amazingly good. And this is an extremely general ability that can be used as a building block for more complex things.
Can you translate your great technical summary into a few potential applications to help us understand what might be on the near horizon for AI/DL?
LeCun cites photo recognition, Ng cites autonomous trucks, Nosek cites the difficulty of auto-scaling online courses and some kind of magnetic brain implant.
Seems to me that these are all fringe/isolated use-cases - like learning to use your fingers one at a time without learning the concept of how to use them together to grasp an object. Perhaps once we get better with each of these fringe "senses" we'll be able to create some higher level intelligence that can leverage learnings across dimensions that we haven't even considered yet.
Not to mention they all solve problems for which the training sets lie on low-dimensional manifolds within a very high-dimensional space. This brings about arbitrary failures when one goes out of sample, and it also serves as the basis for creating adversarial data with ease (use the gradient of the network to mutate the data just a tiny little bit).
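That parenthetical is essentially the fast gradient sign method. A hypothetical sketch (the model here is an untrained stand-in; in practice you'd attack a trained network):

    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(784, 10)  # stand-in for a trained image classifier
    x = torch.rand(1, 784, requires_grad=True)  # the "image"
    label = torch.tensor([3])

    loss = F.cross_entropy(model(x), label)
    loss.backward()  # gradient of the loss with respect to the input
    eps = 0.05       # mutate the data "just a tiny little bit"
    x_adv = (x + eps * x.grad.sign()).detach().clamp(0, 1)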
I suspect there's a promising future in detecting, and potentially correcting for, out-of-sample data, even if the methods for doing so are non-differentiable.
What comes to mind for me is quantum annealing, as seen in D-Wave's concept videos.
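On the detection side, one common baseline (a sketch of one simple idea, not a claim about what the commenters have in mind): flag inputs where the classifier's own confidence is low.

    import torch
    import torch.nn.functional as F

    def ood_score(model, x):
        # Max softmax probability: in-distribution inputs tend to score high,
        # out-of-sample inputs lower. Crude, but a common baseline.
        with torch.no_grad():
            probs = F.softmax(model(x), dim=1)
        return probs.max(dim=1).values

    model = torch.nn.Linear(784, 10)  # stand-in for a trained classifier
    x = torch.rand(8, 784)
    flagged = ood_score(model, x) < 0.5  # the threshold is an arbitrary choice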
> The huge triumph of DL has been figuring out that as long as you can pose a problem in a differentiable way and you can obtain a sufficient amount of data, you can efficiently tackle it with a function approximator that can be optimized with first-order methods - from that flows everything.
But this only defines the circumstances under which deep learning will produce a solution; it doesn't tell us why DL has been so effective. There is little point in an algorithm guaranteed to converge if that algorithm has little applicability.
To me, DL's triumph was being able to make advances in fields that were at a standstill with traditional methods (CV is an obvious one, but natural language processing is a good one as well). This in turn has attracted enough attention that DL is now being considered for a very wide variety of problems. Obviously, DL won't be successful on all of them but that's science as usual.
Bostrom brings a different set of skills to the table; to dismiss him for his lack of technical understanding is to ignore the problem itself: that our technology is so powerful a social force that sometimes even its creators can't fathom its impact.
> If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws. But a “Terminator”-style scenario is, in my view, immensely improbable. It would require a discrete, malevolent entity to specifically hard-wire malicious intent into intelligent machines, and no organization, let alone a single group or a person, will achieve human-level AI alone.
Isn't this incredibly shortsighted? It ignores all the questions regarding the morals and ethics an intelligent machine might develop and how they would affect the way it behaves... It used to take nations to build computers, then large corporations, then off-the-shelf parts assembled by a kid in his garage.
The first strong AI will most likely be a multi-billion-dollar project, but its creation will arguably usher in an era in which strong AI is ubiquitous.
https://wiki.lesswrong.com/wiki/Paperclip_maximizer
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
The Terminator scenario is basically one single centralized AI wired into everything (or at least into enough military power to stave off opposition as it force-wires itself into everything else). But practically speaking, a single centralized AI is highly unlikely to be a world-takeover machine of godlike intelligence in its first iteration. The iterative improvement you refer to, which could potentially make it dangerous, also points to a likely diversity of agents with a variety of designs, run by a diverse set of organizations with a diverse set of goals, and with many incentives to seek protection from hostile AIs and from computer systems run by 1337 h4x0rz (possibly foreign government hackers).
Diversity and independence of AI agents mitigate the Terminator-scenario danger; the situation would really be not so different from the modern-day situation with natural intelligences that have control over resources and weapons of war.
Andrew Ng made a really good analogy to those afraid of strong AI destroying humanity: "It's like being afraid of overpopulation on Mars, we haven't even landed on the planet yet."
I think a lot of the fear comes from the fact that if/when such a system is created, its knowledge and capabilities will only be equal to a human's for a short period of time, after which it will probably surpass any human capability at an ever increasing rate. Then, we would be suddenly at the mercy of it, which scares a lot of people. It’s very hard for us to try and predict the actions of something that could end up being thousands of times smarter than us.
AIs can be manipulated, too, even when they "work as intended." People are putting too much trust in AI because it's "just math" and "just an objective machine". Maybe it won't make the same errors as humans would, but it could make a whole lot of other types of "errors" (at least from a human perspective). And who's to say the deep neural net algorithms written by humans aren't flawed to begin with?
Yes, it is.
> no organization, let alone a single group or a person, will achieve human-level AI alone
This is completely irrelevant. Done once, it can be replicated. Even worse, if we reach human-level AI, it can be done by the AIs themselves.
It is also anthropomorphising machines. They don't need to be malevolent. They just need not to care, which is much easier.
Should an ultra-intelligent machine decide to convert New York into a big solar power facility, it wouldn't necessarily care to move the humans out first.
I get the impression most experts in the field are trying to downplay this. Not because it's impossible, but because it hurts the image of AI.
As long as people are aware of the real possibility of AI working against human interests or ethics, we can ensure there are safeguards along the way.
But a hand wave "won't happen" to the general public is assuredly a PR move.
It would be perfectly OK if we were living in a peaceful world without any conflict, without nations that have to hold the banner of democracy and freedom, where there are no terrorist attacks, and so on.
Last time I checked, we're getting further and further away from that scenario. In fact, we SHOULD expect evil AI in the worst form possible and beyond; anything else is dangerously naive.
One of the most striking things about this piece is the difference between the claims of AI practitioners and pundits.
LeCun and Ng are making precise, and much more modest, claims about the future of AI, even if Ng is predicting a deep shift in the labor market. They are not treating strong AI as a given, unlike Bostrom and Nosek.
Bostrom's evocation of "value learning" -- "We would want the AI we build to ultimately share our values, so that it can work as an extension of our will... At the darkest macroscale, you have the possibility of people using this advance, this power over nature, this knowledge, in ways designed to harm and destroy others." -- is strangely naive.
The values of this planet's dominant form of primate have included achieving dominance over other primates through violence for thousands of years. Those are part of our human "values", which we see enacted every day in places like Syria.
Bostrom mentions the possibility of people using this advance to harm others. He is confusing the modes of his verbs. We are not in the realm of possibility, or even probability, but of actuality and fact. Various nations' militaries and intelligence communities have been exploring and implementing various forms of AI for decades. They have, effectively, been instrumentalizing AI to enact their values.
Bostrom's dream of coordinating political institutions to shape the future of AI must take into account their history of using this technology to achieve dominance. The likelihood that they will abandon that goal is low.
Reading him gives me the impression that he is deeply disconnected from our present conditions, which makes me suspicious of his ability to predict our long-term future.
I think both can be right. Ng and LeCun are talking about the real near future. Bostrom always came across as a speculative SF + philosophy kind of guy. Are there any specific critiques of his (and Yudkowsky/MIRI, Musk, etc.) arguments? I think the two claims are plausible:
1. AGI is inevitable, whether 50 years or 500 years or more from now. This is not too unlikely given that the brain is just an information-processing system. Emulating it is likely to happen sometime in the future.
2. Such an AGI will be all-powerful because it is not limited by human flaws. Trivial or not, we will have to program it with "thou shalt not kill" type values.
Our values do include dominating other humans. But they also include empathy, compassion, and morality.
Building a powerful AI without our values would be very bad. It wouldn't want to kill humans or dominate us. But it wouldn't care if we got hurt, either. And so it might kill us if we got in the way of its goals, or if it thought we might be a threat to it (we could make other AIs that could compete with it, after all).
So making an AI with human values - that is, morality and caring about the existence of humans - is really important. If you build an AI without morality, it would just be a sociopath.
I want a management-assisting AI. It would be neat to have it listen in on all meetings, identify stakeholders, remember all the details, and place them in a larger context, so you can ask it detailed questions. An AI can attend every meeting and remember every detail. Imagine intelligent documentation.
You're in luck - I'm working on this right now. Our team is starting off with a heavy focus on jargon-tolerant speech recognition, and moving into different forms of NLP to identify key takeaways.
I'd love to discuss features with you, shoot me an email!
[email protected]
OK, so it's an AI subject, and everybody went Terminator vs. The Matrix :p
I have a few thoughts on what's been said, hopefully not too controversial.
A self-governing system does not need to be intelligent to be dangerous. I think this is what scares people most.
We give automated systems the power to do more and more advanced and crucial tasks.
I think we will eventually reach a point where it might be "safer" to give a choice to an automated machine than to a person. Mind you, this machine could be something we already have today.
I don't think an AI that can compete with human behaviour can explode instantly out of a single creation. I think we're more likely to experience advances upon advances in the field, forming bits and pieces of the human mind.
I find it very unrealistic to think that a machine will simply come to life; I think this stems from our belief in a soul, or a spark of life given by a creator. Like most machines, it will evolve gradually until it reaches a point where it is relevant. I don't think anyone will even notice the change.
Also, there is this unfounded image of how an AI would be: rational, not prone to impulses or temptations, poetically machine-like, and non-human. The way we saw machines years and years ago. (That's movies for ya.)
Creating something that can learn from others will require it to empathise with others. I think it's only science fiction that an AI could be created with full knowledge of its own operations. Artificial intelligence is by essence heuristic; it would learn and adapt to its surroundings.
I think it would be a very unintelligent machine that tried to kill off the means of its own survival as an intelligence. Society is the root of intelligence: communication, language, etc.
My views may be a bit optimistic on the subject, but I never hear them spoken out loud.
I would wager that the "vocal minority" bias applies here. And mild, cautious opinions don't really make for exciting debate.
I believe more and more that some self-taught software tinkerer somewhere in the middle of nowhere will have the final idea about how machine learning should work, discovering some simple principles hiding in plain sight. Suddenly it will all make sense, and a hobby ML service connected to the internet will develop, through sheer learning from online resources (forums, ...), into the first strong AI. Probably unnoticed. And then it will replicate itself through insecure webservers or something like that.
Hinton hit on the great idea of using restricted Boltzmann machines to pre-train deep neural networks (networks with many hidden layers), and that one idea changed the field (I sat on a DARPA neural network panel in the 1980s and sold a commercial NN toolkit back then).
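For the curious, a toy sketch of the core training rule: one step of contrastive divergence (CD-1), with biases omitted for brevity. This is just the shape of the idea, not Hinton's actual code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 784, 128
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cd1_update(v0, W, lr=0.1):
        ph0 = sigmoid(v0 @ W)                     # positive phase
        h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample hidden units
        pv1 = sigmoid(h0 @ W.T)                   # reconstruct visibles
        ph1 = sigmoid(pv1 @ W)                    # negative phase
        # Nudge model statistics toward data statistics.
        return W + lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]

    v0 = (rng.random((64, n_visible)) < 0.5) * 1.0  # stand-in batch of binary data
    W = cd1_update(v0, W)

Greedily stacking RBMs trained this way, layer by layer, is what "pre-training" meant.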
That said, I agree that new ideas will likely further move the field along with huge and quick advances. Peter Norvig recently suggested that symbolic AI, but with more contextual information as you get with deep neural networks, may also make a comeback in the field.
We're still a long way from "strong AI". We need a few more ideas at least as good as deep learning. But it's a finite problem - biological brains work, DNA is about 4GB, and we have enough compute power in most data centers.
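Back-of-envelope on that DNA figure (assuming roughly 3.2 billion base pairs; the byte count depends entirely on the encoding):

    # Human genome as raw data, two common encodings:
    base_pairs = 3.2e9
    print(base_pairs / 1e9)          # ~3.2 GB at one byte per base
    print(base_pairs * 2 / 8 / 1e9)  # ~0.8 GB at two bits per base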
Right now we have enough technology to do a big fraction of what people do at work. That's the big economic problem.
General-purpose robots still seem to be a ways off. The next challenge there is handling of arbitrary objects, the last task done by hand in Amazon warehouses. Despite 30 years of work on the bin-picking problem, robots still suck at this in unstructured situations. Stocking a supermarket shelf, for example. Once that's solved, all the jobs that involve picking up something and putting it somewhere else gradually go away.
Rodney Brooks' Baxter was supposed to do this, but apparently doesn't do the hard cases. Amazon sponsored a contest for this, but all they got was a better vacuum picker. Work continues. This is a good YC-type problem.
Internet knows exactly what to do to take over. It simply has to remain more useful than anything else, as a means for avoiding entropy during transactions. Many necessary physical world transactions are reduced to few, in order to accomplish the same tasks.
Internet does not have to be conscious, by human measures, in order to take over the world. It simply has to compete against humanity in a continual positive feedback loop, wherein each iteration requires less human interaction for the same or more tasks. After enough iterations, Internet becomes powerful enough that the only way to gain a competitive advantage against others using Internet is to use deep learning to increase your leverage.
A few iterations later, deep learning has become a mainstay (think Cold War arms race, where each innovation gains a party leverage over the other, but only for a very short period), and is now the baseline. Many more tasks are achieved using Internet and Internet-connected physical-world devices[1]. These physical devices become integral parts of Internet's extended nervous system, while the deep learning systems running in our data centers remain at the center, helping Internet learn about all the things it experiences.
Continue down this path a ways...
1. e.g., https://www.wired.com/2015/05/worlds-first-self-driving-semi..., http://spectrum.ieee.org/cars-that-think/transportation/sens..., http://www.marketwatch.com/story/drone-delivery-is-already-h..., https://www.theguardian.com/environment/2016/feb/01/japanese...
Is this true? If anything, I'm often surprised by the rudimentary ways big things are sometimes run (e.g. someone making 3d barplots in excel). But in the grand scheme of things I have no idea, so I'm legitimately curious.
> Despite these astonishing advances, we are a long way from machines that are as intelligent as humans—or even rats. So far, we’ve seen only 5% of what AI can do.
I'd certainly love to see the math behind this estimation :)
I highly recommend the waitbutwhy post in the same vein but with more meat: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...
> We would want the AI we build to ultimately share our values, so that it can work as an extension of our will. It does not look promising to write down a long list of everything we care about. It looks more promising to leverage the AI’s own intelligence to learn about our values and what our preferences are.
This looks like a nice intro to a dystopian sci-fi movie.
There is a focus on artificial intelligence rather than intelligence augmentation because the former seems easier to accomplish.
I also think we will reach a limit when it comes to intelligence augmentation.
Artificial intelligence will never have a limit and it doesn't have all the evolutionary baggage we have.
An AI can be a rational agent. It doesn't have to fight impulses or temptations, control its attention, exercise emotional regulation, etc. It is not stuck in a body that limits it and puts constraints on its time.
For now, research on AI and IA go somewhat hand in hand. We still don't really understand what differentiates us from intelligent animals other than the ability to handle higher complexity.
AI researchers focused on replicating every brain module in the hope it will become intelligent are most likely to create a smart animal, but nothing comparable to a human.
Looking at our ancestors, they were able to create tools, make fire, communicate, etc. Hell, Neanderthals could interbreed with us.
Something happened in our brains between the age of Neanderthals and us: 99.5% genetic similarity, and if we could find what that 0.5% is, maybe we could focus on enhancing/replicating that instead of every brain module. People speculate it is creativity (divergent thinking), since art appeared in caves where there was none prior. The language gene was present in Neanderthals, and so was the ability to create tools and to cooperate in groups to hunt.
The fear of AI destroying everything is a genuine one. If we create something as smart as a bear, it still wouldn't be smart enough to compete against us in every arena, but like a bear, it could use its sheer power and speed to overwhelm us.
PS: I find the subject of Neanderthals fascinating; if anyone has a good recommendation on the evolution of intelligence or finding what that 0.5% is, please let me know.
Sometimes I think speculating about AI is similar to speculating about the Fermi Paradox, i.e. predictions about the unknown backed up with absolute certainty.
I have always thought the claim that all truckers are going to be unemployed was oversold. The average age of a semi-truck driver is in the low 50s. Most millennials don't want to do this job, so the robots will just replace baby boomers as they retire; there won't be the massive slaughter of unemployment that everyone fears.
The way I see it, the machines we build and the AI we create are extensions of "human". While it's popular to pitch man vs. machine as if they were two polar opposites, machine was built by human minds, using human designs, with human hands, for human purposes.
Just as clothing is so closely tied to us that we think of it as an extension of our bodies (the human way to deal with winter), I think machines are likewise an extension of our limbs and our minds.
Damnit, Motoko Kusanagi