
What’s Next for Artificial Intelligence

64 points | miraj | 9 years ago | wsj.com

63 comments

[+] rm999|9 years ago|reply
>Deep learning, modeled on the human brain, is infinitely more complex [than machine learning]. Unlike machine learning, deep learning can teach machines to ignore all but the important characteristics of a sound or image - a hierarchical view of the world that accounts for infinite variety. - Yann LeCun

I strongly disagree with a lot about this quote, even though it comes from a brilliant man I highly respect (my thesis research was inspired by some of his older work, and in my job we work with techniques he developed around deep learning). What I dislike is that it uses hype to make deep learning seem mystical; in actuality it's a natural extension of old techniques that clearly fall under "machine learning".

Deep learning is a neural network with 3 or more layers, instead of the 2-layer networks that were developed in the 1980s. People tried 3 layers back then, and they didn't work well. Yann LeCun and other researchers found cool ways to get 3+ layers to work in the 90s and again in the mid 2000s. More recently, researchers have just thrown a ton of data and computational power at them to get them to work. But fundamentally this was an extension of established techniques. This article that recently hit the front page actually breaks it down really well: https://blogs.nvidia.com/blog/2016/07/29/whats-difference-ar...
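To make the "just more layers" point concrete, here's a minimal sketch in plain NumPy (layer sizes and weights are arbitrary, made up for illustration): "going deep" is literally just repeating the same matrix-multiply-plus-nonlinearity building block more times.

```python
import numpy as np

def forward(x, weights):
    """Forward pass: each layer is a matrix multiply plus a nonlinearity.
    A 'deep' network just has more entries in `weights`."""
    for W in weights:
        x = np.maximum(0, x @ W)  # ReLU activation
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))  # one toy 8-dimensional input

# A "shallow" 2-layer net vs. a deeper 4-layer net: same building block, repeated.
shallow = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]
deep = [rng.normal(size=(8, 16)), rng.normal(size=(16, 16)),
        rng.normal(size=(16, 16)), rng.normal(size=(16, 4))]

print(forward(x, shallow).shape)  # (1, 4)
print(forward(x, deep).shape)     # (1, 4)
```

The hard parts that took until the 90s/2000s to solve weren't in this forward pass at all, but in getting gradients to propagate usefully through the extra layers during training.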

I think my main point here is that deep learning is quite accessible to people who are learning machine learning. It's great at solving some really complex problems (that can certainly resemble true intelligence), but is not the right tool for other problems.

[+] AnimalMuppet|9 years ago|reply
Viewed strictly as a neural network, how many layers deep is the human brain?
[+] mathgenius|9 years ago|reply
> Deep learning, ... is infinitely more complex

When I read this I interpret it to mean they have no idea how deep learning works. (More specifically: why it works as well as it does.)

And the part about being modeled on the human brain just seems to be a fortunate (?) coincidence.

[+] calycosa|9 years ago|reply
>We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create.

Maybe I'm being a bit pessimistic, but can AI really create jobs just by taking over old ones? Sure we could train some new "data analysts [and] trip optimizers", but in the end can we really mass replace low skilled blue collar jobs with higher skilled ones with the wave of a wand? In the period between when robots can automate the majority of low skill jobs and virtually all jobs, there is very likely to be some sort of significant turmoil as our economy undergoes a paradigm shift of sorts, and I don't think "just retrain workers" is that viable of a solution.

[+] JoeAltmaier|9 years ago|reply
Agreed. We used to wish for the day automation would take the burden of work from our shoulders. Now that it's upon us, all I hear is "how will we put everybody back to work?!" How about: we don't have to work like that any more. Ever again.
[+] rm_-rf_slash|9 years ago|reply
When I hear "retraining workers," I usually think of Charlie's dad (of "Charlie and the Chocolate Factory" fame) losing his job screwing in toothpaste caps and coming back in the end as a technician for the machines that took his job.

That is the wrong way to approach this new shift.

There will simply be too many menial jobs made obsolete for blue collar workers to step up into supervision/technician roles.

Instead, they will have to find new kinds of jobs. The kinds of jobs that cannot be replaced by robots without getting stuck in the Uncanny Valley. Jobs like personal trainers, yoga instructors, physical/massage therapists, tattoo artists, hairdressers, and so on. Unless a robot can 100% mimic human form and action, these jobs aren't going anywhere - certainly not overseas.

I'm calling it here and now: as jobs are lost too quickly for people to retrain (or even want to adopt an entirely new skill set), we will see a new push for the legalization of prostitution. If nothing else, you can always sell your body by the half-hour, and no robot could ever truly replicate the closeness of human touch.

[+] ThomPete|9 years ago|reply
That paradigm shift is happening as we speak.
[+] vonnik|9 years ago|reply
The problem with news articles like this is that they attempt to appeal to a general readership through grand promises and fear-mongering. The reporter's basic problem is: How can I make my audience care?

So you call up Nick Bostrom and he very reliably gives you a quote about the existential threat of a superintelligence, even though no one in the industry thinks we're anywhere close to that. (What's next for AI is not superintelligence...) And you force a great researcher like Andrew Ng to talk about job loss among truck drivers, because that's what will make it relevant to people outside AI.

We should be thinking about job loss and how the job market will change, but this type of article never gets past the "Oh No AI Will Destroy US" stage of thinking. But a lot of the questions raised by AI are actually relevant now, in an economy where AI hasn't even made a big impact. That is, they're not AI issues, but we're treating them as though they are. How should our societies treat and support the humans who have become unnecessary? They obviously will not all become data analysts and robot caretakers.

What's next for AI is better natural-language processing. Right now, chatbots are pretty dumb, but in the next few years, they'll get much better, and in more languages.

What's next for AI is the wider deployment of mature technology. Many problems such as image recognition have been solved, but developers and companies have not figured out how to deploy the solutions yet. We still have chokepoints in the number of data scientists who can tune and train models, and the number of engineers who can plug them into existing stacks. So AI will be felt directly, rather than just talked about.[0]

What's next for AI is an arms race. The major powers will escalate how they deploy AIs against AIs, embedding them in drones or creating adversarial data to slip through filters. Commercially, many smaller arms races will occur in different industries, as AI drives down the costs of interpreting data and allows rival organizations to compete on price.

What's next for AI is the combination of the flavor of the month, deep learning, with other extremely powerful algorithms like reinforcement learning and Monte Carlo Tree Search to create goal-oriented strategic decision-making agents.

[0] http://deeplearning4j.org/use_cases.html
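On that last combination: the core of Monte Carlo Tree Search is a simple selection rule (UCB1/UCT) that trades off exploiting moves that have paid off against exploring under-visited ones. A toy sketch, treating three hypothetical root moves as bandit arms with made-up win probabilities (the names `true_p` and `select_arm` are mine, and a real MCTS would expand a full tree rather than just the root):

```python
import math
import random

def ucb1(total_reward, visits, parent_visits, c=1.4):
    """UCT selection score: average reward plus an exploration bonus
    that shrinks as a child is visited more often."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_arm(stats):
    """Pick the child with the highest UCT score."""
    parent_visits = sum(v for _, v in stats) or 1
    scores = [ucb1(r, v, parent_visits) for r, v in stats]
    return scores.index(max(scores))

random.seed(0)
# Three hypothetical moves with hidden win probabilities the search must discover.
true_p = [0.2, 0.5, 0.8]
stats = [[0.0, 0] for _ in true_p]  # [total_reward, visits] per move

for _ in range(2000):
    i = select_arm(stats)
    reward = 1.0 if random.random() < true_p[i] else 0.0  # simulated rollout
    stats[i][0] += reward
    stats[i][1] += 1

best = max(range(len(stats)), key=lambda i: stats[i][1])
print(best)  # the most-visited move after 2000 simulated rollouts
```

In a full agent, the "simulated rollout" line is where a learned policy or value network would plug in, which is exactly the deep-learning-plus-search combination described above.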

[+] mathgenius|9 years ago|reply
Any thoughts on support vector machines & kernel methods? Is that stuff dead and buried, or what? (I've been out of the loop now for a while.)
[+] Animats|9 years ago|reply
New jobs created by AI: Oxford faculty member pontificating about values for AIs. Who says there isn't job creation.

My worry: AIs held only to the moral standards now expected of corporations. Optimize for shareholder value. We're close to this now with machine-learning assisted hedge funds.

[+] treehau5|9 years ago|reply
Ah the upcoming "automation utopia" will "free man from the chains of labour" to pursue their passions -- at least that's what they will tell us while SF firms endlessly drive towards automating more people's jobs away while they rake in the cash. My worry is the lengthy, inevitable in-between time of human suffering and corporate greed until something bursts and we actually pass 21st century laws like Basic income, vote in technology-competent politicians, etc.

It is my worry, as well.

[+] Animats|9 years ago|reply
There's smarter, but there's also faster. Historically, robots have been rather slow and clunky. That's over.

Industrial robots have become much faster. Here's Bosch's packaging robot from 2014, looking at small objects and putting them in order for packaging.[1] Then Fanuc built a faster one.[2] Faster CPUs allow using modern control theory that considers dynamics, and fast machine vision. The resulting machines are much faster than humans.

Those are production machines, able to run for long periods. Research robots are even faster.[3] That's from 2012. Progress continues.

[1] https://www.youtube.com/watch?v=BAF-ALWwlLw [2] https://www.youtube.com/watch?v=vtAEIKJLHGw [3] https://www.youtube.com/watch?v=U2sUvQ_HsU8

[+] kantian_ethics|9 years ago|reply
Some of my thoughts after reading this article:

Everyone seems to believe that achieving artificial general intelligence is inevitable. I'd argue that it's only inevitable if humanity survives long enough to make it happen. I'm not a pessimist, but the next 300-400 years will be the most difficult humanity has ever faced. In addition to climate change, population expansion, nuclear weapon proliferation, and naturally increasing inequality, humanity will face many more threats that haven't yet been perceived.

I believe building strong intelligence will optimistically take 3-4 centuries. To complete a "system that could successfully perform any intellectual task that a human being can," it is first necessary to understand what defines the human intellect and consciousness. Although there is much ethical debate on what defines consciousness, scientifically it is a product of experience, and can be emulated if three requirements are met:

- The system can accept every input the human body can.
- It can process every combination of stimuli in the same way a human would.
- It can respond to the stimuli in every way a human would.

For these requirements to be met by software, we must either:

- Acquire nearly full knowledge of the information-processing mechanisms of the human brain (and probably body), and figure out how to implement this in software.
- Build enough processing power to completely simulate the human brain/body, atom by atom.

Neither will be feasible for an extraordinary amount of time, and it's probably better to spend our time worrying about the current existential threats to humanity.

A much more relevant ethical problem for humanity is that of eugenics. Unlike AI, recent advances like CRISPR/Cas9 make it viable now, through modification of sperm-generating stem cells (not embryos), and like AI, it offers world-changing benefits (the eradication of diseases, lengthened life spans, increased knowledge and strength, etc.), while also providing the keys to modern humanity's destruction (designer babies, lack of diversity, separation of humans into castes, etc.).

Perhaps increases in intelligence driven by eugenics will even cause computer systems and AI to become obsolete.

[+] notadoc|9 years ago|reply
Let's ask Siri!

"OK I found this on the web for 'whats next for artful shell in tell gins'"

[+] Cortexia|9 years ago|reply
The fact is, we don't know how deep neural networks work - we simply know how they form. They are adaptive systems that learn to perform complex tasks.

This is scary because it means we don't "design" an A.I., we design an adaptive system and allow it to emerge. Nothing could be more dangerous in the long term.

[+] whoops1122|9 years ago|reply
I think the next step for AI is the Turing test, where a "machine" recognizes that it is talking to a human.
[+] Hydraulix989|9 years ago|reply
I don't know much, but I do know that if somebody were to tell me What's Next for AI, it's definitely not the WSJ.
[+] CuriouslyC|9 years ago|reply
Everyone has piled on the learning bandwagon as the path to "intelligence" but honestly creativity is just as important, and it is hardly even being addressed. Even worse, while function approximation (i.e. learning) is a fairly well defined problem, creativity is nebulous and ill-defined.
[+] vonnik|9 years ago|reply
That's not true, actually. There's a ton of work being done in computational creativity. AI is actually pretty good at sensing similarities between instances of data, and recombining various elements to create something new. There's been a ton of recent work using deep learning, including DeepDream.

https://psmag.com/rise-of-the-robot-artist-5aa0e6e1b361?gi=c...
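Even without deep learning, "recombining elements of existing data to create something new" is easy to demo. A toy order-1 Markov-chain text generator (the corpus and all names here are my own invented example; DeepDream and the systems in that article are of course far more sophisticated):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8):
    """Recombine fragments of the corpus into a new word sequence."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("deep learning creates new art and deep learning "
          "recombines old art into new forms").split()
chain = build_chain(corpus)
random.seed(1)
print(generate(chain, "deep"))  # a novel sentence stitched from corpus fragments
```

The output sequences never appear verbatim in the corpus, yet every transition does, which is the recombination point in miniature.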

[+] arca_vorago|9 years ago|reply
I think the way forward for AI will be in simulating human consciousness processes, including whatever computational limitations come with that. In this sense, I feel that the kind of AI that will break barriers first is going to be in a game, and not on a factory floor.
[+] aroman|9 years ago|reply
Can you elaborate a bit more on this? I'm fascinated by consciousness, and I've often wondered about how it arises in biological systems and how it might do so in artificial ones.

My basic intuition is that consciousness arises as a byproduct of any sufficiently interconnected (i.e. complex) network which processes input from the outside world. The problem with AI right now is bottlenecks of two kinds: we don't have computers powerful enough to support the level of complexity of the human brain (100bn neurons with an average of 7k synapses each), and we don't have good enough sensors for capturing the real world (hint: touch is a BIG one).
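Those figures make the scale gap easy to put numbers on. A back-of-envelope calculation (neuron and synapse counts from above; one 32-bit weight per synapse is my own big simplification):

```python
neurons = 100e9          # ~100 billion neurons
synapses_per = 7e3       # ~7k synapses each
bytes_per_synapse = 4    # assume one 32-bit weight per synapse (a big simplification)

total_synapses = neurons * synapses_per
total_bytes = total_synapses * bytes_per_synapse
print(f"{total_synapses:.1e} synapses")                 # 7.0e+14
print(f"{total_bytes / 1e12:.0f} TB just for weights")  # 2800 TB
```

And that's only storage for static weights, before accounting for dynamics, neuromodulation, or the bandwidth to update them in real time.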

What sort of game do you have in mind? I definitely think a game in which humans interact with an AI in an artificially-supported information-rich environment (say, the VR of 5-10 years from now) could help provide a suitable environment for "growing" such an AI. Consider the insane amount of stimulation and calibration human babies require for development! It takes years for the visual system to stabilize, for example.

[+] saskurambo|9 years ago|reply
Many think that consciousness has nothing to do with the brain. The brain can be considered the hardware and consciousness the software.
[+] samblr|9 years ago|reply
Neural networks are eating the world.
[+] angelbar|9 years ago|reply
Is this some kind of advertising? How can I read it outside of the paywall?
[+] arijun|9 years ago|reply
Hit the "web" link next to the link at the top of this page, and select the top result from there.