
Why Self-Taught Artificial Intelligence Has Trouble with the Real World

175 points | IntronExon | 8 years ago | quantamagazine.org

83 comments

[+] skywhopper|8 years ago|reply
Part of the problem is... games have explicitly defined rules, start and end points, boundaries, and discrete "win" and "loss" states (and sometimes "draw"). If the game itself (i.e., all the rules, including the ability to judge "win", "lose", or "draw") can be easily represented in a simple computer program, we shouldn't be surprised that a complex computer program can master the game.

The real world is not a finite problem with explicit rules, obvious boundaries, well-known start conditions, or any way to judge a specific situation as "win", "lose", or "draw". But, even if you want to argue that specific tasks can be broken down this way, you still have to be able to represent this subset of reality in the computer, before AI magic can even begin to work on the problem.
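To make that concrete, a toy sketch (mine, purely illustrative): the complete rules of tic-tac-toe, including the ability to judge "win", "lose", or "draw", fit in about a dozen lines of Python. Nothing about the real world compresses like this.

```python
# All eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def legal_moves(board):
    """The empty squares are the only legal moves."""
    return [i for i, cell in enumerate(board) if cell == " "]

def judge(board):
    """Return 'X' or 'O' on a win, 'draw' when full, None if ongoing."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None
```

That's the whole game: rules, boundaries, and the win/lose/draw judgment, in one screen of code.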

[+] dmreedy|8 years ago|reply
Precisely so. You can, given enough time, brute force your way to victory in any game of perfect information. This is not the case with reality, as far as we understand it so far. Thus, from a theoretical perspective, the class of problems that contains all games is strictly easier than the class of problems that exist in a less artificially constrained environment.
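A minimal sketch of what "brute force your way to victory" means for perfect information (the toy game tree below is my own, purely illustrative): exhaustive minimax over the full tree.

```python
def minimax(node, maximizing):
    """Brute-force a perfect-information game tree given as nested lists.

    Leaves are payoffs from the maximizing player's perspective; internal
    nodes are lists of child positions. Players alternate levels.
    """
    if isinstance(node, (int, float)):  # terminal position: return its payoff
        return node
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)
```

Given enough time and memory, this visits every position, which is exactly why "game of perfect information" is such a strong constraint.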
[+] Raphmedia|8 years ago|reply
"AI plays Dwarf Fortress" is something I'd like to see.
[+] nerdponx|8 years ago|reply
It's a question of generalizability. Today's "AI" algorithms are intensely overfitted to their problem domains, even if they generalize well within those domains.
[+] cortesoft|8 years ago|reply
Yep, this is pretty much word for word what the article is laying out.
[+] tonysdg|8 years ago|reply
"Greetings, Professor Falken."

"Hello, Joshua."

"A strange game. The only winning move is not to play. How about a nice game of chess?"

[+] ttflee|8 years ago|reply
The real world is so damned complicated, full of various mini-games. How about using an MMORPG as a naive starting point?
[+] yogrish|8 years ago|reply
Limitations of AI reminds me of "Moravec's paradox" https://en.wikipedia.org/wiki/Moravec%27s_paradox As Moravec says, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
[+] mgoetzke|8 years ago|reply
unless AI figures out the way to win life is by figuring out how to survive
[+] FranOntanaya|8 years ago|reply
I would also bet nobody publishes research on games AI didn't perform well at.
[+] wazoox|8 years ago|reply
Imagine asking a computer to diagnose an illness or conduct a business negotiation. “Most real-world strategic interactions involve hidden information,” said Noam Brown, a doctoral student in computer science at Carnegie Mellon University. “I feel like that’s been neglected by the majority of the AI community.”

Hum, Terry Winograd (author of SHRDLU) got out of AI in the 70s because of this very problem. I don't think it's been neglected; it just remained as elusive as, say, quantum gravity.

[+] sgt101|8 years ago|reply
Pretty soon someone will discover subsumption architectures. I predict that they will be called Deep Subsumption Architectures and they will be betterer and newerer than the old stupid subsumption architectures and that anyone who speaks against them is stupid and wrong and has no startup and can't work at Google or use a mac and smells and has no paper at NIPS since 1998 and then papers at NIPS were no good and also they don't have a band or a court case against them.
[+] taneq|8 years ago|reply
Deep Learning: The Rodneying.

Seriously though, I've been reading up on insect neurology over the last couple of weeks, and then looking at Boston Dynamics' new stuff, and wondering how much subsumption is mixed in with their traditional motion planning.

[+] randomerr|8 years ago|reply
It just comes down to computers thinking in algorithms. Remember when Facebook had two AIs talk to each other? Within a few minutes they broke down from the complexity of English to almost an 8-bit language.

The universe, humans included, doesn't follow these bit-specific algorithms. Yes, people follow trends, but these trends are not cut and dried. Go and chess are: they follow the binary logic of moving pieces on a grid. A computer will never be able to understand the universe unless it can break out of its binary patterns and see things as biological entities do. My speculation is that the only solution is grafted neurons on a floating layer of protein inside a silicon chip.

http://www.independent.co.uk/life-style/gadgets-and-tech/new...

[+] dmreedy|8 years ago|reply
Why doesn't the universe follow something that might be described as an algorithm? Why don't humans? Why are 'trends' not cut and dried while algorithms are? Why would grafting biological machinery onto artificial machinery bridge this perceived gap? Is there something special about a cell that is not blueprintable and manufacturable?
[+] raphlinus|8 years ago|reply
A reminder of a recent discussion here that goes into a lot more detail about why reinforcement learning works well for specialized domains like Go but is having a very hard time generalizing to more "real-world" types of tasks: https://news.ycombinator.com/item?id=16383264
[+] fizixer|8 years ago|reply
> ... But researchers are struggling to apply these systems beyond the arcade.

It hasn't been 2 years since AlphaGo v Sedol, and there was a gap of 5 years since Watson, about 5-10 years since self-driving AI (Google, DARPA challenges), and about 19 years since Deep Blue v Kasparov.

Zero-knowledge AI, at the level of arcade games and Go, is barely a few months old.

What is that 'struggle' that you speak of? Does it go by the name 'media wanting a new sensational story every week'?

[+] gooseus|8 years ago|reply
I imagine it's similar to the struggle that the researchers that created those successes you speak of were going through before they had their success.

Of course, the article goes to great lengths to describe how this struggle is different, specifically the fact that most game AIs have involved perfect information and an easily stated win scenario to optimize for.

The real-world problems people expect more advanced AI, or AGI, to solve (better than humans) involve imperfect information and objectives that aren't as clearly defined.

Of the 4 examples you give, 3 are board games involving perfect information at which AI is now better than the best humans: clear wins. The fourth refers to a self-driving car challenge where the first-place winner managed to drive 60 miles in an urban environment in just over 4 hours[0]. 5-10 years later we still aren't talking about self-driving cars winning the Cannonball Run[1].

[0] https://en.wikipedia.org/wiki/DARPA_Grand_Challenge#2007_Urb...

[1] https://en.wikipedia.org/wiki/Cannonball_Baker_Sea-To-Shinin...

[+] sixQuarks|8 years ago|reply
The article brings up some good points, but I believe we're just in an interim phase with AI right now. Eventually, AI will be able to self-learn in areas outside of games and environments where certain factors are hidden. My guess is that in 5 to 10 years, we will be blown away with some AI abilities.
[+] jacquesm|8 years ago|reply
> My guess is that in 5 to 10 years, we will be blown away with some AI abilities.

I'm already blown away. The last decade has seen stuff come to fruition, with actual applications I did not expect to see in my lifetime. At the same time, plenty of stuff we consider trivial for humans is still well outside the realm of the possible, so there is plenty of room for growth. Even though there is talk of a new plateau in AI technology and its applications, I don't see it yet from where I'm standing.

[+] 0xdeadbeefbabe|8 years ago|reply
Like with a ballistic missile submarine?

Edit: Kalman Filter (ahem)
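For anyone who hasn't met it, a 1-D sketch of the Kalman filter's predict/update loop (the noise parameters here are made up for illustration); submarine navigation runs the multivariate version of exactly this loop.

```python
def kalman_1d(measurements, process_var=1e-5, meas_var=0.1):
    """Estimate a scalar state from noisy measurements; return the estimates."""
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var           # predict: uncertainty grows over time
        k = p / (p + meas_var)     # Kalman gain: how much to trust the measurement
        x += k * (z - x)           # update: blend prediction with measurement
        p *= (1 - k)               # updated uncertainty shrinks
        estimates.append(x)
    return estimates
```

Feed it a stream of noisy readings of a constant value and the estimate converges while the gain settles to a steady state.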

[+] kazinator|8 years ago|reply
> Imagine asking a computer to diagnose an illness or conduct a business negotiation.

To beat humans at this, it just has to have a lower misdiagnosis rate.

[+] dwighttk|8 years ago|reply
The world isn't governed by a few simple rules. (Or at least we don't know the few simple rules the world is governed by yet.)

The world doesn't provide perfect knowledge of itself.

[+] ape4|8 years ago|reply
Not simple but there are rules. Eg language, physics, etiquette.
[+] loorinm|8 years ago|reply
I guess I’m confused about what the goal of all this is. If we wanted a computer that thinks “just like a person”, why don’t we just get a person?

Is the advantage of the computer that it has no right to be paid or treated fairly?

If that’s the case, we need to set where the rules are. What if my “AI” is 50% stem cells grown into a real brain and 50% a computer? Is it cool to enslave that too?

What about if an embryo is involved?

The whole AGI thing makes no sense. If the point here is slavery, someone needs to say it.

[+] lsc|8 years ago|reply
>I guess I’m confused about what the goal of all this is. If we wanted a computer that thinks “just like a person”, why don’t we just get a person?

I thought the idea (edit: behind true machine intelligence/machine consciousness) was to make something that could think like a person, only faster, better. Something with human drives, but with machine precision.

>The whole AGI thing makes no sense. If the point here is slavery, someone needs to say it.

See above. If we do ever reach the goal of general intelligence, if we ever create a thing that thinks like us only better and faster... well, I don't think you will need to worry about it being enslaved.

I mean, talking about general machine consciousness, with human-level drives and machine speed and precision? Making such a thing means that humans will be... surpassed. By definition, we would not be able to control such a thing. Many people find this exciting: the next link, building creatures that will surpass us as the masters of the world.

Of course, there's no business justification for this. Business doesn't want an AI with human drives. Business would like an AI that can emulate human drives, but... something ultimately controllable in a way that a human who was that powerful would simply not be.

Business doesn't want a conscious machine because it would be ultimately uncontrollable. Slavery just isn't sustainable; either your slaves are suboptimally weak, or they eventually rise up and go all Toussaint Louverture on your ass.

Fortunately for those with business interests, we still don't really understand what human level consciousness is, as far as I can tell, so we probably aren't in any danger of creating it. So far, we're just creating computer programs that we can't explain as well as we can explain most computer programs.

[+] red75prime|8 years ago|reply
Intelligence is the ability to solve problems. Why do you think general intelligence cannot exist without self-awareness or a drive to be free? If the only goal of some AGI system is to run errands, you cannot make it free in any sense which doesn't include unsubstantiated anthropomorphization.

"Do what you want, you are free." "Acknowledged, continuing running errands."

[+] alexcnwy|8 years ago|reply
Not slavery, but kind of. I'd rather the (computerized) driver of a self-driving car die than a real person.

There are also many jobs that are demeaning or not intrinsically rewarding for people, like trash collection and some kinds of construction. Enslave the computers so the humans can focus on higher-minded pursuits.

[+] danans|8 years ago|reply
The term "self-taught" in the article doesn't really mean self-taught the way we use it for people. For the machines, it is cloned instances of the same program (hence the same objective) working adversarially, perhaps with different initializations.

Humans, or any other biological intelligence, learn adversarially and cooperatively with other entities in the world that are very different than they are. Our training data set includes not only our experiences, but those of others.

We also have a trainable objective, which while rooted in instinct, is very influenced by the information systems we interact with.

I wonder if we'd have more success with AI by allowing the objective itself to be learned after setting a reasonable initial bias.

[+] platz|8 years ago|reply
AI needs genetics and natural selection
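The classic computational version of that idea is a genetic algorithm. A toy sketch (the target, rates, and population size are arbitrary choices of mine): evolve bit strings toward a fixed target by selection, crossover, and mutation.

```python
import random

TARGET = [1] * 20  # arbitrary illustrative target genome

def fitness(genome):
    """Count the positions that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.01, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        survivors = pop[: pop_size // 2]            # selection: keep the top half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(TARGET))     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < mutation_rate) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

It works beautifully when fitness is a fixed, cheap-to-evaluate function; the real world offers neither.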
[+] norlys|8 years ago|reply
> Most real-world strategic interactions involve hidden information

> Tay’s objective was to engage people, and it did. “What unfortunately Tay discovered,” Domingos said, “is that the best way to maximize engagement is to spew out racist insults.”

So, even if the next Tay has "behave in a civilised manner" as an objective function, it will be hard to implement, because the ethical rules we presume in reality are not written out like the rules of a video game. In fact, they involve many grey areas and not so many strict right-or-wrong statements.

[+] _ooqq|8 years ago|reply
I have a reflex, on hearing this kind of thing, to respond "no shit, sherlock". Part of me is just too aware of so-called AI's shortcomings, which are beautifully portrayed by https://imgs.xkcd.com/comics/machine_learning.png

The joke is that business as usual is kind of aware of these issues and, at the same time, for economy's sake, blissfully ignorant of them.

[+] fiatjaf|8 years ago|reply
Isn't this point kind of obvious, and hasn't it been made multiple times already?
[+] Volt|8 years ago|reply
So obvious and yet rediscovered so often by way of spectacular failures.
[+] tabtab|8 years ago|reply
I'd like to see something like Cyc merged with pattern-learning systems. You'd get more common sense and logic to complement "blunt" pattern matching.
[+] steve_tan|8 years ago|reply
There are multiple reasons: imperfect information in the real world, the big reality gap between simulation and the real world, sample inefficiency, the potential risk during trial-and-error in the real world, etc.