item 12965918

sthlm | 9 years ago

This is the best "visualization" / "explanation" of the possibilities and limits of AI that I've seen.

I can show this to someone and say:

1. The software can recognize a feather, as long as it looks similar to what it thinks a feather looks like.

2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.

This is good, because most examples focus on point #1 and -- if enough marketing is involved -- don't go enough into point #2.

People read news articles like "X can recognize cats in a picture with Y certainty!" and are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.

This does a much better job by letting people write (or draw) their own test cases and figure out the limits intuitively.
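The two points above can be sketched with a hypothetical toy "recognizer" (this is not how QuickDraw actually works; the features, classes, and threshold below are all made up for illustration). A nearest-centroid matcher accepts inputs that resemble the average of its training examples and rejects everything else, even a real feather drawn in an unfamiliar style:

```python
import math

def centroid(examples):
    """Mean feature vector of one class's training examples."""
    n = len(examples)
    return [sum(col) / n for col in zip(*examples)]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pretend each doodle is reduced to a made-up 3-number feature vector.
classes = {
    "feather": centroid([[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.85, 0.15, 0.15]]),
    "leaf":    centroid([[0.2, 0.9, 0.1], [0.1, 0.8, 0.2]]),
}

def recognize(x, threshold=0.5):
    """Closest class if it's close enough, otherwise no match (None)."""
    name, d = min(((n, dist(x, c)) for n, c in classes.items()), key=lambda t: t[1])
    return name if d < threshold else None

typical_feather = [0.85, 0.12, 0.18]  # resembles the training doodles
unusual_feather = [0.1, 0.1, 0.9]     # a real feather, drawn very differently

print(recognize(typical_feather))  # "feather" -- point 1
print(recognize(unusual_feather))  # None -- point 2
```

The point of the sketch is that "recognition" here is nothing but distance to previously seen examples; there's no concept of what a feather *is*.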

jawns|9 years ago

> 1. The software can recognize a feather, as long as it looks similar to what it thinks a feather looks like.

I was prompted to draw a hurricane. I drew something that looked like the typical hurricane doodle used on news reports.

The software didn't recognize it.

When the game was over and I was able to look at all of the doodles that were used to train the software to recognize a hurricane ... the majority of them instead looked like tornadoes!

So maybe we should more precisely say:

1. The software can recognize a feather, as long as it looks similar to what the humans who contributed its training set think a feather looks like.
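That refinement can be sketched in a hypothetical toy example (features and numbers invented for illustration): if the crowd's "hurricane" doodles mostly depict tornadoes, the learned "hurricane" prototype drifts toward the tornado shape, and a textbook hurricane doodle stops matching.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Made-up features: (spiral-ness, funnel-ness)
mislabeled_tornadoes = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]  # the majority
true_hurricanes = [(0.9, 0.1)]                                  # the minority

training = mislabeled_tornadoes + true_hurricanes
prototype = tuple(sum(c) / len(training) for c in zip(*training))

my_doodle = (0.85, 0.15)  # the classic news-report hurricane spiral

# The doodle sits far from the prototype the crowd's labels produced,
# even though it's very close to the one genuine hurricane in the set.
print(dist(my_doodle, prototype))
print(dist(my_doodle, true_hurricanes[0]))
```

The classifier isn't "wrong" by its own lights; it faithfully learned what the labelers drew, not what a hurricane looks like.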

brazzledazzle|9 years ago

My hurricane was just terrible. I ended up with a scribbled mess because it came up in the first set or two, I didn't really have a plan, and I drew components of a hurricane as I remembered them.

I'm also ashamed to admit I drew some less-than-ideal stuff after forgetting details and then panicking because of the timer -- like, for some odd reason, the spots on a panda's face.

Hopefully my drawings were treated as outliers.

triangleman|9 years ago

Apparently most players of this game didn't see the "carrier" part in "aircraft carrier" and just drew airplanes. Probably because of the time constraint.

zaphar|9 years ago

Which is actually a pretty big win. After all you could also say this:

1. The person can recognize a feather as long as it looks similar to what the other people who contributed to its learning think a feather looks like.

StephenConnell|9 years ago

I was asked to draw "brush". I drew a hair brush, but the software was trained to see a brush as a bunch of circles or trees.

stcredzero|9 years ago

> When the game was over and I was able to look at all of the doodles that were used to train the software to recognize a hurricane ... the majority of them instead looked like tornadoes!

Idiocracy was prophetic -- except it missed the aspect that "Idiocracy" would first manifest on the Internet.

j2kun|9 years ago

It exhibits gender disparities very nicely too.

https://twitter.com/OdaRygh/status/798872670221856768

SamBam|9 years ago

Yes, an absolutely classic example of implicit biases in training sets.

On the one hand, the network should eventually learn to classify high heels as shoes.

On the other, when these classification systems actually get used, they're always at some arbitrary point in their training, so you can't just wait for "all the biases to go away."

vacri|9 years ago

Erm... high heels are not the only kind of shoes women wear. They're not even the most common kind of shoes women wear. Pointing to this as a 'gender disparity experience' is showing your own bias. Yes, high heels are shoes and it should learn to recognise them, but most women don't actually wear them most of the time.

TeMPOraL|9 years ago

Makes sense to me. The training data set focuses on generic, gender-neutral shoe examples instead of highly gender-specific ones.

josefx|9 years ago

There's their problem: they had Al Bundy train the AI. How else do you get from shoe to whale in only three pictures, with one involving food?

pearjuice|9 years ago

Can we please keep gender identities discussion from Hacker News?

nopinsight|9 years ago

Humans usually can't do your 2. either. In some cases, people may be able to recognize things based on descriptions alone, but those are typically simple combinations of known entities.

For recognizing relatively simple entities, are there advantages humans still have over neural nets (assuming the same scope of knowledge)?

teekert|9 years ago

My 3-year-old can definitely recognize a cat in an abstract drawing that is unlike any cat he has seen before.

Jugurtha|9 years ago

>Humans usually can't do your 2.

I think we do. We see a building we've never seen before and we know it's a building because it has certain features that we use to classify it as a building. The examples aren't scarce.

I also think a good indicator of us doing it is our use of suffixes like "-y" and "-ish", and of phrases like "sort of".

As for sthlm's point 2:

>2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.

This is Asimo in 2009:

https://youtu.be/6rqO5eiP7_k?t=5m24s

vacri|9 years ago

You're just wrong on this one. Humans can recognise a lot of things that aren't in the form they're used to; this has seen a lot of research in psychology.

As for advantages over neural nets, one of the primary ones is that humans can recognise things from unusual angles much more easily. When I tried QuickDraw and doodled things from non-stereotyped angles (like a three-quarter view of a car rather than the usual 2D side view), it had no idea.

The dalmatian optical illusion[1] is another example of the human ability to pick out patterns and assign them to particular objects. Neural nets have different abilities, and are sometimes better than humans at picking out certain sorts of patterns.

[1] http://cdn.theatlantic.com/assets/media/img/posts/2014/05/Pe...

tree_of_item|9 years ago

> 2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.

Why did this word "sentient" sneak in to your comment? I don't see what "sentience" has to do with what you just described; it's just a more sophisticated form of pattern matching.

"See, it can't do this! It's not self-aware!" is almost never the correct answer, because whatever thing it is you want to do will probably be solved in the future with more of the same techniques. Just about the only thing "sentience" or self-awareness is good for is an entity's private experience, which you wouldn't ever be able to see anyway.

tim333|9 years ago

I think sthlm means that people, as in:

>People read news articles like "X can recognize cats...

may assume sentience when it's not there.

kiliantics|9 years ago

"Doesn't look like anything to me"

macspoofing|9 years ago

>The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being

Like humans brains?

>are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.

Like human brains?

mdorazio|9 years ago

No, not at all. If you only showed it a bunch of pickup trucks in various colors, it would be really good at identifying pickup trucks. But if you then showed it a Prius, or a motorcycle, it would have no idea that it was looking at a vehicle. A human brain wouldn't have much trouble with that, though, because it associates more information with the vehicle idea than just statistical similarity to previously seen shapes, and can extrapolate without having direct previous experience with the object being seen.
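A hypothetical toy sketch of that point (features and numbers invented for illustration): a similarity-based classifier trained only on pickup trucks has no "vehicle" concept at all, so a Prius or a motorcycle just scores as "not much like anything I've seen."

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Made-up features: (length, has-cargo-bed, two-wheeled)
pickup_examples = [(0.9, 1.0, 0.0), (0.8, 0.9, 0.0), (0.85, 1.0, 0.0)]
prototype = tuple(sum(c) / len(pickup_examples) for c in zip(*pickup_examples))

def similarity(x):
    """1.0 means identical to the average pickup; lower means less alike."""
    return 1.0 - dist(x, prototype)

prius = (0.6, 0.0, 0.0)       # a vehicle, but no cargo bed
motorcycle = (0.3, 0.0, 1.0)  # a vehicle, but nothing like the training set

print(similarity(prius))       # low
print(similarity(motorcycle))  # even lower
```

There's no axis in this feature space along which "vehicle-ness" could be read off; the human notion of a shared category simply isn't represented.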

dr_zoidberg|9 years ago

Not exactly: if you've never seen a particular kind of feather before, you may not recognize it at first sight, but you'll most certainly sit down, examine it, and eventually conclude that it's a feather -- the neural networks we're using aren't yet capable of that kind of analysis.

mkishi|9 years ago

Drawing your triangle upside down is enough to hit the limits!

crimsonalucard|9 years ago

#2 applies to humans as well. For example, if I show a human something that looks like and has all the properties of a car, they will think I'm showing them a car, even if the thing is actually called a feather.

Any neural net, artificial or not, can only recognize things as long as they look similar to what it thinks they should look like.