sthlm|9 years ago
I can show this to someone and say:
1. The software can recognize a feather, as long as it looks similar to what it thinks a feather looks like.
2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.
This is good, because most examples focus on point #1 and -- if enough marketing is involved -- gloss over point #2.
People read news articles like "X can recognize cats in a picture with Y certainty!" and are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.
This does a much better job by letting people write (or draw) their own test cases and figure out the limits intuitively.
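sthlm's two points can be illustrated with a deliberately tiny sketch (entirely hypothetical data and classes, nothing like a real image model): a nearest-centroid classifier on 2-D points. Like any similarity-based classifier, it must assign *some* known label to every input, even one that resembles nothing it was trained on.

```python
# Illustrative toy only: a nearest-centroid "classifier" on 2-D points.
# Real image models are far more complex, but share the core behavior:
# every input is mapped to whichever known class it most resembles.
import math

# Hypothetical training data: two classes of 2-D "drawings".
TRAINING = {
    "feather": [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "cat":     [(5.0, 5.0), (5.1, 4.8), (4.9, 5.2)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(point):
    """Return the label whose training centroid is nearest."""
    return min(CENTROIDS, key=lambda label: math.dist(point, CENTROIDS[label]))

print(classify((1.1, 1.0)))     # near the feather examples -> "feather"
print(classify((100.0, 90.0)))  # nothing like either class, yet it must
                                # still answer -> "cat" (nearest centroid)
```

The second call is the point #2 failure mode: the classifier has no notion of "I've never seen anything like this," only of relative similarity to what it already knows.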
jawns|9 years ago
I was prompted to draw a hurricane. I drew something that looked like the typical hurricane doodle used on news reports.
The software didn't recognize it.
When the game was over and I was able to look at all of the doodles that were used to train the software to recognize a hurricane ... the majority of them instead looked like tornadoes!
So maybe we should more precisely say:
1. The software can recognize a feather, as long as it looks similar to what the humans who contributed its training set think a feather looks like.
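The hurricane/tornado anecdote is a training-label bias: the model learns the contributors' notion of a class, not the dictionary one. A minimal sketch, with made-up features and data, using 1-nearest-neighbor classification:

```python
# Toy sketch (hypothetical data): a 1-nearest-neighbor classifier whose
# "hurricane" training examples were actually drawn as tornado funnels.
# Features are invented: (spiral-ness, funnel-ness), each in [0, 1].
import math

# What contributors drew when asked for a "hurricane": funnel shapes.
TRAINING = [
    ((0.10, 0.90), "hurricane"),  # looks like a tornado funnel
    ((0.20, 0.80), "hurricane"),
    ((0.15, 0.85), "hurricane"),
    ((0.90, 0.10), "sun"),        # unrelated class for contrast
]

def classify(point):
    """Label the input with its nearest training example's label."""
    nearest = min(TRAINING, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

news_style_hurricane = (0.90, 0.15)  # spiral symbol, no funnel
print(classify(news_style_hurricane))  # "sun", not "hurricane":
# the model learned the contributors' tornado-like notion instead.
```

A doodle matching the news-report hurricane symbol lands nearer the unrelated class than the mislabeled "hurricane" examples, which is exactly the failure jawns describes.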
brazzledazzle|9 years ago
I'm also ashamed to admit I drew some less than ideal stuff due to forgetting details on things and then panicking because of the timer. Like the spots on a panda's face for some odd reason.
Hopefully my drawings were treated as outliers.
triangleman|9 years ago
zaphar|9 years ago
1. The person can recognize a feather as long as it looks similar to what the other people who contributed to its learning think a feather looks like.
unknown|9 years ago
[deleted]
StephenConnell|9 years ago
stcredzero|9 years ago
Idiocracy was prophetic -- except it missed the aspect that "Idiocracy" would first manifest on the Internet.
j2kun|9 years ago
https://twitter.com/OdaRygh/status/798872670221856768
SamBam|9 years ago
On the one hand, the network should eventually learn to classify high heels as shoes.
On the other, when these classification systems actually get used, they're always at some arbitrary point in their training, so you can't just wait for "all the biases to go away."
vacri|9 years ago
TeMPOraL|9 years ago
josefx|9 years ago
AWildDHHAppears|9 years ago
[deleted]
pearjuice|9 years ago
nopinsight|9 years ago
For recognizing relatively simple entities, are there advantages humans still have over neural nets (assuming the same scope of knowledge)?
teekert|9 years ago
eslaught|9 years ago
https://rocknrollnerd.github.io/assets/article_images/2015-0...
The software does:
https://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.h...
Sure, you can fool a human. But AI makes mistakes that would be embarrassing if a human made them. It's hard to say from anecdotes like this how big that gap is, but it's there.
Jugurtha|9 years ago
I think we do. We see a building we've never seen before and we know it's a building because it has certain features that we use to classify it as a building. The examples aren't scarce.
I also think a good indicator of us doing it is the use of "-y", "-ish", and "sort of".
As for sthlm's point 2:
>2. The software can't recognize a feather if it's never seen a feather like that. It's not a sentient being.
This is Asimo in 2009:
https://youtu.be/6rqO5eiP7_k?t=5m24s
vacri|9 years ago
As for advantages over neural nets, one of the primary ones is that humans can recognise things from unusual angles much more easily. When I tried QuickDraw and doodled things from non-stereotyped angles (like a three-quarter view of a car rather than the usual 2D side view), it had no idea.
The dalmatian optical illusion[1] is another example of human ability to pick out patterns and assign them to belong to certain objects. Neural nets have different abilities, and are sometimes better at picking out different sorts of patterns than humans.
[1] http://cdn.theatlantic.com/assets/media/img/posts/2014/05/Pe...
tree_of_item|9 years ago
Why did this word "sentient" sneak in to your comment? I don't see what "sentience" has to do with what you just described; it's just a more sophisticated form of pattern matching.
"See, it can't do this! It's not self-aware!" is almost never the correct answer, because whatever thing it is you want to do will probably be solved in the future with more of the same techniques. Just about the only thing "sentience" or self-awareness is good for is an entity's private experience, which you wouldn't ever be able to see anyway.
tim333|9 years ago
>People read news articles like "X can recognize cats...
They may assume sentience when it's not there.
kiliantics|9 years ago
vanderZwan|9 years ago
http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.ht...
macspoofing|9 years ago
Like human brains?
>are quick to assume that this "AI" can make sense of a picture and understand it, when all it does is apply certain methods for a certain use case.
Like human brains?
mdorazio|9 years ago
dr_zoidberg|9 years ago
mkishi|9 years ago
crimsonalucard|9 years ago
Any neural net, artificial or not, can only recognize things that look similar to what it thinks they should look like.
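One mechanical reason for this (a hedged illustration, not a claim about any particular system): a softmax output layer always normalizes its raw scores into a probability distribution over the classes it was trained on, so even an input resembling nothing in training is assigned *some* label. The label set and scores below are invented.

```python
# Illustrative sketch: softmax always yields a distribution over the
# known classes, so every input gets a "most similar" label.
import math

CLASSES = ["feather", "cat", "hurricane"]  # hypothetical label set

def softmax(logits):
    """Numerically stable softmax: exponentiate shifted scores, normalize."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for an unrecognizable scribble: all weak,
# yet softmax still turns them into a proper probability distribution.
logits = [0.2, 0.1, -0.3]
probs = softmax(logits)
prediction = CLASSES[probs.index(max(probs))]
print(prediction)  # "feather" -- the least-dissimilar class wins
```

There is no "none of the above" unless one is explicitly trained in; recognition here is only ever relative similarity to the known classes.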