bpizzi | 4 years ago
Assuming the 3-year-old already knows what a lion looks like, you could point at 'things with stripes' and 'things without stripes'.
I think that a model that can already recognize lions and stripes separately should be able to tag a tiger's picture as a 'lion with stripes', no?
mannykannot | 4 years ago
Generalizing from what is formally insufficient information is something that humans are quite good at (though obviously not infallibly).
jonplackett | 4 years ago
I could also say “a Cheetah is like a lion but it’s smaller and has spots and runs a lot faster. And a leopard is like a lion but smaller and can climb trees and has spots.”
I could probably start with a house cat and describe an elephant if I wanted to and I’ll bet the kid would work it out.
The ability to take apart and reassemble knowledge is what I’m talking about here, not just add two simple bits of information together.
justinpombrio | 4 years ago
The OpenAI website is unresponsive at the moment, so I can't actually demonstrate this, but you could totally tell GPT-3 that, and it would then make basic inferences. For example, saying "four" when asked how many legs a cheetah has, or guessing a smaller weight for a cheetah than a lion when asked to guess a specific weight for both. Not perfectly, but a lot better than chance, for the basic inferences.
(You wouldn't actually tell it "a Cheetah is like a lion but..." because it already knows what a Cheetah is. Instead you'd say "a Whargib is like a lion but ...", and ask it basic questions about Whargibs.)
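A rough sketch of the kind of prompt being described, using the made-up "Whargib" from the comment above (this only builds the prompt string; actually querying GPT-3 would require the OpenAI API and a key, and the model's answer is not shown here):

```python
# Define a made-up animal in terms of a known one, then ask a
# basic inference question the model was never directly told.
prompt = (
    "A Whargib is like a lion, but it is smaller, has spots, "
    "and runs a lot faster.\n"
    "Q: How many legs does a Whargib have?\n"
    "A:"
)
print(prompt)
```

The point is that the correct completion ("four") is never stated in the prompt; the model has to combine the definition with what it already knows about lions.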
ShamelessC | 4 years ago
https://openai.com/blog/multimodal-neurons/
> The concepts, therefore, form a simple algebra that behaves similarly to a linear probe.
Because the loss encourages concepts to be mapped to (roughly) linearly independent directions, you can literally do addition and subtraction with concepts, and it sort of works.
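A minimal sketch of that concept algebra, reusing the thread's own lion/stripes example. The 3-D vectors below are invented by hand so the arithmetic is easy to verify; real embeddings are learned and have hundreds of dimensions, where this only holds approximately:

```python
import numpy as np

# Toy hand-written "embeddings" -- purely illustrative. Models like
# CLIP or word2vec learn such directions from data.
# Dimensions: [feline-ness, striped-ness, equine-ness]
emb = {
    "lion":    np.array([1.0, 0.0, 0.0]),
    "stripes": np.array([0.0, 1.0, 0.0]),
    "horse":   np.array([0.0, 0.0, 1.0]),
    "tiger":   np.array([1.0, 1.0, 0.0]),
    "zebra":   np.array([0.0, 1.0, 1.0]),
}

def nearest(vec):
    """Word whose embedding has the highest cosine similarity to vec."""
    cos = lambda a, b: (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(emb, key=lambda w: cos(emb[w], vec))

print(nearest(emb["lion"] + emb["stripes"]))   # tiger
print(nearest(emb["horse"] + emb["stripes"]))  # zebra
```

Here "lion + stripes" lands nearest to "tiger" by construction; in a trained model the same query would merely land *close* to the right concept, which is the "sort of works" part.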