top | item 29580605


bpizzi | 4 years ago

> I can say something like ‘a tiger is just a lion with stripes’ to a 3 year old and they now ‘understand’ what a tiger is almost as well as if they saw a picture of one. They could definitely identify one from a picture now.

Assuming the 3-year-old already knows what a lion looks like, and can point at 'things with stripes' and 'things without stripes'.

I think that a model that can already recognize lions and stripes separately should be able to tag a tiger's picture as a 'lion with stripes', no?


mannykannot | 4 years ago

Not necessarily, if the stripes prevent tigers from scoring highly on the lion measure.

Generalizing from what is formally insufficient information is something that humans are quite good at (though obviously not infallibly).

jonplackett | 4 years ago

Maybe… but this is just one very easy example and also using something very obvious and visual.

I could also say “a Cheetah is like a lion but it’s smaller and has spots and runs a lot faster. And a leopard is like a lion but smaller and can climb trees and has spots.”

I could probably start with a house cat and describe an elephant if I wanted to and I’ll bet the kid would work it out.

The ability to take apart and reassemble knowledge is what I'm talking about here, not just adding two simple bits of information together.

justinpombrio | 4 years ago

> I could also say “a Cheetah is like a lion but it’s smaller and has spots and runs a lot faster. And a leopard is like a lion but smaller and can climb trees and has spots.”

The OpenAI website is unresponsive at the moment, so I can't actually demonstrate this, but you could totally tell GPT-3 that, and it would then make basic inferences. For example, saying "four" when asked how many legs a cheetah has, or guessing a smaller weight for a cheetah than a lion when asked to guess a specific weight for both. Not perfectly, but a lot better than chance, for the basic inferences.

(You wouldn't actually tell it "a Cheetah is like a lion but..." because it already knows what a Cheetah is. Instead you'd say "a Whargib is like a lion but ...", and ask it basic questions about Whargibs.)

ShamelessC | 4 years ago

The solutions neural networks learn are believed to be rather elegant, to a degree. It is perhaps unscientific to say that the neural network is always decomposing and re-composing tokens in a _generic_ and intelligent way, but it's becoming increasingly obvious that it probably is, particularly in the case of the "self-attention" architectures.

https://openai.com/blog/multimodal-neurons/

> The concepts, therefore, form a simple algebra that behaves similarly to a linear probe.

Because the loss encourages words to be mapped as linearly independent vectors, you can literally do addition/subtraction with concepts and it sort of works.
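The addition/subtraction idea can be sketched with a toy example. The vectors and vocabulary below are hand-built for illustration (real embeddings like word2vec, GloVe, or CLIP's are learned, high-dimensional, and only approximately linear), but the mechanics of analogy arithmetic via nearest-neighbor lookup are the same:

```python
import numpy as np

# Hypothetical hand-built embeddings (NOT real learned vectors): the
# dimensions loosely encode [royalty, maleness, felineness, stripes].
emb = {
    "king":    np.array([1.0,  1.0, 0.0, 0.0]),
    "man":     np.array([0.0,  1.0, 0.0, 0.0]),
    "woman":   np.array([0.0, -1.0, 0.0, 0.0]),
    "queen":   np.array([1.0, -1.0, 0.0, 0.0]),
    "lion":    np.array([0.0,  0.0, 1.0, 0.0]),
    "stripes": np.array([0.0,  0.0, 0.0, 1.0]),
    "tiger":   np.array([0.0,  0.0, 1.0, 1.0]),
}

def nearest(vec, exclude=()):
    """Return the vocabulary word whose embedding has the highest
    cosine similarity to `vec`, skipping any words in `exclude`."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# "king - man + woman" lands nearest "queen"
print(nearest(emb["king"] - emb["man"] + emb["woman"], exclude={"king"}))
# "lion + stripes" lands nearest "tiger" -- the thread's example
print(nearest(emb["lion"] + emb["stripes"], exclude={"lion", "stripes"}))
```

In a real learned space the arithmetic is noisier, which is why it only "sort of works": the result vector is rarely an exact match, just closer to the right word than to most others.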