
iib | 1 month ago

I found Geoffrey Hinton's hypothesis about LLMs interesting in this regard. They have to compress the world's knowledge into a few billion parameters, a much denser encoding than the human brain's, so they have to be very good at analogies in order to achieve that compression.


TeMPOraL | 1 month ago

I feel this has the causality reversed. I'd say they are good at analogies because they have to compress well, which they do by encoding relationships in a stupidly high-dimensional space.

Analogies could then fall naturally out of this. It might really still just be the simple (yet profound) "King - Man + Woman = Queen" style vector math.
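[A minimal sketch of that vector arithmetic, using invented toy 4-dimensional embeddings rather than real learned ones (word2vec vectors are typically ~300-dimensional and trained on large corpora); the vocabulary and values here are made up purely for illustration:]

```python
import numpy as np

# Toy "embeddings", invented for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "man":   np.array([0.1, 0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
}

def nearest(vec, vocab, exclude=()):
    """Return the vocab word whose embedding is most cosine-similar to vec."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], vec))

# King - Man + Woman lands nearest to Queen in this toy space.
result = nearest(emb["king"] - emb["man"] + emb["woman"],
                 emb, exclude=("king", "man", "woman"))
print(result)  # queen
```

[In real embedding spaces the analogy vector only lands *near* the target word, which is why the nearest-neighbour lookup (excluding the query words) is the standard way the "King - Man + Woman = Queen" result is demonstrated.]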

bjt12345 | 1 month ago

That's essentially the manifold hypothesis of machine learning, right?