iib|1 month ago
I found Geoffrey Hinton's hypothesis about LLMs interesting in this regard: they have to compress world knowledge into a few billion parameters, much denser than the human brain, so they have to be very good at analogies in order to achieve that compression.
TeMPOraL|1 month ago
Analogies could then fall naturally out of this. It might really still be just the simple (yet profound) "King - Man + Woman = Queen" style vector math.
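That vector arithmetic can be sketched in a few lines. This is a toy illustration with made-up 4-d vectors (real embeddings like word2vec's are learned from corpora and have hundreds of dimensions), just to show the "nearest neighbor of king - man + woman" operation:

```python
import numpy as np

# Toy "embeddings" -- hand-picked values, not learned.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "man":   np.array([0.1, 0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "King - Man + Woman" should land nearest to "queen".
target = vecs["king"] - vecs["man"] + vecs["woman"]
nearest = max(vecs, key=lambda w: cosine(vecs[w], target))
print(nearest)  # -> queen
```

(In real analogy benchmarks the query words themselves are usually excluded from the candidate set; with these toy values "queen" wins either way.)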
bjt12345|1 month ago