monkeycantype | 3 months ago
before i go any further, let me first reference The Dude:
- "this is just like, my opinion man."
I’m down with the idea that LLMs have been especially successful because they ‘piggyback on language’ – our tool and protocol for structuring, compressing, and serialising thought. That made it possible to train LLMs on compressed patterns of actual thought and have them produce new language that sure looks like thought, without any direct experience of the concepts being manipulated. And if they do it well enough, we do the decompression, fleshing out the text with our own experiential context.
But I suspect there are parts of my mind that also deal with concepts in an abstract way, far from any experiential context of the concept, just like the deeper layers of a neural network. Just as the sparse matrix of an LLM encodes connections between concepts without explicitly encoding edges, I think there will be multiple ways to look at the structure of an AI model and at our own anatomy such that they are a squint and a transformation function away from interesting overlaps. That will lead to a kind of ‘god of the gaps’ scenario, in which we conceptually carve out pieces of our minds – ‘oh, the visual cortex is just an X’ – and face deep questions about what we are.
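To make the ‘connections without explicit edges’ point concrete, here’s a toy sketch (my own illustration, not anything from an actual LLM – the vectors are made up): concepts live as rows of an embedding matrix, no adjacency list or edge table exists anywhere, yet relatedness between any pair can be recovered from the geometry alone.

```python
import numpy as np

# Hypothetical toy embedding matrix: each row is a concept vector.
# No edges are stored anywhere; relations are implicit in the geometry.
concepts = ["king", "queen", "apple", "pear"]
E = np.array([
    [0.9, 0.8, 0.1],   # king
    [0.8, 0.9, 0.1],   # queen
    [0.1, 0.1, 0.9],   # apple
    [0.2, 0.1, 0.8],   # pear
])

def sim(a: str, b: str) -> float:
    """Cosine similarity acts as an implicit 'edge weight' between concepts."""
    u, v = E[concepts.index(a)], E[concepts.index(b)]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(sim("king", "queen"))  # high: related concepts
print(sim("king", "apple"))  # low: unrelated concepts
```

The graph of concept relations is never written down; it falls out of the matrix when you query it, which is the sense in which the structure is encoded without the edges being explicit.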