verygoodname | 4 years ago
Current language models are much closer to "Dissociated Press" (i.e. an equation that tells you which words are more likely, given the context) than to "human thought", and the fact that humans learn by copying does not really change this.
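To make the "which words are more likely, given the context" idea concrete, here's a minimal Dissociated-Press-style sketch: count which word follows which in a corpus, then sample the next word proportionally to those counts. The toy corpus and function names are made up for illustration; real models condition on far more context than one previous word.

```python
import random
from collections import defaultdict

def train_bigrams(words):
    """Count word -> next-word frequencies."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Sample a chain: each next word is drawn with probability
    proportional to how often it followed the current word."""
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Everything GPT adds on top is a vastly better estimate of those conditional probabilities, not a different kind of object.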
vokep | 4 years ago
verygoodname | 4 years ago
If yes, then... how, exactly? It is basically an N x N matrix of values.
Current language models (GPT et al.) are qualitatively nothing fancier than this: probabilistic models that encode regularities in language and from which you can sample to get "plausible content".
If a bunch of values in a matrix is "self-aware", then I guess GPT can be seen as "self-aware"; if not, then it can't.
My problem is trying to imagine an N x N matrix being self-aware (like... what does that even mean in this context?).
Is GPT human-like? Sure... if you stretch the meaning of "human-like" enough (it produces content that is similar to content produced by a human). Is a human GPT-like? That's harder to argue (and I don't see how your argument would support it).