verygoodname | 4 years ago

Humans may even "be conceptually mirrors" (whatever that may mean): this still doesn't make mirrors humans (conceptually or otherwise).

Current language models are much closer to "Dissociated Press" (i.e. an equation that tells you which words are more likely, given the context) than to "human thought", and the fact that humans learn by copying does not really change this.
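For concreteness, here's a minimal "Dissociated Press"-style sketch in Python: build a bigram table from a corpus, then sample a likely next word given the current one. The toy corpus and the bigram choice are just for illustration.

    import random
    from collections import defaultdict

    # Build a bigram table: for each word, record which words follow it.
    corpus = "the cat sat on the mat the cat ran".split()
    table = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        table[prev].append(nxt)

    # Sample: repeatedly pick a plausible next word given the current one.
    word = "the"
    out = [word]
    for _ in range(8):
        followers = table.get(word)
        if not followers:  # dead end: word never seen mid-corpus
            break
        word = random.choice(followers)
        out.append(word)
    print(" ".join(out))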

vokep | 4 years ago

But it does mean mirrors might be humans/human-like, if humans are actually reducible to (self-aware) mirrors.

verygoodname | 4 years ago

Would you say that a "Markov chain"-type (e.g. Dissociated Press) language model is "self-aware" in any way?

If yes, then... how, exactly? It is basically an N x N matrix of values.
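To make the "N x N matrix of values" picture concrete: a row-stochastic transition matrix where entry (i, j) is P(next word = j | current word = i), sampled with numpy. The vocabulary and probabilities below are made up for illustration.

    import numpy as np

    vocab = ["the", "cat", "sat", "mat"]   # N = 4
    # Row i gives P(next word | current word i); each row sums to 1.
    P = np.array([
        [0.0, 0.6, 0.0, 0.4],   # after "the"
        [0.1, 0.0, 0.9, 0.0],   # after "cat"
        [0.8, 0.0, 0.0, 0.2],   # after "sat"
        [0.7, 0.1, 0.1, 0.1],   # after "mat"
    ])

    rng = np.random.default_rng(0)
    i = 0                        # start at "the"
    words = [vocab[i]]
    for _ in range(6):
        i = rng.choice(len(vocab), p=P[i])
        words.append(vocab[i])
    print(" ".join(words))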

Current language models (GPT et al.) are qualitatively nothing fancier than this: probabilistic models that encode regularities in language and from which you can sample to get "plausible content".

If a bunch of values in a matrix is "self-aware", then I guess GPT can be seen as "self-aware"; if not, then it can't.

My problem is trying to imagine an N x N matrix being self-aware (like... what does that even mean in this context?).

Is GPT human-like? Sure... if you stretch the meaning of "human-like" enough (it produces content that is similar to content produced by a human). Is a human GPT-like? That's harder to argue (and I don't see how your argument would support it).