top | item 28499202


vokep | 4 years ago

But it does mean mirrors might be humans/human-like, if humans are actually reducible to (self-aware) mirrors.


verygoodname | 4 years ago

Would you say that a "Markov chain"-type (e.g. Dissociated Press) language model is "self-aware" in any way?

If yes, then... how, exactly? It is basically an N x N matrix of values.
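To make the "N x N matrix" point concrete, here is a minimal sketch of a Dissociated Press-style word-level Markov chain. The transition counts below are conceptually that matrix (one row and column per word), just stored sparsely; the `train` and `sample` names are illustrative, not from any particular library:

```python
import random
from collections import defaultdict

def train(text):
    # Count word -> next-word transitions: the sparse "N x N matrix".
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, cur in zip(words, words[1:]):
        counts[prev][cur] += 1
    return counts

def sample(counts, start, length=10, seed=0):
    # Walk the chain, picking each next word with probability
    # proportional to how often it followed the previous one.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

counts = train("the cat sat on the mat the cat ran")
print(sample(counts, "the", length=6))
```

Whatever such a model emits, all it "knows" is a table of co-occurrence counts, which is the crux of the question above.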

Current language models (GPT et al.) are qualitatively nothing fancier than this: probabilistic models that encode regularities in language, and from which you can sample to get "plausible content".

If a bunch of values in a matrix is "self-aware", then I guess GPT can be seen as "self-aware"; if not, then it can't.

My problem is trying to imagine an N x N matrix being self-aware (like... what does that even mean in this context?).

Is GPT human-like? Sure... if you stretch the meaning of "human-like" enough (it produces content that is similar to content produced by a human). Is a human GPT-like? That's harder to argue (and I don't see how your argument would support it).

magusdei | 4 years ago

If you know what a Markov chain is then you must also know that modern language models are nothing like Markov chains. Just as an example, a Markov chain can't do causal reasoning or correctly solve unseen programming puzzles, the way GPT-3 can.

As for self-awareness, your brain is an N x N matrix in the same sense as an ANN, so surely it must be possible for one to be self-aware? Not claiming that GPT-3 is, of course.
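The "brain as matrix" comparison can be made concrete: an ANN's entire state is arrays of numbers, and a forward pass is just matrix multiplication plus a nonlinearity. A minimal sketch (the layer sizes here are arbitrary, chosen only for illustration):

```python
import numpy as np

# A tiny two-layer network. Everything the model "is" lives in W1 and W2:
# plain arrays of floats, directly analogous to the N x N matrix above.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))  # input -> hidden weights
W2 = rng.standard_normal((8, 3))  # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)  # hidden activations (matrix multiply + nonlinearity)
    return h @ W2        # output values (another matrix multiply)

x = rng.standard_normal(4)
print(forward(x).shape)  # (3,)
```

So the question isn't whether a matrix of values can compute, only whether any arrangement of such values could amount to self-awareness.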