top | item 23724912

sutterbomb | 5 years ago

What makes you confident that you aren't overestimating the importance of the fact that we "experience anything"?

api|5 years ago

When I say "I have a laptop in front of me," I am describing an understanding of something that is being experienced (sensed). If a Markov text generator outputs this text, it's just rearranging bits. I don't see any evidence that GPT-3 is doing anything more than rearranging bits in a much more elaborate way than a Markov text generator. The results kind of dazzle us, but being dazzled doesn't indicate anything in particular. I see something akin to a textual kaleidoscope toy, a generator of novel text that is syntactically valid and that produces odd cognitive sensations when read.
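For concreteness, here's roughly what a Markov text generator does, as a toy bigram sketch (my own illustration, not anything from the thread): it can only emit word-to-word transitions it has already seen in its training text, which is the "rearranging bits" being described.

```python
import random

def train(text):
    # Record, for each word, the words that followed it in the corpus.
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=10):
    # Random-walk the transition table; every adjacent pair in the
    # output occurred somewhere in the training text.
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = train("I have a laptop in front of me and I have a book")
print(generate(chain, "I"))
```

GPT-3's transition table is vastly larger and conditioned on far more context, but the open question in this thread is whether that difference is one of degree or of kind.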

I maybe should have said sensed, not experienced, since experience also leads into much deeper philosophical discussions around the nature of mind and consciousness. I wasn't really going there, since I don't see anything in GPT-3 or any similar system that merits going there.

I also don't see any evidence that it is drawing any new conclusions or constructing any novel thoughts about anything. It's regurgitating results similar to pre-existing textual examples, rearranging existing ideas in new ways. If you don't think genuinely new ideas exist then this may be compelling, but if that's the case I have to ask: where did all the existing ideas come from? Some creative mechanism must exist or nothing would exist, including this text.

The fact that the output often resembles pop Internet discourse says more about the mindlessness of "meme-think" than the GPT-3 model.

As for real-world uses, social media spam and mass propaganda seem like the most obvious ones. This thing seems like it would make a fantastic automated "meme warrior": train it on a corpus of QAnon and set it to work "pilling" people.

the8472|5 years ago

> When I say "I have a laptop in front of me," I am describing an understanding of something that is being experienced (sensed).

I would ascribe that to two factors: a) you have a more immediate, interactive interface to the physical world than GPT does, which is limited to a textual proxy, and b) GPT is naturally not a human-level intelligence; it is still of very limited complexity, so its understanding is more akin to that of a parrot trying to understand its owner's speech patterns. It can infer a tiny bit of semantics and mimic the rest. The ratio is a continuum.

> As for real world uses, social media spam and mass propaganda seems like the most obvious one.

Completing fragments into full sentences might be useful, maybe.

ianhorn|5 years ago

Take active learning versus ordinary passive learning. With active learning you can often learn much faster. That's a kind of "experience." Out-of-distribution cases, where a model fails to generalize, could be handled much more efficiently if the model could ask "hey, what's f(x = something really weird and specific that would never come up in an entire internet's worth of training data)?" Experience isn't passive, and that makes a whole world of difference. And that's not even touching on the difficulty of "tell me all about elephants" versus "let me interact with an elephant and see it and touch it and physically study it."
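The speedup from being able to ask is easy to show with a toy problem (my own sketch, with made-up names, not from the comment): locating an unknown decision threshold on [0, 1]. A learner that can query any x it likes binary-searches to precision eps in about log2(1/eps) queries, while a passive learner fed random samples needs on the order of 1/eps labels for the same precision.

```python
def active_learn(oracle, eps=1e-3):
    """Find the threshold by querying exactly the points we choose."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if oracle(mid):   # ask "what's f(x)?" for a specific, chosen x
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2, queries

# Hypothetical ground truth: labels flip at 0.7.
est, n = active_learn(lambda x: x >= 0.7)
# est lands within 1e-3 of 0.7 after ~10 targeted queries; a passive
# learner sampling x uniformly at random would need ~1000 labels to
# pin the boundary down this tightly.
```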