
buo | 2 years ago

I think it's interesting that human minds generally (though not always!) improve when exposed to the output of other human minds. It seems to be the opposite for current LLMs.


diggan|2 years ago

Maybe it's less about "Human VS Robot" and more about exposure to "Original thoughts VS mass-produced average thoughts".

I don't think a human mind would improve in an echo chamber with no new information. I think the reason the human mind improves is that we're exposed to new, original, and/or different thoughts that we hadn't considered or come across before.

Meanwhile, an LLM will just regurgitate the most likely token based on the previous ones, so there isn't any originality there; hence output from one LLM cannot improve another LLM. There is nothing new to be learned, basically.
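To make that loop concrete, here is a minimal Python sketch of autoregressive generation, using a made-up bigram table instead of a real model (a real LLM conditions on the whole context, not just the previous token, but the generate-append-repeat shape is the same):

    import random

    # invented toy probabilities, standing in for a trained model
    bigram_probs = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
        "sat": {"on": 1.0},
        "ran": {"to": 1.0},
        "on":  {"the": 1.0},
        "to":  {"the": 1.0},
    }

    def generate(start, n_tokens):
        out = [start]
        for _ in range(n_tokens):
            dist = bigram_probs[out[-1]]          # look only at the last token
            tokens, weights = zip(*dist.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return " ".join(out)

    print(generate("the", 10))

Every token is just a weighted-dice roll off the previous one; nothing outside the table can ever be produced.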

bluefirebrand|2 years ago

> I don't think a human mind would be improving if they're in a echo-chamber with no new information

If this were true of humans, we would never have made it this far.

Humans are very capable of looking around themselves, thinking "I can do better than this", and then trying to come up with ways to do so.

LLMs are not.

ausbah|2 years ago

humans haven't had the same all-encompassing set of "training experiences" that LLMs have. we each have a subset of knowledge that may overlap with someone else's, but is largely unique. so when we interact with each other we can learn new things, but LLMs training on each other strike me as a group of experienced but antiquated professors developing their own set of out-of-touch ideas

ben_w|2 years ago

Reproductive analogy:

A sequence of AI models trained on each other's output gets mutations, which might help or hurt, but if there's one dominant model at any given time then it's like asexual reproduction with only one living descendant in each generation (and all the competing models being failures to reproduce): a photocopy of a photocopy of a photocopy. This also seems to me to be the incorrect model that Intelligent Design proponents mistakenly think is how evolution is supposed to work.

A huge number of competing models that never rise to dominance would be more like plants spreading pollen in the wind.

A huge number of AIs that are each smart enough to decide what to include in their own training sets would be more like animal reproduction. The fittest memes survive.

Memetic mode collapse still happens in individual AIs (it still happens in humans; we're not magic), but it manifests as certain AIs ceasing to be useful and others replacing them economically.

A few mega-minds is a memetic monoculture, fragile in all the same ways as a biological monoculture.
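The "photocopy of a photocopy" failure mode is easy to simulate. A toy sketch, with Gaussian fitting standing in for training (invented sample sizes, no claim about real LLM pipelines): each generation fits itself only to the previous generation's output, and finite-sample error compounds.

    import random, statistics

    mu, sigma = 0.0, 1.0        # generation 0: the "real" distribution
    for gen in range(1, 31):
        # train generation N on 20 samples of generation N-1's output
        data = [random.gauss(mu, sigma) for _ in range(20)]
        mu, sigma = statistics.mean(data), statistics.stdev(data)
        print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

Run it a few times: the mean wanders and sigma tends to shrink over the generations, exactly the no-new-information lineage described above.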

nonrandomstring|2 years ago

A different biological analogy occurred to me which I've mentioned before in a security context. It isn't model degeneration but the amplification of invisible nasties that don't become a problem until way down the line.

Natural examples are prion diseases such as bovine spongiform encephalopathy [0] or sheep scrapie. This really becomes a problem in systems with a strong, fast positive feedback loop and some selector. In the case of cattle it was feeding rendered bonemeal from dead cattle back to livestock. Prions survive high-temperature treatment, so the feedback process selects for and concentrates them (toy sketch below).

To really feel the horror of this, read Ken Thompson's "Reflections on Trusting Trust" [1] and ponder the ways that a trojan can be replicated iteratively (like a worm) but undetectably.

It isn't loss functions we should worry about. It's gain functions.
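A toy version of that selector-plus-feedback dynamic, with invented numbers (nothing here is biologically calibrated): a treatment step destroys 99% of an ordinary contaminant per cycle but none of a resistant one, while the recycling loop multiplies whatever survives.

    ordinary, resistant = 1e-6, 1e-6      # same tiny starting contamination
    for cycle in range(1, 21):
        ordinary *= 0.01                  # the selector: heat destroys 99% of it...
        resistant *= 1.0                  # ...but leaves the resistant agent intact
        ordinary *= 3                     # the feedback: recycling amplifies
        resistant *= 3                    #   whatever made it through
        print(f"cycle {cycle:2d}: ordinary={ordinary:.2e}  resistant={resistant:.2e}")

The ordinary contaminant dies out (net x0.03 per cycle) while the resistant one explodes (x3 per cycle); the loop itself is the gain function.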

[0] https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopat...

[1] https://tebibyte.media/blog/reflections-on-trusting-trust/

NortySpock|2 years ago

I do get to choose what I read, though.

mewpmewp2|2 years ago

Have you ever heard of the telephone game? That's what's going on here. Or imagine an original story of something that really happened: if it passes through a chain of 100 people, how much do you think the result will resemble the original?
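A quick toy version in Python, with an invented 5% per-word garble rate and a three-word filler vocabulary:

    import random

    def retell(words, error_rate=0.05, filler=("thing", "stuff", "whatever")):
        # each reteller garbles a small fraction of the words
        return [random.choice(filler) if random.random() < error_rate else w
                for w in words]

    story = "a fox jumped the fence at dawn and stole three hens".split()
    for _ in range(100):
        story = retell(story)
    print(" ".join(story))   # usually almost nothing of the original survives

At 5% loss per link, a word's chance of surviving 100 retellings is about 0.95^100, roughly 0.6%.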

KolmogorovComp|2 years ago

A more appropriate analogy would be isolating someone from the rest of the world so that, from then on, they can only read their own writings.

While some people can thrive in that kind of environment (think of Kant, for example), many would go mad.

analog31|2 years ago

This might be my biases speaking, but I have a hunch that human-generated content still has more potential to poison our minds than AI-generated content does.

JohnFen|2 years ago

It's almost as if LLMs and human minds operate entirely differently from each other.

BobaFloutist|2 years ago

I mean, it makes sense that (even impressively functional) statistical approximations would degrade when applied recursively to their own output.

If anything, I think this just demonstrates yet again that these aren't actually analogous to what humans think of as "minds", even if they can replicate more of the output than makes us comfortable.

orbital-decay|2 years ago

Humans exhibit very similar behavior. Prolonged sensory deprivation can drive a single individual insane. Fully isolated/monolithic/connected communities easily become detached from reality and are susceptible to mass psychosis. Etc etc etc. Humans need some minimum amount of external data to keep them in check as well.