item 36167972


cosmolev | 2 years ago

There is a possibility that everything we output is merely a regurgitation of thousands of human answers, ideas, and thoughts we have encountered before. Including this comment of yours. And mine.


dvt | 2 years ago

There's this new trend of what I'll dub techno-nihilism, which is essentially a counterargument to the stochastic parrot argument. The former being: well what if WE are stochastic parrots, after all that's how we learn, right? Well yes, but actually no.

It's trivially false because ChatGPT was trained on something (in this case, Stack Overflow), which, in turn, was trained on something else (maybe a book), and so on. So knowledge, imagination, and genuine creativity must exist somewhere down that chain. Everything can't just be repeating what was learned prior ad infinitum, or we'd have nothing new. Ironically, even the development of large language models is an exercise in creativity.

cosmolev | 2 years ago

But there is also this idea that you can't judge the system while being a part of it. You simply can't see the whole picture.

pixl97 | 2 years ago

At the same time, I think you may be overvaluing new answers and undervaluing the importance of restating known answers in a way the person asking the question can understand.

For example, there is no intrinsic value in something new. If I take a new solution to a problem and lock it in a box, it has zero value. It is not improving anything.

Now, if I take a solution and present it to you in a manner that you can understand, that has inherent value to the end user.

By this analogy, saying that LLMs are useless because they only know what already exists is far too harsh a measure, because the vast volume of human output is rehashing what we already know.

endisneigh | 2 years ago

I hate this sort of retort since it’s fundamentally meaningless. All atoms that will ever exist were in the singularity and exploded during the big bang to encompass all of the universe, and so we are all just moving along. Ok, so what.

Ukv | 2 years ago

The point of the retort is precisely to expose the meaninglessness of "regurgitation" once it's extended to cover systems with some degree of generalization/extrapolation/discovery capability. The expectation set on AI seems to be that it break information theory: "it's not original, because it can only draw a horse (more accurately than a random guess) due to having trained on information about horses," and so on.

omnicognate | 2 years ago

A possibility that every idea necessary to put the James Webb Space Telescope in place and analyse the data it collects from the universe's earliest and most distant galaxies was already present thousands of years ago, and in fact must somehow have been built into the first humans? I don't think there is.