barnacs | 5 months ago

But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely for the purpose of mimicking human language use. It was the humans who produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".

Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".

chpatrick | 5 months ago

Humans are also trained on data made by humans.

> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".

That's creativity, which is a different question from thinking.

bluefirebrand | 5 months ago

> Humans are also trained on data made by humans

Humans invent new data; they observe things and create new data. That's where all the stuff the LLMs are trained on came from.

> That's creativity which is a different question from thinking

It's not really, though. The process is the same, or similar enough, don't you think?

barnacs | 5 months ago

I guess our definitions of "thinking" are just very different.

Yes, humans are also capable of learning in a similar fashion and of imitating, even extrapolating from, a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.

But no human would ever perform like that without trying to intuitively understand the motivations of the humans they learned from, and without naturally intermingling the performance with their own motivations.