item 35179558

dm33tri | 3 years ago

These comments are deceptive. Yes, this is how LLMs work, but that doesn't mean they only repeat things they have seen before. LLMs are capable of following instructions to construct new words in any language they know, words never seen before.

I've seen them be dumb at maths or real-world problems. But as large language models, they understand and speak languages fine, and even the mistakes they make look like the mistakes a human who is not a native speaker of the language would make.

We may as well say that when we speak, we are just predicting words we have trained on. I don't see how these models are worse than people in that regard.

The general knowledge and thinking of these models are surely limited. But seeing GPT-4 go from text-only input to text with images, I think it is very possible to break those barriers very soon.

oska | 3 years ago

Ok, since you called out the gp comment as 'deceptive', I in turn am going to call out your comment (and others like it) as delusional, and point to specific places in your comment that exhibit this state of delusion (about LLMs).

> they understand and speak languages fine

No, they neither 'understand' nor 'speak' languages. The first word here is the more delusional, they have no understanding of languages. They have simply generated a model of the language. And they do not 'speak' the language they have modeled; they generate text in that language. Speaking generally implies an active intelligence; there is no intelligence behind an LLM. There is simply a machine generating (wholly derivative) output text from input text.
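As a toy sketch of what "generating output text from input text" means mechanically, here is a hypothetical bigram model: vastly simpler than a real LLM, but the same predict-the-next-token loop, and by construction it can only emit continuations it has seen in its training text (all names and the corpus here are made up for illustration):

```python
from collections import defaultdict, Counter

def train_bigram(corpus: str):
    """Count, for each token, which tokens followed it in the training text."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start: str, length: int = 5) -> str:
    """Repeatedly predict the next token from the previous one."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # never saw this token followed by anything
        # greedy decoding: pick the most frequent continuation from training
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Real LLMs replace the count table with a neural network and predict over whole contexts rather than single tokens, but the generation loop has this shape.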

> We may as well say that when we speak, we are just predicting words we have trained on

This is the delusion, commonly repeated, that humans themselves are only LLMs. It is a dangerous delusion, in my view, and one that has no evidence behind it; it is only a supposition, and a sterile and nihilistic one at that.

> The general knowledge and thinking of these models are surely limited [...] I think it is very possible to break the barriers very soon

The limitations are fundamental to LLMs. LLMs have no general knowledge, and LLMs don't do any 'thinking'. Your understanding of what they are doing is in grave error, and that error is based on a personification of the machine: the delusion that because they generate 'natural' language they are operating similarly to us (false, and addressed above). They are never going to break those limits because they have never started to transcend them in the first place, nor can they. They don't and will never 'think' or 'reason'.