wppick | 4 months ago

> It has come as a shock to some AI researchers that a large neural net that predicts next words seems to produce a system with general intelligence

When I write prompts, I've stopped thinking of LLMs as just predicting the next word, and instead think of them as a logical model built up by combining the logic of all the text they've seen. I think of the LLM as knowing that cats don't lay eggs, so when I ask it to finish the sentence "cats lay ...", it won't generate the word "eggs", even though "eggs" probably follows "lay" frequently.

godelski | 4 months ago

> It won't generate the word eggs even though eggs probably comes after lay frequently

Even a simple N-gram model won't predict "eggs". You're oversimplifying to the point of misunderstanding.

Next-token prediction is still context-based: it depends not just on the previous token but on the previous N-1 tokens. With "cats" in the context, even a 3-gram (trigram) model should give you words like "down" rather than "eggs".
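
A minimal sketch of that conditioning, using a toy corpus (the sentences and counts are made up, just to illustrate):

    from collections import Counter, defaultdict

    # Train a trigram model: count which word follows each pair of words.
    corpus = "cats lay down . cats lay around . hens lay eggs . hens lay eggs .".split()
    counts = defaultdict(Counter)
    for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
        counts[(a, b)][c] += 1

    # The prediction is conditioned on the previous two tokens, not just "lay".
    print(counts[("cats", "lay")].most_common())  # [('down', 1), ('around', 1)]
    print(counts[("hens", "lay")].most_common())  # [('eggs', 2)]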

devmor | 4 months ago

No, your original understanding was the more correct one. There is absolutely zero logic to be found inside an LLM, except coincidentally.

What you are seeing is a semi-randomized prediction engine. It does not "know" things, it only shows you an approximation of what a completion of its system prompt and your prompt combined would look like, when extrapolated from its training corpus.
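
To sketch what "semi-randomized" means here: the model scores every possible next token, and one is drawn at random from the resulting distribution. A rough illustration of temperature sampling (the logits below are made up):

    import math
    import random

    def sample_next(logits, temperature=0.8):
        # Softmax over temperature-scaled logits; higher temperature
        # flattens the distribution and makes the draw more random.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one token index according to those probabilities.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Hypothetical scores for the candidates ["down", "around", "eggs"]:
    tokens = ["down", "around", "eggs"]
    print(tokens[sample_next([2.1, 1.3, -3.0])])  # almost never "eggs"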

What you've mistaken for a "logical model" is simply a large amount of repeated information. To show the difference between this and logic, you need only look at something like the "seahorse emoji" case.

Philpax | 4 months ago

No, their revised understanding is more accurate. The model has internal representations of concepts; the seahorse emoji fails because it uses those representations and stumbles: https://vgel.me/posts/seahorse/

nearbuy | 4 months ago

If anything, the seahorse emoji case is exactly the kind of thing you wouldn't expect if LLMs just repeated information from their training corpus. The model starts producing a weird dialogue that's completely unlike its training corpus, while trying to produce an emoji it has never seen in training. Why would it try to write an emoji that isn't in its training data? This is totally different from its normal response when asked to produce a non-existent emoji: normally, it just tells you the emoji doesn't exist.

So what is it repeating?

It's not enough to just point to an instance of an LLM producing weird or dumb output. You need to show how it fits your theory that they "just repeat information". This is like pointing to one of the millions of times a person has said something weird, dumb, or nonsensical and claiming it proves humans can't think and can only repeat information.