anon373839 | 1 month ago

It isn’t a criticism; it’s a description of what the technology is.

In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before. The mechanics of language can work that way at times: we select common phrasings because we know they work grammatically and are understood by others, and it’s easy. But we do our thinking in a pre-language space and then search for the words that express our thoughts.

I think kids in school ought to be made to use small, primitive LLMs so they can form an accurate mental model of what the tech does. Big frontier models do exactly the same thing, only more convincingly.

minimaltom | 1 month ago

> In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before

Do we have science that demonstrates humans don't autoregressively emit words? (Genuinely curious / uninformed).

From the outset, it's not obvious that auto-regression through the state space of actions (i.e. what LLMs do when yeeting tokens) is what actually distinguishes them from humans. Though I suppose we can distinguish LLMs from models like diffusion/HRM/TRM, which explicitly refine their output rather than commit to a choice and then run `continue;`.
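
For concreteness, a toy sketch of that commit-then-continue loop (everything here is invented for illustration: the six-word vocabulary is arbitrary and toy_model is a stand-in for a real trained network):

    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def toy_model(tokens):
        # Stand-in for an LLM: assign a score to every vocabulary word.
        return [random.random() for _ in VOCAB]

    def decode(prompt, max_new=10):
        tokens = list(prompt)
        for _ in range(max_new):
            scores = toy_model(tokens)
            # Commit to the highest-scoring token, then continue the loop.
            # Nothing already emitted is ever revised.
            nxt = VOCAB[max(range(len(VOCAB)), key=lambda i: scores[i])]
            tokens.append(nxt)
        return " ".join(tokens)

    print(decode(["the", "cat"]))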

Turskarama | 1 month ago

Have you ever had a concept you wanted to express, known that there was a word for it, but struggled to remember what the word was? For that to be possible, human thought and speech must work fundamentally differently from what an LLM does. The concept, the "thought", is separate from the word.

jhbadger | 1 month ago

Fine, but that would at least teach them that LLMs are doing a lot more than "predicting the next word": students could also be shown that a Markov model can predict the next word in about 10 lines of simple Python, using no neural nets or any other AI/ML technology.
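
As a rough illustration (the corpus and names here are made up, and a real demo would train on far more text), a word-level Markov chain in roughly that many lines:

    import random
    from collections import defaultdict

    def train(text):
        # Map each word to the list of words observed right after it.
        chain = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
        return chain

    def generate(chain, word, length=15):
        # Repeatedly pick a random observed successor of the current word.
        out = [word]
        while len(out) < length and chain[word]:
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train(corpus), "the"))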

autoexec | 1 month ago

> In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before.

More to the point, human thinking isn't just outputting text by following an algorithm. Humans understand what each of those words actually means, what they represent, and what it means when those words are put together in a given order. An LLM can regurgitate the Wikipedia article on a plum; a human actually knows what a plum is and what it tastes like. That's why humans know that glue isn't a pizza topping and AI doesn't.

astrange | 1 month ago

> That's why humans know that glue isn't a pizza topping and AI doesn't.

It's the opposite. That came from a Google AI summary that was forced to quote a Reddit post, which was written by a human.