anon373839 | 1 month ago
In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before. The mechanics of language can work that way at times - we select common phrasings because we know they work grammatically and are understood by others, and it’s easy. But we do our thinking in a pre-language space and then search for the words that express our thoughts.
I think kids in school ought to be made to use small, primitive LLMs so they can form an accurate mental model of what the tech does. Big frontier models do exactly the same thing, only more convincingly.
minimaltom | 1 month ago
Do we have science that demonstrates humans don't autoregressively emit words? (Genuinely curious / uninformed).
From the outset, it's not obvious that auto-regression through the state space of actions (i.e. what LLMs do when yeeting tokens) is where they differ from humans. Though I guess we can distinguish LLMs from other models like diffusion/HRM/TRM, which explicitly refine their output rather than commit to a choice and then run `continue;`.
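Roughly, the commit-then-continue loop I mean is something like this (a minimal Python sketch; `next_token_probs` is a made-up stand-in for a trained model, not any real API):

```python
import random

def next_token_probs(context):
    # Made-up stand-in for a trained model: returns a probability
    # distribution over a toy vocabulary given the tokens so far.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}

def generate(prompt, max_new_tokens=10):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        words, weights = zip(*probs.items())
        # Commit to one token and keep going; nothing upstream is
        # ever revised, unlike diffusion/HRM-style refinement.
        tokens.append(random.choices(words, weights=weights)[0])
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

print(generate(["the", "cat"]))
```

The open question is whether anything humans do when speaking corresponds to that loop.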
autoexec | 1 month ago
More to the point, human thinking isn't just outputting text by following an algorithm. Humans understand what each of those words actually means, what it represents, and what it means when those words are put together in a given order. An LLM can regurgitate the Wikipedia article on a plum. A human actually knows what a plum is and what it tastes like. That's why humans know that glue isn't a pizza topping and AI doesn't.
astrange | 1 month ago
It's the opposite. That came from a Google AI summary that was forced to quote a Reddit post, which was written by a human.