Are you a stream of words or are your words the “simplistic” projection of your abstract thoughts? I don’t at all discount the importance of language in so many things, but the question that matters is whether statistical models of language can ever “learn” abstract thought, or become part of a system which uses them as a tool.
My personal assessment is that LLMs can do neither.
ACCount37|2 months ago
An LLM has: words in its input plane, words in its output plane, and A LOT of cross-linked internals between the two.
Those internals aren't "words" at all - and it's where most of the "action" happens. It's how LLMs can do things like translate from language to language, or recall knowledge they only encountered in English in the training data while speaking German.
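As a toy sketch of that shape (not a real LLM; every weight below is random and purely illustrative), words exist only at the very edges, and everything in between is vector arithmetic:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "mat"]   # toy vocabulary
    d = 8                                  # hidden size

    E = rng.normal(size=(len(vocab), d))   # input edge: token -> vector
    W1 = rng.normal(size=(d, d))           # the cross-linked "internals":
    W2 = rng.normal(size=(d, d))           #   vector-to-vector transforms
    U = rng.normal(size=(d, len(vocab)))   # output edge: vector -> token scores

    def next_token(prompt):
        ids = [vocab.index(w) for w in prompt]
        h = E[ids].mean(axis=0)       # words stop being words here
        h = np.tanh(W1 @ h)           # all the "action" happens on vectors,
        h = np.tanh(W2 @ h)           #   in no particular human language
        return vocab[int(np.argmax(h @ U))]  # words reappear only at the output

    print(next_token(["the", "cat"]))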
Hendrikto|2 months ago
The heavy lifting here is done by embeddings. This does not require a world model or “thought”.
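You can see this directly with an off-the-shelf multilingual embedding model: a sentence and its translation land near each other in vector space, no "thought" required. (A sketch assuming the sentence-transformers package and one particular public model; any similar model would do.)

    from sentence_transformers import SentenceTransformer, util

    # Assumed model name; any multilingual sentence-embedding model works.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    emb = model.encode([
        "The cat sat on the mat.",            # English
        "Die Katze saß auf der Matte.",       # German translation
        "Stock prices fell sharply today.",   # unrelated sentence
    ])

    print(util.cos_sim(emb[0], emb[1]))  # translation pair: high similarity
    print(util.cos_sim(emb[0], emb[2]))  # unrelated pair: much lower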
balamatom|2 months ago
My "abstract thoughts" are a stream of words too, they just don't get sounded out.
Tbf I'd rather they weren't there in the first place.
But bodies which refuse to harbor an "interiority" are fast-tracked to destruction because they can't suf^W^W^W be productive.
Funny movie scene from somewhere. The sergeant is drilling the troops: "You, private! What do you live for!", and expects an answer along the lines of dying for one's nation or some shit. Instead, the soldier replies: "Well, to see what happens next!"
d-lisp|2 months ago
To me, solving problems happens in a logico-aesthetic space, which may be the same one engaged when you are intellectually affected by a work of art. I don't remember ever being able to translate directly into words what I feel for a great movie or piece of music, even if, later on, I can translate this "complex mental entity" into words, exactly as I can tell someone how we need to change the architecture of a program in order to solve something, after having stared up and to the right for a few seconds.
It seems to me that we have an inner system that is much faster than language, one that creates entities that can then be slowly and sometimes painfully translated into language.
I do note that I'm not sure about any of the previous statements, though.
A4ET8a8uTh0_v2|2 months ago
Hmm, seems unlikely. The "they don't get sounded out" part is true, sure, but I question whether 'abstract thoughts' can be so easily dismissed as mere words.
edit: come to think of it (and I am asking this for a reason): do you hear your abstract thoughts?
Davidzheng|2 months ago
Though I do think that in human brains it's also an interplay, where what we write or say loops back into the thinking as well. That is something LLMs also do efficiently: their own output is fed back in as context for the next step.
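A minimal sketch of that loop, with a stand-in scoring function in place of a real model, since the only point here is the feedback path:

    import numpy as np

    rng = np.random.default_rng(1)
    vocab = ["I", "think", "therefore", "am", "."]

    def score_next(context):
        # Stand-in for a real language model: fake scores derived from context.
        seed = hash(tuple(context)) % (2**32)
        return np.random.default_rng(seed).normal(size=len(vocab))

    context = ["I"]
    for _ in range(5):
        scores = score_next(context)
        probs = np.exp(scores) / np.exp(scores).sum()
        tok = rng.choice(vocab, p=probs)
        context.append(tok)   # the output loops straight back into the input

    print(" ".join(context))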
gardenhedge|2 months ago
But raising kids, I can clearly see that intelligence isn't simply solved by LLMs.
lostmsu|2 months ago
Funny, I have the opposite experience. Like early LLMs, kids tend to give specific answers to questions they don't understand, or don't really know or remember the answer to. Kids also loop (give the same reply repeatedly to different prompts), enter highly emotional states where their output is garbled (everyone loves that one), etc. And it seems impossible to correct any of this until they just get smarter as their brains grow.
What's even more funny is that adults tend to do all these things as well, just less often.
akoboldfrying|2 months ago
If it turns out that LLMs don't model human brains well enough to qualify as "learning abstract thought" the way humans do, some future technology will do so. Human brains aren't magic, special or different.
meheleventyone|2 months ago
They’re certainly special, both to the individual and as a species on this planet. There are many brains similar to human brains, but none we know of with similar capabilities.
They’re also quite obviously different from LLMs, both in how they work foundationally and in capability.
I definitely agree with the materialist view that we will ultimately be able to emulate the brain using computation, but we’re nowhere near that yet, nor should we undersell the complexity involved.
thesz|2 months ago
[1] https://www.nature.com/articles/s41598-024-62539-5
As a result, all living cells with DNA emit coherent (as in lasers) light [2]. There is a theory that this light also facilitates intercellular communication.
[2] https://www.sciencealert.com/we-emit-a-visible-light-that-va...
Chemical structures in dendrites, not even whole neurons, are capable of computing XOR [3], something that would otherwise require a multilayer artificial neural network with at least 9 parameters (see the sketch below). Some neurons in the brain have hundreds of thousands of dendrites, so we are talking about millions of parameters in a single neuron's dendritic machinery alone.
[3] https://www.science.org/doi/10.1126/science.aax6239
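For scale: the smallest standard multilayer perceptron that computes XOR is a 2-2-1 network, with 4 weights + 2 biases in the hidden layer and 2 weights + 1 bias at the output, i.e. 9 parameters. A hand-wired sketch (weights picked by hand for illustration):

    import numpy as np

    W1 = np.array([[1.0, 1.0],    # hidden unit 1 fires on OR(x1, x2)
                   [1.0, 1.0]])   # hidden unit 2 fires on AND(x1, x2)
    b1 = np.array([-0.5, -1.5])
    W2 = np.array([1.0, -1.0])    # output: OR and not AND, i.e. XOR
    b2 = -0.5                     # 4 + 2 + 2 + 1 = 9 parameters

    step = lambda z: (z > 0).astype(float)

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        h = step(W1 @ np.array(x, dtype=float) + b1)
        y = step(W2 @ h + b2)
        print(x, "->", int(y))   # prints 0, 1, 1, 0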
So, while human brains aren't magic, special or different, they are just extremely complex.
Imagine building a computer out of 85 billion superconducting quantum computers, optically and electrically connected, each capable of performing the computations of a non-negligibly complex artificial neural network.
d-lisp|2 months ago
While I agree to some extent with the materialist conception, the brain is not an isolated mechanism, but rather one element of a system which is itself not isolated from the experience of being a body in a world, interacting with different systems to form super-systems.
The brain must be a very efficient mechanism, because it doesn't need to ingest the whole textual production of the human world in order to know how to write masterpieces (music, literature, films, software, theorems, etc.). Instead, the brain learns to be this very efficient mechanism by (as a starting process) feeling its own body sh*t on itself during a long part of its childhood.
I can teach someone to become really good at producing fine and efficient software, but by contrast I can only observe every day that my LLM of choice keeps being stupid even when I explain to it how it fails ("You're perfectly right!").
It is true that there's nothing magical about the brain, but I am pretty sure it must be stronger tech than a probabilistic/statistical next-word guesser (otherwise there would be much more consensus about the usability of LLMs, I think).
nephihaha|2 months ago
Animals and computers come close in some ways but aren't quite there.
littlestymaar|2 months ago
> Internal combustion engines and human brains are both just mechanisms. Why would one mechanism a priori be capable of "learning abstract thought", but no others?
The question isn't about what a hypothetical mechanism could or couldn't do; it's about whether the concrete mechanism we built does. And this one doesn't.
jibal|2 months ago
> If it turns out that LLMs don't model human brains well enough to qualify as "learning abstract thought" the way humans do, some future technology will do so. Human brains aren't magic, special or different.
Google "strawman".