It's more that "thinking" is a vague term we don't even understand in humans, so to me it's pretty meaningless to claim that LLMs do or don't think.
chpatrick|5 months ago
There's this very clichéd comment under any AI headline on HN:
"LLMs don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."
or its cousin:
"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time, UNLIKE humans, who generate text one character at a time by typing with their fleshy fingers."
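The "one character at a time" mechanism being parodied can be sketched with a toy character-level bigram model. This is purely illustrative (plain Python, invented corpus): real LLMs use neural networks over subword tokens rather than bigram counts, but the autoregressive loop — sample the next symbol, append it, repeat — is the same shape.

```python
import random
from collections import defaultdict

# Toy character-level "language model": count bigram frequencies in a tiny
# corpus, then generate text one character at a time from those statistics.
corpus = "the cat sat on the mat and the cat ran"
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1  # how often character b follows character a

random.seed(0)
out = ["t"]  # seed character
for _ in range(20):
    nxt = counts[out[-1]]
    if not nxt:  # no observed successor: stop generating
        break
    chars, weights = zip(*nxt.items())
    # sample the next character in proportion to observed frequency
    out.append(random.choices(chars, weights=weights)[0])

print("".join(out))
```

The loop never looks ahead; each character is chosen only from the distribution conditioned on what has been emitted so far, which is the property the comment is mocking as an argument.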
barnacs|5 months ago
Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought: a biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, and drives to satisfy physiological needs, socialize, self-actualize, and so on. These are the fundamental forces that drive us, even if our rational processes are capable of suppressing or delaying them to some degree.
In contrast, machine learning models have a loss function or reward system constructed purely by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function specified by humans.
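The loss-function framing above can be made concrete with a minimal sketch (plain Python; the target function, learning rate, and step count are invented for illustration): the model's only "goal" is the human-chosen loss, and training just adjusts parameters to reduce it.

```python
# Toy illustration: a "model" is parameters tuned to minimize a
# human-specified loss, here approximating the target f(x) = 2x + 1
# with a linear model y = w*x + b via gradient descent on squared error.
def target(x):
    return 2 * x + 1  # the function the human designer wants approximated

data = [(x, target(x)) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    # gradients of mean squared error - the objective is chosen by humans,
    # the model merely follows it downhill
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * grad_w, b - lr * grad_b

print(round(w, 2), round(b, 2))  # converges close to 2 and 1
```

Nothing in the loop expresses a want; change the loss function and the same machinery approximates a different target, which is the distinction the comment is drawing.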
shakna|5 months ago
Thinking is better understood than you seem to believe.
We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings and use it to make choices, that isn't usually considered intelligence.
Organizational complexity is one of the requirements for intelligence, and an LLM does not reach that threshold. LLMs hold vast amounts of data, but organizationally they are still simple - hence "AI slop".
[0] https://www.cell.com/trends/plant-science/abstract/S1360-138...
omnicognate|5 months ago
This seems backwards to me. There's a fully understood thing (LLMs) [1] and a not-understood thing (brains) [2]. You seem to require a person to fully define (presumably in some mathematical or mechanistic way) any behaviour they might observe in the not-understood thing before you will permit them to point out that the fully understood thing does not appear to exhibit that behaviour. In short, you are requiring that people explain brains before you will permit them to observe that LLMs don't appear to be the same sort of thing as brains. That seems rather unreasonable to me.
That doesn't mean such claims shouldn't be made as specific as possible. Just saying something like "humans love but machines don't" isn't terribly compelling. I think mathematics is an area where it seems possible to draw a reasonably intuitive line. Personally, I've always considered the ability to independently contribute genuinely novel pure mathematical ideas (i.e. to perform significant independent research in pure maths) to be a likely hallmark of true human-like thinking. This is a high bar, and one AI has not yet reached, despite the recent successes on the International Mathematical Olympiad [3] and various other recent claims. It isn't a moved goalpost, either - I've been saying the same thing for more than 20 years. I don't have to, and can't, define what "genuinely novel pure mathematical ideas" means, but we have a human system that recognises, verifies and rewards them, so I expect us to know them when they are produced.
By the way, your use of "magical" in your earlier comment is typical of the way that argument is often presented, and I think it's telling. It's very easy to fall into the fallacy of deducing things from one's own lack of imagination. I've certainly fallen into that trap many times before. It's worth honestly considering whether your reasoning is of the form "I can't imagine there being something other than X, therefore there is nothing other than X".
Personally, I think it's likely that to truly "do maths" requires something qualitatively different to a computer. Those who struggle to imagine anything other than a computer being possible often claim that that view is self-evidently wrong and mock such an imagined device as "magical", but that is not a convincing line of argument. The truth is that the physical Church-Turing thesis is a thesis, not a theorem, and a much shakier one than the original Church-Turing thesis. We have no particularly convincing reason to think such a device is impossible, and certainly no hard proof of it.
[1] Individual behaviours of LLMs are "not understood" in the sense that there is typically not some neat story we can tell about how a particular behaviour arises that contains only the truly relevant information. However, on a more fundamental level LLMs are completely understood and always have been, as they are human inventions that we are able to build from scratch.
[2] Anybody who thinks we understand how brains work isn't worth having this debate with until they read a bit about neuroscience and correct their misunderstanding.
[3] The IMO involves problems in extremely well-trodden areas of mathematics. While the problems are carefully chosen to be novel they are problems to be solved in exam conditions, not mathematical research programs. The performance of the Google and OpenAI models on them, while impressive, is not evidence that they are capable of genuinely novel mathematical thought. What I'm looking for is the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. That isn't here yet, and if and when it arrives it really will turn maths on its head.
unknown|5 months ago
[deleted]
CamperBob2|5 months ago
Why? Team "Stochastic Parrot" will just move the goalposts again, as they've done many times before.