top | item 45802934

educasean | 3 months ago

The debate around whether or not transformer-architecture-based AIs can "think" or not is so exhausting and I'm over it.

What's much more interesting is the question of "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?". Otherwise we go in endless circles about language and meaning of words instead of discussing practical, demonstrable capabilities.


Symmetry|3 months ago

"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra

oergiR|3 months ago

There is more to this quote than you might think.

Grammatically, in English the verb "swim" requires an "animate subject", i.e. a living being, like a human or an animal. So the question of whether a submarine can swim is about grammar. In Russian (IIRC), submarines can swim just fine, because the verb does not have this animacy requirement. Crucially, the question is not about whether or how a submarine propels itself.

Likewise, in English at least, the verb "think" requires an animate subject. The question of whether a machine can think is about whether you consider it to be alive. Again, whether or how the machine generates its output is not material to the question.

viccis|3 months ago

He was famously (and, I'm realizing more and more, correctly) averse to anthropomorphizing computing concepts.

pegasus|3 months ago

I disagree. The question is really about whether inference is in principle as powerful as human thinking, and so would deserve the same label. Which is not at all a boring question. It's equivalent to asking whether current architectures are enough to reach AGI (I myself doubt this).

esafak|3 months ago

I think it is, though, because it challenges our belief that only biological entities can think, and thinking is a core part of our identity, unlike swimming.

handfuloflight|3 months ago

What an oversimplification. Thinking computers can create more swimming submarines, but the inverse is not possible. Swimming is a closed solution; thinking is a meta-solution.

tjr|3 months ago

Without going to look up the exact quote, I remember an AI researcher years (decades) ago saying something to the effect of, Biologists look at living creatures and wonder how they can be alive; astronomers look at the cosmos and wonder what else is out there; those of us in artificial intelligence look at computer systems and wonder how they can be made to wonder such things.

paxys|3 months ago

Don't be sycophantic. Disagree and push back when appropriate.

Come up with original thought and original ideas.

Have long term goals that aren't programmed by an external source.

Do something unprompted.

The last one IMO is more complex than the rest, because LLMs are fundamentally autocomplete machines. But what happens if you don't give them any prompt? Can they spontaneously come up with something, anything, without any external input?

BeetleB|3 months ago

> Disagree and push back

The other day an LLM gave me a script that had undeclared identifiers (it hallucinated a constant from an import).

When I informed it, it said "You must have copy/pasted incorrectly."

When I pushed back, it said "Now you trust me: The script is perfectly correct. You should look into whether there is a problem with the installation/config on your computer."

IanCal|3 months ago

> Don't be sycophantic. Disagree and push back when appropriate.

They can do this though.

> Can they spontaneously come up with something, anything, without any external input?

I don’t see any reason why not, but then humans don’t have zero input either, so I’m not sure why that’s useful.

jackcviers3|3 months ago

The last one is fairly simple to solve. Set up a microphone in any busy location where conversations are occurring. In an agentic loop, send random snippets of the audio recordings for transcription to text. Randomly feed that to an LLM, appending to a conversational context. Then also hook up a chat interface to discuss topics with the LLM. The random background noise, and the context output generated in response to it, serve as a confounding internal dialog to the conversation it is having with the user via the chat interface. It will affect the outputs in response to the user.

It might interrupt the user's chain of thought with random questions about what it is hearing in the background, and if given tools for web search or generating an image, it might do unprompted things. Of course, this is a trick, but you could argue that the sensory input living sentient beings receive is the same sort of trick, I think.

I think the conversation will derail pretty quickly, but it would be interesting to see how uncontrolled input had an impact on the chat.
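The loop described above can be sketched as follows. This is a minimal, hypothetical skeleton: the transcription and LLM calls are stubbed out with placeholder functions (a real setup would use a speech-to-text API and an actual model call), and the snippet list stands in for live microphone audio.

```python
import random

# Hypothetical stand-ins for live microphone capture; a real setup
# would use a speech-to-text service on actual audio snippets.
BACKGROUND_SNIPPETS = [
    "did you see the game last night",
    "the 4 train is delayed again",
    "I think it's going to rain",
]

def transcribe_random_audio():
    """Stub for mic capture + transcription of a random audio snippet."""
    return random.choice(BACKGROUND_SNIPPETS)

def llm_respond(context):
    """Stub for an LLM call; reacts to the most recent context entry."""
    return f"(musing on: {context[-1]!r})"

def agent_loop(user_messages, overhear_prob=0.5):
    """Interleave user chat with randomly injected overheard audio.

    The overheard snippets and the model's reactions to them accumulate
    in the same context as the user conversation, acting as the
    'confounding internal dialog' described above.
    """
    context = []
    for msg in user_messages:
        if random.random() < overhear_prob:
            context.append("[overheard] " + transcribe_random_audio())
            context.append(llm_respond(context))
        context.append("[user] " + msg)
        context.append(llm_respond(context))
    return context

# With overhear_prob=1.0 every user turn is preceded by background noise.
history = agent_loop(["hello", "what were we talking about?"], overhear_prob=1.0)
```

Whether the conversation stays coherent or derails, as suspected above, depends entirely on how the real model weighs the injected context against the user's messages.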

awestroke|3 months ago

Are you claiming humans do anything unprompted? Our biology prompts us to act

gwd|3 months ago

> The last one IMO is more complex than the rest, because LLMs are fundamentally autocomplete machines. But what happens if you don't give them any prompt? Can they spontaneously come up with something, anything, without any external input?

Human children typically spend 18 years of their lives being RLHF'd before we let them loose. How many people do something truly out of the bounds of the "prompting" they've received during that time?

khafra|3 months ago

Note that model sycophancy is caused by RLHF. In other words: Imagine taking a human in his formative years, and spending several subjective years rewarding him for sycophantic behavior and punishing him for candid, well-calibrated responses.

Now, convince him not to be sycophantic. You have up to a few thousand words of verbal reassurance to do this with, and you cannot reward or punish him directly. Good luck.

omnicognate|3 months ago

> "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?"

Independent frontier maths research, i.e. coming up with and proving (preferably numerous) significant new theorems without human guidance.

I say that not because I think the task is special among human behaviours. I think the mental faculties that mathematicians use to do such research are qualitatively the same ones all humans use in a wide range of behaviours that AI struggles to emulate.

I say it because it's both achievable (in principle, if LLMs can indeed think like humans) and verifiable. Achievable because it can be viewed as a pure text generation task and verifiable because we have well-established, robust ways of establishing the veracity, novelty and significance of mathematical claims.

It needs to be frontier research maths because that requires genuinely novel insights. I don't consider tasks like IMO questions a substitute as they involve extremely well trodden areas of maths so the possibility of an answer being reachable without new insight (by interpolating/recombining from vast training data) can't be excluded.

If this happens I will change my view on whether LLMs think like humans. Currently I don't think they do.

pegasus|3 months ago

This, so much. Many mathematicians and physicists believe in intuition as a function separate from intellect. One is more akin to a form of (inner) perception, whereas the other is generative: extrapolation based on pattern matching and statistical thinking. That second function we have a handle on, and we get better at it every year, but we don't even know how to define intuition properly. A fascinating book that discusses this phenomenon is Nature Loves to Hide: Quantum Physics and Reality, a Western Perspective [1]

This quote from Grothendieck [2] (considered by many the greatest mathematician of the 20th century) points to a similar distinction: The mathematician who seeks to understand a difficult problem is like someone faced with a hard nut. There are two ways to go about it. The one way is to use a hammer — to smash the nut open by brute force. The other way is to soak it gently, patiently, for a long time, until it softens and opens of itself.

[1] https://www.amazon.com/Nature-Loves-Hide-Quantum-Perspective...

[2] https://en.wikipedia.org/wiki/Alexander_Grothendieck

tim333|3 months ago

That's quite a high bar for thinking like humans which rules out 99.99% of humans.

amarant|3 months ago

Solve simple maths problems, for example the kind found in the game 4=10 [1]

It doesn't necessarily have to solve them reliably, as some of them are quite difficult, but LLMs are just comically bad at this kind of thing.

Any kind of novel-ish (can't just find the answers in the training data) logic puzzle like this is, in my opinion, a fairly good benchmark for "thinking".

Until an LLM can compete with a 10-year-old child at this kind of task, I'd argue that it's not yet "thinking". A thinking computer ought to be at least that good at maths, after all.

[1] https://play.google.com/store/apps/details?id=app.fourequals...
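For context on the puzzle class being discussed: in 4=10-style puzzles you combine four given digits with +, -, *, / and parentheses to reach 10. As a classical search problem it is trivially brute-forceable, which is part of why it makes an interesting contrast with LLM performance. A minimal solver sketch (the function names and templates are my own, not from the game):

```python
from itertools import permutations, product

OPS = "+-*/"
# The five distinct ways to parenthesize four operands.
TEMPLATES = [
    "(({a}{o1}{b}){o2}{c}){o3}{d}",
    "({a}{o1}({b}{o2}{c})){o3}{d}",
    "({a}{o1}{b}){o2}({c}{o3}{d})",
    "{a}{o1}(({b}{o2}{c}){o3}{d})",
    "{a}{o1}({b}{o2}({c}{o3}{d}))",
]

def solve(digits, target=10):
    """Return one expression over the four digits equal to target, or None."""
    for a, b, c, d in permutations(digits):
        for o1, o2, o3 in product(OPS, repeat=3):
            for t in TEMPLATES:
                expr = t.format(a=a, b=b, c=c, d=d, o1=o1, o2=o2, o3=o3)
                try:
                    if abs(eval(expr) - target) < 1e-9:
                        return expr
                except ZeroDivisionError:
                    continue  # skip divisions by zero
    return None
```

For example, `solve([1, 2, 3, 4])` finds an expression such as `((1+2)+3)+4`, while `solve([1, 1, 1, 1])` correctly reports no solution exists.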

simonw|3 months ago

> solve simple maths problems, for example the kind found in the game 4=10

I'm pretty sure that's been solved for almost 12 months now - the current generation "reasoning" models are really good at those kinds of problems.

xienze|3 months ago

> "If what LLMs do today isn't actual thinking, what is something that only an actually thinking entity can do that LLMs can't?"

Invent some novel concept, much the same way scientists and mathematicians of the distant past did? I doubt Newton's brain was simply churning out a stream of the "next statistically probable token" until -- boom! Calculus. There was clearly a higher order understanding of many abstract concepts, intuition, and random thoughts that occurred in his brain in order to produce something entirely new.

danielbln|3 months ago

My 5 year old won't be coming up with novel concepts around calculus either, yet she's clearly thinking, sentient and sapient. Not sure taking the best of the best of humanity as the goal standard is useful for that definition.

hshdhdhj4444|3 months ago

> Newton's brain was simply churning out a stream of the "next statistically probable token"

At some level we know human thinking is just electrons and atoms flowing. It’s likely that at some level between that and “Boom! Calculus”, the complexity is equivalent to streaming the next statistically probable token.

plufz|3 months ago

Have needs and feelings? (I mean, we can’t KNOW that they don’t, and we know of at least one experiment where an LLM tried to avoid being shut down, but I think the evidence of feelings seems weak so far.)

jstanley|3 months ago

But you can have needs and feelings even without doing thinking. It's separate.

bloppe|3 months ago

Ya, the fact this was published on November 3, 2025 is pretty hilarious. This was last year's debate.

I think the best avenue toward actually answering your questions starts with OpenWorm [1]. I helped out in a Connectomics research lab in college. The technological and epistemic hurdles are pretty daunting, but so were those for Genomics last century, and now full-genome sequencing is cheap and our understanding of various genes is improving at an accelerating pace. If we can "just" accurately simulate a natural mammalian brain on a molecular level using supercomputers, I think people would finally agree that we've achieved a truly thinking machine.

[1]: https://archive.ph/0j2Jp

9rx|3 months ago

> Otherwise we go in endless circles about language and meaning of words

We understand thinking as being some kind of process. The problem is that we don't understand the exact process, so when we have these discussions the question is if LLMs are using the same process or an entirely different process.

> instead of discussing practical, demonstrable capabilities.

This doesn't resolve anything as you can reach the same outcome using a different process. It is quite possible that LLMs can do everything a thinking entity can do all without thinking. Or maybe they actually are thinking. We don't know — but many would like to know.

zer00eyz|3 months ago

> what is something that only an actually thinking entity can do that LLMs can't?

Training != Learning.

If a new physics breakthrough happens tomorrow, one that, say, lets us have FTL, how is an LLM going to acquire the knowledge? How does that differ from you?

The breakthrough paper alone isn't going to be enough to override its foundational knowledge in a new training run. You would need enough source documents and a clear path to deprecate the old ones...

anon291|3 months ago

The issue is that we have no means of discussing equality without tossing out the first order logic that most people are accustomed to. Human equality and our own perceptions of other humans as thinking machines is an axiomatic assumption that humans make due to our mind's inner sense perception.

xnx|3 months ago

> what is something that only an actually thinking entity can do that LLMs can't?

This is pretty much exactly what https://arcprize.org/arc-agi is working on.

deadbabe|3 months ago

Form ideas without the use of language.

For example: imagining how you would organize a cluttered room.

Chabsff|3 months ago

Ok, but how do you go about measuring whether a black-box is doing that or not?

We don't apply that criterion when evaluating animal intelligence. We sort of take it for granted that humans at large do that, but not via any test that would satisfy an alien.

Why should we be imposing white-box constraints to machine intelligence when we can't do so for any other?

embedding-shape|3 months ago

> Form ideas without the use of language.

Don't LLMs already do that? "Language" is just something we've added as a later step in order to understand what they're "saying" and "communicate" with them, otherwise they're just dealing with floats with different values, in different layers, essentially (and grossly over-simplified of course).

gf000|3 months ago

What people are interested in is finding a definition for intelligence, that is an exact boundary.

That's why we first considered tool use, or being able to plan ahead, as intelligence, until we found that these are not all that rare in the animal kingdom in some shape. Then with the advent of IT, what we imagined as impossible turned out to be feasible to solve, while what we thought of as easy (e.g. robot movement: a "dumb animal" can move trivially, so surely it is not hard) turned out to require many decades before we could somewhat imitate it.

So the goal post moving of what AI is is... not moving the goal post. It's not hard to state trivial upper bounds that differentiate human intelligence from anything known to us, like the invention of the atomic bomb. LLMs are nowhere near that kind of invention and reasoning capability.

paulhebert|3 months ago

Interestingly, I think the distinction between human and animal thinking is much more arbitrary than the distinction between humans and LLMs.

Although an LLM can mimic a human well, I’d wager the processes going on in a crow’s brain are much closer to ours than an LLM’s.

Balinares|3 months ago

Strive for independence.