gishh|3 months ago
You understand how the tech works right? It's statistics and tokens. The computer understands nothing. Creating "understanding" would be a breakthrough.
Edit: I wasn't trying to be a jerk. I sincerely wasn't. I don't "understand" how LLMs "understand" anything. I'd be super pumped to learn that bit. I don't have an agenda.
frotaur|3 months ago
I would say: apart from observable, testable performance, what else can you say about understanding?
It is a fact that LLMs are getting better at many tasks. Judging from their performance, they seem to have an understanding of, say, Python.
The mechanistic way this understanding arises is different from the way it arises in humans.
How, then, can you say it is 'not real' without invoking the hard problem of consciousness? At that point we've hit a completely open question.
matthewkayin|3 months ago
When I ask it to use a specific MCP to complete a certain task, and it proceeds to not use that MCP, this indicates a clear lack of understanding.
You might say the fault was mine, that I didn't set up or initialize the MCP tool properly. But wouldn't an understanding AI recognize that it didn't have access to the MCP and tell me it couldn't satisfy my request, rather than blindly carrying on without it?
LLMs consistently prove that they lack the ability to evaluate statements for truth. They lack, as well, an awareness of their unknowing, because they are not trying to understand; their job is to generate (to hallucinate).
It astonishes me that people can be so blind to this weakness of the tool. And when we raise concerns, people always say
"How can you define what 'thinking' is?" "How can you define 'understanding'?"
These philosophical questions are missing the point. When we say it doesn't "understand", we mean that it doesn't do what we ask. It isn't reliable. It isn't as useful to us as perhaps it has been to you.
unknown|3 months ago
[deleted]
phantasmish|3 months ago
“Do chairs exist?”:
https://m.youtube.com/watch?v=fXW-QjBsruE
unknown|3 months ago
[deleted]
encyclopedism|2 months ago
So I can categorically say LLMs do not understand, by quite literally understanding what NOT understanding is.
We know what LLMs are and what they are NOT.
Please see my earlier comment above:
> LLMs do not think, understand, reason, reflect, or comprehend, and they never shall.
kunley|3 months ago
C'mon, this comparison seems to be very, very unscientific. No offense...
LatencyKills|3 months ago
You don’t know how your own mind “understands” something. No one on the planet can even describe how human understanding works.
Yes, LLMs are vast statistical engines but that doesn’t mean something interesting isn’t going on.
At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.
I expect to see responses like yours on Reddit, not HN.
Libidinalecon|2 months ago
The standard, meaningless HN appeal to authority: "I worked at Google, therefore I am an expert on both stringing a baroque lute and the finer points of Lao cooking."
Gemini 3 gives a nice explanation if asked, "Can you explain how you don't really understand anything?"
claytongulick|2 months ago
That does seem to be a bit important for any "intelligent" system.
fzeroracer|2 months ago
Humans are remarkably consistent in their behavior in trained environments. That's why we trust humans to perform dangerous, precise and high stakes tasks. Humans have the meta-cognitive abilities to understand when their abilities are insufficient or when they need to reinforce their own understanding, to increase their resilience.
If you genuinely believe humans hallucinate more often, then I don't think you actually understand how Copilot works.
socrateswasone|3 months ago
[deleted]
szundi|3 months ago
[deleted]
youwatnow|3 months ago
[deleted]
gishh|3 months ago
I suppose that says something about both of us.
claytongulick|2 months ago
You're the one then. All those laggardly neurobiologists are still struggling.
Uehreka|3 months ago
I’m always struck by how confidently people assert stuff like this, as if the fact that we can easily comprehend the low-level structure somehow invalidates the reality of the higher-level structures. As if we know concretely that the human mind is something other than emergent complexity arising from simpler mechanics.
I’m not necessarily saying these machines are “thinking”. I wish I could say for sure that they’re not, but that would be dishonest: I feel like they aren’t thinking, but I have no evidence to back that up, and I haven’t seen non-self-referential evidence from anyone else.
claytongulick|2 months ago
It's all just atoms clinging to each other.
Simple.
Heh.
deepGem|3 months ago
Why does the LLM need to understand anything? What today's chatbots have achieved is a software engineering feat. They have taken a stateless token-generation machine, one that has compressed the entire internet's vocabulary to predict the next token, and 'hacked' a whole state-management machinery around it. The end result is a product that feels like another human conversing with you and remembering your last birthday.
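The "state management around a stateless machine" point can be sketched in a few lines. This is an illustrative toy, not any vendor's actual API: `fake_completion` and `Chat` are hypothetical names, and the "model" is a stub. The key idea is real, though: the completion call itself has no memory, so the wrapper replays the whole transcript on every request.

```python
def fake_completion(prompt: str) -> str:
    """Stand-in for a stateless next-token predictor (hypothetical)."""
    p = prompt.lower()
    # The model can only "remember" what is literally in the prompt.
    if "when is my birthday" in p and "june 5" in p:
        return "Your birthday is June 5."
    return "Noted."

class Chat:
    """All the 'memory' lives here, outside the model."""
    def __init__(self):
        self.history = []  # list of (role, message) tuples

    def send(self, user_msg: str) -> str:
        self.history.append(("user", user_msg))
        # Replay the full conversation so the stateless model can "remember".
        prompt = "\n".join(f"{role}: {msg}" for role, msg in self.history)
        reply = fake_completion(prompt)
        self.history.append(("assistant", reply))
        return reply

chat = Chat()
chat.send("My birthday is June 5.")
print(chat.send("When is my birthday?"))  # answer comes from replayed history
```

The second call only succeeds because the first turn is re-sent inside the prompt; drop the history replay and the "memory" vanishes, which is the engineering-feat point above.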
Engineering will surely get better, and while purists can argue that a new research perspective is needed, the current growth trajectory of chatbots, agents, and code-generation tools will carry the torch forward for years to come.
If you ask me, this new AI winter will thaw in the atmosphere even before it settles on the ground.
goncharom|3 months ago
LLMs activate similar neurons for similar concepts not only across languages, but also across input types. I'd like to know whether you'd consider that a good representation of "understanding", and if not, how you would define it.
observationist|3 months ago
LLMs aren't as good as humans at understanding, but it's not just statistics. The stochastic parrot meme is wrong. The networks create symbolic representations in training, with huge multidimensional correlations between patterns in the data, whether it's temporal or semantic. The models "understand" concepts like emotions, text, physics, arbitrary social rules and phenomena, and anything else present in the data and context in the same fundamental way that humans do. We're just better, with representations a few orders of magnitude higher resolution, much wider redundancy, and multi-million node parallelism with asynchronous operation that silicon can't quite match yet.
In some cases AI is superhuman and uses better constructs than humans are capable of; in other cases it uses hacks and shortcuts in its representations, mimicking where it falls short; and in some cases it fails entirely, with a suite of failure modes that aren't anywhere in the human taxonomy of operation.
LLMs and AI aren't identical to human cognition, but there's a hell of a lot of overlap, and the stochastic parrot "ItS jUsT sTaTiStIcS!11!!" meme should be regarded as an embarrassing opinion to hold.
"Thinking" models that cycle context and systems of problem solving also don't do it the same way humans think, but they overlap in some of the important pieces of how we operate. We are many orders of magnitude beyond the old ALICE bots and MegaHAL Markov chains — you'd need computers the size of solar systems to run a Markov chain equivalent to even a 40B LLM, let alone one of the frontier models, and those performance gains are objectively within the domain of "intelligence." We're pushing the theory and practice of AI and ML squarely into the domain of architectures and behaviors that characterize biological intelligence, and the state-of-the-art models clearly demonstrate their capabilities accordingly.
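For scale, here is what a Markov chain's "knowledge" actually is: a lookup table of observed transitions. A minimal word-level bigram version (toy corpus and names are mine, purely for illustration) makes the blow-up argument concrete — an order-n chain over vocabulary V needs on the order of V**n table entries to match a long context, whereas an LLM's parameter count is fixed.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record every observed word -> next-word transition."""
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=8, seed=0):
    """Walk the transition table; the chain's whole 'memory' is one word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: no transition ever observed from this word
        out.append(rng.choice(choices))
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Each extra word of context multiplies the table size by the vocabulary, which is the exponential wall the comment is pointing at; an LLM sidesteps it by compressing transitions into shared weights instead of enumerating them.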
For any definition of understanding you care to lay down, there's significant overlap between the way human brains do it and the way LLMs do it. LLMs are specifically designed to model constructs from data, and to model the systems that produce the data they're trained on, and the data they model comes from humans and human processes.
unknown|3 months ago
[deleted]
stanfordkid|3 months ago
"Understand" just means "parse language", and it's highly subjective. If I talk to someone from Africa in Chinese, they do not understand me, but they are still conscious.
If I talk to an LLM in Chinese, it will understand me, but that doesn't mean it is conscious.
If I talk about physics to a kindergartner, they will not understand, but that doesn't mean they don't understand anything.
Do you see where I am going?
sukhdeepprashit|3 months ago
[deleted]