top | item 46908789

noncentral | 25 days ago

Just a quick comment on the “fact vs fiction” issue. Humans don’t reliably solve that either. For most of history, people believed the Earth was flat because every local observation they had access to pointed in that direction. Their frame of reference was simply too limited to reveal the error.

RCC isn’t claiming that LLMs are uniquely flawed. The point is that any system working with partial visibility (humans included) can’t guarantee globally correct judgments. What counts as “fact” only becomes stable when there is an external reference frame, and embedded agents don’t have access to one.

RCC just states these limits in geometric and observability terms.

jqpabc123 | 25 days ago

> Humans don’t reliably solve that either.

They can and have. But they don't achieve this by using probability to guess the next word in a sentence.

Once again, judgment is an important but ill-defined aspect of intelligence.

An LLM has none. Instead, it relies on probability --- which we all know can easily produce incorrect results that sound plausible.

Tempered with human judgment, LLMs can still prove useful in some strictly constrained cases, but its general-purpose reliability is highly suspect --- in my judgment. And this lack of reliability counters the logic for applying them in a lot of cases.

noncentral | 25 days ago

The argument that “LLMs lack judgment because they only guess the next token probabilistically” starts from an overly simplistic model of how human judgment actually forms.

Humans also begin as probabilistic next-word predictors. Look at early language formation in infants:

“Mom → food”
“Mom → poop”

This is literally a next-token model. There is no semantics, no reasoning—only repeated patterns, reinforced predictions, and gradual abstraction. As children grow, they expand the sequence window:

“Mom I’m hungry” → “Mom can you go to the store and get the ice cream I like”

This is the emergence of abstraction → generalization → specialization, the exact loop LLMs run internally.
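The analogy above can be made concrete with a toy bigram model: counts of which word follows which, with the most reinforced continuation winning. This is a deliberately minimal sketch; the corpus and function names are invented for illustration, not anything from an actual LLM.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for repeated infant observations
# (purely illustrative data).
corpus = (
    "mom food . mom food . mom poop . "
    "mom i am hungry . mom i am hungry ."
).split()

# Count how often each word follows each context word:
# this table is the "reinforced predictions" of the analogy.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the most reinforced continuation of `prev`."""
    return counts[prev].most_common(1)[0][0]

print(predict("mom"))   # most frequent (and first-seen) follower of "mom"
print(predict("am"))    # → "hungry"
```

Expanding the context from one previous word to a longer window is exactly the "sequence window" growth described above; real LLMs do this with learned representations rather than raw counts.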

Human cognition is biochemical; LLMs are computational. Different substrate, similar functional loop.

And “judgment” is not a mystical faculty. It can be decomposed into: 1. forming a generalized baseline, 2. comparing specific cases to that baseline, 3. updating through iteration, 4. selecting an output.

LLMs do exactly this. Pretraining forms the baseline, attention performs comparison, decoding performs selection.
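That four-step decomposition can be sketched as a short loop. Everything here is a hedged stand-in: the prior table plays the role of pretraining's baseline, additive evidence weighting crudely stands in for attention-style comparison, and softmax-then-argmax stands in for greedy decoding. None of these names or numbers come from a real model.

```python
import math

# 1. A generalized baseline: prior scores for candidate outputs
#    (illustrative stand-in for what pretraining would provide).
baseline = {"yes": 1.0, "no": 1.0, "maybe": 0.5}

def judge(case_evidence):
    """Toy judgment loop: compare a case to the baseline, update, select."""
    scores = dict(baseline)
    # 2 + 3. Compare each piece of evidence to the baseline and update
    #        the scores iteratively (crude analogue of attention weighting).
    for option, weight in case_evidence:
        scores[option] += weight
    # 4. Select an output: softmax, then pick the most probable candidate
    #    (the greedy-decoding analogue).
    z = sum(math.exp(s) for s in scores.values())
    probs = {k: math.exp(s) / z for k, s in scores.items()}
    return max(probs, key=probs.get)

print(judge([("yes", 0.3), ("no", 1.2)]))  # → "no"
```

The point of the sketch is only structural: nothing in the loop consults an external truth-frame, which is the limitation the surrounding argument attributes to humans and LLMs alike.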

If your definition of judgment is “access to a global, external truth-frame,” then humans do not possess judgment either. For most of history people believed the Earth was flat because their local frame of reference made it the most reasonable inference.

Judgment is always local for embedded agents—biological or computational.

This is precisely what RCC explains: LLM failures are not due to “probabilistic prediction,” but due to embeddedness and partial observability, the same geometric constraint that applies to humans.

The reliability issue is structural, not moral or mystical.