top | item 45625722


kla-s | 4 months ago

I'd also add that 5) We need some sense of truth.

I'm not quite sure the current paradigm of LLMs is robust enough, given the recent Anthropic paper about the effect of data quality, or rather the lack thereof: a small number of bad samples can poison the well, and this doesn't get better with more data. Especially in conjunction with 4), some sense of truth becomes crucial in my eyes. (The question, in my eyes, is how this works. Something verifiable and understandable like Lean would be great, but how does this work with fuzzier topics…)


FloorEgg | 4 months ago

That's a segue into an important and rich philosophical space...

What is truth? Can it be attained, or only approached?

Can truth be approached (progress made towards truth) without interacting with reality?

The only shared truth-seeking algorithm I know of is the scientific method, which breaks truth down into two categories (my words here):

1) truth about what happened (controlled, documented experiments), and 2) truth about how reality works (predictive power).

Contrast that with something like Karl Friston's free energy principle, which is more of a single-unit truth-seeking (really, predictive-capability-seeking) model.

So it seems like truth isn't an input to AI so much as it's an output, and it can't be attained, only approached.

But maybe you don't mean truth so much as a capability to definitively prove, in which case I agree and I think that's worth adding. Somehow integrating formal theorem proving algorithms into the architecture would probably be part of what enables AI to dramatically exceed human capabilities.
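As a small illustration of what "definitively prove" could mean here (this example and its names are mine, not from the thread), a Lean 4 snippet shows the appeal: the kernel accepts a proof only if it is actually valid, so a machine-generated proof can be checked without trusting whatever produced it:

```
-- A tiny machine-checked theorem: commutativity of natural-number addition.
-- The Lean kernel verifies the proof term mechanically, so "truth" here
-- does not depend on trusting the prover that wrote it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The open question kla-s raises still stands, though: this kind of checkable truth only covers statements you can formalize, not fuzzier empirical topics.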

simonh | 4 months ago

I think that in some senses truth is associated with action in the world. That’s how we test our hypotheses. Not just in science, in terms of empirical adequacy, but even as children and adults. We learn from experience of doing, not just rote, and we associate effectiveness with truth. That’s not a perfect heuristic, but it’s better than just floating in a sea of propositions as current LLMs largely are.