
Formal Proof: LLM Hallucinations Are Structural, Not Statistical (Coq Verified)

2 points | ICBTheory | 3 months ago | philpapers.org

3 comments


ICBTheory | 3 months ago

Author here.

This paper is Part III of a trilogy investigating the limits of algorithmic cognition. Given recent industry signals about "scaling plateaus" (e.g., Sutskever's remarks), I attempt to formalize why these limits appear structurally unavoidable.

The Thesis: We model modern AI as a Probabilistic Bounded Semantic System (P-BoSS). The paper demonstrates via the "Inference Trilemma" that hallucinations are not transient bugs to be fixed by more data, but mathematical necessities when a bounded system faces fat-tailed domains (alpha ≤ 1).
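The α ≤ 1 condition has a concrete statistical consequence that is easy to demonstrate: below that threshold the distribution has no finite mean, so sample averages never converge, and "more data" cannot stabilize an estimate. Here is a minimal sketch of that effect using the standard Pareto distribution from Python's stdlib (the choice of Pareto and the specific α values are my illustration, not the paper's formalism):

```python
import random

def running_means(alpha, n, seed=0):
    """Running sample means of n draws from a Pareto(alpha) distribution."""
    rng = random.Random(seed)
    total, means = 0.0, []
    for i in range(1, n + 1):
        total += rng.paretovariate(alpha)  # support (1, inf), tail index alpha
        means.append(total / i)
    return means

# alpha = 3.0: finite mean (alpha / (alpha - 1) = 1.5); averages settle down.
# alpha = 0.8: infinite mean; the running average keeps jumping whenever a
# single extreme draw lands, no matter how many samples are taken.
light = running_means(3.0, 100_000)
heavy = running_means(0.8, 100_000)
```

Plotting `light` and `heavy` makes the contrast vivid: the first curve flattens near 1.5, the second is punctuated by jumps of arbitrary size all the way out.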

The Proof: While this paper focuses on the CS implications, the underlying mathematical theorems (Rice's Theorem applied to Semantic Frames, Sheaf-Theoretic Gluing Failures) are formally verified in Coq.

You can find the formal proofs and the Coq code in the companion paper (Part II) here:

https://philpapers.org/rec/SCHTIC-16

I’m happy to discuss the P-BoSS definition and why probabilistic mitigation fails in divergent entropy regimes.

wiz21c | 3 months ago

Since we can't avoid hallucinations, maybe we can live with them?

I mean, I regularly use LLMs, and although they sometimes go a bit mad, most of the time they're really helpful.