noncentral | 25 days ago
RCC is neither an architecture nor a training trick. It’s a set of structural axioms that describe why hallucination, inference drift, and loss of long-horizon consistency appear even as models get larger.
Axiom 1: Partial Observability. An embedded system never has access to the full internal state of the manifold it operates in.
Axiom 2: Non-central Observer. The system cannot determine whether its viewpoint is central or peripheral.
Axiom 3: No Stable Global Reference Frame. Internal representations drift over time because there is no fixed frame keeping them aligned.
Axiom 4: Irreversible Collapse. Each inference step collapses information in a way that cannot be fully reversed, pushing the system toward local rather than global consistency.
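For intuition on Axiom 4, here is a minimal toy sketch of my own (not taken from the RCC text): ordinary greedy decoding already behaves like an irreversible collapse, because many different next-token distributions map to the same emitted token, and later steps only ever see the token.

    import numpy as np

    # Toy sketch (mine, not from the RCC text): a greedy decoding step collapses
    # a whole next-token distribution into a single token id. Many distinct
    # distributions map to the same token, so the step is many-to-one and the
    # discarded probability mass cannot be recovered by later steps.

    rng = np.random.default_rng(0)
    vocab = 1000

    logits = rng.normal(size=vocab)                     # one internal state
    perturbed = logits + 0.01 * rng.normal(size=vocab)  # a slightly different state

    token_a = int(np.argmax(logits))
    token_b = int(np.argmax(perturbed))

    # With high probability the two states emit the same token; downstream steps
    # condition only on that token, so whatever distinguished the states is gone.
    print(token_a, token_b, token_a == token_b)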
Several predictions follow from these axioms:

• Hallucination is structurally unavoidable, not just a training deficit.
• Planning failures after roughly 8 to 12 steps come directly from the collapse mechanism (see the back-of-envelope sketch below).
• RAG, tools, and schemas act as temporary external reference frames, but they do not eliminate the underlying boundary.
• Scaling helps, but only up to an asymptotic limit defined by RCC.
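The 8-to-12-step figure is easier to get a feel for with a compounding calculation. This is my own illustration, not the RCC derivation, and the per-step consistency probability p is an assumed free parameter: if each step stays consistent with probability p, an n-step chain survives with probability p^n, which drops below one half at around 8 to 11 steps for p between roughly 0.92 and 0.94.

    import math

    # Back-of-envelope sketch (my illustration, not the RCC derivation): assume
    # each inference step independently stays consistent with probability p.
    # An n-step chain then survives with probability p**n; the horizon where
    # that drops below 50% is n = ln(0.5) / ln(p).

    for p in (0.90, 0.92, 0.94):
        horizon = math.log(0.5) / math.log(p)
        print(f"per-step consistency {p:.2f} -> ~{horizon:.1f} steps before a coin flip")

Real failures are of course not independent per step, so this is only first-order intuition for why the breakdown would appear at a fairly sharp horizon rather than gradually.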
I’m curious how people here interpret these constraints. Do they match what you see in real LLM systems? And do you think limits like these are fundamental, or just a temporary artifact of current model design?
Full text here: https://www.effacermonexistence.com/axioms
No comments yet.