
1 point | noncentral | 23 days ago

noncentral | 23 days ago

Most people explain LLM failures by saying we do not have enough data, enough RL, enough supervision, or enough scale. But the same problems have persisted through GPT-3, GPT-4, 4o, and now 5. At some point it seems reasonable to ask whether we are even looking in the right direction.

I spent a few days thinking about this and kept coming back to a different way of framing the issue.

LLMs behave like observers that are stuck inside a space they cannot see.

If a system makes predictions without access to its own internal state, without seeing the container it operates in, and without any global reference for what is correct, then the same failure modes will always show up: hallucination, small inconsistencies, planning that falls apart after eight to twelve steps, long-range drift.
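To make the "falls apart after eight to twelve steps" intuition concrete, here is a toy compounding-error model. This is my own illustration, not math from the post: it simply assumes each plan step is independently correct with some probability p and that one bad step sinks the whole plan.

```python
# Toy model (my assumption, not the poster's): if each step of a plan is
# independently correct with probability p, an n-step plan survives intact
# with probability p**n. Errors compound; nothing corrects them mid-plan.
def plan_survival(p: float, n: int) -> float:
    return p ** n

for p in (0.99, 0.95, 0.90):
    print(f"p={p}: 10-step plan survives with prob {plan_survival(p, 10):.2f}")
```

Even at 95% per-step reliability, an uncorrected 10-step plan succeeds only about 60% of the time, which is roughly the horizon the post describes. The independence assumption is crude, but it shows why longer plans degrade without some external check.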

The model is not making these mistakes because it is stupid. It makes them because the structure it lives inside forces these behaviors.

I call this idea Recursive Collapse Constraints, or RCC. The point is not to replace architecture but to describe the limits of any architecture that is trapped inside a larger space.

If RCC is right, then a lot of current research is trying to patch the symptoms of a deeper mismatch that cannot be fixed by scaling alone.

I am interested in what people here think. Are we spending too much time tuning artifacts of the embedding space instead of understanding the structure underneath it?