ICBTheory | 7 months ago
I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.
ICBTheory | 7 months ago
This is the classical foundational syllogism of computationalism. In short:
1. Cognition is a physical process.
2. Any physical process can, in principle, be simulated by a computer.
3. Therefore, a computer that simulates the brain's physical behavior thereby instantiates cognition.
This seems elegant, tidy, logically sound. And it is patently false, precisely at step 3. The mistake is not technical but categorical: simulating a system's physical behavior is not the same as instantiating its cognitive function. The flaw sits in the logic itself; it is nothing less than a category error. The argument breaks exactly where a category boundary is crossed without checking whether the concept still applies. That is not inference, that is wishful thinking in formalwear. It happens when you confuse simulating a system with being the system: the whole thing hinges on the jump from simulation to instantiation.
Yes, we can simulate water. -> No, the simulation isn’t wet.
Yes, I can “simulate” a fridge. -> But if I put a beer in it myself, and the beer doesn’t come out cold after some time, then what we’ve built is a metaphor with a user interface, not a cognitive peer.
And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence, that’s a historical reenactment, starring a transformer with a good memory.
Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.
If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.
ICBTheory | 7 months ago
No.
This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.
If your system is: a) finite, b) bounded by symbols, c) built on recursive closure
…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.
That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)
In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:
- LLMs try less
- Answers get shorter, shallower
- Recursive tasks, like the Tower of Hanoi, just fall apart
- etc.
That’s IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it, one hallucination at a time.
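On the Tower of Hanoi point specifically, it's worth remembering why that task is a natural stress test: the minimal solution is 2^n - 1 moves, so the recursive structure a model has to track grows exponentially with the number of disks. A minimal sketch of the standard recursion (my illustration, not from the Apple paper):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Standard recursive Tower of Hanoi; returns the full list of moves."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # park the top n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest disk to the target peg
    hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top of it
    return moves

for n in (3, 5, 10, 15):
    print(n, "disks:", len(hanoi(n)), "moves")  # 2**n - 1, i.e. exponential growth
```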
ICBTheory | 7 months ago
It’s a fair concern. Chaitin does get thrown around too easily, usually in discussions that don’t need him.
But that’s not what’s happening here.
– Kolmogorov shows that most strings are incompressible.
– Chaitin shows that even if you find the simplest representation, you can’t prove it’s minimal.
– So any system that “discovers” a concept has no way of knowing it has found something reusable.
That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.
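For anyone who wants the first point made concrete, the standard counting argument (textbook material, not specific to the paper) goes like this: short descriptions are simply too scarce to cover all strings.

```latex
\underbrace{2^{\,n}}_{\text{strings of length } n}
\quad\text{vs.}\quad
\underbrace{\sum_{i=0}^{n-c-1} 2^{\,i} \;=\; 2^{\,n-c}-1}_{\text{programs shorter than } n-c \text{ bits}}
\;\Longrightarrow\;
\Pr\big[\,x \text{ compressible by} \ge c \text{ bits}\,\big] \;<\; 2^{-c}
```

At c = 20, fewer than one string in a million is compressible by that margin; and the Chaitin half adds that even when a short description exists, a formal system can only certify minimality up to a fixed constant.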
So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.
ICBTheory | 7 months ago
Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.
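For reference, the Wolpert–Macready (1997) formulation says, roughly, that averaged over all objective functions f, any two black-box optimizers see the same distribution of sampled cost values:

```latex
\sum_{f} P\left(d^{y}_{m} \mid f, m, a_{1}\right) \;=\; \sum_{f} P\left(d^{y}_{m} \mid f, m, a_{2}\right)
```

Here d^y_m is the sequence of cost values observed after m evaluations and a_1, a_2 are any two algorithms. The equality only holds under that sum over all f, which is exactly why it is an averaging result.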
But this paper is not at all about average-case optimization. It’s about specific classes of problems (social ambiguity, paradigm shifts, semantic recursion) where:
a) the tail exponent alpha is at or below 1, so no finite mean exists,
b) the problem description is Kolmogorov-incompressible, and
c) the symbol space lacks the needed abstraction.
In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.
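A quick way to see what condition (a) above does in practice: when the tail exponent is at or below 1, the sample mean of the quantity you are trying to learn never settles, so adding data genuinely does not converge toward anything. A minimal sketch, with arbitrary alpha values chosen purely for illustration (this is not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def running_mean(alpha, n=1_000_000):
    """Running sample mean of Pareto-tailed draws; a finite mean exists only for alpha > 1."""
    x = rng.pareto(alpha, size=n)
    return np.cumsum(x) / np.arange(1, n + 1)

for alpha in (2.5, 1.5, 0.8):
    rm = running_mean(alpha)
    print(f"alpha={alpha}: mean after 1e3={rm[999]:.2f}, 1e5={rm[99_999]:.2f}, 1e6={rm[-1]:.2f}")
```

With alpha = 2.5 the running mean stabilizes quickly; with alpha = 0.8 it keeps drifting upward however many samples you add.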
That is what “IOpenER” means: Information Opens, Entropy Rises.
It is NOT a theorem about COST… it is a structural claim about MEANING. What exactly is so hard to understand about this?
ICBTheory | 7 months ago
Sure. I redefined AGI. By using… …the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.
So unless those are now fringe newsletters, the definition stands:
- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -
If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.
ICBTheory | 7 months ago
Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.
You don’t need empirical falsification to verify this. You need mathematical framing. Period.
Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.
These are not performance limitations. They are limits of principle.
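For the halting-problem analogy, here is the standard diagonal argument (the textbook version, not the paper's own proof), which shows how such a limit of principle gets fixed without any experiment:

```latex
\text{Suppose } H(p, x) \text{ always halts and correctly reports whether program } p \text{ halts on input } x.\\
\text{Define } D(p) =
\begin{cases}
  \text{loop forever} & \text{if } H(p, p) = \text{``halts''}\\
  \text{halt}         & \text{otherwise.}
\end{cases}\\
\text{Then } D(D) \text{ halts iff } H(D, D) \text{ says it does not: contradiction, so no such } H \text{ exists.}
```

No benchmark run can overturn that; it is settled by the structure of the argument alone.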
ICBTheory | 7 months ago
Yes. Humans are finite. But we’re not symbol-bound, and we don’t wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand, then change what we were deciding halfway through.
NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)
If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t, then no, they’re not the same kind of thing.
Maybe this isn’t about scaling up? … Well, it isn’t. It’s about the fact that you can’t emulate improvisation with a bigger spreadsheet. We don’t generalize because we have all the data. We generalize because we tolerate not knowing, and still move.
But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.
Let me know how that goes