AGI Is Mathematically Impossible (3): Kolmogorov Complexity
41 points | ICBTheory | 7 months ago
General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.
Not morally, not practically. Mathematically.
The argument splits across three barriers:
1. Computability (Gödel, Turing, Rice): You can’t decide what your system can’t see.
2. Entropy (Shannon): Beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): Most real-world problems are fundamentally incompressible.
This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.
In other words: you can’t generalize from what can’t be compressed.
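A toy illustration of that intuition (mine, not the paper's), using only Python's standard zlib: a general-purpose compressor strips the structure out of regular data and finds essentially nothing to strip in random data.

    import os
    import zlib

    structured = b"the cat sat on the mat. " * 100   # 2400 bytes, highly regular
    random_bytes = os.urandom(2400)                  # 2400 bytes, no structure

    print(len(zlib.compress(structured)))    # small: the repetition compresses away
    print(len(zlib.compress(random_bytes)))  # ~2400 or more: nothing to exploit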
⸻
Here’s the abstract:
There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
This is not a performance issue. It’s a mathematical wall. And it doesn’t care how many tokens you’ve got.
The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.
https://philpapers.org/archive/SCHAII-18.pdf
Happy to read your view.
mindcrime|7 months ago
Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe. Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."
Also, Marcus Hutter already proved that AIXI[1] is a universal intelligence, whose only shortcoming is that it requires infinite compute. But the quest of the AGI project is not "universal intelligence" but simply intelligence that approximates that of humans. So I'd count AIXI as another bit of suggestive evidence that AGI is possible.
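For reference, AIXI's action rule (roughly, as in Hutter's definition) is an expectimax over all possible futures, weighted by a Solomonoff-style prior 2^{-ℓ(q)} over programs q:

    a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
          \big[ r_t + \cdots + r_m \big]
          \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The last sum ranges over every program q consistent with the history on a universal machine U, which is exactly where the infinite-compute requirement comes from.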
>> Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
So you're saying the human brain can do something infinite then?
Still, happy to give the paper a read... eventually. Unfortunately the "papers to read" pile just keeps getting taller and taller. :-(
[1]: https://en.wikipedia.org/wiki/AIXI
glimshe|7 months ago
Or.. "After Johnny read the paper humanity disappeared in a puff of logic"
seu|7 months ago
Since when is mathematics based on the laws of our physical universe? Last time I checked, it's an abstract system with no material reality.
chrsw|7 months ago
The problem I see isn't that AGI isn't possible, that's not even surprising. The problem is the term "AGI" caught on when people really meant "AHI" or artificial _human_ intelligence, which is fundamentally distinct from AGI.
AGI is difficult to define and quite likely impossible to implement. AHI is obviously implementable, but I'm unaware of any serious public research that has made significant progress towards this goal. LLMs, SSMs, and other trainable artificial systems are not oriented towards AHI and, in my opinion, are highly unlikely to achieve this goal.
ben_w|7 months ago
This poster didn't understand this response last time this was raised: https://news.ycombinator.com/item?id=44349818
kamaal|7 months ago
Using your analogy, what this means is that we have to make humans to make human-like intelligence. Not that we can make human-like intelligence outside of humans.
>>Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."
What exactly does the brain do? Part of the problem is that language itself might be insufficient to describe intelligence. And language might be working a level below our thought. There are occasions where even the best of us fail to come up with how we think. We can get close, and it's not enough. A picture is worth a thousand words. Why? Perhaps language is enough to display signs of intelligence but can't entirely contain or describe it.
Similarly, even in the case of LLMs, we have seen that showing spatial intelligence is a whole lot different from predicting text.
Heck, intelligence might not even be one monolithic thing. It could be a collection of several intelligences, and this whole idea of one grand AGI monolith could be wrong.
mindcrime|7 months ago
Exactly. There's this "thing" you see in certain circles, where people (intentionally?) misinterpret the "G" in AGI as meaning "the most general possible intelligence". But that's not the reality. AGI has pretty much always been taken to mean "AI that is approximately human level". Going beyond that is getting into the realm of Artificial Super Intelligence, or Universal Artificial Intelligence.
mkl|7 months ago
I think for your theory to hold up, you would need to show that physics cannot, even in principle, be simulated mathematically at sufficient scale (the number of interacting subatomic particles). That would be surprising.
At the moment it seems like your results contradict reality, meaning your starting assumptions cannot all be true.
al45tair|7 months ago
AGI is clearly possible, because our brains are fundamentally machines, and there’s no reason in principle why we couldn’t build something similar. Right now we don’t - as human beings - have the ability to do that, but it clearly isn’t impossible since cellular machinery is able to build it in the first place.
tom_morrow|7 months ago
Then I understood why not. Your paper proves that I am unable to understand your paper. It also proves that you are unable to understand your paper.
marvin-hansen|7 months ago
"What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models"
your thesis of Ai's lack of capacity to abstract or at least extract understanding from noisy data was largely experimentally confirmed. I am uncertain though about the exact mechanics b/c as they used LLM's, its not transparent what happened internally that lead to constant failure to abstract the concept despite ample predictive power. One interesting experiment was the introduction of the Oracle that literally enabled the LLM to solve the task that was previously impossible without the oracle, which means, at least its possible that LLM's can reconstruct known rules. They just can't find new ones.
On a more fundamental level, I am not so sure why these experiments and mathematical proofs are still being made, since Judea Pearl already established about seven years ago, in "Theoretical Impediments to Machine Learning", that all correlation-based methods are doomed because they fail to understand anything. His point about causality is well placed, but will not solve the problem either.
The question I have, though: if we ignore all existing methods for a moment, what makes you so sure that AGI is really mathematically impossible? Suppose some advancement in quantum computing made it possible to reconstruct incomplete information; does your assertion still hold true?
https://arxiv.org/abs/2507.06952
https://arxiv.org/abs/1801.04016
bsindicatr|7 months ago
Quantum entanglement?:
https://www.popularmechanics.com/science/a65368553/quantum-e...
And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”
baq|7 months ago
In practical terms, the result doesn’t matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.
jhanschoo|7 months ago
As for solving a problem "by accident": very many people make foolish decisions daily because they do not think, and some of those pan out too and lead to understanding. A resource-bounded agent can likewise maintain a notion of fuel and give a random answer when it has exhausted its fuel.
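A minimal sketch of that fuel idea (the names and interface here are made up for illustration):

    import random

    def bounded_decide(thinking_steps, candidates, fuel=10_000):
        # thinking_steps: an iterator that yields None while deliberating,
        # or an answer once it finds one. candidates: fallback answers.
        for _, answer in zip(range(fuel), thinking_steps):
            if answer is not None:
                return answer
        return random.choice(candidates)  # out of fuel: guess, don't loop forever

    never_decides = iter(lambda: None, object())  # a deliberator that never concludes
    print(bounded_decide(never_decides, ["yes", "no"], fuel=100))  # random fallback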
The structural incompleteness mentioned isn't really meaningful. Humans have not demonstrated the capacity to make epsilon-optimal decisions on an infinite number of tasks, since we do not do an infinite number of tasks anyway.
K-complexity, and resource-bounded K-complexity are indeed extremely useful tools to talk about generalization, I'd agree, but I think the author has misunderstood the limits that K-complexity places on generalization.
PeterStuer|7 months ago
Then I started to read the paper, and it's worse.
Every one of his 'examples' would be 'solved' not just by any existing LLM; even a 'dumb' system that spits out a random sentence in response to any question would pass his first two 'tests' with flying colors. I'm not kidding: he accepts "Leave the classroom and stop confusing everybody with your senseless questions" as a good solution.
In fact, the only system that would fail is this hypothetical AI he imagines that somehow gets into infinitely analyzing loops.
Then his third test, an investment decision, gives the same outcome as his own, up until the point where he draws in extra information not available to the AI. At that point he flips his 'answer', which he then labels 'correct', and labels the previous answer, based on the original information, 'false' because he made some money on the bet a few weeks later. Seriously?
int_19h|7 months ago
I would politely suggest that until you do that and then come up with a convincing rebuttal for every point they make that is not self-evidently wrong, you shouldn't be wasting humans' time.
ICBTheory|7 months ago
I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.
ICBTheory|7 months ago
This is the classical foundational syllogism of computationalism. In short:
1. The brain is a physical system.
2. Any physical system can, in principle, be simulated.
3. Therefore, a simulation of the brain would itself be intelligent.
This seems elegant, tidy, logically sound. And it is patently false, at step 3. The mistake is not technical but categorical: simulating a system’s physical behavior is not the same as instantiating its cognitive function. The flaw is in the logic; it’s nothing less than a category error. The logic breaks exactly where category boundaries are crossed without checking whether the concept still applies. That is by no means inference; it is mere wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. It’s in the jump from simulation to instantiation.
Yes, we can simulate water. -> No, the simulation isn’t wet.
Yes, I can “simulate” a fridge. -> But if I put a beer in myself, and the beer doesn’t come out cold after some time, then what we’ve built is a metaphor with a user interface, not a cognitive peer.
And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence, that’s a historical reenactment, starring a transformer with a good memory.
Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.
If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.
ICBTheory|7 months ago
No.
This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.
If your system is: a) finite, b) bounded by symbols, c) built on recursive closure
…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.
That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)
In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:
- LLMs try less
- Answers get shorter, shallower
- Recursive tasks, like the Tower of Hanoi, just fall apart
- etc.
That’s IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it, one hallucination at a time.
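For scale, the Hanoi task itself is tiny to specify; it's the solution that explodes, at 2^n - 1 moves for n disks. A minimal sketch of the standard recursion:

    def hanoi(n, src, dst, aux):
        # Move n-1 disks aside, move the largest disk, restack the rest.
        if n == 0:
            return []
        return (hanoi(n - 1, src, aux, dst)
                + [(src, dst)]
                + hanoi(n - 1, aux, dst, src))

    print(len(hanoi(10, "A", "C", "B")))  # 1023 == 2**10 - 1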
ICBTheory|7 months ago
It’s a fair concern. Chaitin does get thrown around too easily, usually in discussions that don’t need him.
But that’s not what’s happening here.
– Kolmogorov shows that most strings are incompressible. – Chaitin shows that even if you find the simplest representation, you can’t prove it’s minimal. – So any system that “discovers” a concept has no way of knowing it’s found something reusable.
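The first point is a counting fact that fits in two lines: there are 2^n strings of length n, but fewer than 2^(n-c) descriptions shorter than n - c bits, so at most a 2^-c fraction of strings can be compressed by c or more bits.

    # Upper bound on the fraction of n-bit strings compressible by >= c bits:
    # fewer than 2**(n-c) short descriptions exist for 2**n strings.
    for c in (1, 10, 20):
        print(c, 2.0 ** -c)   # 0.5, ~0.001, ~1e-6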
That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.
So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.
ICBTheory|7 months ago
Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.
But this paper is not at all about average-case optimization. It’s about specific classes of problems (social ambiguity, paradigm shifts, semantic recursion) where: a) the tail exponent satisfies α ≤ 1, so no mean exists; b) the Kolmogorov complexity is irreducible (the shortest description is the problem itself); and c) the symbol space lacks the needed abstraction.
In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.
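A numeric sketch of the α ≤ 1 point (using numpy; α = 0.9 is just an example tail exponent): with no finite mean, the running average of the samples never settles, so more data genuinely doesn't converge to anything.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 0.9                                  # tail exponent <= 1: infinite mean
    x = rng.pareto(alpha, size=1_000_000) + 1.0  # classical Pareto, support [1, inf)
    running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
    print(running_mean[999], running_mean[99_999], running_mean[999_999])
    # the running average keeps growing with n instead of stabilizing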
That is what “IOpenER” means: Information Opens, Entropy Rises.
It is NOT a theorem about COST… rather a claim about the structure of meaning. What exactly is so hard to understand about this?
ICBTheory|7 months ago
Sure. I redefined AGI. By using… the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.
So unless those are now fringe newsletters, the definition stands:
- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -
If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.
ICBTheory|7 months ago
Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.
You don’t need empirical falsification to verify this. You need mathematical framing. Period.
Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.
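The analogous diagonal argument, sketched in Python with an explicitly hypothetical decider halts():

    def halts(program, argument):
        # Hypothetical total decider for the halting problem; the
        # diagonal construction below shows it cannot exist.
        raise NotImplementedError

    def paradox(program):
        if halts(program, program):
            while True:   # predicted to halt? then loop forever
                pass
        return            # predicted to loop? then halt at once

    # If halts() were implementable, paradox(paradox) would halt
    # iff it doesn't halt: a contradiction.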
These are not performance limitations. They are limits of principle.
ICBTheory|7 months ago
Yes. Humans are finite. But we’re not symbol-bound, and we don’t wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand, then change what we were deciding halfway through.
NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)
If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t— then no, they’re not the same kind of thing.
Maybe this isn’t about scaling up? Well, it isn’t. It’s about the fact that you can’t emulate improvisation with a bigger spreadsheet. We don’t generalize because we have all the data. We generalize because we tolerate not knowing, and still move.
But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.
Let me know how that goes
calf|7 months ago
But I take that to mean there's no general, universal algorithm that can tell us anything we want to know. And that's not what intelligence is; we're not defining some kind of absolute intelligence, like an oracle for the halting problem. That definition would be a category error.
motorest|7 months ago
Also, what's the point of telling others that you believe what they are doing is impossible, especially after the results we are seeing even from the free-tier, open-to-the-public services?
mrjay42|7 months ago
https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
" The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. "
:3
anthk|7 months ago
Like eval/apply under Lisp. Or Forth.