
AGI Is Mathematically Impossible (3): Kolmogorov Complexity

41 points | ICBTheory | 7 months ago

Hi folks. This is the third part in an ongoing theory I’ve been developing over the last few years called the Infinite Choice Barrier (ICB). The core idea is simple:

General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.

Not morally, not practically. Mathematically.

The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): You can't decide what your system can't see.
2. Entropy (Shannon): Beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): Most real-world problems are fundamentally incompressible.

This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.

In other words: you can’t generalize from what can’t be compressed.
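That claim has a hands-on intuition: a general-purpose compressor is a crude stand-in for "shortest description". A minimal sketch (my illustration, not from the paper) using Python's zlib:

```python
import os
import zlib

# Structured data has a short description, so a general-purpose
# compressor shrinks it dramatically.
structured = b"abc" * 10_000          # 30,000 bytes of pattern

# Bytes from a good entropy source have no exploitable regularity;
# compression leaves them essentially the same size.
incompressible = os.urandom(30_000)   # 30,000 "random" bytes

print(len(zlib.compress(structured)))      # tiny compared to 30,000
print(len(zlib.compress(incompressible)))  # roughly 30,000
```

Kolmogorov complexity itself is uncomputable, so zlib only gives an upper bound, but the asymmetry is the point: the patterned string generalizes, the random one doesn't.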

Here’s the abstract:

There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

This is not a performance issue. It's a mathematical wall. And it doesn't care how many tokens you've got.

The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.

https://philpapers.org/archive/SCHAII-18.pdf

Happy to read your view.

80 comments


mindcrime|7 months ago

> AGI Is Mathematically Impossible

Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe. Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."

Also, Marcus Hutter already proved that AIXI[1] is a universal intelligence, whose only shortcoming is that it requires infinite compute. But the quest of the AGI project is not "universal intelligence" but simply intelligence that approximates that of us humans. So I'd count AIXI as another bit of suggestive evidence that AGI is possible.

> Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

So you're saying the human brain can do something infinite then?

Still, happy to give the paper a read... eventually. Unfortunately the "papers to read" pile just keeps getting taller and taller. :-(

[1]: https://en.wikipedia.org/wiki/AIXI
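For readers who haven't met AIXI: its core is a Solomonoff-style mixture that weights every hypothesis consistent with the data by 2^-(description length). A deliberately tiny caricature (my sketch, not Hutter's construction), with hypotheses limited to "the bit stream repeats pattern p forever":

```python
from fractions import Fraction

def pattern_hypotheses(max_len):
    # Toy hypothesis class: "the stream repeats pattern p forever",
    # with a Solomonoff-style prior of 2**-len(p): shorter = more likely.
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            yield format(i, f"0{n}b"), Fraction(1, 2 ** n)

def predict_next(observed, max_len=8):
    # Mix the predictions of every hypothesis consistent with the data,
    # weighted by the prior -- a (very) finite caricature of AIXI's
    # Solomonoff mixture.
    weight_one = weight_total = Fraction(0)
    for p, prior in pattern_hypotheses(max_len):
        stream = (p * (len(observed) // len(p) + 2))[: len(observed) + 1]
        if stream[: len(observed)] == observed:
            weight_total += prior
            if stream[len(observed)] == "1":
                weight_one += prior
    return weight_one / weight_total

print(predict_next("010101"))  # 1/23: the short pattern "01" dominates,
                               # so the next bit is almost surely 0
```

The shortest consistent pattern dominates the posterior — that Occam bias is what makes the full mixture universal; AIXI's intractability comes from enumerating all programs instead of this toy class.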

glimshe|7 months ago

This. First thing that came to my mind when I read the headline. Sounds like someone saying "Birds fly but we can't make planes because flying is mathematically impossible".

Or.. "After Johnny read the paper humanity disappeared in a puff of logic"

seu|7 months ago

> another "system based on the laws of our physical universe."

Since when is mathematics based on the laws of our physical universe? Last time I checked, it's an abstract system with no material reality.

chrsw|7 months ago

Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.

The problem I see isn't that AGI isn't possible, that's not even surprising. The problem is the term "AGI" caught on when people really meant "AHI" or artificial _human_ intelligence, which is fundamentally distinct from AGI.

AGI is difficult to define and quite likely impossible to implement. AHI is obviously implementable but I'm unaware of any serious public research that has made significant progress towards this goal. LLMs, SSMs or any other trainable artificial systems are not oriented towards AHI and, in my opinion, are highly unlikely to achieve this goal.

ben_w|7 months ago

> …the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe.

This poster didn't understand this response last time this was raised: https://news.ycombinator.com/item?id=44349818

kamaal|7 months ago

>>Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe.

Using your analogy, what this means is that we have to make humans to make human-like intelligence, not that we can make human-like intelligence outside of humans.

>>Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."

What exactly does the brain do? Part of the problem is that language itself might be insufficient to describe intelligence. And language might be working a level below our thought. There are occasions where even the best of us fail to articulate how we think. We can get close, but it's not enough. A picture is worth a thousand words. Why? Perhaps language is enough to display signs of intelligence but can't entirely contain or describe it.

Similarly, even in the case of LLMs, we have seen that showing spatial intelligence is a whole lot different from predicting text.

Heck, intelligence might not even be one monolithic thing. It could be a collection of several intelligences. And this whole idea of one grand AGI monolith could be wrong.

pama|7 months ago

I didn't read your draft paper, but your premise on HN sounds a bit off to me. AGI does not assume the ability to find or learn an optimal solution to every problem (with that assumption it would be trivial to prove it impossible in many different ways). Independent of the exact definition, a system of intelligence that is better or equal to the best human in any domain would be at least termed AGI. (If there exist a couple of incompressible problems along the way, you can memorize the human solution.) If you proved AGI impossible under such a (weaker?) definition, you would prove that humans can no longer improve in any domain (as the set of all humans is a general intelligence). Or you would need to assume that there is something special inside humans, which no technology can ever build. I disagree with both premises.

mindcrime|7 months ago

> Independent of the exact definition, a system of intelligence that is better or equal to the best human in any domain would be at least termed AGI.

Exactly. There's this "thing" you see in certain circles, where people (intentionally?) mis-interpret the "G" in AGI as meaning "the most general possible intelligence". But that's not the reality. AGI has pretty much always been taken to mean "AI that is approximately human level". Going beyond that is getting into the realm of Artificial Super Intelligence, or Universal Artificial Intelligence.

mkl|7 months ago

The physics of our brains can in principle be simulated at a subatomic quantum level mathematically, even on a classical computer. It would be absurdly expensive and slow with current technology, but it is mathematically possible. Therefore our own generally intelligent brains can be considered a counterexample.

I think for your theory to hold up, you would need to show that physics cannot, even in principle, be simulated mathematically at sufficient scale (the number of interacting subatomic particles). That would be surprising.

At the moment it seems like your results contradict reality, meaning your starting assumptions cannot all be true.

al45tair|7 months ago

And even if OP could show that physics couldn’t be simulated, it still wouldn’t follow that AGI was impossible, or even that it couldn’t be achieved by approximating the simulation that was proved to be impossible to do accurately.

AGI is clearly possible, because our brains are fundamentally machines, and there’s no reason in principle why we couldn’t build something similar. Right now we don’t - as human beings - have the ability to do that, but it clearly isn’t impossible since cellular machinery is able to build it in the first place.

tom_morrow|7 months ago

I tried to understand your paper, but could not.

Then I understood why not. Your paper proves that I am unable to understand your paper. It also proves that you are unable to understand your paper.

marvin-hansen|7 months ago

Okay, read the abstract and Intro. Recently, in the paper

"What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models"

your thesis of AI's lack of capacity to abstract, or at least extract understanding from noisy data, was largely experimentally confirmed. I am uncertain though about the exact mechanics, because as they used LLMs, it's not transparent what happened internally that led to the constant failure to abstract the concept despite ample predictive power. One interesting experiment was the introduction of an oracle that literally enabled the LLM to solve the task that was previously impossible without it, which means that at least it's possible that LLMs can reconstruct known rules. They just can't find new ones.

On a more fundamental level, I am not so sure why these experiments and mathematical proofs are still being made, since Judea Pearl already established about seven years ago in "Theoretical Impediments to Machine Learning" that all correlation-based methods are doomed because they fail to understand anything. His point about causality is well placed, but will not solve the problem either.

The question I have, though: if we ignore all existing methods for one moment, what makes you so sure that AGI is really mathematically impossible? Suppose some advancement in quantum computing would allow us to reconstruct incomplete information; does your assertion still hold true?

https://arxiv.org/abs/2507.06952 https://arxiv.org/abs/1801.04016

Tuna-Fish|7 months ago

How is your brain doing it then?

bsindicatr|7 months ago

> How is your brain doing it then?

Quantum entanglement?:

https://www.popularmechanics.com/science/a65368553/quantum-e...

And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”

automatic6131|7 months ago

I could believe we're not generally intelligent.

00deadbeef|7 months ago

My brain isn’t artificial. I hope.

he0001|7 months ago

Wouldn’t it be possible that not all brains can do it all, but some can specialize in certain problems. But when combined with everyone else’s we can approach general intelligence?

DragonStrength|7 months ago

The "A" stands for "Artificial" in contrast to what our brains do.

stogot|7 months ago

Doing artificial?

baq|7 months ago

It follows that it doesn’t.

In practical terms, the result doesn't matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.

jhanschoo|7 months ago

> This is cognition at its weirdest: solving problems somewhat by accident, finding answers in the wrong place, connecting dots that aren’t even in the same picture.

As for solving problems "by accident": very many people make foolish decisions daily because they do not think, and some of those pan out too and lead to understanding. A resource-bounded agent can also maintain a notion of fuel and give a random answer when it has exhausted its fuel.
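That fuel idea is concrete enough to sketch (helper names are mine, purely illustrative): bound the search, and when the budget runs out, answer anyway.

```python
import random

def bounded_search(candidates, is_solution, fuel=1000):
    # Resource-bounded agent: examine candidates until the fuel budget
    # is exhausted, then fall back to a random guess rather than loop
    # forever -- the "anytime" behaviour described above.
    for _ in range(fuel):
        try:
            c = next(candidates)
        except StopIteration:
            break
        if is_solution(c):
            return c, "solved"
    return random.choice(["yes", "no"]), "guessed"

# Example: look for a nontrivial divisor of 91 within the budget.
answer, status = bounded_search(iter(range(2, 10**6)),
                                lambda d: 91 % d == 0, fuel=100)
print(answer, status)  # 7 solved
```

The point is only that undecidability in the limit doesn't forbid useful bounded behaviour; humans arguably run on exactly this kind of budget.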

The structural incompleteness mentioned isn't really meaningful. Humans have not demonstrated the capacity to make epsilon-optimal decisions on an infinite number of tasks, since we do not do an infinite number of tasks anyway.

K-complexity, and resource-bounded K-complexity are indeed extremely useful tools to talk about generalization, I'd agree, but I think the author has misunderstood the limits that K-complexity places on generalization.

PeterStuer|7 months ago

At first I was thinking, let's see if an argument is made that is not applicable to GI, whether artificial or not, and if not, why even mention AI at all?

Then I started to read the paper, and it's worse.

Every one of his 'examples' would not just be 'solved' by any existing LLM, even a 'dumb' system that just spits out a random sentence to any question would pass his first 2 'tests' with flying colors. I'm not kidding, he accepts "Leave the classroom and stop confusing everybody with your senseless questions" as a good solution.

In fact, the only system that would fail is this hypothetical AI he imagines that somehow gets into infinitely analyzing loops.

Then his 3rd test, an investment decision, gives the same outcome as his own, up until the point where he draws in extra information not available to the AI. After that he flips his "answer", which he then labels as "correct" (and the previous answer, based on the original info, as "false") because he made some money on the bet a few weeks later. Seriously?

int_19h|7 months ago

Feeding your papers to any SOTA LLM is a quick way to expose all the logical holes and omissions in them.

I would politely suggest that until you do that and then come up with a convincing rebuttal for every point they make that is not self-evidently wrong, you shouldn't be wasting humans' time.

rotten|7 months ago

The human brain does not have perfect memory. It is not always logical. And more often than not it is motivated and influenced by "external" forces - health, hunger, sex drive, environmental conditions, luck, spiritual inspiration, or whatever. The perfect worker is purely logical and has perfect memory and no external influences - never gets hungry or sick or wants to be the boss themselves. The AI race is funded by folks interested in creating the perfect worker, not a human. I have to agree with the conclusions of this paper that they won't be able to make humans. (But they don't really want to.) The Vatican has also published interesting works on this idea. The question is - if you take out everything that makes it human, can you call it intelligent?

effed3|7 months ago

Probably every intelligence has its limits, as every system (e.g. mathematics, remembering Gödel) has its own. This kind of AGI seems like a deity, hard to believe it's possible, but in practice many kinds of "smaller" intelligences exist (from ants to primates), less "general" but enough to solve enough problems to live and evolve, and maybe they can even be created by other intelligences. IMHO it's reasonable to think of a real intelligence as a property of complex evolving systems interacting with a complex environment, so to live in a complex world a not-so-general intelligence can be enough, even given some limits and errors.

ICBTheory|7 months ago

Hey all, apologies for the delayed response. I was on a flight, then had guests, then had to make some rapid decisions involving actual real-world complexity (the kind that is not easily tokenized).

I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.

ICBTheory|7 months ago

1. On “The brain obeys physics, physics is computable—so AGI must be possible”

This is the classical foundational syllogism of computationalism. In short:

1. The brain obeys the laws of physics.
2. The laws of physics are (in principle) computable.
3. Therefore, the brain is computable.
4. Therefore, human-level general intelligence is computable, and AGI is inevitable, merely a question of time, power and compute.
This seems elegant, tidy, logically sound. And: it is patently false, at step 3. This common mistake is not technical, but categorical: simulating a system's physical behavior is not the same as instantiating its cognitive function.

The flaw is in the logic; it's nothing less than a category error. The logic breaks exactly where category boundaries are crossed without checking whether the concept still applies. That is by no means inference; it is mere wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. It's in the jump from simulation to instantiation.

Yes, we can simulate water. -> No, the simulation isn’t wet.

Yes, I can "simulate" a fridge. -> But if I put a beer in it myself, and the beer doesn't come out cold after some time, then what we've built is a metaphor with a user interface, not a cognitive peer.

And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence, that’s a historical reenactment, starring a transformer with a good memory.

Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.

If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.

ICBTheory|7 months ago

6. On “This is just a critique of current models—not AGI itself”

No.

This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.

If your system is: a) finite, b) bounded by symbols, c) built on recursive closure

…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.

That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)

In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:

- LLMs try less
- Answers get shorter, shallower
- Recursive tasks, like the Tower of Hanoi, just fall apart
- etc.

That's IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it, one hallucination at a time.
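[For context: the Tower of Hanoi task cited there has an exact recursive solution a first-year student can write; the reported finding is that models degrade on it, not that it is intrinsically hard. A minimal version, my sketch:]

```python
def hanoi(n, src="A", aux="B", dst="C"):
    # Move n-1 disks out of the way, move the largest disk,
    # then move the n-1 disks back on top: 2**n - 1 moves total.
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

print(len(hanoi(3)))  # 7 moves, i.e. 2**3 - 1
```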

ICBTheory|7 months ago

5. On “Kolmogorov and Chaitin are misused”

It's a fair concern. Chaitin does get thrown around too easily, usually in discussions that don't need him.

But that’s not what’s happening here.

– Kolmogorov shows that most strings are incompressible. – Chaitin shows that even if you find the simplest representation, you can’t prove it’s minimal. – So any system that “discovers” a concept has no way of knowing it’s found something reusable.
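The first point is a counting argument, easy to verify directly (the concrete numbers are mine, for illustration):

```python
# Counting argument behind "most strings are incompressible": there are
# 2**n binary strings of length n, but fewer than 2**(n - c) descriptions
# shorter than n - c bits, so less than a 2**-c fraction of all strings
# can be compressed by c or more bits.
n, c = 32, 8
total_strings = 2 ** n
short_descriptions = 2 ** (n - c) - 1   # sum of 2**i for i < n - c
fraction_compressible = short_descriptions / total_strings
print(fraction_compressible)            # just under 2**-8, about 0.39%
```

So at most one string in 256 of length 32 can be shortened by even a single byte, no matter how clever the encoding.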

That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.

So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.

ICBTheory|7 months ago

4. On “This is just the No Free Lunch Theorem again”

Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.

But this paper is not at all about average-case optimization. It's about specific classes of problems (social ambiguity, paradigm shifts, semantic recursion) where: a) the tail exponent alpha is <= 1, so no mean exists, b) the Kolmogorov complexity is incompressible, and c) the symbol space lacks the needed abstraction.

In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.
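The alpha <= 1 condition in (a) is checkable in a few lines (my sketch, not from the paper): for a Pareto law with tail exponent 1, the mean diverges, so sample averages never settle.

```python
import random

random.seed(0)

def pareto_alpha1(n):
    # Pareto draws with tail exponent alpha = 1 and x_min = 1 via
    # inverse-CDF sampling: X = 1/U for U uniform on (0, 1).
    # E[X] diverges.
    return [1.0 / random.random() for _ in range(n)]

# The sample mean keeps drifting upward (roughly like log n) instead of
# converging: occasional enormous draws dominate the sum.
for n in (10**2, 10**4, 10**6):
    xs = pareto_alpha1(n)
    print(n, sum(xs) / n)
```

Compare a finite-mean distribution, where the same experiment flattens out by the law of large numbers; that flattening is exactly what fails here.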

That is what “IOpenER” means: Information Opens, Entropy Rises.

It is NOT a theorem about cost; it is a statement about the structure of meaning. What exactly is so hard to understand about this?

ICBTheory|7 months ago

3. On “He redefines AGI to make his result inevitable”

Sure. I redefined AGI. By using… …the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.

So unless those are now fringe newsletters, the definition stands:

- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -

If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.

ICBTheory|7 months ago

2. On “This is just philosophy with no testability”

Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.

You don’t need empirical falsification to verify this. You need mathematical framing. Period.

Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.

These are not performance limitations. They are limits of principle.

ICBTheory|7 months ago

And finally 7. On “But humans are finite too—so why not replicable?”

Yes. Humans are finite. But we're not symbol-bound, and we don't wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand, then change what we were deciding halfway through.

NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)

If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t— then no, they’re not the same kind of thing.

Maybe this isn't about scaling up? … Well, it isn't. It's about the fact that you can't emulate improvisation with a bigger spreadsheet. We don't generalize because we have all the data. We generalize because we tolerate not knowing, and still move.

But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.

Let me know how that goes.

calf|7 months ago

I'll bite: there was a Kurt Jaimungal interview yesterday explaining that the Navier-Stokes fluid equations are not only unpredictable (chaotic) but also uncomputable in the Turing sense (if I recall correctly).

But I take that to mean there's no general, universal algorithm to tell us anything we want to know. But that's not what intelligence is, we're not defining some kind of absolute intelligence like an oracle for the halting problem. That definition would be a category error.

motorest|7 months ago

I think that any paper that argues something is impossible is fundamentally flawed, particularly when there are examples of it being possible.

Also, what's the point of telling others you believe what they are doing is impossible, especially after the results we are seeing even at the free-tier, open-to-the-public services?

mrjay42|7 months ago

You might want to check out the works of that buzzkill that Gödel is ^^

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

"The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.

The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency."

:3

Veen|7 months ago

What examples are there of the possibility of artificial general intelligence?

anthk|7 months ago

Consciousness = intrinsic information evaluating itself.

Like eval/apply under Lisp. Or Forth.

xbmcuser|7 months ago

My pet theory is that AGI is not possible until we have real quantum computing.

geldedus|7 months ago

there is a thing called quantum computing. So nope.