People on the bbchallenge Discord server are keen to speculate on how many Turing Machine states are needed to surpass Graham's Number, which is vastly larger than the 2^^2^^2^^9 achieved by the latest BB(6) champion.
We know from the functional busy beaver [1] that Graham behaviour can come surprisingly early; a 49-bit lambda term suffices. There are only 77,519,927,606 closed lambda terms of at most that size [2], compared to 4^12 × 23,836,540 = 399,910,780,272,640 unique 6-state Turing Machines [3].
With the achievement of pentation in only 6 states, several people now believe that 7 states should suffice to surpass Graham's. I would still find that rather surprising.
A few days ago, I made a large bet with one of them on whether we would see a proof of BB(7) > Graham's within the next 10 years. What do people here think?
I can't pretend to be an expert, but I'll argue BB(7) is probably larger than Graham's number.
BB has to grow faster than any computable sequence. What exactly that means concretely for BB(7) is nothing more than handwaving, but it roughly means BB needs to walk up the "operator strength" ladder very quickly: it eventually needs to grow faster than any computable operator we define (including, for example, up-arrow^n, and up-arrow^f(n) for any computable f).
My gut feeling is that the growth between 47 million and 2^^2^^2^^9 is qualitatively larger than the growth between 2^^2^^2^^9 and Graham's number, in terms of how strong an operator we need (Graham's number being g_64, with g roughly one step "above" up-arrow^n). So probably we should have BB(7) > Graham's number.
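For readers unfamiliar with the notation above: 2^^2^^2^^9 is tetration (↑↑), one rung on Knuth's up-arrow ladder, and Graham's g-sequence iterates the number of arrows itself. A toy Python sketch of the up-arrow recursion (function name `up` is my own; only tiny inputs terminate):

```python
def up(a, n, b):
    """Knuth's a ↑^n b: n=1 is plain exponentiation, and each extra
    arrow iterates the previous operator. Only tiny inputs terminate."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # a ↑^n 0 = 1 by convention
    return up(a, n - 1, up(a, n, b - 1))

# 2↑↑3 = 2^(2^2) = 16, and 2↑↑↑2 = 2↑↑2 = 4.
# Graham's g_1 = 3↑↑↑↑3 = up(3, 4, 3), g_{k+1} = up(3, g_k, 3),
# and Graham's number is g_64 -- far beyond anything computable here.
```

The recursion makes the "operator strength" point concrete: each extra arrow applies the previous operator b times, which is why a single step up the ladder dwarfs any fixed amount of exponentiation.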
It boggles my mind that a number (an uncomputable number, granted) like BB(748) can be "independent of ZFC". It feels like a category error or something.
What makes BB(748) independent of ZFC is not its value, but the fact that one of the 748-state machines (call it TM_ZFC_INC) looks for an inconsistency (proof of FALSE) in ZFC and only halts upon finding one.
Thus, any proof that BB(748) = N must show that TM_ZFC_INC either halts within N steps or never halts. By Gödel's famous results, neither case can be proved in ZFC if ZFC is assumed to be consistent.
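The shape of such a machine can be sketched in a few lines. Here `is_proof_of_false` is a hypothetical stand-in for a real ZFC proof checker, and the search is artificially bounded so the demo terminates; the real TM_ZFC_INC has no bound and simply never halts if ZFC is consistent:

```python
from itertools import product

def search_for_inconsistency(is_proof_of_false, alphabet, max_len):
    """Enumerate candidate proofs in length order; halt only when a
    valid proof of FALSE is found. The real machine would loop forever
    instead of returning None."""
    for n in range(1, max_len + 1):
        for chars in product(alphabet, repeat=n):
            candidate = "".join(chars)
            if is_proof_of_false(candidate):
                return candidate  # halt: inconsistency found

# A stand-in checker that never accepts, mimicking a consistent theory:
result = search_for_inconsistency(lambda s: False, "ab", 4)
```

Proving this loop never finds anything is exactly proving Con(ZFC), which ZFC itself cannot do.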
> It boggles my mind that a number (an uncomputable number, granted) like BB(748) can be "independent of ZFC".
It's BB(n) that is incomputable (that is, there's no algorithm that outputs the value of BB(n) for arbitrary n).
BB(748) is computable. It is, by definition, the number of ones written by some particular Turing machine with 748 states. That is, this machine computes BB(748).
> It feels like a category error or something.
The number itself is just a literally unimaginably large number. Independence of ZFC comes in when we try to prove that this number is the number we seek. And to do that you need a theory more powerful than ZFC to capture the properties of a Turing machine with 748 states.
It boggles my mind that we ever thought a small amount of text that fits comfortably on a napkin (the axioms of ZFC) would be “good enough” to capture the arithmetic truths or approximate those aspects of physical reality that are primarily relevant to the endeavors of humanity. That the behavior of a six-state Turing machine might be unpredictable via a few lines of text does not surprise me in the slightest.
As soon as Gödel published his first incompleteness theorem, I would have thought the entire field of mathematics would have gone full throttle on trying to find more axioms. Instead, over the almost century since then, Gödel’s work has been treated more as an odd fact largely confined to niche foundational studies rather than any sort of mainstream program (I’m aware of Feferman, Friedman, etc., but my point is there is significantly less research in this area compared to most other topics in mathematics).
No individual number is uncomputable. There’s no pair of a number and a ZFC proof that that number is the value of BB(748), and so there’s no program that ZFC proves to output the value of BB(748). But there is a program that outputs BB(748), just like for any other number.
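To make that distinction concrete: for every fixed N there is a trivial constant program that outputs it, so no individual number is uncomputable; what's missing is a ZFC proof picking out which constant program matches BB(748). A minimal sketch (the 42 is a placeholder, not a claim about the real value):

```python
def constant_program(n):
    """The trivial program 'output n'. One of these outputs BB(748);
    ZFC just cannot prove which one it is."""
    return lambda: n

f = constant_program(42)  # placeholder for the unknown value of BB(748)
```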
Let X = "1 if ZF is consistent, 0 otherwise". Then the statements "X = 0" and "X = 1" are independent of ZF. Whether the definition of X is a satisfactory definition of a particular number is a question of mathematical philosophy.
BB(748) is very similar, in that I'd call it a 'definition' independent of ZF rather than a 'number' independent of ZF.
The truth value of the continuum hypothesis is either 1 or 0 (at least from a platonistic perspective). But, it is proven to be independent of ZFC. No huge numbers involved, just a single bit whose value ZFC doesn't tell you.
Many replies don't seem to understand Gödel and independence (and one that might is heavily downvoted). Cliff notes:
* ZFC is a set of axioms. A "model" is a structure that respects the axioms.
* By Gödel's completeness theorem, we know that ZFC proves a statement if and only if the statement is true in all models of ZFC.
* Therefore, the statement "BB(748) is independent of ZFC" is the same as the statement "There are two different models of ZFC in which BB(748) is two different numbers."
* We can take one of these to be the "standard model"[1] that we all think of when we picture a Turing Machine. However, the other would be a strange "non-standard" model that includes finite "natural numbers" that are not in the set {0,1,2,3,...} and it includes Turing Machines that halt in "finite" time that we would not say halt at all in the standard model.
* So BB(748) is indeed a number as far as the standard model is concerned, the problem only comes from non-standard models.
TL;DR this is more about the fact that ZFC axioms allow weird models of Turing Machines that don't match how we think Turing Machines usually work.
It's known that BB(14) is bigger than Graham's number, but this new finding leads me to believe that BB(7) is probably bigger than Graham's number. Intuitively, the technology required to go from pentation to Graham's number feels simpler than the technology required to go from `47,176,870` to `2 <pentate> 5`.
> So I said, imagine you had 10↑↑10,000,000 grains of sand. Then you could … well, uh … you could fill about 10↑↑10,000,000 copies of the observable universe with that sand.
I don't get this part. Is it really rounding away the volume of the observable universe divided by the average volume of a grain of sand? That is many more orders of magnitude than the amount of mass in the universe, which is a more usual comparison.
Yes, that's right; dividing by that ratio barely affects the number, in the sense that 'adjacent' numbers in that notation differ by vastly more.
10↑↑10,000,000 / (sand grains per universe) is vastly larger than, say, 10↑↑9,999,999
So in the system we're using to write these numbers, there's really no better way to write (very big) / (only universally big) than by writing exactly that, and in the notation for very big, it pretty much rounds to just (very big).
Exactly. This number is so much bigger than 10^100,000 (or however many grains of sand would fit) that dividing by that amount doesn’t really change it, certainly not enough to bring it down closer to 10↑↑9,999,999.
Yes, that's only an ordinary number of orders of magnitude. Even 10,000,000^10,000,000 is already so large that it doesn't matter, let alone after exponentiating the exponent nine more times.
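The rounding effect is easy to check one level down the tower by working with base-10 logarithms; here 10↑↑3 = 10^(10^10) stands in for the far larger numbers above (the ~10^90 grains-per-universe figure is a rough assumption):

```python
# log10 of 10↑↑3 = 10^(10^10) is 10^10, a ten-billion-digit exponent.
log10_big = 10.0 ** 10
# Dividing the number by ~10^90 grains of sand per observable universe
# merely subtracts 90 from that exponent:
log10_after = log10_big - 90
relative_change = 90 / log10_big  # ~9e-9, utterly negligible
# By contrast, stepping down one tower level to 10↑↑2 = 10^10
# collapses the exponent from 10^10 all the way to 10.
```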
While that question depends on what you count as an 'enumeration', there's the related question of "What's the richest logic that cannot prove the halting status of all 5-state TMs?" That is, what's the richest logic that some 5-state TM's halting status is independent of?
I've pondered that version of the question a bit, but I couldn't get very far due to my lack of expertise in first-order logic. What I do know is that Skelet #17 [0] is one of the toughest machines to prove non-halting on a mathematical level [1], so any theory sufficient to prove that Skelet #17 doesn't halt is likely sufficient to decide the rest of the 5-state machines.
> For those tuning in from home, here BB(6) is the 6th Busy Beaver number, i.e. the maximum number of steps that a 6-state Turing machine with a {0,1} alphabet can take before halting, when run on an initially all-0 input tape.
Oh! Of course! That sure clears things up for this non-expert. This is clearly a hardcore blog for people who have been doing this kind of research for decades. Kind of awesome to stumble upon something so unapologetically dense and jargony and written for a very specific audience!
That should be enough for someone with an undergrad CS education to at least get a sense of what's going on if they haven't encountered the busy beaver problem before.
Is it niche jargon? Absolutely. But to say it's only accessible to people who have put in decades is selling yourself short.
> imagine you had 10↑↑10,000,000 grains of sand. Then you could … well, uh … you could fill about 10↑↑10,000,000 copies of the observable universe with that sand. I hope that helps people visualize it!
People can't visualize numbers that big. There are more ways to express numbers than just counting them. For example, a single grain of sand has infinitely many states it can be in (there are infinitely many real numbers), so you could say a single grain of sand could represent BB(6). Combinations can grow exponentially, so that may be something useful for trying to express it.
At some point big numbers become much more about the consistency strength of formal systems than “large quantities”.
I.e., how long can an inconsistent system fake being consistent before that fact is discovered? An inconsistent system faking consistency via BB(3) will be “found out” much quicker than one faking consistency via BB(6). (What I mean by faking consistency is claiming that all programs that run longer than BB(n) steps for some n never halt.)
I'm confused about this example, isn't the count of grains of sand equal to the count of observable universes so it'd be a single grain of sand per universe?
If you treat the observable universe as a closed system, you could try to apply the Bekenstein bound using
- R ≈ 46.5 billion light-years (radius of the observable universe)
- E ≈ total mass-energy content of the observable universe
The mass-energy includes ordinary matter, dark matter, and dark energy. Current estimates suggest the observable universe contains roughly 10^53 kg of mass-energy equivalent.
Plugging these into S ≤ 2πER/ℏc gives something on the order of 10^124 bits of maximum information content.
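As a sanity check on that arithmetic (a sketch using the thread's rough figures; E here is an assumed total mass-energy of ~3.0e71 J, somewhat larger than 10^53 kg × c² since it folds in the dark components):

```python
import math

# Bekenstein bound S <= 2*pi*E*R / (hbar*c), in natural-log units (nats).
hbar = 1.055e-34   # J*s, reduced Planck constant
c = 299_792_458    # m/s, speed of light
R = 4.399e26       # m, ~46.5 billion light-years
E = 3.036e71       # J, rough total mass-energy (assumed figure)

S_nats = 2 * math.pi * E * R / (hbar * c)
S_bits = S_nats / math.log(2)
# Both land around 10^124: astronomically large, yet utterly negligible
# next to tetrational numbers like 10↑↑10,000,000.
```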
It definitely isn't. The amount of information you can store in the universe is something like 10^120 bits. Even if I'm off by a trillion orders of magnitude it doesn't matter.
You’re probably referring to a state where all parts of the complete representation exist at the same time. Because if they don’t have to exist at the same time, then it might be possible to “write it down” if the universe has unbounded duration (“might” because I don’t know how the heat death plays into that). However, “at the same time” isn’t well-defined in relativistic spacetime. The sibling comments are definitely right with respect to the reference frame implied by the CMB. But I’m wondering whether it would be possible to slice spacetime in a way that actually makes a representation possible “at the same time” in some reference frame.
Any time I see such results from computational complexity theory, I realize that any current zeitgeist of "super-intelligent AI are gods" is complete bullshit.
You can convert every atom of the observable Universe into substrate for a supercomputer, and harness the energies of supermassive black holes to power it, but running a humble BB(6) machine to its halting state would be forever out of its reach.
[1] https://oeis.org/A333479
[2] https://oeis.org/A114852
[3] https://oeis.org/A107668
[1] https://en.wikipedia.org/wiki/Non-standard_model_of_arithmet...
I thought it was a typo. First time I've encountered tetration.
In significant figures, 1.0 billion minus 1.0 million equals 1.0 billion.
Recently on HN (couple of months ago): https://news.ycombinator.com/item?id=43776477
[0] https://bbchallenge.org/1RB---_0LC1RE_0LD1LC_1RA1LB_0RB0RA
[1] https://arxiv.org/abs/2407.02426
Using infinite precision to make things seem tractable is sleight of hand in my book. Stick with integers when you're describing scale.
S ≤ 2πER/ℏc
S ≤ (2 × 3.141593 × 3.036e+71 × 4.399e+26) / (1.055e-34 × 299792458)
S ≤ 2.654e+124
That is, S is on the order of 10^124.
So, no.
Unlike Aaronson, he actually is at the forefront of Busy Beaver research, and is one of the people behind the https://bbchallenge.org website
Extremely bad ad hominem, I enjoyed Aaronson's read, nothing wrong with it.