
Proof Claimed for Deep Connection between Prime Numbers

394 points | shashashasha | 13 years ago | nature.com | reply

89 comments

[+] impendia|13 years ago|reply
I personally know Brian Conrad (quoted in the article). He has an encyclopedic knowledge of algebraic number theory and algebraic geometry; I would say he's among the top ten people worldwide who could reasonably assess whether Mochizuki's proof is correct. And he has a great nose for bullshit, and little patience for it.

He is taking this claim seriously. He doesn't necessarily believe it's correct (presumably he's a bit careful in what he says to the press), but he seems to think it has a shot, and is worth paying attention to.

[+] TallboyOne|13 years ago|reply
I think it's fascinating there are only ~10 people on this earth 'smart enough' to get the proof. That is seriously some next-level stuff going on...
[+] tlb|13 years ago|reply
Changed to the Nature article that this is a repost of. The SA article had borked formatting, losing superscripts.
[+] shashashasha|13 years ago|reply
Thanks, apologies for the poorly formatted link!
[+] colanderman|13 years ago|reply
It states that for integers a+b=c, the ratio of sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c = 900/128. In this case, in which r=2, sqp(abc)^r/c is nearly always greater than 1, and always greater than zero.

Obviously I'm reading this wrong -- because as stated (and assuming that a, b, and c are positive integers) this seems trivially true -- sqp(abc) cannot be zero, r cannot be negative, and c is finite, so therefore sqp(abc)^r/c is greater than zero, QED.

Does Nature mean that the quantity does not approach zero as r tends to infinity (or some such)? Their example sure doesn't seem to indicate such.

[+] ipince|13 years ago|reply
Yes, sqp(abc)^r/c is greater than 0. But what we are saying here is that it is _bounded away from zero_: some single positive lower bound works for every such triple at once.

The formulation in the article is a little confusing/non-intuitive. A perhaps more 'layman' explanation is that for every r > 1, there are only finitely many coprime tuples (a,b,c) such that a+b = c and c > sqp(abc)^r. In other words, in "most" cases, sqp(abc)^r is at least as large as c!

This is equivalent to the definition in the article. Why? Well, given a fixed r, there are finitely many tuples that satisfy c > sqp(abc)^r. Since there are finitely many, we can pick a constant K > 0 such that c < K*sqp(abc)^r for ALL tuples (a,b,c). Thus:

sqp(abc)^r/c > 1/K > 0, as the article mentions.
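
To make that concrete, here's a small Python sketch (my own illustration; I'm taking sqp(n) to be the radical of n, the product of its distinct prime factors, per the article's definition) that reproduces the a=3, b=125 example:

    def sqp(n):
        """Radical of n: the product of its distinct prime factors."""
        r, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                r *= p
                while n % p == 0:   # strip repeated factors of p
                    n //= p
            p += 1
        return r * n if n > 1 else r  # any leftover n > 1 is itself prime

    a, b = 3, 125
    c = a + b                        # 128
    print(sqp(a * b * c))            # 30, since abc = 48000 = 2^7 * 3 * 5^3
    print(sqp(a * b * c) ** 2 / c)   # 900/128 = 7.03125, well above zero

The point of the conjecture is that once r > 1, no sequence of triples can drive that last quantity down toward zero.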

[+] magicalist|13 years ago|reply
This is the abc conjecture[1] (I was thinking maybe they were being headline-y about the Riemann hypothesis before I clicked through), which, if proved, would be extremely interesting due to the number of conjectures and other theorems that have been shown to be equivalent to it or direct consequences of it.

The proof will be well beyond me, but the conjecture itself is pretty accessible, as are many of its connections.

This line from the article was confusing:

> Fifteen and 17 are square free-numbers, but 16 and 18 — being divisible by 42 and 32, respectively — are not.

but that's supposed to be 4^2 and 3^2, respectively.
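
For what it's worth, here's a quick way to check the corrected sentence (my own snippet; a number is square-free exactly when no prime squared divides it):

    def is_square_free(n):
        """True iff no perfect square > 1 divides n."""
        p = 2
        while p * p <= n:
            if n % (p * p) == 0:
                return False
            p += 1
        return True

    print([n for n in (15, 16, 17, 18) if is_square_free(n)])  # [15, 17]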

[1] http://en.wikipedia.org/wiki/Abc_conjecture

[+] kzahel|13 years ago|reply
Also the statement for Fermat's Last Theorem isn't showing the exponents correctly (should be a^n+b^n=c^n)
[+] paulrademacher|13 years ago|reply
Also:

> It states that for integers a+b=c, the ratio of sqp(abc)r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)2/c = 900/128. In this case, in which r=2, sqp(abc)r/c is nearly always greater than 1, and always greater than zero

should be:

It states that for integers a+b=c, the ratio of sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c = 900/128. In this case, in which r=2, sqp(abc)^r/c is nearly always greater than 1, and always greater than zero

[+] mmaunder|13 years ago|reply
Any implications for RSA and other algorithms? [RSA relies on the hardness of factoring a product of two large primes]
[+] alokm|13 years ago|reply
The claimed proof doesn't directly help us factor large integers into primes. At least not for now. It has more to do with the "structure" of primes.
[+] gwillen|13 years ago|reply
Warning: This article is mangled by the flattening of all the superscripts in it. Given that it's about powers, the math is all but unreadable. You have been warned. :-)
[+] tlrobinson|13 years ago|reply
Warning: this comment is out of date, the Nature article has correct formatting :)
[+] dudus|13 years ago|reply
From Wikipedia:

> Mochizuki entered Princeton University at age 16 and received a Ph.D. under the supervision of Gerd Faltings at age 22. In 2002, he became a professor at the Research Institute for Mathematical Sciences at Kyoto University.

Very impressive.

[+] cecilpl|13 years ago|reply
I found the formulation of the theorem difficult to digest, but the Wikipedia version was much clearer:

For every ε > 0, are there only finitely many triples of coprime positive integers with a + b = c such that c > d^(1+ε), where d denotes the product of the distinct prime factors of abc?
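
As a concrete illustration (my own quick brute-force search, not from the article or the thread), the "abc hits" -- triples where c actually exceeds d -- are strikingly rare, and the 3 + 125 = 128 example from the article ranks among the best small ones:

    from math import gcd, log

    N = 2000
    # Sieve of radicals: rad[n] = product of the distinct primes dividing n.
    rad = [1] * (N + 1)
    for p in range(2, N + 1):
        if rad[p] == 1:              # nothing recorded yet, so p is prime
            for m in range(p, N + 1, p):
                rad[m] *= p

    hits = []
    for c in range(3, N + 1):
        for a in range(1, c // 2 + 1):
            b = c - a
            if gcd(a, b) == 1:       # a + b = c and gcd(a,b)=1 => pairwise coprime
                d = rad[a] * rad[b] * rad[c]   # equals rad(abc) by coprimality
                if c > d:            # an "abc hit"
                    hits.append((a, b, c, log(c) / log(d)))

    print(len(hits), "hits among all coprime triples with c <=", N)
    for a, b, c, q in sorted(hits, key=lambda t: -t[3])[:5]:
        print(f"{a} + {b} = {c}, quality {q:.3f}")

The sieve trick works because for coprime a + b = c the radical of abc factors as rad(a)*rad(b)*rad(c). The conjecture says that for any exponent strictly above 1 on d, only finitely many hits survive, no matter how far you extend the search.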

[+] ColinWright|13 years ago|reply
Link to a clear statement of the question, expressed as a game: http://news.ycombinator.com/item?id=4505264

This is a technique John Horton Conway uses a lot: expressing conjectures as games, where two (or more) players take turns trying to make something happen, or to stop it from happening.

Useful idea.

[+] dude_abides|13 years ago|reply
Relevant meta-commentary on the proof by Marty Weissman and Minhyong Kim:

http://mathoverflow.net/questions/106560/philosophy-behind-m...

[+] bugsbunnyak|13 years ago|reply
One way to deal with an overzealous mod:

Such a great answer proves that the question was worth asking. – Olivier 2 days ago

@Olivier: while I agree with your comment regarding the answer, I would like to add that your inference regarding this particular question, to me, comes close to saying: the great and heroic work of the fire-fighters proves that it was worth setting the house on fire. – quid 2 days ago

On MathOverflow, though, it is possible to edit the original house so that it is no longer inflammatory, while still recording the work of the firefighter. – Terry Tao 2 days ago

[edit: formatting]

[+] ricardobeat|13 years ago|reply
Those answers read like Klingon to me :/
[+] bvaldivielso|13 years ago|reply
OK, I've read [1] that this guy has developed a new set of mathematical objects and techniques which (almost) only he understands.

That means that if someone wants to check if the proof is right, he'll first have to learn how to use all these objects.

I guess we won't have a verified proof for some years.

[1] [SPA] http://gaussianos.com/posible-demostracion-de-la-veracidad-d...

[+] saalweachter|13 years ago|reply
I don't expect the new maths to cause a delay from learning so much as from verifying.

In some sense, proof-checking is a trivial linear-time operation: you check that each step follows from the last, and if they all do, the proof is correct. Proof checking is something that computers do well, except that describing a proof to a computer is usually non-trivial.
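
As a toy illustration of that first point (my own example, in Lean 4, and obviously nothing like the scale of the real thing): a proof assistant accepts a theorem only when every step checks mechanically, and any gap has to be flagged explicitly.

    -- Lean checks each inference mechanically; `rfl` closes the goal
    -- only because both sides compute to the same value.
    theorem tiny : 2 + 2 = 4 := rfl

    -- A gap must be marked with `sorry`; Lean still processes the file
    -- but flags the theorem as unverified, the formal analogue of a
    -- referee saying "I don't see how this step follows."
    theorem gap : 1 + 1 = 2 := sorry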

Where proof-checkers -- human and otherwise -- will get hung up, especially with the new maths, is that it may not be obvious how each step follows from the last.

What will happen is something a bit interactive: if the proof checker cannot intuitively see the logic in one step of the proof, he may first try to prove the lemma himself. If he can't, he'll go back to the original author of the proof and say, "Hey, I don't see how you go from step 518 to step 519. Can you prove it?" Sometimes the author easily can; he just didn't elaborate on that step in the proof because it was so obvious to him. Sometimes he can't; either in trying to prove that lemma he finds a contradiction, in which case the proof is incorrect, or maybe he just can't prove it, and the proof is ... incomplete.

It's this last situation which causes these grand proofs to drag out for so many years. At some point in the 500 pages, there is a jump which the author didn't notice, and it takes him another few years to prove that jump. The checkers have a much easier job, because they just have to try to follow the logic; if they fail, the burden is on the author to make the point at which they stick more clear.

[+] freyrs3|13 years ago|reply
Here's a presentation by the mathematician the article is about. It's on the nature of "Inter-universal Teichmüller Theory"[1], which apparently was the work leading up to the proof.

[1] http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal%20...

[+] samstave|13 years ago|reply
Can anyone explain in lay terms what the implications of this may be?

Just the phrase "deep connection between prime numbers" sounds really interesting, so, if this were true would there be any practical application of this in the next (N) years that would not be possible without this proof?

[+] sanxiyn|13 years ago|reply
The abc conjecture is asymptotic. That is, its statement is of the form "for every positive epsilon there exists a constant". A proof can be constructive, i.e. an algorithm that computes the constant from a given epsilon, or it can be nonconstructive. For any practical application it would seem a constructive proof is needed.

I am not at all qualified to talk about the claimed proof, but I think it would be nonconstructive.

[+] bradleyjoyce|13 years ago|reply
What are the practical applications of such a proof? I'm genuinely curious.
[+] aswanson|13 years ago|reply
There are probably implications for cryptography.
[+] Mordor|13 years ago|reply
Is it naive to say these proofs are overly long - that there's something a little simpler behind them?
[+] ColinWright|13 years ago|reply
At this stage, yes, it's naive. The ideas and techniques here are pretty much completely new, and it will take years, if not decades, for people to understand them deeply enough to find the essential core of what's happening.

No, they are not overly long. It's likely that papers explaining them will effectively be an order of magnitude (base 10) longer.

[+] kzahel|13 years ago|reply
[+] carbocation|13 years ago|reply
That comment, from 2009, refers to a different paper by a different author.
[+] brasmasus|13 years ago|reply
? that link points to a comment about a different claimed proof from 2009
[+] fasteddie31003|13 years ago|reply
If there is some kind of connection between prime numbers, wouldn't a lot of the current work in encryption be thrown out?
[+] lutusp|13 years ago|reply
> If there is some kind of connection between prime numbers, wouldn't a lot of the current work in encryption be thrown out?

Unlikely. The recent work doesn't improve our ability to factor large integers, which is the hard problem underlying RSA. If someone created a function that instantly produced the prime factors of large integers without the present difficult process, that would change everything, but that's not a likely outcome.

The next real challenge to modern cryptography isn't this work, but quantum computing. If quantum computing came to full flower with no obstacles sometime in the future, that would represent a basic change because it would perform an end run around the present computational difficulties that assure the security and reliability of our present methods.