
An ABC proof too tough even for mathematicians

282 points | ot | 13 years ago | bostonglobe.com

142 comments

[+] codeulike|13 years ago|reply
If a programmer locked himself away for 14 years and then emerged and announced he'd written a completely bug free OS, there would be skepticism. Code needs to be battle tested by other people to find the bugs.

Mathematics is the same, to an extent; one guy working alone for 14 years is likely to have missed ideas and perspectives that could illuminate flaws in his reasoning. Maths bugs. If he's produced hundreds of pages of complex reasoning, on his own, however smart he is I'd say there's a high chance he's missed something.

Humans need to collaborate in areas of high complexity. With a single brain, there's too high a chance of bias hiding the problems.

[+] wging|13 years ago|reply
It is worth noting that Andrew Wiles' initial proof of Fermat's Last Theorem also had a bug. This was later fixed, with help.

From http://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem :

> However, it soon became apparent that Wiles's initial proof was incorrect. A critical portion of the proof contained an error in a bound on the order of a particular group. The error was caught by several mathematicians refereeing Wiles's manuscript,[112] including Katz, who alerted Wiles on 23 August 1993.[113]

> Wiles and his former student Richard Taylor spent almost a year trying to repair the proof, without success.[114] On 19 September 1994, Wiles had a flash of insight...

[+] ekianjo|13 years ago|reply
I do not think it is fair to compare software and abstract mathematics. Software is mostly an engineering problem, while mathematics is mostly science and conjecture.

And even if they locked themselves away for 14 years, there is no "single mind at work" there. Mathematics is built "on the shoulders of giants" through generations of brains dedicated to it. There is no such thing as pure invention: new discoveries lead to new theorems, new leads that other people take up.

[+] pseut|13 years ago|reply
I don't think he's claiming "bug free" -- and if I remember right, people have already found "bugs" and he claims to have a patch that he'll post soon. There's a huge difference between minor errors that invalidate the proof but can be fixed easily once they're noticed and fundamental errors where the proof is just wrong and can't be fixed. So most of the bugs that you're talking about don't really matter (of course, "bugs" that seem trivial at first might turn out to be fundamental irreparable errors, but that's a separate issue).
[+] tel|13 years ago|reply
I think it's more like if a programmer locked himself away for 14 years and then announced he'd written an OS before anyone else. Even if the whole thing is riddled with bugs (and let's not forget this programmer is more on the Bernstein end of things, precision and confidence are high) it's still some place potentially worth exploring.

But nobody would bother quite yet. Instead he announces one more thing: he's used this OS to control a rocket ship and send a satellite to Mars, far before anyone else has even come close!

Now, assuming he didn't lie there, you really want to figure out what he invented because he just seems to be vastly more powerful than you.

[+] zimbatm|13 years ago|reply
Agreed but I think there is value in independently-evolving ideas.

From my own experience: the more connected I am, the more I share and optimize existing ideas. Left on my own, I come up with rougher but more original ones.

[+] gizmo686|13 years ago|reply
Even if his proof turns out to be flawed, it is likely that the new system he constructed will still provide us a new and useful way of looking at math. The good thing about working outside the mainstream is that there is far less inertia holding you to the old ways. Assuming his work does represent a new branch of math, he has reduced the problem of the rest of us discovering/inventing it from stumbling blindly to making sense of his papers.
[+] wissler|13 years ago|reply
Isaac Newton went away for two years, by himself, to invent Calculus and modern physics.
[+] Evbn|13 years ago|reply
Wolfram wrote ANKOS in the way you describe.
[+] dbaupp|13 years ago|reply
Another article with slightly more background on the ABC problem itself (and possibly slightly less sensationalist). http://www.nature.com/news/proof-claimed-for-deep-connection...

And the MathOverflow discussion referenced: http://mathoverflow.net/questions/106560/what-is-the-underly...

[+] podperson|13 years ago|reply
I think implying the article is sensationalist is a little harsh. It's definitely a click-grabbing headline, but overall a decent article on a tough subject.
[+] sek|13 years ago|reply
Just read his Wikipedia entry:

> Mochizuki attended Phillips Exeter Academy and graduated in 2 years. He entered Princeton University as an undergraduate at age 16 and received a Ph.D. under the supervision of Gerd Faltings at age 23.

He is 43 years old now; I assume he is 100% committed to mathematics. These people fascinate me, having a feedback loop that is unbreakable. Especially for topics where you have knowledge of something and almost nobody else in the world is capable of understanding you. It's like Star Trek for the mind.

[+] Xcelerate|13 years ago|reply
This article seems to suggest that mathematicians are all too eager to drop his work at the slightest whiff of any flaw. Could someone more knowledgeable on the subject explain to me why this is?

It is clear that he has already done some very great things in mathematics, so even if there were a flaw in his proof, I would think his papers would still have many deep insights that no one else had thought of. I mean, it's not like mathematicians are pressed for time -- if I were one I would certainly dedicate a lot of time to studying something interesting like this.

[+] btilly|13 years ago|reply
You clearly do not value the time of mathematicians like they value their own time.

The estimate that I was told for the average mathematician reading the average math proof is one page per day.

Thus the average mathematician facing this will see 750 pages just to become one of the 50 people who have mastered the basics of anabelian geometry. That's 2 years. Then you have to take on some unknown number of years to learn "inter-universal geometry". Then your reward for doing this is that you are qualified to read a 512-page proof, which is again going to be a year and a half. Along the way, if you find a mistake in any of it, your reward is to confirm the immediate guess that most mathematicians have, which is that there is likely a mistake somewhere. (But with this much math, you'll probably find several "mistakes" that aren't before you find a real one.)

This is years of work, that has nothing to do with anything that you're already working on. And believe me, a professional mathematician has no shortage of problems to work on, in areas that they already have the background for.

If you think that this is unreasonable, well, why don't you volunteer to fix it? Reading the proof shouldn't take you much longer than it would take to become a mathematician. And you can learn anabelian geometry during grad school, so that time is not all wasted.

[+] rfurmani|13 years ago|reply
Right now we are waiting for his corrected paper, as the current one contradicts some known number-theoretic results. Mochizuki put up an amazing amount of scaffolding and invented a new area with new abstractions. People who I trust claim that there isn't new profound number theory in this, so then it may become a matter of whether to study and learn it for its own sake, which carries less incentive if there is no immediate application.

Plus: mathematicians really don't have spare time to learn every area that comes along. Allegedly Poincaré was the last true polymath. Given how math has been developing for centuries, and that you never really need to throw things away, it continually becomes harder to get up to speed in any particular field. There's a famous quote about how the Princeton undergraduate math program is so rigorous that it brings you up to speed only to the point of early twentieth-century mathematics. In this way math is incomparable to CS.

[+] podperson|13 years ago|reply
Seems quite the opposite. The article says most mathematicians have well-established lines of research and are disinclined to devote years to understanding a different specialization. The mathematicians who are trying to understand the proof are described as young, up-and-coming mathematicians with little to lose and a lot to prove (and an opportunity to potentially get in on the ground floor of a new theoretical framework -- "inter-universal geometry")
[+] lmm|13 years ago|reply
Mathematics - at least some kinds of it - is largely a game of symbols. You define something and then see if you can prove anything about it. Most of the time, it turns out to reduce to group theory (e.g. Galois theory basically consists of reducing the study of fields to the study of groups). Sometimes you come up with a novel structure about which you can say something. Occasionally, this gives you a result that feeds back into another branch of mathematics, or even into number theory - as with Wiles' proof of Fermat's Last Theorem. But most of the time, this is just some funny structure and you say "well, that's nice" and move on.

So Mochizuki has almost certainly defined some new objects and proven some things about them. But mathematicians do that all the time (admittedly on a smaller scale). If you just want to look at something, there's plenty of "new territory" around - stuff that doesn't require learning a new notation, and hasn't had Mochizuki looking at it for ten years already. The only reason some arbitrary new structure is interesting enough to spend your time learning about is if you can tie it back to number theory.

[+] shardling|13 years ago|reply
That wasn't the takeaway I got from the article at all -- and the original proof of Fermat's Theorem by Wiles had flaws that took some work to patch over, but in no way diminished his accomplishment.

I think you've just misinterpreted some of the language used.

[+] gwern|13 years ago|reply
Consider the incentives involved in studying his work carefully for years, and those of being in academia.
[+] gizmo686|13 years ago|reply
I do not think that is the case. There is skepticism resulting from the huge amount of work between accepted math and his results, and that gives mathematicians concern. It is almost certain that his work contains some flaws; what is not certain is whether it contains fatal flaws. Many are not willing to invest the time to learn something that might turn out to be useless.
[+] elliptic|13 years ago|reply
Is this situation similar to that of Louis de Branges and the Riemann Hypothesis a few years back? I.e., a well-respected mathematician (de Branges had settled the Bieberbach conjecture in the '80s) releases a proof of an important unsolved problem using his own poorly understood mathematical technology?

Edit - lest this sound too negative, one should realize that the Bieberbach proof took a long time to be accepted.

[+] bnegreve|13 years ago|reply
Would it be possible to use proof assistants like Coq [1] to verify this kind of proof? If not, does anyone know why?

[1] http://en.wikipedia.org/wiki/Coq

[+] cperciva|13 years ago|reply
Proofs at this level are nowhere near formal enough for existing mechanical tools to understand. I've seen researchers cite "translated proof X into an automatically-verifiable form" as a major achievement -- even for very simple proofs.
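To give a feel for the gap cperciva describes, here is a toy sketch (in Lean 4, as a stand-in for Coq) of what formalizing even a one-line informal claim ("the sum of two even numbers is even") looks like; every unpacking step that a human reader does silently has to be spelled out:

```lean
-- Toy illustration of the formalization gap: the informal one-liner
-- "even + even = even" still needs explicit witnesses and rewriting.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k :=
  let ⟨a, ha⟩ := hm   -- extract a witness for m
  let ⟨b, hb⟩ := hn   -- extract a witness for n
  ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩  -- 2*a + 2*b = 2*(a + b)
```

Scaling this kind of explicitness to a 512-page research proof, built on definitions that exist nowhere in any proof assistant's libraries, is why translation itself is considered a major achievement.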
[+] ph0rque|13 years ago|reply
> ...the proof itself is written in an entirely different branch of mathematics called “inter-universal geometry” that Mochizuki—who refers to himself as an “inter-universal Geometer”—invented and of which, at least so far, he is the sole practitioner.

In this universe, at least...

[+] ArtB|13 years ago|reply
Wouldn't the easiest way to check this proof be to enter it into something like Coq? That way you'd only have to understand how to translate each step rather than learn each field.
[+] Evbn|13 years ago|reply
You would still have to write 100 pages of Coq code, and write every definition properly, which may be just as hard as learning the field.
[+] atas|13 years ago|reply
"Release early release often" applies to Math as well. Wouldn't it be better for everyone if he hadn't been so secluded and published some of his work in the meantime?
[+] taejo|13 years ago|reply
My impression was that he had published some material on IUT before, but nobody had any reason to read it.
[+] pfanner|13 years ago|reply
I'm a physics student. Sometimes I think about whether I should completely change my path to math. I always sucked at it, but it seems so huge, exciting and powerful.
[+] dbz|13 years ago|reply
Can anyone explain what "inter-universal geometry" is?
[+] hakaaak|13 years ago|reply
Different universes in mathematics play by different rules and have different components. Inter-universal Geometry is how these sets of rules and components can relate and is the bridge to understanding more complex theory.
[+] mememememememe|13 years ago|reply
Will a proof of the ABC conjecture be a nightmare for security protocols relying on prime number factorization, such as RSA?
[+] cperciva|13 years ago|reply
No. A proof is a nice thing to have, but everybody assumes that ABC is true already.
[+] kainosnoema|13 years ago|reply
Initially I had the same concern, but after digging a bit deeper, I don't believe this would be an immediate outcome. From the little I can gather, the ABC conjecture is really just a statement about the relationship between the operations of addition and multiplication on triples of coprime positive integers. While prime factors are involved in the conjecture, it doesn't seem that the proof of the conjecture would naturally lead to prime factorization in polynomial time. I'm not a mathematician though, and given how radical Mochizuki's proof sounds, I imagine it may (if it stands) eventually lead to more efficient factorization algorithms.
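To make the "statement about addition and multiplication on coprime triples" concrete, here is a small Python sketch of the quantity the conjecture is about: for coprime a + b = c, it compares c against rad(abc), the product of the distinct primes dividing a*b*c. The conjecture says that for any eps > 0, c > rad(abc)^(1+eps) holds for only finitely many triples. (This is purely illustrative; the function and search names are mine.)

```python
from math import gcd

def rad(n):
    """Product of the distinct prime factors of n (the 'radical')."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r  # leftover n > 1 is itself prime

# Even c > rad(abc) (the eps = 0 borderline) is already rare:
hits = [(a, b, a + b)
        for a in range(1, 200) for b in range(a, 200)
        if gcd(a, b) == 1 and a + b > rad(a * b * (a + b))]
print(hits[:3])  # [(1, 8, 9), (1, 48, 49), (1, 63, 64)]
```

Note that nothing here touches factoring a number quickly; the conjecture bounds how often such "high quality" triples occur, which is why a proof doesn't obviously threaten RSA.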

Advances in computing power seem to be a more worrisome threat to RSA, especially for RSA-1024 whose factorization is probably already feasible (http://www.cs.tau.ac.il/~tromer/papers/cbtwirl.pdf). And once advances in quantum computing become reality, quantum algorithms should be able to very quickly break even RSA-2048 (http://en.wikipedia.org/wiki/Shors_algorithm).

[+] smegel|13 years ago|reply
Apart from cryptography, has prime number research and theorization produced any other practical applications?
[+] vhf|13 years ago|reply
No, it should not.