top | item 21107706

Number theorist fears many proofs widely considered to be true are wrong

87 points| lelf | 6 years ago |vice.com

101 comments

[+] Hasz|6 years ago|reply
Headline is waaaay overblown, but not all of it is total hyperbole -- check out arXiv and try to read the abstract of just about any paper.

It's basically impenetrable. I'm almost done with an undergraduate degree in math and have no idea what ~90% of the research is about at anything other than a topical level. That's fine, it's written for specialists in the field (hopefully), but damn, for most fields that's not a whole lot of people.

Compared to physics or chemistry, math gets very specialized, very quickly. Depending on the field, there are no easy real-world isomorphisms either, making it even more difficult.

[+] hgtr|6 years ago|reply
I did some undergraduate research and ended up getting published. My initial drafts were written with prose so that I (and hopefully any novice) could understand. However, my professor wasn’t happy with it, so I got help from one of his grad students to rewrite it. By the end I could barely understand my own paper. IMO the final paper had too much technical jargon, which obscured some simple concepts. Kinda made me disenchanted with the academic world. Still, I have to give credit to my prof and the grad student; without their edits it probably wouldn’t have been published.
[+] imglorp|6 years ago|reply
Software is very much in crisis because of this exact problem.

Most of us build on top of a mountain of existing work; sometimes just setting a pebble on top and we have a new thing. But no human alive has seen, could remember, or could understand, each and every behavior down in that mountain, from pixels on the screen to pulses on the phy all the way to gates inside their processors.

So you read about the APIs you need; you set your pebble down with a little glue or duct tape; you hope you've connected it correctly; you hope the rest of the mountain works how you want; you hope you didn't miss any use cases for interfacing to the mountain; and you hope it doesn't have any undocumented surprises. Abelson and Sussman called this engineering by poking (1).

Of course, if you want to make something work fast with minimal work, this is great, it's the component future we dreamed about. But it also means we are clueless.

1. http://lambda-the-ultimate.org/node/5335

[+] hiphop|6 years ago|reply
> I'm almost done with an undergraduate degree in math and basically have no idea what ~90% of the research is about

Find a grad student who is just about done with their studies in, say, analysis, and odds are they understand jack squat about current research in Number Theory either.

This post is in support of your point :)

[+] knzhou|6 years ago|reply
That doesn’t mean that it’s wrong, it means that you need to read more. In a lot of fields, undergraduate material doesn’t qualify you to read papers, a PhD does.

If all papers actually were written to undergraduate level, many would be 500 pages long. Who has time to write a book-length exposition of the basics of their field every time they publish? I mean, have you ever published a technical work? If so, did you go out of your way to make it completely accessible to high schoolers? If you didn’t, you get why professors don’t do the same for undergrads.

[+] bob1029|6 years ago|reply
I feel the same way when I look at many of these papers.

I get that I don't have a post doctorate in super math or whatever the bleeding edge academic ivory tower game is these days, but at the same time I feel that if 90% of the research cannot convey any practical description of real-world implications, or at least how it relates to other abstract things that DO have real-world implications, maybe there is some BS house-of-cards fantasy game going on.

From what I understand, there are some financial benefits to cranking out research papers. Perhaps researchers are incentivized to build layer upon layer of abstraction so they can spin any tale they desire in order to turn a quick buck? What better field than math? The purest and most non-tangible of fields.

Another perspective I have on this: virtually all computer technology or software papers I read are understandable to some degree, despite my not having the highest credentials in my field. The constraint of "if this is in a compsci paper, it needs some reference implementation pseudocode" seems to help keep people honest. If the computer can run it, you can at least view things in a more concrete (albeit still somewhat abstract) manner. Especially imperative code examples: these quickly take very abstract algorithmic concepts and turn them into a step-by-step understanding. If the paper is total bunk, you can usually tell pretty quickly from the results on the computer.

Automated proofs, where feasible, seem like a very reasonable requirement for published math papers. My ignorance of the higher levels of this field fails to inform me whether all math papers could be proven automatically, or whether there is some more 'complex' realm of reasoning unsuitable for classical/quantum/etc. verification. Perhaps this would be a good constraint regardless: if you can't implement your algorithm/proof on a computer, where are you headed with it anyway? Surely piling another 500 papers' worth of layers on top isn't going to move you any closer to a practical outcome.

[+] ukj|6 years ago|reply
Sounds like a sales pitch for Lean (Microsoft's theorem prover), even though Coq [1] is the standard tool for the UniMath [2] project.

[1] https://en.wikipedia.org/wiki/Coq

[2] https://github.com/UniMath/UniMath

[+] krapht|6 years ago|reply
Well, having used both, Lean really is a lot more ergonomic. UniMath is mostly category theory; other parts of mathematics are woefully underdeveloped.
[+] Ceezy|6 years ago|reply
"I believe that no human, alive or dead, knows all the details of the proof of Fermat’s Last Theorem. But the community accept the proof nonetheless," Buzzard wrote in a slide presentation[...]

That's 100% clickbait. People know the proof, and it helped create entire new fields. By the way, there are newer and shorter proofs of the theorem.

[+] hyperbovine|6 years ago|reply
Also, that is quite a remarkable and provocative statement given that Andrew Wiles is still very much alive.
[+] raymondh|6 years ago|reply
Summary: Motherboard interviewed an attendee at the Interactive Theorem Proving conference who says that he thinks Interactive Theorem Proving is important. Also, professors specializing in proof theory think their tooling may become important but "the technology still needs improvement".
[+] madez|6 years ago|reply
I'm currently working on the formalization of Catalan's conjecture (or Mihăilescu's theorem) in Isabelle/HOL/Isar.

My personal impression is that more formal verification is necessary in math, and it should be taught to every student of mathematics in introductory courses, because it gives an unforgiving but also rewarding learning experience with formal proofs.

Feel free to ask questions.

[+] joker3|6 years ago|reply
How much research is being done on making these tools more user friendly? I spent a little bit of time using Coq about a decade ago, and while I agree that it's very important to make things like that widely used, there was a pretty steep learning curve.
[+] repolfx|6 years ago|reply
To mathematicians,

Welcome to our world!

https://xenaproject.wordpress.com/2019/09/27/does-anyone-kno...

> Actually working through some of [the details], when formalising them, made me understand that some of these issues were far bigger and more complex than I had realised.

> In the perfectoid project there is still an issue regarding the topology on an adic space, which I had assumed would be all over within about a page or two of pdf, but whose actual proof goes on for much longer

Yeah, that's how software developers feel, all day, every day. That's why we all moan about being asked to give estimates for things. You don't really understand a thing until you've formalised it in code, and often things that look simple and easy turn out to be unexpectedly deep.

Mathematics is a field that's been doing the equivalent of writing pseudo-code for thousands of years. There are endless reams of pseudo-code that looks like it might work, but as any programmer who's ever written pseudo-code knows, whether it actually runs when you try it out for real is anyone's guess.

So now mathematics is starting to experience the pain of multiple incompatible programming languages/ecosystems. There's Lean, Coq, Isabelle/HOL and some new JetBrains language too. Soon they'll start to discover the value of 'foreign language interop', rigorous code review, comments, style guides, design documents, package management, automatic optimisation etc.

Have you ever looked at a proof written in one of these languages? All code in these languages is a hot mess judged by the standards of professional software developers: tons of abbreviations, single letter variables, uncommented, ad-hoc module structure etc. I'm sure we'll eventually see mathematicians having raging flamewars about what clean style is and tabs vs spaces.

Luckily for mathematicians though they work at their own pace and are under no deadline pressure. They'll figure it out.

[+] adamnemecek|6 years ago|reply
Vice really does a disservice to the author. "All" math is not wrong, but I sure wish theorem provers were the preferred method of formalizing mathematics. That said, theorem provers really aren't quite ready for prime time either, and it sure feels like a lot of verification of mathematics is somewhat repetitive.

I think that programming/CS is to mathematics what neolithic revolution was to human culture. It allows for specialization in the stack. People can obsessively optimize different parts of the stack to everyones benefit.

[+] higherkinded|6 years ago|reply
The suggestion to rely on AI for proof verification is just laughable. Neuron weights instead of formal definitions. So reliable.
[+] drchewbacca|6 years ago|reply
It's not really like that. NNs are only used when searching for new proofs or for steps that can fill in details; the verification of a finished proof is done by a classical program via a series of substitutions that are very straightforward and spelled out in detail.

Here is an example.

https://arxiv.org/pdf/1608.02644.pdf

[+] not2b|6 years ago|reply
You seem to think "AI" means machine learning. In this case it does not. Theorem prover systems don't use neural nets.
[+] 6gvONxR4sf7o|6 years ago|reply
That's not the suggestion. Proof verification is easy (for a proof written in a formal system). Proof search is hard. Many (most/all?) proof assistants heavily use heuristics to guide the proof search. Machine learning on a database of proofs could lead to superior heuristics and make proving stuff in the formal systems easier.

Proving stuff in these systems is still harder than natural language proofs, so making them easier could increase adoption, which would make mathematicians' results more robust/rigorous.
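The verification/search split the parent describes can be made concrete in a few lines of Lean 4 (purely an illustrative sketch, not from the article): the kernel's job is only to confirm that an explicit proof term has the stated type, while tactics and their heuristics do the hard work of finding that term.

```lean
-- Verification is cheap: the kernel only type-checks this explicit term.
theorem and_swap (P Q : Prop) (h : P ∧ Q) : Q ∧ P :=
  ⟨h.right, h.left⟩

-- Search is the hard part: in tactic mode, the elaborator (and any
-- heuristics guiding it) must construct such a term before the kernel sees it.
theorem and_swap' (P Q : Prop) (h : P ∧ Q) : Q ∧ P := by
  constructor
  · exact h.right
  · exact h.left
```

Machine learning would slot into the second half: ranking which tactic to try next, never into the kernel check itself.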

[+] digama0|6 years ago|reply
Note that this is editorial license on the part of the writer; Buzzard is proposing the use of interactive theorem provers, which only use a small amount of 80's style AI (backtracking search and higher order matching). No one in ITP is seriously using modern neural net based AI in real theorem provers yet, although there are several research teams working on it.
[+] joker3|6 years ago|reply
The headline is certainly overblown, but there are probably errors out there. The interesting question is where we should be looking for them. All of the major results in an undergraduate or introductory graduate textbook have been studied often and thoroughly enough that we can be reasonably confident there are no errors in their proofs. There are errors in very new or very obscure material, but those tend to get corrected as more people start paying attention. It therefore seems like the search should be concentrated in recent results that are becoming more useful but still aren't standard.

On the other hand, we do need to build a standard library of results that can be used to prove those newer results. You can't do much in analysis without basic results on metric spaces, for instance.

[+] std_throwaway|6 years ago|reply
There is a big grey area between "not 100% correct" and "wrong and harmful". Like a faulty computer program that produces correct results in 99.999% of cases, a wrong mathematical statement can still yield useful results in practice. You just don't want to run into those pesky edge cases where it indeed is wrong.

Having a database of all mathematical proofs where all of them are checked for validity, however, is a very useful tool to actually find those edge cases where a proof is wrong and give future mathematicians a solid foundation.

Using AI as an extension of human capabilities is a given to me. Like with heavy machinery, we don't want to do the heavy lifting. We want to steer it properly to gain most benefit.

[+] cyphar|6 years ago|reply
> a wrong mathematical statement can still yield useful results in practice

Yes and no. For some basic conjectures this might be the case, but one of the greatest utilities of certain theorems is how they can be applied to completely new fields. If a theorem used in this manner is not actually true, then the proof is invalid -- and it might be the case that (several proofs down the line) the conjecture you've ended up proving is almost entirely invalid.

There's also the slippery slope of "Principle of Explosion"-type problems with accepting "mostly right" proofs -- it means you will be able to prove any given statement (even ones which are outright false).

[+] MrBuddyCasino|6 years ago|reply
Soo... Unit testing / CI for math proofs? If it can be done in software, great.
[+] std_throwaway|6 years ago|reply
To my knowledge the hard part is writing the input for the software. While in text you have a bit of leeway to gloss over some details, with a computer program you must, indeed, write every last relevant detail down and you must get it right. This is very heavy work.
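A minimal Lean 4 illustration of the point (a sketch, not from the thread): on paper, "0 + n = n" passes without comment, but with addition defined by recursion on the second argument the prover insists on an explicit induction.

```lean
-- "Obvious" in prose, but Lean's `Nat.add` recurses on its second
-- argument, so 0 + n = n genuinely requires induction on n.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Every textbook "clearly" and "it follows that" has to be expanded this way, which is where the heavy work comes from.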
[+] jetzzz|6 years ago|reply
It's actually type checking. Per the Curry–Howard correspondence, proof checking is equivalent to type checking, just for a much more sophisticated type system than in an ordinary programming language.
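As a concrete sketch of the correspondence (illustrative Lean 4, any proof assistant would do): a proposition is a type, a proof is a term of that type, and implication is literally the function arrow, so modus ponens is just function application.

```lean
-- Curry–Howard: a proof of P → Q is a function from proofs of P to proofs of Q.
theorem modusPonens (P Q : Prop) (h : P → Q) (p : P) : Q := h p

-- The same term with Prop replaced by ordinary data types is plain application:
def apply' (α β : Type) (f : α → β) (a : α) : β := f a
```

Checking the theorem and checking the function are the same kernel operation, which is why "proof checking is type checking" is meant literally.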
[+] jeff_friesen|6 years ago|reply
I was thinking it's a lot like property-based testing
[+] dls2016|6 years ago|reply
I had these fears about my dissertation work, but then realized that the “elder’s” papers were highly cited because they contained important machinery which could be modified and refined to prove other results... or the paper contained an important idea which went on to be proved in a variety of ways. Maybe it’s different outside of PDEs, but I doubt it.
[+] bryanrasmussen|6 years ago|reply
I just can't help feeling that either the true should be correct, or the wrong should be false. But I guess English is not a strongly typed language.
[+] iamaelephant|6 years ago|reply
This just seems like a native advertisement for Lean...
[+] lysium|6 years ago|reply
I’m sure this will help with checking papers. There will still be the problem of proving that the software proves what it is intended to prove.
[+] gowld|6 years ago|reply
Headline is of course absurd hyperbole that does not belong here.

> "I think there is a non-zero chance that some of our great castles are built on sand," Buzzard wrote in a slide presentation. "But I think it's small."

[+] mirimir|6 years ago|reply
Yes, the title ought to be changed. At least, "all" to "some".

But he does note:

> This overreliance on the elders leads to a brittleness in the understanding of truth. A proof of Fermat's Last Theorem, proposed in 1637 and once considered by the Guinness Book of World Records to be the world's "most difficult math problem," was published in the 1990s. Buzzard posits that no one actually completely understands it, or knows whether it's true.

[+] kurlberg|6 years ago|reply
Well, they also quote him as saying:

"I’m suddenly concerned that all of published math is wrong because mathematicians are not checking the details, and I’ve seen them wrong before."

[+] ummonk|6 years ago|reply
Extreme clickbait headline. The subheading is '"I think there is a non-zero chance that some of our great castles are built on sand," he said, arguing that we must begin to rely on AI to verify proofs.'
[+] breck|6 years ago|reply
Anyone interested in this topic: I have a prediction that the time is right for a new math that starts from just 1 and 0 and builds every symbol up in a rigorous way.

We're working on the infrastructure to make such a thing easier, but the project does not currently have a leader: https://github.com/treenotation/jtree/issues/83

[+] devicetray0|6 years ago|reply
Forgive me, but why is the TreeNotation/jtree library the right place to craft a new "rigorous" mathematics language?