nicf|7 months ago
I'm actually prepared to agree wholeheartedly with what you say here: I don't think there'd be any realistic way to produce thousand-page proofs without formalization, and certainly I wouldn't trust such a proof without some way to verify it formally. But I also don't think we really want them all that much!
The ultimate reason, I think, is that what really lights a fire under most mathematicians is the desire to know why a result is true; the explanation is really the product, much more so than the bare yes-or-no answer. For example, I was never a number theorist, but I think most people who are informed enough to have an opinion believe the Riemann Hypothesis is probably true, and I know they're not actually waiting around to find out. There are lots of published papers whose results take the form "If the Riemann Hypothesis is true, then [my new theorem]."
The reason they'd still be excited by a proof is the hope, informed by experience with proofs of earlier long-standing open problems, that the proof would involve some exciting new method or perspective that would give us a deeper understanding of number theory. A proof in a formal language that Lean says is true but which no human being has any hope of getting anything from doesn't accomplish that.
mietek|7 months ago
Writing proofs in Agda is like writing programs in a more expressive variant of Haskell. Abelson said that “programs must be written for people to read, and only incidentally for machines to execute”, and by the Curry-Howard isomorphism, proofs can be seen as programs. All the lessons of software engineering can and indeed should be applied to making proofs easier for humans to read.
For a quick example, check out my mechanization of Martin-Löf’s 2006 paper on the axiom of choice:
https://research.mietek.io/mi.MartinLof2006.html
Recent HN discussion:
https://news.ycombinator.com/item?id=44269002
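To make the Curry-Howard point concrete, here is a minimal sketch in Lean 4 rather than Agda (the theorem and its name are my own, purely illustrative):

```lean
-- Curry–Howard in miniature: a proof that "A and B implies B and A"
-- is literally a program that swaps the two components of a pair.
theorem and_swap {A B : Prop} (h : A ∧ B) : B ∧ A :=
  ⟨h.2, h.1⟩
```

Read as a program, the proof term is just the pair-swapping function — no harder to follow than the Haskell program `\(a, b) -> (b, a)`.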
nicf|7 months ago
I meant to be responding specifically to the case where some future theorem-proving LLM spits out a thousand-page argument which is totally impenetrable but which the proof-checker still agrees is valid. I think it's sometimes surprising to people coming at this from the CS side to hear that most mathematicians wouldn't be too enthusiastic to receive such a proof, and I was just trying to put some color on that reaction.
daxfohl|7 months ago
I guess one could work in a brand of math whose axioms make defining and using limits impossible, which, had formalization come before the invention of calculus, might have made some 17th-century mathematicians more comfortable. Though I imagine it would make progress in physics challenging. I think the same about LEM/AoC. Given that almost every element of the power set of the reals is non-measurable, maybe stuff like Banach-Tarski is actually fundamental in real physics: it can't be predicted or computed, but it can be observed.
riku_iki|7 months ago
In software engineering we already have this: there is human-readable high-level code, and there is assembly and lots of auto-generated code.
In proof systems we could have the same: key concepts and theorems would be encoded in human-readable form, but there'd be no need for a human to read through millions of generated lines.
kevinventullo|7 months ago
I completely agree that a machine-generated formal proof is not the same thing as an illuminating human-generated plain-language proof (and in fact I suspect that without further guidance they will be quite different; see my other stream-of-thought comment). However, I do think machine-generated formal proofs would be interesting for a few reasons:
1. Sometimes the obvious thing is not true!
2. I think the existence or non-existence of a machine-generated proof of a mathematical claim is interesting in its own right. E.g. what kinds of claims are easy versus hard for machines to prove?
3. In principle, I would hope they could at least give a starting point for a proper illuminating proof. E.g. the process of refinement and clarification, which is present today even for human proofs, could become more important, and could itself be machine-assisted.
nicf|7 months ago
Anyway, yeah, if this scenario does come to pass it will be interesting to see just how impenetrable the resulting formal proofs end up looking and how hard it is to turn them into something humans can fit in their heads. I can imagine a continuum of possibilities here, ranging from thousands of pages of inscrutable symbol-pushing at one end to beautiful explanations at the other.
david-gpu|7 months ago
Of course, but a formal system like Lean doesn't merely spit out a yes-or-no answer; it gives you a fully fledged proof. Admittedly, that proof may be harder to read than natural language, but that only means we could benefit from another tool that translates Lean proofs into natural language.
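As a small sketch of what "more than a yes-or-no answer" means in practice (Lean 4; the theorem and its name are my own, purely illustrative):

```lean
-- Lean doesn't just report "valid": every accepted theorem carries a
-- proof term that can be inspected like any other program value.
theorem two_le_four : 2 ≤ 4 := by decide

-- Ask Lean to display the underlying proof term it constructed.
#print two_le_four
```

The `#print` output is the proof object itself — dense, but a real artifact that a translation tool could in principle render into natural language.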
daxfohl|7 months ago
Your comment reminds me of Tao's comment on the ABC conjecture: usually with a big proof, you progressively get new tools and examples of how they can be applied to other problems. But if it's hundreds of pages of formulas that just spit out an answer at the end, that's not how math usually works. https://galoisrepresentations.org/2017/12/17/the-abc-conject...
If these provers do end up spitting out 1000-page proofs that are all calculation with no net-new concepts, I agree they'll be met with a shrug.
lblume|7 months ago
daxfohl|7 months ago
In other cases, the new condition affects your theorem but doesn't completely invalidate it. So you can either accept that your theorem is weaker, or find other ways to strengthen it given the new condition.
That's all kind of abstract though. I'm not an expert on RH or what other important math depends on it holding up. That would be interesting to know.
nicf|7 months ago
In general, though, the answer to this question would depend on the specifics of the argument in question. Sometimes you might be able to salvage something; maybe there's some other setting where the same methods work, or where some hypothesis analogous to the false one ends up holding, or something like that. But of course, from a purely logical perspective, if I prove that P implies Q and P turns out to be false, I've learned nothing about Q.
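That last point can itself be checked in a proof assistant; a minimal Lean 4 sketch (my own, purely illustrative):

```lean
-- With P false, P → Q holds vacuously, and Q may still be true...
example : ∃ P Q : Prop, (P → Q) ∧ ¬P ∧ Q :=
  ⟨False, True, fun _ => trivial, fun h => h, trivial⟩

-- ...or false: the hypotheses (P → Q) and ¬P pin down nothing about Q.
example : ∃ P Q : Prop, (P → Q) ∧ ¬P ∧ ¬Q :=
  ⟨False, False, fun h => h, fun h => h, fun h => h⟩
```

Both examples typecheck, which is exactly the statement that a vacuously true implication carries no information about its conclusion.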