I do not find that the article insists strongly enough on the fact that computer-aided verification of proofs is not, in any way, a threat to human creativity in mathematics.
You can just understand proof assistants as an effort to design a language for mathematical proofs, and a procedure to verify them mechanically. (Spelled out like this, it sounds a lot like Hilbert's program -- we have just understood since then that you cannot hope to have a "perfect" proof assistant.) The availability of computers means that you can actually implement the verification process and run it, but if you do not want to involve computers in the process (because you do not "trust" them, or whatever), you could always check such proofs by hand, in principle.
In a sense, a proof designed for a computer assistant is one that can be verified without any intelligence required from the reader.
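To make that concrete, here is roughly what such a machine-checkable proof looks like (a sketch in Lean 4; the exact lemma names vary between versions, so treat them as assumptions):

```lean
-- Commutativity of addition on Nat, spelled out so that a mechanical
-- checker can verify every step with no ingenuity required of the reader.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero      => simp                                  -- m + 0 = 0 + m
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]   -- push succ through both sides
```

Nothing in the proof appeals to intuition; the kernel replays the rewrites and either accepts or rejects them.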
> Voevodsky himself is careful to distinguish the various ways computers should or shouldn’t be put to use. “Lots of people don’t understand the difference between using computers for calculation, for computer-generated proofs, and for computer-assisted proofs,” he says. “The computer-generated proofs are the proofs which teach us very little. And there is a correct perception that if we go toward computer-generated proofs then we lose all the good that there is in mathematics—mathematics as a spiritual discipline, mathematics as something which helps to form a pure mind.”
The recent financial crisis shows that even those who claim to understand the math do not, let alone being able to make progress on probability (hmm, what are the odds all this could go wrong? Nah).
I wonder which proof assistant he is using. His presentation[1] mentions Coq, but Wikipedia[2] lists several others. Is Coq kind of a standard tool, or are there others that are widely used?
[1] https://www.math.ias.edu/vladimir/sites/math.ias.edu.vladimi...
[2] https://en.wikipedia.org/wiki/Proof_assistant
Yes, Voevodsky works in Coq. Some other people in the "univalent foundations" project that Voevodsky started have also done some developments in Agda.
Apart from Voevodsky's work, there are several other big math formalization projects. Perhaps the biggest ones are the formalization of the Feit-Thompson theorem (done by Georges Gonthier and his collaborators, in Coq), and the proof of the Kepler conjecture (done by Thomas Hales and his collaborators, in Isabelle/HOL and HOL Light).
I'm surprised they didn't mention Leslie Lamport. He apparently started writing "structured proofs" decades ago, and then developed a system, TLA+, to do the same thing on a computer.
I am hazy on the details, but I think TLA+ proofs are just checked by brute force, and not with an actual theorem prover? It's for distributed algorithms with a large but finite number of states.
So it's not exactly the same as what's discussed in the article, but it's definitely related. I would appreciate if anyone has any clarification on this point.
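For what it's worth, the brute-force checking described above amounts to exhaustively exploring a finite state space and testing an invariant in every reachable state. This toy Python version illustrates the idea only; it is not TLA+ itself, and all names in it are invented:

```python
# Toy model checker: breadth-first exploration of every reachable state,
# checking an invariant at each one. This is the brute-force idea only,
# not an implementation of TLA+.
from collections import deque

def check_invariant(initial, next_states, invariant):
    """Return (True, None) if the invariant holds in all reachable states,
    or (False, state) with the first counterexample found."""
    seen = set()
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if state in seen:
            continue
        seen.add(state)
        if not invariant(state):
            return False, state  # counterexample found
        queue.extend(next_states(state))
    return True, None

# Example: a counter that may be incremented but is capped at 3.
ok, bad = check_invariant(
    0,
    lambda s: [min(s + 1, 3)],  # successor states
    lambda s: s <= 3,           # invariant to check everywhere
)
```

Because the state space is finite, the loop terminates and the check is exhaustive, which is exactly why this style works for distributed algorithms with finitely many states.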
https://scholar.google.com/scholar?cluster=14553320809011812...
"A method of writing proofs is described that makes it harder to prove things that are not true. The method, based on hierarchical structuring, is simple and practical. The author's twenty years of experience writing such proofs is discussed."
http://research.microsoft.com/en-us/um/people/lamport/pubs/p...
TLA+ is better described as a verification tool than as a proof assistant, and is better suited to analysis of software systems than to analysis of the sort of mathematics in which Voevodsky is involved. The most striking characteristics that make Coq and other tools based on dependent type theories interesting to Voevodsky are absent from systems like TLA+.
TLA+ is sufficiently different from Coq -- and in ways essential to Voevodsky's intentions -- that discussion of TLA+ in this article would be out of place.
In addition, there is a huge community of people who have worked with theorem provers for the past 20 years or more; it's not clear to me why the author would single out TLA+/Lamport over any of these other systems/researchers.
It's interesting that computer science struggles with a similar issue: program verification. But at least with a program you can run it and have it check itself to some extent (e.g. with invariants). But Voevodsky is talking about a whole other level of difficulty, akin to writing a massive program but not being able to run it.
You can run it; it's just that the operation of the program is not interesting. Voevodsky's system is proof-relevant, so at least the programs themselves are interesting, but other models exist where the programs aren't even interesting: their mere existence is all that matters.
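A minimal illustration of the point above about programs checking themselves with invariants (plain Python, nothing Voevodsky-specific; the example is invented):

```python
# A program can assert its own invariants at runtime, something a
# paper proof cannot do. Here the loop invariant of binary search
# is checked on every iteration.
def binary_search(xs, target):
    lo, hi = 0, len(xs)
    while lo < hi:
        # loop invariant: if target is present, it lies within xs[lo:hi]
        assert 0 <= lo <= hi <= len(xs)
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid
        else:
            return mid
    return -1
```

The asserts only sample executions, of course; they gain confidence, not the certainty a machine-checked proof gives.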
> Broadly speaking, the argument against the use of computers is a lament for the loss of the human element: intuition and understanding. Acknowledging something as true because the computer says so is not the same as knowing why it is true. One might reckon it’s analogous to relying on an Internet mash-up of reviews about the mysteries of Venice, rather than going there and splurging on the water taxi and experiencing the magic for oneself. But then again, the same conundrum arises in building upon previous results rather than working from scratch.
This is the same Voevodsky that came up with the univalence axiom[1] that is foundational in HoTT[2] (homotopy type theory)?
What's not to love about:
The univalence axiom states:
(A = B) ≃ (A ≃ B)
"In other words, identity is equivalent to equivalence. In particular, one may say that 'equivalent types are identical'."
[1] https://en.wikipedia.org/wiki/Homotopy_type_theory#Univalenc...
[2] http://homotopytypetheory.org/book/
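For reference, the more careful statement (as in the HoTT book) is not the slogan itself but that a canonical comparison map is an equivalence; a sketch in the book's notation:

```latex
% For a universe \mathcal{U} and types A, B : \mathcal{U}, there is a
% canonical map taking identifications to equivalences:
%   \mathsf{idtoeqv} : (A =_{\mathcal{U}} B) \to (A \simeq B)
% The univalence axiom asserts that this map is itself an equivalence:
\mathsf{univalence} : \prod_{A,\,B : \mathcal{U}} \mathsf{isequiv}\big(\mathsf{idtoeqv}_{A,B}\big)
```

The displayed slogan (A = B) ≃ (A ≃ B) then follows, since an equivalence between the two types is exactly what idtoeqv being an equivalence provides.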
The Oxford dictionary defines "intuition" as "immediate apprehension by the mind, without the need for reasoning".
When people emphasise intuition in mathematics it suggests that
(i) that they have never tried to teach mathematics formally and explicitly without appeal to intuition —if they had, it would have been a most refreshing experience for them and for those of their students that were sufficiently well-educated to appreciate simplification
(ii) that they have never taken the trouble to learn how to let the rules of the formal game guide their “writing of papers” —if they had, they would have discovered how to do mathematics way beyond their former powers.
EW Dijkstra
L.E.J. Brouwer emphasized that some mental procedures are language-less or pre-linguistic. Intuition is an imprecisely defined notion, but I think the main thing to understand is that immediacy is a key component. If we grasp something by intuition, there are no steps involved. Also, I don't think that apprehending something by intuition means proving it; proofs always involve rules and mental (or machinic) operations. In this sense, knowing something by intuition is not proving that thing in the formal (or informal) sense of the word.
Given all that I think the Oxford dictionary's definition is actually quite spot on. Are you quibbling with it? Happy to hear what you think.
I can post links to Brouwer's writing on these matters.
The term is also used to describe some people's ability to grok simple arguments without needing to fall back on formal methods. A trivial example might be the pigeon hole principle where most people can just "see" that it is true, even though it does require proof.
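The pigeonhole claim above can even be checked by brute force for small cases (a toy Python sketch; everything in it is invented for illustration):

```python
# Exhaustive check of the pigeonhole principle for a small case:
# every assignment of n + 1 items to n boxes puts two items in one box.
from itertools import product

def has_collision(assignment):
    # True if some box received at least two items
    return len(set(assignment)) < len(assignment)

n = 3
all_assignments = product(range(n), repeat=n + 1)  # all n^(n+1) assignments
assert all(has_collision(a) for a in all_assignments)
```

Of course this only verifies one value of n; the general statement still needs the proof the comment alludes to.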
With all due respect to Dijkstra, this is so narrow-minded. I love intuition in mathematics, primarily coming from spatial or geometric reasoning. Then it's time to put it into algebraic or symbolic terms, but that's pinning a butterfly to a board to show you actually caught it.
I like point (ii) above, as learning to let the rules of the game guide your writing is a very powerful technique indeed. But teaching math formally without appeal to intuition is what a lot of people do and it's boring and unpleasant to me. It doesn't simplify, it merely traces lines in the sand or on a page without taking the trouble to look up and appreciate the larger picture (sometimes meaning literal picture).
It's clear that Dijkstra & I appreciate very different parts of mathematics :)
Mathematics is defined by its axioms and the axioms are derived purely from intuition. Therefore mathematics is ultimately 100% intuitive, if you don't agree then you haven't tamed your intuition.
If formally precise reasoning were all there is to it, computers could do all math for us. However, even in that formal-if-anything-is area, they fail. Computers can't solve instances of the Halting problem or the Tiling problem whose solution is immediately, intuitively clear to a human. Those solutions can also be immediately verified by fellow humans, via the informal communication method of 'speech'. The reasoning involved can't be verified by computers.
Conway is listed as being negative about using computer verified proofs, but he is well known for trashing set theory (or at least strongly protesting the shackles of set theory). So I wonder what he thinks of this new univalent foundation.
>>So I wonder what he thinks of this new univalent foundation.
The Homotopy Type Theory (HoTT) book[1] puts forth a possible interpretation of Conway's thoughts about univalent foundations in section 11.6 (look at the bottom of the page for the following very selective and paraphrased quote):
"[...] Conway's point was not to complain about ZF in particular, but to argue against all foundational theories at once [...] One might respond that, in fact, univalent foundations is not one of the "standard foundational theories" which Conway had in mind, but rather the metatheory in which we may express our ability to create new theories, and about which we may prove Conway's metatheorem."[2]
[1] http://homotopytypetheory.org/book/
[2] http://www.math.ntnu.no/~stacey/documents/HoTT-online/main12...
EDIT: I forgot to mention that Voevodsky's definition of "Univalent Foundations" seems to be used interchangeably with the term "Homotopy Type Theory", but in Voevodsky's mind there is a sharp distinction[3]. Voevodsky tried to make this clear in an HoTT Google Groups email thread with the following statement: "HoTT is the study of the type-theoretic approach to the meta-theory of the Univalent Foundations."[4]
[3] http://en.wikipedia.org/wiki/Homotopy_type_theory#Univalent_...
[4] https://groups.google.com/forum/#!topic/homotopytypetheory/z...
Conway's opposition was to computer-generated proofs (not computer-verified proofs), because the software of the time wasn't high quality (for accuracy), and because they were so complicated that people couldn't get confidence that they were debugged.
Basically, humans can't comprehend what the computers are doing, even when they are right, and we aren't always building them well enough to be trustworthy without checking their work.
However, Zeilberger and Shalosh B Ekhad have done some highly reputable work, albeit in a narrow area. (Zeilberger designs algorithms in combinatorics, which then computer-generate proofs for specific questions)
http://www.nytimes.com/learning/teachers/featured_articles/2...
I found a mistake in one of his proofs in his book "On Numbers and Games" using computer-verified proofs. It was only a mistake in the proof, that is, the proof was easy to correct and the result was still true, but it shows that no one is above mistakes. So using a computer can certainly drastically reduce the probability of an error.
I wonder if more computer-aided proofs will lead mathematicians to discover serious conflicts in their ontologies. Critical parts of mathematics, and even the way we talk about it, may eventually need to be reworked.
Within the formalized version of mathematics, some sort of namespace mechanism will probably be necessary to deal with terminology collisions.
And certain things commonly thought of as one notion might need to be split up into multiple notions (example: in some contexts, "function" means "set of pairs satisfying the vertical line test"; in other contexts, "function" means "set of pairs satisfying the vertical line test, along with a designated 'codomain'").
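The "function" example can be made concrete: in the bare-graph reading, the two objects below are the same function, while in the graph-plus-codomain reading they differ (a toy Python sketch with invented names):

```python
# Two readings of "function": as a bare set of input/output pairs,
# versus the same pairs together with a designated codomain.
graph = {(0, 0), (1, 1), (2, 4)}  # the bare graph: just the pairs

class FunctionWithCodomain:
    """A function in the second sense: graph plus designated codomain."""
    def __init__(self, pairs, codomain):
        self.pairs = frozenset(pairs)
        self.codomain = frozenset(codomain)

# Same graph, different codomains => distinct "functions" in this sense,
# even though they are indistinguishable as bare sets of pairs.
f = FunctionWithCodomain(graph, {0, 1, 4})
g = FunctionWithCodomain(graph, range(10))
```

A formalization has to pick one of these notions (or carry both under different names), which is exactly the kind of terminology collision a namespace mechanism would manage.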
But at most, this amounts to changing the wallpaper. The structure itself is sound unless something really shocking happens, like a proof that PA is inconsistent. Such a shocking event needn't hinge on computer-aided proofs (although computer-aided proofs will surely help to convince mathematicians who would otherwise assume such a startling result must be mistaken).
https://www.youtube.com/watch?v=E9RiR9AcXeE
Funnily enough, I was just thinking about this very topic a week ago and scribbled down some notes with slightly less trivial examples: http://www.maths.tcd.ie/~onash/counting_whats_important_file...
A nice overview was given by Peter of Conscious Entities recently: http://www.consciousentities.com/?p=1918.