The remarks on computable and continuous functions can also be thought of as follows.
The law of the excluded middle is only one non-constructive aspect of classical mathematics. Dropping it leaves you with what is usually called "intuitionistic logic", and the characterization of constructive mathematics as broader than classical mathematics is literally correct in that every intuitionistic theorem implies a classical theorem, and every classical theorem is equivalent to an intuitionistic theorem.
Another non-constructive aspect of classical mathematics, which remains non-constructive in intuitionistic logic, is axioms whose conclusion involves the existential quantifier. In both classical and intuitionistic logic, the inference rule governing how the existential quantifier may be used looks as follows.
> Asserting that "the existence of x such that phi(x) holds entails psi", where x is a variable bound by the existential quantifier and not occurring in psi, is equivalent to asserting that "phi(y) entails psi", where y is a free variable not occurring in psi.
This inference rule (the assembly language of logic, so to speak) may seem weird, so an example of its use might help.
> "there exists x such that phi(x) holds entails that there exists x such that phi(x) holds" by tautological inference, and by the existential quantifier inference we have that "phi(y) entails the existence of x such that phi(x) holds" where y is a free variable not appearing in "exists x such that phi(x) holds".
When doing classical or intuitionistic mathematics, you would usually use axioms of the form "psi entails that there exists x such that phi(x) holds". But such axioms are non-constructive because the inference rule does not allow you to start with "there exists x such that phi(x) holds" and somehow construct an actual term t such that phi(t) holds.
Now what Mike says in his slides is that it is consistent to modify intuitionistic mathematics into a "fully constructive mathematics" by strengthening the existential quantifier so that "exists x such that phi(x) holds" would imply that we can (computably) come up with a specific term t such that phi(t) holds.
As a consequence, any statement in fully constructive mathematics involving the existential quantifier as a conclusion is a stronger statement than its counterpart in classical mathematics, which is roughly why all functions end up being continuous: the definition of the notion of a function itself requires that the function be computable, i.e. that its values exist in the sense of the strengthened existential quantifier.
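In type-theoretic terms (a sketch in Lean 4 syntax; my own illustration, not from the slides), the strengthened quantifier is roughly the difference between a mere existence proposition and a Σ-type value that literally carries its witness:

```lean
-- A mere existential: asserts that some n with n * n = 9 exists.
example : ∃ n : Nat, n * n = 9 := ⟨3, rfl⟩

-- The strengthened reading: a value packaging the explicit witness,
-- which can be projected out and computed with.
def root9 : { n : Nat // n * n = 9 } := ⟨3, rfl⟩

#eval root9.val  -- 3
```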
> every classical theorem is equivalent to an intuitionistic theorem.
I understand that every intuitionistic theorem implies a classical one: give the same proof, and see what theorem you end up with in the classical world. But how is every classical theorem equivalent to an intuitionistic one? Can we construct the equivalent intuitionistic theorem in a similar way from any classical one, or is this a nonconstructive proof?
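If I understand it correctly, the standard answer is the Gödel-Gentzen double-negation translation: it effectively maps every classical theorem to a statement that is provable intuitionistically and classically equivalent to the original, so the construction is itself explicit rather than a nonconstructive existence claim. The simplest instance, sketched in Lean 4 syntax (my own illustration):

```lean
-- Excluded middle itself is not provable intuitionistically,
-- but its double negation is, using only constructive rules:
example (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))
```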
> In constructive mathematics, there can also be numbers d such that d^2 = 0 but not necessarily d = 0.
In the footnote, it mentions that d is not a real number (which seems right). But I find this very confusing -- this seems to be using a deliberately non-constructible number to facilitate proofs in constructive mathematics?
Regardless of whether this meets a formal criterion for being a tool in constructive mathematics, it does seem to be in direct opposition to the notion of intuitionist mathematics, which has as a fundamental goal to dispense with axiom-of-choice-style proofs of things that are clearly ridiculous (like Banach-Tarski).
We can explicitly construct such numbers using 2x2 matrices. These are called "dual numbers"[0], and the matrix representation is essentially an analogue of the one for complex numbers. So if you accept the complex unit, i, as a reasonable algebraic thing, then the "dual unit" shouldn't be too much of a stretch.
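A minimal sketch of that matrix construction (my own illustration in Python, using plain nested lists):

```python
# The dual numbers via 2x2 matrices. The "dual unit" eps is nilpotent:
# eps != 0, yet eps * eps = 0, mirroring i^2 = -1 for the complex unit.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dual(a, b):
    """Represent the dual number a + b*eps as [[a, b], [0, a]]."""
    return [[a, b], [0, a]]

eps = dual(0, 1)                  # the dual unit
zero = dual(0, 0)

assert eps != zero                # eps is not zero...
assert matmul(eps, eps) == zero   # ...yet its square is

# Dual numbers mechanize differentiation: f(a + eps) = f(a) + f'(a)*eps.
# For f(x) = x^2 at x = 3, the eps-coefficient is the derivative 6.
x = dual(3, 1)
print(matmul(x, x))  # [[9, 6], [0, 9]], i.e. 9 + 6*eps
```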
Anyway, all that is completely fine in standard foundations. The LEM contradictions only come in when we impose other axioms. For example, in Anders Kock's Synthetic Differential Geometry[1], we essentially start out by assuming that every function is differentiable.
This seems sorta crazy at first blush, and it does give contradictions, but the interesting bit is that those problems hinge solely on LEM. Thus, in a way, we're left with choosing between universal differentiability or LEM. Kock explores what happens if we pick the former, and it turns out to be pretty deep, interesting and useful.
> we can assume powerful axioms that would classically be inconsistent
Where could I find some explicit examples of this being useful?
One idea I had is that this might actually be very useful for proving things about cryptography. Generally, if you want to formally prove something is cryptographically sound, you can only do so in terms of the asymptotics of families of functions, rather than proving anything specific regarding, say, SHA256. But if you could constructively prove a statement like "(an adversary can break program P) -> (exists x, y: x != y and SHA256(x) = SHA256(y))", it seems that would serve as a proof that P is secure. Statements of this form are trivially provable classically, since "exists x, y: x != y and SHA256(x) = SHA256(y)" can be proven on its own by the pigeonhole principle, regardless of the correctness of P; but it seems that in order to prove this constructively you would actually have to find a collision in SHA256.
No. If a set A is finite, its elements are enumerable, and some property P is decidable, then (\forall x \in A) P(x) \vee ~P(x) is true even intuitionistically.
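To make that concrete, here is an illustrative sketch with a toy 8-bit hash standing in for SHA256 (actually enumerating SHA256 inputs is infeasible, but the logic is the same): over a finite, enumerable domain the pigeonhole argument is itself a terminating search procedure, i.e. it is already constructive.

```python
# Pigeonhole made constructive: a proof that 257 distinct inputs must
# collide in 256 buckets is, read constructively, a search that halts
# with explicit witnesses in hand.

def toy_hash(data: bytes) -> int:
    """A deliberately weak 8-bit hash (illustrative only)."""
    h = 0
    for b in data:
        h = (h * 31 + b) & 0xFF
    return h

def find_collision():
    """Enumerate 257 distinct inputs; 256 possible outputs force a hit."""
    seen = {}
    for i in range(257):
        x = i.to_bytes(2, "big")
        h = toy_hash(x)
        if h in seen:
            return seen[h], x  # the explicit witnesses
        seen[h] = x
    # unreachable: 257 distinct inputs cannot have 257 distinct 8-bit hashes

x, y = find_collision()
assert x != y and toy_hash(x) == toy_hash(y)
```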
A list of examples of "anti-classical" axioms:
- All functions are continuous, and all sets automatically carry the "right" topology. This means, for instance, that constructing the real numbers in the standard ways (Cauchy sequences or Dedekind cuts) also endows them with the desired topology (the Euclidean topology in this case).
- All functions are computable. This axiom is called "Church's thesis". It's nice because it shows that a lot of computability theory is similar to classical set theory, but with the addition of Church's thesis and the absence of LEM.
- Some numbers in a set called the "smooth reals" square to 0... without being zero. This seems to create a universe in which every geometric object is "infinitesimally straight". I haven't fully grasped why this works. It's like the dual numbers, but somehow made all-pervasive.
Generally speaking, the ability to extend intuitionistic mathematics by new axioms that are classically inconsistent is a bit of a misnomer. The real power is that these "axioms" allow you to easily and immediately specialize the generic language of intuitionistic mathematics into a domain-specific language for various areas.
So that
1. adding the continuity axiom gives you a domain-specific language which can only describe continuous functions and no others
2. adding the differentiation axiom gives you a domain-specific language called synthetic differential geometry in which you can only talk about differentiable functions
3. adding the computability axiom gives you a domain-specific language in which you can only talk about computable functions.
4. adding the law of the excluded middle gives you the domain-specific language of classical mathematics.
Now, each of the first three can also be interpreted within classical mathematics (which is how we know the extensions are consistent with classical mathematics), but that's analogous to writing an interpreter in C versus merely adding new syntax to an existing language (e.g. how DSLs can be implemented in LISP).
I don't think you'll find much use for it when reasoning about a specific computable hash function, because there's e.g. no clear distinction between using the pigeonhole principle and using the properties of the hash function itself.
You could, however, reason about a 'generic' hash function and show that a particular program can only be broken if someone can find a hash collision. That doesn't really require constructivist reasoning though.
I would like to point out that we don't even have a classical way of proving something to be cryptographically sound, since we don't know whether P != NP really holds. Perhaps there is a constructive proof of P != NP, and this will actually let you prove the cryptographic strength of something constructively.
And if you are already taking P != NP as an axiom, what difference does it make that constructivism takes out just another axiom that you assume to be true?
I’m a bit confused about functions always being continuous. Step functions being ill-defined makes sense, but what about modulo? f(x) = x mod 1 should be trivially discontinuous. I should also be able to construct the set of integers Z (using the unit 1 and the successor operator) and identify the largest integer smaller than x by intersecting [x-1, x) with Z. Do I need to show that the intersection always contains exactly one element?
Well, x mod 1 is just x minus a series of Heaviside step functions, each of which is non-constructive. The core problem is determining whether your real number x is equal to an integer. You might be able to prove that x is close to a particular n, but there may not be a constructive proof of any of x < n, x = n, or x > n. Hence you don't know whether x mod 1 should be x - (n-1) or x - n.
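A sketch of that failure in code (my own illustration, modeling a constructive real in the usual way as a program that returns a rational within 2^-k of the number for any k):

```python
# From finitely many approximations we can certify x < n or x > n,
# but never x = n: every approximation of the real leaves both
# outcomes open, so the refinement loop below would run forever on
# an input that happens to equal n. We cap it to make that visible.

from fractions import Fraction

def compare(x, n, max_precision=64):
    """Try to decide x < n or x > n from approximations of x.
    Returns '<' or '>', or None if undecided at max_precision
    (the idealized constructive procedure would just keep looping)."""
    for k in range(max_precision):
        approx, err = x(k), Fraction(1, 2 ** k)
        if approx + err < n:
            return "<"
        if approx - err > n:
            return ">"
    return None

# A real given only by approximations: 1/3, and (sneakily) exactly 1.
third = lambda k: Fraction(round(Fraction(1, 3) * 2 ** k), 2 ** k)
one = lambda k: Fraction(1)

print(compare(third, 1))  # '<'  -- resolved after a few refinements
print(compare(one, 1))    # None -- equality is never detected
```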
I think your definition depends on being able to decide for every real x whether n ∈ [x-1, x), for which you need that n < x. According to the slides that is not generally possible in constructive mathematics.
You sound to me like you think you're sarcastically saying something so obviously false that it constitutes a devastating argument against the entire idea of constructive mathematics. But whether the reals "exist" depends a lot on your definition of "exist", and is still an ongoing debate for some relevant definitions of "exist", even without bringing constructive mathematics into it. And it's not that hard to draw a perfectly conventional mathematician into an interesting discussion about whether the real numbers deserve to be as fundamental as they are when we can't point to any of them. That is, it isn't just that we can't "construct" them for some definition of construct; we can't even identify them for any definition of "identify".
Let me highlight again that the question isn't whether they are useful, or a mathematically valid construction; there are all kinds of other mathematically valid constructions far stranger than the "reals". The question is whether they deserve such a fundamental place in mathematics and education. I recall that even in high school, truly understanding the real numbers was a dividing point for the advanced math class: some really got the quirks, and some never really did and just parroted their way through the tests, wondering how the question of "numbers" got so complicated... and that was still a very partial and somewhat ad-hoc introduction to the topic ("Dedekind" doesn't come up, etc.), just enough to motivate limits.
Bingo! From a computational perspective, the non-computable reals are objects that not only require an infinite amount of memory to represent a single one, but also require an infinite amount of memory to even describe an algorithm to produce a single one. How could such an object possibly be constructed?
The real numbers, infinity, etc. are abstractions. Being able to think abstractly is one of the things that make humans different from other animals.
Constructive mathematics reminds me of the state of math before calculus, i.e. Achilles and the tortoise. Such a step back in power and expressiveness is probably a necessary first step before some kind of "closure" technique, like the limit in calculus, can be developed.
Constructive mathematics is the way forward. It's connected to linear logic via Chu spaces [0] (https://arxiv.org/abs/1805.07518). This correspondence is unreal, as constructive mathematics / linear logic unlocks the door to several key advantages, such as being closer to computation, probability theory, type theory, Lie groups, a generalized topology, optimization, "differentiable Turing machines", all the quantum stuff, game theory, etc.; the list goes on.
Linear logic is fundamentally about formulating computation as an adversarial relationship between two parties. It's also the foundation of e.g. the Rust ownership system (I know, I know, affine types).
It's also the foundation of the min-max algorithm which in turn is the foundation of Generative adversarial neural networks.
Ironically, this idea isn't exactly new. It's directly related to Dialectics (https://en.wikipedia.org/wiki/Dialectic) which has been present in Western philosophy since Aristotle through Kant, Hegel and Marx (it's a funny story).
I started teaching myself Racket recently and eventually ended up looking into the innards of the lambda calculus formalism. It got me thinking about what the ramifications of embracing constructivism would be for computability in general. Can you recommend any good introductory literature on constructivism? I'm familiar with the foundational crisis and the whole formalism vs logicism vs constructivism debate but never really dug deeper. Thanks.
How is constructive mathematics closer to Lie groups than regular mathematics? From what I read in that presentation, a bunch of basic assumptions in calculus (such as the intermediate value theorem) no longer hold, and the theory of Lie groups is built on manifolds/calculus/real analysis etc.
[0]:https://en.wikipedia.org/wiki/Dual_number
[1]:http://home.imf.au.dk/kock/sdg99.pdf
Though addition might not be defined on some of the reals, if you represent them like that.
But proof checkers like Lean/Coq all use Cauchy sequences to represent reals. So they are in effect constructible.
[0] https://en.wikipedia.org/wiki/Chu_space
This sweeping assumption leads to my sweeping assumption that you are "a programmer".