Can someone give me an absolute layman explanation to homotopy type theory (and why it's so interesting)? I'm hearing about it everywhere, but most of the introductions to the subject assume the reader is already familiar with either type theory or category theory.
Typing systems (at least some of them) are also logics. This is called the Curry-Howard correspondence (CHC).
At first the CHC was made to work for propositional logic only, i.e. logic without for-all and existential quantification. Later, typing systems were developed which correspond to logic with quantification; this requires dependent types. However, early typing systems with dependent types had an inelegant handling of equality. Logic needs equality to express things like forall x. x = 3 => x > 2. To overcome this problem, Per Martin-Löf introduced a dependent type theory (MLTT) with "identity types" which enables an elegant handling of equality.
MLTT and all other extant formalisations of mathematics still suffered from another problem: many mathematical constructs are 'morally' the same, but formally distinct. For example you can define a group to be a triple (G, 1, *) where G is a set, 1 the neutral element and * the binary operation, or you can define it as (G, *, 1). The former and the latter are not the same thing, but we don't really want to say that they formalise different concepts. This is similar to how in many programming languages certain types are formally distinct but somehow capture the same content.
Homotopy type theory (HoTT) overcomes this problem (or parts of this problem) by adding a single axiom to MLTT, called the "univalence axiom" which can informally be rendered as:
Things that are 'morally' the same, really are identical.
The key idea behind the univalence axiom is that MLTT (and hence HoTT) restricts the mathematical objects that can be constructed in such a way that whenever you have objects O1 and O2 that are morally the same, there is a function that automatically transforms any construction involving O1 into a construction involving O2 (and vice versa), in a truth-preserving way.
It turns out that proofs in HoTT are formally similar to certain aspects of geometry/topology.
Much of current research on HoTT is about the consequences of the univalence axiom and the similarity between logic and geometry/topology.
The key hope in all this is that HoTT will streamline formalised mathematics.
Another addendum to the discussion, from an angle more relevant to computer programmers: if we find a computational meaning for the univalence axiom, then it'll be a crazy nuclear weapon of generic programming and code generation.
The univalence axiom says that two types are equal if we can convert their values back and forth (without loss of information). For example, one Boolean type may be defined as an enum containing True and False. But we could rename the values to Foo and Bar and it would be still a fine implementation, because it's easy to convert between the two representations. We can imagine more far-fetched representations too, with more complicated conversion functions.
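The renamed-Boolean example can be sketched in Haskell (the notation used elsewhere in this thread). The names `Bool2`, `toBool2` and `fromBool2` are made up for illustration; the point is just that the two conversion functions round-trip without losing information:

```haskell
-- A hypothetical second Boolean type with renamed constructors.
data Bool2 = Foo | Bar deriving (Eq, Show)

toBool2 :: Bool -> Bool2
toBool2 True  = Foo
toBool2 False = Bar

fromBool2 :: Bool2 -> Bool
fromBool2 Foo = True
fromBool2 Bar = False

-- Converting back and forth loses nothing:
--   fromBool2 . toBool2 = id   and   toBool2 . fromBool2 = id
```

Any pair of functions with this round-trip property witnesses an equivalence between the two types, which is exactly the raw material univalence works with.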
But in type theory, if two types are equal, then we can freely substitute one for another in any context! For example, if we have a function "Bool -> Bool", and we know that Bool is equal to Bool2, then we can coerce the function to type "Bool2 -> Bool2".
Imagine a complex program that uses a complex data type. If we can provide a correct conversion function to another type (i.e. prove that the type is equivalent to another one), then the univalence axiom can take the conversion function and our program as input, and spit out a new program that uses the new data type and preserves all properties of the original. By our current standards of generic programming, this is just insane!
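In today's Haskell the substitution has to be done by hand, one function at a time, by conjugating with the conversions. This sketch (the `Bool2`, `to`, `from` and `transport` names are invented for illustration) shows the manual version of what univalence would do automatically, for every type and every program at once:

```haskell
-- Hypothetical alternative Boolean representation.
data Bool2 = Foo | Bar deriving (Eq, Show)

to :: Bool -> Bool2
to True  = Foo
to False = Bar

from :: Bool2 -> Bool
from Foo = True
from Bar = False

-- Manual "transport": rewrite a Bool program into a Bool2 program
-- by conjugating with the conversions. Univalence would perform this
-- substitution automatically, in any context, with all properties
-- of the original program preserved by construction.
transport :: (Bool -> Bool) -> (Bool2 -> Bool2)
transport f = to . f . from
```

For example, `transport not` is the negation function on the `Foo`/`Bar` representation.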
Now, the issue is that we don't currently have a computational interpretation for the univalence axiom, so the above magic doesn't yet work. But there is meaningful progress towards that, and there are already some experimental implementations that can do some of the magic, for example this one:
https://github.com/mortberg/cubicaltt
In the comments of a recent mathematics-without-apologies blog post[1], there was a back-and-forth between Jacob Lurie and Urs Schreiber (the author of the OP article on nLab). I've been trying to parse the material on nLab for years; good to know I'm not the only one who seems to have trouble with it. A refreshingly blunt discussion, not to be missed!
[1] https://mathematicswithoutapologies.wordpress.com/2015/05/13...
I'll try to keep it short and skip the details of the mathematical framework itself.
Back in the early 1900s there was a school of mathematical logic that denied a certain axiom called the law of the excluded middle: p or not-p, or equivalently that !!p => p. They said that if you used this axiom your proof was (roughly speaking) not "constructive," whereas if you avoided it your proof was "constructive." They went as far as to deny all math that was not constructive (which I think is silly).
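Reading proofs as programs makes the asymmetry concrete. One direction of double negation is a perfectly good program, sketched here in Haskell with `Void` standing for falsity; the converse, `Not (Not p) -> p`, is the direction the constructivists rejected, and indeed no total program has that type:

```haskell
import Data.Void (Void)

-- Constructively, "not p" is "p implies falsity".
type Not p = p -> Void

-- This direction is constructively provable: given a proof of p,
-- refute any refutation of p by applying it to the proof.
doubleNegIntro :: p -> Not (Not p)
doubleNegIntro proof refutation = refutation proof

-- The converse, Not (Not p) -> p (equivalent to excluded middle),
-- admits no program: nothing can conjure a p out of a function
-- that merely consumes refutations.
```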
Their ideas faded out of popularity among mainstream mathematicians, but their goal was to rewrite the foundations of all mathematics using only constructive proofs, and to their credit they got pretty damn far in certain subjects like real analysis (calculus).
Today, their ideas are seeing a resurgence in the homotopy type theorists (HTTs). Their goal is also to rewrite all of mathematics in a constructive way, but now they have the hindsight of these great tools like category theory and homology and computers and all these great things developed in the mid to late 20th century. So they are working on building this framework and seeing what it can prove.
The HTTs are also claiming that proofs in their framework can be logically checked by a computer (because it is constructive) and one of their main selling points is that their work will eventually lead to "computer assisted mathematics," not in the sense of Stephen Wolfram but rather in the sense that a computer will check your proofs as you prove them and find stupid mistakes.
I personally am not all that excited about HTT, due to what I see as misguided hype around it. My preferred way to view it is as logicians studying a really fascinating new logical system and seeing how far they can push it. Saying things like "I deny all mathematics that's not constructive" and "computers will start doing our proofs for us," makes me cringe. But it is what it is :)
Homotopy type theory formalizes the informal reasoning style in which things that have "the same structure" are considered equal, in a constructive type-theory setting. Another achievement is the formulation of synthetic homotopy theory, i.e. homotopy theory developed axiomatically rather than on top of a geometric definition of spaces.
First the type theory part of homotopy type theory. Mathematicians have been doing logic for a long time, but mathematical statements and proofs were not themselves mathematical objects; they were just human-language text with some math notation thrown in. In the early 20th century, Brouwer, Heyting and Kolmogorov came up with a cool idea: treat proofs as first-class mathematical objects. Then giving a proof is equivalent to constructing a mathematical object.
* A proof of `P and Q` is a pair of proofs (a,b) where a is a proof of P and b is a proof of Q
* A proof of `P implies Q` is a function that takes a proof of P and returns a proof of Q
* A proof of `P or Q` is a pair (x,c) where either x=0 and c is a proof of P or x=1 and c is a proof of Q
In modern terms: to give a proof of `P and Q` is to give a value of the pair type `(P, Q)`. To give a proof of `P or Q` is to give a value of the disjoint sum type `P + Q`. To give a proof of `P implies Q` is to give a value of the function type `P -> Q`.
Take the theorem `(A and B) implies (B and A)`. A proof of this is, in Haskell notation:
proof :: (A,B) -> (B,A)
proof (a,b) = (b,a)
So a proof of `(A and B) implies (B and A)` is a function that swaps the order of a pair.
An interesting property of this way of doing things is that statements are not merely true or false, but statements can in general be true in more than one way. For example the statement `(A and A) implies A` has two different proofs:
proof1 :: (A,A) -> A
proof1 (a1,a2) = a1
proof2 :: (A,A) -> A
proof2 (a1,a2) = a2
Here proof1 is a function that returns the first component of a pair, and proof2 is a function that returns the second component.
I'll skip over dependent types.
That's the type theory part. Now the homotopy part. For a long time there was a problem with type theory. There was no really good way to do equality in it. To do math you want to use equality all the time. Since we've defined what the proofs of `P and Q`, `P or Q`, `P implies Q` are, what are the proofs of `x = y`? There has existed some notion of equality in type theory for a long time, but it wasn't satisfactory for various reasons. You couldn't prove two functions equal even though they gave the same outputs for the same inputs. Homotopy type theory fixes equality in type theory. The idea is that a proof of `x = y` depends on the type of x and y.
* Two pairs c,d are equal if their first components are equal and their second components are equal.
* Two functions f,g are equal if for all x, f x is equal to g x.
* etc.
More precisely, a proof of equality on the pair type (A,B) is a pair of a proof of equality on A and a proof of equality on B. Just like a statement can be true in more than one way (have more than one proof), things can now also be equal in more than one way (have more than one proof of equality).
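The componentwise rule for pairs can be sketched with the standard GADT encoding of an identity type in Haskell. This is only an approximation (it relates types, not arbitrary values, and every equality proof collapses to `Refl`), but the congruence rule has exactly the shape described above:

```haskell
{-# LANGUAGE GADTs #-}

-- A minimal identity type: Refl is the only proof of Equal a b,
-- and it exists exactly when a and b coincide.
data Equal a b where
  Refl :: Equal a a

-- Pair equality from componentwise equality: a proof of equality
-- on a pair type is built from a pair of component proofs.
pairEq :: Equal a b -> Equal c d -> Equal (a, c) (b, d)
pairEq Refl Refl = Refl
```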
Another extension that homotopy type theory makes concerns user-defined types. When you define a type, you are allowed to define its type of equality proofs at the same time. This lets you define things like integers modulo k (where (i mod k) is equal to (i+k mod k)), unordered sets (a set S is equal to any permutation of S), and more.
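Haskell has no such quotient types, so the closest everyday approximation is a custom equality instance. The `Mod12` name here is invented for illustration, and the comparison with HoTT is the interesting part: in Haskell the identification lives only in `Eq`, and nothing stops you from writing functions that violate it, whereas in HoTT every function is forced to respect the declared equality proofs.

```haskell
-- Integers modulo 12, approximated with a custom equality that
-- identifies i and i + 12. In HoTT this identification would be
-- part of the type itself; here it is merely a convention that
-- functions on Mod12 may or may not respect.
newtype Mod12 = Mod12 Integer

instance Eq Mod12 where
  Mod12 a == Mod12 b = a `mod` 12 == b `mod` 12

instance Show Mod12 where
  show (Mod12 n) = show (n `mod` 12)
```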
I've brushed over a lot of details and presented only a very narrow way of looking at type theory. Among the things I skipped:
1. A type itself is a value.
2. Full dependent types.
3. When are two types equal?
4. When are two proofs of equality equal?
5. What exactly is a function, what is a pair, etc.? This leads us into category theory, which gives an abstract description of functions and pairs.
Hopefully this still made some amount of sense.
Two line summary:
1. Type theory: using conventional mathematical objects like pairs and functions as proof objects.
2. Homotopy type theory: extends type theory with proofs of equality in a proper way.
Something I looked for on this very intriguing site was a reconciliation in my mind between dependent types and higher-kinded types. This has been a nagging question since the announcement of Scala moving to the DOT calculus[1].
While this page[2] helped a bit, doing more research unveiled an example put out by Runar here[3].
Has anyone else considered a reasonable method for addressing what higher-kinded types provide currently in a DOT calculus, especially as it pertains to "matching" the type parameters in a higher-kinded type?
1 - http://lampwww.epfl.ch/~amin/dot/fool.pdf
2 - http://ncatlab.org/nlab/show/dependent+type+theory
3 - https://gist.github.com/runarorama/33986541f0f1ddf4a3c7
> reconciliation in my mind between dependent types and higher kinded types.
They are orthogonal concepts. This is made very clear in Barendregt's λ-cube [1]. Orthogonal here means that a typing system might be higher-kinded without allowing type-dependency, or it might allow type-dependency without having higher kinds. An example of the latter is LF, the Logical Framework of Harper et al. Haskell, or at least some forms of Haskell, is an example of the former.
Higher-kinded types simply allow functions and function application at the type level which can take functions as arguments and can return functions. For example (lambda x.x => bool) and (lambda xy. x => y) are functions at the type level.
An example of a dependent type is List(2+4)[bool], the type of lists of length 6 carrying booleans. Here (2+4) is a program. This parameterisation happens without having type-functions.
[1] https://en.wikipedia.org/wiki/Lambda_cube
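Both halves of the distinction can be sketched in Haskell with GHC extensions. `Pairify` is a type-level function over a type constructor (higher-kinded, no dependency), while `Vec` indexes lists by their length in the spirit of List(2+4)[bool]. All names are invented for illustration, and since the index is a type-level natural rather than a genuine run-time value, the `Vec` half is only an approximation of full type-dependency:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

import Data.Kind (Type)

-- Higher-kinded: Pairify is a type-level function whose argument f
-- is itself a type constructor (kind Type -> Type).
newtype Pairify (f :: Type -> Type) a = Pairify (f a, f a)

-- Dependent-style: promote naturals to the type level and index
-- lists by their length.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

vlength :: Vec n a -> Int
vlength VNil         = 0
vlength (VCons _ xs) = 1 + vlength xs

-- A vector whose length is part of its type.
twoBools :: Vec ('S ('S 'Z)) Bool
twoBools = VCons True (VCons False VNil)
```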
"Imagine a world in which logic, programming, and mathematics are unified, in which every proof corresponds to a program, every program to a mapping, every mapping to a proof!"
The thing I never understood about statements of this sort is that, in my understanding of model theory, Gödel's theorem and so forth, a proof is a rare thing. Most of the true statements in a given model don't have proofs. Any consistent proof system admits an uncountable set of independent theorems, and so forth.
ericbb | 10 years ago:
Edit: Here's a more direct page[2] if you just want the video.
[1] http://blogs.scientificamerican.com/guest-blog/voevodskye280...
[2] http://www.heidelberg-laureate-forum.org/blog/video/lecture-...
szany | 10 years ago:
https://existentialtype.wordpress.com/2011/03/27/the-holy-tr...