The aforementioned size and density of Whitehead & Russell's Principia make the few dozen pages of Gödel's On Formally Undecidable Propositions of Principia Mathematica and Related Systems one of the greatest "i ain't reading all that/i'm happy for u tho/or sorry that happened" mathematical shitposts of all time.
Gödel had great respect for their work and was considered one of only a few people at the time to have read and understood it. He wrote an entire paper later in life explaining that he wouldn't have come to his result without Principia, because it showed him a base case to work from. He and Russell continued to meet and discuss logic well into the '50s.
Thanks for sharing! I like to look at this example in the context of the debate over whether mathematics is invented or discovered.
> That is how Whitehead and Russell did it in 1910. How would we do it today? A relation between S and T is defined as a subset of S × T and is therefore a set.
> A huge amount of other machinery goes away in 2006, because of the unification of relations and sets.
Relations are a very intuitive thing that I think most people would agree are not the invention of one person. But the language used to describe and manipulate them mathematically is an invention, and it can have a dramatic effect on the way they are communicated.
I'd say mathematics is discovered and definitions are invented. E.g. "ordered pair" is not part of set theory, it's an invented name we give to a convenient definition of a set schema.
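For example, the standard Kuratowski encoding defines the ordered pair (a, b) as the set {{a}, {a, b}} — a quick sketch with Python frozensets, purely to illustrate the "set schema" point:

```python
# Kuratowski's definition: the ordered pair (a, b) is the set {{a}, {a, b}}.
# frozensets stand in for mathematical sets (hashable, order-free).
def kpair(a, b):
    return frozenset([frozenset([a]), frozenset([a, b])])

assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)   # order matters, even though sets don't
```

The "ordered pair" name is the invention; the underlying set was always there.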
Even base-N representations are an invention: S() and zero are all you need, but Roman Numerals were an improvement over base-1 representations and base-N is significantly more convenient to work with.
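A minimal sketch of the S()-and-zero construction, with hypothetical helper names, showing that 1+1=2 falls out of the recursive definition of addition:

```python
# Peano-style naturals: zero plus a successor constructor S().
# A number n is n nested applications of S around zero.
zero = ()

def S(n):
    # successor: wrap the predecessor in one more layer
    return (n,)

def add(a, b):
    # recursive definition: a + 0 = a, and a + S(b) = S(a + b)
    return a if b == zero else S(add(a, b[0]))

def to_int(n):
    # count the layers to recover the familiar positional numeral
    count = 0
    while n != zero:
        n = n[0]
        count += 1
    return count

one = S(zero)
print(to_int(add(one, one)))  # 2, i.e. S(S(zero))
```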
That was a lovely read, thank you. I particularly enjoyed the analogy between 'a poorly-written computer program' (i.e. one with a lot of duplication due to inadequate abstraction) and the importance of using the appropriate mathematical machinery to reduce the complexity/length of a proof. It brings the Curry–Howard isomorphism to mind: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
It's easier if you start from something closer to Peano arithmetic or Boyer-Moore theory. I used to do a lot with constructive Boyer-Moore theory and their theorem prover. It starts with
(ZERO)
and numbers are
(ADD1 (ZERO))
(ADD1 (ADD1 (ZERO)))
etc. The prover really worked that way internally, as I found out when I input a theorem with numbers such as 65536 in it. I was working on proving some things about 16-bit machine arithmetic, and those big numbers pushed SRI International's DECSystem 2060 into thrashing.
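A rough, string-based illustration (not the actual prover's internals) of why numbers like 65536 caused thrashing: the unary term size is linear in the value, while positional notation is logarithmic:

```python
# Build the unary (ADD1 ... (ZERO)) term for n as a string and compare
# its size with the positional (binary) representation.
def unary_term(n):
    return "(ADD1 " * n + "(ZERO)" + ")" * n

assert unary_term(2) == "(ADD1 (ADD1 (ZERO)))"

n = 65536
print(len(unary_term(n)))  # 7*n + 6 characters -- linear in the value
print(len(bin(n)))         # 19 characters -- logarithmic in the value
```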
Here's the prover building up basic number theory, one theorem at a time.[1]
This took about 45 minutes in 1981 and takes under a second now.
Constructive set theory without the usual set axioms is messy, though. The problem is equality. Informally, two sets are equal if they contain the same elements. But in a strict constructive representation, the representations have to be equal, and representations have order. So sets have to be stored sorted, which means much fiddly detail around maintaining a valid representation.
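A toy sketch of the "sets stored sorted" idea: force a canonical sorted, duplicate-free representation, so that set equality collapses to plain representation equality. (Python, illustrative only — a prover would have to carry the sortedness invariant through fiddly lemmas rather than rebuild it.)

```python
# Constructive "sets" as canonical sorted tuples: two sets are equal
# exactly when their representations are structurally equal.
def canon(elems):
    return tuple(sorted(set(elems)))

def insert(s, x):
    # rebuild the canonical form; a real prover would instead prove an
    # ordered-insert operation preserves the invariant
    return canon(s + (x,))

assert canon([2, 1, 2]) == canon([1, 2])   # same set, same representation
assert insert((1, 3), 2) == (1, 2, 3)
```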
What we needed, but didn't have back then, was a concept of "objects". That is, two objects can be considered equal if they cannot be distinguished via their exported functions. I was groping around in that area back then, and had an ill-conceived idea of "forgetting", where, after you created an object and proved theorems about it, you "forgot" its private functions. Boyer and Moore didn't like that idea, and I didn't pursue it further.
The main point of the parent article is not about 1+1=2, but about the importance of the concept of the ordered pair in mathematics, and about how the introduction and use of this concept simplified proofs that had been far too complicated before it.
While the article is nice, I believe that the tradition entrenched in mathematics of taking sets as a primitive concept and then defining ordered pairs using sets is wrong. In my opinion, the right presentation of mathematics must start with ordered pairs as the primitive concept and then derive sequences, sets and multisets from ordered pairs.
The reason why I believe this is that there are many equivalent ways of organizing mathematics, which differ in which concepts are taken as primitive and in which propositions are taken as axioms, while the other concepts are defined based on the primitives and other propositions are demonstrated as theorems, but most of these possible organizations cannot correspond to an implementation in a physical device, like a computer.
The reason is that among the various concepts that can be chosen as primitive in a mathematical theory, some are in fact more simple and some are more complex and in a physical realization the simple have a direct hardware correspondent and the complex can be easily built from the simple, while the complex cannot be implemented directly but only as structures built from simpler components. So in the hardware of a physical device there are much more severe constraints for choosing the primitive things than in a mathematical theory that only describes the abstract properties of operations like set union, without worrying how such an operation can actually be executed in real life.
The ordered pair has a direct hardware implementation: it corresponds to the CONS cell of LISP. In a mathematical theory where the ordered pair is taken as primitive and sets are among the things defined using ordered pairs, many proofs correspond to how various LISP functions would be implemented. Unlike ordered pairs, sets have no direct hardware implementation. In any physical device, including the human mind, sets are implemented as equivalence classes of sequences, while sequences are implemented from ordered pairs.
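A sketch of that viewpoint, with cons cells as plain Python tuples: sequences are built from ordered pairs, and "set equality" is defined as mutual containment, i.e. sequences considered up to order and duplication. (The names here are illustrative, not from any particular LISP.)

```python
# Cons-cell sketch: sequences from ordered pairs, sets as equivalence
# classes of sequences under "same members".
NIL = None

def cons(head, tail):
    return (head, tail)   # the primitive ordered pair

def member(x, seq):
    while seq is not NIL:
        if seq[0] == x:
            return True
        seq = seq[1]
    return False

def set_eq(a, b):
    # two sequences represent the same set iff each is contained in the other
    def subset(s, t):
        while s is not NIL:
            if not member(s[0], t):
                return False
            s = s[1]
        return True
    return subset(a, b) and subset(b, a)

s1 = cons(1, cons(2, NIL))
s2 = cons(2, cons(1, cons(2, NIL)))   # different sequence, same set
assert set_eq(s1, s2)
assert not set_eq(s1, cons(1, NIL))
```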
The non-enumerable sets are not defined as equivalence classes of sequences and they cannot be implemented as such in a physical device but at most as something of the kind "I recognize it when I see it", e.g. by a membership predicate.
However infinite sets need extra axioms in any kind of theory and a theory of finite sets defined constructively from ordered pairs can be extended to infinite sets with appropriate additional axioms.
Which definition takes up fewer components in a digital circuit is a terrible criterion. The whole point of math is that we can reason about the most conceptually simple idea, free of engineering constraints. Sets existed before circuits! And before digital logic, the only "hardware representation" was an analog voltage, which cannot easily represent a pair.
Also it’s not even true. There is no hardware representation for the ordered pair containing the earth and the moon. You now need a bit encoding of the information.
The distinctions of infinite constructions you mention are already well understood. See “recursively enumerable set”.
Ordered pairs are trivially definable in terms of sets. It's a distinction which does not change any of the foundational proofs and gives you no new insight. This is like arguing that bounded vs counted ranges are foundationally important. We can show they are equivalent in one paragraph and move on.
Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of all Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
> Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of all Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
No-one can stop you from using terms as you please and investigating their consequences but, at least in modern mathematical parlance, a binary relation is the set of ordered pairs that are "related" by it. (Your relation would seem to be just a bare set, or perhaps a unary relation, not a binary relation, which I think is what is usually meant when no modifier is given.)
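On a finite domain this is easy to see concretely: two "intensionally" different definitions that relate exactly the same pairs give literally the same set, and hence the same relation in the modern reading. A small Python check:

```python
# A binary relation on a finite domain, represented as its set of
# ordered pairs. Two different-looking defining conditions with the
# same extension produce the same set.
dom = range(5)

r1 = {(a, b) for a in dom for b in dom if a < b}       # "a is less than b"
r2 = {(a, b) for a in dom for b in dom if b - a >= 1}  # "b exceeds a by at least 1"

assert r1 == r2   # same extension, hence the same relation
```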
> The ⊢ symbol has not changed; it means that the formula to which it applies is asserted to be true. ⊃ is logical implication, and ≡ is logical equivalence.
A strange thing happened to me in mathematics. When I got to the point where these symbols started showing up (ninth grade, more or less) I did not get a thorough explanation of the symbols; they just appeared and I tried to intuit what they meant. As more symbols crept into my math, I tried to ignore them where possible. Eventually this meant that I could not continue learning math, as it became mostly all such symbols.
I got as far as a minor in math. I'm not sure how any of this happened, but I wish I had had a table of these symbols in ninth grade.
I often use the analogy "1+1=?" in debates with both friends and strangers, especially when discussing subjective topics like politics, religion, and geopolitical conflicts. It's a simple way to highlight how different perspectives can lead to vastly different conclusions.
For instance, I frequently use the example "1+1=10" in binary to illustrate that, while our reasoning may seem fundamentally different, it's simply because we're starting from different premises, using distinct methods, and approaching the same problem from unique angles.
> It's a simple way to highlight how different perspectives can lead to vastly different conclusions.
But 1+1=10 and 1+1=2 are not different conclusions, they are precisely the same conclusions but with different representations.
A better example might be 9 vs 6 written on the parking floor: depending on where you're standing, you'll read the number differently (and yet one of the readings is wrong).
I know of 7 different ways to do 1+1 getting 5 different answers. I use most of them in my day to day work as a programmer. Most of the time 1+1=10 because as a programmer I work in binary.
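A few of those readings of "+", sketched in Python (certainly not an exhaustive list of the seven):

```python
# Different answers to 1+1, depending on which operation "+" names:
assert 1 + 1 == 2                   # ordinary integer addition
assert "1" + "1" == "11"            # string concatenation
assert 1 ^ 1 == 0                   # XOR, i.e. addition in GF(2)
assert 1 | 1 == 1                   # bitwise OR: saturating one-bit "addition"
assert format(1 + 1, "b") == "10"   # the same sum, written in binary
```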
“1 + 1 = 2” is only true in our imagination, according to logical deterministic rules we’ve created. But reality is, at its most fundamental level, probabilistic rather than deterministic.
Luckily, our imaginary reality of precision is close enough to the true reality of probability that it enables us to build things like computer chips (i.e., all of modern civilization). And yet, the nature of physics requires error correction for those chips. This problem becomes more obvious when working at the quantum scale, where quantum error correction remains basically unsolved.
I’m just reframing the problem of finding a grand unified theory of physics that encompasses a seemingly deterministic macro with a seemingly probabilistic micro. I say seemingly, because it seems that macro-mysteries like dark matter will have a more elegant and predictive solution once we understand how micro-probabilities create macro-effects. I suspect that the answer will be that one plus one is usually equal to two, but that under odd circumstances, it is not. That’s the kind of math that will unlock new frontiers for hacking the nature of our reality.
Tainnor | 1 year ago
Either I misunderstand the notation or there seems to be something missing there - the right hand side of that implication arrow is not a formula.
I would assume that what is meant is α ⊂ β → α ∪ (β − α) = β
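The proposed identity is easy to spot-check over all subsets of a small universe, e.g.:

```python
# Spot-check α ⊂ β → α ∪ (β − α) = β over every pair of subsets of {1, 2, 3}
from itertools import combinations

universe = {1, 2, 3}
subsets = [set(c) for r in range(4) for c in combinations(universe, r)]

for a in subsets:
    for b in subsets:
        if a <= b:                       # α ⊆ β
            assert a | (b - a) == b      # α ∪ (β − α) = β
```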
Animats | 1 year ago
Fun times.
[1] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...
jk4930 | 1 year ago
https://us.metamath.org/mpeuni/mmset.html#trivia
https://us.metamath.org/mpeuni/2p2e4.html
cubefox | 1 year ago
Unrelated, but why doesn't Hacker News have support for latex? And markdown, for that matter?
ngcc_hk | 1 year ago
The issue is that 1+1 has no guarantee of being two. Look carefully and you can see the first 1 is exactly the same as the second 1!
Hence, take the set of all Russells who do that kind of maths and add it to another Russell who also does that maths: you still end up with one Russell.
That is why they go to all the trouble of saying there is no intersection, that the first oneness set does not overlap with the second oneness set, etc.
QED
tightbookkeeper | 1 year ago
Actually new ideas will give new results.
feoren | 1 year ago
One plus one equals two.
One + 0x01 ≡ 2.0
1+1=10 (in binary)
None of these are "vastly different conclusions". None of these are starting from different premises. None of these are using different reasoning. You're literally just writing it differently. Okay, so? This is a pointless distinction that doesn't even apply in a verbal debate at all. It'd be like having a philosophical debate with someone and them suddenly saying "oh yeah, but what if we were arguing in Spanish!? Wouldn't that BLOW YOUR MIND!?" No? It has absolutely nothing to do with anything. I would be annoyed at you if you tried to use this in an argument with me.
somat | 1 year ago
You only need mid values of 1 for 1 + 1 to equal 3
nwnwhwje | 1 year ago
Also:
١ + ٥ = ٦
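(Those are Eastern Arabic numerals for 1 + 5 = 6. Incidentally, Python's int() accepts any Unicode decimal digits, so this one can be checked directly:)

```python
# ١, ٥ and ٦ are the Eastern Arabic digits 1, 5 and 6;
# int() understands any Unicode decimal digit, not just 0-9
assert int("١") + int("٥") == int("٦")   # 1 + 5 == 6
```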