top | item 38104539

bvssvni | 2 years ago

The foundations of mathematics are all about language design.

To answer this question, one must say something about which language a foundation of mathematics is using. For example, Set Theory is formalized in First Order Logic. However, First Order Logic builds on Propositional Logic, so to construct an alternative foundation to Set Theory, you might consider starting with Propositional Logic and extending it another way.

First Order Logic extends Propositional Logic with predicates. This might seem like a good design at first, until you try to reason about uniqueness. In Set Theory, one requires an equality operator in addition to set membership in order to reason about uniqueness at all. This equality operator is ugly, because you have to rebuild objects that are isomorphic but use different encodings.

Predicates cause problems because they are unconstrained. To make it easier to formalize advanced theories, I suggested Avatar Logic, which replaces predicates with binary relations, avatars and roles. You can try it here: https://crates.io/crates/avalog

Also, most theories assume congruence for all predicates, which is bad for, e.g., the foundations of randomness.

The next crisis in "the foundations of mathematics" will be "tautological congruence". Luckily, this is already being worked on, by extending Intuitionistic Propositional Logic with exponential propositions. This theory is known as "HOOO EP" and is demonstrated here: https://crates.io/crates/hooo


JonChesterfield | 2 years ago

Equality works out cleanly if you distinguish between definitional equality and equivalence. Definitional equality is sometimes called syntactic: the definitions are the same symbols in the same arrangement.

  (lambda (x) (+ x x)) != (lambda (x) (* 2 x))
In particular, that's decidable for any finite expressions: walk the two expressions as ASTs, comparing the nodes on each side.
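A minimal sketch in Rust of that AST walk (the `Expr` type and names here are illustrative, not from any particular crate):

```rust
// Syntactic (definitional) equality: a structural walk over two
// expression trees, comparing node kinds and children pairwise.
#[derive(Debug, Clone)]
enum Expr {
    Var(String),
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
    Lambda(String, Box<Expr>),
}

fn syntactically_equal(a: &Expr, b: &Expr) -> bool {
    use Expr::*;
    match (a, b) {
        (Var(x), Var(y)) => x == y,
        (Num(m), Num(n)) => m == n,
        (Add(l1, r1), Add(l2, r2)) | (Mul(l1, r1), Mul(l2, r2)) => {
            syntactically_equal(l1, l2) && syntactically_equal(r1, r2)
        }
        (Lambda(p1, b1), Lambda(p2, b2)) => p1 == p2 && syntactically_equal(b1, b2),
        _ => false,
    }
}

fn main() {
    let x = || Box::new(Expr::Var("x".into()));
    // (lambda (x) (+ x x))
    let double = Expr::Lambda("x".into(), Box::new(Expr::Add(x(), x())));
    // (lambda (x) (* 2 x))
    let twice = Expr::Lambda(
        "x".into(),
        Box::new(Expr::Mul(Box::new(Expr::Num(2)), x())),
    );
    // Observationally equal, but syntactically distinct.
    assert!(!syntactically_equal(&double, &twice));
    assert!(syntactically_equal(&double, &double.clone()));
    println!("ok");
}
```

The walk terminates because each recursive call strictly shrinks the expressions, which is exactly why this notion of equality stays decidable.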

Equivalence / observational equality covers whether two expressions compute the same thing: find a counterexample to show they're not equivalent, or find a proof that both compute the same things, or that each can be losslessly translated into the other.
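The counterexample-hunting half of that can be sketched as a finite search; this is only evidence of equivalence, since a passing search proves nothing, while a single failing input is conclusive:

```rust
// Hedged sketch: search a finite sample of inputs for a point where
// the two functions disagree. Some(x) refutes observational equality;
// None merely means no counterexample was found in the sample.
fn find_counterexample(
    f: impl Fn(i64) -> i64,
    g: impl Fn(i64) -> i64,
    inputs: &[i64],
) -> Option<i64> {
    inputs.iter().copied().find(|&x| f(x) != g(x))
}

fn main() {
    let inputs: Vec<i64> = (-100..=100).collect();
    // x + x vs 2 * x: agree on every sampled input.
    assert_eq!(find_counterexample(|x| x + x, |x| 2 * x, &inputs), None);
    // x + x vs x * x: disagree, e.g. at x = 1.
    assert!(find_counterexample(|x| x + x, |x| x * x, &inputs).is_some());
    println!("ok");
}
```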

Requiring that your language can decide observational equivalence of any two terms seems like an obvious non-starter to me, but it also looks like the same thing as requiring that a programming language's type system be decidable.

bvssvni | 2 years ago

I want to reason hypothetically, which is why I don't use syntactic equality. I only use syntactic inequality in a very limited sense: e.g. two symbols `foo'` and `bar'` are symbolically distinct, so one can introduce `sd(foo', bar')`.

The reason is that if I have a proof `((x + x) == (2 * x))^true`, then I can treat the objects as if they were definitionally equal.

When definitional equality and syntactic equality are the same, one is assuming `(!(sd(a, b)) == (a == b))^true`, which yields a proof of `!sd(a, a)` for all `a`. This destroys the ability to reason hypothetically about exponential propositions in Path Semantics. For example, I might want to ask: if I had `sd(a, a)`, would this imply `a ~~ a` by `q_lift : sd(a, b) & (a == b) -> (a ~~ b)`? Yes!

However, if I already have `!sd(a, a)` for all `a`, then the above reasoning would just be reasoning from an absurdity.

I can always assume this in order to reason about it, but I don't have to make this assumption a built-in axiom of my theory.

When assuming tautological congruence, e.g. `(a == b)^true` in a context, one is not enforcing observational equality. So it is not the same as requiring the type system to be decidable. I can also make up new notions of equivalence and guard them, e.g. by not making them tautologically congruent.

dboreham | 2 years ago

Yes. Although my take on this is that it's about VM design, or the ISA for a VM upon which mathematics runs.

zmgsabst | 2 years ago

Modern mathematics deals with ISA design from the perspective of application:

A CPU, an FPGA, and a GPU are all Turing-complete substrates, yet they’re useful for wildly different things.

Category theory, type theory, and set theory can all embed arbitrary mathematics — but the encodings you get lend themselves to different applications.

E.g., category theory is very useful for abstracting structures to check whether they’re “the same”.