Parser combinators extended to typed trees, with an efficient syntax, would be a great basis for a programming language. Macros would be a primary application rather than strapped-on as they are now in every language.
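A minimal sketch of what "parser combinators over typed trees" could mean (my own illustration, not any particular library's API): each parser carries a result type, and combinators like `then` and `map` compute the type of the tree they build.

```python
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

@dataclass
class Parser(Generic[A]):
    # A parser consumes a prefix of the input and yields a typed result
    # plus the remaining input, or None on failure.
    run: Callable[[str], Optional[tuple[A, str]]]

    def map(self, f: Callable[[A], B]) -> "Parser[B]":
        def go(s: str):
            r = self.run(s)
            return None if r is None else (f(r[0]), r[1])
        return Parser(go)

    def then(self, other: "Parser[B]") -> "Parser[tuple[A, B]]":
        # Sequencing: the result type is a typed pair, not a token list.
        def go(s: str):
            r1 = self.run(s)
            if r1 is None:
                return None
            r2 = other.run(r1[1])
            return None if r2 is None else ((r1[0], r2[0]), r2[1])
        return Parser(go)

    def alt(self, other: "Parser[A]") -> "Parser[A]":
        # Ordered choice with backtracking on the same input.
        return Parser(lambda s: self.run(s) or other.run(s))

def digits() -> Parser[int]:
    def go(s: str):
        i = 0
        while i < len(s) and s[i].isdigit():
            i += 1
        return (int(s[:i]), s[i:]) if i else None
    return Parser(go)

def char(c: str) -> Parser[str]:
    return Parser(lambda s: (c, s[1:]) if s.startswith(c) else None)

# "3+4" parses to the typed tree ((3, ('+', 4)), ''), checkable by mypy.
expr = digits().then(char("+").then(digits()))
print(expr.run("3+4"))  # ((3, ('+', 4)), '')
```

A macro system in such a language would manipulate these typed trees directly, rather than raw token streams.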
Another clue that we're watching a planet form, that a fundamental idea is about to coalesce: combinators for applying tactics in a theorem-proving language such as Lean bear an uncanny resemblance to parser combinators.
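The resemblance is structural: a tactic, like a parser, is a partial state transformer, so the same combinator shapes (sequence, alternative, repetition) apply. A toy model, with goal states as plain lists and a hypothetical `close` tactic standing in for real Lean tactics:

```python
from typing import Callable, Optional

# A goal state is a list of open goals; a tactic transforms it or fails.
# Compare a parser: input -> Optional[(result, remaining input)].
Goals = list[str]
Tactic = Callable[[Goals], Optional[Goals]]

def then(t1: Tactic, t2: Tactic) -> Tactic:
    # Sequencing, like a parser's `then`: run t1, feed its output to t2.
    def go(gs: Goals):
        r = t1(gs)
        return None if r is None else t2(r)
    return go

def or_else(t1: Tactic, t2: Tactic) -> Tactic:
    # Alternative, like a parser's `alt`: try t2 only if t1 fails.
    def go(gs: Goals):
        r = t1(gs)
        return r if r is not None else t2(gs)
    return go

def repeat(t: Tactic) -> Tactic:
    # Apply t until it fails, like `many` on parsers; never itself fails.
    def go(gs: Goals):
        while True:
            r = t(gs)
            if r is None:
                return gs
            gs = r
    return go

def close(name: str) -> Tactic:
    # Hypothetical atomic tactic: discharges the first goal if it matches.
    def go(gs: Goals):
        return gs[1:] if gs and gs[0] == name else None
    return go

prove = repeat(or_else(close("trivial"), close("rfl")))
print(prove(["trivial", "rfl", "trivial"]))  # []
```

In Lean these shapes appear as `<;>`, `first`, and `repeat`; the point is that both domains are instances of the same combinator algebra.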
Today, most programmers view verified code as impractical. Huh? Is getting hacked, or the wrong answer, practical? In the future we'll be able to use the same code to write efficient programs, to run efficient randomized tests, and to verify correctness. It will feel like targeting different architectures.
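A small sketch of the "same code, different targets" idea, under my own assumptions: one specification function, here exercised by randomized testing; a proof assistant would instead discharge it statically for all inputs.

```python
import random

def insert_sorted(xs: list[int], x: int) -> list[int]:
    # The program: insert x into an already-sorted list.
    i = 0
    while i < len(xs) and xs[i] < x:
        i += 1
    return xs[:i] + [x] + xs[i:]

def spec(xs: list[int], x: int) -> bool:
    # The specification: output is sorted and is a permutation of xs + [x].
    ys = insert_sorted(xs, x)
    return ys == sorted(ys) and sorted(ys) == sorted(xs + [x])

# Target 1: randomized testing -- sample the (far simpler) input type.
random.seed(0)
for _ in range(1000):
    n = random.randrange(8)
    xs = sorted(random.randrange(-10, 10) for _ in range(n))
    assert spec(xs, random.randrange(-10, 10))

# Target 2 (not shown): a theorem prover checks `spec` for *all* inputs.
print("all sampled cases pass")
```

Property-based testing tools like QuickCheck and Hypothesis already work this way; the vision above is that verification becomes just another backend for the same specification.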
The key idea in type-based theorem proving is "propositions as types": one defines a type to state a proposition, then exhibits an instance of that type to prove the proposition. Existence is boolean (true or false), which is also the coefficient semiring of automata theory in computer science. There, one generalizes to arbitrary coefficient semirings and gets probability models such as hidden Markov chains as part of the same theory. How does one generalize "propositions as types" to arbitrary coefficients? Does this subsume random testing? The parallel is that one wants a valid instance of a type to prove a theorem, and a probability distribution on valid instances of a (far simpler) type for random testing.
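The automata-theory side of this parallel is concrete: the same evaluator, parameterized by a semiring, answers "does an accepting path exist?" over booleans and "what is the probability of this observation?" over the probability semiring. A sketch, with made-up transition tables:

```python
def score(transitions, init, final, word, add, mul, zero):
    # Generic weighted-automaton evaluation over a semiring (add, mul, zero).
    # transitions[(state, symbol)] -> list of (next_state, weight)
    weights = dict(init)
    for sym in word:
        new = {}
        for s, w in weights.items():
            for t, wt in transitions.get((s, sym), []):
                new[t] = add(new.get(t, zero), mul(w, wt))
        weights = new
    total = zero
    for s, w in weights.items():
        total = add(total, mul(w, final.get(s, zero)))
    return total

# Boolean semiring (or, and): recognition. This automaton accepts
# words over {a, b} that end in "ab".
trans_bool = {
    (0, "a"): [(1, True)], (0, "b"): [(0, True)],
    (1, "a"): [(1, True)], (1, "b"): [(2, True)],
    (2, "a"): [(1, True)], (2, "b"): [(0, True)],
}
accepts = score(trans_bool, {0: True}, {2: True}, "aab",
                add=lambda x, y: x or y, mul=lambda x, y: x and y, zero=False)
print(accepts)  # True

# Probability semiring (+, *): the same evaluator on a two-state hidden
# Markov chain yields the probability of observing the word.
trans_prob = {
    (0, "a"): [(0, 0.4), (1, 0.1)], (0, "b"): [(0, 0.3), (1, 0.2)],
    (1, "a"): [(0, 0.2), (1, 0.2)], (1, "b"): [(0, 0.1), (1, 0.5)],
}
p = score(trans_prob, {0: 1.0}, {0: 1.0, 1: 1.0}, "ab",
          add=lambda x, y: x + y, mul=lambda x, y: x * y, zero=0.0)
```

The open question in the comment is whether "propositions as types" admits the same move: replace "an inhabitant exists" with a weight on inhabitants, so that random testing falls out as the probability-semiring instance.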
Anyone designing a language in this decade should look to program verification and theorem proving for the deepest influences.
Syzygies|4 years ago
The Hitchhiker's Guide to Logical Verification https://github.com/blanchette/logical_verification_2020/raw/...
Benjamin Pierce: Backtracking Generators for Random Testing https://www.youtube.com/watch?v=dfZ94N0hS4I&t=489s
Homotopy Type Theory https://homotopytypetheory.org/book/