top | item 28726632
caotic123 | 4 years ago

The syntax you presented is really very accessible. Of course, it makes things a little more verbose, but I think you are right: the path to bringing more attention to dependent types is probably making them more accessible too. Btw, great article; I'll take a more detailed look afterwards.

dwohnitmok | 4 years ago

That syntax might be unacceptably verbose with too many usually-implicit arguments. However, I also think that implicit arguments are primarily needed when working with heavily dependently-typed data structures, which I think is a bad way of working with dependent types (e.g. I don't like length-indexed vectors).

Instead, if you primarily work with non-dependent data structures and pass in dependently-typed invariants as separate arguments, you can avoid a lot of the complexity associated with dependent types. This allows your dependently-typed arguments to have no runtime representation; e.g. they could be given zero multiplicity in Idris. This in turn allows you to easily substitute a property test or an external solver (or just a human "trust me" override) instead of being forced to always prove everything, since you no longer require a true implementation of a dependent type.

This simultaneously reduces the need for elaborate implicit arguments and makes the verbosity much easier to deal with.
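A minimal Lean 4 sketch of this separated style (Lean stands in for Idris here, and the names are mine, not from the thread): the data stays a plain `List`, and the invariant travels as a `Prop` argument, which has no runtime representation.

```lean
-- Separated style: plain data plus an erased proof argument.
-- `h : xs ≠ []` lives in Prop, so it is erased at runtime; callers
-- can discharge it with a real proof, an external solver, or a
-- "trust me" axiom, without touching the data structure itself.
def safeHead (xs : List α) (h : xs ≠ []) : α :=
  match xs, h with
  | x :: _, _ => x
  | [],     h => absurd rfl h
```

A call like `safeHead [1, 2] (by decide)` typechecks, while calling it on `[]` would require a proof of `[] ≠ []`, which cannot exist.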

arianvanp | 4 years ago

This sounds interesting. Can you give a concrete example?

This seems to really fall into place with my feeling that heavily dependently typed data types, and programs using them, are hard to refactor, as the invariants that you break in the refactor cascade through your entire program.

Separating them probably makes life easier

lapinot | 4 years ago

> Instead if you primarily work with non-dependent data structures and then pass in dependently-typed invariants as a separate argument, you can avoid a lot of the complexity associated with dependent types. This allows your dependently typed arguments to not have a runtime representation, e.g. they could be given zero multiplicity in Idris. This in turn allows you to easily substitute a property test or an external solver (or just a human "trust me" override) instead of being forced to always prove everything since you no longer require a true implementation of a dependent type.

I'll provide a counterpoint, since I'm personally more of an "integrated" dependentist! Just to make things clearer: what you propose is to segregate computational data from propositional data (proofs), whereas I prefer "integrating" the two, writing "proofs with computational content" or "correct-by-construction code", depending on the point of view. Agreed, when separating you get (easier) solvers and escape hatches. But afaik a big reason for this separated style in Coq is that its support for dependent pattern-matching is bad (luckily there is now coq-equations).

If you want to state that `map` is size-preserving, in separated style you'll have the function `map : (X -> Y) -> list X -> list Y` and the lemma `map-len : (f : X -> Y) (xs : list X) -> len xs = len (map f xs)`. First, the verbosity increases with the complexity of your integrated data structure. Second, in integrated style you would have gotten this lemma for free, since `map : (X -> Y) -> vec X n -> vec Y n` on vectors admits the same implementation as on lists [1]. In my experience, in separated style you end up duplicating a lot of work. In language theory, subject reduction (reduction preserves typing) comes for free if you're working with intrinsically-typed terms. E.g. in my latest Coq work [2], manipulating purely computational De Bruijn indices would have been error-prone, whereas intrinsically typing the terms made it possible to just auto-complete the single correct implementation.
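To make the two styles concrete, here is a hedged Lean 4 rendering of the `map`/`map-len` example (Lean stands in for Coq/Agda; the names and the `Vec` definition are mine):

```lean
-- Separated style: map on plain lists, with the size-preservation
-- lemma stated and proved separately, by induction on the list.
theorem mapLen (f : α → β) : (xs : List α) → (xs.map f).length = xs.length
  | []      => rfl
  | _ :: xs => congrArg Nat.succ (mapLen f xs)

-- Integrated style: the length lives in the type index, so the same
-- implementation as list map *is* the proof that length is preserved.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

def Vec.map (f : α → β) : Vec α n → Vec β n
  | .nil       => .nil
  | .cons x xs => .cons (f x) (xs.map f)
```

In the integrated version no separate lemma exists to keep in sync with the implementation, which is the "for free" part; the cost is that every consumer of `Vec` now has to deal with the index.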

Going a bit further, both approaches are equivalent: one can prove that `vec X n ≅ (xs : list X) × (len xs ≡ n)`. There is ongoing work on first-class datastructure declarations (the ability to construct and manipulate data declarations from code) to actually declare such refinements and derive all the tooling (like [1]) automatically (search for "ornaments"; there are a couple of papers by McBride and Dagand). This would make it possible to mix and match both presentations. There is a fun chain where vectors are a refinement of lists and lists are a refinement of (unary-encoded) naturals, so vectors are actually lists indexed by the natural to which they erase (the so-called "reornament" of the list/nat ornament, yielding additional results generically). Very rich theory, very related to stuff like mathcomp's hierarchy-builder.
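The two halves of that equivalence can be sketched in Lean 4 (again a stand-in for the Coq/Agda statement; the roundtrip proofs that make it an isomorphism are omitted):

```lean
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- One direction: a vector is a list packaged with a proof about its
-- length. The proof component lives in Prop, so it erases at runtime,
-- making `toList` morally a no-op (cf. `erase` in [1]).
def Vec.toList : Vec α n → { xs : List α // xs.length = n }
  | .nil       => ⟨[], rfl⟩
  | .cons x xs =>
    let ⟨ys, h⟩ := xs.toList
    ⟨x :: ys, congrArg Nat.succ h⟩

-- The other direction: recompute the index from the list.
def Vec.ofList : (xs : List α) → Vec α xs.length
  | []      => .nil
  | x :: xs => .cons x (Vec.ofList xs)
```

Showing that these two maps are mutually inverse (up to transport along the length equation) is exactly the content of the `≅` above.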

[1] With use of `erase : vec X n -> list X` (which should compile to a no-op) and `erase-coherent : (xs : vec X n) -> len (erase xs) = n`.

[2] https://sbi.re/~peio/LCD.html and https://sbi.re/~peio/OGSD.html (code at https://git.sr.ht/~lapinot/ogs-coq)