wlib|4 years ago
Somewhat unrelated to this link, but I really think that it’s interesting how much effort we put into the theory side of languages/types - with almost no one in industry applying these lessons. The essence of this is just something along the lines of “What if we unified values with types, so we could have more useful type checking?”
We have billions of dollars being spent and thousands of smart people working for decades, who don’t even get to use a proper type system; forget about statically-checked polymorphism or dependent typing. It’s not even as if we dropped the ball; we’re running in the wrong direction.
It’s almost trivial to bolt gradual typing onto a language. It’s also not too hard to retrofit dependent types (just don’t bother with Turing-incompleteness). The whole point of these type systems is to better model systems, so why not go the extensible route and let anyone hook into the “type checking” phase? Import affine types (that decades-old idea that made Rust popular); write your own domain-specific checker, even.
It’s so frustrating to see how beautiful ideas just die because we feel comfortable the way things are now. Maybe language was the wrong abstraction to begin with. Sorry to derail, but this feels like the only path to truly popularize the lambda cube as a concept.
dvt|4 years ago
> with almost no one in industry applying these lessons
Straight-up incorrect. Language designers think long and hard about how typing works in their languages. Rob Pike, Ken Thompson, and Russ Cox deliberated over generics for about a decade[1] before finally allowing them in Go.
> a proper type system
I'm not sure exactly what you mean by "proper" -- but "rigid" type systems are extremely cumbersome to use practically. (Typed) λ-calculus is an academic example; Haskell is a real-world example.
> not too hard to retrofit dependent types (just don’t bother with Turing-incompleteness)
Hard disagree. It actually is extremely hard: type resolution is undecidable, for one. So you need to carefully design type resolution in a way that the "edges" of types don't (or can't) break the runtime too badly.
> why not go the extensible route of letting anyone hook into the “type checking” phase
Absolutely terrible idea, for many reasons, but mainly because, if C/C++ macros are any indication, people will abuse any kind of compile-time (or pre-processing) trickery you give them access to.
> write your own domain-specific checker, even
I guess I'm in the opposite camp here: I think domain-specific languages are an absolute dumpster fire of abstraction and, 99.9% of the time, completely exceed the scope of the problem they're trying to solve. The purpose of a programming language is to be applicable to many classes of problems, and DSLs fly in the face of that pretty common-sense tenet.
[1] https://research.swtch.com/generic
ebingdom|4 years ago
I think that kind of proves their point. Parametric polymorphism is one of the most well-understood, least contentious extensions to the lambda calculus. It's formalized by System F and was implemented in programming languages 40 years ago, with great type inference for an important subset (Hindley-Milner). Yet generics were highly controversial in the Go community. And now, since none of the standard library was designed with generics in mind, it's full of unsafe patterns that amount to dynamic typing (e.g., the `Value` method of `Context`).
Despite Rob Pike et al. designing one of the most popular languages today (Go), I consider them experts in systems rather than in programming languages.
> I'm not sure exactly what you mean by "proper" -- but "rigid" type systems are extremely cumbersome to use practically. (Typed) λ-calculus is an academic example; Haskell is a real-world example.
I find Haskell a joy to use, and I cringe at having to use languages like Java and Go, which are a minefield of error-prone programming patterns (like using products instead of sums to represent "result or error"). Generally speaking, my Haskell code is shorter, less buggy, and more reusable than my Go code, so I'm not sure what you mean by "cumbersome".
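The products-versus-sums point can be shown in Go itself (a sketch; `Result`, `Ok`, and `Err` are hypothetical helpers written with Go 1.18+ generics, not stdlib types):

```go
package main

import (
	"errors"
	"fmt"
)

// The idiomatic Go "product": value and error travel together, and the
// type system does not stop a function from returning both (or neither).
func div(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

// A sum-style result: by construction, exactly one variant is meaningful.
type Result[T any] struct {
	val T
	err error
	ok  bool
}

func Ok[T any](v T) Result[T]      { return Result[T]{val: v, ok: true} }
func Err[T any](e error) Result[T] { return Result[T]{err: e} }

func main() {
	q, err := div(10, 3)
	fmt.Println(q, err) // 3 <nil>
}
```

The sum-style encoding makes "value and error at the same time" unrepresentable, which is exactly what `Either` gives Haskell for free.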
wlib|4 years ago
foobiekr|4 years ago
If you want a language and have a legitimate need, fire up ANTLR and build a quick compiler.
Parser cleverness of the “internal DSL” variety is on par with the C++ windowing libraries that used heavy operator overloading to be clever, stuff like “window += new Button()” and other nonsense. Just awful and torturous.
rowanG077|4 years ago
tluyben2|4 years ago
scotty79|4 years ago
As you start adding types to your working program, the compiler debugs it for you, finding various corner cases.
This motivates you to add more types.
ogogmad|4 years ago
black_knight|4 years ago
[0]: 1982 version: https://raw.githubusercontent.com/michaelt/martin-lof/master...
wlib|4 years ago
shrimpx|4 years ago
But one area of large impact for this pure type theory research seems to be the mechanization of mathematics. It looks probable that in the future, the standard way to do mathematics will be by programming it in a proof development system, aka dependently typed programming language.
ogogmad|4 years ago
This is a controversial opinion, and not all mathematicians are enthusiastic about it, whether rightly or wrongly. Michael Harris, for instance, is a major sceptic and opponent.
It is still a very interesting idea. And it has had some notable successes already.
creata|4 years ago
There are proof assistants without dependent types, like Isabelle/HOL.
armchairhacker|4 years ago
The more abstract parts like dependent types are really complicated and even unintuitive to use. See: issues with Rust’s borrow checker, or Haskell being so confusing.
Earlier languages like C and Java mostly persist for legacy reasons: existing codebases, libraries embedded in legacy code, or developers already familiar with them.
Untyped scripting languages are fine for scripts. Honestly idk why developers write big libraries in untyped languages like JavaScript and Python.
The main case where advanced formal methods are particularly useful is proving program correctness. And AFAIK this stuff is used by industry, although you don’t hear about it as much. The thing is, most programs either don’t “need” to be proved, or they’re way too big.
chriswarbo|4 years ago
I disagree with this: dependent types are way easier than lots of the convoluted schemes that non-dependent languages have come up with.
As a simple example, dependently-typed languages don't need parametric polymorphism or generics: we can achieve the same thing by passing types as arguments; e.g. a polymorphic identity function is just a function whose first argument is a type.
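A minimal sketch of that idea (in Lean 4 syntax; the point is only that the type `A` is an ordinary argument, not a separate generics mechanism):

```lean
-- Polymorphism as a plain function: the type is passed like any other value.
def ident (A : Type) (x : A) : A := x

#eval ident Nat 3        -- 3
#eval ident String "hi"  -- "hi"
```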
When I program without dependent types, I regularly find myself getting "stuck" inadvertently: knowing that (a) there's no way to make my current approach work in this language, (b) it would be trivial to make it work if I could pass around types as values, (c) I need to throw away what I've done and choose a different solution, and (d) the alternative solution I'll end up with will be less correct and less direct than my original approach (e.g. allowing more invalid states).
dwohnitmok|4 years ago
Nor do I think the value of dependent types comes from proofs, which I agree are still too cumbersome to really use at a large scale for most codebases.
My hypothesis is that the sweet spot for dependent types is to express invariants, and then to check those invariants with other mechanisms such as property tests, rather than with actual proofs (except for trivial ones). This does mean I favor a very proof-irrelevant style of dependently typed programming (i.e., in Agda's terms, way more `Prop` than `Set`; in Idris, lots of zero multiplicities for dependently typed arguments), which is quite different from how a lot of current dependently-typed code is written.
plafl|4 years ago
> Untyped scripting languages are fine for scripts. Honestly idk why developers write big libraries in untyped languages like JavaScript and Python.
And yet it works
bjourne|4 years ago
adamnemecek|4 years ago
unknown|4 years ago
[deleted]