top | item 29949664

wlib | 4 years ago

Somewhat unrelated to this link, but I really think it’s interesting how much effort we put into the theory side of languages/types, with almost no one in industry applying these lessons. The essence of this is just something along the lines of “What if we unified values with types, so we could have more useful type checking?” We have billions of dollars being spent and thousands of smart people working for decades who don’t even get to use a proper type system, let alone statically-checked polymorphism or dependent typing. It’s not even as if we dropped the ball; we’re running in the wrong direction.

It’s almost trivial to bolt gradual typing onto a language. It’s also not too hard to retrofit dependent types (just don’t bother with Turing-incompleteness). The whole point of these type systems is to better model systems, so why not go the extensible route of letting anyone hook into the “type checking” phase? Import affine types (that decades-old idea that got Rust popular); write your own domain-specific checker, even.

It’s so frustrating to see beautiful ideas die because we feel comfortable with the way things are now. Maybe language was the wrong abstraction to begin with. Sorry to derail, but this feels like the only path to truly popularizing the lambda cube as a concept.
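
The “hook into the type-checking phase” idea can be sketched concretely. What follows is a purely illustrative TypeScript toy (every name is invented, not from any real compiler): a checker phase that runs user-registered passes over a tiny expression AST, so a domain-specific rule is just another imported function.

```typescript
// Hypothetical sketch of a pluggable "type checking" phase.
// All names and shapes here are invented for illustration.

// A toy expression AST with two node kinds.
type Expr =
  | { kind: "lit"; value: number }
  | { kind: "div"; left: Expr; right: Expr };

// A checker pass is just a function from an expression to diagnostics.
type CheckerPass = (e: Expr) => string[];

// The checking phase runs every registered pass over the whole tree.
function check(e: Expr, passes: CheckerPass[]): string[] {
  const here = passes.flatMap((p) => p(e));
  if (e.kind === "div") {
    return [...here, ...check(e.left, passes), ...check(e.right, passes)];
  }
  return here;
}

// A domain-specific pass anyone could write and import:
// forbid division by a literal zero.
const noDivByZero: CheckerPass = (e) =>
  e.kind === "div" && e.right.kind === "lit" && e.right.value === 0
    ? ["division by literal zero"]
    : [];

const program: Expr = {
  kind: "div",
  left: { kind: "lit", value: 1 },
  right: { kind: "lit", value: 0 },
};

console.log(check(program, [noDivByZero])); // one diagnostic reported
```

A real implementation would of course hook into an actual compiler’s IR, but the shape — checkers as plain importable functions — is the point.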

dvt|4 years ago

You said a lot of stuff, and I disagree with most of it, but I'm open to dialogue.

> with almost no one in industry applying these lessons

Straight-up incorrect. Language designers think long and hard about how typings work in their languages. Rob Pike, Ken Thompson, and Russ Cox deliberated generics for like a decade[1] before finally allowing them in Go.

> a proper type system

I'm not sure exactly what you mean by "proper" -- but "rigid" type systems are extremely cumbersome to use practically. (Typed) λ-calculus is an academic example; Haskell is a real-world example.

> not too hard to retrofit dependent types (just don’t bother with Turing-incompleteness)

Hard disagree. It actually is extremely hard: type resolution is undecidable, for one. So you need to carefully design type resolution in a way that the "edges" of types don't (or can't) break the runtime too badly.

> why not go the extensible route of letting anyone hook into the “type checking” phase

Absolutely terrible idea, for many reasons, but mainly because, if C/C++ macros are any indication, people will abuse any kind of compile-time (or pre-processing) trickery you give them access to.

> write your own domain-specific checker, even

I guess I'm in the opposite camp here, I think domain specific languages are an absolute dumpster fire of abstraction and 99.9% of the time completely exceed the scope of the problem they're trying to solve. The purpose of a programming language is to be applicable to many classes of problems, and DSLs fly in the face of that pretty common-sense tenet.

[1] https://research.swtch.com/generic

ebingdom|4 years ago

> Straight-up incorrect. Language designers think long and hard about how typings work in their languages. Rob Pike, Ken Thompson, and Russ Cox deliberated generics for like a decade[1] before finally allowing them in Go.

I think that kind of proves their point. Parametric polymorphism is one of the most well-understood, least contentious extensions to the lambda calculus. It's formalized by System F and was implemented in programming languages 40 years ago, with great type inference for an important subset (Hindley-Milner). Yet generics were highly controversial in the Go community. And now, since none of the standard library was designed with generics in mind, it's full of unsafe patterns that involve essentially dynamic typing (e.g., the `Value` method of `Context`).
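
To illustrate the contrast (sketched here in TypeScript rather than Go, with invented names): a `Context.Value`-style lookup erases the type and forces an unchecked cast on the caller, while a parametrically polymorphic function lets the type flow through and be checked.

```typescript
// Dynamically-typed lookup, analogous in spirit to Go's Context.Value:
// the caller gets an opaque value back and must cast, and nothing
// stops a wrong cast at compile time.
function valueOf(ctx: Map<string, unknown>, key: string): unknown {
  return ctx.get(key);
}

// Parametrically polymorphic function: the element type flows through,
// so misuse is a compile-time error instead of a runtime surprise.
function first<T>(items: T[]): T | undefined {
  return items[0];
}

const ctx = new Map<string, unknown>([["userId", 42]]);
const id = valueOf(ctx, "userId") as string; // wrong cast, compiles anyway

const n = first([1, 2, 3]); // inferred as number | undefined
```

The `as string` cast type-checks but lies: at runtime `id` is still a number, which is exactly the class of bug generics rule out.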

Despite Rob Pike et al. designing one of the most popular languages today (Go), I consider them experts in systems rather than in programming languages.

> I'm not sure exactly what you mean by "proper" -- but "rigid" type systems are extremely cumbersome to use practically. (Typed) λ-calculus is an academic example; Haskell is a real-world example.

I find Haskell a joy to use, and I cringe at having to use languages like Java and Go, which are a minefield of error-prone programming patterns (like using products instead of sums to represent "result or error"). Generally speaking, my Haskell code is shorter, less buggy, and more reusable than my Go code, so I'm not sure what you mean by "cumbersome".
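
The “products instead of sums” point can be made concrete with a small TypeScript sketch (the shapes below are invented for illustration): the product encoding admits meaningless states, while the sum encoding (a discriminated union) makes only the two meaningful states representable.

```typescript
// Product encoding, as in Go's (value, error) convention: four states
// are representable, including the nonsensical "both set" and
// "neither set".
type ProductResult = { value: number | null; error: string | null };

// Sum encoding: exactly the two meaningful states exist, and the
// compiler forces callers to check the tag before touching the value.
type Result =
  | { ok: true; value: number }
  | { ok: false; error: string };

function parsePositive(s: string): Result {
  const n = Number(s);
  return Number.isFinite(n) && n > 0
    ? { ok: true, value: n }
    : { ok: false, error: `not a positive number: ${s}` };
}

const r = parsePositive("12");
if (r.ok) {
  console.log(r.value); // accessing r.error here is a compile error
}
```

With the product encoding, nothing stops a caller from reading `value` while `error` is set; with the sum, that code simply doesn’t type-check.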

wlib|4 years ago

I’m on my phone so I can’t really give this the response it’s owed, but most of what I want to say is that I don’t think we disagree as much as you think. My mention of languages being the wrong approach to general programming is a key aspect of my position. You’re right that DSLs are usually not a great idea, but that seems like more of an issue with the “syntax-chasing”. The affordances that a syntax forces upon everyone are why it takes a decade to add generics to Go and why macros are even used to begin with. As we see from the various notations used for equivalent math, we would probably be better off encoding the structure as a flexible semantic graph and then “projecting the syntax” however the programmer wants.

Keep in mind that what I write is about possibilities more than what you would be forced to do on every project. You wouldn’t expect everyone to import random type systems any more than you would expect everyone to write and import their own random crypto libraries. Some of the most popular languages out there have absolutely zero static checks. Would it be so bad to write a Turing-complete static check function? Why do we care so much about decidability that we can’t even consider giving it up in useful exceptions? Also, do you think macros à la Lisp would be so bad if they had more powerful type systems backing them up and more flexible editors to understand them?

foobiekr|4 years ago

Oh my goodness, do I ever agree about the whole “use parser cleverness to make your cute internal domain-specific language” thing, as found in Scala and Ruby and a few other contexts.

If you want a language, and have a legit need, fire up ANTLR and build a quick compiler.

Parser cleverness of the “internal DSL” variety is on par with the C++ windowing libraries that used heavy operator overloading to be clever, stuff like “window += new Button()” and other nonsense. Just awful and torturous.

rowanG077|4 years ago

Go is the poster boy for a language NOT learning anything from decades of PL theory and research.

tluyben2|4 years ago

This is what I have noticed writing a lot of software that communicates with (old) systems whose input and output are not very well specified: I first want to write drivel and make it work. I don't want to write/change thousands of vars/fields while I'm not sure I even need them. Then I want to throw it away and refactor with the lessons learned, and keep doing that, adding types (and sometimes proofs) as I go. Maybe with compiler flags that enforce static typing for the 'production version'.

scotty79|4 years ago

TypeScript is a language that allows you to work like that. You may use as many or as few types as you need at any given stage of development.

As you start adding types to your working program, the compiler debugs it for you, finding various corner cases.

This motivates you to add more types.
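
A minimal sketch of that workflow, with hypothetical names: start untyped, then add a type and let the compiler surface the corner case. Under `strictNullChecks`, a naive typed version that used `b.price` directly would be rejected until the missing-`price` case is handled.

```typescript
// Stage 1: untyped drivel that happens to work on the happy path.
function totalLoose(items: any): any {
  return items.reduce((a: any, b: any) => a + b.price, 0);
}

// Stage 2: adding a type surfaces the corner case. Since price is
// optional, the compiler rejects a bare `b.price` in the sum until
// the undefined case is handled explicitly.
interface Item {
  name: string;
  price?: number;
}

function total(items: Item[]): number {
  return items.reduce((a, b) => a + (b.price ?? 0), 0);
}

console.log(total([{ name: "a", price: 2 }, { name: "b" }])); // 2
```

The loose version silently computes `2 + undefined` (i.e., `NaN`) on the same input, which is exactly the corner case the type made visible.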

ogogmad|4 years ago

Type theory has uses outside programming. It can be used to make constructive logic rigorous, which was in fact Martin-Löf's original use for it.

wlib|4 years ago

I’m aware that math is entirely useful outside of programming; my little rant is more about how little programming is inspired by math.

shrimpx|4 years ago

A core problem is that you don't need super fancy type systems to build robust systems in practice. After a certain threshold, type theory research is exploring what's possible in theory, like pure mathematics, not what would be good for practitioners.

But one area of large impact for this pure type theory research seems to be the mechanization of mathematics. It looks probable that in the future, the standard way to do mathematics will be by programming it in a proof development system, aka dependently typed programming language.
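
As a tiny taste of what that looks like in practice, here is a machine-checked statement in Lean 4 (using the standard `Nat.add_comm` lemma from core):

```lean
-- A statement and its machine-checked proof: the type checker
-- rejects the file if the proof does not match the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- `example` checks a proof without naming it; `rfl` proves
-- equalities that hold by computation.
example : 2 + 3 = 5 := rfl
```

The “dependently typed programming language” framing is literal: theorems are types, and proofs are programs inhabiting them.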

ogogmad|4 years ago

> It looks probable that in the future, the standard way to do mathematics will be by programming it in a proof development system, aka dependently typed programming language.

This is a controversial opinion, and not all mathematicians are enthusiastic about it, whether rightly or wrongly. Michael Harris, for instance, is a major sceptic and opponent.

It is still a very interesting idea. And it has had some notable successes already.

creata|4 years ago

> a proof development system, aka dependently typed programming language.

There are proof assistants without dependent types, like Isabelle/HOL.

armchairhacker|4 years ago

Most of the less abstract parts of this research (generic and polymorphic types w/ variance, ADT structs/unions, even type functions) are in most modern languages (Scala, TypeScript, C++20, Kotlin, Swift, Go). Rust even has affine types.

The more abstract parts like dependent types are really complicated and even unintuitive to use. See: issues with Rust’s borrow checker, or Haskell being so confusing.

Earlier languages like C and Java are mostly legacy code, they use libraries in legacy code, or they’re for developers familiar with them.

Untyped scripting languages are fine for scripts. Honestly idk why developers write big libraries in untyped languages like JavaScript and Python.

The main case where advanced formal methods are particularly useful is proving program correctness. And AFAIK this stuff is used by industry, although you don’t hear about it as much. The thing is, most programs either don’t “need” to be proved, or they’re way too big.

chriswarbo|4 years ago

> The more abstract parts like dependent types are really complicated and even unintuitive to use.

I disagree with this: dependent types are way easier than lots of the convoluted schemes that non-dependent languages have come up with.

As a simple example, dependently-typed languages don't need parametric polymorphism or generics: we can achieve the same thing by passing types as arguments; e.g.

    map: (inType: Type) -> (outType: Type) -> (f: inType -> outType) -> (l: List inType) -> List outType
    map inType outType f l = match l with
      Nil  inType      -> Nil  outType
      Cons inType x xs -> Cons outType (f x) (map inType outType f xs)
When I program without dependent types, I regularly find myself getting "stuck" inadvertently; knowing that (a) there's no way to make my current approach work in this language, (b) that it would be trivial to make it work if I could pass around types as values, (c) that I need to throw away what I've done and choose a different solution, and (d) the alternative solution I'll end up with will be less correct and less direct than my original approach (e.g. allowing more invalid states)

dwohnitmok|4 years ago

Dependent types aren't terribly complicated intrinsically, see e.g. https://shuangrimu.com/posts/language-agnostic-intro-to-depe... for one explanation of dependent pairs.

Nor do I think the value of dependent types comes from proofs, which I agree are still too cumbersome to really use at a large scale for most codebases.

My hypothesis for the sweet spot for dependent types is to express invariants, and then use other mechanisms such as property tests to check those invariants, rather than actual proofs (except for trivial proofs). This does mean that I favor a very proof irrelevant style of dependently typed programming (i.e. in the language of Agda, way more `prop` than `set` or in Idris lots of zero multiplicities for dependently typed arguments) which is quite different from the way a lot of current dependently-typed code is written.
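
A rough TypeScript sketch of that style (all names invented): state the invariant in the type as a brand, and check it with a hand-rolled property test over random inputs rather than an actual proof.

```typescript
// The invariant we care about: a sorted array. The brand records the
// claim in the type; nothing is proved, it is merely stated.
type Sorted = number[] & { readonly __brand: "sorted" };

function sortAsc(xs: number[]): Sorted {
  return [...xs].sort((a, b) => a - b) as Sorted;
}

// Instead of a proof, check the stated invariant holds of a value.
function holdsSorted(xs: Sorted): boolean {
  return xs.every((x, i) => i === 0 || xs[i - 1] <= x);
}

// Property test: the invariant should hold on many random inputs.
let ok = true;
for (let trial = 0; trial < 1000; trial++) {
  const input = Array.from({ length: 20 }, () =>
    Math.floor(Math.random() * 100)
  );
  if (!holdsSorted(sortAsc(input))) ok = false;
}
console.log(ok); // invariant held on all random trials
```

In a genuinely dependently-typed language the brand would be a real proposition attached to the value; the point of the proof-irrelevant style is that you can still discharge it by testing instead of proving.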

plafl|4 years ago

If I understood the article correctly, the four types of functions are present in Julia:

  julia> factorial(3) # term -> term
  6
  julia> typeof(3)  # term -> type
  Int64
  julia> one(Float32) # type -> term
  1.0f0
  julia> Vector{Float32} # type -> type
  Vector{Float32} (alias for Array{Float32, 1})
And actually more mixed:

  julia> using StaticArrays
  julia> SVector{3, Float32}(1, 2, 3)
  3-element SVector{3, Float32} with indices SOneTo(3):
   1.0
   2.0
   3.0
  julia> convert(Float32, 1)
  1.0f0
Julia is of course dynamic, so maybe it's cheating? I'm writing this comment because all the Greek in that cube (and actually using a cube) makes this look like very abstract stuff, but I consider Julia a very practical language.

> Untyped scripting languages are fine for scripts. Honestly idk why developers write big libraries in untyped languages like JavaScript and Python.

And yet it works

bjourne|4 years ago

There is a huge disconnect between industry and academia. Type systems are very interesting, but they are not what makes the difference in industry. What actually increases productivity and reduces bugs is an underexplored area. I think the answers are just as much about sociology, psychology, and ergonomics as they are about type theory.

adamnemecek|4 years ago

As with all research, the good ideas eventually make it to production.