When we talk about arrows in functional programming, we really mean Hughes' Arrows (or Freyd categories), a generalization of monads. The term "arrow" represents a level of abstraction over computations, essentially modeling them as objects with inputs and outputs. Hughes' Arrows extend beyond the traditional notions of functors and monads, handling side effects and common computational patterns in a more structured manner.
Arrows are useful because they provide a computation model that can implicitly make use of state without ever exposing that state to the programmer. The programmer can use arrowized computations and combine them to create sophisticated systems.
A strong arrow is an arrow A equipped with an extra piece of structure known as strength. The strength for A is a natural transformation, generally denoted t, whose component at a pair of objects (X, Y) is an arrow of the form t_{X,Y}: X x A(Y) -> A(X x Y), called the tensorial strength. The strength obeys particular coherence conditions. A deeper explanation involves the monoidal structure of the category and how the strength governs combining plain values with values in the arrow context.
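As a rough sketch (not tied to any real library; `fmap` and `strength` are names invented here for illustration), here is what the tensorial strength looks like for the list functor:

```python
# Tensorial strength for the list functor: t_{X,Y} : X x F(Y) -> F(X x Y),
# with F = list. `fmap` and `strength` are illustrative names, not a real API.
def fmap(g, fy):
    """Map g over the values inside the container."""
    return [g(y) for y in fy]

def strength(x, fy):
    """Smuggle the plain value x into the container alongside each y."""
    return [(x, y) for y in fy]

# Naturality in Y, one of the coherence conditions: acting on y before or
# after applying the strength gives the same result.
g = lambda y: y + 1
lhs = fmap(lambda xy: (xy[0], g(xy[1])), strength("a", [1, 2]))
rhs = strength("a", fmap(g, [1, 2]))
assert lhs == rhs == [("a", 2), ("a", 3)]
```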
My understanding here for Elixir is that you have dynamic() as a universe U and an error set E. All functions f are from U to E + U. f is an arrow A -> B iff f[A] subset B + E. f is a strong arrow iff it is an arrow and f(U\A) subset E. I'm not sure how this connects to strength in the category theoretic sense.
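Under that reading, here is a hedged Python sketch (U is all Python values, E is collapsed to a single sentinel, and `double` is an invented example function, not anything from the post):

```python
# Universe U = all Python values; error set E modeled as a single sentinel.
ERROR = object()

def double(x):
    # An arrow A -> B with A = B = integers: double[A] is a subset of B + E...
    if isinstance(x, int):
        return 2 * x
    # ...and a STRONG arrow, because everything outside A lands in E:
    return ERROR

assert all(isinstance(double(n), int) for n in [1, 2, 3])  # f[A] subset of B
assert double("oops") is ERROR                             # f(U \ A) subset of E
```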
Slightly OT (though I do really like Elixir: I used to work with José and he's a goddamned visionary): TypeScript just hits it out of the park here. With arguably harder erasure constraints than either Java or C#, the covariance and contravariance bounds and ergonomics are so much more intuitive than in either. The MSR brain trust is clearly keeping their eye on the ball.
Elixir looks to be one of the few languages that is never going to sit still on absorbing the best ideas. At the risk of repetition, no one would expect less from the person whose passion created it.
Disclaimer: this is not how the strengths of Julia are normally described by most people; it's just how I think about it.
You might have heard that Julia solves the two language problem (easy as python, fast as C++). But exactly how does it do that? In python you don't have to care about types, but even if you were willing to care about types you wouldn't get any performance benefit. In C++ you have to care about types, you don't have the option not to, even when you don't care about performance.
In Julia you don't have to care about types. You can code it exactly as in Python. But if you are willing to, you can care about types, and you'll get a huge performance benefit. So, the way I code Julia is: on a first pass I code it like Python. I don't care about types, I just care that it's correct. Then, after it's correct, and if performance is critical, I start thinking about types. So Julia very much lends itself to "make it work, make it right, make it fast", by allowing you to gradually introduce types into the parts of your code that matter.
The way you use the word "correct" is interesting: in PL theory circles, it usually means "bug free", but it appears that you use it to mean "produces the results I'm looking for".
Indeed, one may write a program which is bug free, yet does not implement the algorithm that produces the expected result (for instance, a program sorting data in ascending order when a descending order is needed). In strongly typed languages, type systems are used to ensure that programs are bug free, following the adage "if it compiles, it works". The issue of having a proper implementation is not a concern of researchers in their papers; it's treated as purely an engineering problem, so there is no interest for them in using the word "correct" in that sense.
Also, with type inference, one often does not have to think too much about types, and can instead gradually introduce type annotations to resolve disagreements between the compiler and its user.
Gradual typing is yet another tool to achieve a similar result.
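The gradual path described in these comments can be sketched with Python's optional annotations (names invented; note Python's hints don't buy speed the way Julia's types do, this only shows the workflow shape):

```python
from typing import Sequence

# First pass: no types, just correctness ("make it work, make it right").
def mean(xs):
    return sum(xs) / len(xs)

# Second pass: annotate the parts that matter, once the code is correct.
def mean_typed(xs: Sequence[float]) -> float:
    return sum(xs) / len(xs)

# Behavior is unchanged; the annotations only add checkable intent.
assert mean([1, 2, 3]) == mean_typed([1, 2, 3]) == 2.0
```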
Julia is slower than Python for most applications if you include the compile time (which is every time you run your program, because it's "just in time").
edit: I want to be clear - I like Julia, and I have long wanted a scripting language that was grammatically simple like Python but had support for strong typing, etc. But the TTFX ("time to first X") problem, for me, muddies the waters on the question of "which is faster, Julia or Python?"
I cannot support much in terms of money or time currently, but I wish there were ways to speed this up.
Sure, sponsoring from big corps comes with their own problems, but maybe that would be an option? There are quite some that use Elixir (or BEAM in general) out there...
Idk what would be the best approach.
But I am very convinced that Elixir and the wider ecosystem (not only Phoenix LiveView) is, or could be, the very best "default" web app stack we have right now in general. It solves _so many_ problems of other comparable tech stacks right out of the box, one way or the other, and it is the only one where I've noticed a sentiment of pure joy from seasoned devs that also lasts well after the novelty wears off.
At this point it looks like "only" a numbers problem to become mainstream. No idea how to solve it, just felt I needed to share my thoughts as a data point here.
Like the type system here: I'd _kill_ to have this exact implementation today rather than _maybe in a few years_. And yes, I can draw comparisons to TypeScript, Rust and Haskell, which I used for projects in the past. It's like combining the very best ideas (and adding some more) into one awesome thing.
I understand many would love to have this yesterday but I strongly believe we should not speed it up. :)
This is potentially the largest change the language will ever go through and we are being very intentional and deliberate on every step and decision we make. Once we start collecting feedback from the community, it may speed up or slow down the process, but I want to make sure everyone gets plenty of time to experience and internalize the changes.
When it comes to funding, the ongoing work on the Elixir type system is well funded by companies like Fresha, Starfish*, Supabase, and Dashbit. However, we also want to venture into new research areas, such as existential set-theoretic types, which would allow us to type behaviours (such as GenServer) similar to how one would type first-class modules in OCaml (or typeclasses in Haskell). If someone is interested in sponsoring research work, hit me up (github.com/josevalim), and I can put you in touch with the proper institutions.
Tbf, while more money would allow stability for the people involved, it would not really speed it up that much.
TypeScript had Microsoft behind it and it still took a decade. At some point the engineering, experiments and learning take time, and you can hardly speed that up with more resources.
I read that Brex moved away from Elixir to Kotlin. If Elixir was really that great then why would a company put such a big effort to ditch it? That killed off my interest in learning it, but maybe their reasons were invalid? Genuinely curious.
the arc of progress in software engineering bends towards static typing. in the future, our type systems will be so powerful they'll be able to do more inference so that we have to be less explicit. then people who have come to hate static type because of these explicit type declarations will know they hated the costume after all, not the person in it.
Yeah, Rich Feldman has a good talk on this, entitled "Why Static Typing Came Back"[1]. It makes a good argument that static typing can recover most of the advantages traditionally attributed to dynamic typing. Rapid iteration you can get from incremental compilers, concision you can get from powerful type inference, etc. Conversely, the advantages of static typing (largely boiling down to increased reliability) cannot easily be captured by dynamically typed languages. Gradual typing, as popularized by TypeScript, might be an interesting middle ground, or it might just be a reflection of the dynamic history of JS.
> in the future, our type systems will be so powerful they'll be able to do more inference so that we have to be less explicit
And in the past too!
In my Haskell workflow I constantly use the _ (typed hole) keyword. When the compiler hits that expression, it tells me what type I have to put there.
If I were using Idris, it could oftentimes provide an implementation too!
Two caveats about what you said though:
- More power is not always better (depending on how you define power). Structured programming removes the power of goto. Functional programming removes the power of mutation.
- There are limitations to what can be inferred. The more your type system can do, the harder it is for type inference to do its thing.
OCaml had this figured out many years ago. Elixir/Erlang's dynamism can be particularly good compared to other dynamic languages if you lean into it properly. But yes, the world is loving static typing right now, and I'm actually a bit excited myself with their ideas for Elixir's type system.
This is great! I've been an Elixir developer professionally for many years now, and I love it. (I took my current job in large part for the opportunity to use it some more.) But this scratches the one last itch I have with it. After having a taste of types from Rust and Typescript, I do miss them from time to time.
One thing I really like about a strong type system is it makes VSCode feel like it has super powers, type-checking in line and improving suggestions and all that. Given that this seems to be a somewhat novel type system, what are the implications for editor integration? Will it still do all the cool things that Language Servers support? In other words, is the "set theoretic type system" an implementation detail which doesn't affect the language server API?
The post is great, and I'm still digesting it, but this one part caught my eye:
> In a nutshell, it means we can only compare functions if they have the same bounds. For example, our type system states a -> a when a: integer() or boolean() is not a subtype of a -> a when a: integer().
It's not a subtype right? If a function could safely accept some function `f` as a parameter, then if that function were able to return more than expected, it wouldn't be safe to drop it in. I think the return values have to be more restricted. On the other hand, the input can be more general. This is "covariance" and "contravariance" in types, I think?
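That intuition can be sketched in Python (`apply_handler`, `safe`, and `unsafe` are invented names for illustration):

```python
# A caller that expects a handler of "type" (int) -> int.
def apply_handler(handler):
    return handler(1) + 1

# Safe drop-in: accepts MORE than int (contravariant input) and returns
# only a subset of int (covariant output), so the caller cannot break.
def safe(x):
    return abs(int(x))

assert apply_handler(safe) == 2

# Unsafe drop-in: returns more than expected (a str), so the caller's
# `+ 1` fails at runtime. Return types cannot safely widen.
def unsafe(x):
    return "not an int"

try:
    apply_handler(unsafe)
    blew_up = False
except TypeError:
    blew_up = True
assert blew_up
```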
But regardless, this is one of my main hangups with dialyzer: its `underspecs` and `overspecs` design gets it wrong. I mentioned it once a while ago in this comment[0] with more examples. But, if you typed it like this:
    @type direction :: :east | :north | :west | :south
    @spec common_winds() :: direction()
    def common_winds(), do: :east
dialyzer will complain because it infers `common_winds/0` can only return one of the values that it's spec'ed for. (If you have `overspecs` or `underspecs` - I forget which - enabled, which you need, if you want to restrict input types.)
So I guess my question with this approach, is whether you can explicitly spec "larger" sets than is inferred? And if so, whether the given specs overrule the inferred ones elsewhere.
> In other words, is the "set theoretic type system" an implementation detail which doesn't affect the language server API?
This should be the case. Type violations would automatically appear on your IDEs when you update Elixir. However, if you need additional metadata/features, then we need to expose them from Elixir and Language Servers need to consume it. But that's implementation work and not "type system" work. :)
> But regardless, this is one of my main hangups with dialyzer: its `underspecs` and `overspecs` design gets it wrong.
I believe recent Erlang/OTP versions changed this to give you more fine grained control for the precise reasons you mentioned.
> So I guess my question with this approach, is whether you can explicitly spec "larger" sets than is inferred? And if so, whether the given specs overrule the inferred ones elsewhere.
I am 99% sure the theory allows both options, so we will pick one based on patterns and community needs.
> If the function returns none() (i.e. it does not type check) or a type which is a subset of its codomain (i.e. its output), then it is a strong arrow.
Unless I didn't understand correctly, surely 'none()' is also a subset of the codomain? Or is none() a type containing all errors?
Anyway the concept of strong arrows is interesting. But does it also mean you can 'weaken' an arrow if you implement a more general version of a function? That would make it a bit odd because you can't know up front whether an arrow is strong or not. It makes code very non-local.
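A rough Python sketch of that non-locality (all names invented; `ERROR` stands in for the error type):

```python
ERROR = object()

def guarded_root(x):
    # Strong: inputs outside non-negative numbers land in the error set.
    if isinstance(x, (int, float)) and x >= 0:
        return x ** 0.5
    return ERROR

def hypotenuse(a, b):
    # Strong only *because* guarded_root guards; hypotenuse itself checks
    # nothing. Swap in a more general root (say, one returning complex
    # numbers for negative inputs) and hypotenuse silently stops being
    # strong, without a single change to its own source.
    return guarded_root(a * a + b * b)

assert hypotenuse(3, 4) == 5.0
assert guarded_root("oops") is ERROR
```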
> Unless I didn't understand correctly, surely 'none()' is also a subset of the codomain?
Correct. Both are included for clarity.
> But does it also mean you can 'weaken' an arrow if you implement a more general version of a function?
If you have a function that is a strong arrow because of other functions (i.e. the current function does not explicitly guard its arguments), then that would be the case.
However, we don't know yet how much of a big deal this is. Strong arrows are all about making static typing more useful to dynamic code. Dynamic code should not be relying on strong arrows for correctness; they are a nice-to-have. If you want guarantees, then you need static typing.
It may be that, in practice, it is enough for Elixir standard library to be made of strong arrows and that will propagate enough type information to most dynamic programs. Otherwise, if more is needed, we will definitely need to add visibility and reflection APIs around strong arrows.
This blog post makes this look more like a type system for theoretical type theory people to play with than something designed to actually be practical and useful.
But the type system isn't sound :( What's the point of a type system if it's not sound?
Could you expand on the top 5 (or more) major pain points that it solves?
[1]: https://www.youtube.com/watch?v=Tml94je2edk
[0]: https://news.ycombinator.com/item?id=31568098
I guess we peaked at Mavis Beacon then?