So, I know that Elixir is powerful and good at functional programming. The one thing I can't quite understand is why we would want to program everything as a composition of individual programs and applications, with supervisors and application trees and message passing and the like. It seems like a lot of overhead to accomplish something. Making everything asynchronous and detached makes things more complicated, not less.
I hear a lot about OTP and the "let it crash" mantra, but I just don't quite understand what's so great about it. Maybe it's just due to my problem domain (web development), but it doesn't seem like as big a draw to Elixir and Erlang as pattern matching and FP are.
> is why we would want to program everything as a composition of individual programs and applications,
Because it matches reality better. Concurrent applications built out of individual isolated processes often map well to real-world problems. Not all problems, but quite a few of them. If you're just multiplying a matrix or calculating an average, you don't need concurrency and supervision. If you want to sort a list of files, you don't need them either. Not all systems have to be reliable and fault tolerant.
> Making everything asynchronous and detached makes things more complicated, not less.
The real world is asynchronous and detached. You as an agent are detached from your co-workers and friends. Before you take a breath, you don't have to wait for them to finish taking a breath. A car driving down the street is detached from the others. They come to an intersection and have to work together in order not to run into each other, but after that they keep going on their way. They are also fault tolerant -- just because your neighbor's car engine crashed doesn't mean yours should stop working. It seems like a silly example, but it maps fairly well to distributed computing -- requests, user connections, worker processes, items in a processing pipeline, database shards, and so on.
So given that these types of problems exist, what language or framework should you pick to solve them? I would say, first of all, one that reduces the impedance mismatch and lets you write the clearest, simplest code. And also the one that's most fault tolerant.
You could spawn threads (or green threads) in some languages -- but you'd probably be sharing memory and have some bugs in there. You could spawn OS processes and/or run multiple containers/VMs in the cloud, but that gets expensive. You could use Rust, for example, to achieve some of those goals, because it guarantees to a certain degree at compile time that your application will have fewer memory and concurrency errors. Or you could use something based on the BEAM VM (Erlang or Elixir), and you get a different approach to fault tolerance there.
> Making everything asynchronous and detached makes things more complicated, not less.

It does in languages like JS because they're not built for it. Elixir programs on average don't look very complicated.
You're right -- there is a conceptual/complexity cost to BEAM applications, distribution, concurrency, supervision trees, etc.
I think the idea is that if you start with Phoenix you don't have to pay that cost unless you need it. If you end up needing to scale, the features are there. I think Uncle Bob once said that "architecture is the art of delaying decisions". Phoenix / Elixir let you do this.
Another part of the story is channels. If you're doing websocket work, you can't find a better platform than phoenix / elixir.
Once you start digging in, you realize phoenix / elixir are awesome even if you're not using all the concurrency / distribution stuff. The functional nature / immutability posture of elixir helps you with a large code base and/or lots of devs. There are other features -- pattern matching and macros come to mind -- that make Elixir a great place to code.
All that said, if you have a client, say, with modest needs who is never going to have much traffic, it might not be worth going with something you're not familiar with. Similarly, if you're talking about a 1-2 dev project, it might not be worth learning something new. In those cases, maybe it's Rails or Django or whatever ftw.
The key is the ease of writing the target program correctly.
You can write any algorithm using any Turing-complete machine, including the original tape-based Turing machine or a language like Malbolge. But it is going to take an inordinate amount of time to write and debug.
What FP and message passing buy you is ease of reasoning. No shared state -> no problem of concurrent updates. No state at all -> no questions like "but does this method update that instance member?". All your state lives in message queues and call stacks, and both provide very stringent and easy-to-understand disciplines.
This also gives you composability: you have an easy time putting things together and rearranging them as you see fit, without worrying about hidden data dependencies. Everything is explicit.
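A minimal sketch of that discipline in Elixir (the names here are mine, just for illustration): the only state is the recursion argument of a spawned process, and the only way to read or change it is a message.

```elixir
defmodule SharedNothing do
  # All state is the `count` argument; nothing is shared, nothing mutates.
  # Other processes can only interact with it by sending messages.
  def counter(count \\ 0) do
    receive do
      :increment ->
        counter(count + 1)

      {:get, caller} ->
        send(caller, {:count, count})
        counter(count)
    end
  end
end

pid = spawn(fn -> SharedNothing.counter() end)
send(pid, :increment)
send(pid, :increment)
send(pid, {:get, self()})

receive do
  {:count, n} -> IO.puts("count is #{n}")  # prints "count is 2"
end
```

Because the mailbox serializes the messages, there is no interleaving to reason about: the two increments always happen before the read.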
(Combined with a static type checker and the right type system, it gives you the effect of "if it compiles, it runs correctly 99% of the time", seen e.g. in Haskell or Rust. Elixir is not a statically-typed language, though.)
Yes, it does sometimes come at a cost. Not all immutable data structures can be made as efficient as mutable ones. Usually they are as fast as mutable ones, but pay some space cost; IIRC it's theoretically proven that the cost is no more than a logarithmic one.
With RAM being so much cheaper than developers' time, it's a rather good tradeoff. At least with JIT-compiled Java or JavaScript, or plain Python and Ruby, one usually pays a similar or higher RAM cost for a fraction of the benefits. JS is moving towards immutability and FP-ness, though, because the above applies to it too, and it's not so OOP-heavy as to make FP approaches largely impractical.
I am currently in the process of learning Erlang, and here are my impressions.
In the paper A Note on Distributed Computing, Waldo et al argue that having a distributed object system is bound to fail no matter how hard one tries.
> Differences in latency, memory access, partial failure, and concurrency make merging of the computational models of local and distributed computing both unwise to attempt and unable to succeed. ... A better approach is to accept that there are irreconcilable differences between local and distributed computing, and to be conscious of those differences at all stages of the design and implementation of distributed applications. Rather than trying to merge local and remote objects, engineers need to be constantly reminded of the differences between the two, and know when it is appropriate to use each kind of object.
Erlang works around this problem by making the local object model identical to the remote one in all aspects except latency. The semantic model is copy-on-send asynchronous message-passing. This is inconvenient when working with local components, as you rightly point out. But putting up with this inconvenience up-front makes it much easier to refactor towards a distributed system later. The language attempts to work around the inconvenience of asynchronous message-passing using functional abstractions -- which is largely what OTP seems to be.
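As a sketch of what "local is identical to remote" means in practice (the node name below is hypothetical; only the latency differs):

```elixir
# Sending a message is the same operation whether the pid is local or on
# another node; the payload is copied either way (copy-on-send).
pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(pid, {:ping, self()})

receive do
  :pong -> IO.puts("got pong")
end

# With distribution enabled, the primitives are unchanged
# (:"other@host" is a hypothetical node name):
#
#   pid = Node.spawn(:"other@host", fn -> ... end)
#   send(pid, {:ping, self()})
```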
BTW, the language can be used in a synchronous context. It works just like another functional language with all code executing from start-to-finish on a single thread of execution.
[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...
> I hear a lot about OTP and the "let it crash" mantra, but
> I just don't quite understand what's so great about it.
The top two for me are:
(1) You start to just program the sunny path. It's faster and more fun to code in this mode. And with pattern matching of function arguments and return values (as well as guards), you get pre- and post-condition asserts while still programming the sunny path.
(2) What are you supposed to do when your system raises a run-time error anyway? I think in practice, in the majority of cases, simply restarting your process in some known state is the best you can do.
And that "start in some known state" is non-trivial for a system that handles concurrent inputs. OTP helps you do that correctly.
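A sketch of what that sunny path looks like (a hypothetical example, not from the comment above): the match on {:ok, ...} doubles as the pre-condition assert, and anything else crashes the process so a supervisor can restart it in a known state.

```elixir
defmodule Sunny do
  # Only the success case is written. File.read/1 returns {:ok, contents}
  # or {:error, reason}; on error the match fails, the process crashes,
  # and a supervisor can restart it in a known state.
  def line_count(path) do
    {:ok, contents} = File.read(path)
    contents |> String.split("\n") |> length()
  end
end
```

There is no defensive error-handling branch anywhere in the function, yet failures are still accounted for, just one level up.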
Elixir and Erlang actually map remarkably well to the _web development_ use case. When every request gets its own process and shared services are abstracted away, the mental model is quite simple. Use Elixir / Erlang and you also get to use server-push architectures with ease. Want server state to write your own device sync? Need to write web-hooks? Want to count requests? Need to guarantee uniqueness of a combination of variables in a distributed environment? Want to serve some requests requiring heavy lifting while also serving short-lived requests with low latency? Choose Elixir / Erlang for web development and, as your needs evolve (and they will), you're likely to be surprised and delighted by how much you can achieve before you need to move to a polyglot or microservice-based architecture.
I think if you are starting from nothing, message passing is actually a more intuitive way to think about and design large distributed programs, because you don't need to deal with a lot of the complexities of OO languages such as state and handling distributed communication / disaster recovery.
Running processes in parallel is cheap in Erlang, so you can run hundreds of thousands of them in parallel on a commodity server. Generally, if you are running any sort of high-traffic service with a lot of dynamic stuff happening, there are probably parts of it that Erlang/OTP can help make more efficient and easier to maintain.
I think part of it is that it fits well into the microservices paradigm - where there is a separation of work even within the same program.
[0] http://ferd.ca/the-zen-of-erlang.html
But that said, for where Erlang shines -- backends and distributed services -- first-class concurrency is damn near a must-have these days (see the recent surge in use and popularity of Go, Erlang, etc.).
I ended up with Go, but had I more time to learn Erlang (or Elixir was 1.0 when I started looking) it would have been a solid contender.
Also, don't discount the incredibly solid and robust BEAM VM. If you are writing distributed services or servers (see: Riak, RabbitMQ) the overhead for spinning off "instances" or "processes" within the VM is virtually zero-cost, and in production systems has like sixty-four 9s of uptime. [0]
[0] ok, nine, but still: https://pragprog.com/articles/erlang
https://speakerdeck.com/mosic/elixir-at-evercam?slide=6

Make more sense now? :) (in addition to the other responses)
tl;dr Evercam replaced Node.js, Sidekiq, Pusher and Upstart with Elixir, huge net gain in stack simplicity which is arguably a strong plus from a long-term maintenance perspective.
Immutability stops you from experiencing random state changes. The code you read takes in data structures and spits out data structures, making it easy to test things in isolation.

OTP is actually a lot faster than you'd expect, allows you to use all CPUs very simply, and also provides network transparency (on top of supervisors and "let it crash").
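For example (a minimal sketch), fanning CPU-bound work out across all cores is one call with Task.async_stream, which runs each item in its own process with concurrency defaulting to the number of schedulers (i.e. cores):

```elixir
# Each element is squared in a separate process; the BEAM schedules the
# processes across every available core. Results come back in input order.
squares =
  1..10
  |> Task.async_stream(fn n -> n * n end)
  |> Enum.map(fn {:ok, sq} -> sq end)

IO.inspect(squares)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```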
I can offer you my understanding. I think this way we allow multiple lightweight processes to be created, and this works well with multicore processors.

The overhead you are talking about can only be mental, as processes in Elixir are very light. Compared to other similar languages, at least for me, Elixir doesn't require as much overthinking. Now, you could also say that I wasn't working on complex apps with it, so that might be why. Anyhow, this is my view.
You don't need to start with supervisors, application trees and message passing for the initial MVP.
However, let's take Ruby on Rails as an example. I have gotten MVP-quality software out in production. We can quickly validate whether this is something customers want. However, when the app starts getting traction, I have found that I need to reach for things outside a RoR monolith. Usually, the first thing is getting some kind of background job processing going, with Sidekiq or Resque.
At this point, we have shifted from a synchronous, no-shared-state architecture to one that requires background processing. This is where we start thinking in terms of concurrency and how to deal with it.
Rails is built on top of Ruby, and Ruby does not have sufficiently robust concurrency primitives that work well at scale. To use something like Sidekiq, we rely on Redis. I have had projects where I need to pass data through a series of background jobs, like a pipeline. Now I am implementing what are effectively mutexes as a database column. I'm processing data that is effectively being streamed from third parties, so each individual Sidekiq job now has to have guards so that it can be idempotent. I have to be mindful of queues, analyzing where the bottlenecks are, how to structure the queues, and how to tune how many of what kind of jobs get resources. I have had to rely on outside monitoring solutions to make sure these processes stay up and running, and write things in a way so they can still recover on restart. I had to fork Sidekiq and rewrite core parts of it in order to create a different set of guarantees to fit a use case -- a big deal to every Rubyist I had talked to, and yet, seemingly normal in the Erlang world.
In other words, in the Rails world, concurrency is handled in a very coarse-grained way, often relying on software outside of Ruby (such as Redis).
I found myself reinventing, in very crude ways and at a coarse-grained level, what OTP already offers. I adopted Subversion and later Git because I was crudely reinventing version control (shell scripts and tar); I dropped PHP and Perl CGI scripts in favor of Rails back when Rails was version 1.1 because I knew what a project that doesn't use what Rails offers would look like (a messy puddle). And likewise, now that I've found myself badly reinventing ideas OTP already has, it's the tool I'll reach for.
With Erlang and Elixir, it is easier to reason with concurrency because the primitives are baked in deep, and it costs little to spin up something asynchronously. Reasoning with concurrency in the Rails world is treated more like black magic -- obscure, non-obvious, forbidden, potentially dangerous, and socially unacceptable.
It depends on your use case. The overhead is much smaller than you probably think and the Erlang / Elixir programming model works extremely well for some cases, networking for example. I've been writing pieces for a Bittorrent client in Elixir. Using a process for every connection to another client or for every torrent makes structuring your code really easy.
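That shape might look something like this (a hypothetical sketch, not the commenter's actual code): one GenServer per peer connection, with all per-peer state isolated in that process.

```elixir
defmodule PeerConnection do
  use GenServer

  # One process per peer. Its state is isolated: if this peer misbehaves
  # and the process crashes, no other connection is affected.
  def start_link(peer_address) do
    GenServer.start_link(__MODULE__, peer_address)
  end

  @impl true
  def init(peer_address) do
    {:ok, %{peer: peer_address, choked: true, have: MapSet.new()}}
  end

  @impl true
  def handle_cast(:unchoke, state) do
    {:noreply, %{state | choked: false}}
  end

  @impl true
  def handle_call(:choked?, _from, state) do
    {:reply, state.choked, state}
  end
end
```

Starting one per peer is then just `{:ok, pid} = PeerConnection.start_link("203.0.113.7:6881")` (a made-up address), with a supervisor deciding what happens when one dies.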
"Maybe it's just due to my problem domain (web development), but it doesn't seem like as big a draw to Elixir and Erlang as pattern matching and FP are."
That seems like the perfect domain for this, to me.
HTTP is (traditionally) stateless. So, in the past CGI apps were independent entities. Every new hit was a new world in which the app ran. And, every interaction between each CGI app happened via a standard communication medium (a database or writing to files rather than messages, but if you squint and don't look too closely, you might see the analogy there). There are negatives to that...but, there are real benefits, too. That provided a tremendous level of flexibility in how apps were built and in how they interacted with each other. I'd argue that in some regards (though certainly not all), going to an always-on appserver model was a step backward.
So, to me, in some ways Elixir looks like CGI, only without most of the negatives of CGI. A lot of the mental overhead, like parsing parameters and setting up database connections, can be abstracted away with a modern/fast language designed just for this kind of work. And, the end result is a robust system without a lot of mental overhead other than learning the abstractions and how they interact.
You call out supervisors and message passing and the like as being negatives; complexity you don't want to have to deal with. But, you aren't really comparing apples to apples. Node.js is also asynchronous, and in its bare state is a mess; callbacks are vastly more difficult to interact with and reason about than a system like Erlang/Elixir, and a lot of other languages use the callback model for asynchronous programming.
So, I guess what I'm trying to say is that if you're doing asynchronous programming, Elixir is going to be easier than most (though ES6 has concurrency primitives that are starting to make sense, and Python 3 and Ruby have started getting them, Perl 6 has good options, Perl 5 has somewhat clumsy modules for it, etc.).
All that said, I'm mostly spending my time learning JavaScript/ES6/Node lately; despite its warts, the ecosystem is huge, and there's so many smart people working on the language that by the time of ES7, there will be a very convincing concurrency story. Maybe still not as convincing as Erlang/Elixir and the OTP. But, probably good enough for web apps and services.
I looked into Elixir and Phoenix briefly; watched a few videos, read a few tutorials, never wrote any code. And, came away with a distinct impression of a tiny, tiny, ecosystem. I'm accustomed to going to CPAN or Ruby Gems or PyPI or npm, and finding not just one, but several options for whatever task I want to accomplish, even relatively obscure stuff. It'll be a while before Elixir comes close to even CPAN (which is smaller than all the others these days, though still manages to have modules for nearly every problem I tackle).
Seems like this is mostly a bug-fixing release. Solid stuff nonetheless. I recently learned Elixir and, even though I came from an object-oriented and imperative language background, I've fallen in love with the language. Switching back to others like JavaScript really leaves me missing so many of the functional features, especially pattern matching. I think I've been spoiled.
What makes Elixir a sort of "perfect storm" is the combination of a battle-tested, corporate funded, philosophically correct language model and VM (Erlang/BEAM/OTP) combined with a syntax (Elixir) that's both beautiful and comprehensible to an average programmer.
Even if you loved Erlang, I'd argue the language is just too esoteric and jarring to most programmers to ever gain serious traction. I remember reading about Erlang's magic back in ~2007 (maybe [1]) and giving it a brief shot, but deciding there was no way I wanted to look at that kind of code 8 hours a day. But coming from writing fairly FP-style Ruby/CoffeeScript/ES6-7, Elixir feels only a step or two further down that path -- in many ways actually conceptually simpler -- and with enormous benefits.
Can someone explain why I should use Elixir instead of Erlang? I haven't used either, but from a very high level it seems that Elixir has "magic" syntax like Ruby, where as much as possible is hidden from the user, whereas Erlang has a much clearer and more concrete syntax. Even though Erlang's syntax is not standard or traditional, as someone who's used neither language, I find it much easier to read and understand exactly what's happening.
I don't think anything is hidden, it's just a different syntax. The only thing that's really changed is that in some of the Elixir modules they've changed the order of parameters to make them consistent.
I use Elixir and I like it, but I wouldn't tell you to use Elixir over Erlang. There's no competition, what's good for one language is good for the other. So use whatever makes more sense to you and hopefully all of us in this Erlang community win.
I've only dabbled with both, but the reasons often given are that (apart from surface syntax) the advantages of Elixir over Erlang are a powerful macro system, a more cohesive standard library (with some nice things like the pipe operator), and some very nice tooling (mix is great). With all that said, Erlang is a great language as well, so just pick whichever you like the most.
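The pipe operator mentioned above turns inside-out nested calls into a left-to-right pipeline, for example:

```elixir
# Nested: reads inside-out.
words = String.split(String.downcase("Hello Pipe World"), " ")
IO.inspect(words)  # ["hello", "pipe", "world"]

# Piped: reads in execution order, top to bottom.
words =
  "Hello Pipe World"
  |> String.downcase()
  |> String.split(" ")

IO.inspect(words)  # ["hello", "pipe", "world"]
```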
Ruby doesn't really have "magic" syntax anymore than any other dynamic language does. Too much "magic" is a frequent criticism of the Rails framework, but I've never heard that directed at Ruby.
I use Erlang primarily and like it better -- as you said, there is a bit less magic, things are more explicit and it is the primary target language of the VM.
Some people like one syntax, some the other. Elixir has some constructs like the pipe, and in some cases macros can be nicer, but I don't feel like I need them and am quite happy using Erlang. I rather like that the syntax is not like Ruby or Python or any other language I know; it helps my mind switch contexts. So that is a personal choice. Also, Elixir has a very welcoming community for beginners. That's a big bonus if you are starting out, but Erlang also has more existing literature, books, and learning materials.
Take a look at both and see which one appeals more to you. At the end of the day, they both use the excellent BEAM VM and you'd end up learning largely similar concepts anyway.
Elixir has a cleaner syntax that I find a lot easier to read and work with (subjective), and it has powerful macro capabilities that make it possible to do metaprogramming (objective). There's nothing "magic" about Elixir syntax, and it is easy to interface with Erlang libraries.
Good stuff. I've started using Elixir and Phoenix in a new project (I wanted concurrency for making a lot of HTTP calls) and, while functional programming does take getting used to (you cannot write something like a counter that increments itself in a loop), it's been relatively easy to pick up. Compared to learning Node.js, it's been a more relaxed and productive experience (although maybe I'm not making a fair comparison, because I've implemented way more things in Node than I have yet tried in Elixir.)
If all you want is a counter to loop through something a given number of times, you could do something like this:
    defmodule Counter do
      def loop(num) do
        Enum.each(0..num, fn i -> IO.puts(i) end)
      end
    end
If you really want to write your own recursive function (Enum already does that for you, but it's always good to try different ways, so you can learn the language better), you could do something like this:
    defmodule Counter do
      def loop(num), do: _loop(0, num)

      defp _loop(idx, num) when idx == num, do: IO.puts(idx)

      defp _loop(idx, num) do
        IO.puts(idx)
        _loop(idx + 1, num)
      end
    end
The thing that is nice about Elixir is that it does tail-call optimization on recursive functions, as long as the recursive call is the very last thing in the function. This means that for loop(10) you won't have 10 copies of _loop's stack frame in memory waiting to unwind.
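If the counter needs to accumulate a value rather than just print, the same pattern applies: the loop's state becomes an explicit argument (or an Enum.reduce accumulator) instead of a mutated variable. A small sketch:

```elixir
defmodule Sum do
  # Sums 0..num; `acc` plays the role a mutable counter would in an
  # imperative loop, and the recursive call is in tail position.
  def upto(num), do: do_sum(0, num, 0)

  defp do_sum(idx, num, acc) when idx > num, do: acc
  defp do_sum(idx, num, acc), do: do_sum(idx + 1, num, acc + idx)
end

IO.puts(Sum.upto(10))                 # 55
IO.puts(Enum.reduce(0..10, 0, &+/2))  # the idiomatic one-liner: 55
```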
Why are redditors and ycombinators so annoying? A new version has been released, and instead of discussing what is important and notable about this new release, you all go off on a tangent discussing the pros and cons of Elixir.
Why don't you guys take the discussion elsewhere so that interested readers can focus on what is new and relevant about this release? The subject of the post is a new release, not people showing off their knowledge and opinions about programming languages.
It just makes these forums intolerable, especially the ones about new releases, new products, etc.
You have a very adequate and concise changelog linked for this purpose.
IMO people discussing here aren't "showing off their knowledge". Posting a version upgrade serves as a visibility reminder and people start inquiring about the features of the language/framework. Eventually a technology gains enough critical mass so that more people start using it.
Versus LFE? Not really; at the end of the day they're both ways of making programs for the Erlang VM that aim to improve on Erlang. You can use Erlang, Elixir, and LFE libraries from any of the three [0]. See Jose's comment at https://groups.google.com/d/msg/lisp-flavoured-erlang/ensAkz... .
[0] though beware string types -- LFE string functions generally take charlists, same as Erlang; Elixir ones take utf-8 binaries. So you lose a bit of unicode-just-worksness compared to Elixir, but it isn't really a problem, you just need to be aware of the mismatch and be prepared to handle it
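Concretely, the mismatch looks like this:

```elixir
# Elixir double-quoted strings are UTF-8 binaries; Erlang- and LFE-style
# strings are charlists (plain lists of codepoints).
IO.inspect("abc" == 'abc')  # false -- different types entirely

# Converting at the boundary is cheap and explicit:
IO.inspect(List.to_string('hello'))      # "hello"
IO.inspect(String.to_charlist("hello"))  # 'hello'
```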
[+] [-] rpazyaquian|9 years ago|reply
I hear a lot about OTP and the "let it crash" mantra, but I just don't quite understand what's so great about it. Maybe it's just due to my problem domain (web development), but it doesn't seem like as big a draw to Elixir and Erlang as pattern matching and FP are.
[+] [-] rdtsc|9 years ago|reply
Because it matches the reality better. Concurrent applications built out of individual isolated processes often map well to real world problems. Not all problems, but quite a few of them. Say if you just multiply a matrix or maybe calculate an average, you don't need concurrency and supervision. If you want to sort a list of files, you don't need etc. Not all systems have to be reliable and fault tolerant.
> Making everything asynchronous and detached makes things more complicated, not less.
The real world is asynchronous and detached. You as an agent are detached from other co-workers or friends. Before you take a breath you don't have to wait for them to finish taking a breath. A car driving down the street is detached from others. They come to an intersection and have to work together in order not to run into each other. But after that they keep going on their way. Also they are fault tolerant -- because your neighbor's car engine crashed, doesn't mean yours should stop working. Seems a bit like a silly example but that maps fairly well to distributed computing -- requests, user connections, work processes, items in a processing pipeline, database shards and so on.
So given that there are these types of problem what framework language should you pick to solve it? I would say first of all one which reduces the impedance mismatch and lets your write clearest, simplest code. And also that one that's most fault tolerant.
You could spawn threads (or green threads) in some languages -- but you'd probably be sharing memory and have some bugs in there. You could spawn OS processes and/or have multiple containers/vms in the cloud, but that gets expensive. You could use Rust for example to achieve some of those goal though, because it will guarantee to a certain degree during compile time that your application will have less memory and concurrency errors. Or you could use something based on the BEAM VM (Erlang or Elixir) for example, and you get a different approach to fault tolerance there.
[+] [-] methehack|9 years ago|reply
I think the idea is that if you start with phoenix you don't have pay that cost unless you need it. If you end up needing to scale, the features are there. I think Uncle Bob said once that "architecture is the art of delaying decisions". Phoenix / Elixir let you do this.
Another part of the story is channels. If you're doing websocket work, you can't find a better platform than phoenix / elixir.
Once you start digging in, you realize phoenix / elixir are awesome even if you're not using all the concurrency / distribution stuff. The functional nature / immutability posture of elixir helps you with a large code base and/or lots of devs. There are other features -- pattern matching and macros come to mind -- that make Elixir a great place to code.
All that said, if you have a client, say, with modest needs that is never going to have much traffic, all that might not be worth going with something you're not familiar with. Similarly, if you're talking about a 1-2 dev project, it might not worth learning something new. In those cases, maybe it's rails or django or whatever ftw.
[+] [-] nine_k|9 years ago|reply
You can write any algorithm using any Turing-complete machine, including the original tape-based Turing machine or a language like Malbolge. But it is going to take an inordinate amount of time to write and debug.
What FP and message passing buy you is the ease of reasoning. No shared state -> no problem of concurrent updates. No state at all -> no questions like "but does this method update that instance member?". All your state lives in message queues and call stacks, and both provde very stringent and easy-to-understand disciplines.
This also gives you composability: you have easy time putting things together and rearranging them as you see fit, without caring of any hidden data dependencies. Everything is explicit.
(Combined with a static type checker and a right type system, it gives you the effect of "if it compiles, it runs correctly 99% of times", seen e.g. in Haskell or Rust. Elixir is not a statically-typed language, though.)
Yes, it does sometimes come at a cost. Not all immutable data structures can be made as efficient as mutable ones. Usually they are as fast as mutable ones, but pay some space cost; IIRC it's theoretically proven that the cost is no more than a logarithmic one.
With RAM being so much cheaper than developers' time, it's a rather good tradeoff. At least, using JIT in Java or Javascript, or just using Python and Ruby, one usually pays a similar or higher RAM cost, for a fraction of benefits. JS does move towards immutability and FP-ness, though, because the above applies to it, too, and it's not so OOP-heavy as for make FP approaches largely impractical.
[+] [-] mavelikara|9 years ago|reply
In the paper A Note on Distributed Computing, Waldo et al argue that having a distributed object system is bound to fail no matter how hard one tries.
Erlang works around this problem by making the local object model identical to the remote one in all aspects except latency. The semantic model is copy-on-send asynchronous message-passing. This is inconvenient when working with local components, as you rightly point out. But putting up with this inconvenience up-front makes it easier to refactor towards a distributed system later much easier. The language attempts to work around the inconvenience of asynchronous message-passing using functional abstractions - which is largely what OTP seem to be.BTW, the language can be used in a synchronous context. It works just like another functional language with all code executing from start-to-finish on a single thread of execution.
[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...
[+] [-] mbucc|9 years ago|reply
The top two for me are:
(1) you start to just program the sunny path. It's faster and more fun to code in this mode. And with pattern matching of function arguments and return values (as well as guards) and you get pre- and post-condition asserts all while programming the sunny path.
(2) What are you supposed to do when your system raises an run-time error anyway? I think in practice, in the majority of cases simply restarting your process in some known state is the best you can do.
And that "start in some known state" is non-trivial for a system that handles concurrent inputs. OTP helps you do that correctly.
Edit: Formating. Add note pattern matching of function args and guards.
[+] [-] akeating|9 years ago|reply
[+] [-] weixiyen|9 years ago|reply
Running processes in parallel is cheap in Erlang, so you can run hundreds of thousands of them in parallel on a commodity servers. Generally, if you are running any sort of high-traffic service with a lot of dynamic stuff happening, there are probably parts of it that Erlang/OTP can help make more efficient and easier to maintain.
[+] [-] omginternets|9 years ago|reply
[0] http://ferd.ca/the-zen-of-erlang.html
[+] [-] themartorana|9 years ago|reply
But that said, for where Erlang shines - backends and distributed services - first-class concurrency is damn near a must-have these days (see: Go, Erlang, etc. recent surge in use and popularity).
I ended up with Go, but had I more time to learn Erlang (or Elixir was 1.0 when I started looking) it would have been a solid contender.
Also, don't discount the incredibly solid and robust BEAM VM. If you are writing distributed services or servers (see: Riak, RabbitMQ) the overhead for spinning off "instances" or "processes" within the VM is virtually zero-cost, and in production systems has like sixty-four 9s of uptime. [0]
[0] ok, nine, but still: https://pragprog.com/articles/erlang
tazjin | 9 years ago
It does in languages like JS because they're not built for it. Elixir programs on average don't look very complicated.
pmarreck | 9 years ago
https://speakerdeck.com/mosic/elixir-at-evercam?slide=6
Make more sense now? :) (in addition to the other responses)
tl;dr Evercam replaced Node.js, Sidekiq, Pusher and Upstart with Elixir, huge net gain in stack simplicity which is arguably a strong plus from a long-term maintenance perspective.
andy_ppp | 9 years ago
OTP is actually a lot faster than you'd expect, and it lets you use all CPUs very simply while also providing network transparency (on top of supervisors and "let it crash").
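A sketch of the "use all CPUs very simply" part, using `Task.async_stream/3`; the workload here is made up:

```elixir
# Fan the work out across one process per input item, capped at one
# concurrent task per scheduler (roughly one scheduler per CPU core).
squares =
  1..8
  |> Task.async_stream(fn n -> n * n end,
       max_concurrency: System.schedulers_online())
  |> Enum.map(fn {:ok, result} -> result end)

# squares == [1, 4, 9, 16, 25, 36, 49, 64]; results keep input order.
```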
tim333 | 9 years ago
desireco42 | 9 years ago
The overhead you are talking about can only be mental, as processes in Elixir are very light. Compared to other similar languages, at least for me, Elixir doesn't require as much overthinking. Now, you could also say that I wasn't working on complex apps with it, so that might be why.
Anyhow, this is my view.
hosh | 9 years ago
However, let's take Ruby on Rails as an example. I have gotten MVP-quality software out in production. We can quickly validate whether this is something customers want. However, when the app starts getting traction, I have found that I need to reach for things outside a RoR monolith. Usually, the first thing is getting some kind of background processing job going, with Sidekiq and Resque.
At this point, we have shifted from a synchronous, no-shared-state architecture to one that requires background processing. This is where we start thinking in terms of concurrency and how to deal with it.
Since Rails is built on top of Ruby, it inherits Ruby's lack of sufficiently robust concurrency primitives that work well at scale. To use something like Sidekiq, we rely on Redis. I have had projects where I needed to pass data through a series of background jobs, like a pipeline. Soon I am implementing what are effectively mutexes as a database column. I'm processing data that is effectively being streamed from third parties, so each individual Sidekiq job has to have guards so that it can be idempotent. I have to be mindful of queues, analyzing where the bottlenecks are, how to structure the queues, and how to tune how many of which kinds of jobs get resources. I have had to rely on outside monitoring solutions to make sure these processes stay up and running, and write things so they can still recover on restart. I had to fork Sidekiq and rewrite core parts of it to create a different set of guarantees for a use-case -- a big deal to every Rubyist I talked to, and yet seemingly normal in the Erlang world.
In other words, in the Rails world, concurrency is handled in a very coarse-grained way, often relying on software outside of Ruby (such as Redis).
I found myself reinventing, in very crude ways and at a coarse-grained level, what OTP already offers. I adopted Subversion and later git because I was crudely reinventing version control (shell scripts and tar); I dropped PHP and Perl CGI scripts in favor of Rails back when Rails was version 1.1 because I knew what a project that didn't use what Rails offers would look like (a messy puddle). And likewise, now that I've found myself badly reinventing ideas OTP already has, it's the tool I'll reach for.
With Erlang and Elixir, it is easier to reason about concurrency because the primitives are baked in deep, and it costs little to spin up something asynchronously. Reasoning about concurrency in the Rails world is treated more like black magic -- obscure, non-obvious, forbidden, potentially dangerous, and socially unacceptable.
rogerbraun | 9 years ago
SwellJoe | 9 years ago
That seems like the perfect domain for this, to me.
HTTP is (traditionally) stateless. So, in the past CGI apps were independent entities. Every new hit was a new world in which the app ran. And, every interaction between each CGI app happened via a standard communication medium (a database or writing to files rather than messages, but if you squint and don't look too closely, you might see the analogy there). There are negatives to that...but, there are real benefits, too. That provided a tremendous level of flexibility in how apps were built and in how they interacted with each other. I'd argue that in some regards (though certainly not all), going to an always-on appserver model was a step backward.
So, to me, in some ways Elixir looks like CGI, only without most of the negatives of CGI. A lot of the mental overhead, like parsing parameters and setting up database connections, can be abstracted away with a modern/fast language designed just for this kind of work. And, the end result is a robust system without a lot of mental overhead other than learning the abstractions and how they interact.
You call out supervisors and message passing and the like as being negatives; complexity you don't want to have to deal with. But, you aren't really comparing apples to apples. Node.js is also asynchronous, and in its bare state is a mess; callbacks are vastly more difficult to interact with and reason about than a system like Erlang/Elixir, and a lot of other languages use the callback model for asynchronous programming.
So, I guess what I'm trying to say is that if you're doing asynchronous programming, Elixir is going to be easier than most (though ES6 has concurrency primitives that are starting to make sense, and Python 3 and Ruby have started getting them, Perl 6 has good options, Perl 5 has somewhat clumsy modules for it, etc.).
All that said, I'm mostly spending my time learning JavaScript/ES6/Node lately; despite its warts, the ecosystem is huge, and there's so many smart people working on the language that by the time of ES7, there will be a very convincing concurrency story. Maybe still not as convincing as Erlang/Elixir and the OTP. But, probably good enough for web apps and services.
I looked into Elixir and Phoenix briefly; watched a few videos, read a few tutorials, never wrote any code. And, came away with a distinct impression of a tiny, tiny, ecosystem. I'm accustomed to going to CPAN or Ruby Gems or PyPI or npm, and finding not just one, but several options for whatever task I want to accomplish, even relatively obscure stuff. It'll be a while before Elixir comes close to even CPAN (which is smaller than all the others these days, though still manages to have modules for nearly every problem I tackle).
jswny | 9 years ago
themgt | 9 years ago
Even if you loved Erlang, I'd argue the language is just too esoteric and jarring to most programmers to ever gain serious traction. I remember reading about Erlang's magic back in ~2007 (maybe [1]) and giving it a brief shot, but deciding there was no way I wanted to look at that kind of code 8 hours a day. But coming from writing fairly FP-style Ruby/CoffeeScript/ES6-7, Elixir feels only a step or two further down that path - in many ways actually conceptually simpler - and with enormous benefits.
[1] https://pragprog.com/articles/erlang
macintux | 9 years ago
I find Elixir uncomfortable because I left behind that syntax and have no desire to return to it.
nilkn | 9 years ago
bratsche | 9 years ago
I use Elixir and I like it, but I wouldn't tell you to use Elixir over Erlang. There's no competition; what's good for one language is good for the other. So use whichever makes more sense to you, and hopefully all of us in this Erlang community win.
out_of_protocol | 9 years ago
* Dependency management
* Rake-like CLI tasks (mix)
* Compile-time metaprogramming
* Clean and consistent standard library
* Really good UTF-8 support
* And, well, syntax, in the sense of "easier to understand for switchers from Ruby/Python/etc."
mijoharas | 9 years ago
Here is a short article that outlines some advantages. http://theerlangelist.com/article/why_elixir
learc83 | 9 years ago
rdtsc | 9 years ago
Some people like one syntax, some the other. Elixir has some constructs, like the pipe operator, and in some cases macros can be nicer, but I don't feel like I need them and am quite happy using Erlang. I rather like that the syntax is not like Ruby or Python or any other language I know; it helps my mind switch contexts. So that is a personal choice. Also, Elixir has a very welcoming community for beginners. That's a big bonus if you are starting out, but Erlang also has more existing literature, books, and learning materials.
Take a look at both and see which one appeals more to you. At the end of the day, they both use the excellent BEAM VM and you'd end up learning largely similar concepts anyway.
innocentoldguy | 9 years ago
firasd | 9 years ago
jdimov10 | 9 years ago
unvs | 9 years ago
innocentoldguy | 9 years ago
anonymousguy | 9 years ago
sugarpile | 9 years ago
innocentoldguy | 9 years ago
vfclists | 9 years ago
Why don't you guys take the discussion elsewhere so that interested readers can focus on what is new and relevant about this release? The subject of the post is a new release, not people showing off their knowledge and opinions about computer programming and languages.
It just makes these forums intolerable, especially the ones about new releases, new products, etc.
pdimitar | 9 years ago
IMO the people discussing here aren't "showing off their knowledge". Posting a version upgrade serves as a visibility reminder, and people start inquiring about the features of the language/framework. Eventually a technology gains enough critical mass that more people start using it.
Absolutely nothing wrong with that.
dingleberry | 9 years ago
I learned Lisp instead and now I'm learning LFE (Lisp Flavored Erlang).
Do I miss anything by not learning Elixir?
SEMW | 9 years ago
Versus LFE? Not really; at the end of the day they're both ways of making programs for the Erlang VM that aim to improve on Erlang. You can use Erlang, Elixir, and LFE libraries from any of the three[0]. See Jose's comment at https://groups.google.com/d/msg/lisp-flavoured-erlang/ensAkz... .
[0] Though beware string types -- LFE string functions generally take charlists, same as Erlang; Elixir ones take UTF-8 binaries. So you lose a bit of unicode-just-worksness compared to Elixir, but it isn't really a problem; you just need to be aware of the mismatch and be prepared to handle it.
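The mismatch in one snippet; these are standard Elixir conversions, nothing LFE-specific:

```elixir
charlist = 'hello'   # Erlang/LFE-style string: a list of codepoints
binary = "hello"     # Elixir-style string: a UTF-8 binary

true = is_list(charlist)
true = is_binary(binary)
false = charlist == binary   # same text, different types

# Converting at the boundary between the two worlds:
"hello" = List.to_string(charlist)
'hello' = String.to_charlist(binary)
```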
tim333 | 9 years ago