I gave haskell a shot as some of my earlier github repos indicate: https://github.com/substack. I even wrote my blog in haskell with happstack, since snap hadn't gotten popular yet.
Haskell is very hard, but even after 3 years of pretty intensive use, I never really felt productive with haskell in the same way that I've felt in other languages. Anything I did in haskell required a lot of thinking up-front and tinkering in the REPL to make sure the types all agreed and it was rather time-consuming to assemble a selection of data types and functions that mapped adequately onto the problem I was trying to solve. More often than not, the result was something of an unsightly jumble that would take a long time to convert into nicer-looking code or to make an API less terrible to use.
I built an underwater ROV control system in haskell in 2010 which went well enough, but I had to tinker with the RTS scheduling constantly to keep the more processor-hungry threads from starving out the other ones for CPU. The system worked, but I had no idea what horrors GHC was performing on my behalf.
Later I built the first prototype of my startup with haskell, but the sheer volume of things that I didn't know kept getting in the way of getting stuff done in a reasonable time frame. Then we started to incrementally phase out haskell in favor of node.
I write a lot of node.js now and it's really nice. The whole runtime system easily fits into my head all at once and the execution model is simple and predictable. I can also spend more time writing and testing software and less time learning obscure theories so that the libraries and abstractions I use make sense.
The point in the article about haskell being "too clever for this benchmark" sums up haskell generally in my experience.
That's pretty much what I mean when I call Haskell hard to learn. For me it's also been a steep learning curve, and my experience hasn't been altogether different from yours.
I started out maybe 5 years ago following tutorials, reading up on all the metaphors about Monads, and doing Project Euler problems.
After a while I started to tackle some small web related things with Haskell and had exactly your experience of running into a lack of understanding of how the system works and wrapping my head around functional datatypes.
I pretty much gave up on Haskell as a practical language at that point, but something kept me coming back once in a while.
Then at a point I had a use for making a small web service fast and the Node prototype I made performed badly and crashed in spectacular ways under high loads. I found Snap and made a quick prototype in Haskell. At that point the experience of years of small experiments must finally have made something click. In a very short time I had a very fast service using almost no memory. It's deployed in production (as a part of http://www.webpop.com) and has been extremely stable.
By now I think I've crossed some kind of barrier, and feel like I'm both being productive and having fun when writing Haskell, but it really didn't come easy to me and all else being equal my experience tells me that a good deal of my colleagues would have an even harder time.
Did you perhaps jump into the water too quickly? I'm currently learning a couple of functional languages (including Haskell) and using it in production environments but my current use is restricted to "I have an input that will always produce a certain output. There are no database or environmental dependencies, this is straight computation. I want to never have to worry about this function ever again". And so far, knock on wood, haskell has been killer for that scenario. I'll probably eventually transition a lot more of my code to functional languages, but will do so slowly (using Go otherwise).
I've also never been productive with Haskell. It's cute, it raises interesting problems if you enjoy wrangling with mathy problems for the sake of it, but when it comes to getting stuff done in a deeply imperative, eager world, the impedance mismatch is simply overwhelming.
Moreover, I was very proficient in OCaml before I discovered Haskell, and it just spoiled me. It has all of Haskell's qualities which matter (type inference, algebraic data structures, a naturally functional mindset) without the parts you regularly have to fight (mandatory monads and monad transformers, algorithmic complexity in a lazy context, tedious interfacing to the underlying OS).
If you felt like Haskell had many amazing qualities, spoiled by a couple of unacceptable flaws, especially when it comes to acknowledging how the real world works, I'd suggest that you give a try to OCaml. You should be proficient with it within a couple of days.
I believe you are attributing a library issue to a language. Before today (and by today I literally mean a month ago when Yesod released a cross-platform development server that automatically re-compiles your web application) there wasn't a productive set of libraries and tools to build a web application with in Haskell. 3 years ago when you started, and even until 1-2 years ago the library situation was absolutely horrible. Web frameworks with very little to offer, mediocre templating languages, not even an attempt at a database ORM. Tutorials would have you write a bunch of code to achieve a detail taken for granted in libraries used in web frameworks of other languages.
Please take a look at doing real-world, productive web development with Yesod. http://www.yesodweb.com
You are still going to take a productivity hit in Haskell due to lack of libraries in comparison to Ruby, Python, etc. So the practical reason for using Haskell today is to take advantage of the amazing performance, take advantage of Haskell non-web libraries in the backend, or for a high assurance project where its type system can rule out most common web development bugs.
oh, and Yesod is even faster than the mentioned Snap framework which is already much faster than Node (and unlike Haskell, Node does not scale to multi-core). Although Yesod isn't going to automatically cache the fibonacci sequence for this artificial benchmark because in the real world I have never once been tasked with writing code like that for a web application.
Let's have a talk about the $ operator. When you use it more than once per line, you're writing code that looks weird and is hard to read. Switch to the similar function-composition operator, and everything looks more idiomatic.
Instead of:
fibServer x = quickHttpServe $ writeBS $ B.pack $ show (fibonacci x)

Just write:

fibServer = quickHttpServe . writeBS . B.pack . show . fibonacci

The case for $ is where you want application instead of composition. Anyway, it's a little style thing, but it's nice to use the composition operator (.) when you want composition and the application operator ($) when you want application. It makes the code look nicer and it shows its intent more clearly. And really, they are different concepts, even if they both type-check the same.
And finally, remember that function application, by default, is the highest-precedence operator in Haskell. When you write:
foo . (bar 42) . baz
It's the same as:
foo . bar 42 . baz
Because of operator precedence. $ only exists to change the order of operations for a particular expression.
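The distinction is easy to see in a few lines. Here's a minimal sketch — `fibonacci`, `renderApp`, and `renderComp` are hypothetical names for illustration, not the article's actual server code:

```haskell
-- A hypothetical naive fibonacci, standing in for the article's version.
fibonacci :: Int -> Integer
fibonacci n
  | n < 2     = fromIntegral n
  | otherwise = fibonacci (n - 1) + fibonacci (n - 2)

-- Application style: ($) merely delays function application.
renderApp :: Int -> String
renderApp x = show $ fibonacci x

-- Composition style: build the pipeline once, then apply it.
renderComp :: Int -> String
renderComp = show . fibonacci

main :: IO ()
main = print (renderApp 10 == renderComp 10)
```

Both styles compute the same thing; the point is only that (.) states the pipeline-building intent directly.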
Please stop with the toy benchmarks and pretty one-liners that show how awesome Haskell is.
There is a growing list of smart programmers who get all enchanted with Haskell, jump into it wholeheartedly, and end up frustrated (see bottom of message). GHC makes the typical C++ compiler seem fast. Once code grows past the homework problem size, all hope of understanding memory usage is lost. I don't think people really get how bad it is. The whole culture of Haskell is based around static checking, yet you have to run a program in order to find out if it blows your memory limit several times over.
Haskell is still a neat language, but we need less advocacy based on toy programs and more honest realism.

(Here's a typical, non-superficial example: http://wagerlabs.com/haskell-vs-erlang-reloaded-0)
That post is from 2005. The situation is entirely different today w/respect to speed (both the compiler and the addition of ByteString and Text libraries), and productive libraries, particularly for web development. Likewise, it is a rare case that you would run into memory consumption issues.
I do agree that there is entirely too much enthusiastic toying around in Haskell and not enough real world users and honesty about limitations.
It is somewhat of a shame that the learning curve plays such a significant role for career programmers.
You would expect that people that spend years and years working with their tools would be willing to put a few weeks or months into learning their most important tool: the programming language. It seems most programmers get frustrated and abandon learning of different programming paradigms very quickly.
The problem is not why one "would be willing to put a few weeks or months into learning their most important tool", but why one would be willing to put a few weeks or months into learning the next silver bullet, and repeat that two or three times a year.
Experienced developers have learned that, typically, newer languages are better than older ones, but they typically do not get better by leaps and bounds across the whole domain. Instead, language evolution typically is a matter of two steps forward, ten sideways, and one step back.
Also, experience tells that new languages often get overhyped as making only forward steps. Given that, it does not make sense to switch horses too often, or one would be forever learning, and never be productive.
Maybe most programmers who bother to look up different paradigms. My experience is that most programmers overall aren't even aware of different paradigms, let alone that things could be better: they're taught what they're taught in school or at home and don't move beyond. I've heard the phrase "Well if you know C++ you know it all" at least three times.
Not to be too contrarian, but until I see proof to the contrary I think Norvig said it best:
In terms of programming-in-the-large, at Google and elsewhere, I think that language choice is not as important as all the other choices: if you have the right overall architecture, the right team of programmers, the right development process that allows for rapid development with continuous improvement, then many languages will work for you;
The ecosystem for Haskell is improving rapidly. My startup built a computer vision application on top of easyVision, with the intention of rewriting it in ObjC. Instead we are working with the Haskell community to target the mobile platform. A year ago that would have been a dicey bet.
About our Haskell experience: Yes, the learning curve seems steep, but mainly because of the things you have to unlearn (OMG no for-loops!). However, functions are the most modular things ever invented. That translates into an uncanny ability to add features quickly. A sophisticated type system catches many errors at compile time.
I love Haskell; I do a lot of work with it. That said, I use Python for the web. As nice as Snap is, Haskell just doesn't have the vast array of quality libraries for web development that Python does. Lately, this means that I do web development in Flask, and heavy lifting in Haskell.
I'll say this again: Ted shouldn't have wasted everyone's time highlighting the response time of the request. He effectively benchmarked V8 right on his blog and then called it slow. Now everyone's complaints at least demonstrate that part to be untrue.
Ironically, every one of his clients in the ab concurrency test will receive their responses before the users of the hypothetically parallel Python and Ruby services, because Node responds an order of magnitude faster. So he didn't actually demonstrate a problem.
Why is a Fibonacci sequence used as a benchmark for an argument for concurrent programming? The Fibonacci sequence is a recursive algorithm that inherently has dependencies on previous calculations that prevent effective concurrent execution. The Fibonacci algorithm executed concurrently is going to spend an inordinate amount of time creating tasks that do a trivial calculation (add two numbers together).
If you want to benchmark concurrency, at least pick a benchmark algorithm that exercises concurrency. The FFT comes to mind, but there are probably lots of better examples (that is a challenge to HNers ;-).
He was not benchmarking concurrency, he was pointing out that Node is a single-threaded system that essentially implements old-style cooperative multitasking, where a single task will block everything else. He could have used sleep() and it would have illustrated the same point (and more elegantly, since half of the responses miss the point entirely and focus on the Fibonacci part).
Node developers probably don't do a lot of computationally complex stuff, but when they do, they have to think about the concurrency problem. Even something as trivial as sorting a large list or parsing a huge chunk of JSON is going to stop all other requests from executing.
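For contrast, GHC's runtime preempts its lightweight threads (it can switch at allocation points), so one CPU-hungry computation doesn't wedge everything else. A minimal sketch under default RTS settings — the thread names and workload here are invented for illustration:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  done <- newEmptyMVar
  -- A deliberately CPU-hungry thread: a big Integer product, no I/O.
  _ <- forkIO $
    let big = product [1 .. 20000 :: Integer]
    in big `seq` return ()
  -- A "small request" that still gets scheduled while the above churns.
  _ <- forkIO $ putMVar done "small request served"
  msg <- takeMVar done
  putStrLn msg
```

In a cooperative single-threaded event loop, the equivalent of the busy computation would have to finish before the small request could ever run.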
I think that's where the "well-executed trolling" comment comes in. He essentially whipped everyone into a tizzy by complaining that a framework for helping to get better performance out of I/O-bound web services by handling I/O calls asynchronously doesn't work so well if you take a CPU-bound monkey wrench and jam it directly into the gearbox.
I suspect that it worked so well because the idea of using Fib to talk about performance is kind of built into the collective programmer unconscious. The whys, hows, and whats of using Fib to talk about performance are somewhat less well-entrenched, though. So there's room to trip people up by getting them to go, "Yeah, this sounds interesting, and I recognize all the words so it probably isn't technobabble!"
Wasn't the point of the original post that node.js blocks the event loop while it executes functions and thus effectively kills concurrency? Not how fast it calculates fibonacci numbers and sends it over http...
The point was that it kills parallelism – Node is just a single-threaded event loop, running on a single core. And since computing fibonacci numbers is a CPU bound activity, that type of benchmark would be relevant but for the memoization bit.
EDIT: Well also, the author would have to actually benchmark this vs. Node with many concurrent clients in order for it to be relevant; here he's just timing a single request from start to finish, which obviously doesn't say anything about how this scales.
Hasn't the author already said that the measuring of Fibonacci was not the point of his tirade? Which makes the line in this post, 'I think a lot of people missed the main point of Dziuba's troll', slightly amusing. Is there now going to be someone running this 'benchmark' in whatever language they can? One of the blogs already posted said he's going to find time to run it in C.
My benchmark was mostly a parody, since Haskell just memoized the call and never really did the work.
The point of the article was more the difference between the languages that really tackle concurrency (Haskell, Clojure, Go, Erlang) and Node's way of simply offering one solution that works for a lot of problems where the common scripting languages (especially PHP) don't work that well.
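The memoization being parodied is easy to reproduce. The classic trick is a lazily shared list, so each Fibonacci number is computed at most once and reused by every later lookup — a sketch of the idea, not necessarily what the benchmarked code actually did:

```haskell
-- The classic lazily shared fibonacci list: each element is forced
-- at most once, then reused by every subsequent (!!) lookup.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n

main :: IO ()
main = print (fib 30)
```

In a long-running server, `fibs` is a top-level value shared across requests, which is why repeated benchmark hits against the same input measure a list lookup rather than any real work.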
The real point is that Haskell is quite good in many areas and is excellent in parallelism and concurrency. While other languages are excellent in concurrency and not so good in other areas.
Those many languages are the answer for the sole field of concurrency and Haskell is the answer when you combine many fields, one of which could happen to be concurrency.
For every application there is a language best suited... Let's stop trying to force every language to be good at everything and then compare them as though they were all the same, shall we?
What's more important is applying some important concepts in haskell - functional programming and dividing your program into tiny self-contained parts. You can write this way in most languages - Ruby, Python, Scala, etc. The fancier parts of Haskell - lazy evaluation, static typing, whatever - are less important to making software that works than its functional nature.
The static typing is essential to making software that works (and scales). Dynamic typing requires a lot more test code and test code is expensive to write, maintain, and repeatedly execute.
I don't think the point of Ted Dziuba's rant was that every request is a large calculation. In Node.js, even if most results are small and generated quickly, when one large calculation request comes in, all the small ones stop going out until it is done. A Haskell web server like Snap should not have this problem.
Haskell occupies a niche similar to Hamilton's quaternions (for classical physics) and Heisenberg's matrices (for quantum mechanics) - not mandatory, inaccessible to the masses and abandoned with haste once a more intuitive tool is found.

But they will always be there if you need them.
dorian-graph | 14 years ago:
I'll do my part. Delphi, here I come. ;)
piccadilly | 14 years ago:
PLEASE GIVE UP
agentultra | 14 years ago:
If I read the article correctly, it's simply that concurrency and parallelism are what's important. There are a host of languages that do that quite well, and Haskell just happens to be one of them.
thesz | 14 years ago:
http://hackage.haskell.org/package/monadiccp
numeromancer | 14 years ago:
What was the question?