item 17904580

Why I never finish my Haskell programs

296 points | AndrewDucker | 7 years ago | blog.plover.com

217 comments

[+] pash|7 years ago|reply
I think many beginning Haskellers have this problem. To overcome it, my advice is to write Haskell code with the knowledge that you can re-write it more readily than you can re-write code in many other languages. Write the code that fits the immediate application, and rely on the type-checker to make it straightforward to refactor when the need arises.

I think that’s what many experienced Haskellers would say is the language’s best attribute for getting things done, that the type system makes it possible to refactor even a large program with the confidence that all the parts you replace will slot perfectly back into the original structure. Or that changing the core structure itself will result in a new structure that has all the right slots for all the various bits and pieces that need to slot into it. Having the confidence that you will be able to refactor painlessly, you should be less concerned with finding the perfect abstraction up front. Write code, make it work, then make it better.

And as others have mentioned, yes, the right abstraction and appropriate level of generality will become easier to recognize as you write more Haskell. Go with the first decent implementation you can come up with, and as you gain experience that first implementation will more and more often turn out to be a good one. In the meantime, run HLint and read more Haskell code, and you’ll quickly pick up most of the generalizations that really make sense to use in typical applications. The more experienced you get, the more confident you should become that significant time spent generalizing code to no real purpose is pointless and tends to result in code that both reads worse and runs worse than what you started with.
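A minimal sketch of the "write code, make it work, then make it better" cycle described above (the function names are invented for illustration): the first draft is monomorphic and hand-rolled; the later refactor generalizes the type and swaps in standard combinators, and the type-checker confirms that existing call sites still fit.

```haskell
-- First draft: monomorphic, explicit recursion. It works; ship it.
sumSquares1 :: [Int] -> Int
sumSquares1 []     = 0
sumSquares1 (x:xs) = x * x + sumSquares1 xs

-- Later refactor: generalize the element type and reuse standard
-- combinators. The compiler verifies every existing call site still fits.
sumSquares2 :: Num a => [a] -> a
sumSquares2 = sum . map (^ (2 :: Int))

main :: IO ()
main = do
  print (sumSquares1 [1, 2, 3])         -- 14
  print (sumSquares2 [1, 2, 3 :: Int])  -- 14, same answer, more general type
```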

[+] nextos|7 years ago|reply
In my case, I think it's because large programs always end up containing subproblems that are better expressed in paradigms other than the functional one. And it becomes frustrating when I can't shoehorn them into the Haskell way of doing things.

My favorite languages are, for this reason, multi-paradigm: Common Lisp, Mozart/Oz, Scala and C++. It's a bit like building La Sagrada Familia (which is why it's depicted on the cover of CTM). If you want a superb solution, you end up using many styles, like Gaudí did.

But I reckon the future will move towards more provably correct solutions, and we will be using things closer to e.g. Idris. Hopefully that's orthogonal to homoiconicity.

[+] hohenheim|7 years ago|reply
Completely agree with you, and I'd like to add that this is true for any programming language, really. I often find people, especially juniors, more obsessed with how they solve a problem and which design patterns they use than with actually solving the problem. To be honest, I was once such a person.

The best advice, as you stated it, is to go with your first instinct, and then refine it, IF needed. It might turn out that the code you wrote was redundant anyway.

[+] KurtMueller|7 years ago|reply
I know Elm isn't the same as Haskell, but these points are also used as a selling point for Elm. And more often than not, as with Haskell, the type system makes it possible to refactor a large program with the confidence that all the parts you replace will slot perfectly back into the original structure, OR the compiler will yell at you until they do :).
[+] hhmc|7 years ago|reply
I'm reminded of one of my favourite HN comments on Haskell:

'There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."'

https://news.ycombinator.com/item?id=7962612

Although it's not 100% applicable in this case (unless you argue that the cost is to your own time) - I think the sentiment is perfect.

[+] chongli|7 years ago|reply
The blank canvas analogy is a good one but for the wrong reason. Think of the aspiring novelist with a blank stack of paper in front of them. The problem is not the freedom they have, it's their lack of discipline. It's far easier to write PR for an agency or injury reports for a sports blog than it is to write a novel.

The reason so many programmers struggle with this problem is that nobody is paying them to write CRUD applications in Haskell. If someone were, they'd find the language more than adequate to the task, and the job would be easy and painless.

People only go down rabbit holes when they don't have a manager breathing down their neck all day. It takes real discipline to create really good hobby projects on your own, regardless of language.

[+] zengid|7 years ago|reply
I think Rich Hickey described it pretty well when he said (paraphrasing from [0]) that static languages present the programmer with neat little puzzles to solve, so it feels like we're writing applications when we're really just creating intricate types and abstractions. I think he has a point, but I certainly don't want to give up the benefits of static languages, like being able to catch all of my silly errors.

[0] https://youtu.be/2V1FtfBDsLU?t=39m44s

[+] workleg|7 years ago|reply
There's something very seductive about a chance to comment on posts like this one, for a particular group of people. These posts whisper in your ears: "the fact that you've had a hard time and failed to understand, learn and harness these exotic languages does not mean that you are not brilliant. Here's a blank canvas for you to convince others of the same, perpetuating the perspective convenient to your ego, which got bruised by the hard task in the past." (All in good humor.)

Jokes aside, I code professionally in Haskell every day, all day long. I'm very productive, get more productive, and love the language more every day. It's all about realistic expectations and commitment (just like anything that's hard). You cannot expect to learn and be productive in Haskell within a week, a month, or even a year. You need a lot of patience and dedication, but you will be rewarded; that I can promise.
[+] village-idiot|7 years ago|reply
Oh yeah, as I mentioned in another comment, this is me. The result is that I have a bunch of 20% finished Haskell projects, and a bunch of finished Rails projects that accomplish the exact same task.
[+] vmchale|7 years ago|reply
> 'There's something very seductive about languages like Rust or Scala or Haskell or even C++. These languages whisper in our ears "you are brilliant and here's a blank canvas where you can design the most perfect abstraction the world has ever seen."'

Do they? I think the Haskell community online is pretty crappy for this and other reasons, but GHC/Haskell are at their best when you actually use them to write programs instead of doing things like this.

[+] tannhaeuser|7 years ago|reply
Coming from Prolog, I'm loving Haskell, but I'm seeing a special kind of "worse is better" at work here: projects using innovative and sophisticated languages run a high risk of falling into an obsessive "getting it right" and "holier than thou" mentality, with the result that they often never get finished, and even when finished, have a high barrier to attracting contributors. It's unfair and embarrassing, but shitty languages like JavaScript and PHP often allow you to be more utilitarian and churn out good-enough code, because you're not emotionally attached to them and aren't under peer-group pressure to express e.g. algebraic properties in their purest form or some such.
[+] pwm|7 years ago|reply
I'll put a slightly different spin on this: At my current job the system I'm writing is in PHP (for reasons...). At its core it's all about domain modelling with some workflow sprinkled on top. Haskell would be a near perfect fit, but it can be done in PHP. However, the code itself looks very Haskelly. I have a growing library of domain-specific types that are composed into larger and larger tree-shaped ADTs, all the way up to the top-level entities. Validation is mapping/folding these ADT trees, where nodes are (type, data) pairs that are mapped to their instantiation or its failure. The workflow bit is essentially a couple of FSMs with conditional transitions, where the condition is usually the existence of some type that fulfils its constraint. Etc.

Reading it back it sounds analogous to the classic saying of one can write fortran in any language. In my opinion having experience with Haskell gives you a mindset first and foremost. When you bump into a problem where this mindset is a good fit you can use that knowledge with whatever tools are at hand.
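For a rough idea of the style described above, rendered in Haskell itself (the actual system is PHP, and every name here is hypothetical): each domain type gets a smart constructor that either instantiates it or reports a failure, and validation composes applicatively up the ADT tree.

```haskell
-- Domain types wrap raw data; invalid values are unrepresentable
-- once construction succeeds.
newtype Name = Name String deriving (Eq, Show)
newtype Age  = Age Int     deriving (Eq, Show)

-- Smart constructors: a (type, data) pair mapped to its
-- instantiation or its failure.
mkName :: String -> Either String Name
mkName s
  | null s    = Left "name must be non-empty"
  | otherwise = Right (Name s)

mkAge :: Int -> Either String Age
mkAge n
  | n < 0     = Left "age must be non-negative"
  | otherwise = Right (Age n)

-- A larger node in the ADT tree, validated by composing the leaves.
data Person = Person Name Age deriving (Eq, Show)

mkPerson :: String -> Int -> Either String Person
mkPerson s n = Person <$> mkName s <*> mkAge n

main :: IO ()
main = do
  print (mkPerson "Ada" 36)  -- Right (Person (Name "Ada") (Age 36))
  print (mkPerson "" 36)     -- Left "name must be non-empty"
```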

[+] yogthos|7 years ago|reply
I definitely find there's a strong relationship between language complexity and bikeshedding. When you have a big language like Haskell or Scala, it's easy to get distracted from solving the actual problem by trying to do it the most "proper" way possible. This is also how you end up with design astronautics in enterprise Java, where people obsess over using every design pattern in the book instead of writing direct, concise code that's going to be maintainable.

Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems. That goes a long way in avoiding the analysis paralysis problem.

[+] Barrin92|7 years ago|reply
Reminds me of one of Rich Hickey's talks where he states that people just love to solve puzzles, and that complex languages with, for example, demanding type systems trick people into thinking they are adding safety or value when in fact they're just outsmarting themselves.

Often when I write something in Haskell I have this feeling. It feels satisfying to build up nice types and constructs, but I don't know if it pays off at all in any objective or empirical sense. I can't really tell if I've invented the problem that I just solved.

[+] lmm|7 years ago|reply
I think the very fact that this happens in Java - a deliberately simplistic language - is proof that it's not a problem with the language itself. If the language doesn't support particular constructs, all that means is that people will bikeshed over which pattern to use instead of which language construct.
[+] mac01021|7 years ago|reply
I don't disagree in general but is Haskell a big language?
[+] vmchale|7 years ago|reply
> Nowadays I have a strong preference for simple and focused languages that use a small number of patterns that can be applied to a wide range of problems.

Haskell is a simple and focused language, provided you use GHC with minimal extensions.

[+] ocharles|7 years ago|reply
There's a weird interpretation here that this post is the author expressing frustration with this process. I often have a similar experience and I wouldn't want it any other way! This process of repeatedly asking "what is this?" just doesn't seem to come up in the same way in other languages. This gives me the ability to do some practice I wouldn't otherwise be able to do, one that often has tremendous transfer over to "real work", because I can start to see patterns and get a feel for what is really going on once I get rid of all the dull IO tedium.

If you want an analogy, consider this like studying jazz or something. Sure, you could just notice a II V I progression and call it done, but if you pick away at each individual note, you can find a whole lot more going on behind the scenes.

Basically, I don't really consider what's happening in the blog post a bad thing. It just has a time and a place, and you need to be aware when it's the wrong time.

[+] rossdavidh|7 years ago|reply
I've seen this, and I've never had a Haskell gig. One of the best pieces of advice I ever got re: programming was, "don't write the abstraction until you've written three cases first". This is good advice in the intended way (you will write the abstraction better when you get to it), but even better because you probably often won't ever write three of the thing in question, in which case you shouldn't write the abstraction anyway.
[+] vmchale|7 years ago|reply
> One of the best pieces of advice I ever got re: programming was, "don't write the abstraction until you've written three cases first". This is good advice in the intended way (you will write the abstraction better when you get to it),

I don't think this is good advice in the context of Haskell. Haskell allows some abstractions that aren't just "black boxes" or glorified templates. It's kind of like elementary logic: the more models there are, the fewer proofs there are, and vice versa. Analogously, when you write a more abstract function, there are fewer ways you can manipulate it, and thus it is in some sense simpler.

[+] gabipurcaru|7 years ago|reply
I think the main reason is that there is no _actual_ problem that OP needs to solve. If there were one, he would get pragmatic, pick one of the reasonable solutions, and move on with his life.

Though it's true that Haskell makes it easy to slip into a mindset where you want to simplify and generalize the code as much as possible, leading to time wasted on overly general solutions. Which shouldn't be necessary, because if you later want to extend a solution from, say, lists to traversables, Haskell gives you the confidence to refactor safely at that point.
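As a concrete sketch of the lists-to-traversables refactor mentioned above (the function names are invented): a list-only first draft can later be generalized to any Traversable, and the compiler verifies that the old list call sites still type-check unchanged.

```haskell
-- First draft: validate every element, lists only.
checkAllList :: [Int] -> Maybe [Int]
checkAllList = mapM check
  where check n = if n >= 0 then Just n else Nothing

-- Later refactor: the same logic over any Traversable. List call
-- sites compile unchanged; Maybe, Either, trees etc. now work too.
checkAll :: Traversable t => t Int -> Maybe (t Int)
checkAll = traverse (\n -> if n >= 0 then Just n else Nothing)

main :: IO ()
main = do
  print (checkAllList [1, 2, 3])  -- Just [1,2,3]
  print (checkAll [1, 2, 3])      -- Just [1,2,3], same call site shape
  print (checkAll (Just (-1)))    -- Nothing
```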

[+] seanmcdirmid|7 years ago|reply
There are some languages that tempt more abstract navel-gazing than others. I'm not sure about Haskell, but Scala tends to do that. Anyway, there is something about human behavior and language design that can lead to "more is less" situations.
[+] sdegutis|7 years ago|reply
I lost confidence in Haskell's ability to let me write something one way and safely refactor it later, when I found out that you can't use a ton of the algorithmic functions in the standard library because they do things all wrong.
[+] thomasjm42|7 years ago|reply
I think the problem in this case is that the author's attempt at generalization went off in the wrong direction.

The fact that fixed-length lists aren't working well as a representation for polynomials is a hint. Polynomials with real coefficients form a vector space [0], so you should really think of them as infinite-dimensional lists of numbers (in which most of the numbers are zero).

Once you know you want to represent an infinite dimensional vector with only a few nonzero entries, you can use a sparse vector. The first library that comes up when you google "Haskell sparse vector" is `Math.LinearAlgebra.Sparse.Vector`, which lets you write something like this (I haven't run this code but it should get the job done):

  import Math.LinearAlgebra.Sparse.Vector as V

  poly1 = V.sparseList [1, -3, 0, 1]

  poly2 = V.sparseList [3, 3]

  sumPolys = V.unionVecsWith (+)

So, I read this more as an article about trying to reinvent the wheel in a domain which isn't necessarily simple, which isn't a good idea in any language.

[0]: https://en.wikipedia.org/wiki/Examples_of_vector_spaces#Poly...

[+] aidenn0|7 years ago|reply
I see this a lot with intermediate lisp programmers; they spend so much time building ivory tower abstractions that the original problem is forgotten. I sometimes call this "bottom down" programming.

Predicting the future is very hard; remembering the past is much easier. If you find yourself typing the exact same pattern for the Nth time, then it's time to refactor it into a macro or a function as appropriate.

Figuring out which parts of the next 1000 lines of code you are going to write will benefit from an abstraction (and which abstraction that is) is a rare skill that comes only (if at all) with experience.

[+] sokoloff|7 years ago|reply
“Bottom down” really resonated with me in my dalliances with both Common Lisp and Haskell.
[+] agentultra|7 years ago|reply
The turning point for me was when I realized that these problems exist in other languages and are practically invisible. Without a good type system and inference you cannot hope to catch all of your type errors. You'll just write some unit tests and run your program many times until you're certain you've sussed them all out... until that pesky bug report comes in. Then you get to play detective!

I honestly don't have time left in my life for such meaningless drudgery.

With a type system, I have the computer aid me in designing the program. It keeps me honest and ensures that I don't have type errors, which are a huge class of things I'd rather not have to think too hard about.

When I program in Haskell I spend more time solving problems than fixing programming errors.

[+] KirinDave|7 years ago|reply
You could also just write the less general version and stop listening to folks who flip out and scoff at every piece of code that isn't maximally general.

Crazy, I know, but especially when we're doing labor in industry, even without maximal generality your code is probably going to outlive its patron corporation and then die in obscurity.

[+] gowld|7 years ago|reply
The only person who flips out and scoffs at every piece of code that isn't maximally general is... the author of the code. That's the problem.
[+] ianbicking|7 years ago|reply
I always felt very productive in PHP, because the only rewarding part of PHP is having made something. It never rewarded sophistication... but making a web site that did something WAS rewarding, so all my attention went to that part.

Calling Haskell an anti-PHP seems fair.

[+] ainar-g|7 years ago|reply
>I ought to be able to generalize this

I've never understood this. Unless you're writing a library that you plan to publish, or already have actual cases where you need a more general solution, why spend time trying to generalise code instead of moving on to the next task?

[+] kelvin0|7 years ago|reply
My tentative answer is this: someone who uses Haskell appreciates elegant solutions (a.k.a. mathematical/functional ones) and is inclined to write things 'properly' once. They might also imagine that the functions they write will not only solve the current issue, but be useful to themselves and others in other programs ... thus going down the generalization-and-elegance rabbit hole.

Of course, all of this is purely speculation on my part.

[+] endgame|7 years ago|reply
One reason is that more general types mean you can write fewer functions, and so the function that you do write is more likely to be correct.

The function `intMap :: (Int -> Int) -> [Int] -> [Int]` can do all sorts of crazy things that are not map. The function `map :: (a -> b) -> [a] -> [b]` can do far fewer crazy things, and just from looking at the type you can say that any `b` in the result list _must_ have come from applying the function to some `a` in the input list.
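To make the point above runnable (a sketch; `intMap`'s deliberately silly body is invented): the monomorphic type admits implementations that are nothing like map, while the polymorphic type cannot conjure a `b` out of thin air, so far fewer wrong implementations exist.

```haskell
-- Type-checks perfectly, yet is not map at all: it reverses the list
-- and appends an invented value. The Int-only type can't rule this out.
intMap :: (Int -> Int) -> [Int] -> [Int]
intMap f xs = reverse (map f xs) ++ [42]

-- Every b in the result must come from applying f to some a in the
-- input; there is no way to invent a b, duplicate arbitrarily, etc.
polyMap :: (a -> b) -> [a] -> [b]
polyMap _ []     = []
polyMap f (x:xs) = f x : polyMap f xs

main :: IO ()
main = do
  print (intMap (+1) [1, 2, 3])   -- [4,3,2,42]  (legal but crazy)
  print (polyMap (+1) [1, 2, 3])  -- [2,3,4]
```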

[+] dack|7 years ago|reply
My take is it's because there are some really great benefits to implementing things more precisely (which usually means "more general" in this sense), and Haskell is more amenable to it than most.

There are many cases where it's worth it, so much so that it's worth at least considering whether a more generic solution is better.

I think the problem is that it's hard to predict how deep a rabbit hole like this gets - so you think it's just a few minutes extra work, but it ends up completely derailing the project.

[+] abecedarius|7 years ago|reply
If you spend some fraction of each task reflecting on how you could've written it 'better', then over time you'll learn to write more of your code 'better' from the start. (You could say making it more general is not always better, and that's true. But it is a win often enough to make it a skill worth cultivating.)
[+] AlexCoventry|7 years ago|reply
Some people overvalue the power of generality, and overdiscount the obscurity and complexity it tends to involve.

Also, it's not a bad way to learn the ins and outs of the language, really.

[+] jerf|7 years ago|reply
You can eventually "come out the other side" and get to the point where you write the general version correctly the first time. But it is some degree of work. I think it's a good exercise for a pro, but you can certainly live without it.

The general principle does come in handy elsewhere, though. Doing the most useful work with the minimum power is a generally useful skill. I get a lot of mileage out of it in other languages, because across a couple hundred modules, the difference in character between modules with minimal dependencies and modules that carelessly overuse power becomes quite substantial.

[+] ajross|7 years ago|reply
> You can [...] write the general version correctly the first time. But it is some degree of work. [...] Doing the most useful work with the minimum power is a generally useful skill.

There's something wrong with that logic, but I'm too lazy to work out the proof in the general case.

[+] phendrenad2|7 years ago|reply
Can’t say I agree. There will always be unexplored ways to generalize.
[+] jonalmeida|7 years ago|reply
I think there's an error in the first example.

  Poly [1, -3, 0, 1]
Should be:

  Poly [1, 0, -3, 1]
EDIT: My mistake.
[+] sqrt17|7 years ago|reply
It's not an error: it starts with the unit coefficient (1), then x (-3), then x^2 (0), then x^3 (1). Some things become easier this way - including addition of polynomials of differing degree - and as an added bonus you can fantasize about representing power series as (lazy) infinite lists.
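The power-series aside actually works out in a few lines (a sketch using the standard lazy-list formulation, not code from the article): with coefficients stored lowest-degree first, addition is just zipWith (+), and multiplication follows the usual recursive definition.

```haskell
-- A formal power series as a lazy (possibly infinite) coefficient
-- list, lowest degree first.
addSeries :: Num a => [a] -> [a] -> [a]
addSeries = zipWith (+)

-- (a + x*as) * B = a*b + x * (a*bs + as*B), the classic lazy recursion.
mulSeries :: Num a => [a] -> [a] -> [a]
mulSeries (a:as) bbs@(b:bs) = a * b : addSeries (map (a *) bs) (mulSeries as bbs)
mulSeries _      _          = []

-- 1/(1-x) = 1 + x + x^2 + ... : every coefficient is 1
geometric :: [Integer]
geometric = repeat 1

main :: IO ()
main = do
  print (take 5 (addSeries geometric geometric))  -- [2,2,2,2,2]
  print (take 5 (mulSeries geometric geometric))  -- [1,2,3,4,5], i.e. 1/(1-x)^2
```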
[+] laurentl|7 years ago|reply
I thought so too at first, but given the way addition is defined later on, it makes sense to keep the coefficients sorted by increasing power (the leftmost element in the list is its head, and the easiest to access when doing anything recursive).
[+] jlebar|7 years ago|reply
I am not a Haskell programmer, but is this correct?

    (Poly a) + (Poly b) = Poly $ addup a b   where
       addup [] b  = b
       addup a  [] = a
       addup (a:as) (b:bs) = (a+b):(addup as bs)
Imagine a simple example, adding `x+2` and `10`. In OP's representation, these would be represented as the lists [1, 2] and [10]. That is, the first element is the coefficient of the term of highest degree.

But doesn't this implementation add list elements left-to-right, so we'd end up with the result [11, 2] instead of [1, 12]?

[+] desdiv|7 years ago|reply
You're misreading OP's representation:

> The polynomial x^3 −3x +1 is represented as Poly [1, -3, 0, 1]

It starts with the 0th coefficient and goes up. So adding `x+2` and `10` would be zip-adding lists [2,1] and [10,0].

[+] bojo|7 years ago|reply

  module Main where

  newtype Poly a = Poly [a] deriving (Eq, Show)

  instance Num a => Num (Poly a) where
    Poly a + Poly b = Poly $ addup a b
      where
        addup [] b  = b
        addup a  [] = a
        addup (a:as) (b:bs) = (a+b):(addup as bs)

  main :: IO ()
  main = print (Poly [1, 2] + Poly [10])


  *Main> main
  Poly [11,2]
Yep, you are correct.
[+] Jeff_Brown|7 years ago|reply
"Doc, it hurts when I do this." "Don't do that."
[+] skybrian|7 years ago|reply
Similar things happen in other languages. Most recently, I started writing a program in Elm to try it out, realized I wanted to use some CSS, and then got distracted looking at the various ways to do that, with their different tradeoffs. (What is this stylish-elephants package?)

Sometimes it's more productive when you join a team that has already decided on its standards. You don't learn as much, though.

[+] village-idiot|7 years ago|reply
This is me. I have a backlog of personal projects that I've slowly burned through and every freaking time I start with Haskell and end up in Rails. I love working in Haskell ... in theory. In practice I spend way too much time figuring out how to wrangle data into the correct shape when it would've taken me 30 minutes to accomplish in any other language I know, static or dynamic.
[+] misja111|7 years ago|reply
It's about Scala, not Haskell, but the gist is the same:

at my company we're giving new candidates a live coding interview where they get one hour to write a very simple application using either Java or Scala. Candidates are free to choose between those languages.

The funny thing is that candidates who choose Scala are never able to fully finish the assignment. Even though the application is really simple, many don't finish half of it, and some even get completely stuck in complex for-comprehensions and whatnot. Candidates who choose Java, however, are mostly able to finish the assignment. The code might not always be the most elegant, but it does what it is supposed to do.

Even though I like Scala a lot, I feel it has the downside that it gives you too many options to do the same thing. This can get in the way when you are simply trying to implement some basic business feature.

[+] amelius|7 years ago|reply
So Scala is harder to write, but the question is: is it easier to read?

(Since in general a particular piece of code is read far more often than it is written).