
Monads as a Programming Pattern

233 points | charmonium | 6 years ago | samgrayson.me | reply

80 comments

[+] pjc50|6 years ago|reply
I think monads highlight something underappreciated about programming, which is that different people regard very different things as "intuitive". It almost seems that modalities of thinking about programming are bigger, external things than programming itself. Certainly they're barriers to learning.

Like Lisp, there seems to be about 10% of the programmer population who think "ah this is obviously the clearest way to do it" and the remaining 90% who go "huh?", and the 10% are really bad at explaining it in a way the others can grasp.

The two monad explainers that really resonated with me were:

- How do you deal with state in a language that would prefer to be stateless? Answer: wrap the entire external universe and all its messy state up into an object, then pass that down a chain of functions which can return "universe, but with some bytes written to the network" (IO monad)

- If you have a set of objects with the same mutually pluggable connectors on both ends, you can daisy-chain them in any order like extension cables or toy train tracks.

(It's a joke, but people need to recognise why "A monad is just a monoid in the category of endofunctors" is a bad explanation 99% of the time and understand how to produce better explanations)
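The "daisy-chain" picture can be sketched in Python; the `Maybe` class, `parse_int`, and `reciprocal` here are illustrative names, not from any library. Each `bind` plugs one step into the next, and a `Nothing` anywhere short-circuits the rest of the chain:

```python
class Maybe:
    """A value that may be absent. Nothing short-circuits the chain."""
    def __init__(self, value, present):
        self.value = value
        self.present = present

    @staticmethod
    def just(value):
        return Maybe(value, True)

    @staticmethod
    def nothing():
        return Maybe(None, False)

    def bind(self, f):
        # If there is no value, skip f entirely; otherwise plug the
        # value into f, which itself returns a Maybe.
        return f(self.value) if self.present else self

def parse_int(s):
    return Maybe.just(int(s)) if s.isdigit() else Maybe.nothing()

def reciprocal(n):
    return Maybe.just(1 / n) if n != 0 else Maybe.nothing()

# The steps chain like track pieces: each bind connects one Maybe to the next.
ok = Maybe.just("4").bind(parse_int).bind(reciprocal)
bad = Maybe.just("0").bind(parse_int).bind(reciprocal)
print(ok.present, ok.value)    # True 0.25
print(bad.present, bad.value)  # False None
```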

[+] Sharlin|6 years ago|reply
There is a saying that the difference between poetry and math is that poetry is about giving different names to the same thing, and math is about giving the same name to different things.

Grokking monads really requires the adoption of the mathematical mindset of finding commonalities in things that at a first glance appear completely different. Tell an average OO programmer that lists, exceptions, dependency injection, and asynchronous execution all share a common structure, and they will probably give you a blank stare.

Of course, just the fact that abstracting over those things is possible doesn’t mean it is useful. In a pure FP language it might be necessary, but why should I bother with weird mathy things in my imperative language that has side effects and global state? You really have to start by explaining why composability is such a nice thing to have, and that gives the motivation for various FP patterns that are, fundamentally, all about composability.

[+] dkarl|6 years ago|reply
John von Neumann famously said that in mathematics you don't understand things, you just get used to them. I think monads are an example of this, and all the attempts to make them intuitive before you use them are a huge waste of time. Many mathematical concepts are like this. Take compactness. At first, the definition of compactness seems a little random, but it's useful for proving things, so you keep using it, and after you do enough proofs you develop an intuition for it. It feels very clear and fundamental instead of random.

Once you have this feeling of obviousness, how do you transmit it to the next person? Many things can be explained, but in math it was figured out long ago that at any given time there are many things we don't know how to explain, and aren't sure we ever will be able to explain in a way that transmits understanding faster than experience can instill it. The best thing you can do for someone is give them the definitions and some problems to work on. We can't rule out that someone may eventually come up with a brilliant explanation that provides a shortcut to understanding, but we know from experience that some things persistently defy our efforts to explain them. If hundreds of people's earnest attempts to explain something have failed, then perhaps teachers should keep trying, but learners should not waste their time with these experiments; they should skip the explanations and seek active engagement with the idea through problem solving.

That's how I feel about monads. I can't absolutely rule out the possibility that someday an effective way to explain them will be found, but I think we can agree at this point that there is ample evidence that people who want to understand monads should not waste their time waiting for the right analogy to be blogged and posted on HN. They should just start programming, and soon enough they too will feel like a great explanation is on the tip of their tongue.

[+] munificent|6 years ago|reply
I think you're roughly right, though I shy away from "intuitive" since it can come across as excluding people who haven't learned certain things yet.

What I've observed is that a lot of learning programming is about becoming comfortable with thinking of more and more concepts as first-class entities. Turning larger pieces of code, procedures, and patterns into objects/values you can pass around, hold, create, etc.

1. A small-scale one I see a lot is that many programmers don't realize boolean expressions produce values. Instead, they think of them as syntax that can only be used inside control flow statements. It is a mental leap to realize that you can go from:

    if (thing == otherThing) { doStuff(); }
    moreStuff();
    if (thing == otherThing) { doLastStuff(); }
To:

    var sameThings = thing == otherThing;
    if (sameThings) { doStuff(); }
    moreStuff();
    if (sameThings) { doLastStuff(); }
2. In some sense, recursion is about thinking of the procedure itself as an operation you can use even while you are in the middle of defining it. The mental trick is: "Assume you already have this procedure; how could you use it while defining it?"

3. Closures are another big one where you take a piece of code and turn it into a value that you can delay executing, pass to other procedures, etc.
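The closure point can be shown in a small Python sketch: a block of code becomes a value you can store, pass to other procedures, and run later.

```python
def make_counter(start):
    # The inner function closes over `count`; the code becomes a value.
    count = start
    def step():
        nonlocal count
        count += 1
        return count
    return step

counter = make_counter(10)   # nothing has run yet; `counter` is just a value
later = [counter, counter]   # store it, pass it around like any other value
print(later[0]())  # 11
print(later[1]())  # 12
```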

[+] fizixer|6 years ago|reply
I see what you're saying. Unfortunately that's not the only problem with monads. Monads have two problems:

- They're explained badly.

- Once they're explained well, many (including me) think it's a bad idea (not the monad, the motivation behind its use in pure-FP).

You start with a pure functional formalism, because you like to be stateless. Then you realize that avoiding statefulness is impossible in computing. So you try to shoehorn state into your stateless state of affairs (no pun), while at the same time refusing to admit that you're not stateless anymore.

The larger issue: some folks appear to think that imperativity is a subset of declarativity.

What that really means is that they're saying computing is a proper subset of math.

And by that, what they're really saying is that actions are a subset of words.

In other words, if you write something on a piece of paper that describes some action in the real world, (roughly speaking) that action happens or is supposed to happen automatically.

That's not how the world works. That's not how computers work. And I'm sorry to say that's not how programming works.

Computing is not a subset of mathematics. And Mathematics is not a subset of Computing. Same goes with Physics (Physics is not a subset of Mathematics, despite there being way more math used in Physics than CS).

Physics, Computing, and Mathematics are the holy trinity of the reality that we live in. You have to give each of the three the respect they deserve, and only try to make connections between the three, and NOT try to make any of them a subset of the other.

[+] jagthebeetle|6 years ago|reply
I think I approximately understand monads, but I find the "wrap the entire external universe" type of explanation a bit confusing. It's just the result of one IO computation plus a way of handling / unwrapping it!

When trying to map (or bind :) a monad to OOP/imperative programming, it strikes me as more straightforward to think that, e.g., the IO monad is an object that encapsulates the result of a network operation, together with some utility functions for dealing with the result and not having to deal with the unwrapping of the result. Kind of like Futures or Promises.

(Now, here the real FPers will say that the comparison is flawed because of certain FP desiderata like referential transparency, but that's beyond the extent to which I've internalized monads.)

[+] AnimalMuppet|6 years ago|reply
> I think monads highlight something underappreciated about programming, which is that different people regard very different things as "intuitive". It almost seems that modalities of thinking about programming are bigger, external things than programming itself.

I think this also. You need to pick a language that fits your problem area, but also one that fits your brain.

[+] stcredzero|6 years ago|reply
How do you deal with state in a language that would prefer to be stateless?

Encapsulation? No one gets direct access to the state. Instead, there are methods or functions for dealing with the state indirectly, crafted to protect those outside.

Answer: wrap the entire external universe and all its messy state up into an object, then pass that down a chain of functions which can return "universe, but with some bytes written to the network" (IO monad)

Sounds like "Outside the Asylum" from the Hitchhiker's Guide to the Galaxy universe. Basically, someone decided the world had gone mad, so he constructed an inside-out house to be an asylum for it.

http://outside-the-asylum.net/

"A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you." How is a Monad anything different than a particular kind of object wrapper?

https://www.infoq.com/presentations/functional-pros-cons/

[+] kybernetikos|6 years ago|reply
I think the programming pattern paradigm is the right way to explain monads (as you can tell from my own monad explanation: https://kybernetikos.com/2012/07/10/design-pattern-wrapper-w...) The category theory language around it is off-putting to working programmers, and many of the ways people explain it involve introducing yet more terminology rather than just working with the perfectly adequate terminology that working programmers already have.

I think part of it is that lots of languages don't have sufficient abstraction ability to encapsulate the monad pattern in their type system, and those that do tend to be academically focused. That doesn't mean you can't (and don't) use monads in the other languages; it's just that you can't describe the whole pattern in their type systems.

I was pretty sad that the discussion around javascript promises sidelined the monad pattern.

[+] tomstuart|6 years ago|reply
In my experience it’s easiest for people to understand monads when they’re presented as an abstract data type (e.g. what I wrote at https://codon.com/refactoring-ruby-with-monads#abstract-data...) rather than a programming pattern, because despite having “abstract” in the name, abstract data types are a relatively concrete thing that programmers already know how to use.
[+] feanaro|6 years ago|reply
> The category theory language around it is off-putting to working programmers

Off-putting to some working programmers. I am a mathematically-minded working programmer who prefers mathematical and type theoretical explanations quite strongly since they just click for me.

[+] noelwelsh|6 years ago|reply
This seems a pretty good introduction to monads.

There is a cliche that no-one can write a good introduction to monads. I don't think that is true. My opinion is more that monads were so far from the average programmer's experience they could not grok them. I think as more people experience particular instances of monads (mostly Futures / Promises) the mystique will wear off and eventually they will be a widely known part of the programmer's toolkit. I've seen this happen already with other language constructs such as first-class functions. ("The future is here just not evenly distributed.")

[+] ajnin|6 years ago|reply
"Introduction to monads" articles generally miss the mark because either 1) they insist on using Haskell syntax throughout, which is most likely to be unfamiliar and obtuse to programmers looking for this kind of article. Expecting people to learn a new syntax at the same time as a new concept is bound to be confusing; at least it was for me when I first came across the idea. Or 2) they go through a bunch of examples with various names and numbers of methods, like Maybe and Collection in this article, and the reader is supposed to infer the common structure themselves. At least this article goes through the formal definition, but I think that ideally it should come first, as it is easier to see the structure of the examples once you have established a mental model.
[+] Insanity|6 years ago|reply
I like the comparison with first class functions. I do feel like they are more commonly understood nowadays than when I first started programming ~12ish years ago.

I think because languages like Java are evolving towards a world where those things are common, the average programmer is 'forced' to learn those concepts.

[+] mbrock|6 years ago|reply
I think for newbies there are two separate aspects to explain: first an intro to algebraic structures perhaps using groups as an example, then monads in particular.

It’s important to emphasize that algebraic structures are abstractions or “interfaces” that let you reason with a small set of axioms, like proving stuff about all groups and writing functions polymorphic for all monads.

With monads in particular I think the pure/map/join presentation is great. First explain taking “a” to “m a” and “a -> b” to “m a -> m b” and then “m (m a)” to “m a”. The examples of IO, Maybe, and [a] are great.

You can also mention how JavaScript promises don’t work as monads because they have an implicit join semantics as a practical compromise.
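The pure/map/join presentation can be sketched in Python using lists as the `m` (helper names here are illustrative); `bind` falls out as map followed by join:

```python
def pure(a):
    # a -> m a: wrap a bare value
    return [a]

def fmap(f, m):
    # (a -> b) lifted to m a -> m b
    return [f(x) for x in m]

def join(mm):
    # m (m a) -> m a: collapse one layer of wrapping
    return [x for inner in mm for x in inner]

def bind(m, f):
    # bind is just map followed by join
    return join(fmap(f, m))

print(pure(3))                          # [3]
print(fmap(lambda x: x * 2, [1, 2]))    # [2, 4]
print(join([[1], [2, 3]]))              # [1, 2, 3]
print(bind([1, 2], lambda x: [x, -x]))  # [1, -1, 2, -2]
```

JavaScript's `Promise.prototype.then` performs that join implicitly when the callback returns another promise, which is why promises don't quite fit the pattern.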

[+] noelwelsh|6 years ago|reply
You really don't need to introduce groups or other algebraic structures to understand monads, and if your goal is to teach monads I believe it is harmful to do this.

The average programmer is much more likely to encounter monads (e.g. error handling, promises), than they are to encounter groups in an abstract context. Unnecessary maths will drive people away. Making a big deal of axioms, reasoning, and all this stuff that functional programmers love (including myself) is the approach that has been tried for the last 20 years, and it has failed to reach the mainstream. If you want to reach the average programmer you need to solve problems they care about in a language (both programming and natural) they understand.

[+] chongli|6 years ago|reply
groups as an example

Why not go all the way and teach functors and applicatives before monads? Then the student can see that monads are just a small addition built on top of the other two. Functors, in particular, are very easy to grasp despite their intimidating name. They just generalize map over arbitrary data structures:

    l_double : List Int -> List Int
    l_double xs = map (* 2) xs

    f_double : Functor f => f Int -> f Int
    f_double xs = map (* 2) xs
Applicatives are a little bit trickier but once you get them, there's only a tiny jump to get to monads. Taught this way, people will realize that they don't need the full power of monads for everything. Then, when people learn about idiom brackets [1], they start to get really excited! Instead of writing this:

    m_add : Maybe Int -> Maybe Int -> Maybe Int
    m_add x y = case x of
                     Nothing => Nothing
                     Just x' => case y of
                                     Nothing => Nothing
                                     Just y' => Just (x' + y')
You can write this:

    m_add' : Maybe Int -> Maybe Int -> Maybe Int
    m_add' x y = [| x + y |]
Much better!

[1] http://docs.idris-lang.org/en/latest/tutorial/interfaces.htm...

[+] toastal|6 years ago|reply
I get the intention, but it's even harder to understand with this Java/C# syntax. I feel like if you're gonna talk about FP you should probably highlight along the Haskell or Scala code (or similar) and provide OOP stuff for reference in case it's not clear. It ends up being so verbose that I don't think many people see the 'point'.
[+] secure|6 years ago|reply
To me, reading examples in a syntax I’m familiar with helps me much more than reading them in a language where the concept might be elegant to express, but hard for me to get into.

I think it’d be best to include examples in a number of languages with a language selector, actually. That way, people who are fluent in functional languages can read that version, and others can read the one they are more fluent in.

[+] jerf|6 years ago|reply
This seems pretty good; the only thing on my mental checklist of "common monad discussion failures" is only a half-point off, because I'd suggest for:

"A monad is a type that wraps an object of another type. There is no direct way to get that ‘inside’ object. Instead you ask the monad to act on it for you."

that you want to add some emphasis that the monad interface itself provides no way to reach in and get the innards out, but that does not prevent specific implementations of the monad interface from providing ways of getting the insides. Obviously, Maybe X lets you get the value out if there is one, for instance. This can at least be inferred from the rest of the content in the post, since it uses types that can clearly be extracted from. It is not a requirement of implementing the monad interface on a particular type/class/whatever that there be no way to reach inside and manipulate the contents.

But otherwise pretty good.

(I think this was commonly screwed up in Haskell discussions because the IO monad looms so large, and does have that characteristic where you can never simply extract the inside, give or take unsafe calls. People who end up coming away from these discussions with the impression that the monads literally never let you extract the values come away with the question "What's the use of such a thing then?", to which the correct answer is indeed, yes, that's pretty useless. However, specific implementations always have some way of getting values out, be it via IO in the case of IO, direct querying in the case of List/Maybe/Option/Either, or other fancy things in the fancier implementations like STM. Controlling the extraction is generally how they implement their guarantees, if any, like for IO and STM.)

[+] pron|6 years ago|reply
I think it's important to separate the issue of what monads are/how they're used from the question of when they should be used at all. While monads are very useful for working with various streams/sequences even in imperative languages, they are used in Haskell for what amounts to effects in pure-FP, and that use ("Promise" in the article) has a much better alternative in imperative languages. Arguably, it has a better alternative even in pure-FP languages (linear types).

Here's a recent talk I gave on the subject: https://youtu.be/r6P0_FDr53Q

[+] goto11|6 years ago|reply
The problem with monads is they are horrible without some form of syntax sugar. I like the metaphor of "programmable semicolon", but in languages without some built-in support, the "semicolon" becomes repetitive boilerplate which is more code than the actual operations happening in the monad.
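The "repetitive semicolon" complaint can be sketched in Python (using `None` as the failure case; the function names are illustrative): without syntax sugar, every step repeats the same unwrap-and-check plumbing that Haskell's `do` notation would hide.

```python
def parse(s):
    return int(s) if s.isdigit() else None

def halve(n):
    return n // 2 if n % 2 == 0 else None

# Without sugar: the "semicolon" between steps is written out by hand
def pipeline_manual(s):
    n = parse(s)
    if n is None:
        return None
    h = halve(n)
    if h is None:
        return None
    return h + 1

# The same plumbing factored into bind; do-notation would hide even this
def bind(m, f):
    return None if m is None else f(m)

def pipeline_bound(s):
    return bind(bind(parse(s), halve), lambda h: h + 1)

print(pipeline_manual("8"), pipeline_bound("8"))  # 5 5
print(pipeline_manual("7"), pipeline_bound("7"))  # None None
```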
[+] fxj|6 years ago|reply
I like the "do" notation in Haskell because it boils down the meaning of monads to the following:

Monads let you break the functional programming paradigm that a function should have the same value each time it is called. e.g.

    do x <- getInput
       y <- getInput
       return (x + y)
Here getInput is called twice, and each time it can return a different value. When you think about how this can happen in a pure functional language, you have to understand what a monad does.
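One way to see how getInput can return different values while every function stays pure: represent each action as a function from a state to a (value, new state) pair, and let bind thread the state through. This Python sketch mimics a State-monad-style setup reading from a queue of pending inputs (all names are illustrative):

```python
def get_input(state):
    # Pure: given the same state, it always returns the same pair.
    value, rest = state[0], state[1:]
    return value, rest

def bind(action, f):
    # Thread the state: run `action`, feed its value to f, and
    # continue with the new state.
    def composed(state):
        value, state2 = action(state)
        return f(value)(state2)
    return composed

def pure(value):
    return lambda state: (value, state)

# do { x <- getInput; y <- getInput; return (x + y) }
program = bind(get_input, lambda x:
          bind(get_input, lambda y:
          pure(x + y)))

result, leftover = program([3, 4, 5])
print(result)    # 7  (3 + 4: the two calls saw different values)
print(leftover)  # [5]
```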

The eureka moment came when I learned about flatMap in Scala, which is nothing else but the bind function in Haskell ("just flatMap that sXXt"), and voilà, that's how to use monads.

See the following explanation:

https://medium.com/free-code-camp/demystifying-the-monad-in-...

[+] namelosw|6 years ago|reply
It was hard to explain 20 years ago, but today, if someone has used something like Reactive Extensions, Promises, LINQ, async/await, Optional, etc. enough, there's a great chance they will start wondering about the similar pattern behind them all, and then they can understand the abstraction very easily.
[+] mikorym|6 years ago|reply
I once googled "functional programming for category theorists" and obviously got instead to "category theory for programmers" (incidentally I use Milewski's book in this reverse way).

I still have a rudimentary understanding of functional programming (apart from the canonical "it's just an implementation of lambda calculus"). And I have to say that without exercise and training one grabs at wisps and mist. In mathematics it's also like this: you often have your favorite prototypical monad, adjunction, group, set, etc. (e.g.: the adjunction Set->Set^op by powerset is a strong contender.) And I view axiomatic systems, in essence, as a sort of list of computational rules (e.g.: transitive closure).

I haven't found some idiosyncratic project to code in Haskell yet though...

[+] zug_zug|6 years ago|reply
I guess I don't get all this "monad" stuff. This article talks about 3 types of monad. An optional, a list, and a future.

However an optional is really just a list constrained to size 0 or 1. And a future is often called "not truly a monad."

So I question the value of explaining this abstraction in great detail over so many articles when people struggle to come up with more than 1 concrete example of it (Lists), an example that engineers have already understood since our first month coding.

Maybe somebody can speak to this more.

[+] jpfed|6 years ago|reply
Other replies talk about how Promises can be implemented as monads. Aside from that, you can come up with your own, too.

One monad that I occasionally use is something I'll call "Tracked". For "return" (when we make a new instance of the monad) we store a pair (initialValue, initialValue). For "bind" (when we act on what's in the monad) we only ever touch the second value in the pair, returning (initialValue, transformedValue).

That way, you can know where this piece of data came from. I've gotten a lot of mileage out of Tracked<Result<T>>: when one of your Results is an exception, then you can check what piece of data ended up triggering that exception. Yes, you could do this without the Tracked monad, but doing it monadically means that most of your functions don't need to know or care about tracking the initial data; you can just Apply those simpler functions and the Tracked instance will do it for you.
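The Tracked idea described above can be sketched in Python (the names follow the comment, not any library): `unit` pairs a value with itself, and transformations only ever touch the second slot, so the origin survives the whole pipeline.

```python
class Tracked:
    """Carries (initial, current); transformations only touch `current`."""
    def __init__(self, initial, current):
        self.initial = initial
        self.current = current

    @staticmethod
    def unit(value):
        # "return": store the pair (initialValue, initialValue)
        return Tracked(value, value)

    def apply(self, f):
        # Lift a plain function: it never sees or alters the origin,
        # so most functions don't need to know tracking exists.
        return Tracked(self.initial, f(self.current))

t = Tracked.unit("  42  ").apply(str.strip).apply(int).apply(lambda n: n * 2)
print(repr(t.initial))  # '  42  '  (where the data came from)
print(t.current)        # 84        (what it became)
```

If a step raised an exception, `t.initial` would tell you exactly which input triggered it, which is the Tracked&lt;Result&lt;T&gt;&gt; use case described above.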

[+] thedufer|6 years ago|reply
> And a future is often called "not truly a monad."

I think you've misunderstood this observation. It is possible to write a future library that gives a monad (I use such a library regularly). But the most common ones do not, because it happens to be a decent design decision in dynamically-typed languages to not quite obey the monad equations.

The next interesting monad is probably Haskell's Either (or OCaml's Result, if you're into that). It is only a slight twist on the optional monad. Where optional's None case contains no data, Either's Left case can contain data.
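The Either twist can be sketched in Python (illustrative names, not any particular library): where the optional's empty case carries nothing, the Left side carries data, so a failed chain ends with a reason attached.

```python
class Either:
    def __init__(self, is_right, payload):
        self.is_right = is_right
        self.payload = payload

    @staticmethod
    def right(value):
        return Either(True, value)

    @staticmethod
    def left(error):
        # Unlike optional's empty case, the failure side carries data.
        return Either(False, error)

    def bind(self, f):
        # Chain on success; short-circuit (keeping the error) on failure.
        return f(self.payload) if self.is_right else self

def parse(s):
    return Either.right(int(s)) if s.isdigit() else Either.left(f"not a number: {s!r}")

def recip(n):
    return Either.right(1 / n) if n else Either.left("division by zero")

print(Either.right("8").bind(parse).bind(recip).payload)  # 0.125
print(Either.right("0").bind(parse).bind(recip).payload)  # division by zero
```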

After the collections (list and optional), either, and future monads, the difficulty to understand useful monads without first understanding the category of monads jumps considerably. If you're interested, the next ones to look at would be the reader, writer, state, and continuation monads. There's also the classic example from Haskell, the IO monad.

[+] pjc50|6 years ago|reply
You can go a very long way in programming without ever needing to explicitly use a monad for anything. The usual route into needing them is something like Haskell's IO: if you want as much as possible of your code to be stateless and immutable, how do you deal with IO, which inherently changes state external to the program?

If you've learned from the assembly end of programming upwards, it can be very hard to see the need for them at all.

[+] mrkeen|6 years ago|reply
> However an optional is really just a list constrained to size 0 or 1.

True, but you've only written about them as data types, which misses the monadic part. What makes them monads is that you can join any Optional<Optional<T>> into an Optional<T>. Likewise you can join a List<List<T>> into a List<T>. It is that principled 'smooshing down' of the data types that makes them monads.
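That "smooshing down" can be shown directly in Python. Representing an optional as a list of length 0 or 1 (echoing the observation upthread), join just collapses one layer of wrapping:

```python
def join_optional(oo):
    # Optional here is a list of length 0 or 1.
    # Optional[Optional[T]] -> Optional[T]
    return oo[0] if oo else []

def join_list(xss):
    # List[List[T]] -> List[T]
    return [x for xs in xss for x in xs]

print(join_optional([[7]]))          # [7]
print(join_optional([[]]))           # []
print(join_optional([]))             # []
print(join_list([[1, 2], [], [3]]))  # [1, 2, 3]
```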

[+] charmonium|6 years ago|reply
Promises, as exposited in the article, are an interface for bona fide monads. I think Promises are the best example of monads because they don't just wrap a type in memory. Maybe is kind of a subset of List (as you mention), and for both the contents are available in memory. Promises are not like Lists, and their contents can't just be pulled out, so you have to use `.then` (monadic bind) to do computation on them.
[+] rkido|6 years ago|reply
> However an optional is really just a list constrained to size 0 or 1.

What?

[+] jgodbout|6 years ago|reply
This really was excellent. It actually helps with the category theory def.