As far as monad tutorials go, this one seems quite good. I like the categorization of monads between "containers" and "recipes".
However, I personally think that monad tutorials tend to give people the wrong impression and leave them more confused than they were before, because they focus on the wrong thing.
A monad is not a complex concept, at all. IMO a more useful way to present the topic would be with one separate lesson for every common monad instance. Start with Maybe, then IO, then maybe State and List, and so on... because ultimately, every instance of a Monad works very differently. That's why the pattern is so useful in the first place, because it applies to so many places. (Note: this is a criticism of monad tutorials in general, not this one in particular, which seems to do a decent job on this front).
In my experience, people new to Haskell focus way too much on getting the "a-ha" moment for monads in general, when really you want a bunch of separate "a-ha" moments as you realize how each instance of a monad takes advantage of the pattern differently.
I also tend to think that monads are best demonstrated in Haskell rather than in other languages, if only because the notation is so much less clunky. That may just be me though. (EDIT: well, also because almost no other languages have typeclasses, so you have to approximate it with interfaces/traits/etc)
Also FYI: in part 2, the code examples have extra newlines in between every line, which makes it hard to read (I'm on firefox, if that matters).
I think you are right. I don't think I've fully mastered the concept yet, but what you are saying resonates with me.
I've been trying to grok monads for almost a decade. More and more I'm beginning to realize how "mundane" the concept is, and the usefulness really is just that specific pattern of mundanity.
Similar to pipelines on Linux, they are pretty basic, but their ubiquity and their use in composing unrelated things together is really, really useful, and you only get that if you use them in a variety of different ways.
Monads are extra cool because of the mathematical rigor behind them, and that's what I'm still trying to appreciate.
If all monad instances work differently, what is the value of the Monad interface? What kind of useful generic code can one write against the Monad interface?
I came to the conclusion that a List<X> is a good generic data structure. For instance, in cases where the cardinality is supposed to be 0..1, it is often less trouble than a nullable scalar or an Optional<X>, and there are cases where you're going to get a list anyway, such as when you are querying a relational database. (Often people write awkward code to turn that result into a nullable/Optional, and then more awkward code to turn it back into a list later.) Lists work almost exactly the same in most languages, whereas there is often something weird about null, there might not be an Optional, it might have something weird about it, etc.
For multi-language distributed processing, particularly if JSON is involved, it's worth a try.
To be fair, I write a lot of Java, where Optional is a train wreck in so many ways: not least, it could be null anyway, and you are allocating objects needlessly. I just see people get hypnotized by awkward code; they write bugs or scan right past them.
I think "monad" is overloaded, or at least there are varying depths of understanding that get conflated.
From a programming perspective, the definition of monads is clear.
bind :: m a -> (a -> m b) -> m b
return :: a -> m a
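For a concrete feel, here's a minimal sketch of those two operations for a Maybe-like monad in Python (not from any library; `unit`, `bind`, and `safe_div` are names I made up, and `None` stands in for Nothing):

```python
# Sketch of the Maybe monad: None plays the role of Nothing,
# any other value counts as Just. Purely illustrative names.

def unit(x):
    # return :: a -> m a
    return x

def bind(ma, f):
    # bind :: m a -> (a -> m b) -> m b
    return None if ma is None else f(ma)

def safe_div(x, y):
    # A partial function made total by returning None on failure.
    return None if y == 0 else x / y

# Chaining short-circuits at the first None:
ok = bind(unit(10), lambda x: safe_div(x, 2))
bad = bind(ok, lambda x: safe_div(x, 0))
print(ok)   # 5.0
print(bad)  # None
```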
You can start using monads immediately, and in a language like Haskell, things click fairly quickly, because monads are used everywhere and taken seriously in that language.
But the implications and consequences of this definition for monads aren't always obvious, like how they can be used to structure operations or whatever.
And then there's the category-theoretic business of monads, which you don't need to understand for most programming purposes. That might be a source of confusion. As you more or less say, people have vague, built-up expectations about monads. They expect something heavy and mysterious and doubt they've understood them according to the first definition. But the basic definition is straightforward.
Numbers are like this, too. You understand what a number is (a quantity). You can perform all sorts of operations and calculations using them without knowing number theory or the philosophy of mathematics.
Thanks for the feedback! I didn't expect my post to garner a lot of attention. I am totally ok with rewriting part 1, potentially to make it more concise, since the "wow, this post is super long, monads must be complex!" reaction is exactly the confusion I'd like to avoid.
I can reproduce the double-line issue in part 2; this was my mistake, and I missed it during my editing process. I'll delete part 2 while I make the corrections.
> In my experience, people new to Haskell focus way too much on getting the "a-ha" moment for monads in general, [...]
I feel this is true in general for mathematics (and therefore for languages whose design is heavily inspired by maths). A lot of people not familiar with university-level maths think that they need to understand what some mathematical concept "really means", but modern mathematics is a structural science. It looks at things that have entirely different semantics (symmetries, conservation laws, integers, matrices, Rubik's cubes, ...), notices that they all have the same structure (they're all groups), and can therefore say something about all of them simultaneously.
That doesn't mean that intuition is useless. Once you have thoroughly understood what makes a group a group or a vector space a vector space, it's totally normal to e.g. consider a space of functions and think of them in your head as if they were arrows in a Euclidean space (the analogy breaks down at some point, but it can carry you a certain way). That's also why it's fine to think of a monad as a container or as a burrito or whatever once you've actually understood the concept. But you can't really short-circuit this process in my opinion.
Yes, and I don't even see the value in generalizing to one Monad concept. It only makes things worse, as then one is tempted to share terminology between different kinds of monads. E.g. there's no reason Maybe's flatMap is called the same as List's flatMap, it might be more readable to call them differently, as some libraries do.
- a lot of functions that can ignore that hidden information
- some functions that can act (switch|add|mutate|signal|sequence) on that hidden information
People seem to think talking about flatMap somehow makes this intuitive, despite the tautological issue that flatMap only makes sense if you already know what's going on.
The way I think of it, monads are a solution to Callback Hell, where you've fallen in love with lambdas, but now you have a nightmarish mess of lambdas in lambdas and lambdas calling lambdas. The monadic functions allow you to create "for comprehensions" aka "do comprehensions" but really, they look like a classic for-each loop. They secretly call the monadic map/flatMap/filter functions.
for x in list
    doThings(x)
These comprehensions have a strange bonus feature, that you can do nested "loops" all at once, and even add "guards" (little if statements)
newlist =
    for x in list1
        y in list2 if y > 3
        z in list3
        doThings(x, y, z)
But again, the comprehension, when "de-sugared", is secretly calling the map/flatMap/filter functions of list1, list2, list3 to get our result. You as the author of a given monad can implement those functions however you want, and they're all 3 lambda based. But notice how the comprehension is flattening those lambdas out! Our callbacks in callbacks are much more readable like this.
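Since the loops above are pseudocode, here is a hedged Python approximation of that desugaring, with a hand-rolled `flat_map` standing in for the monadic bind:

```python
def flat_map(xs, f):
    # List's monadic bind: apply f to each element, flatten one level.
    return [y for x in xs for y in f(x)]

list1, list2, list3 = [1, 2], [3, 4, 5], [10]

# The nested comprehension desugars into nested flat_map calls,
# with the guard (y > 3) becoming a filter step on list2:
newlist = flat_map(list1, lambda x:
          flat_map([y for y in list2 if y > 3], lambda y:
          flat_map(list3, lambda z:
          [(x, y, z)])))
print(newlist)  # [(1, 4, 10), (1, 5, 10), (2, 4, 10), (2, 5, 10)]
```

Notice how the lambdas nest, yet the comprehension form presents them flat.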
Without comprehensions, you can still implement monadic functions in any old language (probably in C...?), and they're handy in their own right, but you don't get the flattening-of-callback-hell magic.
A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface." So you can define a List type that "implements" the monad interface, but this is not an inherent property of lists themselves. That's the sense in which a list "is a" monad: the OOP sense.
Haskell's List monad provides a model for nondeterminism. But that certainly isn't the only way List could satisfy the monad interface! It was a deliberate choice -- a good choice, possibly the best choice, but a choice nonetheless.
> A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface."
You are operating with notions bound to specific programming languages. Instead, consider that a list isn't a data structure: it's an abstraction that defines a multitude of position-ordered values. The multitude of position-ordered values called "list" is an instance of "monad", because it can be used as a valid input for a monadic computation defined consistently (in terms of the monad laws).
Hi, I completely agree. "A" list isn't inherently a monad, and that is where my metaphor starts to fall apart a bit (my post title furthers this issue).
I can clarify this earlier in part 1 or 2 instead of in to-be-written part 3.
Reminds me of an analogy for a monad I once heard. Not sure if it is correct, because I lack the mathematical understanding to verify it.
Anyway, a nested To-Do list is (allegedly) a common form of a monad. Say I am trying to clean my whole house. Well, I could have an item for a task like cleaning the kitchen that has each task I need to do in the kitchen in order for the kitchen to be cleaned. I can do the same for the living room, garage, etc..
However, that is mainly for organization purposes. While I may write the tasks in a nested manner, I could very well write each item as just a long flat list of To-Do tasks, and in reality, all the tasks are effectively completed as if they were one large flat list.
Is that kind of what you mean by 'flatMappable'? Nested To-Do lists being flattened and mapped to one large list? As in, a To-Do list of To-Do lists is just a single, larger To-Do list?
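For what it's worth, that flattening intuition corresponds to the list monad's `join` operation. A quick Python sketch (the example tasks are invented):

```python
def join(xss):
    # join :: m (m a) -> m a; for lists, flatten one level of nesting.
    return [x for xs in xss for x in xs]

nested_todo = [
    ["wipe counters", "mop kitchen floor"],  # clean the kitchen
    ["vacuum rug", "dust shelves"],          # clean the living room
]
flat_todo = join(nested_todo)
print(flat_todo)
# ['wipe counters', 'mop kitchen floor', 'vacuum rug', 'dust shelves']
```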
So I come at this from a math background but I’ve always found these explanations to be overly complex. In the parlance of C++, I think of a monad as a template class T with the following properties:
1. For any class X, there is a canonical method
F: X -> T<X>
2. For any class X, there is a canonical method
G: T<T<X>> -> T<X>.
3. For classes X and Y, and any method
f: X -> Y,
there is a corresponding method
“T<f>”: T<X> -> T<Y>.
—————-
Here “any type” means any type that is compatible with the template.
And then there’s some additional rules which make all these methods compatible, based on the different ways of stacking nested T’s and the “canonical” maps you get. Admittedly there is some confusing accounting here, but I also think most natural ways of constructing the above three requirements are going to satisfy them anyway. For List and Maybe it’s fairly obvious what the above methods are.
I dunno, maybe I have it wrong and someone can correct my understanding.
This is like explaining chess by simply stating the rules. Sure, explaining the rules of chess is important, but knowing only the rules provides nothing more than a superficial understanding of the topic.
Yeah, that's correct. You also often see it as having that for any method `X -> T<Y>` there's a corresponding method `T<X> -> T<Y>`. Or you can have that for any two arrows `X -> T<Y>` and `Y -> T<Z>` there's a composed arrow `X -> T<Z>`. All are equivalent.
Another tutorial which makes monads about 100x more impossible to understand for me by relating them to something else and describing all the weird ways that they are NOT that thing.
IMO if you already have it, this will be a lovely comparison full of insight, but if you haven't then it's full of confusing statements.
IMO what they are is utterly unimportant, except to mathematicians, and what you can do with them is more to the point.
The fact that explanations are so often in Haskell just makes them more unintelligible because you really need to know what problem they solve.
The reason that the explanations are all in Haskell is that Haskell is the only language that is reasonably popular that implements monad and calls it a monad, and 90% of the people looking up "What is a monad" are trying to learn Haskell.
Thanks for the feedback! I'll likely be editing part 1 to include the feedback so far from the commenters as well. If there's a specific statement or analogy that felt especially confusing, please point it out and I'll clarify it in the post.
* "List is a monad" - True. But I think "`List` is a monad" might be clearer.
* "A list is an algebra for the `List` monad." - False!
What's correct is the following:
* "An algebra for the `List` Monad is precisely a monoid."
Sketch of the construction:
(an algebra for the list monad is a monoid): Recall that an algebra is a set/type `A` together with a function `mult: List A -> A`, subject to some axioms. Think of such a function `mult: List A -> A` as the function that assigns to each list with elements in `A` the product over all those elements. The aforementioned axioms boil down to: (1) `mult([])` is a neutral element and (2) `mult` is an associative binary operation when restricted to two-element lists.
(a monoid is an algebra for the list monad): Same idea. Given a monoid, we define a function `mult: List A -> A` by assigning to a list of elements of `A` the product of those elements, and to the empty list the neutral element. Then we can use associativity and the properties of the neutral element to show that this function constitutes an algebra for the list monad.
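A small Python sketch of this correspondence, using string concatenation as the monoid (all names here are illustrative):

```python
def mult(xs):
    # An algebra for the list monad on strings:
    # collapse a whole list to one string by concatenation.
    out = ""
    for x in xs:
        out += x
    return out

# (1) mult([]) is the neutral element:
print(mult([]) == "")               # True
# (2) restricted to two-element lists, mult is associative:
lhs = mult([mult(["a", "b"]), "c"])
rhs = mult(["a", mult(["b", "c"])])
print(lhs == rhs)                   # True
print(mult(["foo", "bar", "baz"]))  # foobarbaz
```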
Thanks for the feedback! I can definitely rename the post soon as a first step, although this may require rewriting a chunk of the article to more accurately reflect the fact that List is a monad, and not "a" list.
I could make this distinction in part 3 (not written yet) although I want to balance not misleading readers, but not overcomplicating it too early on.
IDK if it adds anything to the article, but `map` is a property of Functors, and every Monad is a Functor. Well, every Monad is an Applicative Functor, and every Applicative Functor is a Functor.
All a way of saying that, yep, you always have `map` when you have a Monad, but you don't need a Monad to have `map`.
If you want an example we can compare a regular list and a Ziplist. A regular list's Applicative instance does a cross product, while a Ziplist's applicative instance does a dot product.
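A rough Python analogue of the two Applicative behaviours (the `ap_*` names are made up; Python has no Applicative, so this is only a sketch):

```python
def ap_list(fs, xs):
    # Regular list Applicative: every function meets every value
    # (the "cross product" behaviour).
    return [f(x) for f in fs for x in xs]

def ap_ziplist(fs, xs):
    # ZipList Applicative: functions and values pair up positionally
    # (the "dot product" behaviour).
    return [f(x) for f, x in zip(fs, xs)]

fs = [lambda x: x + 1, lambda x: x * 10]
xs = [1, 2]
print(ap_list(fs, xs))     # [2, 3, 10, 20]
print(ap_ziplist(fs, xs))  # [2, 20]
```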
For quirky historical reasons, in Haskell `fmap` is the function implemented for every Functor instance. This is because the name `map` was already taken by lists. Weird, I know.
Misunderstanding of Monads is such an interesting phenomenon. Kind of similar to grasping 4D geometry or understanding the difference between a class and an object in OOP.
List can be an instance of a monad, i.e. a monadic type.
I think the trick to understanding monads is to see what benefits the monad interface gives to the types that implement it.
A monad is not a container! It's a way of composing functions that have an effect. You tell it how to inject a value into that effect (unit) and how to compose two functions that have that effect, and that's it: programmable semicolons.
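To make "programmable semicolons" concrete, here is a hedged Python sketch of a Writer-style monad where the effect is an accumulated log (all names invented for illustration):

```python
def unit(x):
    # Inject a plain value into the effect: no log yet.
    return (x, [])

def bind(mx, f):
    # Compose two log-producing steps, concatenating their logs.
    # This is the "programmable semicolon" between the steps.
    x, log1 = mx
    y, log2 = f(x)
    return (y, log1 + log2)

def double(x):
    return (x * 2, [f"doubled {x}"])

def increment(x):
    return (x + 1, [f"incremented {x}"])

value, log = bind(bind(unit(5), double), increment)
print(value)  # 11
print(log)    # ['doubled 5', 'incremented 10']
```

Neither `double` nor `increment` knows anything about logging order; the composition rule handles the bookkeeping.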
Thanks for the feedback, I totally agree that monads are not containers. From an OOP perspective, they have some properties that make them, in some sense, sorta like containers; e.g., some of them, like the Maybe monad, contain a value. I still agree that they are not simply containers. I can clarify this in a revision to part 1 soon.
Realizing that lists are monads is what made monads "click" for me.
When I was first learning Haskell a million years ago, I was completely confused by the concept of a monad; I could, after enough fighting with the compiler, usually get something working, but it was a stochastic guess-and-check process trying to figure out what `IO` actually means. Even the `Maybe` was confusing to me, because I couldn't really figure out how the hell you relate something like "checking for null" with "writing to the disk".
I can't remember where I saw it, probably on the Haskell wiki somewhere, but when they pointed out that List is a monad, and after seeing an example of how it worked, I suddenly got it: in a hand-wavey way, a monad is basically just a value with a wrapper context [1], and from a practical perspective that's all it is. In the case of a List, its wrapper context is that there might be 0 or many of those things in there; in the case of a Maybe, its wrapper context is that it might exist or it might not; in the case of IO, its wrapper context is that it's interfacing with the outside world. And once you abstract away the entire idea of context, you suddenly open up an entire world of reusability.
This is a good tutorial, I will probably be linking it to people if they ever make the mistake of asking about monads.
[1] I don't need a lecture on the minutia of this, I know that there's a lot more to it in the theory world, I went to graduate school specifically to study functional language verification. I'm keeping it simple.
While I can understand the desire to draw a metaphor, there are better approaches than saying, "A List Is a Monad".
The statement as-is breaks pretty much immediately because, while there is a canonical list monad, there isn't just one list monad; there are in fact several[1].
There are several more correct ways of phrasing the idea among:
"List can be given a monad instance"
"List forms a monad with pure and bind as defined"
"List is the underlying functor of a monad"
The point is that any old list implementation is likely not a monad without the supporting structure.
Will any of these help you learn what a monad is? Likely not. Monadology is a Mary's Room[2] problem; there is a quale, a subjective sensation, when one understands monads, having experienced them first hand. Consequently, monad tutorials are the best case against physicalism[3] yet devised.
Although there can be other monads (and stuff other than monads, such as ZipList) that can be made with lists, I think that such a monad would not necessarily be a "list monad". (Your link [1] has several examples of this.) You are right that it does not mean that "a list is a monad" and that your other phrasing is better, but it does not mean that "there isn't a list monad".
In most programming languages the compiler authors go to great lengths to give intuitive semantics to having one statement follow another, followed by another. This is an organizing principle for thinking about code and for having a program exist with well-defined semantics.
But it's not a very robust one: it's never true of fast programs on realistic hardware, for example (not for a long time now). And all the rule bending (`-fstrict-aliasing`, a bunch of stuff) exists in this tension between the grade-school natural-language paradigm and the reality of computers. I say grade school not to be pejorative, but rather because it is roughly the boundary where written natural languages begin to have interesting tensions around past and future and simultaneous, changing and not changing.
Functors and applicatives and monads and other type classes like these are the source of endless analogies because there isn't an accepted, broadly-understood terminology for this "well its roughly what would happen if you had a piece of paper and wrote things on it at every statement boundary and scratched off the old ones" (though Turing and von Neumann did formalize this in useful ways, they just don't generalize well to realistic computers anymore).
Monads are the mathematical object that is forced on you if you want a rigorous way to describe the semantics of program execution in the vicinity of this "common sense" notion. That's really what everyone is dancing around: your program is only well defined with either:
- a big rulebook full of exceptions and edge cases
- a compositional rule strict enough to give some useful predictability but lax enough to admit most useful programs.
It is this rigor/laxity tension, as concerns text on a page and gates on a semiconductor, that gives monads a privileged place in the towers of categories. When I worked on Sigma we were among the earlier adopters of ApplicativeDo, for example, because we wanted a slightly different rigor/laxity tradeoff for performance reasons.
Monads are what happens when you do shift the giant pile of "back of the book" compiler details that describe program execution semantics into a much simpler set of rules, but at the cost of increasing the barrier to entry because you need to know the rules before you can print "hello world".
I like to think of a monad as a design pattern for constructing new objects where you pass in a sequence of callback functions, one at a time. A monad’s ‘bind’ operation adds another callback function to the end of a sequence.
The monad interface only requires ways to construct object using callbacks. The ‘bind’ operation takes a callback as an argument, but says nothing about when it’s actually called; it could be immediately, deferred, multiple times, or even never. It’s up to the implementation of the monad, as well as the language, if it’s a lazy language.
This is basically a framework. Like with other frameworks, the principle is “don’t call us; we’ll call you.” Arbitrary computation can happen between callbacks. The framework can do whatever control flow it wants, and this is what often makes frameworks opaque. Hiding control flow is what frameworks do, for better or worse.
So far, none of this is specific to a Monad. The Monad part comes from the type signature of the callback function passed in to flatmap(), which allows ‘bind’ operations to be nested.
Once you know what kind of thing you’re dealing with (frameworks) then you can go into why some frameworks qualify as a monad.
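A toy Python sketch of that "don't call us, we'll call you" behaviour, where `bind` only records callbacks and nothing runs until the framework says so (the `Deferred` class and all its names are invented):

```python
class Deferred:
    """A toy monad that records callbacks instead of running them."""

    def __init__(self, thunk):
        self.thunk = thunk  # () -> value, run only on demand

    @staticmethod
    def unit(x):
        return Deferred(lambda: x)

    def bind(self, f):
        # Nothing executes here; we just build a bigger recipe.
        return Deferred(lambda: f(self.thunk()).thunk())

    def run(self):
        return self.thunk()

steps = []

def step1(x):
    steps.append("step1")
    return Deferred.unit(x + 1)

def step2(x):
    steps.append("step2")
    return Deferred.unit(x * 10)

program = Deferred.unit(2).bind(step1).bind(step2)
print(steps)          # []  (nothing has run yet)
print(program.run())  # 30
print(steps)          # ['step1', 'step2']  (callbacks ran at run() time)
```

The control flow between the callbacks belongs entirely to `Deferred`; it could just as well run them later, twice, or not at all.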
I used to struggle with understanding the "recipe" metaphor for monads when it comes to lists. But a list (or, really, any collection) as a monad can be thought of as the "discrete nondeterminism monad".
Meaning that every collection is a set of possible inputs to the computation that is provided as the argument to a `flatMap` operation. Each `flatMap`, by definition, returns a new collection of possible outputs for each of the inputs, and each of those collections gets concatenated. Every item in the final output collection represents the result of following some path through the computations, selecting a single item at each step. Importantly, the type of the output of each `flatMap` operation can differ from the input.
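A minimal Python sketch of that reading, with `flat_map` as the bind for lists (names are mine):

```python
def flat_map(xs, f):
    # Bind for lists: run f on every possible input and concatenate,
    # so the result enumerates every branch of the computation.
    return [y for x in xs for y in f(x)]

# Two nondeterministic choices; note the output type changes at each
# step (int -> str -> tuple), which is allowed.
paths = flat_map([1, 2], lambda a:
        flat_map(["x", "y"], lambda b:
        [(a, b)]))
print(paths)  # [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')]
```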
You can imagine extending this by assigning probabilities, or making the domain continuous (I think...). These extensions would still be monads, just without being simple collections.
It's kind of like how multiplication over whole numbers is repeated addition, but that metaphor becomes less useful for other domains of numbers.
I still think the best way to conceptualize a monad is a "wrapper" around some data that contains additional state. The critical part of this definition is the second half, which sometimes gets skipped over for simplicity but is really what ties together all the disparate uses of monads.
I will die with the crushing embarrassment of not being able to describe what a monad is, despite being a so-called (and apparently fake) programmer since the age of 10.
I've never understood what a monad is or why I should care. I still don't care to learn category theory or whatever, but I was wondering: is there a single image/diagram which conveys the idea behind a monad? I recently stumbled on such an image which made the concept of simultaneity in special relativity click, and I'm intrigued by the idea of other things I could never understand suddenly becoming crystal clear with a single image.
The obsession with trying to explain a monad ultimately stems from conflicting explanations and the inability to differentiate between a mathematical monad and monads implemented in software.
Monads in software are just a standard API for any given type. That's it. There's no magic here. Just implement the standard and you have a monad.
It grinds my gears seeing monad tutorial after tutorial using the wrong metaphors or misleading explanations.
We define a small OOP framework for monads, plus macros which then can succinctly define different kinds of monads, including generating a comprehension macro for each one.
The article sort of danced around what I think is the most natural way List is a "recipe": it's the bounded nondeterminism monad (a `List<T>` is a nondeterministic result; one could implement `List<T> -> T` by selecting an answer uniformly at random from the finite multiset).
Seriously, I've read things about lists and nondeterminism a few times in this thread, and I can't help but wonder if "you guys" (functional programming nerds, maybe?) use the word "nondeterministic" differently than the rest of the world?
If not, I'd love a good explanation about what makes lists non-deterministic, and why we would want that, and why they seem to be perfectly deterministic in imperative programming languages.
Let's start with function composition. We know that for any two types A and B we can consider functions from A to B, written A -> B. We can also compose them, the heart of sequentiality. If f: A -> B and g: B -> C then we might write (f;g) or (g . f) as two different, equivalent syntaxes for doing one thing and then the other, f and then g.
I'll posit this is an extremely fundamental idea of "sequence". Sure something like [a, b, c] is also a sequence, but (f;g) really shows us the idea of piping, of one operation following the first. This is because of how composition is only defined for things with compatible input and output types. It's a little implicit promise that we're feeding the output of f into g, not just putting them side-by-side on the shelf to admire.
Anyway, we characterize composition in two ways. First, we want to be clear that composition only cares about the order that the pipes are plugged together, not how you assemble them. Specifically, for three functions, f: A->B, g: B->C, h: C->D, (f;g);h = f;(g;h). The parentheses don't matter.
Second, we know that for any type A there's the "do nothing" identity function id_A: A->A. This doesn't have to exist, but it does and it's useful. It helps us characterize composition again by saying that f;id = id;f = f. If you're playing along by metaphor to lists, id is the empty list.
Together, composition and identity and the rules of associativity (parentheses don't matter) and how we can omit identity really serve to show what the idea of "sequences of pipes" mean. This is a super popular structure (technically, a category) and whenever you see it you can get a large intuition that some kind of sequencing might be happening.
Now, let's consider a slightly different sort of function. Given any two types, what about the functions A -> F B for some fixed type constructor F? F here exists to somehow "modulate" B, annotate it with additional meaning. Having a value of F B is kind of like having a value of type B, but maybe seen through some kind of lens.
Presumably, we care about that particular sort of lens and you can go look up dozens of useful choices of F later, but for now we can just focus on how functions A -> F B sort of still look like little machines that we might want to pipe together. Maybe we'd like there to be composition and identity here as well.
It should be obvious that we can't use identity or composition from normal function spaces. They don't type-check (id_A: A -> A, not A -> F A) and they don't semantically make sense (we don't offhand have a way to get Bs out of an F B, which would be the obvious way to "pipe" the result onward in composition).
But let's say that for some type constructors F, they did make sense. We'd have for any type A a function pure_A: A -> F A as well as a kind of composition such that f: A -> F B and g: B -> F C become f >=> g : A -> F C. These operations might only exist for some kinds of F, but whenever they do exist we'd again capture this very primal form of sequencing that we had with functions above.
We'd again capture the idea of little A -> F B machines which can be plugged into one another as long as their input and output types align and built into larger and larger sequences of piped machines. It's a very pleasant kind of structure, easy to work with.
And those F which support these operations (and follow the associativity and identity rules) are exactly the things we call monads. They're type constructors which allow for sequential piping very similar to how we can compose normal functions.
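As a hedged sketch of those operations in Python, with "a value or None" playing the role of F and `kleisli` standing in for >=> (all names invented):

```python
def pure(x):
    # pure_A : A -> F A  (here "F A" is just "a value or None")
    return x

def kleisli(f, g):
    # (>=>): compose f: A -> F B with g: B -> F C into A -> F C,
    # short-circuiting when the first machine yields None.
    def composed(a):
        b = f(a)
        return None if b is None else g(b)
    return composed

def half(x):
    # A -> F B: succeeds only on even inputs.
    return x // 2 if x % 2 == 0 else None

pipeline = kleisli(half, kleisli(half, half))
print(pipeline(8))             # 1
print(pipeline(6))             # None (3 is odd, so the next stage fails)
print(kleisli(pure, half)(8))  # 4: pure acts as the identity for >=>
```

The associativity rule means `kleisli(kleisli(half, half), half)` pipes the same way as `kleisli(half, kleisli(half, half))`, just as with plain function composition.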
Thank you so very much. This is the first time monads have made sense to me, and now it's clear why. People who try to explain them usually end up adding all kinds of Haskell minutiae (the top post is another example), rather than actually explaining the concept and why we need it. Your comment is the first time I actually understand what it is, and why it might be useful.
The amount of people who tie themselves into knots to understand this pointless concept is very funny to me. I am 16 years into a successful software engineering career without learning what a monad is, and it never held me back. Turns out I can use lists and optional types and all that jazz without it.
I mean really. Look at posts like this[0]. What does this give you? Nothing, in practical reality. Nothing.
> I am 16 years into a successful software engineering career without learning what a monad is, and it never held me back.
How would you know? That's the classic Blub Paradox.
Being able to write a custom monad and then leverage the vast array of libraries that already exist has helped me deliver functionality to end users quicker, more maintainably, and with lower defect rates. They don't let you do anything that you couldn't do by writing it out longhand. But just like using generic container libraries instead of writing a specific container for every type you want to handle collections of, they're extremely helpful.
In Haskell, List is a Monad as well as a MonadPlus. Since List's Monad instance is used probably 100x more than its MonadPlus instance, I think it makes sense to focus on that.
join has the type `m (m a) -> m a`. That's the thing that really shows off the monoidal structure. People normally implement monads in terms of bind, but you can easily define join in terms of bind for any Monad: `join ma = ma >>= id`. So really, as long as you have a lawful instance of Monad written with bind the existence of join is your proof.
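The same derivation sketched in Python for lists (where bind is flatMap; names are illustrative):

```python
def bind(ma, f):
    # List bind (>>=): apply f to each element, concatenate results.
    return [b for a in ma for b in f(a)]

def join(mma):
    # join ma = ma >>= id : strip one level of nesting using bind.
    return bind(mma, lambda x: x)

print(join([[1, 2], [3], []]))  # [1, 2, 3]
```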
I do not even know what a monoid or an endofunctor is. While I enjoy math, despite not being the best at it, I am confident I never made it this far in my studies. I looked at the Wikipedia definitions, and I am even more confused now.
pdhborges|8 months ago
Related: https://buttondown.com/j2kun/archive/weak-and-strong-algebra...
PaulHoule|8 months ago
For multi-language distributed processing, particularly if JSON is involved, it's worth a try.
To be fair, I write a lot of Java, where Optional is a train wreck in so many ways: not least that it could be null anyway and that you're allocating objects needlessly. I also see people get hypnotized by awkward code, writing bugs or scanning right past them.
lo_zamoyski|8 months ago
From a programming perspective, the definition of monads is clear.
You can start using monads immediately, and in a language like Haskell, things click fairly quickly, because monads are used everywhere and taken seriously in that language. But the implications and consequences of this definition for monads aren't always obvious, like how they can be used to structure operations or whatever.
And then there's the category-theoretic business of monads, which you don't need to understand for most programming purposes. That might be a source of confusion. As you more or less say, people have vague, built-up expectations about monads. They expect something heavy and mysterious and doubt they've understood them according to the first definition. But the basic definition is straightforward.
Numbers are like this, too. You understand what a number is (a quantity). You can perform all sorts of operations and calculations using them without knowing number theory or the philosophy of mathematics.
polygot|8 months ago
I can reproduce the double line issue in part 2, this was my mistake and I missed it as part of my editing process. I'll delete part 2 while I make the corrections.
Tainnor|8 months ago
I feel this is true in general for mathematics (and therefore for languages whose design is heavily inspired by maths). A lot of people not familiar with university-level maths think that they need to understand what some mathematical concept "really means", but modern mathematics is a structural science: it looks at things that have entirely different semantics (symmetries, conservation laws, integers, matrices, Rubik's cubes, ...) and notices that they all have the same structure (they're all groups), and therefore we can say something about all of them simultaneously.
That doesn't mean that intuition is useless. Once you have thoroughly understood what makes a group a group or a vector space a vector space, it's totally normal to e.g. consider a space of functions and think of them in your head as if they were arrows in a Euclidean space (the analogy breaks down at some point, but it can carry you a certain way). That's also why it's fine to think of a monad as a container or as a burrito or whatever once you've actually understood the concept. But you can't really short-circuit this process in my opinion.
kqr|8 months ago
Right on. This is the "What Are Monads" fallacy: https://entropicthoughts.com/the-what-are-monads-fallacy
unknown|8 months ago
[deleted]
deepsun|8 months ago
Bjartr|8 months ago
billmcneale|8 months ago
How useful, really? Monads don't even universally compose, which is what most people sell the concept for.
monkeyelite|8 months ago
magarnicle|8 months ago
bdangubic|8 months ago
Avshalom|8 months ago
- a data type with some hidden information
- a lot of functions that can ignore that hidden information
- some functions that can act (switch|add|mutate|signal|sequence) on that hidden information
people seem to think talking about flatMap somehow makes this intuitive despite the tautological issue of flatMap only making sense if you already know what's going on.
kerblang|8 months ago
Without comprehensions, you can still implement monadic functions in any old language (probably in C...?), and they're handy in their own right, but you don't get the flattening-of-callback-hell magic.
stronglikedan|8 months ago
nemo1618|8 months ago
A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface." So you can define a List type that "implements" the monad interface, but this is not an inherent property of lists themselves. That's the sense in which a list "is a" monad: the OOP sense.
Haskell's List monad provides a model for nondeterminism. But that certainly isn't the only way List could satisfy the monad interface! It was a deliberate choice -- a good choice, possibly the best choice, but a choice nonetheless.
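As a sketch of what the standard (nondeterminism) choice buys you (`coinFlips` is a made-up example):

```haskell
-- With the standard instance, bind applies the function to every
-- element and concatenates the results, so do-notation reads as
-- "try all possible outcomes":
coinFlips :: Int -> [[Bool]]
coinFlips 0 = [[]]
coinFlips n = do
  flip' <- [False, True]       -- a nondeterministic choice
  rest  <- coinFlips (n - 1)   -- all outcomes for the remaining flips
  pure (flip' : rest)
```

`coinFlips 3` enumerates all 8 sequences of three flips.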
instig007|8 months ago
You're operating with notions bound to specific programming languages. Instead, consider that a list isn't a data structure; it's an abstraction that defines a multitude of position-ordered values. That multitude of position-ordered values called "list" is then an instance of "monad", because it can be used as valid input for a monadic computation defined consistently (in terms of the monad laws).
polygot|8 months ago
I can clarify this earlier, in part 1 or 2, instead of in the to-be-written part 3.
Jaxan|8 months ago
blakehawkins|8 months ago
gr4vityWall|8 months ago
Context: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Usually articles that describe them in a very math-y way go over my head. But the definition above was immediately clear (I saw it on HN).
I think this article is a bit more approachable than others I've read, but it still gets very confusing near the end.
hirvi74|8 months ago
Anyway, a nested To-Do list is (allegedly) a common form of a monad. Say I am trying to clean my whole house. Well, I could have an item for a task like cleaning the kitchen that has each task I need to do in the kitchen in order for the kitchen to be cleaned. I can do the same for the living room, garage, etc..
However, that is mainly for organization purposes. While I may write the tasks in a nested manner, I could very well write each item as just a long flat list of To-Do tasks, and in reality, all the tasks are effectively completed as if they were one large flat list.
Is that kind of what you mean by 'flatMappable'? Nested To-Do lists being flattened and mapped to one large list? As in, a To-Do list of To-Do lists is just a single, larger To-Do list?
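Here's how I'd picture it in Haskell (task names made up for illustration):

```haskell
-- A nested to-do list: one inner list per room.
houseTasks :: [[String]]
houseTasks =
  [ ["wipe counters", "mop floor"]   -- kitchen
  , ["vacuum rug", "dust shelves"]   -- living room
  ]

-- Flattening it is the List monad's join, which is just concat:
flatTasks :: [String]
flatTasks = concat houseTasks

-- flatMap (>>= in Haskell) does the map and the flatten in one go:
sweepEverything :: [String]
sweepEverything = ["kitchen", "living room"] >>= \room -> [room ++ ": sweep"]
```

So yes: a to-do list of to-do lists flattens to one bigger to-do list, and that is the "flatMappable" structure.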
aadhavans|8 months ago
kevinventullo|8 months ago
1. For any class X, there is a canonical method `unit: X -> T<X>`.
2. For any class X, there is a canonical method `join: T<T<X>> -> T<X>`.
3. For classes X and Y, and any method `f: X -> Y`, there is a corresponding method `map: T<X> -> T<Y>`.
Here "any type" means any type that is compatible with the template.
And then there are some additional rules which make all these methods compatible, based on the different ways of stacking nested T's and the "canonical" maps you get. Admittedly there is some confusing accounting here, but I also think most natural ways of constructing the above three requirements are going to satisfy them anyway. For List and Maybe it's fairly obvious what the above methods are.
I dunno, maybe I have it wrong and someone can correct my understanding.
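In Haskell, I believe these three pieces are `pure`, `join`, and `fmap`; for `T = Maybe`, a hand-rolled sketch looks like:

```haskell
-- 1. the canonical X -> T<X>:
unitMaybe :: a -> Maybe a
unitMaybe = Just

-- 2. the canonical T<T<X>> -> T<X>, collapsing two layers into one:
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just inner) = inner
joinMaybe Nothing      = Nothing

-- 3. lifting any f :: X -> Y to T<X> -> T<Y>:
mapMaybe' :: (a -> b) -> Maybe a -> Maybe b
mapMaybe' f (Just x) = Just (f x)
mapMaybe' _ Nothing  = Nothing
```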
Kranar|8 months ago
tel|8 months ago
gpderetta|8 months ago
4ad|8 months ago
I would agree that most of these articles about monads are bad. Just study the definition, then study what you can do with monads, it's not that hard.
t43562|8 months ago
IMO if you already have it, this will be a lovely comparison full of insight, but if you haven't then it's full of confusing statements.
IMO what they are is utterly unimportant, except to mathematicians, and what you can do with them is more to the point.
The fact that explanations are so often in Haskell just makes them more unintelligible because you really need to know what problem they solve.
empath75|8 months ago
polygot|8 months ago
robinhouston|8 months ago
A list is not a monad. List is a monad. A list is an algebra for the List monad.
Garlef|8 months ago
In detail:
* "A list is not a monad" - True!
* "List is a monad" - True. But I think "`List` is a monad" might be clearer.
* "A list is an algebra for the `List` monad." - False!
What's correct is the following:
* "An algebra for the `List` Monad is precisely a monoid."
Sketch of the construction:
(an algebra for the list monad is a monoid): Recall that an algebra is a set/type `A` together with a function `mult: List A -> A` satisfying some axioms. Think of such a function `mult: List A -> A` as the function that assigns to each list with elements in `A` the product over all those elements. The aforementioned axioms boil down to: (1) `mult([])` is a neutral element and (2) `mult` is an associative binary operation when restricted to two-element lists.
(a monoid is an algebra for the list monad): Same idea. Given a monoid, we define a function `mult: List A -> A` by assigning to a list of elements of `A` the product of these elements, and to the empty list we assign the neutral element. Then we can use associativity and the properties of the neutral element to show that this function constitutes an algebra for the list monad.
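In Haskell terms, this structure map is exactly `mconcat` (a sketch; `algebraMap` is my name for it):

```haskell
import Data.Monoid (Sum (..))

-- The algebra map List A -> A for a monoid A:
-- mult([]) is mempty, and on two-element lists it is (<>).
algebraMap :: Monoid a => [a] -> a
algebraMap = mconcat
```

e.g. `algebraMap ["ab", "cd"]` is `"abcd"`, and over `Sum` it adds the elements up.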
polygot|8 months ago
I could make this distinction in part 3 (not written yet) although I want to balance not misleading readers, but not overcomplicating it too early on.
leoh|8 months ago
1-more|8 months ago
All a way of saying that, yep, you always have `map` when you have a Monad, but you don't need a Monad to have `map`.
If you want an example we can compare a regular list and a Ziplist. A regular list's Applicative instance does a cross product, while a Ziplist's applicative instance does a dot product.
There's no great way to write a Monad instance for ZipList. But it's an Applicative Functor, and thus also a Functor, so you can still map over it. https://www.mail-archive.com/haskell-cafe@haskell.org/msg572... For quirky reasons in Haskell, `fmap` is the function implemented for every Functor instance. This is because the name `map` was already taken by lists. Weird, I know.
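Concretely (a small sketch):

```haskell
import Control.Applicative (ZipList (..))

-- Regular lists combine every element with every element:
cross :: [Int]
cross = (+) <$> [1, 2] <*> [10, 20]   -- cross product: [11,21,12,22]

-- ZipList combines elements positionally:
zipped :: [Int]
zipped = getZipList ((+) <$> ZipList [1, 2] <*> ZipList [10, 20])
-- dot-product style: [11,22]
```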
ivanjermakov|8 months ago
List can be an instance of Monad, i.e. a monadic type.
I think the trick to understanding monads is to see what benefits the monad interface gives to the types that implement it.
drumnerd|8 months ago
polygot|8 months ago
mrkeen|8 months ago
tombert|8 months ago
When I was first learning Haskell a million years ago, I was completely confused by the concept of a monad; I could, after enough fighting with the compiler, usually get something working, but it was a stochastic guess-and-check process trying to figure out what `IO` actually means. Even the `Maybe` was confusing to me, because I couldn't really figure out how the hell you relate something like "checking for null" with "writing to the disk".
I can't remember where I saw it, probably on the Haskell wiki somewhere, but when they pointed out the List is a monad, and after seeing an example of how it worked, I suddenly got it: in a hand-wavey way, a monad is basically just a value with a wrapper context [1], and from a practical perspective that's all it is. In the case of a List its wrapper context is that there might be 0 or many of those things in there, in the case of a Maybe its wrapper context is that it might exist or it might not, in the case of IO its wrapper context is that it's interfacing with the outside world, and once you abstract away the entire idea of context, you can suddenly open up an entire world of reusability.
This is a good tutorial, I will probably be linking it to people if they ever make the mistake of asking about monads.
[1] I don't need a lecture on the minutia of this, I know that there's a lot more to it in the theory world, I went to graduate school specifically to study functional language verification. I'm keeping it simple.
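A quick sketch of the "same pipeline, different wrapper" idea (`halve` is a made-up helper):

```haskell
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)   -- the "might not exist" context
  | otherwise = Nothing

-- The same >>= pipeline shape works across wrappers:
maybeResult :: Maybe Int
maybeResult = Just 40 >>= halve >>= halve   -- Just 10

listResult :: [Int]
listResult = [1, 2] >>= \x -> [x, x * 10]   -- [1,10,2,20]
```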
kelseyfrog|8 months ago
The statement as-is breaks pretty much immediately because, while there is a canonical list monad, there isn't just one list monad; there are in fact several[1].
There are several more correct ways of phrasing the idea, among them:
"List can be given a monad instance"
"List forms a monad with pure and bind as defined"
"List is the underlying functor of a monad"
The point is that picking any old list implementation likely doesn't give you a monad without the supporting structure.
Will any of these help you learn what a monad is? Likely not. Monadology is a Mary's Room[2] problem; there is a quale, a subjective sensation, when one understands monads having experienced them first-hand. Consequently, monad tutorials are the best case against physicalism[3] yet devised.
1. https://hackage.haskell.org/package/exotic-list-monads-1.1.0...
2. https://en.wikipedia.org/wiki/Knowledge_argument
3. https://en.wikipedia.org/wiki/Physicalism
zzo38computer|8 months ago
benreesman|8 months ago
But it's not a very robust one: it's never true of fast programs on realistic hardware, for example (not for a long time now). And all the rule bending (`-fstrict-aliasing`, a bunch of stuff) exists in this tension between the grade-school natural-language paradigm and the reality of computers. I say grade school not to be pejorative, but rather because it is roughly the boundary where written natural languages begin to have interesting tensions around past and future and simultaneous, changing and not changing.
Functors and applicatives and monads and other type classes like these are the source of endless analogies because there isn't an accepted, broadly-understood terminology for this "well, it's roughly what would happen if you had a piece of paper and wrote things on it at every statement boundary and scratched off the old ones" (though Turing and von Neumann did formalize this in useful ways, they just don't generalize well to realistic computers anymore).
Monads are the mathematical object that is forced on you if you want a rigorous way to describe the semantics of program execution in the vicinity of this "common sense" notion. That's really what everyone is dancing around: your program is only well defined with either:
- a big rulebook full of exceptions and edge cases
- a compositional rule strict enough to give some useful predictability but lax enough to admit most useful programs.
It is this rigor/laxity tension as concerns text on a page and gates on a semiconductor that gives monads a privileged place in the towers of categories. When I worked on Sigma we were among the earlier adopters of ApplicativeDo, for example, because we wanted a slightly different rigor/laxity tradeoff for performance reasons.
Monads are what happens when you do shift the giant pile of "back of the book" compiler details that describe program execution semantics into a much simpler set of rules, but at the cost of increasing the barrier to entry because you need to know the rules before you can print "hello world".
skybrian|8 months ago
The monad interface only requires ways to construct objects using callbacks. The 'bind' operation takes a callback as an argument, but says nothing about when it's actually called: it could be immediately, deferred, multiple times, or even never. It's up to the implementation of the monad, as well as the language, if it's a lazy language.
This is basically a framework. Like with other frameworks, the principle is “don’t call us; we’ll call you.” Arbitrary computation can happen between callbacks. The framework can do whatever control flow it wants, and this is what often makes frameworks opaque. Hiding control flow is what frameworks do, for better or worse.
So far, none of this is specific to a Monad. The Monad part comes from the type signature of the callback function passed in to flatmap(), which allows ‘bind’ operations to be nested.
Once you know what kind of thing you’re dealing with (frameworks) then you can go into why some frameworks qualify as a monad.
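You can see the "framework decides when your callback runs" point directly (a sketch):

```haskell
-- The monad's bind implementation decides how many times, if ever,
-- the callback is invoked:
never :: Maybe Int
never = Nothing >>= \x -> Just (x + 1)   -- callback is never called

once :: Maybe Int
once = Just 1 >>= \x -> Just (x + 1)     -- called exactly once

manyTimes :: [Int]
manyTimes = [1, 2, 3] >>= \x -> [x + 1]  -- called once per element
```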
acjohnson55|8 months ago
Meaning that every collection is a set of possible inputs to the computation that is provided as the argument to a `flatMap` operation. Each `flatMap`, by definition, returns a new collection of possible outputs for each of the inputs, and each of those collections gets concatenated. Every item in the final output collection represents the result of following some path through the computations, selecting a single item at each step. Importantly, the type of the output of each `flatMap` operation can differ from the input.
You can imagine extending this by assigning probabilities, or making the domain continuous (I think...). These extensions would still be monads, just without being simple collections.
It's kind of like how multiplication over whole numbers is repeated addition, but that metaphor becomes less useful for other domains of numbers.
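A sketch of the path-enumeration view, with the element type changing mid-pipeline:

```haskell
-- Each >>= step returns the possible outputs for one input, and the
-- results are concatenated, so the final list has one entry per path.
paths :: [String]
paths =
  [1, 2] >>= \x ->                 -- two starting choices
  [x * 10, x * 10 + 1] >>= \y ->   -- two continuations per choice
  [show y]                         -- note: Int in, String out
```

`paths` has four entries, one for each way through the two choice points.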
hackandthink|8 months ago
https://hackage.haskell.org/package/Agda-2.6.4.2/docs/Agda-S...
psyleft|8 months ago
The most convincing description of this idea imo is this video by Tsoding: https://www.youtube.com/watch?v=fCoQb-zqYDI
theanonymousone|8 months ago
perlgeek|8 months ago
Like, why would you be embarrassed about not understanding the details of MPLS networking if you're not a network engineer who works with MPLS?
mikelitoris|8 months ago
zahlman|8 months ago
fud101|8 months ago
danieltanfh95|8 months ago
Monads in software are just a standard API for a given type. That's it. There's no magic here. Just implement the standard and you have a monad.
It grinds my gears seeing monad tutorial after tutorial using the wrong metaphors or misleading explanations
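A minimal sketch of "just implement the standard API" (the `Box` type is made up):

```haskell
-- A trivial wrapper type given the standard API by hand:
newtype Box a = Box a deriving (Eq, Show)

instance Functor Box where
  fmap f (Box x) = Box (f x)

instance Applicative Box where
  pure = Box
  Box f <*> Box x = Box (f x)

instance Monad Box where
  Box x >>= f = f x   -- unwrap, feed to the callback
```

That's the whole contract; everything else (do-notation, the library of generic combinators) follows from it.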
petesergeant|8 months ago
I don’t think that’s helpful for people to understand _why_ monads though, and that’s generally what people are looking for.
lmm|8 months ago
kazinator|8 months ago
[Content Warning: Lisp]
We define a small OOP framework for monads, plus macros which then can succinctly define different kinds of monads, including generating a comprehension macro for each one.
Smaug123|8 months ago
perlgeek|8 months ago
Seriously, I've read things about lists and nondeterminism a few times in this thread, and I can't help but wonder if "you guys" (functional programming nerds, maybe?) use the word "nondeterministic" differently than the rest of the world?
If not, I'd love a good explanation about what makes lists non-deterministic, and why we would want that, and why they seem to be perfectly deterministic in imperative programming languages.
tel|8 months ago
Let's start with function composition. We know that for any two types A and B we can consider functions from A to B, written A -> B. We can also compose them, the heart of sequentiality. If f: A -> B and g: B -> C then we might write (f;g) or (g . f) as two different, equivalent syntaxes for doing one thing and then the other, f and then g.
I'll posit this is an extremely fundamental idea of "sequence". Sure something like [a, b, c] is also a sequence, but (f;g) really shows us the idea of piping, of one operation following the first. This is because of how composition is only defined for things with compatible input and output types. It's a little implicit promise that we're feeding the output of f into g, not just putting them side-by-side on the shelf to admire.
Anyway, we characterize composition in two ways. First, we want to be clear that composition only cares about the order that the pipes are plugged together, not how you assemble them. Specifically, for three functions, f: A->B, g: B->C, h: C->D, (f;g);h = f;(g;h). The parentheses don't matter.
Second, we know that for any type A there's the "do nothing" identity function id_A: A->A. This doesn't have to exist, but it does and it's useful. It helps us characterize composition again by saying that f;id = id;f = f. If you're playing along by metaphor to lists, id is the empty list.
Together, composition and identity and the rules of associativity (parentheses don't matter) and how we can omit identity really serve to show what the idea of "sequences of pipes" mean. This is a super popular structure (technically, a category) and whenever you see it you can get a large intuition that some kind of sequencing might be happening.
Now, let's consider a slightly different sort of function. Given any two types A and B, what about functions A -> F B for some fixed type constructor F? F here exists to somehow "modulate" B, annotate it with additional meaning. Having a value of type F B is kind of like having a value of type B, but maybe seen through some kind of lens.
Presumably, we care about that particular sort of lens and you can go look up dozens of useful choices of F later, but for now we can just focus on how functions A -> F B sort of still look like little machines that we might want to pipe together. Maybe we'd like there to be composition and identity here as well.
It should be obvious that we can't use identity or composition from normal function spaces. They don't type-check (id_A: A -> A, not A -> F A) and they don't semantically make sense (we don't offhand have a way to get Bs out of an F B, which would be the obvious way to "pipe" the result onward in composition).
But let's say that for some type constructors F, they did make sense. We'd have for any type A a function pure_A: A -> F A as well as a kind of composition such that f: A -> F B and g: B -> F C become f >=> g : A -> F C. These operations might only exist for some kinds of F, but whenever they do exist we'd again capture this very primal form of sequencing that we had with functions above.
We'd again capture the idea of little A -> F B machines which can be plugged into one another as long as their input and output types align and built into larger and larger sequences of piped machines. It's a very pleasant kind of structure, easy to work with.
And those F which support these operations (and follow the associativity and identity rules) are exactly the things we call monads. They're type constructors which allow for sequential piping very similar to how we can compose normal functions.
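In Haskell this composition is written `>=>` (Kleisli composition); a sketch for F = Maybe (`safeRecip` and `safeSqrt` are made-up helpers):

```haskell
import Control.Monad ((>=>))

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- Two A -> F B machines piped together, just like (.) for plain
-- functions; pure plays the role of the identity.
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt
```

`recipThenSqrt 4` is `Just 0.5`, while `recipThenSqrt 0` and `recipThenSqrt (-4)` are both `Nothing`.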
HexDecOctBin|8 months ago
mvdtnz|8 months ago
I mean really. Look at posts like this[0]. What does this give you? Nothing, in practical reality. Nothing.
[0] https://news.ycombinator.com/item?id=44446472
lmm|8 months ago
How would you know? That's the classic Blub Paradox.
Being able to write a custom monad and then leverage the vast array of libraries that already exist has helped me deliver functionality to end users quicker, more maintainably, and with lower defect rates. They don't let you do anything that you couldn't do by writing it out longhand. But just like using generic container libraries instead of writing a specific container for every type you want to handle collections of, they're extremely helpful.
battle-racket|8 months ago
daxfohl|8 months ago
ChadNauseam|8 months ago
nyeah|8 months ago
agnishom|8 months ago
lordgilman|8 months ago
revskill|8 months ago
rebeccaskinner|8 months ago
hirvi74|8 months ago
I do not even know what a monoid or an endofunctor is. While I enjoy math, despite not being the best at it, I am confident I never made it this far in my studies. I looked at the Wikipedia definitions, and I am even more confused now.
draw_down|8 months ago
[deleted]
FeelingsSparked|8 months ago
[deleted]
jrjrjrjrjfjj|8 months ago