I think the misconception the author talks about arises because a noun is used for what is really a property of an object.
An adjective such as "monadic" can be used instead: "In Haskell, IO actions are monadic because the IO type has a flatMap (bind) operator and a unit (return) function which satisfy left identity, right identity and associativity".
All of the above are properties (traits?) of the IO datatype. Monads (in Haskell) don't "exist" by themselves; they're just a property.
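Those laws can be written down as executable checks. IO actions can't be compared for equality, so this sketch spot-checks the laws on Maybe, another monadic type, at a few sample values:

```haskell
-- The three monad laws, spot-checked on Maybe (IO can't be tested
-- this way because IO actions have no Eq instance).
leftIdentity :: Bool
leftIdentity = (return 3 >>= f) == f 3
  where f x = Just (x * 2)

rightIdentity :: Bool
rightIdentity = (Just 3 >>= return) == Just (3 :: Integer)

associativity :: Bool
associativity = ((Just 3 >>= f) >>= g) == (Just 3 >>= (\x -> f x >>= g))
  where f x = Just (x * 2)
        g x = Just (x + 1)
```

These are checks at sample values, not a proof; the laws themselves quantify over all values and functions.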
I think a part of it is a slight weirdness of Haskell typeclasses. In math, the noun "monad" refers to a triple containing a functor and two natural transformations. The equivalent noun in Haskell would be the typeclass instance for a particular type. However, because Haskell (basically) only allows one instance per type, you select different instances by choosing a different (possibly equivalent, as with newtype) type, and so there is some natural conflation that arises in the language people use to talk about these things.
The dynamic may be slightly clearer for semigroups. In mathematics, a semigroup is a pair containing a set and an (associative) operation on that set. In Haskell we say things like "ByteString is a semigroup" but that's short for "there is a semigroup where the set is all possible ByteStrings" and you assume the people you're talking to understand what the operation is because there is only one instance of Semigroup commonly defined for ByteString.
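A small sketch of the newtype-selection point: two different semigroups over the same carrier (Int), distinguished only by wrapper types. (The wrapper names here are invented for the sketch; Data.Semigroup's Sum and Product work the same way.)

```haskell
-- Two different semigroups over the same carrier set (Int),
-- selected by newtype wrappers, mirroring how Haskell picks
-- "the" instance by type.
newtype Add  = Add  Int deriving (Eq, Show)
newtype Mult = Mult Int deriving (Eq, Show)

instance Semigroup Add where
  Add x <> Add y = Add (x + y)      -- the "addition" semigroup

instance Semigroup Mult where
  Mult x <> Mult y = Mult (x * y)   -- the "multiplication" semigroup
```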
This is pretty true and certainly important, but "Monad" does exist for a reason: it essentially stands for "a monadic type", used when you're given a value of some monadic type but no specific one. This happens (in various forms) a lot in Haskell, and so the term has weight.
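A minimal illustration of that generic use: a function written against "some monadic type m", with the Monad constraint as its whole interface. (The name `twice` is invented for this sketch.)

```haskell
-- A function over "some monadic type m", no specific one chosen:
-- run an action twice and pair up the results.
twice :: Monad m => m a -> m (a, a)
twice act = do
  x <- act
  y <- act
  return (x, y)
-- twice works unchanged for IO, Maybe, lists, parsers, ...
```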
It took me so long to figure out that "where is the monad?" was the wrong question. It took me even longer to grok that "monad" is essentially an interface/trait rather than a "type". I think your suggestion of 'monadic' is a big improvement.
I think there's an important difference between saying "Haskell uses monads to do IO" and "we use rings to do arithmetic". In Haskell, understanding monads helps you do lots of tasks more effectively, since so many data types are monads: IO, Maybe, lists, ...
In math, there are other uses for rings, but if you're using them, it helps to know a little ring theory. Similarly, if you want to use any of the other Haskell monads, it helps to know how monads work.
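For instance, the same do-notation that sequences IO also chains Maybe, short-circuiting on the first failure. A small sketch (the `digit` and `addDigits` names are invented for it):

```haskell
import Data.Char (digitToInt, isDigit)

-- Parse a single digit character, failing with Nothing otherwise.
digit :: Char -> Maybe Int
digit c = if isDigit c then Just (digitToInt c) else Nothing

-- Chain two Maybe computations with do-notation, exactly as one
-- would chain two IO actions.
addDigits :: Char -> Char -> Maybe Int
addDigits a b = do
  x <- digit a          -- Nothing here aborts the whole computation
  y <- digit b
  return (x + y)
```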
You say "I think there's an important difference", but then, you end up saying precisely the same things about monads and rings. What was the difference?
> This is why unsafePerformIO is unsafe: it’s completely foreign to the programming model
Maybe, although it is also not type safe because Haskell doesn't have a value restriction for polymorphism (or, looked at another way, Haskell's "value restriction" assumes that function application is a value).
And, by extension, we don't have a value restriction because unsafePerformIO is foreign to our programming model. If it were actually part of the abstraction, Haskell would be a lot more like ML!
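The classic demonstration, sketched: without a value restriction, a "polymorphic reference" typechecks at the top level, and from it one can write a universal cast. (Using `cast` at two different types can produce garbage or crash; the sketch only ever uses it at a single type.)

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A top-level "polymorphic reference": this typechecks because
-- Haskell generalizes the binding despite the hidden effect.
ref :: IORef [a]
ref = unsafePerformIO (newIORef [])
{-# NOINLINE ref #-}

-- From it, a universal cast: write at one instantiation of `a`,
-- read back at another. Unsound in general.
cast :: a -> b
cast x = unsafePerformIO $ do
  writeIORef ref [x]
  [y] <- readIORef ref
  return y
```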
Why? It is just an ADT which could be implemented in almost any language: something like tagging a value with a type tag, then defining a whole bunch of constructors and selectors, and then procedures which remove the tag, apply a given procedure, and stick the tag back onto the result.
(cons 'safe x)
OK, in Haskell this stuff could be type-checked, which means only that the arguments to functions and their return values are of the correct type (the tags are present).
Why is it silly? Haskell libraries derive a lot of safety benefit out of being able to type-check different kinds of effects. One example is the STM library which uses this property to prohibit IO within an atomic transaction. Since transactions may be aborted and/or repeated, it's important to ensure no interaction with the outside world takes place within them lest these effects be repeated.
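A minimal sketch of the technique (this is not the real stm library): a newtype wraps IO, but a module exporting only the type, chosen primitives, and `atomically` gives the user no way to lift arbitrary IO into a transaction, so the type checker rules it out.

```haskell
-- Transaction wraps IO, but the constructor would not be exported,
-- so only the primitives below can run inside a transaction.
newtype Transaction a = Transaction (IO a)

instance Functor Transaction where
  fmap f (Transaction io) = Transaction (fmap f io)

instance Applicative Transaction where
  pure = Transaction . pure
  Transaction f <*> Transaction x = Transaction (f <*> x)

instance Monad Transaction where
  Transaction io >>= k =
    Transaction (io >>= \a -> case k a of Transaction io' -> io')

-- A permitted primitive (a stand-in; real STM offers TVar reads/writes):
tPure :: a -> Transaction a
tPure = pure

-- The only way out; its argument type excludes plain IO actions.
atomically :: Transaction a -> IO a
atomically (Transaction io) = io
```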
Perhaps the best way to think of it is that a Haskell program is a pure function from FFI outputs to FFI inputs. Something like
H out in = out -> (FFIOperation, in)
The `IO` monad is nothing more than how operating "inside of this function" feels. The runtime then drives the pure Haskell program, evaluating its FFI demands and returning their results.
The AST point of view espoused by this article is quite good as well, but it's a little less obvious how to "step" things forward or operate nicely in parallel contexts.
Also, in reality the above perspective is a good way to embed Haskell into other contexts.
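That picture can be sketched as a toy "program as pure value" type: the program either finishes or demands one FFI operation and continues with its result, and a pure "runtime" steps those demands. (All names here are invented for the sketch.)

```haskell
-- A program is a pure value: finished, or demanding one FFI
-- operation (read a line / write a line) plus a continuation.
data Program a
  = Done a
  | GetLine (String -> Program a)
  | PutLine String (Program a)

-- A pure "runtime": feed the program canned input, collect its
-- output, stepping one FFI demand at a time.
runPure :: [String] -> Program a -> ([String], a)
runPure _      (Done a)      = ([], a)
runPure ins    (PutLine s k) = let (out, a) = runPure ins k in (s : out, a)
runPure (i:is) (GetLine k)   = runPure is (k i)
runPure []     (GetLine _)   = error "runPure: out of input"

greet :: Program ()
greet = GetLine (\name -> PutLine ("Hello, " ++ name) (Done ()))
```

A real runtime would perform each demand against the outside world instead of a list, which is exactly the "pure program, impure driver" split described above.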
Not really. The point of Haskell is not to avoid having side effects. The point of Haskell is to allow code to be referentially transparent - this makes it both easier to reason about as a developer, and easier for the runtime to optimise.
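A one-line illustration of referential transparency: a name can be replaced by its definition without changing the result, which is what licenses both equational reasoning and compiler rewrites like sharing. (The names are invented for the sketch.)

```haskell
-- A pure function: calling it twice with the same argument must
-- give the same answer.
square :: Int -> Int
square n = n * n

-- Naming the result and substituting the expression are
-- interchangeable; neither can observe the difference.
viaName, viaSubst :: Int
viaName  = let x = square 7 in x + x
viaSubst = square 7 + square 7
```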
I don't know what that word means; to me it has always seemed a bit silly. The whole point of software is its side effects; you will always have some. So instead of admitting that, you do a little dance to pretend side effects don't really happen in your program. It's very strange to me.
Are you sure that's quite what Haskell's rhetoric is? It sounds to me more like the bad rhetoric of people who do a poor but enthusiastic job of trying to explain Haskell.
Yeah, this was more of a draft than a finished article. I didn't realize anyone was following my blog :P. It's a working title at best—it doesn't really reflect what I wanted to talk about... but then, I'm not sure the second half or so does either.
tel | 11 years ago:
But for learning purposes I agree wholeheartedly!
dschiptsov | 11 years ago:
But calling this "safety" is a silly meme.
wyager | 11 years ago:
You need HKTs (higher-kinded types) to have type-checked generic monads. Almost no languages have them.
dschiptsov | 11 years ago:
http://karma-engineering.com/lab/wiki/Monads
tel | 11 years ago:
http://comonad.com/reader/2011/free-monads-for-less-3/
brandonbloom | 11 years ago:
I guess that I've got to read the rest of the article now too...