Monads Are Not Metaphors (codecommit.com)
96 points by flapjack on Dec 29, 2010 | hide | past | favorite | 39 comments



I've recently finished reading Learn You A Haskell For Great Good!, and I think the author's approach to monads works well. I'd read various metaphorical explanations that only served to further cloud the issue, but thanks to LYAH I finally got it.

In short, he has you using monads long before you understand them (Maybe and IO in particular), then slowly introduces monads by first explaining functors and applicative functors.

In retrospect, the mystique seems crazy. Monads are just not that confusing: they're simply values with added context, along with functions that let you interact with those values without losing the context. It's a shame that this powerful idea is so obscured by its supposed difficulty.
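For the concrete-example crowd, the "value with added context" idea can be sketched in a few lines of Scala (the names `Opt`, `Just`, `Empty`, and `half` are made up here to avoid clashing with the real `Option`):

```scala
// A minimal sketch of a "value with added context": the context is "may be absent".
sealed trait Opt[+A] {
  // bind lets you interact with the value without losing the context:
  // if there's no value, the function is never applied.
  def bind[B](f: A => Opt[B]): Opt[B] = this match {
    case Just(a) => f(a)
    case Empty   => Empty
  }
}
case class Just[A](a: A) extends Opt[A]
case object Empty extends Opt[Nothing]

// A partial function: halving only works on even numbers.
val half: Int => Opt[Int] = n => if (n % 2 == 0) Just(n / 2) else Empty

println(Just(8).bind(half).bind(half))  // Just(2)
println(Just(6).bind(half).bind(half))  // Empty (3 is odd, so the chain stops)
```

The point is that the caller never writes a null check; the context carries it.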


>Monads are just not that confusing: they're simply values with added context, along with functions that let you interact with those values without losing the context. It's a shame that this powerful idea is so obscured by its supposed difficulty.

This is a typical thing in mathematics.


This is why I don't like self-study for mathematics. A lot of the stuff is very simple once you understand what it's for, but finding out what it's for from the formal definition is an uphill battle.

And then, after you see the thing for what it is (months later), you start coming up with questions that lead to more formalism.

Studying from an encyclopaedia is hard, and even a good book will only get you as far as well-known results. Answers are much easier to learn from, so you want someone to provide you with the correct questions and avoid the difficult ones.


>Monads are just not that confusing

I don't think the 'what' is confusing, but the 'why and when'. Monads are a pattern; when I first learned about the Strategy Pattern, I understood it. It wasn't until a while later I started seeing the need for it and could explain why...


Analyzing code and naming patterns is far harder than just writing and reading code.

Also it's kinda pointless. If you go up to an improvising musician and tell them how amazing the hemiola cadence they just used was, they'd probably tell you to chill out and enjoy the music.


My bass teacher used to say: "Music theory is not music. It's just a way we have of talking about music."


Unfortunately I took Music 'A' Level at school (age 16-18). It was pretty much all music theory - history of composers, analyzing music, etc. Pretty much no playing music at all. Boring as hell.


`(a->r)->r` is `a` with some added context?


In the sense that you can combine Cont r a objects into CPS computations over a value of type a, yes.

It's a leaky metaphor but it does reiterate the two levels of logic going on in a monad: the "boxed" value and the "box" itself.
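To make those two levels concrete, here is a rough Scala sketch of a continuation type (the names `Cont`, `run`, and `pure` are my own, not from any library):

```scala
// Cont wraps a computation of type (A => R) => R: "give me the rest of the
// program (a continuation A => R) and I'll produce the final R".
case class Cont[R, A](run: (A => R) => R) {
  // bind chains CPS computations: run this one, feed its result to f,
  // and run the computation f produces with the outer continuation k.
  def bind[B](f: A => Cont[R, B]): Cont[R, B] =
    Cont(k => run(a => f(a).run(k)))
}

// pure injects a plain value: just hand it straight to the continuation.
def pure[R, A](a: A): Cont[R, A] = Cont(k => k(a))

// (3 + 1) * 2, written as a chain of CPS steps.
val prog: Cont[Int, Int] =
  pure[Int, Int](3).bind(x => pure(x + 1)).bind(x => pure(x * 2))

println(prog.run(identity))  // 8
```

The "box" here is the pending computation itself, which is why the container metaphor gets leaky.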


> Here’s the deal: Haskell doesn’t allow side-effects. At all.

I disagree with this, and it was Erik Meijer's "Fundamentalist Functional Programming" lecture that really changed my mind. The point isn't that Haskell prevents side effects. It's that it requires you to be explicit about them. Indeed, some advocates say that Haskell is "the best imperative language". I don't know that I would go that far though.



I have to say, I had no idea what monads were before reading that article, and after reading it, I still don't.

It sounds like something to get the benefits of object-oriented programming while using functional programming, but I'm really confused.


Think of monads as a design pattern. The reason they're named in Haskell is because that language allows you to name a lot of abstract things that most of the time are either ignored or not made explicit.

A monad is a type that keeps some value contained, which is like you said a big part of object-oriented programming. Monads are the use of objects for encapsulation, but a bit more extensible than in regular object-oriented languages. The Maybe monad, called Option in the article, is a type that can hold either some value or Nothing. You have to do a null check to get the value, but because it's a monad, you can use the generic function bind to automate this check.

The IO monad is just another application of this idea of containing something - all IO operations have the world as their input and output. This is implicit in most languages, but in Haskell it's explicit. Any value you get from the outside world is hidden inside an IO action, but you can use bind to automate the required passing around of the world.

One possible reason for confusion is that you can explicitly do null checks with Maybe but you can't explicitly thread the world around with IO. Maybe's method of direct access is public, but IO's is private. Because the /only/ way to get a value out of an IO operation is with bind, you are guaranteed to have all side effects contained in IO actions.
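A toy model of that world-threading, sketched in Scala (this is an illustration, not how GHC actually implements IO; `World` and `putLine` are invented names):

```scala
// Pretend an IO action is a pure function from the world to a new world plus a result.
case class World(log: List[String])
case class IO[A](run: World => (World, A)) {
  // bind threads the world through: first action, then the one f produces.
  def bind[B](f: A => IO[B]): IO[B] = IO { w0 =>
    val (w1, a) = run(w0)
    f(a).run(w1)
  }
}

// A primitive action: "printing" appends to the world's log.
def putLine(s: String): IO[Unit] = IO(w => (World(w.log :+ s), ()))

// Only bind can sequence actions; the world value itself never escapes.
val prog = putLine("hello").bind(_ => putLine("world"))
println(prog.run(World(Nil))._1.log)  // List(hello, world)
```

In real Haskell the `run`/`World` machinery is private to the runtime, which is exactly the public/private distinction the parent describes.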


Do you understand the three-level distinction between type constructors (i.e. templates in C++, or generics in other languages), classes and instances? Do you understand first-class functions (lambdas)? If you've got those concepts down, you can understand monads.

Monads are the name for a particular member of a fourth level on that three-level stack of abstractions (call them type classes, as that's what Haskell calls them, though I'm fudging a little here); a type class is a particular shape for a generic type, in the same way that any given generic type is a particular shape for a class, and a class is a particular shape for an instance.

So when people talk about Option/Maybe being a monad, they mean Option/Maybe is a generic type (i.e. a type constructor) which has a certain shape - i.e. has a particular set of methods / functions defined on it.

That's the abstraction level of monads out of the way. The next thing to understand is the particular operations required. Monads are like containers for values, but rather than operating on the values directly (e.g. y = f(x)), you instead pass the monad the function you'd like it to apply to the value (e.g. y = x.bind(f)), and it returns you a monad which is, logically, the wrapper with that function applied to its contents.

Because the application of the function to the value inside the monad is delegated to the monad itself, it gets to decide what to do with it. In the case of Option/Maybe, it'll only apply the function if the contents aren't empty. In the case of IO, it'll logically buffer up a list of imperative I/O operations which will be logically returned as a result of the main function; and that list of imperative instructions will then be imperatively executed, after the nice and pure Haskell program has finished its unpolluted existence. (That's how it happens logically (assuming there are no cheats available in the language), but not necessarily actually.)


I really like the shapes explanation. It's a much more explicit version of what I was trying to say.


A monad is an abstraction for composing functions that return values with a context, by defining how to compose that context. That's it.
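Concretely, in Scala (using the standard library's `flatMap` as bind; `parseInt`, `recip`, and `parseThenRecip` are made-up helpers):

```scala
// Two functions that return a value with a "may fail" context.
def parseInt(s: String): Option[Int]  = s.toIntOption
def recip(n: Int): Option[Double]     = if (n == 0) None else Some(1.0 / n)

// Composing them: flatMap (bind) supplies the failure-handling glue,
// so the pipeline short-circuits as soon as one step returns None.
def parseThenRecip(s: String): Option[Double] = parseInt(s).flatMap(recip)

println(parseThenRecip("4"))   // Some(0.25)
println(parseThenRecip("0"))   // None
println(parseThenRecip("hi"))  // None
```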


With the recognition that those functions have a common protocol / interface, and consequently have more in common than is usually noticed.

Monads aren't a new thing, rather they're giving a name to a concept that has been hiding in plain sight. Trying to look at them directly is confusing at first, like suddenly being able to explicitly manipulate continuations.


I found this explanation of monads more lucid than most, and it is true that the examples and analogies are usually more confusing than helpful.

But what I still don't understand is how monads force sequencing more so than regular function calls and expression evaluation. The very first example is about sequencing, but there doesn't seem to be much explanation of how monads are useful for inducing sequencing in a lazy pure functional context.


When learning any new sufficiently abstract typeclass (whether it is Monad, Applicative, Functor, Monoid, Arrow, or whatever), the best way to learn it is in two phases: 1) What is the mathematical meaning of the typeclass? 2) How can the typeclass be applied (what are the instances) and how do they map to reality?

It is very tempting to try to jump quickly to 2 - but it helps to force yourself to avoid thinking about applications for a bit and understand the applications.

I think the article does achieve this.


You mean 'and understand the axioms', don't you?


I'd love to have the data to graph the ratio public_monad_tutorials/haskell_programmers over time.


First wave: don't think of monads as monoids in the category of endofunctors. No, Monads are burritos, programmable semicolons, design patterns etc.

Second wave: monad tutorial fallacy, red herring, don't read/write monad tutorials, they just sort of grow on you! e.g.

http://www.haskell.org/haskellwiki/Monad_tutorials_timeline

http://news.ycombinator.com/item?id=336085

http://byorgey.wordpress.com/2009/01/12/abstraction-intuitio...

http://news.ycombinator.com/item?id=1534405

http://news.ycombinator.com/item?id=1997341


Almost, but not quite, entirely unlike what you asked for. But if you are interested in similar mentions in print

http://ngrams.googlelabs.com/graph?content=monad%2CHaskell&#...


A monad is like a tiger in a cage. You don't want it running around mauling all your code, so you transport it in a cage. If someone needs your tiger for something, they need to have the key to his cage. Also, it is polite to put the tiger back in the cage if anyone else needs him.


I wish I knew Scala; I was too caught up trying to decipher the code to take in the author's point. I have a basic knowledge of monads but unfortunately wasn't able to glean anything further from this article.


I found a quick 15-page introduction to Scala, and it took me about an hour to digest (of course I didn't dive in very deeply). It gave me sufficient knowledge to understand the article about monads.

So here it is: http://www.scala-lang.org/docu/files/ScalaTutorial.pdf


Thanks, that was helpful.


I'm trying to parse all of this, but my understanding of Scala is shaky. Could anyone explain this statement -- I think it'd help a lot:

  sealed trait Option[+A] {
    def bind[B](f: A => Option[B]): Option[B]
  }

(My best guess: define a trait named `Option` ... something about an object of type A ... really not sure of the +. Then define a function `bind` that ... something about type `B` ... which takes an argument of type `f`? and returns the argument wrapped in an Option?)


define a trait named `Option` <parametrized with another type> A <(think of the List type that can be parametrized with another type, e.g. Booleans)>. The + is there to indicate covariance, i.e. that for a type B that extends A, Option[B] can also be thought of as a subtype of Option[A] (e.g. a list of Strings is a subtype of a list of Objects).

The Option type defines a method `bind` (I'll come back to the type B) which takes a function f that takes something of type A and returns something of type Option[B], no matter what B is (Scala makes you declare the type parameters you use in a function's arguments at the beginning, after its name).

I found that explaining bind works best with a more familiar type like List.

If we have a List of A's (to simplify further, let's say that A is Int, so we have a List of Ints), then bind is a function that applies its argument, itself a function, to all of the list's elements and merges the resulting lists together. Suppose we pass a function `f` that, for an Int, returns the Int itself and its double to the bind function of the following list: [1, 10, 100]

In the first step, the function f is applied to all the list elements: [[1, 2], [10, 20], [100, 200]]. Then bind flattens this, so we end up with: [1, 2, 10, 20, 100, 200].
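In Scala, where bind on lists is the standard `flatMap`, the same example looks like:

```scala
// f returns each Int together with its double, as a list.
val f: Int => List[Int] = n => List(n, n * 2)

// flatMap (bind) maps f over the list and flattens the nested result.
val result = List(1, 10, 100).flatMap(f)
println(result)  // List(1, 2, 10, 20, 100, 200)
```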


It's ok that it takes some time for the idea of the necessity of Monads to spread. I bet it took at least the same time for the Haskell guys to understand the necessity of IO. ;-P


Great article. One of the more lucid explanations of Monads... although I do wish a different language was used. As much as people don't like C++, it is probably the simplest language to explain simple OO concepts (at least of the set which are expressible in C++).


I strongly disagree. C++'s particularly clumsy static typing introduces a LOT of conceptual baggage. Smalltalk, Scheme, or Self would probably be better, depending on whether you consider prototype-based OO to be a novelty or just the bare essentials.


For simple examples, such as those from this article, I don't think you see C++'s type system at all -- except maybe the fact that it doesn't do duck typing (which you also see with Scala).


I always liked MenTaLguY's explanation:

http://bit.ly/fh1F2l



Historical correction: the monad idea originated with Eugenio Moggi not Philip Wadler.


I don't think a correction is necessary; the article just says Wadler 'took advantage of the monad pattern to solve this problem' (the problem of Haskell eventually needing side-effects to be of any practical use), implying the monad idea already existed.


Right, the monad concept was developed in the 60s. Moggi applied it to computation in the late 80s.


A monad is not a metaphor, but a monad is like a simile.



