Hacker News
Snake-fury – a challenge for Haskell beginners (github.com/lsmor)
139 points by bmacho on Sept 7, 2023 | 49 comments



Some thoughts based on reading the README:

* I'm not sure if the "challenge" approach will work. I think it's good to demonstrate things to students before letting them loose. I strongly agree that process (e.g. knowing how to find information) is really important and under-taught. I'm not sure that the approach here will be sufficient scaffolding for students to make progress.

* I'm not convinced that monads are really all that hard. I'm mostly a Scala developer. We used to have a lot of monad tutorials at Scala conferences but not so much any more. I feel the baseline level of knowledge has moved on, just like you don't need to explain first-class functions to most developers today but it was very necessary 20 years ago. I think once you start using `flatMap` / `bind` it makes sense fairly quickly. `IO` is a little bit harder because it wraps up the interpreter pattern (the distinction between description and action) in a monad, so there are two distinct concepts at work. However, my feeling about monads is that they are either deeply profound or trivially simple (it's just a useful API!) and too much time is spent on deep profundity when what beginners need is just the trivially simple.

* It's always good to have different ways to approach a problem. What works for one person might not work for another. While some books & tutorials are better on average than others, I've never found one learning resource that works for everyone.


People seem to do fine dealing with iterators—which are defined by their operations (hasNext, next) and are convenient because they have a dedicated syntax (foreach)—without the need for enlightenment from esoteric "iterator tutorials".

Monads are defined by their operations (flatMap/bind, unit) and have a dedicated syntax (do). I agree with you: just get used to flatmapping stuff and the concept will sink in.
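
To make that concrete, here's a minimal sketch in Python (the `Maybe` class and `safe_div` are my own illustration, not any particular library): a type defined purely by its operations, unit (`just`) and `flat_map`.

```python
# A toy Maybe, defined purely by its operations: unit ("just") and flat_map.
class Maybe:
    def __init__(self, value, present=True):
        self.value = value
        self.present = present

    @staticmethod
    def just(value):
        # "unit": wrap a plain value
        return Maybe(value)

    @staticmethod
    def nothing():
        # the absent case
        return Maybe(None, present=False)

    def flat_map(self, f):
        # "bind": apply f (which itself returns a Maybe), short-circuiting
        return f(self.value) if self.present else self

def safe_div(a, b):
    return Maybe.just(a // b) if b != 0 else Maybe.nothing()

# Chaining flat_map reads like sequential code and stops at the first Nothing.
result = (Maybe.just(10)
          .flat_map(lambda x: safe_div(x, 2))
          .flat_map(lambda y: safe_div(y, 0)))
```

Once a few chains like this have sunk in, the general interface is mostly a formality.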


Assuming you mean iterators in other languages, I have seen folks bitten by iterators supporting the "member" functions. That said, agreed that standard use is mostly fine.

The problem with monads, in this regard, is folks try to bend everything to them. I've seen similar with "everything can be iterated" before, and it ended as poorly as you might expect.

The basic trap comes down to the fact that many of us (yes, I'm projecting) really only understand basic algebra as defined with addition and multiplication. But a basic understanding doesn't help you see how that will abstract over working with other real-life things. And as soon as you have to start dealing with non-transitive and non-associative things, the basic understanding can hurt.


>Monads are defined by their operations (flatMap/bind, unit)

This is why everyone's intuition screws up. It's better to explain monads in terms of the Kleisli operator (>=>). Monads, like functions in functional programming, are about composition, and the explanation should center around that.

The right intuition about monads is that they are higher-level functions that, like regular functions, can compose.

Bind by itself feels weird and sort of pointless to a newcomer, who usually wonders why this strangely specific operation needs to be abstracted behind the name "monad".
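
For what it's worth, here's the Kleisli-composition view sketched in Python, using (value, error) pairs as the monadic result (names and examples are mine):

```python
def kleisli(f, g):
    """Compose two monadic functions, Haskell's f >=> g:
    (a -> m b) composed with (b -> m c) gives (a -> m c)."""
    def composed(a):
        b, err = f(a)
        if err is not None:
            return None, err  # first failure short-circuits
        return g(b)
    return composed

def parse_int(s):
    return (int(s), None) if s.isdigit() else (None, "not a number")

def reciprocal(n):
    return (1 / n, None) if n != 0 else (None, "division by zero")

# Composes exactly like ordinary functions, just at a "higher level".
parse_then_recip = kleisli(parse_int, reciprocal)
```

The point is that `kleisli` plays the same role for these functions that plain composition plays for ordinary ones.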


Is there a tutorial that uses this approach?


You're in luck: I wrote one, which no one read, in a reply:

https://news.ycombinator.com/item?id=37427499

I hope it helps. If not then I'm utterly wrong.


The problem with monads is just the words. "bind" in particular is a bad term!


It makes sense in some contexts: it binds the value produced by the monad to a new name. For example, it binds (or "assigns") the value read from a file to the variable x.

I agree that `flatMap` is clearer when manipulating lists, which is probably a better first approach than the IO monad. However, `flatMap` would sound weird for the IO monad, I think.

What I find confusing is using the do notation for monads with very different meanings. The code looks the same but does something totally different!

I wonder if putting the Monad interface ("typeclass") front and center in Haskell is such a good idea. Kinda feels like premature abstraction. We should use descriptive names instead of opaque abstractions.

...or maybe I haven't seen the light yet.


I'm using Scala syntax here... If I have:

  val a = IO(42)
  def f(n: Int) = IO(n)
And I use map, then I get:

  val r: IO[IO[Int]] = a.map(f)
Which we don't want... the solution is to flatten it. This can be done with flatten, or flatMap:

  a.map(f).flatten
  a.flatMap(f)
Both produce IO[Int], and this makes sense to me using any monad. By contrast, bind means nothing to me.
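
The same shape in Python, with a toy `Box` wrapper standing in for `IO` (all names here are hypothetical):

```python
class Box:
    def __init__(self, value):
        self.value = value

    def map(self, f):
        return Box(f(self.value))

    def flatten(self):
        # assumes self.value is itself a Box
        return self.value

    def flat_map(self, f):
        return self.map(f).flatten()

def f(n):
    return Box(n)

a = Box(42)
nested = a.map(f)            # Box(Box(42)) -- one layer too many
flat1 = a.map(f).flatten()   # Box(42)
flat2 = a.flat_map(f)        # Box(42), in one step
```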


>`IO` is a little bit harder because it wraps up the interpreter pattern (distinction between description and action) in a monad

I've always thought that the trick to explaining the IO monad in Haskell is just to realize that IO in Haskell has very little to do with monads. Fundamentally, Haskell values of type (IO a) may have side effects when evaluated, the evaluation semantics of Haskell are such that expressions are (observably) evaluated at most once†, and the >>= operator forces sequential evaluation by threading through a hidden world parameter and doing some special compiler magic. (It's a common myth that sequential evaluation is forced merely by the type of >>=, but this is just false.) Once you've grasped this underlying mechanism, you can then observe that 'return' and >>(=) form a monad.

† This is subtle and somewhat beyond my pay grade. Strictly speaking the semantics of Haskell only require non-strictness, which doesn't directly entail any limit on the number of times an expression is evaluated. But in practice, it is safe to assume that a Haskell implementation will evaluate any expression with side effects at most once. I'm sure someone else can do a better job of explaining exactly to what extent, if any, this property is entailed by a non-strict semantics.


The trick with IO was basically putting a monotonically increasing counter in it. Binds increment the counter creating the implicit ordering dependency needed for the compiler to "sequence" the actions as expected.

It's supposed to be some elegant design handed down from an all-knowing creator I guess, but it's a useful hack in practice.


Yes. You also need something to force strictness and disable certain potential optimizations, though. The counter isn't actually used, so the compiler ought to be free just never to evaluate it, or delete it altogether.


flatMap is the worst way to explain monads. People associate mapping with iteration, and a monad is a much higher-level concept that is hard to associate with that.

Also, bind (the >>= operator) is the worst way to explain monads. Flatmap is a synonym for bind, which makes everything even more confusing. If your intuition about monads centers on the >>= operator, then your intuition is incomplete. This infatuation with bind as the central concept of monads is what screws everything up: it leaves people with broken intuition, and it also makes learning 1000x harder.

Better to explain monads in terms of functions.

A function (F) is something that takes an input A and returns an output B.

   F(A) -> B
You can compose functions: combine two functions into one bigger function, so long as the types match. We often overload an operator for function composition:

   F(A) -> B
   F2(B) -> C

   F3 = F2 * F = compose(F2, F)
   compose(F2, F) = func(A) -> C {return F2(F(A))}
You can create bigger functions by composing lots of tiny functions

   F5 = F4 * F3 * F2 * F
A monad is simply a special type of function that returns something in "addition" to the return type. One way to think of it is that it's like Golang, where the function returns an additional error value; another way is that it returns its result as a list with additional values appended.

normal function:

   F(A) -> B
monad:

   M(A) -> (B, error)
Nothing crazy here. M is a monad; we can give it a name: call it the "golang error monad."

Why does this matter? Well, for one thing, monads should be able to compose like functions.

You should be able to:

   M5 = M4 * M3 * M2 * M1
How would that work? Imagine two monads:

   M(A) -> (B, error) = func(A){return (A, nil) if A.correct() else (nil, error.New()) }
   M2(B) -> (C, error) = func(B){return (B, nil) if B.correct() else (nil, error.New()) }

   M2 * M = compose(M2, M)
   compose(M2, M) = func(A){
         B, err = M(A)
         if err {
              return nil, err
         } else {
              return M2(B)
         }
   }
So monads can be composed like functions; it's just a little different. For the golang error monad, as soon as one monad in the composition chain returns an error, just make the whole thing an error.

In Haskell the compose operator for monads is F1 >=> F2; for functions it's F1 . F2.

So that's all it is. Functions have the property that they compose. Monads are like functions with the results wrapped in a burrito, but they should be able to "compose" as well. It's like a higher-level version of a function.

That's all.

Typically in languages you create functions by defining functions. Normal. In languages with first class support for monads, you define the monads like functions, BUT after that you HAVE to define how the monads compose. That completes the definition of a monad and function.

Functions need to be defined in terms of how they compose as well, but you typically don't do this in other languages: those languages either have you do it explicitly, F(A) = F1(F2(A)), or have the compose operator predefined.
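
The pseudocode above translates almost directly into a runnable sketch; here it is in Python, chaining a whole pipeline with a fold (the helper names and sample steps are mine):

```python
from functools import reduce

def compose2(m2, m1):
    # compose(M2, M) from above: run m1, bail on error, otherwise run m2
    def composed(a):
        b, err = m1(a)
        return (None, err) if err is not None else m2(b)
    return composed

def chain(*ms):
    """chain(m1, m2, m3)(a) runs m1, then m2, then m3,
    stopping at the first error."""
    return reduce(lambda acc, m: compose2(m, acc), ms)

strip = lambda s: (s.strip(), None)
nonempty = lambda s: (s, None) if s else (None, "empty input")
upper = lambda s: (s.upper(), None)

pipeline = chain(strip, nonempty, upper)
```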


From skimming the code, this honestly seems to be designed in a way that begs to be written in Erlang, not Haskell.


I think this is a great idea. I love working with haskell and I’m always excited to see more opportunities for people to learn it in ways that work for them. I learned Haskell mostly through building things without a lot of structured reading, and it can be a rough path but I think it can work out in the end for sufficiently motivated people.


I have read many a monad tutorial. I'm reading through your new book right now, and the chapter on IO is one of the best onramps I've read for building up intuition for monads!

Writing a book is hard work. Thank you for all the hard work you put into writing this book; it's great! (I haven't gotten to the monad transformers yet in the book; I hope your book can finally help me get over the hump in understanding them :-)


Thanks! I spent a lot of time trying to get the introduction just right in a way that was practical and accessible, I’m really happy to see it’s been landing very well with people.


> now you know what a monad is... therefore you will be unable to explain to anyone what a monad is

Isn't a monad effectively just an abstract wrapper for some properties that you can apply sequential transforms on? Why do people keep making out they are some form of voodoo?


For people learning Haskell, one of the big challenges I see is that learning about monads is often when people need to start understanding higher kinded types. It’s not so bad as long as you focus on container-ish things like lists, Maybe, and Either, because people tend to have an easier time thinking “the m is the box and the a is the value inside it”, but it really tends to throw people off balance when they need to think about something like IO or State where the intuition is less immediately obvious.

Ultimately I don’t think learning them has to be hard, but it can be, and it does usually require a bit of patience to let the full picture sink in.


Can you give an example of the full picture to someone who doesnt know haskell? The only functional language I'm familiar with is elixir.

I tried learning about them for fun but couldn't see many examples unless I know all of the weird haskell syntax.

The core basis that I gathered of functors is that they're like anything which is "mappable", the key part being it can be any type, and not something like a list or array. So maybe you have an object data type, and you create a custom function which makes it mappable.

The basis of monads was a functor that returns another functor, and the example I saw was a "stream". Elixir does have streams, and this "makes sense" but I wasn't super satisfied with that answer as I highly doubt all this talk of making it hard to understand is about streams.

Is there an actual example that exists where you can present the different ways to solve it? Ie... you have X data structure, and you want to get Y parts out of it while transforming Z at the same time. Method 1: no monads, Method 2: monads, or something like that?


One example would be to simulate some form of early return from a procedural language. Suppose you have a function which executes some operations, and these operations can sometimes fail, so they return a Maybe (Elixir probably has some form of optional type).

You could write this without making use of Monads using:

    f = case op1 of
          Nothing -> Nothing
          Just x ->
            case op2 x of
              Nothing -> Nothing
              Just y ->
                case op3 y of
                   ...
Or you can use the Maybe monad to make this a bit nicer

    f = op1 >>= (\x ->
        op2 x >>= (\y ->
        op3 y >>= (\z -> ...)))
Or you would probably use the do notation, which is just shorthand for >>=

    f = do x <- op1
           y <- op2 x
           z <- op3 y
           ...
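
A loose Python analogue of the same early-return pattern, using None as the Nothing case (op1/op2/op3 are stand-ins I made up):

```python
def bind(m, f):
    # The whole Maybe idea in one line: stop as soon as anything is None.
    return None if m is None else f(m)

def op1():
    return 10

def op2(x):
    return x + 1 if x > 0 else None  # can fail

def op3(y):
    return y * 2

f = bind(bind(op1(), op2), op3)  # all steps succeed
failed = bind(op2(-5), op3)      # op2 fails, so op3 never runs
```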


I'll reference the binary parsing example I've showed in another comment: https://news.ycombinator.com/item?id=37425640

Let's first think about how we would write that second example in an imperative language (loosely modeled after Java but don't yell at me if it doesn't compile):

  Header parseHeader(ByteInputStream bytes) {
    int size = bytes.readWord8().toInt();
    Word16[] fields = new Array();
    for (i : (1...size)) {
      fields.add(bytes.readWord16le());
    }
    return new Header(fields);
  }
but that's no good in FP, since we're modifying state on a byte input stream, so let's try to make it more functional (let's make some syntax allowances for that, so it will look less like Java):

  Header parseHeader(ByteArray bytes) {
    val (size, rest) = bytes.nextWord8();

    val (_, allFields) = (1...size).fold((rest, [])) { _, (currentRest, fields) ->
      val (field, nextRest) = currentRest.nextWord16le();
      (nextRest, fields.append(field))
    }

    return Header(allFields);
  }
Now we avoid mutation, so the FP gods are happy, but we have to thread the state through the computation (look at how many variables we used) and it will quickly get out of hand if the structure becomes more complicated. Plus because so many of these variables have the same type, it's easy to accidentally introduce errors.

The example with the "Get Header" monad has the advantage of being as simple to read as the imperative example, but purely functional. That works because the Get monad builds up a description of steps to perform, so is purely declarative. It is only executed when I later call it on some actual byte string (e.g. by "runGet parseHeader myBytes"). As an added benefit, this means that I can define the parsing logic without actually calling it. I could also write additional code that takes this representation and does other things on it instead of just running it (e.g. running it with extra logging, or doing some kind of dry-run, etc.)
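
To illustrate the "description of steps" idea, here's a toy Get-style monad in Python. This is my own sketch, not the real binary package: a parser is a function (data, offset) -> (value, new_offset), and nothing consumes bytes until run_get is called.

```python
def get_word8():
    return lambda data, i: (data[i], i + 1)

def get_word16le():
    return lambda data, i: (data[i] | (data[i + 1] << 8), i + 2)

def pure(value):
    return lambda data, i: (value, i)

def bind(p, f):
    # Sequence two descriptions: run p, feed its value to f, continue there.
    def step(data, i):
        value, j = p(data, i)
        return f(value)(data, j)
    return step

def parse_header():
    # size <- getWord8; fields <- replicateM size getWord16le
    def with_size(size):
        def loop(k, fields):
            if k == 0:
                return pure(fields)
            return bind(get_word16le(), lambda w: loop(k - 1, fields + [w]))
        return loop(size, [])
    return bind(get_word8(), with_size)

def run_get(parser, data):
    # Only here does anything actually run.
    value, _ = parser(data, 0)
    return value
```

Because `parse_header()` is just a value, you could also hand it to a different interpreter that logs each step or does a dry run, as the parent comment describes.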


Oooooo... that makes perfect sense.

> That works because the Get monad builds up a description of steps to perform, so is purely declarative.

AWESOME way to put it. Thank you...


Definitely agree about intuitions.

I think even experienced people can have issues when a type has a lot of type variables. What do they all mean? It's almost like an implicit API you need to learn for that particular type, with varying levels of documentation or lack thereof.


> Isn't a monad effectively just a [something]?

I think this article responds well to that kind of "it's so obvious!" flash of insight: "The Monad Tutorial Fallacy", https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...

Basically, you have to build your own intuition. Examples of monads are just examples, what matters is how you reached that insight, and there's no shortcut for that; you cannot simply be told that "monads are like wrappers" (or burritos, read below). Like the article states:

> “Monads are easy,” Joe writes. “Think of them as burritos.” Joe hides all the actual details about types and such because those are scary, and people will learn better if they can avoid all that difficult and confusing stuff. Of course, exactly the opposite is true, and all Joe has done is make it harder for people to learn about monads, because now they have to spend a week thinking that monads are burritos and getting utterly confused

[...]

> What I term the “monad tutorial fallacy,” then, consists in failing to recognize the critical role that struggling through fundamental details plays in the building of intuition. This, I suspect, is also one of the things that separates good teachers from poor ones. If you ever find yourself frustrated and astounded that someone else does not grasp a concept as easily and intuitively as you do, even after you clearly explain your intuition to them (“look, it’s really quite simple,” you say…) then you are suffering from the monad tutorial fallacy.


I think of monads mostly as "an implicit context on which we can perform operations in sequence", which is maybe similar. But I'll agree with other people here that you really have to build up your own intuition by working through examples and maybe trying to write your own Monad instances, etc. Otherwise you will apply this intuition in a subtly wrong way.

There are so many cool monads that work differently from IO, Maybe and such types, e.g. the binary package allows me to write

  data Header = Header Word8 Word16

  parseHeader :: Get Header
  parseHeader = Header <$> getWord8 <*> getWord16le
or for a more complicated example

  data Header = Header [Word16]

  parseHeader :: Get Header
  parseHeader = do
    size <- fromIntegral <$> getWord8
    fields <- replicateM size getWord16le
    return $ Header fields
and then I can later run parseHeader on some binary input to parse a header, and I can do it in one go or I can run it incrementally, it really doesn't matter, because the Get monad only describes the way the input should be parsed and not the actual parsing algorithm.


Because, that's nowhere near general enough to describe what monads can do, that is, your explanation is wrong because of its over-specificity. That's the problem with monads. They are an extremely general mathematical object, but the specific uses of them are wildly different. It's the exact same "problem" with functor and applicative.

It's like explaining a vector space using the analogy of arrows in space and then having to explain how sets of functions are also a vector space.

In mathematics you just have to stick with the general definition and a few examples, keeping in mind that those examples are specific. Most programmers are simply not used to thinking in that way. That's why it's "some form of voodoo".

Further, much like the definition of a vector space does not make it immediately obvious that functions can be vectors, it's not obvious from looking at the definition of a monad that it allows us to do sequential IO. That's why people think it's some sort of voodoo. You have to just accept that the example obeys the definition, the example doesn't fall out of the definition.


You should consider explanations within the context of which they are being given. The explanation above is sufficient for many cases, and it is a sufficient foundation for building further understanding.

It's not helpful to tell someone their definition is wrong because it doesn't cover all possibilities. You shouldn't ask a 6-year-old to define numbers and tell them they're wrong when they describe the integers. This would just bring confusion and anger to the child and make you look like an arse. You might instead tell them it's a good explanation, but there are some numbers, like one half, that it doesn't include, so maybe we can make the description a bit bigger and include the rationals as well. This makes the child feel good because they were right about something and also learned something, and you feel good because you helped them learn. The same is true of adults.


Okay but the point is that all these analogies claim to apply to all possibilities, and in fact this specific person has claimed that his analogy is a general definition of monads and so applies to all monads.

> This makes the child feel good because they were right about something and also learned something and you feel good because you helped them learn.

I'm not going to treat them like a child just to give them the impression that they contributed. They said something that I think is inadequate, so I told them that.


> Because, that's nowhere near general enough to describe what monads can do

Totally disagree. You are effectively saying you cant define anything unless you include every case, which is clearly nonsense.

> it's not obvious from looking at the definition of a monad that it allows us to do sequential IO

The transform is "transform the wrapped properties into some IO call", and all transforms are explicitly sequential.


Here is Haskell's Monad:

    class Monad m where
      (>>=)  :: m a -> (  a -> m b) -> m b
      (>>)   :: m a ->  m b         -> m b
      return ::   a                 -> m a
and these functions must obey the properties

    return a >>= k                  =  k a
    m        >>= return             =  m
    m        >>= (\x -> k x >>= h)  =  (m >>= k) >>= h
Anything other than the definition I gave is an analogy that will only work for certain examples; that is, unless you can mathematically prove that your analogy is isomorphic to this definition.
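
The three laws can at least be spot-checked for a concrete instance. Here they are for the list monad in Python (bind is concatMap; these are examples, not a proof):

```python
def unit(a):
    return [a]

def bind(m, k):
    # concatMap: apply k to each element and concatenate the results
    return [b for a in m for b in k(a)]

k = lambda x: [x, x + 1]
h = lambda x: [x * 10]
m = [1, 2]

left_identity = bind(unit(3), k) == k(3)
right_identity = bind(m, unit) == m
associativity = bind(m, lambda x: bind(k(x), h)) == bind(bind(m, k), h)
```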


Yeah, my definition works for all those examples.


How does your "definition" (I will take that to mean "an abstract wrapper for some properties that you can apply sequential transforms on" from your first comment) apply to e.g. the monad whose algebras are monoids?


Okay, then prove that your definition of a monad is isomorphic to the given definition


> The transform is "transform the wrapped properties into some IO call", and all transforms are explicitly sequential.

Not every monad has to do with IO. The oddity of Haskell is that Haskell has this IO type and every real world action has to produce an IO value, but the defining characteristic of IO isn't that it's a monad.

Now, in the IO type, we use monads to sequence I/O operations, indeed (like "read something from the keyboard, then write this same something you read").

But there are other monads that do other things! Monads always enable you to write sequential-looking code (through do notation), sure, but they don't need to execute sequentially, and most of them have nothing to do with IO!

For example, the Cont monad enables writing code in continuation-passing style [0], and it's famously difficult to grok [1]. Suffice it to say that it enables using the "call with current continuation" operator from Scheme (which doesn't work on IO, but works on Cont) [2].

[0] https://www.haskellforall.com/2012/12/the-continuation-monad...

[1] https://stackoverflow.com/questions/3322540/how-and-why-does...

[2] https://stackoverflow.com/questions/20451022/how-to-interpre...
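
A minimal Cont sketch in Python, including call/cc (my own toy, assuming the standard CPS encoding): a Cont value is a function that takes "the rest of the program" as a continuation k.

```python
def unit(a):
    return lambda k: k(a)

def bind(m, f):
    # Run m; when it produces a, run f(a) with the original continuation.
    return lambda k: m(lambda a: f(a)(k))

def call_cc(f):
    # Hand f an escape continuation; invoking it discards the rest.
    return lambda k: f(lambda a: lambda _ignored: k(a))(k)

add_one = lambda x: unit(x + 1)

prog = bind(unit(41), add_one)
result = prog(lambda x: x)  # run with the identity continuation
```

Note that nothing here executes sequentially by necessity; the "sequencing" is just nested function calls.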


You can define abstractions very precisely in mathematics, that's what makes it useful. You have to get used to the fact that you're talking about the abstraction and not any specific instance/use of it.

In the case of monads, the things that are useful which we can say about them: unit is a left identity for bind and also the right identity for bind, etc. What makes an abstraction useful, and is kind of essential in my opinion, are the laws.

These do include every case because we don't say anything about specific terms or types, just the laws of bind (in the case of monads).

I agree that people keep making them out to be some kind of mountain to climb when, after building up an intuition and learning the fundamental, it turns out they are more of a hill. However I think it takes a good mix of both the theory and application to appreciate how elegant and simple it is in the end.


> You are effectively saying you cant define anything unless you include every case, which is clearly nonsense.

No I'm not. I will repeat what I said again: such things are defined by their general mathematical definition, and any analogy can only speak for certain subsets of examples of that definition.

> The transform is "transform the wrapped properties into some IO call", and all transforms are explicitly sequential.

How exactly does the mathematical definition of a monad say anything about "transforms"?

I think you didn't actually read my comment.


> How exactly does the mathematical definition of a monad say anything about "transforms"?

I'm not sure that it does, but a "transform" as I mean it in my basic description is applying an operation to the "wrapped" properties.


The word "transformation" in that name has absolutely nothing to do with your use of the word "transform" ("transform the wrapped properties into some IO call"). MacLane could have decide to call natural transformations "widgets". My point is, besides the name (which is completely arbitrary), why would you even begin to believe this has something to do with "transforming properties into IO calls"?

Nice shadow edit. For context, the previous comment used to consist of a single link to https://en.wikipedia.org/wiki/Natural_transformation


Yeah to be honest it's kind of a red flag given the context to see a random term misappropriated in such a way


The transformation mentioned there has nothing to do with using the bind operator within a monad. An example of a natural transformation would be transforming from one monad to another monad.


I mean monads alone are not enough to do IO. Like it’s not possible to implement the IO monad in pure haskell. It’s more like the haskell runtime exposes a monadic API for its IO system.

The general intuition that makes sense for me after a few years of full time haskell is that Monads are the thing that allows us to encode effectful computations within a pure language.


Oftentimes people don't want the general thing. They want to know how to solve a problem.

An analogy I might give is memory ordering constraints in C++. If somebody comes to you and says "hey I've got some concurrent code that has these requirements", it might be thoroughly reasonable to give them a recipe that uses a particular sequencing constraint and operation pattern even though you could have a much more general discussion of the nature of happens-before relationships and proofs of correctness of concurrent algorithms. But somebody's eyes might glaze over.

This is even true in mathematics. Sometimes you want to learn the full abstraction. Sometimes you want recipes that let you solve a PDE.


At the same time, the general definition didn't pop up out of nowhere. I can't speak specifically for the history of monads, but very often these ideas start as specific ideas with very practical intuition, and are then carefully generalized.

If you tried to learn linear algebra by starting with the general definition of linear transformations on vectors in an abstract space, you'd get nowhere. You need a reference point in order to effectively generalize.

For example, I first came to understand functors/applicatives/monads as contexts, so the idea of a list being a monad was actually kind of weird to me.

The other problem with monads (in Haskell) is that it's sometimes far from obvious what a particular monad actually does. If I see code like this:

  thing1 >>= frobify
It tells me basically nothing about what the computer will actually do, unless I have some familiarity with the types involved. You see similar issues with things like context managers in Python, and with languages that support operator overloading for user-defined types.

The difference is that we all have some generally correct intuition about what "+" means, so when we see that operator is overloaded on a custom type, we can at least have an idea about what it might do. But a programmer who lacks concrete intuition about what monads do will not be able to perform the same reasoning when they see ">>=" or "join".


I don't know that this definition is very precise. For example, the type constructor `(->) r` (which you can roughly understand as the class of all functions that go from a fixed type `r` to other types) is a monad in Haskell [1]. In what sense is this class of functions a "wrapper"?

Additionally, even if you insist that this reader monad is a "wrapper", your definition would also informally apply to things like `Applicative`, and not all applicatives are monads (some are).

Some monads are more easily understood as wrappers (`Maybe`, `List`, ...) and you can be very productive if you just stick to that level of understanding. But there's a deeper understanding that one can get.

[1] https://hackage.haskell.org/package/mtl-2.3.1/docs/src/Contr...
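
For concreteness, here's that function/Reader monad sketched in Python (the environment and example functions are mine): a "value" is a function from a shared environment r to a result.

```python
def unit(a):
    return lambda r: a  # ignore the environment, just return a

def bind(m, f):
    # Run m on the environment, then run the function f produces
    # on the very same environment -- that sharing is the whole trick.
    return lambda r: f(m(r))(r)

get_name = lambda env: env["name"]
greet = lambda name: (lambda env: f"{env['greeting']}, {name}!")

program = bind(get_name, greet)
message = program({"greeting": "Hello", "name": "HN"})
```

There is no wrapper object anywhere in sight; the "monadic value" is just a bare function.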


That's definitely a sufficient understanding to make progress with. It doesn't describe every monad but it describes the ones you will commonly meet when programming, and it provides enough understanding to scaffold further understanding of the more abstract concept.

A lot of the "it's so hard" BS is from people who think you have to understand a concept in its full generality in one shot. That's definitely not how effective learning works in most cases.


No, they are not wrappers. I know what they are though, so I won't be able to explain them to you.


Take the state monad: state -> (a, state). I think one would only count that as a wrapper by squinting really, really hard.
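
Sketched in Python with exactly that shape, state -> (a, state) (the counter example is mine):

```python
def unit(a):
    return lambda s: (a, s)

def bind(m, f):
    def step(s):
        a, s2 = m(s)      # run the first computation, threading the state
        return f(a)(s2)   # feed its result and the new state to the next
    return step

def get():
    return lambda s: (s, s)

def put(new_s):
    return lambda s: (None, new_s)

# "increment the counter and return its old value"
tick = bind(get(), lambda n: bind(put(n + 1), lambda _: unit(n)))

value, final_state = bind(tick, lambda _: tick)(0)  # two ticks from 0
```

Again, no wrapper: the monadic value is a plain state-threading function.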


The haskell learning-curve graph seems quite accurate. I'm over in the "happy pythonistas" camp.



