Comonads tend to well represent a notion of computing where you're isolating a particular section of your data in relation to its context. Compare this to monads which are often thought to layer computational context atop pure values.
> Comonads tend to well represent a notion of computing where you're isolating a particular section of your data in relation to its context.
This was the piece missing for me to understand comonads, worded exactly the way I needed. Thank you.
The author mentioned zippers, but for some reason I didn't quite get it (`a collection with one of the items being at the focus`; perhaps it sounded too specific to me at first).
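For me it finally clicked once written out as code. A minimal sketch of a list zipper with the two comonadic operations (all names here are illustrative, not from any particular library):

```haskell
import Data.List (unfoldr)

-- A list zipper: elements to the left (reversed), the focus, elements to the right.
data Zipper a = Zipper [a] a [a] deriving Show

-- extract reads the focused element.
extract :: Zipper a -> a
extract (Zipper _ x _) = x

-- left/right move the focus, if possible.
left, right :: Zipper a -> Maybe (Zipper a)
left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
left  _                    = Nothing
right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
right _                    = Nothing

-- duplicate puts, at every position, the whole zipper refocused at that position.
duplicate :: Zipper a -> Zipper (Zipper a)
duplicate z = Zipper (unfold left z) z (unfold right z)
  where unfold step = unfoldr (\s -> step s >>= \s' -> Just (s', s'))

-- extend applies a context-aware function at every position.
extend :: (Zipper a -> b) -> Zipper a -> Zipper b
extend f = fmapZ f . duplicate
  where fmapZ g (Zipper ls x rs) = Zipper (map g ls) (g x) (map g rs)

-- Example of "focus in relation to context": replace each element by the sum
-- of itself and its immediate neighbours.
sumNbrs :: Zipper Int -> Int
sumNbrs z = extract z + maybe 0 extract (left z) + maybe 0 extract (right z)

-- extract (extend sumNbrs (Zipper [2,1] 3 [4,5]))  -- the focus 3 becomes 2+3+4 = 9
```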
Duality is your best friend when trying to understand nice concepts like monads and comonads. You automatically get two chances to examine every topic.
I've looked into comonadic IO before and it seems to me that it isn't actually useful except in very narrow use cases that can barely be described as IO. An example of asynchronous IO as a comonad would be very useful.
> Since he’s forgotten more than I’ll ever know about programming, I tend to accept that statement.
This seems to be an unwise principle under which to operate.
> Every monad has an associated comonad which is its dual.
I would be very surprised if this was true. Monad and comonad are indeed dual concepts, but that doesn't mean that actual instances of monad have dual comonads and vice versa.
> What that means is, for every monad (which is triple of data type, result function and bind function that obey the monad laws), there exists a triple which has a data type, an extract function and an extend function that obeys the comonad laws.
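Written out, the two triples differ only in the direction of the arrows. A standalone sketch (class and method names here are illustrative, not the ones from base or the comonad package):

```haskell
-- The monad triple: a type constructor, a result function, and a bind function.
class MyMonad m where
  result :: a -> m a                  -- wrap a value
  bind   :: (a -> m b) -> m a -> m b  -- flipped (>>=)

-- The comonad triple: the same shapes with every arrow reversed.
class MyComonad w where
  extract :: w a -> a                 -- dual of result
  extend  :: (w a -> b) -> w a -> w b -- dual of bind

-- Identity is (trivially) an instance of both, which makes the symmetry visible.
newtype Identity a = Identity a

instance MyMonad Identity where
  result = Identity
  bind f (Identity x) = f x

instance MyComonad Identity where
  extract (Identity x) = x
  extend f w = Identity (f w)
```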
I was hoping for a few concrete examples of where comonads can be useful. I have found the need for monads fairly often in my code but I've never found any reason to use a comonad.
I think the more striking example is the first: "builders" with a default starting value form a comonad, which is the dual of the Writer monad. A Haskell implementation can be found in package 'comonad-transformers' as the 'Traced' comonad.
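A minimal standalone version of that idea, assuming only a Monoid for the accumulated configuration (the packaged one lives in Control.Comonad.Traced; the names here are simplified):

```haskell
-- The dual of the Writer monad: a "builder" waiting for configuration.
newtype Traced m a = Traced { runTraced :: m -> a }

-- extract runs the builder with the default (empty) configuration.
extractT :: Monoid m => Traced m a -> a
extractT (Traced f) = f mempty

-- extend lets each stage see whatever configuration has been accumulated so far.
extendT :: Monoid m => (Traced m a -> b) -> Traced m a -> Traced m b
extendT g (Traced f) = Traced (\m -> g (Traced (\m' -> f (m <> m'))))

-- A toy builder: a greeting assembled from appended option strings.
greeting :: Traced String String
greeting = Traced (\opts -> "Hello" ++ opts ++ "!")

-- A stage that appends further configuration before reading the result:
louder :: Traced String String -> String
louder w = runTraced w "!!"

-- extractT greeting                          -- "Hello!"
-- runTraced (extendT louder greeting) ", hi" -- "Hello, hi!!!"
```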
Infinite streams also form a comonad ('Data.Stream.Infinite' from the 'streams' package).
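A minimal version of the same idea (the packaged type carries more structure); every suffix of the stream is a valid "focus plus context":

```haskell
-- An infinite stream: there is no nil case, so every position has a full tail.
data Stream a = a :> Stream a
infixr 5 :>

extractS :: Stream a -> a
extractS (x :> _) = x

-- extend applies, at every position, a function that can look arbitrarily far ahead.
extendS :: (Stream a -> b) -> Stream a -> Stream b
extendS f s@(_ :> xs) = f s :> extendS f xs

takeS :: Int -> Stream a -> [a]
takeS 0 _         = []
takeS n (x :> xs) = x : takeS (n - 1) xs

nats :: Stream Int
nats = go 0 where go n = n :> go (n + 1)

-- Example: a sliding sum over the next three elements at every position.
window3 :: Stream Int -> Int
window3 (a :> b :> c :> _) = a + b + c

-- takeS 4 (extendS window3 nats)  -- [3,6,9,12]
```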
Inductive data types are defined by how they are constructed, while coinductive types are defined by how they are deconstructed (or observed). It's object instantiation vs field destructuring/pattern matching. Turns out that OOP and protocols give you uniform deconstruction of objects (like destructuring first/rest from a sequence). Co-monads are basically objects!
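A small sketch of that contrast, with an illustrative "object" defined purely by its observations (the Counter type and its field names are mine, just for illustration):

```haskell
-- Inductive: a list is defined by its constructors.
data List a = Nil | Cons a (List a)

-- Coinductive flavour: a counter "object" defined only by what you can observe
-- on it: the current value, and the next object you can step to.
data Counter = Counter
  { current :: Int      -- observation: read the value
  , incr    :: Counter  -- observation: the object after one step
  }

mkCounter :: Int -> Counter
mkCounter n = Counter { current = n, incr = mkCounter (n + 1) }

-- current (incr (incr (mkCounter 0)))  -- 2
```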
Here we go again. Monad is a simple ADT. A data structure with constructors and selectors. There is no magic in it.
You could write it in Scheme, nothing special.
There is a special form in scheme called delay which basically wraps an expression inside a lambda (all you need is lambda). It is a special form, because it has its own evaluation rule, which just passes its argument un-evaluated.
What it produces is called a promise, which is basically this:
(lambda () e)
Because procedures and thunks are first-class "objects" we could cons them together into pairs or any other data-structures.
So we could make a list of promises. This list could, then, be folded or mapped or filtered the usual way.
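In Haskell the same sketch is almost trivial, since a promise is just a nullary function. (Here `delay` can be an ordinary function only because Haskell is lazy; in Scheme it must be a special form. Names are illustrative.)

```haskell
-- "All you need is lambda": a promise is a function from unit.
-- This is the un-memoized version described above.
type Promise a = () -> a

delay :: a -> Promise a
delay x = \() -> x

-- To force a promise we just call it as a function.
force :: Promise a -> a
force p = p ()

-- A list of promises, mapped and folded the usual way.
promises :: [Promise Int]
promises = [delay 1, delay 2, delay 3]

total :: Int
total = sum (map force promises)   -- 6
```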
Because you want to hide implementation details of how you hook promises together, you must have constructors and selectors.
Again, there is nothing special in "wrapping expressions into ADTs" just because we can, or merely because it seems clever.
The situations in which we really need delayed and encapsulated computations are quite rare, so all this monadic stuff must be thought of as a special case, not as a general computation strategy.
We need no monads or comonads to manipulate elements of a vector. Sometimes we may need a vector of promises, or a vector of partially applied procedures (closures) or any combination of delayed and partially applied procedures, but that is, indeed, a special case and must be justified.
In Haskell, where they deliberately separate so-called pure functions from ones which have side-effects (performing I/O) by marking them with an additional type tag and grouping them in a distinct block of code, usage of such ADTs is justified and makes sense. It has nothing to do with JVM which makes no distinction between pure and impure code.
Monads are not a need, they are a generalization and a pattern to be recognized. In a similar, if hyperbolic, vein we need no multiplication as it is nothing more than repeated addition.
Sure, but one rarely goes out of their way to use a monad. They just crop up. Often you gain expressive power by noticing when all you need is that pattern. Further, recognizing it reduces your cognitive load.
Even in Haskell you can write all of your code without ever explicitly using a monad. There will be a lot of repetition and unneeded complexity but no loss of expressivity—just grace.
> Monad is a simple ADT. A data structure with constructors and selectors
Are you talking about the Free monad? Or the reification of the monadic operations as a dictionary?
If so, that's a weird way to describe monads, and not very useful. Monads aren't ADTs in general.
> You could write it in Scheme, nothing special
Sure, though you might need to manually thread around the monad dictionary.
> There is a special form in scheme called delay which basically wraps an expression inside a lambda (all you need is lambda).
It's supposed to be a memoized lambda, not just a lambda (i.e., force it twice and it will only be computed once).
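A sketch of that memoization point, using an IORef as the cache (type and function names here are illustrative):

```haskell
import Data.IORef

-- A memoized promise in the spirit of Scheme's delay/force: the body runs at
-- most once, and later forces return the cached value.
newtype MemoPromise a = MemoPromise (IO a)

delayM :: IO a -> IO (MemoPromise a)
delayM act = do
  cache <- newIORef Nothing
  pure . MemoPromise $ do
    cached <- readIORef cache
    case cached of
      Just v  -> pure v           -- already computed: reuse it
      Nothing -> do
        v <- act                  -- first force: run the body
        writeIORef cache (Just v)
        pure v

forceM :: MemoPromise a -> IO a
forceM (MemoPromise io) = io

-- Forcing twice runs the (counted) computation only once:
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  p <- delayM (modifyIORef counter (+ 1) >> pure "result")
  _ <- forceM p
  _ <- forceM p
  readIORef counter >>= print   -- 1
```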
> Because you want to hide implementation details of how you hook promises together, you must have constructors and selectors.
> To force a promise we just call it as a function
> This is what monads really are.
Hooking promises/thunks together is only one instance of a Monad.
Otherwise, Monads have very little to do with thunks or forcing of thunks.
This is not what Monads really are.
> The situations in which we really need delayed and encapsulated computations are quite rare, so all this monadic stuff must be thought of as a special case, not as a general computation strategy.
This may be an argument against lazy evaluation, but it isn't an argument against Monads. Monads are a pretty general computation strategy.
> We need no monads or comonads to manipulate elements of a vector
If we want to view the vector as a set of "possible results" and use a computation strategy that goes through/generates all possible results of all computations, then the list monad (in vector form) would be useful.
If we want to work with vectors that are all of the same length, zipping them together, then "ZipList"-style pointwise application would be useful (in Haskell's base library, ZipList is an Applicative, not a Monad).
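A sketch of the two strategies side by side (ZipList comes from Control.Applicative):

```haskell
import Control.Applicative (ZipList (..))

-- List monad: each list is a set of possible results, and the computation
-- explores every combination.
allSums :: [Int]
allSums = do
  x <- [1, 2]
  y <- [10, 20]
  pure (x + y)
-- [11,21,12,22]

-- ZipList: pointwise, same-length pairing instead of all combinations.
pairSums :: [Int]
pairSums = getZipList ((+) <$> ZipList [1, 2] <*> ZipList [10, 20])
-- [11,22]
```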
We don't need monads because a generalization is never needed. Generalizations make code applicable in more situations, and may help separating concerns.
> In Haskell, where they deliberately separate so-called pure functions from ones which have side-effects (performing I/O) by marking them with an additional type tag and grouping them in a distinct block of code,
You don't really have to "group them in a distinct block of code". "do" notation is just optional sugar.
Also, IO is just one instance of a Monad, and the usefulness of Monads goes far beyond IO. Having Monads even if you don't have any purity distinction in your language is still very useful.
[] is a list, yes (not a pair). I explicitly mentioned it as an example of a monad which is an algebraic data type. Then I also mentioned (->) which isn't.
Why does the fact that pure code can be executed in all those ways relate to the object files? Same object file can have code of both kinds, and different pieces of code within it can run in different ways/threads/etc.
A List is a generalization of a Pair - it could be empty, while a Pair cannot (logically). Apart from that, a non-empty List could be viewed as a self-referential (recursive) Pair or a chain of Pairs. So, technically, List is the same ADT as Pair (cons/car/cdr) plus explicit null? and list? functions.
Yes, they could, but it is more reasonable to split the code, so you could dlload or statically link the pure code independently from unsafe code.
A list is not a "generalization" of a pair. If anything that used something else was a "generalization" of that other thing, things would get confusing.
Why is it more reasonable to split the code? You get the safety from the type-system, so you don't need extra splitting in obj files. Such splitting would complicate matters, and have no extra gain.
Haskell's IO system is built around a somewhat daunting mathematical foundation: the monad. ... Rather, monads are a conceptual structure into which I/O happens to fit. ...
... For now, we will avoid the term monad and concentrate on the use of the I/O system. It's best to think of the I/O monad as simply an abstract data type.
To those who still want to fight - ADT is a concept from the implementation realm, and the only way to implement a monad is to define an abstract data type.
The IO monad (a particular instance) is an abstract data type (not an Algebraic data type, by the way).
Other monads (e.g: the State monad from Control.Monad.Trans.State) are not abstract data types (or algebraic for that matter).
Specifically, for something to be an "abstract data type", it must not expose its internal representation, but only relevant operations. IO is indeed such an abstract data type which happens to be a Monad instance. State and [] are monads that do expose their internal representation: ergo, they aren't abstract data types.
You seem to be confusing algebraic and abstract data types (different things!) and taking a statement about a specific monad instance (IO) as if it was about the general concept of Monads (it's not).