Oh, haven't seen a monad article in quite some time.
> Monads really are just a convenient way to build up action values
Except when they aren't, because lists/arrays (with map) and Maybe (with map) and so on are something totally different. And that's the problem with _all_ these monad tutorials/how-tos: monads (and functors) are not burritos or elephants but _both_, so they are hard to understand by looking at only a single instance.
Actually the most important part of monads in Haskell is the do-notation. Without that, monadic code would look like JS' callback hell (although composition of monads using monad transformers/monad stacks isn't much better).
I still think that it would look a lot less bad than JS callback hell, mostly because of syntax.
actA >>= \a ->
actB a >>= \b ->
pure (doSomethingWith b)
is a lot better (to my eyes) than:
actA(a =>
  actB(a, b =>
    doSomethingWith(b)
  )
)
and it gets worse from there (you need blocks to have statements, to declare variables, or to call multiple functions; pre-ES6 those would've also been functions rather than lambdas). Admittedly monad transformers aren't that great, but that's mostly because of the n×m amount of instances (to avoid explicit lifting).
Disclaimer: personal opinion
Having used both extensively over the years... I'll have to disagree. Don't get me wrong, purescript is awesome in its own way, but it falls short of haskell in a few major ways.
- Lacks ergonomics (for example it has better extensible records, but they are clunky to manipulate due to the lack of features at the type level)
- as of now it's tied mostly to the JS ecosystem and runtime(s): that means no TCO, which _really_ hurts when using monads that rely on a lot of recursion. And in fact purescript sometimes has to take a performance penalty and use specific trampolined stack-safe monads to avoid stack overflows
- Haskell layout rules may be complex, but purescript's break in weird and unintuitive ways (by layout rules I mean the indentation-sensitive syntax)
- Haskell is catching up on _a lot_ of the features that made/make purescript great (mostly talking about syntactic things right now, like RecordDotSyntax, QualifiedDo, BlockArguments...)
- as it turns out, when programming "Haskell-style", laziness is really damn effective. So many times I've had to forcefully (and painfully) introduce laziness in purescript to make things behave well...
As much as I like purescript, I think if you want that kind of strict programming, maybe OCaml or a derivative thereof (or maybe even ReasonML/ReScript?) would work better? I feel like whenever I use purescript it looks so similar to Haskell that I try doing things the Haskell way and it just doesn't work, and the "purescript" way will often look weird to a trained haskeller.
Yes, it does do some things better than Haskell. And row types and row polymorphism are a really nice way of handling records and some type level programming.
Monads are algebraic objects that conform to certain axioms. The same way burritos and elephants can end up being groups, rings, Hilbert spaces or whatever under the right conditions and the right operations, many things can be monads too, under the right operations. The issue with these monad tutorials is in their very premise: they want to explain what a monad is without giving its mathematical definition (and its mathematical context), but this inevitably causes a significant amount of nuance to be lost.
A "mathematical object" in this case is a description of how the object is expected to behave.
But even so, I think something is lost in reducing monads to a clump of related behaviors and properties. You still need intuition about what these behaviors actually mean and what kinds of real-world objects actually conform to these properties.
So you are right that monad "is not" a nondeterministic computation or a container or whatever, but a monad "is" a common set of behaviors/properties that we should expect nondeterministic computations and containers to conform to.
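Concretely, in Haskell that common set of behaviors is captured by the Monad type class plus three laws. A slightly simplified sketch of what base defines (the Prelude import hiding is only there so the sketch compiles on its own):
```
import Prelude hiding (Monad, return, (>>=))

-- the shared interface, slightly simplified from base's Control.Monad
class Applicative m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b

-- every instance is also expected to satisfy the monad laws:
--   return a >>= f    ==  f a                        (left identity)
--   m >>= return      ==  m                          (right identity)
--   (m >>= f) >>= g   ==  m >>= (\x -> f x >>= g)    (associativity)
```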
So this is basically Eternalism[1]? Instead of writing a program that reads inputs, experiences a sequence of state mutations and takes actions, you generate the graph of every possible state and path between states annotated with inputs on the edges and outputs on the vertices, and let the runtime pick the actual path through it. And this works because you're generating it lazily, but that's just an implementation detail. And the program in itself isn't ever in any particular state, because the program is the description of every possible state.
I didn't know there was a name for this concept in general. This is exactly how I think about writing code in functional programming languages. And as far as I can tell, algebraic effects are an extreme case of this, where your program is essentially a big graph of decision points in response to as-yet-unseen inputs.
Functional programs don't _do_ anything; they are a list of commands that some external interpreter (what you call the runtime, though it doesn't have to be a runtime) will execute or compile.
Functional programs only contain descriptions of the various commands, but it is an external interpreter executing the commands.
Functional programs return "recipes".
"recipes" are picked up by some executor that will turn them in dishes (the side effect).
If all of this seems too generic, allow me some TypeScript.
```
// we can model a value of type IO<A> as a function that takes no arguments and returns an A
type IO<A> = () => A

declare const program: () => void;   // this can be rewritten as...
declare const program2: IO<void>;    // ...this: the PURE value I can reason about (renamed only so both lines compile)
declare const execute: (p: IO<void>) => void; // the interpreter that will execute the commands and have side effects
```
Now, let's declare a side-effectful function:
```
declare const log: (s: string) => IO<void>;
// desugared: (s: string) => () => void
```
Notice how the signature is similar to the standard console.log:
(s: string) => void
except that it's lazy. Laziness is one of the most convenient ways to express "commands" rather than side effects in a language like JavaScript, but there are also alternatives.
Now we can have a pure program that logs some string to the console.
`log("foo")` does not "execute" any console.log, it still needs to be executed:
```
const program: IO<void> = log("foo");
// program() will actually print "foo" in stdout
```
Note: we could've encoded IO in different ways, e.g. with a struct/interface rather than a function and had the interpreter actually reason about the side effect on its own.
Now, the only missing part is: how do I compose such IO functions together? Well, there are various ways, but the most common ones are applicative functors and monads. I am not going to delve deeper into those topics in this comment because it would take too long, but I hope I have gotten my point across:
in functional programs you return programs (I like to think about them as recipes, recipes don't DO anything), those programs may compose effectful commands, the actual execution of the commands is shoved inside an external interpreter.
This is quite obvious in some languages like Haskell, it's less obvious in others.
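For comparison, a minimal sketch of the same "recipes compose into a bigger recipe" idea in Haskell, where do-notation is the glue (the action here is made up for illustration):
```
-- two IO "recipes" glued into one bigger recipe; nothing runs at definition time
greet :: IO ()
greet = do
  putStrLn "What's your name?"   -- a recipe for printing, nothing printed yet
  name <- getLine                -- a recipe for reading a line
  putStrLn ("Hello, " ++ name)

main :: IO ()
main = greet   -- the runtime executes whatever recipe is bound to main
```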
“Have you tried turning off and on again?” is advice to perform a total state reset, which works because the system’s state is inconsistent but its sources of truth are not.
You never have to reset state if you do not have it in the first place. You get other bugs still, but you don't have the most common class of bugs.
But, you need some state. You just want to discourage it. Some of it can come from laziness, but with difficulty. (Chris Okasaki’s Purely Functional Data Structures uses it to de-amortize the bound on a FIFO queue.) Other things, like ring buffers, are harder to argue.
So you want to be able to express an array of IORefs, say, for your ring buffer. But the people who use it become aware that it is a stateful construct and must be used that way.
Read “what color is your function?” for a counterpoint, of course.
You do it if you're using a language where there's a rule that there are only functions. For example there's no concept of doing one thing followed by another. The only way you can arrange that is to have two functions and compose them: g(f(x)). So now you know that the code in g is executed after the code in f. Monadic composition is basically about doing that while having code that looks like it would in a regular language that has statements and sequential execution. There are a few other reasons, such as you can create your own control flow as-code, lifting, etc. But the primary driver is: because you can't write useful programs in a pure functional language without this trick.
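Concretely, do-notation is what makes that nested composition read like sequential statements; the two forms below are equivalent (a small sketch with made-up actions):
```
-- made-up actions, just so the sketch compiles
actA :: IO Int
actA = pure 1

actB :: Int -> IO Int
actB x = pure (x + 1)

-- the statement-looking version
program :: IO ()
program = do
  a <- actA
  b <- actB a
  print (a + b)

-- roughly what it desugars to: nested composition with >>=
program' :: IO ()
program' =
  actA >>= \a ->
  actB a >>= \b ->
  print (a + b)
```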
Referential transparency, as the original post says. For example, in a functional language you're always allowed to transform
a = f()
b = f()
c = g(a, b)
into
a = f()
c = g(a, a)
because f() is guaranteed not to have side effects. (If it does return side effects through the IO monad, instead of actually performing the side effects, g receives two data structures that encode what IO operations to perform.) In an imperative/procedural language, f() may have side effects, and if it does the two programs are not equivalent.
This may (not) make programs easier to reason about.
If your brain can encompass the thought, yes, it is arguably the best and most correct way to think of a Haskell program. This is why Haskell programmers can claim with a straight face that "main" is actually a pure value of type IO and the language has no impurity, and there is an important sense in which this is completely true and not the cop out some people think it is in the slightest.
The laziness allows the pure language constructs like "if" to be lifted up into the IO value. That is, if you embed an if statement into one of the (pure) functions used in an IO value somewhere, only the branch that the "if" would take is ever evaluated. However, technically speaking, you can also look at the branches happening for all inputs; if you input a true or a false from the user, you can look at that as a true branch, a false branch, and some additional error branches, even if the code doesn't look like it. A strict language would not permit this. (Though even in a strict language, real-world useful IO values end up with enough function calls in them that a strict language wouldn't manifest the full tree either.)
If you input a full Int32, you can look at that as defining 2^32 possible branches in the IO value, and the execution will take one of them. In reality, the program does not do 2^32 different things, and a better view of what the program is doing is the more conventional one for almost every purpose. (Although that view will still pass through a lot of possible states!) But in terms of understanding how the IO value is "pure", this view is momentarily helpful.
The only operation in a Haskell program that violates this is unsafePerformIO, which really does penetrate this abstraction and work like a "normal" program. Otherwise, no matter how much Haskell code you write, technically all you're doing is making a bigger and more complicated pure IO value, which defines how to have an effect on the world but does not itself have an effect on the world. Executing the program is what finally puts the two together.
To put it another way, Haskell has a completely clear separation between being a program and executing a program. If I say to someone, "pick up that glass, fill it with water, and water this plant", that statement is itself a "pure value". The execution of that statement is where things get impure. Some languages do not have this clear separation, most notably Perl but the dynamic languages in general don't. Many others do, as having a compilation step all but forces this sort of separation, they just don't think of it that way and there can be leaks here and there, and features that may blur the line deliberately. Haskell does have a very clear separation, and in that separation, with laziness, it is almost like IO is just one big macro language for putting together programs, in some sense beyond what even Lisp would dream of.
And in another sense it's just a funny way to write conventional programs with really weird pretensions, and there's a certain value to that point of view too, which is that when you're done blissing out on the hippie math juice, when you actually sit down and write code in Haskell you're doing much the same thing you do in any other language and treat IO just like any other source code. But, as with the story of enlightenment... "first it was a mountain, then it was not a mountain, then it was a mountain again"... where you end up on this journey is not quite the same as where you started.
I don't speak Rust well, but I can share my perspective from Clojure, where I tend to think in data.
A program is a sequence of pure functions taking input from the world and reducing that to instructions to mutate the world (aka side effects). The world here can also be some place that stores state.
Ideally mutation is the last step of the pipeline, which takes the instructions and produces the side effects.
That model fits functional reasoning. It doesn’t fit all programs, though, as it doesn’t allow the input to depend on the _execution_ of the mutation instructions.
Example: input “I step forward”, output “monster appears”, next input “I try to shoot it”.
That second input wouldn’t be there if the output weren’t executed.
You missed the point where a program can take inputs over time. In your example, the program takes “step forward” and goes through the flows, eventually mutating state at the end, including redrawing the screen or prompting the user for their next input. The next input is based on the current state of the machine, and there can be as many or few valid state transitions as you, the programmer, would like to manage.
Your example isn't entirely clear to me - do you mean that “I step forward” and “I try to shoot it” are instructions in sequence? What exactly do you mean with allowing the input to depend on the execution of the mutation instructions?
In any case, what I'm going for is that in both imperative and functional programming languages I would have a function/method like "handleInput" that takes an input and decides what to do with it. The difference would be that in a classical OOP setting handleInput would be a method of your GameState class [1] while in FP handleInput would look something like "Input -> (GameState -> GameState)", i.e. a function that takes an input value and returns a function that transforms the game state in some way (alternatively and equivalently, thanks to associativity/currying: a function that takes an input value and a current game state and returns a new game state).
[1] I know, this obviously is a very contrived example, it'd only work for very simple games. Game programming patterns are interesting but not the focus here.
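A minimal Haskell sketch of the `Input -> (GameState -> GameState)` shape described above (all types here are made up for illustration):
```
data Input = StepForward | Shoot

data GameState = GameState
  { monsterVisible :: Bool
  , monsterAlive   :: Bool
  }

-- takes an input and returns a pure transformation of the game state
handleInput :: Input -> GameState -> GameState
handleInput StepForward st = st { monsterVisible = True }
handleInput Shoot st
  | monsterVisible st = st { monsterAlive = False }
  | otherwise         = st
```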
Outputting and manipulating side-effects can be useful even in imperative code.
I had real performance problems a while back attempting to build a very large file: my code could only produce the write instructions out of order, and the file was too big to hold in memory. So what I ended up doing was writing the write instructions in radix-grouped batches to a bunch of temporary files, and then reading and evaluating them to build the large file.
This seems counter-intuitive, as it more than doubles the amount of data written to disk and adds a reading step, but doing it this way means the data is written in a way the hardware can deal with a lot more efficiently: sequential access to and from the instruction files (off a mechanical drive), and densely clustered writes to the big output file. (On an SSD, strictly sequential writes matter less than being in the same block.)
This reduced the runtime from several hours to like 5 minutes.
I think it's always fascinating to find situations where counterintuitively it is faster to do more work.
For example, it took me a while to realize that most of the time it's actually faster to read/write compressed data overall. You'd think that reading from a disk and decompressing the data would be slower than just reading uncompressed data from a disk directly, but due to the vast difference between disk IO performance and CPU decompression performance it's almost always faster to perform disk IO compressed. I'm writing "almost always" since I'm not sure how the tradeoff looks for current high-performance PCIe SSDs (or other storage devices with very fast IO).
Well, that's the first time side effects in Haskell made sense to me. Well done.
Not that I've ever seriously tried to learn Haskell, but in the past every time I've lazily come across an article about it it's always seemed like a bizarre confusing world, even though I know how functional programming (in the sense of purity) works.
Now there's just one thing missing. We all know what this style of programming is. It's asynchronous programming with callbacks. Seriously, Haskell folks, if you started with "all side effecting functions are kind of like async operations with a completion callback (the stuff people do in JavaScript all day), and then we have some syntactic sugar to make it suck less" you'd have a much easier time getting people to wrap their head around all this.
(Yes, I know the details aren't exactly the same, but drawing parallels to stuff people already know matters)
Seriously, this idea of there being a central effect dispatcher (the bit that runs `main` behind the scenes) is so eerily like the coroutine scheduler in async coroutine paradigms that I can't believe more people haven't drawn parallels between these programming styles.
I like the analogy with musical instruments. Very practical.
I think you're right in saying that async is not representative of all monads, but it does help with questions that many beginners have, like "How do I get the value out of the maybe/IO/your-monad-here?"
I don't think it generalises that well, unfortunately. How to get the value out of Maybe is "just pattern match it". There's nothing process-like about it at all -- it's just a plain container/wrapper.
The async analogy works for some monads like ST and IO and whathaveyou but it's not generally useful, due to the wide variety of types that are monads.
I think it's still a useful monad to look at. If you just pattern match Maybe, you're not really treating it like a monad or a functor, just as a tagged union that happens to implement a monad interface.
In contrast, there are easily understood reasons to use the monad interface of a promise/async. That's why I think it's a good monad for understanding the structure of monadic binding.
The way I see it is kinda like this, and I hope it makes some sense to others:
You can get the value out of a list... but there might be many or none. You can get the value out of a maybe... if it's there. You can even get the value out of the async function... but you'll just have to hold up what you're doing to wait for it.
This kind of (roughly) non-determinism is how I understand the concept of "effect", which is what Haskell frequently uses monads to model. A function returning a `M a` becomes "deterministic" (it always returns an `M a`) and benefits from the sort of equational reasoning that we like.
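A small sketch of how "getting the value out" is a different question for each monad (function names are made up):
```
headOr :: Int -> [Int] -> Int
headOr def []    = def    -- there might be none...
headOr _   (x:_) = x      -- ...or many (we just take the first)

fromMaybeInt :: Int -> Maybe Int -> Int
fromMaybeInt def Nothing  = def   -- it might not be there
fromMaybeInt _   (Just x) = x

-- for IO there is no pure way out at all: you can only bind the result
printDoubled :: IO Int -> IO ()
printDoubled action = action >>= \x -> print (x * 2)
```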
To be clear, the use of monads to represent effects is a software engineering decision and not inherent to monads.
It's definitely helped my understanding and I suspect it would help others too.
I disagree. I feel like this perspective makes sense to you because you feel more comfortable with js, not because monadic io is best explained in terms of callbacks.
While you may oversimplify a little, I think the core idea of your comment is pretty useful for learning. Unfortunately the first monads one encounters when learning Haskell are typically Maybe and IO. There are good practical reasons for this, but an early analogy to Async would probably make the idea a lot clearer.
When learning Monads in F# I felt like I'd seen this before. Promises in JavaScript were the first time I'd seen them. I just didn't know what they were at the time.
This perspective is interesting to me because I feel like every single comment section on HN about async syntactic sugar has a Haskell programmer complaining that the sugar is only available for 1 single type (the async type) and not ALL THE MONADS.
I just did some PHP the other day and I stumbled on a structure I was happy to have forgotten:
something(x, result);
somethingElse(result, result2);
anotherThing(result2, result3);
with the occasional surprise mutation when you just have to do
something(x)
and x contains the result.
I also got caught by that one recently with MomentJS, when doing mydate.add and discovering it also mutates the original instead of just returning the result.
I'm a very average programmer and far from a FP purist (I mostly use JS and Rails) and I'm surprised how much I now use some FP principles and how it feels very natural to me.
You can get something similar via method chaining: x.something().somethingElse().anotherThing()
C# extension methods aren't the perfect solution for this, but I love that they exist to at least make chaining possible without needing to modify the class that I'm applying the function to. D has an even better version of this called UFCS which makes it so `Bar something(Foo f)` can either be called as `something(x)` or `x.something()` without needing any special annotations like C# requires.
In F# (the most widely used ML?) expressions can be much more complicated than that, incorporating loops, conditionals etc.
To illustrate:
let x =
    let mutable maxScore = 0
    for item in items do
        if item.Score > maxScore then
            maxScore <- item.Score
    maxScore
Compared to:
const x =
  (() => {
    let maxScore = 0;
    for (const item of items) {
      if (item.score > maxScore) {
        maxScore = item.score;
      }
    }
    return maxScore;
  })();
Not necessarily the best way to write this (you would probably use a sequence library) but hopefully conveys the idea.
Extension methods are definitely useful in OOP languages. However, a much cleaner language design is to remove the need for a "class" altogether and just have free-functions, function composition and a pipe operator.
None of this is something you can't simulate in OOP / procedural languages, it's just much more clunky.
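For instance, Haskell gets the same pipeline feel from the `&` operator in Data.Function, with no class needed on the data (the functions here are just placeholders):
```
import Data.Function ((&))

-- x.something().somethingElse().anotherThing(), but with free functions
result :: Int
result =
  3
    & (+ 1)        -- something
    & (* 2)        -- somethingElse
    & subtract 5   -- anotherThing
```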
That's worse because now you have to find the innermost function and then walk back outwards. It gets especially ugly when the calls require other parameters
let x = anotherThing(
  somethingElse(
    something(
      x, y, z
    ), 1, 2, 3
  ), "a", "b", "c"
);
Or maybe this looks cleaner?
let x = anotherThing(
  somethingElse(
    something(x, y, z),
    1, 2, 3
  ),
  "a", "b", "c"
);
I think laziness would be a distraction here. The idea of encoding effects as "action values" is relevant and interesting, even under eager evaluation.
Certainly laziness has influenced a lot of things in the world of FP. But it's really helpful to distinguish laziness from FP, especially considering the problems that laziness introduces.
> I don't know why people try to explain monads without explaining this first
I mean, I get where you're coming from, but the thing that you've described isn't quite a monad yet, and there's not an easy way to get it there.
But yes, I am with you: you want to explain to someone that Haskell's model of side effects is a sort of metaprogramming, much simpler than macros. We just give you a data type for "a program which does some stuff and eventually can produce a ______" and ask you to define a value called `main`, which is a program which produces nothing special. And it's the compiler's job to take that program and give it to you as a binary executable that you can run whenever you like.
You also want to give people a number of other examples of things that are monads: "a nullable ____", "a list of ____s," "an int (summable) with a ____," and maybe an example that is not, like "a function from ____s to ints," or "a set of ___s."
The key to telling someone what a monad is involves trying to explain to them that in some sense "a program which produces a program which produces an int" is not terribly more descriptive than just "a program which produces an int." If you can combine this with the adjective being outputtish and universal, you have a monad.
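That collapse, "a program which produces a program ..." being no more informative than "a program which produces ...", is exactly Haskell's `join`; a couple of concrete instances as a sketch:
```
import Control.Monad (join)

-- a list of lists of Ints is no more informative than a list of Ints
flatList :: [Int]
flatList = join [[1, 2], [3]]          -- [1,2,3]

-- a nullable nullable Int collapses the same way
flatMaybe :: Maybe Int
flatMaybe = join (Just (Just 5))       -- Just 5

-- an action producing an action collapses into one action
flatIO :: IO ()
flatIO = join (pure (putStrLn "hi"))
```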
As someone who has never really understood monads (or tried that hard to) but who has done a good amount of Javascript programming, I really like the description of a monad as a series of nested data structures describing the next steps of a program.
Ultimately, that's very similar to what old-school ways of declaring Promises do in Javascript. You're creating a data structure that you then attach a new function to execute with the results.
Not just similar. Promises are a monad. Though they don't strictly follow the monad laws, since promises collapse in JavaScript: Promise.resolve never allows a promise to resolve to a promise, only to the value of that promise.
I mean, as long as you have a flatMap/concatMap function JavaScript arrays are technically a monad too. But that is a practically meaningless thing to say.
What we talk about when we talk about monads is the highly generic interface coupled with the highly generic combinators. This is not something shared by arrays and promises. In other words, none of them are monads in any practically meaningful sense.
> I mean, as long as you have a flatMap/concatMap function JavaScript arrays are technically a monad too. But that is a practically meaningless thing to say.
It's not "practically meaningless"; it's depth-first-search logic programming (as per How to Replace Failure by a List of Successes https://rkrishnan.org/files/wadler-1985.pdf )
I'm not saying the operations themselves are meaningless. I'm saying that "being a monad" only means something beyond "supporting flatMap" when there's a library of generic monad combinators involved.
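E.g. the payoff is combinators written once against the Monad interface and reused across instances; a small sketch:
```
import Control.Monad (replicateM)

-- replicateM is written once, against the Monad interface...

-- ...in the list monad it enumerates combinations
pairsOfBits :: [[Int]]
pairsOfBits = replicateM 2 [0, 1]        -- [[0,0],[0,1],[1,0],[1,1]]

-- ...sequence collects Maybes, failing if any is missing
allPresent :: Maybe [Int]
allPresent = sequence [Just 1, Just 2]   -- Just [1,2]

-- ...and in IO it repeats an action, collecting the results
threeLines :: IO [String]
threeLines = replicateM 3 getLine
```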
Haskell is a framework that takes a monad interface implementation and performs IO based on it. The monad in question is used like a self flattening iterator / generator, and together with lazy evaluation it can function like a regular program even though the monad itself is implemented using pure functions.
One benefit of describing state change as a pure data structure, instead of directly executing it, is that you may now write functions that run these state changes in different ways.
E.g. you may have a dry-run function that doesn't actually change the state but only describes what it would do. Or you could have a function that generates a verbose log of all steps executed. Etc.
That doesn't necessarily work for IO, as that gets special treatment by the runtime, but you can do it with your own types.
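A minimal sketch of that idea with a home-grown command type and two interpreters (everything here is made up for illustration):
```
data Command = Deposit Int | Withdraw Int

-- one interpreter actually applies the state changes...
run :: Int -> [Command] -> Int
run = foldl step
  where
    step balance (Deposit n)  = balance + n
    step balance (Withdraw n) = balance - n

-- ...another just describes what would happen, without changing anything
dryRun :: [Command] -> [String]
dryRun = map describe
  where
    describe (Deposit n)  = "would deposit "  ++ show n
    describe (Withdraw n) = "would withdraw " ++ show n
```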
Yeah! That's why I'm excited for the effect system to land in OCaml. In general I think effect systems are more user friendly than monads and it makes the choice of how to handle the effects more explicit.
Something that I always found funny is that main clearly has to take something as input (otherwise it would not be able to produce different output between runs). So in GHC the main action (main :: IO ()) internally takes a value of type RealWorld (theRealWorld) and returns the new RealWorld produced by running the program. I imagine by the time it's lowered to actual assembly they've dropped that, but I've never dealt with GHC's machine codegen.
I am not at all an expert on GHC's codegen, but having glanced at that code before (mostly to get an intuition of what IO is under all that sugar), RealWorld doesn't even... exist. RealWorld and its close friend State# are tokens of a sort: they're zero-sized and deeply magical (the GHC.Prim module is quite enlightening to read). All the primops are defined as infinite loops... it's quite a sight to behold. As something totally unrelated to the above, a lot of the more internal GHC stuff (the GHC.* modules, the stdlib/base, etc.) is in my opinion readable and, most of all, well commented, so it's quite fun to take a look sometimes.
Oh of course - I only ever worked with the GHC Core language which is completely and explicitly typed so it was especially blatant :D
I do understand that in reality it's simply a tag value to ensure that the compiler orders and threads operations correctly, but I still think it's semantically cute :D
There is a reason for this, and it's not that Haskell is jumping through hoops unnecessarily. All those languages are strictly evaluated, which means that side effects are clearly localized in time and space, we know when and where they happen if we run the program. Haskell is lazy, and laziness doesn't go well together with ad-hoc side effects. Haskell evaluates expressions when the result is needed, and an expression that only side effects and doesn't return a value is never needed.
You could finagle your way around that, but the result would be something like unsafePerformIO in current Haskell, where it's hard work to ensure that it's used correctly.
This sort of thing is where I detach from the idea that functional programming is the ultimate good as some advocates would push it: it's far too obvious that outside of narrow constraints, real hardware just isn't functional.
Nowhere was this more apparent in my career than when I was on a team of Scala-using FP programmers who were building part of a real hardware provisioning system (and the org structure tipped in their favor): endless arguments about the sheer amount of things which can and definitely would go bad across hundreds of servers when you tried to do things, versus their desire to dismiss these problems as infrequent so they wouldn't have to code the handling in a functional way (FP is terrible at logging, so even getting that done adequately was an argument).
Basically the paradigm does tend towards collapsing once the problem grows too complicated for someone to keep in their head because it actively fights the sort of procedural reasoning humans do pretty much natively in favor of dealing with abstract math which very few people are any good at.
Heh, I'm currently experiencing the exact opposite in our Scala codebases. Unhandled exceptions everywhere, null pointers left-and-right, `if(myOptionalValue.isDefined) myOptionalValue.get` all over the place, etc.
I gave an internal talk about FP approaches like .map, .flatMap, etc. (as well as `Try[T]`). Although I didn't call it "FP", I called it "type-safe error handling" (i.e. `null` and `throw e` lie about their types, and are hence not type-safe)
They probably mean Haskell. Note: I don't know Haskell, so below is speculation.
To log you need IO. To get IO you need to provide it to the function that will do the logging. And now your function has to be marked as doing IO. So now you need to thread all that IO through all your functions, and "turn your code into monadic code" (I think that's the term).
Other languages (like Erlang) don't care, and you can log whenever you want.
---
Flamewar off-topic: it looks like hardly anyone is doing any useful logging in Haskell, because if you search for "logging in Haskell" you end up in:
- highly academic discussions on "logging actions" vs "logging of computations"
- extremely convoluted solutions that turn even the simplest examples into a mess
- a couple of libraries whose entire documentation is usually "believe me we're the shit", and if they do have examples, they are an impenetrable mess of custom types and ascii art
For every other language it literally is `logger.info(something)`
And here we see the damage caused by modern OOP. People who complain about that want to replicate in Haskell the log4j philosophy of adding logging to every interface, because with data and IO chunked everywhere inside object interfaces, you never know if you can ever repeat an execution in a development environment to verify it.
The thing is, if for some reason you really think you need to log inside a pure function, you either need an intermediate variable or your perceived needs are severely misguided.
> And here we see the damage caused by modern OOP. People who complain about that want to replicate in Haskell the log4j philosophy of adding logging to every interface
And here we see a person slinging unsubstantiated accusations
> if for some reason you really think you need to log inside a pure function, you either need an intermediate variable or your perceived needs are severely misguided.
Clear demonstration of "it looks like hardly anyone is doing any useful logging in Haskell".
Because, as we know, the fact that "you can repeat an execution" immediately makes your need to log anything in that execution as "misguided".
All the FP languages include a subset that is referentially transparent.
I do not consider the attempts at making the entire language referentially transparent, or the exclusive use of lazy evaluation as in Haskell, to be useful.
Obviously there are people who like these features and who use Haskell, but in any case whenever FP languages are mentioned they should not be reduced to those that have made the controversial Haskell choices.
I do believe that monads are what made it possible to be referentially transparent not just for a subset but for the entire language. It's a good choice: it enforces equational reasoning, even for things like IO and "side effects". It is harder to write code like this if you're not used to it, but I imagine that if you can, the code will be of higher quality.
As someone who is just getting into FP, I'd say Haskell was the most famous functional language I knew of before a month ago. Lisp might count, but it isn't widely known to be a functional language; most people just think it's a weird language.
I'm a professional OCaml developer, so I definitely know and agree with you :p I still think Haskell is "more FP" than OCaml because it allows equational reasoning in more cases. I don't think it's a problem to describe the platonic ideal of functional programming, since even in OCaml the pattern I mentioned is useful (for example a very common OCaml pattern is using the let-monadic syntax to simulate do-notation when using LWT)
> ipfs resolve -r /ipns/nauseam.eth/coding/random/how-side-effects-work-in-fp/: no link named "coding" under QmdzzonFE9eX6FGs8UbCyoC2XS5NQjaG6gaqhgeUyTHnag
> You're viewing my site on the centralized web. Check me out on the dweb ! (Warning: it's slow.)
It's really hard to take someone seriously when they make big, untrue statements saying the real web is "centralized" right on every page of their site, to peddle crypto scams.
> Neo-Nazis are holding a demonstration in a small town, waving swastikas around and shouting about Hitler. They seem to be pretty peaceful so far, so the First Amendment says you probably can’t get rid of them. However, their demonstration is near a main street and it could be a minor inconvenience to the traffic trying to go through.
>
> [ ] Allow the neo-Nazis to demonstrate.
>
> [ ] Break up the demonstration on the grounds of ‘blocking traffic’.
I am at a loss for words. Nazi sympathy under the guise of tolerance.
> big, untrue, statements saying the real web is "centralized"
Well, it's not completely centralized, but it's more centralized than using IPFS for the backend and ENS for the namespace. I think that's hard to debate right? If I take down my server right now chadnauseam.com will go down for everyone. But if anyone has my IPFS page pinned, it will stay up for everyone no matter what I do (barring exploits in IPFS I don't know about). So in that sense it really is more decentralized.
> Nazi sympathy under the guise of tolerance.
I don't understand why you see a quiz that gives you the option to pick either way as promoting one option over the other
Because it's obvious to anyone that has written any sort of complex program that those are not the same thing. foo() can be an expensive operation like an HTTP call. Or it might depend on a database which can change state underneath it.
I assume FP has answers for these things but the tutorials never cover them. They all imagine a world without state or expensive operations to show how wonderful it is. And that's an easy world to program in.
> Because it's obvious to anyone that has written any sort of complex program that those are not the same thing.
They are the same thing in Haskell (except for when the thunks get forced into evaluated values, due to the weirdness of laziness, but that has nothing to do with purity).
> I assume FP has answers for these things but the tutorials never cover them.
Except this one does:
> The <- works like an =, except it signals that equational reasoning doesn't apply to this value. You can't replace what_the_user_typed with getline - your program won't compile.
Yeah, I'm not denying that these things are solved in Haskell or other FP languages. People build complex applications in those languages so all things must be possible one way or another. My complaint is that the tutorials never wade into these weeds and show how FP makes real world applications easier to build.
import System.Random

randomInt :: IO Int
randomInt = randomIO

isEven1 :: Int -> Bool
isEven1 n = (n `rem` 2) == 0

isEven2 :: IO Int -> IO Bool
isEven2 n = do
  m <- n
  return $ isEven1 m

example1 :: IO ()
example1 = do
  i1 <- randomInt
  let result1 = isEven1 i1
  putStrLn $ show result1
  i2 <- randomInt
  let result2 = isEven1 i2
  putStrLn $ show result2
  let result2' = isEven1 i2
  putStrLn $ show result2'

example2 :: IO ()
example2 = do
  result1 <- isEven2 randomInt
  putStrLn $ show result1
  result2 <- isEven2 randomInt
  putStrLn $ show result2

randomIntIsEven :: IO Bool
randomIntIsEven = isEven2 randomInt

example3 :: IO ()
example3 = do
  result1 <- randomIntIsEven
  putStrLn $ show result1
  result2 <- randomIntIsEven
  putStrLn $ show result2

main :: IO ()
main = do
  putStrLn "Example 1:"
  example1
  putStrLn "Example 2:"
  example2
  putStrLn "Example 3:"
  example3
In example1, result1 and result2 may differ, which is signalled by the arrow (instead of "="). result2 and result2' cannot differ, which is signalled by "=", where the right hand side is verbatim the same.
In example2, result1 and result2 may differ, which is signalled by "<-" (instead of "=").
In example3, result1 and result2 may differ, which is signalled by "<-" (instead of "="). Another argument here is that they may differ because example3 is the same as example2, except it used randomIntIsEven in place of isEven2 randomInt. But we defined randomIntIsEven to be equal to isEven2 randomInt, so result1 and result2 being able to differ in example2 but not in example3, or vice versa, would be a contradiction.
That is simply _not_ possible in Haskell.
generateUUID needs a random number generator, thus it probably has a type like
generateUUID :: IO UUID
which means that those two lines don't _really_ make sense: you'd get a compile error. To use side effects (which in this case, IO, mostly means opting into sequencing of actions) you'd have to write
uuidY <- generateUUID
let y = bar uuidY
uuidZ <- generateUUID
let z = bar uuidZ
in which case, equational reasoning still holds.
Side-note: in haskell, you could write it while avoiding the middle line using either the left or right bind operators, so you could write it as
y = bar =<< generateUUID
z = generateUUID >>= bar
(mnemonically: the first one runs an action and pipes it to the left, the second one does the same but pipes to the right)
Side-note #2: that's actually not too far away from how random number generators work in haskell: either you pass the state around, or you do it in IO.
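For the "pass the state around" option, a small sketch using System.Random's pure interface (the function is made up, the library calls are real):
```
import System.Random (StdGen, mkStdGen, randomR)

-- threading the generator state explicitly keeps everything pure
twoRolls :: StdGen -> ((Int, Int), StdGen)
twoRolls g0 =
  let (a, g1) = randomR (1, 6) g0
      (b, g2) = randomR (1, 6) g1
  in  ((a, b), g2)

-- e.g. fst (twoRolls (mkStdGen 42)) yields the same pair every time
```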
"Should mean the same thing" here means "has the same result value". The operational semantics could be different, as you mention: even without side effects, computing x may be very expensive.
This isn’t even special to FP, it’s a basic problem when building a compiler, namely whether inlining/common subexpression elimination is beneficial.
To drive the point home, even in a language like Haskell these two examples might have a factor of two in execution time between them.
In theory you're right, but in this specific case they really are the same - the compiler has an optimization pass called "common subexpression elimination" that converts the second to the first in nearly all cases.
In case 1 the value of x must be kept in memory across the entire execution of bar(x) since you need it for the second invocation of bar. In case 2, the result of foo() can be discarded once bar is done with it.
To use a contrived example, imagine a program that has 100 mb of available memory.
foo() returns a data struct that is 95 mb in size.
def bar(some_big_data_struct)
  some_small_data = some_big_data_struct[:some_key]
  # ... a bunch of other code that allocates and then releases 90 mb ...
  return some_other_small_data
end
In Case 1 I get an OOM error. In Case 2 I (or the runtime) can reclaim the 95 mb that some_big_data_struct uses and the program works.
Clearly that's a contrived situation but it illustrates my original point. There's a huge gap between theoretical pure functions and what we have to deal with in the real world. These sorts of FP tutorials never go into the weeds and explore these problems.
> foo() can be an expensive operation like an HTTP call. Or it might depend on a database which can change state underneath it.
It can't, in Haskell, because it's a pure language. That's basically the definition of pure! But you still need effects, even in a "pure" language, and the whole point of the article is about how to support effects in a pure language.
Actually in some FP languages like Haskell those two might be literally the same thing. When the compiler sees the second, it does the first (obviously over simplifying all over the place here, but that's the idea I think is being expressed).
Btw. monads are not the only way FP deals with effects, see algebraic effects. https://github.com/yallop/effects-bibliography https://www.youtube.com/watch?v=DNp3ifNpgPM