Immutability is not enough (recurse.com)
271 points by r4um on March 31, 2016 | 123 comments



> This is exactly the kind of problem that functional programming was supposed to help us avoid!

No, no it isn't. Just like you wouldn't expect 2 * 3 + 8 to suddenly return 22 when you intend it to, you shouldn't expect the order in which you apply functions to not matter.

When you pass around a changing state object to every step of your program, you are effectively doing normal imperative programming with global-ish state.


I upvoted the original post only because of this comment, which gets to the heart of the matter with a simple example anyone can understand.

If the functions aren't commutative, function order definitely matters!


Commutative functions are scarce. You can rely on composition being associative, but not commutative, and frankly, there aren't that many useful commutative functions possible.

Yes, ordering matters, but if you cannot force ordering by protocol or by the types used, then the result is going to be non-deterministic by definition, which means it's not referentially transparent.


>No, no it isn't. Just like you wouldn't expect 2 * 3 + 8 to suddenly return 22 when you intend it to, you shouldn't expect the order in which you apply functions to not matter.

Only the latter (that the order in which pure functions are applied doesn't matter) has been touted forever as a benefit of functional programming, and what supposedly makes it "trivially parallelizable".


If the code is commutative, then that statement is true.

But functional programming isn't going to make a non-commutative problem magically commutative.

All code has inherently serial parts (e.g. because of dependency chains). These cannot be magically made parallel (although, especially with immutable data, it may be possible to apply parallelism techniques like pipelining). However, the parts that are independent are trivially parallelizable. Immutable state and functional programming make it easier to tease this out, but they don't automatically eliminate dependencies or allow them to be evaluated in arbitrary order.


>If the code is commutative, then that statement is true.

Actually my statement is true whether the code is commutative or not -- since my statement wasn't that "pure functions can run in any order" but rather that "people have often been claiming that pure functions can run in any order".

What they claim can be right or wrong (depending on the commutativity of the functions in their code), but the fact that people indeed claim it is true (and can be verified)...


Can we see some examples of people claiming this?

Barring people who are new to FP and reading through a tutorial, who can make erroneous claims out of lack of understanding (just like people learning OOP also make mistaken claims about it), I don't see how anyone using FP can make that claim. It's evident that you cannot arbitrarily change the order in a series of transformations unless they are commutative. I mean, this is already the case with maths! Who would claim it isn't with FP?


>Can we see some examples of people claiming this?

I have posted several examples lower in this very thread.


Yes, thank you, I wrote this comment and then saw your other comment. Below that comment I explained why I think your interpretation of what those people are claiming is mistaken. Mind you, not a case of No True Scotsman: I'm saying they aren't arguing what you say they are arguing; at worst they are being unclear, but they are certainly arguing something very different!


Are you sure we are talking about the same "functional programming"? Consider a function that parses a number from a string, and a function that doubles a number. How could anybody expect the order of these applications to not matter?


Some people just oversell it, or don't think of the whole spectrum of commutativity scenarios when they write about FP. I give several examples of that, lower in this thread, with links.

And it's exactly the same thing that the author of the original article discusses -- that order was supposed not to matter, but it does.


I don't think anybody ever meant to imply that pure functions can be composed in any order. That's the confusion here. "Pure functions can be run in any order" only means that they don't have side effects so one call doesn't impact another, but nobody ever meant or (hopefully) understood that to mean that the inputs and outputs of pure functions could be composed in any order. I don't even know what kind of confusion could lead to such an idea.


This is exactly what I was thinking.


Has it?

The order pure functions are applied in doesn't matter if their inputs don't change. Here, the inputs do change. The result of handleCollisions is a different state than the state you pass into it, so you should expect a different result when passing it to another function.

Pure functions are trivially parallelizable because they don't depend on non-local state. The moment you know what arguments you are passing to them, you can run them in whatever parallel or non-parallel fashion you like.

Every function that takes and returns a modified version of a single object cannot be parallelized, since every single function in the pipeline depends on the result of the previous one. Here, for example, you don't know what the argument you are passing to processInput looks like until you've run handleCollisions.


Your statement may be mistaken. The order in which pure functions are applied matters often, but not necessarily. You can think of a few trivial algebraic examples where order matters.

f(x) = x + 3

g(x) = 2x

f(g(x)) can have a different result than g(f(x))


Note that the parent said "if their inputs don't change".

In general in this thread, some people are following the root comment in using "order in which functions are applied" to refer to what you mean - applying one function to the output of another, or vice-versa. Of course this affects the correctness of the result (and possibly also the well-typed-ness of the program).

Other people, like the author of your parent comment, are using "order in which functions are applied" to mean "order in which functions are evaluated", which indeed does not matter for the correctness of pure functions (except possibly with respect to nontermination).

Working with f(x) = x + 3 and g(x) = 2 x, the expression f(g(7)) can be evaluated inside-out or outside-in:

    f(g(7)) = f(2*7) = 2 * 7 + 3
    f(g(7)) = g(7) + 3 = 2 * 7 + 3
As best I can tell, the confusion was introduced in this thread by coldtea's accusations against unspecified FP proponents.

Edited to add: per https://news.ycombinator.com/item?id=11396816, that should be "coldtea's incorrect accusations against now-specified FP proponents".


In this case f(x) and g(x) can be called in whatever order, but the composition (the .) cannot be reordered.


I think you misunderstood what the GP is saying. To take your example, given:

  fun f(a) = a + 3
  fun g(b) = 2b
Then:

  val x = f(2)
  val y = g(3)
is equivalent to

  val y = g(3)
  val x = f(2)
And this is true only because both f and g are pure functions.


I don't see why your comment is not simply additive to mine. Both our statements are true, and are related to the general discussion. Your statement relates to the most "pleasingly parallel" situations. Mine refers to problem composability.

This was the post right before the one we're talking about:

>No, no it isn't. Just like you wouldn't expect 2 * 3 + 8 to suddenly return 22 when you intend it to, you shouldn't expect the order in which you apply functions to not matter.

A reasonable interpretation of the text and the one that follows is that some claims have been made that functional programming provides benefits of commutativity, and that people are disappointed to find no such benefit.


> Only the latter (that the order in which pure functions are applied doesn't matter) has been touted forever as a benefit of functional programming, and what supposedly makes it "trivially parallelizable".

Functions that return the same values when given the same arguments are substantially easier to reason about than functions that return values that depend on both the arguments passed in and some state somewhere else in the system.

Additionally, functions that don't modify their arguments are relatively easy to parallelize. (Due in part to the fact that it's really easy to figure out locking in a share-nothing system.)

Anyway. I'm no scholar, but I'm pretty sure that no knowledgeable FP proponent has ever seriously said "FP lets you call your pure functions in any order whatsoever and get the same end result, every time.". Perhaps I'm misunderstanding your statement?


Especially since shuffling the order functions get called often means they do NOT get called with the same arguments. This is important!


The order in which pure functions are evaluated doesn't matter (except with respect to nontermination). You are claiming people say that the order they are applied doesn't matter - this is a bit ambiguous. The example above is changing what the functions are applied to - everyone agrees that this matters (outside some well-structured exceptions).


It's not the order of application that is irrelevant, it's the order of evaluation, due to the Church-Rosser Theorem.

For example, consider the expression:

    myList = map(double, [10, 20, 30])
We can evaluate the "map" call, to get this:

    myList = [double(10), double(20), double(30)]
Which of these calls to "double" should we evaluate first? It doesn't matter! We can go left-to-right:

    myList = [20, double(20), double(30)]

    myList = [20, 40, double(30)]

    myList = [20, 40, 60]
We can go right-to-left:

    myList = [double(10), double(20), 60]

    myList = [double(10), 40, 60]

    myList = [20, 40, 60]
Or any other order, and we will converge to the same result: [20, 40, 60]. If a language has the Church-Rosser property (AKA being "confluent"), then we're guaranteed that if we reach a result (i.e. if we don't get stuck in an infinite loop) then we get the same result, regardless of evaluation order.

To see the link to parallelism, imagine that the above list is much larger, and rather than just doubling a number we're doing some expensive calculation. In a non-confluent language, we'd have to specifically designate the tasks which can be performed in parallel, e.g. creating a job queue or the like, to avoid race conditions and out-of-order side-effects. In a confluent language, everything can be evaluated in parallel without affecting the result; the only thing we have to care about is performance (the overhead of parallelising small tasks far exceeds the gains) and data dependencies (if X uses the result of Y, there's no point starting X until Y has finished; and it's better to run both on the same machine, to minimise network usage).

Now instead of a list of numbers separated by ",", imagine we have a list of statements separated by ";":

    myProcedure = print(reverse(" olleh")); print(concat("wo", "rld"))
If the "print" procedure has side-effects, we get different results depending on which order we evaluate the calls in, hence the language would be non-confluent and we would have to be very careful that parallelism doesn't break the functionality.

On the other hand, we could approach these statements from a higher level view: we can think of each statement as a perfectly normal piece of data, e.g. as if it were a node in some AST. In which case, the ";" separator is just a pure function for combining two statements (pieces of data) into one statement (piece of data). In this setting, we can regain confluence, and hence evaluate in any way we like, e.g. we can reduce the inner expressions to:

    myProcedure = print("hello "); print("world")
If we eventually choose to execute this "AST", all of the side-effects will still occur in the same order, even if we've evaluated a bunch of things inside it, since evaluating a compound statement like 'a; b' cannot change the order of the parts, in the same way that evaluating a list '[x, y]' cannot change the order of its elements. Treating side-effecting statements as pieces of data in this way lets us blur the lines between interpreters, compilers and "eval", and is similar to what Haskell's "IO" type does.

In a non-confluent language we couldn't evaluate parts of the second statement in general, e.g. because "concat" might depend on effects from "reverse" or the first call to "print", hence there is less opportunity for aggressive optimisation, and without a distinction between pure and impure code (like we have above between functions and 'statement data'), there's no way in general to know which parts can be evaluated in parallel.

In fact, confluence isn't limited to function application (AKA "beta reduction"); we can add all sorts of program transformation rules to our language whilst remaining confluent (although such rule-adding mechanisms can be used to break confluence, this is usually considered user-error). For example, we might have a rule like the following:

    print(x); print(y) --> print(concat(x, y))
This turns one statement value into another with equivalent behaviour, but better performance (e.g. fewer system calls, etc.). Haskell's GHC compiler allows these sorts of rules, although they're applied at compile-time rather than run-time. There are also languages based around rewriting expressions at run-time, like Pure and Maude, for which all of these arguments also apply.
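
To make that last point concrete, here is a minimal sketch of such a rule as a GHC RULES pragma. The 'emit' and 'andThen' helpers are made up for illustration (standing in for the "print" and ";" above), and real fusion rules need more care about when they fire:

    module FuseRules where

    -- 'emit' stands in for the "print" statement above; NOINLINE keeps it
    -- visible to the rule matcher.
    emit :: String -> IO ()
    emit = putStr
    {-# NOINLINE emit #-}

    -- 'andThen' stands in for the ";" statement combinator.
    andThen :: IO () -> IO () -> IO ()
    andThen = (>>)
    {-# NOINLINE andThen #-}

    -- print(x); print(y) --> print(concat(x, y)), as a compile-time rewrite rule.
    {-# RULES
    "emit/fuse" forall x y. emit x `andThen` emit y = emit (x ++ y)
      #-}

    main :: IO ()
    main = emit "hello " `andThen` emit "world\n"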


Not sure what the downvotes are for.

I'm not saying that order doesn't matter with pure functions (which is wrong).

I'm saying that the notion that "order doesn't matter" (without qualifications) has been touted as a benefit of pure functions and FP -- and indeed it has. Wrongly, but it has -- and that's an impression many people get from reading such "introductions to FP".

In fact, the author of the original post we're discussing makes the exact same observation: that order was supposed to "not matter", but he shows how it does.

And if you want some more example of what I said, here's a very lazily collected sample, I just used "pure function" and "any order" etc. in Google:

"A pure function is also robust. Its order of execution doesn’t have any impact on the system.". -- http://www.nicoespeon.com/en/2015/01/pure-functions-javascri...

"Finally, and here's the coup de grâce, we can run any pure function in parallel since it does not need access to shared memory and it cannot, by definition, have a race condition due to some side effect". -- https://drboolean.gitbooks.io/mostly-adequate-guide/content/...

"Another consequence is that it doesn't matter in what order functions are evaluated — since they can't affect each other, you can do them in any order that's convenient". -- http://stackoverflow.com/questions/4382223/pure-functional-l...

"Pure functions may be run in any order, so it's more easy to parallelize their execution". --http://leonardo-m.livejournal.com/99194.html?page=1

"Pure functions can be evaluated in any order, the result is always the same. Therefore, pure functions calls can be run in parallel". -- https://medium.com/@yannickdot/functional-programming-101-6b...

Note how none of the examples bothers to give any qualifications about input-data dependencies between some of your pure functions, or the race conditions based on them...

What do newcomers to FP learn from those?


I think you're being disingenuous. Those articles aren't claiming what you say they are claiming. Their wording may be imprecise (I agree with you on that), but nobody is claiming that FP makes all chains of transformations magically commutative.

You know what these people mean when they say pure functions may be evaluated in any order: that as long as they have the same input, the result will be the same. But in contrast, the "magical commutativity" property doesn't mean the functions get called with the same input! Or, to say the same in other words, you cannot change a function and expect your program to be the same, and we know that h1 = f . g and h2 = g . f are, in the general case, NOT the same function! In fact, one of the two might not even compile!

At best you could argue the explanations you linked to are incomplete or unclear. But it's dishonest to say they claim you can order the functions in any way, regardless of their input -- which is what TFA was having problems with.


>I think you're being disingenuous. Those articles aren't claiming what you say they are claiming. Their wording may be imprecise (I agree with you on that), but nobody is claiming that FP makes all chains of transformations magically commutative.

But I didn't say they do [say the latter].

Only that they sidestep the issue -- often confusing application with evaluation (using them interchangeably, or e.g. using the imprecise "run in any order" instead of evaluate).

Very few of those articles mention commutativity, the distinction between evaluation and application, data dependencies, etc. And even those mentioning "order of evaluation" forget to clarify that it is not the same as application, and confuse matters more by adding that we can "easily run them all in parallel" (giving newcomers the impression that the order of application doesn't matter at all).

>At best you could argue the explanations you linked to are incomplete or unclear. But it's dishonest to say they claim you can order the functions in any way, regardless of their input -- which is what TFA was having problems with.

TFA author had those problems (or, more precisely, had the idea that they shouldn't exist) because he was influenced by such articles -- coming to believe that FP was "supposed to free us" from ordering issues.

You say that "it's dishonest to say they claim you can order the functions in any way, regardless of their input", but that's just what this statement implies:

"A pure function is also robust. Its order of execution doesn’t have any impact on the system."

Potential issues with input not mentioned at all, as if they don't exist.

Or this:

"Pure functions can be evaluated in any order, the result is always the same. Therefore, pure functions calls can be run in parallel"

Here they indeed write "evaluation", but the unqualified "can be run in parallel" muddies the waters.


I don't follow you. When people say "you can run these functions in any order" (or "in parallel"), how do you interpret this to also imply "...regardless of their input"?

Why does it matter if they say "run" instead of "evaluate"? What's the difference, in your opinion?

Do people really have trouble with the (universal, not FP-specific) notion that the input to a function matters?

Do we both agree that FP/immutability doesn't promise what the author of the article thinks it promises, and that people (regardless of how clearly they say it) do not claim it does?


>Why does it matter if they say "run" instead of "evaluate"? What's the difference, in your opinion?

"Evaluation" solely implies passing an actual value and getting the result from a function. Taken in isolation, each evaluation (of pure functions) is independent from whether 10 or 200 other happened before or after -- it depends only on the input it gets.

Saying "run" on the other hand can be conflated to mean running a pure function in the course of a program's execution, and there the order or execution can matter with regards to the overall program's behavior, e.g. when it affects the derived input of a subsequent function.

>Do people really have trouble with the (universal, not FP-specific) notion that the input to a function matters?

Yes, people do expect pure functions will free them from race conditions for example -- only to discover that they still exist, as the author did, at a higher conceptual level.

Heck, if we are to discuss the original article, isn't it a given that such people as its author do exist and have the same confusion?

Or does the author seem like some totally ignorant programmer that is not representative of more people (and that we'd expect him/her to get it all wrong)? It doesn't come out that way to me.


You're confusing the order of evaluating separate functions with function composition. So if I have two pure functions, f and g, that take and return immutable values (thus having no side effects) I can go:

a = f(x)

b = g(y)

or:

b = g(y)

a = f(x)

And the order does not matter. Because each function cannot affect the other. This is what these quotes are talking about. That independence. There's no outside state mutated in one evaluation affecting the other evaluation.

What you're talking about is function composition. That is, f(g(x)) vs g(f(x)). And as many others have said on these threads, the order of the functions when composing two functions together like this is only unimportant if the functions themselves are commutative.
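
A minimal Haskell sketch of that distinction, reusing the toy f and g from earlier in the thread (purely illustrative):

    f, g :: Int -> Int
    f x = x + 3
    g x = 2 * x

    -- Independent evaluations: either order gives the same pair.
    independent :: (Int, Int)
    independent = (f 1, g 2)    -- (4, 4), whichever call is evaluated first

    -- Composition: swapping the order changes the function itself.
    fg, gf :: Int -> Int
    fg = f . g                  -- \x -> 2*x + 3
    gf = g . f                  -- \x -> 2*(x + 3)

    main :: IO ()
    main = print (independent, fg 5, gf 5)   -- ((4,4),13,16): fg and gf differ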


It's clear that you know what is actually meant by "order doesn't matter".

However, I find the idea that FP advocates have been ambiguous in using this phrase to be preposterous. In none of the examples you cite does the author say "order doesn't matter" without qualification. In every example you cite, the author explicitly says that order of evaluation/execution doesn't matter. That is an unambiguously correct statement.

That being said, it is still possible to be unambiguously correct and still regularly confuse people. Rhetorical style and knowing your audience go a long way towards avoiding that sort of confusion. However even when you write with clarity, there will still be some who will misinterpret - this cannot be helped.

I recognize this is merely my personal experience, but even as an average programmer who was haphazardly exploring the concepts of FP, I don't think I ever misunderstood what was meant by "order doesn't matter". If it turns out that many/most people were confused, then I will consider myself lucky in having found some good introductory material. Otherwise, I'm inclined to think that this is one of those blindingly obvious things, and that people who fell into this confusion - while by no means stupid - were simply exercising poor reading comprehension when they came across that phrase.


>However, I find the idea that FP advocates have been ambiguous in using this phrase to be preposterous.

And yet the author of the original article, writes:

>It turns out that our purely functional rendering code is sensitive to ordering in non-obvious ways. The first time I encountered this kind of bug, it felt strangely familiar – it’s something that often occurs in imperative programs.

And notes:

>This is exactly the kind of problem that functional programming was supposed to help us avoid!

[emphasis mine]

That the author felt that way is a failure of those intro-to-FP blog posts/articles, which didn't make such things clear.

>In none of the examples you cite does the other say "order doesn't matter" without qualification.

They all fail to mention that those functions can have data dependencies on one another, and that we can't easily parallelize "any" of them as they claim.

None of the articles mentions anything about the commutativity of the calculations (or even the very term commutative itself).


Oh dear.

I read the first half of this article watching him take the simple OO game loop and iteratively make it worse and worse, leaving it nearly unreadable and impossible to follow the logic or know what the state would be at any given moment. Then I came to this line:

Let’s take a moment to appreciate a few of the things we’ve gained by rewriting this code in a functional style

He actually thinks that he has improved the program. By piling functions up six deep and replacing pos.x += 3 with four lines of object accessor code (using strings for property names). Oh dear.

The second half of the article, where he realizes that he has not actually gained anything by ruining his game is just the icing on the cake.

I don't feel as though I've been sold on the benefits of Immutability today.


In their defense, while the syntactic structure of the program may have been made a hash of, the specific problem they described is indeed something that you could experience with a game engine with that design. I haven't quite seen that exact problem pop up for someone on /r/haskell, because that usually gets posts about how to design the loop in the first place, but the loop he describes has certainly been laid out there.

The only quibble is that usually when you do the "every entity declares its desired action and the core loop harmonizes it", you do it more symbolically. The player character does not issue a symbolic "I wish to move to position 38" command, it issues an "I wish to move 3 units to the right" symbolic result, and the enemy issues the "I wish to push the player two units to the left" symbolic result, generally with both results carrying their source info too, and then, when the loop goes to harmonize these things, it has more information with which to do it than opaque closures or something that the harmonization code can't understand.


I swear someday someone will write a blog post titled: "Controlled mutation of state variables in a purely functional paradigm: a syntactic approach."

And just after that we will rediscover GOTO.


You're joking, but "Goto" is coming back into fashion again. It's in Swift, for example. There are few things more self-destructive than taking any of these programming memes too seriously.


Immutable data structures look awful in mutable languages; also, the performance is worse.

Still, don't confuse syntactic overhead with the general usability of an idea. Writing a['x'] += 3; would be godawful in pure assembler, but we still use hashmaps, references, variables and operators daily when it's easy enough to do so.


Unless they're representing date/time or strings, then immutability is an excellent feature.


Wait, I'm confused. Date/time and strings are the two things that I ALWAYS want to be immutable.


I often end up writing an immutable wrapper for date/time in different programming languages.


My theory is that he thinks that variables are an imperative language feature - so he tries to avoid using them to make it all functional.

He is partially right - variables are imperative. What he does not know is that he can use his variables in a way that directly translates into purely functional 'value binding' and leave them readable.


I wish they would still use Scheme in intro computer science. It's so obvious in that language what is and isn't pure functional.



We use it at Berkeley for the first CS class, CS61a. The final project is writing a scheme interpreter in python!


I'm with you. Why is it that the immutability path always requires endless amounts of streaming-here, streaming-there, parsing-this, generating-that, etc.? Methinks there is a cargo cult factor going on .. at least in this case.


The bugs described in the article stem from the existence of a "state" type. If you're allowed to call draw() on the result of handleCollisions(), but not on the result of processInput(), that means there are actually two state types in your program:

    draw : cleanState -> unit
    handleCollisions : dirtyState -> cleanState
    processInput : cleanState -> dirtyState
These signatures allow draw(handleCollisions(processInput(state))) but forbid the incorrect ordering draw(processInput(handleCollisions(state))).

Effect systems boil down to "discovering" these types eventually, but it is easier for a human to think in terms of what invariants are enforced by a given type. Even if there is no language support for actually checking those invariants before execution.
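
A minimal Haskell sketch of that idea, with a placeholder World type standing in for the article's actual game state (the function bodies are dummies; only the types matter here):

    data World = World { playerX :: Int } deriving Show

    -- Two distinct types for the two "phases" of the state.
    newtype CleanState = CleanState World deriving Show
    newtype DirtyState = DirtyState World deriving Show

    processInput :: CleanState -> DirtyState
    processInput (CleanState w) = DirtyState w { playerX = playerX w + 3 }

    handleCollisions :: DirtyState -> CleanState
    handleCollisions (DirtyState w) = CleanState w   -- collision resolution elided

    draw :: CleanState -> IO ()
    draw = print

    main :: IO ()
    main = do
      let state = CleanState (World 0)
      draw (handleCollisions (processInput state))      -- type-checks
      -- draw (processInput (handleCollisions state))   -- rejected by the compiler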


I think the uber point of the article is that, even with the help of types, you cannot design the whole type structure until you know the entire picture. Suppose you add some autonomy to your character (e.g. pathPlanning, preprogrammed behavior, subcomponents moving autonomously or otherwise interfering with other objects or user input, etc.); then there are many types of state, and you'll design which operation gets which type of state and returns which type of state, and that's essentially the same as how you design with chaining State -> State functions or working with imperative procedures. Sure, types will keep you from shooting yourself in the foot---once you design the whole type chain, the compiler will detect if you swap or drop operations inadvertently. But during the stage of designing the type structure itself, the issue described in the article still exists.


My point is that you don't have to design the whole type chain ahead of time, you can discover new type constraints as you go and apply them straight away (in the same way that you don't have to wait until you finish your design to start writing unit tests).

Maybe an experienced game programmer would notice straight away that it could be a problem to draw a state that has not undergone collision detection, and design this state constraint into the game straight away (as a type, as an assert, etc.) and maybe another programmer wouldn't notice this requirement until they ran the game and noticed something odd (or someone from QA notified them) and had to dig through the system to find the underlying reason. But once they shot themselves in the foot once, it makes sense to implement _something_ to detect when they make the same mistake again.


During the design stage, you do need working components, independently implemented, and you need to experiment with various combinations. If you use types, that means frequently rewriting types to match the reorganizations of control flow. Types still help to prevent mistakes, but the main part of the actual design process (like working on clay, or scrap & rebuild with Lego pieces) is just as tedious as other ways. I read the original article as trying to address that problem.


"even with the help of types, you cannot design the whole type structures until you know the entire picture"

Sure you can. And like your design of anything else without knowing the entire picture, some bits of it will be wrong. A good type-checker - especially with inference - helps you find where you'll have inconsistencies in the assumptions you've made (usually before you've coded all the way to them), will give you a sense of the form of those inconsistencies, and when you change the types will point you at the code that needs to change in tandem.


I write a bit of Haskell in small projects but my background is largely Lisp/Scheme (the mud ball approach). I do admire hard-core Haskellers who seem to "think in types". I wonder if there are good articles that tell how a type-based approach helped dynamic development (meaning, lots of trial-and-error, a rapidly changing spec, and many components developed simultaneously by different programmers).

I assume you can encode implicit constraints in types; for example (supposing components form a dependency graph), you might encode dependencies to ensure restructured graphs always satisfy them. I'd just like to see how it works in real-life development.


I'm not sure if the description "hard-core Haskeller think[ing] in types" applies to me, but I definitely find the support of the Haskell type system hugely advantageous when I'm working quickly (and conspicuously absent when I have to sit down and write some Python or what have you).

My current project at work is a library of tools for parsing and analyzing SQL. I started out targeting one dialect and with a few analyses in mind. I think it meets all your criteria (lots of trial-and-error, rapidly changing spec, many components) but the last (I've done most of the development).

My approach was to start with a pretty dumb sketch of my AST, and then refine it as I learned things from trying to use it, trying to balance complexity against concerns of 1) keeping invalid states unrepresentable, 2) reducing unnecessary coupling, and 3) keeping inherent coupling explicit. With a reasonable amount of 1 and 3, changes could be made with far more confidence (paired, of course, with a reasonable test suite).

Going back to "learned things from trying to use it", it's a matter of working out whichever is easier - types or code - and stubbing out the other until I have enough clarity that filling in the details there is easy as well. Quite often I'll build out the obvious code, come to a hard bit, and then I can ask the compiler for the shape of what it expects. Or drop in undefined and see if the result still builds. This can get me feedback long before tests could be informative.

The experience has been good, and I know I still have substantial room to improve over where I am.


Now let's say you want collision with one special item to change the way inputs are interpreted (you iterate on the game rules and someone had an idea to add alcohol that reverses the controls).

This very detailed specification of dependencies in applications makes it hard to change them quickly, and gamedev is mostly about how quickly you can iterate to make the game fun.

I would love a hybrid system, where you get a "root" object passed to each function, that keeps the whole state, and you can just do side effects on that, but you are required by convention to always reference the members of that root object by full path.

And then when you're finished you run a macro, and it generates nice, elegant, minimally coupled functional code that passes only the required stuff around.

Don't know if that's possible but it would be great.


This with lisp syntax, compiling to Rust code & datastructures would be fun!


I was thinking of a system generating code in the same language, and still keeping the root object, so that you can experiment, add some new dependencies, and regenerate dependencies, many times.

But you can at any moment pass null from the main loop instead of the proper root object to see if the code is generated properly and works only with the functional part.

It's more complicated than a system that just regenerates everything from scratch every time. And removing dependencies would be tricky. So maybe your idea is better.


Well you could precompile part of the guaranteed immutable stuff and distribute it separately from the mutable execution part...

Wait we've reinvented the JVM at this point!


Come to think of it you should take a look at clojure/script, the pattern you describe could easily be implemented with maps in atoms.

Cf. the 'reagent' lib's way of handling state


> Now let's say you want collision with one special item to change the way inputs are interpreted

Then the rules become part of state and the flow doesn't need to change.


It's hard to give small example with so general code, but in the end you either have very general flow (collision between any 2 objects can change state of the whole world, including the way the inputs are interpreted), that doesn't provide much protection against bugs, or you have very specific minimal flow (collision between 2 objects changes the state of these 2 objects, their parents, particle system, music system, and input interpretation, and nothing else) that protects against more bugs, but is a lot of work to maintain.

I was trying to imagine a system which could give you the best of both worlds.


Can anyone explain why it is:

  var newState =
      drawManuel(
      drawBackground(
      processInput(state)));
and not:

  var newState = processInput(state);
  drawManuel(newState);
  drawBackground(newState);
When I read the first my intuition is that the application of drawManuel and drawBackground is important for the value of newState. The second one makes it clear that newState does not depend on drawManuel (or drawBackground).

It would also make the first bug easier to spot:

  var newState = handleCollisions(state)
  newState = processInput(newState)
  drawBackground(newState)
  drawManuel(newState)
The problem with the order of handleCollisions and processInput is now more visible because now it is obvious that just those two change the state - so we concentrate on them. And there are fewer parentheses.

(Initially I used another variable here to make it more functional - but then I decided this is useless - we are still in an imperative language)

I have a feeling that the author has some strange ideas about what functional means. I stopped reading after this part.


Indeed.

If the state is immutable, then drawing (which shouldn't modify the state) can be, conceptually, done in parallel. While in practice this is unlikely to actually happen, your code snippet reflects this conceptual property better and is therefore, in my opinion, much easier to read: I know where the state is modified and where it is merely read. I also know that drawManuel cannot mess with drawBackground's data and vice-versa.


It's not only a conceptual difference.

It is a good practice to decouple state manipulation from drawing (even to do drawing in different thread, and with different frequency, than state manipulation).

This allows computers with vastly different processing power to run the same simulation in sync, while the more powerful computer can draw 10 times as many intermediate frames.

Alternatives are variable step length (makes it hard to write deterministic simulation), or fixed step length and capping framerate to it (sacrifices animation quality on better computers for easy implementation).

EDIT: also - the ordering between drawManuel and drawBackground probably does matter, and there's probably some state related to drawing (like animation frame numbers etc.), so there should be a separate screenState that reflects that.

  state = processInput(state);
  screenState = drawBackground(state, screenState);
  screenState = drawManuel(state, screenState);


I meant conceptually because of what you say in your edit. As far as your state variable is concerned, they are parallel. But as you point out, there is implicit screen state (either in the code or in the GPU or wherever) and when updating that, order certainly does matter.

If each draw* function drew to a temp buffer to be composited later, then they could be parallel. But none of this is really relevant to the game-state discussion, though, since we were both talking about (and agreeing, I think) that decoupling is good regardless. Even if just to aid reasoning.


It is interesting that the solution for the order-dependency problem essentially turns functions that update the state into functions that write a program in a mini-language. When interpreted, the language updates the state. In Haskell this is known as the Operational Monad [1].

The nice thing with that approach is that it decouples complex application-specific logic from state management allowing to test both components separately. Another bonus is that it enables things like Elm's time traveling debugger [2].

The drawback is that this adds maintenance burden as now one has to deal with the intermediate language. If that language is a poor fit for the problem, the burden can be rather high. This is especially bad in JavaScript, where the lack of rich static types makes generating programs and writing an interpreter rather error-prone.

[1] https://wiki.haskell.org/Operational

[2] http://debug.elm-lang.org/
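
For anyone unfamiliar with the pattern, here is a minimal sketch of the general idea (not the actual Operational package API): the application logic produces plain update values, and a separate interpreter applies them to the state:

    -- The state is just an integer score here, for illustration.
    type GameState = Int

    -- A tiny "mini-language" of updates, produced by the application logic...
    data Update = AddScore Int | ResetScore
      deriving Show

    -- ...and applied later by an interpreter that owns all state transitions.
    applyUpdate :: GameState -> Update -> GameState
    applyUpdate s (AddScore n) = s + n
    applyUpdate _ ResetScore   = 0

    runUpdates :: GameState -> [Update] -> GameState
    runUpdates = foldl applyUpdate

    main :: IO ()
    main = print (runUpdates 0 [AddScore 10, AddScore 5, ResetScore, AddScore 3])
    -- 3; the list of updates can also be logged, tested, or replayed.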


> However, we still don’t know what they do with the state. Do they modify it, or do they only read it?

   var newState =
        drawManuel(
        drawBackground(
        processInput(state)));
I don't see how this makes it more readable. I still don't know which functions actually modify the state. This is what I imagined the result to be:

    var newState = processInput(state);

    drawBackground(newState);
    drawManuel(newState);


Don't try to use functional programming for GUIs, simulations, or games. That's what OO was invented for.

Don't try to use OO for compilers or formal verification. That's what FP was invented for.

Don't try to use OO or FP for operating systems. That's what imperative languages were invented for.

Anyone who insists on a single style for the whole software industry is probably underinformed and overconfident.


I really like your sentiment, but I'm not sure if "OO was invented for X" or "FP was invented for Y" is right.

They are abstract paradigms that were created with very little connection to their underlying usage at the time (FP in particular stems from a mathematical background, but most of OO as well).

Rather, I would say at most that "OO is more suitable for GUIs" and "FP is more suitable for compilers" (though I'm not sure I agree even with that).


This isn't true about games; FP isn't a bad fit for them. Games can be written in FP, and there are some interesting things you can do with it, it's just that it's obviously less explored and more "researchy".

See:

- Nikki and the Robots (Haskell): https://github.com/nikki-and-the-robots/nikki

- Frag, a simple 3D shooter (Haskell): https://wiki.haskell.org/Frag

- Clean Game Library, using Clean, an FP language with a different philosophy from Haskell's: http://cleangl.sourceforge.net/

- For my (abandoned :( ) master's thesis, I was also writing a platformer in Haskell and was doing just fine. I abandoned it because I'm lazy, not because of FP :P


I guess the main problem is that FP can encode updatable trees with direct pointers from parents to children, but it can't encode updatable graphs with direct pointers, and in some applications you really want that.


I don't know. I have actually tried writing a game using Haskell. Admittedly, it was an absurdly simple platformer, and I left it unfinished, but it was a joy to write it. FP helped with both high and low-level code organization. I didn't hit any walls writing it (maybe I would have later on, of course). Had I continued it, I would have tried using FRP techniques, which was ultimately the whole point of the thesis, but never got around to trying them.

I think it's one thing to argue from theory ("this can't possibly be practical!") and another to actually try your hand at it and see whether it works or not :) My semi-informed guess is that it does. I don't expect to see AAA games done in Haskell any time soon, but that's for completely unrelated reasons!

IIRC, Carmack himself seems to be warming up to the idea of using some pure FP techniques in games. Of course, not everywhere.



> Don't try to use functional programming for GUIs

Then why is React so popular?


I'm not convinced GUIs can't be built with functional programming techniques. My GUIs (in Android and more recently with React) generally tend to follow a unidirectional data flow, where the view building is really a dumb passthrough of massaged data that's stored in a local DB (like SQLite). The view state could be represented ENTIRELY by a JSON object, e.g. and since it's all stored locally, it's replayable if something goes wrong.


Configuring and assembling a GUI out of an existing GUI framework, and writing that framework in the first place, are two different use cases IMO. OOP works a lot better for the latter case.


Here's a ClojureScript + core.async + Om + my own animation lib solution. No surprises, no weird bugs. Dead simple functional code + CSP = WIN! : )

https://gist.github.com/raspasov/e894a0c6c26e5814d752b3fc907...

Here's also the anim lib gist, if anyone is interested https://gist.github.com/raspasov/f9ca712571efd932169e


Here's a complete web game a mate and I did for global game jam this year in less than 24 hours from scratch using ClojureScript and core.async targeting pixi.js.

https://github.com/retrogradeorbit/moonhenge

It even has multiple weapons, enemy waves and an end sequence if you complete the game!

The code is not as clean as it could be. For instance it has many more atoms than it needs and lots of code that could be simplified and re-factored. But time was extremely tight and it was a massive rush job.

But even so in the end it came together seamlessly with no weird bugs or crashes of any kinds and it is rock solid. I attribute this to the immutable data structures and pure functions used. I've done games before using mutable OOP and in my experience they never come together this easily or with so few bugs.


This is awesome - great job! : )


"You will never find a programming language that frees you from the burden of thinking about bugs" https://xkcd.com/568/


Moreover, the following theorem is easily proved: there cannot exist a useful general-purpose programming language in which all (or even most) programs can be efficiently verified to be correct, either by man or machine[1].

Where "useful general-purpose" means any language with forward branching and some form of looping recursion (even if bounded) -- it does not need to be Turing complete -- and "efficient" means done in time polynomial in the size of the program.

[1]: ... unless PSPACE = P, in which case some small class of languages that are not quite general purpose but useful nonetheless may have this property, because verifying useful finite-state machine languages is PSPACE-complete.


For those interested, the proof for the Turing-complete and nearly-TC (as in total functional programming) cases is the same as the proof of the time hierarchy theorems[1], and for finite-state-machines the proof is based on results showing that verification of a (concise program expressing) finite state machine is PSPACE-complete. Those results are not surprising, as it is easy to see how any problem in computer science can be reduced (efficiently) to verification, so being able to efficiently prove correctness of a language capable of describing algorithms with super-polynomial complexity (this, BTW, only requires the ability to loop/recurse to depth 2) would mean contradiction.

[1]: https://en.wikipedia.org/wiki/Time_hierarchy_theorem


It seems what the author learned is that "programming languages sometimes let you do things in the wrong order", which is so obvious that it's kind of a non-observation.

Anyway, functional programming on its own doesn't get you there, but functional programming with a solid type system does prevent this bug from happening.


Great article, feeds all my prejudices. For example, I'm tempted to link to it in http://akkartik.name/post/modularity at the point where I say that "There seems the very definite possibility that the sorts of programs we humans need to help with our lives on this planet intrinsically require state."


Immutable state, functional programming, etc., are not about eliminating state; they're about isolating it and carefully managing it.

In a standard-imperative-mutable world, you can never be completely sure that state isn't being modified behind your back, you can never be completely sure that concurrent access is safe, you can never be completely sure how far reaching state changes you make are.

The idea, IMHO, isn't to remove state. All programs need state. All interesting programs need state that changes over time. The "big idea" is that these changes are isolated and managed carefully so that nobody can pull the rug out from under you and change something behind your back or in a way that makes concurrent access unsafe.

Immutable data means that the data you are reading will never change. If you read it again in ten minutes, it will still give you the same value. No other module or thread can change it. Concurrent access is safe too and all reads give the same consistent view.

But even immutable data needs to be updated, so there will always be some mutable state (for example, in Clojure, you might have one or more atoms, which are mutable cells into which you can put immutable data. The data doesn't mutate -- it gets replaced[0]. Only the cell is mutated, but in a carefully managed way. There are other mutable state management primitives too). The programmer cannot accidentally mess up the state, they have to intentionally do it.

So to reiterate: it's not about eliminating all mutable state, it's about isolating and carefully managing it.

[0] and anyone still holding a reference to the data (not the atom) will continue to see the same data as before, unchanged. That is, if I take the value from the atom, somebody else updating the atom will not change the value I took out of it. This of course introduces things like the dual-writes problem, but Clojure gives me other tools (like refs with STM) to tackle these: ie atoms aren't the right tool for all use cases, just one example I chose.


That's why Haskell programmers, for example, try to represent state in clear ways.

The basic mathematical model of functions and values turns out to be able to represent state in several ways: the "monadic" way of composing state transforming functions; the "functional-reactive programming" way of modelling signals and events (which offers interesting solutions to the OP's game problems); various "interpreter" patterns; and so on.

I point this out just because there's a widespread misunderstanding that pure functional programming is somehow opposed to modelling state. It's definitely not. It just involves building state models on the coherent basis of functions and values.

Put simply, in Haskell, when your problem requires state, you can just use the "State" type, just like when you require I/O, you use the "IO" type.
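
For instance, a tiny use of the State type from Control.Monad.State (in the mtl/transformers packages); a minimal sketch, not anything from the article:

    import Control.Monad (replicateM)
    import Control.Monad.State

    -- A counter threaded through otherwise pure code via the State type.
    nextLabel :: State Int String
    nextLabel = do
      n <- get
      put (n + 1)
      return ("item-" ++ show n)

    main :: IO ()
    main = print (runState (replicateM 3 nextLabel) 0)
    -- (["item-0","item-1","item-2"],3)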


Absolutely. In "The Awkward Squad," SPJ says "Haskell is the finest imperative programming language in the world," not entirely tongue-in-cheek.

:)


The problem doesn't really have anything to do with immutability; the problem is dependency and composition. E.g.

f :: A -> B

g :: B -> C

will let me write g(f(x)), but not f(g(x)). It does not matter if f and g are 'pure'; the point is that f must come before g. I think the mistake is to think that functional programming is about immutability, and that immutability solves the problems. Immutability is there to keep the types honest; it is fine to have effects as long as we are upfront about them, and don't conceal them under false names such as "int" and "bool". A small sketch making that concrete, with dummy one-constructor types just to pin down the arrows:
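
    data A = A
    data B = B
    data C = C deriving Show

    f :: A -> B
    f A = B

    g :: B -> C
    g B = C

    h :: A -> C
    h = g . f            -- composes fine: A -> B -> C
    -- f . g             -- does not type-check: g produces a C, but f expects an A

    main :: IO ()
    main = print (h A)   -- C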


Just as it's possible to model immutability on top of mutable structures, it's possible to model mutability on top of immutable structures. Regardless of which way you go, you end up with the properties associated with the kind of structure you're building. This is not too surprising, I would think.


There's a funky thing going on here because of the environment this guy is using: Javascript and a JS library to handle immutability.

Because of the way he's coding, he's doing a lot of horsing around with several things at once. He's not writing functions and then composing them. He's refactoring multiple functions at the same time. As it turns out, this difference is important. Looking at it from one angle, from small pieces into larger pieces, you'll be solving the order of functions almost immediately. Looking at it from another angle, from big pieces into smaller pieces, you solve problems in a different order. You make pieces with names on them that do what you want, then you look at whether there's support in the code to do what you want to do, regardless of how that integrates. In fact, you end up (as he does) integrating later. You're not composing, you're decomposing.

This is why one of the recommended ways to do FP is to use the REPL and get the core, toughest function working first. Then start composing. Work outwards from there.

The first function you'd probably write in this case would perhaps be moveTheGuy, which would move the guy from one place to another. You would immediately be solving the problem of whether or not the guy could actually move. Contrast this with the first function being updateTheWorld, where you can start inventing all kinds of stuff that might or might not happen. Then you horse around trying to figure out what should go under updateTheWorld.

I remember the first conference talk I saw with a guy talking about F#. It was still new in the industry, and many of the folks who were using it were true-blue OO coders.

The guy gets up, shows some code, then goes behind-the-scenes to show us the "real" C# code. "As you can see, all this really is? It's do-while loop. You could code this way with C# and never touch F#"

Yeah sure, but the environment you're in can help lead you to write better or worse code. It seems to me if I wanted to do pure functional stuff in Javascript I'd be pretty freaking careful how I coded. Lots of things there that point you in the wrong direction. Functional code isn't just regular code with all sorts of weird syntax; it's actually a different way of looking at solving problems.


It seems to be a common trend to claim that functional programming solves the problem of state.

It can solve the problem of accidental state in the solution space, but it obviously cannot change the fact that a specific problem space has stateful characteristics.

Don't introduce accidental state, yes. But embrace the problem's states as first class elements of your solution and don't try to hide them.


FP is all about embracing state as a first class element and reducing accidental state. I hope nobody claims otherwise :)

The confusion is that people sometimes say "state" when they mean "mutable shared state".


I don't see how functional programming would ever solve the problem of chronology. If the collision engine needs to know the new position in that frame then it needs to run that code first. You can't just run code whenever and expect functional programming to fix it. Nor will assigning different areas of the state to specific functions fix that either.

I think immutable data solves a lot of headaches and using Redux and Immutable together in our current stack has shown me some of the benefits of immutability and a more functional programming style. However, much of this article was about immutability not being a silver bullet for things that, IMO, were never going to be solved by immutability. It's like saying it would be solved by writing everything in Haskell, there are some "complexities" that can't be hidden, e.g. order of events. Maybe I missed something.


My typical analogy is this: mutable state is to software as movable parts are to hardware. There are a group of problems that cannot be solved without it, but introducing it makes everything a bit more fragile.


     var newState =
        drawManuel(
        drawBackground(
        processInput(state)));
This isn't how functional programming must work to be functional programming. The point is always to write as much as possible in a functional, preferably immutable, style, wherever it fits well. You can write a large portion of your program that way.

The next sacrifice is to glue the functional calls together with more of a stateful approach, but preferably keep the mutable state more contained. For example, having a single function that handles the state, calls into the immutable functional code to do the heavy lifting, then sews everything together and ideally returns something that is immutable and stateless. Like little bubbles of state where absolutely needed. For example, the draw() calls in the above example can't be functional, as drawing relies on side effects. There's nothing to return from a drawing function, so those can't natively be chained into nested function calls.

The final level is usually some hodge-podge state management at the very top level, where it becomes unavoidable in order to keep things organised and beautiful. There's always some state, so you'll have to handle it somewhere, but the more localized and minimized it is, the better.
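
A minimal sketch of that shape in Haskell (purely illustrative): a pure core function does the work, and a thin IO shell at the top is the only place that threads the changing state:

    import System.IO (isEOF)

    -- Pure core: the interesting logic is an ordinary function on values.
    step :: Int -> String -> Int
    step total line = total + length line

    -- Thin imperative shell: the only place where effects and the "current" state live.
    loop :: Int -> IO ()
    loop total = do
      done <- isEOF
      if done
        then print total
        else do
          line <- getLine
          loop (step total line)   -- the heavy lifting stays pure

    main :: IO ()
    main = loop 0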


I've said this a few years back in some other thread but it's so fitting here I'll say it again:

In a few years down the road I hope that the JS folks take a very very close look at Rule engines. These kind of errors can be easily avoided with them.

There are actually folks in the ClojureScript world using clara-rules for browser state/rendering and it apparently works very well for them. I really wish some bigger player would push in that direction.


In a few years down the road I hope that the JS folks take a very very close look at Rule engines. These kind of errors can be easily avoided with them.

I agree up to a point, but I think off-the-shelf rule engines will only ever get us so far.

Front-end web development in recent years has been going through its “Visual Basic phase”. We’ve been developing tools to help automate basic UI interactions and communications between front- and back-end. Those often get the job done better than the more manual techniques we used to use, but mostly they work best with simple presentation like forms or charts, and with simple data models and limited relationships — in other words, the kind of thing you need for a simple CRUD app UI where the tricky issues like scalability are handled on the back-end, like a lot of the web apps being developed today.

However, you didn’t see many people trying to build more demanding UIs in Visual Basic, things like CAD applications or DTP software or flight simulator games or tools for visualising and exploring tricky data sets. I agree that to build more sophisticated web apps, the front-end world is going to need to broaden its horizons. Tools that let you systematically represent large numbers of rules without hand-crafting a custom function for every one of them will surely be necessary in some cases. As with so much of web development, a lot of lessons learned some time ago in other programming fields will be relearned by a new group of developers.

However, that’s all still relatively easy. The hard part is how you reconcile the effects of those rules when they conflict, and there aren’t any universal right answers to that one. In fact, often there aren’t any good answers at all that can be fully automated. That can create a whole new world of nasty, awkward interactions where users have to actively deal with those conflicts. Maybe even that isn’t going to be scalable or efficient enough, and you have to design the entire system to operate with only incomplete or potentially incorrect information and still have acceptable if not always ideal behaviour. There are lots of interesting problems at this level, but a basic rule engine is never going to solve them alone.


I really liked this post. It reminds me of a similar post by James Hague[0].

A number of people seem to be hung up on what is meant by FP advocates saying "order doesn't matter". In all the literature I read when I first learned about FP, it seemed pretty clear to me that it meant "function composition is associative", not "function composition is commutative".
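Roughly, in JavaScript (just an illustration of the distinction, not code from the article):

    function compose(f, g) { return function (x) { return f(g(x)); }; }
    function inc(x)    { return x + 1; }
    function double(x) { return x * 2; }

    // Associative: how you group the composition doesn't matter...
    compose(inc, compose(double, inc))(3); // 9
    compose(compose(inc, double), inc)(3); // 9

    // ...but composition is not commutative: swapping the functions changes the result.
    compose(inc, double)(3); // 7
    compose(double, inc)(3); // 8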

But whether this is obvious is beside the point. The hypothetical confused programmer is merely a rhetorical device to set the reader up to think more deeply about how they handle state. FP teaches us to push state changes to the boundaries of the system. OK, once it is there, now what do we do with it?

We handle it declaratively[1]. That's what treating the updates as data allows.

[0] http://prog21.dadgum.com/189.html
[1] https://awelonblue.wordpress.com/2012/01/12/defining-declara...


Here is a related article for anyone curious about, or who uses, persistent/immutable data structures:

http://concurrencyfreaks.blogspot.nl/2013/10/immutable-data-...


I've never really dived into functional programming but often end up reading a lot about it here.

There's always been something about state and immutability that rubbed me the wrong way, and I'm very happy to see it being put into words and code examples.


Well, immutability is something that can help you in non-functional programming languages too: even C has const. It doesn't solve every problem, but it reduces the cognitive load.


You are absolutely right. What I meant, and what I should have written, was "completely stateless and completely immutable".


You can still model state pretty straightforwardly when everything’s immutable. This article shows that different designs have different applications more than anything, and, well, duh.


> but it reduces the cognitive load.

No. It can reduce the cognitive load if applied in the correct place. It can just as easily increase the cognitive load if applied in the wrong place. Immutability should be used with care, not used always.


I would contend that it's exactly the opposite; mutability should be used with care, not used always.


Totally agree. Mutability adds significant cognitive difficulty that just goes away with immutable structures. Here's a great talk by Rich Hickey on the subject.

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...


I'm watching this now and just wanted to say thanks for the great link!


My pleasure :) Even though I'm fully bought into the functional and immutable idea (I have a couple of fairly popular libraries bringing the functional world to C# [1][2]), I still like to watch this video from time to time.

The clarity of thought and the way he communicates the ideas is really impressive, especially the notion of immutable structures being about managing time, and that's why the original blog is wrong. What its author seems to totally miss is the fact that with immutable structures you can keep both the old world and the new world: not collided and collided. You can then step back in time to the not-collided state. The blog gets far too bogged down in implementation attempts and doesn't think about the problem conceptually.

[1] https://github.com/louthy/language-ext

[2] https://github.com/louthy/csharp-monad
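To sketch that "keep both worlds" point in JavaScript (applyPhysics and isInvalid are made-up names, not from the blog):

    var before = state;                 // the not-collided world
    var after  = applyPhysics(before);  // the collided world; `before` is untouched
    // stepping back in time is just choosing to keep using the old value
    state = isInvalid(after) ? before : after;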


First of all, it's great that you're exploring functional programming paradigms and their applicability to game programming/state management.

That being said, I have to point out some things that bothered me: I believe you're conflating immutability and commutativity; these are not the same thing. Functional programming does not absolve you of paying attention to the order of operations. For example, if you had two functions plus1 and plus2, you can order those any way you like since addition is commutative. However, if you had plus1 and divide2 functions, you have to pay attention to the order of application due to non-commutativity.
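Spelled out in JavaScript:

    function plus1(x)   { return x + 1; }
    function divide2(x) { return x / 2; }

    divide2(plus1(4)); // 2.5
    plus1(divide2(4)); // 3   -- the order of application changes the result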


I think this is an example of something I see often, where a particular idea (e.g. immutability, interfaces, modularity, etc.) gets confused with a particular feature of a particular language which shares the same name. In this case, the author's code isn't using immutability; it just-so-happens to be creating an object called "Immutable.Map".

Following the author's terminology, mutability is when we replace the contents of a memory location with different values, e.g.:

     Time | RAM
    ------+--------------------------------
     0    | foo
     1    | bar
     2    | baz
          .
          .
          .
The author is using the term immutable to mean never replacing the contents of a memory location, so in their view this is immutability:

     Time | RAM 1 | RAM 2 | RAM 3 |
    ------+-------+-------+-------+--...
     0    | foo   |       |       |
     1    | foo   | bar   |       |
     2    | foo   | bar   | baz   |
          .       .       .       .
          .       .       .       .
          .       .       .       .
However, this is really just an implementation detail; Javascript programming isn't really about values in RAM (unlike, e.g. C where pointers are first-class values in the language). Instead, Javascript is conceptually built around variables, objects, scope, etc. So let's see how these RAM details translate to the values of variables.

The "mutable" version:

     Time | "state" var
    ------+-------------
     0    | foo
     1    | bar
     2    | baz
          .
          .
          .
The "immutable" version:

     Time | "state" var 0 | "state" var 1 | "state" var 2 |
    ------+---------------+---------------+---------------+--...
     0    | foo           |               |               |
     1    | foo           | bar           |               |
     2    | foo           | bar           | baz           |
          .               .               .               .
          .               .               .               .
          .               .               .               .
So far so good. However, scope plays a large part in Javascript programming: we can only access variables which are in scope, and what's more the same variable name can refer to different values depending on the scope we're in. This is where the author confuses the concept of immutability with the name of a library they just-so-happen to be using. Their code keeps generating new scopes, in which the variable name "state" refers to a different value than in the previous scopes. Conceptually, the variable "state" is not immutable because it keeps referring to different things, even though the implementation details could be argued to be immutable.
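A tiny sketch of that scope point (update and initialState are hypothetical names): each call introduces a fresh "state", so the old values still exist somewhere, but they are out of reach unless the history is kept explicitly.

    // an endless game-loop sketch
    function loop(state) {
      var next = update(state);  // `state` is never mutated...
      return loop(next);         // ...but after the recursive call it is out of scope
    }

    // keeping the history explicit is what makes the old values usable
    var history = [initialState];
    function loopWithHistory() {
      history.push(update(history[history.length - 1]));
    }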

As an analogy, consider a display which shows the current value of a stock price. That is certainly not immutable, and trying to use that reading to perform calculations will be tricky since it keeps changing. This is like the author's original, mutable "state" variable.

Now consider a plotter drawing a graph of how that price varies over time:

          |
          |    _--.      ____
    Price |   /    \    /    \
          |__/      \__;      \__/
          |                       
          +-----------------------------------
               Time              ^
                                 |
                                Now
This is like the author's "immutable" version, where values are never changed once they're defined. This seems more useful for performing calculations; however, because the author keeps creating new scopes, they have no access to these old values! We have no access to the graph paper; we're only allowed to see the current position of the pen, which is exactly the same as if we only had the original display!

If we take scope into account, we actually get the following:

Mutable version:

     Time | Scope | Current "state"
    ------+-------+-----------------
     0    | 0     | foo
     1    | 0     | bar
     2    | 0     | baz
          .       .
          .       .
          .       .
Immutable version:

     Time | Scope | Current "state" | "state" var 0 | "state" var 1 | "state" var 2 |
    ------+-------+-----------------+---------------+---------------+---------------+--...
     0    | 0     | foo             | foo           |               |               |
     1    | 1     | bar             | foo           | bar           |               |
     2    | 2     | baz             | foo           | bar           | baz           |
          .       .                 .               .               .               .
          .       .                 .               .               .               .
          .       .                 .               .               .               .
I do think the author has a few good points to make; however, I don't think it's an argument between being mutable vs. being immutable, so much as one between being implicit vs. being explicit.


Author here. You've actually summarized one of the main points of the article -- here's what I wrote in the "What's going on?" section:

> In a sense, we are modelling a mutable object. Implementing that model with immutable data structures does not eliminate the difficulties of dealing with state updates -- it just shifts them to a different conceptual level.


I thought you would recommend Elm at the end of the article :P


This is wonderful. The "Mutable" vs. "Immutable" tables are clear. I have been telling myself to think this way: there are no immutable or mutable versions. Everything changes, over time. The time is the key here. A value of X may be 1 at t0. At t1, it may be 2. So the source of confusion is due to asking the wrong question. We commonly ask the question "what is the value of X?". Does that question really make sense? What we mean to ask is, "What is the value of X at a certain time?"


The point of functional programming (or any good programming style, really) is to eliminate unnecessary dependencies (by which process the necessary dependencies are clarified), not to remove necessary dependencies (which by definition is impossible).


Reminds me of Elm's Effects system.

Great article.


Sounds like a solution looking for a problem.


Immutability as the absence of overwrites or mutations of existing values (in Erlang one just introduces a new binding for a new value) is obviously enough.

State and I/O are easily encapsulated inside closures with CPS or with explicit message passing.
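For example, a small JavaScript sketch of state hidden in a closure with a CPS-style interface (the names are made up for illustration):

    function makeCounter() {
      var count = 0;                 // the only mutable cell, visible nowhere else
      return function (msg, k) {     // k is the continuation that receives the result
        if (msg === 'inc') { count = count + 1; }
        k(count);
      };
    }

    var counter = makeCounter();
    counter('inc', function (n) { /* n === 1 */ });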

For a lazy language one needs some ADT, like a Monad, for explicitly "enforcing" a strict order of evaluation for a specific set of expressions, because I/O implies a strict order. Think of it as a transformation from a set to a sequence (by defining an order as an ADT).

Lifting State or IO into the monadic world is merely a trick to satisfy a typechecker and enforce a strict order by using an ADT. A strict language with optional laziness could use ordinary type-tags.
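A rough JavaScript illustration of the idea (not Haskell's IO, just an ADT that pins down evaluation order through chaining):

    function IO(run) { this.run = run; }
    IO.prototype.chain = function (f) {
      var self = this;
      return new IO(function () { return f(self.run()).run(); });
    };

    var program = new IO(function () { return 'world'; })
      .chain(function (name) {
        return new IO(function () { console.log('hello ' + name); });
      });

    program.run(); // effects happen only here, and only in this order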

Nothing to see here.


> A strict language with optional laziness could use ordinary type-tags.

And then you'll have a total order, which makes your code hard to parallelize.

But more importantly, it mixes state updating (which can actually be done in several orders, can be rewound, and has several other interesting properties) with IO, with list mapping, with exception handling, and everything else.


Not one mention of the blockchain in the story or here. How interesting.


As a non-game-programmer, I find the idea of "collision detection and prevention" seems like a case of bad modeling.

In the real world, two objects don't start by occupying the same space, then deciding that's not a valid state and taking corrective action.


It's less a case of them occupying the same space, and more one of each object running its movement code and stating where it would like to be in the next frame. Collision detection then takes place, finds that two objects want to share the same position, and notifies the objects of that fact so that they can compensate. That compensation might take the form of changing velocity, blowing up into a hundred pieces, or just outright ignoring the fact and carrying on.
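A rough sketch of that flow (objects, overlaps, onCollision and the rest are hypothetical names):

    // 1. every object proposes where it would like to be next frame
    objects.forEach(function (obj) { obj.proposed = obj.move(); });

    // 2. detection finds conflicts and lets each object decide how to compensate
    objects.forEach(function (obj) {
      var hit = objects.find(function (other) {
        return other !== obj && overlaps(obj.proposed, other.proposed);
      });
      if (hit) { obj.onCollision(hit); }      // change velocity, explode, or ignore it
      else { obj.position = obj.proposed; }
    });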


But in the example, the character in the game actually is moved to the invalid location, and then the test is run for invalidity. Not sure why I'm downvoted; this seemed like a reasonable objection to me. Like I said, I'm not a game dev.


Can someone explain why I cannot just check for collision before moving the character?


If the character moves further in one frame than the space between him and the block, he will not be able to stand flush with the block; this can be fixed in other ways though.
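One such fix, sketched in JavaScript (the names are hypothetical): clamp the move so a fast character stops flush against the block instead of overshooting it.

    function moveRight(character, velocity, blockLeftEdge) {
      var targetX = character.x + velocity;
      var maxX = blockLeftEdge - character.width;  // the flush-against-the-block position
      return Object.assign({}, character, { x: Math.min(targetX, maxX) });
    }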


You can. It's called collision detection!


What sort of approach do you suggest as an alternative?



