
Your code is impossible to analyze. Console.log is a side effect.

For loops can easily be mapped onto functional constructs, but your snippet does not compute anything. It's not a function.




Don't be pedantic; this is super simple code that is easy to analyze. Side effects are not the terror that Haskell programmers make them out to be.


After working on and maintaining several very large codebases, I would strongly disagree. When large blocks of code start being called only for their side effects, refactoring/deleting old code gets pretty tricky.

Conversely, when writing green-fields code, side effects are extremely convenient and make developers more productive in the very short-term.


(Side) effects are the reason you write code in the first place.

So I'd agree that focusing on the reason you are writing the code makes everyone more productive, though not just in the short term. ¯\_(ツ)_/¯

Now how you organize your code so that you don't mix things up willy-nilly is a different, more nuanced and, I think, more interesting question.

I am a big fan of hexagonal architecture, where you have a very localized and super-testable core with, ideally, all the complex functionality, surrounded by adapters that are as trivial as possible and communicate with the outside world, be that the user, persistence, network etc.

FP is one way of achieving this, but certainly not the only one.
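For a rough JS sketch of the shape I mean (all the names here are made up for illustration, not from any particular framework):

    // Pure core: all the interesting logic lives here and is trivially
    // testable with plain values, no mocks needed.
    const applyDiscount = (order, pct) =>
      ({ ...order, total: order.total * (1 - pct / 100) });
    
    // Adapter: as thin as possible, just shuttles data between the
    // outside world (persistence, HTTP, ...) and the pure core.
    async function discountHandler(req, res, db) {
      const order = await db.findOrder(req.orderId); // effectful edge
      const updated = applyDiscount(order, 10);      // pure core
      await db.saveOrder(updated);                   // effectful edge
      res.send(updated);
    }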


A better reading of his comment is this. You can analyze side effects easily in a small function or system. Not so easily in a large system. That's why it's better to avoid side effects.


Oh, you mean using analysis software. Yeah, I've never had a use for that. On the other hand, you could make my function append to a string and then return the string. Then it wouldn't have side effects, so it would be analyzable.
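Something like this (renderLines is just a made-up name for illustration):

    // Append to a local string, then return it: the mutation is
    // confined to the function, so callers see a pure interface.
    function renderLines(items) {
      let out = "";
      for (const item of items) {
        out += item + "\n"; // local mutation only
      }
      return out;           // same input always yields the same output
    }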


> Oh, you mean using analysis software. Yeah, I've never had a use for that.

One of the major goals for Haskell programmers (and users of fancy-type-systems in general) is to make it easy to transform runtime errors to compile-time errors. Further, an additional goal (albeit one which Haskell doesn't prioritize as much as some of its relatives) is to make the compiler better at telling the programmer what is wrong and how to fix it. If you have no interest in these projects and don't see why they could lead to code which is more reliable and easier to maintain, then your confusion about why people care about this stuff in the first place is perfectly reasonable IMO.


> On the other hand, you could make my function append to a string and then return the string. Then it wouldn't have side effects

State mutation is a side effect. If you mean it builds the string up locally, it would then have no nonlocal side effects (i.e., it would have a pure functional interface even though it has an imperative implementation). But to do that, you'd have construction and mutation overhead in the code, whereas with functional idioms / comprehensions, you would not.

This is especially true when, as is often the case in practice, you are taking collection data structures and producing new collection data structures to work with.
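For example, in JS (incrementAll is a made-up name, just to show the contrast):

    // Imperative implementation behind a pure interface: explicit
    // construction and local mutation.
    function incrementAll(xs) {
      const out = [];
      for (const x of xs) {
        out.push(x + 1); // mutation, but confined to this function
      }
      return out;
    }
    
    // Functional idiom: no visible construction or mutation at all.
    const incrementAllFP = xs => xs.map(x => x + 1);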


(I'm new to FP so excuse this beginner question) How could you ever (in a non-trivial case) avoid local state mutation? It seems like any function which takes a collection and returns a collection would have to maintain some local state.

For example, if you want to take a list of integers and return a list of those integers plus one ([1,2,3] -> [2,3,4]), that new list needs to be built up in memory somehow before it is returned, no? Sure, that process might be hidden behind a generic `map` function (something like `map(lambda x: x+1, [1,2,3])`), but then you just have the `map` function building up the new list. Or if `map` returns a generator (so you don't need to construct the whole return list first), you're still storing some local state to keep track of which element of the input list you're currently processing.

What am I missing or getting wrong here?


Let's take a look at how we might write a function like `map` the (pure) functional way in a language like JS:

    const head = ([x, ...xs]) => x;   // first element of a list
    const tail = ([x, ...xs]) => xs;  // everything after the first element
    
    const map = (list, fn) => {
      if (list.length === 0) {
        return [];  // base case: mapping over an empty list gives []
      } else {
        // apply fn to the head and recurse on the tail, building the
        // result as a brand-new list; nothing is ever mutated
        return [fn(head(list)), ...map(tail(list), fn)];
      }
    };
    
    map([1, 2, 3], x => x + 1); // [2, 3, 4]
We're not keeping track of any state here. Using recursion, you don't need to keep track of the current element in the list, for example (the physical machine running this of course will, but not at the conceptual level).


> How could you ever (in a non-trivial case) avoid local state mutation? It seems like any function which takes a collection and returns a collection would have to maintain some local state.

Somewhere underneath there will need to be something maintaining state, but it won't have to be local state in the function (in a pure language, it will typically be within a built-in with a pure interface). FP isn't about changing the fact that computers operate by mutating state, but about isolating such mutations (and other side effects) behind pure interfaces, so that the risk and difficulties associated with effectful code are confined to, ideally, extremely well understood pieces of infrastructure code rather than permeating large codebases.


You are correct that memory allocation takes place to implement the FP program, and you could call that a state mutation. The underlying computer mutates RAM.

But that's not what FP people mean by state.

RAM is an implementation detail. (You don't even strictly need RAM to compute. But that's another conversation.)

In your example, one part of the state is the list [1,2,3]. That isn't mutated when the program runs. It's passed around. The implementation probably passes it by reference - a pointer to the list - but actually the programmer can't tell and doesn't care if it's copied or passed by reference. There's no visible difference, when values can't be mutated.

The other part is [2,3,4]. As the program runs, it will be allocated in new memory, and from the programmer's point of view, it's as if the value [2,3,4] always existed, just waiting to be looked at. When it does look, that's always the value it's going to find, so in a very meaningful sense, that value is already there.

It's not usually allocated and stored in RAM until the program looks there, but it could be; it makes no difference from the perspective of the FP programmer. (In some implementations it actually might be. If you called map(f,[1,2,3]) twice, it might "rediscover" the value already existing in RAM on the second call and use that.)

And the [1,2,3] might get freed at some point. But that only happens when it's not being "looked at". It gets forgotten then. (Or in an exotic implementation if there's still a reference to it, the memory containing the value might be freed, and it might be reallocated and recalculated when it's looked at again later. All invisible to the FP programmer.)

For your generator example, the implementation might create a state variable to implement it, or it might not. Either way it's hidden from the pure FP program, and it's as if the list [2,3,4] is just there, waiting to be looked at.

Some implementations won't use a stateful generator like Python's, though. They may instead represent the list as having a lazy tail: [2,3,lazymore...], where lazymore... is a placeholder in the list in RAM, representing the part of the list which hasn't been looked at yet. This is lazy evaluation. The lazymore... is completely invisible to the pure FP program, because the act of looking at it (to do something useful) causes it to be replaced with the "real" calculated value, [4] in this case. Only those "real" values are visible to the program.
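Here's a toy JS sketch of that lazy-tail idea, with thunks standing in for the placeholders (a real implementation would also memoize, replacing the placeholder with the computed value so it's only calculated once):

    // A lazy list is a head plus a thunk that produces the tail on demand.
    const cons = (head, lazyTail) => ({ head, tail: lazyTail });
    
    // All the integers from n upward; nothing past the head is computed
    // until something "looks at" the tail.
    const intsFrom = n => cons(n, () => intsFrom(n + 1));
    
    // Map lazily: the placeholder for the rest stays unevaluated.
    const lazyMap = (f, list) =>
      list === null ? null
                    : cons(f(list.head), () => lazyMap(f, list.tail()));
    
    // Forcing: take converts the first k placeholders into real values.
    const take = (k, list) =>
      k === 0 || list === null ? []
                               : [list.head, ...take(k - 1, list.tail())];
    
    take(3, lazyMap(x => x + 1, intsFrom(1))); // [2, 3, 4]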

Overall, in the pure FP programming model, it's as if all the values exist already and never change, and they are determined by the FP expressions from other values which also never change.

The only "effect" is an implementation detail, triggered by the the act of looking at values to see what they already are, which converts lazy placeholders into useful values. The equivalence between lazy placeholder and useful value is so well hidden in pure FP that the implementation is free to do things like calculate values before they are needed or even if they are never needed, and to discard some values (putting the placeholder back) and recalculate them again later whenever it feels like. Yet to the pure FP programmer, it's always the same values.

The underlying implementation will allocate, free, and move values around in memory, and perform lazy evaluation as its way of "looking at" values as requested. But those are all implementation details hidden from the pure FP programmer, and they vary between implementations too. In practice, debugging, timing, and memory usage are still observable, but not to the FP program (except through "cheating" non-pure escape hatches), and we think of those as part of the implementation, separate from the FP programming model.





