How to use array reduce for more than just numbers (jrsinclair.com)
136 points by jaden on June 2, 2019 | 103 comments



  function splitLineReducer(acc, line) {
      return acc.concat(line.split(/,/g));
  }
  const investigators = fileLines.reduce(splitLineReducer, []);
That’s unnecessarily quadratic. So yes, definitely use the more readable `flatMap` (with appropriate polyfill):

  const investigators = fileLines.flatMap(line => line.split(','));
(The `flatMap` in the article is not an appropriate polyfill)
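
For reference, a linear-time fallback might look something like this (a hedged sketch, not a fully spec-compliant polyfill):

  if (!Array.prototype.flatMap) {
    Array.prototype.flatMap = function (fn, thisArg) {
      // single pass, pushing into one output array: O(n) overall
      const out = [];
      for (let i = 0; i < this.length; i++) {
        const mapped = fn.call(thisArg, this[i], i, this);
        if (Array.isArray(mapped)) {
          for (const x of mapped) out.push(x); // flatten one level only
        } else {
          out.push(mapped);
        }
      }
      return out;
    };
  }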

The object spread example has the same problem, like c-smile mentioned. Use a Map for lookups instead.
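
E.g., building the lookup once (a sketch, assuming the article's peopleArr of {username, ...} objects; the key below is hypothetical):

  const byUsername = new Map(peopleArr.map(p => [p.username, p]));
  byUsername.get('someUsername'); // O(1) lookup, no per-step spread copies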

> Some readers might point out that we could get a performance gain by mutating the accumulator. That is, we could change the object, instead of using the spread operator to create a new object every time. I code it this way because I want to keep in the habit of avoiding mutation. If it proved to be an actual bottleneck in production code, then I would change it. ↩

Avoiding (very local) mutation because it’s an FP bad word in a language like JavaScript is just a terrible way to go about things. Also, this turns out to be an actual bottleneck way more often than people notice.


> Also, this turns out to be an actual bottleneck way more often than people notice.

I wish more devs -- particularly web front-end devs -- tried out their work on something that isn't a top-of-the line developer computer. There are plenty of devices out there that are computationally constrained and/or memory-bounded. Moreover, no one ever runs these things in isolation. So, while profiling in isolation makes a ton of sense to rule out external factors, in the real world you're competing with all the other open tabs and windows. That's a tricky problem to model, of course, but it doesn't mean we shouldn't try.


> I wish more devs -- particularly web front-end devs -- tried out their work on something that isn't a top-of-the line developer computer.

I agree, and I am, and I do.

I held out on a laptop from 2009 until recently; it had a Core 2 Duo. It was (and is) a really good machine for showing up sloppy, performance-hungry websites, and it has been the source of a few of my rants here on HN when discovering some stupid side JS animation saturating one of its cores unnecessarily.

I agree there's a lot of shitty JS code out there. But there are a few of us web devs who care, and will do anything we can to prevent the crap we see around us.


Slow machines aren't just good for noticing bad websites; they're also good for noticing bad server-side code and native apps.


That's commitment. Keep fighting the good fight.


It's best to test on an actual device, but Chrome's developer tools do have CPU and network throttling to simulate slower machines.


Deploying the same JavaScript code on both web and old Android devices (using RN) keeps you honest about performance :O


Avoiding local mutation in JS for the sake of avoiding it is especially bad because functional programming languages with immutable data usually have optimizations for these kinds of things. JavaScript does not.


This is the first time I’ve ever seen an emoji (↩️) in a HN comment. I thought they were stripped? Interesting!


This technically isn't an emoji. It's an Other_Symbol U+21A9 ‹↩› \N{LEFTWARDS ARROW WITH HOOK} followed by a Nonspacing_Mark U+FE0F ‹◌️› \N{VARIATION SELECTOR-16}. VS16 forces rendering of graphemes as a colourful graphic (where supported).

https://en.wikipedia.org/wiki/Variation_Selectors_%28Unicode...
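
You can check this from a JS console (a quick sketch):

  [...'↩️'].map(c => c.codePointAt(0).toString(16));
  // -> ['21a9', 'fe0f']  (base symbol + variation selector-16)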


> That’s unnecessarily quadratic. So yes, definitely use the more readable `flatMap` (with appropriate polyfill):

  > const investigators = fileLines.flatMap(line => line.split(','));
It’s often nice to use iterators, e.g. with pieces from https://observablehq.com/@jrus/itertools

  const inspectors = Array.from(flatten(map(line => line.split(','), fileLines)))
Then the steps can be broken down into smaller pieces if desired, without needing to re-store all of the content temporarily at every step along the way.

Python’s generator expressions also often make this kind of thing even clearer:

  list(name for line in fileLines for name in line.split(','))
Ideally there would also be a nice iterator-producing “split” method/function for scanning through the string incrementally so that it would be unnecessary to construct a separate array for each split.
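
Something like this generator would do it (a hypothetical sketch; splitIter is not a built-in):

  function* splitIter(str, sep) {
    let start = 0, i;
    while ((i = str.indexOf(sep, start)) !== -1) {
      yield str.slice(start, i);   // emit the piece before the separator
      start = i + sep.length;
    }
    yield str.slice(start);        // trailing piece (possibly empty)
  }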


Hmm… not sure how that particular iterator version is an improvement.

Aside: the Python

  inspectors = list(name for name in ','.split(line) for line in file_lines)
is backwards in two ways and should be

  inspectors = [name for line in file_lines for name in line.split(',')]


> not sure how that particular iterator version is an improvement

In this case it is not. But if you want to do more complicated things, you can do several steps with iterators, and then crystallize an array (or sometimes a single reduced result) at the end.

* * *

Edit: whoops, I had it correct in my python shell and then mindlessly typed it wrong into this conversation.

Apologies!

* * *

Also, yes, sure, you should use a list comprehension at the end in the Python; but again, if you use generator expressions you can keep working on them for several lines of streaming transformations without needing to construct any full list along the way.

* * *

As an example, instead of TFA’s min/max example, we can do something like:

  const
    readings = [0.3, 1.2, 3.4, 0.2, 3.2, 5.5, 0.4],
    [r0, r1] = tee(readings, 2),
    running_min = accumulate(r0, Math.min),
    running_max = accumulate(r1, Math.max),
    [rmin, rmax] = last(zip(running_min, running_max));


Yeah but in python you would just use a generator for that.


In python you could use just a regular function with a for loop. You can do that in javascript too. But the versions using smaller components can be faster to write and more composable. Really depends on what kind of style you like and what you are trying to do.

  const min_max = function min_max(iterable) {
    const it = iterable[Symbol.iterator]();
    const {value, done} = it.next(); if (done) return;
    let min = value, max = value;
    for (let x of it) {
      min = Math.min(min, x);
      max = Math.max(max, x);
    }
    return [min, max];
  }
  min_max(readings);
Anywhere that would be appropriate for a generator in python can also be a generator in Javascript though.
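
For instance, the Python generator expression above maps onto a JS generator fairly directly (a sketch):

  function* names(lines) {
    for (const line of lines) {
      yield* line.split(','); // stream out each name lazily
    }
  }
  const inspectors = [...names(fileLines)];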


Generator expressions exist for that in python.


> That’s unnecessarily quadratic.

Any idea why concat() doesn't optimize it? Having a refcount of 1 should be enough information to make it linear, right? Or do the GC engines not keep track of refcounts at all?


Tracing GCs usually don't have any recording of the number of pointers to a value.


Python's GC does this though?


Indeed, some languages (such as Swift) provide a mutating reduce because it makes sense to use it in some cases.


At my current project we try to do everything as functional as possible (we're backend developers using node).

After a year or so of trial and error, we have pretty much stopped using reduce. The reason is that it pretty much always makes the code far less readable, especially for new devs who come from more object-oriented approaches.

With the .map, .forEach and .filter methods we don't usually have that problem; people get the hang of them quite quickly and find them more elegant, but reduce always seems to require stopping for a few seconds to work out what it's doing.


> After a year or so of trial and error, we have pretty much stopped using reduce

I see Reduce (along with recursion itself) as the equivalent of a "low-level" construct in functional programming.

It is a great building block for other functions like sum/product, and even those demonstrated in the article, but not something that programmers should be using everyday.
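
For instance (a small sketch):

  // reduce as a building block for everyday helpers
  const sum = xs => xs.reduce((a, b) => a + b, 0);
  const product = xs => xs.reduce((a, b) => a * b, 1);

  sum([1, 2, 3]);     // 6
  product([1, 2, 3]); // 6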

I think it's very appropriate to have a rule to only use reduce (and recursion too!) inside "utility" libraries.


I work on a similar team, with a mix of people who want to use functional styles and those who want something more imperative.

I lean toward the pro-functional style, but I also put a few strong constraints on when to use reduce. Namely, I would almost never use it in cases where I either want to (a) mutate something other than the accumulator or (b) include more than two branches of control flow in the reducer function.

This kind of goes for all the functional array methods. Many people are tempted to use them simply as alternate syntax for iterating through the array (i.e., a simple loop), where the loop body could contain several statements -- i.e., data mutation side effects, taking different execution paths based on certain conditions, etc. The functional style is way less clear in this sort of code.

The benefit of the functional style in the context of javascript array methods is when you can see at a glance what the shape of the resulting value is with respect to that of the source array. This is usually best done when the body of the reducer/mapper/filterer is a single expression.


I think it’s reasonable to expect engineers to be able to learn how .reduce works. It’s part of JavaScript, after all.

If a new hire doesn’t know (a junior dev, or someone without experience in JS/Scala/etc.), an hour of reading plus coding exercises should be plenty to get them up to speed (if it’s not, that would be a red flag to me, and could be a sign of poor performance to come in other areas, too).


The distinction isn't “can possibly learn” but “can clearly and immediately understand while doing the real work”. It's one thing to sit down and play around with alternate styles, code golfing, etc. but for production work it's usually best to pick the most understandable way to reduce the cognitive overhead so when someone is making a change, debugging, profiling, etc. they're not instead spending time decoding the clever trick their predecessor (possibly them six months ago) used.

This is particularly true with religiously following functional style in languages which weren't designed from the ground up to work that way. The results usually take more time to understand and are often slower because the runtime is not as optimized for uncommon styles.


Readability isn't the same thing as learnability.


Our reasoning wasn't "I don't expect them to be able to learn" so much as "it doesn't seem worth the extra effort to have them learn this skill". For us, reduce didn't make code more performant, readable or maintainable, so it felt a bit like shooting ourselves in the foot in search of "purity".


As a freelancer, I go from company to company. Most of the devs I worked with were not 10x. Not even average HN level.

You gotta deal with reality at some point.


> specially for new devs that come from more object oriented approaches

Could this be solved by accepting in the team only those devs who are comfortable with functional programming? :-)


It could, but it's not in our hands.

And to be fair to management, there's not much of an argument to be made to shrink the pool of candidates so we can use reduce, if we're not seeing that feature bring anything to the table...

We already work with a somewhat small pool of candidates - they need to be somewhat competent, preferably familiar with niche technologies that we use in other areas, and willing to work mainly with JS.


> there's not much of an argument to be made to shrink the pool of candidates so we can use reduce

Sure, but since you mentioned that your team tries to write javascript in a functional paradigm, doesn’t it make sense to filter out candidates who aren’t comfortable with functional programming in general (and reduce is a marker, albeit very superficial, of candidate’s familiarity with this paradigm)?

Of course I don’t know how hardcore your functional style is; whether it’s just map-filter-reduce, or whether it’s also currying, composing and lenses (basically, ramda; also not very familiar to OOP programmers, I would imagine), or whether it’s all the way down to imitating Haskell in Javascript (Maybe monad, Either monad, State monad, reader monad, lifting, etc.; basically, crocks).


> doesn’t it make sense to filter out candidates who aren’t comfortable with functional programming

Only if your business success depends on it, which sounds very unlikely. It's already difficult to hire devs, no need to make it harder just so that you can check a programming style checkbox.


But having a programming style checkbox checked means the developer will almost immediately be productive in your codebase.

If you are saying that familiarity with a particular programming paradigm doesn’t matter, then does familiarity with a language matter? People still check for that :-)


On the other hand, if you happen to have the problem of many qualified developers applying for a position, picking the one that happens to know functional programming in additional to the procedural style is often a good choice.


> if you happen to have the problem of many qualified developers applying for a position

But the OP has already said that's definitely not the case:

> We already work with a somewhat small pool of candidates


> Sure, but since you mentioned that your team tries to write javascript in a functional paradigm, doesn’t it make sense to filter out candidates who aren’t comfortable with functional programming in general (and reduce is a marker, albeit very superficial, of candidate’s familiarity with this paradigm)?

Well, the issue of reduced readability wasn't just there for people unfamiliar with functional programming, people used to OOP were just more affected by it. So even if we could hire only functional programmers, using reduce wasn't really improving our work in any way.

> Of course I don’t know how hardcore your functional style is; whether it’s just map-filter-reduce, or whether it’s also currying, composing and lenses (basically, ramda; also not very familiar to OOP programmers, I would imagine), or whether it’s all the way down to imitating Haskell in Javascript (Maybe monad, Either monad, State monad, reader monad, lifting, etc.; basically, crocks).

Rather than seeing it in terms of more/less deep, we choose which functional-style features to adopt depending on how much they help us and how well we feel they fit the language: using the Either monad, for example, didn't seem to make much sense for us given JS's (lack of) types, while on the other hand everything related to first-class functions tends to fit rather well - using partial application for dependency injection comes to mind, which greatly helps us make code easier to test and refactor.

I would say that striving for immutability has proved to be the best bang for the buck so far, as it has greatly reduced the number of bugs in our codebase with little to no cost in readability or maintainability - although probably with a non-negligible cost in terms of performance.


Avoiding the use of reduce/fold is a good thing even in teams with more experience in functional programming.

The problem is not reduce itself, it's the code that uses it.


This sample

    function keyByUsernameReducer(acc, person) {
      return {...acc, [person.username]: person};
    }
    const peopleObj = peopleArr.reduce(keyByUsernameReducer, {});
has O(n*n) complexity and memory consumption of the same magnitude.

This, while being not so functional pure, is better I think:

    function keyByUsernameReducer(acc, person) {
      acc[person.username] = person;
      return acc;
    }
But still... needless function call on each iteration.


You can also use `.map()` and `Object.fromEntries` if you really don’t want to mutate the accumulator:

    function keyByUsername(person) {
      return [person.username, person];
    }

    const peopleObj = Object.fromEntries(peopleArr.map(keyByUsername));
But you should probably be using a `Map` for this anyway:

    const peopleMap = new Map(peopleArr.map(keyByUsername));


You don't need a separate function; this is my little trick to convert an array to an object:

    peopleObj = peopleArr.reduce( (a,x) => (a[x.username]=x, a), {} );


I too prefer to use the "inefficient" notation because it just feels wrong (kind of an antipattern in the functional sense if you will) to mutate the argument. And as long as you are not dealing with a huge input list, this doesn't really matter at all.

Also, why would the memory complexity be n * n? Sure, a new object is created in each iteration, but it's not like you have to keep the previous ones in memory - all but the last one can be garbage collected.


There's nothing wrong with mutating an argument. The whole reason JS passes objects by reference is so that you can do it.


There are tons of wrongs with mutating arguments. The fact that it is possible to mutate arguments in JavaScript does not make it a good idea. Such functions cannot be trusted (how can you know what happens when you call it?), and they are hard to test.

There is _always_ a better alternative to mutating arguments in a function.


Sure, but here it's a local mutation, it won't leak to the rest of the program. There is nothing wrong with mutating acc in reduce.


I agree. Should you always test the reducing function separately, outside the reduce? And if so, what's the real benefit of not mutating the accumulator? It seems silly to copy a transient value which is immediately discarded, and to do it for every reduced element. Does the original reference to the object even survive the iterations? Hmm, I guess it does.

To me it seems weird to be so puristic about a simple reduce function, which by design leads you to mutate the accumulator. For object accumulators it's just a massive waste not to mutate the argument directly instead of copying. As a disclaimer, I'm a big fan of simple code, whether it's FP or not. So if you have a messy reduce function, I guess keeping it immutable makes it somewhat easier to manage.


This is a different argument from the one megous was making (that JS's support for mutating arguments validates the practice in a broad sense).

"Unobservable" or local mutation is completely fine (and pretty common in most functional programs that get to any significant scale).


> There are tons of wrongs with mutating arguments. The fact that it is possible to mutate arguments in JavaScript does not make it a good idea. Such functions cannot be trusted (how can you know what happens when you call it?), and they are hard to test.

I know what happens by reading the function. Pretty much any function that calls a method on some object mutates its argument. It makes little difference if you abstract this mutation behind the method, or do it directly, or do it via a helper function that takes the object as an argument.

> There is _always_ a better alternative to mutating arguments in a function.

No. Most of those are usually more wasteful, because if you can't mutate objects, you'll have to throw them away at every mutation.

Pretty much all the issues I've ever had with mutation were when an object was shared between multiple users, and some users were incorrectly mutating it during an operation that was meant to be temporary.

For example, you store params for an AJAX call in some object, and for some calls you just want to modify one of the params; instead of {...params, prop: 'newval'} you just do params.prop = 'newval'.

But that is a mistake you make a few times, and then you learn to recognize when you're making temporary changes and to use copies of objects. Anyway, it is harmful to avoid mutation at all costs.


> I know what happens by reading the function. Pretty much any function that calls a method on some object mutates its argument. It makes little difference if you abstract this mutation behind the method, or do it directly, or do it via a helper function that takes the object as an argument.

Are we talking about functions in general, or small accumulator functions here? I would argue that mutating arguments in general is a really bad pattern that certainly will lead to weird and subtle bugs in the long run. It is also completely unnecessary – you have inputs and outputs for a reason. You might know what happens if you read the function yourself, but you can never assume that everyone else will.

> No. Most of those are usually more wasteful, because if you can't mutate objects, you'll have to throw them away at every mutation.

No, you don’t. This is an implementation detail. Libraries like immutable.js [1] (and several others, I believe) make it possible to work with immutable data structures without cloning anything.
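
E.g. with Immutable.js (a sketch; updates share structure instead of deep-copying):

  const { Map } = require('immutable');

  const m1 = Map({ a: 1 });
  const m2 = m1.set('b', 2); // m1 is untouched; m2 shares m1's structure
  m1.get('b'); // undefined
  m2.get('b'); // 2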

> Anyway, it is harmful to avoid mutation at all costs.

We can more or less agree on this one :) My comment was unnecessarily blunt. I have used mutations in reducer aggregators myself. But avoiding mutation removes a source of bugs, without sacrificing readability or functionality.

[1] https://immutable-js.github.io/immutable-js/


I prefer to limit mutation to changing what a local variable refers to: mutations that escape into the calling context can create all sorts of really nasty headaches.


> There are tons of wrongs with mutating arguments.

What leads you to this dogma?

Why is mutation of a tightly scoped local variable bad?

The initial accumulator of a reducer usually does not exist until the reducer is invoked, or is defined as an arg to the reducer.

What possible harm could result from mutating it in a tight loop until it is returned?

This is not to say that mutation is fine in all circumstances. That's crazy. There are tradeoffs.

But to declare an entire technique off limits, what are your reasons? You seem to assume your own conclusion.


> how can you know what happens when you call it?

Documentation, good API design. Real-world example of a function I call pretty often (which I’m also interested in knowing your better alternative to):

  const extend = (dest, from) => {
      for (const x of from) {
          dest.push(x);
      }
  };
It doesn’t return anything, so I’d say it’s pretty clear its entire purpose is to mutate one argument.

(Non-stack-overflowing version of dest.push(...from), by the way.)


I just came across a test that passed spuriously because it passed an object to a function that mutated one of its arguments and then used that object in the assertions.


I tend to treat my data structures as immutable, even when they technically aren't.


It’s n^2 time, not space. Each property of the object gets copied each time around, and the number of properties being copied increases each time around.

It’s probably a good idea to start considering this kind of copying an antipattern. The good news is that the correct implementation doesn’t have to involve mutating any arguments, just a local with unambiguous ownership:

  const keyBy = (iterable, key) => {
      const map = new Map();

      for (const value of iterable) {
          map.set(key(value), value);
      }

      return map;
  };

  const peopleObj = keyBy(peopleArr, person => person.username);
If you have an array, you even get to keep using array methods, which everyone loves:

  const keyBy = (array, key) =>
      new Map(
          array.map(value => [key(value), value])
      );


"new object is created in each iteration"

Not just created, but properties are being populated one by one.

"can be garbage collected"

And what do you think the computational complexity of the garbage collector is?

Yes, memory allocation in JS (a GCable environment) is cheap. But garbage collection per se is not; it is at least O(N).


I never claimed garbage collection was free, but the point was memory complexity and not computational complexity. So claiming that the implementation would be O(n * n) in terms of memory complexity still doesn't make sense to me.


Memory complexity in JS translates into the time complexity needed for GC cycles.


And again, you are arguing about time complexity while I'm not, but I guess we can just call it a day...


Just use https://underscorejs.org/#groupBy. This is its reason for existence.


I see a lot of for loops.

I'm working on a project in which no such helpers are available, and I'm stuck with only 'for' ... guess what? I don't miss these functions much; I just don't find myself reaching for them, wishing I had them.

Either a for loop, or a couple of lines of code, or, gasp, you end up writing a short custom utility function to do specifically what you want, which is generally better optimized anyhow.


Usually it is the other way round: you miss what you had somewhere else and learned to love. When you go back, it hurts. Try an 800x600 monitor nowadays and you will cringe. For me, after using functional constructs a lot, I find myself wishing for them in languages where they are not as easily available.


Yes, we miss nice screens when we go back to crappy ones, agreed, but this is my point: I don't miss most of these lodashy functions when they're not there. Maybe a little bit.


I'm not opposed to using `reduce` where it solves a problem in a readable or performant way, but it seems there's a trend among JS developers recently (I noticed this in the react ecosystem) to use `map` / `reduce` for _everything_.

Sure, _some_ problems can be solved elegantly and efficiently with it, but a lot of the time it just impedes readability and maintainability of the code. A `for of` loop (or your C-style for loop) would sometimes be better suited.


I agree. I'm a big fan of functional-style JS but I've seen people waste a ton of time tying themselves in knots trying to get a reduce solution working where a for loop would be trivial (and more readable).

For example, "use reduce to run asynchronous calls in sequence" is an important pattern, and was pretty much the best way to do this (or another way that was basically equivalent) before async/await. With async/await, though, it's trivial (and in my opinion more readable) to just do this in a for loop with await. So I found it a bit odd that his reducer function used async/await syntax.
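
For comparison, both versions (a sketch; tasks is a hypothetical array of functions returning promises):

  // pre-async/await: reduce builds a promise chain
  const runInSequence = tasks =>
    tasks.reduce(
      (chain, task) => chain.then(results =>
        task().then(result => [...results, result])),
      Promise.resolve([]));

  // with async/await: a plain loop reads top to bottom
  async function runInSequence2(tasks) {
    const results = [];
    for (const task of tasks) {
      results.push(await task()); // each call waits for the previous one
    }
    return results;
  }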

There was an article on HN recently about "don't compromise simplicity for dogma", and I think that applies here.


You have to know both methods and the trade off for each.

The performance differences can be quite marked as well.

I wrote some code the other day that could have been done more readably with forEach, but I just used regular for loops because I was working with what was effectively a 3D array (grid/cell/contents) and know that for can be nearly twice as fast as forEach.
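
Roughly the shape of it (a sketch; grid and visit are hypothetical):

  // indexed for loops over a grid of cells, each holding contents
  for (let r = 0; r < grid.length; r++) {
    const row = grid[r];
    for (let c = 0; c < row.length; c++) {
      const cell = row[c];
      for (let i = 0; i < cell.length; i++) {
        visit(cell[i]); // no per-element callback invocation, unlike forEach
      }
    }
  }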


Wouldn't map and reduce be more readable if you were used to reading it? I've been using a lot of Elixir and Ruby, both of which use map and reduce, and neither of which make extensive use of for loops. Map and reduce are what I am used to reading.


When you have some imperative code inside the reduce function and you have to go out of your way to achieve something (i.e., immutability via Object.assign, spread, Array.prototype.concat, etc.), and on top of that have some conditional and imperative code, then the reduce function stops being elegant and looks horrible.

Sure, every problem can be solved in a functional way, but JavaScript is not a purely functional language. Why limit yourself to a subset of the language? IMO, you should use the best tool you can for the problem. Sometimes that's reduce, sometimes it's not.

Also, I love Ruby (and used Elixir), and it's not really a comparison that makes sense, they're different languages with different APIs, syntax, and there's a category of problems in JS you never have to think about when using Erlang/Elixir, or Ruby. I could say that I'm used to seeing maps and folds in Haskell code; doesn't really mean anything in this discussion.


In ##javascript on freenode this comes up so often that we have a macro saved to one of our IRC bots explaining why forEach, map, etc. are better alternatives to reduce in many common situations. By far the most common situation is people reducing arrays into an object when they could be using a simple forEach. The first example in the article is actually this exact thing. It's also awkward considering they mention they're trying to do a "lookup", which makes it sound like they could just call find on the array and be done with it.
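
I.e., instead of a reduce, just (a sketch, assuming the article's peopleArr):

  const peopleObj = {};
  peopleArr.forEach(p => { peopleObj[p.username] = p; });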

Sidenote: I really do enjoy code typeset in typewriter fonts for short code blocks like in the article.


I think it's perfectly fine in many cases. Like say writing a listToKeyed function.

Where I find it fails is when there's a ton of imperative code that's really hard to understand the premise. The beauty of functional programming is that you can take these stanzas and break them into nicely testable, named functions.


What’s a listToKeyed function?


I think they mean something that turns a list (array) of unique elements into a keyed collection (like a map or an object). So something like:

    [
      {id: 1, name: 'Jane'},
      {id: 2, name: 'John'}
    ]
becomes

    {
      1: {id: 1, name: 'Jane'},
      2: {id: 2, name: 'John'}
    }


This is driving me mad. This operation is called groupBy and it is a solved problem. How on earth is HN oblivious to such a fundamental operation?


Who’s HN here? What’s driving you mad? Not really clear what your point is.


Reduce is also known as "fold" in other languages.

https://en.wikipedia.org/wiki/Fold_(higher-order_function)

And especially when using immutable data it can be used for iterating, traversal (lists and trees - say, an in-order fold), filtering and other things. But there it is not just a nice trick as in JS; it is the canonical way of doing things.
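
For example, an in-order fold over a binary tree (a sketch, assuming nodes shaped {left, value, right} or null):

  const foldTree = (f, acc, node) =>
    node === null
      ? acc
      : foldTree(f,
          f(foldTree(f, acc, node.left), node.value), // left, then root
          node.right);                                // then right

  // collect values in order:
  // foldTree((acc, v) => acc.concat(v), [], tree)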


Reduce is slightly different from fold because it does not use an initial value, which means its semantics differ somewhat with regards to restrictions on types.


Reduce in Javascript has the initial value as an optional parameter.


And it doesn't do types, so that's not really an issue either…


Fold does not always have an initial value either. The Wikipedia article linked in the post to which you responded says as much.


Sorry, I worded that poorly: fold usually does not have an initial value.


To balance all the negative comments about readability, I do think .reduce is often more readable than its traditional alternatives.

Simply because its use cases are always the same so you can guess the code without reading it:

* Reducing an array of numbers, with a number as initial value? => We are going to do some operations on those numbers.

* Reducing an array of objects, with a number as initial value? => We are going to do some operations on one or several props of those objects.

* Reducing an array of objects, with an empty object as initial value? => We are going to create a key/value object with computed key names.

Then just by reading the name of the variable on the left of the assignment, you know exactly what's going on.

And if you don't need to know about the details, you can just jump to the next statement.

    const products = [product1, product2, product3]
    const sumPrices = products.reduce(/* I don't need to read this */, 0)

    const users = [users1, users2, users3]
    const usersByFirstname = users.reduce(/* I don't need to read this */, {})
    const usersByLastname = users.reduce(/* I don't need to read this */, {})


Reader mode is a huge UX upgrade here. Recommend it.


It doesn't show the code's comments though.


Pocket's reader mode does, though. (And the rest of the text looks pretty much identical to Firefox's reader mode.)


reduce() is how you get imperative programming in a functional setting. JS is a perfectly good imperative language; for all these examples, you may as well use a for loop.

Personally I prefer a functional style, but if you're going to use an imperative style, just use JS's built-in imperative for loop and assignment operators.


I seriously hope nobody takes the advice in this article seriously.

Here are the conclusions you should take away from the article: Don't use reduce to reimplement map. Don't use reduce to reimplement flatMap. Don't use reduce to reimplement groupBy. Don't use reduce to reimplement filter. The minMaxReducer is the only legitimate use of reduce in the entire article.

The above functions require less boilerplate code. With reduce you have to write additional bookkeeping code for the accumulator, and without mutable data structures [!] it would be incredibly inefficient.

[!] We are far away from functional programming at this point.
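
For instance, map via reduce versus the built-in (xs is a hypothetical array):

  // map reimplemented with reduce: accumulator bookkeeping, and
  // O(n^2) if you insist on copying instead of mutating
  const doubled = xs.reduce((acc, x) => [...acc, x * 2], []);

  // the built-in says the same thing directly, in linear time
  const doubled2 = xs.map(x => x * 2);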


While we're on the subject of reduction, I really think transducers [0] deserve a mention.

The general idea is to stack operations and use reduce to perform the entire transformation in one go, as opposed to chaining calls to map/filter etc.

Which means first class transformation pipelines and less wasted effort among other things.

[0] https://clojure.org/reference/transducers
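
A tiny hand-rolled JS sketch of the idea (real transducer libraries are more involved):

  // a transducer wraps the next reducing step
  const mapT = f => next => (acc, x) => next(acc, f(x));
  const filterT = pred => next => (acc, x) => (pred(x) ? next(acc, x) : acc);

  // compose the steps, then run everything in a single reduce pass
  const push = (acc, x) => (acc.push(x), acc);
  const xform = mapT(x => x * 2)(filterT(x => x > 4)(push));

  [1, 2, 3, 4].reduce(xform, []); // -> [6, 8]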


When ECMAScript 2019 hits the shelves it will have flatMap(). flatMap is the "cornerstone of monads" (poetically speaking), at which point monads will lose much of their mystique and things will be much advanced in programmer land. You can use monads where you need them.

http://2ality.com/2018/02/ecmascript-2019.html

More on ES2019 at https://medium.com/@selvaganesh93/javascript-whats-new-in-ec...
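
For a taste, arrays already behave like a monad once flatMap exists (a tiny sketch):

  // flatMap = map + flatten one level; this is "bind" for the array monad
  [1, 2, 3].flatMap(x => [x, x * 10]);
  // -> [1, 10, 2, 20, 3, 30]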


It's funny to see other perspectives. I've never used array reduce for numbers.


Optimize for readability above everything else. Almost all of these examples I'd argue would be better suited for something like .map(). Little point in being overly cute, and if you are working with arrays large enough such that doing multiple passes is that much of a performance hit client-side, you have other issues going on.


> Almost all of these examples I'd argue would be better suited for something like .map()

Roughly half of his examples take an array and return either a differently-sized array, or a different data structure altogether. I can’t imagine how one could argue for a map in these cases.


Use the right tool for the job. Array to potentially different-sized array? Use flatMap (which he implements in his example, albeit not in an optimal way). Array to single object or value? Use reduce.


Indeed, reduce is the Swiss Army knife, since you can use it (sometimes when you shouldn't) to implement most of the other methods like map/filter etc.


I think the intent was simply to teach. Obviously if something can be done with map() instead of reduce(), it'll be more readable as map(). But seeing how the other list functions can be made out of reduce() helps you understand how it works.


That's also my minor nitpick: I feel like the syntax `list.filter(myFilter).map(transform)` is clear about what I'm doing, and each function has a clear goal.

Otherwise I enjoyed reading this.


While your example reads a bit better, you’re now iterating through your list twice.


Most code isn't performance-critical.

This pattern will probably be dominated by the time taken to filter and map, not loop iteration.


Does JavaScript have adapters to convert lists into generators or lazy iterators?


Unfortunately not in a way that can optimize these methods. This is one reason the Java Streams API, which has a similar purpose (e.g. map, filter, flatMap, reduce, etc.), is a better design IMO because you can chain together method calls without creating (and iterating) the full array at each step.
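
You can hand-roll lazy stages with generators, though (a sketch; bigArray is hypothetical, and there's no engine-level fusion like Streams):

  function* lazyFilter(pred, it) { for (const x of it) if (pred(x)) yield x; }
  function* lazyMap(f, it) { for (const x of it) yield f(x); }

  // one pass, no intermediate arrays until the final spread
  const result = [...lazyMap(x => x * 2, lazyFilter(x => x % 2 === 0, bigArray))];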


Technically there is no iteration in filter/map/reduce at all.


That's incorrect. All of those functions are implemented using iteration.

Consider this excerpt from the .map() polyfill on MDN:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

    // 7. Let k be 0
    k = 0;

    // 8. Repeat, while k < len
    while (k < len) {
      // [loop body omitted]

      // d. Increase k by 1.
      k++;
    }





