Pipe Operator (|>) For JavaScript (github.com/tc39)
309 points by nassimsoftware on Jan 20, 2023 | 426 comments



Is this:

   Object.keys(envars)
     .map(envar => `${envar}=${envars[envar]}`)
     .join(' ')
     |> `$ ${%}`
     |> chalk.dim(%, 'node', args.join(' '))
     |> console.log(%);
Really better than:

  console.log(chalk.dim(
      `$ ${Object.keys(envars)
        .map(envar => `${envar}=${envars[envar]}`)
        .join(' ')
      }`,
      'node',
      args.join(' ')
  ));
That's the real-world example they have (I reformatted the second one slightly, because it looks better to me). Neither seems very good to me, and the |> version doesn't really seem "less bad".

Can also write it as:

   process.stdout.write(chalk.dim(
     `$ ${Object.keys(envars)
     .map(e => `${e}=${envars[e]}`)
       .join(' ')
     }`,
   ))
   console.log(chalk.dim('node', args.join(' ')))
Which seems clearer than either because it splits out "print env variables" and "print out node args". And it would be even better with some sort of helper to convert an object to k=v string:

   console.log(chalk.dim(`$ ${dumpObj(envars)}`, 'node', args.join(' ')))
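where `dumpObj` could be something like this (a hypothetical helper, just to sketch the idea):

    const dumpObj = obj =>
      Object.entries(obj).map(([k, v]) => `${k}=${v}`).join(' ')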
---

I also feel this:

> In the State of JS 2020 survey, the fourth top answer to “What do you feel is currently missing from JavaScript?” was a pipe operator.

Is the wrong way to go about language design. Everyone wants something different, and if you just implement the "top 5 most requested features" you're going to end up with some frankenbeast of a language.


The F# syntax looks/acts a lot better here (especially paired with lodash). I also feel your code example wasn't done in the way people would actually use pipes.

    import {map, join} from 'lodash/fp' //iterators have better performance

    envars
    |> Object.entries
    |> map(([key, val]) => `${key}=${val}`)
    |> join(' ')
    |> x => chalk.dim('$ ' + x, 'node', join(' ', args))
    |> console.log
Even without lodash, it's still easy to read.

    envars
    |> Object.entries
    |> x => x.map(([key, val]) => `${key}=${val}`)
    |> x => x.join(' ')
    |> x => chalk.dim('$ ' + x, 'node', args.join(' '))
    |> console.log
For the sake of completeness, here's the Hack-based variant:

    envars
    |> Object.entries(%)
    |> %.map(([key, val]) => `${key}=${val}`)
    |> %.join(' ')
    |> chalk.dim('$ ' + %, 'node', args.join(' '))
    |> console.log(%)


The F# syntax would endlessly confuse me though. I'd always wonder whether |> join(' ') means join(x, ' ') or join(' ', x).


The F# syntax is an incredibly simple bit of syntactic sugar.

Just replace this:

    x |> f
With this:

    f(x)

For the join example, you must do this:

    xs
    |> (x => x.join(' '))
It de-sugars to:

    (x => x.join(' '))(xs)
... which of course is simply:

    xs.join(' ')


Sure, I know. What I actually mean is that I don't grok curried functions.

Does join(a)(b) mean join(a, b) or join(b, a)? Does it mean a.join(b) or b.join(a)? And is a or b the delimiter? I don't really feel confident without looking it up or trying it out. I have a similar problem with Haskell's function syntax a -> a -> a.

If the function was called makeJoinerBy(sep)(array), it would be somewhat clearer:

    join = makeJoinerBy(' ')
    result = join(array)


In F# syntax the right side is always an expression that evaluates to a unary function, so `|> join(' ')` means `|> join(' ')(x)`

(the requirement being that join(' ') returns a function that takes one arg)
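For example, a curried `join` in the style of lodash/fp could be sketched as:

    const join = sep => arr => arr.join(sep);

    // xs |> join(' ')  desugars to  join(' ')(xs), which is xs.join(' ')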


The top example relies on Lodash/fp having curried functions you can use.

The middle example would be using native stuff which is why you have the wrapper function (kinda like you'd have a function wrapping a callback)


Your example wouldn't work as the function would need to be unary.


Having done some personal projects in F#, I'm a huge fan of the syntax. However, we would need most standard functions in JS to be curried to take advantage of it.
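In practice you'd end up hand-currying wrappers over the built-ins yourself; a minimal sketch:

    // hand-curried wrappers over the native array methods
    const map = fn => arr => arr.map(fn);
    const join = sep => arr => arr.join(sep);

    // then:  envars |> Object.entries |> map(([k, v]) => `${k}=${v}`) |> join(' ')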


> it's still easy to read

That’s highly subjective I’m afraid.

“Take entries of envvars, turn that into k=v, join that by a space, make dim $, previous that, and args, log that”.

Personally I have no clue what’s going on at first glance with all these %%% “thats”. It’s meant to be declarative, but reads imperatively instead.

  console.log(chalk.dim(
    '$',
    ...Object.keys(envars)
      .map(k => `${k}=${envars[k]}`),
    'node',
    ...args,
  ))
“Log dimmed $, k=v pairs of envvars, node, then args”.


That's why I prefer the F# version where it's just the names of functions or anonymous arrow functions. Nothing magical about them. It just calls the functions in order from top to bottom.


I mean it’s still not very readable. Reading through the code I can tell that it does.. some strong things with envars.


I won't speak about the specifics of the chosen syntax (Hack/F#) but in general - absolutely.

With pipes you can visually follow the manipulations and function calls in the order that they happen instead of being forced to scan the code inside-out & outside-in, matching parentheses and function call parameters in your head, while still visualizing intermediate results to get 1 final return value.

I find Elixir code much easier and quicker to understand, in large part thanks to its (admittedly, imperfect) pipe syntax. Code written in this way is also much easier to debug because you can quickly add `console.log`, breakpoints, or equivalent between the pipes.

I find this unnecessarily time-consuming and difficult to parse and I'd likely raise some flags in a code review:

  console.log(chalk.dim(
      `$ ${Object.keys(envars)
        .map(envar => `${envar}=${envars[envar]}`)
        .join(' ')
      }`,
      'node',
      args.join(' ')
  ));

Without pipe syntax, I'd refactor this to:

  const envStr = Object.keys(envars)
        .map(envar => `${envar}=${envars[envar]}`)
        .join(' ');
  const styled = chalk.dim(`$ ${envStr}`, 'node', args.join(' '));
  console.log(styled);
  
But you often find yourself having to add additional logic, e.g. to scrub sensitive values, so it would probably end up closer to:

  const sensitiveEnv = [...];
  const envStr = Object.keys(envars)
        .filter(envar => !sensitiveEnv.includes(envar))
        .map(envar => `${envar}=${envars[envar]}`)
        .join(' ');
  const styled = chalk.dim(`$ ${envStr}`, 'node', args.join(' '));
  console.log(styled);


Late reply, but I find that with a bit more modernizing, and more consistent usage of chalk's arg-concatenation feature, this can be turned into something much terser:

  const envs = Object.entries(envars).map(entry => entry.join('='));
  console.log(chalk.dim('$', ...envs, 'node', ...args));
I don't really have a point to make, perhaps just that there are often simplifications possible with a bit of extra creativity.


the refactored version of the example is much worse


Do you mind elaborating? I find the refactored version significantly easier to understand than the original. Readability is one of the top priorities for me and I find the original example too clever, in a bad way.


I dislike variables that are used just once. Sometimes they're a "necessary evil", but rarely. For "envStr" it's defensible IMHO as it splits up some of the complexity, but I would rather just use a helper function, which has the same "splits complexity" effect and is re-usable.

"styled" seems entirely pointless here.


This actually surprises me!

One habit introduced to my current team by a former co-worker involves having even more intermediate keys than that:

    const sensitiveEnv = [...];
    const envKeys = Object.keys(envars)
    const safeKeys = envKeys.filter(envar => !sensitiveEnv.includes(envar));
    const safeEnv = safeKeys.map(envar => `${envar}=${envars[envar]}`).join(' ');
    const styled = chalk.dim(`$ ${safeEnv}`, 'node', args.join(' '));
    console.log(styled);
In the beginning I wasn't a fan of it, as I do a lot of Haskell (and have point-free idioms on the tip of my fingers), and it's obviously unnecessary, as you said yourself. But with time I learned to appreciate this style for its simplicity and consistency.

Now, of course, with a pipe operator (or other similar constructions) you can get the consistency without the intermediate names.


I just find it easier when each "thing" that happens is one self-contained statement, if that makes sense. Makes it easier to see what does what. I don't think one way is "better" or "worse" btw; all I can say is that to my brain, it's harder to follow. This is what can make programming in a team hard.

I'm also one of those people that likes single-letter variables. I know some people hate it with a passion, but I find it very convenient. Just makes it easier to read as there's less to read.

I'm not smart enough to do Haskell, so I can't say much about that.


Oh, a kindred spirit. I also love single-letter variables where they make sense. I have a math-heavy background, so they're totally cool for me, BUT I get why most programmers would hate 'em. I also like them since they were the "norm" in old C# LINQ code where I learned functional programming. If I were alone in the programming world I would have written the example above with single letter vars.

I agree with your remarks, there's no right or wrong, it's like tabs and spaces.

Btw I'm also not smart enough for Haskell, but it hasn't stopped me so far ;)


Extracting `envStr` is definitely the highest impact change for me. I dislike temporary variables too when they don't represent a meaningful intermediary result, but in this case I see them as a lesser evil. I agree that `styled` is more about personal preference.

This is why I'm happy to see the pipe syntax proposal, it avoids unnecessary temporary variables while simultaneously aiding readability.


> This is why I'm happy to see the pipe syntax proposal, it avoids unnecessary temporary variables while simultaneously aiding readability.

The thing is I don't think it's all that much more readable. No matter which syntax you use, there's still the same number of "things" going on in a single statement.

I do have to admit I never worked with a language that uses |>, so I'm sure that with increased familiarity with this it would become "more readable" to me, but one has to wonder: just how many calling syntaxes does one language have to support? More syntax also means more potential for confusion, more ways to abuse the language/feature, more "individual programming styles", more arguing over "should we write it like this or that?", more overhead in deciding how to write something, harder to implement the language and write tooling for it, and things like that.

There is always a trade-off involved. The question to ask isn't "would this be helpful in some scenarios?" because the answer to that is always "yes" for practically any language feature. The question to ask is "is this useful enough to warrant the downsides of extra syntax?" I'm not so sure that it is, as it doesn't really allow me to do anything new that I couldn't do before as far as I can see. It just allows me to make things that are already too complex a bit more readable (arguably).


> I do have to admit I never worked with a language that uses |>

Have you never worked with Bash? It's basically the same thing


Indeed it is, but it's kind of a different context than a "real" programming language. What works for one doesn't necessarily work well for the other.


Truly, one of the main reasons I might stop increasing the complexity of a bash one-liner and move it to a full shell script, or bail for Python, is specifically so I can turn those chains of pipes into imperative steps with temp vars so that they're actually legible and easy to reason about. I can't imagine why I'd want to go the other direction.


Those variables help to document intent by naming the intermediate values. They also make step-debugging more convenient. In languages with type declarations, they also serve to inform about the type of the intermediate value, which otherwise is invisible in a pipe sequence.


IDEs for languages with pipes allow break points on pipelines and can show inferred type annotations mid pipeline. Given the popularity of JS, this tooling will appear rapidly after pipe standardisation.


In this case I like that it gives you a hint about what's going on. "chalk.dim" sure doesn't.


The refactored version is doing what pipes would do in a language that doesn't support pipes; it's reordering the statements into execution order rather than having to use a mental "stack" to grok the original version.

Given that the OP stated that they like the pipe syntax, and the refactored version illustrates the hoops you'd have to jump through without it, I guess your comment is just a strange way to agree with the OP?


why? are you of a 'pointfree' opinion? what are your concerns? https://wiki.haskell.org/Pointfree

personally i detest pointfree syntax. having intermediate values makes it much easier to step through code with a debugger & see what is happening. and it gives the reader some name for what the thing is, which is incredibly useful context. the enablement of pointsfree styles is one of my main concerns about potential pipe operator syntaxes: the various syntaxes that have been raised often introduce implicit variables which are passed, and i greatly fear the loss of clarity pointsfree style brings.

maybe there's something beyond the pointsfree vs not debate here that i'm missing, that makes you dislike the refactored example. personally i greatly enjoy the flatness, the step by step production of intermediate values, each of which can be clearly seen, and then assembled in a last final clear step. that is much more legible to me than one complex expression.


I think it really depends on the language.

In languages that more easily support repl-driven development (e.g. Clojure), I think this is less of an issue. If you have a handful of pure functions, you can quickly and easily execute them via the repl, so you get a lot of clarity as to what those intermediate values look like even if the functions are ultimately used in a more point-free style.

But on the other hand, this would be a nightmare in C# (what I use in my day job). Sure, you can execute arbitrary expressions while debugging C#, but IMO you can't really achieve the same clarity. I'd rather see intermediate values like you suggest since it's easier while debugging, vs a bunch of nested function calls.


I agree that

(1) named intermediate values are sometimes more readable ... though I have examples where it's very hard to come up with names and not sure it helped

(2) debugging is easier.

For (2) though, this IMO is a problem with the debugger. The debugger should allow stepping by statement/expression instead of only by line (or whatever it's currently doing). If the debugger stopped at each pipe and showed the values, (2) would mostly be solved. I used a debugger that worked by statements instead of lines once, 34 years ago. Sadly I haven't seen one since. It should be optional though, as it's a tradeoff. Stepping through some code can get really tedious if there are lots of steps.


Intermediate variables also have the benefit to make not just the last value available in a debugger view, but also previous values (stored in separate variables). Of course, a debugger could remember the last few values you stepped through, but without being bound to named variables, presentation would be difficult.


It's hard to understand which statement the debugger has a break point set to when you can put many breakpoints on the same line

I have tools that can do it, but I'll still have a better time splitting out a variable for it, especially since what I really want is a log of all the intermediate values, so I can replicate what it's doing on paper


TC39 proposals often have rubbish real world examples that should never see the light of day in a JS codebase.

It makes me really question the judgement of the people that are working on this language, and it explains how some of the shittier proposals manage to slip in and why the good ones are misused all over the place in every real world codebase when this is the kind of guidance devs have on where to use fancy new features.

In a few years everyone is going to collectively lose their fucking mind once again and decide that ternaries are now the devil and every React component should be chock full of do expressions. I can see that coming clear as day. That might finally be enough to get me to throw in the towel if the also incoming decorator hell doesn't do it.


> TC39 proposals often have rubbish real world examples that should never see the light of day in a JS codebase.

This example was literally taken directly from real-world code in the React codebase (a script to call jest-cli). While you're correct that there are probably a lot of clearer and more readable ways to write this snippet, the fact of the matter remains that people write code exactly like these "real world examples" all of the time.


Yes and those people with a proven track record of writing awful code will now have another new tool to apply their valuable skills with. I can't wait.


I am still hoping that decorators might not pan out :fingers-crossed:


Wait, what's wrong with decorators? They seem pretty nice in python.


I like decorators as a concept... but there have been a couple different implementations now... the TypeScript version is probably the most broadly used, but there were others before it via Babel (formerly 6to5).

I was an early adopter of the original decorators proposal as well as the F#-style pipeline operators. The more time has moved on and other bits made it into JS proper, I'm far more inclined to stick to what's "in the box"... Have even considered just writing straight JS + ESM (modules). Of course, I also like JSX, though not sure of any proposals of how to get that "in the box" as e4x died on the vine and a lot of other efforts didn't gain much traction either.


> It makes me really question the judgement of the people that are working on this language

Only now?


> some of the shittier proposals manage to slip in

Examples?


> React component should be chock full of do expressions

If the motivation behind these features is React components, JavaScript clearly lacks proper logic in array (and object) literals, like

  '[' if (cond) […]expr1 [else […]expr2] ']'
Although it seems pretty strange to add syntax only because some specific user interface library together with a specific language extension could benefit from it.


Do you mean this?

    [cond ? expr1 : expr2]
Or maybe this:

    [(() => {/*code*/})()]


More like this:

  [...(cond ? [expr1] : [])]


The Hack proposal is horrible imo because it doesn’t look like JS anymore. The F# proposal is 99% of the benefit whilst being actually approachable.


> The Hack proposal is horrible imo because it doesn’t look like JS anymore.

I mean it looks exactly like JS with an additional % placeholder.


And a lot more like JS than the F# proposal.


How so? JS already has operators, but placeholders are a totally new concept.


Is there another JavaScript feature that looks like a list of function names but which causes actions to be performed? Or operators that operate on both functions and values? The placeholders are very similar to variables, so do what they look like they do, and it's only the implicit definition that is new. The explicit placeholders feel much more like JavaScript to me than the implicit function calling of the F# proposal.


- functions and lambdas are already first-class values in JS

- we already have operators on values, such as +

- |> is just another operator on values


It absolutely is, but time and time again TC39 has followed a few champions over the waves of people favoring F# pipes.

It's incredible to me TC39 would rather have this monstrosity over F# pipes, which are actually pretty similar to how pipes work and read in most functional languages, including the Unix `|`.


On the flipside, I think the fact that this is still such an ongoing debate is part of why the pipeline operator is still stuck in Stage 2 and having trouble getting into Stage 3, despite being listed as a "priority" multiple times on and before 2020. TC39 isn't blindly following anyone here; they seem to be dragging their heels hoping that someone comes along with an even better compromise between the styles.


Before ES6: “The arrow function proposal is horrible imo because it doesn’t look like JS anymore.”


"Why did it take so long?" and "Pointless bloat!" reactions were much more common:

https://news.ycombinator.com/item?id=3780367

https://news.ycombinator.com/item?id=6418337


It’s a balance. I actually support the addition of the F# proposal, minus the await parts.


I think we should have BOTH.

Use |> for the Hack proposal

and -> for F# -style.


I'd far rather save that for an alternative switch expression with pattern matching.

    const slowSum = (lst: {x: number}[]) =>
      switch(lst) {
        [{x}, ...y]   -> x + slowSum(y) //not tail recursive
        [{x}, {x: y}] -> x + y          //alias second x to y
        [{x}]         -> x              //handle length 1
        []            -> 0              //handle length 0
      }


Having been watching, and a couple times playing with, Rust... would definitely like a similar pattern matching system in JS. I still think the C# syntax for this feels a bit alien: `varname switch {...}`.


As an F# developer who also does some JS and Java this would confuse the hell out of me!

|> is the best operator


There's no need. The % placeholder can be used for a concise lambda syntax and would work with an F#-style application operator.
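Something along these lines, reading each %-expression as shorthand for a unary arrow function (hypothetical syntax, not part of either proposal as written):

    envars
    |> Object.entries                          // plain unary function, F#-style
    |> %.map(([key, val]) => `${key}=${val}`)  // shorthand for x => x.map(...)
    |> %.join(' ')                             // shorthand for x => x.join(' ')
    |> console.log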


All of the examples look unreadable and hard to debug to me. Why not something like this rather than one massive nested instruction?

    let keys = Object.keys(envars);
    let text = keys.map(envar => `${envar}=${envars[envar]}`).join(" ");
    console.log(chalk.dim(`$ ${text}`, "node", args.join(' ')));
That way you can easily inspect and verify the intermediate values at runtime. Helpful for you to see if your code works as expected, helpful for others to see what the code is doing.


Bad examples can doom a project. It's amazing how much time people can spend on a solution while completely ignoring the documentation.

Pipes are probably better used on a set of operations that takes input and runs multiple functions on the input, rather than fiddling with string concatenation multiple times.

What do we use multi-pipes for in unix? I can't recall the last time I used one where the middle action wasn't a filter of some sort. grep -v to remove lines, or colrm or awk print to cherry-pick fields from a multi-field line. Once in a very long while I need to do three commands with filters between them. Beyond that it's too complicated and I make a script to handle several of the steps.
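Translated into the proposed Hack syntax, that filter-heavy shape might look like this (`logLines` is a made-up input):

    logLines
    |> %.filter(line => !line.startsWith('#'))  // like grep -v '^#'
    |> %.map(line => line.split(/\s+/)[1])      // like awk '{print $2}'
    |> %.join('\n')
    |> console.log(%)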


The issue with taking examples from real-world code & converting them is that there's no guarantee the real-world code is good. It usually isn't, so you're comparing bad with bad.

A more aggressive reformulation would be to prefix the original code with

  const envOutput = Object.keys(envars).map(envar => `${envar}=${envars[envar]}`).join(' ');
  const argsOutput = args.join(' ');
Leaving the example being converted as simply:

  console.log(chalk.dim(`$ ${envOutput}`, 'node', argsOutput));
vs

  envOutput |> `$ ${%}` |> chalk.dim(%, 'node', argsOutput) |> console.log(%);
This more clearly highlights the obvious limitations of pipelines - piping envOutput but not 'node' nor argsOutput is a jarring syntax-mix here. Though I think it offers some hope for them working well in other scenarios - possibly in curry-heavy applications.


It's not jarring. It expresses which value is the subject being processed and which are additional parameters of the processing steps. This lets you keep all the parameters of each processing step together, and keeps the processing steps visually separable.


> It's not jarring. It expresses what is the subject that's being processed and what are additional parameters of the processing steps.

Only if the first parameter of the function is the sole subject & subsequent parameters are "additional". Which isn't the case in this example: all params are equal subjects.


Right. `chalk.dim()` is probably not the best thing to use as an example for this.

But you can still think about this as merging two additional pipes into the one you are processing. So 'node' and argsOutput are just very short pipes that you are merging into the flow of the current one.

Btw... chalk API feels super weird.


> Neither seems very good to me, and the |> version doesn't really seem "less bad".

For me the piped version seems way better, way cleaner, with cleanly separated processing steps.

I don't like nested steps because, when you nest functions that take multiple arguments, the expression tends to grow in both directions, and parts belonging to the same step end up really far from one another, making it hard to distinguish them from the data that's pushed through the pipeline.

% is short so it keeps parts of the same processing step together.

This syntax also lets you express which part is the data being processed, which parts are the processing steps, and which are just additional parameters of those steps.


Just because pipes aren't the ideal solution for that convoluted example they decided to list doesn't mean they aren't still extremely useful for a consistent set of problems. Like anything, they can be abused to make code less readable.

I also have a feeling this doc lists every sort of use case, not because they are advertising it as "always the better version", but because it's a design doc that needs to factor in edge cases, such as using a pipe following chained function calls.


I'm just using the example they posted in the README as an "before-after". I think that's a reasonable thing to do when evaluating "do I think this would be a good feature?" Blame the author(s) of that document if you don't think it's a good example.


That seems to be missing the point of the document then. Before/after in a design doc isn't "this is a better way to do it".


What is the point then?


I don't see design docs as a tutorial on how to be a better programmer using a new syntax, the goal is to flesh out a new concept built on some fundamental ideas.

You cherry-picked one example of a tangled/messy block of code that they used to communicate a specific idea around "left to right" comprehension and flow of the data using the new syntax. For that specifically it did a fine job.

But that doesn't mean that's the way you should be writing code in the first place, given it started with a mess and only used one piece of syntax to change it.

I will admit it's a poor example to open with. But for a design doc about exploring and debating ideas it's fine.


The way I read the proposal is that they used a real-world example from a commonly used codebase (React) to demonstrate how this feature would improve it. I think that's a good approach as features are intended to address real-world concerns, and concrete real-world examples help with that.

I don't think it's fair to say it's "cherry picking" to focus on the example they focus on themselves. They have a few other examples too, but I would say "I don't see how [NEW] is better than [OLD]" for many of those as well (and for some, I think the [NEW] is significantly worse).


Considering it was their opening example (which they reiterated multiple times), I will concede it was a poor choice, since, yes, one of the top goals is selling the general idea. So it's fair that the general audience would take it at face value; especially when they use it multiple times as a real-world use case, it is hard to take it any other way.

Plus the doc is #1 on HN after all, which could IRL push it beyond a proposal stage if done right.


I do prefer the piped version mainly because it removes annoying nesting but that's not high on my prioritized list of discomforts. The issue of deeply nested function calls is a thing, but I just use some temp variables to avoid it when it really starts looking ugly.

Then again, this is my typical opinion to a lot of the proposals - I see what this is useful for but I haven't experienced enough pain to really argue for it. However, if it does make it into the spec then I will use it probably because it's there.

I also recognize my stance as one that not many people like in other contexts because it can be interpreted as me not looking to improve my situation. But on the flip side I think cluttering the language specification with a lot of superficial syntactic sugar is a mistake.


Not for me. In the pipe example you must first read that something is done with the object, and only at the end that it will be output.

If I read code and want to know what it will do, I want to read first that it outputs something. If that's what I'm searching for, I then read the nested code.

With the pipe style I waste more time, even if the code looks clearer.

A less overused example could look like this:

  console.log(
    Object.keys(envars)
      .map(envar => `${envar}=${envars[envar]}`)
      .join(' ')
    |> `$ ${%}`
    |> chalk.dim(%, 'node', args.join(' '))
  )
If i don't care about console.log, i don't have to read the nested code.


Both examples are bad. They are both very unreadable, an example of slapping things together without regard for readability.


During the 2020 survey there were two versions of this operator on the table: Smart Mix and F#. A few months later the committee advanced a third option, Hack, which is kind of like Smart Mix but always requires the placeholder token.

I suspect that many developers expressed their desire for this operator under the belief that it would make their style of programming easier, which F# indeed does for many code bases, particularly those that use libraries such as fp-ts or rxjs.

However, the version that was advanced fails to deliver that.


I'll bite:

    let _= Object.keys(envars).map(envar => `${envar}=${envars[envar]}`).join(' ')
    _= `$ ${_}`
    _= chalk.dim(_, 'node', args.join(' '))
    _= console.log(_);
This is possible in current JS syntax.

You can also cram it into one line with semicolon:

    let _= Object.keys(envars).map(envar => `${envar}=${envars[envar]}`).join(' ') ;_= `$ ${_}` ;_= chalk.dim(_, 'node', args.join(' ')) ;_= console.log(_);

So it's basically Hack syntax for pipes with just ;_= instead of |> and _ instead of %. And you need to 'mark' the beginning of the pipeline with `let _=`

Additional 'benefit' is that until you leave the scope you can access output of the last pipe through _.

You can always use ;_= for consistency and pre-express your intent to use the piping in the current scope by doing `let _;` ahead of time:

    let _;

    ;_= Object.keys(envars).map(envar => `${envar}=${envars[envar]}`).join(' ')
    ;_= `$ ${_}`
    ;_= chalk.dim(_, 'node', args.join(' '))
    ;_= console.log(_);
Full disclosure, I hate all of the above but I love Hack syntax with |> and %.

To better convey the direction of the pipe you might even use a letter that is oriented to the right:

    let D;
    ;D= Object.keys(envars).map(envar => `${envar}=${envars[envar]}`).join(' ')
    ;D= `$ ${D}`
    ;D= chalk.dim(D, 'node', args.join(' '))
    ;D= console.log(D);
And if you want to use your pipe as an expression or return it from a function, `,` instead of `;` might be better:

    let D;
    return D= take(D) ,D= bake(D) ,D= serve(D);
Surprisingly semicolon auto-insertion doesn't interfere with this:

    let D;
    return D= take(D) 
    ,D= bake(D) 
    ,D= serve(D);


This is the code equivalent of that "illegal LEGO techniques" thing to me.


;D


;D is perfectly legal syntax too.

Btw I think I'll name this operator duck ,D=

Maybe the pipe syntax extension will be introduced if we threaten to make duck operator a thing?


Within the current JS syntax, you can make the code look readable with a chain or pipe built from a simple function: https://github.com/beenotung/tslib/blob/master/src/pipe.ts

You can even use an array if you don't need to peek at the value in the middle of a chain of operations.

Example:

    createChain(
      Object.entries(envars)
      .map(([key, value]) => `${key}=${value}`)
      .join(' ')
    )
      .map(keys => `$ ${keys}`)
      .map(pattern => chalk.dim(pattern, 'node', args.join(' ')))
      .use(line => console.log(line))

In each step, you can name the intermediate result according to its meaning, which should be more readable than calling it %.
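A minimal sketch of such a chain helper, assuming the linked pipe.ts works roughly along these lines:

    function createChain(value) {
      return {
        map: fn => createChain(fn(value)), // transform and keep chaining
        use: fn => fn(value),              // terminal step
      };
    }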


> Everyone wants something different, and if you just implement the "top 5 most requested features" you're going to end up with some frankenbeast of a language.

Very much this. They need a new question on the survey "does JavaScript need new syntax or can we just leave it alone and work on perf/tooling/etc. of the existing stuff without throwing a bunch of new junk in"?


The flaw with that approach to language development is that there's no limiting factor.

Can you imagine a survey where everyone says they're satisfied with the language as-is? Of course not. That's statistically impossible. Even if 99% of the needs of developers are satisfied by a language, people are going to eventually answer that they'd like something that the language doesn't have, and that's always the case.

Let's say JavaScript implemented nearly every single language feature that's ever been invented. That is except the goto statement. Upon being surveyed on what they think is currently missing from JavaScript, a significant number of developers will have to respond with goto. Does that mean JavaScript is actually "missing" goto and that it was a mistake that it was never implemented in the first place? Of course not.


Method chaining is better, but you need the library/class code to support it. If you've got a bunch of free functions, it can't help you. The pipe operator is useful when you use other people's code.

On the other hand, pipe is less useful when you don't have partial application of functions built in. So yeah, I think this is not worth it.


This works when your language doesn't have an implicit "return nothing" as its default. Nothing says "imperative" like "this routine may or may not take input, but you ain't getting nothing back out of it."

Ironically, OO gets thrown under the bus a lot lately because it ain’t functional. But in reality, early OO languages like Smalltalk and CLOS were much more functional in this regard, you always had an implicit return of self, which could be chained easily.

I (over)use the piping operator (|>) in Elixir a lot. I just like the way it reads. But one thing I don’t love, is that it’s not an easy sequence to type.


Elixir's pipe mechanics probably aren't a great model for implementing pipe operators in existing languages. A large part of what makes |> work in Elixir is the steadfast commitment to f(most_likely_to_be_piped_param,other,params) argument ordering in the standard library.
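Sketched in JS terms - Elixir's |> inserts the piped value as the first argument, which only reads well because the standard library consistently puts the "subject" first:

    // Elixir-style "subject first" ordering, written as plain JS functions
    const join = (list, sep) => list.join(sep);
    // list |> join(' ')  would then mean  join(list, ' ')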


fluent apis are a matter of preference.

I personally find pipe style much more readable.


JavaScript has been a frankenbeast of a language right from its inception, by design.

However, the 5th most requested feature in that poll is somehow "functions", and "sanity" also appears in the results, so this particular source may not be a good one.


The thing I dislike about it most is the constantly-rebound % variable. It means something different in each line. In this case they have elected to keep it as a string throughout the pipe, but this ‘more pipey’ version of the code has it start out as an array then turn into a string halfway through the pipe, which feels dangerous (and is presumably why they didn’t take the example this far):

    Object.keys(envars)
      |> %.map(envar => `${envar}=${envars[envar]}`)
      |> %.join(' ')
      |> `$ ${%}`
      |> chalk.dim(%, 'node', args.join(' '))
      |> console.log(%);


Not really... F#-style pipeline syntax would be a bit more explicit about it, assuming you could use % for a variable name...

    Object.keys(envars)
      |> % => %.map(envar => `${envar}=${envars[envar]}`)
      |> % => %.join(' ')
      |> % => `$ ${%}`
      |> % => chalk.dim(%, 'node', args.join(' '))
      |> % => console.log(%);
To be honest, I've pretty much given up the hopes that TC39 would actually resolve pipelines and decorators at this point... I think it's been around a decade now.


But then why would you call all those different variables ‘%’?

Reading this version, though, I also notice something else the F# style enables that the Hack style doesn't: it supports destructuring. So in F# style, I can more easily make pipe steps that pass on multiple values in a structured object or array, then access them easily down-pipe.
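For example (a sketch using the F#-style syntax):

    envars
    |> Object.entries
    |> pairs => [pairs.length, pairs]   // pass two values down-pipe
    |> ([count, pairs]) => `${count} vars: ${pairs.map(p => p.join('=')).join(' ')}`
    |> console.log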


Unfortunately F# pipelines have been rejected for advancement a few times by TC39... I used them for a while via Babel/6to5, but I gave up (like with decorators) after many years of no advancement. I doubt I'll see either any time soon.


So their argument in favor of this fugly new operator is that some people write unreadable code? That's not the language's fault. As they say, you can write FORTRAN in any language. This just gives them a new tool to make things even worse.

     chalk.dim(Object.keys(envars)
       .map(envar => `${envar}=${envars[envar]}`)
       .join(' ')
       |> `$ ${%}`, 'node', args.join(' '))
     |> console.log(%);
And it doesn't even work on objects. Lame. I know the reason: JS has no typing. Still is lame.


I'm trying to compare this to the jQuery kinda syntax

    X
      .this()
      .that()
      .then()
      .do()
      .a()
      .thing()

And I expect this to let me keep that chain going for when I need to run a static function against the chain

    X
      .this()
      .that()
      .then()
      .do()
      .a()
      .thing()
      |> JSON.stringify(%)




  > I also feel this:
  >> In the State of JS 2020 survey, the fourth top answer to “What do you feel is currently missing from JavaScript?” was a pipe operator.
  > Is the wrong way to go about language design.
Let's call it Signor Rossi language design…

"Signor Rossi cosa vuoi? … E poi, e poi, e poi" (Viva la felicità by Franco Godi [1])

[1] https://www.youtube.com/watch?v=UrKKMtjNWCI


This, and they probably didn't mean the pipe operator from the niche language Hack.


Some people, it seems, just love to write things in A-normal form but dislike naming temporary variables. Solution: pipe operator.


As someone who used to write a lot of functional-style code (in Ruby), generally prefers functional style, and has created many, many "pipelines" like that in Ruby code - I've actually started to "regress" to the "status quo" (as the article puts it), mostly because I work with developers who don't understand functional style and it just becomes a point of contention in review that I don't care about getting into anymore. I can see the same kind of thing happening with this operator in JS-land (I may be wrong, I haven't written any significant JavaScript in years, but people tend to be stubborn).

I just write things as stupidly as possible now, and just do the second one even though there are nicer "ruby-ways" to do them - maybe this is "bad" but I find it easy to read and grok...and it doesn't cause arguments during review <.<


You have nested backticks there. Is this really real code?


Yes, you can do that in lambdas:

    > `${50}: ${[1,2,3].map(a => `${a+1}`)}`
    '50: 2,3,4'
Whether that's a good idea or clear code is another issue. But it's allowed.


Is that how they really do it? I haven't actually seen that happen. There's no reason not to take popularity into account as one measure, as long as it's not the ONLY thing you take into account.


I feel like your first example was written kind of in bad faith, but still: literally yes, your first example is better than your second.


> Is this… ${new-style} really better than… ${old-style}?

Yes! A hundred thousand times yes!

I have not seen this syntax in JavaScript before, but just by having the vague notion that it's pipe-like, I was able to generally understand what was happening within seconds of reading it. I'd have to read up on the syntax a bit to confidently write code in this style, or perhaps to fix a bug (does "${%}" do what I think it does?), but I can quite literally comprehend the author's intent at almost the same speed as I can read.

    1. get all the keys of envars.
    2. convert every key into a "var=value" format.
    3. put a space between them all
    4. stick a dollar sign in front of the whole thing
    5. no clue what the `chalk` library is for?
      a. this *appears* to prefix a "node ${args}" call with the previously-created environment variable bits
      b. that tracks with what we've seen so far
      c. close enough for now
    6. log the whole thing
Note that even without knowing what chalk is or does, the flow of everything makes it extremely clear what exactly the high-level outcome is supposed to be. We're building up a string like "$ FOO=bar BAZ=qux node --arg thing --arg2 more_args". We're doing something fancy with it that I don't quite know about just yet, but the above knowledge makes it very easy to fill in that gap.

With the second one, I have to construct a tree in my head

    1. console.log…
      a. the result of chalk.dim, which is
        i. all the env vars
        ii. mapped to key=value format
        iii. joined by a space
      b. ^ actually do chalk.dim on that thing above
        i. sorry, back up, there were some additional arguments
      c. ^ actually do chalk.dim on the above with 'node' and space-separated args
        i. I think this concatenates them?
        ii. double-check [1.a] to confirm
        iii. did I miss `args` defined somewhere in [1.a]?
          A. no, it's apparently inherited from scope
    2. ^ actually log all the above
      a. wait is this a correct reading, based on reassembling the above?
The confusion is compounded by not knowing the details of chalk. I have to jump backwards and reason about what the result of a non-linear chunk of steps is to make sure my guess is consistent.

I have no opinion on the merits of this particular syntax, its implementation details, or in comparison to alternate proposals of similar ideas. I'm sure there's worthwhile debate to be had on those details. But from a high level, I'm a fan. My head is not a meat-based tree traverser and I'm guessing most people's isn't either. Human brains are big fans of linear narratives. While I love movies and shows like Memento and Westworld, this kind of disjoint storytelling is quite literally done for the purpose of generating confusion and not for promoting comprehension.

Also, there may already exist better ways of expressing this particular example linearly and succinctly. If there is, I'd probably prefer that. Worst-case scenario you can always deconstruct nested function calls into sequential variable assignments, and maybe that's the "best" answer here instead of new syntax. But is an approach that can be understood linearly from start to finish clearer than one that requires mental tree-walking? Yes. Yes yes yes yes yes.


They're all horrific


Wait, design by committee is not a good thing? Gasp

I can only hope this leads to Javascript becoming even more unbearable. Perhaps only this can weaken Google's resistance to WASM.


I don’t like complex nesting inside the string though. That might be a nitpick as you can split that out without the pipe operator.


SELECT 'Jerk' FROM jerkings_tab;

Is this Perl?

DROP jerkings_tab;

Is this really Perl?


Temporary variables are often tedious? I have found that well named temporary variables are the only clear way to comment code without actually writing the comment. The version with temporary variables is much easier to understand without having to read the rest of the code.


Exactly. As the proposal contemplates this alternative, it claims:

> But there are reasons why we encounter deeply nested expressions in each other’s code all the time in the real world, rather than lines of temporary variables.

And the reason it gives is:

> It is often simply too tedious and wordy to write code with a long sequence of temporary, single-use variables.

Sorry, but...that's the job? If naming things is too hard and tedious, you don't have to do it, I guess, but you've chosen a path of programming where you don't care about readability and maintainability of the codebase into the future. I don't think the pipe operator magically rescues the readability of code of this nature.

The tedium of coming up with a name is a forcing function for the author's brain to think about what this thing really represents. It clarifies for future readers what to expect this data to be. It lets your brain forget about the implementation of the logic that came up with the variable, so as you continue reading through the rest of the code your brain has a placeholder for the idea of "the envVar string" and can reason about how to treat it.

The proposal continues:

> If naming is one of the most difficult tasks in programming, then programmers will inevitably avoid naming variables when they perceive their benefit to be relatively small.

Programmers who perceive the benefit of naming variables to be relatively small need to be taught the value of a good name, and the danger of not having a good name, not given a new bit of syntax to help them avoid the naming process altogether.

The aphorism "There are two hard problems in computer science: cache invalidation, and naming things." is not an argument to never cache and never name things. That's mostly what we software folks spend our time doing, in one way or another.


> The aphorism "There are two hard problems in computer science: cache invalidation, and naming things." is not an argument to never cache and never name things.

Sure, it can’t be completely eliminated, but why not do less of a thing that’s hard, when it can be avoided?

Values have a “name”, whether it’s a variable ‘keysAsString’ or the expression ‘keys.join(' ')’. The problem with keysAsString is that you have to type it twice, once to define it and again to use it. It’s also less exact, because it’s a human-only name, not one that has a precise meaning according to the rules of the language. (E.g. a reader might wonder what the separator between the keys was - if you don’t store it in a variable, then the .join “name” tells you precisely right at the site it’s used.) Making the variable name more precise implies more tedium in the writing and reading.

If the value is used twice or more, I would usually say storing it in a well-named variable is preferable, but if it’s cheap or optimizable by the compiler I might still argue for the expression.

This may be an irreconcilable split between different types of thinkers, perhaps between verbal and abstract.


If names are the source of crisis, wouldn’t it be better to define temporary variables without names?

  var [$1, $2] = foo(bar(envars))
  console.log(chalk.bold($2), $1)
Job done, no plumbing needed. Has the same level of semantics as %.


and yet looking through code from the place you work I see something like this

    let field = ve.instanceContext.replace(/(#\/)|(#)/ig, "").replace(/\//g, ".")
Which you apparently claim should be

    const fieldWithHashMarksUnesacped = ve.instanceContext.replace(/(#\/)|(#)/ig, "");
    const field = fieldWithHashMarksUnesacped.replace(/\//g, ".")

https://github.com/mirusresearch/firehoser/blob/46e4b0cab9a2...

and this

    return moment(input).utc().format('YYYY-MM-DD HH:mm:ss')
Which apparently you believe should be

    const inputAsMoment = moment(input);
    const inputConvertedToUTC = inputAsMoment.utc()
    return inputConvertedToUTC.format('YYYY-MM-DD HH:mm:ss')


You've confused method chaining and nesting. The proposal itself says that method chaining is easier to read, but limited in applicability, while it says deep nesting is hard to read. The argument against the proposal by the GP comments is that temporary variables make deep nesting easier to read and do it better than pipes would.


Thanks for taking the time to look and reply.

In your first find, yes, your modification helps me understand that code much more quickly. Especially since I haven't looked at this code in several years.

In that case, patches welcome!

In your second case, as the sibling comment explained, I'm not opposed to chaining in all cases. But if the pipe operator is being proposed to deal with this situation, I'm saying the juice isn't worth the squeeze. New syntax in a language needs to pull its weight. What is this syntax adding that wasn't possible before? In this case, a big part of the proposal's claim is that this sequential processing/chaining is common (otherwise, why do we care?), confusing (the nested case I agree is hard-to-read, and so would be reluctant to write), or tedious (because coming up with temporary variable names is ostensibly hard).

I'm arguing against that last case. It's not that hard, it frequently improves the code, and if you find cases where that's not true (as you did with the `moment` example above) the pipe operator doesn't offer any additional clarity.

Put another way, if the pipe operator existed in JS, would you write that moment example as this?

    return moment(input)
      |> %.utc()
      |> %.format('YYYY-MM-DD HH:mm:ss');
And would you argue that it's a significant improvement to the expressiveness of the language that you did?


|> is for functions the same thing that . is for methods

If you program in object oriented style then . is mostly all you need.

If you program in functional style you could really use |>
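Side by side (a sketch; trim and toUpperCase here are assumed to be unary free functions):

    const trim = s => s.trim();
    const toUpperCase = s => s.toUpperCase();

    // method style: the subject flows through dots
    '  hello '.trim().toUpperCase()

    // functional style: the same flow, F#-style
    '  hello ' |> trim |> toUpperCase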


I like that the expected variable name has a typo.

Those typos leak out to calling code and it's hilarious when the typo is there 10 years later once all the original systems have been turned off


> fieldWithHashMarksUnesacped

The code removes all “#/“ (or just “#” if a slash isn’t there). After that it replaces slashes with dots. How on earth is that “hash marks unescaped”?


This. Temporary variables are the way to go for deconstructing a complex expression like this. Everything is more readable when you put the results of an expression with two to four terms in a well-named variable. Trying to put everything into one giant closed-form expression feels clever and smart, but it's really just getting in the way of the next poor sucker who needs to understand what you were doing.

This works the way human cognition does, by batching. The way humans can fit more items in short-term working memory is to batch up related concepts into one item. This is how chess masters do it - they don't see a piece and look individually at each square it is attacking, they see the entire set of attacked squares as one item. This is why "correct horse battery staple" passwording works - the human doesn't remember twenty-eight individual characters, they remember four words.

Temporary variables follow how human cognition works, particularly when the reader is going to be somebody else's cognition who didn't go through the process of writing it.


Temporary variable make debugging easier too.


Agreed. It feels great to chain stuff or nest function calls, but then I hate myself when it comes to debugging.


Pipe is a tool for dealing with batching too.


I wish I could triple-upvote this.


When there are a few they can be really great. But if you need to accurately name every single intermediate thing they can become visual noise that hides what happens.


I struggle to think of real-world examples where I've just needed to chain and chain and chain values of different types more than a handful of times. The claimed need for the pipe operator is this construction:

    function bakeCake() {
      return separateFromPan(coolOff(bake(pour(mix(gatherIngredients(), bowl), pan), 350, 45), 30));
    }
The piped code looks like:

    function bakeCake() {
      return gatherIngredients()
        |> mix(%, bowl)
        |> pour(%, pan)
        |> bake(%, 350, 45)
        |> coolOff(%, 30)
        |> separateFromPan(%);
    }
Which is... fine? It certainly looks better than the mess we started with, but adding names here only helps clarify each step.

    function bakeCake() {
      const ingredients = gatherIngredients();
      const batter = mix(ingredients, bowl);
      const batterInPan = pour(batter, pan);
      const bakedCake = bake(batterInPan, 350, 45);
      const cooledCake = coolOff(bakedCake, 30);
      return separateFromPan(cooledCake);
    }
Even if you consider the `const` to be visual noise, the names are useful. At any point you can understand the goal of the code on the right-hand side by looking at the name of the variable on the left-hand side. You can also visually scan the right-hand side and see the processing steps. You can also introduce new steps to the control flow at any point and understand what the data should look like both before and after your new step.

I agree that the the control flow is more clearly elucidated in the pipe operator example, but it tosses away useful information about the state that the named variables contain. It also introduces two new syntactical concepts for your brain to interpret (the pipe operator and the value placeholder). I contend the cognitive load is no greater in the example with names, and the maintainability is greatly improved.

If you have an example where there are dozens of steps to the control flow with no break, I'd be really curious to see it.


Imagine that you asked someone the question "How do you make a cake?" Which response would be clearer?

1. Gather the ingredients, mix them in a bowl, pour into a pan, bake at 350 degrees for 45 minutes, let it cool off and then separate it from the pan.

2. Get ingredients by gathering the ingredients. Make batter by mixing the ingredients. Make batter in a pan by pouring the batter in a pan. Make a baked cake by baking the batter in the pan at 350 degrees for 45 minutes. Make a cooled cake by cooling the baked cake. Separate it from the pan.

For me personally #1 is more readable because #2 is unnecessarily bloated with redundantly described subjects.


Right, it works for your analogy.

Going back to the concrete scenario GP presented, naming things makes it much clearer to me.


In fact, I'm so fanatical about naming things, I'd probably give the two magic numbers and the return value names as well:

    function bakeCake() {
      const bakeTemperature = 350;
      const bakeTime = 45;  // minutes
      // ... 
      const bakedCake = bake(batterInPan, bakeTemperature, bakeTime);
      // ...
      const finishedCake = separateFromPan(cooledCake);
      return finishedCake;
    }
And I'd not look at a code review which quibbled about the particular names I chose as being a waste of time either. Time spent in naming things well is the opposite of technical debt, it's technical investment. It pays dividends down the road. It increases velocity. It makes refactoring easier. It improves debuggability. It makes unit tests easier to see.


Should make it an async function, and await the bake step. ;-)


Sometimes intermediate values either don't have domain specific meanings or the meaning is obvious from the function name that returns this temporary value.

Then naming it is just noise.

If your bake() function were instead named createBakedCake(), then naming the returned value bakedCake would just increase reader fatigue through repetition.

Same way

    Random random = new Random();

in C# is worse than

    var random = new Random();


> Sometimes intermediate values either don't have specific meanings or the meaning is obvious from the function name that returns this temporary value.

I don't necessarily disagree with this. But even granting that this is true: congrats, you've just found the worst part of giving these intermediate steps a name! Like, that's the worst case example of the cost side of the tradeoff we're discussing here. And it's not that big a cost! Like, of all the code you write, how much of it fits this case? Where you're writing a function where there's a lot of sequential processing steps in a row with no other logic between the steps AND the intermediate state doesn't have any particular meaning?

In that worst case, you have a little extra information available (like your Random random = new Random() example) that your eyes need to glide past.

I would wager your brain is more used to scanning your eyes past unnecessary information and can do that with less effort and attention than it can either:

    - bounce back and forth between the chained function calls of the original nested example.
    - synthesize the type and expectations of the intermediate value at any arbitrary point in the piped call chain.
That last thing is the big cost of not naming things. In order to figure out what the value should look like at step 4, you have to work backwards through steps 1-3 again. And you have to do that any time you are debugging, refactoring, unit testing, adding new steps, removing existing steps, etc.

And the work to come up with "obvious" names isn't hard. Start with the easy name:

    batterInPan = pour(batter, pan)
And if the name batterInPan never gets any better and never really helps anyone read or debug or refactor or unit test this code, then in that sense, I guess it's a "waste". I just claim that this case is far less common in the real world and far less costly than having to untangle a mess of unnamed nested or chained call values.

Or maybe you want to just start with the unnamed nested or chained calls, and when you need to read or debug or refactor or test your code you pay the "naming things" price tag at that point. That's often the first thing I do when I come across code with a dearth of names, I just give everything a boring, uncreative temporary name, and then I can do whatever work I showed up to this code to do. It's not ideal, but it's better than every JS library sprinkling a new bit of syntax in just so they can avoid giving their variables names and can use an overloaded modulo operator instead.


> But even granting that this is true: congrats, you've just found the worst part of giving these intermediate steps a name!

Yes. But given that people would usually burn you at the stake for naming a function bake(), because it tells you nothing about what the function expects or returns and only the bare minimum about what it does, this scenario actually happens very often: naming your functions informatively matters a great deal, because they are part of the API.

If you really have functions like bake() or pour() in your code, especially in a weakly typed language, then for the love of God, yes, please name the variables that you pass to them and get back from them, always and as verbosely as possible.

Don't get me wrong, I'm very fond of naming intermediate things too. And with helpful IDE it can even tell you the types of intermediate things so you can better understand the transformations that the data undergoes as it flows through the pipeline.

But sometimes the type, which an IDE could also show automatically with the |> syntax, is even more important than the name for understanding. VS Code does something like that for Rust when chaining method calls with a dot. Once you split a dot-chain into multiple lines, it shows you the type of the value produced on each line.

My personal objection to naming temporary values too much in a pipeline is that it obscures the distinction between what's processed and what are just options of each processing step. But I suppose you might keep track of it by prefixing names of temporary values with something.

> Or maybe you want to just start with the unnamed nested or chained calls, and when you need to read or debug or refactor or test your code you pay the "naming things" price tag at that point.

Yeah, that's usually what I do. I start with chains and split them and pay for the names as I go.

> That's often the first thing I do when I come across code with a dearth of names, I just give everything a boring, uncreative temporary name, and then I can do whatever work I showed up to this code to do.

I'm also splitting and naming stuff in that case and checking types along the way. But I prefer that to encountering the code named verbosely and wrongly. Then I need to get rid of the names first to see the flow then split it again sensibly. Of course I don't usually commit those changes in shared environments. Only in owned, inherited ones or if the point of my change is to refactor.

Granted, chaining class member accessors mostly covers up this problem of naming intermediate things if you use classes. That's why we even survived without pipe syntax. But since we would like to move away from classes a bit to explore other paradigms, maybe it's time?


Also, the second example is easier to manipulate. You can hack in branches, logging, etc. during development. I'm also not sure how the proposal tries to solve the problem that we can't easily pluck members out of an object in the first example. Will people just write something like `get(obj, "member")`? Or maybe they thought about this?


How about

    function bakeCake() {
      return do(
        () => gatherIngredients(),
        ingredients => mix(ingredients),
        batter => pour(batter, pan),
        batterInPan => bake(batterInPan, 350, 45),
        () => coolOff(bakedInPan),
        cooledCake => separateFromPan(cooledCake)
      );
    }


...which is just

   function bakeCake() {
      const ingredients = gatherIngredients();
      const batter = mix(ingredients);
      const batterInPan = pour(batter, pan);
      bake(batterInPan, 350, 45); // this is an in-place modifying function, I guess
      const cooledCake = coolOff(bakedInPan);
      return separateFromPan(cooledCake); 
    }
...but with an extra `do(...)` wrapper?

It could at least be

    function bakeCake() {
      return do(
        gatherIngredients,
        mix,
        batter => pour(batter, pan),
        batterInPan => bake(batterInPan, 350, 45),
        coolOff,
        separateFromPan
      );
    }
Although if we had function currying, the convention in ML languages is to put the most-commonly-piped-in param last for these functions:

    function bakeCake() {
      return do(
        gatherIngredients,
        mix,
        pour(pan), // assuming that pour(pan) returns a function that pours something into that pan
        bake(350, 45), // assuming that bake(temp, minutes) returns a function that bakes something at that temperature for that time
        coolOff,
        separateFromPan
      );
    }


What this reminds me of is those hierarchies like Cat extends Animal... In these simple "real-world-inspired" examples it seems to make sense, but in programming I'd say a lot of the time there's simply no good name for the intermediate steps.


In general, I think that when one does that, the code smell one is smelling isn't "This language isn't expressive enough; I need a third way to describe calling a function." It's "What I'm doing is actually complicated and I need to switch to describing it with a DSL, not adding more layers of frosting on this three-layer cake."


All I can figure is the people who keep pushing this sort of stuff in JS have very different problems than I do, if they think this will improve things rather than making them worse.

... I further suspect that their problems are mostly self-inflicted, but maybe I'm wrong about that.


Thank you.

I often find flow-crutches like the one described in this proposal more confusing to decipher than plain old fashioned, well thought out code.

Lambda (=>) expressions in C# and closures in JavaScript are others I sometimes find myself pausing at to make sure I'm interpreting correctly.

I always figured it's just because I'm an older programmer and haven't used the new language features enough for them to become intuitive. I do acknowledge there are use cases where they're a perfect fit for the pattern in which you're coding.

But I feel like they're too often taken as a shortcut to dump a bunch of operations in one place when it would be more readable to structure them into well-organized functions that logically group concerns.

It's not that I don't like syntactic sugar to make code more concise, I just think languages need to remain judicious about how many different ways they dole out to accomplish the same task before they start to risk 'rotting their teeth'. Gotta keep striving for elegance - as you renovate over time it can get harder to keep the bar high.


I agree.

I'm afraid these pipe operators will become like ternaries and will tend to produce "smart" lines of code which are difficult to parse at first glance.

Temporary variables are great for writing obvious code which is trivial to parse when reading.


The version with temp variables is also easier to debug.


My JS code looks exactly like you describe. Just a bunch of const (rarely let) statements with descriptive, short names. It's not tedious at all, just a little verbose. But JS is already a language that is relatively compact so it doesn't really matter.


I think that depends on the context. Are all intermediate results of the application of multiple procedures relevant and in need of a name? Or are we only interested in the result after applying all the procedures? Why pollute our namespace with names which are never used again, except for the next step of the pipeline? Then in other cases one does need some intermediate results.


Quite often I rewrite the code from

    return validate(get_response(value))

or

    value = get_response(value)
    value = validate(value)
    return value

into

    res = get_response(value)
    new_res = validate(res)
    return new_res

Why? Easier to read, and when Sentry throws an error I have each value from the call stack. Much easier to debug. In example 2 you can accidentally move a line and not notice the error.


I hope Records & Tuples[0] land before this does. It would have meaningful and far reaching positive effects for the language, without much controversy. Like most of these things, it takes about 5-7 years for it to permeate through enough of the engines to be meaningfully useful in the day to day of web developers (node / deno typically 12-18 months tops). It would drastically speed up existing code once wide adoption is gained though.

I don't think the Pipe Operator would be as useful in comparison.

I really hate how long the Records & Tuple proposal has been languishing at stage 2. It could've shipped years ago if not for a syntax debate that dragged on and on without being fruitful[1]

EDIT: there is a class-based version of this, at stage one, for adding structs[2]. They behave similarly.

[0]: https://github.com/tc39/proposal-record-tuple

[1]: https://github.com/tc39/proposal-record-tuple/issues/10

[2]: https://github.com/tc39/proposal-structs


The one I really want is “do expressions”. I also can’t understand why anyone wants private fields over these kinds of improvements. Lack of private fields in JS has literally never caused a bug for me in 15 years of JavaScript development, but I have to use mutable variables and ugly if-else chains for 3-case assignments all the time!


"do expressions" is one of the things mentioned in the Pipe Operator proposal as possibly a pre-req and one of many reasons the Pipe Operator seems to have stalled out at Stage 2 despite multiple attempts to push it past.

Personally, for the 3-case assignment I would love to see one of the smarter pattern matching "match" expression proposals make it further along over the over-generalized "do expressions". Especially now that C# has pattern matching switch expressions.


You might already know, but until do expressions land, you can use IIFEs for if and switch assignments:

    const isSomething = (() => {
        switch (...) {
            case ...:
                return ...
        }
    })()
They are a bit ugly, but they do wonders at avoiding `let`.
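For instance, a filled-in version (the status codes and labels here are made up for illustration):

    const statusCode = 404;
    // Assign the result of a switch without reaching for `let`:
    const label = (() => {
        switch (statusCode) {
            case 200:
                return 'ok';
            case 404:
                return 'missing';
            default:
                return 'unknown';
        }
    })();
    console.log(label); // "missing"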


I've been wishing for this for years.

It opens up whole new ways of doing stuff when tuples and records are just primitives. They never mutate, so representing them efficiently right from the start is possible. They are primitives, so passing should be easier to do.

Concurrency becomes a lot easier to add to the language because if you limit it to primitives (which are all immutable), you get a lot of safety guarantees. I'd love to see either channels or actors baked into the language in the future.

Unfortunately, I don't think the pipe operator is the issue. Both of the proposals are just syntax. Implementing either just consists of a very straightforward translation into already-existing syntax, then running through the rest of the JIT as normal.

The real issue is stuff like the private variables in classes garbage. It added unnecessary complexity to the syntax (and is very ugly and perl-like). I've never met a senior JS dev (outside of TC39) who actually wanted or used it. Despite this, the JIT engineers couldn't wait to refactor all kinds of stuff all across the JIT to make this happen.

The time spent implementing private variables SHOULD have been spent implementing the infinitely more useful records and tuples.


A lot of the class stuff is driven by the vendors. Google & Microsoft tend to be the big champions of the class stuff, like decorators, decorator metadata, the new struct proposal, private fields.

They want classes to be more unique than plain objects (when initially introduced, aside from easy extending via `extends`, they really weren't in practice). You can see it in their frameworks and how they do development. I'm not shocked those proposals get traction quickly, because they come from the vendors themselves. I imagine internally their many thousands of engineers take advantage of these features. It just isn't as common outside (since `class` received a ton of backlash in the JS community mindshare, with a broader move to functional-style programming).

Records & Tuples on the other hand, came from Bloomberg (largely) and later Meta endorsed it. Unsurprisingly, it solves real problems they face. React (and anything that uses diffing today in the same manner) would be sped up dramatically if they could just `===` two objects vs what they do today. Yet this languishes, despite being a nearly universal upside for the average language user.


Is the point of those just immutability or something else too?

Wouldn't it be better to have a way to easily make any type deeply immutable?

EDIT:

I think tuples and records are also supposed to have value semantics, which is great, but why not a mechanism that turns on value semantics for an arbitrary object type instead?

Also, value semantics is a separate concept from immutability. Maybe they should have separate ways to be expressed, rather than being combined only in the form of tuples and records.

To sum up, I can see why the community is hesitant.


Objects carry prototypes and mutability with them. Further, all primitives/value types of the language are immutable.

Adding those features to an Object would still necessitate creating a new primitive.

At that point, you might as well make that primitive record that is directly available and just use the constructor to convert (just like you'd use `Number("123")`, you'd use `Record({my: "object"})`).


> Objects carry prototypes and mutability with them.

You can deep freeze the objects and freeze their prototypes.


It's not the same as tuples and records. You can do the following with them:

  const first = #{a: 'b', c: 'd'};
  const second = #{a: 'b', c: 'd'};
  first === second // true
So no more deep comparing of objects if they have the same properties.


Fascinating. I would have guessed that identity checks are still possible with ===, and == would use a structural comparison.

I suppose V8 for example could easily optimize records and tuples which are structurally equal into the same memory, then separate them into distinct memory blocks when they differ. This way an equality check could have the speed of an identity check if the two are known to be structurally equal.

I could be talking out of my ass here – maybe this isn't a performance concern at all and has been addressed far better already, and this is nonsense. But I do wonder how you'd quickly check for equality of, say, a large state tree to discover changes. On one hand that could be addressed by architecture. On the other, it is nice sometimes to simply know if two variables reference the same memory.


Yes. This fixes only immutability. For the value semantics (=== comparison of content) please check the sibling comment thread.


As outlined in the proposal, these new types have engine-level guarantees, and support using a plain triple equals (`===`) to test structural equality. It also follows that structural equality trickles down into objects that do this implicitly (Map, Set, etc.) as well.

Doing this for plain arrays and objects has been said by the vendors for years to be too hard and likely to break a lot of existing applications, and a new type for this purpose would need to be proposed. This is an answer to that. There isn't much to be semantically gained by allowing this on frozen objects; you'd need to do something identical to this anyway, from an engine perspective. Using new syntax / types to represent it semantically makes it easier for developers to understand too, I'd argue.


You could create a class that keeps a static Map with references to all created objects of a given type. On creation it would check if a structurally identical object had already been created and return a reference to the previous instance instead of creating a new one.

If you freeze those objects and their prototypes you get exactly your records. === becomes structural comparison that costs exactly the same as reference comparison.

I made something like that for my own purposes for very simple types like Point.

To implement this you'd just need to be able to turn the contents of an object into a key for a map. I agree that it's probably better to do this in the VM than in a library.
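Something like this minimal sketch (InternedPoint and its key scheme are hypothetical):

    // Structurally equal instances share one frozen object,
    // so === behaves like a structural comparison.
    const pool = new Map();

    class InternedPoint {
        constructor(x, y) {
            const key = `${x},${y}`; // serialize contents into a Map key
            if (pool.has(key)) return pool.get(key); // reuse earlier instance
            this.x = x;
            this.y = y;
            Object.freeze(this);
            pool.set(key, this);
        }
    }

    const a = new InternedPoint(1, 2);
    const b = new InternedPoint(1, 2);
    console.log(a === b); // true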


I believe this approach was proposed, but implementers pushed back because the initial interning is expensive and there are a lot of scenarios where it would never be used. You don't really expect creating a Point to do a hash lookup.


Sadly, JS weirdness interferes with value semantics:

    > 0 === -0
    true
    > #{v: 0} === #{v: -0}
    ???


That's not a JS thing, 0 and -0 comparing as equal is specified in the IEEE 754 floating point standard.


It's the JS `===` operator. The semantics are defined by JS. JS can use IEEE754 semantics for numeric `===` comparisons if it wants, but it'll be at a cost if there are other ways to distinguish the two. In this case, if you make `#{v: 0} === #{v: -0}` then you're ok with `a === b && 1/a.v !== 1/b.v` being possible.

I could see an argument for `0 == -0 && 0 !== -0`. It would have made it possible to say "if a === b, then a and b can use the same bitwise representation internally without losing information". But that's not what we have.

I guess for maximum consistency, we could use the 3rd JS equality operator `Object.is` with records, so that

    0 == -0
    0 === -0
    ! Object.is(0, -0)
    #{v: 0} == #{v: -0}
    #{v: 0} === #{v: -0}
    ! Object.is(#{v: 0}, #{v: -0})
but then all Record-comparing code would have to do `Object.is()` in order to avoid a recursive comparison. (Well, maybe the engines would get clever and include a "contains -0" flag in Records, and skip the recursion if it's false for both sides?)


Saw a talk with Douglas Crockford[0] years ago. He said something like: Before JS classes got introduced he asked why they didn't just implement macros for the language. Classes are in fact just syntactic sugar. Just like async/await, and now this proposal.

In hindsight he was right. JS would be better off if it did have macros. Much of the whole babel/webpack/react/ts stuff would be just a bunch of macros instead of idiosyncratic build tools and so on. And we would have had much less compatibility churn.

In fact this proposal here, is trivial to implement with macros. Clojure has the same thing (threading operator) and it's just a macro.

[0] https://en.wikipedia.org/wiki/Douglas_Crockford


Mozilla created SweetJS over a decade ago[0]. It added hygienic macros to JS and I'm sure everyone on the TC39 committee is familiar with it.

There's a lot to like about it, but macros in such a complicated language as JS are hard to get right. They'd also potentially lead to huge fracturing in the JS ecosystem with different factions writing their own, incompatible macro-based languages.

Look at JSX for an example. It's modeled on a real standard (E4X -- actually implemented in Firefox for a long time), but just one relatively small syntax addition has added complexity elsewhere.

For example, `const foo = <T>(x:T) => x` is valid Typescript for a generic arrow function, but is an error if your file is using JSX.

I like the idea of macros, but I suspect they made the right call here.

[0] https://www.sweetjs.org/


The issues you describe have emerged regardless of macros. Just with more expensive tooling.


It's pretty sad that you have to advocate for macros on a site called Hacker News nowadays. They aren't even terribly different from frameworks (which everyone loves): incorrect usage generates a stack trace you need to decipher.


> Much of the whole babel/webpack/react/ts stuff would be just a bunch of macros instead of idiosyncratic build tools and so on. And we would have had much less compatibility churn.

Wouldn't that just trade compatibility churn for running the transpilers on the client side in JavaScript, making it even slower to execute? Moving this part of the execution to the developer side seems like a good choice to me.


It’s not as clear how macros affect performance.

You can write macros that are very declarative but generate very hairy code that you wouldn’t necessarily write by hand. The result might be much more machine optimized.

Sure, you pay some upfront cost for expansion, but generally speaking macros don't hinder you from writing fast code; they just give you more options and trade-offs.


First, syntactic macros are great, and I've often wished for them to exist in javascript (and other languages).

Second, I only trust macros to people who are disciplined to use them wisely.

Third, I've met only a handful of developers I would consider disciplined in this way.


People who write widely used macros are typically of that last category.


I've been thinking about your response since you left it.

I think that what you said is true in an ideal world. Sadly, there are many developers who want to believe they are the disciplined ones, when in fact they are the trouble makers trying to inflate their ego at their team's/company's expense.

Along with discipline, there must be humility. Good luck finding that combination reliably.


What would macros for JavaScript look like? Would you ship them to the client for browsers to execute?


Yes, JS is a dynamic language, so I would expect you ship them as-is.


I don't understand why you can't just use temporary variables. The article mentions mutation is bad, but what actually happens is that the name gets reassigned. No value is mutated.

That brings me to something I really want in JS: actual immutable values. If you use `const x = new SomeClass()`, you cannot reassign it, but you can change fields. The first time I encountered `const`, I thought it did the opposite. It would be cool if you could declare something (object, array) to be an immutable value.

If you really want to introduce new operators, how about operator overloading? For example, vector and matrix calculations become a lot clearer and less error-prone with infix operators. It should be technically easy to add them to TypeScript - rewrite the expression to a function call depending on the types of the operands - but the TS devs refuse to implement this on philosophical grounds unless it is implemented in JS. I guess in JS it would require runtime dispatch, but maybe that is not such a big penalty given that it usually runs on a JIT anyway.
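For example (vecAdd and vecScale are hypothetical helpers standing in for overloaded + and *):

    // What you'd like to write with overloading: a + b * 2
    const vecAdd = (u, v) => u.map((x, i) => x + v[i]);
    const vecScale = (u, k) => u.map(x => x * k);

    const a = [1, 2], b = [3, 4];
    console.log(vecAdd(a, vecScale(b, 2))); // [7, 10]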

Oh, and while we are at it, fix `with`. The JS with statement is ridiculous and deprecated anyway. It makes all fields of an object available in its scope. Contrast with VB6's `with`, which requires you to use a leading dot and is much more readable:

    with (elem.style) {
        .fontFamily = 'Arial';
        .color = 'red';
        console.log(.borderWidth);
        // in actual JS this would just be
        // console.log(borderWidth);
    }


A problem is that you can only declare intermediate constants in a statement context, not an expression context. And with React, more and more JS devs are spending time in expression contexts

Example:

  return (
    <div>
      {foo(bar(stuff))}
    </div>
  )
There's no way to break out inline intermediate constants here; you have to bail out and do it up above the `return`. In this case that may not be too bad, but when you've got a hundred lines of JSX, things start getting really spread out


But that's a symptom of issues with React, not issues with JavaScript.

React's declarative model makes it easy to write unreadable spaghetti React declarations that are nested ten levels deep. Nobody should be adding features to JavaScript to encourage that.

("Please, I'm begging you, for the love of sanity... Refactor into more than one component. Just one time. Look, functional components even make that cheap and easy now. Please please please, just take some of that nesting and put it in a new component.")


Highly disagree on two counts:

> that's a symptom of issues with React, not issues with JavaScript

IMO keeping as much as possible in an expression-context is preferable, even without React, because it simplifies control-flow and avoids mutation. The main problem in this case, as I see it, is that javascript doesn't fully support that style- it could and should have a feature that allows intermediate constants (but not variables!) inside expressions. (It should also have something better than ternaries for expressive conditionals, but that's a separate topic)

> Refactor into more than one component

Plenty of ink has been spilled here in the past about splitting functions when there's a real conceptual boundary, not just when they "get too large", and how otherwise keeping your code together in one place can actually make it easier to follow because you don't have to jump around all over the place. So I don't really want to get into all that again here, but suffice to say that's my stance on the subject.


I mean, we could refrain from talking about the need to "jump around," but that's the crux of the issue: modern IDEs (like vscode) include "peek" functionality to look at a definition inline. No need to jump around at all. And a thousand-line component has a thousand lines of context it could be pulling in; reasoning about that level of complexity rapidly gets complicated.

If you have some complicated expression to say inline in a React declaration, pull it up to the preparatory layer. If there are performance reasons not to do that, push it down into a subcomponent.


Technically, you should be able to write something uniquely terrible like this ;)

  return (
    <div>
      {(() => {
        const barStuff = bar(stuff);
        return foo(barStuff);
      })()}
    </div>
  )


Yeah, JS programmers sometimes work around this and similar language gaps by making an inline function and immediately calling it (because that gives you a statement context inside of an expression context, which isn't normally possible). But IMO it's very rarely worth the hit you take to readability


They should repurpose `do` so that `do {}` (without the `while`) is an expression that you can put statements inside and return the last statement.

It would be great if we had expression if..else too (instead of just the ternary operator)


> They should repurpose `do` so that `do {}` (without the `while`) is an expression that you can put statements inside and return the last statement.

There's a proposal for precisely that. Unfortunately, only Stage 1 though.

https://github.com/tc39/proposal-do-expressions


> There's no way to break out inline intermediate constants here; you have to bail out and do it up above the `return`.

Which seems fine. There's not an obvious gain to having one giant super-complex return statement.


But isn't the problem then that JS doesn't offer more expressive function contexts? Some languages like Nim allow you to use blocks in place of expressions; the last expression in the block determines the return value. You can also use `if` as an expression. No need to introduce a separate syntax for this case. (Although I do like piping in bash, I don't think it is a good idea for JS.)

Also sometimes you should just give stuff a temporary name, and treat React like a templating language.


The way to fix that then would be to introduce a way to create named values within an expression. E.g. have something like

  let foo = bar in <expression>
be an expression.


> That brings me to something I really want in JS: actual immutable values. If you use `const x = new SomeClass()`, you cannot reassign it, but you can change fields. The first time I encountered `const`, I thought it did the opposite. It would be cool if you could declare something (object, array) to be an immutable value.

That sounds like a fundamental mis-understanding. Variables do not hold objects, they hold references to objects.

    const foo = {};
    let bar = foo;
foo and bar hold references to the same object. They do not hold the object themselves. foo's reference can not be changed. It's const. bar's reference can. But the object is independent of both variables.

If you want the object itself to be unmodifiable there's Object.freeze.

    const foo ...
makes foo const. If you wanted a shortcut for making the object constant (vs Object.freeze) it would be something like

    let foo = new const SomeObject()
This doesn't exist, but it makes more sense than believing that `const foo` somehow makes the object constant. It only makes foo constant (the reference).
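A quick demo of the difference:

    const foo = { n: 1 };
    foo.n = 2;             // allowed: const protects the binding, not the object
    // foo = {};           // TypeError: Assignment to constant variable.

    const frozen = Object.freeze({ n: 1 });
    frozen.n = 2;          // ignored silently (throws in strict mode)
    console.log(frozen.n); // 1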


I don't want to freeze the object, I want to have a handle that doesn't allow me to modify the object. (Whether that would be at all possible in JS is another question.)

So

    let foo = {field:1};
    immutable let bar = foo;
    bar.field = 2; // error
    foo.field = 3; // ok
This is what I actually want when I think "const". I don't really care that you can reuse a variable, or re-seat a value. What I care about is that I receive an object and sometimes want to modify it, and sometimes I want to make sure it stays the same. Maybe somebody else holds a reference and I don't want to surprise them.

(The inverse problem is when I have a function that takes something like a string or a number, and I want to change that from within the function. There is no way to pass a value type by reference. You have to encapsulate the value in an object and pass that. It would be cool if you could say something like `function double(ref x) { &x = x*2; }`.)


I agree that having a "can't modify this object via this reference" is useful and that JavaScript doesn't have it. TypeScript has it with `Readonly<T>`.

It could be worse. You could be in python that has no const whatsoever :P

I also agree pass by reference is useful. JavaScript only has pass by value, similar to python.


If you are using TypeScript, check out the Readonly<T> utility type

https://www.typescriptlang.org/docs/handbook/utility-types.h...


I think what you're looking for is:

  const x = Object.freeze(new SomeClass())


Can't wait for this. Pipes are awesome in Elixir and bringing them to JS/TS will be great.

To me this is both concise and readable:

    const weather = `https://api.weather.gov/gridpoints/TOP/31,80/forecast`
        |> await fetch(%)
        |> await %.json()
        |> %.properties.periods[0]


While pipes are great in Elixir I think it’s important to look at the total cost to adding any new syntax in the context of the language that is considering them.

In the JS ecosystem, idiomatic nested function calls are read from the inside out. And in most cases that is highly readable because a single layer of nesting is all you need.

This proposal flips that on its head, so you have to retrain yourself to read code in a new way based on the presence of a |>. That adds significant cognitive overhead, especially in code that isn’t well written or that is already complex.

Your example looks nice, but also just as nice is the .then() syntax you could have used.

Now we’ll see code bases with both styles. You’ll have some team members who love pipes so much they’ll never call a function the old way doing x |> console.log and the rest of the team doing console.log(x).

Sometimes, limitations are a good thing.
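For reference, here's roughly how the weather example above reads with .then() (logging the result instead of assigning it):

    fetch('https://api.weather.gov/gridpoints/TOP/31,80/forecast')
        .then(res => res.json())
        .then(data => data.properties.periods[0])
        .then(period => console.log(period));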


This is not elixir pipes. Elixir pipes are sound because they are piping to a function. Piping to a statement is not the same thing and it will become more clear in actual usage just how bad these are. Remember % in js is also modulo, and "test" % "ing" returns NaN. NaN is infectious since basically anything that operates on NaN returns NaN. "test" % "ing" + "ana" returns NaNana . You can quickly see how these pipes (as implemented) are so bad they might as well be language sabotage.
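You can check both claims in any console:

    console.log("test" % "ing");         // NaN
    console.log("test" % "ing" + "ana"); // "NaNana"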


That is concise, readable, and does not have any room at all for error handling. If this was Rust, it could at least be turned into something which returned the right Err for what was happening.

Without something like that, trying to add error handling to the things which may blow up would instantly turn it into gibberish. Every single function here can fail (the HTTP request could fail, the data returned could be unparseable as JSON, or not have the right format). We should be trying to make our languages encourage us to write code which can handle those errors naturally, rather than encouraging us to write fragile code.


That ignores the "let it crash" motto of the BEAM. Don't bloat the code with error handling when the supervision tree can retry crashed processes.


But it allows a fully functional style, and so you can do railway-oriented programming[0], where every function has a success and a failure route. The top-level code looks the same as code without error handling.

[0] https://fsharpforfunandprofit.com/rop/
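A minimal JS sketch of the idea (the ok/err shape here is invented for illustration):

    const ok = value => ({ ok: true, value });
    const err = error => ({ ok: false, error });

    // Run the next step only on the success track; pass failures through.
    const bind = fn => result => (result.ok ? fn(result.value) : result);

    const parse = s => (isNaN(Number(s)) ? err('not a number') : ok(Number(s)));
    const halve = n => (n % 2 !== 0 ? err('odd number') : ok(n / 2));

    console.log(bind(halve)(parse('8'))); // { ok: true, value: 4 }
    console.log(bind(halve)(parse('x'))); // { ok: false, error: 'not a number' }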


    Uncaught TypeError: Cannot read properties of undefined (reading 'periods')


   :3,$s/\./?./g
Solves all your problems without having to think.


periods could be null. So you would need to optional chain the array access also.


Came here to say this. I am learning Elixir on and off, and I think pipes are amazing.


It’s pretty great in R too


I *HATE* pipes. For example from Elixir School (https://elixirschool.com/en/lessons/basics/pipe_operator):

    foo(bar(baz(new_function(other_function()))))

They offer this example of an improvement:

    other_function() |> new_function() |> baz() |> bar() |> foo()

While yes, pipes improve readability, how do they deal with errors? How do they deal with understanding what each thing is supposed to return?

I would prefer something like this (descriptive variable names):

    var userData = other_function();
    var userDetails = new_function(userData);
    var userComments = baz(userDetails);
    var userPosts = bar(userComments);
    var finalUserDetails = foo(userPosts);
    return finalUserDetails;

Then I can easily debug each step, I can easily understand what each call is supposed to do, and if I'm using TypeScript, I can assign types to each variable.

I strongly oppose clean code for the sake of looking pretty, or being quick to type. Code is meant to be run, and read more than written; it should be descriptive, it should describe what it's doing, not be a nasty chain of gibberish. Hence why most people hate regex.


I think there is merit to the argument that if naming is one of the hard problems, programmers writing that code are having to do a lot of ‘naming’ and that is hard for them. The proposed pipe operation eliminates those names and lets the programmer just use %.

But these variables are rarely the kind of thing it’s hard to name, so it feels like a slightly disingenuous argument.


Naming is hard because names are useful.

Getting rid of the name moves the hard problem rather than solving it.


Usually that effort should go toward naming functions rather than their results, though, and if the functions have good names, the results don't need them. In this example, `other_function` could have been named `get_user_data`, `new_function` could have been called `extract_user_details`, whatever.

Once you have good function names, which you should generally be spending a lot more effort on than good local variable names, you won't find any value in adding variables like `var foo = get_foo()`.


That's a limit of exception handling in Elixir.

A good language treats errors as a first class type. Functions that can error have no business returning an A.

Case in point: the fetch API is basically typed as fetch<A>(): Promise<A>.

In reality, there might not be an A at all; it should return a data type such as Either (or Result), which encodes the failure in the type and forces the consumer to consider and handle the error.


There's nothing stopping JS devs from combining pipes with heavy use of a Maybe library. You don't need to use pipes everywhere, but it would be useful for a system that was designed with that pattern in mind.

I already use Maybe patterns quite a bit in my TS project, but there are always libraries and frameworks you work with that don't. So yeah, I can see it being a language-level thing (the way Promises were) pushing for wide adoption.


Elixir handles errors as you describe; the example just does not cover it. The pipeline also assumes a single direction, which pushes error handling to the next step, but there are also constructs (case and with) for handling and composing errors in one go.


In this case, Promise already includes the possibility of rejection, though the types that may be rejected are not specified.


These hack pipes are a trojan horse. People wanted Elixir/F#/OCaml-style pipes, aka function pipes, and what we got was unreadable line noise. I argued against it until I was blue in the face, then decided it was bad for my general wellness to keep it up. I genuinely would prefer no pipes over this. I couldn't find a single example where I preferred it. The token they chose already has a meaning in JavaScript! The arrogance and willful disregard for readable code was astonishing. The only tangible reason I could pull out as to why they picked the least popular implementation despite all the outrage was "someone at Google didn't like the function pipes". Even if you think we should avoid it because some Google employee doesn't want it, that doesn't mean you should ram in an even worse implementation. I had to block the TC39 discussion because I was just going to get argumentative; they weren't listening at all, and they were dismissing actual concerns without any explanation.


Can you give a link to the discussion? It would be quite relevant here.


This strikes me as something better left to libraries. If you want to write in a functional style then Ramda, Lodash, Underscore, and plenty of others have pipe and compose functions.

    pipe(one, two, three)
Easy to read. No new syntax. Extendable with arrow functions.

Yes, there are some limitations in comparison to Hack Pipes. But those are far outweighed by not messing yet again with the language’s syntax.
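For example, with Ramda (R.pipe composes left to right):

    import * as R from 'ramda';

    const shout = R.pipe(
        R.trim,       // '  hi  ' -> 'hi'
        R.toUpper,    // 'hi' -> 'HI'
        s => s + '!'  // 'HI' -> 'HI!'
    );

    console.log(shout('  hi  ')); // 'HI!'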


This only applies to TypeScript, but error checking for this pattern from a library is much harder than error checking for native syntax.

In order for TypeScript to check whether the return value of one function has the correct type for the next function, you need to use generics. But generics don't allow for a variable number of type parameters, so `pipe` would just have to use unknown, or have dozens of overloads, one for each length of pipe usage (within reason).

I know TypeScript is a different, optional language, and I see no reason why they couldn't add the feature without it being in JavaScript, but that's not generally how TypeScript operates. If JavaScript doesn't add it, TypeScript won't. There's also lots of tooling that will type check your JavaScript when available, which will benefit from a simpler model.


For me, the limitations kill 90% of use cases.

In the absence of the |> % syntax I'd nearly always go with intermediate temporary variables instead of pipe(). "Point-free" syntax feels horrible to me, and wrapping everything in lambdas feels excessive.


I use Ramda pipe() all the time, and use descriptively-named intermediate/temp variables wherever I might want to write a comment


The problem with that is that it is impossible to infer the type of the n-ary pipe() function. Libraries “solve” this by overloading that function with n annotations, but that really isn’t a permanent solution, especially for libraries with millions of users, as there will always be a user that puts n + 1 parameters in that function.


Can someone remind me again why there's never been movement to add a second modern language to web-browsers? JavaScript was created in a weekend and then stuff tacked on for the last 28 years.

We know so much more about how to create programming languages today than we did then, and "Year of the Linux Desktop" has become a WebAssembly meme: every year since its introduction six years ago, it's always getting popular next year, with the next version, with feature XYZ. Apparently creating an unmaintainable, undebuggable mess from external languages with no true 1:1 mapping into WebAssembly isn't as big a hit as the originators expected.

Yet every time someone asks why there hasn't been movement here it is "Year of WebAssembly is next year!!!" WebAssembly has managed to slow actual progression towards something good. With the browser monoculture you'd think it would be easier now than ever to start a fully integrated second language with WebAssembly compilation for backwards compatibility.


Because there's not that much wrong with modern JS, with the "created in a weekend" mantra being wholly irrelevant. Back when JS was trash there was CoffeeScript and other lesser-used alternatives, and Google tried to push Dart but nobody cared. Today it's pretty much only JS and its superset TS, and that's not an accident. They're perfectly productive. My annoyance with JS is almost entirely to do with Node; otherwise I prefer JS to Python, for example.


I know a few people don't like it... but I've really enjoyed the Deno take on things for JS/TS runtime. While I'd prefer more native deno, the npm/node compatibility integration has made it really useable.

As for GP, you're right... there have actually been MANY attempts at other languages for in-browser. In the end, the JS runtime has been hardened and all other roads have led to WASM being the second target. I still remember a lot of people using VBScript in IE. I was an outlier in that I used JScript for classic ASP.


There have been efforts, but none have been totally successful.

In the past you had Java applets, Silverlight and Flash. Compile-to-JS languages depend on rather than replace JavaScript, but it's about as close as you can get without being a browser developer. Chrome did originally intend Dart to be supported natively in the browser but eventually gave up that effort. Perhaps now Chrome is dominant enough to push it through without getting buy-in from Apple and Mozilla, but I doubt they have any interest now that there's WebAssembly.

Like it or not, WebAssembly is seen as the answer for additional languages in the browser. And for most developers JavaScript functions well enough as either a development language or compilation target.


WebAssembly is just what you want, a second "modern" language that works in browsers. It hasn't reached 100% of its potential just yet, as there are some things missing for that (like DOM access) but once the language is feature complete, I'm sure most languages will have some sort of "Lang to WASM" tooling that'll allow you to write React apps in Ruby or whatever, if you so wish.

Stuff like that takes time though, so if you're antsy, you have two options: get involved, or wait patiently.


There's a few options out there if you don't mind being an early adopter. Most interactions via DOM bridging are slower than actual React... but it's kind of cool.

Yew (Rust) is one that I've been following with interest, the hello world examples are interesting enough. I know people that have been liking the direction of Blazor (C#) more, but it has a larger initial payload, that I don't care for, it's also closer to SSR approach in the browser.

The problem with both, IMO, is that they don't have a good UI library/toolkit. I find mui.com (with React) pretty much the best browser ui framework I've experienced (since 1996). To me, the first language + component library that targets WASM and that level of components will likely win.

Flutter is probably the closest example I'm aware of, though afaik it's JS as the browser target, not WASM, but wouldn't be surprised if this changes assuming DOM interop for WASM becomes better performing.


WASM is effectively "a second language" to browsers. Yes, you call it from Javascript, but it enables browsers to work with "any" language instead of standardizing on a second new language and doubling their workload by needing to integrate it with the rest of the browser just as tightly as they have with JS.


Internet Explorer supported multiple script languages.

The DOM in IE was essentially a COM API and callable from any supported scripting language. This included JScript (their JavaScript clone) but also VBScript and PerlScript.


I don't recall seeing anyone actually using PerlScript, or other scripting languages beyond VBScript in the browser for apps. I did work in a few locations where there was heavy push for VBScript for browser. I used JScript in classic ASP in much the same way.


If you want to use wasm, just go and use it. You don't need every other site to do the same before you jump in.

Personally, I do think JS is a better fit than Rust or C for almost everything people do on the web. And the support in other languages is still too bad to use. But you can use it whenever you want, unlike a second language someone might invent.


Dart could have been our savior.


You can still use Flutter/Dart if you want.


For languages that do not have the built-in power to change themselves, in many cases it might be better to stick to their feature set instead of introducing even more language concepts. Look at how much work is involved to get something as simple as pipelines. As if they will ever find the right syntax for everyone.

If we used a normal function, we might have to include a library or a dependency, or, heck, just take 5 minutes and write a pipeline function ourselves. Sure, it will not be syntactically as minimal as a _change of the language itself_ to allow pipelining, but at least it will not introduce even more language features, and everyone can easily look at the definition, change it, or include it in their own projects.
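And such a function really is a five-minute job; a minimal sketch:

    // Feed a value through a list of functions, left to right.
    const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

    const result = pipe(' foo ',
        s => s.trim(),
        s => s.toUpperCase(),
        s => s + '!'
    );
    console.log(result); // "FOO!"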

Ideally we would strive for a minimalistic set of features, which in turn can implement anything we want. Adding more and more features, without introducing a way for the user to modify the language without a committee and a lengthy process, seems short-sighted.

If you want to give the user more power over syntax, introduce a well-engineered macro mechanism (there are plenty of examples in other languages) and let people develop standards and libraries as macros. Later on, decide what to take into the standard language. Similar to how jQuery influenced modern JS standard functions like querySelectorAll. Even if you don't take something into the standard language, users are still free to include it in their project. No harm done, and everyone gets their lunch and can scratch their itches.


I understand the hassle around pipelines.

Entire ecosystems like `fp-ts` or `effect-ts` are regularly used with pipeable APIs and no one bats an eye.

The only thing that was needed was a syntax that would've allowed this:

    pipe(
      "foo ",
      capitalize, // "FOO "
      trim,        // "FOO"
      shout,       // "FOO!"
      console.log, // undefined
    )
to not require `pipe`.

      "foo "
      |> capitalize
      |> trim
      |> shout
      |> console.log
Why `pipe`? Because otherwise you need to write it like that:

`console.log(shout(trim(capitalize("foo"))))`, which is unnatural as you need to read the chain in reverse order, and is especially hard in less trivial cases than sequential string manipulation.


I'm pretty sure this'd be a welcome addition to the language but to be honest, the syntax looks shit.


What syntax would you propose or use?


I would propose -> instead of |>


but then it's not a pipe!


It’s a horizontal pipe. It arguably actually makes more sense when the values flow horizontally from left to right through the pipe.

In an alternative universe, Unix shell syntax may have used “—“ instead of “|” for piping.


PS: The commands in Unix pipes are executed in parallel, and “|” could be interpreted as a mnemonic of that fact. However, the same is not true for the programming-language feature discussed here, which only processes singular values, not streams.


hack pipes aren't "pipes" anyway. Every other language that uses |> uses it for functions, not expressions.


To be honest, |> looks more like a tipped-over traffic cone to begin with.


As someone who actually loves JavaScript, I think this is a really bad idea.

Maybe it has a place in other languages. I really don't want to see it in JS. We don't need more ways to do things implicitly or bass-ackwards from how they're actually behaving underneath. Syntactic sugar rarely makes code more readable. This operator only makes code seem more concise and as if it's executing in a way that it actually isn't.

I can absolutely see junior developers running wild with this kind of thing.

JS should be kept simple. This operator is not simple. It now makes understanding expressions more complicated. JS has its quirks and rough edges, but its power is in its simplicity. Please do not import the mistakes of other languages into JavaScript. If someone wants this operator, they should be forced to use a Babel transform or to compile their own JS interpreter.

OR just compile your favorite language runtime with the pipe operator to WASM.


An alternative is to make the pipe operator a simple function application and provide syntax for creating simple pipeline functions.

For example:

    left |> right
Would semantically translate to:

    right(left)
And you could define a pipeline function like so, where the following:

    const myPipeline = @[
        one(@),
        @.two(),
        @ + three,
        `${@} four`
    ]
Would translate to:

    const myPipeline = (value) => {
        const _1 = one(value);
        const _2 = _1.two();
        const _3 = _2 + three;
        const _4 = `${_3} four`;
        return _4
    }
Or:

    const myPipeline = (value) => `${one(value).two() + three} four`;
And you could define the placeholder value name (which would allow nesting):

    const myPipeline = @it [
        one(@it),
        @it.two(),
        @it + three,
        `${@it} four`,
    ]
You'd combine the two syntaxes to get immediately-invoked pipeline functions:

    // Using a modified example from the proposal:
    envars |> @ [
        Object.keys(@),
        @.map(envar => `${envar}=${envars[envar]}`),
        @.join(' '),
        `$ ${@}`,
        chalk.dim(@, 'node', args.join(' ')),
        console.log(@),
    ]
This is better, in my opinion, than building the '%' placeholder syntax into the pipe operator.


That % syntax is just completely unlike anything else I have seen in JS.

As a multi paradigm language, JS typically suffers from whatever programming style is on trend when these features are implemented. We are apparently on the other side of the pendulum now, but I can’t remember the last time I worked with a class and felt like that was right either.


We said the same things when arrow functions got introduced using an outlandishly un-javascripty syntax, as well as when template strings got introduced using symbols that were the domain of LaTeX. Now they're "just what JS looks like" to both old and new JS devs.

Look past the syntax, because you'll master it quickly enough and 2 years down the line forget it was ever not part of the language: does the actual functionality it introduces improve on what we can do and how we write and understand code or not?


Working with something that uses an ES3-like level of the JS language, it's actually painful not having some of the conveniences added in the past decade+.


I see someone's using Photoshop.


> JS typically suffers from whatever programming style is on trend

Perhaps they are just jealous of the C++ committee. How many lambdas have they proposed and added to the language? At least 3: the C one, the Java one and now the Rust one.


This just complicates things if you have any kind of complex nesting.

Something "simple" like:

   a(b(),c(),d(e(),f(g())))
Turns into the following:

   value |> b() |> a( %, c() , v2 |> e() |> d(%, v1 |> g() |> f(%) ))


Nope, it becomes:

    g() |> f(%) |> d(e(), %) |> a(b(), c(), %)

Which makes it super clear what processing is actually done: which is the data to process, and which are just parameters of the processing.

Because it could equivalently be:

    e() |> d(%, f(g())) |> a(b(), c(), %)

if the data you process is produced by e() rather than by g().

This new syntax allows you to express intent beyond what's possible without it.


If I saw that in a colleague's code, I'd be angry at them, as that's not legible.


It's no less legible than the original code, and at least it expresses the intent: what's being processed, what the processing steps are, and what the processing parameters are.

a(b(),c(),d(e(),f(g()))) is just function call soup.


And I’d be angry if I saw that too. Break out the soup into variables and have it all in a clean layout.


Use a functional language if you want to chain 10 functions together! I would definitely tell them in the PR to stop being clever and rewrite it


At least I can instantly tell which call is ultimately returning something, in that version.


There's nothing preventing you from not using the pipe for the last call, or even introducing that rule in your team if you can convince your colleagues it's a good idea, or even automating it with a linter.

If it's a really good rule, you can advocate for it at this stage. It might be prudent to use |> % only inside a function call parameter, or possibly as the right-hand side of an assignment, instead of everywhere the parser expects an expression.

a(b(),c(), g() |> f(%) |> d(e(), %))

Although I'll be honest, I don't like it. Other parameters of a() call for me occlude the flow and the intent.

We could make it a bit better with newlines.

    a(b(),c(), 
        g() |> f(%) |> d(e(), %)
    )
But if a() takes more parameters after the main one then we have the same problem as usual where parts of the same call can end up far away from one another.

    a(b(),c(), 
        g() |> f(%) |> d(e(), %),
        x(), y(), z())

For me

    g() |> f(%) |> d(e(), %)
    |> a(b(), c(), %, x(), y(), z());
is still better and I wouldn't want it to be prevented by language syntax.


That's almost worse. I think I'd have to break out pen and paper to figure out which of those things end up returning arguments to function "a".


I agree that such restriction would make it worse.


Except that the order of evaluation is different now. Side effects? Who cares.


Not to mention the confusion of nested percent vars


Cool stuff, but I miss the "it" variable from HyperTalk (the language used by HyperCard) which contained the result of prompts to the user. Just search for "The it variable":

http://www.jaedworks.com/hypercard/HT-Masters/scripting.html

  ask "How many minutes do you want to play?"
  put it * 60 into timeToPlay  -- convert it into seconds
Today we could have a reserved keyword that holds the result of the last statement executed. A practical example adapted from the article using "it" might look like:

  Object.keys(envars)
  it.map(envar => `${envar}=${envars[envar]}`)
  it.join(' ')
  `$ ${it}`
  chalk.dim(it, 'node', args.join(' '))
  console.log(it);
A better name for "it" today might be "_", "result", or perhaps "$" in a shell-inspired language like PHP. Most shells support "$?", so that could work too:

  # prints 0
  true ; echo $?
  
  # prints 1
  false ; echo $?


Kotlin uses "it" as the name of the implicit arg in lambdas. Makes things very readable imo:

    listOf(1, 3, 5).map { it * 100 }


it, _, result, and $ are all valid identifiers already; the only viable choices are tokens that are currently syntax errors.


I wonder if the "%" placeholder is the right approach. It makes the code longer in most cases.

Without pipes:

    a = d(c(b))
With pipes in the proposed form:

    a = b|>c(%)|>d(%)
My first approach to design it would be:

    a = b~>c~>d
So the rule is that on the right side of the pipe operator (~>) there is always a function. We don't need parentheses to indicate that.

If the function takes more than one argument, the extra arguments can be supplied by additional values on the left of it.

Without pipes:

    a = d(c(b,7))
With pipes in the proposed form:

    a = b|>c(%,7)|>d(%)
With the ~> approach:

    a = b,7~>c~>d


Your ~> operator is effectively the F#-style pipeline (using |>) that has already been rejected twice... Personally, I was fine with the F# style. The Hack style in TFA is also fine, though I'm not sure about `%` specifically. In either case, I've lost hope of seeing either pipelines or decorators actually make it through committee in my lifetime at this point... it's been about a decade now.


I haven't looked into F# pipelines for a while, because they seemed so bloated to me that it hurt. Isn't it the case that this:

    a = b|>c(%,7)|>d(%)
Becomes something like this in F# pipes?

    a = b|>(b)=>c(b,7)|>d
If not, what does it become?

My suggestion is much shorter:

    a = b,7~>c~>d


> three(two(one(value)))

    const oned = one(value);
    const twoed = two(oned);
    const threed = three(twoed);

This proposition goes out of its way to find problems with code that is written in a confusing and uncommon way in the first place.


It's annoying to have to decide on and write out so many names. The intermediary names are not relevant to solving the problem. This is so much less noisy:

    value 
    |> one
    |> two
    |> three


The intermediary names are extremely relevant to the next poor sucker who has to understand what you were trying to do.

Code is read far more than it is written. Use temporary variables. Put in the effort to name them once, and then that effort pays back every time anyone needs to read and understand the code.


> The intermediary names are extremely relevant to the next poor sucker who has to understand what you were trying to do.

There are lots of cases where they're not, and they're just noise, written in exactly the style of the original comment. Or worse, all the one-shot temporaries get assigned to the same worthless name.

Unless you live and die by the mantra that no expression can have more than one period, pipes pretty much just bring attribute/method chaining to arbitrary functions and expressions.
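
To illustrate (Hack-style % as in TFA; `shout` is a made-up free function):

    '  hi  '.trim().toUpperCase();    // chaining: each step must be a method on the previous value
    '  hi  ' |> %.trim() |> shout(%); // the pipe keeps the same left-to-right flow for free functions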


> The intermediary names are extremely relevant to the next poor sucker who has to understand what you were trying to do.

I just don't think that this is always true.

Consider:

    const highestScore = 
      players
      |> filter(x => x.isAlive)
      |> map(x => x.score)
      |> tryMax
I don't see how this is better:

    const alivePlayers = filter(x => x.isAlive)(players);
    const scoresOfAlivePlayers = map(x => x.score)(alivePlayers);
    const highestScore = tryMax(scoresOfAlivePlayers);
And you can add helpful comments to pipeline code if needed:

    const highestScore = 
      players
      |> filter(x => x.isAlive) // Dead players cannot win
      |> map(x => x.score)
      |> tryMax
More generally though, I don't see why forcing everyone to write out intermediary names all of the time leads to more readable code. If it's more readable to do so, I will. If a pipeline is more readable, why should we be prevented from using it?


> I don't see how this is better

Case in point:

> you can add helpful comments to pipeline code if needed

The version with explanatory variables explains the steps to you in code. With a pipeline you need to add comments to explain what you are doing.


Then you can mix-and-match:

    const activePlayers = 
      players
      |> filter(x => x.isAlive)

    const highestScore = 
      activePlayers
      |> map(x => x.score)
      |> tryMax
In any case, I don't see how being restricted to always using an intermediary variable for every step is an advantage.


And then it's a pleasure to open the debugger and immediately see the values for each step.


If pipes are added to JS then IDE support will follow very swiftly.


> I don't see how this is better:

    const alivePlayers = filter(x => x.isAlive)(players);
    const scoresOfAlivePlayers = map(x => x.score)(alivePlayers);
    const highestScore = tryMax(scoresOfAlivePlayers);
I think this is way better. The variable names tell me at an instantaneous glance what each clause is doing. I don't have to spend mental effort delving into what's going on with the lambdas, or scan back and forth to find a comment that may or may not be there or out of date if it is.

Furthermore, I'd wrap all of those lines together into a getHighestScore() function as well. That makes the complexity exactly as visible or as abstracted as you want at any given moment. The name of that function tells you what the aggregate of the operations is doing, and you can go look inside that function if you want to see the individual steps.
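
For illustration, a minimal sketch of that wrapper (reusing tryMax from the example, with plain array methods instead of the curried filter/map):

    function getHighestScore(players) {
      const alivePlayers = players.filter(x => x.isAlive);
      const scores = alivePlayers.map(x => x.score);
      return tryMax(scores); // tryMax as defined elsewhere in the thread
    }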


It's only less noisy because of the simplistic nature of the example. In real-world code, `one`, `two` and `three` are probably each a chunk of code crammed into a single line, and it's difficult to find out what the hell they are doing. Concatenate a few of these and that's a recipe for disaster. Less experienced engineers would add a comment at the top of the pipe chain explaining what's going on. More experienced engineers would divide and conquer and use temporary named variables (rendering comments useless).


Agreed. I'm not the one coming up with the one/two/three, I took it from the proposition.

Explanatory variables help the reader understand the steps of the process; this |> operator is a recipe for unmaintainable, expedient code.


The proposed pipe operator could only be efficiently implemented via a transpiler pass that lowers it to "regular" JS variables. I wouldn't want this feature in a JS engine due to the overhead it would require. Sometimes it's better just to say no to new features that yield questionable utility and don't reduce code size or complexity by much.
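
For illustration, a sketch of what such a lowering could look like (the _ref temporary is made up; actual transpiler output will differ):

    // value |> f(%) |> g(%, 1) could be lowered to:
    let _ref = value;
    _ref = f(_ref);
    _ref = g(_ref, 1);
    // _ref now holds the result of the pipeline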


I love the pipe operator in Elixir, but I've never really wanted it in JS. It's critical in Elixir because of the design of the runtime (immutability, functions only, no methods). In the rare cases where you might need it, like their three(two(one(value))) example, function composition is available from libraries or easy to do yourself.

It's just my opinion, but I think turning JS into a kitchen sink of language features is a mistake.


I know I'm in the minority, but I dislike both versions of the syntax, since I think chained code is almost always worse than just being forced to use intermediary variables for everything.

While it's fine to say Object.entries().map(), anything more is not only harder to read, but makes debugging and maintenance more difficult.


Certainly looking forward to the pipe operator, if it ever lands! It doesn't seem to have moved to the next stage recently, judging by the commit history, and it's been discussed for ages now.


The pipe operator is awesome because you can use it to “extend” objects without messing with their prototypes.

Missing String.titleCase? Write your own!

    'hello world' |> titleCase
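
For instance, a minimal sketch of such a helper (a hypothetical titleCase, not part of any proposal or standard):

    // hypothetical helper, written as a plain free function
    const titleCase = s =>
      s.split(' ')
       .map(w => (w ? w[0].toUpperCase() + w.slice(1) : w))
       .join(' ');

    'hello world' |> titleCase // => 'Hello World'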


There was also a different proposal that allows objects to be extended: https://github.com/tc39/proposal-bind-operator

Personally, I don't use classes much, but sometimes I think free functions are a little too hard to find, so I tend to experiment with the following pattern.

   interface User { … }

   const User = {
     rename(user: User, newName: string): User { … },
   
     getDisplayName(user: User): string { … }
   }
   
   const renke: User = { … }   
   
   console.log(User.getDisplayName(renke)); 
Which makes operations on a certain type easier to find (just write User and trigger autocomplete).

The alternative is of course having renameUser (or userRename) and getUserDisplayName (or userGetDisplayName). The prefixed version would make autocomplete easier, too.


This is the beauty of the pipeline operator. It adds very little extra machinery; it's just an infix operator that makes working with free functions easier.



But in the Hack proposal wouldn't that have to be

'hello world' |> titleCase(%)

I'm starting to like the F# proposal more now.


Consider the example code here in Hack,

  const weather = `https://api.weather.gov/gridpoints/TOP/31,80/forecast`
    |> await fetch(%)
    |> await %.json()
    |> %.properties.periods[0] 
In F# it would be much more verbose,

  const weather = `https://api.weather.gov/gridpoints/TOP/31,80/forecast`
    |> fetch
    |> await
    |> json
    |> await
    |> x => x.properties.periods[0]
    // or |> { properties : { periods: [result] } } => result
Hack also requires fewer keystrokes for calling functions that take multiple arguments.


The number of keystrokes is only one design consideration, and not even a very important one.


`titleCase('hello world')`, how you would already do this, seems equally OK to me.


Assuming you are only calling one function; if there are more, you get into a bit of a mess!


Pipes are cool but this falls pretty hard on the side of adding too much cruft to the language.


> Deep nesting is hard to read […] Temporary variables are often tedious

I'm a little torn, because pipes might be pretty nice, especially for prototype code and small projects. One thing this proposal doesn't acknowledge is that for production code, deep nesting is often impractical and considered an anti-pattern in the first place, so making it easier isn't a common need in my experience. Usually I'm going the other way, having to make it more tedious. Using temporary variables and breaking apart nesting is, far more often than not, necessary in order to do proper error checking, and just to make code readable, commentable, refactorable, etc. I feel like what we need is not a way to make deep nesting easier to read; it's a way to make temporary variables less tedious, perhaps while also piping from one function to the next. That would be really helpful in a deeper way than just adding another chaining syntax. Is the syntax stuck at "|>"? I guess it's not possible to overload bitwise-or ("|"), but "|>" feels maybe a little clunky.


At work I am always hopping between F# and JavaScript and I'm convinced that if more people had this experience they would want the pipe operator in JavaScript.

Unfortunately it's hard to convey the advantages to those who haven't used it. Maybe it's because it's such a simple bit of syntactic sugar?


I love the pipe operator in Elixir!


The pipe operator in Elixir also benefits from most functions in the standard library taking their subject as the first argument. No need for the % hack, so you can just do:

    "hello world"
    |> String.split(" ", trim: true)
    |> Enum.map(&String.upcase/1)


The current JS one isn't as nice as the Elixir one; at least it wasn't when I tried using it via Babel a couple of years ago.


The JS one does seem to have some more power than the Elixir one. For instance, in Elixir it's a bit kludgy to pipe into a second or third argument. I run into this often when I want to insert into a map. You can always define more functions, but otherwise it's annoying, because you end up with something like

  computation() |> &(Map.put(my_map, key, &1)).()
That said, with this lesser power you do kind of end up forced to design your functions in a way where piping makes sense, and IMO it leads to cleaner and more consistent APIs.


Pipes work really well with named arguments and partial-application. Both functionalities make it easy to get the unary function you want with minimal cruft. Lambdas solve the more complex case.

In OCaml, you often end up doing things like:

     List.init 3 ~f:(fun i -> i)
     |> List.map ~f:(fun x -> x + 1)
     |> List.fold ~init:0 ~f:(fun acc x -> acc + x)
     |> Stdio.printf "%d\n"
I'm ambivalent about adding them to JS however. It's a nice feature but I don't think it works well with the rest of the syntax.


As I mentioned upthread, most Elixir functions are designed to take the thing they operate on as the first argument.

    computation() |> (&Map.put(my_map, key, &1)).()
is terrible style when

    Map.put(my_map, key, computation())
works just as well and is more readable. It is pretty rare to have a pipeline that needs to insert the value somewhere other than the first position. And please, do not write single-element pipelines; I see them far too often from Elixir beginners.


I agree! I even pointed this out in my comment, but perhaps not clearly :)

I think the way it's done is a net positive for designing cleaner APIs, but there are times when I've already built a pipeline and storing the output is just the last step. That last step is, frustratingly, not always possible to express. I don't think one should do something like the above; it's just what you must resort to if you _did_ want to do it.


There is nothing wrong with doing

   data =
     this
     |> is
     |> a
     |> pipeline

   save_to_file(file, data)
Instead of trying to put the call to save_to_file into the pipeline by wrapping it in a closure.


From what I understand the constraint is intentional to guide you in the direction of writing simple pipe chains.


That's probably a wise choice. Pipes are great for simple situations, mostly for code clarity, but they can easily be abused, like a hammer seeking nails.

It's similar to await/async: you eventually start designing code that better suits that interface rather than pigeonholing it with complex syntax.


There is another way that isn't _as_ kludgy, but still not as nice as the JavaScript proposal:

    computation() |> then(&Map.put(my_map, key, &1))
It's the big reason the `then/2` function was created from my understanding.


Oh wow, TIL about `Kernel.then/2`. That definitely works around the syntax problem.

An aside: the implementation is kind of amusing. It almost seems unnecessary for this to be a macro, but maybe the compiler can optimize it a bit more? I would have expected TCO to suffice for my "simple" implementation of

  def then(value, fun) do
    fun.(value)
  end
https://github.com/elixir-lang/elixir/blob/a64d42f5d3cb6c327...


It's nice in Elixir because there's a strong convention that the first argument is what you'd almost always pipe into. The JS proposal is better suited for retrofitting onto an existing language.


Clojure has thread-first ->, thread-last ->>, and even thread-as as->, to define where the pipe will be applied. I find it very cool!


I love the pipe operator in OCaml!

I wrote this line for a compilers class:

  let _ = apply_effects effs in
  m |> fetch |> decode_execute |> memory_writeback
Actually looks like a RISC pipeline! It looks even better with code ligatures[1].

[1]: https://i.imgur.com/Qwx8CDr.png


This isn't the pipe operator in Elixir. It's from some PHP-derivative language that Facebook uses, and you shouldn't trust it, because nobody could even show in TC39 that it was sound.


const isRandNumOdd = Math.round(Math.random() * 100) |> % % 2 |> Boolean;

Excruciatingly contrived, but does this sort of arithmetic work in the Hack syntax? I'm genuinely curious; I couldn't find any mention of modulo (or remainder) in the proposal.


% % 2

Gross


...so only the first usage of `%` counts as a replacement?

What if I need the value to be replaced multiple times, like this:

   |> `${%.id}: ${%.friendlyName} ${%.url}`
I'm surprised they aren't going with an idiom like `$1`, `$2`, etc or something like in other languages that have "magic" lambda parameters.


> ...so only the first usage of `%` counts as a replacement?

Hopefully not, because there's no reason to: in the same way you can use + or - as prefix or infix, % as a value and % as a binary operator are not ambiguous.

    % % %
should not be an issue, though it’s useless and not exactly sexy looking.

> I'm surprised they aren't going with an idiom like `$1`, `$2`, etc

That makes no sense, $1, $2, and $3 are different parameters.

Using your example,

    |> `${$3.id}: ${$1.friendlyName} ${$2.url}`
makes absolutely no sense.

Not to mention the very minor issue that $1 is already a valid JS identifier.

> or something like in other languages that have "magic" lambda parameters.

% is one of those; it's what Clojure uses for its lambda shorthand. Scala uses `_` and Kotlin uses `it`.

The latter two are ambiguous but I guess since pipes are new syntax there wouldn’t be a huge issue making them contextual keywords.

Most languages with "pipelines" are curried, so it's not a concern there, and in the rest it tends to be a fixed-form insertion, so it's quite inflexible; but in both cases APIs are designed so they play well with that limitation. E.g. in a curried language you'd have the "data" item last (as is obviously the case in Haskell), which also allows for partial application in general, while in "macro" languages it depends where the magical argument gets inserted (IIRC in Elixir it's first, so functions written for pipe compatibility should take the main operation subject first).

Clojure is cool because it has both, plus a macro where you give it the name of the substituted symbol. However, being a Lisp, the pipe macros are still prefix, not infix.


> That makes no sense, $1, $2, and $3 are different parameters.

> Using your example,

    |> `${$3.id}: ${$1.friendlyName} ${$2.url}`

> makes absolutely no sense.

That's not what I was saying. I was saying that `$1`, `$2`, and `$3` _would_ be different parameters, which would be good for helping disambiguate.

That would enable this, from my example:

    |> `${$1.id}: ${$1.friendlyName} ${$1.url}`
while also enabling something like this:

    |> `$1.indexOf($2)`
...whereas just sticking with the single `%` means you _can't_ disambiguate, and that if they instead try to allow disambiguation by deciding that the first `%` is `$1`, and the second `%` is `$2` (and so on), then now you can't use the template-string example I gave.

Having unique "magic scope variables" at least allows you the flexibility to handle non-unary use-cases.

Either way, this is another case of "every lexer/parser has to be riddled with special cases" to handle "is this `%` a fancy-pipeline-identifier, or is it an operator?"


> That would enable this, from my example:

So the same thing except more verbose.

> while also enabling something like this:

Enable for what? A pipeline threads a value through a sequence of operations; there is no second parameter.

> ...whereas just sticking with the single `%` means you _can't_ disambiguate

Which doesn't matter because there is nothing to disambiguate.

> Having unique "magic scope variables" at least allows you the flexibility to handle non-unary use-cases.

Which do not and can not exist.

And even if they did (which, again, they don't), you could do exactly what Clojure does with its lambda shorthand: %1, %2, %3, %4.

> Either way, this is another case of "every lexer/parser has to be riddled with special cases" to handle "is this `%` a fancy-pipeline-identifier, or is it an operator?"

There is no special case; having the same character serve as both a unary and a binary operator is a standard feature of pretty much every parser. JavaScript certainly has several, as well as operators which are both prefix and postfix.


To give credit where it's due: I came across the proposal through this video by Theo, the CEO of Ping.gg: https://www.youtube.com/watch?v=h1FvtIJ6ecE


And kudos also belong to the people who've been heavily championing and bikeshedding the proposal on behalf of the rest of us for the last 8 years, which I just discovered is really nicely documented in the repo: https://github.com/tc39/proposal-pipeline-operator/blob/main...

There's usually a ton of nuance behind the syntax considerations, and I usually find that the people on TC39 care way more than I do about the things I never think about until it's too late. Peeking into their discussions is often very enlightening... and a reminder of how hard it is to do language design by committee and at scale.


Yeah, sure, just heap in more stuff. Because it isn't chaotic and ugly enough yet.


Can't JavaScript already do [this][1]?

    const thrush = (value, func, ...funcs) =>
        func ? thrush(func(value), ...funcs) : value;

    // no new syntax required
    thrush(
      envars,
      Object.entries,
      x => x.map(([key, val]) => `${key}=${val}`),
      x => x.join(' '),
      x => chalk.dim('$ ' + x, 'node', args.join(' ')),
      console.log);
[1]: http://www.davidgoffredo.com/thrush.html


The thrush combinator is neat. Here's another version without recursion:

  const thrush2 = (value, ...funcs) => funcs.reduce((value, func) => func(value), value);

  const thrush3 = (value, ...funcs) => { for (const func of funcs) value = func(value);  return value }
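
A quick usage check (both variants behave the same):

    thrush2(2, x => x + 1, x => x * 10); // => 30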


Reduce is perfect for this, nice point.


Well here’s a solution to a problem that didn’t really need any solving.

How about spending time on these instead?

• Actors

• Real Immutable Structs

This is going to create more unreadable code. While it may be cool and clever to put a pipe in and call it a day, I imagine most production code will become a pile of nested pipe gibberish that any junior engineer or new hire would pull their hair out trying to piece together.

We want to create features and solve business problems, and that in turn also requires maintenance. This is just some clever macro wrapper around currying. While functional style is cool and all, I don't really see much value for most day-to-day workflows.


Thinking back over decades of frontend projects, I had a hard time finding even a couple of places where pipes would have simplified the code. However, in the backend, please go ahead.

Having said that, can anybody provide an example with error handling per pipe?

You know servers are bitches :)


I don't think it's a good idea to introduce a left-to-right flow into a language that, by general design, strictly assigns right-hand values. The text mentions a back-and-forth in reading; this introduces a back-and-forth intellectually.

(There's already a limited left-to-right capability, namely chaining expressions using logical operators, especially when used outside of a condition. It should be mentioned, however, that this is already confusing to some.)


  !(funcA(transferObject) || true) || !(funcB(transferObject) || true) ...
  // ceci n'est pas une pipe

  function pipe(func, obj) {
    return !(func(obj) || true);
  }
  pipe(funcA, transferObject) || pipe(funcB, transferObject) ...
  // ceci n'est pas une pipe non plus

  function pipeB(func, obj) {
    func(obj);
    return 0 | 0;
  }
  pipeB(funcA, transferObject) | pipeB(funcB, transferObject) ...
  // no more pipes, please!
 
  funcA(transferObject), funcB(transferObject) ...
  // finally...
;-)


This is cool, but it always frustrates me to see TC39 focusing on these little improvements to the language instead of taking big bold steps that would have a much more significant impact.

Stuff like types, data binding, reactivity, etc. These would save so many kilobytes and CPU cycles if implemented natively. The world sorely needs that. God knows how much energy is wasted sending and processing huge bundles of JS billions of times every day.


This is similar to Elixir. It's such a natural way to program and so useful. I wish all languages had this.

Also shout out to bash pipes


> In the State of JS 2020 survey, the fourth top answer to “What do you feel is currently missing from JavaScript?” was a pipe operator.

In 2021 (https://2021.stateofjs.com/en-US/opinions/) it fell to 7th place.


JavaScript continues on its eternal and unavoidable march from a small quirky language towards a huge quirky language.


Maybe it's just that I'm old and have been doing JavaScript for 20+ years, but I HATE IT.


I actually find the original easier to read than the piped example given for this one:

Original

  jQuery.merge( this, jQuery.parseHTML(
    match[ 1 ],
    context && context.nodeType ? context.ownerDocument || context : document,
    true
  ) );
-----------------------------------------

With pipes

  context
    |> (% && %.nodeType ? %.ownerDocument || % : document)
    |> jQuery.parseHTML(match[1], %, true)
    |> jQuery.merge(%);
I think F# pipes are ideal in more complex cases; the % can add unnecessary complexity when reading code. Alas, it looks like we're not going to get that.


I played with Coconut recently (an FP layer on top of Python) and the `|>` syntax felt very annoying compared to `.` (even though it felt OK in OCaml).

But in an OO underlying language it's probably impossible to reuse it.


This makes me nervous.

In general, I think adding features like this to a mature language is a misstep, because it increases the cognitive load of "things you have to know to read other people's code." And that only ever increases: since changes like this can't remove previous approaches (for backwards-compatibility reasons), we'll now have three syntaxes for function calls? Yuck.

Left unchecked, this predilection eventually leaves you with languages like C++: a language where you can write good, safe code if you stick to modern methods, but good luck learning what "modern methods" are, or finding tutorial books that don't teach you any of the bad old approaches, or, most importantly, working with other people's C++ code that still has `setjmp` and `longjmp` in it because the language allows for it, therefore someone used it somewhere.

(oblig: https://xkcd.com/927/)


I'm starting to think the entire direction of modern Javascript is one giant yak-shave built on enabling bad decisions.

"I like functional programming so I'm going to do that in JavaScript" -> "Now I have a problem because JavaScript is not very good at that, so now let's radically alter JavaScript until it's... well, still not good at it, but it looks like it is, at a glance" (initially through libraries, now altering the language itself)

"Let's make as many calls async by default as possible" -> "Oh but actually I need most of them to seem synchronous, even if they're actually async, like 90+% of the time" -> "Callbacks?" -> "Oh god that sucked... promises?" -> "Better but still not great, let's just... uh... add `async` and `await` and watch as their use becomes so hilariously common that it's now painfully obvious that the default behavior is wrong?"


> watch as their use becomes so hilariously common that it's now painfully obvious that the default behavior is wrong

Hm. :) Hindsight being 20/20, perhaps "synchronous" was a bad default for a language embedded in an application space where the lifeblood is "network communications over unreliable channels."

Still, seemed a good idea at the time(1)

(1) ... at the time, they were doing a cute tech demo, there were alternative scripting languages under consideration, and I don't think anyone expected JavaScript to blow up to become the only viable option.


The syntax is simply disgusting; some examples look worse with pipes than without them.

I understand it is stage 2 now, but can we do something about it? I see that most comments here and elsewhere are critical; why should we get this stuff in JS because of a vocal minority that is trying to push it on everyone else?

If we keep adding the fifth-most-requested thing from the State of JS survey year after year, we will end up with some Frankenstein weirdness... JS was getting better and better; let's not make it worse.


This proposal has been languishing for years and years. I'm not sure when it last made progress, but I remember waiting for it with anticipation around 2018


Yeah, no. Having to reverse the function order just to use the pipe operator is already bad enough that I wouldn't even consider using it.


Constant remodeling is the sign of an ugly home.


Seems to me that someone "stole" it from F#. F# probably also took this idea from some language unknown to me. Here's a similar idea in C++: https://www.fluentcpp.com/2019/10/18/piping-to-and-from-a-st...

This is generally a weakness of text-based programming languages: you cannot easily express graph flows like

    / B1 \
 A>-|    | -> C
    \ B2 /
The |> operator only solves the problem for non-branched flows like A -> B -> C, making them more readable by removing nested calls.
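
For example, even with |> available, the branch point forces a named intermediate (a sketch reusing the names from the diagram above):

    const a = A();             // the branch point has to be named...
    const c = C(B1(a), B2(a)); // ...so it can feed both B1 and B2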

In theory you can create something similar using JS OOP by attaching map/use(func) methods to every prototype (e.g. Object.prototype.use = use; Object.prototype.map = map):

  function use(func) { func(this); return this; }
  function map(func) { return func(this); }
and then:

  (1).map(n => n + 1)
    .use(console.log)
    .map(n => "n = " + n)
    .use(n => console.log(`str = ${n}`));
Introducing a new operator instead of a library is a huge effort. I don't think this proposal will succeed, especially since current custom-operator support in JS is nil.


   three(two(one(value)))
.

   value |> one() |> two() |> three()
.

   a=one(value);
   a=two(a);
   three(a);
.

   pipe(value, one, two, three)
all seem fine, but in nr 2, where does the return value from three() end up?
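
One mitigation: under the proposal the pipeline is itself an expression, so the result of three() can at least be captured with an ordinary assignment (Hack-style % shown):

    const result = value |> one(%) |> two(%) |> three(%);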


I don't see how syntactic sugar makes any sense for JavaScript. Breaking compatibility is severe because every browser needs to catch up, yet it doesn't actually enable anything that couldn't be done before.


Browsers have established that they can roll out changes behind feature flags, and the community adapts and adopts them. This is a solved problem. The ES6 transition, await/async, string templates, etc. were major JS changes that required browsers and runtimes to change. They did. It was fine. They can do it again!


Promises added a very valuable new capability, but this would just mean what you write doesn't work on any non-updated browser, for a very minor syntax improvement.

If there are too many nested calls, assign some of the function results to variables first.


Async/await is pure syntactic sugar. Look at Python 3.0-era coroutines, which were iterator/generator based, as an example of async coding without special keywords. JS did not need those keywords to enable coroutines and async coding. People were already making their own promises, coroutines, etc. long before the new keywords were added.


Async/await has been supported by every browser that's not IE for something like six years, and it's easily polyfillable if you need to support earlier ones.


It lets a transpiler like Babel or TypeScript accept the new syntax and output equivalent code that works on older browsers.

Same reason async/await was useful far before there was native browser support - you could immediately use it in your codebase and compile to a legacy browser target.


If you're compiling to javascript anyway, why would you need a pipe operator in javascript instead of just nesting the function calls?


Because x |> f |> g |> h is much easier to read than h(g(f(x))) for most people.


If you're compiling from another language you would be using whatever syntactic sugar you want, why break compatibility just to have the compiled javascript look a little better?


> If you're compiling from another language

TypeScript isn't another language though. It is the latest official ECMAScript plus type annotations. Only some very, very few, rare, old things like enums really are different code. 99% of TypeScript is just "remove the types to get ECMAScript".

That TypeScript, the tool, also ships a transpiler is a distraction that made a lot of people believe TS is a different language. But the TS folks have always taken great pains to only support features that are, or are about to be, in the ECMAScript standard, and not to deviate from it. What they did initially with some namespace stuff and enums was before ES2015, when JS was lacking things many people thought were essential. Even then they added less than a handful of TypeScript-only constructs.

When you look at the Babel "transpiler" for TypeScript, before they added a bit more for class stuff, it pretty much showed that "transpilation" of TS to JS, as long as you targeted a recent ES version, was achieved by doing nothing more than removing all the type annotations.

I'm still mad at the TS guys for muddying the waters and confusing so many people by bundling type checking and transpilation in one tool. This could have been much clearer. I too stuck with Flow for quite some time, until I realized TypeScript really is JavaScript, while Flow communicated in its architecture and usage that it just "added types" (literally).


>TypeScript isn't another language though. It is the latest official ECMAScript plus type annotations. Only some very, very few, rare, old stuff like enums really is different code. 99% of TypeScript is just "remove the types to get ECMAScript".

That's another language. JavaScript doesn't have type annotations; even the suggested addition of type-annotation syntax to JS [0] doesn't actually do anything, because it can't and still be JavaScript. JavaScript doesn't have enums. JavaScript doesn't have interfaces. That 1% (although it's probably more than that) matters. If it can't run, unaltered, in a JavaScript interpreter, it isn't JavaScript.

[0]https://github.com/tc39/proposal-type-annotations


It's not a different language as in "it's a different language". It's just types added. The actual executable code is pure JavaScript. The type annotations have ZERO influence on what is executed; they are completely erased and only used during development.

To call this "another language" as if it was C vs. Python does not make any sense, unless your main goal is to win some Internet argument no matter what.


Yes, the language that Typescript emits is Javascript. I can write Python code that emits Javascript, but that doesn't make Python, itself, Javascript.

Languages can be structurally or idiomatically similar but still not be the same language. And there are more differences between Javascript and Typescript than just the type annotations (although that, alone, would be sufficient.) Typescript has generics, ffs.


You're completely missing the point here. If you are transpiling something, you don't need to worry about syntactic sugar in the base language you are transpiling to, only what you're transpiling from.


No, you don’t get it. Babel is for transpiling newer versions of JS to older ones, typically based on targeted browsers or Node runtimes.

JS is a fantastic FP language, and pipe/compose is already commonplace for people writing in that style. This just adds first-class support to the language.


> No, you don’t get it. Babel is for transpiling newer versions of JS to older ones

For all intents and purposes, JavaScript with all the extra features it has accreted over the years is a different language from the JavaScript of a decade ago, and a compilation step is necessary in order for web browsers to parse it.

If you're running a compilation step anyway, you may as well write the code in a language that isn't such a dumpster fire.

> JS is a fantastic fp language

Actually, it's pretty terrible at that.


> This just adds first class support to the language

If it were a different language you could say it "just" adds something, but JavaScript is not compiled, and any syntax change means you start over as far as compatibility goes. It is unique in its scope and usage, and this feature doesn't enable anything new for users and is only marginally useful for programmers.


Browsers add support for new JS features every year. You can use them or not; I don't care what you do. Every web stack I've worked on in the last 7 years has had a compile step, too, so you're either ignorant or being intentionally obtuse. Either way, it's not like anyone should listen to you.

So what’s your point, other than communicating how upset you are over a programming language? That we should be writing websites like it’s 1999?


I'm not sure why you're having a meltdown over this. You still haven't answered why someone needs compatibility-breaking syntax sugar if they're already compiling to JavaScript.


I just don't want to use source maps anymore ...


Indeed. The argument that "nested calls are hard to read" is strange, because this looks terrible.


Anything looks better than nested calls once you have started to get used to the benefits of reading left-to-right. It's one of those pains you don't realize you have unless it suddenly goes away.


I don't see how moving the actual thing that's returning (and it returns "toward" the left) all the way to the right end of the expression—as far away from the assignment, if any, as possible—improves readability.


Nested calls can be a code smell, sure. But that's easily fixable:

    one(two(three()));
Becomes

    let a=three();
    let b=two(a);
    one(b);
Clean and easy, no sugar required.


Lots of low-value names required. Those names will inevitably either be badly chosen or take far more time to make up than they are worth. And that's assuming the code isn't written by that one guy on every team who stubbornly insists on

   var x = three();
   x = two(x);
   one(x);
(having the names certainly is nice to have in the debugger, but I'd rather have those intermediate results be an explicit debugger feature than junk taking up mental bandwidth all over the code, at all times)


Good point, and yours does look better :)


The beauty of not using variables is that they don't pollute scope: a promise to the reader that it's perfectly fine to forget about an intermediate result immediately after seeing it passed into the next step of the pipeline. Otherwise, you never know if it will show up later for some unexpected purpose. In a left-to-right language, the values that do get a name implicitly stand out.


No compatibility is broken; old scripts will keep working just fine. Scripts written for the new syntax won't work in old browsers, but that's the nature of evolving standards anyway.


> Scripts written for the new syntax won't work in old browsers,

And critically, browsers are, for the most part, much more in sync these days across the board.

A lot of web devs got burned through the years of IE6-IE11 and have a natural distaste for browser level changes.

Pipes have been (formally) debated for 5 years now by the JS people, so it's not as if they're rushing this one.


Unfortunately, if your web page needs to deal with Chinese Android phones, you still need to deal with old browsers.

Their Android builds somehow always ship with broken auto-updating of WebView or Chrome, which results in people opening your webpage with quite old browser versions.


"Every browser" these days just means Chromium and WebKit, so all it takes is for Google and Apple to agree.

Firefox usage is down to a rounding error, and they happily implement whatever the commercial duo decides the web should have.


This feels like code golfing. Too messy to explain to newbies. Just extra friction.

You have to explain how the syntax works, the environment it works in, and the error handling. What the hell.


LETS FREEZE JS, why are we adding more stuff to JS? (੭ ˊ^ˋ)੭



This language is a garbage pile, a literal scrap heap. Look over there, that's the kitchen sink. For fuck's sake, we added the absolute aesthetic disaster that is CLASSES to the thing. The strength of this language is that we throw in everything and the kitchen sink. I've had one wish for the last 5 years, and it's for this goddamn operator to land in the language. Please just throw it in there like everything else; it will make my life so much more pleasant. If you don't like it, you can ignore it studiously, just like I do with classes.


I'll only allow it if you can pipe `|> debugger`


Anyone doing data analysis knows the value of a pipe.


The "Hack" pipe really is a bit of a hack. It conflates two orthogonal issues, a consise lambda syntax and function application.


What a pain to type! The idea is great, though.


What if you have to use the remainder operator? How would "|> % % 2" work?


The same way any other infix operator would: the operator has to have an expression on either side, so the first % can't be an operator, and you can't have two expressions next to each other anyway (i.e. you can't have "x x" or "f(x) x" or "% %"), so the second % has to be the % operator.

Definitely looks strange and could be confusing at first, but the syntax is unambiguous to parse. Add a comment, write "|> (%) % 2" perhaps, or just create a mod function and write "|> mod(%, 2)".
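
For instance, a minimal sketch of that last option:

    const mod = (a, b) => a % b;
    const isRandNumOdd = Math.round(Math.random() * 100) |> mod(%, 2) |> Boolean(%);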


Functions are universal. Choose them over everything else.

Functional F# syntax or bust.


It looks super ugly.


Please god no more...


They should have stuck with the F# proposal. The hack proposal just takes one more giant step toward turning JS into Perl.

Hack proposal

    value |> foo(%) for unary function calls,
    value |> foo(1, %) for n-ary function calls,
    value |> %.foo() for method calls,
    value |> % + 1 for arithmetic,
    value |> [%, 0] for array literals,
    value |> {foo: %} for object literals,
    value |> `${%}` for template literals,
    value |> new Foo(%) for constructing objects,
    value |> await % for awaiting promises,
    value |> (yield %) for yielding generator values,
    value |> import(%) for calling function-like keywords,
F# proposal

    value |> x => x.foo() for method calls,
    value |> x => x + 1 for arithmetic,
    value |> x => [x, 0] for array literals,
    value |> x => ({foo: x}) for object literals,
    value |> x => `${x}` for template literals,
    value |> x => new Foo(x) for constructing objects,
    value |> x => import(x) for calling function-like keywords,
The F# proposal would make `await` and `yield` into special syntax cases, or simply not allow them.

I'd rather do await/yield the old fashioned way (or slightly complicate the already complex JS syntax rules) than add the weird extra syntax. Arrow functions are elegant and already well-known and well-understood.


The proposal is not finished yet (and might never be). Pessimism aside, there is a whole issue on why it was decided to follow up with Hack's implementation [1].

You can always give your input on such decisions; the % token is still being "bikeshedded" [2] (is that a word?), and there's still the possibility of follow-up proposals that could implement some F#-esque variant.

[1]: https://github.com/tc39/proposal-pipeline-operator/issues/22...

[2]: https://github.com/tc39/proposal-pipeline-operator/issues/91


On the one hand, I tend to agree and I dislike making JS syntax even more complex than it already is.

On the other hand, it works well in F# (and the MLs that then borrowed it from F#) because it's just a standard infix operator there and everything is already curried, which is definitely not the case with JS. It means you will nearly always have to use a lambda when piping in JS, which is honestly a bit tedious.


You do for member access, e.g.

    let firstName = 
      person
      |> fun x -> x.FirstName
      |> trim
Some languages allow:

    let firstName = 
      person
      |> .FirstName
      |> trim
But you're right, expression-orientation and currying make this much more natural.


I really enjoy writing me some of that F#, esp. with Bolero, but I would take `value |> foo(%)` over actual F#'s `value |> (fun x -> foo x)` any day. However, I agree that the "F# proposal" comes across as better, esp. when we consider -copypasting- consistency of the language. Now, considering everything in JS already, you are right, we did it again, didn't we? After a couple more years, the Perl and JavaScript languages will finally merge and become one.


Wouldn't it be better to use '->' as the operator? This is about dataflow, so an arrow would represent that nicely, whereas '|>' doesn't really "mean" anything. An arrow means that something flows in the direction of the arrow.


I also pushed for this in TC39, and it was shut down for seemingly no reason. Since only one language with almost no user base uses |> with arbitrary expressions, people are guaranteed to get the wrong idea about how it works. It almost seems intentionally misleading.


Not only that but I can't be the only person who finds -> vastly easier to type than |>.


Sure, but I'd rather reserve that for a pattern-matching switch statement.


Why not use |> for that?


It's more common for pattern matching: Java's new syntax, Erlang, Elixir, Standard ML (fat arrow), F#, Kotlin, OCaml, Haskell, Rust (fat arrow), Ada (fat arrow), Scala (fat arrow), Zig (fat arrow), and probably a bunch of others too.

That's a lot of prior art for people to be comfortable with and as the fat arrow already has another meaning, overloading it in pattern matching might complicate things.


Right, but then it would not be the same as in those other languages, which use the fat arrow anyway.

Another syntax for pattern-matching could be simply ':' instead of the arrow.

In the early days of Smalltalk, the return statement was simply '^'. That could also be suitable for pattern matching: the idea would be that the switch statement returns something, pushing it up from the expression to whoever called it. So '^' might be good for that, whereas pushing the result to the right, to the next expression, could be ->.

Just my preferences.


I originally thought you were referring to this as a "hack proposal" as a way of denigrating its value, but in fact it is the "Hack proposal" where "Hack" is Facebook's fork of PHP.


It is strange to see JS taking ideas from a derivative of PHP.


FWIW, as an FB dev, I use this operator every day to await expressions in a REPL.


Is either proposal compatible with later adding a backward pipe <| operator?

If so I think that should be a consideration, even if in practice there isn't massive value in a backward pipe operator in javascript. It would be a shame to later want to add it and have to add another layer of kludge and messy syntax.


Yeah, F# has a backward pipe operator, and it works like this:

    a_value |> (fun a b -> a + b) <| b_value
In F#, this works because of automatic function currying. Not sure how that would apply to Javascript, though. Without function currying, what would this even mean?

   a_value |> ((a, b) => a + b) <| b_value
...since there's no currying, you'd just immediately invoke the function that sits "in the middle" with `a` set to `a_value`, but `b` set to `undefined`.
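
Spelled out (a sketch assuming F#-style semantics and left-to-right evaluation; nothing here is proposed syntax beyond |> and <| themselves):

    // a_value |> ((a, b) => a + b) desugars to:
    ((a, b) => a + b)(a_value)  // === a_value + undefined, i.e. NaN for numbers
    // ...and NaN <| b_value would then attempt to call NaN as a function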

The Hack-style proposal though...I don't think it would work with a backward-pipe operator at all. Not without adding a _second_ special-case symbol, at least.


For the non-initiated amongst us who might be confused: F# has first-class functions and everything is curried.

The expression you see is evaluated left to right. (|>) is a function taking a_value and (fun a b -> a + b) as arguments and applying a_value to the function. This returns a function taking b as an argument (that's called partial application). (<|) is once again a function, taking the resulting function and b_value as arguments and applying b_value to its first argument, which finally returns the wanted result.

The issue with unary functions doesn't exist in F#, because every function can be seen as a unary function returning a function taking one less argument, thanks to partial application. That's the beauty of currying.
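
A tiny JS sketch of the same idea (a hypothetical curried `add`, just for illustration):

    const add = a => b => a + b; // curried: takes a, returns a function awaiting b
    const add2 = add(2);         // partial application
    add2(40);                    // => 42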


The tie-fighter infix: |> <|, as in |> (+) <|.

Then there are ||>, |||> and <|, <||. As well as >> and <<.

Would be nice to use computation expressions for async and generators (https://es.discourse.group/t/add-computation-expressions-fro...).

This proposal reminds me of Scala with anonymous parameters '_'. Could I use more than one '%' for curried functions. xs.reduce(_+_)? Though I guess for perf reasons it might make sense to keep things as tuples so it would be xs.reduce(%[0]+%[1]).


Consider the example code here in Hack,

  const weather = `https://api.weather.gov/gridpoints/TOP/31,80/forecast`
    |> await fetch(%)
    |> await %.json()
    |> %.properties.periods[0] 
In F# it would be much more verbose,

  const weather = `https://api.weather.gov/gridpoints/TOP/31,80/forecast`
    |> fetch
    |> await
    |> json
    |> await
    |> x => x.properties.periods[0]
    // or |> { properties : { periods: [result] } } => result
Hack also requires fewer keystrokes for calling functions that take multiple arguments.


Your example is bad. Fetch resolves the promise even if you get a 400 or 500 error.

If your project is going to be robust, you MUST intercept the result and check before asking for JSON.

    const resp = await fetch(`https://api.weather.gov/gridpoints/TOP/31,80/forecast`)
    if (resp.ok) {
      const weather = await resp.json()
        |> x => x.properties.periods[0]
    } else {
      ...
    }
Your hack-syntax response would then become

    const resp = await fetch(`https://api.weather.gov/gridpoints/TOP/31,80/forecast`)
    if (resp.ok) {
      const weather = await resp.json()
        |> %.properties.periods[0]
    } else {
      ...
    }
Hardly a big win, and not at all a win when you realize that you can't copy/paste code to and from that syntax without risking weird errors due to the special symbol.

`await` isn't a function and (as I noted) would either not be possible or would require special syntactic consideration in the F# syntax; but the Hack syntax is basically one giant ball of special syntactic considerations.

I'd rather have explicit async/await and keep the simplicity of the F# syntax.


The F# syntax looks more consistent with idiomatic JS.

It always seems there's some obscure edge case that derails the nice path for these spec proposals, though. I haven't tracked the conversation on this one, but I wonder why they didn't go with it.


Someone at Google didn't like it, and apparently that overrode the entire original intent of the proposal. I'm all for industry participation, but I was pulling my hair out over this.


JavaScript isn't that kind of programming language. I don't know why people are constantly trying to make it something else.


Javascript is, without exaggeration (and with much chagrin), the language that has done more to popularize functional programming than any other programming language in history. It's only natural that they'd continue pushing on that front. :P


Second most after spreadsheets


Yes, I took a functional programming course with Racket, but in hindsight JavaScript made me appreciate and understand functional programming way more.


This was by accident though.


My pet theory:

Front-end devs are generally stuck with JS. They wish they could use other languages, but they can't convince their managers to adopt ClojureScript or PureScript, so this is what happens.

What's bonkers to me is that ECMA would rather keep adding features like this than add a macro system.


I'm not buying that theory. I'd love it if more front-end developers were eager to use PureScript or Elm, but popular sentiment appears to be that anything other than JavaScript is weird or hard or impractical or unreadable.


JavaScript with macros? Please please no. Can you imagine that?


There is an increasingly large population of "programmers" who have never known anything other than JavaScript and don't understand the concept of "different languages for different purposes". They want JavaScript to do everything so they don't have to leave their comfort zone.


I can't say I blame us, as it's the utility lingua franca for UIs, especially with cross-platform UIs via React Native or NativeScript.

Web browsers are a long way off from WASM being acceptable as an alternative in most cases, as it still can't access the DOM directly.

It's not that we don't understand different languages for different purposes; it's more that there is no real alternative here, and if we want certain features in the language, we have to follow the process to get them.


"There is an increasingly large population of "programmers" that have never known anything other than Python and don't understand the concept of "different languages for different purposes". They want Python to do everything so they don't have to leave their comfort zone."

Also, s/Javascript/$LANG_YOU_HATE/g


But clearly we need "that kind of programming language" for rich client web apps. We need many of the modern language constructs in such a language. Would you advocate for an entirely new language to meet that need?


This is perhaps a controversial opinion, but I don't think that you need this at all to do programming. This is just syntactic sugar! Type some parentheses and move on.


Then we might as well give up async/await, promises, generators, recursion, null coalescing, and more.

After all, they are all “syntactic sugar”, right?


What does that mean?

1. JavaScript is at its core a functional programming language. In many ways it is more like Lisp than like Java (and Lisp was the original intended syntax for JavaScript!)

2. Adding pipe operators is 100% in line for a functional programming language.

3. Even if it weren't, adding functional features to a high-level language is definitely a good thing.


Why do you think it's not?


[flagged]


That is a good idea, then we need more developers! More jobs for all! :-)



