My point is that sometimes you don't have a single "thread" of calculation. Putting rbind into a pipe like you did is somewhat artificial (it breaks the symmetry) and doesn't work so well if there is some pre-processing before the merge and some post-processing after it (or if a function has two or more essentially different arguments that each need preprocessing). You may say that having multiple pipes, one merge operation (or whatever the multi-input function is), and then a downstream pipe is the "human readable" way to do that. I'm not sure whether that makes programmers who can handle some nesting superhuman or subhuman :-)
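To make the shape of the problem concrete, here is a minimal sketch of the pattern described above, using dplyr; the data frame names (sales, returns) and columns are hypothetical, invented purely for illustration:

```r
library(dplyr)

# Two inputs, each needing its own preprocessing before the merge --
# neither "thread" of calculation is naturally the primary one.
left  <- sales   %>% filter(!is.na(amount)) %>% mutate(amount = amount / 100)
right <- returns %>% filter(qty > 0)        %>% rename(amount = refund)

# The merge takes two equally important arguments; writing it as
# left %>% bind_rows(right) privileges one input arbitrarily.
combined <- bind_rows(left, right) %>%
  group_by(id) %>%
  summarise(total = sum(amount))
```

This is the "multiple pipes, one merge, downstream pipe" structure the comment refers to.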
Teaching a "paradigm" can be too limiting. Looking at some random tutorial on the web:
"To demonstrate the above advantages of the pipe operator, consider the following example.
round(cos(exp(sin(log10(sqrt(25))))), 2)
# -0.33
"The code above looks messy and it is cumbersome to step through all the different functions and also keep track of the brackets when writing the code.
"The method below uses magrittr‘s pipe (%>%) and makes the function calls easier to understand.
Really? Do we want to teach people that the code below is so much better than the code above? What do we expect them to do if they find something like

which is a perfectly readable way of calculating the evolution of the value of a 60/40 portfolio of stocks and bonds from the value of each component at each rebalancing date?

http://thatdatatho.com/2019/03/13/tutorial-about-magrittrs-p...
I don’t think anyone is arguing that the pipe should be the -only- form of composition. Just that it’s a useful form when you have a linear sequence of transformations. Sometimes it’s useful to force something close to being linear into a linear form for consistency, but more often you would switch to an alternate form of composition, typically assigning to intermediate variables.
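As a small sketch of the two styles being contrasted (the steps themselves are arbitrary, chosen only to illustrate; mtcars is a built-in dataset):

```r
library(dplyr)

# A genuinely linear sequence of transformations reads naturally as a pipe.
result <- mtcars %>%
  filter(cyl == 4) %>%
  mutate(kpl = mpg * 0.4251) %>%
  arrange(desc(kpl))

# The same steps with intermediate variables -- the usual alternative
# once the flow stops being linear.
four_cyl <- filter(mtcars, cyl == 4)
with_kpl <- mutate(four_cyl, kpl = mpg * 0.4251)
result   <- arrange(with_kpl, desc(kpl))
```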
It’s easy to find examples of using the pipe in ways that I would consider suboptimal. But that doesn’t affect my thesis that, on average, the use of the pipe leads to more readable code.
I was also initially quite sceptical of the pipe, since it is a fundamentally new syntax for R (although obviously used in many other languages). I think the uptake by a wide variety of people across the R community does suggest there’s something there.
I was actually excited about pipes when they were introduced but in the end I'm pretty happy writing and debugging "unreadable" code.
I had a similar experience with Lisp syntax: Clojure's threading macros seem a neat idea but I do actually prefer old-style nesting of function calls.
Maybe my R journey is a bit atypical. I started learning R around the time you created reshape and ggplot and used them extensively. But as the "tidyverse" thing has evolved, I have found myself more attracted to "base" R as I've become more familiar with its data structures and functionality.