My pynto https://github.com/punkbrwstr/pynto is a similar framework for creating dataframes, but using a concatenative paradigm that treats the frame as a stack of columns. Functions ("words") operate on the stack to set up the graph for each column, and execution happens afterwards in parallel. Instead of function modifiers like @does it uses combinators to apply quoted operations to multiple columns. The postfix syntax (think PostScript or Factor) is unambiguous, if a bit old-school.
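To make the idea concrete, here's a minimal sketch of the concatenative approach in plain Python. This is not pynto's actual API; the names (`push`, `add`, `each`, `evaluate`) are made up for illustration. Words only build up thunks on the stack, and nothing is computed until the final evaluation step:

```python
# Hypothetical sketch of the concatenative idea (not pynto's actual API):
# "words" manipulate a stack of column thunks; nothing runs until evaluate().

def push(value):
    """Word that pushes a constant column (as a thunk) onto the stack."""
    def word(stack):
        stack.append(lambda: [value] * 3)  # 3-row column for the demo
    return word

def add(stack):
    """Word that pops two columns and pushes their lazy elementwise sum."""
    a, b = stack.pop(), stack.pop()
    stack.append(lambda: [x + y for x, y in zip(a(), b())])

def double(stack):
    """Word that doubles the column on top of the stack, lazily."""
    col = stack.pop()
    stack.append(lambda: [2 * x for x in col()])

def each(quotation):
    """Combinator: apply a quoted word to every column on the stack."""
    def word(stack):
        for i in range(len(stack)):
            sub = [stack[i]]
            quotation(sub)
            stack[i] = sub[0]
    return word

def evaluate(*words):
    """Run the postfix program, then force every column thunk."""
    stack = []
    for w in words:
        w(stack)
    return [col() for col in stack]  # execution happens only here

# postfix program: 1 2 add, then double every column via the combinator
print(evaluate(push(1), push(2), add, each(double)))  # [[6, 6, 6]]
```

The point of the combinator (`each` here) is that the quoted operation never names its operands, so the same quotation can be reapplied to any number of columns.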
The stack-oriented approach can also be used within modern languages to realize the benefits of simplicity and code reusability. I created a Python package for data analysis that treats a data frame like a stack of columns and lets you manipulate columns using postfix operators: https://github.com/punkbrwstr/pynto
Building up complicated time series transformations by composing simple functions helps me be sure I'm doing what I think I'm doing. Since the transformations are tacit expressions that don't name specific parameters, they are very easy to re-use and combine. And I have some combinators that can apply the functions in pretty flexible ways. Also, since it's all lazily evaluated at the column level, I can work with huge tables but only end up computing the subset I need.
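Here's a rough sketch of what per-column laziness buys you (again, invented names, not pynto's API): define a frame as a mapping from column names to thunks, and only the columns you actually pull out ever run.

```python
# Hypothetical sketch of per-column lazy evaluation (not pynto's actual API):
# a frame maps column names to thunks; selecting a subset only runs those.

computed = []

def expensive(name):
    def thunk():
        computed.append(name)      # record which columns actually ran
        return [1.0] * 4           # stand-in for a real time series
    return thunk

# a "huge" frame: 100,000 columns, defined but never materialized
frame = {f"col{i}": expensive(f"col{i}") for i in range(100_000)}

# pulling two columns evaluates exactly two thunks
subset = {name: frame[name]() for name in ("col7", "col42")}

print(computed)  # ['col7', 'col42'] -- the other 99,998 never ran
```

Defining a column is cheap (it's just a closure), so the size of the full frame barely matters; cost scales with the columns you materialize.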
You definitely could take a similar column-level functional approach that way, but I think the simple syntax of the stack-oriented approach makes the code easier to read and errors easier to catch. It would also be a lot longer.
Thanks! I tried keeping it all stack-based, but I really wanted default arguments. In the end the parameters felt more usable, even if they are "impure". I went through a bunch of iterations of a postfix calculator for time series before realizing there is a whole world of concatenative languages. The Factor paper really blew my mind.