There's some kind of underlying, overwhelming point in there...
That tail call elimination, aggressive inlining, escape analysis, and continuation-passing style are saying a lot about the boundary between blocks in Algol-style languages, on one hand, and functions in LISP and ML derivatives, on the other.
I've seen someone in a Quora thread state that monads are just the semicolon/newline, which, beyond the witticism, says that (most) monads sequence code, and that's exactly the point of blocks: statements separated by semicolons, semantically executed in order, the whole block forming one atomic... thing.
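The "programmable semicolon" reading can be sketched in a few lines. This is a hypothetical illustration (the names `bind`, `parse_int`, and `half` are mine, not from any library): `bind` chains two steps the way `;` chains statements, with `None` standing in for failure.

```python
# A Maybe-style "bind" acting as a programmable semicolon:
# None means a step failed; bind sequences steps like ';' sequences statements.

def bind(value, step):
    """Run `step` on `value`, unless a previous step already failed."""
    return None if value is None else step(value)

def parse_int(s):
    """Parse a decimal string, or fail with None."""
    return int(s) if s.strip().lstrip("-").isdigit() else None

def half(n):
    """Halve an even number, or fail with None."""
    return n // 2 if n % 2 == 0 else None

# Imperative reading:  n = parse_int("42"); r = half(n)
assert bind(bind("42", parse_int), half) == 21

# A failing step short-circuits the rest of the "block".
assert bind(bind("oops", parse_int), half) is None
```

The semicolon analogy is exactly this: `bind` decides what "and then" means, where a plain block hard-codes it.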
Now yeah. Where am I getting to?
I'm not sure.
The point of functional programming is composability, which is obtained in many philosophically and technically different ways, but those abstractions, naively implemented, make call stacks explode and garbage collectors cry in agony.
TCO aims to find functions that can be seen as a block of statements setting n different variables, a block which is then iterated some number of times with new values of those variables.
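Concretely, that rewrite looks like this (a sketch, hand-applied: Python itself does not perform TCO, but the loop below is what a compiler that does would produce from the recursive form):

```python
# The tail-recursive form rebinds two "variables" (n, acc) per call,
# so a TCO-capable compiler can reuse one stack frame, i.e. turn it
# into the loop that follows.

def fact_rec(n, acc=1):
    if n <= 1:
        return acc
    return fact_rec(n - 1, acc * n)   # tail call: nothing left to do after it

def fact_loop(n):
    acc = 1
    while n > 1:                      # the recursive body as a block...
        n, acc = n - 1, acc * n       # ...iterated with new values of n, acc
    return acc

assert fact_rec(10) == fact_loop(10) == 3628800
```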
Escape analysis aims to find functions that take an argument A and return a value R where R can be substituted for A in the caller's memory. Such a function is equivalent to a block that simply sets a variable already in scope.
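A sketch of that equivalence (Python is used only for illustration; the actual analysis lives in compilers like HotSpot's or Go's, which elide the allocation rather than rewrite your source): the functional version allocates a fresh R from A, and since the caller immediately replaces A with R, it is observably the same as updating A in place.

```python
# Functional style: takes A, returns a freshly allocated R.
def incremented(xs):
    return [x + 1 for x in xs]

# Block style: just sets a variable that is already in scope.
def increment_in_place(xs):
    for i in range(len(xs)):
        xs[i] += 1

a = [1, 2, 3]
a = incremented(a)        # R substituted for A: the allocation could be elided

b = [1, 2, 3]
increment_in_place(b)     # same observable result, no new allocation

assert a == b == [2, 3, 4]
```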
(state-ish) monads aim to allow the sequential and atomic execution of functions that are related in a way that's hard to fit into the "pipes" model, typically by passing around a data structure representing the state those functions "live in". That data structure is roughly equivalent to the scope in block-structured languages.
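A minimal state-passing sketch (the names `get`, `put`, and `then` are my own, echoing the usual State-monad vocabulary): each action is a function from state to a (result, new state) pair, and `then` sequences two actions while threading the state, which plays the role of the block's scope.

```python
# State-threading actions: state -> (result, new_state).

def get(state):
    """Read the current state as the result."""
    return state, state

def put(new):
    """Return an action that replaces the state."""
    return lambda state: (None, new)

def then(action, make_next):
    """Sequence: run `action`, feed its result to `make_next`,
    and thread the updated state into the next action."""
    def combined(state):
        result, state2 = action(state)
        return make_next(result)(state2)
    return combined

# "n = counter; counter = n + 1; read counter" written as chained actions:
program = then(get, lambda n: then(put(n + 1), lambda _: get))
result, final_state = program(41)
assert result == 42 and final_state == 42
```

Nothing here is global: the "scope" is just a value handed from step to step, which is what makes the whole chain one composable unit.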
CPS means, more or less, that the stack is a great way to deal with tree-like expression evaluation, but that some control flow is hard to fit into a tree, or that it's a waste of stack space to do so.
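A hedged CPS sketch (names are mine): instead of returning up the stack, each step hands its result to a continuation `k`. Control flow that doesn't fit the tree, here aborting a whole product on the first 0, becomes a plain call to a different continuation, with no unwinding.

```python
# Product of a list in CPS, with an escape continuation for early exit.

def product_cps(xs, k, abort):
    if not xs:
        return k(1)                   # empty product
    head, tail = xs[0], xs[1:]
    if head == 0:
        return abort(0)               # jump straight out of the computation
    # Extend the continuation: "multiply by head, then do whatever k did."
    return product_cps(tail, lambda rest: k(head * rest), abort)

identity = lambda r: r
assert product_cps([2, 3, 4], k=identity, abort=identity) == 24
assert product_cps([2, 0, 4], k=identity, abort=identity) == 0
```

In direct style the early exit would need an exception or a sentinel threaded through every return; in CPS it's just another function call.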
It seems like the goal of a lot of these techniques is to offer, at the same time, the composability of functions and the performance of blocks.