Lambda Calculus (safalra.com)
45 points by jacquesm on Oct 12, 2009 | hide | past | favorite | 23 comments



This page looks like it gives a very nice introduction to "programming" in the pure untyped lambda calculus. But reading it, one would have no idea that a "meatier" side to the lambda calculus exists. Just understanding the Church-Rosser theorem and its consequences, for example, would give a far deeper understanding than anything shown here. One thing that saddens me is that it shows how to construct the combinators used to compute the partial-recursive functions, but never discusses the significance of this (the partial-recursive functions are an important computational class, and this is essentially a proof that they're lambda-computable; throw in a sixth type of combinator and you've just proved the lambda calculus capable of emulating one of the major models of computation).

It's certainly cool, but there's only so much to be gained from knowing that you can implement the factorial function with (λf.(λx.f(xx))(λx.f(xx))) (λfn.((λn.n(λu.λxy.y)(λxy.x))n)(λfx.fx) ((λmnf.m(nf))n(f((λnfx.n(λgh.h(gf))(λu.x)(λu.u))n)))).


I also think this article is lacking. Something about domain theory [1] and categorical models could also be discussed, particularly the latter as they relate to abstract machines, e.g. Curien's thesis work on combinators and functional programming. I'm not saying one can explore these in depth, but at least throw them out there as further reading, to help provide a roadmap for the "meatier" side.

One could also cite a few programming constructs, e.g. (let (..., and show how Lisp is essentially syntactically sugared lambda calculus.
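To make the let-as-lambda point concrete, here is a rough analogue in JavaScript (the variable names are only illustrative; the same desugaring works in any language with first-class functions):

  // Lisp:  (let ((x 5)) (* x x))  is just  ((lambda (x) (* x x)) 5)
  // The same idea in JavaScript: a "let binding" is just the application
  // of an anonymous function to the bound value.
  const result = (x => x * x)(5);
  console.log(result); // 25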

I should also add Carl Gunter's excellent "Semantics of Programming Languages: Structures and Techniques", as a good intro to this area.

[1] http://en.wikipedia.org/wiki/Domain_theory


This page: http://safalra.com/science/lambda-calculus/recursion/ contains exactly the phrase "Y combinator", which I found exciting. Still trying to understand many other bits of the article.


Recursion in pure functional programming has me stumped. I can't see how in practice it would ever 'end' the recursion; when I try to envision it, I see an endless loop in there.

In practice something needs to decide to end the recursion.

Can somebody shed some light on this?


The Y combinator only works in languages with either a lazy or call-by-name evaluation strategy. If you run it in a language with an eager evaluation strategy then it would actually result in an infinite loop. For those languages you would have to use the Z combinator or an equivalent. The Z combinator is just the Y combinator with the recursive argument wrapped in a thunk to prevent premature evaluation.
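For example, here is a rough sketch in JavaScript (an eagerly evaluated language); the names Z, factStep and fact are just illustrative:

  // The Z combinator: the Y combinator with the self-application wrapped in a
  // thunk (v => x(x)(v)), so it isn't expanded until it is actually called.
  const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));

  // A non-recursive "step" function; it receives the recursive call as `self`.
  const factStep = self => n => (n === 0 ? 1 : n * self(n - 1));

  const fact = Z(factStep);
  console.log(fact(5)); // 120 (the plain Y combinator would loop forever here)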


Apologies for ignorant questions, but I don't understand how lazy or call-by-name evaluation would solve that.

After all, lazy is just another way of saying 'deferred', and call-by-name is a way to identify what it is that you are calling; in either case you'd sooner or later have to do the evaluation, and then you're back to that loop again.

So, what am I missing?


Lazy doesn't just mean deferred; an unnecessary computation might never be performed at all. Take for example a function where the recursive argument is never called:

sample = (\this->(\x->x))

Then, (Y sample) can reduce as follows:

1: (Y sample)

2: (sample (Y sample))

3: ((\this->(\x->x)) (Y sample))

4: (\x->x)

Now, if your evaluation strategy required evaluating arguments to final form before applying an operator to them, then line 2 would expand forever:

2: (sample (Y sample))

3: (sample (sample (Y sample)))

4: (sample (sample (sample (Y sample))))

... And so on.

However, neither call-by-need (lazy) nor call-by-name evaluation strategies require evaluating an argument before applying some operator to it. Thus, in languages that support either of those evaluation strategies, the operator can be evaluated to the point where it throws away the argument, and the argument need never be evaluated.

This is a contrived example, but real-world recursion works for the same reason: at some point in the evaluation of the function, it no longer needs to be applied recursively, and once it reaches that base case, the recursive function can be discarded.
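If it helps to see it in a more familiar setting, here is a rough simulation of call-by-name in JavaScript, passing the argument as a thunk (the names are only illustrative):

  // Pass the "recursive" argument as a thunk (a zero-argument function) and
  // only force it if it is needed. sample ignores it, so it is never forced.
  const sample = thisThunk => x => x;

  // This thunk would blow up if it were ever evaluated, but it never is.
  const result = sample(() => { throw new Error("never evaluated"); })(42);
  console.log(result); // 42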


Thank you!

Ok, I think I see a glimmer of understanding here; it doesn't quite 'click' yet, but it is a lot clearer now.

I get the 'never needed' bit; that's like in many other languages where you write, for instance,

   if x and y then 
If x = false, then y is never evaluated. Not quite the same, but it's a case where you can optimize away the evaluation of something because its result will be discarded anyway.

The last part of your explanation still has me going in circles about what makes the decision to discard: it can't be the function itself, so it has to be some criterion outside the function causing that. For instance, maybe if no more work is being done, or if some arbitrary precision level has been reached.

Interesting stuff this!

In 'pure' functional programming you wouldn't have an 'if' statement to make the decision with, or is that allowed to be used to end recursion? I just can't see that working using functions alone.


There is very much an "if-statement" in lambda calculus.

The basic idea is, in the Church booleans, we think of true as a function which takes a then clause and an else clause and returns the then clause (T = λxy.x), and false as a function which takes a then clause and an else clause and returns the else clause (F = λxy.y). Then we get our if statement just by applying a predicate (which returns F or T) to a then clause and an else clause. Indeed, that is very much a way to get recursion to stop -- the typical Church-numeral definition of the factorial function works that way.
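A rough JavaScript transliteration of those definitions, if it helps (the names are mine, and note that JavaScript is eager, so for recursion the branches would have to be wrapped in thunks):

  const TRUE  = t => f => t;            // T = λxy.x : keep the "then" clause
  const FALSE = t => f => f;            // F = λxy.y : keep the "else" clause
  const IF    = b => t => f => b(t)(f); // apply the predicate to both clauses

  console.log(IF(TRUE)("then")("else"));  // "then"
  console.log(IF(FALSE)("then")("else")); // "else"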

By the way, one sure-fire way to get evaluation of a lambda expression to terminate if possible is leftmost-reduction -- that is, reducing the leftmost redex first. Intuitively, the leftmost redex is the "outermost," and any other reduction preserves it, so it must be reduced at some point.


Why wouldn't a pure functional language allow you to have an if statement? Haskell certainly has one!

The key is that given the same input you need to produce the same output every time. Which is great as long as you're not doing I/O. (Haskell addresses that with monads - even though it looks like you're giving the same input, you're really giving a different input each time!)


Ok, so that dilutes the 'pure' of the functional part. I know that in practice functional programs have an if to force termination, but the lambda calculus deals with abstractions. Anything evaluating lambda expressions then has to deal with that, or you'd have to expand the syntax of the calculus to contain a termination function which 'magically' decides that enough is enough.

According to the definition all functions are unary functions; an if would have to be a binary function, wouldn't it?


With currying, you can easily make functions of n variables out of functions of one variable. To avoid syntactic confusion, here is a comparison in JavaScript:

  // Call this with foo(x, y)
  function foo (x, y) {
      return x*y;
  }

  // Call this with foo(x)(y)
  function foo (x) { return function (y) {
      return x*y;
  }}
The key thing to notice is that you can get from one form to the other with a straight text substitution. So restricting yourself to functions of one variable involves no loss of generality.

Going back to the lambda calculus, you can do Boolean logic there as well. See http://safalra.com/science/lambda-calculus/boolean-logic/ for details. In fact I believe that the if construct in Haskell is just syntactic sugar around that. So it isn't that there is a decision made within the function, but rather that you call a function which turns out to be one of two different functions.


Ah, right, yes, that's true; I forgot about the currying bit (I read up on it earlier and promptly forgot it).

I'm coming at this from a coder's perspective, not the mathematical side of it, so the code example is much appreciated; that makes it much easier to understand.

The function 'foo' returns a function that has already 'bound' 'x' to whatever value you put in; the 'scope' then allows you to fish out the right 'x' when you're in the inner function doing the multiplication.

So, does that mean that as far as the inner function is concerned 'x' is 'free'?

So far when reading articles about the lambda calculus it seemed like the 'free' variables were somehow magically present in the scope of the function without a hint about where they came from.


I could easily be wrong, but here is my understanding. Consider this snippet of JavaScript:

  function foo (x) { return function (y) {
      return x*y;
  }}

  bar = foo(5);
Within bar, x is bound and y is free.
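For concreteness, a small usage check (just running the snippet above):

  console.log(bar(3)); // 15, because the inner function still sees x = 5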

Does that make sense to you?


Ok, so now 'bar' contains a function where the 'x' is already bound to the '5', but the 'y' is still free because it has not yet been 'bound' to any parameter. Is that what you mean?

bar contains "function(y) { return 5*y; }"

The 'x' has been replaced by the '5' because of the function call. So 'binding' effectively pins down a function parameter and allows you to replace its instances in 'named' form with its value to reduce the expression?


That is the way that I understand it. However you should be warned that I have never made a detailed study of the lambda calculus, and so might have basic misconceptions about it.


> Anything evaluating lambda expressions then has to deal with that or you'd have to expand the syntax of the calculus to contain a termination function which 'magically' decides that enough is enough.

This isn't so magical. I don't know all the details, but it has been proved that:

* If some expression can be reduced to a normal form, this normal form is unique (modulo variable renaming, i.e. alpha conversion).

* There is a systematic (mechanical) strategy which is guaranteed to reach the normal form, if it exists at all: normal order reduction.

Normal order reduction reduces the outer-most redex in an expression (instead of the inner-most one, as in eager evaluation, i.e. call by value). This is why you don't need a special built-in `if` to stop the recursion.

A small example:

    true  = \t f. t
    false = \t f. f

    if = \b t f. b t f

    nt = something_that_doesn't_reduce_to_normal_form
    id = \x. x
Now, trying to reduce this expression (in normal order):

    if false nt id -- outer redex = if X X X
    false nt id    -- outer redex = false X X
    id             -- normal form = \x. x
Now, without using definitions (except for `nt`):

    (\b t f. b t f) (\t f. f) nt (\x. x)
    (\t f. f) nt (\x. x)
    (\x. x)
Finally, the outer-most thing may not be a redex (but you can still reduce inner redexes). This is weak head normal form. For example:

    \x. ((\x. x) x) -- weak head normal form. Outer redex = (\x. x) X
    \x. x           -- plain normal form
You can decide whether weak head normal form is enough, or go to normal form.


I posted this because I'm trying to wrap my head around this subject, and it is one of the most concise and clear explanations that I've found so far.

I want to get a more complete understanding of functional programming, and that seems to start with understanding the lambda calculus.


Understanding modern functional programming via pure lambda calculus is somewhat like understanding C++ via raw assembly. Which is to say, you'll have a great low-level understanding to build on but a long, long road ahead.

It is a lot of fun, though. I once spent a delightful afternoon puzzling out how to subtract Church numerals knowing only that it was possible to do so, having previously worked out the far simpler addition and multiplication functions. Highly recommended!
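(Spoiler below for anyone who wants to try the same puzzle first.) One possible sketch in JavaScript; all the names are mine, and PRED uses the standard trick of rebuilding n while discarding one application of f:

  const ZERO = f => x => x;
  const SUCC = n => f => x => f(n(f)(x));
  const ADD  = m => n => f => x => m(f)(n(f)(x));
  const MUL  = m => n => f => x => m(n(f))(x);
  // Predecessor: fold over n, but throw away the very first application of f.
  const PRED = n => f => x => n(g => h => h(g(f)))(_ => x)(u => u);
  const SUB  = m => n => n(PRED)(m);   // truncated subtraction: max(m - n, 0)

  const toInt = n => n(k => k + 1)(0); // convert back to an ordinary number
  const two   = SUCC(SUCC(ZERO));
  const five  = ADD(two)(SUCC(two));
  console.log(toInt(SUB(five)(two)));  // 3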


I figure you can't really say you understand a programming language until you can re-implement it from the ground up.

So that's my intermediate goal.

Spot on about C/C++ through the assembler route, by the way: if there is one excellent way to really grok those, it is via the -S flag.


Implementing a language is complicated enough that by doing so you fully learn the implemented language AND the language used for implementation. Win-win!


I find the lambda calculus is needed when you get into some of the subjects that functional programming languages are a gateway to. It definitely helps when trying to read the Coq manual.


I don't know that the lambda calculus is necessary for functional programming. It can't hurt, but I don't think it's vital. I basically just think of the l.c. as another equivalent model for general purpose computing, just like Turing machines.



