What is the contribution of lambda calculus to the theory of computation? (cstheory.stackexchange.com)
112 points by kocheez75 on April 1, 2014 | 47 comments



The λ-calculus is basically the MVP of programming languages. It allows somebody designing a language feature or part of a type system to experiment with that feature in isolation. Fast iteration for programming language designers.

It's also great as the core of a programming language. If you can express a feature just in terms of the λ-calculus, you can implement it almost for free. It's just desugaring. Most of Haskell is like this: while the surface language has become rather complex, the majority of the features boil away rather transparently into typed lambda terms. Haskell uses a variant of System F which it actually calls "Core" to underscore how core this concept is :).
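To make "it's just desugaring" concrete, here's a rough sketch (my example, not GHC's literal Core output) of how a surface construct can boil away into nothing but lambdas and application:

    -- Surface Haskell: a 'where' clause
    double :: Int -> Int
    double n = twice n
      where twice x = x * 2

    -- Roughly what it desugars to: just lambda terms
    double' :: Int -> Int
    double' = \n -> (\twice -> twice n) (\x -> x * 2)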

Of course, as the accepted answer points out, the λ-calculus is also very useful for logic. In some sense, it is a sort of "proof essence": the basic building block of what a proof is. I really like this notion because it gives me a concrete, first-class object to represent a proof. Seeing proofs as programs really helped me come to terms (heh) with mathematical logic.
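For a tiny taste of proofs-as-programs (my own sketch, reading a Haskell type as a proposition and a term of that type as its proof):

    -- A -> A: the identity function proves "A implies A"
    refl :: a -> a
    refl x = x

    -- (A -> B) -> (B -> C) -> (A -> C): composition proves
    -- that implication is transitive
    trans :: (a -> b) -> (b -> c) -> (a -> c)
    trans f g = \x -> g (f x)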

One surprising use for the λ-calculus outside of CS is in linguistics. In particular, linguists use typed λ-calculi to construct formal semantics for sentences. It's a way to build up the meaning of, well, meaning. Or at least the things we say ;). I'm not super familiar with semantics in linguistics, but I've always thought it was very cool that they also use some of the same tools as programming language theory! Wikipedia has more about this: http://en.wikipedia.org/wiki/Formal_semantics_%28linguistics...
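To sketch the idea (a toy Montague-style fragment of my own, not a real linguistic analysis): predicates are functions from entities to truth values, and determiners like "every" are higher-order functions over predicates.

    type E = String   -- entities
    type T = Bool     -- truth values

    domain :: [E]
    domain = ["ann", "bob", "cam"]

    student, sleeps :: E -> T
    student x = x `elem` ["ann", "bob"]
    sleeps  x = x `elem` ["ann"]

    -- Determiners take two predicates and yield a truth value
    every, some :: (E -> T) -> (E -> T) -> T
    every p q = all (\x -> not (p x) || q x) domain
    some  p q = any (\x -> p x && q x) domain

    -- "every student sleeps" = every student sleeps   (False here)
    -- "some student sleeps"  = some student sleeps    (True here)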


Two brief comments:

(1) The λ-calculus is better thought of as the MVP of SEQUENTIAL programming languages. It is not a good formalism for concurrency. The π-calculus is more suitable for concurrency, and, in a real sense, subsumes the λ-calculus (see R. Milner's "Functions as Processes" and its subsequent elaborations for details).

(2) Proofs = λ-terms works best for CONSTRUCTIVE logic. With classical logic where not-not-A = A, it kind of works, but not really well.


(1) I don't think that's fair to say. The lambda calculus is a way of expressing computations with little regard for how they are actually executed.

(2) True, it works better if you add continuations.
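(A sketch of the standard observation, using Haskell's continuation monad: callCC inhabits Peirce's law, a proposition that is classically but not constructively valid.)

    import Control.Monad.Cont (Cont, callCC)

    -- Peirce's law: ((A -> B) -> A) -> A. It has no constructive
    -- proof, but continuations give it a program.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC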


re 1) one of the primary limitations of the lambda calculus wrt concurrency is that it does not model time--there is no way to determine how long a computation takes. An example of a function the lambda calculus cannot define is f(x, y) where the fastest computation of x and y wins. You'll see that function pop up in a number of places in the Haskell world, where it is called "amb". This function is obviously important from a pragmatic POV, but it turns out to be important when doing formal work as well: see the parallel-or/full abstraction problem for PCF[0].
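(As an aside, here is a sketch of such a racing combinator in modern Haskell, using the async package; calling it amb here is my own labeling:)

    import Control.Concurrent.Async (race)

    -- Run both computations; whichever finishes first wins.
    -- Nothing like this is definable inside the pure lambda calculus,
    -- because reduction has no notion of elapsed time.
    amb :: IO a -> IO a -> IO a
    amb x y = either id id <$> race x y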

My personal favorite concurrency formalism is the Chemical Abstract Machine[1]. The γ-calculus is a neat extension that preserves the spirit of the lambda calculus, but also permits (purely non-deterministic) concurrency.

[0]: http://en.wikipedia.org/wiki/Fully_abstract#Abstraction

[1]: http://www.lix.polytechnique.fr/~fvalenci/papers/cham.pdf


Most process calculi (including the CHAM) don't model time. I think it's perfectly possible to consider computation without timing.

Adding timing is straightforward for any model of computation with an operational semantics. This addition generally changes the semantics dramatically (e.g. which equalities hold). There are many timed versions of process calculi, e.g. timed CCS, timed CSP, timed pi-calculus.


As to (1) I disagree, it's difficult to express general concurrent/parallel computation in lambda-calculus. You need to resort to encodings. That's why process calculi were invented.

Regarding (2), I'm afraid I also disagree. While continuations can express classical logic in some way, it's generally only possible to express certain cut elimination strategies (e.g. call-by-value, as in the Stewart-Ong lambda-mu-calculus). I don't think we know how to do all of classical logic yet. Moreover, adding jumps (which is what continuations are) to lambda-calculus results in an extremely complicated and hard to understand system (e.g. the reduction semantics of the lambda-mu-calculus is usually only handwaved). If you write the very same semantics in pi-calculus, it's trivial and extremely transparent (see "Control in the Pi-Calculus" by Honda et al). Finally, I think it's conceptually deeply confused to 'add' continuations to lambda-calculus. A much cleaner picture emerges when one understands that the call-return jumping discipline engendered by the function-calls of lambda-calculus is a special case of the general jumping that continuations can do. So really one should start with a continuation calculus, and can then see lambda-calculus as a strict sub-calculus of especially disciplined jumps (call-return with affine usage of the return jump).


w.r.t (1), it's interesting that you and the other replier thought that I meant LC was good for expressing concurrent processes. What I actually meant was that it was also bad at expressing sequential processes, because it doesn't have a single agreed upon operational semantics and it's too high level.


> Seeing proofs as programs really helped me come to terms (heh) with mathematical logic.

I suppose this underscores how some people think algebraically while others think analytically. I am an algebraic thinker. I like to break apart algorithms into a mathematical calculus to understand them.

Humorous: http://bentilly.blogspot.com/2010/08/analysis-vs-algebra-pre...


> implement it almost for free

What does this phrase mean?


I suspect what the parent author meant is that the lambda calculus is a very simple language. If a new language feature can be defined as a small extension to the lambda calculus, then creating a toy implementation should be easy because the new language is so small.

More importantly, since your new language is small, you can eyeball the implementation and be fairly confident that the implementation "matches" your formal definition (about which you might prove theorems such as type safety).

Contrast this with defining a new feature as an extension of C++, Haskell or Java. You'll invest a lot of time, and in the end might not even be very confident that your implementation matches your theory. So you might write a few hundred line program in your new programming language, get a wonky result, and not be sure if the formal definition is flawed or if the implementation is buggy.

The "implement almost for free" phrase takes on a different meaning if you decide to implement computer-checkable proofs about your language.


Excellent! Thank you for your explanation.

My initial suspicion was that the phrase somehow related to logarithmic efficiency -- which made no sense at all to me!


The importance of calculus is most easily demonstrated by applying it to physics problems. For example, if I tell you how a ball moves in time, and then ask you how fast it's moving after a certain number of seconds, then calculus will help you find the answer (take the derivative of the equation of motion and plug in the time).

What is the equivalent computer science problem that lambda calculus can help you solve? Challenge: pretend like you're Richard Feynman and avoid jargon, if at all possible.

EDIT: I find it quite curious that I got so badly down-voted (-3 and counting!) for simply asking for concrete examples of applicability to actual, concrete problems. I've always found that tools are best understood in the context of their use. Even an abstract concept is useful to speak about in this way - for example, complexity/Big O analysis helps us with capacity planning, comparing algorithms, and so on. It may be that lambda calculus helps us with, oh, decomposition of computation or something like that. But for all the digital ink I've read about it, it's always seemed like an academic form of name-dropping. Even the name is intimidating, right? Reminds me of terms like "Schrödinger's equation" or "canonical ensemble" from the good old days in physics class. But behind the intimidating names is just a tool for solving problems - and I have yet to see anyone demonstrate this for lambda calculus. Granted I haven't looked very hard!

It takes self-awareness to realize that you are enthralled with something without understanding it. The litmus test for this is the umbrage taken by someone who's asked a simple question about what their high-status concept is really used for. That's why I mentioned Feynman in particular, because of his wonderful reputation for having knowledge that was totally grounded in reality.


A century ago, mathematicians were busy proving theorems (for instance, about calculus), and some of them were trying to figure out if we could start with a mathematical/logical statement and just calculate the answer, "this is true" or "this is false". They were not thinking about electronic computers, and they weren't concerned about how long it would take; they just wanted to know if you could calculate the answer at all, using an algorithm and pen and paper, and eventually finishing with an answer. For example, if you have something like "10^23423 is bigger than 23454^10", there's no question about it: you can calculate the answer by using the definitions of integer numbers, product and exponentiation. But what about any mathematical or logical statement? That was the big question.

A re-statement of your example is that differential calculus gives you the definitions of limit and derivative, and it helps you with real world problems that are well modeled by differentiable functions. It also tells you that some functions are not differentiable.

Lambda calculus starts with a definition of what calculating is, and then it tells us that the answer to the big question is "no": some functions cannot be calculated. But it also tells us that whatever can be calculated can be done with a very, very simple programming language.
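To give a flavour of "very, very simple", here is the standard Church-numeral encoding, sketched in Haskell: numbers and addition built out of nothing but functions.

    -- A Church numeral n means "apply a function n times"
    type Church a = (a -> a) -> a -> a

    zero, one :: Church a
    zero = \_ z -> z
    one  = \s z -> s z

    suc :: Church a -> Church a
    suc n = \s z -> s (n s z)

    add :: Church a -> Church a -> Church a
    add m n = \s z -> m s (n s z)

    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0   -- toInt (add one (suc one)) == 3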


Thanks. Can you give me an example of using LC to prove a simple function can (or can't) be calculated? Is f(x){return 1/0} an example of a program that "can't be calculated" or is that just an error (does LC even have a concept of "error")? What about f(x){assert(false)}? Also, f(x){f(x)} clearly won't ever terminate; but how does LC "prove" this?


All these statements/calculations/proofs are near-trivial once you've defined an appropriate semantics for your calculus - probably the easiest route is via a small-step operational semantics - see, e.g., Types and Programming Languages.

There are many lambda calculi, ranging from the untyped lambda calculus with strict evaluation underlying Scheme, to the Calculus of Inductive Constructions, which provides an alternative to set theory as a foundation for mathematics and is the basis of the Coq theorem prover, which has formalized proofs of the Four-Color and Feit-Thompson theorems.


So a student asks a question about how physics helps describe the path of a thrown ball. You answer:

"All these physics problems are near trivial once you've laid out the right differential equations and solved for the equations of motion. Probably the easiest route is to use the Leibnitz notation to solve f=ma for various time-independent force functions.

"There are many formulations of (classical) mechanics, ranging from ordinary linear differential questions to the use of Lagrangians. The latter is an alternative foundation for mechanics which was useful in the evolution of both classical e&m (Maxwell's equations), quantum mechanics, and even statistical thermodynamics."

The problem that I have with LC is not that I think it's useless - I have too much respect for Alonzo Church to believe that. Heck, I've even read Gödel, Escher, Bach, and enjoyed large swathes of it! What gets me is that people run around singing the praises of the lambda calculus, but when you ask about what it's really good for, you get more formalism.

I'm beginning to suspect that the whole thing is an intricate practical joke, that lambda calculus is fundamentally so self-referencing it really doesn't have anything to do with anything, other than itself, which is everything.


You can think of the lambda calculus as the first general-purpose programming language. Logicians were using it to explore the theory of computation before the modern computer was even invented.

But, despite being a general-purpose programming language, it's not very practical to use as one. This is simply because the lambda calculus is so minimal, and we have more sophisticated and featureful programming languages to solve problems with. So while theoretically the LC can compute anything, programmers turn to other languages instead.

So what is the LC used for, if not for writing programs?

Today, it's used as a basis for research into programming languages. PL researchers will often take the untyped LC and extend it with some feature. This allows that feature to be explored in isolation, and provides a rigorous, well-understood platform for writing proofs. This is how we wound up with the various typed lambda calculi for exploring type theory (e.g., the simply typed LC, System F, etc.).
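(For instance, a standard exercise, sketched here in Haskell: pairs don't even need to be added as a primitive, because they're already definable inside the untyped LC.)

    -- Church-encoded pairs: a pair is a function awaiting a projector
    pair :: a -> b -> ((a -> b -> c) -> c)
    pair x y = \f -> f x y

    first :: ((a -> b -> a) -> a) -> a
    first p = p (\x _ -> x)

    second :: ((a -> b -> b) -> b) -> b
    second p = p (\_ y -> y)

    -- first (pair 1 2) == 1, second (pair 1 2) == 2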

GHC actually reduces Haskell into a typed lambda calculus known as System FC, also known as Core. It uses the Core representation to perform most of its optimizations. I suspect that having a library of proofs about System F available helped quite a bit with implementing the optimizations.

The reason students still learn and use the infinitesimal calculus is because it's still one of the best tools we have for certain problems. The reason students don't learn the lambda calculus is because we have better tools for many of its applications (pick just about any other general-purpose PL). But if you talk to students of type theory, they'll tell you that they did learn the LC and that they use it quite a bit in their research. I think someone already mentioned Pierce's Types and Programming Languages, which is a really good introduction to the topic, starting with the untyped lambda calculus and gradually building upon it. If you're genuinely curious about the stuff LC is used for, that's the place to start.

I reckon that for most programmers, the lambda calculus is nothing more than an intellectual curiosity, but for PL researchers it's still a useful tool -- a useful formalism for exploring and demonstrating properties of programming languages and type systems.


I'm sorry for my brief answer. I'll expand a bit (really, I'm inlining part of Pierce's book):

As you probably know, there are two major domains where lambda calculi appear: computability (which I know next to nothing about) and formal methods/type theory/PLs. The people talking about "showing functions can't be computed" are referencing the former. From the formal methods view, a calculus really consists of a syntax (the set of allowable terms; this is like the state space) plus a dynamic semantics (which is like the dynamics), and optionally a static semantics (e.g., typing rules). I.e., we have the following analogy (which holds best for small-step semantics; see below) between lambda calculus and physics:

    semantics : programs :: differential equations : physical states.
A lambda calculus is a calculus for programs with first-class functions :)

(While ML is basically a compiler for a lambda calculus, you can even define calculi to prove things about Java if you wanted.)

Now, to answer your questions. Let's say our set of terms (our syntax) is

    term := x         -- var
          | \x . t    -- lambda abstraction
          | t1 t2     -- application
          | n         -- natural number
          | div n1 n2
and let's call a term a value if it's "fully evaluated" according to our not-yet-existent semantics; i.e., either a number or a lambda.

We clearly expect the function defined syntactically by

   f := \x . div 1 0
to diverge when applied to any argument, so any semantics for our calculus should capture that somehow.

A denotational semantics would define a semantic function S[[ ]] taking our function into the mathematical (partial) function x |-> 1/0. We would also require that S is a homomorphism, so S[[ f x ]] = S[[ f ]](S[[ x ]]), which indeed is undefined in the usual mathematical sense.

A big-step operational semantics is similar to a small-step operational semantics and so I won't describe it except to say that it rather resembles an implementation of an eval/apply Scheme interpreter (but as a purely mathematical relation on terms, and usually using the fundamental LC idea of substitution instead of an "environment"). A big-step semantics often suppresses both nontermination and "bad" states arising from nondeterministic rules (think concurrency).

A small-step operational semantics is closest to the notion of a dynamics in physics (especially in the discrete case of difference equations). Let's define one. We have choices for what "div n 0" should be: it could be some arbitrary value of our choosing (or we could extend div with an extra argument to give as the answer in this case), we could "get stuck" (i.e., not allow any transitions out of such a state), or we could add an error state to the syntax and evaluate div n 0 to error (this generalizes pretty easily to try/catch, assert(), ...). The semantics are (technically we are defining a relation called -> on terms):

If mathematical division gives an answer, so does our division program:

       n/m = p
    ------------
    div n m -> p
(I've evilly conflated mathematical numbers with their syntactic representation in our language.)

We also need more reduction rules:

           t -> t'
    ---------------------
    div t t2 -> div t' t2

           t2 -> t2'
    ---------------------
    div v t2 -> div v t2'
If we had added an error term, we'd also add the following rule:

    ----------------
    div n 0 -> error
(Exercise: what happens if we wrote "div t 0", as I did in an earlier edit?)

Application proceeds by reducing the function first (we'll either end up with a lambda or a number, which we could avoid by typing our calculus):

       t1 -> t1'
    ---------------
    t1 t2 -> t1' t2
There are more rules for substitution and for reducing the argument (the usual choice gives strict semantics, but other choices are possible).

Then we prove certain things about the language using induction, usually induction on derivations (just a special case of well-founded induction). We can't prove much about an untyped calculus, though.

Now your first two questions should be clear, e.g.:

       f v
    -> div 1 0   -- by substitution rule
    stuck! (i.e., no rule applies)
If we'd added an error rule, we'd get instead

    -> error
There's a bit more work to show f t gets stuck (gives error) for all terms t - maybe an induction or something, right? But wait! That's not actually true! What if t is a nonterminating function call? What about f x (x is a variable)? Then we have a free variable in the program, and again get stuck!

For your last question, you could add recursion either by hand (look up "fix") or with the Y combinator, then show that (again let's look at values first, since an arbitrary term might error in a different way):

    f v -> .... -> f v
in only a few steps. Then, we'd use our proof that "->" is deterministic (i.e., a partial function from terms to terms) to conclude that f v never reduces to a value, since otherwise one of the terms in the loop above would be a value also, but none of them are. qed

There's also the formal/mechanized logic/proving aspect (omitted for brevity). It's nice that it's easy to study lambda calculi with theorem provers since, as you can see, it's easy to forget a rule or make a small mistake in a proof.

(I'm being quite imprecise here: I haven't shown what rule induction is or why it's valid, or how the rule definitions work, or ... )
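(If it helps to see the above run, here is a rough Haskell transcription of that small-step semantics. It's my sketch: substitution is the naive version, so it's only correct for closed terms without shadowing, and div n 0 simply gets stuck, as in the "no error rule" variant.)

    data Term
      = Var String
      | Lam String Term
      | App Term Term
      | Num Integer
      | Div Term Term
      deriving Show

    isValue :: Term -> Bool
    isValue (Lam _ _) = True
    isValue (Num _)   = True
    isValue _         = False

    -- Naive substitution: fine for closed terms without shadowing
    subst :: String -> Term -> Term -> Term
    subst x s t = case t of
      Var y   -> if x == y then s else t
      Lam y b -> if x == y then t else Lam y (subst x s b)
      App a b -> App (subst x s a) (subst x s b)
      Num n   -> Num n
      Div a b -> Div (subst x s a) (subst x s b)

    -- One small step; Nothing means no rule applies (value or stuck)
    step :: Term -> Maybe Term
    step (Div (Num n) (Num m))
      | m /= 0          = Just (Num (n `div` m))
      | otherwise       = Nothing                  -- div n 0: stuck
    step (Div a b)
      | not (isValue a) = (\a' -> Div a' b) <$> step a
      | otherwise       = Div a <$> step b
    step (App (Lam x b) v)
      | isValue v       = Just (subst x v b)       -- beta step
    step (App a b)
      | not (isValue a) = (\a' -> App a' b) <$> step a
      | otherwise       = App a <$> step b
    step _              = Nothing

    -- Iterate to a normal form (loops forever on divergent terms)
    eval :: Term -> Term
    eval t = maybe t eval (step t)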


Lambda calculus, like Turing Machines, was created as part of an investigation into the fundamentals of mathematics, and in particular into what is meant by "computation" - so it was never really intended as a practical tool for solving real-world problems - which was arguably very much the motivation for Newton's creation of what is generally called "calculus".

Edit: I don't think javajosh's question is unreasonable - not sure why it is getting downvoted.


When you say "real-world problems" are you referring to the physical world? I don't expect that. I'd settle for using it to help me solve a mathematical problem. So far, all I see is a lot of hand-waving and not a lot of "this is a tool that can be applied to this sort of problem".


A quick Google should point you in the direction of what people use lambda calculus for:

http://en.wikipedia.org/wiki/Lambda_calculus

One fascinating area for the use of lambda calculus is as a conceptual basis for the implementation of functional programming languages - indeed you could argue that a language is functional iff it can be mapped to the lambda calculus.

Edit: Although it is rather old (like me!) I have fond memories of "The Implementation of Functional Programming Languages" by Simon Peyton Jones:

http://research.microsoft.com/en-us/um/people/simonpj/papers... [PDF]

Another very cool thing is that you can translate expressions in the lambda calculus to SK[I] combinators - which means you can actually implement real programs (including recursion - e.g. using the Y-combinator) as two extremely simple functions - not very efficiently, mind you... :-)

http://en.wikipedia.org/wiki/SKI_combinator_calculus
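As a quick sketch of how small that basis is (ordinary Haskell functions, my example):

    -- The S and K combinators; I is definable from them as S K K
    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s x y z = x z (y z)

    k :: a -> b -> a
    k x _ = x

    i :: a -> a
    i = s k k   -- i 42 == 42

Every closed lambda term can be translated (by "bracket abstraction") into a tree built from just these two combinators, which is what makes such a tiny implementation possible.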


> indeed you could argue that a language is functional iff it can be mapped to the lambda calculus.

Lambda calculus is a higher-order, applicative functional language. There are other kinds of functional language.

For example, kappa calculus is first-order. Joy is a pure functional programming language which is based around composition instead of application.


Thanks - it was quite a while ago that I was taught about this kind of stuff, and for postgraduate study I ended up going down an AI/engineering route rather than programming languages - I now wish I had done the latter given the resurgence of interest in functional programming :-)


Lambda calculus is a simple model of substituting values into variables. For anyone studying the behaviour of such a system (e.g. programming languages or mathematical proofs), LC is a nice, simple model to work with.

A physical-world analogy would be the billiard-ball model of atoms, which works well to describe the behaviour of gases.


You're thinking about it from the wrong perspective: the lambda calculus is essentially the equivalent of assembly language for functional programming. The problems you can solve by building abstractions on top of it are infinite. Its utility comes from all the properties we can deduce about it when building higher-level abstractions, not from being immediately useful to people programming at a higher level, any more than x86 assembly is.


Interesting point. Personally, in my experience, lambda calculus is overly held up as useful to learn. While simple models are very useful, it has a geek culture/hierarchy aspect too. (http://en.wikipedia.org/wiki/Knights_of_the_Lambda_Calculus)

In explaining things, I find it very important to deflate intimidating-sounding jargon; and discover for oneself if something really is important, or whether it has more of a dysfunctional social purpose like mystery-of-the-priesthood for initiates.


The term calculus generally refers to a system of calculation. I don't get why you're expecting lambda calculations to solve anything.


How would you write a program which computes the speed of the ball in your example, and how would you ensure or prove that your program is correct? That's what lambda calculus and all its derived works give you. Others have described it more elegantly here, so please refer to their posts, but if you want references, check out the Curry-Howard isomorphism, the Hindley-Milner type system, System F, or what Coq, Agda and similar languages bring to the table.

You could also look at the answers given in the linked stackexchange page, of course.


Your answer is useless to the commenter you are responding to. If he would 'get' lambda calculus by studying the Curry-Howard isomorphism or the Hindley-Milner type system, he already would have. It is like handing someone who asks about rocket science a calculus book or a rocket and saying: just study how it works. For many people that is completely backwards.

First you need to teach them what the lambda calculus fundamentally is and why it is useful in small bits. Then you can explain to them how powerful it gets, by explaining how the usefulness is extended by the Curry-Howard isomorphism and how you can build the Hindley-Milner type system with it.

The commenter is an 'engineer' and you're a 'theoretician'. As a fellow 'engineer', I completely understand his exasperation with all this pointing to either nice theoretical results or the large scale end products of the theory. That's just fundamentally not how we learn.


I am not explaining how, I am merely pointing out that there is a "how to compute" question, and that lambda calculus and type theory help address that question formally. That point tries to address the following question by OP:

> What is the equivalent computer science problem that lambda calculus can help you solve?

That may sound very obvious in retrospect - surely OP knew already what I was saying, and he actually wanted explanations on the how itself. Perhaps that's why it seems useless: I admit that I tend to stick to strict semantics on technical questions, and often completely miss the actual intent. Sorry about that.


FWIW I think it's an insightful question.

Other replies to this thread do a pretty good job at pointing out the applications of lambda calculus. I want to contribute this excerpt from one of the original papers on the topic:

"In this paper we present a set of postulates for the foundation of formal logic, in which we avoid use of the free, or real, variable... Our reason for avoiding use of the free variable is that we require that every combination of symbols belonging to our system, if it represents a proposition at all, shall represent a particular proposition, unambiguously, and without the addition of verbal explanations."

The motivation was a setting for doing "precise and unambiguous" mathematics. Mathematical results are used pervasively today, so the overall vision is obviously important.

Whereas The Calculus was developed in response to the difficulty of calculating physical quantities, such as those about celestial bodies, the lambda calculus had a more foundational motivation.

Nonetheless, lambda calculus is tremendously useful for solving "real world" problems. Most of these problems arise where doing "precise mathematics" is important but tedious, difficult and/or time-consuming; or in the related task of defining computations correctly.


I'm not sure what you mean by "equivalent", but historically, the "computer science problem" that lambda calculus has been used to address is understanding the meaning of expressions in programming languages. The first paper to introduce lambda calculus in relation to computers was Peter Landin's "The mechanical evaluation of expressions" (1964). Its specific goal was to use the lambda calculus to model the facilities of other programming languages in use at the time. Landin continued this investigation with two more papers in the series: "A Correspondence Between ALGOL 60 and Church's Lambda-Notation" (1965) and "The Next 700 Programming Languages" (1966). Because of the LC's basis in mathematics and logic, it provided a useful way to define the semantics of programming languages. It continued in this role over the next few decades. One of many high points in this evolution was the use of the LC as the basis of Scheme, "An Interpreter for Extended Lambda Calculus". Among many other contributions, the "Lambda Papers" investigated other models of computation (e.g., actors).

So the lambda calculus has been used as a consistent and primitive basis for understanding computation itself and how it is expressed in programming languages. I suppose it is more "equivalent" to Newton's 2nd Law than any particular way to solve a problem. It's perhaps worth remembering that Principia used geometry, not what we know of as calculus, to clarify mechanics.


Lambda calculus is a rigorous mathematical way to define what "computation" even means. You need something -like- Lambda calculus to even begin reasoning about computation mathematically.


The lambda calculus is an intellectual framework for the description of computation and logic, just like Newton's calculus forms the basis for a description of physical processes.


A concrete example: say we use your own.

Say we do have a function f(t) that tells us the location of the ball with respect to time. Something like (I'll use JavaScript as my implementation language, as it is capable and accessible):

    function location_x(t){ return 2 * t; }
    function location_y(t){ return 3 * t; }
In calculus, the derivative of f(t) with respect to t is the limit, as dt approaches 0, of "(f(t + dt) - f(t)) / dt"; for small dt, that quotient gives a good approximation. Since you imply you know calculus, I'll not explain the jargon there.

Well, f(t) could be anything in that description, it doesn't necessarily have to be a function describing the position of a ball. So why don't we make f a variable? Why don't we write a general-purpose derivation function that can compute the derivative of any function given to it?

To do this, we need the ability to treat functions as "first-class citizens", as things we can manipulate just as much as we do primitive integers and decimal numbers.

That includes being able to refer to a function without evaluating it (i.e. storing it as a variable, able to pass it around, with no special knowledge of its purpose, just as we can pass integers to a function and it doesn't matter that the integer is 10 or 173 or -83491, we pass it to our function in the same way every time). We also need to be able to create functions on the fly (just as we might create integers other than the ones we typed in literally through arithmetic operations). We finally need the ability to capture the values of any variable defined outside of our functions at the time the function was defined. This is called lexical scope, because it follows how we read the code, and functions that do this are called closures, because they "close over" the referenced values. Imagine the function hugging out further and further to accept everyone in. It's important that the values get captured and not the references; we'll see why later.

Because of these three things, I could have also written my location functions as:

    var location_x = function (t){ return 2 * t; }
    var location_y = function (t){ return 3 * t; } 
How do you write the literal value of an integer? "2". How do you write the literal value of a function? In JavaScript it's "function(args){statements}". A named function--as you are used to them in procedural languages--is really just a function value stored in a variable that has a name. Just like a named integer is just a literal integer stored in a variable that has a name. We could even have an array of functions!

    var location = [function (t){ return 2 * t; },
                    function (t){ return 3 * t; }];
Okay, so what does our derive function look like:

    function derive(f){ // f is an argument that accepts a function as a parameter
        return function(t, dt){ // we will return a new function
            return (f(t + dt) - f(t)) / dt; // the bare definition of a derivative! Nothing fancy!
        };
    }
This is where "closing over" gets important. I want the function(t, dt)... to have access to the value of f when it was created. A simpler example, say I wanted to make an array of functions:

    var arr = [];
    for(var i = 0; i < 100; ++i)
        arr.push(function(t){ return i * t; });
What do you expect arr[50](3) to return? If you said "150", you've been paying attention. Closing over the value of "i" allows us to do natural things with our code.

Unfortunately, it's incorrect. JavaScript does unexpected things with scope, so it can break our natural understanding of it. Every one of these 100 functions will return 100 * t, because 100 was the final value of i that caused the for loop to break, and the i variable is still in scope after the for loop finishes. It's one of the ways in which JavaScript is fundamentally broken. In JavaScript, this example has to be fudged to work right:

    var arr = [];
    for(var i = 0; i < 100; ++i)
        arr.push((function(_i){ // an anonymous function...
            return function(t){ return _i * t; }
        })(i)); // ...that we call right away
We are used to the for loop introducing new scope, but in JavaScript it does not. We have to explicitly introduce the scope. It's bad, but thankfully the ability to treat functions as first-class values allows us to fix it!

Moving on, let's apply our derive function:

    var velocity_x = derive(location_x);
    var velocity_y = derive(location_y);
    // evaluate a few values
    var x = location_x(10);  // x == 20
    var dx = velocity_x(10, 0.0000001); // 1.999999987845058 which, for an approximation, is very close to the correct value of 2
"Oh, but 2 times t is easy to derive." You might say. Well, what if our location function described the ball moving in an ellipse?

    location_x = function (t){ return 2 * Math.cos(t); }
    location_y = function (t){ return 3 * Math.sin(t); }
Then the derivative is exactly the same:

    velocity_x = derive(location_x);
    velocity_y = derive(location_y);
And we will get reasonably good approximations for increasingly small values of dt.

Now, try doing that in C. We live in blessed times, where most of the popular languages have enough of the basic building blocks of functional programming to support the lambda calculus. This didn't used to be the case.


Your answer doesn't tell him anything about the lambda calculus. It tells him a lot about functional programming.

Is your claim that functional programming is the lambda calculus? Is dependent on the lambda calculus? Is an implementation of the lambda calculus? Something else?

You seem to have some implicit assumptions or claims here. Stating them explicitly could let the GP know what it is that you think you're demonstrating.


For anyone that wants a readable overview of the λ-calculus, I recommend reading the second chapter in Simon Peyton Jones' book:

http://research.microsoft.com/en-us/um/people/simonpj/papers...


see also https://cstheory.stackexchange.com/questions/3650/historical...

It's interesting that Turing never rigorously proved that Turing Machines were a model of computation; the argument was only an intuitive appeal. He actually apologised for this when introducing them, in his Entscheidungsproblem paper http://www.turingarchive.org/browse.php/...

Also curious is that Church wrote to him, saying that he found Turing's model more intuitively convincing.

Intuitive appeal isn't everything of course.


The reason why there was no rigorous proof is probably in part that the result is not controversial at all.


This is probably going to sound really dumb, but I have utterly no formal computer science background. Lambda calculus in the languages where I have seen it (Scheme and Python) simply seems like a way to express a function as a one-liner. Surely, I'm missing something important, but I can't figure out what.


That's just the lambda abstraction, not the lambda calculus.

The idea is that a lambda abstraction, in a math sense, is much less general than a single argument function. 'sin θ' is a single argument function that's defined by some business about the ratio of sides of a right triangle with angle θ, but it's not a lambda abstraction because all they're allowed to do is substitute a variable into some expression. 'f(x) = x² - x' can be directly expressed as a lambda abstraction (f = λ x. x² - x)

Once you guarantee that the function takes the specific form, you can manipulate it in ways you can't manipulate the general-case function, and it turns out those manipulations give you enough power to define almost anything else you'd want to do.

As it turns out, (IIRC) Python makes lambdas essentially opaque objects, and doesn't let you peek into them any more than you can a general-case function. This means you can't do any lambda-calculus on them, even simple stuff like determining if they are exactly the same expression.


> Once you guarantee that the function takes the specific form, you can manipulate it in ways you can't manipulate the general-case function, and it turns out those manipulations give you enough power to define almost anything else you'd want to do.

In λ calculus, the only way one term can inspect another is by applying it. You don't get an intensional view of a λ abstraction. The other language features that get built on top of this (conditionals, tuples, etc.) only rely on functions' extensional features.
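For example, here's the standard Church-boolean encoding, sketched in Haskell: booleans and conditionals fall out of application alone.

    -- "true" picks its first argument, "false" its second
    true, false :: a -> a -> a
    true  x _ = x
    false _ y = y

    -- A conditional is then just application
    ifThenElse :: (a -> a -> a) -> a -> a -> a
    ifThenElse b t e = b t e   -- ifThenElse true 1 2 == 1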


Viewed that way, as expressing one particular (kind of tiny) feature in a PL, the lambda calculus is pretty boring. What you want to do is take note that the lambda calculus, this single, tiny feature in a PL, is powerful enough to simulate every other feature in the PL.

The other nice thing is that if you do base your whole language on LC then you get really excellent variable scoping without further questions. That's something that a lot of languages still struggle with (js).


The Python usage isn't very instructive:

http://python-history.blogspot.com/2009/04/origins-of-python...

That's the Python language designer saying he wasn't a huge fan of the name when it was introduced, because it didn't quite live up to the expectations it set.

The usage of 'lambda' to construct anonymous functions is just a reference to Lambda calculus, the actual thing is a formal system of operating on those functions.


Perhaps you refer to lambdas in Python.

Those are not the same thing as Lambda calculus -- it's just a bad naming decision for a Python language feature (short anonymous function construct).


Yes, you miss something important: that this can be written down in one line is a great breakthrough. You also miss that languages like Scheme and Python are the way they are because they copy the lambda-calculus. They are (in parts) implementations of the lambda-calculus. If you look at the history of programming languages this is very clear.




