Functional Programming Is Hard, That's Why It's Good (fayr.am)
177 points by trptcolin on Aug 20, 2011 | 102 comments



FP is becoming the new OOP. People who don't understand its original meaning are misinterpreting it and incorrectly expounding its usefulness. Newcomers are not grasping how it fits into the bigger picture. Be cautious.


You know, I've learned something today!

When some people talk about functional programming, they talk about monads and iterating with recursion and lambda calculus.

Other people talking about functional programming mean using data-oriented abstractions, where functions operate on data in well-defined ways, to avoid state and interlocked dependencies[3]. I've been calling the latter a "functional style", but clearly there is ambiguity, and people get turned off and say things like "we're using an OO language, why would I want FP?"

In Avdi Grimm's talk "Confident Code"[2] he calls this style "narrative structure", compared to "haphazard structure"[4]. I like that; "functional" evokes all sorts of emotion about being hard to understand and unnecessary. "Narrative" or "confident" or "data-oriented" are more articulate, and don't evoke flame wars.

Avdi's example (code screenshot) is the most articulate example of this I've yet seen. https://lh4.googleusercontent.com/-Cs-muD17ato/Tk6b1Rb9MwI/A...

And it's in Ruby. Nothing hard here, but it meets a functional style per John D Cook's definition[1]: "OO makes code understandable by encapsulating moving parts. FP makes code understandable by minimizing moving parts." "Functional in the small and OO in the large" seems a good path. The Scala community seems to get this, and it seems to me Scala is going mainstream much more quickly than any of the pure functional languages.

[1] http://www.johndcook.com/blog/2010/11/03/object-oriented-vs-... [2] http://avdi.org/talks/confident-code-2010-01-12/confident-co... [3] http://www.dustingetz.com/how-to-become-an-expert-swegr [4] http://www.dustingetz.com/confident-code-vs-timid-code


"functional in the small and OO in the large" seems a good path.

That's what I like about Erlang as well. Once you recognize processes as your objects, this philosophy just flows out of the language.


But it is taking a while for Erlang to come to terms with it... see Joe Armstrong's "Why OO sucks" (http://www.sics.se/~joe/bluetail/vol1/v1_oo.html)


I think you miss the difference between OO and FP.

It is paradoxical. Fifteen years ago, OO was this giant, over-sold piece of junk that was going to magically make random code fit together. But it does (and did) have merits for ginormous software operations employing many, many mediocre programmers. It contains a simple metaphor and a simple device for turning totally bad code into slightly less bad code, for allowing completely unrelated stuff to relate together in a half-assed way. OO isn't about giving programmers more power but about allowing the power of one programmer to interface with that of another (perhaps even an inferior programmer).

FP is about power. You could annihilate whole galaxies with the power of a third-order functional if your mind were strong enough to wrap itself around such a thing - from another link currently up: "Lisp's purpose in the programming language galaxy is to assist our most gifted fellow humans in thinking previously impossible thoughts, remember?" Essentially, neither Lisp nor FP nor logic programming exists to collect the multitude of junk that enterprises have and make it more sensible.

==> I am speaking "jocularly" here. I actually think collecting the junk of enterprises is a worthy activity. Large-scale programming in a typical enterprise (not Google) requires you to make allowances for a multitude of human foibles, and this can expand you as a human being and not just as a genius.

Just consider: there are geniuses who just produce "amazing stuff" for others to chase after and try to understand, there are geniuses who explain to other geniuses, and there are geniuses who explain to normal people. All have their place.


"OOP ... contains a simple metaphor and simple device for turning totally bad code into slightly less bad code, for allowing completely unrelated stuff to relate together in a half-assed way."

Haha, nice. I don't think it's quite that cynical; hybrid OOP/FP languages like Scala seem to have a better industry record than pure functional languages, at least at the internet companies we hear all about.

OO isn't inherently bad, and FP isn't inherently good, but knowing how to think in FP will certainly make us better engineers, and the "functional in the small, object-oriented in the large" approach seems to be gaining acceptance in elite circles.


Just for the record.

The type system for OOP took some 25 years to develop: http://lucacardelli.name/Talks/2007-08-02%20An%20Accidental%...

Lambda calculus was introduced in 1936, and in 1948 it was given a simple type system (compare Fortran, which was introduced in 1958, ten years later).

A year after Simula-67 was introduced, the logician Hindley presented a polymorphic type system with type inference for the lambda calculus, which was later rediscovered by Milner and incorporated into ML in 1978.

Simula-67 didn't have proper parametric polymorphism. I know of no such examples before ML.

Functional programming is based on logic and could be a basis for logic on its own. Consider the Type Theory of Per Martin-Löf, or an even simpler extension of the polymorphic type system for lambda calculus: Haskell's type classes, where a type represents a theorem and a function's text serves as its proof.
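A minimal propositions-as-types sketch of that idea, in plain Haskell (an illustrative example of mine, not from the original comment): read each type as a theorem and its definition as the proof.

  -- "A and B implies A": the only total way to produce an 'a' is to use the one given.
  fstProof :: (a, b) -> a
  fstProof (x, _) = x

  -- "If A implies B and B implies C, then A implies C."
  transProof :: (a -> b) -> (b -> c) -> (a -> c)
  transProof f g = g . f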

OOP, on the other hand, does not represent some simple kind of logic. I emphasize simple, because Luca Cardelli finally found some logic out there.

So I think that OOP is inherently less useful than FP.


Unfortunately, I don't think type theory has a whole lot to do with programming or software engineering. The most practical languages (as measured by popularity, which seems to be the best way to do it over a period of decades) do not have sound type systems, nor even particularly interesting ones theoretically.


You should reconsider your stance.

Types allow requirements changes to be propagated through the whole program. State-of-the-art type systems even allow accounting of effects, like preventing deadlocks by ensuring a specific order of locking: http://www.cs.st-andrews.ac.uk/~eb/writings/fi-cbc.pdf

This is just another tool in the toolbox. You can use it. Or you can throw it away as "no one seems to use it".


I never said "no one seems to use it". What I specifically said was that I don't think type theory has a lot to do with programming: i.e., that the majority of work put into type theory in academic contexts does not, on balance, do much to improve the state of programmers writing software, or the quality of our software engineering.

"Types allow requirements changes to be propagated through all program" - what does this even mean, and why is it a justification for powerful types? I designed a system once that let you describe the program at a high level, using a DSL. I've outlined it several times on this site here (here's one link: http://news.ycombinator.com/item?id=2192322). Now that system allowed requirements changes to be propagated throughout the whole program, because the level of description of the program was dense enough that it could just about be considered detailed requirements (i.e. it was powerful - see below). It worked because it was focused on a reasonably precise niche. It did enough checks during the compilation process that you could be fairly sure that if it didn't find any errors, it would work without bugs (i.e. it was good - again, below). But the level of type system it had was no more advanced than Pascal - and I mean early Pascal, no objects etc. If you can get the same effects without a very abstract and powerful type system, isn't that better, because it reduces the overall expressiveness of the language - i.e. reducing the solution space that needs to be searched? Form can be liberating.

Preventing deadlocks by ensuring a specific order of locking. Fantastic; good work. How many people are using it? Which large shipping systems are using it? What was the prior rate of deadlocks for similar systems at that level of complexity, how long did those deadlocks take to be fixed, and how much time and money was saved by using this new technique? Because frankly, deadlocks are trivial to debug; the relevant threads are stopped, and after you attach a debugger the stack traces tell you exactly which locks were taken and in which order. Debugging races is a wee bit harder.

I have TAPL by Pierce right here on my bookshelf. Here's a typical sentence which illustrates my problem: "Perhaps the best illustration of the power of recursive types is the fact that we can embed the whole untyped lambda-calculus - in a well-typed way - into a statically typed language with recursive types." (p. 273). It seems to me to take for granted that a statically typed language is good[1], and that a good measure of power for a statically typed language is how expressive it is (i.e. what shapes of programs it can express, not e.g. how syntactically light it can be in doing so). And this is probably right and good, for type systems. But I have a different perspective on "good" and "power" in a programming language (as distinct from a type system). A good language is one whose target audience finds it easy to express their intent in a way the computer can understand. A powerful language is one in which complicated effects can be expressed with succinctness. And this is the core of it: type theorists spend entirely too much time on theoretical benefits, and far too little time on whether these benefits are worthwhile. Perhaps that's not their job, you say: and I agree with you to the extent that you also agree, then, that much of their work is irrelevant because of that.

[1] I happen to like statically typed programming languages, but I understand why some people don't. I wrote an extended reply on this topic to Paul Biggar some time ago, again on this site: http://news.ycombinator.com/item?id=1110653


You are right that many people don't need the results of type theory for their work. But that mainly reflects the type of work you do. For example, many people think programming does not need math, but if you are writing physics engines, graphics engines, or doing machine learning, then you need some basic level of mathematical sophistication. Similarly, if your work is related to automatic verification or code-to-code transformation, then knowing type theory will save you a lot of wasted effort reinventing pitfalls. Type theory gives a foundation and a "why" basis so that intuition can be augmented beyond rules of thumb. Language-oriented programming is one area that would benefit from such knowledge. Consider also that the future is not going to be dominated by one platform; powerful code transformation tools will be a strategic advantage. Note that my emphasis is not on the strength of the type system, just its consistency and ability to augment human reasoning.

Software engineering is the only field of engineering whose practitioners do not work from basics grounded in math. Electrical engineers have Maxwell, Ohm, Gauss, etc.; mechanical engineers have dynamics; civil engineers have continuum mechanics; and so on. None of these professions, you will note, came after the math. The math came from studying and generalizing over them, and then feeding the results back, providing a net gain for both. This leads me to believe we are in the early stages yet. We have the catapult without trigonometry. We do not have enough understanding to build stuff to withstand earthquakes. You say that most languages don't have well-founded types, but you fail to note that many of those same projects often have massive teams, massive code bases, fail often, run over time, and are insecure.

But all that said, programming is different, isn't it? You can get instant feedback and build something that you think is good because it fits all the white swans you put inside it. So I think before we have a foundation we will need tools that are built from, and require, a basic understanding of category theory and types. These can then be used to quickly iterate tests and do so with strong guarantees, without having to write annoyingly tedious constructive proofs. Genetic programming is stupid, but what if it could reason about types and try hypotheses at a meta level as well? Or what if tools such as http://lambda-the-ultimate.org/node/1178 were developed into the typical programmer's toolchain? But yes, complex types without tools are a hard sell.


Code transformation has historically focused on syntax but fallen down on semantics. The devil is in the fine details; C# might look similar enough to Java, superficially, that you might create a code transformation from one to the other, but you end up falling down in really thorny areas like the order of class initializers and finalization semantics, not to mention differences in the scope and substance of libraries, standard and otherwise. When your code is pure and functional, you can make much stronger statements about it. But I think it's still an open question as to whether functional code can be made practical in the large.

"You say that most languages don't have well founded types but you fail to note that many of those same projects often have massive teams, massive code bases, fail often, run over time and are insecure" - are you suggesting that correlation is causation? Of course you aren't. But you might be guilty of thinking that something must be done, powerful type systems are something that seem good, therefore they should be used. And it's precisely this kind of implication that I'm skeptical of. When we start seeing substantial bodies of industrial software using advanced type systems rather than incorporating more limited, less orthogonal and less general subsets, I'll dial back my skepticism; but I'm willing to bet, at this point, that if that does happen, we won't have seen the end of massive software failures. I think that's a social problem, not a technical one. Our reach forever exceeds our grasp.


As I mentioned in my earlier post, I do not think it is the power of the type system that is key, but its consistency and the accompanying tools. I am not saying programming in Coq or dependent types is the answer. I am asking: what if programmers had access to a more polished toolkit for creating domain-specific versions of Djinn that would guide them or help lay out some basics, knowing that the code was derived from sound axioms? Other engineers can quickly prototype the validity of their ideas by mechanically running through some equations; why not software engineers?

Already I use types in Haskell and F# to help me figure out how to lay out the code - I write some basic code where I know which function it will feed into, and I don't have to think much as I try to get the code to fit the types. Nothing fancy, but still highly useful.
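As a rough illustration of that workflow (my own sketch, with made-up names, not the commenter's code): pin down the signatures first, then fill in bodies until everything fits.

  import Data.List (nub, sortOn)

  data Sale = Sale { region :: String, amount :: Double }

  -- Fixing these signatures up front constrains how the pieces must compose.
  totalByRegion :: [Sale] -> [(String, Double)]
  topRegion     :: [Sale] -> Maybe String

  totalByRegion sales =
    [ (r, sum [ amount s | s <- sales, region s == r ])
    | r <- nub (map region sales) ]

  topRegion = fmap fst . safeHead . sortOn (negate . snd) . totalByRegion
    where safeHead []      = Nothing
          safeHead (x : _) = Just x

Writing the two signatures before either body exists already tells you the shape of the gap you have to fill.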

As for code transformation, I don't mean generic stuff like Visual Basic to Java; I mean having a team able to write the tools themselves, with full awareness of the requisite semantic mappings. Or, even better, creating a DSL for the domain and then having that map down to appropriate platform-specific code. But it need not be traditional development; you could extend it to machine learning - say you are doing genetic programming to fit data; using types to get some level of axiomatic reasoning, rather than random crossover, may be fruitful. There are all sorts of uses where applying basic - not fancy - theory is helpful.


"I don't think type theory has a whole lot to do with programming or software engineering. The most practical languages ... do not have sound type systems"

Well, I mean most software isn't understood; using Linus's metaphor, most software works due to "years and years of testing on thousands of machines", not due to "thinking and analyzing"[1]. And, I think, teams that are capable of "think and analyse" can out-compete teams that aren't.[2]

[1] http://www.dustingetz.com/linus-think-and-analyze-motherfuck... [2] http://www.dustingetz.com/re-seth-godin-how-do-you-know-when...


I think this more relates to mindfulness as you work through a programming problem. I know that when I am very tired, I can sometimes succumb to the "random walk" approach to debugging, where my reasoning takes a rest and I start coming up with theories as to why the program isn't working, and start changing little things to see if they fix it. If it was graphical work, it might be changing + to -; if it was one of my current bugs right at this minute (relating to dynamic linking on OS X), it would be adding an extra indirection to a pointer lookup. This "approach" is sometimes faster than the mindful "think and analyse" way, but only by chance, and only when your approach was roughly correct to begin with. And I agree that the "think and analyse" technique is better; rest is usually better than the random poking, which can also introduce a bug that bites you a few weeks later.

But "thinking and analyzing" I think also applies to my problem with type theorists! They spend a lot of time coming up with new ways to use various formalisms to capture various intents of the programmer (like the deadlocks and accounting of effects in an uncle post), but they don't spend much time doing science: the kind of science that happens when you have a control group. They've got their heads down at too low a level.

That's a bit of an attack on the whole edifice of computer "science", but I think it's long overdue. I don't think we're going to find any magic formalism that revolutionizes the world, making writing programs of arbitrary complexity trivial for schoolchildren. I think we need to spend more time looking at actual people, their strengths and weaknesses, and crafting our formalisms around those - and that work is only honest if it actually measures effects on average joes doing work, not rework of existing programs by talented graduate students.


"Unfortunately, I don't think type theory has a whole lot to do with programming or software engineering. The most practical languages (as measured by popularity, which seems to be the best way to do it over a period of decades) do not have sound type systems, nor even particularly interesting ones theoretically."

That is hilarious! I just showed this to my black, British cat, Mr. Fluffer Wickbidget, III, and he spit out the milk which he had been lapping from the palm of my hand.

You should sit down with the brilliant Hongwei Xi, the creator of the amazing ATS programming language, and see what he thinks of your jocular comment. If you're lucky, he might explain to you the reality of his Applied Type System, which not only provides for a very practical use of types, with the side benefit of theorem proving, but has also been shown to be almost as fast as C.

http://www.ats-lang.org


You seem to think that I think that the problem with languages with powerful type systems is that they are slow, and showing something as fast as C is a proof against this? I don't think your comment is relevant to the thrust of what I wrote, sorry.


All sorts of interesting stuff and then...

... OOP is inherently less useful than FP ...

Is it that hard to see different uses for different languages? The original BASIC was neither OO nor FP and came after both; still it satisfied a particular need. C++ is the worst programming language in the world, except for all the others - in the domain it is used in.


>Is it that hard to see different uses for different languages?

OK. How about a language that could encompass all of them? The thing about such a language is that it has to be a functional one at the core - be it Lisp, or Haskell, or Agda2.

Let's look at Haskell. It has OO: http://homepages.cwi.nl/~ralf/OOHaskell/ - everything in mainstream OOP languages and then some, all in Haskell's type system. It has BASIC: http://hackage.haskell.org/package/BASIC

I have embedded in Haskell a MIPS assembler, a language for describing CPU cores, and some other things.

I am not an expert in Lisp, but an avid reader of HN should know that it is pretty good at encompassing languages.

As for C++, its domain shrinks every year.


Oh certainly, FP languages are the most powerful and flexible languages, aside from maybe logic languages, far beyond OO. FP languages definitely encompass more; you are correct there.

I just think it's important not to confuse power and flexibility with usefulness. Rigidity and limits have their uses.

As far as C++ goes, its market share shrinks every year, yet since the market grows and C++'s share began huge, C++ gains more users and more market position every year than the total user base of functional programming languages. That may change in the future, but FP will have to show its usefulness, and not only its power and flexibility, for that to happen.


As for C++, its domain shrinks every year.

Mobile seems to be generating a resurgence of interest in C++. Squeezing every last bit of performance out of tightly constrained CPU and RAM matters again.


So surely Forth should make a resurgence; it has the highest code density, the fastest runtime, and the lowest error rate.

The only metric it loses on is its high score in impenetrability.


Actually, you don't need a Forth implementation to benefit from compact code. Take a look: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3...

They split the program into slow and fast parts: the slow parts used a compact byte-code interpreter, while the fast parts were compiled into native code with optimizations.

They didn't use "code compression" the way Forthers do - Forthers refactor common pieces of code into separate definitions. Nevertheless, they made quite a balanced system.


thanks


thank you for posting this, fascinating perspective


I really think that the history of computing is full of fantastic discoveries, Atlanteans and their lost country, historical twists, etc.

When I finally came to create my own map of one small island in the history of computing, namely "type systems", I was thrilled, no less.

The superheroes are among us, I think. ;)


Care to elaborate?


The title of this article, for example. The idea of this article is that functional programming is hard, because it forces you to think in a new way. But here, calling functional programming hard is just used to pat oneself on the back. Great, this article says you can map a function over a collection and then sum it. You don't need to think of yourself as some elite programmer that casts a shadow over procedural programmers. It's not true.

Functional programming is just the general notion of using more functions, and less computer-architectury things, to express your ideas. It's not necessarily about map/reduce, monads, or whatever. Encode your ideas mostly as functions where possible. That's it.

It's bad to say that functional programming is hard on purpose, or that it makes you a part of some elite cadre. Functional programming should be, and usually is, no harder than any other way to explicitly write down your ideas. The whole point is to make it more simple to express your ideas in a way that can be evaluated by a computer. Functional programming isn't hard. Programming is hard.


"Functional programming is just the general notion of using more functions, and less computer-architectury things, to express your ideas."

Umm, no, that is not what functional programming is. Functional programming is, basically, programming without side-effects. A purely functional language will have variables that are completely immutable. This is a bit of a shift away from the way things are done in C or C++.

Speaking just for myself, I started into the programming world with Java (not counting QBasic), and I am finding it incredibly hard to wrap my brain around the idea of functional programming.

My understanding is that most programmers that have a background primarily working with non-functional languages have a challenging time initially grasping the concepts of functional programming. However, I would guess that a programmer that learns a functional programming style early on would probably have very little difficulty understanding non-functional constructs.

For more information, I would like to point you towards some easy reading:

* http://en.wikipedia.org/wiki/Side_effect_(computer_science)

* http://learnyouahaskell.com/introduction#so-whats-haskell

Edit: To whoever down-voted me, I would appreciate some explanation of your views. The fact is that functional programming is not just using functions. The article plainly refers to functional programming languages and not simply using functions. There is a difference.


I don't ninja downvote, but if I had to guess I'd say it's because if you're finding functional programming incredibly difficult, you maybe shouldn't be lecturing on what it is.

As my cousin points out, it's true that it's easy to think of functions as just being macros in a language like C. If you call a function which doesn't have a return value, and which modifies some global state, that's not a "real" function, just a stored procedure.

However, if you write a C function which doesn't modify global state, but simply takes an input and returns an output, as even novice programmers do all the time, hey presto— that's the functional paradigm! There's nothing mystical about it, and what the GP is saying is that telling people who have done this that functional programming is something different is fundamentally confusing.


"I'd say it's because if you're finding functional programming incredibly difficult, you maybe shouldn't be lecturing on what it is."

Thank you for your thoughtful reply. I really do appreciate it.

It isn't the programming that is the hard thing. Writing some code to do something meaningful is not what is giving me a problem. Actually, it is trying to develop a deep understanding of functional programming that I am finding difficult. Maybe the problem is that I am confusing functional programming with purely functional programming?

My apologies to tumult if it came off that I was lecturing in my original post. I find it challenging to get tone perfectly right on the Internet. Face to face conversation or conversation over a phone is far better.

I think maybe this entire thread comes down to a question of semantics. When I program in non-functional languages I tend to lean towards using map/reduce type constructs a lot, but I never really thought of that as "functional programming". I guess I always thought that in order to really do functional programming, you really needed to have some kind of deep understanding of it.

Maybe if I had a formal computer science background, my perspective would be a little bit different because I would have more easily seen the point that you are making. As it stands, my background is in the social sciences and my knowledge of the science part of computer science is sometimes lacking.

Perhaps you are right, I should have been better informed before attempting to inform others. But if I had not typed my original post, then I would have never received your helpful reply and I would still be in the dark!


Maybe the problem is that I am confusing functional programming with purely functional programming?

Yeah, you nailed it. Writing a whole program with purely functional constructs is a little mind bending, since procedurally fundamental things like "printf" don't really exist. But that's just something that's much easier to do procedurally; it's hard for everyone, and there's no real secret to understanding it.

The important thing to remember is that languages don't have just one paradigm. An imperative language with first-class functions (as most high-level scripting languages have) is functional, just not "as" functional as, say, Haskell. You can use functional concepts in a language which isn't purely functional, just like you can use OO concepts in a language which isn't purely OO.

The good news is it sounds like you get functional programming much more than you thought, and you've definitely got the right attitude towards learning.

As long as you're open to being corrected, and honest about the limits of your knowledge, there's nothing wrong with trying to teach something you don't fully understand yourself. Teaching is one of the best ways to learn. Keep it up :)


+1. I fat thumbed you and downvoted, so sorry!


  My understanding is that most programmers that have a background primarily working with non-functional languages have a challenging time initially grasping the concepts of functional programming.
I'd guess that "most programmers that have a background primarily working with non-functional languages" could just be shortened to "most programmers". Take a look at any of the "language ranking" pages around and you'll find that they're all dominated by imperative/procedural languages.

  However, I would guess that a programmer that learns a functional programming style early on would probably have very little difficulty understanding non-functional constructs.
You're right; unfortunately, it seems that most CS curriculums these days just rush people through some basic programming classes in Java/Python (maybe C++) then through a few more "upper level" electives while just skimming over the algorithmic and data structures stuff that FP is most useful for.

I'd argue that the algorithmic / data structures material (maybe with an intro to some FP language mixed in) is much more important than those elective classes, because the elective stuff is much easier to pick up later on (as needed), while someone who doesn't know their algorithms could spend a whole career writing buggy, slow code because of it.


I'm not expert enough to call you out on this, but isn't "function", the way it's used in "functional programming", working from a different and stricter definition of the word "function" than (say) a C programmer is used to? I don't know that you're wrong about this summary of FP, but it doesn't smell right to me.


That's about right. A "function" in functional programming is supposed to be just like the functions you remember from grade-school algebra class, e.g.,

  f(x) = 2x + 5
If you look at that function, you'll notice that for any given value of 'x', the function always produces the same output value. For example, no matter how many times you try it, plugging in 3 for 'x' always returns 11. This means that f(x) is a "pure" function.

The only difference between the trivial example with f(x) above, and say, Haskell, is that the functions you're working with in Haskell are much more complicated. Boil it down though, and they're still pure, mathematical functions nonetheless.

Now, you can (should?) write purely functional programs in C if you follow the same principles. The main difference is that C doesn't force you to write your code this way, and since writing purely functional code can be a bit tedious, most people don't.

This leads to impure functions in your code, like when you read a file from disk -- if the file is there, the function does one thing; if not, it may do something else (like crash). Since the output of an impure function can vary depending on when/where/why/how it's executed, it can be much harder to find bugs in it, since the code may compile and run just fine until the moment it doesn't.
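Here is a small Haskell rendering of that contrast (my own sketch; the file name is made up): the pure function has a plain type, while the impure one is marked with IO.

  -- Pure: same input, same output, every single run.
  price :: Double -> Double
  price x = 2 * x + 5

  -- Impure: the result depends on whether the file exists and what it contains,
  -- so the type carries IO.
  readConfig :: FilePath -> IO String
  readConfig = readFile

  main :: IO ()
  main = do
    print (price 3)                  -- always 11
    cfg <- readConfig "settings.txt" -- hypothetical file; may fail at runtime
    putStrLn cfg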

Anyway, that's the basic rundown on FP. I'm not an FP expert, so if I've missed anything, someone please correct me.


It depends. A C function is actually a procedure: a list of statements that have some effect on the computer, and then (maybe) shove some return value through the stack at the end. A function in languages like Clean or Haskell or Clojure or Agda or Coq can only return values, and has no effect on the machine (like mutating memory). Of course, this construct is used only as a way to express ideas. The notion of a function that has no other effect on the machine is actually a lie: the time it takes to evaluate the function is a real effect, and so is the memory it uses. It's not a perfect abstraction (none are), but for many types of problems, pure functions can work as a tool to help a programmer.

Sorry, typed this on a phone, hard to edit.


I agree. Minimizing state and side effects is at least as core to functional programming as "you should use lots of functions."


I believe there have been functional programming contests where the language used by the winners was the functional subset of C, which as you would expect includes C functions.

But the term "FP" is as much about freedom from side effects, lambdas, currying, and sometimes lazy evaluation as it is about the superficially-similar functions in a typical imperative language.


Like you said, functional programming isn't hard, but I think the problem is that learning a completely new language with an unfamiliar syntax and abstractions is hard, and trying to apply functional programming techniques in most current mainstream languages is equally hard.

The fundamental language feature that makes true functional programming possible is first-class functions, since that allows higher-order functions and anonymous functions. The lack of that feature in most mainstream languages is why functional programming feels so foreign to most programmers. Sure, you can write functions without side effects, but making good use of them without polluting your namespace, explicitly passing around function pointers, or having lots of extra boilerplate code lying around is difficult in a lot of languages.
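For a concrete feel of what first-class and higher-order functions buy you, here is a minimal Haskell sketch (illustrative names, not from the thread): functions are passed around and partially applied like any other value.

  -- Apply a percentage discount; partially applying the rate yields a new function.
  discount :: Double -> Double -> Double
  discount rate p = p * (1 - rate)

  -- A higher-order function: threads a value through a list of functions.
  applyAll :: [a -> a] -> a -> a
  applyAll fs x = foldr ($) x fs

  main :: IO ()
  main = do
    print (map (discount 0.1) [100, 250, 80])  -- [90.0,225.0,72.0]
    print (applyAll [(+ 1), (* 2)] 10)         -- (+1) ((*2) 10) = 21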

That's constantly changing though. I think as more languages start adding support for higher order functions, even established languages like C++ and Java, we'll see more and more people using a functional style and less nonsense discussion about how "hard" it is, because it really isn't hard at all. Already, you see bits and pieces of functional Javascript code littered all over the Internet on a regular basis.

In fact, I'm willing to bet that at some point in the future, functional programming techniques will be considered easier and more natural than a purely imperative approach, simply because more programmers will have encountered them sooner in their programming careers and made effective use of them. It'll be an everyday occurrence to meet a programmer who uses anonymous functions for callbacks and commonly makes use of fold, reduce, and map instead of explicit loops and state variables.

Meanwhile, meeting someone who knows what a pointer is or even how to manage heap memory will be a one in twenty event, and chances are that person will be an embedded developer who also knows how to write interrupt handlers and even a custom, deterministic memory allocator for real time applications.


http://harmful.cat-v.org/software/OO_programming/

e.g. "The phrase "object-oriented" means a lot of things. Half are obvious, and the other half are mistakes." -- Paul Graham


I don't think pg really understands OO. He seems to associate it with boilerplate getters and setters ala Java.

This is the best description that I've found:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things - Alan Kay


William Cook's response to Paul Graham's explanation for why Arc isn't especially object-oriented:

http://wcook.blogspot.com/2011/04/paul-graham-on-objects-in-...


Quote fight

"object-oriented design is the roman numerals of computing." -- Rob Pike

And Rob shared an office with Bjarne for quite a while.


OO advocates don't generally consider C++ to be a particularly OO language, so I don't see what Bjarne has to do with anything, you are just reinforcing the original point: critics of OOP are often criticising things that have little at all to do with the original ideas of OOP.


Ok, I've been a procedural programmer for years. I struggle even with OO. Where's my easy bridge to Functional? Where's the killer "here's the trick, the secret, the leap"?

Because I have to say, for all the "functional will save you" mantras, I keep finding a procedural approach gets the problem solved. Are my problems too simple? Not scale issues? Perhaps. And clearly, I am not trained in anything other than intro Lisp and Hadoop, so I haven't had the deep dive indoctrination others appear to have had.

But as each new computing metaphor comes, we find ways to make it easy for folks trained in older metaphors to come over. Other than Scala, I've found few bridges that are trying to help procedural and OO lang folks adopt functional. It's no-one's fault, I guess, other than new folks are trained in it, and older folks aren't.

But I'll keep looking for that bridge, that shining one thing that will make me go "aahhha, I see" and not "one of these books will explain how passing this function through this multi-nested other function is better than just making a loop".

Because after all these articles, I know functional is great. I just feel bad that I haven't been able to make it great for me... yet.


I don't know if FP will appeal to you or make your life easier, but why not spend a couple weeks learning Haskell and see? Try "learn you a Haskell" or the Real World Haskell book (both excellent and free online).

I'm a fucking music major and figured it out pretty well. I'm sure you'll have no problems if it's something you're actually interested in learning about in your free time.


One of the cardinal rules of OO is: "each unit (method, object, whatever) should do exactly one thing"[3]. This rule is universally ignored in all the big OO codebases I've seen[1]. Applying this rule with discipline forces you to have a better understanding of what your code is actually doing. I think that applying it even in OO languages pushes you towards "functional-style in the small, object-oriented in the large"; the only difference is that your code will have fewer classes/methods and more high-level data structures with operations like list.filter on them.

[1] I get it: refactoring takes discipline and time, and a desire to practice and learn[2], and a team of like-minded people, on top of sensible business constraints. [2] http://www.dustingetz.com/nostrademons-75h-work-week-harmful... [3] http://en.wikipedia.org/wiki/Single_responsibility_principle

edit: added source for single-responsibility principle
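A minimal Haskell sketch of the "fewer classes, more high-level data structures" end state described above (my own illustrative example): one plain record type plus standard list operations instead of a small class hierarchy.

  data Order = Order { customer :: String, total :: Double, shipped :: Bool }

  -- Each piece does exactly one thing; the pipeline composes them.
  unshipped :: [Order] -> [Order]
  unshipped = filter (not . shipped)

  unshippedRevenue :: [Order] -> Double
  unshippedRevenue = sum . map total . unshipped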


That's not a cardinal rule of OO or even a rule of OO.


It is if you don't want to write side effect riddled 'objects'.


If it's a rule it's a rule of programming in general, not OO.


The thing that got me really interested in learning Haskell was the realization that something like 90% of the nasty bugs that I fixed in production over the years would have been impossible in a pure functional language.

The thing that slows me down is the prospect of actually getting paid to write Haskell, and my doubts about its ultimate suitability for the iterate-quickly, fail-fast software world.

But reading this article reinvigorates me. Pure functional programming may be much harder, but its rigor allows for more powerful abstractions. Grokking those abstractions is no doubt useful regardless of what language you're using.


>doubts about its ultimate suitability for the iterate-quickly, fail-fast software world.

Lest you doubt: http://thesz.mskhug.ru/svn/hiersort/doc/hhm-eng.pdf

We did a cycle-accurate prototype of MIPS CPU with some twists. In Haskell.

For Haskell to shine in prototyping you have to apply it to some critical and new task, where the type system works with you, preventing errors. The novelty is crucial, I think. You will have to explore the solution space with the help of some sort of theorem prover (the type system).


"You will have to explore the solution space with the help of some sort of theorem prover (type system)."

Have you tried ATS (Applied Type System)?

http://en.wikipedia.org/wiki/ATS_(programming_language)


No, I haven't. When I looked at it, I didn't see a well-composed thing like Epigram or Agda2.

So I decided to stick with Haskell for a while and then slowly work my way to Agda2, if I feel the need.


They might have been impossible, but how long would it have taken to write the functional code versus writing the code with side effects and fixing the bugs?


It is worth noting that there are a few other paradigms that can change the way you think of programming - recently there was a post providing a good overview of languages that will make you learn some new abstractions, functional programming being one of them: http://blog.fogus.me/2011/08/14/perlis-languages/


I have done a few real, production projects with Haskell, and I've come to realize that a lot of interest in it is hype. The paradigm is neat, the language is relatively well designed (considering that it's an academic committee language) and the implementation is passable.

Programmers sadly forget that the greatest challenge of software creation very often happens outside your text editor: it happens in the domain, in architecture, specification, and communication. If you succeed in those, your project will succeed, no matter which language* you choose. If you fail in them, no amount of strictness in your programming language will save you.

* unless of course it's Java


I see that some people are saying that functional programming is more powerful than OOP. I disagree with that.

I do not think one is less than the other. Actually, there is a mathematical argument why one is not less than the other. To me, functional programming asks "what does the thing I'm trying to describe do?", while object-oriented programming asks "what are the properties of the thing I'm trying to describe?". In functional programming I describe the interactions directly, and in OOP the interactions come about from how I have described the objects. Done properly, they are both about interactions; it is just a difference of what you focus on.

It has been shown that OOP classes can be modelled, for the most part, as coalgebras. This means that objects are a mathematical dual to the algebraic types of functional programming. The reason functional programming is important is that some things are easier to express in a dual space; this means that many things that are hard in OOP are trivial in functional languages. But the reverse is also true. This is why it is important not to drop one for the other.

The real advantage, IMO, of functional programming is that these languages tend to be developed with stronger mathematical foundations. Hence programming in them tends to encourage people to be principled and rigorous (at least in theory). It is very likely that if you study the subject you will naturally come to be interested in why monads are only one particular type of functor, in how polymorphic functions are well described as natural transformations, or in trying to wrap your head around mechanically generating dynamic programming algorithms in terms of hylomorphisms. The more mathematical nature also makes it less magical and rickety to the self-taught programmer (such as myself). That, I think, is the real advantage. But there is nothing inherent in OOP that stops it from also being built on more rigorous foundations. Such things will come to matter more with increasing concerns about security.

I think functional programmers sleep on the power of coalgebras. In fact, his example of Google MapReduce is completely ignorant of the fact that the real hero in MapReduce (tm) is unfold, not reduce.

I have a pet theory that the fact that algorithms are written in dual styles is why experienced OOP people find functional programming so hard. They literally have to reverse their style of thinking. This takes a lot of energy. Just because it takes place in your head doesn't make it any less physical than trying to roll a boulder uphill or get a wagon wheel out of a rut.

http://www.cs.ru.nl/E.Poll/papers/durham97.pdf


I think the root problem here is out of band -- expertly written OO models are fine. The problem is that I've never encountered an expertly written OO system, which is probably because OO allows you to get lazy, where functional patterns require thought and understanding. I'm gradually starting to suspect that an expertly written OO system actually kinda looks like a functional system, except with practical compromises with respect to functional purity.


> The more mathematical nature also makes it less magical and rickety to the self taught programmer (such as myself). That I think is the real advantage. But there is nothing inherent in OOP that stops it from also being built from more rigorous foundations.

I feel that OO the paradigm is rickety, a marriage of ad hoc semantics and syntactic sugar, and that it is inherently limited.


And here I was expecting the surprise moral to be, "Functional code looks better than average, just because only smarter-than-average programmers can manage to work in FP languages."


Functional Programming Is Hard, That's Why It's Good

This is a major issue in software development. Developers like a challenge so they are often drawn to complexity. Unfortunately, this interest in complexity often carries over into their work.


I'd really love to hear PG defend this line: "But with Lisp our development cycle was so fast that we could sometimes duplicate a new feature within a day or two of a competitor announcing it in a press release. By the time journalists covering the press release got round to calling us, we would have the new feature too."

I'd really love to see the code for something that one would do in a day or two in Lisp, but would be much longer in another language. I don't suspect there exists a wide class of such things -- I suspect the features tend to look like embedding Lisp compilers.


It's not because they were using lisp. It's because everyone else was using C.


Also speculating, but I would hazard a guess that a lot of that was due to the fact that the library ecosystem was much smaller then, which means people would be implementing things that you don't have to think too hard about now.


I speculate the edge compounds as, say, a competitor's Java codebase's complexity increases faster than a Lisp codebase's. Compound this over time and the edge is obvious.


The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried.

http://www.paulgraham.com/avg.html


How about learning functional JavaScript?


"Pointers are a very powerful and fundamental abstraction"

An abstraction of what? You can't get much closer to the metal than with pointers. Frankly, I'm not sure someone writing this has much insight worth wasting time on.


I have an electrical engineering background, and all this FP vs. IP talk sounds to me like asynchronous logic vs. state machines. How far off base is that description?


In the first paragraph, a dialect of Lisp is considered a functional programming language, so I didn't read any more. 133 points for this?


The "One True Language Paradigm"....

... Is functional programming!

(while it lasts that is).

Keeps the money circulating.


Can anyone recommend any resources for getting started with Scheme?


The Little Schemer series was very good; you can safely skip the first 5 chapters if you're comfortable with recursion. Teach Yourself Scheme in Fixnum Days is good too for learning the language right away.


I love Erlang so much I haven't yet gotten around to learning another functional language (I'm still growing with Erlang).

I learned it reading Armstrong's book from Pragmatic Programmers. It was a joy. I'm also reading the OReilly book, and it seems good as well, though I can't tell how it would be if I didn't know the language already. As for the titles-- one is Programming Erlang, and the other is Erlang Programming!


I think you should learn Brainf*ck. It has awesome mental/intellectual benefits for you as an Erlang lover. No, but really...! And if you don't, you will suffer! Muahaha!


Erlang is easy to read and makes a lot of sense, without requiring much brainpower... if you'll take a few days and read about it.

I think it's kinda silly that people will just ignore a language because it doesn't look like Java/C/Pascal. It's not like the syntax of Erlang is stupid, or pointlessly obtuse.


IMO, the only really ugly part of Erlang is its record syntax, but its binary handling more than makes up for it!

Doing Erlang at the day job for the last few months has been awesome. I really wish it would catch on more. Maybe when the JVM supports tail-call optimization and there is a JVM implementation of the runtime...


We know, we know. BUT no one has come up with a better syntax that really works, all the time. There have been suggestions but none have really worked.


I write Erlang every day and love it, but I have to agree with most of the points raised by http://damienkatz.net/2008/03/what_sucks_abou.html


This is the lie that every functional programmer has perpetuated for the last 50 years since the dawn of Lisp - that somehow, automagically, the productivity or effectiveness of programmers increases with the use of functional languages.

It's the biggest lie in the programming world, and it was designed to make the uber-geeks of this world, those people that you walk around the block just to avoid saying hello to because you know they are deeply and profoundly anti-social, feel good about themselves, to feel special, as though they have the "upper hand" over normal people who deliberately choose other languages over purely functional ones.

There is value in learning how to think recursively, but a lot of programming, especially systems programming, doesn't need to have its for loops transformed into recursive functions, or its addition operators into primitive recursive functions.

I am sick and tired of people expounding on the alleged efficacy and efficiency of functional languages and I wish they would just STFU once and for all and go back to their porn intermezzos between their functional programming spurts.


You're making the same mistake that the Java loyalists did when Rails came out: judging something based on your distaste for the advocacy around it, without any real understanding. I'm not going to pretend to know your motivations here, but oftentimes people throw up this defense mechanism to protect their own professional knowledge; and here's the thing: that does nothing but stunt your own growth.

Functional programming is about the removal of side effects, aka mutable state. Removing side effects makes programming harder because ultimately the whole point of computer programming is to create side effects. Isolating those side effects and writing most of your code in a functional (mathematical definition: always the same output for a given input) way is challenging, but it also promises an order of magnitude more possibility of correctness. The mathematical rigor of a pure functional language like Haskell allows deeper reasoning to be done, and thus more powerful abstractions to be introduced.

In practice functional programming is not the fastest way to solve many problems, but it is an excellent way to push forward the state of computer science as a whole. Think about the relationship between physics and math. We would not have the deep practical knowledge of the physical universe that we have today without the complex math that allowed physicists to reason about things well outside the realm of experimentability.


In practice functional programming is not the fastest way to solve many problems, but it is an excellent way to push forward the state of computer science as a whole.

I don't think the state of computer science is really held back much by the current state of programming ability. SW Engineering is maybe held back by it, but many of the best computer scientists I know don't program at all and would think discussing Lisp vs Java would be like asking a mathematician if he preferred to use an ink or gel pen.


Here's the problem I have with this: nearly everything I do involves state. It's called a database.

Let's take a simple example, a to-do list. Abstracted out to its essentials (a piece of paper), the to-do list is exclusively state. Add a task, complete a task.

Now, I can write a program that manages a to-do list in a functional language. There are two options. (Let's assume this is a web app.) First, I can make the client manage the state. Each time they use the application, they bookmark the last page. This is obviously inconvenient, but it's pure. Every input has a distinct output. Send the entire to-do list and what you wish to add or complete as input, and receive an updated copy of the state in return. It's always immutable state. It's also inconvenient if you ever want to use this on another computer.

So, like most to-do lists, we use a database on the server side to manage the state of the application. However, you've lost the entire benefit of immutable state! You are not guaranteed to get the same output given an input. We're back to square one, aren't we? What's functional about mutable state?

Now, in a subset of programs, eliminating mutable state is beneficial. 99% of scientific programming would be best served by FP. However, I've never seen an answer to how we can still call any system functional once we add a database.

And at that point, why are we using a language that is bad at the very thing we expressly need in the first place?


> It's called a database.

Programs have to have state and side effects to interact with the outside world. That's not what's frowned upon.

The goal with functional programming is to try and minimize the number of functions that involve those external transactions. The spine of your program in Haskell will revolve around IO, but all the limbs can probably avoid requiring anything more than inputs and outputs.

Perhaps a concrete example would help? Imagine parsing incoming HTTP requests on your TODO app. Ignoring streaming (which I think we can do safely in this case), your IO code is to read and write to the socket, but the intermediate code to parse the incoming request into data structures for inspection? That can and should NOT be dependent on I/O. Similarly, the code that actually generates your HTML output (before writing it to the socket) need not care about I/O.
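A compressed Haskell sketch of that split (my own illustrative names, with stdin/stdout standing in for the socket and database): parsing and rendering are plain pure functions, and only main touches IO.

  data Request = Request { method :: String, path :: String }

  -- Pure: turning the raw request text into a data structure needs no I/O.
  parseRequest :: String -> Maybe Request
  parseRequest raw = case words (takeWhile (/= '\r') raw) of
    (m : p : _) -> Just (Request m p)
    _           -> Nothing

  -- Pure: generating the HTML needs no I/O either.
  renderTodos :: [String] -> String
  renderTodos items =
    "<ul>" ++ concatMap (\t -> "<li>" ++ t ++ "</li>") items ++ "</ul>"

  -- The "spine": the only place that reads or writes anything.
  main :: IO ()
  main = do
    raw <- getContents
    putStrLn $ case parseRequest raw of
      Just (Request "GET" "/todos") -> renderTodos ["buy milk", "ship code"]
      _                             -> "400 Bad Request"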


Functional programming isn't just "no effects." There are many parts to functional programming: there's the idea of functions as the primary means of expressing data/information (as opposed to e.g. objects in OOP), the idea of functional purity (which is obligatory in Haskell but merely preferred in ML or Scheme), the various degrees of static type-safety, expressivity and power (e.g. Haskell with extensions is powerful but not always safe, Coq is safe and expressive but necessarily limited in power, Scheme is dynamically typed), as well as a handful of other concepts which appear from time to time. Even OOP isn't necessarily a conflicting paradigm, see OCaml or Common Lisp's CLOS system for the intersection of objects and functional programming. For more radical ideas, look at e.g. Functional Reactive Programming for functional data-flow programming.

w/r/t the specific side-effects in pure-functional programming: in Haskell, functions which perform effects are specially marked in the type system, so a function which performs side effects can call another one which performs side effects, but non-effectful functions can't unexpectedly induce effects. You end up writing programs with an effectful core for e.g. the manipulation of the db, and non-effectful functions for whatever processing you need to do with the data, with compile-time guarantees that the processing functions can be tested in isolation and won't cause exceptions or do unexpected IO.

Of course, you can write imperative code in a functional setting, and you can write functional code in an imperative setting. There's an old saying that bad programmers can write FORTRAN in any language; you can also write Lisp in any language, or Haskell in any language, with varying degrees of effectiveness or utility. Functional programming partisans—myself included—assert that functional techniques, correctly applied, can reduce errors (c.f. the Compcert C compiler) and simplify writing good code (c.f. the functional implementations of the Actor model of concurrency, parser combinators). I don't personally believe it to be a panacea, but it puts another tool in the toolbox for whenever your existing hammers aren't solving your problem.


It seems you didn't read my post at all, because this is a complete non-sequitur. Look at the only sentence in which I emphasized a word and notice that it is exactly the same as the opening sentence of your rebuttal.

This tells me that you are operating off of some mental model of what you've heard from FP proponents in the past without actually giving what I said proper consideration. The irony is that in your other comments you decry the religion of such debates with FP proponents, but in this case you are the one bringing a religious view. Nowhere did I say that FP languages are generally superior for real-world problem solving.


I'm not the one you replied to originally. You're right, it was a non-sequitur, but mostly because it was a question I'd had since I took a functional programming class in college, and I'd never seen an answer that sat well with me. One of the other commenters gave one, so rather than contribute to the noise I just stayed silent and upvoted.

I was just taking advantage of a heated discussion to get an answer. :)


Of course, most useful software involves input and output and state. Functional programming doesn't preclude that. There are compilers, package managers, documentation systems, etc., written in Haskell. They all use files and databases. Of course, your programs have to be designed differently.

For reference, here's a talk by Don Stewart et al. about the design of XMonad:

http://www.scribd.com/doc/19503176/The-Design-and-Implementa...

I guess if one can write a window manager in Haskell, one can also write a todo list.


"The mathematical rigor of a pure functional language like Haskell allows deeper reasoning to be done, and thus more powerful abstractions to be introduced."

Allows or requires? I guess I never really understood the difference between reasoning about code and actually getting shit done.

The thing with Haskell is that it is almost like two languages in one. There is the Haskell you learn, where nothing has side effects, everything is lazy, and you are basically just binding things up to be evaluated at the end.

Then you have the Haskell that programs actually get written in, with do blocks and actions, if I remember correctly, and it looks a lot more imperative than functional, with actual side effects happening all the time.
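
If I've got it right, though, the do-block version and the "functional" version are literally the same program; do is just sugar over (>>=). A trivial sketch:

  greet :: IO ()
  greet = do                      -- the "imperative-looking" Haskell
    name <- getLine
    putStrLn ("hello, " ++ name)

  greet' :: IO ()                 -- the same program, desugared
  greet' = getLine >>= \name -> putStrLn ("hello, " ++ name)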

The syntax, which is probably the hardest part of Haskell, is totally different.

Very confusing for someone who learns textbook Haskell and then jumps into actually maintaining and writing programs; you almost think you have learnt the wrong language.


Two things: this is one of the trolliest posts I have seen on HN in a while, save for my own (now deleted) counter-troll. The fact that it keeps getting upvoted is astounding. Seriously, is associating a certain technique with uber geeks/neckbeards/nerds actually a valid argument for discounting it?

It's true that FP is too often presented as a remedy for the common ailment of programming: not producing software that you can understand quickly enough, also known as 'productivity'. But just because it's over-enthusiastically promoted doesn't mean there's no truth to the claims. It's mostly that

  a. you can compose your software nicely from parameterizable operations over common structures (map, reduce, filter, take, etc.)

  b. mutable state can make programs messy so you should avoid it
But of course you don't care. At all. After all, if you did, you wouldn't be quickly hand-waving the whole issue away with a fleeting reference to 'recursion', as if it were the dominant instrument in functional programming. You learn _very_ early that writing recursive functions is just more work than throwing together a bunch of functions over a sequence or some such, if needed (see the sketch below).
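
A throwaway sketch of what (a) buys you, versus the explicit recursion you rarely end up writing:

  -- compose a pipeline of library functions, no explicit recursion:
  sumEvenSquares :: [Int] -> Int
  sumEvenSquares = sum . map (^ 2) . filter even

  -- the hand-rolled recursive version of the same thing:
  sumEvenSquares' :: [Int] -> Int
  sumEvenSquares' []     = 0
  sumEvenSquares' (x:xs)
    | even x    = x * x + sumEvenSquares' xs
    | otherwise = sumEvenSquares' xs

  -- sumEvenSquares [1..10] == 220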


"it was designed to make the uber-geeks of this world, those people that you walk around the block just to avoid saying hello to because you know they are deeply and profoundly anti-social, feel good about themselves, to feel special, as though they have the "upper hand" that other normal people who deliberately choose other languages over purely functional ones."

What is the point of this vitriol? Makes you seem fairly anti-social yourself.


I think you're misunderstanding the aim of my essay. It's not to say that FP is uniformly superior to OOP. It's that there is value (and difficulty) in learning the functional abstractions. The same is true of most programming paradigms, but we live in a world where FP is one of the less understood yet increasingly relevant tools.

Do you really think this is so controversial?


Controversial is hiding a bastard child in plain sight for 10+ years while running a state. You're not :).

Joke aside - I don't want to write essays here, so I'm gonna try to make a hopefully brief point.

I -know- your essay was well-intentioned, and believe me, so is my reaction, after having written software for a living for nearly 15 years...

Rather than expound, let me ask you - do you know of Martin Fowler? I get that Graham and his cult-like persona have ensnared you/others somewhat in his FP Kool-Aid cult :D... but read Fowler for a bit of balance too.

Your intention was actually awesome - you want to illustrate a general principle, e.g. an abstraction: that thinking in and writing through a functional paradigm/language makes you a better programmer. Is that "true" to the extent you and a bunch of other FP evangelists claim? Why, yes it is! So is learning assembly language, or Brainf*ck for that matter....

Why I mention Fowler, and really enjoy his non-imposing, well-argued style of writing, is that he effectively does what you bravely attempt to do here, namely convey a general principle about _programming languages_, except he does it expertly whereas you do it naively, despite the good intention. To get what I'm saying, surf over to www.amzn.com and get yourself a copy of Fowler's Domain Specific Languages.

Upon reading that book, heck, even halfway through, you soon come to the same realization that one of the other repliers to my original rant came to some time ago - we are ultimately discussing various paradigms/philosophies of thinking. No single paradigm, or combination of paradigms, tailored to your particular way of thinking or that you are "hard wired" for (which I find hard to believe, but I'll grant some 'nature vs nurture' arguments here too), makes you or anyone else, me included, necessarily a 'better' programmer... I'd go as far as saying that it's probably your personality, and the mental/emotional states you experience on a daily basis while glued to your PC, that make you more or less effective at programming - e.g. you're a 'perfectionist/procrastinator', most common among programmers, or you're extreme and reckless and use inadequate tools to just conjure something up, looking to get 'promoted', and it blows up a few months later or BSODs in a demo to the board of directors....

This has been said over and over, and for some reason, it doesn't sink in - there is no silver bullet, there is no 'better/worse' programmer - unless you count lines of code produced per day as a 'productivity measure', hardly a clever measure of anything...

Back to your essay - I'd much rather you embark on the topic of DSLs than use CL as a platform to write a DSL in - which is -fine- except that people who are unfamiliar with Lisp's syntax, e.g. domain experts, can relate to a specific, external DSL better than they can relate to a Lisp-based DSL...

Think of teaching a DSL to a ... I dunno, a postal worker who examines mail rejects that didn't pass the OCR phase. You could construct a DSL specific to that purpose. It would be relevant and useful and increase the worker's productivity.

On the flipside, teaching him Lisp, such that he can learn a DSL designed inside of Lisp, would be a pain in the ass...

DSLs, enabled by macros, are one of the highly touted benefits of Common Lisp, which I actually like because of its multiparadigm ('dirty') nature...

Anyway, I don't really want to indulge here - I think if you are serious about writing something, which I believe you are - be as balanced as you can be, and never throw ANYONE under the bus - including people who have only coded VB in their life.... You are either on board with all people, or you're alone.

Smugness about a particular paradigm is something I'd stay away from, as well as people who hypnotize you with their eloquence and "successes" (financial or otherwise) into thinking that they've discovered the best thing since sliced bread.....

This is why, as much as I like my Macbook, I still think Apple's a cult, and that Linux will eventually prevail, even on mobile platforms, tablets, etc.

Peace.


No offense but the same could be said about Object Oriented Programming. To me functional languages make a lot more sense, because of the "No side effects" stuff.

Reminds me of a quote from Richard Stallman: "Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program."

But I also don't consider functional programming harder. I actually do have more problems with (C++/Java/Python style) OO, even though I like it when it is a real language concept, like in Smalltalk and others.

But I am weird. I consider Assembly to be easier (to learn) than C(++) and Perl easier (to read) than most other scripting languages.

I think it is a lot about the way you think. A language has to compensate for the things that are hard for you so you can concentrate on the easy stuff. It's a bit like having different tutorials or teachers. There are people who think similarly to you, while there are others who think very differently. It's a common problem in schools. If the teacher thinks in a different way, the pupils will have a hard time.


"I think it is a lot about the way you think"

Brother, you are on the money with that statement. But the MINUTE I make a religion out of the way _I_ or a bunch of eclectic uber-geeks think is the minute I bring about the demise of, and distaste for, that particular paradigm of thinking.

I can relate to the 'way of thinking' better than these stupid religious arguments about FP. Did you know you can program in an OO fashion in C? Or even in Lisp, if you really cared to? You could also write a very 'functional' program in C if you really wanted to.

Why everything has to be black and white with geeks is beyond me - but it just points to a lack of mature thinking.


Common Lisp at least has full support for object-orientation. You don't have to roll it yourself like with C.


Which is why I find CL more appealing to my palate...


I'm not an FP fanboy (yet), but it seems you have a very limited view of FP. It's not just recursion...


I assure you, it's recursion all the way down.


Wow, this is really dramatic.



