How Class-based Programming Sucks (2011) (loup-vaillant.fr)
140 points by vimes656 on Dec 13, 2013 | hide | past | favorite | 139 comments



Class-based programming works great for GUI toolkits. In most other contexts, the suitability is variable. OO is a horrible match for compilers, for example.

The article is barking up the wrong tree when the author tries to build an Option type in C++, though. The normal OO way to handle the same class of functionality as ADT sum types is to use separate subclasses. What he's not acknowledging is that OO and functional+ADTs each only handle half the expression problem[1], in different ways. There's no absolute superiority for either model. It depends on the domain.
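To make the trade-off concrete, here is a minimal Scala sketch (my illustration, not the article's code) of an Option-like type done "the OO way": one subclass per variant, with behavior in virtual methods instead of a pattern match. Adding a new variant is one new subclass; adding a new operation means touching every subclass - the mirror image of the ADT situation.

    // One abstract class, one subclass per variant (monomorphic to keep it short).
    abstract class Opt {
      def getOrElse(default: Int): Int
    }
    class Full(value: Int) extends Opt {
      def getOrElse(default: Int): Int = value
    }
    object Empty extends Opt {
      def getOrElse(default: Int): Int = default
    }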

Less mutability is good almost everywhere though. The lack of persistent data types is a blight on almost all non-functional container libraries.

[1] http://en.wikipedia.org/wiki/Expression_problem


PL designer here with extensive experience writing compilers in C#, Java, and Scala. Pattern matching is nice but OO will get you most of the way and filling the matching gap is simple.

The expression problem is much more safely solved using virtual classes (using traits and type members) in Scala than with case matching. There are so many OO solutions to this problem it's not funny; I think I had a small argument with Wadler over this back in 2001 or 2002.

As for persistent collection types, observable collections with undoable operations are just as good, if not better because they can still support mutability while persistent collections cannot.


I've written compilers in C, C++, C#, Delphi, Java, ML and Lisp; targeting x86, x64, JVM and MSIL. I was maintainer of the Delphi front end at Borland / Embarcadero. Pattern matching isn't just nice; you end up with substantially better typechecking, and you don't need ugly patterns like visitors with their inverted callback control flow if you want to decouple e.g. type checking or code generation from your AST classes. Visitors are implicitly closed to extension unless you create very abstract visitors (and I've gone down that path, it's not pleasant) - you lose a big chunk of the extensibility objects give you over ADTs. (I'd prefer to use multimethods, but they are rarely available.)

I think persistent collections, along with nice syntax for updating immutable objects (i.e. cloning with a subset of changed attributes) are more practical than having undo available. Immutable data types remove the burden of worrying about who's going to modify the data type from under you, so it lets you share subgraphs more freely. You don't need to worry as much about coupling, because the things you give your state to can't modify it. A possible alternative is mutable collections with snapshots or freezing, but I think persistent collections are a better approach.
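As a concrete illustration of that update syntax, a Scala sketch (my example, not the poster's):

    // An immutable record; copy() clones it with a subset of changed attributes.
    case class Config(host: String, port: Int, debug: Boolean)

    val base    = Config("localhost", 8080, debug = false)
    val tweaked = base.copy(port = 9090)  // new value; base is untouched,
                                          // so anyone holding base sees no change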


I've never had to use visitors in my compiler implementations; abstract methods + overrides are good enough. I also use partial classes in C# heavily, even though these don't support separate compilation. I dislike callbacks too, but control flow in general is abstracted away by a flow analysis framework of some sort anyways (technically you don't need this for parsing, but it works well for the systems I build). On the other hand, scalac is just a giant pattern match in Typers and Namers, a style I dislike, but to each his own.

I've built a live programming environment where reactive mutable collections are much more appropriate than immutable persistent collections; you can check it out here:

http://research.microsoft.com/en-us/people/smcdirm/liveprogr...

The problem with immutable collections in general is that they cannot track change deltas at all, which is necessary when building incremental systems. Undo is also essential for these kinds of systems. I think persistent collections are a dead end, but it's an argument I'll have to work on over the next few years.


The most complex code I've ever personally worked with involved a proliferation of abstract visitors in Java. I really don't think most people working on the code could understand how it all tied together (myself included).


> Visitors are implicitly closed to extension unless you create very abstract visitors (and I've gone down that path, it's not pleasant)

I'd love to hear more detail about your experience on this point.


Wow. Don't mind us mere mortals, please continue your discussion; it seems really fascinating even though I hardly understand a word of it.


I consider myself a mere mortal too but I can understand most of the discussion.


I couldn't help but notice this phrase:

[X] is nice but [Y] will get you most of the way and [dealing with the difference] is simple.

reads a lot like the Blub conceit. I'm not accusing you of that, but I do disagree that OO can give you what pattern matching typically does.


>reads a lot like the Blub conceit

Perhaps. But then again, the "Blub conceit" is not something that exists in real life as a proven fact of computer science.

It's just an argument expressed in an essay. Not some kind of formal logical error.


What's your point?


My point, which is pretty obvious, is that "reading like the Blub conceit" doesn't mean anything at all with regard to its correctness.

Might as well say "your argument reads a lot as something that an unbeliever in the flying spaghetti monster would say".


I live ok without pattern matching and I build compilers/runtimes for a living; is that blub conceit?

I also have no problem with the expression problem. C# has partial classes anyways, which work even if type checking is non-modular. Of course, I would like it if C# supported some form of pattern matching, but not enough to switch over to F# (whose pattern matching isn't as rich as Scala's, anyways).


> I live ok without pattern matching and I build compilers/runtimes for a living; is that blub conceit?

No, I would have quoted it the first time if it was. But if you want to repeat that again you will certainly sound conceited.


OK, I have no idea what point you are trying to make then. Please forgive my ignorance on your social conventions.


The point was simply that what you wrote is how a very bad argument starts. I took care to point out I wasn't accusing you of that but you have reacted defensively anyway.

It's my fault for trying to engage an HN-er in a form of discourse other than debate.


If it's a meta comment, just say so directly. It is a bit difficult to decode these comments sometimes, especially when defending OOP.


What is the blub conceit? I searched Google and got nothing.


The important bit is:

    As long as our hypothetical Blub programmer is looking down the power 
    continuum, he knows he's looking down. Languages less powerful than Blub 
    are obviously less powerful, because they're missing some feature he's used 
    to. But when our hypothetical Blub programmer looks in the other direction, 
    up the power continuum, he doesn't realize he's looking up. What he sees are 
    merely weird languages. He probably considers them about equivalent in power 
    to Blub, but with all this other hairy stuff thrown in as well. Blub is good 
    enough for him, because he thinks in Blub.
From PG's Beating the Averages (http://www.paulgraham.com/avg.html)


A "my language is more expressive than yours" version of the Sapir–Whorf hypothesis.


>The expression problem is much safer solved using virtual classes (using traits and type members) in scala than with case matching.

Why do you think this? I prefer having an open trait that's only inherited by ADTs.

I can get non-exhaustive match warnings and I still 'solve' the expression problem by matching on a superset of my partial functions (that do the matching).


Virtual classes with family polymorphism attack the problem directly: you can add new variants as well as enhance the super class with new abstract methods; your system is abstract until all variants have implemented the abstract methods of the super class. You can then do cool things like factor all your functionality for X in one layer and Y in another layer, replicating variants in X and Y layers to specify their implementations separately.
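A rough Scala sketch of the shape of this pattern (my reconstruction in the style of Zenger and Odersky's published solution, not the poster's actual code; Scala 2 syntax):

    // Base layer: a family of variants tied together by an abstract type member.
    trait Base {
      type exp <: Exp
      trait Exp { def eval: Int }
      class Num(val value: Int) extends Exp {
        def eval = value
      }
    }

    // A later layer adds a new abstract operation (show) to the whole family;
    // the system stays abstract until every variant has implemented it.
    trait Show extends Base {
      type exp <: Exp
      trait Exp extends super.Exp { def show: String }
      class Num(value: Int) extends super.Num(value) with Exp {
        def show = value.toString
      }
    }

    // Tie the knot: the family is now concrete.
    object ShowLang extends Show { type exp = Exp }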

When I did this in Scala (using traits and type parameters in a virtual class pattern), I got a lot of pushback from the FP people, who thought this problem couldn't, or more correctly SHOULDN'T, be solved using object-oriented constructs; it was one of the main things they liked to boast about in how FP was better than OOP, and I literally took that away from them. A 10,000 line pattern match was somehow preferable to a file with 20 traits that described how the new operation was performed per variant. Martin even made my pattern illegal via a new type check in the Scala compiler after I left EPFL :)


> Class-based programming works great for GUI toolkits.

That and games (and I'd wager simulations in general). The only two domains I'm familiar with which I cannot imagine without OO.

From what I've seen, even UIs and games written in languages without support for OO still use something very close to OO. GTK for instance takes it quite far with GObject.


I spent some time writing Asteroids in netwire - a functional reactive programming library in Haskell - in my free time over a few evenings. I blogged about this at http://ocharles.org.uk/blog/posts/2013-08-18-asteroids-in-ne... - and I don't think it pretends to be OO. It was a radically different style, and I left feeling fairly convinced that FRP is a fantastic model for realtime interactions.


FRP is great until you need to be interactive or switch over collections; then it becomes quite ugly. It can work for small games, like the ones in Courtney's dissertation. But beyond that? Not until a complete physics engine can be joined with an FRP library.


FRP can exist happily alongside imperative methods of updating state. This approach is explored in FrTime for Racket. http://cs.brown.edu/~sk/Publications/Papers/Published/ck-frt...


FrTime is not really pure FRP as envisioned by Elliott or Hudak. But then I prefer these impure FRP systems and have designed/implemented one myself, called SuperGlue [1], as part of my dissertation.

[1] http://research.microsoft.com/apps/pubs/default.aspx?id=1793...


I guess I prefer the impure systems then, too. :)


Thank you for sharing. I really enjoyed your netwire Asteroids post. I used Asteroids for teaching myself OO concepts many years ago. It's good to know that it's still a valuable teaching tool.


That and enterprise software. As productive as I am with functional programming, I still find it works better to expose object-oriented interfaces to the world at large.

Some of that's admittedly just down to the window dressing. Object-oriented abstraction mechanisms such as interfaces tend to have a measure of self-documentation built in, to the extent that all the members are named, right down to method parameters. There's no reason you couldn't do something similar with FP; it's just that, as far as I can tell, nobody's ever made a point of doing so.


What is so bad about OO for compilers? An AST can naturally be modeled as a class hierarchy. Visitor pattern for AST transformations.


> What is so bad about OO for compilers?

See my comment about the expression problem in another thread: https://news.ycombinator.com/item?id=6783075, as well as this response: https://news.ycombinator.com/item?id=6784057.

A compiler is a program that consists of lots of different algorithms (operations) that operate on a pretty well-defined set of data (an AST or an IR). Changes to the data representation are rare compared to changes to the operations on that data.

What OO does in a compiler is totally obscure the control flow of each algorithm by spreading the logic out over dozens of classes. You can look at an AST node and see at a glance how it participates in constant folding or code generation, but if you're trying to optimize the constant folding algorithm or the code generator, you've got to follow control flow across dozens of different classes.

Using the visitor pattern mitigates this somewhat, because you can have ConstFolder::visitBinaryOperation, ConstFolder::visitFunctionCall, etc., all in one file, but note that the visitor pattern is just a really verbose, roundabout way of writing a switch statement! If you add a new type of AST node, you have to add a visitNewAstNode call to the visitor interface, and then go update every class that implements the visitor interface. This is no easier, and a lot more verbose, than simply adding a new variant to an AST ADT and then fixing up all the places where the compiler complains that your pattern match is no longer exhaustive.
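For contrast, a small Scala sketch of the ADT side (illustrative only, not from any compiler mentioned here): a sealed trait gives exhaustiveness checking, so adding a variant turns every incomplete match into a compiler warning instead of a hunt through visitor implementations.

    // sealed: all variants are known to the compiler, so pattern
    // matches are checked for exhaustiveness.
    sealed trait Ast
    case class Lit(value: Int) extends Ast
    case class BinOp(op: String, l: Ast, r: Ast) extends Ast

    // The whole algorithm reads top to bottom in one place, instead of
    // being spread across node classes or visitor methods.
    def constFold(e: Ast): Ast = e match {
      case Lit(_) => e
      case BinOp(op, l, r) =>
        (op, constFold(l), constFold(r)) match {
          case ("+", Lit(a), Lit(b)) => Lit(a + b)
          case (_, fl, fr)           => BinOp(op, fl, fr)
        }
    }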

For a good example, look at how LLVM does instruction simplification: http://llvm.org/docs/doxygen/html/InstructionSimplify_8cpp_s.... A good old switch statement, no "simplify" method in the "Instruction" class nor visitor pattern.


In Scala, using traits and type variables, the problem is really easy to solve through what I call an open class pattern. See "new age components for old fashioned java", OOPSLA 2001; note I'm using Flatt's term "extensibility problem", which is the same as Wadler's "expression problem" term.

But if you want your control flow to be in place, then a 10,000 line case match ala scalac should be right up your alley. Frankly, I'd rather divide and conquer, which is what OOP is good at.


> OO is a horrible match for compilers, for example.

Actually it is a very good way to create ASTs if you need them to be extendable.

Which is exactly the use case where algebraic data types fall short.


You want some way of specifying an interface - that doesn't necessarily imply classes. Also, class-based inheritance tends to be a poor fit for interfaces; note for example that C#/Java have their own special-purpose interface mechanics, and even C++ which allows multiple inheritance tends to use implicit template-based interface implementation instead.

Discriminated unions on the other hand are - the name says it - intended for cases where you want to discriminate between multiple possible values. They're by design not intended to hide differences behind an interface; they don't address the same problem at all.

All in all, I think that neither classes nor discriminated unions solve the interface problem (nor were they ever really designed to do so) - that's a different problem.


You can have OO without classes.

I just meant OO in general, not classes specifically.


You can have Objects without classes, certainly - even C had structs, and all functional languages I know of have records or something similar.

It's a question of terminology whether you call that object-oriented. It's certainly part of OO, in any case.


"Class-based programming works great for GUI toolkits."

I've programmed with 3 approaches to UI: everything's a subclass, instances of a class, and prototype-based objects. I enjoyed programming UI on the Newton because of the prototype-based programming. It felt natural. NeXT / Apple's instance-of-a-class approach works very well too.

Every time I see "everything's a subclass", I wince and know it's going to be a pain in the butt.


That reminds me of a little quote by Aaron Hillegass in his book "Cocoa Programming for Mac OS X" (p. 73):

> "Once upon a time, there was a company Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached about the peak of it's mindshare, I met one of its engineers at a trade show. I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler. The he started generating code: dozens of lines to get the button and text field on to the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever. "


Ever use Wicket? It's an utter joy to work with, and the only framework I've found that really shows the good side of Java.


Cannot say I have. I left Java at the 1.1 version and never made it back.


I believe someone has solved the expression problem in Haskell and other languages.


The thing about "Class based Programming" - or Object Orientated Programming - is that it allows you to model a problem domain in real-world terms. No one approach is ever going to be perfect; OO being mutable makes it perhaps less founded (on the face of things) on formal principles, but for a lot of large codebases it can have a great beneficial effect on readability and maintainability - cognitive overhead is, perhaps, reduced when you can "ground" your understanding in real-life terms.

At the end of the day, pick what language and coding paradigms best suit your problem domain and you as a coder. Pick right, and there's no mistake there.


>The thing about [OOP] is that it allows you to model a problem domain in real-world terms.

That's only true on a superficial level. Sure, in CS 101 it's easy to explain how to make a Point object and add a Move() method to it. But dive into any real-world code base and you're more likely to meet RectangleCollectionColorPickerFactory instead. Let's face it, writing a large program is an exercise in abstract symbol manipulation. Trying to make it correspond to "real" objects is about as hopeless as basing literature on coloring books.


"... RectangleCollectionColorPickerFactory ..." Sounds like you're blaming the OO world for Java's problems. But I digress...

Trying to make code correspond to real-world objects is annoying because after you've modeled your Car with Wheels that support Tires, and an Engine with EngineParts attached ... you then have to write an adaptation layer to make all that fit into your storage engine, your user interface, your query engine, etc. ad nauseam. Strings are not "real world" objects, but we use them in our code nonetheless. They're a great UI item that lets us interface with our users. "I need input from the user. Here, type some text in this field and I shall call it String." Now you can manipulate the String with operations (methods on the object) that make sense for Strings.

Point: OO is not about "modeling the real world," it's about code organization. Some people organize with OO. Some with other means.


It's not just Java - any good OO platform has those problems. If you don't have that problem, it's worse: that tends to mean you failed to abstract away critical concepts like factories, and that you will have a hard time elsewhere, for example in testing (this is what you often see in C#, for instance) or with global state (a common issue in Ruby code).

There is of course an alternative: use a function rather than a factory; but that's kind of the point of this discussion :-).

(Before we start a flame war: This isn't a criticism of Ruby or C# specifically, it's just something I've seen a lot in those code bases. Both languages allow writing factories or using lambdas.)


No, "any good OO platform has those problems" is just wrong. Point-like classes without factories or any other BS are alive in kicking in big code bases. And geometry (visualization) is actually an excellent example for where mutable state and simple classes make things easier (I can actually move the point/scene object and everyone who has a reference to it gets the update). I really hope those articles at some point will start being a little more balanced and stop trying to pretend like "the purer and functional the better" (or the opposite).


I'm not suggesting that everything has a factory; I'm suggesting that having a factory isn't some java-specific code smell; it's a necessary pattern in an OO language. You obviously don't need one for a point. The context was some color picker example - and I could well imagine multiple color pickers and a useful color picker factory.

And I entirely agree with you that a PointFactory is unwanted (barring special circumstances) - though I entirely disagree that you'd want points to be mutable. Certain control points? Certainly - make a MutablePoint which is (conceptually) simply a reference to a point value. All points? That's just asking for pain; I really have better things to do than track down which nested submodule thought it was mutating a copy but, due to some optimization interaction, turned out to be mutating a point someone else was actually looking at. Not to mention that reference semantics don't work very nicely with hashtables and lots of other data structures, which become a lot more complicated when the values can change right under them.


OOP is not fundamentally about mapping your code to physical real-world objects, but about mapping it to natural, intuitive concepts. Those concepts can, and in many cases should, be as abstract as you like: eg, an "Algorithm" object.

Something like "RectangleCollectionColorPickerFactory" is obviously a confusing and ridiculous concept, and exemplifies a failed application of OOP, not a failure of OOP itself. One could invent equally absurd demonstrations of functional programming, or indeed of any other paradigm.


Those concepts can, and in many cases should, be as abstract as you like: eg, an "Algorithm" object.

I only really "got" design patterns when I realized that the strategy pattern was just treating an algorithm/implementation as an object and stopped thinking that OO was exclusively about mapping things to real world objects.


Of course no true Scotsman would write a class like RectangleCollectionColorPickerFactory.


The inability to comprehend why a mashup of nouns into a long class name might be useful is one of the most annoying memes on HN. Is it just that programmers are slow typists or something? Surely any competent programmer should be able to string together what the individual terms mean in your example (Rectangle, Collection, Color Picker, Factory), and easily grok what the class is intended to be used for.


The meme is more about the abuse of that idea into absurdly overengineered designs, and the approach to programming where everything needs 3x the amount of work in the form of scaffolding and code bureaucracy.


>The thing about [OOP] is that it allows you to model a problem domain in real-world terms.

>> That's only true on a superficial level.

Exactly. I had forgotten that I learned OOP by modelling a CD collection. When thinking about it now, "mapping" a program into real-world models and terms seems totally backwards.

What's the point of thinking in real-world models unless your program actually manipulates real-world objects? Of course there are properties that are useful to carry over, but why limit your program to real-world models?


OOP actually doesn't model the real world well at all. It seems to on the surface, but it completely ignores time.

In the real world, everything is a process. Nothing ever stays the same or stands still. No object is the same object from one millisecond to the next.

Immutable state isn't just a formal exercise--it's a more authentic model of the world.


"everything is a process"

To me that's as bad as saying "everything is an object".

I'd rather try and understand where each approach is best rather than picking a single approach and dogmatically applying it in every scenario.


I think his point is that the real world is concurrent states, not context switching. To loosely paraphrase Joe Armstrong, in the real world, data isn't shared, it's communicated. In this sense, the process as a fundamental abstraction more correctly models "the real world."


>> In the real world, everything is a process. Nothing ever stays the same or stands still. No object is the same object from one millisecond to the next.

Been reading up on Heraclitus lately?

While this is an interesting statement for a philosophical dialogue, it doesn't really make much sense in the context of programming, where many things are constant, many things are not processes, and there are plenty of use cases for both mutable and immutable state (which is exactly why I always get really defensive reading articles like this one, that pretend all problem domains would be best modelled by a single programming language/paradigm/model of computation).


It makes a lot of sense in the context of programming when you look at the big picture. I think Rich Hickey articulates it best: http://www.infoq.com/presentations/Value-Identity-State-Rich...


I'm sorry, are you arguing that immutability models the world better than OO because no object in the real world ever stays the same?


Yes. Due to Special Relativity, different observers perceive events (changes to object state) at different moments, the state of the whole universe is not consistent. Therefore, we model the universe using a sequence of immutable universe snapshots, with different computational agents independently moving through the (branching) timeline of snapshots, so that each and every one of them views the universe consistently.


Most events humans care about happen in plain old classical physics, and no computer is moving at some large fraction of light-speed relative to any other computer. If you maintain synchronized clocks, everyone can agree on the exact same order of events (which would not be true in a relativistic situation).

Now it turns out to be difficult to maintain synchronized clocks, and Lamport timestamps and vector clocks are alternatives. The end result looks similar to a relativistic situation, but (and I'm not a physicist), it seems wrong to claim this situation is because of Special Relativity.

Thoughts?


Yes. I was half joking, half explaining why immutability actually is better to model the universe.

In practice, replace Special Relativity with Network Problems.


So what you're saying is the universe is mapped COW, and every time there's a new observer, UniverseOS runs it with fork().


Check out the first 20 minutes or so of this SICP lecture. In it, Abelson specifically mentions special relativity as a way for us humans to "model" the real world as immutable values over the continuum of time (in contrast to a mutable world that fits with OO).

http://ocw.mit.edu/courses/electrical-engineering-and-comput...


No need to apologize. With immutability you have to explicitly model temporal changes.


Immutable doesn't mean unchanging. It means no mutation. I can throw a ball in the air and model it as a function of time without mutation.
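For instance, a thrown ball in a Scala-flavored sketch (my example): height is a pure function of time, nothing is ever mutated, and yet the modeled value changes from moment to moment.

    val v0 = 20.0   // initial upward velocity, m/s
    val g  = 9.81   // gravitational acceleration, m/s^2

    // No mutable position: each instant has its own immutable value.
    def height(t: Double): Double = v0 * t - 0.5 * g * t * t

    height(0.0)  // 0.0   (launch)
    height(1.0)  // ~15.1 (on the way up)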


> OOP actually doesn't model the real world well at all. It seems to on the surface, but it completely ignores time.

This is important, and something almost everybody ignores.

Get an OO codebase, and try to implement a time-dependent feature, like snapshots, or even a simple undo. The abstractions fall apart.


"OOP actually doesn't model the real world well at all."

Surely that is in part due to the developer. Poor developers will choose the wrong abstractions and have a poor OOP model as a result. Better developers will choose the correct abstractions and have better models. Obviously the domain you are modeling will make it easier or harder to model things, and in some cases OOP may not be a good choice at all.

In my day to day work I use Django. I start with a data model - based on an ER diagram. I code that in the Django model classes. Is that OOP modeling or ER modeling? I don't know. I don't really care too much. It works fairly well for most things I am doing.


We can think of the state of the world as immutable. When the state changes, it's a different world.

If space-time is discontinuous, then we can think of any change of state, like motion, as a set of discrete changes. If we think the motion of a particle from one energy bin to another is immediate (meaning the particle cannot be found on the border between the bins; one moment the particle is in Bin A, the next it is in Bin B), then we can say the particle was destroyed in Bin A and another one was created in Bin B. And this is what we call motion from A -> B.


A criminal tells the arresting police officer: "You're wrong, officer. It was yesterday-me who committed the crime, but you are trying to arrest now-me. How wrong you are!"

I bet the authorities around the world don't like your time-inclusive-dimension view much.


You do realize that there's a difference between an object with identity (DDD calls it an entity) and one without it (DDD calls it a value object).

No matter what you do, identity does not change. You committing a crime yesterday was an event that included you as an object with identity, and today you have the same identity, so you are clearly responsible for what you did yesterday.

GUARDS! :P


Yes. The article correctly points out potential pitfalls of OOP but glosses over its benefits:

> See, mutable state makes shorter English sentences, and the agent concept helps make analogies with our fellow humans. In the end, this first impression trumps the fact that avoiding mutable state where possible ultimately yields simpler programs.

- But shorter English sentences are awesome.

- Good analogies are awesome.

- Effectively communicating with our fellow humans is awesome.

Indeed, all these things contribute to simplicity.

I find that often critiques of OOP are actually critiques of poorly implemented OOP. When it's done well, it can retain many benefits of immutability while still mapping more naturally to domain concepts. This talk by Gary Bernhardt is a great discussion of using functional concepts in OOP, and doing OOP well:

https://www.destroyallsoftware.com/talks/boundaries

The talk "Functional Core, Imperative Shell" is also a must see.


> - But shorter English sentences are awesome.

You missed the author's point here. He of course prefers shorter sentences as well. He is complaining that mainstream language syntax is biased in favor of mutability, since you need to use an extra keyword to make an expression const. In F#, for example, it is just the opposite: you have to use the keyword "mutable" in your declaration.


I love Hal Abelson's opening of this SICP lecture[1], in which he acknowledges that OO (with mutable state) is born from the idea of modeling computer programs the way we perceive the world. But then he goes on to say that the reason OO can be so complicated (because of having to deal with the can-of-worms that is mutable state) is that maybe we have the wrong view of reality.

He then proceeds to launch his piece of chalk across the room! (good stuff). And proposes that instead of the chalk having changing state (position, velocity, etc.), it's better to think of the chalk as a collection of immutable values, each value existing at a moment in time. And this of course aligns more with functional programming; you see the same notion (immutable values over time) in descriptions of Datomic.

[1] http://ocw.mit.edu/courses/electrical-engineering-and-comput...


What a great lecture. Thank you for sharing. I own and have read a good portion of SICP and these lectures are a great supplement to that.


Mutable state can be a huge pain in the ass. A common mistake OOP beginners, and even "experienced" developers, make is the creation of this complex jungle of entangled objects that all directly or indirectly manipulate each other's internal states. If you've ever had to work with such a code base for a prolonged amount of time you are pretty quickly ripe for a year-long sabbatical. There is simply no way to reason about the control flow and state of the program without investing a lot of time and mental effort. Sometimes it's almost impossible.

I don't know, I think it's a point where education fails. Your average enterprise developer should be forced to read books like "Clean Code" by Bob Martin before being allowed to touch a keyboard.


>> Mutable state can be a huge pain in the ass [..] I don't know, I think it's a point where education fails

I'd argue that the problem with software quality is not so much in the (ab)use of mutable state, but the fact that (especially in enterprises) software often reflects the organizational structure of the people who worked on it.

Exhibit A: the software solution where every tiny part of the system is developed by some (semi-)random group of people that often changes, doesn't directly communicate with any of the other groups, doesn't bother too much with the 'architecture' of the complete system (a big ball of mud), and has no interest in improving either the overall architecture of the system or any of the components outside their scope. Add pervasive mutable state into this mix, and you have a recipe for disaster.

That's not to say the mantra 'eliminate (almost) all forms of mutable state' will improve this problem much.


>> Immutable state can be a huge pain in the ass. A common mistake FP beginners, and even "experienced" developers, make is the creation of this complex jungle of entangled functions.

See what I did there.


The analogy would be that those functions are mutually recursive, and while mutually recursive functions can be an elegant solution to several problems (state machines, some algorithms), overuse is not a mistake I have seen very often. FP does enforce, or at least very strongly suggest, a very simple program structure. This makes it hard to model some more complex relationships that are easy in OO. Anyway, my point is that FP is indeed different in this regard compared to OO.


My main point was that bad code exists in all paradigms. If there were a perfect language or paradigm, everybody would be using it and there would be no such debate. Immutability is not necessarily good or bad, same for mutability. We should all do a better job at explaining to beginning programmers how to use all paradigms appropriately, and how to choose among them. And of course, to use vi.


Why should I use vi? Why wouldn't this decision be similar to what paradigm I choose?


It was a joke. Emacs vs. Vi, FP vs OOP, etc. All members of the set of useless debate topics.


Note that Class based != Object Oriented.

Also note that theories that have tried to categorize real-world objects into classes have a tendency to fail (although they also have a tendency to become enormously popular; see Plato and Aristotle, for example).

So the claim that classes allow you to model a problem in real-world terms is most likely false.

See for example http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.... for a review.


> The thing about "Class based Programming" - or Object Orientated Programming

You can have OOP without classes - prototypal inheritance, for example. In JS, you just clone an exemplar with Object.create().

That said, the article author does the same thing.


Having used Angular for a month or so in summer, I find prototypal inheritance horrible and unpredictable. I am never 100% sure if the data is coming from or updating the parent object or the current object. Class-based inheritance seems a lot more predictable to me.


If Angular is your only experience of JS, I'm surprised you don't think JavaScript is the most complicated language in existence.


> cognitive overhead is, perhaps, reduced when you can "ground" your understanding in real-life terms.

It absolutely reduces cognitive overhead. Having "containers" of functionality that allow me to easily know at a glance what I'm working with is invaluable. I'm not knocking functional programming, since I haven't built a few projects in a purely functional language yet, but trying to do some things with the functional paradigm (or at least, my understanding of it) left me recognizing class-based programming as a powerful tool when used correctly.


In FP, those containers are known as modules.


> pick what language and coding paradigms best suit your problem domain and you as a coder

An addendum: there is often no single obvious best way to do something. Though many approaches are obvious as being less wrong than others.


The idea that the world appears to be mutable and that programming languages should therefore encourage mutation in order to model it is one that doesn't make any sense.

To me, it makes much more sense to try to create programming languages that are able to express human thought. People think in terms of things that are, to a first approximation, immutable; memories are a good example.


I agree wholly that picking the right tool is critical. I also think you have to be skilled at a broad subset of all the tools to be able to even make any sort of comparison.

Many times I've heard this argument from someone who only knows the one or two tools they learned in college or in their first job, and they try to dress up their ignorance as wisdom for "not wasting time on the wrong tools".

I recommend http://norvig.com/21-days.html as a great place to start (I have only learned, at best, 3/6 of his suggested language categories, and I am on the 4th).


The problem is it's not real-world most of the time.


Okay, for one:

1. Dynamically typed languages (Lisp, Erlang, Smalltalk, Python, Ruby, and JavaScript) are all easier than Haskell, Java, C++ or any other typed language for working with ADTs. Whether the language is OO or not isn't really relevant. On the other hand, you lose type safety/efficient representations. But life is filled with choices; different tools are good for different things at different times.

2. You can have closures in a class based language. Here's some rather exotic Python code that uses both for measuring the performance of iterators: https://github.com/wearpants/measure_it/blob/master/measure_...

Smalltalk and Scala also have closures. It's not an either/or for classes vs. closures in programming language land.

3. There's more to concurrency than shared memory, so immutability is again a weapon that is good for some battles. If you have an app which you've factored into workers, unless they are on the same machine they'll need to communicate. Having your data be serializable is more important than having things be immutable in this context. What a worker does with data while it is handling it can involve lots of mutation.

So everyone should take a big breath, and remember that there's no one way to think about software and there are no universal rules, other than probably P != NP and stuff like that.


At this point, it's probably easier to name the languages that don't have closures than those that do.


Java's anonymous classes can also act as a (large and bulky) closure, although as you have to declare local variables used by the anonymous class as final, the capacity to close over mutable state is limited to instance fields or mutable objects.


The best (and most humorous) argument against Java's object-oriented obsession that I've ever read is: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...


The funny thing is that many like to bash Java's OO model, and in the process forget that Smalltalk, Eiffel, C# and many others offer a similar model.


What's really funny is to claim that the Java/C++ model is similar to the Smalltalk model.


Kind of, yes.

You can map instance methods to normal methods in Smalltalk. And static methods to class methods in Smalltalk.

Of course, you don't have the dynamism of sending messages, instead of invoking method calls.

However, given that Smalltalk is the originator of OO and everything is an object, OO critics would most likely feel in prison. :)


I thought Simula was the originator of OO?


I truly miss Steve Yegge's rants.


It's more that Java is hard because naming things is hard and Java forces developers to name lots of things.


I find a mix of Object-Orientation to model the problem and a lot of functional thinking in the 'solving' department to be the most comfortable. The modernisation (as in taking features from the older languages into the newer ones) of C++, C#, Java, Obj-C, etc. is liberal in this direction.


I agree - in my experience object-orientation is at its best when structuring a representation of the problem domain, and functional thinking fits well with solutions. I've written a lot of code over the last few years that models large-scale industrial systems, and the general style is very much functional processes over immutable objects; this has worked pretty well.


This is a very good article. The point about mutable state is key. Class-based programming encourages you to treat objects as little bags of mutable state, instead of as values, and everything goes downhill from there. It's one thing not to go all the way to disallowing mutation. It's another to encourage it to be used pervasively.


Programming without mutating state is honestly just bizarre to me. Mutating state is all a program does, literally.


> You may even have to inherit a base class or implement an interface. Subtype polymorphism is a very poor substitute for closures. This is why C++ algorithm library is unusable.

Does anybody understand what the author means by this? The STL's algorithm library does not use class hierarchies for anything, and its algorithms often take functions (be they free functions, lambdas, or functors) as arguments.


The author probably had (this was written 3 years ago) a bitter and rather narrow view of OOP-capable languages.

In C++ a class is just a facility of the language with many different features. If you want immutable, value-semantic objects, then do it. If you want a closure... well, objects can be closures too. C++ calls them functors; they can be immutable. In fact, C++11 lambdas are immutable by default.

His criticism of the STL is probably that it exposes its mutation, rather than hiding it in the architecture like pure functional languages do. In C++ you just have to hide the infrastructure yourself.


The overhead that is assumed unimportant by most functional languages (copying is cheap, GC is fine, etc.) is intolerable in key situations and problem domains. In C++ all overhead must be optional.

That being said, nearly all C++ applications can benefit from immutable classes, algorithms that take closures, and so on. But C++, like C before it, is really just an abstraction of the hardware and OS below it. That hardware mutates. That pool of memory changes size and locality. The closer your problem domain is to the metal (especially drivers and highly-optimized code), the more you appreciate mutable state, at least given the current state of computing.


Dealing with complex immutable data structures without GC is either extremely hard or inefficient (and usually both), and the C++ standard library does not offer any helping hand here either. Additionally, C++'s logical memory model (flat, uniform memory) does not describe the hardware memory model well - it is already an abstraction. Memory access isn't uniform, and mutations can cost you very dearly. In some cases immutable structures + a little copying are the way to go, especially when we aim at parallelization.


Before C++11, if you wanted to pass a function with closure-like behavior to an STL algorithm, you had to write a little class with operator() overloaded. It was a huge pain in the ass, lots of boilerplate code. Most programmers didn't bother and wrote plain loops.

Now with C++11 lambdas, std::algorithm is much nicer to use.


In case you don't know ML, you should. Also note that though OCaml is the most popular ML dialect right now, Standard ML is where it all started (PolyML is a good Standard ML implementation: http://www.polyml.org/)


Or even F#, which is basically OCaml on the CLR. Very fast, fully supported IDE, great if you're stuck in SLAs for CLR code, complete interop with C# or VB.NET, it's the current hidden treasure of the .NET world.


Pity that Microsoft pushes it more for library code and not so much for full applications.

But if it gets more use in the mainstream that way, then so be it.


The only difference between a "library" and a "full application" is that the "full application" has an entry point where execution begins, whereas a "library" does not.

Many large applications are implemented as a thin user interface (GUI, web, services/APIs) which glues together a bunch of libraries (which is where all of the real functionality is implemented). This doesn't only apply to F#, but C#, Java, Python, etc.

The reason F# gets used for implementing libraries is that this makes it easier to introduce into organizations with existing code in C# or VB.NET; a new component or plugin can be written in F# and easily utilized by the existing code, so the other developers in the organization don't need to do anything differently than if they were consuming another C# or VB.NET component.


I do understand the reasoning behind it; it's just that, as an ML fan, I would like to see the possibility of proper support for GUI/Web for pure F# applications.

But I fully understand it, on my enterprise world JVM == Java and .NET == C#, except when doing small prototypes. It is very hard to use alternative languages, as managers always look for coding drones.


> Standard ML is where it all started

Technically, ML is where it all started. Standard ML and OCaml are to ML as Scheme and Common Lisp to Lisp.


Also see: The problem with OOL is not the OO by Carl Sassenrath (from 2009) - http://www.rebol.com/article/0425.html

PS. Just submitted to HN - https://news.ycombinator.com/item?id=6900426


Relevant recent tweet by the creator of the Io Language, Steve Dekorte: https://twitter.com/stevedekorte/status/411045428361056256

I suppose most folks here disagree?


Of course I agree with Dekorte. Having written compilers in Perl, C and Lisp - the ones in Perl in pure OO, the ones in C and Lisp without - I conclude that I'd rather add classes and methods to my compilers than try to improve them without. In the one written in C, classes are supported by the intermediate VM though (dynamically typed, very similar to Io), which is still better than having to write it in horrible C++.


I agree, mostly, with the conclusions. "Classes" as a method of code re-use is dead. However, I highly contest his conclusion that ML would dominate C# and Java as a language.

The most important thing in my opinion is tools and platform support. Show me a fast, good looking IDE with good autocomplete/intellisense/integrated debugger and UI tools for ML. See?

Is there an abundance of libraries and api wrappers available (that doesn't require you to do straight C-interop)? See?

Also: the design of a language should be done with the tools in mind. While there is not much difference between norm(vec) and vec.norm(), the latter is the form that supports autocomplete. So even for functional languages, being able to use member properties and member functions is an absolute requirement to support the tooling that we expect.

You could say that F# dominates C#. It has almost the same tool and library support, while having the features you expect from an ML type language.


I'm noticing that whenever there is talk about any language that isn't Java or a CLR language, several someones pop up to go on about the tooling "we" expect.

The difference between norm vec and vec.norm() is that the function can be used in a higher order function while the method would require wrapping it in a lambda to achieve the same thing and would probably still break your tool's ability to autocomplete if you used it that way.

You're basically insisting that an FP language needs to include an embedded OO language to be usable.


If your functions are inside a module you import, you can still navigate them by browsing autocomplete, using the module name as an unnecessary qualifier. Then once you find/remember, delete the qualifier in cleanup. No need to force members just for discoverability. (Not to mention that tools could provide other ways of doing autocomplete.)


"The obvious conclusion is that ML simply dominates Java (and C#) as a language. Time to switch."

The obvious conclusion is that any statement that contains the phrase "the obvious conclusion is" is 100% BS, including this one.


Mutable state is not a problem per se. _Shared_ mutable state, however, is a recipe for disaster.

There are other problems with Object-Orientation. For example, it tends to scatter allocation, which has a performance impact. (See this nice presentation: http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfal... )

If performance is not an issue, I'd guess the price you pay for OO is that you eschew parallelism and concurrency.


Any code that operates upon your object with "private" state still has to account for the fact that its methods are not referentially transparent. E.g. I have to open a connection to the database before I can send commands through it, and so now I have to check if it's already open everywhere I use it. So that is definitely shared state, but I guarantee you that a lot of people think a private variable means it is not shared.
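A minimal Scala sketch of that situation (hypothetical names, mine):

    class Db {
      private var open = false            // "private", yet it leaks:
      def connect(): Unit = { open = true }
      def query(sql: String): String = {
        // Not referentially transparent: the same call succeeds or throws
        // depending on hidden state, so every caller has to know and
        // follow the connect-first protocol.
        if (!open) throw new IllegalStateException("call connect() first")
        s"rows for: $sql"
      }
    }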


Functional paradigms scatter allocation even more (though maybe more predictably, so the Generational Hypothesis is of more value in a functional programming language).

Also, as soon as state escapes a function, it becomes shared, so should be immutable (except when explicitly made mutable).


Any evidence for the 'even more' statement?


It's a natural side-effect of using immutable structures. When you need to modify, you allocate a new object with the mutation. This naturally has an impact on the GC, because it needs to allocate and recover more garbage.
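Concretely, with a persistent list in Scala, each "modification" allocates, though structural sharing keeps the garbage small:

    val xs = List(2, 3)   // two cons cells
    val ys = 1 :: xs      // one new cell; the tail is shared with xs
    val zs = 0 :: xs      // another single cell; xs is still List(2, 3)

    // Nothing is copied wholesale, so the GC sees many small,
    // short-lived cells rather than duplicated collections.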


As an aside, my final year project for my CS degree included benchmarking different sets of SK combinators for efficiency as well as looking at different reduction algorithms.

What I found out pretty quickly is that, especially as I was running my code on a mid 80s mini computer, I had to spend as much time on allocation and garbage collection strategies as anything else. Indeed, my only really "difficult" bug was caused by my premature re-use of application nodes - I thought I was being clever re-using them immediately but it caused problems months later when I started writing recursive expressions using the non-native Y-combinator. I had no idea what was causing the problem and was really quite worried, I was stumped for days and then I had a flash of insight from nowhere while sitting on a bus that fixed the problem.


> I was stumped for days and then I had a flash of insight from nowhere while sitting on a bus that fixed the problem

We're on tenterhooks here...


I remembered that I thought I was being awfully clever re-using application nodes aggressively - turns out this worked fine for non-recursive code or code that used the native Y-combinator in my reduction engine but failed when I used a Y defined directly in the lambda calculus. I removed this "optimization" and the problems went away.

What I always remember is that the idea as to what was causing the problem came as a complete bolt from the blue when I was thinking about something else - perhaps the first time that something like that happened to me, but certainly not the last!


That's true, though it's also worth pointing out that immutability can make garbage collection easier.


Indeed, and many of the objects will be very short lived, therefore getting collected in generation 0.


So, no real evidence except anecdotes. Anyway, I was not claiming anything about functional programming being better. I was merely saying Object-Orientation is not necessarily data-centric. That being said, Lisp & OCaml programmers will dispute the statement that FP implies immutability. OCaml strings ARE mutable (so are OCaml arrays), and you have the keyword 'mutable'. Lisp has setq ... IMO FP does imply that you treat mutation & side effects with the respect they deserve.


A language can't suck just because it isn't Haskell or ML.


It's funny how people blame the language when they are the ones who made a mess. :)

On the other hand, there are languages that make it easier for you to screw it up and the ones that try to prevent that.

But there's no bulletproof language.


A bug-proof language is like an uncrashable car.



