It's nice to see this surface again. Personally, I think it's interesting and useful. If you want to read previous discussions about it from here on HN, here are some references:
http://news.ycombinator.com/item?id=1048304 <- 9 comments
http://news.ycombinator.com/item?id=3934 <- 4 comments

The original "Why Functional Programming Matters" has, of course, also been submitted:

http://news.ycombinator.com/item?id=50193
http://news.ycombinator.com/item?id=983401
http://news.ycombinator.com/item?id=1482797

There was less commentary on those.
You know... I often do not give encouragement and/or express gratitude because I am afraid of being corny.
But I am beginning to see that even if you already know the value you bring to HN, and even if it seems sycophantic to say I appreciate you, that should never suppress a sincere, neighborly, human "thank you, keep it up"!
So even if this is the wrong place (or even the wrong way) to do this -- an upvote seems insufficient -- I want to say thank you, RiderOfGiraffes, for all that you bring to the experience that is HN; may your tribe increase.
"It is a logical impossibility to make a language more powerful by omitting features" - false. Different designs have different limitations. C can't have precise garbage collection. Shared-memory threads can't be isolated and so can't be automatically migrated. Pure functional code can be rewritten by the compiler in a way that imperative code can not. A language that omitted any global construct would be intrinsically sandboxed against use of any functionality not injected. And so forth.
I think we're in violent agreement. Removing a feature only makes things better when it permits the addition of another feature. This is the point I was trying to make about pure functional programming. Removing the ability to arbitrarily entangle state in time does not make a functional language more powerful, but adding lazy evaluation does make it more powerful.
Am I correct that we are seeing things the same way, even if my ability to explain it is poor?
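As a small illustration of the "lazy evaluation adds power" claim (a sketch using Java's lazily evaluated Streams, not anything from the comments above): laziness lets you define a conceptually infinite structure and only pay for the part you consume.

  import java.util.List;
  import java.util.stream.Stream;

  public class LazyDemo {
      public static void main(String[] args) {
          // Conceptually infinite stream of squares; nothing is computed yet.
          Stream<Long> squares = Stream.iterate(1L, n -> n + 1).map(n -> n * n);

          // Only the first five elements are ever evaluated.
          List<Long> firstFive = squares.limit(5).toList(); // Stream.toList() needs Java 16+
          System.out.println(firstFive); // [1, 4, 9, 16, 25]
      }
  }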
You are looking at the situation from a purely semantic point of view, and from there you are correct.
I think the parent is looking at things from the point of view of effective computing, and from that point of view they are also correct.
The viewpoints are perhaps incompatible.
One thing to consider is that I would say that I "can't" implement a recursive descent parser in Ruby because Ruby is too slow. Thus from the point of view of effective computing, adding features that make Ruby slow also removes some power.
I wouldn't say either position is "correct" but I think you have to take both into account.
You can argue that removing features can make a language more powerful, even if you don't add another feature.
Basically, if removing a feature, like side effects, allows you to give more guarantees about a sub-program, you can do more with your sub-programs, like use equational reasoning.
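A sketch of what that buys you, in Java streams (illustrative, not from the parent comment): when f and g are pure, mapping f and then g is equationally equal to a single map of their composition, so either a compiler or a programmer can fuse the two passes into one.

  import java.util.List;
  import java.util.function.Function;

  public class FusionDemo {
      public static void main(String[] args) {
          Function<Integer, Integer> f = x -> x + 1;
          Function<Integer, Integer> g = x -> x * 2;

          List<Integer> xs = List.of(1, 2, 3);

          List<Integer> twoPasses = xs.stream().map(f).map(g).toList();
          List<Integer> onePass   = xs.stream().map(f.andThen(g)).toList();

          // Purity guarantees these agree; with side effects in f or g they might not.
          System.out.println(twoPasses.equals(onePass)); // true
      }
  }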
Not really. Take the pure functional example - you've got power because the compiler can transform certain parts of the code. Now throw all sorts of unsafe mutable operations into the language. The language can still transform the pure parts, so you've lost no power. But you've gained the power to write other kinds of code as well. It's not as if you have to use every language feature all the time and suffer the downsides everywhere.
This still depends on the design of the language, though. Consider Java. There is definitely a pure functional subset of the language. However, any time you call across an object boundary the compiler has no way to know that the call can be treated as functional - because the language has late binding, and the actual implementation is swappable. In practice the JIT learns stuff like this and transforms it at runtime. That extra effort is needed because the language can't guarantee functional purity.
The point here being that some design decisions that add functionality remove the ability to rely upon design assumptions.
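A hypothetical Java sketch of that point (types and names invented for illustration): every implementation visible today may be pure, but late binding means the compiler still can't rely on it ahead of time.

  interface Pricing {
      int rate(int amount);
  }

  class FlatPricing implements Pricing {
      public int rate(int amount) { return amount * 2; }  // pure today...
  }

  class LoggingPricing implements Pricing {
      private int calls = 0;
      public int rate(int amount) {
          calls++;                                         // ...but this override is not
          return amount * 2;
      }
  }

  public class LateBindingDemo {
      // The static type says nothing about purity, so rate(x) + rate(x)
      // cannot be rewritten to 2 * rate(x) at compile time; the JIT can
      // only speculate after observing which implementation is loaded.
      static int total(Pricing p, int x) { return p.rate(x) + p.rate(x); }

      public static void main(String[] args) {
          System.out.println(total(new FlatPricing(), 5));    // 20
          System.out.println(total(new LoggingPricing(), 5)); // 20, plus a hidden state change
      }
  }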
An argument for functional programming languages that I've heard many times is that they can be optimized very well for (automatic) multi-core parallelism and even GPUs.
But I've never found any actual examples, evidence or benchmarks showing this in practice.
You should listen to the talk Guy Steele gave at ICFP 2009: "Organizing Functional Code for Parallel Execution: or, foldl and foldr Considered Slightly Harmful" (http://www.vimeo.com/6624203).
He drives the point home, and adds that you also need appropriate data structures.
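One way to see the talk's point in ordinary Java (my sketch, not Steele's Fortress examples): when the combining operation is associative with an identity, the runtime is free to split the work across cores and regroup the partial results in any order.

  import java.util.stream.LongStream;

  public class ParallelReduceDemo {
      public static void main(String[] args) {
          long n = 10_000_000L;

          // reduce with an associative op (+) and identity 0: the runtime
          // may split the range, reduce chunks on separate cores, and
          // combine the partial sums in any grouping. A strictly
          // left-to-right foldl would force sequential evaluation.
          long sum = LongStream.rangeClosed(1, n)
                               .parallel()
                               .reduce(0L, Long::sum);

          System.out.println(sum == n * (n + 1) / 2); // true
      }
  }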
"And one reason many people consider Java “better than Ruby” is because you cannot open base classes like String in Java?"
Do people really say this? The ability to monkeypatch classes is one of Ruby's many strengths. It would only add unnecessary complexity if there were any "special case" classes. I guess it's somewhat helped by the fact that people generally modify base classes responsibly.
RSpec is one example of how this is used to great effect: its classic expectation syntax monkeypatches a "should" method onto every object, so specs read almost like English.
Sure. Here's one example, "It looks at first like there’s a clear factoring 'Baltic Avenue has a method called isUpgradableToHotel,' but when you look more closely you realize that every object representing a property is burdened with knowing almost all of the rules of the game.
The concerns are not clearly separated: there’s no one place to look and understand the behaviour of the game."
The concerns are clearly separated, but they're separated along a different axis than what you're looking for. I'll argue that for any partitioning of concerns there will be a concern that is not adequately separated.
With that said, I must admit I'm unclear on why each property object has to know all of the rules of the game. They don't have to know most of the rules, AFAICT. This may seem like a small point, but I do think it's important. I think the only things they have to know are their cost to buy, cost to upgrade, and rental price (I'm not a Monopoly expert, so it's possible I've missed something).
To buy, there's a fixed price associated with the property -- it just needs to not already be owned by someone else. To upgrade, I assume it's a fixed price, and I think the only requirement is that you own all the colors. In that case you defer the upgrade decision to another class -- the property group class. Baltic Ave doesn't need to know what the actual rule is -- it just asks, "Light Blue Property Group Object -> Can I be upgraded?"
But the property object has no idea about passing Go, what Jail is, how many dice are used, what Chance cards are, who their neighbors are, how many other like-colored properties there are, what special powers railroads have, if there's a community pot, or how many players are even playing the game. The partition of concern has been made along some axis that this programmer thought useful.
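For what it's worth, here's a hypothetical Java sketch of the factoring I'm describing (all names invented): the property delegates the upgrade question to its group, so the "own all the colors" rule lives in exactly one place.

  import java.util.ArrayList;
  import java.util.List;

  class Player { }

  class PropertyGroup {
      private final List<Property> members = new ArrayList<>();

      void add(Property p) { members.add(p); }

      // The "own all the colors" rule lives here, in one place.
      boolean canUpgrade(Property p, Player player) {
          return members.stream().allMatch(m -> m.ownedBy(player));
      }
  }

  class Property {
      private final PropertyGroup group;
      private Player owner;

      Property(PropertyGroup group) { this.group = group; group.add(this); }

      void setOwner(Player player) { this.owner = player; }
      boolean ownedBy(Player player) { return owner == player; }

      // Baltic Ave just asks its group; it knows nothing about the rule itself.
      boolean isUpgradableToHotel(Player player) {
          return group.canUpgrade(this, player);
      }
  }

  public class MonopolyDemo {
      public static void main(String[] args) {
          Player alice = new Player();
          PropertyGroup lightBlue = new PropertyGroup();
          Property baltic = new Property(lightBlue);
          Property mediterranean = new Property(lightBlue);

          baltic.setOwner(alice);
          System.out.println(baltic.isUpgradableToHotel(alice)); // false: group incomplete
          mediterranean.setOwner(alice);
          System.out.println(baltic.isUpgradableToHotel(alice)); // true
      }
  }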
Another is the telephone test and entangling what and how. I think what and how are the same thing. The for-loop example, though, is solving a different problem than the select example: a more constrained problem that you can't generally solve with the select syntax. It turns out that what you are looking for happens to be provided by both solutions. As you move up the stack your problem may be less constrained, in which case you don't need the rigor; but we should note that the rigor isn't simply the "how", just a degree of rigor (unfortunately, standard languages with for-loops have to solve more constrained versions of the problem). What the telephone test is really about is that anything you actually care to discuss on the phone will not be extremely rigorous.
But I feel like those are nits which I probably uniquely have, which is why I didn't actually mention them. Still, I wanted to point out that I disagreed, because I think it speaks to how much I enjoyed the piece. And while I disagree today, the fact that I read it and thought about it may mean that I disagree less tomorrow.
I'm really surprised by this statement, and I hope you will expand on it. I think the example of relational databases and SQL (which is very close to the example in the blog post) demonstrates conclusively that there are many cases where huge gains are attained (in development time, maintenance, flexibility, ...) by building systems that allow a question to be answered by specifying "what" instead of "how".
Now I suppose you could argue that writing a SQL statement doesn't really specify "what"; it still specifies a "how", it's just that the "how" includes things like "let the RDBMS decide what indexes to use to answer this query, what order to access tables, whether to get data from disk or in memory, etc". But then I would ask what you propose as an alternative to the what/how distinction ... because there very clearly is a difference between SQL and hand-crafted for loops.
To me they're just degrees of abstraction/assumptions/givens (I'm not really sure what the best/right term is).
There's an evolution from:
label:
  if (i > size) goto exitLabel;
  yada yada yada
  goto label;
exitLabel:
To iterators. When you see an iterator you don't care if there's an exitLabel or fall-through. For the most part you don't care if the loop counter is an int or a long, or if there's no loop counter at all because it's using a null terminator or something. The iterator is the "what"; the old skool method is the "how".
But the walk from iterators to select is similar. A select statement (grossly simplified) is simply a for-loop over a collection where you don't care about order:
select * from collection
That's not so interesting. So let's look at a common looping construct. It might look like:
for (obj in collection) {
  if (obj has some property) { add_object_to_new_collection }
}
We can abstract that and make it:
select * from collection where property_holds;
Of course, this select, again, doesn't care about the order of the iteration at all (which also means that side effects in evaluating the property will likely result in non-deterministic behavior). The iterator has become the "how" and the select the "what".
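The same step shows up inside a single modern language. Here's a sketch in Java (illustrative, not from the post): the loop spells out the iteration (the "how"), while the stream pipeline states only the property (the "what") and, like the select, leaves room for the runtime to reorder the work if you ask for parallelism.

  import java.util.ArrayList;
  import java.util.List;

  public class SelectDemo {
      public static void main(String[] args) {
          List<Integer> collection = List.of(3, 14, 15, 9, 26);

          // for-loop version: explicit iteration and accumulation
          List<Integer> looped = new ArrayList<>();
          for (Integer obj : collection) {
              if (obj > 10) {            // "obj has some property"
                  looped.add(obj);
              }
          }

          // select-like version: only the property is stated
          List<Integer> selected = collection.stream()
                                             .filter(obj -> obj > 10)
                                             .toList();

          System.out.println(looped.equals(selected)); // true
      }
  }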
But one man's high-level abstraction is another man's "how". For example, if I'm looking for anomalies in the data, I may not care to specify what the anomaly is -- just something that looks "out of the ordinary". One could imagine a language saying:
find uncommon relationships in collection1 x collection2
It should have well-defined semantics, but those semantics probably don't care whether you use the "select" statement or the "for-loop" or some other mechanism. Those are just implementation details -- both "how"s, not the "what".
To me you're just walking a chain of abstractions. There's no discontinuity where things suddenly break from "how" to "what".
Remember when Fortran was created it was a revelation because you could now just tell the computer what to do, not how to do it. You no longer had to load specific bits into specific locations in specific flags or accumulators. We certainly would no longer consider Fortran specifying the "what", but people did.
Now when I see people saying here's an abstraction so you don't have to specify "how", my question is "what are the assumptions?"
I don't think this is a fundamentally different way of looking at things, but rather a way that puts the emphasis on the nuance rather than the benefit.
One more point I should make on assumptions (and this goes to the convention vs. ceremony debate): I think part of why Raganwald likes the notion of "what" is that it captures the right assumptions or best practices. Like above, when I mentioned side effects during property evaluation -- that's a pretty yucky thing; it just makes reasoning hard. Part of what makes good library/language design is capturing assumptions that bend programmers toward usually-good tradeoffs. This is the whole trend of conventions in frameworks. But again, to me this is a gradient -- not black and white.
OK, this all makes sense. But why does it matter? Am I as a programmer going to make mistakes if I try to write programs in a "what" manner instead of a "how" manner? Is there some other, better distinction that I should be making?
Are you really saying that what matters is abstraction, and that we would be better off thinking about higher vs lower levels of abstraction instead of the more subjective what vs how? I think I might agree with that, but then you do get into the problem of deciding which of two pieces of code is more abstract, and I think you are going to end up with the same problem all over again of it depending on your assumptions.
I don't think you should look at what is more vs less abstract. Rather I think you should ask, "what maps closer to the problem I'm trying to solve" and implicit in that "what are the assumptions in the abstraction?". Although I suspect this is what you do already, as most good devs do, whether they articulate it or not.
(I think) you're starting with the flawed assumption of what "the" stack is. While functional languages like Haskell have a stack, it is very different from the stack as you may know it in C, so the optimization issues are different too. The assembly for function calls in functional languages can be a lot more lightweight than in, for example, C.