A lot of the "OMG Scala <3" stuff being written recently comes from people who have written Scala books and therefore kind of have an agenda.
The essential idea behind the post is this: using a language like Scala can make you use a mainstream language like C# or Java differently. By making immutability the default, it teaches you when mutable state is actually necessary and when it isn't. That leads you to write code in whatever normal language you use with less mutable state and fewer side effects, which makes it easier to test and leaves fewer bugs.
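A rough sketch of the habit in C++ terms (my own toy example, not from the post -- the function names are made up): the same operation written mutating-in-place versus returning a new value.

    #include <vector>

    // Mutating style: the function changes its argument in place, so callers
    // have to know about (and test for) the side effect.
    void add_bonus(std::vector<double>& salaries, double bonus) {
        for (auto& s : salaries) s += bonus;
    }

    // "Immutable by default" style: the input is left alone and a new value is
    // returned, so the function can be tested in isolation.
    std::vector<double> with_bonus(const std::vector<double>& salaries, double bonus) {
        std::vector<double> result;
        result.reserve(salaries.size());
        for (double s : salaries) result.push_back(s + bonus);
        return result;
    }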
And, no, I have not written any Scala books and have no Scala-related products/services to sell you.
I can't describe how happy I'll be when we have lambdas in C++. Lambdas plus the STL will be a very lightweight gateway drug into functional programming for a lot of programmers.
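For example, something like this becomes trivial once lambdas land (a C++11-style sketch, just to illustrate):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> xs{1, 2, 3, 4, 5, 6};

        // Count the even elements without writing an explicit loop or a
        // separate functor class -- the lambda lives right at the call site.
        auto evens = std::count_if(xs.begin(), xs.end(),
                                   [](int x) { return x % 2 == 0; });

        std::cout << evens << '\n';  // prints 3
    }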
I have no idea why anyone would use ++i (though it's technically correct) instead of i++ in a for loop's increment clause, since the increment is done after the loop body, not before... That just seems like willfully confusing people's intuition.
I think using ++i instead of i++ everywhere is clearer, actually -- there's no need to think about temporaries, and the order of evaluation is (marginally) more transparent.
Not that pre- and post-increment are ever that intuitive or transparent...
Using i++ was the recommended practice before C++ came along and let people change what i++ means. That convention was even the reason it's C++ and not ++C.
In C++, it's suggested to do ++i because it will sometimes result in fewer calculations if i is an object with overloaded increment operators.
Is this true? It was never explained to me this way, and it doesn't make much sense. The increment will always be performed unless it can be optimized away. In places where the two are equivalent, using i++ might impose a space penalty if you have to store both values, which can be a significant consideration for large objects. I don't know what you mean by "change what i++ means", since you can't redefine ++ for any type for which it has a meaning in C. (Or maybe you can, but I wouldn't know, because it would never be done in sanely written code; the possibility definitely isn't taken into account in any C++ coding recommendations I've read.)
Mainly I was taught that situations that require i++ are less common, more subtle, and easier to get wrong, so programmers should make them stand out by using ++i everywhere else.
If you were confused in such a way, you'd end up thinking that with "i++" i would be incremented before the loop body ran. For "++i" to be confusing, your intuition would have to be seriously messed up.
Yes. When overloading prefix and postfix increment, there are two functions to write, and the postfix version returns a copy of what you've incremented. It's impossible for a compiler to optimize this away in the general case.
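Roughly what the two overloads look like (Counter is a made-up toy type, not anything specific):

    struct Counter {
        int value;

        Counter() : value(0) {}

        // Prefix ++c: increment in place and return a reference to *this.
        Counter& operator++() {
            ++value;
            return *this;
        }

        // Postfix c++ (the dummy int parameter selects this overload):
        // it has to copy the old state so it can return it, then increment.
        Counter operator++(int) {
            Counter old = *this;  // this copy is what prefix ++ avoids
            ++value;
            return old;
        }
    };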
There's no difference between C and C++, except that in C++, you can define prefix and postfix operators on user-defined types. The two are defined separately because C++ was created for people who are really finicky about performance. (For some reason some people are really bothered by the possibility that you could provide really insane definitions, as if it wasn't already possible to create insanely named functions and methods....)
Anyway, even in C, ++i and i++ change the value of i and evaluate to a value. So there are two values: the new value of i, and the value of the expression. In the case of prefix ++, the two values are the same. In the case of postfix ++, the two values are different. If you ignore the value of the expression, then ++i and i++ can be used interchangeably. The compiler will probably figure this out and produce the same code.
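Concretely, both forms increment i immediately; only the value of the expression differs (tiny example, valid in both C and C++):

    #include <stdio.h>

    int main(void) {
        int i = 5;
        int pre = ++i;   /* i becomes 6, expression evaluates to the new value: pre == 6 */

        int j = 5;
        int post = j++;  /* j becomes 6, expression evaluates to the old value: post == 5 */

        printf("i=%d pre=%d  j=%d post=%d\n", i, pre, j, post);

        /* In a for loop's third clause the expression's value is thrown away,
           so ++k and k++ behave identically there. */
        for (int k = 0; k < 3; ++k) { /* same with k++ */ }

        return 0;
    }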
I use ++i in for loops because pre-incrementing is the semantically correct option. I've never understood why i++ came into favor; can anyone shine some light on it for me?
My hypothesis: it has better rhythm -- the repetition of 'i' at the beginning of each piece of the for loop makes it nice and easy to remember. I bet people would tend to learn this when they were first learning how to code and then never change because it never caused them any problems.