>>> def f():
...     type(x)
...     x = 10
...
>>> f()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
UnboundLocalError: local variable 'x' referenced before assignment
irb(main):010:0> def f()
irb(main):011:1> x.class
irb(main):012:1> x = 10
irb(main):013:1> end
=> nil
irb(main):014:0> f()
NameError: undefined local variable or method `x' for main:Object
from (irb):11:in `f'
from (irb):14:in `evaluate'
from org/jruby/RubyKernel.java:1101:in `eval'
from org/jruby/RubyKernel.java:1501:in `loop'
from org/jruby/RubyKernel.java:1264:in `catch'
from org/jruby/RubyKernel.java:1264:in `catch'
from C:/jruby-1.7.12/bin/jirb_swing:53:in `(root)'
public class Test {
    public void f() {
        boolean isInt = x instanceof Integer;
        Integer x = 10;
    }
}
>javac Test.java
Test.java:3: error: cannot find symbol
        boolean isInt = x instanceof Integer;
                        ^
  symbol:   variable x
  location: class Test
1 error
Oh no, it's now in line with just about every other lexically scoped language.
No, because it can only throw if there's a flaw in the code. Not randomly according to the argument passed to the function. Unless I'm missing something, it will either always throw, or never throw.
No because the let keyword didn't exist before. So existing code won't suddenly start to throw. The article's author is making a terrible, bogus argument.
Except, now you have one that you expect and the other you don't. This doesn't make the language anything but more inconsistent. I prefer consistency inside a language far more than familiarity between languages.
Remember, it's not the function that's throwing the exception, it's the runtime. In order to make it not throw an exception, we'd have to special case typeof to make the runtime behave differently just for it. /That/ is inconsistent.
let x = 0;
function typeof_wrapper(y) { return typeof y; }

(function() {
    typeof_wrapper(x); // throws an error
    let x = 1;
})();

(function() {
    typeof_wrapper(x); // returns "undefined"
    var x = 1;
})();
Again, typeof isn't throwing the error, the runtime is, because a variable declared with "let" is being referenced before the declaration. It's not inconsistent, it's one of the main points of "let".
You're essentially arguing that a feature added because the old behaviour was undesirable is inconsistent because it's not exhibiting the old undesirable behaviour.
Sure there's a zillion frameworks and patterns for shoehorning OO into JavaScript. But the point is, ES6, is standardising it. That means libraries going forward will assume this is the way it's done and build on top of it. It gives them more expressive power and better inter-op.
Also, default params are long overdue but I'm not sure that makes them "so what". I'd rather avoid mangling the arguments array in some awkward if statements.
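To make that concrete, here's a sketch of the before/after (the greet functions are hypothetical, just for illustration):

```javascript
// ES5 style: simulate the default by poking at the argument manually
function greetOld(name) {
  if (typeof name === 'undefined') {
    name = 'world';
  }
  return 'Hello, ' + name;
}

// ES6 style: the default lives right in the signature
function greetNew(name = 'world') {
  return 'Hello, ' + name;
}

console.log(greetOld());      // "Hello, world"
console.log(greetNew());      // "Hello, world"
console.log(greetNew('ES6')); // "Hello, ES6"
```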
Bottom line, there's a lot of things about JS that aren't great, but it's what all the browsers actually run so we're kind of stuck with it and any improvements are welcome.
(Even if you use something like ClojureScript or CoffeeScript, JS is still the target language and the runtime. Improvements and standardisation matter.)
ugh, really sick of this meme. OO is a tool in the box, it's useful a lot of the time, sometimes it isn't. The reason ES6 introduces the `class` keyword is that people are doing that already.
I somewhat agree, but "adding tools" at the language level doesn't come for free: it causes an explosion in the number of interactions to keep track of. How do classes interact with prototypes? How do they interact with lexical scope? How do they interact with exceptions? etc.
The egregious part is that, by turning something into a language feature, those who don't use it are often forced to take it into account in their code; especially library authors.
About the only thing that might be interesting to see is how constructor parameters are handled. With the prototyping system, I consider it to be a bug to initialize members in the constructor - if you always just call an 'init' method straight after construction (typically chained, much like in Objective-C - var thing = new Thing().init(param); ) then prototyping is very simple and does nothing surprising. It's only if you really really really want to have RAII that things become complicated...
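A sketch of the pattern being described, with a hypothetical Thing (the chained init, not the constructor, sets the members):

```javascript
function Thing() {}

// Members are initialized in init, not the constructor, so construction
// itself does nothing surprising to the prototype chain.
Thing.prototype.init = function (param) {
  this.param = param;
  return this; // returning this allows chaining straight after `new`
};

Thing.prototype.describe = function () {
  return 'Thing(' + this.param + ')';
};

var thing = new Thing().init(42);
console.log(thing.describe()); // "Thing(42)"
```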
Just because a tool is in the box doesn't mean it's useful. Considering the relative cleanliness and refactorability of code coming from the functional world (see Haskell and the like), it is debatable whether OOP is a "good" style to develop in at all!
If I look at most of the libraries commonly used in production JS, pretty much none of them apply classic OOP principles.
> it is debatable whether OOP is a "good" style to develop at all!
You've drank way too much Kool-Aid here.
First of all: there are many different styles of OO, some more cumbersome than others, some so far removed from "classic OOP principles" that it takes effort to see how they are related. It's utterly useless to talk about "OOP style" as if it were a well defined, canonical set of rules - because it's not.
Second: there's nothing that puts OOP and FP in opposition to each other. You should be aware - and if you're not, you should feel ashamed - that closures, one of the basic FP tools, were supported in Smalltalk 20 years before Haskell.
Third: just as with OOP, things coming from FP are not uniformly "better" in any sense. Just as with every other paradigm, FP languages frequently have features needed only because they are FP. It makes very little sense to adopt those. Many really interesting features meant for solving real problems (and not for solving problems with a paradigm) are steadily coming to non-FP languages anyway.
> If I look at most of the libraries commonly used in production JS, pretty much none of them apply classic OOP principles.
Once again: either you have no idea what OOP is or you have no idea what JS libraries look like. For every http://ramdajs.com/docs/ you get hundreds of libraries which implement their own kind of OO, starting from jQuery and Backbone.
To be clear, "closures" are merely an implementation of lambdas, which were invented in the 30s. I wouldn't mention it except you're trying to shame someone for not "knowing" that Smalltalk beat Haskell to the punch, despite both of them borrowing from a much richer history.
To drive the point home, you can implement lambdas (and thus Haskell) without using closures. In fact, this is easy in a pure language since bindings are static. One way to do this is to compile the language into a basis of combinators like SKI. These no longer have any notion of binding, so we don't need closures. Likewise, you could compile it into a categorical semantics and maybe implement this on FPGAs - again, no need for closures.
Lambdas are merely a syntax and theory. Closures are merely one interpretation given a Von Neumann machine.
I happen to know all this; what I wanted to stress is not the nature of closures, which is exactly as you say, but that they were used long ago in "purely object oriented" languages, which - AFAIK - are not meant to implement lambda calculus in any shape or form.
GP wrote that - in short "everything in OOP is bad, let's use FP only". I responded with an argument that, in fact, certain OOP languages used FP features long before Haskell (because that's the example GP provided). I don't think the fact that closures are just one interpretation of an abstract concept of "lambda" is very relevant to this argument.
And about "shaming" people: I meant to gently point out that advocates for some cause should at the very least get their facts straight. Bashing some concept without knowing it well is the thing I objected to, a "not knowing it well" part by itself wouldn't be anything one should feel ashamed of.
I suppose I read it overly antagonistically then. Honestly, the entire FP/OO thing is so draining. I wish we'd all just start talking about Church/Curry debates instead. It'd dispense with most of the marketing mumbo jumbo.
Just because it's possible to write a program without a tool doesn't mean it's more productive to. Functional programming doesn't even directly oppose OOP; it opposes imperative programming. It is absolutely basic CS that there is data and logic, and many problems are easy to reason about as logic modifying data (state). OOP is a tool that allows you to encapsulate some data and logic that go together. Functional programming doesn't give you much more than that. Just because you may (or may not) end up decomposing a program into the smallest units of logic doesn't mean you've done something useful, or clean.
* FP as I understand it doesn't much oppose imperative or even most of OO. It opposes non-composability using discipline from pure/total languages.
* Even if you want to encapsulate data and logic together like you suggest, you only need to buy 10% of OO to get that (ADTs, existential types, modules, what-have-you do it fine if not far better). The remaining 90% may be a waste or actively harmful.
* FP, as I practice it, means using the simplest possible thing in a world where simple things compose nicely. This may end up being some kind of full-scale OO, I'm eager to try to find places where it does, but so far it hasn't ever.
As a meta note, "my kind of FP" is the Haskell type of purity and other such nonsense.
>FP as I understand it doesn't much oppose imperative or even most of OO
I think this is definitely true, if you take 'oppose' to mean 'incompatible'. If we look at the design concepts and plot them, they'll sit on different axes, and I think this is what we'll see.
>* Even if you want to encapsulate data and logic together like you suggest, you only need to buy 10% of OO to get that
That's also true, but it's not an argument against the seed of usefulness of the 10%. We have kitchen sinks because people pick and choose features at will.
>* FP, as I practice it, means using the simplest possible thing in a world where simple things compose nicely.
Same here, and everyone should practice this when they look at any function, or object method, trying to make it as atomic as possible.
> This may end up being some kind of full-scale OO
I don't think it's any more useful to shoehorn a program into "everything is objects" than into "everything is functions".
My point was to say that the core goal of OOP is a great one, it simply codifies something that people have done in C when they put structs and procedures in the same header file. Let's not throw the baby out with the bath water.
My point is merely that the benefits of OO are gained using essentially nothing more than real ADTs, and yet in practice they are almost always bundled with other, harmful things. FP can include these core ideas nicely, much like how monads include imperative ideas nicely, by focusing on what composes well.
As long as I'm not using C++ I don't feel harmed by any OOP features I don't have to use. I don't feel being restricted. I'm curious, what are these commonly harmful things you're talking about?
On the other hand, most FP languages I tried feel very restrictive. The only language that is inclusive of all good ideas, seems to be F#.
I'm not a big fan of languages allowing mutability anywhere. You can use it, sure, but advertise in big bold letters as to where. This plays out repeatedly for any side effect.
There are some OO systems with effect types I suppose. Those would be interesting.
There are also OO systems that don't have mutability. But I think that if you remove mutation, all that class nonsense, and are careful with subtyping (since it often breaks things badly), then you basically have (badly typed) ML modules with open recursion.
The open recursion (a.k.a. late binding) bit is probably the most interesting thing but you can play with it in Typed LC just fine. It's just another form of recursion.
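For what it's worth, open recursion is easy to demonstrate with plain prototypes (object names here are made up): the base method calls this.name(), and which name() runs is decided by the receiver at call time.

```javascript
var base = {
  name: function () { return 'base'; },
  // describe calls this.name() -- the binding is resolved late,
  // at call time, through whatever object the call went through
  describe: function () { return 'I am ' + this.name(); }
};

var derived = Object.create(base);
derived.name = function () { return 'derived'; };

console.log(base.describe());    // "I am base"
console.log(derived.describe()); // "I am derived"
```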
> Functional programming doesn't even directly oppose OOP, it opposes imperative.
Just to be pedantic, OOP is (a kind of) imperative programming: goals are achieved by performing a sequence of actions in the world. In functional programming, goals are achieved by defining values; the evaluation order is an implementation detail.
It could be argued that OOP is "more imperative" than, for example, procedural programming, since in an OOP approach, "the outside world" includes everything other than the current object (due to dynamic dispatch).
The problem is for anyone who sees prototype OO as a better solution. They kind of give their blessings to the other OO, which is a bit sad.
(edit: spelling)
I see it the other way--prototype OO was given the blessing of the Creators of the language back in the mid 90s when it was created.
Though the people who 'get' prototypes swear that they're the greatest paradigm ever, you will still get a huge number of people coming in from other languages who find the prototype chain awkward and prefer to think in complex class hierarchies.
These people who choose to create their abstractions out of classes are in no way 'wrong.' To each their own.
Adding `class` syntactic sugar is just standardizing on something which many people already do every day.
There are dozens of ways to do classes/inheritance. Standardizing this means better tooling, documentation, and interoperability.
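As a rough sketch of what that standardization buys (the Queue here is hypothetical, and the desugaring shown is approximate rather than the exact spec semantics):

```javascript
// ES6 class syntax
class Queue {
  constructor() { this.items = []; }
  enqueue(x) { this.items.push(x); }
  dequeue() { return this.items.shift(); }
}

// Roughly the pre-ES6 spelling of the same thing
function QueueOld() { this.items = []; }
QueueOld.prototype.enqueue = function (x) { this.items.push(x); };
QueueOld.prototype.dequeue = function () { return this.items.shift(); };

var q = new Queue();
q.enqueue(1);
q.enqueue(2);
console.log(q.dequeue()); // 1
```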
>Default parameters. By itself, this is another “so what” feature.
It's extremely useful in conjunction with named parameters. Named parameters are so much better than "option" objects.
If it's right in the function's signature, you can see right away how this thing is supposed to be used. Since it's declarative, it can be also picked up by your editor, too.
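A sketch of that, assuming a hypothetical connect function - destructuring plus defaults makes the option shape visible in the signature, where an options object hides it:

```javascript
// Options-object style: the accepted keys are invisible in the signature
function connectOld(options) {
  options = options || {};
  var host = options.host || 'localhost';
  var port = options.port || 8080;
  return host + ':' + port;
}

// ES6 style: destructuring with defaults documents the shape declaratively
function connectNew({ host = 'localhost', port = 8080 } = {}) {
  return host + ':' + port;
}

console.log(connectNew());               // "localhost:8080"
console.log(connectNew({ port: 3000 })); // "localhost:3000"
```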
>let considered harmful
No, it's not. It makes variable declaration work like everywhere else. Function scope is the super weird anomaly.
Did people actually use the side-effect var-hoisting intentionally within their code?
Pretty much any JS style-guide worth its salt suggests manually moving var declarations to the top of scope since it's nice to know ahead-of-time which indicators of state you should be keeping an eye on.
The idea of inspecting a variable that is later-on defined with let seems baffling to me. I can't think of any reason why you would want to do this.
I tend to declare vars with their context. Especially in functions that are (unfortunately) longer than usual, having all vars at the top makes for a mess.
But I don't think the `let` syntax is going to be a problem for people who write plain JavaScript. It's more likely to become a problem for languages that compile to JavaScript. (For example, soak operators in CoffeeScript.)
> let considered harmful: The problem with let is that it is not “hoisted” to the top of its block, as var is with its containing function.
Interesting point, but I disagree. I think that the lack of hoisting is one of the benefits of `let`. It works in a different way, which is more in line with other languages. Sometimes, you don't want a bunch of variables at the top of your function. Many functions do not need to be executed in the way that hoisting makes easier.
By the way, quoting Betteridge’s law of headlines at the beginning of your article whose headline is a question does not mean you get a free pass to use such a headline ;)
Have you read the linked article about typeof? Normally, it's safe to do typeof possiblyUndeclaredVariable, but if later in the function you do let possiblyUndeclaredVariable, the variable's entire scope starts with it in an "uninitialized" state, which causes typeof to throw.
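A minimal sketch of the two situations (variable names made up):

```javascript
// No declaration anywhere: typeof is safe and answers "undefined"
var before = typeof totallyUndeclared; // "undefined"

// A let later in the scope puts the earlier lines in the temporal dead zone
var result;
(function () {
  try {
    result = typeof x; // throws before typeof gets to answer
  } catch (e) {
    result = e.name;
  }
  let x = 1;
})();

console.log(before, result); // "undefined" "ReferenceError"
```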
What you have to keep in mind is that JavaScript is not "a properly scoped language". Pretending that it is will cause you to miss key aspects of how the tool works. This helps no one, including you. Please, for as crappy as the language might feel, approach the language on its own terms.
Function scopes and variable hoisting were not the best parts of JS anyway. let brings lexical scope into the game, which I think is great progress. And you cannot expect a variable to be defined outside of its lexical scope. That's no different than trying to access a variable outside of the function it is defined in.
If people were abusing variable hoisting in some way, they can continue to do so, by not using `let`.
If I'm following the blog post correctly, let does have effects outside of its lexical scope: it effectively makes the variable even more undefined than a totally non-existent variable. That is, the following code will not throw anything:
// x has not been defined or initialized anywhere
console.log(typeof x)
However, if you add a let statement after it like so:
// x has not been defined or initialized here
console.log(typeof x)
let x = "foo";
typeof will fail with an exception, because the let statement changes x from an undefined variable to a new state that isn't even undefined anymore.
It's not having an effect outside of its lexical scope. You introduce the let into the same lexical scope as the console.log.
It will also throw an exception every single time. So unless you're adding a variable, and then never testing the code path that hits the new line, you're probably going to catch it pretty early.
Well, but doesn't let itself create a new implicit scope, from the point of the let statement to the end of the scope the statement is in? It seems to me that here let is indeed acting on things outside of let's own implicit lexical scope.
Inside a lexical scope, all matching variable names refer to the same variable. Because they are in the same lexical scope, all instances of the variable name "x" refer to the same variable.
Separate (but related) to this is the concept of where it is valid to dereference variables (this is where my knowledge breaks down - is there an accepted term for this?). Javascript says that for variables declared with "var", it is always valid to dereference them, but it might dereference to "undefined". For variables declared with "let", it is only valid to dereference variables in the lexical scope. In addition to this, it defines a "temporal dead zone", which covers the span between the start of the lexical scope and the "let" declaration. This isn't a novel thing - other languages do it, though they may use different terminology.
It's this "temporal dead zone" that you seem to be referring to when you say it's creating a new implicit scope.
It's still impossible for a let declaration to affect anything outside of its lexical scope. If it's shadowing a variable name in a parent scope, what it can do is stop variable names earlier in the scope from referring to the parent scope and make it refer to the variable in inner scope. This may seem like a problem, but most other languages will do the same thing (C++, Java, C#, Python, Ruby etc.) and I've never seen it be much of an issue.
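A quick sketch of that shadowing behaviour:

```javascript
let x = 'outer';
var seen = [];

(function () {
  // Referencing x on this line would throw: the let below shadows the
  // outer x for this entire function body.
  let x = 'inner';
  seen.push(x);
})();

seen.push(x); // the outer binding was never touched
console.log(seen); // [ 'inner', 'outer' ]
```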
I understand, but I believe this still happens only within the lexical scope where the variable has been defined with let.
TBH, it doesn't bother me at all. I think I can even go ahead and say the latter makes much more sense. Using the former is abusing weaknesses of the language that have come along with it throughout its history.
We have been whining about the bad parts of JavaScript for a long, long time and I think these changes are for the better for all of us.
> And you cannot expect a variable to be defined outside of its lexical scope.
Okay, but since `typeof x` can return "undefined", I would expect it to return "undefined" if x is not defined. Now, sometimes `typeof x` returns "undefined" if x is not defined, but other times it throws if x is not defined. That's the issue.
I too predict this is going to give people a lot of trouble. We will see.
And the worst thing is, introducing a let variable affects code outside of the implicit lexical scope created by that let. The switch from "undefined" to "more undefined" happens one scope above. Even if it doesn't create much of a trouble for people, it feels like a very ugly design.
I understand the implementation details, but the fact remains that previously you could call `typeof something` for any `something` at all, and it would never throw, ever. So you could use it as a way to check if a variable was defined.
Now, sometimes it will throw.
I mean, come on, you really don't see anything confusing in the fact that there are now two kinds of `undefined`, one kind you can call `typeof x` and it returns `undefined`, but another kind of undefined where you can't mention x at all without a throw? Like, if you can't mention undefined variables without getting a throw, how come `typeof` sometimes returns `undefined`? Ah, because some "undefined" variables you can do that with, but not others? So I guess there is more than one "kind" of "undefined" now? This is not confusing?
let x = 1;
function some_random_function(y) { }

(function() {
    some_random_function(x);
    let x = 2;
})();
This will throw the same exception for exactly the same reason. You're using a variable defined using "let" before the variable definition. Sod all to do with typeof.
I don't see it as massively confusing that a variable declared one way behaves differently to a variable declared another way. That's the whole point of having a new way of declaring a variable.
Why isn't everyone up in arms about how confusing it is that it behaves differently here?
(function() {
    var x = 1;
    if (true) {
        var x = 10;
    }
    console.log(x); // Will print 10
})();

(function() {
    let x = 1;
    if (true) {
        let x = 10;
    }
    console.log(x); // Will print 1
})();
Oh no, if I use "let" it behaves differently to "var"!
It's not like it's throwing on unexpected data; it's throwing because the program isn't constructed properly. This is the same as in just about every other language.
Are you really saying we shouldn't make improvements to the language because then it would behave differently to the old, poorly designed features we're trying to deprecate?
[edit]
You can still use typeof to check if something is defined without throwing on spurious inputs. The only time the code will throw is if you make changes further down in the lexical scope. And guess what - whether you do that with var or with let, you've changed the argument of typeof to refer to a different variable. Except with var it will likely silently fail in a hard-to-track-down way, while with let it will error in an obvious place.
let data;
function sanitizeData() {
    if (typeof data === 'undefined') {
        // data not yet set
        return;
    }
    // sanitize data in place
}
No matter what you put in data, "typeof data" is never going to throw an exception.
The only way to get it to throw an exception is to change sanitizeData to add a variable shadowing "data" from the parent scope.
let data;
function sanitizeData() {
    if (typeof data === 'undefined') {
        // data not yet set
        return;
    }
    // sanitize data in place
    var data = ...;
    // sanitize data in place
}
If I use var, sanitizeData doesn't do anything and I spent a while hunting the issue down.
let data;
function sanitizeData() {
    if (typeof data === 'undefined') {
        // data not yet set
        return;
    }
    // sanitize data in place
    let data = ...;
    // sanitize data in place
}
If I use let, sanitizeData throws as soon as it's called and I can track down the issue much quicker.
The only time "typeof throws an exception" is if you add a let statement later in the lexical scope in a place where if you added a var statement, it would completely change the behaviour of your code in a way you almost certainly didn't want it to.
I understand that it's not typeof that's throwing.
The statement `typeof x` still results in a throw. I understand that it's not `typeof` doing it.
It is still the case that before, you could write `typeof x` to see if `x` was defined in all cases, and the line you wrote, `typeof x` would never throw, for any possible state of `x`.
Now, that is no longer true. That is what people are "up in arms" about. Although as far as I know, nobody's actually taken up arms. I hope.
Being picky in an argument about exactly what is throwing does not change this situation. I am not sure what you don't understand, or if you understand everything but you feel that explaining what's really going on is supposed to somehow appease people: oh, okay, now that I understand what's going on... It still doesn't change the fact that before, if I wanted to know if `x` was defined, I could write `typeof x !== "undefined"`. In all cases. And the result of that statement would never be a throw. Now, if I want to know if x is defined, there are at least two kinds of "undefined", one that will be returned by `typeof x` as "undefined", and another where I am not allowed to mention `x` at all, including to do `typeof x`.
> ... before, if I wanted to know if `x` is defined, i could write `typeof x !== "undefined"`. In all cases. And the result of that statement would never be a throw. Now, if I want to know if x is defined, there are at least two kinds of "undefined", one that will be returned by `typeof x` as "undefined", and another where I am not allowed to mention `x` at all, including to do `typeof x`.
You can still write 'typeof x !== "undefined"' and not worry about an exception being thrown. There are pretty much no situations where you're going to write legitimate code and 'typeof x !== "undefined"' is going to throw. Any situation where it would throw would also break if you used var, but in much less obvious ways.
"Is this variable referenced before its declaration?" is never a question that you would ever want to ask at runtime.
I really don't think we'd be having this discussion if it had started with any statement other than "typeof x", but the function/operator (as we've tediously established) has no bearing on the behaviour exhibited.
You have variables. It's illegal to refer to a variable before it's been declared in its lexical scope. Once declared, variables start with the value "undefined" until you assign them a value. I don't see what's massively complicated about that. It's the same as in many other languages, just replace "undefined" with "null".
I have seen exactly zero blog posts about people being confused that they can't reference a variable before it's been declared in other languages. I have seen dozens about people being confused by "variable hoisting". When looked at along with how it interacts with other language features, 'let' is definitely less confusing than 'var'.
People are going to be told (rightly so) to use `let` instead of `var`. If they do so too mechanically, and their code is structured in certain ways, an expression which could never before throw an exception in Javascript, `typeof x`, now will do so.
This is not the end of the world, by any means. And it doesn't add an exception into unchanged code, but it still is a legitimate concern.
That part of the article isn't particularly clear. The intended behavior of let is great. The unintended behavior - that it can cause typeof to fail - is bad.
I guess the argument here is that let adds inconsistency? ie: var statements are hoisted and let statements aren't. It's a double-edged sword: on the one hand it can be used to improve readability, but it also adds complexity/more to bear in mind.
If I'm not mistaken that's not hoisting, since that code will work as long as a() is called after b is declared regardless of the hoisting. Keep in mind b is only accessed when a() is actually executed, not when it's declared!
Hoisting works like this:
a();

function a() {
    console.log("FOO");
}
Which might or might not be more readable, depending on your tastes.
Classes are just functions for constructing certain kinds of objects using open recursion. There's nothing wrong with them per se, but their elevated status is confusing. It also might lead to trying to solve all complexity concerns using open recursion. I'm interested in clear analysis about how good or bad that idea is honestly. At this point I just find it (a) not personally necessary and (b) interesting.
As to the author's problem? I imagine it might have some deep seated cause related to the above, but that keeps getting translated to abstract disdain for the concept. Which is tragic.
Classes add complexity and reduce composability. With OO you end up with a soup of inheritance and patterns. Functions are way easier to compose and reason about.
>Classes add complexity and reduce composability.
I could easily argue the opposite. Classes reduce complexity, and increase composability. Classes allow you to group methods in a cohesive manner into a single entity. For example, a class named Queue, List, or Vector would imply to the programmer what it does without them reading any code at all.
>With OO you end up with a soup of inheritance and patterns.
Just because programmers don't write clean code doesn't mean the whole idea is somehow lesser than just using functions. Having a hodge-podge of 2000-line functions in one massive file is just as evil as "dirty" OOP code. At least classes encourage you to group your functions in some hopefully sane manner.
>Functions are way easier to compose and reason about.
Functions are easy to compose and reason about; however, that does not mean that classes and OOP are not as easy. Each has its own clear costs/benefits. OOP and procedural programming should both be in a programmer's toolbox.
(article author here) I very much agree with this point of view. The most important concern is a developer who thinks and writes clearly, irrespective of style or programming paradigm.
That said, I find FP easier to reason about and comprehend, but that is just one man's opinion. I also find it more fun.
The "bad bet" I was referring to was that you can make JS impersonate Java. That strikes me as a losing proposition. Sugaring up prototype inheritance to look like class-based inheritance just muddies the water.
They don't necessarily introduce complexity. It's a case of using things in the appropriate place. I use Python Django. Class-based inheritance is useful, though it's probably not overdone like it is in the Java world. I also use functions - I find these tend to be better for smaller units of computation, and classes for overall organization.
You might be interested in trying Go's OO out, which privileges composition above inheritance. It's not often talked about on HN under the low-signal furor about generics, but it is a case of a small change that has a surprisingly profound effect on the language. I am becoming convinced that the current backlash against OO should really be against inheritance, not OO. Inheritance is a thing that is occasionally useful and often painful; composition is occasionally painful and often useful. The latter should be the syntactically-privileged default.
No, the composition aspects. Go privileges composition over inheritance syntactically, which has a surprisingly large effect on the whole. Interfaces end up extending it in other interesting ways, but that's a different aspect.
The whole is still an imperative language in the end, but I've found that while it may not encourage composition and separation of concerns as much as I'd like, it fights me less than some other imperative languages.
> That part of the article isn't particularly clear. The intended behavior of let is great. The unintended behavior - that it can cause typeof to fail - is bad.
typeof is never being invoked, so it can't return anything.
The rule of let is that any reference to the variable before the let declaration (in the "temporal dead zone") is a reference error. So the reference error is being thrown before typeof is ever invoked.
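A small sketch of that point - a plain reference trips the exact same error, so typeof clearly isn't the special case:

```javascript
var viaTypeof, viaPlainRef;

(function () {
  try { typeof x; } catch (e) { viaTypeof = e.name; }   // reference in the TDZ
  try { x; }        catch (e) { viaPlainRef = e.name; } // same error, no typeof
  let x = 1;
})();

console.log(viaTypeof, viaPlainRef); // "ReferenceError" "ReferenceError"
```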
JavaScript is definitely getting better. The only question is whether they are changing too much too fast. There'll be a lot of headscratchers when running into unfamiliar ES6 code.
So classes in ES6 are something I have very mixed feelings about. On one hand, all that is being added is syntactic sugar for what is already being done - I'd like to repeat that ES6 classes add NOTHING that isn't already being done, and all they really do is pave the cowpath that is being used in places like node.
That being said, the fact that previously you could not simply use the word class meant that, despite the efforts of people unused to the language (let me tell you how many half-assed class libraries I've seen in code written by people who mainly code in Python or Java but have some web code as well), unnecessary inheritance tends to be avoided. The lack of a class keyword tends to steer JavaScript writers away from inheritance for things best served by mix-ins, object literals, or whatever. I predict that adding the class keyword, while saving me some time, will also cause an uptick in unnecessarily convoluted inheritance patterns as new users find they can implement all of their design patterns verbatim, without thinking about whether they really need the ajax method, the JSON parser, and the URL builder to all be in objects that inherit from each other.
I'm sorry, but I really see no point in this article.
Do you realize that if the optional arguments were not included in Function#length, you'd just be saying "They are not reflected in the function’s length property. C'mon, man" instead ?
My bad, I misread. However, my point is still the same: there were two equally valid choices. Had the other path been preferred, the author would now be complaining about it too. I don't find it very constructive.
As for "Whoop-dee-freakin'-doo", no, I don't think it means anything in this context. That's not an argument, and the following sentence isn't either. Maybe I don't understand because English isn't my native language, though.
As a native speaker (granted, it's an American English idiom and I'm British), I understood it semantically as "so what?", i.e. "I don't care about this feature's existence".
From my understanding, the author was complaining about the way default arguments mess up implementations of currying. If default arguments were included in the .length, then all existing currying implementations would carry on working with no modifications, they would just ignore the default values and require that we pass in values explicitly.
Let's say we have these functions:
function getElem(arr, index=0) { return arr[index]; }
var curried = curry(getElem);
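The `curry` helper isn't defined in the comment; a minimal arity-based sketch (one assumption of how it might work, collecting arguments until `fn.length` is reached) is:

```javascript
function curry(fn) {
  // Collect arguments across calls until we have at least fn.length
  // of them, then invoke the original function.
  return function partial(...args) {
    if (args.length >= fn.length) {
      return fn(...args);
    }
    return (...rest) => partial(...args, ...rest);
  };
}

function getElem(arr, index = 0) { return arr[index]; }
var curried = curry(getElem);

// getElem.length is 1, because parameters with defaults don't count,
// so a single argument triggers the call immediately:
console.log(curried(["a", "b", "c"])); // "a"
```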
In a hypothetical world where default arguments are included in the length, we would have to supply them explicitly. That's no big deal though, it's just a potential inconvenience:
// Hypothetical, "better" semantics
var arr = ["a", "b", "c"];
console.log(curried(arr)); // partially-applied function, *not* "a"
console.log(curried(arr, 0)); // "a"
console.log(curried(arr, 1)); // "b"
The ES6 semantics are worse because we cannot provide a non-default argument to a curried function:
// Actual, "worse" semantics
console.log(curried(arr)); // "a"
console.log(curried(arr, 0)); // Error: attempts to run `"a"(0)`, but "a" is not a function
console.log(curried(arr, 1)); // Error: attempts to run `"a"(1)`, but "a" is not a function
The reason this is particularly unfortunate is that one major use-case of currying is to provide default arguments! In other words, by adding default arguments to the language in this way, ES6 is breaking an existing mechanism for default arguments!
I would argue that currying is actually more general than default arguments, so if it's a choice between one or the other, they should have added currying to the language instead of default arguments.
They're not quite comparable, but you can think of currying as supplying default arguments "dynamically" (at the call site), whereas regular default arguments are supplied "statically" (at the definition site). For example:
// Give "y" a default value for everyone
var defaultMultiply = function (x, y=2) { return x * y; };
console.log(defaultMultiply(5, 7)); // 35
console.log(defaultMultiply(5)); // 10
// Don't commit to any defaults yet
var curryMultiply = curry(function(x, y) { return x * y; });
console.log(curryMultiply(5, 7)); // 35
// Give "x" a default value, to act like defaultMultiply
var double = curryMultiply(2);
console.log(double(5)); // 10
// Give "x" a different default, which we can't do with defaultMultiply
var triple = curryMultiply(3);
console.log(triple(5)); // 15
I've implemented and used currying in JS, PHP and Python, and I found JS the most pleasant, specifically because it didn't have default values.
Very well said. I work on the Ramda (http://ramdajs.com/) FP JS library with the OP. Ramda is heavily invested in currying, and I don't yet have a clue how we'll deal with default parameters. At first guess, we'll simply have to explain that we can't curry such functions. Such a shame!
Fat arrow notation massively increases readability by reducing the boilerplate verbosity of function definitions. Lexical "this" binding means that the "this" of the enclosing scope is reused; arrow-defined functions do not get their own "this" when they are called.
And for a single expression the `{}` are optional and a `return` is implied, which is pretty nice for .map()/.sort(), etc.
Automatically bound `this` is super important. It's very easy to forget one `.bind(this)` (or `that` alias) somewhere, especially when you have several levels of callback nesting.
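A small sketch of the lexical `this` point (the object and names are made up):

```javascript
var counter = {
  count: 0,
  addAll: function (nums) {
    // The arrow function reuses the enclosing `this` (counter),
    // so no .bind(this) or `var that = this` is needed here.
    nums.forEach(n => { this.count += n; });
  }
};

counter.addAll([1, 2, 3]);
console.log(counter.count); // 6
```

With a regular `function` callback instead of the arrow, `this.count` inside forEach would not refer to `counter`, which is exactly the bug the comment describes.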
Eh, a lot of the additions are improvements, a few are not, a very small few introduce some annoying features. It's an improvement, maybe not perfect, but still a big improvement. Disclaimer: I like JavaScript.
I'm more interested in how many people still seem to be using typeof instead of toString. What are you doing for array checking, since typeof array === "object" - !!x[0]?
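For reference, the usual alternatives to typeof for array checks (ES5 and later):

```javascript
var arr = [1, 2, 3];

console.log(typeof arr);                          // "object" -- not helpful
console.log(Object.prototype.toString.call(arr)); // "[object Array]"
console.log(Array.isArray(arr));                  // true

// An array-like object fools the !!x[0] trick, but not Array.isArray:
console.log(Array.isArray({ 0: "a" }));           // false
```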