Teaching the Unfortunate Parts (executeprogram.com)
126 points by gary_bernhardt on Dec 28, 2020 | 95 comments



Kind of disappointing to read the comments here. The introduction talks about the semantics of teaching: how phrasing can demotivate the students.

He goes over some rough edges and explains why they might be rough. He uses the word "unfortunately" here on purpose, to distinguish it from "bad", "weird", or whatever.

Concluding, he writes about how some of the unfortunate parts are needed to make TypeScript great.

That is confusing to me: he's talking about teaching, and using these examples for future teachers to think about in their field. Reading the comments here, they all seem to discuss the examples. I'm sitting here thinking, did I miss something, should I be discussing the examples? As someone who has written many lines of code in different languages, I can ignore the idiosyncrasies. But when you are new, it's really hard.

Similarly, that is why I'm always in awe of Clojure/Lisps. It's so minimal and predictable. You don't have to learn 100 million different syntaxes or exceptions. S-expressions, maps, bools, data, and go.


Indeed. Lots of "it makes sense in the context of JavaScript" reactions here. If the article had pointed to warts in another language, we would probably have had the same reaction from proponents of that language. I think the point of the article is actually being reinforced here, as it illustrates the mindset of people who know a language so well, that they think it worthless to spend time addressing such warts when teaching it. But a programming language is not in its own universe where it is exempted from data structure theory. A[-1] in the context of a list is a convenient alias for A[len(A)-1]. When you advertise your data structure as a List or Array, but it starts to display Hashmap properties, it is confusing, in any language.


Not every point carries the same level of "unsoundness". Most are actually not problems at all, the way I see it.

1. I can live with NaN. When the source of something becomes difficult to track, use a debugger.

2. Map vs map? Just another homograph, like there are hundreds in all spoken languages, yet most people manage to communicate.

3. The overhead of negative indices might be acceptable in JavaScript, but most languages probably don't benefit from having them. I think negative indices are actually harmful: they tend to hide bugs which would otherwise be caught very early at runtime.

4. This is actually a serious issue. I don't get why so many people are in awe of TypeScript's type system. Anything based on duck typing is a red flag to me. But still, better than no typing at all...
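
For readers who haven't used TypeScript, here's a minimal sketch of the structural ("duck") typing I'm objecting to; the Point interface and norm function are just made-up illustrations:

  interface Point { x: number; y: number }

  function norm(p: Point) { return Math.hypot(p.x, p.y); }

  // Structural typing: any value with the right shape is accepted,
  // whether or not it was ever declared to be a Point.
  norm({ x: 3, y: 4 });   // 5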


3. What he's calling unfortunate is not that negative indices don't work in JavaScript, but rather that it silently returns undefined instead of raising an error.


That behavior does make sense in the context of Javascript.

    > {a: 1, b: 2}['c']
    undefined

    > {1: 1, 3: 2}[2]
    undefined
If you're arguing that JS objects shouldn't be like that, you do have a point. But taking JS basic semantics as a given, I don't see what other behavior you would expect.


The point is in the context of lists, not objects. It's in the article.


But lists are objects, with negative indices as possible keys:

    > const l = [1,2,3];
    undefined
    > l.a = l;
    [ 1, 2, 3, a: [Circular] ]
    > l === l.a
    true
    > l[-1] = 0;
    0
    > l
    [ 1, 2, 3, a: [Circular], '-1': 0 ]


And positive indices not as possible keys, but as array elements. It's crazily inconsistent.

  > l = [1,2,3]
  (3) [1, 2, 3]
  > l[3] = 4
  4
  > l
  (4) [1, 2, 3, 4]
  > l[-1] = "eek"
  "eek"
  > l
  (4) [1, 2, 3, 4, -1: "eek"]
So adding a 4 as a "key" is a proper array element; adding -1 is an object key. I'm struggling to see how anyone could think this is a good idea. Saying that they're objects is not reasonable. Objects don't need to do this (in most languages lists are objects, and they don't also do key-value pairs at the same time); they just decided to make them do this.


It's extremely consistent. I'd say that's the only consistent way to do it, if you want to preserve the semantics of JS objects. Lua does pretty much the same.

> Saying that they're objects is not reasonable.

This is a fair point, in the alternate universe where JS objects have different semantics.

> in most languages lists are objects, and they don't also do key-value pairs at the same time

This is a dynamic language we're talking about. You are very welcome to think dynamic languages are inferior choices, an opinion that I personally would share, but that's a discussion for another time. In the context of JS, that behavior is neither unreasonable nor inconsistent.


Python is also dynamic. That has nothing to do with this.

  >>> l = []
  >>> l.x = "hello"
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: 'list' object has no attribute 'x'
Allowing arbitrary properties to be added to any object is not a dynamic thing, it's just a bit silly. At best you could say it's a weak typing thing, but even then, it doesn't need to be.


Even without that argument, the behaviour is still consistent with other cases for arrays in JS anyway.

    > const a = [1, 2];
    > a[0]
    1
    > a[2]
    undefined
    > a[-1]
    undefined


In JS semantics, lists (by which I assume you actually mean arrays) are just objects with numeric keys. Nothing special about them, except that there are certain functions which assume the objects they take in have only numeric keys starting from 0 and always increasing (they usually ignore non-numeric keys that those objects might have).
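
A quick console sketch of that "just objects with numeric keys" point (the values are illustrative):

  const arr = ['a', 'b'];
  arr.extra = true;
  Object.keys(arr);   // [ '0', '1', 'extra' ]: the indices are just string keys
  arr.length;         // 2: length only counts the numeric-index entries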

This is similar to LISP, where you can build many kinds of structures out of cons cells, and you have certain families of functions that assume that the cons cells you pass in have certain kinds of structures, defining lists or trees or association lists etc.


4 is not an issue, given that basically all other JavaScript typing systems that refused to make that tradeoff never became successful. So one could make a post hoc argument that unsoundness is simply a lesser evil than not having full JavaScript compatibility.


A decision can be rational but still, as the article says, unfortunate. Yes, it probably is the lesser evil to accept the typing wart and maintain compatibility. It's still a glaring hole in the type system of a language whose main raison d'etre is its type system.


I’m surprised about the map thing, too. The period also has two meanings and nobody complains. Compare 1.3 with a.b — same character, two meanings.


#3 is interesting because you can assign arr[-1] and it's valid. But it goes from being a pure array to a mixed array+hash/dict where "-1" is the key.

BUT if you set/access arr[0] it's just like an array...

  arr = []
  arr[-1] = 'neg-one-pos'
  arr[0] = 'zero-pos'
  arr[1] = 'one-pos'
  
  arr # => ["zero-pos", "one-pos", -1: "neg-one-pos"]
That was quite unexpected for me.


Wow, this truly is an "array with a hash/dict at the end"

  arr = []
  arr[-1] = 'neg-one-key'
  arr[0] = 'zero-pos'
  arr[1] = 'one-pos'
  arr[2] = 'two-pos'
  arr[-2] = 'neg-two-key'
  arr[3] = 'three-pos'
  
  arr // => ["zero-pos", "one-pos", "two-pos", "three-pos", -1: "neg-one-key", -2: "neg-two-key"]

  arr.splice(2, 1)

  arr // => ["zero-pos", "one-pos", "three-pos", -1: "neg-one-key", -2: "neg-two-key"]

  arr[3] // => undefined
Incredibly weird for me.


That’s because arrays are also objects in JS, and objects can have new properties added, which are stored as string-keyed hash entries. Everything in JS is a kind of object, including functions.


Not quite everything. JavaScript does have some primitives, though it also provides object wrappers for those primitives via the new operator.

  > let numberPrimitive = 0;
  undefined
  > let numberObject = new Number(0);
  undefined
  > typeof(numberPrimitive);
  'number'
  > typeof(numberObject);
  'object'
  > typeof(numberObject + numberObject);
  'number'
  > numberPrimitive;
  0
  > numberObject;
  [Number: 0]
  > numberPrimitive.one = 1;
  1
  > numberObject.one = 1;
  1
  > numberPrimitive.one;
  undefined
  > numberObject.one;
  1
  > numberPrimitive;
  0
  > numberObject;
  [Number: 0] { one: 1 }


Just using parentheses is enough for wrapping:

    > 0.toString()
    Thrown:
    0.toString()
    ^^
    
    SyntaxError: Invalid or unexpected token
    > (0).toString()
    '0'


Parens aren't doing the same thing. Primitives are implicitly being wrapped in objects when you use methods, but they come out on the other end as primitives again. The new operator gives you an object wrapper that you can keep working with and modifying the values of, whereas the parens you're using just let the interpreter know that your period isn't a decimal point. You can also use a space for this.

  > typeof(new Number(0));
  'object'
  > typeof((0));
  'number'
  > 0 .toString();
  '0'


The class name “Map” is a noun (because it’s a class), the method name “map” is a verb (because it’s a method), and the conceptual relationship between that noun and verb is exactly the conceptual relationship between the class and the method. There’s nothing “unfortunate” about it at all. It’s elegant, in fact. (And if you know some math, it’s even familiar.)


The fact that the Map class doesn't have a map() method is, however, extremely unfortunate. The number of times I've been backed into a corner where I have to convert a Map to an Array of (key, value) pairs, just so that some other API could map() over its contents...
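
The usual workaround looks something like this (a sketch; the doubling is just for illustration):

  const m = new Map([['a', 1], ['b', 2]]);
  // No Map.prototype.map, so round-trip through an array of [key, value] pairs:
  const doubled = new Map([...m].map(([k, v]) => [k, v * 2]));
  // Map { 'a' => 2, 'b' => 4 }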


This advice seems excellent. I find myself running into this type of thing all the time when I introduce someone to something -- anything -- that's new to them and that I have a lot of experience with, whether it's technology or a board game or other.

It's such a distracting downer to lead with the warts and edge cases, but it also sucks to leave them off, and I end up feeling like I'm on the defensive and apologizing for whatever I'm explaining when they run into each issue themselves.


I have a big red flag that goes against what this post claims (I've written about this to a similar point https://academia.stackexchange.com/a/129673/58645 )

You do NOT want to START a course/topic by predisposing students negatively. Whether that is in relation to a topic, or inadvertently in terms of your perceived ability to teach it.

You may think that "apologising" on behalf of a technology (or yourself) might attract sympathy and establish rapport, but typically it backfires and achieves the opposite.

To make the example concrete: starting the lecture with "why NaN sucks" in order to get students to sympathise and appreciate that you're showing them the pitfalls is more than likely going to backfire and create a general feeling of resentment along the lines of "why are we even using it, why am I wasting my time here". Not to mention the risk of "great, I'm paying $Xk/y so that some random disgruntled guy can teach me hacks to circumvent shitty technology".

There are much better ways to approach this subject (linguistically), which would make it far more interesting and scientifically engaging.

Note, I'm not saying this person is "teaching it wrong" - it sounds like they've put a lot of thought into their work. But generally one should aim to refrain from 'negative' language. You could do exactly the same syllabus with positive, non-apologetic language.


> I have a big red flag that goes against what this post claims... You do NOT want to START a course/topic by predisposing students negatively.

How is this going against what the post claims? It sounds like you are in violent agreement.

> As authors, we could try to head this off with "OK, this technology has a ton of problems... in fact, it's pretty bad, but here we go, let's learn it!" That sets the learner up for demotivation from the start... It's better to describe the good parts, then tell the learner that, unfortunately, ...


You are right. I had to re-read the article with fresh eyes, hah! My first reading had left me with the impression that they were keen to introduce all the negative aspects as early as possible for the sake of "neutrality" and "transparency" ... hence my comment. But now that I re-read it, it's probably not the case. D'oh.


> NaN itself is part of the IEEE 754 floating point standard. It behaves the way it does for a reason.

The reason is that it was a workaround for the limitations of 1980s hardware that has been mindlessly cargo-culted ever since. Silent NaN-propagation is a second billion dollar mistake and equally worth avoiding in new languages.
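
For anyone who hasn't been bitten by it, a small sketch of the silent propagation in question (the data is made up):

  const prices = [10, 20, undefined];               // one bad record slips in
  const total = prices.reduce((a, b) => a + b, 0);  // NaN, with no error where the bad value entered
  total * 1.2;                                      // still NaN: it keeps propagating through later math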


I quibble with your characterisation of IEEE 754 ubiquity being mindlessly cargo-culted, because there’s a runtime cost to doing things any other way. Specifically, all mainstream hardware has instructions that work the IEEE 754 way, so if you want to not propagate NaN, I believe you’ll need to either implement those operations manually, or check for NaN after every operation and convert it into an exception or panic or whatever. I’m not certain what the cost would be, being no compiler engineer or hand-coder of assembly, but I’m fairly confident that there is one. I suspect that cost may also be inordinate on various realistic workloads.

(Since you mention billion-dollar mistakes: you can remove null pointers from a language at no runtime cost, in a few different ways—e.g. require that types be explicitly nullable, or Rust-style Option<T> optimised to squeeze the None/Some discriminant into any spare space in T, so that for sized T, &T and Option<&T> both take only one word.)


The issue isn't actually IEEE 754 but rather common C implementations; most hardware supports much more in the way of signalling NaNs and floating point exceptions, but most languages blindly follow the way C does it. It's the same with rounding modes; 754 provides a decent range of rounding modes marred by a poor default, but C and everything that follows it expects that default to be used.


> I’m not certain what the cost would be, being no compiler engineer or hand-coder of assembly, but I’m fairly confident that there is one. I suspect that cost may also be inordinate on various realistic workloads.

Back in the PowerPC days, something along these lines happened with Java. Java mandates that integer divide-by-zero throws an exception, but PowerPC doesn't have a mechanism to signal that a divide-by-zero occurred (1/0 == 0, as far as PowerPC is concerned). This forced the JVM to wrap every integer divide with an explicit test for a zero divisor, which put a significant drag on JVM performance...


People have been pushing for overflow checks for integer arithmetic on C on the basis that the cost isn't too high. For floating point it can only be lower.

We are probably talking about single-digit percent increases in run time. For some software that's relevant, but not for most.


Reasonable people can differ on whether quiet NaN propagation is good or bad, so I don't fault you for being in the latter camp, but I am surprised by your suggestion that it was particularly friendly to 1980s hardware. I was under the impression that an 8087 would have been equally happy to respond to NaN by quiet propagation, setting a global sticky flag or crashing your program, and it's modern hardware, with multiple heavily pipelined vector units running in parallel, that prefers to quietly propagate. Am I missing something?


Excellent technique, and one of the main ones I use for a Bash course & other material. I feel like being upfront about caveats is a good way to enable non-expert users to choose between languages based on their shortcomings rather than just feature lists. Having a feature is very different from nailing a feature. See for example PHP's early OO support, Python's impressive but still incomplete typing support, open source reverse engineered graphics drivers/formats.


Some pedantry:

> And JavaScript has always had a map method on arrays, which transforms the array's values into other values.

This isn't true. We only officially got this with ES5, which only came out in 2009. And we couldn't meaningfully use them until IE9 was released with ES5 support in 2011. And even then folks still had to support old IE for quite a while: it wasn't until around 2013 (after IE11's release) that ES5 use really picked up. Until then, we had Underscore.js's and jQuery's map methods, and various polyfills/shims.

Looking at the web today, you'd be forgiven for forgetting that most of the "cool" stuff is less than ten years old.


I'm far from old and I remember that time. For loops everywhere.


Thanks, I updated the post. I've been using JS since the 90s so I'm not sure how I managed to forget that!


> You're running an outdated browser that we don't support. We use many new browser features, including WASM, so we require very recent browsers. We maintain support current versions of Chrome, Safari, Firefox, and Edge. Sorry for the inconvenience!

I'm running a newish version of Safari that does have WASM support... what could this website possibly be using that I don't have on my phone? Does no one else see this message?


Only people using Safari < 13 will see that message. 13 is 1.5 years old, so yours must be older. We saw tons of wasm failures on Safari < 13, to the point that it was better to stop people before they got deep into something and hit a failure.

We run the entirety of SQLite, compiled to wasm, in the browser, among other things. It depends on the course, but our version checks are site-wide because it's all one app.


I think I might disagree with #3. I find that having arr[-1] point towards the last element of the array can hide accidental off-by-one errors. It's also a bit weird how the arrays are zero-based if you access them normally but one-based if you access them backwards. IMO it's more consistent to make the negative indexes behave the same way as an out-of-bounds access like arr[arr.length + 1].


If your mental model is that an array is zero-indexed but circular, I think arr[-1] still is consistent with that model. If you don’t think of arrays as circular, then sure, it doesn’t make sense.

I’m still in agreement that negative indices make it more confusing than just calculating based off the length, but I get why they exist.


The mental model doesn’t really jibe with circularity, since arr[arr.length] is not the same as arr[0] in any language that I know of. With a circle, you would expect that arr[0], arr[length], and arr[2*length] are all identical.


You can also think of negative indexes as a shorthand for length-N. arr[-1] is the same as arr[len(arr)-1].
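
In other words, something like this hypothetical helper (the name "at" is just for illustration):

  function at(arr, i) {
    return i < 0 ? arr[arr.length + i] : arr[i];
  }
  at([1, 2, 3], -1);   // 3, i.e. arr[arr.length - 1]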


Certainly, negative indexes being out-of-bounds would be the most consistent behaviour, but that requires giving up the index-from-end functionality (or, at least, giving it up with the default array access syntax).

If you are going to have negative indexing, there isn't really a better alternative than 1-based indexes, since you can't exactly do arr[-0].


You can actually do it (I'm not saying it's a good idea), there's a way to differentiate between -0 and 0:

    > Object.is(0, -0)
    false


I was going to make a comment about this not working for integer types; but then I remembered that JavaScript doesn't have integers, so I guess it technically works.


FWIW, I fully support the notation `arr[-0]` as a solution to this problem.


Even better: just make all arrays one-based instead of zero-based ;)


So then what should this code produce?

    i = -0
    return arr[i]


> So then what should this code produce?
>
>     i = -0
>     return arr[i]
That code would (assuming an integer value; the question is interesting without reference to the implementation of JavaScript) return arr[0], the first element. Don't think of arr[-1] as indexing the array with a negative number that might have been a positive number. Think of array access from the front, arr[ ], and array access from the back, arr[- ], as separate operations with separate syntax, each of which can accept only nonnegative numbers. arr[i] is of the first type, so you get the first element counted from the front.


Actually, I think that might work in JavaScript! All the numbers are IEEE 754 floating point numbers and there are separate values for positive and negative zero.


iirc using numeric literal values there's a coercion step—a console example:

  let arr = new Array(10);
  // 1e1 => 10
  arr[1e1] = 1;
  console.log(`arr[10]  === 1 => ${arr[10]  === 1}`)
  console.log(`arr[1e1] === 1 => ${arr[1e1] === 1}`)

So -0 becomes 0:

  arr[0] = 2;
  console.log(`arr[0]   === 2 => ${arr[0]  === 2}`);
  console.log(`arr[-0]  === 2 => ${arr[-0] === 2}`);
  arr[-0] = 3;
  console.log(`arr[0]   === 3 => ${arr[0]  === 3}`);
  console.log(`arr[-0]  === 3 => ${arr[-0] === 3}`);


It seems to me like TypeScript's soundness hole in #4 could be fixed.

1. I'd want TS to show an error on the line where the aliasing occurs, encouraging me to clone the list if I want to change its type from a distance.


I'd be concerned that any fix that still follows typescript's design principles would do more harm than good. In practice, the unsoundness of typescript doesn't seem to cause much actual problem, although more than none. It's easy to create example programs that demonstrate the problems, but they seem to be comparatively rare in day to day code.


IIRC some of those soundness holes are an intentional tradeoff to simplify the type system. http://users.soe.ucsc.edu/~abadi/Papers/FTS-submitted.pdf


This soundness hole (or a very similar one) was also accepted in Java itself (with subtypes rather than sum types).

The main reason behind accepting it is that it is very useful and safe in certain situations that the type system is too weak to define strictly: if a function takes a (string|number)[] and only reads from it, it is perfectly safe to pass in a string[].
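
The hole only opens when a write sneaks in. A minimal TypeScript sketch (the function and variable names are made up):

  function addAnswer(xs: (string | number)[]) {
    xs.push(42);                      // fine according to the parameter's type
  }

  const names: string[] = ['a', 'b'];
  addAnswer(names);                   // accepted: string[] is assignable to (string | number)[]
  names.map(s => s.toUpperCase());    // compiles, but throws at runtime when it reaches the 42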


Only arrays (the only generic type that predates Java 5) have this problem in Java, and for precisely this reason, modern Java practice avoids arrays in favor of other collection types. When real generics were introduced in Java 5, they used a design with proper variance that doesn't have this problem.


Well, that depends on context and language. I know at least one language where the binary representation of String and String|Int is different and where the difference matters.


If you want a (practically) sound type system, use Flow.


Explaining the history of things usually tends to clear things up.


The negative index thing has been extremely useful for me when using nmigen, an HDL based on python. I wish more languages supported it.


The wisdom in OP can be extended to numerous (or all?) topics of study, where human constructs are involved.


Under what circumstance do you end up with a negative array index? Eg. Why do you omit bound checks? And should accessing any gap in an array result in error, and why are there gaps in the array?


Since the focus of this article is Javascript they should have written:

'When explaining Javascript, we have to decide how to approach its shortcomings. There are mistakes in its design, and it has usability problems, and it is unreliable. How do we approach these and how much emphasis do we place on them?

One approach is: "Javascript has a lot of problems, but we'll show you how to avoid them." That can demotivate the learner: "Why am I learning Javascript if it has so many problems?"'

The answer is: "You need to know Javascript because it is the heart of web programming and one of the most widely-used programming languages in existence."

Javascript was designed and implemented in 10 days and became ubiquitous through the world wide web, despite its laughably bad flaws and shortcomings.

Gary Bernhardt's WAT talk shows just how non-sensical Javascript can be:

https://www.destroyallsoftware.com/talks/wat

I love you, Javascript, but it's Stockholm syndrome.


They're the same Gary (I wrote this article).


There’s a TypeScript joke in here about treating the author as an “any” and not realizing that it was actually a Gary Bernhardt.


This is amusing. I see your affiliation through the faq page, but as far as I can tell there is no obvious affiliation displayed on each blog article.


`The Birth & Death of JavaScript` remains one of the all-time best tech talks out there. Just a few rungs below the Mother of All Demos. Thank you!


As a seasoned developer who was in early on JavaScript (writing a SPA framework before there were SPAs, creating complex CAD software in WebGL pre-1.0), I can say there are a lot of good, and some really, really bad things about it.

As a former boot camp instructor, I can say there are many, many people who call it a first language, and a significant subset of these people who worship intricate knowledge of its faults as if they were features.

While teaching JavaScript I tried very hard to explain why the faults were there, how to avoid them, and why detailed knowledge of them is no longer relevant thanks to many smart people spending lots of energy to make them not matter any more, e.g.: q: "when should I use var?" a: "never, use let or const to avoid hoisting"
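
A minimal sketch of the hoisting difference behind that answer (variable names are arbitrary):

  console.log(a);   // undefined: the var declaration is hoisted and initialised to undefined
  var a = 1;

  console.log(b);   // ReferenceError: the let binding exists but sits in the "temporal dead zone"
  let b = 2;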

Unfortunately, I think one just has to learn a few more languages before they can take an objective view of their 1st.


I'm starting to think it's a blessing if your first language "dies." Anyone who truly tied their personal/professional identity to K&R C, Pascal, Smalltalk, CLisp, even today ObjC or Perl will have long since retired in frustration. Distance brings clarity, and as much as I liked (and still like!) some of those languages, it's very easy to see the unfortunate parts of them once you are working three, four, ten toolchains later out of necessity, and that brings more understanding of how unfortunate your current tools also are.

Unfortunately it seems like JS developers will likely never know this joy.


A version of this article could be written for virtually every language and tool.


Yep. Originally I had a SQL example in there too, but it took a lot more setup to explain and would limit the post's audience.


Any prospects of it being unveiled in the future?


What would be the educational role of a list of even four unfortunate parts of SQL?

This post isn't about JS; making another one using examples from SQL won't increase understanding.


By making it generic, it can apply to any language. For example, someone writing a C++ book, or a Perl book could find this blog post useful. If the blog post was purely about Javascript it wouldn't be as clear that others could find it useful.

But yes, a good addition to the blog post might be to include some more stuff to motivate the learner why the topic is useful.


I agree about points 1 and 3 (especially writing to negative indexes being very weird), but I disagree about the other half.

2. This is not a serious issue. I think it is unfortunate that you can't use a built-in function to map over a Map, but this isn't what the article is saying.

4. This could absolutely be fixed? The reason we "can't" fix the NaN issue is because we don't want to break JavaScript backwards compatibility. TypeScript version 3 could outright ban this action and it would be totally "fine" (modulo migration pains in existing codebases). Fixing this wouldn't affect backwards compatibility with the emitted Javascript at all.


The behavior of 3 actually changed in JavaScript 1.2.


As the full lesson says:

> For every soundness-related bug that sneaks through, TypeScript will probably save you from hundreds of "cannot read property of undefined" errors.


In JavaScript it's good practice to check function parameters for errors. TypeScript (TS) fans call this the "poor man's type checker". It does, however, have several benefits over TS, like being simpler than an extended type declaration: if (arr.length == 0) throw new Error("expected arr to be not empty")

Vs

type NonEmptyArray<T> = [T, ...T[]];

Or suppose you need to check that the array has exactly 3 items. How would you do that in TS?


> Or if you need to check if the array has exactly 3 items. How would you do that in TS?

Like that: https://www.typescriptlang.org/play?#code/C4TwDgpgBAKgFgJwhA...
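
One way to express "exactly 3 items" in TS is a fixed-length tuple type (ThreeItems is just an illustrative name, not necessarily what the linked playground shows):

  type ThreeItems<T> = [T, T, T];

  const ok: ThreeItems<number> = [1, 2, 3];   // compiles
  const bad: ThreeItems<number> = [1, 2];     // compile error: source has 2 elements but target requires 3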


About map() vs. Map:

> Their identical names are just an unfortunate accident of history.

No. Their names are clearly NOT identical.

Arrays have the method "map()" and there also exists the global class "Map".

"Map" !== "map"

The fact that they are similarly named is NOT an unfortunate accident of history because yes they are conceptually related. They both are about "mapping" things to other things. So because they are conceptually related one would assume that their names are somewhat similar as well.

What is unfortunate I think is that JavaScript Object does not have the method "map()" like Array does. Therefore you can not "map() over Maps" (or Objects in general).


I've never used Maps (ES8?) and never felt the need for negative indexes.


Map is ES6. Everyone had implemented it by late 2014. (Even IE11 has it.)

https://caniuse.com/mdn-javascript_builtins_map


Nitpick: IE11's implementation of Map isn't iterable, which makes it largely useless for practical purposes. Real-world codebases that use Map and care about running on IE11 polyfill it.


That’s more about [@@iterator] not being a thing on IE11 rather than anything about Map. There’s still Map.prototype.forEach, which is entirely sufficient.

Also, iteration is not the only purpose of Map; its real value is that you can use keys of any type, and I’ve used Map and WeakMap a number of times without having needed any form of iteration—e.g. avoiding polluting every element I need to track data on (MY_DATA: Symbol; Element[MY_DATA]: MyData), maintaining a WeakMap<Element, MyData> instead.
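
A sketch of that WeakMap pattern (the helper names are made up):

  const elementData = new WeakMap();

  function track(el, data) {
    elementData.set(el, data);     // keyed by the element object itself; no property is added to el
  }

  function dataFor(el) {
    return elementData.get(el);    // undefined if el was never tracked
  }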


One thing I like about JS is that you can monkeypatch anything. And if you want all objects of any type to have the property/method you just add it to the prototype.


Except that really doesn’t pan out well here: it’s not just one or two things that need patching, it’s a lot, and the patching will slow some extremely common operations down quite a bit (sometimes immensely), and it’ll still be incomplete because there are syntax elements to it (e.g. for..of) that can’t be polyfilled.

Thus, for as long as you want to support a certain vintage of browser, you’re very commonly better to just write things in the old way—which here means using Map.prototype.forEach instead of Map.prototype.{entries, keys, values} and/or for..of loops.
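
Concretely, something like (illustrative values):

  const m = new Map([['a', 1], ['b', 2]]);

  // Old-style iteration; works even where the iteration protocol is missing (e.g. IE11's Map):
  m.forEach((value, key) => console.log(key, value));

  // Needs [Symbol.iterator]/entries(), which older engines lack:
  for (const [key, value] of m) console.log(key, value);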


I’m not sure what benefit I, the learner, get when the author makes editorial comments about how “unfortunate” certain features of a language are. It seems to me that that I’m better served by just quickly acknowledging that something different happens under certain circumstances, and then I’m taught the techniques that professionals use when those circumstances arise.

(Edit: Perhaps we need a clearer model of the student that this material is meant for. Novices will need a “just do it this way” approach, while experienced programmers will need more detailed explanations.)


I think it helps to point out when something is an oddity that serves no practical modern purpose; the learner needs to know which are the "good bits" and which are the backward compatible cruft so they don't mistake one for the other.


> I think it helps...

Not necessarily. Teaching too much, too fast can distract a student from mastering each step; from properly “leveling up”. Teaching materials (including online courses like this) need to start with a theory of the learner in order to select and tailor materials for best effect.


It's useful to know whether some language behaviour is the result of planning or just historical accident.


Well, I think some students will be switched on enough to spot inconsistencies, so it makes sense to acknowledge them and not leave the student feeling that their mental model is wrong when it's not, or that they shouldn't expect any consistency when they should.


A lot of people are shocked by JavaScript's behavior, but the truth is, JavaScript became successful BECAUSE it is (or was) an incredibly forgiving language, thanks to its very weak typing, syntax, and behavior: it tries to do everything in its power not to yield errors, and if an error does happen in the browser, not to propagate it, so that an app that is 10% buggy can still run without blowing up (thanks to the event loop). In my book, that's certainly an achievement. If I recall correctly, VBScript was stricter than JavaScript, thus harder to write for people who didn't know what they were doing.


JavaScript became successful simply because it was the only real choice, and it would have been so even if it were less permissive. VBScript’s failure stems mostly from the fact that only Microsoft supported it (and as I remember, it wasn't even particularly well documented).


> And JavaScript has always had a map method on arrays, which transforms the array's values into other values.

This is false. Array.map has only been available in ECMAScript since 2011 and has only been usable without polyfills since around 2016. You have to be a newbie in the language not to know this... If anyone older than me ever said that to me I would judge them.



