One of the defining things about MythBusters was that they'd take a myth, build an experiment with at least some notion of a control, and then carry it out, so you could actually see the results.
"Avoid using >= and <= unless necessary. It’s faster to use a simpler comparison." There's a trivial code example, and then absolutely no evidence to support the claim. No benchmarks, nothing.
I'm extremely skeptical of that claim, too. As far as I can tell, SpiderMonkey, for example, will emit practically identical code for >= and <= as for the other comparisons. In loop optimization SpiderMonkey will actually sometimes rewrite e.g. `x < y` as `x + 1 <= y` to put linear inequalities into a standard form for analysis [0].
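To illustrate, here's roughly what that normalization means (the rewritten form only exists inside the engine's analysis, not in anything you'd write yourself):

```js
// A trivial loop as a programmer might write it:
const n = 10;
let total = 0;
for (let i = 0; i < n; i++) {
  total += i;
}
console.log(total); // 45

// Conceptually, the range-analysis pass can treat the bound `i < n`
// as the canonical inequality `i + 1 <= n`. If the engine itself
// moves freely between < and <=, there's little reason to expect one
// operator to be faster than the other in source code.
```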
They also recommend using `let` and `const` over `var` for performance reasons in the "Scope" section, but I have to ask whether they benchmarked that at all. Engines may grow to take advantage of these signals for further optimization over time, but at the moment many ES6 features incur a performance penalty simply because the code behind them is less mature. I don't know specifically about `let`/`const`, but I'd be pleasantly surprised to learn that they boost performance anywhere yet.
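For what it's worth, here's a crude sketch of the kind of check I'd want to see before believing it (runnable in Node or a browser console; a real benchmark would need warmup, repeated runs, and care around dead-code elimination):

```js
// Naive timing harness -- treat the numbers as suggestive at best,
// since a JIT may hoist or eliminate parts of these loops entirely.
function timeIt(label, fn) {
  const start = Date.now();
  const result = fn();
  console.log(label + ":", Date.now() - start, "ms (result " + result + ")");
}

timeIt("var loop", function () {
  var sum = 0;
  for (var i = 0; i < 1e8; i++) { sum += i; }
  return sum;
});

timeIt("let loop", function () {
  let sum = 0;
  for (let i = 0; i < 1e8; i++) { sum += i; }
  return sum;
});
```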
Exactly. There are no myths here, and there's no mythbusting. This has absolutely nothing to do with MythBusters.
It's just some random opinions about optimizations you most likely don't need. Who cares if something is barely faster? If the slower way is a lot more readable, I'm opting for that.
I'm not sure how this can be called "mythbusters" when it doesn't provide evidence for why some optimizations are better than others. It seems more likely to produce new myths than to debunk old ones.
Some of these suggestions are good, but a few caveats (a quick sketch follows the list):
- Lookup tables: while lookup tables are great, engines may convert switch statements into lookup tables on their own; Chakra does this when it's advantageous.
- Try-catch: this seems like V8-specific advice. Chakra definitely optimizes functions containing try-catch, and I think SpiderMonkey does too.
- Freeing memory: setting the reference to null does not necessarily free the memory; it just makes the value eligible for collection the next time the garbage collector runs.
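To illustrate the lookup-table point, here's a minimal sketch; the function and table names are mine, purely for illustration:

```js
// Two equivalent dispatch styles. An engine such as Chakra may
// compile the switch into a jump/lookup table on its own, so
// hand-rolling the object form isn't automatically a win.
function kindSwitch(code) {
  switch (code) {
    case 1: return "one";
    case 2: return "two";
    case 3: return "three";
    default: return "other";
  }
}

const kindTable = { 1: "one", 2: "two", 3: "three" };
function kindLookup(code) {
  return kindTable[code] || "other";
}

console.log(kindSwitch(2), kindLookup(2)); // two two
```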
> Try-catch: this seems like V8-specific advice. Chakra definitely optimizes functions containing try-catch, and I think SpiderMonkey does too.
http://gs.statcounter.com/ certainly suggests that specifically optimizing for Chrome/V8 will benefit the majority of users (58% excluding mobile, 50% including it). As with all statistics, take it with a grain of salt.
This one also seems a bit iffy to me; it depends on the stage of compilation, how often the code runs, how much type information the engine has been able to collect, and the size of your table. The initial unoptimized property lookups will likely be slower than if/else statements, and the engine may end up optimizing them with inline caches, which are essentially a chain of if/else-style comparisons and jumps between dynamically generated code stubs.
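To make that concrete, here's a conceptual sketch in plain JS (not real engine output; the shape tags, slot layout, and fallback function are all hypothetical stand-ins):

```js
// Hypothetical shape tags and a generic fallback, defined here only
// so the sketch runs; real engines do this in generated machine code.
const SHAPE_A = 1, SHAPE_B = 2;
function slowGenericLookup(obj, key) { return obj[key]; }

// What an inline cache for a property load conceptually lowers to:
// a chain of shape checks plus direct slot loads -- structurally
// similar to the if/else ladder a lookup table was meant to replace.
function icLookup(obj) {
  if (obj.shape === SHAPE_A) return obj.slots[0]; // hot path, shape A
  if (obj.shape === SHAPE_B) return obj.slots[1]; // hot path, shape B
  return slowGenericLookup(obj, "key");           // generic fallback
}

console.log(icLookup({ shape: SHAPE_A, slots: ["hit"] })); // "hit"
```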
Carakan doesn't either. But nowadays that only affects some old TVs (though still fairly new ones; I doubt most people replace TVs that often…) and Opera Mini's limited JS support… Yeah, okay, that doesn't matter. :)
Many of the examples I looked at were very granular and specific, but didn't have much explanation. Does "high-level point of view" really mean brief and without elaboration?
From a high-level view of JS optimization, I'd like to see more emphasis on which considerations are most likely to have the biggest impact on my code. Throwing everything in together as if it were all of equal weight seems to miss the point.
That was my take, too. It says "from a high level point of view", but the very first optimization is about using the != operator vs. the >= operator in certain conditionals. I'm struggling to think of anything more low-level than that in JS.
I don't think this is well researched enough. There doesn't appear to be any thought put into the optimization-vs.-readability trade-off, or into when an optimization buys enough speed to justify the hit to readability.
This has been one of my biggest issues when people write illegible but supposedly lightning-fast code. The biggest bottleneck in software isn't language quirks; it's the person reading your code when it breaks.
Also, when Google or whoever updates the engine and it compiles things differently, these hacks might become less efficient than just following best practices.
The styling definitely needs more contrast. The comments in the code examples are nearly impossible for me to read without highlighting them. Looking at the stylesheet, comments are set to rgb(51, 51, 51), which is nowhere near enough contrast against the black background.
This handbook assumes that JS execution really is your bottleneck, and not the network, the DOM, or whatnot. I doubt that's the case in the majority of applications, so following these tips might even be counterproductive, since they sometimes come at the cost of readability.
Some of these performance tips are just not believable without some kind of justification. I'm to believe that garbage collection can't work if you use delete? That seems pretty silly.
This one actually seems fair to me. They're not saying that GC won't kick in on the value the deleted property used to point to, but rather that using `delete` on a property can be more expensive than you need if you're just trying to let the GC know something can be cleaned up. This is because optimized JIT code and caches guard on the structure of objects that pass through them, and changing that structure (eg. deleting a property) will lead to deoptimizations / tossing away some of that code.
On SpiderMonkey, deleting a property that isn't the last one defined may incur a "dictionary mode" conversion [0]. This means the information describing each object property, which is usually immutable and shared between similar objects, gets copied one-by-one into a unique chain owned by that particular object instance. This can only happen once per object instance, however, and subsequent uses of `delete` on its properties will be less expensive -- but still more expensive than simply setting property values to `null`. I believe something similar happens with V8's "hidden classes" mechanism.
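A small sketch of the trade-off being described, assuming a hypothetical `cache` object holding a large array:

```js
// Both approaches let the GC reclaim the large array eventually;
// the difference is what happens to the object's structure.
const cache = { big: new Array(1e6).fill(0), other: 1 };

// Nulling the reference: the array becomes collectible on a later
// GC pass, and the object's shape / hidden class stays the same.
cache.big = null;

// Deleting the property instead mutates the object's structure,
// which can invalidate JIT code guarding on that shape and, in
// SpiderMonkey, push the object into dictionary mode.
delete cache.big;
```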
I'm not saying it's not true. I don't know enough deep javascript magic to make that claim. It just didn't sound true, given that they've provided no supporting information or even a plausible-sounding explanation of why one way is better than another way.
If the entire site were rewritten using this level of detail plus benchmarks, it would be 10x more useful.
Instead, this site seems to just regurgitate myths, e.g. http://mythbusters.js.org/workflow/boolean-conditions.html
"Avoid using >= and <= unless necessary. It’s faster to use a simpler comparison." There's a trivial code example, and then absolutely no evidence to support the claim. No benchmarks, nothing.