"Every few years, the proposal that engines offer a way to precompile scripts, so we don't waste time parsing or compiling code, pops up. The idea is that if, instead, a build-time or server-side tool could just generate bytecode, we'd see a large win on start-up time. My opinion is that shipping bytecode can increase your load-time (it's larger) and you would likely need to sign the code and process it for security. V8's position is: for now we think exploring avoiding reparsing internally will help see a decent enough boost that precompilation may not offer too much more, but we are always open to discussing ideas that can lead to faster startup times."
Surprised there was no mention of WebAssembly, which does exactly this.
WebAssembly doesn't give you access to DOM APIs, so it's not like you could rewrite Angular in wasm, for example. Most load time/parse time discussions are in the context of web frameworks, but they're typically not able to benefit from the performance improvements of WebAssembly, particularly given the overhead of moving data into and out of the wasm context.
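To make that overhead concrete: even handing a string to a wasm module means encoding it and copying it into the module's linear memory. A minimal sketch, assuming the module exports a memory and an alloc function (both names illustrative):

    // Copying a JS string into a wasm module's linear memory.
    function passString(instance, str) {
      const bytes = new TextEncoder().encode(str);       // copy 1: JS string -> UTF-8 bytes
      const ptr = instance.exports.alloc(bytes.length);  // assumed export: reserves a buffer
      new Uint8Array(instance.exports.memory.buffer)
        .set(bytes, ptr);                                // copy 2: bytes -> linear memory
      return { ptr, len: bytes.length };
    }

Do that for every string, array or object crossing the boundary and the copies add up fast.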
Perhaps, but I've been surprised at how fast WASM support has moved already, and how committed all the browser vendors have been. Usually there's at least one hold-out.
I feel like there may be enough internal pressure from groups at Apple, Google and Microsoft to get DOM integration available (so they can experiment with compiling the likes of Swift, Dart and C# to it) that it will become a priority once the MVP has shipped.
Even when it's out and mature, there is some complexity around gracefully degrading when it is not available. I suspect it is going to increase frontend complexity dramatically, at least until browser support dictates that a graceful fallback isn't necessary for the feature.
Most current WebAssembly compilation paths produce asm.js as an intermediate artifact. In the near term, backwards compatibility should be relatively simple: save both and load the asm.js code as a fallback.
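For instance (a sketch; the file names and the start() entry point are made up):

    // Load the wasm build where supported, otherwise fall back to the
    // asm.js build produced by the same toolchain.
    if (typeof WebAssembly === 'object') {
      fetch('app.wasm')
        .then(res => res.arrayBuffer())
        .then(buf => WebAssembly.instantiate(buf))
        .then(({ instance }) => start(instance.exports));
    } else {
      const fallback = document.createElement('script');
      fallback.src = 'app.asm.js';
      fallback.onload = () => start(window.AppAsm); // however the asm.js build exposes itself
      document.head.appendChild(fallback);
    }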
Not first class access, but you can call arbitrary JavaScript via FFI. An approach that worked fine for me was to use a simple object which holds your DOM handles via incrementing numeric IDs and pass them back to your Rust (or C++) code. I found the FFI overhead to be non-existent compared to the actual DOM work as well.
In my case I knew when the owning Rust struct was destroyed, and it was guaranteed to be the only thing holding the element handle so I would just free the DOM node during the destruction logic.
For something more general purpose I'd probably just use a destructor (whether C++ or Rust) with the same guarantee of single-ownership of the underlying ID, and the DOM node will automatically be freed once the native handle goes out of scope.
Edit: Of course, you'd have to prohibit copies on the native side.
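For what it's worth, the JS side of that handle table is tiny. A sketch (the function names exposed over FFI are illustrative):

    // DOM nodes live here; only integer IDs ever cross the FFI boundary.
    const handles = new Map();
    let nextId = 1;

    function domCreateElement(tag) {
      const id = nextId++;
      handles.set(id, document.createElement(tag));
      return id;
    }
    function domAppendChild(parentId, childId) {
      handles.get(parentId).appendChild(handles.get(childId));
    }
    function domFree(id) {
      handles.delete(id); // called from the owning struct's destruction logic
    }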
Well, such object references, and also string handling (no first-class representation in asm.js/current wasm either), are really just a lot of pointer munging. Realistically I don't see many of today's framework writers re-expressing their visions at that level, plus in the browser context you need them to be "safe pointers"/"references" managed carefully, etc. Can of worms. wasm/asm.js' prime use cases are certainly in high loads of real-time numerical calculation: I'm not just talking games or shader-helpers (and sciency/other simulations, whether in academia or commercial solutions), but also envisioning faster, neater in-browser apps evolving for all sorts of media (pics/vids/audio/360°/etc.) handling/editing/munging.
> need to sign the code and process it for security
If they don't do this for JavaScript, why would they start doing this for compiled code? Unless they're expecting platform-specific bytecode, in which case V8 becomes superfluous (perhaps another win for consistency).
"bytecode can increase your load-time (it’s larger)"
How come bytecode is larger than plain text?
I don't know the specifics of the V8 bytecode, but for the AVM2, which loads pre-compiled ActionScript as bytecode, the load time is faster; faster than the JVM, for example.
I'm always shocked at how reluctant sites are to actually ship less code, and I think some of this comes down to a needed shift in thinking about what an application is, what modules are and how to use imports.
One thing I've heard recently is "My app is big, so it has a lot of code, that's not going to change so make the parser faster or let me precompile".
The problem with this is thinking that an app is a monolith. An app is really a collection of features, of different sizes, with different dependencies, activated at different times. Usually features are activated via URLs or user input. Don't load them until needed, and now you don't worry about the size of your app, but the size of the features.
This thinking might stem directly from misuse of imports. It seems like many devs think an import means something along the lines of "I'll need to use this code at some point and need a reference to it". But what an import really means is "I need this other module for the importing module to even _initialize_". You shouldn't statically import a module unless you need it _now_. Otherwise, dynamically import a module that defines a feature, when that feature is needed. Each feature/screen should only statically import what it needs to initialize the critical parts of the feature, and everything else should be dynamic.
With ES modules and dynamic import() this is quite easy. Reduce the number of these:
import * as foo from '../foo.js';
and use these as much as possible:
const foo = await import('../foo.js');
Then use a bundler that's dynamic import aware and doesn't bundle them unnecessarily. Boom, less JS on startup.
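For example, a URL-activated feature might end up looking like this (a sketch; the router API here is made up):

    // Nothing from the editor feature is fetched, parsed or compiled
    // until the user actually navigates to it.
    router.on('/editor', async () => {
      const { Editor } = await import('./features/editor.js');
      new Editor(document.querySelector('#main')).render();
    });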
You still need good solutions for bundling assets because network connections aren't free.
The approach you advocate can have adverse side-effects on web performance. It would help with the initial load time due to reduced initial JS, but if you end up loading a few or dozens of additional JS modules asynchronously you're talking about a lot of extra HTTP requests. Over HTTP/1 that's a big problem, and even over HTTP/2 each additional asset sent over a multiplexed connection has overhead (~1ms or more).
Where's this ~1ms overhead for an additional asset on an HTTP/2 connection coming from? Do you have a reference to a benchmark or something that demonstrates it?
IDK what calvin had in mind, but the client pull model you suggest can require a lot of round trips. Request a, parse a, execute a, request b, parse b, execute b, request c, parse c, execute c.
Of course, you could always do server push, but hey...that's pretty close to what a single bundled file is :)
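Or split the difference: keep the files separate but declare them up front, so the browser doesn't discover them one round trip at a time. A sketch using preload hints (paths illustrative):

    // Hint the later modules early so a, b and c are fetched in parallel
    // instead of being discovered serially as each script executes.
    for (const src of ['/js/a.js', '/js/b.js', '/js/c.js']) {
      const link = document.createElement('link');
      link.rel = 'preload';
      link.as = 'script';
      link.href = src;
      document.head.appendChild(link);
    }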
Except it doesn't make sense if the JS code expands to a number of machine instructions that would take longer to transfer over the network than the transfer and parsing of the JS code combined.
How about minified JS? What would be best? Consider the massive amount of machine (or even IR) code that each identifier, built-in function, loop, switch and every other bit of "syntax sugar", aka high-level language construct, represents.
Could well be, depending on the app, dunno.. it's still the case that on the whole practically almost each lexeme in a high-level language expands into a giant ball of opcodes.. ;)
"We're" (many) already doing that too via gzipped responses (faster to decompress than to compress == neat for web uses). That's my point, a higher language is already "compressing" machine representation (all abstractions kinda do), minification turns its lengthy identifiers into minimal "codes", then the gzip.
I am surprised at just how much faster the iPhone is than the next nearest Google device (a laptop). I haven't used an Android device for a long time, but I hear the "Apple is overpriced" so often that I assume someone has been checking this out.
Actually this is probably thanks to Safari's JS engine. From the graph [1] it seems that Safari is roughly 3x faster than Chrome parsing and compiling JS code on the same Macbook Pro.
Yeah, Safari has stupidly fast startup times compared to Chrome. It's one of the big things that they are working on in their engine.
It's been a bit, and I'm going from memory here, so forgive me if i'm wrong, but...
V8 is introducing a new "interpreter" mode to help here so that the page can start being interpreted ASAP and no JIT overhead needs to force the system to wait until it's done a first pass. And in the long run they want to pull out 2 of the JITs in their engine to simplify the process and speed up the first execution (along with reducing memory usage, and simplifying the codebase to allow for faster and easier additions and optimizations).
It's a great move, but it means that things are going to get slightly worse before they get better.
The "old" V8 had 3 compilers, "Fullcodegen", "crankshaft", and "turbofan" [0]. The current V8 has those 3 + Ignition [1], so it's just adding more on now. But over time they will be removing crankshaft and fullcodegen and it will leave them with a really clean and fast engine [2].
If anyone is interested, [3] is a fantastic talk on this and other plans they have for V8, and it's very accessible for those who don't know a thing about JS engines.
(sorry about the links to google sheets here, it's the only place I can seem to find the infographics)
I am not aware that Edge is "stupidly fast" on startup. Safari though, is indeed currently leading the field.
As you correctly outlined, V8 is indeed transitioning to a world with an interpreter+optimizing compiler only. If you are using Chrome Canary, there is a chance that you are already using the new pipeline :-).
> As you correctly outlined, V8 is indeed transitioning to a world with an interpreter+optimizing compiler only.
ECL and CLISP both settled on this model. CLISP started out as a bytecode interpreter and added GNU Lightning for native code generation. ECL started out as a source-to-source compiler to C and added a bytecode interpreter; both follow C calling convention as much as possible, at first for easy interop with C code and then for easy interop for the bytecode interpreter, which avoids the C++ interop problems mentioned in the talk.
From those who spoke at BlinkOn about Ignition, it sounded like some of the pushback had been about Octane, but there'd been some movement against worrying about the Octane regression from a view that Crankshaft was over-fitted to Octane.
How much of what's remaining is performance versus stability/correctness?
iPhones aren't simply faster at JS, they're faster at everything. Benchmarks run in the same app always come out drastically in the iPhone's favour.
Granted, this isn't a strict apples-to-apples comparison, but the differences are so drastic and always in Apple's favour. That, combined with the actual, physical differences in speed of the processors on the phone itself, indicates that it's not just Safari.
Yeah, and that's becoming quite an old thread (2015), yet the most recent comment is from Nov 2016, and if you read backwards from there it's a pretty strong indicator that the issues are still extant. It's a real PITA. My soon to be replaced (but I like the small form factor a lot) iPhone 5S from 2013 whups the ass off my OnePlus Two from early 2016 when it comes to JS and canvas rendering performance.
I've been a mac user for a few years and this year switched back to iPhone as well. I figure if JS developers don't want to address the speed issue[0], I might as well invest in a company that will.
[0] They are always too busy saying, "Just use React".
Apple have actually put effort into improving single-threaded performance. Whereas competing Android handset manufacturers have wasted effort on an internecine MOAR CORES = BETTER!!! marketing fight.
That honestly isn't the reason I got an iPhone, but you raise an excellent point: for a 600x improvement in speed, and less advertising, one only has to pay 4x the price.
Good point, but to be fair, author probably doesn't have any control over this, as this is default for all Medium blogs. It's another question why Medium decided it is a good idea to ship almost 80k lines of Javascript code for a page whose only purpose is to display a blog post.
> author probably doesn't have any control over this, as this is default for all Medium blogs
The author made a statement
> Ship less JavaScript
They chose to post on medium, they could have posted the content anywhere. They chose to give us the above lesson, while ignoring it. Do as I say, not as I do.
> it's another question why Medium decided it is a good idea to ship almost 80k lines of Javascript code for a page whose only purpose is to display a blog post.
Who cares? Why not simply stop using such shitty services?
Relax, friend. The majority of people are not freaking out over this. Surely you have better things to do than nitpick about this and call other people's products shitty. I recommend choosing to be more positive instead. Have a nice day.
Nobody is freaking out about anything in this thread. I'm pointing out legitimate facts, contrary to some of the other comments. Reality isn't nitpicking. And yes, it's a shitty product. And don't worry, I'm multitasking.
And yet another question is why would a technically apt person, and one obviously concerned with page optimization, choose to give up control of how their content is served.
Time is this really scarce thing that technically apt people often have a limited supply of. He can spend days rolling his own blog app from his own super custom optimized framework and then do all the additional work of getting that content indexed on Google and sprinkle SEO black magic all over it, or he can just put up a blog post on a service where somebody else does all of that for him.
Unless you're really into that sort of thing or are not fully employed, the time saving option is the most sensible one to do if it's "good enough." Which medium is, as evidenced by the fact that we're all talking about it.
As to why he chose medium over his existing blogs only he can tell you. My guess would be that he is using Medium to reach a bigger audience.
The confusing thing is when he says "we" and "us" (talking about his team), since it's on Medium. This really should be up on the Google dev blogs if it's official.
Or choose one of the many open source blog engines out there and maybe contribute to it.
It was, of course, a rhetorical question. We all know the answer.
People publish on Medium because we messed up. RSS died/was killed and we turned to these centralized solutions, instead. Even its name "Medium" is telling.
No, people publish to Medium because it's very easy to do, and looks great. RSS doesn't solve the problem of having to create your own blog, host it, maintain it.
When your blog isn't your main job (or anything close to it), I really don't see the problem with using a service like Medium. We're all busy.
But is the audience for his post the same as the audience he's talking about in the post? I'd wager: no. Not only do developers tend to have more high-powered machines, this is a post we're much more likely to be reading on a desktop than a mobile device, compared to average. Plus, the post makes the point that a delay is hugely detrimental to any UI on a page, like buttons that won't do anything. Blog posts don't have that issue - you can start reading the article without JS having loaded yet.
The blog post repeatedly makes the point that you should measure everything you do and that you shouldn't apply blanket rules to your coding. I think the same logic applies here. The post being on Medium just isn't that relevant.
So we should only care when it's our job to, otherwise we're just too busy?
Even if one agrees with that attitude, which I don't, I'd argue it is the author's job to care.
Contributing to a bloated centralized service is not healthy for the Web. Making sure it remains a relevant platform is in his best interests, even from a purely egoistic point of view.
You still have to either create a template from scratch or modify existing ones to your needs, configure the generator, set up hosting, a domain and many other things. Things that take time which could otherwise be used for something better than devops, if that is not your area of work :).
> Time is this really scarce thing that technically apt people often have a limited supply of. He can spend days rolling his own blog app from his own super custom optimized framework and then do all the additional work of getting that content indexed on Google and sprinkle SEO black magic all over it, or he can just put up a blog post on a service where somebody else does all of that for him.
Or use one of the existing Google portals they use for sharing content about their products, research, technical findings...
It's probably a mixture of potential impact (Medium posts will always be much more popular and easier to find on Google than some static page he could set up) and ease of publishing (just write the article and publish it, versus managing your own blog platform), plus some added bonuses (sharing widgets, analytics, etc.).
Just because the author works at Google doesn't mean that "they" had anything to do with the blog post. It doesn't follow that "they" would allow their resources to be used in that way, and it also doesn't tell us anything about what the more time efficient choice for the author is.
Interestingly enough this is one of the websites where deactivating JavaScript results in a much _bigger_ payload (mostly because images are eagerly loaded without JavaScript and lazily with JavaScript activated).
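(Presumably the usual pattern: real images inside noscript tags, and with JS enabled the src only gets filled in as the placeholder nears the viewport. A sketch, assuming data-src placeholders:)

    // Swap the real image in only when its placeholder approaches the viewport.
    const io = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          entry.target.src = entry.target.dataset.src;
          io.unobserve(entry.target);
        }
      }
    });
    document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));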
True, but the driving force of TFA is about the time it takes to get to interactivity. Loading 1.2MB less out of the gate is going to help that significantly.
In what browser does eager image loading interfere with "interactivity"? Let's face it, they know most readers probably don't read/scroll to the end so it saves transfer costs on a decent scale --- which is fair enough.
"Half a megabyte of packed JS" still sounds way overkill for delayed image loading.. =)
Please show your working. Downloading >400KB, unpacking it, parsing it and running it is always faster across the board than just downloading the images? Interactivity? Please, I can scroll the page without the images or the vast amount of JavaScript being loaded. What gain does interactivity have on this article? How does your point relate to the fact that there's a vast amount of JS in the page, when the author tells us not to do exactly that?
I had to laugh at this, but (and I'm not making excuses for Medium here) that is sadly lightweight compared with a lot of sites on the web.
Still, the "ship less JavaScript" comment rings true: people need to stop cargo-culting all the things into their front-ends. It's colossally annoying at the best of times, and more so when you're browsing over a 3G connection. You can basically forget using a lot of websites nowadays if you haven't at least got 3G, and plenty aren't that great on anything less than a 4G connection.
Hmm. Usually browsers cache the raw CSS and JS assets - could that be improved so that browsers cache the compiled CSS/JS? That wouldn't help for the first load, of course - but quite a lot for sites like newspapers which are not SPAs but bundle a metric ton of JS cr.p for each page load.
edit: Chrome actually does that, as mentioned in the article - but what about Chrome Mobile and Firefox/Safari/IE?
"Chrome 42 introduced code caching — a way to store a local copy of compiled code so that when users returned to the page, steps like script fetching, parsing and compilation could all be skipped. At the time we noted that this change allowed Chrome to avoid about 40% of compilation time on future visits, but I want to provide a little more insight into this feature:
1. Code caching triggers for scripts that are executed twice in 72 hours.
2. For scripts of Service Worker: Code caching triggers for scripts that are executed twice in 72 hours.
3. For scripts stored in Cache Storage via Service Worker: Code caching triggers for scripts in the first execution.
So, yes. If our code is subject to caching V8 will skip parsing and compiling on the third load."
Something I've wanted for a long while is the ability to warm the codegen cache. It should be possible to instruct the browser to load and parse JavaScript so that on subsequent page loads execution can begin immediately. This would work really well with the model that Service Workers are moving towards.
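Given point 3 in the quote above, you can get part of the way there today by putting your scripts into Cache Storage at install time. A sketch of such a Service Worker (URLs illustrative):

    // sw.js: pre-populate Cache Storage so scripts served from it can be
    // code-cached eagerly, per the behaviour described above.
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('app-v1').then((cache) =>
          cache.addAll(['/js/app.js', '/js/vendor.js'])
        )
      );
    });
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });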
Not at all the message I got from this. Because judging by Safari on mobile and desktop, clearly the issue isn't Javascript, it's that for whatever reason, with all the insane resources behind V8 and Android, they're simply unable to get their interpreters to reach the speed of Safari or Edge.
I'd understand if nobody could make JS go fast, but clearly Apple and MS are proving in real-world-ready code that JS can be quickly parsed and executed.
Given Android's execution model (closer to a desktop OS, with many things running in userspace and constant context switching) compared to iOS's "one thing running at a time" model (closer to a game console OS), my guess is that Android benchmarks are less reliable.
Not that it discounts Apple's massive perf advantage.
$200 Android phone? Look at the Pixel XL and the Galaxy S7 Edge. They're also about 10x slower than the iPhone (hard to tell since the numbers aren't precise enough but it's a HUGE difference).
Basically the state of the art of Android is where the iPhone 5S was. Is that phone even on sale now?
I've made this point before. If you're starting a new project of any size today, developing for those low end devices probably doesn't make much sense because what's now high-end will be mid-range or low-end in a year or two. I.e., by the time you ship.
The problem with Android for the past few years has been that it hasn't really followed that rule: single core performance on current Android phones is not significantly better than it was on the phones of 3 - 4 years ago. This stands in stark contrast with the iPhone.
Now clearly this isn't just about raw processor performance: it's about the software running on those processors and here Safari clearly wins over Chrome.
Say my iPhone 5S, which is still my everyday phone after more than 3 years of heavy use, is (or should be) about the equivalent of owning a low-end Android device. Well, the problem is that in terms of performance, at any rate, it's not: it's streets ahead.
Now we all know that sooner or later Apple will run into a single-core performance wall and will have to scale outwards and, hopefully, when that happens they'll invest in a way that gets a better experience than developers and users have had with Android to this point.
(Also, hopefully things will significantly improve on Android - the article suggests so - because enough people have certainly been bellyaching about it, including me.)
I love that this article, which is quite good, appears on a site whose every page crashes and boot-loops multiple times in iOS Safari and has to be repeatedly reloaded by the browser, presumably running less of its code each time, in order to correctly render some text with images in it.
Sort of gives added point to the thesis, by providing a marvelous example of something you should never, ever do.
I mean, if I were on something older than an SE, I'd shrug and figure it was fair enough, since iOS devices do seem to age badly in my experience. But seriously...
I don't think things are going to get better until browser makers (or Google) start forcing things. It's fine to say "Use less JS" but without an incentive, things aren't really going to change.
Looking at how HTTPS adoption has grown with Google giving HTTPS sites an SEO boost and Chrome giving scary warnings, I wonder if the solution isn't to do the same with JS. Throw in some SEO incentives for pages with minimal JS and see what the market does.
The market is already incentivized; if the performance of your site is lousy, people won't like it. I think fiddling with SEO incentives is too big of a hammer to swing at what is basically a very narrow technical problem, overall JS parse volume.
The sad thing is people are so used to this modern web crap that I don't think they will stop using it. The Facebook "we crashed the app and users never stopped coming back" experiment comes to mind. Sure, technical users like anyone on HN will probably know what it could be like, but a lot of users likely have no idea how fast the web can be. A site that takes 2 seconds to load is "fast". 5+ seconds to load an article on mobile while the page is jumping all over the place is "normal".
My take on the JS performance: 1) wait for mobile CPUs to catch up with the current JS flow [not acceptable, but the easiest path], 2) WebAssembly to create a native experience, or 3) something like Elm as a JS replacement (front-end compilation). We know interpreted langs are slow; however, JS is slow now where people see it most [the front end]… they don't see the slow of the "back end". For example, imagine if a user had to wait for "npm install" when they used the server for the first time. The JS community has a great habit of adding "all the things" for significant bloat. [A module for leftpad, and lots of go-arounds due to the language being bolted on for its current use rather than designed from the ground up.] It's 6am in Cali… so take it with a grain of salt.
Except for the fact that it is nearly impossible without a dedicated team optimizing your code, because JavaScript is so hard to optimize and dead code is so hard to eliminate. With ES6 modules that could potentially get better... but good luck getting the rest of your libraries (and the 5 bajillion level-2+ dependencies that they require) on board with that.
A decent first step would be for nodejs to deprecate CommonJS and only use ES6 modules, forcing libraries to update or be deprecated. So let's have this discussion 10 years from now.
No, ES6 modules aren't just syntactic sugar over CommonJS modules, they are a different type of module entirely. One big difference is that ES6 modules have a static structure so that you can determine imports and exports from the source code alone, unlike AMD or CommonJS modules which determine their imports and exports dynamically at runtime. Import and export statements can only occur at the top-level of a module and don't accept expressions of any kind, so they can't be conditional or accept parameters.
This means if your application is only using ES6 modules then newer bundlers such as Webpack 2 and Rollup can perform "tree-shaking" - statically analysing your codebase to determine what code paths are used, and then pruning dead code from the final bundle.
So if your code imports something like Lodash with hundreds of functions but you only call one of them, then only that one function will be in your final bundle.
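Concretely, assuming an ES-module build of the library (lodash-es here; updateLayout is just a stand-in for your own code):

    // The bundler can see statically that only `debounce` is imported,
    // so the hundreds of other lodash functions never reach the bundle.
    import { debounce } from 'lodash-es';

    const onResize = debounce(() => updateLayout(), 100);
    window.addEventListener('resize', onResize);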
I think more people should look into ClojureScript with its advanced optimizations, which I believe could help achieve the goal of "shipping less JavaScript".
One of the ideas behind CDNs for JavaScript/CSS is to leverage caching by reusing the same resources across websites.
But then optimization tools said we should bundle everything into one JavaScript file, which delays the load but defeats the initial purpose.
I wonder if the caching could be more intelligent, by recognizing libraries bundled into the "big" JavaScript file that a website delivers, and parsing only the new content.
It's not black-and-white. More of a balance kind of thing.
Bundling everything together is best, except in the cases where it isn't and you need some kind of file to be shared globally. Not necessarily in a global web context; just the site's own context works too. You want assets to be cacheable.
Regardless, big libraries on CDNs don't make as much sense nowadays as they did maybe 5 years ago. It's not like everybody is still using jQuery. There are too many different mainstream libraries, with too many versions.
Slightly related: pre-parsing code and loading already-initialized application state has been available in the Dart VM for a long time now, and the technology yields faster startup times.
I'm not sure the Angular 2 AOT bit belongs here. Angular 2's AOT just compiles your HTML templates into the backing Angular component code ahead of time. It's still all JavaScript that has to be loaded and parsed.
Why not have the JS engine, i.e. V8, parse and compile one time, and then refer back to this object code each time to eliminate the startup delay? Either an HTTP cache-control header flag or something similar.