Asm.js: The JavaScript Compile Target (ejohn.org)
218 points by dave1010uk on April 3, 2013 | 103 comments



While everyone seems to be focused on performance, it strikes me that performance is just the flip side of efficiency, and that asm.js might be a great way to stretch battery life on mobile. Will asm.js web pages run with less power than traditional JS? I also suspect this could help with the sluggish performance of mobile browsers in general.


Yes, code that runs faster also uses less battery.


Well, not always, but it's pretty safe to assume that if a program completes in 1 second instead of 10 seconds, it's gonna use less power.

Modern CPUs tend to be able to adjust their clock speed in response to load, and I think there are even some shipping processors that can switch off entire cores. GPUs are the same way. In practice, running an asm.js version of some normal JS might load your CPU more evenly and use more of its resources, but it's going to run so much faster that you should come out ahead.

The power draw (measured at the wall by my UPS) from my current desktop PC can bounce from 100W to 400W at the drop of a hat (as the CPU and GPU clock up). So, in that case, code that 'runs faster' (by using the GPU) could actually use more power. But I bet in practice most GPGPU applications end up using less power in total too, since they run so much faster.


Running faster may use more power, but it generally uses less energy, which is the constraint in a battery-powered device.
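
To put made-up but illustrative numbers on it: a chip drawing 10 W for 1 second spends 10 W x 1 s = 10 J, while one drawing 2 W for 10 seconds spends 2 W x 10 s = 20 J, so the faster but hungrier run still wins on battery.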


Agreed. Summarized in the phrase "race to sleep" - see http://en.wikipedia.org/wiki/Dynamic_frequency_scaling


Also, concurrent programs that use lock-free algorithms might perform better than locking ones, but they can keep the CPU spinning while they wait, and thus may use more power as well.


This is the first I've heard of Portable Native Client (PNaCl), which sounds very interesting.

I think Asm.js is a great idea for "hacking", but the thing that doesn't make a lot of sense to me is that you're usually only going to bother compiling to Asm.js for things which need the performance. The idea that they'll "still work" in non-optimized browsers doesn't seem too convincing -- games would probably be unplayable, and you'd see messages saying "This game only supports Firefox" etc.

And I don't see many things written in C, etc., that one would want to port to the web that aren't particularly processor-intensive.

Presumably PNaCl would be close to native-code speed, a big improvement over Asm.js. And it seems like what's needed is cross-platform fast execution -- not backwards compatibility with JavaScript that will still run, but run much more slowly.

But nevertheless, it's super-interesting watching what's going on with Asm.js.


PNaCl sounded interesting — I was really excited when I heard about it — but it's been pretty much dead for a long time now. Even normal NaCl got a lot more attention, and NaCl has not exactly been heartily embraced.

People are only recently starting to talk about PNaCl because it's the only alternative to asm.js, and for some reason some people have such strong negative reactions to the idea of a highly performant language that isn't a bytecode that they'll support a defunct Google experiment before a low-level JavaScript subset.


The problem with NaCl is that it is platform dependent. It is the 386 real mode instruction set which means it is bad for mobile platforms since nearly all of them are ARM based. NaCl will be slower to run since ARM devices will have to emulate the 386 instruction set.


Not true.

With NaCl one has to provide multiple EXEs for each platform that one wants to support, be it ARM, x86 or x64.

PNaCl was designed to solve the multiple-EXEs problem by providing a single executable in LLVM bitcode, to be JIT-compiled at runtime.


> With NaCl one has to provide multiple EXEs for each platform that one wants to support, be it ARM, x86 or x64.

So websites written today won't be viewable/runnable on architectures created tomorrow unless someone goes back, updates the NaCl tooling, gets the updated NaCl tooling to the original author and convinces the original author to fix his broken, platform-dependent website.

Excuse me if I say that sounds like a bunch of horseshit. If this had been invented (and embraced) before the advent of mobile devices, half the internet would be unusable on smartphones and tablets now.

Why on earth would we want to create that sort of problem for the future? Websites tend to stick around, maintained or not, and new things will always emerge.

NaCl is a bad idea for any cross-platform medium and the internet in particular. End of story. The only people rallying behind NaCl are Google fanboys who can't even see past their Google Chrome browser when testing regular websites.

I can't wait for this non-standard monstrosity to die.


Half the internet was unusable on mobile: the Flash part...


That's not how it works. The original NaCl targeted only 32-bit x86, but now they also produce binaries for x86_64 and ARM too. If you're running on ARM, you get an ARM NaCl binary.


If the author of the website has updated his NaCl tooling and rebuilt his website after doing so.

That's a big if.


Ah! I haven't been following the latest news surrounding NaCl. Thanks for the clarification.


> I think Asm.js is a great idea for "hacking", but the thing that doesn't make a lot of sense to me is that you're usually only going to bother compiling to Asm.js for things which need the performance. The idea that they'll "still work" in non-optimized browsers doesn't seem too convincing -- games would probably be unplayable, and you'd see messages saying "This game only supports Firefox" etc.

I disagree. Granted, I'm an unusual case as I'm a compiler developer for a living, but my use of Asm.js would primarily be to write compilers for interesting or research programming languages to JS (admittedly, so I don't have to write JS). In this case the primary benefit isn't performance; it's having a reliable compile target that I can do profiling on. You wouldn't believe how important it is to have a stable (and well-specified) target language when you're writing a compiler.


I don't think it's necessarily true that only projects that need ASM.js's speed will target it. There are already several languages or tools in increasingly common use that target vanilla JavaScript, like Haxe or GWT. Developers don't use those because they can't make JavaScript perform, but because developing in another language is more convenient in some way: better syntax or easier code sharing. If these compilers could be modified to target ASM.js, you'd get the performance bump for free without sacrificing the existing state of usability in browsers that don't natively support it.


If you do that, you'll find ASM.js interfaces with the DOM at the wrong level, and you're screwed.


You're right; I obviously didn't think that through. It does seem like there ought to be a way to abstract out the design effort of working with ASM.js, but it probably wouldn't work with existing code bases.


I don't understand the issue, or why he is right.

Care to explain?

What stops language X from targeting asm.js and still having higher-level parts in plain JS to deal with the DOM?


Right now the best tool to target asm.js is a fork of Emscripten that compiles a program in LLVM bitcode to an asm.js module. So any language that targets LLVM should be relatively easy to compile to asm.js.

The issue is that asm.js doesn't provide the full JavaScript & DOM API. Calls to native JavaScript functions go through a foreign function interface. The natural design in this case is just as you describe: a self-contained core for high-performance calculations called by a shell of plain JavaScript that can interface with the rest of the world. Similar two-layer patterns are used for NaCl Chrome apps, or games using Lua to script a C++ engine.
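
A minimal sketch of how that shell/core split might look (all names here are hypothetical, not taken from any real project):

  function Core(stdlib, foreign, heap) {
    "use asm";
    var log = foreign.log;            // FFI import from the JS shell
    function sum(n) {
      n = n | 0;
      var i = 0;
      var acc = 0;
      for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
        acc = (acc + i) | 0;
      }
      log(acc | 0);                   // call back out through the FFI
      return acc | 0;
    }
    return { sum: sum };
  }

  // The plain-JavaScript shell owns the DOM and the rest of the world:
  var core = Core(window,
                  { log: function (x) { console.log(x); } },
                  new ArrayBuffer(0x10000));  // the module's linear heap
  document.title = "sum = " + core.sum(10);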

My original comment was written in the hope that existing language-to-JavaScript compilers could be modified to target asm.js, automatically gaining its performance boost for all future and current projects using, say, GWT. But existing projects aren't written with the two-layer pattern needed to target asm.js effectively. Perhaps the best you could do would be to run every function call that isn't in the module or asm.js stdlib through the FFI. That may not be worthwhile.

(Anyone can feel free to correct me if I'm totally off on something. This isn't exactly my specialty.)


It's not really about working everywhere. It's about providing the existing browser vendors a parser and reference implementation to help promote adoption. It's a small, well-constrained subset of what they already have, which they now just have to provide a compiler for. Not trivial, but significantly easier than implementing the PNaCl spec. The LLVM bitcode spec itself is large enough to ward off a second implementation.


> The idea that they'll "still work" in non-optimized browsers doesn't seem too convincing

The BananaBread demo ran very smoothly at fullscreen, 1920x1200, on Firefox 19. I was very impressed. I'm running on old hardware: a 3GHz Core2 Duo and NVidia Quadro 1700. I know it's not a full game, but it's awesome anyway.


I don't think BananaBread was that good of a demo for asm.js. It was a demo for regular WebGL (so for most Javascript engines), done a while ago. If they want to sell us on the combination of asm.js with WebGL, they need to show something even more impressive.


What about the fact that they've ported Unreal Engine 3 and are running it now at full speed inside the browser? I hope we get our hands on it soon, but you can watch the videos of it for now.

It runs OK without asm.js, but it's liquid smooth with it.

What else do you need?


Well, that's only the "mobile" Unreal 3, the same engine that games on iOS and Android are using. It's not really the "same" Unreal 3 we see used in Devil May Cry 4 and other recent PC games based on it. It's probably a scaled-down version even of the original Unreal 3, not the "Samaritan" one.

They probably couldn't have done more even if they wanted to, though, since asm.js or not, browsers can only use the OpenGL ES 2.0-based WebGL APIs. That's why I'm hoping the OpenGL ES 3.0-based (or better) WebGL will arrive very soon, although I haven't even heard whether the Khronos Group is working on it, so it's possible they haven't even started yet.


This is the full Unreal Engine 3, all 1 million+ lines of C++ code.

The art assets of the Citadel demo are, I believe, the same as in the Flash demo; they were optimized for a more mobile- and touch-friendly environment.

However we also showed a full Unreal Tournament demo running, as I mentioned in another comment, both at our GDC presentation and in the booth where people could play it. That's a full desktop level.


I believe that's the point he was making: he wants to see that running in his browser.


This post is fundamental to understanding the ins and outs of Asm.js; it was really helpful. It deserves its summary to make it even more understandable: http://tldr.io/tldrs/515c59e49ac882db1600010b/asm-js-the-jav...

Supporting Asm.js should make sense for Chrome OS, no?


Nice summary, never heard of tldr.io, looks useful.


Unimpressed by the "Unreal" demo. Just a simple scene flyover, no dynamic lights or scenario, characters, particle effects etc. etc. I bet one can put together a demo of similar quality with vanilla JS and WebGL.


We also demoed an Unreal Tournament level, Sanctuary, at GDC, both in our presentation and at the booth where people could play it. Frame rate is good even with lots of bots.


Sure, the Unreal Engine 3 demo (Epic Citadel) had already been ported to Flash a year ago: http://www.unrealengine.com/flash/ Since the ActionScript VM is far slower than JavaScript, I'm afraid I have to say Mozilla's demo does not prove the performance of asm.js at all.

And here's the latest Unreal Engine 4 demo. http://www.youtube.com/watch?v=dO2rM-l-vdQ

I know there are improvements on the web platform each year, but I wish people would be very, very cautious about saying "we've achieved nearly native performance!".

To begin with, I think it's difficult to run 3D games (including Flash and Unity) in browsers because of the asset loading (and AOT compilation) time. If people must wait seconds or even minutes to start a game, we can't do business with it. As many web developers say, immediate page loading is a must. I personally run Flash and HTML5 games, and I observe 80% of people leave after just a few seconds of loading.

To solve this issue, game stores (PS3, Xbox 360, App Store, Google Play, Steam and many others) are using the "reserve download and play it later" model. After all, traditional installed apps are not that bad an architecture...


> Since ActionScript VM is far slower than JavaScript

This is actually incorrect. While ActionScript itself is far slower than Javascript, the VM is really good at running C code compiled to it, providing about a 30x speedup over ActionScript.


John Resig's blog posts never fail to impress.

The bit comparing Asm.js to Google's Native Client is interesting.


I think it's really great of a man who has some very high stakes in traditional Javascript to be so open to Asm.js; I fully expected to read some passive-aggressive dismissal.


Does anyone know how hard it would be to port an h264 decoder to asm.js? If it worked, it could be used to shim h264 support into Firefox.


I believe one has already been ported using Emscripten - but that was a while ago; I'd be curious to see how fast it runs now with Asm.js: https://github.com/mbebenita/Broadway


The reverse has been done: WebM has been ported to JavaScript with Emscripten, which means all browsers (IE9+) can play WebM to some degree.

http://badassjs.com/post/17218459521/webm-and-webp-hand-port...


Fairly easily if you start with Broadway: it’s an Emscripten port of Android’s H264 software decoder that writes to a WebGL canvas.

https://github.com/mbebenita/Broadway


Wasn't Firefox 20 supposed to have H264 support?


Yes, but somehow it hasn't arrived on Mac yet. From what I've been reading, it's already there on Windows (never tried it myself, though).


In theory, would it be possible to take an ActiveX or NPAPI plugin and compile it down to asm.js-compatible javascript?


Yes, but you'd need to implement a compatible API for it to call. And both ActiveX and NPAPI expect access to privileged APIs not normally available to sandboxed Javascript code. So you'd either need browser support for equivalent privileged APIs for Javascript or some kind of emulation layer that fakes the access those APIs expect.


As a Node developer, I'm curious to know what, if anything, asm.js can do for Node. Maybe I'm misunderstanding something, but wouldn't asm.js allow basically all of Node's core to be written in JavaScript itself, rather than in C/C++? I think that would be a tremendous advantage, but maybe I'm missing part of the picture...


That certainly could be a theoretical future for Asm.js! In my post I link to a pure-JavaScript port of SQLite that runs in Node.js (using Emscripten): https://github.com/kripken/sql.js

Presumably either Node would have to switch to Mozilla's Spidermonkey or V8 would have to implement full performance support for Asm.js.


Hopefully the V8 guys have more respect for asm.js than they do for E4X, which would be really nice to have in NodeJS. Even though V8 is from Google devs and not directly associated with NodeJS, I think they should at least be considerate of features that are useful outside the browser but need integration into the engine itself.


But if you're cross-compiling from C++ to get asm.js, what would be the advantage? The upside is easier distribution of code, but presumably in the case of node that wouldn't be a big upside.


Considering compilation differences between 32- and 64-bit versions across Linux, Windows and OSX, let alone different versions of NodeJS, most native modules require a build system on the target platform... As someone who's stuck with Windows deployments for NodeJS work, it's been a little painful... I'd take the little performance hit for modules that load/run transparently.


I think that having code in C/C++ is much better than in an obscure dialect of JavaScript.


Most of Node's core is already written in Javascript. Also, I can't see what the benefit would be of using Asm.js vs. calling C/C++ libraries directly through an FFI.
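
For reference, the FFI route (e.g. with the node-ffi package) looks roughly like this; a sketch from memory, so details may vary by version:

  var ffi = require('ffi');              // node-ffi package
  // Bind ceil() from the system math library:
  var libm = ffi.Library('libm', {
    ceil: ['double', ['double']]         // name: [return type, [arg types]]
  });
  console.log(libm.ceil(1.5));           // 2, computed by the C library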


V8 doesn't support Asm.js.


... in an optimized fashion. Being valid JS, V8 could run Asm.js code.


Which doesn't matter in this case, as this fact doesn't give any incentive to Node folks to use it in V8.


Not yet, but if Chrome takes the route of optimizing the asm.js subset of javascript, node.js could gain a lot of traction as a generic high performance asynchronous platform.

And that may just be worth getting excited for.


NodeJS is already gaining traction as a high-performance asynchronous platform... though its best fit has been situations that are more IO-bound than CPU-bound; CPU-bound work has been a bad fit. This could make it a better fit for CPU-heavy tasks with high IO.

Right now, I have a few workers managed in Node that run compiled code. It works pretty well, but having more of that as modules directly available in Node, with nearly the same performance, would be really compelling.

Not needing to compile certain NodeJS modules is what I'd find really compelling.


...not yet, anyway.


I'm surprised the list of Emscripten projects does not contain a linear algebra library. Is that because all the common matrix transforms can already be offloaded to native browser APIs?

https://github.com/kripken/emscripten/wiki


No native browser offers anything resembling a matrix API (or even a complete mathematics API; JS's Math API is pretty minimalistic). People implement it themselves, using libraries like glmatrix.
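
For example, the heart of such a library is just hand-written JavaScript over typed arrays. A minimal sketch of a column-major 4x4 matrix multiply, the kind of routine those libraries provide:

  // out = a * b; a, b, out are Float32Array(16), column-major
  function mat4multiply(out, a, b) {
    for (var c = 0; c < 4; c++) {        // for each output column
      for (var r = 0; r < 4; r++) {      // for each output row
        var s = 0;
        for (var k = 0; k < 4; k++) {
          s += a[k * 4 + r] * b[c * 4 + k];
        }
        out[c * 4 + r] = s;
      }
    }
    return out;
  }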


Why are C/C++ so much faster than JIT'ed Javascript? Can't a subset of Javascript be converted to ASM.js JS that'll run around the same speed? loops, arrays, conditional statements, etc. Why does Javascript have to be so much slower?


C maps datatypes and actions to the underlying hardware; Javascript does not. C puts everything on the stack by default (cheap) while Javascript puts it in the heap (expensive). C uses copy semantics, while Javascript does not (fucks caches, but more importantly now you have to do two loads for each value you want to read).

Javascript is dynamically typed, and comes with eval, so you can never really be sure what type x is (and therefore what binary code the processor should run); you have to check and make allowances, which costs performance.

In C you always know the type of a variable, so you can match + to either a float, double or integer addition at compile time; in Javascript you have to (dynamically) check the type of both operands, possibly convert one or both, and then perform either a float or integer addition or a string concatenation.
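
For example, the very same + in JavaScript can mean several different machine operations depending on what the operands turn out to be at runtime:

  1 + 2         // 3      (integer addition, if the engine can prove it)
  1.5 + 2.25    // 3.75   (double addition)
  "1" + 2       // "12"   (string concatenation)
  1 + {}        // "1[object Object]" (toString(), then concatenation)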

In C you always know the size of all structs, and they can't change, so allocating them is cheap and fast; in Javascript objects are effectively hash tables and you have to access them that way (which is a lot more expensive, and potentially you have to walk a chain of several objects due to the prototype property).

The same goes for arrays: in C they are just a pointer; in Javascript they are far more complicated.


> C maps datatypes and actions to the underlying hardware, Javascript does not.

That's not entirely true. A lot of 8-bit microcontrollers, for example, do not have hardware support for arithmetic on 32-bit integers; nevertheless, C code using int32_t or uint32_t will compile just fine for those processors. This goes double for floats, if you'll pardon the completely unintentional pun.

It's a matter of degree, of course. C is still a lot closer to the metal than JS.


Great comparison, thanks.


Type safety, basically. JITs can produce good code which performs very well, but they still must check that the data they have is of the type they expect. A good JIT may only check occasionally, but it still must check. asm.js can guarantee through static analysis that its data's types cannot change.
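
A tiny illustration of why the checks never fully disappear (hypothetical, engine behavior simplified):

  function inc(x) { return x + 1; }
  inc(1); inc(2); inc(3);  // engine specializes inc() for int32 arguments,
                           // keeping a cheap type guard on x
  inc("a");                // guard fails: deoptimize to generic code,
                           // which returns "a1"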


> Can't a subset of Javascript be converted to ASM.js JS that'll run around the same speed?

ASM.js code is a subset of javascript. The point is that they have defined a subset of javascript and a set of rules that, while it will run in a normal javascript engine, clearly indicates to the engine that if it supports it, it can do additional optimizations that could break things for "normal" javascript.

E.g. javascript doesn't have proper integers. Doing everything with floating point is a performance killer, and for the JS engines to figure out when they can safely substitute integers is extremely hard without hints, so ASM.js specifies how to "fake" integers in a way that ensures the code will still work in standard JS engines (by using bitwise operators to coerce values to integers all over the place) while allowing an engine that knows about the ASM.js conventions to optimize the code further.
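
A minimal sketch of what that convention looks like in practice (a hypothetical hand-written module; it is still plain JavaScript):

  function MyAsmModule(stdlib) {
    "use asm";                        // the asm.js opt-in marker
    function add(x, y) {
      x = x | 0;                      // parameter declared int32
      y = y | 0;
      return (x + y) | 0;             // result coerced back to int32
    }
    function halve(x) {
      x = +x;                         // parameter declared double
      return +(x / 2.0);              // result declared double
    }
    return { add: add, halve: halve };
  }

A plain JS engine just executes the (semantically no-op) coercions; an asm.js-aware engine reads them as type declarations.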

Nothing stops you from writing javascript code manually this way in the first place (apart from tedium), though it is admittedly a "workaround" for features in javascript that are incredibly hard to optimize well, and that would've been handled more cleanly by language modifications.

The advantage of ASM.js is that the code still works in other JS engines, while if they'd started adding actual extensions to the language we'd be in portability hell for years.


>Why are C/C++ so much faster than JIT'ed Javascript? Can't a subset of Javascript be converted to ASM.js JS that'll run around the same speed? loops, arrays, conditional statements, etc. Why does Javascript have to be so much slower?

"So much slower"?

Perhaps you haven't been around this web scene for more than 5-6 years. JS is so much faster now it's not even funny. JS used to be in the "slow as molasses" category pre-2005.

Now it's like 10 times faster than Python for pure algorithmic code.


I'm asking to have C/C++ compared to JIT'ed Javascript. You are explaining how Javascript is so much faster than it used to be, and that it's 10x faster than Python. The fact that I mentioned JIT'ed Javascript should have given away that I understand Javascript's current performance.

If Javascript were fast enough, we wouldn't be looking for "4-10x" improvements with ASM.js and NaCL. Are we close to the limit of what can be done to improve Javascript performance? Perhaps another 2x?


>The fact that I mentioned JIT'ed Javascript should have given away that I understand Javascript's current performance.

Sure, but "understanding" is different than appreciating.

After all, a JIT is not some magic wand. If a language is very dynamic, memory hungry, less prone to optimizations, etc, JIT code will still be slower than C/C++. Plus, not everything in a JS program is getting JITed.

>Are we close to the limit of what can be done to improve Javascript performance?

With Javascript as it is, yes. Maybe some 2x at best. If we add type hints, contiguous storage, and frozen object guarantees though (some of which ES6 has, IIRC) that can be improved further.


JIT is pretty cool stuff. You can do things with it that you can't do with compiled languages:

http://en.wikipedia.org/wiki/Escape_analysis

[Update] Adding this great read about Java vs C++ performance:

http://www.azulsystems.com/blog/cliff/2009-09-06-java-vs-c-p...


In theory yes. But in actual JITs you don't get that much out of them compared to compiled code.

I've never seen any actual JIT being faster than compiled C/C++/Ada/Fortran, except for isolated contrived benchmarks.


Here's how to think of it. Right now, JavaScript is the best compiler target on the web. It's far from perfect, but it works. Asm.js makes it a better compiler target.


Wow, Mozilla is making a really big marketing push for asm.js.


You know Resig isn't employed by Mozilla anymore, right?

This is just a JS enthusiast (understatement of the year, perhaps) discussing a JS topic.


Maybe I'm a bit old-fashioned here, but I'd expect a discussion of JS performance to be coupled with a performance test conducted by the author. Is that too much to ask?


> Is that too much to ask?

If it were a paid service, perhaps not. But it's not; the blog is a free service. Not trying to be rude, but the adage applies: you can 'take it or leave it'.

I'm not sure why you feel so entitled that he seemingly owes you something.

If you feel that it's inappropriate for HN, then flag it.


Why? If you have reasons to think that the performance tests that he references are invalid or inapplicable, those should be addressed; otherwise, why should he spend his time recreating low-level tests to talk at a higher level about the subject?


What makes you think this is a marketing piece instead of a normal blog post by John?


Other than rehashing the exact arguments and diagrams presented in the Mozilla marketing material and including an extensive "interview" with a Mozilla engineer:

- I would have expected him to have tried it. There's no evidence in the post that he even fired up Firefox Nightly and ran even a simple test. The performance graph is straight out of asm.js marketing material.

- In discussing Chrome's performance, he just writes "A 4-10x performance difference is substantial" without explaining why Chrome's performance suffers so much relative to Firefox without asm.js (take the skinning demo: why is Chrome significantly worse than Firefox?). This is a point that many have discussed, including some here in various posts about asm.js.

- Certain sentences seem artificial. If you've read other posts by him, does "It is interesting to see such a large performance chasm appearing between Asm.js and the current engines in Firefox and Chrome. A 4-10x performance difference is substantial (this is in the realm of comparing these browsers to the performance of IE 6). Interestingly even with this performance difference many of these Asm.js demos are still usable on Chrome and Firefox, which is a good indicator for the current state of JavaScript engines. That being said their performance is simply not as good as the performance offered by a browser that is capable of optimizing Asm.js code." sound like something he would write? At all?


Ok, I'll bite.

- I did not run performance benchmarks on my system because Asm.js does not yet support OS X (which is my primary OS). I fully intend to run some tests, and hopefully make some of my own, when that time comes. In the meantime it's not really beneficial for me to just re-run the same benchmarks that others have run on those platforms.

- I thought this was obvious: Chrome and normal Firefox are so much slower because they do not explicitly optimize the Asm.js code path.

- That's complete nonsense. I can't believe I have to say this but yes, I wrote those sentences.


> - I thought this was obvious: Chrome and normal Firefox are so much slower because they do not explicitly optimize the Asm.js code path.

Not everything is obvious here, actually.

If we are talking about the vertex skinning benchmark, where the difference between asm.js performance and Chrome's performance looks most "impressive", then a lot of that difference is caused by various bugs and limitations in V8's optimizing compiler: https://code.google.com/p/v8/issues/detail?id=2223

The same issues can easily affect other JavaScript code.

I personally strongly believe that one does not need to "explicitly optimize the Asm.js code path", in the sense of having a separate compilation pipeline for Asm.js code, to make asm.js code faster. All one needs is to fix things that were neglected during the evolution of JS VMs.


If others have not read the parent's blog post on Asm.js, I highly recommend it: http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html

I also largely agree with Jason's comment on that post: http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html#co...

I see no reason why these efforts can't run in parallel - as certain optimizations are needed to improve execution performance, standardize them and expose them to user-written JavaScript.

I'll say that I would be totally happy if Asm.js was nothing more than a line drawn in the sand that all the browser vendors then aspired to match with normal JavaScript. Knowing what is theoretically possible in a browser is a huge motivator (as was seen in the last browser JS performance war of 2008/2009).


What you say is "One can make a JIT that ignores a single-line declaration that defines a lot of assumptions. Then that same JIT can do a lot of analysis around it, even inserting run-time checks, eventually to get the chance to come to most of the conclusions implied by the very line it ignored." Technically it is possible; practically, it is just as nonsensical as it sounds.

If you have some reasons that aren't technical, I can understand that. Otherwise, your insistence on ignoring "asm.js" can't be rationalized.


Nope, that is not what I say.

I am saying that JS VMs already do a lot of optimizations, which are not going to disappear even with "use asm", because they benefit normal JavaScript code. Now if JS VMs solve a range of known issues in implemented optimizations, fix known bugs, lift known limitations, and then additionally implement certain optimizations, this would greatly improve performance of normal JavaScript code, including the subset used by asm.js.

I don't understand the desire to have another compilation pipeline on the side, if everything that is needed can be implemented in the main compilation pipeline and benefit JavaScript as a whole, without restraining users to a strongly typed subset. This also leads to unnecessary duplication.

I also think that if you believe that restriction is essential, you should push for another execution platform (e.g. a statically typed bytecode or language). That would be much more honest towards language users and language evolution as a whole.


"if everything that is needed can be implemented"

I've never seen anybody prove that. The Asm.js guys, however, proved that with a declaration that pins down fixed assertions, it's simple to achieve significant benefits. Once again, ignoring precise and explicit information which is otherwise too demanding to deduce simply doesn't make sense, unless you're motivated by some non-technical reasons. So why do you resist something that's technically better?

Nobody says that you need "a lot" of new code; you only have to introduce an implementation which uses the fixed assertions promised by the "asm.js" declaration. Note that in the approach you propose, not only does that same functionality have to be implemented, but much, much more code would be there just to try to deduce what's otherwise obvious from that one single line.

So engine developers should implement asm.js support now; it's independent of whatever they'd want to introduce for any other optimizations. This simple approach is already there, and it has proven effective.


> Asm.js guys however proved that using the declaration that specifies the very fixed assertions

Well, C compilers proved that long ago; nothing novel here. It is obviously easier to compile statically typed languages to efficient code, especially if your language is limited to arithmetic and memory operations and your input source code is actually the output of a sophisticated optimizing compiler that performs high-level optimizations for you.

> which otherwise is too demanding to be deduced

Let me repeat: JS engines already spend time on, as you say, deducing all kinds of information. This is not going anywhere, unless you are willing to make all JavaScript slower.

Because it is not going anywhere, I am essentially arguing that it should be fixed to correctly infer more useful information than it does right now. Are you suggesting that VMs should just stay essentially unfixed?


At this point the onus seems to be on the asm.js-specific-optimizations-aren't-needed crowd to show that they can get performance that's even remotely close to optimized asm.js when using only non-asm.js-specific optimizations. If so, yeah, implement that! That'd be the best of all possible worlds; but it seems unlikely that loosely-constrained language X is as optimizable (with a roughly similar amount of resources) as tightly-constrained language Y. To my understanding, it's not a matter of current JavaScript engines needing to be "fixed" so much as just that they operate on a different class of problems than asm.js optimizers do, and thus the range of possible achievements is different, too.


> I don't understand desire to have another compilation pipeline on the side, if everything that is needed can be implemented in the main compilation pipeline and benefit JavaScript as whole, without restraining users to a strongly typed subset. This also leads to an unnecessary duplication.

There is hardly any duplication. All the optimizing machinery is being reused entirely. asm.js code is fast because the existing IonMonkey code generation makes it fast.

The only new part is the OdinMonkey module, which type-checks asm.js code and, if it validates, feeds it into the existing IonMonkey compiler with the types that were discovered during type checking.


> There is hardly any duplication.

There will be a clone of the JavaScript parser. IR building is already duplicated as well.

I am aware that the middle-end/back-end are shared, though there are some IR instructions that are specific to asm.js.

But I agree: duplication might be the wrong word here.


I am actually not a supporter of a custom parser. But yes, it might end up happening.


And disables the generation of type checks and bailouts in the output, yes?


As part of feeding the type info, it is clear that various type checks are not needed, yes.


It's working on Mac OS X now. I've run the Citadel demo on my own Mac. (I'm a Mozilla employee so I had access to the demo. It should be made available to everyone some time soon.)


>- I thought this was obvious: Chrome and normal Firefox are so much slower because they do not explicitly optimize the Asm.js code path.

I think the parent was asking for an explanation for why Chrome does so much worse than the non-asm.js Firefox. Which I guess is kind of a weird question given that Chrome beats non-asm.js FF in one of the three benchmarks, and is roughly tied with it in another.


Ah! In that case I'm not sure - but according to mraleph's comment above it looks like there might be some issues in V8 for some of the benchmarks: https://code.google.com/p/v8/issues/detail?id=2223


Overall Chrome performs similarly to Firefox in non-asm.js mode on most benchmarks - slower on some, faster on some. There are, however, a few where Chrome is significantly slower; we've filed bugs for those things.

The big difference is between Firefox+asm.js and both Firefox and Chrome.

http://kripken.github.com/mloc_emscripten_talk/#/27


Thanks for replying.

"I did not run performance benchmarks on my system because Asm.js does not yet support OS X (which is my primary OS)."

Given the relative percentage of web developers who use OSX (which is certainly non-trivial), wouldn't you think it would have been wise to state that?

But I guess the more interesting question is: did you try it on a system that Firefox Nightly + asm.js runs on? If so, can you describe that process? Did you try writing some code by hand, or taking a dummy program and running it through Emscripten?

Taking a step back, it just seems really strange to me (and it certainly would to many others, if this involved some other company like Microsoft or Google) that you would talk about performance without trying it.


There hasn't been any shadow of a doubt that asm.js works -- and it's ok for a blog post to help explain what a piece of technology is about. It's a less-technical post, yes, but if that's what the author wants to write about, so be it.

The better question is if it's the right approach, which is what mraleph has been arguing, and it's a much more appropriate argument.


It sounds more like you're looking at all of this through a strong negative filter.


If objectivity is strongly negative, so be it. You'd ask similar questions if Resig wrote an article of PNaCl without testing it himself.


Shall we run your writing through the same process, and deduce that you are employed by Google to slander Mozilla?

Actually, if they turned evil they'd employ someone better at it. I'm kind of convinced you're just trolling.


Please proceed. Explain how a discussion of performance by someone who didn't actually perform the test is somehow trolling or evil. I'd say the same thing if Resig used Google or Microsoft marketing material rather than running the tests.


Heh, I didn't say you were evil, I said that a hypothetical Google astroturfing would be.

My comments are pretty short. If you can't be bothered to parse them, why respond?


Whether the aim of this post was marketing or not (as I believe it was), it's a really in-depth and interesting read if you're a programmer. Even more so if you use JavaScript. It would be great if all corporate marketing was like this!



