
> asm.js optimizations are also an internal implementation detail.

Come on, really? If you require a full spec to define a very specific format, type annotations, and special designators to actually take advantage of it in any meaningful way, it's not an "implementation detail", because as the user, I have to care about it.
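
For anyone unfamiliar, the format in question looks roughly like this -- a minimal sketch of an asm.js-style module, with the names invented for illustration:

    function AsmModule(stdlib, foreign, heap) {
      "use asm";             // the special designator that opts this code into asm.js
      function add1(i) {
        i = i | 0;           // parameter type annotation: 32-bit integer
        return (i + 1) | 0;  // return coercion: 32-bit integer
      }
      return { add1: add1 };
    }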

> So asm.js does not expose any implementation details, no more than say CrankShaft and TraceMonkey do in the documents written about "how to write fast JS for modern JS engines" (which often say explicit things about "don't mix types" and so forth).

That's exposing implementation details, too, and demonstrates a failing of JS.
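
The sort of guidance in question, as a hypothetical sketch: the engine specializes a function for the argument types it has seen, and falls back to a slower generic path when new types show up.

    // a function the engine will specialize for integer arrays
    function sum(arr) {
      var total = 0;
      for (var i = 0; i < arr.length; i++) {
        total += arr[i];
      }
      return total;
    }
    sum([1, 2, 3]);       // stays on the fast, type-specialized path
    sum([1.5, 'x', {}]);  // mixed types can force a slower generic path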

> Of course users do. A portable application would be runnable from all the users' devices, that's a huge plus. Just like users want to play their music from their iPod, laptop, TV, etc., they want to run their apps on all their devices as well. Portability makes that possible.

Users want apps to run, and run well. They don't care how. Figuring out how is our job. Making users' lives suck more because we have lofty ideas is not doing our job.

Apple gets it. Google gets it. Even Ubuntu gets it.

Mozilla doesn't get it.

> That's a big compromise. If we had done that before the rise of ARM, for example, ARM might never have achieved its current success.

Not really. Apple and NeXT navigated these waters successfully for multiple decades via Mach-O and CFM fat binaries, and toolchains built around easily and efficiently supporting multiple architectures.

> But anyhow, of course there are different compromises to be made. The web and JS focus on true portability, with its downsides. If you personally are willing to compromise more to get better performance, then sure, another option might be better for you.

The web is competing with native applications. Now you're trying to compete with native operating systems, yet you're not willing to take the steps necessary to actually compete.

Ultimately, you're creating a two-tier system where platform vendors like yourself get decent performance and runtime environments in which you can produce things like Firefox, and third-party developers get crappy performance and runtime environments where we can produce webapps.

I'd love to see you write and deploy Firefox in asm.js, and then try to compete with Chrome.



> Come on, really? If you require a full spec to define a very specific format, type annotations, and special designators to actually take advantage of it in any meaningful way, it's not an "implementation detail", because as the user, I have to care about it.

First of all, it doesn't require a spec. We could have just done some heuristic optimizations like all JS engines have been doing since 2008, finding more cases where we can optimize and so forth - in fact, this was my initial idea for how to do this, as I mentioned earlier.

But we did decide to write a spec because (1) we want to be 100% open about this, and a spec makes it easier for others to learn about it, and (2) it helps us check that we didn't miss anything, since we have a formal type system.

>> So asm.js does not expose any implementation details, no more than say CrankShaft and TraceMonkey do in the documents written about "how to write fast JS for modern JS engines" (which often say explicit things about "don't mix types" and so forth).

> That's exposing implementation details, too, and demonstrates a failing of JS.

If so, then that exposes a failing of all JITs, including the JVM. All optimizing implementations expose details. People have optimized for the JVM for years.

If you can't stand anything between you and the underlying CPU, then nothing portable (like JavaScript, C#, Java, etc.) will satisfy you. Actually, even a CPU might not, because CPUs also optimize in unpredictable ways; these same issues have to be dealt with at that level too.

Again, there is room for native apps. But there is also room for portable, standards-based apps. The web is the latter.

> Apple and NeXT navigated these waters successfully for multiple decades via Mach-O and CFM fat binaries, and toolchains built around easily and efficiently supporting multiple architectures

I do see your point, but that isn't quite the same. Apple fat binaries were part of a platform Apple controlled. We are talking about the web, which no one controls. But again, yes, to some degree it is possible, as you say, to overcome such issues.

> The web is competing with native applications. Now you're trying to compete with native operating systems, yet you're not willing to take the steps necessary to actually compete.

I disagree. If we are 2x slower than native now, and 1.5x slower later on, we're competitive with native on that front. And we have some advantages over native, like portability, which can have long-term performance advantages (for example, we can easily switch to a different underlying CPU if a faster arch shows up). There are also short-term performance advantages to things like Firefox OS, which only runs web apps: its graphics stack is much simpler than Android's or Linux's (you don't need another layer underneath the browser compositor, and can go right into GL).


> But we did decide to write a spec because (1) we want to be 100% open about this, and a spec makes it easier for others to learn about it, and (2) it helps us check that we didn't miss anything, since we have a formal type system.

If I'm going to target a toolchain at your runtime, and expect decent performance out of it, and expect to see consistent performance with other runtimes that also implement such behavior, then I (and you) need a spec.

Moreover, if I'm going to implement tooling that can make sense of your "byte code", then I absolutely need a spec that's more specific than "it's JavaScript that might be optimized".

This is making me more wary of building reliable tooling around such a "bytecode", not less.

> If so, then that exposes a failing of all JITs, including the JVM.

Yes. But some languages and targets are much worse off than others.

> Actually, even a CPU might not, because CPUs also optimize in unpredictable ways; these same issues have to be dealt with at that level too.

This is why we sometimes get down to careful statement/instruction ordering to avoid pipeline stalls, or use architecture-specific intrinsics, or worry about false sharing.

> Again, there is room for native apps. But there is also room for portable, standards-based apps. The web is the latter.

The web could be both, if Mozilla and other web die-hards would critically evaluate the accident of history that is the modern web browser. What bothers me most of all is just how much Mozilla can hold back the industry. For an example of where the industry could go, look at what happened with virtual machine implementations.

First, they were particularly inefficient, and relied on tricks such as trap-and-emulate. Not all that different from how NaCl works, especially on ARM, with funny tricks like load+store pseudo-instructions. Gradually, hardware vendors took notice, and we saw instruction sets and hardware shift to add enhanced VM-specific functionality (and ultimately performance) -- first with VT-x, and now with VT-d (e.g., IOMMUs).

NaCl is in the perfect position to start down the road that leads to re-imagining what no-compromises sandboxed code looks like at the processor level. asm.js is going in the complete opposite direction, and in doing so has the potential to steer the entire industry away from a path that could introduce significant and beneficial changes in the realms of security, portability, and open platforms.


> If I'm going to target a toolchain at your runtime, and expect decent performance out of it, and expect to see consistent performance with other runtimes that also implement such behavior, then I (and you) need a spec. Moreover, if I'm going to implement tooling that can make sense of your "byte code", then I absolutely need a spec that's more specific than "it's JavaScript that might be optimized". This is making me more wary of building tooling around such a "bytecode", not less.

Then if I understand you correctly, you are in favor of a spec for something like asm.js? But perhaps the problem you see is that you worry the same asm.js code will be slow or fast depending on the browser? I'm not sure I follow you; please correct me if not.

If I have that right, then yes, that's a valid concern: there will be performance differences, just like there already are between JS engines on specific benchmarks. Note that asm.js is much simpler to optimize than arbitrary JS, so those differences could shrink over time. But there are no guarantees with multiple vendor implementations.

And that's the real issue. NaCl, Flash, etc. have one implementation, so you get predictable performance (and the same vulnerabilities...). But you don't get that with JavaScript, Java, C#, etc.

If NaCl were to become an industry standard somehow, then it would need to have multiple implementations, and would have the same unpredictability in terms of performance. Except that it is fairly straightforward to optimize NaCl, so in theory the differences could become small over time - but the exact same is true of asm.js, as I said earlier.

> NaCl is in the perfect position to start down the road that leads to re-imagining what no-compromises sandboxed code looks like at the processor level.

Again, PNaCl is a better comparison - even Google is shifting from NaCl to PNaCl (according to their announcements).

I see no reason that asm.js cannot be as fast as PNaCl; both are portable and can use the same CPU-specific sandboxing mechanisms. In fact, it would be interesting to benchmark the two right now.


> Then if I understand you correctly, you are in favor of a spec for something like asm.js?

Yes. Imagine I'm writing an "asm.js" backend for a debugger, coupled with toolchain support for DWARF. To tell you the truth, I'm not even sure where I'd start, since it's not like the spec exposes a VM or any virtual machine state -- but if it did, I'd need the spec for that.

> But perhaps the problem you see is that you worry the same asm.js code will be slow or fast depending on the browser? I'm not sure I follow you; please correct me if not.

That's part of it. Without a spec, I can't really rely on remotely equivalent performance, but that's hardly the only toolchain issue.


asm.js is just JavaScript; it isn't a new VM with exposed low-level binary details. You wouldn't need to use DWARF or write your own low-level debugger integration. You can debug code on it, of course; the right approach would be to use the JS debuggers that all web browsers now have, with source map support.

The goal with asm.js is to get the same level of performance as native code, or very close to it. But it isn't a new VM like PNaCl; it runs in an existing JS VM, like all JavaScript does. That means it can use existing JS debugging, profiling, etc.
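
As a minimal sketch (file names invented for illustration): the compiler appends a comment to its output pointing at a source map, and the browser's debugger uses that map to relate each JS statement back to a line of the original C.

    // tail of a compiled hello.js
    function _main() {  // compiled from main() in hello.c
      return 0;
    }
    //# sourceMappingURL=hello.js.map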


> You can debug code on it, of course; the right approach would be to use the JS debuggers that all web browsers now have, with source map support.

That's not really a replacement for a debugger that is aware of the real language and the architecture (VM or otherwise).

> That means it can use existing JS debugging, profiling, etc.

Which is a problem, because all of that stuff is awful compared to the state of the art of modern desktop and mobile tooling.


Matter of opinion; I actually prefer debugging C-compiled-to-JS to debugging C-compiled-to-native these days, mainly because I can script debugging procedures directly in the JS source and just run them.
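
For example -- a hypothetical sketch, assuming an Emscripten-style Module object that exports the compiled malloc:

    // wrap an exported function from the compiled module to trace its
    // calls, scripted directly in the page's JavaScript
    var realMalloc = Module._malloc;  // assumes malloc is exported as _malloc
    Module._malloc = function (size) {
      console.log('malloc(' + size + ' bytes)');  // log every allocation
      return realMalloc(size);
    };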

But sure, if you prefer gdb or such, then the web platform is not going to be a perfect match for you.



