From Asm.js to WebAssembly (brendaneich.com)
897 points by fabrice_d on June 17, 2015 | 331 comments



I think this quote speaks volumes - "WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks." Sometimes I think maybe, just maybe, the W3C and other web standards groups finally have some wind in their sails.

It may have taken a while, but with all these individuals and organizations cooperating in an open space, we may finally advance yet again into another new era of innovation for the web.

I am really excited about this, much like others in these comments.

We have been beating around the bush on a true assembly/development layer in the browser for a long time: Java applets, Flash, Silverlight, you name it - but no true standard that was open the way JavaScript is open. This component has the possibility of being the neutral ground that everyone can build on top of.

To the creators (Brendan Eich et al.) & supporters, well done and best of luck in this endeavor. It's already started on the right foot (asm.js was what led the way to this, I think) - let's hope they can keep it cooperative and open as much as possible for the benefit of everyone!


> WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks

Is Apple participating?


Yes: "I’m happy to report that we at Mozilla have started working with Chromium, Edge and WebKit engineers "

https://blog.mozilla.org/luke/2015/06/17/webassembly/


This doesn't seem to necessarily imply that Apple is directly involved. But it's good to know that Safari should be included in this effort.


That WebKit issue about WebAssembly was filed by an engineer working on "compilers and language runtime systems at Apple":

http://www.filpizlo.com/

https://bugs.webkit.org/show_bug.cgi?id=146064


Although that doesn't necessarily mean that Apple the company is involved, no?


Maybe the engineer who is involved wasn't able to obtain organizational signoff in time for Apple to put their name on the list of top-level endorsers of the proposed standard? Hopefully Apple can issue a statement later.


WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks. I’m sorry the work was done via a private github account at first, but that was a temporary measure to help the several big companies reach consensus and buy into the long-term cooperative game that must be played to pull this off.

So, the effort to get buy-in from the big companies that matter has been going on for some time, and now that we've done that, here's the result: Every big company that matters except Apple.

It's not as though the list of the four big companies that matter is too long to reasonably be expected to name the fourth, and the name of the fourth is not webkit.org or Fil Pizlo.

So, it seems to be either an "Oops, we accidentally left one of the four out of this important announcement," or there is still enough of a problem with Apple that we decided not to delay the announcement any longer to wait for them.


You are assuming a false dichotomy. There's a third possibility: what with a big four-letter-acronym developer conference and lots of other spinning plates, the full buy-in to include the company name didn't make the deadline set to get everyone else on board.

Since Fil opened a webkit bug to implement wasm, I would at least hedge my false-dichotomy bet and avoid getting my mad on. Free advice, take it or leave it.

(The first paragraph is actually me telling you pretty much what happened. Same thing as with WHATWG launch in 2004, BTW. Does not mean "OMG there is something WRONG and only TRULY SHINY COMPANY noticed and OBJECTED". Yeesh.)


That's great news, thanks for the additional context. Apple and the web have a ...complicated relationship, so it means a lot that they have no known objections and are likely to endorse. It seems too good to be true that all the browser vendors would agree on something this beneficial to developers :)


The important question here is: has the github repo been made public?

How can we know what concessions these players have made or what kinds of collusion they have agreed to?

Shouldn't the Web be built in the open -- not subject to a cabal of corporate overlords who won't reveal their intentions or thoughts?


As long as Apple hasn't officially agreed to this, and you seem to have some sort of embargo on even referring to them by name, something other than full buy-in from them is still a possibility. This possibility makes me nervous, because I think wasm is just what the Web needs, and I don't necessarily trust Apple to have "whatever is best for the open Web" as a guiding principle.

As for "getting my mad on," you seem to have gone off on some sort of tirade at the end there implying--how ironic--that I actually wanted TRULY SHINY COMPANY to save us from wasm, and your caps-lock key seems to have gotten stuck when you banged on it.


Please accept my apologies for the ALLCAPs. You were nervous in a way that suggested to me the kind of fannish knee-jerking that I unfairly lampooned.

I hope you can relax. All will be clear pretty soon, I am certain. There's no need to be nervous. Well, no need to be more nervous than usual! ;-)


All is well between us, and I can't thank you enough for what you're trying to do. Here I thought ES6 was the best news of the year for Web dev, but wasm will beat it by far--as long as the long-term cooperation you (correctly) said it needs really comes through. It's hard to relax when we're soooo close to something this tantalizing, all but one have officially committed, but it has to be unanimous, and that one is not...yet...saying....

I'll watch these pages and let out a woot! the minute Apple officially makes it unanimous.


It has never been Apple's style to release a statement for situations like this.

Expect to see it implemented, sites like iCloud made to take advantage of it, performance metrics aka marketing collateral gathered and then a slide or two at an upcoming Apple Event with cheers from the audience.

Apple is a product company first and a technology company second.



Yes, lower in his post he mentions the JavaScriptCore guy.


Yeah, it's amazing to see all the vendors working on this. Asm.js has momentum and dedicated support from Chakra and SpiderMonkey, but this is actually going to be created, supported with tools, and promoted by all the vendors.


Does everyone think this is good news?

I'm all for making the web faster/safer/better and all that. But I am worried about losing the web's "open by design" nature.

Much of what I've learned and am learning comes from me going to websites, opening the inspector and stepping through their code. It's educational. You learn things you may never read about in tutorials or books. And it's great because the author may have never intended for their code to be studied. But whether they like it or not, other people will learn from their code, and perhaps come up with [occasionally] better versions of it on their own.

This has helped web development evolve faster, and it's obvious how democratizing this "open-by-design" property is. I think we should be concerned that it's being traded away for another (also essential) property.

Human beings cannot read asm.js code. And a bytecode format will be more or less the same. So, no matter how much faster and more flexible this format/standard is, it will still turn web apps into black boxes that no one can look into and learn from.


> going to websites, opening the inspector and stepping through their code

Good news then, because even when loaded from the binary representation, browsers will apparently be able to show a canonical text format representation. (more info here: https://github.com/WebAssembly/design/blob/master/TextFormat...) Citation:

  > Will WebAssembly support View Source on the Web?
 
  Yes! WebAssembly defines a text format to be rendered when developers 
  view the source of a WebAssembly module in any developer tool.


WebAssembly isn't going to kill that notion; uglify already did. Nearly all codebases on big sites get run through a build process these days. The open web continues to live on through the open and free software movements.


I'm apparently in the minority, but I'm with the GP in that I've never been sure uglify and other minification were actually a great idea, for this reason. The gains are marginal if you're using gzip compression (you are, right?); the loss of source readability is a big deal for the web's open nature.

Saying that the open web lives on through open/free software also seems dubious to me. Most SaaS companies don't have their offerings up on GitHub.

I wonder if we're about to rediscover, with the browser itself this time, why the browser beat several cross-platform VMs.


> I've never been sure uglify and other minification was actually a great idea for this reason. The gains are marginal if you're using gzip

Smaller JS parses faster. gzip doesn't affect parse time.
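
For anyone who wants a rough feel for this claim, here's a sketch of how you could compare parse cost yourself in a browser console. Only standard APIs are used; the two bundle variables are placeholders you'd fill with real unminified/minified sources, and engines may lazily parse inner functions, so treat the numbers as a rough signal only.

    function measureParse(label, source) {
      // new Function forces the engine to parse/compile the source text.
      const t0 = performance.now();
      new Function(source);
      const t1 = performance.now();
      console.log(label, (t1 - t0).toFixed(2), 'ms');
    }

    // measureParse('unminified', unminifiedBundleText);  // placeholder strings
    // measureParse('minified', minifiedBundleText);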


Speaking as someone who has used uglify on occasion: sometimes you pull a trick and you don't want the competition to find out too easily. Make 'em sweat for it.

And on the subject of WebAssembly, if asm.js is the inspiration, great things will come from it - really a new era for the web. For example, things like https://github.com/audiocogs/ogg.js, and to me that's just a tiny glimpse of what the possibilities will be.


I agree that the web is far from the full ideal of "open-by-design", but it's still the most significant platform providing it to some extent.

The problem of uglify could be mitigated if someone invented a binary format for JS that was interchangeable with the current text format. The format would reduce the code's size while keeping it readable (you'd just have to run it through a binary-to-text decompressor).

I should also say that you can read uglified/obfuscated code. It just takes more patience. Example: if PS was written in JS and then obfuscated, how hard do you think it would be to find out how their color transformation algorithms are implemented?

You can't say the same thing for asm.js code though.


What's the difference between deobfuscating code, and decompiling it? Either way you end up with a weirdly-structured, canonicalized, symbolless mess.


The binary format that I mentioned/proposed would map one-to-one with a text format. They would be interchangeable. That means if you open a file in that binary format in a text editor (with the right plugin), you'd see almost exactly the original code. It's not decompilation. Only decompression.
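
To make the proposal concrete, here's a toy sketch of such a format (purely hypothetical, not any real spec): common keywords and punctuators become single code units, everything else passes through verbatim, so decoding reproduces the original text exactly - decompression, not decompilation.

    // Toy 1:1 "binary JS" sketch: DICT entries shrink to one code unit each,
    // all other characters are copied as-is, so the mapping is fully reversible.
    const DICT = ['function', 'return', 'const', 'var', 'if', 'else', '===', '=>'];

    function encode(src) {
      const out = [];
      let i = 0;
      while (i < src.length) {
        const hit = DICT.findIndex(tok => src.startsWith(tok, i));
        if (hit !== -1) {
          out.push(String.fromCharCode(0xE000 + hit)); // private-use code point as a stand-in for a byte
          i += DICT[hit].length;
        } else {
          out.push(src[i]);
          i += 1;
        }
      }
      return out.join('');
    }

    function decode(encoded) {
      return Array.from(encoded, ch => {
        const code = ch.charCodeAt(0) - 0xE000;
        return code >= 0 && code < DICT.length ? DICT[code] : ch;
      }).join('');
    }

    const original = 'function add(a, b) { return a + b; }';
    console.log(decode(encode(original)) === original); // true: round-trips exactly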


I think you're confusing cost of parsing with cost of lexing. A 1:1 binary representation of JS (like HPACK is for HTTP headers) wouldn't decrease parsing time (the time it takes to turn a stream of recognized tokens into an AST) at all, which was the goal here.


Agreed, but the idea is still beneficial. If a 1:1 binary representation of the data from the lexer doesn't yield much benefit, then a representation of the AST might.

And reading from that data could be much faster. And with a supposedly smaller file format, it probably wouldn't be so necessary for a minifier to rename symbols to single-letter ones to save a few more bytes.

[Edit: added the following paragraph]

This could yield a smaller file format (which is what people usually want from JS minifiers), without sacrificing the readability of the JS code.


Renaming symbols, and minification in general, is not really "necessary", it's just something people do to extract a few % more of performance. If they had that file format, they'd still have exactly the same reasons to rename symbols, so they'd still do it.

After all, if people cared about leaving their files readable, they'd just publish the non-minified version as well. Those who don't, won't avoid renaming either.


I agree that renaming symbols would still save a few more bytes, but I still think that with the right file format, symbol renaming and mangling would become an insignificant, unnecessary micro-optimization.

But that should only be tested in practice, so ... :)


This seems backwards. If I have JS source and the identifiers take up 10% of it, then symbol renaming and mangling can shrink my code size by something less than 10%.

If we shrink that other 90%, so that the formerly 10% is now 80%, renaming becomes more attractive.

This doesn't hold if people have a fixed problem with a fixed solution and fixed constraints... but problems and solutions grow.


> If we shrink that other 90%, so that the formerly 10% is now 80%, renaming becomes more attractive.

Convinced :)


I would say the same thing for asm.js. If you had enough time/patience you could still determine how some given logic worked in some sample asm.js code.


> if someone invented a binary format for JS that was interchangeable with the current text format

We already have gzip, although even with gzip minifying is useful, because gzip isn't content-aware. You can also enable source maps in production if you care about openness. The client doesn't have to fetch the maps/sources until the developer tools are opened.
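
Concretely, that lazy-fetch behavior hangs off the standard sourceMappingURL comment at the end of the shipped file: the browser only requests the map (and the original sources it references) once devtools are opened. File names here are just illustrative.

    // app.min.js - what every visitor downloads
    !function(){console.log("hello")}();
    //# sourceMappingURL=app.min.js.map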


Well, gzip is general purpose. A binary format would be aware of JS' semantics and possibly yield much higher compression rates.

In fact, WASM's binary format is intended to be that way, and apparently it yields better compression than gzip alone: https://github.com/WebAssembly/design/blob/master/BinaryEnco...


WebAssembly will be more readable than asm.js, so it is an improvement over the current state of things:

https://github.com/WebAssembly/design/blob/master/FAQ.md#wil...

I do agree though that asm.js is less readable than handwritten JS. However, as others pointed out, when handwritten JS is minified, it also becomes quite unreadable.

Overall, I think WebAssembly will keep the web about as human-readable as it currently is.


Questions like this ignore the economics of minification. If you're a website and you have the option to ship a megabyte of unminified JavaScript or 150 KB of minified JavaScript, the minified JavaScript makes a ton of sense. Sure, gzip helps, but not all the way.

Same thing with WebAssembly: wishing WebAssembly didn't exist isn't going to make it go away. It exists because there is a real need for it: the need to securely and efficiently run high-performance C++ codebases in a web browser. Not everyone has this need, and not everyone will use it, but WebAssembly (and asm.js and Emscripten) solve a real problem for some people, and they simply cannot realistically target the web without this technology.


Well, I didn't really say I wish WebAssembly didn't exist (although the title of my comment somehow implied that).

I've written a more complete discussion here: https://news.ycombinator.com/item?id=9743859


The FAQ at https://github.com/WebAssembly/design/blob/master/FAQ.md has an item about that. WebAssembly has a view-source implementation that translates the bytecode into a textual representation that is easier to read than asm.js.


Unfortunately, with minification, obfuscation, and transpilation, I think learning via view source has long been dead on the web. The loss of fidelity is simply too great.

That said, just as you can convert LLVM bitcode back and forth between binary and textual representations, you will be able to do the same with WAsm.


Chrome's pretty print does wonders.

It's still not as good as raw source, but I have definitely analyzed obfuscated/compiled JS without incident.


Well, the bitcode is apparently an encoding of LLVM IR, which doesn't translate back to the original source code http://stackoverflow.com/questions/5180914/llvm-ir-back-to-h...


[edit: additional context: https://github.com/WebAssembly/design/blob/master/FAQ.md#wil...]

I think this argument is sort of a nostalgic one, especially with the advent of mobile apps. Mobile apps, with their quality and responsiveness, are the standard by which software is judged. For the web to be a positive, competitive, open platform going forward, it's extremely important that user experiences (as in end user, not software-developer JS user) are the priority. As Brendan points out, parsing something that is effectively already bytecode just to turn it back into bytecode becomes a hotspot, so this is a way of removing that hotspot. asm.js has enabled WebGL and canvas to compete with native experiences... so building this out seems like the best way for the web to get better.


We should try to make the web faster. But it seems to me that in this process there is no discussion about keeping the web readable. (Source maps can be turned off.)

I don't think these two features, fast+readable, oppose each other. Not in the long run at least. I'm sure we can find a solution to keep the two together, as long as we consider both to be essential.


It could be addressed culturally. There's no need to mandate a single channel for distribution to humans (source) and machines (binary).

We could define a standard discovery mechanism for embedding a source URL, e.g. github repo + changeset, or PGP-signed javascript.

This would be even better than "View Source" (e.g. source provenance, change history and human reviews/comments), and the browser can still correlate runtime inspection with relevant source code.


What I find interesting about this discussion is the implicit realization that the freedom to inspect the source code is valuable. It has not always been the case that one could make such a statement and expect the majority of readers to instantly understand. What a long way we have come!

I suspect, though, that we still have not come far enough to be able to deal with this problem culturally in the mainstream. The technical solution you present would work exceptionally well alongside licenses like the AGPL and would definitely be a boon to the free software community. However, are people as eager to provide freedom to others as they are to take advantage of it?

I choose to mention the AGPL on purpose because it goes a fair way beyond what you propose. It grants all 4 free software freedoms, not just the freedom to inspect code. I could well understand if someone felt that they were very keen on the one freedom to inspect code, but couldn't bring themselves to champion the other 3. The problem is that many, many other people feel exactly the same way about all of the 4 freedoms. Not only do they not see particular value, but they are actively opposed to it.

I think you would fight a losing battle trying to implement this in a standard. In the same way that I couldn't realistically expect to convince most people to use the AGPL in their web projects, despite the obvious benefits that I see, you probably could not convince enough of the powerful players to do this. As a programmer, once you live in a world where this freedom is ubiquitous, it is obvious and even jarring when the freedom is withheld from you. As a decision maker protecting corporate interests, it is merely a matter of tradeoffs and the value is not readily apparent. Certainly, it is not something that most people outside of the programmer community would want to encourage as the default course of action, even if they are willing to engage in it as an exceptional case.

Of course, I would love to be proven wrong ;-)


I view the situation slightly differently: we now have a few examples of business strategies which employ open-source as part of a freemium sales model or for strategic commoditization. Such businesses can identify the subsets of code which qualify for one or more of the software freedoms, i.e. they have already done the legal groundwork for source-controlled separation into different asset classes. A "marketing standard" for View Source would let them earn incremental returns on the already-cleared code.

Github would have a lot to gain from such a standard, as they are already in the code history and metadata business. It could be prototyped with a browser extension, prior to standardization. In time, businesses will care more about software supply chain transparency, for open and closed code. A "View Source" standard could optionally request authorization before the source is made visible, e.g. from a private Github repo. The value comes from the run-time coupling of source and behavior, and it can be derived from source anywhere on the licensing spectrum between proprietary and public domain.


> A "View Source" standard could optionally request authorization before the source is made visible, e.g. from a private Github repo.

Or they could make a protocol to select which version you want to see: the wasm or the plain JS. No need to mix external websites like GitHub into the loop.


A cultural solution, to me, sounds like the ultimate goal. But I observe that culture usually arises from the constraints that a society lives with.

I think many people would gladly opt-in to adopt the standard you proposed, but perhaps commercial projects won't have the incentives to do so.


Like the web itself, commercial companies will eventually come around. It can be the "new new" content marketing, since it's a small step from source-discovery to company-brand-discovery. Code becomes a proxy for author-reader connection, like prose.

We can culturally transform "view source" into a reputation metric that impacts visibility and revenue.

Look at the code snippets in the recent Bloomberg article, "What is Code", http://www.bloomberg.com/graphics/2015-paul-ford-what-is-cod.... Imagine a weekly NY Times list of Top 10 New Code, or Genius.com annotations, or CC-BY code comments which appear in search engine results, etc.

Society will increasingly care about coding, as writing code|prose influences thought, and more people will seek to understand the tools that make us as we make them.

Thanks for raising this important requirement.


Very interesting.

Are there any books/studies/examples of commercial companies doing something like this?


It's more of a vision than an observation :)

Web site easter eggs and job ads-in-code are a precursor, http://www.bbc.com/news/technology-25826678 & https://www.smartrecruiters.com/blog/the-5-most-creative-dev...

W3C has an Annotations Group working in this area, which includes commercial companies and educational institutions, http://www.w3.org/2014/04/annotation/report.html



As others have already stated, the web today can be quite hard to read. WebAssembly doesn't make that situation worse. WebAssembly, however, is and will continue to be well defined and openly documented. I am confident tooling will come that will help humans understand this format.

It might be time for a WebDwarf debug format, though, to help us. Source maps aren't quite robust enough.


The web is hard to read because people have an incentive to use alternative languages/tooling to produce web content.

These alternatives wouldn't exist (or be so widely used) if the web provided the benefits that these tools give.

For example, look at how many people switched from CoffeeScript to ES6/7 once these languages started providing most (though not all) of the benefits that CoffeeScript provides.

In all honesty, I don't think sourcemaps are the answer to the black-box problem.


I think you have a point, and (elsewhere here, among other places: https://brendaneich.com/2007/03/the-open-web-and-its-adversa...) I've argued that the Web's open-box nature matters. It's not just an accident or hindrance.

If I'm right, we won't see wasm overused such that the web becomes a sea of monolithic and opaque apps (as too many mobile OS homescreens are). Per the FAQ, wasm is also for hot kernel code and reusable libraries helping mostly-JS apps.

It's up to web devs to keep innovating ahead of browsers, to keep pushing for better HTML, CSS, JS, and beyond. One way to do this might be via wasm modules that extend the web. The https://extensiblewebmanifesto.org/ project continues.


I think the only way the web could remain mostly-JS apps is if the benefits of writing mostly-JS apps would outweigh the benefits of writing, well, not-mostly-JS apps.

And I'm not sure if that's gonna be true.

For example, I'm sitting here, arguing for an open and readable web, and yet I can't wait for a Rust-to-asm.js workflow to get stable enough so that I can move the perf-critical parts of my app to Rust (you know, the parts that could be learned from and hacked on by another developer).


Serious question: why could not those other devs learn from your Rust code, even decompile wasm back into it via their devtools and ye olde view source?


Because the source-map may not be available. And they might not know Rust. Or perhaps having to setup a Rust dev env to play with this code is too much work and it's just not worth it.

Anyway, I think I should explain my thoughts better instead of only pointing out a problem. I wrote this comment: https://news.ycombinator.com/item?id=9743859


Optimizing compilers.


Sometimes you must build with -g. WebDWARF ;-).


Can you please clarify? I've seen WebDWARF mentioned twice now, but when I Google, I get a bunch of results about a shareware clone of Dreamweaver.


DWARF is a debugging data format for ELF files.[1]

WebDWARF, then, would be to WebASM as DWARF is to ELF files.

[1]https://en.wikipedia.org/wiki/DWARF


Thank you!


I'm suggesting that when debugging optimized code, you'll use -g as on native platforms. I'm speculating further that with wasm, decompilers will rise to the view source challenge.


I think it is a matter of perspective. I would prefer a web where I can use any language I want and not be forced to use only JavaScript to create content. So in that sense this could give me more freedom.

For example, I may choose to write my code in Rust, and that imposes a burden on you (assuming you know only JavaScript), but it does not curtail your freedom. It's not as convenient as before for you, since you now need to learn Rust. I think the freedom has spread to the person who is creating the content (code). The general consumer does not care, but the tinkerer/hacker now needs to put in more effort to parse the creation.


So long as the wasm blob can be read as Rust by a third party.

If it's a one-way Rust source -> wasm transformation, even a Rust user might have a hard time following what wasm spits out.


I agree that giving authors more freedom is a good thing, but as I argued, it also has serious downsides. But I think there are ways to keep the best of both worlds. I wrote a comment here with more detail: https://news.ycombinator.com/item?id=9743859


Not really. I happen to think that a browser is supposed to enable the browsing of HTML documents, but every year the browser becomes more of a winner-takes-all platform. Which means my hopes of seeing JavaScript go out of existence keep waning.


I'm glad I'm not the only one thinking this.

WebAssembly is in itself probably a good thing, but yeah, as is evident from the replies you're getting, for lots of people readable source simply isn't a priority or even a goal.

Which is probably why we have the GPL. But unfortunately, it's not seeing much use on the web.


This is at least addressed in the comments, which point to this link: https://github.com/WebAssembly/design/blob/master/FAQ.md#wil...


AFAIK, WebAssembly code will come from places like LLVM IR. The compiler pipeline can simply choose to omit references to the original source code. Without the source, the WA code (whether in binary or text) will be just IR data. I haven't read IR code, but I'm guessing it's far past the AST stage, with many aggressive optimizations applied to it. It won't look anything like the original source.


"Open by design" doesn't have to mean plain tex. I learned large swaths of SSLv3 and TLS just using Ethereal and later WireShark to watch handshakes. WireShark handles HTTP/2. Byte code decompiles easily. I understand you concern, and I love view-source too. But I don't think this loses that.


Like it or not, the web needs a way to protect business logic, no matter how imperfect it may be. Besides, uglification has already rendered most deployed sites unreadable anyway - no serious business that I know of will deploy non-minified code.


Keep your important business logic server side if it needs to be protected. No serious web business lets its proprietary magic happen client side, obfuscated or not.


When I got into computers there wasn't any open source, and yet I was able to learn a lot.

In the demoscene it was very rewarding to be able to match other demos without any hints about how the others had done it.

Now you just look at the code and you're done with it.


Did Google Closure Compiler or other JS obfuscation tools ruin education or open design?

I don't think view source has been instrumental to the web or education. Yes, there have been good moments and uses of it, we all have our anecdotes, but it's not a killer feature.

I would happily remove the ability to view source in favor of a faster, richer web site.


https://news.ycombinator.com/item?id=9735042

I think this feature is instrumental, or rather should be instrumental.

We are talking about the platform that will soon define the main medium of human expression. It's no less significant than everyday language. It should be open, by design.


I don't understand this. What does open mean? How open does "open" need to be, to be truly open?

There are 7 layers in the OSI stack. There is no internet without them. Do you need to know how all of them are designed to be "open"? Even if you go on Wikipedia and read the designs, you'll get an abstract view of them. But does that tell you enough? The implementations are all different.

Focusing just on layer 7, the page you are viewing: do you need to see the actual HTML blocks, CSS, and JavaScript to be "open"? For a sizable website, this front-facing page only represents a fraction of all the tooling used to produce the site. Do you need to know how those hidden parts work too to be "open"?

I think the big distinction for me is HTTP vs. the Internet. If I click on a page and it includes a remote JavaScript file, I want to know that my computer is connecting to another server to get that file, but do I really have to be able to read it? My computer already connects to others over the internet in non-HTTP settings where I can't read what I'm getting.


The question of "How open?" should be answered in a design process where "As open as possible" is kept as a goal, along other goals such as "As fast as possible" and "As safe as possible", etc. They should be all considered as factors in the trade-off decisions.


Well, some web apps are already black boxes, due to the use of languages that are compiled to JavaScript (the JS output is usually completely unreadable) and tools like Google Closure Compiler. WebAssembly won't make much of a difference.


There is an interesting discussion going on here, and I'd like to share a few more thoughts for clarification.

There is a TL/DR at the bottom.

---

My main point is that we should see openness as a primary goal in the design of the web platform, alongside other primary goals such as performance and safety.

What do I mean by openness?

A web app is open, when I, a developer interested in how that app is implemented, can look at its code, and say "hey, this code handles that video, these functions manage those audio nodes that make the sounds I'm hearing, and this code decides what happens when someone clicks on that button," as opposed to "these are some ones and zeroes and I don't know what they do." [1]

What's the benefit of this openness?

I think the benefits are obvious, but for starters, when web apps are open, everyone can learn from everyone else's work. And they can build on each other's work. Web content evolves faster. And all of this is there by design. Authors don't have to opt into it. It's there and it's ubiquitous and it's just how things are. [2]

How does WASM endanger this openness?

WASM doesn't necessarily endanger anything. It's good. It allows devs to make faster apps, experiment with different semantics, and discover better ways to define the higher-level features of the web (as per the extensible web manifesto).

But it could also be bad. Here is a hypothetical example:

Imagine someone writes a graphics library for their app. It doesn't have the bloat of the DOM. It's fast. And it allows them to make really cool apps.

But it's in WASM. It's not really a web app. Sure it has a URL and can't hijack your computer. But it's not HTML; others can't read it. It's not hackable. It's not mixable. It's not even indexable. Adblock can't tell its content from its ads. It's just a portable black box with a URL.

And devs have many incentives to write apps this way. Not all apps need to be indexed. And everyone could use the better performance.

So imagine a future where most web apps are made this way, each using a different rendering engine, written in a different language. I think it's clear why that could be bad. (Tell me if it's not.)

So, what do I propose?

I don't propose that we ditch ASM or WASM. These are steps in the right direction.

But we need to recognize that while safety/perf/better semantics are natural needs that drive the design of WASM, openness is not a natural need and it won't naturally affect WASM's design. Let me explain:

People need web apps to be safe; if web apps aren't safe, people won't open their browsers. People need web apps to be fast; if not, they'll have a reason to prefer native apps.

So, we have strong natural incentives to make the web safer/faster/etc. But we don't have any strong natural incentive to make it "open." So "openness," naturally, won't get enough attention.

But if we think that this openness should be a feature of the web, then we should treat it as a first-class feature, among the other features like perf and safety. Meaning that when making a decision about adding a functionality to the web platform, we should not only consider how it affects the web's safety and performance, but also what it does to the web's openness.

And we should communicate that to the community, especially when news like this comes out. So that readers don't just assume that the web is only gonna get faster, but know that its openness is still gonna be a goal. This will also help create momentum and community feedback for this goal.

What does having openness as a goal mean in practice?

It probably means that committees would constantly look at what devs do in WASM land, and create standardized, high-level versions of them to be used by all developers [3], and also couple these features with great tooling and other benefits that are hard to replicate for WASM. [4]

This makes sure that developers have all the incentives to try to remain within high-level APIs and standard structures as much as possible (thus, keeping web apps readable/interoperable/etc), and only venture into WASM land when they absolutely need to.

I should conclude by saying that I realize this is probably what WASM's authors intend to do. But imho, it is just not being communicated very well. Many people don't notice that openness is a goal, some don't see its benefits. That's why we should communicate openness as a primary goal and write it in our documents, in huge black letters :)

>> TL/DR: Openness is important. There isn't much community momentum behind it (unlike perf and better semantics). There aren't natural incentives to focus on openness, so it might become an afterthought. Web apps might turn into high-performing black boxes. The black box part is bad. To prevent that, we should take openness as a primary goal, and communicate that with the community.

---

[1] WASM is more high-level than ones and zeros, but you get the point.

[2] Today's web apps aren't really open, per my duck-typed definition of "open." They're only open to some extent. It takes a significant amount of time and patience to understand how a certain feature is implemented in a typical web app. And I believe there is an opportunity to make that much easier.

[3] This is to some extent covered in the extensible web manifesto.

[4] It doesn't mean we should deliberately limit WASM's capabilities though. Let me know if I should explain better.

ps. I use compile-to-js languages exclusively. This discussion is definitely not a matter of taste or resistance to having to learn something new :)


Having been on one side of the perpetual (and tiresome) PNaCl-versus-asm.js debate, I'm thrilled to see a resolution. I really think this is a strategy that combines the best of both worlds. The crucial aspect is that this is polyfillable via JIT compilation to asm.js, so it's still just JavaScript, but it has plenty of room for extensibility to support threads, SIMD, and so forth.


> The crucial aspect is that this is polyfillable via JIT compilation to asm.js

So is PNaCl, with pepper.js. The difference there is that PNaCl also provided a more full-featured API, which nonetheless was harder for other browsers to support. Personally, I would have been happy to see NaCl-minus-Pepper standardized, since the sandbox would have been much easier to support than the APIs; browsers could then choose what superset of APIs to expose to that sandbox.


By "a more full-featured API" do you mean that Pepper has more features than the Web APIs? If so, then it's not polyfillable (and the solution is to add more features to the Web APIs). Or if you mean that PNaCl has more features than asm.js, then the solution is to add those features to JavaScript.

I'm glad that asm.js is the starting baseline for this work, because its exact semantics are standardized via ECMA-262. All that has to be done to define its behavior precisely is to create a new syntax for it. LLVM bitcode, by contrast, has a lot of undefined behavior. You could try to spec a subset of it, but why go to that trouble when TC-39 has already done the work for you?


Yes, Pepper has more features than existing web APIs, though in some areas web APIs are finally catching up.

> Or if you mean that PNaCl has more features than asm.js, then the solution is to add those features to JavaScript.

Nope. A polyfill is a temporary solution, and will not be a permanent design constraint. I'm very glad to see that the constraints of JavaScript will no longer be a limiting factor on the web. I look forward to the day (already anticipated in the linked post) when a JavaScript polyfill is no longer a design constraint.


What features in particular are you missing from the Web platform?

I see no good reason to add features to Web Assembly and not to JS. A Web Assembly engine is virtually always going to also be a JavaScript engine, so the implementation effort needed to keep them at parity will be minimal. And the demand from Web authors to add all the new Web features to JS will remain high. The likelihood of Web developers suddenly fleeing JS is comparable to the likelihood of WordPress migrating away from PHP. It's just not going to happen.


> What features in particular are you missing from the Web platform?

In terms of language: Myriad features from C, Python, Rust, and other languages. Static typing. Integers. Tagged unions. Real data structures. Efficient native code (asm.js does not suffice), which would allow implementing those.

In terms of libraries, a few off the top of my head: OpenGLES (no, not WebGL), real file APIs (both provider and consumer, and including memory-mapped files), raw TCP and UDP sockets (not WebSocket), a better and more efficient low-level rendering API...

> A Web Assembly engine is virtually always going to also be a JavaScript engine

Definitely not; browsers will include both for the foreseeable future, but a WebAssembly engine/JIT will be substantially different from a JavaScript engine/JIT, and compilation for WebAssembly won't go by way of JavaScript in browsers with native support. The article itself says the polyfill will be a temporary measure.

> And the demand from Web authors to add all the new Web features to JS will remain high.

> The likelihood of Web developers suddenly fleeing JS is comparable to the likelihood of WordPress migrating away from PHP.

There are web developers who actively like JavaScript, and there are web developers eagerly desiring another language; the latter motivates things like CoffeeScript, TypeScript, Dart, Emscripten, asm.js, Native Client, and many more, none of which would exist if every web developer was completely thrilled to write nothing but JavaScript.

And on the flip side, people who love JavaScript have created environments like node.js so they can use it outside the browser.

Everyone who loves JavaScript will keep using it, and it certainly won't go away. And JavaScript will be able to call almost all new web APIs. But not everyone will keep using JavaScript.

WordPress won't migrate away from PHP, but plenty of sites choose not to use WordPress.


Those language features aren't relevant to Pepper vs. the Web APIs: JavaScript supports integers, tagged unions, and so forth, if you compile a language that supports them down into JS. As for the library features, aside from a few I don't understand/agree with (how is WebGL not equivalent to OpenGL ES, and how is WebGL/Canvas not a low-level rendering API?), a bunch of those are forbidden for security reasons (like raw file access and raw TCP and UDP sockets), so you wouldn't want pages to just have access to them in the manner that a PNaCl-using Chrome extension does anyway.


> a bunch of those are forbidden for security reasons (like raw file access and raw TCP and UDP sockets), so you wouldn't want pages to just have access to them in the manner that a PNaCl-using Chrome extension does anyway.

Sure I do, with explicit permission granted. For instance, consider a web-based VNC client, without needing a server-side WebSocket proxy.

And there are far more where those came from.

Also, the advantage of making low-level web APIs is that any new high-level APIs built on top of them run inside the sandbox, so their implementation is not part of the attack surface.


Considering that people are already installing apps on their smartphones and nodding away all the checkboxes

   [x] share all my contacts
   [x] rummage through my images
   [x] tap into microphone and camera at any time
   [x] set my house on fire
such a list of allow/deny simply needs to come to browsers too.

Ideally they should be designed in a way that those feature unlocks get delayed randomly, so that they always "break" any code relying on those APIs by default. I.e. stuff should still work if the user doesn't give you the key to the city.


We had this, it was called "Java applets" (also, ActiveX), and it sucked.


Developing on the Web platform downright sucks compared to pure native environments. There are so many things wrong or missing, it's hard to know where to start.

The lack of any kind of standard library comes to mind. The ability to use something other than Javascript, without having it compile to Javascript, would be fabulous. Being able to use something, anything other than a hodge-podge of HTML/CSS and JS for making a UI would be wonderful.

Sooner or later someone is going to invent a browser that can browse something other than HTML, and HTML, CSS, and JS will become legacy.


it's not polyfillable

I'd be interested to know where/when this "polyfillable" term came about. It's not readily Google-able yet.



If you dig through my comment history, you might be able to find a comment I wrote on my first exposure to this term maybe ~2 years ago. I was utterly befuddled because I thought it was a graphics rendering algorithm (setting pixels in a framebuffer corresponding to the interior of a polygon specified as an edge list), and was horribly confused about why an article discussing some web technology would suddenly transition into a geometric problem utterly disconnected from the original topic.


It's a pretty awful term...


It sure beats "shiv" though. Glad that one has started fading.


shim makes much more sense than shiv


Spackle would have been a fun term.


> I would have been happy to see NaCl-minus-Pepper standardized[...]

If there was no standard API, then sites would have a different API they had to use for each browser. Then we're back to the `UserAgent` days.

Unfortunately, I think it must be an all or nothing ordeal :/

> [...]browsers could then choose what superset of APIs to expose to that sandbox.

Pepper can already do that. All Pepper APIs are retrieved by calling a function provided to the module at start with a single string argument (e.g. `"PPB_Audio;1.1"` or `"PPB_Audio;1.0"` or `"PPB_FileSystem;1.0"`). It's impossible to use PPAPI without first getting the specific interface structures from that function.


> If there was no standard API, then sites would have a different API they had to use for each browser. Then we're back to the `UserAgent` days.

You would need an API to send and receive messages between NaCl and JavaScript, but otherwise you could get away with just a sandbox that runs native code. Everything else could be incremental.


pepper.js is ahead-of-time offline compilation, which means it can't adapt to individual target runtimes.


So is WebAssembly: you'd compile your code to a .wasm file "offline". In both cases, you compile your code to a bytecode format, which the browser then compiles (directly or via JS polyfill) to native code.


A key part of WebAssembly's strategy is a polyfill that runs on the client and generates appropriate JS (asm.js or otherwise).


Right, there will be a wasm.js polyfill just like pepper.js, for browsers without native support. And once browsers have native support, that polyfill can be dropped.

The difference between PNaCl and WebAssembly is simply that WebAssembly doesn't come with a pile of API expectations established by a single browser, and that it has the blessing of multiple browser vendors. Ignoring the API, the latter could have worked with PNaCl just as well.


Any idea how this will end up working in practice? If it will actually require linking to an external .wasm file, that'll break the way we're currently using asm.js.

I'd like to see something that works more or less to the effect of:

    const wasmCode = new Uint8Array(...);   // compiled wasm bytes (placeholder)
    
    if (typeof executeWasm !== 'undefined') {
        executeWasm(wasmCode);       // hypothetical native path
    }
    else {
        compileToAsmjs(wasmCode);    // hypothetical asm.js fallback
    }


The current plan (https://github.com/WebAssembly/design/blob/master/Web.md#imp...) is to integrate WebAssembly into the ES6 module system (so you could have a JS module import a WebAssembly module and vice versa). With this design, the WebAssembly polyfill would hook into the ES6 module loader (if implemented natively) or the ES6 module loader polyfill.
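
A speculative sketch of what that integration could look like from the JS side; the exact syntax and loader hooks weren't final at this point, and 'physics.wasm' with its export is a made-up name.

    // Importing a WebAssembly module as if it were any other ES6 module (speculative).
    import { stepSimulation } from './physics.wasm';

    stepSimulation(1 / 60);  // call into compiled code from plain JS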


It should also be possible to have a <script type="application/wasm" src="something.wasm" />, and then additionally source a JavaScript polyfill that checks if wasm is supported and translates to JavaScript if not. That would remove the need to have JavaScript shims.
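
A minimal sketch of how such a shim might behave when native support is missing. translateWasmToAsmJs is a hypothetical translator; fetch, querySelectorAll, and eval are standard APIs.

    // Browsers skip <script> tags with unrecognized types, so a shim can pick them up.
    async function loadWasmScripts() {
      for (const tag of document.querySelectorAll('script[type="application/wasm"]')) {
        const bytes = await (await fetch(tag.src)).arrayBuffer();
        const js = translateWasmToAsmJs(bytes);  // hypothetical wasm -> asm.js translator
        (0, eval)(js);                           // indirect eval runs in global scope
      }
    }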


Hmm, neither one of these options (ES6 import or <script> type wasm) can quite fit in with the way we're using asm.js (including it inline with other JavaScript inside a Worker[1]).

If there were a way to flexibly invoke wasm in any scenario where asm.js currently works, I think it would be a lot friendlier and truer to the "it's just JavaScript" goal.

Furthermore, tying this to ES6 seems unnecessary to me. How does this affect those of us using Babel or TypeScript to output ES3 code? The purposes of wasm and ES6 just strike me as completely orthogonal.

---

1: There are good reasons we have to do this, but they're beyond the scope of this thread.


> Hmm, neither one of these options (ES6 import or <script> type wasm) can quite fit in with the way we're using asm.js (including it inline with other JavaScript inside a Worker[1]).

Could you import a module, and then call the resulting functions from within your Worker?

> If there were a way to flexibly invoke wasm in any scenario where asm.js currently works, I think it would be a lot friendlier and truer to the "it's just JavaScript" goal.

"it's just JavaScript" is a goal of asm.js; I don't see anything about WebAssembly that makes it a stated goal of wasm. "It works anywhere JavaScript works, and can call and be called by JavaScript" would be a more sensible goal.

> 1: There are good reasons we have to do this, but they're beyond the scope of this thread.

I'd be interested to hear them.


Not sure if the security model allows it, but maybe you could write the wasm into a blob, create a URL from the blob[1] and then use that as the <script> src.

Or maybe data URIs.

> How does this affect those of us using Babel or TypeScript to output ES3 code?

Well, if you're already transpiling the whole load of ES6 features then it's just one more shim to consider, no?

[1] https://developer.mozilla.org/en-US/docs/Web/API/URL/createO...
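
For reference, a minimal sketch of the blob-URL route using only standard APIs; the payload here is plain JS, and doing the same with a wasm payload is the speculative part.

    const source = 'console.log("hello from a blob-backed script");';
    const blob = new Blob([source], { type: 'application/javascript' });
    const url = URL.createObjectURL(blob);   // blob: URL, no network fetch involved

    const tag = document.createElement('script');
    tag.src = url;                           // in theory the same mechanism could point at generated wasm
    document.head.appendChild(tag);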


Any idea if wasm / ES6 modules / module loader shims work with object URLs and/or data URIs cross-browser by design?

Doing something similar for Workers, one of the problems we've had is that IE and I think Safari block that, so to make it work we have to fall back on a permanently cached shim that puts an eval statement inside self.onmessage. (Obviously no equivalent to eval exists for wasm, or I wouldn't have a problem here in the first place.)


I wouldn't say this is "tied" to ES6, but rather intends to integrate nicely. If a developer has no interest in being called by or calling JS, they should be able to ignore the ES6 module aspect. For workers, it should (eventually, probably not in the MVP v.1) be possible to pass a URL (which, with Blob + Object URL needn't be a remote fetch and can be explicitly cached (Cache API or IndexedDB) or dynamically generated (Blob constructor)) to a worker constructor.


>Could you import a module, and then call the resulting functions from within your Worker? ... I'd be interested to hear them.

I have more detail here[1], but what it comes down to is that our application's security model is dependent on the ability to pack all of its code into one file.

I'm admittedly not too familiar with the ES6 module system (we're using TypeScript modules), but it looks like importing necessarily requires pulling in an external file. Workers are problematic in a similar way, but simple enough to work around using a few different methods (that I can't think of how to apply to ES6 modules).

> For workers, it should ... to a worker constructor.

In our case, the asm.js modules aren't being used as standalone workers, but rather pulled in to a worker that does a bunch of other stuff, so that probably couldn't be applied here exactly.

If that blob URL setup worked with ES6 import, though, that could work. (It might make things messier for generated TypeScript code though, not sure.)

> I wouldn't say this is "tied" to ES6, but rather intends to integrate nicely.

When I say tied, I mean in the sense that ES6 looks to be the only method of arbitrarily executing wasm code.

To use wasm, I'd need the ability to drop it in the middle of a block of JS.

---

1: https://docs.google.com/document/d/1j5KnpVyDdIXVwEDCQpHGxbs0...


Almost nobody is using asm.js right now, and backward compatibility with asm.js should not be a top consideration for WebAssembly, which (with agreement from multiple vendors) will achieve something much more important than asm.js backward compatibility.


My point is less about anything related to asm.js per se, and more that there isn't a good reason for wasm not to support this particular use case.


With this module loader, can you generate your own code to load at runtime reasonably easily and efficiently? Say you're compiling from user input to asm.js within the browser (as I did the week asm.js was announced) -- can you change it to compile to wasm?

(That link pointed to another link on module loading, a long page I didn't find an answer on right away. Admittedly I'm only curious for now.)

Added: https://github.com/lukehoban/es6features#module-loaders shows module loading of runtime-generated JS code, so the answer is probably yes, hurray.


That'd be perfect; thanks! I'll have to play around with that once wasm is in a testable state.

As far as compiling within the browser, it sounds like compiling the wasm will be the same process as compiling to asm.js – except the final bytecode format won't be directly executable as JS. If that's the case, then I'd imagine any asm.js compiler could have wasm support added the same way emscripten will.

I think in both of our use cases it all just depends on how easy it will be to execute the code from there. (In your case, the worst case scenario is that you have to post the generated code to a server and redownload it – ridiculously ugly, but not a blocker.)


In the Java applet days you'd actually have to do that -- round-trip your generated bytecode back through the server -- and I was like "Hello, you must be kidding". That's why I don't just assume they've made direct loading possible.


I haven't followed the module loader work super closely, but the overriding impression that I've gotten is that the ability to customize how code is loaded and from where is a primary goal.


What's wasm's optimization story? Is there an optimizing JIT running on wasm code, or is the code supposed to already be optimized by the AOT compiler (in which case, wasm is less appropriate as a target for languages with very dynamic dispatch, which should still compile to JavaScript)?

EDIT:

I see an optimizing JIT is planned for future versions: https://github.com/WebAssembly/design/blob/master/FutureFeat... while the MVP is primarily meant to serve as a target for C/C++.


There are two different levels of optimization involved here. Since WASM isn't the native assembly language of any processor, the browser's JIT will need to translate WASM to native machine code, including efficient translation of SIMD and other instructions. Separate from that, compilers for various languages will need to have WASM backends, including the WASM constructs that translate to efficient machine code on various target platforms.


Sure, but there's a big difference between simply JITting WASM to native code trivially or with minor optimization, and serious profile-guided/tracing optimization that's required for efficient execution of languages heavily based on dynamic dispatch. From what I gather from the design documents it seems that the goal is to start with straightforward JITting (aimed at languages like C), and later add a more sophisticated optimizing JIT.


Sounds like it to me. High-performance JavaScript JITs have to rediscover type information that doesn't exist in the source language. WebAssembly appears to have enough information to make that mostly unnecessary, so much of that optimization can move into the compiler generating WebAssembly.
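
That difference is easiest to see in asm.js itself, where the type information a JIT would otherwise have to infer is spelled out as coercions the engine can trust. A minimal illustrative module (not from any real project):

    function FastMath(stdlib) {
      "use asm";
      function addInt(x, y) {
        x = x | 0;              // |0 declares/coerces int32
        y = y | 0;
        return (x + y) | 0;     // result is also int32
      }
      function halve(v) {
        v = +v;                 // unary + declares/coerces double
        return +(v / 2.0);
      }
      return { addInt: addInt, halve: halve };
    }

    // Runs as ordinary JS even without asm.js support:
    // FastMath(self).addInt(2, 3)  -> 5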


> so much of that optimization can move into the compiler generating WebAssembly.

Yes, but JavaScript can't be compiled to efficient WASM code. Dynamic-dispatch languages need an optimizing JIT -- compiling once runtime behavior is known -- to produce good machine code. In any event, it seems like it's a planned feature for a some future release.


Well, you at least need something like an inline-cache instruction (like invokedynamic). You also need GC hooks so the stack layout, object layouts, and pointer liveness are known. I don't see any reason why it couldn't be done in the future.

(We may not be in disagreement here—I agree that JS can't be effectively compiled to Web Assembly as it stands.)


No argument :) I just asked a question and then answered it myself based on what I read.

The design documents clearly state that WASM is intended to be a compilation target for C-like languages, and in the future may also contain a GC and an optimizing JIT.

Languages requiring a GC and/or not amenable to AOT optimization should -- at least for the time being -- continue to target JS as a compilation target (or implement their own runtime in a C-like language and embed it in WASM).


If I'm reading this correctly, they are not planning to add a JIT but just the features required for implementing your own:

    WebAssembly should support:

    Producing a dynamic library and loading it into the current WebAssembly module.
    Define lighter-weight mechanisms, such as the ability to add a function to an existing module.
    Support explicitly patchable constructs within functions to allow for very fine-grained JIT-compilation. This includes:
        Code patching for polymorphic inline caching;
        Call patching to chain JIT-compiled functions together;
        Temporary halt-insertion within functions, to trap if a function start executing while a JIT-compiler's runtime is performing operations dangerous to that function.
    Provide JITs access to profile feedback for their JIT-compiled code.
    Code unloading capabilities, especially in the context of code garbage collection and defragmentation.
From https://github.com/WebAssembly/design/blob/master/FutureFeat...


Right, because wasm is for AOT (ahead of time) languages first. To support dynamic languages that don't map to JS well, the future extensions including JITting wasm will be needed.


The Oberon language had a similar system called Juice back in 1997. It did exactly the same thing, using a binary format to store a compressed abstract syntax tree as an intermediate format that can be compiled efficiently and quickly. I think it even had a browser plugin, much like a Java applet. Life has interesting cycles. I don't have the best link for Juice.

[1] https://github.com/berkus/Juice/blob/master/intro.htm [2] ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf


Honestly, everything we're doing recently feels like rediscovering Oberon.

Oberon had name-based public/private methods, like Go. It had ahead-of-time compilation of bytecode, as you pointed out. It had CSP-style concurrency, again like Go. The web for the last two years feels like we're rediscovering Smalltalk and Oberon and acting like we've just invented everything anew.


We didn't acknowledge a debt to Oberon (did Java? It owes one too, Bill Joy evaluated Oberon closely).

My pal Michael Franz at UCI studied under Wirth. Michael and a PhD student, Chris Stork, did an AST encoder years back that influenced me and that I referenced re: wasm.

Oberon's great. JS was in the right place and right time. What can I say? I've been taking all the best ideas from the best projects and trying to get them incorporated, ever since 1995.


Sorry, just to be clear, I don't see anything wrong at all with raiding the best ideas from older languages. It's actually awesome: it means those ideas get used. I'm genuinely just amused at the cycle.


I originally designed the name-based public/private methods for Go. I didn't know Oberon had the same design.


That's just the memory span of society. Traits reappear whenever the context calls for them to emerge. Sometimes explicitly inspired by the past, sometimes implicitly, sometimes really reinvented.

Someone did a similar article 'go vs brand X' https://news.ycombinator.com/item?id=943554

I get the feeling that this loop thing is slowing down. Genes have been passed vertically and horizontally enough times for everybody to be aware of them. So many languages get the same features now: closures, optional static types, generators, streams, list/map/set types, channels.


We make a pilgrim's (if not a whig's) progress over time, good point.

The next generation of programming languages should address safety and ||ism. Rust takes these on based on prior work (e.g. Cyclone) but with influence from C++ best practices, which I think is winning.


You're referring to these ?

https://en.wikipedia.org/wiki/The_Pilgrim's_Progress

https://en.wikipedia.org/wiki/Whig_history

We'll see how Rust fares, especially with the recent stable release. Personally I'd like to see programming aim at Haskell, even Idris. But let's not fantasize too much (especially in a thread about WebAssembl*).

ps: why on earth did Cyclone not catch on? I'd think people would jump on a C with less rope... https://en.wikipedia.org/wiki/Cyclone_(programming_language)...


I am referring to those things, in general terms (I'm not a protestant or a whig!).

Haskell, Idris, and I must add PureScript are great, but will not sweep all before them. Especially when "systems programming".

Cyclone was (a) a research project and language; (b) too heavy on sigils. Rust started with sigils but ended in a great spot, with only & and region (lifetime) ' annotations. Usability, developer ergonomics, matter.


This is enormous news. I could see a scenario where, in ~5 years, WebAssembly could provide an alternative to having to develop apps with HTML for the web, Swift for iOS, and Java for Android. Instead, you could build browser-based apps that actually delivered native performance, even for CPU- and GPU-intensive tasks.

Of course, there would still be UI differences required between the 3 platforms, but you would no longer need 3 separate development teams.


This was also what Java applets were meant to deliver – I remember running some pretty great Java Applet-based games back in the day.

But Java ran as an embed, not a browser-level feature, and was much slower to load than Flash (which came to offer much of the same functionality in an easy-to-use IDE).


I can already imagine a minimal Java VM (just transforming bytecode to wasm instead of JITting to machine code) inside browsers, thus allowing Java (or any of the other JVM languages) to run in both the backend and the frontend.


Google previewed an Android-to-Chrome app compiler a while ago - can't remember the details. I imagine it would be a pretty good starting point for such a project instead of a full VM.



Why not Swift for the Web? Better yet, Rust.


Rust would really be exciting to see it supported.


That is exactly what some of us are excited about and what others (like the creators of Swift?) are terrified of.


The creator of Swift was a creator of LLVM. Once the language has been open-sourced, it won't be long before people are compiling it to WebAssembly and trying to use it to replace JavaScript. He might not be too upset.


This is probably the best thing that can happen to web development.

For quite a while, I've been thinking about how instead of hacks like asm.js, we should be pushing an actual "Web IR" which would actually be designed from the ground up as an IR language. Something similar to PNaCl (a subset of LLVM IR), except divorced from the Chrome sandbox, really.


I think that's what lots of people have thought since the 90s with Java applets. Despite ES6 and "it's not that bad", there's a lot of people that would really like to not have to deal with JS.

Edit: Really, it's annoying this didn't happen sooner. If Microsoft had thought of this early on and pushed it, they could have gained a lead in compilers and developer tools, for instance.


asm.js made wasm possible by creating a solid compatibility story and evolutionary path.


I think the biggest win is https://github.com/WebAssembly/design/blob/master/FutureFeat.... Now, instead of asm.js being only for emscripten-compiled code (or other non-GC code), WebAssembly can be used for higher-level, GC'd languages. And even better, https://github.com/WebAssembly/design/blob/master/NonWeb.md means we may get a new, generic, standalone VM out of this, which is always good (I hope I'm not reading into the details too much). As someone who likes to write compilers/transpilers, I look forward to targeting this.


I'm really hoping that WebAssembly doesn't bake-in higher-level features like garbage collection. That would preclude replacing GC with something better, such as Rust-style memory management.

Hopefully this will take more inspiration from actual assembly language and virtual machines, rather than from bytecode languages.


GC support doesn't prevent having Rust-style compile-time memory management. In fact, Rust itself may add a GC at some point.


As a library, though, which is very different than providing it as an inherent part of a language runtime.

I'm hoping WebAssembly doesn't inherently imply a runtime.


Efficient GC requires tight integration with LLVM, and I assume that means also tight integration with the Rust compiler. I don't think that can be done as a library or even a plugin.


Rust doesn't need an efficient GC of the kind Java has. Most of the resources you need in Rust are freed when they go out of scope.

What Rust needs is just a better RC implementation. Because only a small part of your data is reference counted, RC fits better with how Rust works, and an efficient implementation can combine with ownership semantics to outperform even very advanced generational GCs.


> I'm hoping WebAssembly doesn't inherently imply a runtime.

The present documentation seems to suggest that it won't really include a full-blown runtime, but may include a small number of builtin modules:

"The WebAssembly spec will not try to define any large portable libc-like library. However, certain features that are core to WebAssembly semantics that are found in native libc would be part of the core WebAssembly spec as either primitive opcodes or a special builtin module (e.g., sbrk, mmap)."[1]

This brings up the natural question of libraries and dependency management in WASM. Will there be a WebAssembly equivalent of Maven/npm/et al.?

[1]: https://github.com/WebAssembly/design/blob/master/NonWeb.md


As a library that would ideally tie into the compiler to implement a real, tracing, copying GC. Something close to that is already done with Servo's Spidermonkey GC integration.

With WebAssembly, it sounds like they want the same sort of thing- let manual/Rust-style/etc. memory management work, but still allow the necessary integration into the runtime for a GC.


How could you have threading without a runtime?


You can have APIs without a runtime. For instance, the WebAssembly implementation should have a function to spawn a new thread given a function pointer. That should result in a real OS thread running in the same address space. Any further semantics beyond that would be up to the software targeting WebAssembly.


asm.js doesn't include garbage collection and they're initially targeting C/C++, so I highly doubt they're going to add GC any time soon.

The linked 'future feature' reads more to me like a mechanism to access the garbage collector already present in the browser to allow interoperability with JS and the DOM.


It depends on how you define "bake-in". I consider it "opt-in" but it's important. If you don't offer it at all, you will continue to only see adoption as a target of low-level languages or you will have people reinventing GC on top of WebAssembly. The JS runtime has a quality GC right there, why not allow me to use it if needed (but definitely don't require it)?


I highly doubt it would ever bake in GC semantics. High-perf stuff like games are the biggest push for this kind of thing, where nobody wants a GC. One of the biggest advantages to asm.js right now is predictable performance because of no GC.

Most likely an API will be provided for interacting with a garbage collector, but it's completely optional.


It's a virtual machine, but it needs to interoperate with the rest of the web. At some point this probably means support for garbage-collected heaps, but it doesn't mean it's a pervasively GC'd VM like Java. Think more like how WebKit, Gecko and Blink have lots of GC infrastructure in their native, un-GC'd code.


Right, it has to interoperate with DOM objects; however, those can be treated as "foreign" object handles owned by the browser and just referenced by WebAssembly. Those objects can't be directly in the WebAssembly "address space"; that would be unsafe. So since they have to be managed separately anyway, they can use whatever memory model the browser uses.
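
For illustration, a minimal sketch of what such a handle table could look like on the JS side (the function names are made up for this example, not part of any spec):

    // JS side: browser objects live in a table, indexed by integer handle.
    var handles = [null];                 // index 0 reserved as the "null handle"
    function allocHandle(obj) { handles.push(obj); return handles.length - 1; }
    function getObject(h)     { return handles[h]; }
    function freeHandle(h)    { handles[h] = null; }
    // The compiled module only ever sees the integer handles, never raw
    // pointers into the browser's heap.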


Also important in my eyes: shared memory threading: https://github.com/WebAssembly/design/blob/master/PostMVP.md...

Threads themselves may be "ugly", but to build all the pretty abstractions you need them as foundation.


Yet also coming to JS, see https://blog.mozilla.org/javascript/2015/02/26/the-path-to-p.... Not a reason per se to do wasm, at least in the short term. Longer term, great to do full pthread things in wasm and not pollute JS.


Prepare for the onslaught of new (and old) languages targeting the web.

While this is welcome news, I am also torn. The possibilities are pretty amazing. Think seamless isomorphic apps in any language that can target WebAssembly and has a virtual dom implementation.

However, it finally seems like JS is getting some of its core problems solved and is getting usable. I wonder if it might have short term productivity loss as the churn ramps up to new levels with a million different choices of language/platform.

Either way, it will be an interesting time... and a time to keep up or risk being left behind.


> However, it finally seems like JS is getting some of its core problems solved and is getting usable. I wonder if it might have short term productivity loss as the churn ramps up to new levels with a million different choices of language/platform.

This announcement means JavaScript can no longer rely on "you must target JavaScript to work on the web" as a motivating factor. The set of people who like JavaScript as a language in its own right can keep using it, and will have the advantage of new JavaScript engines and extensions implemented on top of WebAssembly. However, the set of people who would rather write in another language will no longer be constrained by waiting for JavaScript to catch up.

Personally, I think this will be a good thing for both JavaScript and other languages.


I'm 100% ready to jump ship there. I would rather work at a shop that does any other language besides JavaScript for web GUIs, even with more than a decade of JS dev under my belt. The problem is they are impossible to find.


> The problem is they are impossible to find.

Scala.js[0], GHCJS[1], and Js_of_ocaml[2] among others.

[0] http://www.scala-js.org/

[1] https://github.com/ghcjs/ghcjs

[2] http://ocsigen.org/js_of_ocaml/


Languages that compile to JS are plentiful; shops that use those languages are much less so (with the possible exception of coffeescript).


I believe smrtinsert is referring to places that you could work at to write those languages, not that the languages themselves are impossible to find. I don't know if anyone is actually being paid by someone else to write Scala.js, GHCJS and/or Js_of_ocaml.


> places that you could work at to write those languages

Ah, missed that, assumed OP meant if only there was a way to write JS without having to write JS.


There are some end to end Clojure shops. Pretty sure Relevance uses cljs.


I have already replaced all JavaScript with Go (by using GopherJS compiler) in my personal projects, as of about a year ago.

I am really enjoying it and I don't see myself going back to writing JavaScript by hand anytime soon. I use it to do some frontend stuff that involves DOM manipulation, XHRs, WebSockets. Another use is WebGL, it allows me to write simple multi-platform games in Go that target OS X, Linux, Windows, but also the browser. I would not be writing those games in JavaScript by a long shot.


> Prepare for the onslaught of new (and old) languages targeting the web.

"Prepare for"? As if that hasn't happened already, with virtually every modestly popular language having at least one web-use-oriented compile-to-JS transpiler, and several new languages being invented whose only implementation is such a transpiler.

WebAssembly makes the process easier, and may make it easier to maintain those along with the non-web implementations of the languages, but it's a little late to prepare for the onslaught of web-targeting language implementations.


True, but they have (generally) been second class citizens with very little adoption, mostly because targeting JS has had quite a few drawbacks, such as huge payload size or tons of time to parse and optimize.

Assuming wasm delivers on its promises, that should change, and that is when those languages will stop being mostly experiments but real viable platforms.


WebAssembly still has a long way to go. They don't have a plan yet for:

* GC

* Interoperating with browser objects (even strings)

* Debugging and other tooling

I don't think any high level language will be able to compete with JS on an even playing field until that language can use the high performance GC that's already in every browser.

If your language has to either not use GC (huge productivity loss) or ship your own GC to the end user with your app (huge app size), it's an unfair fight.


Right now, though, all non-JS languages are second class citizens. They have to be turned into JS to actually run, and JS gets to run natively. This has huge implications for debugging, among other things.

And that won't change in the MVP of this, where JS will still have capabilities that WebAssembly-targeting languages won't have. But their clear vision is to make WebAssembly a VM that puts languages on equal footing, with tooling and debugging that works cleanly for all targets.


> Prepare for the onslaught of new (and old) languages targeting the web.

I'm not even joking when I say an RPG IV compiler would probably make you some cash.

Of course, I knew people who wrote their website in COBOL (Micro Focus COBOL to be specific). This would probably please them greatly.


Eh, the problem with JS is that it's starting to suffer the language feature bloat that, say, C++ has. I think the party will end in like 5 years, mostly after introduction of shared memory objects and traits.

Meanwhile, this is going to allow people to port really dumb things into the browser and expose all sorts of new crazy.

Still neat, I agree, if you prefer native code. That said, it's not an unalloyed good.


Not sure what you mean by virtual DOM. From the WebAssembly FAQ: "Is WebAssembly trying to replace JavaScript?

No! WebAssembly is designed to be a complement to, not replacement of, JavaScript (JS). While WebAssembly will, over time, allow many languages to be compiled to the Web, JS has an incredible amount of momentum and will remain the single, privileged (as described above) dynamic language of the Web. "

https://github.com/WebAssembly/design/blob/master/FAQ.md#is-...

The WebAssembly team is being incredibly thoughtful and open about their motivation and long-term plans, which is very refreshing.


So it's basically bytecode for the web without compiling to JavaScript, right?

Any language can now target that specific bytecode without the need for javascript transpilation.

For instance, Flash could target this format in place of the Flash player, making SWF files future-proof since they'd be backed by standard web tech.

So it's basically the return of Flash, Java applets, and co. on the web. And web developers won't have to use JavaScript anymore.

The only constraint is obviously the fact that the bytecode only has access to web APIs and can't talk directly to the OS like classic browser plugin architectures could.


It is really too bad that at some point in the last 18 years of Java VMs being in browsers, they didn't formalize the connection between the DOM and Java so that you could write code that interacted directly with the DOM and vice versa in a mature VM that was already included. It would have been way better than applets, way faster than JavaScript, and relatively easy to implement. The browsers actually have (had?) APIs for this, but they were never really stabilized.


I find it interesting that Java didn't become the standard for this as it seems like it has everything and is both fast and mature.

What might be the reason?


There are several important lessons to learn from the Java bytecode format, and members of the WebAssembly group (including myself) do have experience here. In particular, JVM class files would be a poor fit for WebAssembly because:

1. They impose Java's class and primitive type model.

2. They allow irreducible control flow.

3. They aren't very compact: there is lots of redundancy in constant pools across classes, and still a lot of room for compression.

4. Verification of JVM class files is an expensive operation requiring control and dataflow analysis (see the stackmaps added in the Java 7 class file format for rationale).

5. They have no notion of low-level memory access. WebAssembly specifically addresses this, exposing the notion of a native heap that can be bit-banged directly by applications.



Back when Java Applets were a thing, Sun wasn't friendly with browser makers. JavaScript was a gimmicky alternative that was created by a browser manufacturer. It had the foothold, and it grew.

Now Oracle isn't interested in the Web.


If I could just use my favourite language and not feel like a second class citizen, then I am not sure there would be anything else to complain about as a developer, really. A mark-up bytecode so that we could forget about the nightmare of HTML and CSS as well?


And remember the nightmare of lazy Java class loading, or huge SWF apps with distract-the-user loading splash screens?

The Web evolved to do incremental/progressive rendering, it's one of the best aspects. Undumping a frozen Dart heap was a goal to speed up gmail, but over long thin pipes (my "LTE", most of the time), incremental/progressive wins.

Sure, games come in big packages and good ones are worth the download time. For most other apps, small is beautiful. iOS app update still sucks, and not just because it's semi-automatic.


Fast-forward a few years, and imagine if a browser engine were nothing more than a WebAssembly engine and a "default" HTML/CSS implementation. You could replace that engine with anything you like written in WebAssembly, doing all its own rendering and input using the lower-level APIs provided by WebAssembly. So, every browser gets the latest rendering and elements, or at the very least polyfills via WebAssembly rather than JavaScript.


To me, this more or less sounds like what Mickens et al. are aiming for with Atlantis[0][1].

The browser becomes a simple kernel which knows how to execute a bytecode (their so-called "Syphon Interpreter.") The browser itself provides much simpler, lower-level APIs for doing I/O.

To actually render a page you still need a layout engine, a renderer, and maybe a scripting runtime. The difference is these components are provided _as bytecode at runtime_, they're not shipped as part of the browser itself.

Your page then specifies the environment it needs by requesting the different components you need. Then you just let the environment go to work on whatever page you served.

---

[0]: http://research.microsoft.com/apps/pubs/default.aspx?id=1546... [1]: https://www.youtube.com/watch?v=4c0DdOvH6lg


Sounds terrible for SEO, screen readers and adblockers.


You can already build awful, non-semantic HTML today, with a giant pile of <div> tags, CSS, and JavaScript. The web hasn't fallen apart.

Similarly, just because it'll be possible to implement a full web engine efficiently in WebAssembly doesn't mean sites will suddenly stop paying any attention to standards or accessibility.

As for adblockers, I can think of several ways to block ads in such a world, including hooking the sockets API to block or sandbox access to ad-serving hosts, or providing hot-patches that stub out common ad-serving libraries. It'll get harder, but not impossible.


"The web hasn't fallen apart" for sighted users. Those non-semantic, unusable-to-screen-reader sites are in fact an accessibility disaster for blind users, who can find it nearly impossible to use some sites.

(And a disaster for the companies that build them and get sued later.)


Maybe there will emerge a common "screenreader toolkit" for sideloading a non-visual interface instead of expecting every designer to remember to interleave ARIA hints at magic places in an otherwise fundamentally visual interface, keeping it all in sync as the design evolves, etc.


> "The web hasn't fallen apart" for sighted users. Those non-semantic, unusable-to-screen-reader sites are in fact an accessibility disaster for blind users, who can find it nearly impossible to use some sites.

I'm saying that div-itis is possible today, but it's a bad idea, and as far as I can tell many new sites still use semantic markup when available.

Along the same lines, I would expect that even if people implement a web engine in wasm, many will still use semantic markup.

That said, I agree that people don't pay enough attention to accessibility, and should. But I don't think wasm or wasm-based rendering engines will make that worse.


Even if you use only divs, the actual text is there in UTF-8 to parse out of the DOM, today.

If you go all-in on wasm to do "your own rendering", external software won't know where to find a "DOM" or how to understand it, unless some other kind of reinvented HTML-for-a-wasm-rendering-engine-defacto-standard is invented?

(This is more a rant against "let's reinvent HTML/CSS in canvas" than against a VM-based JavaScript replacement in general. Even though the latter sounds a bit terrible as well for the open web; imagine what the web would have looked like if .js never existed and a Flash/Java .swf/.jar engine was the only way to do scripting in a webpage.)


What I can tell you for sure is that most sites are accessible to the extent that HTML makes accessibility the default, and the instant people start doing stuff that's not accessible-by-default (JS checkbox widgets, for instance), it almost always gets ignored and dropped.

So when you start talking about ignore-the-DOM stuff, my strong suspicion is that it would all be completely and totally inaccessible.


Eh, the web has fallen apart. There are dozens of AJAX-only sites that rely on search engines having a full JavaScript stack to even display any content.

I’d argue we already lost.


I think we're pretty clearly headed in this direction. It's the only way out of the nightmare of our current webdev world.


Just start loading a single script tag and ignore HTML and CSS completely. Will severely impact usability and SEO though.


The Google Bot executes JavaScript and indexes the result, I don't see why it couldn't run wasm.


While it may very well run the WASM it'll need to interpret the output. How do you rank a running game of candycrush vs. the candycrush website in google results?


Well, unless you produce HTML dynamically, Googlebot supporting JavaScript doesn't really matter.


The Google Bot. Again, this makes it harder for competitors to ever enter the market.


Actually, I think this might be a great opportunity to separate applications/data that don't need to be indexed by search engines from those that do. Of course, we have to be careful that we don't get walled-off content in applications (which can and already does easily happen), but I don't think that is truly in the spirit of the sharing web that we have all come to know and love.

There are at least two distinct types of content - structured/unstructured content that should be exposed and can be indexed via some known mechanism by external visitors, and application content that perhaps need not be indexed.

This might be a time to evolve/establish what those mechanisms are for the betterment of both application development and content development/sharing.


robots.txt


Oh, goody, so now we all get to worry about a more fragmented development ecosystem.


Has any consideration been given to using a subset or derivation of LLVM bitcode a la PNaCl? I know there are significant potential downsides (e.g. according to [1], it's not actually very space efficient despite/because of being bit-oriented and having fancy abbreviation features), but it already has a canonical text encoding, it has been 'battle-tested' and has semantics by definition well suited for compilers, and using it as a base would generally avoid reinventing the wheel.

[1] https://aaltodoc.aalto.fi/handle/123456789/13468


Of course, several of the people working on WebAssembly are actually from the PNaCl team. And others, like the Emscripten people, have lots of LLVM experience as well. WebAssembly is being designed while taking into account previous approaches in this field, including of course using LLVM bitcode for it.

LLVM bitcode is simply not a good fit for this:

* It is not portable.

* It is not designed to be compact.

* It is not designed to be quick to decode and start up

* It has undefined behavior

* It is not sandboxed.

* It is a moving target (see recent big changes to LLVM IR on geps and loads/stores)

Each of those can be tackled with a lot of effort. But overall, better results are possible using other approaches.


Finally. IMO this is what the web has been calling for since AJAX went mainstream.

They are doing great work. The client's operating system matters little now, but it will not matter at all soon.


This still doesn't fix the biggest issue with running non-javascript code in the browser: browsers still offer no way to know when a value is collected.

e.g. if I allocate a callback function, and hand it to setTimeout, I have no way to know when to collect it.

Sure, you can encode rules about some of the common functions; but as soon as you get to, e.g., attaching an 'onreadystatechange' to an XHR, you can't follow all the different code paths.

Every time a proposal comes up to fix this:

  - GC callbacks
  - Weak valued maps
  - Proxy with collection trap
The proposal gets squashed.

Unless this is attended to Javascript remains the required language on the web.


Read up, you've missed a memo :-P. Plan for Harmony-era weak refs is to notify in next event loop turn.

Note turn-based notification. No way will exact GC schedule be leaked via callbacks from the GC. Ditto anything like Java's post-mortem finalization.

But your "e.g." is unclear, though. Why do you need to manually deallocate a callback function passed to setTimeout?


> Read up, you've missed a memo :-P. Plan for Harmony-era weak refs is to notify in next event loop turn.

Got a link to the memo? I haven't heard about this one.

> Why do you need to manually deallocate a callback function passed to setTimeout?

One of my use cases is running lua in the browser. see lua.vm.js: https://kripken.github.io/lua.vm.js/repl.html

When you pass a lua function to a javascript function, I create a 'proxy' object that holds a reference to the function in the lua VM. Only if javascript tells me it's "done" with it can I drop the ref inside the lua VM.
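
To make the problem concrete, here's a rough JS sketch (callIntoLuaVM and someLuaRef are made-up names for this example, not the actual lua.vm.js API):

    // Wrap a Lua function reference in a JS callable.
    function wrapLuaFunction(luaRef) {
      return function () {
        return callIntoLuaVM(luaRef, arguments);
      };
    }

    var cb = wrapLuaFunction(someLuaRef);
    setTimeout(cb, 1000);
    // Once the browser drops its last reference to `cb`, the Lua-side
    // reference could be released too -- but with no weak refs or
    // finalizers, JS never tells us that has happened.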


FWIW, this is exactly the same problem we had with making it possible for Embind to allow passing C++ functions into setTimeout.

We'd need to synthesize a JavaScript function that holds a wrapper object for the C++ function, and that wrapper object would need to be deallocated when the timer is done. For general callbacks, this is a hard problem, and finalizers would provide a clean way to solve it.


See, e.g.,

https://esdiscuss.org/topic/weak-event-listener#content-2

It's hard to find something fresh. I will stir the pot at the July TC39 meeting and get the committee on the record for weak refs with turn-delayed notification, or bust!


> See, e.g.,

> https://esdiscuss.org/topic/weak-event-listener#content-2

> It's hard to find something fresh. I will stir the pot at the July TC39 meeting and get the committee on the record for weak refs with turn-delayed notification, or bust!

As recently as September last year you told me it may happen... but with an indeterminate timeframe: https://mail.mozilla.org/pipermail/es-discuss/2014-September...


Yeah, it needs an active champion. I'll poke likely champs.


Weak refs are actually going in??? I got the impression they were dead because TC39 disliked them so much.


You and I had this exact conversation at Eataly a few months back. WeakMaps are cool, but JS is a garbage-collected language. All you have to do to GC something is stop referencing it.

WeakMaps are only really useful when you don't control the referenced object's lifecycle. Even then, you can simply invert your map or check in your callback.
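
For instance, a minimal sketch of that pattern (attaching data to objects whose lifecycle you don't control, without keeping them alive):

    // Extra data keyed by objects we don't own, e.g. DOM nodes.
    var metadata = new WeakMap();

    function tag(node, info) { metadata.set(node, info); }  // doesn't pin `node`
    function lookup(node)    { return metadata.get(node); }
    // When `node` becomes unreachable, its entry is collected with it;
    // no explicit cleanup call is needed.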


> JS is a garbage-collected language. All you have to do to GC something is stop referencing it.

That works well for JS... but if you are trying to use asm.js (or now wasm), you need to act on things getting garbage collected inside of your code.


You're talking about implementing your own GC inside JS. A valid use case, but a niche one. JS the language doesn't really need to expose GC internals. JS the compile target does. (Though it can be worked around).


> You're talking about implementing your own GC inside JS. A valid use case, but a niche one.

Any asm.js code that calls into DOM apis is going to need this. Otherwise they don't know when to free stuff.

Most of the current asm.js code out there just binds a single canvas element, and works with that. But if you want to actually call anything else....


I'm interested to see what the API side of WebAssembly looks like in browsers; hopefully this will make it easier to expose more full-featured sandboxed APIs to languages targeting the web, without having to tailor those APIs to JavaScript. For instance, API calls that actually accept integers and data structures rather than floating-point and objects.

For that matter, though in the short-term this will be polyfilled via JavaScript in browsers, it'll be fun to see the first JavaScript-to-WebAssembly compiler that allows you to use the latest ECMAScript features in every browser.


The API story for wasm on the web is the set of existing web APIs. That provides the same security model and capabilities as the web has right now.

Currently wasm can't access web APIs except by calling out to JS, which would then call the API, but in the future, direct calling paths are intended.


My hope is that WebAssembly will start motivating a set of lower-level APIs, with higher-level APIs being provided on top of those by WebAssembly "libraries". JavaScript tends to motivate much higher-level APIs, and you can't turn high-level APIs into low-level APIs, only the other way around.


This seems orthogonal to WebAssembly. This is already a goal for its own sake--see the Extensible Web Manifesto.


So for now, the idea is to write C++, compile it to ASM.js, translate it into WebAssembly, GZIP it, transmit it, unGZIP it, then run a polyfill to translate the WebAssembly into ASM.js?

This sounds absurd. I can't even get through getting Clang, LLVM, and Emscripten built from source as it is, it's such a house-of-cards with configuration and dependency settings. Have any of you tried building Chromium from scratch? I have, on three separate occasions, as I'd like to try to contribute to WebVR. End result: fewer gigs of free space on my hard drive and no Chromium from source.

Part of that is my impatience: I'm used to C# and Java, where dependencies are always dynamically linked, the namespacing keeps everything from colliding, and the semantics are very easy to follow. But even Node's braindead NPM dependency manager would be better than the hoops they make you jump through to build open-source C++ projects. I mean, I just don't get how someone could have at any point said "yes, this is a good path, we should continue with this" for all these custom build systems in the wild on these projects.

I could be way off. I'm only just reading the FAQ now and I'm not entirely sure I understand what has actually been made versus what has been planned. There seems to be a lot of talk about supporting other languages than C++, but that's what they said about ASM.js, and where did that go? Is anyone using ASM.js in production who is not Archive.org and their arcade emulator?

I don't know... I really, really want to like the web browser as a platform. It has its flaws, but it's the least painful solution of all of the completely-cross-platform options. But it's hard. Getting harder. Hard enough I'm starting to wonder if it'd be smarter to develop better deployment strategies for an existing, better programming language than to try to develop better programming languages for the browser's existing, better deployment strategy.

This telephone game being played by translator tools and configuration management tools and polyfills and frameworks and... the list goes on! This thing we consider "modern" web development is getting way out of hand. JS's strength used to be that all you needed was a text editor. Everyone--both users and developers--can already use it and run it.

If it's just one tool, I'll get over it. But stringing these rickety, half-implemented tools together into a chain of codependent systems is unacceptable. It just feels like they're foisting their inability to finish and properly deploy their work on us. Vagrant recipes are nice, but they should be a convenience, not a necessity.

Sorry. Good for them. Just finish something already.


For years, I've laughed (from skull throne) at the "whyyyy no bytecode" posts on HN. Now bytecode, still complaints. Still laughing, too.


I never complained about no bytecode. And I'm not complaining about bytecode now. I'm complaining about an apparent lack of commitment to bringing any of these projects to a useful status.

Don't you think developer tools should have good UX, too? Microsoft does, and Visual Studio just works. Oracle does, and Java and Netbeans just work. Even setting up Node and NPM was a better overall experience than trying to get Emscripten working correctly and integrated into my projects.

Emscripten and NaCL are both like 4 years old now. Why hasn't there been greater adoption of these tools? I postulate it's the bad UX. I mean, Emscripten only working with VS2010, three whole versions behind latest, is a pretty good indicator that there is a whole class of users that have just written it off. Is there any concern for that with WebAssembly?


I know, I kid.

Things took a while because vendors had their own prior paths to pursue, while asm.js had to prove it could get close enough to native perf. (You could say Mozilla's prior path was JS.)

Tools for JS are pretty good; VS does TypeScript and JS, will do wasm. Firefox devtools are giving Chrome devtools a run for their (bigger) money.

I think tools are the least of it, while agreeing they make a difference (see Mandreel with VS integration, vs. Emscripten).

NaCl (PNaCl, you must mean) never went cross browser as such, nor could it. WebAssembly is the PNaCl end-game, and a pretty good one.

As for Emscripten, it's a huge win that both Unity and Epic (Unreal Engine) support JS/WebGL as first-class targets. This happened over the last few years, tools built into the game engines. Doesn't disprove your point, you could be right! Let's find out.


Hey Brendan, thanks for not flipping a huge bird to the web community and sticking with it.

For myself, I've had very little to complain about web development since '08 or '09. It became an application platform for me with the emergence of the canvas tag, and since then it has grown into a full operating environment of sorts. There have always been awkward and limiting features to the web because it is such a hodgepodge and I think that's where most of the complaints come from. But the browser compatibility situation continues to improve, which has always been my biggest gripe.

I worry a bit about the exploit potential of WebAssembly as more features are layered atop what Asm.js offers. Don't add too much, okay? ;)


Thanks, Ed.

I see any web-scale runtime standard inevitably having versions spread across the installed base of any server, so prone to the same pressure JS was, which led to polyfills, graceful degradation, object detection, monkey-patching in full. People who complain about this are complaining about physics.

You're right to worry about security, which is why the "MVP" for wasm is co-expressive with asm.js, just faster-to-parse syntax. After that, we must be even more careful.


Not to mention that you still have the option of using vanilla ES5 as long as you like (and it's what I'm doing until I can move to a VS version with built-in TypeScript/ES6/JSX support). That's one of the best things about working with JavaScript: nobody is going to stop supporting the older version anytime soon.

And great work, this is very exciting news.


That's true. I literally have 15 year old JS that still runs. It's not pretty, but it still works as expected.


>"Is there any concern for that with WebAssembly?"

Not really. NaCL was limited to a single browser, and asm.js was a stepping stone towards a bytecode for the web. WebAssembly has support from the major browser makers, plus a (mostly) clean slate for developing a bytecode suitable for the web. This is a golden opportunity for the web to evolve into a high performance application platform.

As for tooling, C# and Java are anomalies; no other languages come close. What sets those two languages apart? They're very popular in the commercial world, and they're based on virtual machines (C++ and JS may have decent tools, but I'd rather debug C# than either, and part of that is the ability of the compiler to understand the program structure). A universally supported bytecode for the web is very likely to get decent tooling.


I think the emscripten and NaCl (core) teams should concentrate on the toolchain, not on UI tools or IDE integration. IDE support is only useful with full debugger support, and that's a massive amount of work, and I guess every new version of Visual Studio requires a lot more work.

I spent a bit of time writing a set of Python cmake wrapper scripts that make cross-compiling to 'esoteric' targets easier (see here: http://floooh.github.io/fips/). It doesn't solve the problem of directly developing/debugging for emscripten or NaCl in an IDE, but it lets you easily switch between cross-compile-target configurations, with the idea that you write and debug your code as a native desktop application in VStudio, Xcode or QtCreator (or any other IDE supported by cmake), and then only cross-compile to emscripten, NaCl or the Android NDK on the command line. In my experience, 99% of bugs happen in the platform-neutral high-level code and are caught by debugging the native desktop version.


My comments aren't about GUIs. A command-line interface is still a user interface. It still needs to be designed for the user experience. I count developers as a type of user, and just as much care should be put into the discoverability, accessibility, and usability of the software that developers must use as is put into the software that end users must use.


On the command line I don't see it as an issue. Both emscripten and NaCl provide standard gcc-style cross-compiling toolchains that are easy to integrate with other higher-level build systems. That's IMHO the only way that makes sense when providing a C/C++ compiler toolchain. The big missing piece in the puzzle is interactive debugging; hopefully with WebAssembly we also get a proper debug information format (something better than source maps), and can then debug-step in the browser through the original source code (this sort of already works with emscripten since it can generate source maps, but it is quite rough for big scripts, and mapping back variable names doesn't work very well).


So I should give up on JS programming, go back to C/C#/C++/Java/Swift and have them generate wasm? Kinda sad, I like JS. But I like performance too.

Is there a possibility for JS tooling? Babel etc? Or any compile-to-JS language like CoffeeScript, TypeScript etc being able to emit/convert to wasm?

Or maybe I'm missing the whole point of this? :)


Chances are JS will remain a first-class citizen. WASM is just for those who would prefer to not use JS. Basically, if you like JS hopefully nothing will change for you, if you don't like JS you will have better options than cross compiling to JS.


Boy do I hope you're right! Performance is likely an issue though. Wasm is designed to be faster, in both loading/parsing and execution.

I did notice that es6 import/export works for Wasm, so I can write performant segments in C. Whew!


> Wasm is designed to be faster, both loading/parsing and execution.

If you want that from your JS codebase, I'm sure it would happily compile to Wasm - you'd probably find that a JS-to-Wasm compiler would be one of the first proofs-of-concept for Wasm.


This misses the division of labor behind wasm (and asm.js, and PNaCl). JS is a dynamic language; in full it needs JITting, speculative optimization based on runtime profile feedback, and statically unsound type inference rescued by dynamic checks that have low enough cost.

In contrast, wasm is for C and C++ first, and in due course many languages. The C/C++-first design point is a way of limiting scope creep and catering to important native-code use-cases (games, AI, CV, etc.). It's the starting point, not the final state. But it involves the usual static types trade-off that reduces runtime overhead.

Therefore a full JS engine in wasm would not run very fast compared to today's in-browser JS VMs, which are evolving to be JS/wasm VMs. For dynamic languages to compile to efficient wasm, we'll need the future JIT-generated wasm support and other desiderata on the roadmap. These come after the "MVP" that is co-expressive with asm.js.

So the proof of concept for wasm must start with asm.js and grow to PNaCl in full (which has had APIs and optimizations that asm.js or JS and the Web APIs have not provided yet), and then on to dynamic language runtime support. HTH.


That makes a ton of sense, thanks for taking the time to clarify.


No problem. I could go on (but should cut myself short to save time for other things).

Another dynamic language support part of wasm's roadmap that's post-MVP: GC support, both for GC of allocations from wasm code, and for references from wasm memory into the JS (DOM whatever) object space. One GC to rule them all, and one VM too.


" I can't even get through getting Clang, LLVM, and Emscripten built from source as it is,"

I have no experience building emscripten, but Clang/LLVM I do.

Clang/LLVM are really easy to configure using cmake on any platform, and the configuration process just works. So why don't we start with: "What is the exact problem you seem to be hitting in doing this"?


> I just don't get how someone could have at any point said "yes, this is a good path, we should continue with this" for all these custom build systems in the wild on these projects.

I guess nobody ever said that. What people most likely were saying was: "This is a mess, but the expected costs of continuing this path will probably not be greater than the expected costs of changing course".

Note that there's a double "expected" in there. Making decisions under uncertainty is hard. On the other hand, taking a look from the outside and replacing every "expected costs" with whatever turned out to be the case is easy.


> braindead NPM dependency manager

Among all dependency managers I've used, npm is by far my favourite so this comment took me a bit by surprise. Which dependency manager do you like ?


NPM has a nice user interface, but having every dependency keep its own copy of its sub-dependencies is bad. What "npm dedupe" does to move modules up the node_modules tree as long as version numbers don't clash is also incorrect. What should be done is to have the node_modules directory contain subdirectories for each named module, and each named module contain subdirectories for each version that is required. Then no module would ever have to have its own node_modules directory, and disk usage would be kept to a minimum. And yes, disk usage has been a problem on some of my larger projects.
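
A rough sketch of the layout being proposed (module names and versions are just examples):

    node_modules/
      lodash/
        2.4.2/       <- shared by every dependent that asks for 2.x
        3.9.3/       <- shared by every dependent that asks for 3.x
      request/
        2.57.0/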


Interestingly, about five years ago, he said he couldn't see this ever happening: https://news.ycombinator.com/item?id=1905291


In that comment, I wrote

"A binary encoding of JS ASTs, maybe (arithmetic coders can achieve good compression; see Ben Livshits' JSZap from MSR, and earlier work by Michael Franz and Christian Stork at UCI). But this too is just a gleam in my eye, not on TC39's radar."

And WebAssembly is indeed a compressed AST encoding, not a stack bytecode. Shhh, don't tell anyone. You can still call it bytecode if you like.
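
As a toy illustration of the difference (the notation is made up for this example, not the actual wasm encoding), take the expression (a + b) * c:

    // Stack bytecode: a flat instruction stream.
    //   push a; push b; add; push c; mul
    //
    // AST encoding: serialize the expression tree itself.
    var ast = {op: "mul",
               lhs: {op: "add",
                     lhs: {op: "get", name: "a"},
                     rhs: {op: "get", name: "b"}},
               rhs: {op: "get", name: "c"}};
    // A tree like this compresses well because node kinds and shapes repeat.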


With an AST representation, one can do interesting things. Everyone could have their own custom code formatting rules, and it wouldn't have to affect anyone else. Comments would have to be included in the AST for this to work sanely.

This might even go as far as languages, though for this to work well, the code would have to cover an intersection of the functionality in all the languages "rendered" to.

(Haven't watched the presentation yet. I'm in public without headphones.)


Anyone published a webassembly hello world somewhere?


Your secret is safe with me!

(And thanks for the info!)


I guess this is in the spirit of NaCl and its bytecode, and the Java VM/Java bytecode, and the .NET runtime/.NET IR. It makes a lot of sense and I get it; it then sort of gets competitive with those efforts as well.


It does compete with those efforts in a sense, but ideally it is going to learn from those efforts and provide a powerful cross-browser equivalent to them instead of just trying to replace them.


> NaCL and its bytecode

Specifically PNaCl's bytecode. NaCl is a separate product that is native code.


Combined with something like e.g. Flipboard's react-canvas, this means we could bypass and re-implement most of the browser stack...


https://github.com/WebAssembly/design/blob/master/FAQ.md#wil... makes me happy as that was the first concern I had when reading these news :)


Can someone explain why not just go the JVM or .NET CLR path ?

Both well tested, well executed, great tooling, supported on many platforms, compilation targets of many existing languages.

Serious question... is it licensing?


1. Both the JVM and CLR have different focuses in terms of startup speed, which matters more on the web than anywhere else (this is one reason Java failed on the web). JavaScript and WebAssembly are designed to start up quickly.

2. If you have both the CLR and a JS engine in the same browser, you have problems with cross-VM GC cycles and the overhead that causes. That is one reason why WebKit opposed adding Dart bindings, for example. WebAssembly is designed to integrate with existing JS VMs.

3. Both Java and the CLR are open source, but both are patented. Both have licenses for those patents, but this has been the cause of much debate, as there are corner cases in those licenses (e.g. if you do not fully implement all of the VM per the license, you may not fall under its protection). This could be resolved - all existing patents could be offered up to the public domain - but I don't see Microsoft or Oracle (and there may be others with patents) doing so.


You can't easily integrate the CLR/JVM into browsers. As the FAQ notes, this can be built within existing JS engines relatively straightforwardly, rather than requiring a new giant runtime engine (as NaCl would have, and the JVM/CLR would).


There was an attempt to do this with the JVM a long time ago, called LiveConnect. It failed.


The fundamental models of the JVM and CLR don't meet the needs of web browsers. You could probably really aggressively alter them to do so, but in the end you've still got to run in a browser and integrate with existing javascript code.


>Can someone explain why not just go the JVM or .NET CLR path ?

I was about to ask the same question. Because once you require browsers to support something other than JavaScript, that's equivalent to requiring browsers to come with a Java plugin or CLR plugin or Flash or Silverlight installed.

It seems to me the only difference is that the web.asm plugin's inventor is the influential inventor of JavaScript, and can harness the support required to add a plugin to all browsers going forward. (unless of course i don't understand this proposal, which is entirely possible).


This is pretty awesome, and is a pretty good use of all the effort that's been going into asm.js

One question though - I found a proposal somewhere on a Mozilla-run wiki about a web API for registering script transpiling/interpreter engines. I've lost the web address, but if anyone knows any more about this I'd love to see it rekindled.


I think you're talking about module loaders[1]. I remember seeing on es-discuss that they have been postponed to ES7.

[1] https://github.com/lukehoban/es6features#module-loaders


Functionally that's it (didn't realise it'd been rolled into es6 modules though). Looks like what I was referring to was a third party library: https://developer.mozilla.org/en-US/Add-ons/Code_snippets/Ro...



Nice, but I'm still waiting for 64-bit integer arithmetic!

For our use case, what I like about this is that we can continue to use emscripten and the technology will come to us, rather than requiring app developers to invest in yet another technology (our switchover from NaCl to emscripten was very time consuming!)


64-bit ints coming along with SIMD in ES7^H2016.


Very very happy to see this.

Politically it appears to be a fantastic collaboration.

Technically, it looks like they have really thought this through hard -- if you look through the post-MVP plans (https://github.com/WebAssembly/design/blob/master/FutureFeat...) there are a lot of exciting ideas there. But it's not just pie-in-the-sky speculation; the amount of detail makes it clear that they have some really top compiler people who are rigorously exploring the boundaries of what can be accomplished inside the web platform (SIMD, threading, GC integration, tail calls, multiprocess support, etc).


WebAssembly Backend RFC on the LLVM mailing list:

http://llvm.1065342.n5.nabble.com/RFC-WebAssembly-Backend-td...


This is awesome.

We will probably need a package manager after that (like apt or npm).

A use case could be with ImageMagick, OpenCV, OpenSceneGraph or qemu inside the browser. All of them are huge and useful projects with many common dependencies.


So, is this the long way around to get us Java Applets all over again?


I hope that someone ports mruby to this. I've come to terms with javascript's syntax (via coffeescript), but I'd still rather not deal with javascript's semantics.


There is already an mruby port to the web using emscripten. When emscripten targets wasm, that port will get that for free.


> "MS seems to get the modern Web".

This made me laugh, sadly. They get the modern Web because they aren't winning at native app development.


I think it's more that native app development isn't winning (except in a few niches).


Didn't WMLScript (a subset of Javascript used for WML) have a required byte code representation?


Praise the lord, that was sooner than I expected. Next up: the DOM. Then there will be peace on Earth.

Does anyone know when all this started? I ask because only 83 days ago Brendan was on here telling us pretty emphatically that this was a bad idea and would never happen.


Please cite my post from 83 days ago. I think you're misreading it (perhaps "bytecode" is overloaded). I have good reason to, since I was in on wasm from the start, which was longer ago than 83 days.


I didn't intend to ruffle anyone's feathers with that comment. Yes I suppose I am misreading it, but I don't think I can be blamed for walking away from that exchange thinking transpilation to JS was the last word on the subject.

Anyway great work man. It's a happy day.


Thanks.

Here's Tim Sweeney on diff between "bytecode" and wasm:

https://twitter.com/TimSweeneyEpic/status/611237145857138688


Hi Brendan, just hopping on the end of this thread to tell you that I think it was just disgusting the way that you were boycotted for your donation. It seemed really hypocritical for them to mistreat you for your opinions.


Maybe we won in the end. If Brendan was spending all his time doing CEO type stuff, he'd be spending less time giving us awesomeness like this.


This was a group effort and I am last, not first. Mainly I played prophet, peace-maker, coach, bartender.

Not quite the Tetralogy of Tony Stark (genius, billionaire, playboy, philanthropist) but I'll take it! ;-)


This page makes Firefox on Android crash.


It crashes in iOS Safari on an iPad Air 2 for me.


I'm curious what the debugging story for this is going to be? Source maps?



Without support for proper threads, web assembly programming feels the same as programming a Z80 or 6502 back in the 80s.

And no, webworkers don't cut it, because they don't support structural sharing of immutable data structures in an efficient and flexible way.


Tedious to read out-of-date objections. Please do a bit of research and find out the latest. JS is getting shared memory thread support:

https://blog.mozilla.org/javascript/2015/02/26/the-path-to-p...

This means the MVP wasm likely includes threads (same timeframe, later this year), and post-MVP wasm can go for full-strength pthreads.
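
For the curious, here's a rough sketch of what the shared-memory primitives look like (following the SharedArrayBuffer/Atomics proposal; the worker file name and the buffer size are just made up for illustration):

    // main thread: allocate shared memory and hand it to a worker
    const sab = new SharedArrayBuffer(1024);     // shared, not copied on postMessage
    const counter = new Int32Array(sab);
    const worker = new Worker("worker.js");      // hypothetical worker script
    worker.postMessage(sab);                     // both sides now see the same bytes

    // worker.js: bump the shared counter without a data race
    onmessage = (e) => {
      const shared = new Int32Array(e.data);
      Atomics.add(shared, 0, 1);                 // atomic read-modify-write
    };

    // back on the main thread, later:
    console.log(Atomics.load(counter, 0));       // observes the worker's write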


Isomorphic Python here I come...


w00t w00t. This is pretty great overall.


Is there already a plugin out there that blocks this crap?


So this is like PNaCl, but targeting the web APIs, and, by making it collaborative, hopefully a real standard allowing independent reimplementation?

Ironic that Eich is the one to pull the trigger on JS.


Pepper was floated on plugin-futures@mozilla.org a long time ago; roc replied advocating not reinventing web APIs:

https://mail.mozilla.org/pipermail/plugin-futures/2010-April...

The collaborative aspect is important, but one side of a coin whose other side is polyfillability. Nick Bray of the PNaCl team stubbed out Pepper.js more recently, and JS finally got some SIMD and threads love. It's all converging, finally. Yay!


There are a few key aspects where web APIs will want to differ between JavaScript and WebAssembly, though. For instance, some APIs really do need 32-bit and 64-bit integers, not floating-point, and the APIs should be specified as accepting and returning values of those types. An API tailored to JavaScript would have to specify how to encode such a number into what JavaScript can express.

Ditto for data structures (with actual layout, not JavaScript "bag of key-value properties" objects).

Ultimately, I'm hoping new web APIs will be specified as a WebAssembly ABI, without taking JavaScript into account; the JavaScript interface to those APIs can be responsible for questions like "how do I call a function specified as taking a 64-bit unsigned integer?".
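
To make that concrete, here's a sketch of the contortion a JS-shaped API forces today versus what a wasm-shaped signature could state directly (the function and parameter names are invented for illustration):

    // JS-shaped API: a 64-bit file offset has to be encoded somehow --
    // e.g. as two 32-bit halves -- because Number is only exact up to 2^53.
    declare function seek(fd: number, offsetLo: number, offsetHi: number): void;

    // wasm-shaped ABI: the signature can simply take an i64
    // (illustrative wasm text syntax):
    //   (func $seek (param $fd i32) (param $offset i64))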


JS is getting int64 and uint64, so you need a better example, like zero-cost exceptions, dynamic linking, call/cc (listed in my post).


Can you link to where that is being discussed? I hadn't heard that int64 was being added, and my google-fu has failed to turn up anything. int64 would be very useful for long types in Scala.js: they run really slowly because they have to be emulated by splitting each value into smaller parts. Lots of JVM-targeting code I have ported at work was written with the "just use a long, it's not much slower than an int" thought process -- true on the JVM, not true in JS.
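
For anyone wondering why emulated longs hurt, here's roughly the shape of code a compiler targeting today's JS has to emit for a single 64-bit add (a sketch, not Scala.js' actual output):

    // Unsigned 64-bit add built out of two 32-bit halves; every long
    // operation expands into something like this, plus an allocation
    // for the result pair.
    function add64(aLo: number, aHi: number, bLo: number, bHi: number): [number, number] {
      const lo = (aLo + bLo) >>> 0;            // low 32 bits, wrapped
      const carry = lo < (aLo >>> 0) ? 1 : 0;  // did the low word overflow?
      const hi = (aHi + bHi + carry) >>> 0;    // high 32 bits plus carry
      return [lo, hi];
    }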


> JS is getting int64 and uint64

It's a fine example for today's browsers that don't have those. Fixing that for future JavaScript doesn't help APIs being designed today, or existing APIs. And it doesn't invalidate the point that JavaScript limitations should not be web limitations.

For instance, how about efficient structs with defined in-memory layout?
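
Today the closest JS gets is hand-rolling the layout over an ArrayBuffer; the field names and offsets below are just an illustration:

    // Faking "struct { uint32 id; float64 x; float64 y; }" by hand.
    const buf = new ArrayBuffer(24);
    const view = new DataView(buf);
    view.setUint32(0, 42, true);        // id at offset 0 (little-endian)
    view.setFloat64(8, 1.5, true);      // x at offset 8 (after 4 bytes of padding)
    view.setFloat64(16, 2.5, true);     // y at offset 16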

Targeting web APIs to WebAssembly means that the web can tell JavaScript to keep up, rather than JavaScript telling the web to wait up.


Wait, how does wasm help APIs being designed today?

Please use the same yardstick when comparing.

It's important that JS grow too, including SIMD, shared memory, and value types such as 64-bit ints. That does not alter the case for wasm, per the FAQ: a parse-time win soon, and divergence to relieve JS from serving two masters in the longer run.


I'm using the same yardstick, relative to the baseline implementation of each language. APIs won't be able to natively target WebAssembly until browsers have native support for it. However, once browsers natively support WebAssembly, there won't need to be a "today's WebAssembly" versus "tomorrow's WebAssembly" issue; interesting new features belong in the languages targeting WebAssembly, with WebAssembly itself handling things like "how do we efficiently generate SIMD instructions on every target platform". Questions like "what datatypes do we support" become uninteresting; the answer is "whatever both ends of an ABI can agree on".


You wrote

"Fixing that for future JavaScript doesn't help APIs being designed today, or existing APIs."

and I asked

"Wait, how does wasm help APIs being designed today?"

Your different yardstick is the shift from "today" to "future". A WebAssembly that is a superset of JS is in the future, past the polyfill era of co-expressiveness. If you fault JS today for lacking int64, the same (temporal) yardstick should fault wasm today.

In general we agree, but you are arguing carelessly. There is no essential difference between int64 for JS in the future, and int64-equipped wasm in the future. Both are in the future.

The solid ground on which to argue is where browsers all update to support wasm, whose syntax and extension model(s) stabilize, and then wasm can run ahead of JS. But it'll be a managed standard, so don't expect a huge iteration-time win. It'll be good for both JS and wasm in that farther future to decouple concerns, I agree.

Are we done yet?


I'm talking about the meta-problem here: APIs designed today have to care about today's JavaScript semantics, which are high-level JavaScript-specific constraints. APIs designed for browsers with native wasm support care about the semantics of wasm, but those semantics are language-agnostic. APIs should be designed based on a language-agnostic ABI, not based on JavaScript. That's the world I'm looking forward to.

Even the planned future ABI for interoperable dynamic linking is a guideline; any caller and callee that agree can use a different, extended ABI.

> The solid ground on which to argue is where browsers all update to support wasm, whose syntax and extension model(s) stabilize, and then wasm can run ahead of JS.

Of course, and that's exactly the world I'm talking about.

> But it'll be a managed standard, so don't expect a huge iteration-time win.

In one critical way WebAssembly is smaller than JavaScript, so its iteration time matters less. wasm iterations will be important for performance (to provide constructs that translate efficiently to the native machine code you or a compiler would have written), and we'll need new APIs for things like thread-spawning, but new language features and functions using those features won't require any changes to wasm.

> Are we done yet?

By all means let's stop violently agreeing with each other. :) Thanks for working on WebAssembly; I look forward to using it.


> APIs should be designed based on a language-agnostic ABI, not based on JavaScript.

Has there ever been such a thing?



Regardless of the timing and the specific example of int64/uint64, isn't the benefit JoshTriplett is discussing the same thing you are talking about with divergence? If the low-level language of the web is divorced from JS so that each can evolve independently, future web APIs designed to be consumed from multiple (high-level) languages can be specified in terms of structures the low-level language can represent. Language-specific bindings to those low-level APIs can then present an interface customized to the type system and idioms of each high-level language, with the appropriate shims to convert to the low-level representation.


Any possibility for consideration of composable continuations as a WebAssembly language feature, e.g. shift/reset in Racket?

See: Adding Delimited and Composable Control to a Production Programming Environment

https://www.cs.utah.edu/plt/publications/icfp07-fyff.pdf


It's not "pulling the trigger" on JS—per the FAQ, it's fully polyfillable and typically implemented inside the JS engine, as essentially a separate interface to all the implementation work that's already there. This is a crucial difference from everything that's been tried before (including PNaCl). JS isn't going anywhere and continues to be improved.


> JS isn't going anywhere

Sure it is. Step 1: get all browsers to support WebAssembly, so that sites can drop the polyfills. Step 2 (specifically noted in the article): make WebAssembly more full-featured than JavaScript, ignoring the ability to polyfill it because browsers have native support. Step 3 (potentially doable before 2): implement JavaScript in WebAssembly, so it becomes just one of many possible languages, and its implementation is no longer beholden to browser quirks.

That'll make JavaScript a better language for its proponents (latest ECMAScript features everywhere), and entirely avoidable for people who prefer other languages.


No browser is going to implement JS in Web Assembly anytime soon. You have no idea how tight the performance margins are on SunSpider and the V8 benchmarks, for example. The entire compilation pipeline has to be lightning fast; adding another IR will kill you.

Web sites move to new technology at a glacial pace. You're posting this comment on a Web site that, by and large, hasn't even adopted CSS 1.0 for layout yet.


Dream on. JS is "obligate, not facultative", as biologists say, for browser implementors. It lives in source form on the web and must load super-fast.

We can wager about when it might actually die (I said it might given something like wasm, at Strange Loop 2012), but that's beyond the horizon by years if not decades. Gary Bernhardt knows!


People build latest-ECMAScript polyfills today. Future such implementations will likely use WebAssembly.

And anything caring about performance can start targeting WebAssembly directly rather than JavaScript. CPU-bound JavaScript performance will start mattering a lot less. (DOM-access performance will still be critical, but it will be for WebAssembly too.)


I predict that, five years from now, people will still be caring a great deal about JavaScript's CPU-bound performance. Even if all new code switched en masse to Web Assembly tomorrow (which won't happen) people would still want existing content to run as fast as possible.


You moved the goal post from dropping JS (again, dream on) to (in the future) using wasm as compiler and polyfill target.

S'ok, I'm obv. a supporter of the latter, although it'll take some time to make it a win for smaller programs.


> You moved the goal post from dropping JS (again, dream on) to (in the future) using wasm as compiler and polyfill target.

I'm not suggesting dropping JS; that's too much to hope for. If nothing else, backward compatibility with existing sites will require supporting it approximately forever. I never intended to imply dropping it, just that WebAssembly becomes the interoperable baseline, with JavaScript and other languages being peers on that common baseline.

I disagreed with the statement that "JS isn't going anywhere"; I don't think it's going away, but it's clearly going to evolve and go new places.


This assumes WebAssembly will have virtually no performance overhead when compared to C/C++, and still doesn't address the fact that JavaScript "binaries" will be much larger than today's scripts that rely on a JavaScript interpreter being present inside the browser.


> This assumes WebAssembly will have virtually no performance overhead when compared to C/C++,

It won't have zero performance overhead, since unfortunately it will still require translation to native code. But it'll be far higher-performance than asm.js, and precompiling JavaScript to WebAssembly could produce higher performance than a JavaScript JIT.

> and still doesn't address the fact that JavaScript "binaries" will be much larger than today's scripts that rely on a JavaScript interpreter being present inside the browser.

You don't need to do the compilation in the browser; do the compilation ahead of time and ship wasm bytecode. The DOM and all other web APIs will still be provided by the browser, so I don't see any obvious reason why wasm needs to have substantial size overhead compared to JavaScript.
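
As a sketch of that workflow (using the WebAssembly JS API names as they eventually shipped; "app.wasm" and the "main" export are made up, and the module is assumed to need no imports):

    // Fetch precompiled wasm bytecode and instantiate it -- no source
    // parsing, just decoding and compiling the binary.
    async function run(): Promise<void> {
      const bytes = await (await fetch("app.wasm")).arrayBuffer();
      const { instance } = await WebAssembly.instantiate(bytes, {});  // hypothetical: no imports needed
      (instance.exports.main as () => void)();                        // call an exported entry point
    }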


The Web has enough commercial heft that maybe we could see hardware support for it along the lines of ARM's Jazelle DBX for Java bytecode: https://en.wikipedia.org/wiki/Jazelle


Jazelle died for good reason. More interesting would be something like Transmeta/Project Denver, where the JIT is deeply integrated with the CPU.


So much in this thread suggests that influence will be exerted to permanently weld wasm's semantics to Javascript's from day one, so that there will never be a viable compile target that isn't just Javascript. In that case, wasm will be nothing but a Coffeescript-like facade over Javascript semantics, and anyone targeting wasm is really only writing Javascript in an inefficient way, and Javascript is the only first-class language, forever.

If wasm is already intended to be crippled relative to Javascript, it really throws into question why wasm should exist at all. When the response to eventually implementing Javascript on top of browser wasm support is "dream on" -- presumably because Javascript on top of wasm would leave too much performance on the table -- that raises the question of why we think building on top of wasm will be acceptable for languages other than Javascript. If building on top of wasm is unacceptable for Javascript, why isn't it unacceptable for everyone else?

Maybe this also throws into question whether any discussions supposedly shaping the development of a new cross-vendor standard are really in good faith, if the reality will be that nothing new is made other than a new spelling of Javascript, and that wasm will be forever steered by Javascript and incapable of meaningfully hosting an implementation of Javascript because it has been intentionally crippled to require an implementation of Javascript.

Anyone who wants wasm to become a real thing has to be cautious that it is not crippled in order to prevent it from being capable of replacing Javascript (whether it actually does replace Javascript is another issue which can really only be asked if we assume wasm is not already sabotaged to be incapable of that).


Or the relevant stakeholders might actually believe in good faith that having one VM in the platform is better than two.


There are a lot of web coders who seem to believe in the most outlandish ideas and who act as if coding ever more rickety stacks is progress.


To an end user, how is this a different experience from Flash? You browse to a website and must execute binary blobs in order to view the site.

Even worse, it's like Flash but where the Flash 'plugin' has been written from scratch by each web browser, giving us endless possibilities for incompatibilities which are a nightmare to fix.



