Is WebAssembly the Return of Java Applets and Flash? (steveklabnik.com)
509 points by Vinnl on July 26, 2018 | 399 comments



The thing about Flash was that it was pretty WYSIWYG, and a lot of creative people flocked to it because it had strong multimedia capabilities. Personally, I kind of miss being able to build really nice-looking stuff without much technical expertise; nowadays you need to know all about CSS, polyfills and JS quirks, so you have to understand pretty technical stuff early on just to get something done.

Flash was more like using Premiere, where you just edited your timeline with a bit of interactivity sprinkled over it; no movie editor ever had to get their hands dirty with some kind of scripting language or low-level file formats just to edit a movie.

I had a lot of "oh wow" moments at the end of the 90's and beginning of the 00's with Flash. It was like the web was warped into the future. Nowadays you can achieve the same, but not as elegantly. It kind of reminds me of how PCs had to catch up with the Amiga for years; perhaps starting with Wing Commander, almost 8 years later, things were really on par or better.


To me, flash was like the past, or a warped vision of the distant future. The interfaces people created had visual appeal, but that’s all they had. It lacked all of the usability that I was used to finding on webpages, and therefore made for a very frustrating experience. No bookmarking, no back and forward, no right clicking to open a new tab, frequent superfluous sound, major security holes, long loading times. I would hardly call any of this elegant. As a user, I was thrilled when people stopped using flash.

Flash was like using poor quality native apps, which was a step backwards from a browser.


To me, JavaScript is like the past, or a warped vision of the distant future. The interfaces people created had visual appeal, but that’s all they had. It lacked all of the usability that I was used to finding on webpages, and therefore made for a very frustrating experience. No bookmarking, no back and forward, no right clicking to open a new tab, frequent superfluous sound, major security holes, long loading times. I would hardly call any of this elegant. As a user, I was thrilled when people stopped using JavaScript.


Link me to the most user-hostile ECMAScript-based web page you can find, and I'll open up devTools and delete my way out of whatever email-list/login/begware DOM-jail they try to keep me in.

Link me to the most user-hostile Flash-based web page you can find, and I'll watch a ten year old hour-long video presentation from a Gnash developer explaining their 100-year plan to ship a usable open Flash runtime alternative.


There are some bad JS websites, but most of them do have proper bookmarking, since pretty much every web framework supports multiple pages, including forward and back navigation. Browsers are also good enough to handle #links within an SPA. They also tend to have faster-than-typical loading times for navigation within the site.

Flash didn't have any of that, and was much slower to load than modern JS websites.
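
For the curious, a minimal sketch of the History API wiring most SPA routers do under the hood; the route table, the #app element and the render() helper are made up for illustration:

    // Minimal sketch of SPA navigation with bookmarkable URLs (History API).
    // The routes, render() and the #app element are hypothetical placeholders.
    const routes = {
      '/': () => 'Home',
      '/videos': () => 'Video list',
    };

    function render(path) {
      const view = routes[path] || (() => 'Not found');
      document.getElementById('app').textContent = view();
    }

    // Intercept in-app link clicks so they update the URL without a full reload.
    document.addEventListener('click', (e) => {
      const link = e.target.closest('a[data-internal]');
      if (!link) return;
      e.preventDefault();
      history.pushState({}, '', link.getAttribute('href'));
      render(location.pathname);
    });

    // Back/forward buttons fire popstate; re-render for the restored URL.
    window.addEventListener('popstate', () => render(location.pathname));

    // Initial load: the URL in the address bar is itself a bookmarkable route.
    render(location.pathname);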


Just no. I'm on the latest Chrome on Android 6, and when I browse search results on YouTube, then click a video and hit back, I expect the scroll position within the results to be restored. Instead, when I hit back I only see the red top bar and a loading spinner, and a moment later the results show up again and I'm at the top of them. Because we don't have nice paged results anymore, but this hip and ubercool dynamically extending page of results and everything.

And this is not some student's first approach at modern web technologies, it's fucking YouTube from Google. They can't get their shit working on their own browser. I have yet to see an SPA with a convincing UX that's more than just some wannabe webdev's "about" page.


> this is not some student's first approach at modern web technologies, it's fucking YouTube from Google

It's sad that Google is revered by the developer community as a role model for software engineering because their frontend web work has—at least for the last ~10+ years—always been terrible and a really great example of worst practice.

Gmail's early HTML version is possibly the last faded memory of a quality frontend product coming from Google. Everything they've created since the advent of GWT has been aggressively anti-user and anti-interop. Gmail took a long time to get full browser support and a longer time to play well with back buttons, etc.—meanwhile the Gmail interface has slowed and bloated with each iteration, the latest bordering on unusability on my very new midrange laptop. Wave never worked in anything but Chrome, and the same is true of the early iterations of most of their large newly released products over the years. The Google homepage provides a totally inconsistent experience across browsers—on mobile I see three different results views in three different browsers, two of which are Blink-based! Why doesn't search by image work on mobile? Thankfully we're no longer lumbered with the disaster of a desktop experience that was Google Instant Search.

Similarly, Microsoft and Apple don't have great histories here. When looking for good development practices, you should always look at the example set by companies that need to compete. Monopolies don't need usability.


> Why doesn't search by image work on mobile?

This is a great example of Google's spotty frontend work -- especially since you can simply tap the share button on iOS, and then scroll over to "Request Desktop Site". And voila, image search now works perfectly. You can even click the camera icon to upload an image directly from your phone. And yet, this functionality is completely hidden by default on mobile.


Ironically, Google landed its initial success with (and maybe partly because of) an ultra-minimalist website focused on the task. They seem to have forgotten that, and now they have bloat everywhere. Instead of reducing it they put a lot of resources into developing technologies to deliver that bloat more efficiently.


> Instead of reducing it they put a lot of resources into developing technologies to deliver that bloat more efficiently.

Efficiency has a lot of different definitions depending on whom the efficiency is "for". I would say Google put a lot of resources into increasing the bloat (adding abstractions) in order to automate the creation and maintenance of their services. They are the pioneers of removing humans from the equation: they're using ML to create their maps instead of the previous focus on humans driving around with cameras, they have a notorious lack of human-intervention customer service for most of their paid services, and even GWT, which I mentioned above, removed human JavaScript devs (admittedly in order to allow Java devs to write it; but unlike other transpilers—e.g. TypeScript/CoffeeScript—which produce relatively readable, direct JS equivalents, GWT is heavily abstracted and the output isn't in any way representative of what a human would create).


Call me a conspiracy theorist, or crazy, but I say that is by deliberate design. Just like the push for HTTPS, on some level. You can't proxy most of the web, you can't cache, you can't bookmark SPA and AJAX-loaded results, you can't scrape any of the data easily, and you can't intercept and modify it if you wish without doing fancy SSL certificate injection.

"Gee, I remember seeing this unique comment on a youtube video. Let me search for it, maybe some web-crawler has indexed it and made it available to the rest of the web. Nope."

"Oh I remember the video, it was XYZ. I'll just go there and find the comment. Oh, it's not at the top of the comments, and there are 5600 comments."

"It's okay, I'll just ctrl-F for that specific word I distinctly remember was used in that comment. Nope, ctrl-f only finds what's loaded in the Dom, you've got to scroll!"

"Hmm, maybe I can just keep scrolling for a while till I find it. Scroll, wait. Scroll, stop, loading icon, scroll some more, wait, scroll some more, wait."

Hey, at least I can share a specific time-stamp location within the video on Google+ and Facebook!


On top of that, opening an email in a new tab from inside of Gmail hasn't worked for at least 2 years now, and it doesn't look like they're going to bring back support for this basic functionality any time soon. At least the new Google Maps version has gotten pretty acceptable in the last year or so; it's almost as responsive as the classic one used to be.


Firefox Android doesn't suffer from this defect.


And then you have developers/companies/projects that end up treating JavaScript+HTML5 like "Flash but newer" or "Java but newer" or (for maximum pain) "Silverlight but newer". That sort of mentality - whether conscious or unconscious - underlies the vast majority of single-page apps in my observation/experience.


Javascript interfaces usually have much better usability than Flash ones used to have.

That is because following the standard semantics of the Web in Javascript is easier than breaking it, while for Flash it was the other way around. Of course that does not mean that every JS powered page is good, or that every Flash powered one was bad, and ironically it does mean that the more JS, the worse it tends to be, but the more Flash, the better it used to be.


Yet that doesn't prevent people from breaking it all the time.


You're absolutely right (and an excellently written post; very effectively communicated), but I think the difference is in what's possible/easy/difficult.

Building a usable experience in Flash was possible, but sufficiently difficult that almost no one did.

Building a usable experience in JS is significantly easier than it was in Flash, though sadly it still seems to be sufficiently difficult that too few do. The ratio of usable JavaScript apps is a lot higher than it was for Flash, but is still a minority. It is at least an improvement, though, if an incremental one.


It's true that it is much easier now in JS, but that is because the browsers and the core web technologies have radically evolved in the 10+ years since Flash was the dominant way of creating interactive experiences. Looking back at the ecosystem when Flash was still a viable technology, we have to recall where the browsers were at, where HTML was at, and also JavaScript itself.

At the start of 2005, I was helping lead a team building a highly interactive experience for a major car company using Macromedia Flash and Flex 2. When we launched the site we had full cross-browser pixel accuracy, fully supported URL deep-linking, bookmarking, page history navigation, web crawling, keyboard navigation, screen reading support, interactive video, highly animated experiences, and even fully integrated web mapping using a beta version of Microsoft's first interactive map tech (it eventually became branded as Bing Maps).

What we built then was possible only because of Flash; there was no way we could have created the complete experience in JS/HTML/CSS. Granted, we could have done a lot of it in native web tech (and in some cases, we had to under the hood), but making it fully pixel-accurate cross-browser would have tripled the dev & testing time. On top of that, some of the features would have been impossible without Flash.

Let's recall where we were in 2005. Chrome was still 3 years away (it was released in 2008), Firefox was still version 1.0 (1.5 didn't release until Nov. of that year), Gmail was just released into beta (you had to have a friend to get access), and Google Maps was still in an experimental beta, truly pushing the boundaries of what JS/HTML could do. The browser history API didn't exist, so we had to do crazy iframe hacks to create deep links and history (this was true of any app, no matter what tech). There was no video in browsers without a plugin (the HTML5 spec wasn't finalized until Oct. 2014). There was no such thing as CSS animations; they got their first release in Firefox 5 (June 2011).
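
For anyone who never had to do it, the deep-linking side looked roughly like the sketch below: with no hashchange event and no pushState, you polled location.hash and treated it as application state (render() and the route names are placeholders; the iframe trickery sat on top of this to keep IE's history happy):

    // Rough sketch of pre-History-API deep linking: poll location.hash.
    // render() is a hypothetical placeholder for whatever updates the page
    // (or tells the Flash movie which section to show).
    let lastHash = null;

    function render(route) {
      document.title = 'My 2005 site: ' + route;
      // ...swap visible content, seek the Flash movie, etc.
    }

    setInterval(() => {
      if (location.hash !== lastHash) {      // no hashchange event back then
        lastHash = location.hash;
        render(lastHash.replace(/^#/, '') || 'home');
      }
    }, 100);

    // Navigating just assigns location.hash, which also creates a history
    // entry the back button can return to.
    function go(route) {
      location.hash = route;
    }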

So, were Flash apps trash? Absolutely, but not all of them. That's like claiming that modern browsers solve it all. We still have massive load times (look at how much time and effort is put into optimizing content delivery), and cross-browser and backward compatibility is a nightmare, way more of a challenge than it ever has been. In some ways, modern web development is much better than Flash development was, but to be perfectly frank, we have a LONG way to go. Honestly, we haven't caught up to where Flash was 14 years ago.

But we are getting there for sure. I can see why Steve Klabnik is so excited about WASM. I can envision where it is going, and it reminds me of where Flash was trying to go before it was slaughtered by Adobe. It's an exciting time for sure, but we should also look back at where we came from, and instead of just stating that Flash is trash and it caused the web to be terrible, we should also look at what it did right and what it allowed us to create before we throw the baby out with the bathwater.


Nice attempt at memeing a response, but that comment doesn't work. Most SPAs that I've seen allow bookmarking, back and forward, right-clicking to open a new tab... and they certainly don't introduce major security holes to attack your local machine (they're just websites, not a poorly coded third-party plugin like Flash), and they do not typically have superfluous sound or long loading times. In fact, browsers are blocking superfluous sound more and more, which wouldn't have been possible with Flash, since the browsers didn't control Flash.

So... nice one there, green user. You really got'em!


Flash used to make websites was garbage, but used to make games that ran in the browser, it was very nice.


Exactly this. Flash for websites was trash.

But Flash games were a boon. Everyone with an idea was able to make it into a game. I spent a lot of time playing games in Kongregate and similar sites. Many were really interesting, well worth the effort of sifting through the rip-offs and trash.


I used to love Flash games, and I was also watching the projects that tried to get .swf files to run in a Flash virtual machine emulated via something like Emscripten, but I feel like all those projects have died out :/


And Flash for animation: e.g. this was produced and published in Flash and was surely smaller to download than the youtube video of it:

https://www.youtube.com/watch?v=LeKX2bNP7QM

The internet speeds were much slower then, the difference mattered a lot. Also note, from the description:

"Sadly, it's not possible to replicate the pop-up mouse-overs from the "Special Edition" - though I was able to append the "deleted scene" presented in that version. If you want to see that version - or the "Fire BAD!" flash video game where you attempt to extinguish flames on James Hetfield (tragedy + time = humor), you'll have to find the original .SWF files and have a way to play them."


In that regard, looks like Unity is the new Flash.


It is and it isn't. Unity is great but it's got like a min 5MB download for just a spinning cube. Flash (because the plugin was built in) started instantly and had built in support for streaming. Unity can stream, after the 5MB engine download, but for whatever reason it's not common.


I meant more as in people with no technical knowledge whatsoever creating interactive stuff. Just look at the Cambrian explosion of indie games from (most) people who probably can't do long division by hand.


Very likely! But some years ago, back when I still played Flash browser games, Unity games didn't work in a browser for Linux, the only platform I use. Maybe that situation has changed? :)


What with a WebGL target for Unity these days, it has. The Unity plugin was deprecated versions ago and they don't even ship a target for it anymore.


It seems the trend was that it was nice for makers, not so much for users.


Which, ironically, is the case with modern Electron apps and bloated JavaScript UI frameworks as well.


I'm a user of several Electron apps and they are very nice for me too.


Electron (or embedded Chromium, etc.) is lazy development: save time on cross-platform work at the expense of your users' CPU overhead/battery time/SSD life.

Spend time watching some of each year's WWDC talks and noting how much effort Apple engineers give to fine-tuning certain APIs to enable app developers to optimize battery use, disk access, or graphics rendering. State-of-the-art stuff. And then your Electron app just ignores all of that.


I think calling it "lazy" is uncharitable, unless you're also going to chastise developers who target only one platform -- they're likewise not doing the cross-platform work.

Manually targeting multiple platforms is really hard. How could one person possibly keep track of all the battery-saving API changes in OSX, iOS, Windows, Linux, and Android?

Yeah, it's annoying that Electron apps are so bloated. But the alternative would be that >50% of people couldn't use those apps at all, because you only targeted one platform.


"unless you're also going to chastise developers who target only one platform"

Which I tend to do, particularly since that "one platform" is almost always either Windows or macOS, neither of which I use on a daily basis (and neither of which I have any desire to use on a daily basis).

Meanwhile, there's such a thing as cross-platform GUI applications that don't try to fit an entire web browser into them. Especially if your programming language of choice doesn't require precompilation, frameworks/toolkits like Qt and GTK and Tk are perfectly viable for cross-platform development, at least on the desktop (which is usually where folks are using the likes of Electron or CEF anyway). They also tend to perform significantly better and stay much closer to the look-and-feel of the rest of the operating system (no, I don't care if you think you know better than me about my sense of style; if your app doesn't respect the look-and-feel of the rest of my system, then it sticks out like mold on a slice of Wonder Bread).


Do you know of any of those cross-platform GUI tools that work in the web browser? Targeting Windows/Mac/Linux is easy, but outside of JS I have seen no way to write one application that works on the web, Windows/Mac/Linux, Android and iOS.


I know Qt supports both Android and iOS (and a bunch of other mobile platforms, apparently, like Tizen and Blackberry).

I don't think any of them target HTML/JS, though.


Because it's not like there aren't any other options for multiplatform development...


> How could one person possibly keep track of all the battery-saving API changes in OSX, iOS, Windows, Linux, and Android?

You don't. You just use system-provided APIs and get the improvements for free whenever the system frameworks are updated to be more efficient.


And then your Mac App ignores all of the people not on Mac.

I'm not saying everything should be Electron, but it certainly has use cases, just like native does.


Doesn't have to be a Mac only app. But be considerate to your users and use the best available SDKs for each platform, not one size fits all.


If resources were infinite and there were never any trade-offs, most people would agree with you.

But what's your game plan under other circumstances? Do nothing?


If you're a small-time indie dev, do what works so that you can ship. If you're a well-funded co. like Spotify, GitHub, or Slack, however...


You're making the mistake of thinking that everything is just a technical challenge that you can throw money at, and that technical superiority is the only trade-off.

Just because an enterprise has funding doesn't mean it's wealthy in other, more scarce things like organizational capital. In fact, I wouldn't be surprised if all the client developers at Slack would love to split off into dedicated platform teams and do a rewrite. We as developers love that shit.

But you should try to understand why these enterprises make these trade-offs and why they are still in these positions in spite of how much disposable money you think they have.


Sorry but I shouldn't have to use 2GB of RAM to browse the web. Compare Airbnb to Craigslist. One is visually pretty, the other isn't. One uses every framework under the sun, the other is HTML.


“Nice for you” and “nice for your computer” are being treated as separate things here, I think. Assuming a supercomputer (like the kinds most Electron app devs must surely have), Electron [native] apps have a good user experience.

Electron wrappers for originally web apps, on the other hand, are usually horrible user experiences—often they don’t even support right-clicking elements, or clicking on links to open them in a browser.


> Electron [native] apps have a good user experience.

No, they don't. Ever tried to drag and drop into an Electron app? How about used VoiceOver?


Relatedly, I played quite a few flash games, and they were nice for me.


This is a very broad statement. Would you care to elaborate?


"slack"


"spotify desktop"


Wasn't Spotify originally written in C++?


Even if it was, it isn't now.


It probably still is, since Chromium Embedded Framework (which is what Spotify uses, last I checked) is used via a C/C++ API.


For some cases, nice for the makers is nice for the users.

I don't think more than a handful of the games and little videos that I enjoyed as a kid on Newgrounds and Addicting Games would have been made had they not had such a low barrier to creation.

I do kernel/firmware development now, so I'll be the first to admit that this value proposition isn't valid for all domains. But sometimes, for some use cases, just having a simple to use scripting environment is totally the way to go. Particularly if it's sandboxed well and can't really harm the end user.


Legions of bored teenagers would fight you over that. :) At one time there was no force so powerful as a bunch of friends sharing albinoblacksheep games/videos. Users loved it.


A pretty amazing collection of games was at friv.com


I only got into CS because of Flash. First I spent a ridiculous amount of hours on games, then I learnt to reverse engineer and tweak them, and then a ridiculous amount of hours creating and debugging games/apps with Flash.

Flash for the win! Till the day I die.

AS3 was amazing. When TypeScript came out, it was a lot like AS3. I was convinced, became a believer in TS, and never looked back.


Holy crap, I've never seen this before. It's still up.


It was also quite nice for users, assuming the intentions of the makers were good. Ads and security exploits were the truly nasty bits.


Not just games; anything other than hypertexts and forms.

In those days there was no notion of SPA or other contemporary well-established patterns for online interaction.

I authored several commercial sites using Shockwave and the like, and it enabled us to give clients the ability to author media-rich and design-heavy presentations in a fashion they were familiar with from card-deck applications. At the time, doing so in a 'sexy' way made them stand out.

It was a cul de sac, but it was a worthwhile direction to test in terms of UX.

That it was proprietary was weighted differently in those days. The world was dominated by closed standards and open source was still vestigial especially in commercial applications.


How did you reflow the UI when users resized their browser?


You could communicate between flash and the browser:

https://help.adobe.com/en_US/FlashPlatform/reference/actions...
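
On the JS side that usually boiled down to something like the sketch below, assuming the ActionScript side had registered a resize callback via ExternalInterface.addCallback (the element id and callback name here are made up):

    // Sketch of the JS half of the Flash <-> browser bridge for reflow.
    // Assumes the SWF called ExternalInterface.addCallback("onStageResize", ...)
    // so the method appears on the <object>/<embed> element in the page.
    const movie = document.getElementById('flashMovie');

    window.addEventListener('resize', () => {
      // Keep the plugin element sized to the window...
      movie.width = window.innerWidth;
      movie.height = window.innerHeight;

      // ...and let the movie relayout its own UI for the new stage size.
      if (movie && movie.onStageResize) {
        movie.onStageResize(window.innerWidth, window.innerHeight);
      }
    });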


Flash websites could have a back button. Maybe it wasn't quite as easy as putting in a hyperlink, but it was still pretty easy to configure.

Not arguing with you about the rest. As a web developer I still haven't made a website in the last 5 years that was as visually impressive as my Flash projects from before, and sometimes I miss the visuals....

But people forget that the AWESOME intro that you wait two minutes of loading to see... is only cool the first couple of times. Many, many people were losing 5 minutes of loading time just to visit websites that they used EVERY day.


I agree, but it's about using the right tool at the right time. How else could they have started YouTube back then, for example? Early YouTube definitely was a glimpse into the future.


It can be argued that the existence of flash delayed the creation of proper video support in browsers.


Not sure what you mean by 'proper', but Flash video was a step up from the previous era, where websites gave you a choice of either RealVideo or Windows Media formats; the more quirky sites had QuickTime, and you had to have plugins for each [I'm dating myself here].

By virtue of being installed on 98% of all computers, Flash video was a de facto standard video target. One that could be played on Linux without getting into legal grey areas too!

Without Flash looming over them, IE would never have supported cross-platform video - this would have been Microsoft undermining the WMP format.


It was also a useful proof of concept of what could be done. Without Flash, would there have been the critical mass of users who wanted video in their browsers at all? Or the content?

I think that we would have gotten there eventually, but probably not as fast. The close - but not quite good enough - implementation of video in Flash was as much of a catalyst as it was a crutch to lean upon.


Bullshit. Without Flash or some other plugin there'd at best have been a different standard for each browser, with the only one worth caring about being whatever IE implemented.


And pretty much all of the downsides you mention apply to these newfangled WASM+Canvas apps


Flash Professional still exists (renamed Animate CC) and outputs to Canvas, WebGL, and Flash/AIR. https://www.adobe.com/products/animate.html

I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.


> I agree with you that it's a little sad that amateur animation fell a bit out of vogue, but the tools are right there for anyone who's interested to pick up.

I have memories of being wowed by what creative people could do with Flash in the 90s, and I would not call that amateur animation. I think the problem (1) is that we currently seem to be lacking tools that let people skilled in the visual arts create things without knowledge of the mechanics, so to speak. It's as if, in order to write a great song, someone first had to understand how a musical instrument is built. I would not call Amanda Palmer or Lee Ranaldo amateurs, yet I bet they probably can't build their own instruments.

(1) I must make the disclaimer that for a couple of decades already I've been working only on the backend, so I may surely be missing part of the picture here.

Edit: Fixed footnote mark.


Amateur as in non-professional/non-commercial: people not in for-profit trades.


Amateur meaning lover. Someone who does something for the love of it and not the money.


Makes sense then. My bad for forgetting that definition of the word when reading your comment.


To be fair, there were reasons those things fell out of favor; there was a real tendency toward crazy, uneditable code that was way larger than it needed to be. Perhaps some of these things would be less of an issue now (honestly, a bunch of wasteful code is less of an issue now than when most of us were on 56k modems), but they weren't made up.


Absolutely, my first link was 2400 so I know exactly what you mean :)

And I know there are still issues. Given today's speed, issues are more about security and accessibility.

My point was just that the artists using flash to create good content were not amateur animators.

Edit: My point was wrong as it was based on a misunderstanding of the use of the word 'amateur' by the person I was replying to.


The only good content in Flash was games and animations (like Homestar Runner). If you were using Flash as a Web/application design tool, you weren't making good content. You were making a pain in my ass.


Still waiting here for ADP and weather.gov to get their shit together and enter the 21st century.


What Flash content does weather.gov have? Their website visually appears a bit outdated, but it works just fine.


The looping radar views use Flash [1] [2]. The composite regional radar loops don't, though [3]. That is unfortunate, because the former has more information and detail. There's no reason why it can't be reimplemented with canvas or SVG to manage the overlays.

[1] https://radar.weather.gov/radar.php?rid=buf&product=N0R&over...

[2] https://radar.weather.gov/radar.php?rid=BUF&product=N0R&over...

[3] https://radar.weather.gov/Conus/northeast_loop.php
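
A looping radar view really is only a little canvas code; a rough sketch, where the frame URLs and canvas id are made up rather than anything weather.gov actually serves:

    // Rough sketch of a looping radar animation on canvas: draw a base map,
    // then cycle through pre-fetched radar frames as translucent overlays.
    const canvas = document.getElementById('radar');
    const ctx = canvas.getContext('2d');

    const frames = [0, 1, 2, 3, 4].map(i =>
      Object.assign(new Image(), { src: `frames/radar_${i}.png` }));
    const baseMap = Object.assign(new Image(), { src: 'frames/basemap.png' });

    let current = 0;
    setInterval(() => {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.drawImage(baseMap, 0, 0);              // static background layer
      ctx.globalAlpha = 0.7;                     // translucent radar echoes
      ctx.drawImage(frames[current], 0, 0);
      ctx.globalAlpha = 1.0;
      current = (current + 1) % frames.length;   // loop
    }, 500);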


I suspect a lot of early internet amateur animators got started on pirated/shared versions of Flash. Probably helped the proliferation of that scene a whole lot. Cloud based service models make that much harder.


$20.99 for all of creative suite is also a much lower entry point price.


Things may have changed for today’s teens, but having been a teen through the 00’s that would’ve been too expensive for me. It was hard to even justify a WoW subscription at $15/month even with the near endless entertainment value that provided. Also, with the CC subscription setup, if you stop paying you lose access to your creations. That’s not too big of a deal for working adults, but for a teen with tumultuous cash flow, it’s a huge dealbreaker.

I spent hundreds of hours in Photoshop as a teen, but I likely wouldn't have bothered with it at all if it weren't easily piratable.


12-14 year olds aren't going to be dropping 250 bucks a year. If you look at something like Newgrounds, a lot of that was bored teenagers.


Depends on where you live in this world. For young people here (South America) that could be a lot.

That's a problem with cloud services generally. Pricing is mostly done considering some countries and contexts while totally ignoring others. With physical equivalents some years ago, local distributors made deals with their home offices to adjust for this, but that's not something usually done anymore.


In 2007, Flash could run 3D games, do shaders, have multiplayer, run physics manipulation, have 3D sound, do bitmap manipulation and socket programming, and had documentation built into the editor.

Even now, HTML/JS can't do all of those things, and most of the things you can do are not as fast, while browsers are stuck with legacies to uphold. Flash had no DOM to worry about, no untyped language (AS3 was typed), and no CSS holding it back.

The general argument was that Flash sucks because people make terrible content with it. Which is like saying I hate having hands because I knock things over... so no limbs = no mess. PERFECT.

In turn, I think it helped push native apps, since plain JS/HTML apps just sucked in comparison when it came to experience and capabilities.

Flash should have been open sourced. Hopefully with WebGL and WebAssembly someone can step in and create something similar.


I think the key thing to keep in mind is that Flash gave you all of that from one vendor in a coherent, easy to make use of experience. We can certainly do the same things with JS and the technologies we have outside Flash, but it takes a lot of mental work to stitch it all together. You have to know so many frameworks. You have to know the type of frameworks you're looking for to do `x`. And then there's a lot of performance tuning because something done through canvas isn't as fast as something done with the DOM, etc. Flash was gross but useful.


I agree. In the early 2000s, I learned about animation through Flash. I remember working on a 9th grade project that I illustrated through Flash. I remember learning what frames were, what keyframes were, what vectors were, etc. It was just the right amount of non-technical for me to make sense of it.

Taking a step back, man, what a different world it was back then. I'd fire up MS FrontPage or Macromedia Dreamweaver and go to town. The expectations have changed on the maintainability, usability, and functionality fronts, so I understand why we are where we are today. But I do miss those simple days.


Flash was much more than animation. ActionScript was a fantastic language for developing dynamic client GUIs in. Flex's component-based model was way better than what we have now -- basically typed web components with a consistent runtime -- and mxmlc and MXML were open source. ActionScript 3 was a fine language, with types and a nice compiler to work with.

Steve Jobs and browser security holes killed Flash, and no current open web platform covers all the use cases MXML and AS3 covered for cross-browser development. I could analyze audio channels, run lightweight process concurrency via green threading, store user files, do i18n translation, do streaming sockets, and work with actual binary data types in the browser in 2006. I could trigger actions based on events in video and audio streams. I had consistently applied CSS with animations across components in 2006. I had reusable web components in 2006. It's now 2018 and we still don't have cross-browser support for all of that. Oh, and I could run my app on the desktop in offline mode and in the browser.

Security was an issue. Looking back now, I think an Android-like permissions scheme is what it, and the browser, needs to fulfill the promise of write once run anywhere that the browser and the web tends to make.


>Actionscript 3 was a fine language

It was heavily based off ES5, which really helped launch my JavaScript abilities forward at the time. I was sad to see it go, and I ended up working with other technology that never felt as fun.

I remembered recently that Haxe was initially based off of MTASC (something I used in the later AS3 days), and checked it out. It's quite a stable ecosystem that feels very familiar in syntax. Add in HaxeFlixel, and it's almost like Flash never left.


Your timing is off; ActionScript 3 was based off the doomed ES4, which had classes etc. Adobe was even a main participant in the standardization process, IIRC.


Yup, AS3 was based completely on the ECMA draft at the time and was a spec implementation. Macromedia, and later Adobe, had representatives on the standards board, and due to politics, worry about compilation, the loss of "learning to code from reading source", and, from what I recall, concern for backward compatibility, the draft was killed. Harmony was the next draft, and it eventually evolved into ES5.


Completely agree with you... Going from Flash to browsers only was like going 20 years backward. Even the features that are available don't perform at the same speed as Flash did. A sad disaster.


I once (~15 years ago) worked at a web design shop where the founders were both architects. They did everything in Flash, as it somewhat resembled the tools they knew: CAD software. One of them was the designer, and he was really good; they made amazing websites without knowing a shred of HTML or any programming language (they didn't even use ActionScript). In case the customer wanted a guestbook, they used a free ad-supported one (until I joined as a programmer).

Flash was a good tool for the websites they used to create, usually graphics- and animation-heavy websites low on interactivity. They used the export function to generate the SWF, including the HTML index page to embed it.

Over the years the focus shifted more and more to dynamic websites with content generated from databases, and they were mostly lost there. Dynamic content (loaded by HTTP requests from databases) in Flash usually turned into a huge pain in the ass after a while. For those projects we switched to a traditional website model where dynamic content mostly wasn't loaded into Flash; instead it was an HTML-by-PHP website where Flash animations replaced header JPEGs (i.e. animated passive content).

So, in our case, Flash was a good replacement for animated and slightly interactive, but not dynamically generated, content.


> I had a lot of "oh wow" moments at the end of the 90's and beginning of the 00's with Flash.

A lot of users did, too. But it was usually along the lines of "Oh, wow. This page has Flash. Well, I guess I'll go get a Coke while the Flash plugin loads into the browser and my computer can't do anything else. If I'm lucky, the whole thing won't crash and take all my work with it by the time I get back."

We romanticize the past.


You know what was worse than that? Embedded RealPlayer.


I get enough "buffering..." spinners these days that RealPlayer jokes are in danger of being re-evaluated.


I never had those issues with Flash. Just low framerates and long loading spinners for the bigger animations.

Java applets OTOH had the exact experience you describe. Those were absolutely terrible.


Maybe they just wanted to play with splendid stuff like http://wordperhect.e-2.org


Macromedia Flash keyframe animation was great even for technical users who didn't know animation. However, as soon as you wanted any kind of interactivity you had to start learning a new scripting language and that was painful for everyone.


> Macromedia Flash keyframe animation was great even for technical users who didn't know animation. However, as soon as you wanted any kind of interactivity you had to start learning a new scripting language and that was painful for everyone.

Not really. ActionScript has always been close to JavaScript/JScript.NET and now TypeScript; it is the same syntax. In fact, ActionScript 3 was supposed to be the template for ECMAScript 4, before it was abandoned.


Totally agree with this! I miss the simple days of build once, run anywhere, and just being able to bash out a fun idea in an afternoon, release it, and know everyone would have the same experience. Yeah, security was crap with Flash, but surely that could have been solved. I think the battery use and Apple's desire to make sure everything had to go through their paid App Store were what really killed it, though.


I wish Adobe would make a Flash-like interface that outputs HTML5/JavaScript/CSS.


Their current Animate product (they renamed Flash Pro) is pretty much what you're looking for, if I'm not mistaken.


When Steve Jobs and company effectively killed "RIAs", I remember Adobe pivoting and promising software to do what they previously had, but outputting HTML5/CSS/JS instead of Flash. It looks like it is still alive in the form of Adobe Animate?


The product was originally called Adobe Edge, and it was rolled into Adobe Animate.


> The product was originally called Adobe Edge, and it was rolled into Adobe Animate.

No, Adobe Edge was killed outright; it was not rolled into Adobe Animate. Edge exported animations using jQuery and DOM elements, while Adobe Animate exports animations as pure canvas.


Have you tried Tumult Hype/Hype Pro (for Mac)? It’s not as fully-featured as Flash, but it’s similar in many ways and exports to HTML/CSS/JS.

https://tumult.com/hype/pro/


Well, look at that. It'd be great if they changed their pricing model. A possible 2 weeks down the drain seems a bit staggering.


Kind of makes me wonder if there's a market for a React.js editor that works a lot like the Web Inspector does right now, where you can build your hierarchy as a nested list, set properties of each layer through a table, "wire" properties through several layers of hierarchy to reach further down components, and reference functions easily in the table. As long as it has support for lifecycle methods, it feels like it could become a natural UI for writing React apps!


Yes; as I said in the post, I focused mostly on implementor's needs here. When I eventually do a user comparison, this is a huge pro for Flash.


Flash was indeed a tool for cultural creatives. You can tell because it only ever worked properly under Internet Explorer for Mac OS Classic. On every other browser/platform combo, there were framerate issues and the audio would gradually desync from the video. Forget about seeing these issues resolved in the afterthought of a Linux port.


Hehe yes I remember putting silent audio loops in Flash animations so the FPS wouldn't drift too much.


Well, you could run flash in webassembly and get all that back.

But it would still come with many of the drawbacks.


https://github.com/mozilla/shumway

is supposed to be an implementation of the Flash VM in TypeScript, but apparently it can't even run in the latest Firefox anymore, and there have been no commits for 2 years.


Any reason why we cannot build a WYSIWYG editor as a high level WebAssembly language?


The time is ripe for someone to recreate something like Flash that builds to WASM.


Adobe has Animate CC, which is basically Flash Pro with a new name that also outputs to JS, canvas, and WebGL. I think it's very probable it will target WASM in the future.


Flash is an application. WebAssembly is a compilation target. Not the same thing.


> If you built an applet in one of these technologies, you didn’t really build a web application. You had a web page with a chunk cut out of it, and your applet worked within that frame. You lost all of the benefits of other web technologies; you lost HTML, you lost CSS, you lost the accessibility built into the web.

But that's also true of an application which relies on WebAssembly (or JavaScript): it loses all the benefits of the web, because in a very real sense it's no longer a web site, but is instead a program running in a web page.

WebAssembly or JavaScript, neither is document-oriented; neither is linkable; neither is cacheable. It's Flash, all over again — except at least with Flash one could disable it and sites were okay. With WebAssembly and JavaScript, every site uses them for everything, meaning we get to choose between allowing a site to execute code on our CPUs, or seeing naught but a "This page requires JavaScript" notice.

It is the return of Flash, and that's a bad thing. We thought we'd won the war, but really we just won a battle.


I envision horrible "all WASM" websites, just like the old "all Flash" websites, that won't have accessibility, won't be able to be linked to, etc. Worse, I envision this as being another step in the ad blocker arms race. Inevitably there are going to be websites that package an entire WASM-based browser that will need to be used to access the site, nullifying client-side ad and script blockers. I can see the pitch now-- "Keep your existing website but add our tools to prevent ad blockers!"

(Edit: Typos. I should know better than to post from my phone by now. Grrr...)


This is a criticism that would be more suited to the Canvas API than the WASM API. WASM is still meant to drive the DOM API which is still as introspectable as before.

[EDIT]: Steve is right of course, and I misspoke here, "WASM is still able to drive the DOM" is closer to what I meant to say.
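
Concretely, WASM reaches the DOM today by importing small JS shims, so the result stays as inspectable as any other script; a minimal sketch, where the module URL and the import/export names are invented (and the server is assumed to serve application/wasm):

    // Minimal sketch: WASM has no direct DOM access, so the page hands it
    // small JS functions as imports and the module calls back through them.
    const imports = {
      env: {
        // Called from inside the module; still ordinary, inspectable DOM work.
        set_counter: (value) => {
          document.getElementById('counter').textContent = String(value);
        },
      },
    };

    WebAssembly.instantiateStreaming(fetch('app.wasm'), imports)
      .then(({ instance }) => {
        // The module's exports are driven from JS like any other function.
        document.getElementById('increment')
          .addEventListener('click', () => instance.exports.increment());
      });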


I agree with your first sentence, but not your second: wasm is meant to access all platform APIs, not just the DOM ones. Canvas is part of the platform as well.


I think we will start to see a lot of all-in-one frameworks that use WASM for constraint-based layouts so people don't have to learn CSS. I hope I'm wrong, but I can definitely imagine something like this coming from the enterprise Java/.NET types.


I sure hope we do. CSS has had 20 years and is still the most error-prone way of doing layout I've ever seen.


I don't think so. Accessibility, links, ad blocking etc. behave exactly the same with wasm as with JS.

What difference do you see?


No, not if you don't target the DOM. If I could have a "browser" in a browser that targets canvas or WebGL, then I cannot block it, only at the network level.


You won't be able to block it on a network level either when it's running on a locked-down platform like iOS and using an eventual iteration of TLS that prevents man-in-the-middle inspection. This feels like yet one more step in the direction of "you don't own your computer anymore".


Except iOS has some of the best ad blocking available, with OS-level extensions that are almost impossible to get around. So your point doesn't really make a ton of sense.


The OS-level tools won't be able to inspect the WebSocket-based channel that the browser-in-browser uses to communicate with its back end. It will all be opaque TLS-encrypted traffic to the OS. The native browser will be hosting a canvas element for the UI and running WASM code, and that'll be all the OS will see.


I was thinking more of the DNS level, but with DNS encryption even that falls on its face.


DNS encryption is done by the OS, which you control, so you could still null route ad servers.


You can use canvas or webgl from JavaScript too, so there's no change here specific to wasm.


Except JS alone wasn't good enough for that; WASM, especially as a compilation target, seems to make the task of embedding a browser within a webpage easier.


You can, but WASM should be an order of magnitude faster than JavaScript, and this makes it possible to run all kinds of "heavy" apps in the browser. Some kind of client that outputs to WebGL should be doable in this case; in pure JS it would be too slow/expensive.


Documents are documents.

Apps are apps.

Sometimes both are in the browser.

It'd be great if all "documents" had an HTML version, with minimal JS. For accessibility, searching, deep linking, etc.


Like Flipboard created? React Canvas rendered the site directly to canvas.

https://engineering.flipboard.com/2015/02/mobile-web


How the hell do you retain any accessibility when rendering a custom UI in Canvas?


You don't. They reinvented their own CSS and DOM.


There are sites where accessibility isn't much of a concern (esp things like games) or where it would be easily handled (eg. an image aggregator). For the rest, it seems inevitable that another (probably not-quite-compatible) accessibility layer gets built on top (for example, using the Qt accessibility model when compiling Qt into a canvas).

Overall though, I think wasm shouldn't be replacing HTML/JS.


It already is possible, with JavaScript. WASM doesn't change anything. And the fact that although it happens with JavaScript, it isn't pervasive, I think should assuage this fear.


I agree WASM doesn't bring in anything fundamental to this picture that isn't already there with JS. But that is no comfort.

In the ancient web world, the site author wrote HTML to describe the data she wanted presented and the browser took care of making it accessible. But authors (especially companies) wanted detailed control of how their sites looked, so they turned to flash etc.

JS has long been re-playing this trend in slow motion -- moving away from web pages being interactive documents presented by the GUI app called browser and towards them being stand-alone GUIs like in flash.


SEO is too big of a concern nowadays for a resurgence of black-box websites, especially if they depend on large audiences and ad revenue.


But you can still get social media traffic. I think it's very possible that will help allow black-box websites to return.


I suspect some appeal of Electron apps is that the user can't block ads or scripts running in what's basically a website.


To be frank, I'm surprised Widevine hasn't been used in conjunction with DoubleClick/Google Ads to force websites to show adverts.

Sure, there's "fuckAdblock" but that shortly spawned "FuckFuckAdblock". It's a whole different case when the very browser prevents the content from being tampered with.


My position on this is basically, WebAssembly is no different than JavaScript here. If you think JavaScript ruins this property, well, the web was only in the form you describe for four years, and has existed this way for 23 years now.


The focus on driving WASM performance in the browser platforms, combined with the ability to transpile more languages to WASM, pushes the barrier-to-entry lower. Yes, these concerns aren't specific to WASM, but the platform is being made more capable of hosting this kind of troubling code, and more attractive to developers who would develop these things.


Compiling C++ or Rust to use in a web page is much more complicated than just writing Javascript. I can't see how that lowers the barrier to entry. Your argument seems to be that you can do more with those platforms because they're more performant, which yeah, to a point I guess, but Javascript is already plenty fast for making whatever obnoxious dreams people want to come true and the web seems to have survived it fine.


If you compile C++ or Rust to native targets anyway, I fail to see how it's more complicated.


I don't think the barrier to entry or ease of development is really the issue when we're talking about ad networks.


Ad networks, no-- I agree with that. I'm more concerned about entire websites becoming "apps", complete with browser-in-a-browser functionality (with the inner browser's behavior being completely under the control of the site operator).

It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wishes... >sigh<)


> It would be an interesting experiment to transpile a less complex browser, like Arachne, over to WASM as a proof of concept to demonstrate how awful this kind of future would be. (Yet another "if I had some free time" wishes... >sigh<)

Don't. Most people will ignore the demonstration, but someone greedy will fork the project, build a library out of it, and start selling as a product to ad networks and media companies.


I agree. Somebody is going to open that Pandora's box, though. I'm glad to see that I'm not the only person who is concerned. I think it's an eventuality, however. Few young developers today have had to deal with walled gardens and don't understand how bad they are. Worse, today's platforms give an unprecedented amount of control to the platform owner to the detriment of the hardware's actual owner, and developers seem more than willing to help create those mechanisms of control. What's going to happen when nobody is left who actually owns their own computer?


Yup. That's what I'm worried about.

And people growing up with today web-first, mobile-first computing model have no clue of the power and capabilities computers have. With data being owned and hidden by apps/webapps, limited interoperability, nonexistent shortcuts, little to no means of automation of tasks, people won't even be able to conceive new ways to use their machines, because the tools for that aren't available.


You just gave me a horrible vision of a robotic hand perched over a smartphone screen being programmed to touch the screen to "automate" tasks because nobody will know any better. (Of course that would never work because our smartphones have front-facing cameras and software to detect faces and verify that we're alive... >sigh<)


Yeah, this is the input equivalent of the analog loophole :).

Now ordinarily, on PCs, you do that by means of simulated keypresses and mouseclicks, using scripting capabilities of the OS or end-user software like AutoHotkey. In the web/mobile-first, corporate-sandboxed reality, I can't imagine this capability being available, so Arduino and robot hand it is.

(But yeah, bastards will eventually put a front-facing depth-sensing camera, constantly verifying the user, arguing that it's for "security" reasons.)


Ad networks are the next Macromedia.


That is certainly not true. The knowledge base you need to even start compiling to WASM is far greater than just JS.


I think it's reasonable to assume that there will be many efforts to create tools and libraries that make it easier. It will become less difficult with each passing day.


The web long ago became not only a document store but also a thin client platform for distributing full client applications to end users. That cat is out of the bag and is not going to be stuffed back in.

WASM is really just a cleaner, faster, more elegant way of running languages other than JavaScript in the browser. It replaces transpilers that turned languages like Java or Go into ugly, basically machine-code-like JavaScript blobs. It will save bandwidth and improve performance, but otherwise doesn't change much. Note that transpiled and uglified JavaScript is already "closed source," so nothing changes there. Anything can be obfuscated.


I do see your point!

I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have app of the day. Marketing departments everywhere tend to turn the web into Blinkenlights.

How many support documents of more than 15-20 years ago are you able to still find using the old links? So many sites are working as dumb front-ends for a database.

Information retrieval and persistence over time are not something many worry about.

The cat is for sure out of the bag. I just hope what was still can survive.


>I am however scared that HTML will go the way of Gopher. Why would anyone care to maintain boring hypertext documents when we can have app of the day.

JS or Wasm can't create documents by themselves, they still need a DOM. Even if it's a 2D canvas or some WebGL canvas, it's still a DOM element. Or even if it's just an iframe that loads some blob, on the top level it's still a DOM element. And as such it can be inspected and controlled.


> And as such it can be inspected and controlled

Not if the content is decrypted by EME that's not fully controlled by the browser.


I think marketing departments would quickly notice that most crawlers won't execute all the fancy Blinkenlights.

I would assume that it will take a while for tooling in any other language to get to a javascript level. I think WASM will mainly be support for the latter. Do some excessive calculations.... and yeah, excessive Blinkenlights.


You'd be surprised; marketing departments generally do not have a clue about that specific type of thing. Hell, eBay's operations apparently doesn't, from my experience. It's incredibly easy to game marketing, and internet marketing is mindlessly easy without the invasive stalking.


> The cat is for sure out of the bag. I just hope what was still can survive.

I hope so too, but as a member of predatory and territorial species, the cat will most likely keep on killing everything else around it.


Yes, but only for about four hours a day, because naps.


Exactly. Well said.


Webassembly is linkable: https://webassembly.org/docs/dynamic-linking/ in the dynamic linking sense.

WebAssembly enables load-time and run-time (dlopen) dynamic linking in the MVP by having multiple instantiated modules share functions, linear memories, tables and constants using module imports and exports. In particular, since all (non-local) state that a module can access can be imported and exported and thus shared between separate modules’ instances, toolchains have the building blocks to implement dynamic loaders.

The code is fetched via URLs so you can link to it in that sense, too.

It's also cacheable: https://developer.mozilla.org/en-US/docs/WebAssembly/Caching...
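
In JS-API terms that boils down to instantiating one module and passing its exports (plus a shared memory) as another module's imports; a hedged sketch, where the file names and import layout are assumptions:

    // Hedged sketch of wasm dynamic linking via the JS API: a shared linear
    // memory plus one module's exports used as another module's imports.
    const memory = new WebAssembly.Memory({ initial: 16 });

    async function loadApp() {
      // "Library" module: exports functions, uses the shared memory.
      const lib = await WebAssembly.instantiateStreaming(
        fetch('libm.wasm'), { env: { memory } });

      // "Application" module: imports the library's exports by module name.
      const app = await WebAssembly.instantiateStreaming(
        fetch('app.wasm'),
        { env: { memory }, libm: lib.instance.exports });

      return app.instance.exports;
    }

    loadApp().then(exports => console.log(exports.main()));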


I believe the parent comment was referring to hyperlinks, not dynamic linking.

The point was more that once webpages become applications running on the client (think single page apps), the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained.


> the natural document metaphor of web pages and the tooling built on it (hyperlinks, forward/back, bookmarks, history) falls apart unless you do extra work to ensure that experience is maintained

But not everything needs to be a document. Sometimes the thing you're working with really is an application and not a document.

To me, one of the biggest problems with the current web is that we've commingled "app stuff" and "document stuff" so badly that browsers have been forced to become a shitty, inferior X server (or operating system outright), instead of being really good browsers. Browsers for browsing are great... browsers as a UI remoting protocol are a bit janky.


ah. thanks.

"clickable"

Because you certainly can link to the WASM and JS code that comes with WebAssembly instantiables.


I think the GP means "links" as in "clickable links", not "binary linker/loader".


I think he meant linkable in the web sense, ie, hyperlinks.


Sometimes a program running in the browser will be valuable when it's full window.

Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.


> Would I complain if I could run a full version of Word or Excel in the browser? The browser would become a universal interface in another way and decrease our reliance on particular operating systems.

I for one would, because the browser is an absolutely shitty interface. You're still forced into the "there are tabs, which contain sandboxed documents" model of use. Interoperability is nonexistent, integration with machine capabilities is superficial and completely opaque to the user, the data model is hidden (where is my localStorage equivalent of the file browser again?), everything assumes you're constantly connected - it's a corporate wet dream, but for individuals, it's a nightmare.


Nothing's perfect, though. If operating systems aren't, I wouldn't expect browsers to be either.

Creating mobile and/or offline-first experiences for individuals isn't a pipe dream; it was possible and happened in the 90's, when connectivity (dialup) informed content (largely offline or downloaded).

I'm not looking at replacement, only reasonable substitutes, which I think will become useful similar to using Google docs on mobile and web.


> where is my localStorage equivalent of the file browser again?

The Firefox developer tools have a "storage" tab that lets you inspect the content of various databases associated with a website.


Default-disabled, read-only and scope-limited to domains your current tab works with, but I guess it's better than nothing.


In my experience, the application-on-browser products consume far more CPU and RAM than the application-on-OS products. For me, that's a pretty big deal: I need the laptop to run as long as possible on a charge. Right now, I would complain if I _had_ to run a full version of Word or Excel in the browser.

Perhaps Web Assembly will drive this power usage down. But as it stands now, I actively avoid more than one of these app-on-browser products at a time.


Well, modern JS is MORE performant than classical scripting languages in benchmark cases, but the fact is that your browser freezes on half of the JS CRUD apps that do data processing, while an analogous Perl application works at near light speed in comparison.

In half of the cases like that, stuff like sorting, list comparison and deduplication is done in a way that would score a low mark even by the standards of a first-year university program.

This is telling of the web development industry's approach to doing business.

The most horrid examples of "LAMP sweatshops" of 10 years ago pale in comparison to what the industry has devolved into these days.

My own experience of being an involuntary webdev for 3 years left me with the following impressions:

1. Webdev is the largest commercial development niche in the whole tech industry. Everything else pales in comparison. It is also about making money quickly. A webapp or even a promo-page SPA for a major consumer brand these days can easily cost up to $100k. $100k does not seem a lot to most people here, but such money can well be offered for a 1-month project for a team of 6-8 professionals.

2. The industry is dominated by shops with 20 to 30 people headcount. Web dev studios generally don't scale much above that because of talent flight. The loss of a single senior dev who supervises hordes of lowest-tier mule coders is often the end of the business.

3. People from the "big dotcom" world are near oblivious to the ways of small web dev shops. For people who began their careers at 60k-a-year internships, getting into the shoes of a person who codes for 30k a year is impossible.

4. Talent flight and turnover is real.

5. This is all about really expensive quick and dirty code.

6. "The big dotcom" type of companies tried time and time again to tap into the market to extract rents, and with exception of Macrovision nobody ever succeeded. This is the reason Adobe is lobbying for unusable, unwieldy APIs in hopes of selling tooling for it.


If I could ask, where do you live? Your experiences don't reflect my own.


I practiced for nearly 3 years in Canada, and continued for half a year after that in China.

Quit webdev a year ago, now working in engineering consultancy.


I'd hope compiled binaries can run more efficiently than dynamically compiled Javascript over time.

Right now my mobile device is often tapped by Javascript that insists on running in the background.


> decrease our reliance on particular operating systems

By replacing it with a poor simulacrum of an operating system. Browser APIs are an inefficient subset of what libc and BSD sockets offer.

And they provide near-zero interoperability with native applications. No filesystem access (beyond the clunky save-one-file dialog), no CLI, no IPC, nothing. That means browsers are building on top of operating systems while not interoperating with them.


> No filesystem access

This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.

There are some use cases that are hard to support (like being able to open all the files in a folder). But people are working on a solution.[1]

> No IPC

WebRTC, while not the same and with far more overhead (network sockets vs OS-level IPC primitives), can function very much like IPC. And there is nothing stopping a process running in a different browser (or even no browser at all) from connecting to a webapp using WebRTC locally.

Additionally, if a new window is opened by Javascript and both pages are in the same domain + port (or subdomains of the same domain and you have access to the parent domain) you can communicate between the windows with simple Javascript function calls. And since browsers are moving towards a 1 process per window setup this is essentially IPC.
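
A rough sketch of what that looks like (same origin assumed; /child.html and doWork() are hypothetical):

    const child = window.open('/child.html');

    child.addEventListener('load', () => {
      // Same origin, so the parent can poke at the child window directly...
      child.doWork('hello');
      // ...or use postMessage, which also works across origins.
      child.postMessage({ type: 'work' }, location.origin);
    });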

> That means browsers are building on top of operating systems while not interoperating with them.

While I can't argue with that. So is X Window. The abstraction between app and OS is a thick gray line not a thin black one.

[1] https://developer.mozilla.org/en-US/docs/Web/API/FileSystemD...


> This is a step forward not backwards. The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed. It leads to apps storing data in funny places, reading files they shouldn't, and general mayhem. Requiring the user to explicitly allow the app to access the file is a good thing.

Most apps being limited to their little part of the filesystem is not a problem. The problem is, now as a user, I can't access those files. I can't view them in a form that suits me, I can't use other applications to operate on them. The true form of the data is forever hidden from me, a secret of the application that "owns" it.


IMO that's a fairly easily solved problem. Browsers can add "localstorage browsers", you might even be able to do it in a browser extension.

I'd also love it if they gave that ability.
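
Even today you can get a crude version of that from the devtools console; a minimal sketch:

    // Dump every localStorage key/value for the current origin.
    for (let i = 0; i < localStorage.length; i++) {
      const key = localStorage.key(i);
      console.log(key, '=', localStorage.getItem(key));
    }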


But that's the wrong direction. Instead it should map to a file tree that you can explore with your native file explorer and text editor. The browser becomes a silo for your data, inaccessible by every other application.


> The true form of the data is forever hidden from me, a secret of the application that "owns" it.

But that's been true for almost all users, and not just webapp users, forever.


Not necessarily. In the world of desktop software, most users know what a file is, and know that all the data of what they've been working at the moment on is contained within such a file. They know they can move this file around and possibly send to whomever they want. They also know that a file can be opened by multiple applications.

SaaS and web kill that.


I was thinking of all of the files in proprietary, and particularly binary, formats. Maybe some users know that even those files can be opened by multiple applications, when that's even true, but I suspect even more users don't even realize that almost all of their data is stored in a file somewhere, let alone where that file is in the filesystem and in what format it's stored.

Given the ubiquity of Word document and Powerpoint presentation files and the like, most users I'll grant you are aware of the files themselves, and the fact that they can be attached to an email. I'll even grant that a large fraction of those same users could answer 'yes' to the question 'Could these files be opened by another application?'. But almost none would be capable of doing anything with those files without an application that handles everything for them.

I don't dispute tho that an awareness of, let alone existence of, files in a filesystem is a significant benefit and not having access to them is a (relatively) significant loss.


> The security model of allowing apps access to your full filesystem (assuming your user has access) is flawed.

You are neglecting the option of exposing a limited subview of the filesystem, like containers do.

> But people are working on a solution.[1]

The big red box on top says it's not on standards-track.

> WebRTC while not the same and far more overhead (due to TCP sockets vs OS level sockets) can function very much like IPC.

Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?

> So is X Window.

Maybe if you're remoting X, but few people do that these days. In practice, X applications have access to the same machine that they are drawing on.


> You are neglecting the option of exposing a limited subview of the filesystem, like containers do.

No I'm not. I said the limitation is a step forward. I didn't intend to imply it is perfect. It is not at all perfect.

> The big red box on top says it's not on standards-track.

Correct, but most standards started as experiments by the browsers. I think it qualifies as "people are working on it" but means it is probably far from being standardized.

> Can I send open file descriptors like I can with unix domain sockets? Can I share memory for low-latency atomics? Futexes?

No, but you already knew that. It does, however, allow for data communication, which in my opinion covers the 80% use case for IPC. In my experience (YMMV), the features you described, while useful, are not needed for most consumer apps.

Don't let perfect be the enemy of good.


> Don't let perfect be the enemy of good.

The problem isn't perfectionism, but that at least some of us believe that things are moving in the wrong direction - towards making vendors own everything, and end-users in control of nothing.


I wasn't implying replacing operating systems, but rather having the ability to substitute them, similar to how web apps can substitute for native apps.

I'm still optimistic that new forms of applications will emerge from this. There are serious pieces needing fleshing out, like file access.

Perhaps the insecure interoperation between browsers and operating systems can be reimplemented through a newer, more secure interface like wasm or its APIs.


Yeah, or a full version of a monero miner...


Different pseudo-VMs, I mean browsers, operate differently even on the same specs for various technologies (CSS, JS). They already act effectively like "particular operating systems," except they're less efficient and more obnoxious to work with.


[flagged]


This comment breaks a handful of guidelines and is not civil or substantive.

https://news.ycombinator.com/newsguidelines.html


Potentially fun questions: are there any “DOM-native JavaScript games”? I.e., games that manipulate the DOM for their “graphics”—or even have hypertext in place of graphics—rather than running in a canvas?

The only example I can think of is the Twine engine for Interactive Fiction.
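
To be concrete about what I mean, a toy sketch of the pattern: the "sprite" is just a styled <div> moved with CSS transforms, no canvas involved.

    const player = document.createElement('div');
    player.style.cssText =
      'position:fixed; left:0; top:0; width:32px; height:32px; background:crimson;';
    document.body.appendChild(player);

    let x = 100, y = 100;
    const keys = {};
    addEventListener('keydown', (e) => { keys[e.key] = true; });
    addEventListener('keyup', (e) => { keys[e.key] = false; });

    (function tick() {
      if (keys.ArrowLeft)  x -= 3;
      if (keys.ArrowRight) x += 3;
      if (keys.ArrowUp)    y -= 3;
      if (keys.ArrowDown)  y += 3;
      player.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
      requestAnimationFrame(tick);
    })();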


Well there's this.

https://github.com/mozilla/BrowserQuest

Doesn't work in Safari.


You should look into Crafty, it’s a js game engine which can output to either the DOM or canvas, I’m not sure how popular it is anymore but quite a few games used it. There are demo games here http://craftyjs.com


Compiling to Wasm will only get easier. It's only hard now because the target is new and people are still adapting the tooling. There is no reason why it would be any harder than compiling for a machine.

Wasm will almost certainly lead to UI frameworks for the Web. JS people try very hard to get similar stuff, but the language is just not good enough; at the same time, the desktop people who have this stuff are clamoring for some way to use the same thing on the Web. People are already working on those frameworks, by the way.


Yes, it's bad for document markup, but I wouldn't waste time coding the next Excel in HTML and CSS; I'd go straight to a GUI language with guaranteed cross-platform rendering.


The OP is really spot on.

For the commenters that seem to have some underlying fear that WASM apps will be another incarnation of a "window in a window" or some horrible bitmapped graphics pane that does not fit into the web model:

WASM is just a CPU. It's a bytecode format for expressing low-level, high-performance programs. It comes "batteries not included"--intentionally. By batteries, I mean APIs. WebAssembly modules must import everything they need from the outside. When embedded in JS and the Web, the first and still primary use case of WASM, that means modules can import functionality from both JS and the Web, and call literally anything that JS can call. That means WASM can (though still somewhat clunkily) manipulate the DOM, WebGL, audio, service events, etc, through all of the same APIs that JS can do. There is nothing that prevents a WASM app from looking and feeling exactly like something written in JS.

To reiterate: WASM does not require you to drop down to canvas or render fonts yourself. You can call out to JS or direct to WebAPIs! (again, it just happens to be clunky to do this from C++.) But other languages are working on bindings that make this much nicer. Rust anyone? :)

What WASM gives the web is a proper layer for expressing computation. The APIs and paradigms that build on top of WASM are independent, swappable, interposable, by design. Because it's a layer for computation, and a low-level one, it is by nature language-independent. As Steve mentioned, adding languages to the web one by one does not scale. Thus WASM.
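
A minimal sketch of that import model from the JS side (app.wasm and the env.setTitle/log/run names are assumptions about how the module was built):

    const imports = {
      env: {
        // Anything JS can call, the module can call through an import --
        // including DOM manipulation.
        setTitle: (n) => { document.title = 'score: ' + n; },
        log: (n) => console.log('wasm says', n),
      },
    };

    WebAssembly.instantiateStreaming(fetch('/app.wasm'), imports)
      .then(({ instance }) => instance.exports.run());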


> To reiterate: WASM does not require you to drop down to canvas or render fonts yourself.

The fear isn't that it requires that. The fear is that it enables that.

The web is, and has been over the past two decades, in the constant state of war over control between publishers and consumers. People - and especially businesses - making pages would like to have 100% control over how the webpage/app looks and is being used. But the users would like to have some control over what they're viewing too[0].

The most widely known battle in this war is the battle for ad-blocking. The publishers want you to view lots of ads. You want just the content, without any of the ads. So far, the technology (and economics) favors the user, but it's not a given.

The balance of control on the web was always maintained by the technologies on which the web standardized on. Pure HTML, or even HTML+CSS, strongly favours the user. JavaScript tilts the balance significantly towards the publishers, as now they can (and do) generate content with code, which renders the page difficult to interpret and modify on the user end. One of the biggest complaints about Flash was how shitty the pure-Flash/mostly-Flash webpages were. That's not an intrinsic problem of Flash - this happened, because Flash gave the publishers too much control. And publishers (again, especially businesses) will use (and abuse) any control they're given.

The fear here is that WASM again tilts the control in favour of publishers, which will lead to abuse and the web becoming a much worse place for consumers. If WASM, by virtue of efficiency, enables publishers to embed a browser they control within the page, publishers will use this, because it would single-handedly eliminate most ad-blocking, userscripting and scraping efforts.

--

[0] - and the power users, like myself, would like to have 100% of that control - think of how much better the web would be if the data was always published in machine-readable format, without tons of bullshit paginations and stylistic choices to scroll and click through. For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.


It's a big trade-off to be sure. On the one hand, I'm worried about the web becoming more closed-source and less hackable for all the reasons you've mentioned.

But on the other hand, I can't help seeing the enormous potential of a proper assembly language for the web. Web technologies have felt like a massive hack for decades: tools designed for basic text formatting and a bit of interactivity which have been stretched in extreme ways to meet the needs of the modern web. Web applications are the most widely used software on the planet, and if you ask me it's about time developers had the freedom to develop them in the language which makes the most sense for the task at hand rather than the only one available. And I am quite keen to see what kinds of new things will be possible when the ceiling is significantly raised for performance optimization.


Yeah, I feel the same sentiment you described, too. When building a web application, I'd prefer to use more powerful tools than JavaScript, and maybe a sane(r) set of libraries for user interface. There's also value in cross-compiling applications and games to web platform, because of ease of end-user deployment - for instance, games playable without explicit installation (of the game, runtime, and support libraries).

So I have really mixed feelings here. On the one hand, I appreciate the power WASM gives. On the other hand, I don't trust the majority of companies on the web to use that power responsibly.


> For instance, when looking for current weather, I want to input my location and a time span, and get weather data. I want to be able to script that. I don't want to waste time looking at ads, pretty pictures, non-relevant text and links.

I feel the same way. But those ads are there because that's the entire business model of people putting weather data out there for free. On most free sites, ads aren't just a sideshow, they're the driving engine. Take away the ads, and there goes the business model.

What we need is some other way to pay for the weather data. Maybe this could be a service provided by your ISP, like NTP or DNS. Or some third party subscription service. Or maybe even taxpayer funded. But if you're using a service that relies on ads as their revenue model, then expect to put up with ads. They're part of the deal.


That's a fair point, and a prelude to a much larger discussion about business models on the web. Suffice it to say, I'd happily accept some deal for compensating the data provider - be it ads, micropayments, or even regular subscriptions - if the resulting data was available in a) machine-readable form, and b) decluttered form on the webpage, so that I could read it efficiently (possibly with support of userscripts/userstyles). As a bonus, such sensible data display will save the provider's bandwidth costs, as on the typical site, 90+% of transferred bytes are not part of the content.


The web used to exist without ads, and it was more functional and useable for users. This idea that we need ads to fund webpages has always been nonsense. They are not at all part of the deal.


Micropayments is another possible solution that comes to mind. Say, pay one tenth of a penny every time you want to look at weather data. Assuming that we can come up with something that works for insignificant amounts and is fast, cheap, and secure to transact. If that is the case, you would just have to send a confirmation token with your first HTTP GET request to access a website with no ads. Competition would hopefully drive prices down and quality up.


They won't abuse it, because of GDPR. And people are smarter now; the same tricks from the past won't be repeated. Browser vendors will be able to easily block overly heavy WASM programs, for example ones that run too long. Or new laws will enforce that. [edit]: Or very heavy WASM apps will be required to be signed by a certificate provider, otherwise the user will be warned about the risks. Just like HTTP vs HTTPS.


Javascript enables that too and some people do it.


I'm typically sharply critical of the web, but I think this comparison is kind of silly. The biggest problem with Java Applets and Flash is the security issues, which were largely caused by giving web pages access to a second, less secure sandbox. WASM stays in the same sandbox as the rest of the web. Flash also had the problem of being proprietary and non-standardized with only one implementation, something WASM does not suffer from.

For those worried about the "all WASM" pages looking like the old "all Flash" pages of yore, consider that Flash and Java applets had their own UI stack and WASM does not. The closest WASM has to that is OpenGL, but you've been able to make all-OpenGL apps with pure JavaScript for some time and it hasn't taken over the web with terrible sites yet. WASM code can interact with the DOM. I guess we could worry about native C/C++ GUI toolkits being ported to WASM, but the web community gets what it deserves for making Electron a thing.

I don't like JavaScript in general but I don't see how WASM is any worse, and if anything it's quite a bit better.


Flash player had a open access spec and there was more than just the Adobe Flash Player as implementations.

Pretty much every AAA video game of a certain period was using Scaleform's Flash player for its user interface.


> Flash player had a open access spec and there was more than just the Adobe Flash Player as implementations.

No; the compiler maybe, ActionScript maybe, but not the player. The player is entirely closed source and there is no open spec for it. Or you need to show it to me.

> there was more than just the Adobe Flash Player as implementations.

Only Adobe's implementation could run all swf files. Scaleform was not an alternative flash player. Any attempt at creating an alternative and feature complete flash player failed.

Flash the tech is not open, at all.


The player is closed source but the SWF spec is open.

https://www.adobe.com/devnet/swf.html


Doom 3 BFG comes to my mind.


Why on earth is this being down voted?


WASM has all the security problems of Flash, and then it multiplies them by making WASM content linkable.

So someone makes a game, and they use this very useful WASM library over here. Only that library exploits Spectre or Meltdown to steal data. Or maybe it just silently hoses your machine by targeting the new WebGL shaders? Or any of a myriad of other things.


Exploiting browser bugs is still just exploiting browser bugs and this is already a problem for JavaScript, WASM doesn't make it worse. Flash introduces a second, black box sandbox implemented by morons.


I don't think you're understanding.

Let me be explicit. There are changes in WASM specifically made that render Spectre and Meltdown mitigations useless. (i.e. browser makers put in Spectre and Meltdown mitigations, and changes in WASM allow WASM content to get around those mitigations.) Developers cheer the changes, because they make WASM more useful, and to be fair, browser mitigations of Spectre- and Meltdown-type bugs make WASM far less performant. But changes which render those mitigations useless are dangerous no matter what your opinion is on how useful WASM should be.

Edit: Should probably mention that the upcoming changes include threading and shared memory. Implemented in a way that enables CPU side channel attacks. (Probably because there is no other way to get threading and shared memory without everything slowing to a crawl, but still.)


> (i.e. browser makers put in Spectre and Meltdown mitigations, and changes in WASM allow WASM content to get around those mitigations.)

Could you be more specific? I implemented Chrome's Spectre mitigations for WASM and I'm not sure what you are referring to.

> Should probably mention that the upcoming changes include threading and shared memory.

These only give you a high-resolution timer mechanism--which you have to build yourself, and which was already possible in JS with SharedArrayBuffer. So WASM is no worse in this respect.
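
For context, the usual construction looks roughly like this (a sketch; it assumes SharedArrayBuffer is enabled in the browser):

    // A worker spins on a shared counter; reading the counter from the main
    // thread gives a clock far finer-grained than the coarsened performance.now().
    const sab = new SharedArrayBuffer(4);
    const ticks = new Int32Array(sab);

    const src = 'onmessage = (e) => {' +
                '  const t = new Int32Array(e.data);' +
                '  while (true) Atomics.add(t, 0, 1);' +
                '};';
    const worker = new Worker(
      URL.createObjectURL(new Blob([src], { type: 'application/javascript' })));
    worker.postMessage(sab);

    // Later: Atomics.load(ticks, 0) serves as the high-resolution "timestamp".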


It's probably a good thing that a standard isn't designed around Vulnerability of the Day, no?

Some time in the (near?) future those vulnerabilities will just be a footnote in some history book and having to support mitigations forever (due to backwards compatibility) probably isn't the best thing to encode into a standard.

I'm sure some intrepid security researcher will find some new Vulnerability of the Day which can also be exploited through wasm and then they will need to add mitigation to the standard yet again ad nauseam until it becomes some giant bloated unusable mess for which we'll need yet another standard.


Maybe I don't understand Spectre and Meltdown, then. I wasn't aware it was the browser's business to patch that, I thought it was kernel and microcode patches?


Kernel and microcode patches allow the OS to control for meltdown a bit better.

The problem with Spectre is that it is a bit different. The array-bounds class might be patchable in the long run, but even from the beginning, people have suspected that the other classes of Spectre would be more slippery. And true to form, new Spectre-type variants continue to be discovered and disclosed by Intel even to this day (SpectreRSB, for instance).

In short, it's not a simple patch that will fix all these strains of bugs out there. In light of that fact, browser makers have implemented mitigations at the application level which are a bit more heavy handed. But, as you can imagine, this is going to impact all content inside the browser. Which brings us to the WASM content, and the threading and shared memory changes. And you know the rest of the story from there.


WASM itself isn't Java, Flash, or Silverlight, but isn't it another step in the ongoing multi-year process of replicating what those technologies tried to accomplish: compile to one format and run it on multiple platforms?

I think so, and the managers at Adobe and Sun must be kicking themselves for not somehow getting their runtimes more open, modular, and standardized, now that we see that write-once-run-anywhere with a few system hooks is all we need.

Then again... It was a different world in the mid 2000s. The web standardization process? Ha, what was that?

On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't negate the whole advantage of speed for WASM?


I think it's a step forward in that it's more integrated into the platform. Remember when TCP/IP used to be an add-on for an operating system?

> managers at Adobe and Sun must be kicking themselves

Both tried. As I recall Sun were blocked by Microsoft, and Flash was bundled as standard with Netscape from about 2001 onwards. Steve Jobs killed that stone-dead when he point-blank refused to support it on iDevices.


Some companies have all the right ideas and for whatever reason still can't execute.

Adobe AIR beat things like Electron and PhoneGap to market by years. IMHO the issue with Adobe is this insistence on 'open' still having various very opinionated elements. Adobe AIR for example had a lot of good ideas but still attempted to evangelize Flash and ActionScript. I _think_ MS is trying to pivot away from that grave now with .NET Core. Time will tell if the Mono-to-Wasm or .NET Core Native projects have legs.

I was so very excited about Adobe Air and wrote a production application with it in 2009.

I _think_ a sweet spot for WASM is data processing. The data visualization space should explode once I can work with data in the browser at near-native speed.

https://en.wikipedia.org/wiki/Adobe_AIR


To be fair, AIR was not the first in the domain. Mozilla had XUL/XULRunner ~15 years ago, which could be used to quickly develop kick-ass cross-platform applications in JS (and is still, by and large, the base of Thunderbird and Firefox).

Sun thought that they had something like that with Java Apps ~20 years ago, except they forgot to make installation and UX compelling, and the memory requirements were unacceptable for the time.


I remember trying to do stuff with Adobe AIR and it just felt like a colossal waste of time. As soon as you tried to do anything that interacted outside of their sandbox you were severely limited. I remember some guys did a hack called Cairngorm that I looked at, but it seemed quite cumbersome. Then there was support: I think it was only after a few years they just gave up and spun it off to Apache ... you need to stick at it longer than that to establish yourself ...


Remember when Adobe AIR was going to come to Android? That would have been an amazing write once run anywhere experience.


You can build Android apps (and iOS too!) with Adobe AIR today. In fact, it's been possible since about 2010.


It did come to Android.

For a while before being terminated, Flash got a native code backend.


If the Mac version hadn’t been a crashy dumpster fire, Steve Jobs might not have done that. Flash on Macs was always mediocre.


By blocking flash apps he increased the motivation for migration to native apps, so there was a sound commercial basis for this as well.


Macs have always been throttled frying pans. They sacrifice much performance for the sake of thinness and design. No wonder Flash always performed badly on mac devices.


Nope. The current "form over function" mentality is definitely a post-Jobs and post-Flash thing.


Not always. They were great little machines 10 or so years ago.


You did see that the recent MBP throttling issue was a software bug and has been fixed, yes?

Admittedly they could be clocked slightly higher if they were larger with better cooling, but they're by no means slow computers.


There were two sides to Flash on mobile; Adobe had a lot of difficulty implementing multi-touch correctly.


> On a side note, I'm seeing more articles pointing out that WASM runs in the JS VM. Doesn't negate the whole advantage of speed for WASM?

It basically means that WASM has the same safety/security model as the JS VM. Just like JS, it is compiled to native code (I'm simplifying a bit, of course) before being executed. However, where JS is one of the languages with the most complicated semantics around, which makes it really, really hard to compile efficiently, WASM has extremely simple semantics and is designed to be really, really easy to compile efficiently.
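
A trivial illustration of the gap: even a plain + in JS can't be compiled to a single machine instruction without knowing the runtime types, whereas a wasm i32.add is always just an integer add.

    function add(a, b) { return a + b; }

    add(1, 2);                        // 3    (number addition)
    add('1', 2);                      // "12" (string concatenation)
    add({ valueOf: () => 40 }, 2);    // 42   (runs user code first!)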


Thanks for the differentiation. I guess when I read that WASM will be native code, I expect it to be native as in 'C' or 'C++' native, not native to a VM.


WASM and JS are both JIT compiled in all major browsers which means that they compile to the same kind of native code that C and C++ do, they just do so as the program is running rather than in advance.


What's even better about WASM than JS in this case is that it can be compiled as it is being loaded. With JS, the entire file needs to be downloaded before being executed, but that's not the case for WASM, resulting in even more performance improvements.

https://hacks.mozilla.org/2018/01/making-webassembly-even-fa...
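
A sketch of the difference from the JS side (the URL and import object are placeholders; the two paths are shown side by side only for comparison):

    async function load(imports) {
      // Old, non-streaming path: download everything, then compile.
      const bytes = await fetch('/big.wasm').then((r) => r.arrayBuffer());
      const old = await WebAssembly.instantiate(bytes, imports);

      // Streaming path: compilation overlaps with the download.
      const fast = await WebAssembly.instantiateStreaming(fetch('/big.wasm'), imports);

      return { old: old.instance, fast: fast.instance };
    }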


> Doesn't negate the whole advantage of speed for WASM?

Well, here's a benchmark of asm.js JavaScript versus WebAssembly in a real world application:

https://pspdfkit.com/blog/2018/a-real-world-webassembly-benc...

The WebAssembly version outperforms the asm.js version.


If I'm not mistaken, in those plots lower is better. Only WASM on FF has a clear lead?


No, wasm, like asmjs, is designed to be compiled into native code once validated. Unlike asmjs, it doesn't also require a long parsing step. It uses the same code paths used to emit native code from the JS VM JIT.


It’s bootstrapping. You’re building a new thing (example: C++) that works a lot like the old thing (C). So you build a wrapper (Charm) that works on top of the old thing so you can get the conversation going, expand your capabilities and recruit.

Over time you do more of your own thing and you or someone else splits these two pieces of code into three smaller ones. Like the LLVM backend that can be fed by a C or C++ frontend.

As webasm becomes a competitive advantage you should expect to see people split up their javascript VM into three pieces, and Javascript and Webassembly running as peers instead of guest and host.

In a very small way, we kind of saw a similar thing with JSON. JSON was just a strict subset of Javascript and you could emulate it on old browsers with a linter in front of an eval(). Now it’s its own thing.
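
A simplified sketch of that old trick (not the real json2.js code, and not something to use today):

    function parseJsonLegacy(text) {
      // Crude "linter": allow only characters that can plausibly appear in JSON.
      if (!/^[\s\[\]{}:,"0-9a-zA-Z.+\-_\\\/]*$/.test(text)) {
        throw new SyntaxError('does not look like JSON');
      }
      return eval('(' + text + ')');  // parens so {...} parses as an object literal
    }

    parseJsonLegacy('{"ok": true, "n": 3}');  // => { ok: true, n: 3 }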


> I'm seeing more articles pointing out that WASM runs in the JS VM.

It runs in the JS sandbox, but it cannot be efficiently emulated by the JS "CPU". VM is an ambiguous term.

Browser developers are talking about running JS in the Wasm VM. That will probably be reasonable very soon.


Seems very premature to say that Wasm has "won" when it's only just come out. People were saying Flash had "won" when all the browsers had it embedded by default a decade ago.

Browsers removed Flash support, they might end up rendering Wasm useless by putting it behind loads of permission warnings.

There's also the chance that it'll turn out to be lacking as it is, and it'll end up being a useless appendage that gets killed off.


Wasm doesn't need permission warnings since by default the sandbox is very restrictive.

I think the reason browsers removed Flash was more that it was an absolute security nightmare for the browser vendors, and they had to fully rely on Adobe to patch the worst of it.

Wasm on the other hand leverages the Javascript VM so browsers don't have that problem. And they don't depend on an external vendor either.


"...Wasm doesn't need permission warnings since by default the sandbox is very restrictive.."

Upcoming changes to WASM include threading and shared memory. Unless browser makers implement those features in a manner that slows the machine to a crawl, WASM certainly will be getting some security warnings. Either because security minded organizations will disable it, or because browser makers will be honest and up front about the risks with those features. (There will be either security implications, or performance implications because they implemented the new features in a secure fashion.)


What exactly are the risks with these features that don't already exist with web workers and SharedArrayBuffer? There's the obvious Spectre issue; anything apart from that?


How is threading or cross thread shared memory a security risk?


Rowhammer/Spectre


Rowhammer doesn't need either shared memory or spectre and spectre is largely eliminated by browsers running as much as possible in different threads and then relying on OS protections, there isn't much remaining risk


"...spectre is largely eliminated by browsers running as much as possible in different threads and then relying on OS protections..."

???

Do you mean Meltdown?

Spectre is the evasive one. New variants are being found even up to today. SpectreRSB is a rather nasty one that was found 3 or 4 days ago. (Or rather, they told us about it 3 or 4 days ago for instance.)

Anyway, point is, there are no OS protections against variants of Spectre. I'm not sure how there even could be, some of these variants have been known publicly for 3 or 4 days as I said. So right now trying to patch Spectre is a lot like playing Whack-a-Mole. Personally, I think we'll end up having to live with CPU side channel attacks for the foreseeable future. People will just learn, probably the hard way, not to download untrusted executable code.

Of course, that tendency to say "no" on the part of consumers will probably impact WASM. But that should be expected.


> Wasm has a really strong sandboxing and verification story that others simply did not.

Eh, (P)NaCL had a very strong sandboxing and verification story.

Wasm is, basically, NaCl in a form that Mozilla and the non-Googles etc. could accept. There are small implementation differences but the details are all unimportant. It was all political. If they had accepted NaCl we'd have had what we have with wasm, only we'd have had it years ago.


> small implementation differences

Naaah it really doesn't seem small. Details are important.

NaCl comes from the plugin world (NPAPI → PPAPI), while wasm comes from the JS world ("removing the JS from asm.js"), and most existing JS engines have been able to just reuse their codegen/backend parts.

PNaCl was based on a bad idea: "let's make a stable subset of LLVM IR". LLVM moves quickly, if you have a stable subset you'll eventually have to translate it to actual updated LLVM IR anyway. And good luck to anyone who wants to implement a simple small interpreter for that!

wasm is really simple — just a close-to-metal abstract machine. No included libc, no bindings to any particular platform (there might be DOM/etc. bindings in the future, but on top of wasm, not right in the core spec).


At this point, Chrome is just as happy to get rid of NaCl. It's one of the last things to still use the crufty old PPAPI interface. Mozilla dropped the predecessor, NPAPI, years ago.


This may be true re chrome wanting to move on, but it seems a bit unfair to liken ppapi to npapi in reply to a discussion on security! Npapi was in-process, which was the whole security problem. Ppapi was built around strong process isolation.


We are standing in a Google hole and you’re complaining about how we would be getting somewhere faster if someone hadn’t put down their shovel.

It’s only a “political thing” if you don’t think that autonomy gives us diversity, or that diversity gives us more unique ideas to consider.

If we went with NaCL then everyone but the dominant player would be playing catch-up forever, worse than they already will be.


Unlike NaCl, Wasm doesn't require building an entire separate (undocumented and unspecified) API to make it work.

Accepting NaCl without accepting PPAPI would not have been very useful. Accepting PPAPI was a non-starter without a lot more work than Google was willing to put into it.


As did Java ...

EDIT "does" - and of course it's flawed but the "story" is pretty great :-D


Indeed. But, making a secure sandbox is the easy part. The hard part is poking holes all over the sandbox so the code can interact with the outside world without compromising security. JavaScript and browsers have spent decades figuring out that balance and working out all the detailed tradeoffs like same-origin policy, cookie rules, etc.


As I understand it, Java has features that make the time complexity (in the O(n^2) sense) and the general implementation complexity (in the probability-of-bugs sense) of the Java byte code verifier worse than WebAssembly validation.


the Java and Flash approach was software-implemented security contexts and managers, running the VM in the same process as the browser.

People used to say that you couldn't do strong process isolation because it would be unworkably slow.

And then Google Chrome demonstrated that was a fallacy. People actually flocked to Chrome because it was faster, despite it using multiple processes and isolating plugins.

NaCl built on that - its security model was strong process isolation, plus verification that the code run in that isolated process couldn't 'escape'.

Mozilla is still kinda in denial re process isolation.


> Mozilla is still kinda in denial re process isolation.

Isn't that exactly what Electrolysis[0] was for?

[0]: https://wiki.mozilla.org/Electrolysis


And JavaScript.


Is Javascript the same though? I've seen numerous stories where security vulnerabilities have been introduced by way of third-party hosted libraries. Java has had signed classloading since the 90s.


The same as WASM? Yes. Java? No, for the reasons you state.


No, you misunderstand me. Java has a very strong sandboxing and verification "story". Javascript does not - not even close. Author is saying that WASM does.


> Java has a very strong sandboxing and verification "story"

I disagree that it is very strong (I see java.security.Permission littered everywhere in the stdlib) and it definitely wasn't strong in the days of applets. Maybe in theory, but in practice, access to such a large stdlib (desktop tech shoehorned onto the browser) caused a great many issues.


"story"


NaCl had to be largely rewritten for each new platform just to get applications to run. WASM doesn't need that.


Isn't the answer really, "no, because WASM doesn't allow you to do anything you couldn't already do with JavaScript"? I don't think anyone would complain about Applets or Flash if the only way they could interact with the web page was through the DOM.

WASM enables nothing. It just makes a particular subset of JavaScript slightly faster.

EDIT: I'm not talking about Canvas. That isn't part of WASM.


Eh, I'd still complain about them; they tended to be terribly poorly written and do a lot of unnecessary work, slowing things down and using too much power. Of course, so do random javascript things.

I kind of miss the days of the HTML/HTTP-only web.


> Of course, so do random javascript things.

My point exactly.

The author is making the wrong comparison. WASM is JavaScript all over again, and, the HN NoScript crowd aside, that's not going anywhere soon.


The author is replying to a comparison often made.


To which the answer should be, "no, but Canvas might have, and here's why it didn't".

Actually, that would be an interesting article to me, because in retrospective, I'm surprised we haven't seen more Flash-like Canvas-based sites. My suspicion is that Canvas's interactivity story isn't yet strong enough: you kind of have to roll-your-own interactivity, pending technologies such as addHitRegion() et al. [1]; most notably there's no support whatsoever for text-based interaction (e.g. text widgets). Hence why most web apps which would have been Flash a decade ago still stick to HTML instead of Canvas.

[1] https://developer.mozilla.org/en-US/docs/Web/API/CanvasRende...
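
What rolling your own looks like in practice, as a minimal sketch (assumes a <canvas> element is already on the page): you keep a model of what you drew and search it on click, because the canvas itself is just pixels.

    const canvas = document.querySelector('canvas');
    const ctx = canvas.getContext('2d');

    const buttons = [
      { x: 20, y: 20, w: 120, h: 40, label: 'Play' },
      { x: 20, y: 80, w: 120, h: 40, label: 'Quit' },
    ];
    for (const b of buttons) {
      ctx.strokeRect(b.x, b.y, b.w, b.h);
      ctx.fillText(b.label, b.x + 10, b.y + 25);
    }

    canvas.addEventListener('click', (e) => {
      const r = canvas.getBoundingClientRect();
      const px = e.clientX - r.left, py = e.clientY - r.top;
      const hit = buttons.find((b) =>
        px >= b.x && px <= b.x + b.w && py >= b.y && py <= b.y + b.h);
      if (hit) console.log('clicked', hit.label);
    });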


Not sure if I agree. I would still say that WASM enables new types of applications. Of course you could have compiled all those new WASM applications to asm.js or even plain JS as well. But it would've been slower: maybe even unusably slow. That's why I think WASM enables new types of applications. It's the same as in the past, where faster JS engines enabled richer web applications like GMail, etc. Now we get games in the browser and large native applications like AutoCAD.


I worked on a 3d web based modeling app in my last job (Autodesk) that was based on asm.js. (and this was a legit solid modeler with heavy CPU usage - not a toy). It was basically 50% speed compared to the desktop version, which isn't ideal but totally viable. WASM is definitely welcome but you could do the same things in 2015 with asm.js. Honestly I think the biggest selling point is decreased download size. That was our big pain point with asm.js


I don't doubt the claim that a Java Applet can be faster than JavaScript, but I haven't ever seen a modern high-performance web application written as a Java Applet. The CAD stuff, games, etc., are always Flash (EDIT: when they aren't JavaScript). So the comparison is really Flash vs. JavaScript.

And… comparing Flash to V8? I'm having trouble finding modern benchmarks, but Flash is based on ActionScript, which is related to JavaScript. JavaScript has V8, which is a world-class JIT with Google's engineering talent behind it. I'd be very surprised if Flash's JIT was competitive with V8. At least as of 2011 it wasn't [1].

[1] https://habr.com/post/121997/


My answer was in reply to "WASM enables nothing". I argue that WASM is faster than JS, therefore it enables new applications.

I didn't make any claims about Java Applets or Flash performance.


Ah sorry I thought you were replying to "no-one would complain about Flash or Java Applets".

I don't disagree that WASM is faster than JS; in fact I make that point myself in several comments. My point is that WASM's speed does not enable it to be abused like Flash in a way that JavaScript cannot be, because JavaScript is already as fast as (in fact faster than) Flash thanks to modern JITs like V8. The reasons that WASM is not the "new Flash" must therefore be the same reasons that V8 + Canvas are not.


What Flash-based CAD have you seen? AutoCAD and Sketchup web versions are both using WASM. Onshape and Tinkercad use JS.


Sorry, bad example. I have hazy memories of full-screen "creator" type apps in Flash from years ago. You are right that they are all JavaScript or WASM now (owing no doubt to the performance enhancements V8 has brought).


Flash was basically JavaScript that was easily portable between browsers and allowed faster multimedia applications. In that sense the combination of WebAssembly + WebGL and other improvements is exactly like the return of Java applets and Flash.

Is the end result from the user perspective the same? I don't know. I'm just little worried that it might be.


"and other improvements" is doing a lot of work there. Flash had an entire scripting language and an API for drawing/creating UI elements, listening to their events, etc.

Raw WebGL isn't going to give you any of that. And listening for any interaction events at all is going to necessitate the DOM.

I don't doubt WASM will end up with something similar, but it's also something that's possible with JS today, there are a bunch of JS game engines that render to WebGL and perform very well.


Yet. But maybe someone will release something. ActionScript in Flash is based on ECMAScript, so it doesn't seem too far-fetched to think the combination of JS + wasm can solve some problems.


There would be no need for WASM in that case. JS and WebGL works just fine for 99% of cases.


Yup. Maybe some more use cases will emerge that JS and WebGL can't hit as effectively.


The article isn't about WebGL, it's about WASM. WASM doesn't make anything new possible, it just makes already-possible things faster. (WASM is quite literally designed as a faster replacement of pure-JavaScript asm.js.)


IMO the prospect of less of my bandwidth, memory and CPU cycles being wasted is a pretty big deal. It's silly how much overhead there is with standard web technologies, given the fact that it's how most people spend most of their time on their computer.


That's only because you didn't have canvas in the 90s.


WASM doesn't change that.


/asm.js/ doesn't allow you to do anything you couldn't already do with JavaScript

Web assembly does, as far as I understand it.


What capabilities do you see WASM adding over asm.js, beside speed and ease of integration (e.g. built-in linking and parsing)?


I honestly don't know. I follow Brendan Eich and a bunch of smarter people on Twitter who could answer that better than I can.


I've read the entire WASM spec and implemented a WASM loader and linter. It doesn't do anything that JS can't do on its own. It's just a slightly faster asm.js, in a way which is irrelevant to any comparison between JS and Flash, since JS in V8 is already far faster than AS in Flash ever was.


What I've been told by the WASM folks is that WASM won over asm.js because WASM allows you to do things that weren't possible in asm.js. I'm not a low-level programmer - my WASM usage comes from Unreal Engine development - hence recommending you seek more knowledgeable advice!


Today, there's no difference. In the future, there may be: threading and SIMD are two examples.


JavaScript already has multithreading (Web Workers), and Flash never had SIMD support. (At least, a Google search of the latter turns up nothing.)


I was making the comparison between wasm and JS. Web workers is a browser platform feature, not a JavaScript feature. Not all JS runs in a browser.


I'd still complain. I remember supporting a Java plugin application that was incredibly slow and beyond buggy. Then again, limiting the scope to only what JavaScript could already do could have helped avoid those issues to begin with.


There's plenty of slow JavaScript-only pages out there ...


I also worry that WASM will just bring another era of non-standard, Flash- and Java-applet-like UI design. Just check this Qt demo:

http://example.qt.io/qt-webassembly/SensorTagDemo/SensorTagD...

Then there is this gallery demo:

http://example.qt.io/qt-webassembly/quickcontrols2/gallery/g...

The right sidebar right now doesn't scroll with standard mouse wheel. I had to scroll it using mouse drag.


This is actually terrible.


Discussion on the blog post to which this is a follow-up:

https://news.ycombinator.com/item?id=17525858


WebAssembly is yet another approach to withholding the source of programs from the people who interact with them.

By sending the user only part of the necessary logic, in binary form, it ensures they can't easily become independent of the provider or create their own solution.

If someone doesn't see what I'm talking about, then read about the AGPL; it exists to prevent what I'm talking about.

Superficially, it is a shame Mozilla is working on this technology. However, its official goal is to advance "The Web"©®, which uses the above-mentioned effects to drive centralization and a lack of freedom for users.

WebAssembly and some other modern browser features are the basis for the massive centralized providers we have today. That "The Web" leads to decentralization is a lie.


This is a pretty ridiculous response. There were compiled languages long before interpreted/JITed ones, and these didn't halt the advancement of programming and programming communities, especially the sharing of code and ideas.

Also, you shouldn't need access to code to create your own solution. Many people have been reverse engineering or borrowing from ideas in the compiled world. When I copy something I like I don't typically look at their code; I focus on the functionality and break down what it's doing so I can reimplement it.

The fact that you are focusing on centralization is basically saying "I can't steal someone else's code". The web promised to be free and open and that had nothing to do with whether or not you could read the javascript.

Most modern JavaScript on sites is rather unreadable due to being transpiled anyway. There is still the option to use plain JavaScript instead of WASM, just like now you can use plain JavaScript instead of transpiling it.


Companies can develop big, bulky WASM-only frameworks with graphics primitives & so on, very much like Java applets, and effectively kill the openness of the web standards.

> have been reverse engineering

Reverse engineering some HTML is not difficult. Minified JS is much more difficult. A 50MB blob of WASM is just too time-consuming.

> The web promised to be free and open and that had nothing to do with whether or not you could read the javascript.

On the contrary: making it 100x more difficult to understand what a website is doing is terrible for security, compatibility and inclusiveness (good luck making a braille terminal for WASM-only websites, or using them on a very slow uplink).


>The web promised to be free and open and that had nothing to do with whether or not you could read the javascript.

This had and has everything to do with it. Please, read what you have just written, and think.


There are people that lobby for closed source and distribute closed source, and there are people that lobby for floss and libre software. And then there are hypocrites.

Mozilla is bathing itself in its image of standing for a free and open world. The truth is that its business in "The Web" is creating the groundwork for a closed and non-free world. This hypocrisy is deafening.


A language being compiled or not has nothing to do with the web being free.

Like I said, we can already transpile our code to a practically unreadable state.

Companies can already choose to make their code closed source and transpile it. If their goal is to make it more closed, they can do that now.

WASM doesn't change that. People can still make their code open source, and they can still write plain JavaScript. It's really no different. Instead we are giving people options and the ability to have more performant apps where they're needed.


Mozilla is also realising that the web needs to compete with native apps as well. Note that it's even easier to hide the source of native apps.

(Furthermore, many apps that would use WASM probably have a significant server-side part as well that is not necessarily open source.)


At least native applications are something static you can verify. With web 'apps' like, say, anything based on Electron, the end user has no control over what code is run. Instead the 'app' just pulls down and runs whatever the company/etc wants you to run, dynamically and potentially differently every time, with the permissions you granted originally.


Your mention of Electron is the perfect example of why this is no different for native apps. Of course, on mobile this is usually mitigated through app stores, but there is no reason similar mechanisms could be introduced for the web if this really turns out to be problematic.


Electron is not a native app. Electron is a browser web app marketed as a native application.


The Chrome browser is only 50% of Electron. The other 50% is a complete Node.js distribution, which allows access to the OS filesystem, the network, and anything Java or .NET can do on a desktop... So no, it's not just a browser web app. Electron is absolutely native to the OS it runs on.


The only reason it's got all those permissions is that it is a native app, which can claim all those permissions. Sure, Electron apps are browser web apps marketed as native applications, but by definition, native apps can do everything Electron apps can do.


Is minified JS fundamentally different? JS is already used as a write-only compilation target, and wasm doesn't seem worse than that (if you're concerned about its binary nature, there's a standard textual format).

There's no technical way to enforce that code is shipped to people in an easily readable/editable format: JS or wasm or machine code can all be equally difficult to consume. Enforcing this requires a human solution, and that is exactly why viral copyleft licenses exist (including the AGPL you reference).


No, unreadable, non-free JavaScript is not fundamentally different. Two wrongs don't make a right.

Unreadable, non-free JavaScript is used to reach a goal. That goal is to withhold the source from the users to bind them to the service. This leads to centralization, power imbalances, and thus attacks on the freedom and sovereignty of the users.

WebAssembly is a technology that makes the aforementioned goal easier and more convenient to achieve.

Mozilla is working on this. So far so good (or bad). The issue arises when you combine this fact with the self-portrait of Mozilla as the one defending the rights and freedom of the users.


> WebAssembly is a technology that makes the aforementioned goal easier and more convenient to achieve.

WebAssembly is a technology for achieving better performance on the web, which as a side effect happens to make withholding the source easier.

Actually, it's pretty much like minifiers. And I guarantee you: the main reason most people are using minifiers is to improve performance, not to hide the source. That just happens to be a side effect.


What you call a "side-effect" can quickly turn into the main reason for using it.

Companies can develop big, bulky WASM-only frameworks with graphics primitives & so on, very much like Java applets, and effectively kill the openness of the web standards.

As an end user there's no way to avoid loading hundreds of MB of WASM every day, unlike with desktop software installation.


Sure, it can become the main reason for people to start using it. It's disingenuous to claim that that is Mozilla's goal, though.

(Apart from that I doubt that it will result in a significantly worse web than what we have today, but we'll see.)


The same is true about JS, really.


wasm's true goal is to bring more performance and technology-independence to the web, which leads to more decentralization, user power, etc. For instance, more things can "run everywhere" because the performance is acceptable in a browser, so writing native applications for individual platforms isn't necessary. Moving things to the web becomes much easier because it is easy to retarget existing code in native languages without having to rewrite it all. And, importantly for freedom, that means more things can be freely accessible webapps rather than having to go through the mobile vendors' stores and vetting.

All of these benefits come without a significant cost to user freedom because the cost has already been paid, and will always be paid no matter the underlying technology: companies will always ship obfuscated code if they feel that's important. (Also, I suspect most minification is driven by bandwidth and page speed concerns.)

Lastly, I don't think wasm is actually noticeably more convenient: shipping wasm requires a compilation step, just like minifying JS.


Since Wasm is one of the few virtual machines that are deterministic, it is very useful for distributed systems that need consensus. The way it uses types and external function calls is great for standardizing interfaces. It's also one of the few languages that can be reliably sandboxed. Contrary to Javascript, trust can be minimized.

I just hope that complex features such as multithreading or garbage collection don't become a mandatory component of the base VM. A complete, JIT-able implementation in RPython is now only about 3k LOC.
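
As a small illustration of how explicit those interfaces are, the JS embedding lets you enumerate a compiled module's declared imports and exports before running (or trusting) anything. A sketch, assuming a module fetched from a placeholder URL:

    // Inspect a module's typed interface without instantiating it.
    // "module.wasm" is a placeholder URL for illustration.
    fetch('module.wasm')
      .then((resp) => resp.arrayBuffer())
      .then((buf) => WebAssembly.compile(buf))
      .then((mod) => {
        // Everything the module needs from the host, as {module, name, kind} records:
        console.log(WebAssembly.Module.imports(mod));
        // Everything it exposes back (functions, memories, tables, globals):
        console.log(WebAssembly.Module.exports(mod));
      });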


Go look at the output of the closure compiler (in advanced mode) and then tell me again how it gives the user a meaningful way to alter, read, improve or understand it.


No, not by a long shot. Remember how much time it took for an elaborate Flash animation to load? Sometimes you had to wait more than a minute before anything even rendered on the page. And Java applets... were a joke, a Frankenstein approach to embedding applications in the browser. By contrast, Wasm IS JavaScript: even if the source code might be written in another language, it's a compilation target. It will only be able to do whatever the browser allows it to do, i.e. access Web APIs, work with the DOM, or just do computations that don't output anything UI-wise. This is not the return of badly executed and badly thought-out plugins; this is "native applications in the browser" done right.
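
A minimal sketch of that relationship: the module can only call the JS functions you explicitly import, and those do the actual Web API work (the file name "app.wasm" and the import/export names are hypothetical):

    // wasm has no DOM access of its own; the imported JS function below is the
    // only way this module can affect the page.
    const importObject = {
      env: {
        // Hypothetical import the module was compiled against.
        report_progress: (percent) => {
          document.querySelector('#progress').textContent = percent + '%';
        },
      },
    };

    WebAssembly.instantiateStreaming(fetch('app.wasm'), importObject)
      .then(({ instance }) => instance.exports.run()); // hypothetical exported entry point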


Would it be possible to run the Flash plugin/runtime in wasm to get the benefits of Flash (easy to produce, a lot of legacy content) with the sandbox of wasm?


In theory, yup! I haven't seen anyone try it though.


Ha, maybe it's time for a WASM-revival of Gnash! [1]

[1] https://www.gnu.org/software/gnash/


That is only up to Adobe or the developers of that other OSS flash runtime/reimplementation.


Not too sure. You can probably lift it to LLVM bitcode, e.g. with [0], and compile back to WASM. But yes, not sure about the specific DLL calls.

0 - https://github.com/trailofbits/mcsema


simple matter of programming!


> That being said, the web standards process is healthier than its ever been

Some disagree: https://www.eff.org/deeplinks/2017/09/open-letter-w3c-direct...

Obviously it's better in aggregate than the web of the applet and Flash era, though.


One failure does not mean the whole process is bankrupt.


Agreed, and I realize that your point was (I assume) more about the historic hot mess.


That’s also true. :)


Wasm follows Flash, Java (applets), the JVM and even JavaScript in the sense of being one codebase that will run on multiple platforms, potentially delivering on promises of the past.

Still, Wasm seems unique in a few ways :

- languages like rust have emerged to help address the browser issues of the past in other technologies

- wasm can exist in the front end or back end, written in a variety of languages, but compiles to one bytecode, like the JVM.

- wasm appears increasingly capable for both front-end and back-end development, while not insisting on either.

- wasm is the first standard I can remember where all the browser vendors (Google, Microsoft, Mozilla, Apple, etc.) agreed on a baseline. This is much different from one company building Flash, Java, etc.

Java applets and Flash existed at a time when the HTML standard wasn't capable of rich frontend experiences, or deep backend work.

It's quietly an increasingly promising time, and my hope is the progress continues to gain momentum.


Everybody who complained about JS being the only language of the web should be extremely happy about WASM.


For most high level languages, JS is for the foreseeable future a much better compilation target than WASM.


Remember when restaurant menus were in flash and you couldn’t open them on your phone!


You could run flash on android.


Not all of us used Android when Flash was available for it


WebAssembly is just the payback of low-level programming languages against JavaScript. Truth is, there should be openings so they can grab some pieces of the big pie too, because all those JavaScript kids became rock stars for no reason.


If anyone wants to take a quick look at running a WASM app, to take your understanding from theoretical to applied, look no further than Fabrice Bellard's excellent BPG image format and the JS decoders, which build for asm.js and wasm.


> But implementers are important; they control the web.

I’d argue users are an even more important part of the equation. If the UX sucks, no one will use it and you’ll have implemented an empty spec.


Klabnik makes a strong case for WASM here, and one I haven't really seen anywhere else.

Now the question is, who will make the first major framework on top of WASM, and for which language?


Thank you!

I bet you know which language I'm betting on ;)

That said, I don't think a framework is really the thing; I think JavaScript users using wasm in libraries is a much bigger growth area for wasm, at least in the short term.


Wonder what its effect on the popularity of Javascript will be, given that it can be used as a compilation target for pretty much any language (?).


I'd look to the server side; JavaScript is still quite popular there even though you can use any language.


Isn't that just a side effect of frontend devs being able to use their existing JS knowledge serverside?

I'd expect the reverse to happen as well to some degree since currently backend devs are somewhat "forced" to take on JS if they want to do any frontend work. Or does that seem less likely to you?


By that logic, they will also stick to JS instead of using WebAssembly. So, same result.

I think the “backend dev migrates” case is less likely, in a purely math sense. The number of backend devs forced to do front end is smaller than the number of frontend devs.


Java applets were before their time. It took JS/HTML the better part of a decade to catch up functionality- and performance-wise.


In a way, yes. However, there is other tech like WebGL, Canvas, web audio, web video, etc., even apps, that makes up good portions of what Flash brought; WebAssembly is the compilation part of that and brings other languages into the mix as the source for the output wasm/asm. It also brings a faster, near-native runtime and hardware rendering capabilities when combined with other tech. Flash and Java never had native or good hardware rendering support, the latter of which is what ultimately killed Flash in the transition to mobile.

Flash (and Shockwave Director, the first web 3D platform) actually started out as Java applets when they first came out, then switched to ActiveX plugins, which really did have a good run of innovation on the web. Microsoft jumped on that to push ActiveX, and Flash's timing was just right, as Microsoft was doing everything possible to kill off Java applets as well as other platforms like QuickTime and RealPlayer.

We owe lots of web interactivity to plugins, and mostly to Flash. With Chrome killing plugins, I think there is a slight gap left by platforms like Flash that pushed innovation; Java applets never really took hold and were hampered by Microsoft as well as by being slow. Microsoft even tried to make its own Flash with Liquid Motion in the late 90s and again around 2005 with Silverlight, just before mobile took off and changed everything. Mobile killed off plugins as much as Chrome did.

Flash did lots of things well, or best. Vector animation is one; SVG is pretty good now, but Flash was still better. Flash was the best the web had for gaming and video for a long time. Flash was great for hitting web services/remoting before AJAX was as solid. Flash pushed using JSON data over XML (though it could do both) before JavaScript handled them well. Flash was the original web sockets. Flash revolutionized interactivity, games and video. Flash made YouTube possible. Flash had a display list and DisplayObjects that were like an early virtual DOM. Flash was a market standard whose main goal was pushing interactivity forward, which it needed to do to survive; Java applets did not keep up, and that is harder to do with standards because they are slow.

WebAssembly may be another Flash/Java applet go-round, but just as plugins did for many years, wasm can help push things forward. wasm may also supply that missing piece on mobile, since it is just JavaScript in the end.

Flash was a great interactive platform under Macromedia, and somewhat under Adobe, but Macromedia was really better at managing the development/technology platform that it was. Microsoft was once going to buy them before Adobe got them. Adobe let Flash languish a bit and lost its hold.

Flash is actually partly responsible for pushing web standards forward in HTML5, video, WebGL, canvas, SVG, even JSON and AJAX, because they were all innovated on in Flash first and then brought to the browser. Flash with ActionScript 3 even pushed the ECMAScript standards forward. AS3 was based on ES4, which Microsoft, Yahoo and others teamed up to kill; I think it was a solid ECMA version, quite fun, very readable, and much like TypeScript [1].

I am a bit sad Flash is gone and wasn't looked after properly by Adobe; if Macromedia had stayed around it might have gone differently, and WASM would be Flash. However, Flash ended up stuck in software-rendering land for too long, and it was buggy, with security holes, didn't keep up on desktop, and wasn't ready for mobile. Adobe essentially sent both Director and Flash out to pasture (along with Freehand/Fireworks, which were good), but we still need something pushing innovation, and standards are slow to do that; apps have somewhat taken that role in place of plugins like Flash today, and wasm may help.

[1] https://www.reddit.com/r/javascript/comments/34ps9z/why_was_...


Let's not forget that Flash-the-creation-studio-application was hugely successful with amateur animators, which provided content for sites like Newgrounds et al. It enabled such artists to produce animations without dabbling in programming.

For those who remember the web in the early 2000s, Flash-only, designer-built websites were the norm for restaurants and luxury brands; it was a designer's dream, allowing them to build animation-enabled, pixel-perfect "web sites" without dabbling in HTML and JS code.


Yeah, Flash was amazing at attracting both developers and designers, and it led to awesome games and some really innovative interactive experiences.

The web is a better place due to Flash, and Macromedia did a killer job; I wish Adobe had kept it going better. Originally I was attracted to Flash 3/4 for making cartoons; Flash was where I launched my first game (and hundreds more), and I got lots of fun game/interactive/video work from it. I probably work in games/dev today because of Flash (mainly Unity now, another Flash-inspired tool). I loved AS3 as well: a great iteration of JavaScript and the only ES4 implementation, unfortunately kicked to the curb. Flash was awesome all the way to the 2007ish Papervision3D days (Three.js by Mr.doob [1], part of the Papervision team along with great designers like Carlos Ulloa [2] and people who now work at Unity like Ralph Hauwert [3], takes Papervision's place), but then Adobe let Flash languish without hardware rendering support; eventually mobile and apps killed it over native/hardware rendering, and Chrome put the final punch in by changing plugin support. Still, HTML5, Canvas, WebGL, SVG, web video, web audio and even wasm are a direct result of the innovation from Flash. Flash even got big in entertainment like video and games, and lots of web cartoons still use Flash (Adobe Animate now) for animation today [4], though many use ToonBoom/USAnimation (also inspired by Flash) [5][6].

Sites like Joe Cartoon, Newgrounds, Praystation, theFWA, etc. were all part of it, and YouTube jumped on Flash video; big parts of internet history are due to Flash.

Flash was one of those rare tools that attract interactive/creative designers and interactive/creative developers alike. I don't know of another tool/platform that was as attractive to both design and development. Adobe Animate is still around, but the Flash community was awesome and attracted all types of creative people.

In the end, Flash was so much more than a buggy plugin; it was a great interactive, vector-based entertainment and content-creation system, end to end. Flash/Animate is still out there today, but not as integrated. Adobe Animate is the same tool but can export to apps and HTML5/canvas/WebGL. There are also open-source tools that can do Flash-like work for designers, though not as integrated: OpenFL [7], Haxe [8] and Lime [9].

[1] https://threejs.org/

[2] https://helloenjoy.com/

[3] http://www.marketwired.com/press-release/flash-3d-vet-ralph-...

[4] https://en.wikipedia.org/wiki/List_of_Flash_animated_televis...

[5] https://www.toonboom.com/company/customer-productions

[6] https://en.wikipedia.org/wiki/USAnimation

[7] http://www.openfl.org/ + https://github.com/openfl/openfl

[8] https://haxe.org/

[9] https://github.com/openfl/lime


Thanks again, Steve, for a very clear and sober explanation of this. It's like you're inside our heads! You grok.


You're very welcome!


TL;DR: no, because WebAssembly uses browser APIs, while Flash and applets did not. There is nothing that can be done in WASM that cannot be done with JavaScript, API-wise, whereas Flash was able to do socket programming 10 years before WebSockets and peer-to-peer 15 years before WebRTC.


One can only hope.


Yes.


No.


You could technically rasterise your UI/video/content to a canvas at 60 fps.

So it's maybe Flash-like, and probably a step in the right direction: you can then code in SDL/Qt/GTK/etc.
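
Roughly what that looks like from the JS side, assuming a module that renders into a linear-memory framebuffer; "ui.wasm" and the exports used below (memory, render_frame, framebuffer_ptr) are hypothetical names for whatever your toolchain produces:

    // Blit a wasm-rendered RGBA framebuffer into a 2D canvas on every frame.
    const canvas = document.querySelector('canvas');
    const ctx = canvas.getContext('2d');

    WebAssembly.instantiateStreaming(fetch('ui.wasm')).then(({ instance }) => {
      const { memory, render_frame, framebuffer_ptr } = instance.exports;

      function frame() {
        render_frame(); // module draws the next frame into its linear memory
        const pixels = new Uint8ClampedArray(
          memory.buffer,
          framebuffer_ptr(),
          canvas.width * canvas.height * 4 // RGBA bytes
        );
        ctx.putImageData(new ImageData(pixels, canvas.width, canvas.height), 0, 0);
        requestAnimationFrame(frame);
      }
      requestAnimationFrame(frame);
    });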


Which is precisely the reason it should be used for utility libraries. Most CRUD apps should remain JS/DOM based.

For games I would say it makes sense to code the UI as well in WASM.

I have no illusions though: in the near future it will be heavily abused by devs who dislike CSS/JS/DOM.


I'd love to write markup that uses actual UI element names, or maybe even no markup, but still allow CSS for theming purposes.

Then JS would just bind components to each other.


A bit late, but React is your perfect match then :)


Is this article's title clickbait?


I had a conversation with someone about this on Reddit yesterday https://www.reddit.com/r/programming/comments/91s3pa/is_weba...


Perhaps an alternative that is more reflective of the actual content would be: Why WebAssembly is not the return of Java Applets and Flash.


The wonder of an article whose title is posed as a question is that the answer is always "no". No need to read it.


>Other technologies were owned by companies

>Java was owned by Sun Microsystems, Flash was owned by Adobe. But why does this matter?

This is the reason the corporate sponsors of WASM banish Flash while at the same time promoting WASM.


Yes, absolutely. The only significant difference is that it isn't a plugin.


One other big difference is that it's not owned by a single private company. Adobe tried to monetize certain Flash 3D features at some point, but was rebuked, so they withdrew their plans.


Alright, I'll give them that. However, I think this will, like it has with the rest of the web, lead to differences in how each browser handles the spec, various browser-specific extensions, etc. In that way, it's actually worse than having one company behind it.


Wasm is the first major thing I can remember all the browsers agreeing on in a very long time, or ever.

Agreeing of course, after trying to do it on their own and then realizing they were thinking a lot of the same things.


>Wasm is the first major thing I can remember all the browsers agreeing on in a very long time, or ever.

Uh, flexbox, grid, fetch, service workers, even shadow DOM...? The cooperation between browser vendors has been exceptional over the last five years.


Are any of those as large and potentially as broad as wasm?


For frontend devs, sure.


The JS community currently deals with this by way of "polyfills". I wonder whether such an approach will be easier or harder with WASM...


Any behaviour differing from the spec is supposed to be behind feature flags so that WASM applications can reliably detect and use it.
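
In practice that detection looks a lot like today's JS feature tests; a minimal sketch, where the probe bytes are just the 8-byte wasm header standing in for a real feature-specific module:

    // Basic support check plus a feature probe via WebAssembly.validate.
    function wasmSupported() {
      return typeof WebAssembly === 'object' &&
             typeof WebAssembly.validate === 'function';
    }

    // "\0asm" magic + version 1 is the smallest valid module; a real probe
    // would encode an instruction from the feature being tested.
    const probe = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

    if (wasmSupported() && WebAssembly.validate(probe)) {
      // Safe to load the wasm build of the library.
    } else {
      // Fall back to a plain JS implementation, much like a polyfill.
    }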


And! It is a public W3C standard, is open source, is just a VM, doesn't define its own UI components, is not controlled by any private corporation, will be maintained by the browser vendors, and probably many other things I'm not aware of...

It reminds me of the Monty Python "What have the Romans ever done for us?" scene...


Not at all. WASM doesn't define a UI component.


Sure it does, canvas and WebGL are good enough for everything else.


Those aren't part of WASM.


Only because they don't have to be. Canvas and WebGL didn't exist back when Java and Flash were dominant, so they had to include equivalent functionality and step outside the DOM. It is meaningless to say that WASM is different because of this.


The 'WASM is the new flash' argument only succeeds if you conflate at least all of WASM, canvas, webgl, webaudio, websockets and several other APIs and call it all 'WASM.' That is misleading in my mind. All of this stuff was developed independently of WASM and is usable without WASM.

> It is meaningless to say that WASM is different because of this.

Flash can and did extend the browser in arbitrary ways. WASM is operating within a standards regime and relies on standards based capabilities. We need not worry, for instance, that WASM is going to independently implement an API for some peripheral (camera, microphone, etc.) that duplicates or supplants some existing, standards based browser capability. While that remains true there is a demonstrable difference.


> The 'WASM is the new flash' argument only succeeds if you conflate at least all of WASM, canvas, webgl, webaudio, websockets and several other APIs and call it all 'WASM.'

I think it is disingenuous not to include these features, because they didn't exist in any form when Java and Flash were big.

> Flash can and did extend the browser in arbitrary ways. WASM is operating within a standards regime and relies on standards based capabilities.

Yes, it did, because it had to. The features WASM is relying on for similar functionality did not exist at the time, there was literally no other way to do it.

So your argument is that WASM is different because now, nearly 20 years later, these features are part of the standard web.


> I think it is disingenuous not to include these features

Before you ever heard the term "WASM", all of "these features" were available via plain old HTML5 and JavaScript. Since that is the case, why not argue that browsers themselves are the new Flash? And if so, what does WASM have to do with it?

The answer to the last question is "nothing."

"These features" did not have their origin in WASM. They existed prior to WASM and function independently of WASM because they do not depend on WASM. They are not WASM. Full stop. Conflating them all into WASM is, in fact, disingenuous.


I suppose that's a fair perspective, but really what you're doing here is being pedantic. WASM is just the final step in integrating something like Flash or Java into the browser. That it took 20 years of standards adoption to get there isn't really relevant in my opinion.


Equating 20 years of open standards development to a proprietary extension will never add up in my book. If that's pedantry then so be it.


Even if it's twenty years of open standards that replicates the functionality of a proprietary extension? That it doesn't make sense to you doesn't make sense to me.


Why? WASM fundamentally changes nothing about what web sites can do or how they do it.

This article would make more sense if it were about Canvas, SVG, or any of the other JavaScript-accessible media elements which were added to enable Flash-like behavior from JavaScript. Those were game-changers.

Claiming WASM makes the web more Flash-like is like claiming ES6 makes the web more Flash-like than ES5. It doesn't. The existence of DOM scripting and scriptable media elements does, and WASM doesn't change this. Claiming it does is just sensationalism.



