We don't really need to speculate - just look at mobile apps. They're binaries. They're relatively easy to develop, but we need gatekeepers like Google and Apple verifying they're safe, and the quality is horrible at the low end. And devs need to pay an annual fee to distribute them.
Browsers already support something very similar (plugins, distributed via app stores), but they get very little love, because making a website covers most of the same ground and the extra benefits of making a plugin are too niche for most things to actually need.
(this blog post is pre-WASM, but the main advantage of WASM over asm.js isn't that WASM is a binary format, or even that it performs better than JS; it's that JavaScript can focus on being a language again, while WASM can focus on being a compile target)
I don't want to run another virtual machine that eats up RAM, has worse performance and battery life than something natively compiled, and generally serves only the interests of the person serving it to me.
I want apps that are 100% native and work when only occasionally connected. And I want the www to go back to serving me content, not apps and garbage.
I don't want an Electron app with a gigabyte RAM footprint that ships Rust compiled to WebAssembly to read email, when my current mail client uses 77 MB of RAM and just works. I don't want 50 browser engines running on my phone.
I think as an industry we've lost our way and gone crack-smoking crazy.
I also want a www of content, but a www of apps as well. The web is too good to give up. The idea that I can send you a URL pointing into an application's state, or navigate between the states of different applications, is really cool.
Somehow, more separation between content and applications sounds ontologically impossible, even if desirable.
Eventually. At the end user's cost in time, memory, CPU, and laggy-ass performance while the JIT is running. And this happens on potentially millions of machines, multiple times a day.
Which is my point about it serving the needs of the publisher, not the end user.
WebAssembly already exists. It makes certain use cases easier, but not most. It gives good performance, but still worse than natively compiled software outside the browser.
WebAssembly only gives better performance for work that shouldn't be on the UI main thread in the first place. As long as you don't do number crunching in the mousemove event handler, I doubt WebAssembly can beat vanilla JavaScript for reactive UIs.
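To make that concrete, here is a minimal sketch (assuming the Emscripten toolchain; the function name is made up) of the kind of work where WASM does pay off: a self-contained compute kernel that JS calls outside the per-event UI path.

    /* Hypothetical compute kernel; EMSCRIPTEN_KEEPALIVE exports it so JS can
       call it as Module._sum_of_squares(ptr, len). This is the "number
       crunching" case; per-event UI logic stays in plain JS. */
    #include <emscripten/emscripten.h>
    #include <stddef.h>

    EMSCRIPTEN_KEEPALIVE
    double sum_of_squares(const double *samples, size_t len) {
        double acc = 0.0;
        for (size_t i = 0; i < len; i++) {
            acc += samples[i] * samples[i];
        }
        return acc;
    }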
Once you work with the DOM, the (small) overhead of calling from WASM into a JS shim, or any performance difference between WASM and JS, doesn't matter: the browser's internal DOM implementation will dominate the performance profile unless you do really dumb things.
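As a rough illustration of that boundary hop (a sketch assuming Emscripten's EM_JS macro; the element id and function name are invented): the shim call itself is cheap, and the real cost sits inside the browser's DOM code behind textContent.

    #include <emscripten.h>

    /* EM_JS defines a JS function callable from C; the body runs in JS and
       does the actual DOM mutation. UTF8ToString converts the C string. */
    EM_JS(void, set_status_text, (const char *msg), {
        document.getElementById('status').textContent = UTF8ToString(msg);
    });

    void report_progress(void) {
        set_status_text("processing...");   /* WASM -> JS shim -> DOM */
    }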
...for typical portable C code the performance difference is surprisingly small though (within 20% or so). The main advantage of native compilation targets is that you have more "manual optimization potential" via non-standard language extensions like builtins and intrinsics (usually at the cost of portability and compiler compatibility).
That's why you develop and debug the native target and then just flip a build-system option to compile to WASM; it's really not much different from regular cross-compiling.
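To make both points concrete, here is a small sketch (an assumed setup; the file name and flags are illustrative). The same portable C builds natively for development and debugging, and to WASM by switching the compiler, while the non-portable intrinsics fast path mentioned above sits behind an #ifdef with a plain-C fallback.

    /* dot.c: builds unchanged for both targets (illustrative commands):
     *   native: cc -O2 -msse2 dot.c -c
     *   wasm:   emcc -O2 dot.c -o dot.js
     */
    #include <stddef.h>
    #if defined(__SSE2__)
    #include <emmintrin.h>              /* SSE2 intrinsics: x86 builds only */
    #endif

    float dot(const float *a, const float *b, size_t n) {
    #if defined(__SSE2__)
        /* Non-portable fast path: the "manual optimization potential". */
        __m128 acc = _mm_setzero_ps();
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                             _mm_loadu_ps(b + i)));
        }
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; i++) sum += a[i] * b[i];
        return sum;
    #else
        /* Portable fallback: what the WASM build compiles. */
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++) sum += a[i] * b[i];
        return sum;
    #endif
    }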
The individual pieces are all there (though only since very recently); now they just need to be connected through a VSCode extension. I cobbled such a workflow together myself [1] with a couple of Python scripts and existing VSCode extensions, and it's good enough for me. In the end it's up to IDE vendors to provide a similar workflow.
Yeah, it only covers C and C++, solves your specific use case, and still doesn't change the fact that the tooling sucks for everyone else, with the exception of something like Blazor or Unity.
Do you really want the thousands of bugs that would occur if we shipped a real binary to each device?
And all the security problems?
Sorry, FooApp 1.1 does not support your iPhone 12. It also crashes on the Samsung Galaxy S21 and lower, and it has a huge security problem on Windows 10 when running on an Intel Core i5.
Wasm is not more efficient than regular old JS for many use cases. "HTML is text" is pretty meaningless: it is parsed into the DOM, which the browser already represents efficiently in native code. And this DOM, with all its warts, is the biggest benefit of the web: it's standard, accessible, and available everywhere.
WASM can only access it through JS, forgoing those benefits, while drawing everything on a canvas would kill what makes the web introspectable.
WasmGC is now available, so I guess it can also be solved sometime in the future, and I wouldn’t mind it at all.
Though we would need much better WASM debug tooling, similar to JS consoles. Today's minified, obfuscated JS is not much better than machine code, but at least with the existing tooling we can sort of poke into the internals. I would like to keep that property of the web.
> I assume browsers will one day expose a native DOM API.
TBH, that's extremely unlikely to happen. The entire DOM is built around the JavaScript object model; mapping the DOM to a C-style API would be possible but inconvenient to use, and performance-wise it really wouldn't make a difference compared to going through a JS shim (since performance is dominated by the browser-internal DOM implementation, not by the user code manipulating it).
A 'WASM-native' WebGPU API would make a lot more sense, but again, that's extremely unlikely to happen.
The currently required Javascript shim might still disappear, but only because it might be generated automatically under the hood in the future.
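For what it's worth, a purely hypothetical sketch of such a C-style DOM binding (none of these names exist anywhere) shows why it would be clunky: opaque handles instead of objects, explicit string lengths, manual release calls, and the same DOM work happening underneath anyway.

    /* Entirely invented API, for illustration only. */
    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t dom_handle;   /* opaque node handle */

    extern dom_handle dom_get_element_by_id(const char *id, size_t id_len);
    extern void       dom_set_text_content(dom_handle node,
                                            const char *text, size_t text_len);
    extern void       dom_release(dom_handle node);  /* no GC across the boundary */

    static void set_status(const char *msg, size_t msg_len) {
        dom_handle node = dom_get_element_by_id("status", 6);
        dom_set_text_content(node, msg, msg_len);
        dom_release(node);
    }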
Why can't browsers be used just to run native binary apps? Wouldn't that be more efficient? And even easier to develop for?