Hacker News

I imagine the next step is to provide a WebAssembly binding.

edit: "The API has to execute efficiently on WebAssembly and in multi-threaded environment. That means no GC allocations during the rendering loop in order to avoid the garbage collection pauses."
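The standard way to meet that "no GC allocations in the rendering loop" constraint in JS is to preallocate every buffer once and mutate it in place each frame, so the collector never has work to do mid-loop. A minimal sketch (the names and sizes here are illustrative, not from the project):

```javascript
// Allocation-free render loop sketch: every buffer is created once, up
// front, and reused each frame so no garbage is produced inside the loop.
const MAX_SPRITES = 1024;

// Preallocated, flat typed arrays: x/y pairs per sprite.
const positions = new Float32Array(MAX_SPRITES * 2);
const velocities = new Float32Array(MAX_SPRITES * 2);

// Per-frame update: writes in place, allocates nothing.
function step(dt) {
  for (let i = 0; i < positions.length; i++) {
    positions[i] += velocities[i] * dt;
  }
}

// Seed some data and run two "frames".
velocities.fill(1.0);
step(0.016);
step(0.016);
console.log(positions[0]); // ~0.032, with no per-frame allocations
```

Flat typed arrays also map directly onto wasm linear memory, which is why this style transfers well to a WebAssembly backend.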




Well, WebAssembly is another thing I'd question the usefulness of. It will finally bring threading to JS, which is nice, but the rest of it is more a show of technical infrastructure for its own sake than something that helps apps with the problems they actually have.


I would argue that having to write apps in JS, or transpiling to the JS runtime, is a problem for many people.


Sure, but you could imagine something more like .NET or JVM bytecode instead, which would be a more practical target for transpiled web apps than webasm.

Instead we ended up with a stack machine & sbrk.


.NET and the JVM are in no way more suitable for compiling C/C++ and the like than wasm is.

The JVM doesn't even have unsigned integers!


Well of course, but I think running C/C++ on the web is a completely nonsensical waste of time. It's not a useful market to target, and critically it completely ignores the needs and problems of the current market.


.NET and the JVM are both stack machines. The .NET VM is the only one of the two with features that cater to C/C++-ish languages (i.e. linear memory access via instructions). WebAssembly's memory model (including its sbrk-style allocation) is likely what a sandboxed, linear-memory-focused .NET VM would have ended up with anyway, in the interest of minimizing the performance impact of address-range checking.
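To make the sbrk analogy concrete: wasm linear memory is one contiguous, resizable byte array, grown in whole 64 KiB pages, and every access is bounds-checked against that single region. A quick illustration from the JS side of the standard `WebAssembly.Memory` API:

```javascript
// Wasm linear memory is a single contiguous, resizable byte array.
// grow() extends it by whole 64 KiB pages, much like sbrk extends a heap.
const PAGE = 64 * 1024;
const memory = new WebAssembly.Memory({ initial: 1, maximum: 4 });

const before = memory.buffer.byteLength; // 1 page = 65536 bytes
const oldPages = memory.grow(2);         // returns the previous size in pages
const after = memory.buffer.byteLength;  // now 3 pages

console.log(before / PAGE, oldPages, after / PAGE); // 1 1 3
```

Because all of a module's loads and stores hit this one region, the engine only has to check offsets against a single length (or reserve guard pages), which is the cheap address-range checking the comment alludes to.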


But in what sense would an extra VM layer be more practical? Maybe this is where we disagree. I'm not fond of VMs, especially those two.

C++ or Rust -> wasm bytecode is great. Soon we'll be writing directly to command buffers, no fuss.


If you want to do C++/Rust direct to command buffers why on earth would you bother with the pile of overhead that is a modern web browser?

But ~nobody wants to build UIs like that anyway, so what's your target audience?


Games and other 3D applications? Getting people in-game (say a lazily loaded demo with slightly less perf) with one click is huge.


No, it isn't. Games are already served by consoles first (which obviously won't run webasm) and Steam second. There's no market there, and it's already one click to launch Steam to the game in question, where it will then download through a medium actually suited to handling the downloading & updating of a game's assets (which, even for a demo, is in the gigabyte range; you aren't lazy-loading that).

As for 3D applications, what 3D applications? Do you really think Maya is going to be ported to a browser? Why would they bother? Why would they restrict themselves like that?

The web has no advantages in this space, and the needs of those markets are already being served by superior technology and infrastructure.


Have you ever played a flash game? Slither.io? There is a huge market there. Steam is not one click away from a tweet or a Facebook post.

Please have a nice weekend.


Flash games are dead, and even Facebook games are largely a thing of the past now that Facebook is primarily used on mobile. The casual audience is on their phones in app stores, not on the web anymore.

I assure you those casual game companies are not going to want to go anywhere near Vulkan or similar, though, and they are generally fine with the performance scripting languages already give them (hence why they were in Flash instead of Java applets).

They want strong 2D graphics capabilities primarily, which is largely an ignored category. <canvas> has a 2D context, but it's pretty crappy.


Most web-based games will probably remain 2D (for multiple reasons, including development cost and accessibility), but 3D games on the web are certainly performant enough.

To give one example, BananaBread is a tech demo showing off what can be done with asm.js and WebGL (performance is likely to get even better with WASM):

http://kripken.github.io/misc-js-benchmarks/banana/index.htm...

As for whether a VM has overhead compared to native, of course it does, but you're not going to convince anyone of your viewpoint by stating something that's already obvious to us all. The performance of web-based games doesn't have to be the best; it just has to be 'good enough'. I personally don't think we'll see fast adoption of WebVR, but I'm glad it exists, as it helps to have it as a goal for improving 3D performance, which will almost certainly reduce the performance overhead of browsers (low latency being a key component of a good VR experience).


WebAssembly doesn't appear to be a way for threading to come to JS, unless you count adding a JS API that lets you invoke multithreaded code in a totally different format and memory/execution model as "threading coming to JS". I don't think anyone is itching to move V8/SpiderMonkey/etc. to concurrent garbage collection, so concurrent memory access in multithreaded JS code will likely be limited to SharedArrayBuffer/WebAssembly for the foreseeable future.
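Concretely, the only shared-mutable-memory primitive plain JS gets is SharedArrayBuffer together with the Atomics API. Normally the buffer would be handed to a Worker via postMessage; the sketch below runs the same Atomics calls in a single thread just to show the API surface:

```javascript
// SharedArrayBuffer is the one piece of shared mutable state plain JS gets.
// In real use this buffer is postMessage'd to a Worker; here the Atomics
// operations are demonstrated single-threaded for brevity.
const sab = new SharedArrayBuffer(4);
const shared = new Int32Array(sab);

Atomics.store(shared, 0, 42);         // sequentially consistent write
const seen = Atomics.load(shared, 0); // sequentially consistent read
Atomics.add(shared, 0, 1);            // atomic read-modify-write

console.log(seen, Atomics.load(shared, 0)); // 42 43
```

Note there is no shared access to ordinary JS objects here, which is exactly why shared-memory concurrency can exist without making the JS heap (and its GC) concurrent.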



