The Risks of WebAssembly (fermyon.com)
66 points by 0xmohit on Sept 8, 2022 | 38 comments



This should be linked to https://www.fermyon.com/blog/risks-of-webassembly instead.


> But we have been noticing an unfortunate trend that some developers have chosen to work contrary to the component model, creating strong links to their own host runtimes. Going this route results in platform lock-in on one hand, and the pointless re-authoring of the same code (tooled for slightly different hosts) on the other.

Oh dear lord. This is such a twisted narrative. The Component Model is a proposal supported by only one of the roughly 20 runtimes used in production, and by none of the browser engines (don't look for it in V8, SpiderMonkey, JavaScriptCore, nor in Wazero, Wasmer, Wizard Engine, WAVM, ...). So if there's anything that actually locks you in... it's probably that!

Even when Wasmer tried to add support for it to wit-bindgen (a precursor of the Component Model), the same people from the Bytecode Alliance who are working on the Component Model proposal rejected it [1]. Do they really want collaboration and not lock-in? One begins to wonder.

It gets even funnier when you continue reading the article and realize that all the people on the WasmDay committee, which decides what gets in or out of their CNCF conference, are also part of the Bytecode Alliance. The only competition they "cheer" is the kind that comes from their approved friends.

I would highly encourage everyone to read some of the practices of the Bytecode Alliance that the AssemblyScript community has documented; it might be eye-opening! [2]

[1] https://github.com/bytecodealliance/wit-bindgen/issues/306

[2] https://www.assemblyscript.org/standards-objections.html


I am skeptical of WebAssembly and the component model myself, but that AssemblyScript page seems alarmist, and, as can be seen in several issues linked from that page, dcodeIO (from the AssemblyScript community) was definitely not behaving in good faith: https://github.com/w3ctag/design-principles/issues/322

It seems most of the complaints are that selecting UTF-8 as a primary string encoding is "against the practices of the web", which seems patently absurd, but also, just plain boring. I was definitely expecting more along the lines of incompatible object models integrating into the component model, rather than mass-tagging people over string encodings.


> dcodeIO (from the AssemblyScript community) was definitely not behaving in good faith

I certainly disagree with that take. I don't see any bad faith; I only see one person being frustrated because his concerns were being swept under the rug as "not important". I would recommend reading dcode's blog to learn more about it [1]. There are always things to improve in how we communicate, of course, but those should not be used as a weapon to attack or dismiss someone, rather as a means to improve.

It's also important to note that a few months later the Wasm committee realized the mistake and actually tried to address it with the Wasm Stringref proposal [2].

The way I see it, the issue is not UTF-8 vs UTF-16 but how valid concerns were completely dismissed in what's supposed to be an open community.

[1] https://dcode.io/#webassembly

[2] https://github.com/WebAssembly/stringref


There are valid concerns to be had about string encodings, but I do not think a suggestion to "encourage UTF-8 for new formats and APIs" can be said to be "literally breaking the Web Platform", nor does it require tagging 17 people, most of them unrelated, just because you happen to disagree with the resolution of the committee.


> I do not think a suggestion to "encourage UTF-8 for new formats and APIs" can be said to be "literally breaking the Web Platform"

Perhaps you don’t see it that way, but we have to respect when others do.

From the way I understand things, because JS strings (DOMString [1]) are already represented as UTF-16/WTF-16 internally, making Component Model strings UTF-8-only puts certain languages at a disadvantage against others, especially regarding speed (they need to do extra work when processing strings in order to be fully compliant, while the UTF-8 languages don't). This was the case for AssemblyScript, JavaScript, Java and other JVM-related languages (Kotlin, Scala, ...).

Specifically for a language such as AssemblyScript, whose aim is to be completely compliant with JS, tiny, and fast to execute, I can certainly understand why that was a concern and why being dismissed could cause some frustration.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Right, but it's pretty clear that UTF-8 is the better, de facto choice. JS strings only use UTF-16 for historical reasons. If designed today they would definitely use UTF-8.

Since some languages use UTF-8 and some use UTF-16, you're always going to have some languages doing conversions, so it makes sense to just pick the best option, surely?


Except that picking the more restrictive option prevents two components that use the same, less restrictive semantics from communicating with each other safely (without throwing exceptions or silently mutating data), e.g. Java <-> Java, AssemblyScript <-> JavaScript. To illustrate: if one were to design a Component Model for Java alone, restricting strings to UTF-8 would make that Component Model a hazard for Java. The same effect happens in a multi-language Component Model, where some languages then work and others don't. Hence "pick the best option" falls short. The argument in all these questionably stonewalled discussions is basically to allow these languages and use cases to exist, which could be as trivial as making UTF-8 the default, if the WASI folks so wish, but also having a Boolean flag for "don't eagerly mutate on WTF-16 pass-through". Yet, even though trivial and rather obvious, this has been fought relentlessly since 2017, and surely one has to wonder why the vehemence.


UTF-8 is not "more restrictive". I'm not sure what you're talking about.


The value spaces are in fact asymmetric. In Unicode jargon, UTF-8 encodes a "list of Unicode scalar values", while WTF-16, i.e. UTF-16 as seen in practice, encodes a "list of Unicode code points" (with surrogate pairs decoded, and lone surrogates permitted). Unicode code points are a superset of Unicode scalar values (scalar values being "code points except surrogate code points"), hence conversion from WTF-16 to UTF-8 is lossy. In WTF-16 languages this is not a problem, because the system is designed for WTF-16, but it becomes one if the value space is restricted further, here to UTF-8.

That's why, in such mixed systems, https://simonsapin.github.io/wtf-8/ is typically used. Not UTF-8. And Wasm is such a mixed system.
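A concrete way to see the asymmetry, as a minimal sketch runnable in any modern JS/TS runtime that exposes TextEncoder/TextDecoder:

    // A JS string may contain a lone (unpaired) surrogate: valid WTF-16,
    // but not representable in UTF-8.
    const lone = "ab\uD800cd";                    // high surrogate with no matching low surrogate

    // Forcing it through UTF-8 is lossy: TextEncoder substitutes U+FFFD.
    const bytes = new TextEncoder().encode(lone);
    const back = new TextDecoder().decode(bytes);

    console.log(back === lone);                   // false
    console.log(back);                            // "ab\uFFFDcd" (replacement character)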


Conversions like this are basically as fast as a memcpy, which will always be required in this system. Did either you or dcode try actually profiling the extra time spent?


Despite that speed is not the critical concern (asymmetric value spaces are), UTF-8 must guard against invalid byte sequences, while WTF-16, where all possible values are valid as long as byte length is a multiple of 2, does not. In practice, the guard in UTF-8 is part of the copy loop over a boundary, typically from untyped memory to untyped memory, while WTF-16 can indeed just memcpy. Unless SIMD can be utilized, the difference is about that of a loop over a load into a branch vs. a memcpy, in case this helps to quantify. Expect an additional final memcpy if the UTF-8 copy should fail before storing anything to the receiver's memory.
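Roughly what that difference looks like, as a sketch over two Uint8Array "linear memories" (the UTF-8 check here only validates lead/continuation byte structure; a real validator also rejects overlong encodings and surrogate code points):

    // WTF-16 pass-through: any byte sequence of even length is valid, so a plain copy suffices.
    function copyWtf16(src: Uint8Array, dst: Uint8Array): void {
      dst.set(src);                               // effectively a memcpy
    }

    // UTF-8 pass-through: the copy loop must validate byte sequences as it goes.
    function copyUtf8(src: Uint8Array, dst: Uint8Array): boolean {
      let i = 0;
      while (i < src.length) {
        const b = src[i];
        let len = 1;
        if (b >= 0x80) {
          if ((b & 0xe0) === 0xc0) len = 2;
          else if ((b & 0xf0) === 0xe0) len = 3;
          else if ((b & 0xf8) === 0xf0) len = 4;
          else return false;                      // invalid lead byte
          for (let k = 1; k < len; k++) {
            if (i + k >= src.length || (src[i + k] & 0xc0) !== 0x80) return false;
          }
        }
        for (let k = 0; k < len; k++) dst[i + k] = src[i + k];
        i += len;
      }
      return true;
    }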


Perhaps this is missing some necessary context. Note that the threads happened before a resolution, and were created just a few days before being used as an argument in Wasm. There was and still is no such design principle, because it doesn't make sense. Similarly, the arguments in the referenced thread are nonsense: JSON and CSS modules are files, which have different requirements than API calls; importScripts, likewise, loads source code. Also note that JSON actually preserves idiomatic JS strings in data (APIs) through escape sequences, even though it stores as UTF-8 (files). Preserving in JSON makes sense, since JSON.stringify/parse are sometimes used for things like synchronous IPC, where maintaining integrity is critical. The Component Model should do the same, but refuses. The others are networking APIs, where UTF-8 is common; these are asynchronous and typically untrusted, the protocols mandate UTF-8, and not maintaining synchronous state is much less problematic there. However, mutating string data as a side effect of a normal function call is nothing less than a hazard, since some strings then seemingly randomly won't compare equal anymore after a call, and so on. WTF-16 (what JS effectively specifies) -> UTF-8 is lossy.

The people in the threads know that very well, but keep bringing up the same already-debunked arguments over and over again nonetheless to get their way. The thread even suggests encouraging UTF-8 for new JavaScript APIs as well, which plays into the desires on the Wasm end yet is well known to be impossible to pull off in JS. Applying this to JS conflicts with the ECMAScript specification, because JS has the same hard backward-compatibility constraints as Java, Dart, Kotlin, C# and other languages that evolved from UCS-2 to WTF-16. None of these languages can change their string semantics, in particular not to something akin to UTF-8, since it's semantically more restrictive and hence would break existing and, arguably, all future code using idiomatic APIs like substring without being aware of potential mutation.

It's subtle, I admit, but given that the arguments are factually invalid, the threads can only be one of two things: incompetence or dishonesty. As always in political contexts, the mere mortal wonders which is better. Given foregoing discussions, where the same people participated, I would rule out incompetence. Go there and ask questions, and always the same gatekeepers show up, simulating responses in good faith. Killing you with kindness, or however that's called. Dare to be unimpressed and follow up, and someone quick-draws a CoC. That's why I decided to try something new, pinging various people from the TAG in the hope of finally getting eyes on this behavior from someone knowledgeable, which obviously failed as well. Not sure if that helps, but that's some of the background :)
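To make the JSON point concrete (a small sketch, assuming an engine with well-formed JSON.stringify, i.e. ES2019 or later):

    const s = "a\uD800b";                         // contains a lone surrogate
    const json = JSON.stringify(s);               // '"a\\ud800b"' -- escaped, not mutated
    const round = JSON.parse(json);
    console.log(round === s);                     // true: integrity preserved across the boundary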

FWIW, here's the presentation I wanted to give to the Wasm CG that explains the details and pitfalls in an easier to digest way: https://www.youtube.com/watch?v=Ri2NMnSQo4o


It is possible to specify new APIs that say "UTF-8 only; if you pass an unpaired surrogate, you will get an error", and many such APIs already behave that way. They are, on the whole, not a major problem in practice. They are not fundamentally incompatible with JavaScript/Java/Dart/Kotlin/C#; it just means that if you wish to call an API that only accepts UTF-8, you must make sure your inputs are valid UTF-8. The only lossy case is for invalid Unicode strings, likely generated by accident. It is not dishonest to want to add APIs that behave that way.
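As a sketch of what "make sure your inputs are valid UTF-8" can look like on the caller side (assuming a runtime with the ES2024 well-formed string methods; callUtf8OnlyApi and api are made-up names):

    // Guard a UTF-8-only boundary: reject or sanitize strings with unpaired surrogates.
    function callUtf8OnlyApi(s: string, api: (s: string) => void): void {
      if (!s.isWellFormed()) {
        // Option A: fail loudly instead of silently mutating data.
        throw new TypeError("string contains unpaired surrogates");
        // Option B: sanitize explicitly, making the lossy step visible: api(s.toWellFormed());
      }
      api(s);
    }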

It's fine to disagree with me or the committee, but your grand gesturing and your overstatements about how tightening a single bolt on strings will break web compatibility forever are exhausting to listen to, especially when you claim a large-scale political conspiracy and use phrases like "fundamentally incompatible with JavaScript". I don't see any of it.

Your behavior in those threads is absurd, unnecessary, and alarmist, and I really don't have any sympathy for you. Even this reply doubles down on the exaggeration.

I think large parts of WebAssembly are mismanaged. I have major complaints about the velocity, instability, and web APIs; I spend a lot of time in them, I'm grumpy, and I'm not usually one to go to bat for any of this. Even within WebAssembly/JavaScript/WASI interop, there are 20 things I'd put on the list ahead of this. If I have any advice for you, it's to pick a new battle, because this is maybe the highest ratio of complaint to impact I've ever seen. You lost; just move on.


Allow me to focus on the technical arguments, which I think fall short. Sure, one can go and make a second string type, or a bunch of throwing APIs. I'd question whether this actually improves a language in a tangible way, or whether anyone would want it for a rational reason not induced from the outside. When not doing so, it's incompatible in the plain sense of the word. Whether that incompatibility matters depends, I guess. For someone without a respective use case, perhaps not, but for someone with exactly that use case, say hashing substrings and then discovering unexpected collisions, or streaming 1K chunks of strings over component boundaries and then discovering mojibake after concatenating, it might very well be significant, or even expensive. I mean, there are good reasons all those languages try the best they can to prevent that in their native habitats.

What amazes me is that all this came to be because someone formulated a desire that could easily have been fulfilled in addition, with a boolean flag for W/UTF, but refuses to include such a trivial compromise, which surprisingly has more weight than any evidence, or precedents like WebIDL, JSON, or the various language standards. I find this highly concerning, since it conflicts with my understanding of responsible engineering. It also conflicts with Wasm's own communication, which literally states that Wasm executes in the same semantic universe as JavaScript and maintains backwards compatibility with the Web.

Perhaps, if there is something fruitful to spin a narrative around, it's that these decisions undermine the exact value proposition of AssemblyScript, which was supposed to be used in tandem with and closely alongside JavaScript, and which now becomes risky at the fundamental level of the most prominent higher-level data type, strings. Plus, of course, when two AssemblyScript components communicate. That makes these string decisions particularly unfortunate for me personally, after having spent all that time and effort working towards Wasm's goals in good faith, which perhaps explains my persistence on the matter. Quite a dilemma.


Sure, there are technical arguments for this approach, but there are also technical arguments for "the strings in our strings API should really be valid Unicode strings". I have no horse in this race and I have no preference, but I do see both options as completely valid.

The problem is when you say that people who prefer the latter approach "can only be either dishonest, or incompetent". Putting it kindly, you're basically only making enemies at that point, and you seem unwilling to consider other points of view, at best. You seem absolutely baffled as to why your tone, phrasing, and language are making others uncomfortable, even as you continue to insult the very people you're trying to influence.

I have never met you before this conversation, and I came away with a very negative impression. There are reasons you aren't being listened to, and they are problems with you and your behavior, not grand conspiracies.

There are battles worth staking your entire professional reputation over -- the GC repo is full of people doing that -- but this is definitely not one of them.


Well, I tried. Anyway, even if I were the abhorrent monster you keep painting in your almost exclusively ad hominem argumentation, while accusing me of what you are undoubtedly guilty of yourself, I'd argue that none of this justifies plainly ignoring technical concerns in a standardization effort. This exchange is an almost perfect reflection of the practices prevalent in the Wasm CG that made the Component Model, and likely other things elsewhere, possible almost uncontested. And surely this is deliberate abuse, and I hope people can see that; coincidentally, that's exactly my critique. I hope nobody is surprised that being at the receiving end of this, despite your best efforts, for years, is an extraordinarily frustrating experience, and that this is exactly the point.


> It seems most of the complaints are that selecting UTF-8 as a primary string encoding is "against the practices of the web"

And by "against the practices of the web" they really mean against the practices of Java/JavaScript. But I thought that was the whole point of WASM?


Ever since working at Nokia, I've been highly skeptical of standardization bodies being able to solve technical problems in a satisfying way. Companies use standardization bodies to play poker with their competitors. It's a complex game that involves patents, getting your pet proprietary features rubber-stamped, and making sure you retain some inherent advantage over competitors. Above all, it's a very slow process. I saw Nokia play this game... and lose.

Apple and Google ignored several of the established standards to basically raise the bar and sideline the entrenched incumbents bickering over arcane details in those standards. Apple ignored MMS and most other 3G features with the original iPhone, and they never became features any Apple user cared about. Google ignored things like J2ME. Both also ignored most of what operators were "standardizing" to prevent the internet from being usable on their networks. All of that got shelved once Apple and Google allowed you to just use browser- and internet-based alternatives.

Operators got demoted to routing IP packets around; the very thing they were trying to prevent by standardizing things that they controlled. The whole legacy business of charging per call minute or text message is completely dead at this point. They were trying to standardize that so they could keep the gravy train going. It failed.

Not all standards are that bad, of course. But resolving complex technical issues through a standardization body results in complicated solutions that are years or decades late to market and aren't necessarily optimal.

Standards bodies are very good at standardizing and normalizing the status quo, though. Build something, get people to use it, and then standardize it so people can be assured of interoperability. This typically results in pragmatic, standardized solutions. The more implementations are out there, the better. A lot of standards are so-called industry standards, where a company or group of companies agree on doing things a certain way and then formalize their commitment by providing a specification.

HTML5 is a good example. XHTML 1.0 is what happens when standards bodies go wrong. HTML5 happened because the W3C lost the plot at some point and got sucked down a path of writing and dictating increasingly less relevant standards, to the point that browser builders took it upon themselves to write a proper standard for what browsers were actually doing. Nokia was all over the W3C (I knew some people involved in various working groups). Since HTML5 happened, things run a lot smoother on the web.

WASM came out of that community and has been moving relatively quickly because of that: from cute demos to a not-so-visible but increasingly core part of the modern web in the space of a few years. Garbage collection to enable deep integration with e.g. the DOM in browsers, sockets to be able to do IO, and threads to make better use of modern hardware are all things that are pretty fundamental to get right, and they are happening. WASM isn't being standardized in a void. There are experimental flags in Chrome that you can turn on right now to explore the progress on these features. Firefox is not far behind.

However, there is a worrying and growing number of companies asserting themselves in the Bytecode Alliance. Too many conflicts of interest and political issues. Decision making must be getting quite hard.

The way out is to move ahead with stuff that works and ask for forgiveness rather than permission. If enough people use it and it works, it will get standardized. Standardization comes at the end of that process, not at the beginning.


> Since HTML5 happened, things run a lot smoother on the web

Eh, that's sure one way to describe the situation. Another one being that it's only Chrome left. Sure, W3C completely and utterly fucked up, but I thought we were beyond regurgitating 2009ish "HTML5 rocks" propaganda and genesis myths.


I use Firefox. Safari is used exclusively on iOS and is pretty dominant on macOS. HTML5 works fine across those. The web app we develop has occasional issues with Safari that we need to look into. Things not working in both Firefox and Chrome are extremely rare for us and usually fixed pretty easily in the rare case we notice them.

I remember having to deal with XHTML, HTML 4, Netscape, Opera, Internet Explorer, etc. A world of pain.

Even by 2009, things had already massively improved (with the exception of IE, of course). The WHATWG was founded in 2004, and that's when work started on HTML5. That was before Chrome was a thing: Chrome didn't happen until 2008 and would not have been possible without the decent specifications provided by the WHATWG. Apple and Mozilla were working together on that one.

Chrome happened because of HTML5, not the other way around.


> The web app we develop has occasional issues with Safari that we need to look into. Things not working in both Firefox and Chrome are extremely rare for us and usually fixed pretty easily in the rare case we notice them.

FWIW, that mirrors my experience entirely. I primarily develop with FF and test with Chrome/Chromium as a sanity check. Safari, once the app is out in the wild, is always the outlier (and I don't use it, so I can't fix its shortcomings except via trial and error).


From my recent experience with WebAssembly, developing a cryptographic library for Node.js and the browser [1], I have to say that once someone needs to use memory allocation, typed arrays passed from JS to WASM (I did not manage to make the opposite direction work), etc., it quickly becomes obvious that there is a lack of documentation and a build-system fragmentation that only hurts community growth, IMO. If I were less motivated to finish the undertaking, I would have just given up and gone with libsodium-wrappers or tweetnacljs.

I started with clang targeting wasm32-unknown-unknown-wasm as my build setup, but this just did not work with malloc/free unless I targeted WASI, and if I targeted WASI I would not be able to run the module in the browser except with a polyfill that was hard to set up with a C/TS stack. I ended up with Emscripten because it imports the module with all the right helper functions, but there I was getting memory errors in debug mode but not in production. I needed to pass the Uint8Arrays from JS to WASM in a very specific way (through HEAP8), otherwise the pointers were not working properly, but I was not able to find this in the documentation. I only found out from a Stack Overflow comment somewhere after two weeks of brain melting (why would Uint8Array(memory.buffer, offset, len).byteOffset not work?).
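For anyone hitting the same wall, the pattern that ended up working looks roughly like this (a sketch; _process and its signature are hypothetical, and _malloc/_free/HEAPU8 assume they were exported in the Emscripten build settings):

    declare const Module: any;  // the Emscripten runtime object generated by emcc

    function callWasm(input: Uint8Array, outLen: number): Uint8Array {
      // Copy the JS bytes into the module's linear memory via HEAPU8, rather than
      // holding on to new Uint8Array(memory.buffer, ...) views, which can go stale
      // if the WASM memory grows.
      const inPtr = Module._malloc(input.length);
      Module.HEAPU8.set(input, inPtr);

      // Call the exported C function (hypothetical name and signature).
      const outPtr = Module._malloc(outLen);
      Module._process(inPtr, input.length, outPtr);

      // Copy the result back out before freeing the WASM-side buffers.
      const output = Module.HEAPU8.slice(outPtr, outPtr + outLen);
      Module._free(inPtr);
      Module._free(outPtr);
      return output;
    }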

After I compiled the project successfully and the JS was giving the correct results, I decided to compile with the -s SINGLE_FILE flag in order to make the package as portable as possible, but this increased the size significantly because it encodes the bytes as base64, which is then turned back into a WASM module from JS. A package manager for a compiled language that outputs cross-environment JS and solves these problems automagically would be, IMO again, a game changer for the ecosystem. I believe this is what AssemblyScript tries to achieve, but I honestly could not make it work for my project after experimenting with it for one or two days.

I get that a lot of the problems come from the incompatibility of browser and Node.js APIs and the different agendas of the various stakeholders, but I would very much like to see these differences reconciled so that we can have a good developer experience for cross-platform WASM modules, which would lead to more high-performance components for JS, a programming language that affects so many people.

[1] https://github.com/deliberative/crypto


Does anyone have a link to the WebAssembly component model, and is there any alternative?

Maybe there can be some kind of universal device driver plugin or something.

It's weird to me that there wasn't initially an organized effort to escape the web browser by coming up with some sort of UI system or ways to integrate other devices, etc. I don't know if that has changed.



What does one benefit from compiling and running microservices as WASM as opposed to running them natively in, say, Rust?


- Strong isolation constructs. With WebAssembly, there's no access to the host system by default (see the sketch below this list). Even something as simple as reading the system clock requires exposing the appropriate WASI API to the WASM module.

- The ability to snapshot and later restore running state.

- A cross-platform and language-agnostic way to distribute plugins (e.g. Envoy proxy allows loading WebAssembly modules as extensions)
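For what it's worth, a minimal sketch of the first point (deny-by-default) using the plain JS WebAssembly API; "service.wasm" and the env.now import are made up:

    // A WASM module only gets the capabilities you pass in as imports.
    async function loadSandboxed(url: string): Promise<WebAssembly.Exports> {
      const bytes = await fetch(url).then(r => r.arrayBuffer());
      // Empty import object: no WASI, no host functions. The module can only compute
      // over its own linear memory. Granting a capability means adding it explicitly,
      // e.g. { env: { now: () => Date.now() } } to expose a clock.
      const { instance } = await WebAssembly.instantiate(bytes, {});
      return instance.exports;  // callable, but cannot reach the clock, filesystem, or network
    }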


A snapshot of the running state of a bytecode-based execution engine... where did I hear that before?

https://en.wikipedia.org/wiki/Jini

https://dl.acm.org/doi/10.5555/867785


You can now build Jini and agent systems on top of the web!


With WASM you get the sandbox for free. This makes it safer for a service to accept arbitrary file uploads that get run on their platform, without adding the overhead of virtualisation or containerisation.

Another thing WASM adds is cross-language calling capabilities, i.e. calling Rust code from Go in a standard way. All methods are hidden behind a severely limited runtime interface, so all of them have to comply with the respective standards.

This all comes at the cost of performance and efficiency just like on the regular "cloud".

On your own, you'll have a much easier time running your payload on bare metal, especially if you write code in a language that doesn't already compile to bytecode or get interpreted and JIT'ed.

In a multi-tenant system, the free sandbox means you'll have rather little work to do to safely run arbitrary payloads. If they can get enough customers, they'll be able to provide services more cheaply by leveraging their economies of scale and low-cost security mechanisms.


Is the included sandboxing really that useful? Why not use a standalone sandbox, or the OS's permission system, MAC, etc.? Especially since the biggest problem is the program corrupting itself, which will readily happen if we make C and friends more popular again for these kinds of applications.


There's an important difference in effectiveness between blacklisting APIs that were designed to be open and whitelisting APIs from within a closed system. There are tons of sandbox escapes discovered and patched each year, usually because of race conditions or the complex mechanisms that container APIs and general system APIs use to protect against each other. This is the reason symlinks require administrator permissions on Windows, for example: the security model around symlinks is very complex, and there have been tons of privilege escalation and arbitrary file read/write exploits around something as conceptually simple as a symlink. Microsoft chose to restrict them to privileged users because they consider them too difficult to properly defend against.

A WASM executable can do nothing but allocate memory and change the contents of such memory. There's no I/O, no socket state machines, no kernel communication, it's all just numbers in, numbers out. Even something "simple" like entering a string into a WASM program involves treating that string as arbitrary memory to be operated on by the WASM runtime and things like structs, pointers and classes must go through several layers of abstraction before they can be used.

From this numbers-in-numbers-out mechanism further standards are currently evolving, standards that allow exposing callbacks and other such APIs into the WASM code. There's still an abstraction layer between the two, but the runtime can now allow the code to do a bit more.

I'm not too fond of the way these APIs are turning WASM into "Java but JavaScript", but it's still good to see the security-first approach of WASM runtimes, which won't allow the code to do anything they don't provide, rather than allowing everything except for a subset of functionality. The default model of modern operating systems is "you have all permissions, except these" rather than "you have these permissions and may ask for more". Trying to use seccomp and syscall filtering on Linux is a terrible experience because it's hard to know for sure that you've set up every possible limit you can in order to neuter exploits.

To see the benefit of this approach, look at the sandboxing in mobile operating systems like Android. Earlier permissions weren't a good fit for application developers and over time they've evolved. Google has had to patch in more and more limitations to prevent malicious behaviour like tracking (except for their own tracking, because Google). They've lifted some restrictions, like the INTERNET permission being granted by default in practice, but most of their API changes have been centered around taking away application functionality and providing a more restricted alternative.

Now, I very much like the option to root my phone and do whatever the hell I want on it, but I don't want the random crap the local weather app's advertisers try to force down my throat to have too much of an impact on my phone.

Personally, I tend to install PWA versions of web apps rather than download stuff from the Play Store if I can because I don't want these websites to have that much access to my phone (and often they're just wrappers around websites anyway).

With WASM, C's problematic memory management isn't as much of a problem. Your program can crash, show weird text, do anything weird you can think of, but it can't jump to kernel mode or escape the sandbox without a very significant flaw in the runtime's whitelisting code. Redirect the control flow of a WASM program that can only serve web pages and control the files in two specific paths, and you're practically nowhere. You can probably read some files, but there isn't even a guarantee that you can upload the contents of the database anywhere, because networking is opt-in!

For most people in most cases, I would very much prefer to keep running on bare metal myself. Safe programming languages are great and much more capable of being optimized, because WASM itself is two compatibility hacks stacked on top of a JavaScript library; it'll take a while for the format to be competitive, and even then it won't compete with Rust in terms of performance.

However, the inexplicable urge to move to vendor lock-in through stuff like Amazon Lambda and the mistaken idea that every application needs the complexity of Kubernetes for some reason are undeniable. Within these contexts, I can see the benefits of what WASM people are trying to achieve.


Thanks for the detailed explanation. By OS security model I mostly meant more modern approaches like Android (especially GrapheneOS) or iOS; desktop OSs have plenty of catching up to do here.

Android, for example, runs everything as a separate user with very restricted abilities and requires IPC for any "elevated permission" operations, which are checked by a separate daemon process before being carried out on the app's behalf.

And I think heap corruption should not be taken lightly. I was thinking that you mostly meant server processes before, as in those cases a memory corruption can lead to exposing other users' data, which is a huge step back from the predominantly used Java/C#/etc. backends. The concept of well-defined failure is very important, and, for example, a Java program will never get into the kind of undefined state an "unsafe" language can.


You don't run a service in Rust; you compile, distribute, and run machine code on your CPU. Compare that to WASM, where you compile to a portable ISA and the runtime recompiles to the real ISA. The WASM virtual environment is a gazillion times simpler than a hardware context with all its attendant complexities.


Yes, abstractions are simpler, but they are also slower. It depends what you need.

(personally I am going to invest heavily in wasm)


One gets to reinvent 20 years of JVM and CLR, even more if counting mainframe/micro language environments, and sell the company to VCs as modern cloud.


Don't there need to be CPU/RAM/GPU quotas per WASM scope/tab? Or is preventing DoS with WASM out of scope for browsers?

IIRC, it's possible to check resource utilization in e.g. a browser Task Manager, but there's no way to do `nice` or `docker --cpu-quota` or `systemd-nspawn --cpu-affinity` to prevent one or more WASM tabs from DOS'ing a workstation with non-costed operations. FWIU, e.g. eWASM has opcode costs in particles/gas: https://github.com/ewasm/design/blob/master/determining_wasm...


I just hope WASM doesn't become too complicated, or at least that the complicated bits are not mandatory.



