Less "inherent vulnerabilities" than "if you're using a memory-unsafe language (like C), the protections WASM gives you aren't enough to mitigate every possible vuln you might have had in your C code."
The WASM platform does provide some really cool protections, but it's not a silver bullet; you still need to fix your C code.
> The WASM platform does provide some really cool protections, but it's not a silver bullet; you still need to fix your C code.
I mean, the way you're phrasing it, that's true of every system. IIRC seL4 had a couple of bugs where the code _and_ the formal model both got it wrong in the same way, so the proof still passed.
The guarantee wasm attempts to provide is that it's OK for me as a user to run random code in a wasm sandbox, not that the code's own internal security is somehow improved by being in a sandbox.
i.e. wasm doesn't attempt to make the developer's job any easier, other than making users more likely to be willing to run their code.
Like, I don't think my C code is any safer when I run it on a system with a hypervisor.
This paper is important. It doesn't point out anything you shouldn't already know if you're an experienced systems programmer, but it's a good reminder that WebAssembly's sandbox isn't the last word on security. You still need to be on the alert for things that can trigger XSS, which can be easier to hit when you have a memory-unsafe language like C or C++ in your stack.
There's a good point in there about how Node.js is exposed to all these vulnerabilities too.