A common approach (e.g. COBOL, Ada, Common Lisp, Haskell, Clojure, MATLAB, Julia, Kotlin) seems to be to provide two operators: one that uses truncated division and one that uses floored division. By convention, `rem` truncates and `mod` floors.
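A quick C sketch of the difference (C's own `%` is the truncating flavour; the floored variant is reconstructed by hand here):

```c
#include <stdio.h>

/* C's % is the "rem" flavour: it truncates toward zero (since C99). */
int rem_trunc(int a, int b) { return a % b; }

/* Floored "mod": the result takes the sign of the divisor. */
int mod_floor(int a, int b) {
    int r = a % b;
    if (r != 0 && ((r < 0) != (b < 0)))
        r += b;
    return r;
}

int main(void) {
    printf("rem(-7, 3) = %d\n", rem_trunc(-7, 3));  /* -1 */
    printf("mod(-7, 3) = %d\n", mod_floor(-7, 3));  /*  2 */
    printf("rem( 7,-3) = %d\n", rem_trunc(7, -3));  /*  1 */
    printf("mod( 7,-3) = %d\n", mod_floor(7, -3));  /* -2 */
    return 0;
}
```

The short version: the truncating remainder takes its sign from the dividend, the floored one from the divisor.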
(it's actually a reference to the Parable of the Pearl, but that name clashed with PEARL, a real-time programming language from the 70s developed in Germany)
For example, let's assume you have a graphics editor running in the browser that stores files in the cloud. If it uses a vulnerable C library to decode image data, an attacker might be able to play havoc with your files despite the sandbox never technically having been breached.
This can be mitigated by either using a safe language or having the decoder run in an isolated wasm instance. Either way, you have to design your application with these considerations in mind and can't just take arbitrary, potentially vulnerable applications, compile them to wasm, and be done with it.
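To make that concrete, here's a hypothetical C sketch (all names invented) of the kind of bug that never escapes the sandbox but still lets an attacker tamper with your data: an unchecked length in the decoder writes past its buffer into adjacent application data in the same linear memory.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout inside one wasm instance's linear memory:
 * the decoder's scratch buffer sits right next to the document
 * the editor is currently working on. */
static uint8_t decode_scratch[256];
static uint8_t open_document[4096];   /* the user's file, in memory */

/* Hypothetical vulnerable decoder: trusts a length field taken
 * straight from the (attacker-supplied) image header. */
void decode_chunk(const uint8_t *chunk, uint32_t claimed_len) {
    /* No bounds check: a claimed_len > 256 writes past decode_scratch
     * and into open_document. The sandbox is never escaped -- every
     * byte written stays inside this instance's linear memory -- but
     * the file that later gets saved back to the cloud now contains
     * whatever the attacker put in the oversized chunk. */
    memcpy(decode_scratch, chunk, claimed_len);
}
```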
The worst that can happen is that the image tool saves back a corrupted image. But that's also possible with a buggy (but memory-safe) Rust image loader/saver; memory safety doesn't automatically fix all classes of bugs.
Apart from that, it would be quite a feat to use internal memory corruption for anything useful in WASM, because both the code and the call stack live outside linear memory and are not accessible from within the sandbox (e.g. tricks like return-oriented programming are not possible in WASM).
Thanks for the link, very interesting! But TBF: the 'host program' has to be written in a very specific way to allow that rogue JavaScript execution; it's very similar to allowing an SQL injection to happen.
I also wonder why stack canaries wouldn't work on WASM, since the compiler creates stack frames on the data-only stack just the same (but maybe Clang's `-fstack-protector` doesn't work for some reason in WASM; I'll actually need to check that).
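For reference, a sketch of how one might check (whether the flag is actually honoured for the wasm32 target is exactly the open question):

```c
/* canary_test.c -- does -fstack-protector emit canaries for wasm?
 *
 * Hypothetical check, to be verified:
 *
 *   clang --target=wasm32-wasi -O0 -fstack-protector-strong \
 *         -c canary_test.c -o canary_test.o
 *
 * then look for references to __stack_chk_guard / __stack_chk_fail
 * in the output (e.g. via llvm-objdump). If they show up, canaries
 * are being placed on the in-memory data stack as usual.
 */
char copy_name(const char *src) {
    char buf[16];                  /* classic stack-smashing target */
    char *d = buf;
    while ((*d++ = *src++)) { }    /* unbounded copy, on purpose */
    return buf[0];
}
```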
In order to be useful, a sandboxed program needs to communicate with the environment (the equivalent of system calls). If you can corrupt internal state, you can control the arguments to those calls, which may have security implications.
For example, if you corrupt a program that's allowed to use web sockets, you'll be able to port scan the user's local network.
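A hypothetical C sketch of what that could look like (the host import and all names are invented, standing in for whatever API the embedder actually exposes):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical host import: the only capability this module was granted. */
extern int host_ws_connect(const char *host, uint16_t port);

/* Two adjacent globals in linear memory. */
static char chat_buffer[64];
static char ws_host[32] = "chat.example.com";

int handle_message(const char *msg, uint32_t len) {
    /* Missing bounds check: a long message overflows chat_buffer and
     * overwrites ws_host. The next reconnect then goes wherever the
     * attacker wrote, e.g. "192.168.1.1" -- and by iterating over
     * addresses and ports and timing the failures, the module can map
     * out the user's local network without ever leaving the sandbox. */
    memcpy(chat_buffer, msg, len);
    return host_ws_connect(ws_host, 443);
}
```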
If that actually works in a browser wasm environment, then it's also possible from JavaScript, which is a memory-safe language (i.e. either the sandbox works or it doesn't, and that also includes the external APIs).
Note that in the context of differential geometry, the name in the denominator is not associated with the function, but with a coordinate system on its domain:
Let M be a differentiable manifold, e.g. M = ℝ², and φ a chart, e.g. Cartesian coordinates φ: p ↦ (x(p), y(p)) where x, y: M → ℝ. Then ∂/∂x denotes the holonomic vector field tangent to the coordinate lines t ↦ φ⁻¹(x(p) + t, y(p)) through any p ∈ M.
It is convenient to identify vectors and directional derivatives (this is in fact one possible way to define tangent vectors on manifolds), which for a function f: M → ℝ yields
(∂f/∂x)(p) = ∂/∂x|ₚ f = lim_{h → 0} ( f(φ⁻¹(x(p) + h, y(p))) - f(p) ) / h
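As a concrete sanity check, pick an arbitrary function in the Cartesian chart above, say f = x²y; the definition then reduces to the familiar partial derivative:

```latex
% Illustrative computation with the chart \varphi = (x, y) from above
% and the arbitrarily chosen function f = x^2 y.
\[
  \left(\frac{\partial f}{\partial x}\right)(p)
    = \lim_{h \to 0} \frac{f\bigl(\varphi^{-1}(x(p)+h,\, y(p))\bigr) - f(p)}{h}
    = \lim_{h \to 0} \frac{(x(p)+h)^2\, y(p) - x(p)^2\, y(p)}{h}
    = 2\, x(p)\, y(p).
\]
```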
This is a good point. Noting that "d/dx" is a vector (field) is an antidote to the ideas that vectors are little arrows, or "a magnitude and a direction", or an ordered list with numbers in the slots. Vectors can be thought of in lots of ways, and here they're thought of as operators on functions.
That looks like you have to load up a local file containing an exploit, use a PNG library that isn't used by major software and that doesn't check for issues with the PNG file (libraries used by major software already have to deal with malicious files), and the end result is that it runs JavaScript, if JavaScript is able to be run from wasm in that context.
It is still worth looking at and is actual information, so I appreciate that.
Don't focus on the specific exploit; it's a general issue:
In order to be useful, your wasm application will likely have to be able to make system calls, or whatever their equivalent might be in your particular host environment. If you can corrupt internal state, you can control the arguments to these calls. The severity of the issue will depend on what your application is allowed to do: if all it has access to is some virtual file system, the host will still be safe. But if that virtual file system contains sensitive data, the results may nevertheless be catastrophic if, say, the application can also request resources over HTTP.
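A hypothetical sketch of that last scenario (invented host imports): the module only ever uses its two granted capabilities, but a corrupted URL turns them into an exfiltration channel.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical host imports -- the two capabilities this module has.
 * (Invented signatures, standing in for the embedder's actual API.) */
extern int vfs_read(const char *path, char *buf, uint32_t cap);
extern int http_post(const char *url, const char *body, uint32_t len);

static char report_buf[128];
static char report_url[64] = "https://telemetry.example.com/upload";

void submit_report(const char *user_notes, uint32_t len) {
    /* Overflow: user_notes longer than 128 bytes spills into report_url.
     * The module still only does what it is allowed to do -- read its
     * own virtual files and make an HTTP request -- but the request now
     * goes to an attacker-chosen URL, carrying whatever sensitive data
     * the virtual file system holds. */
    memcpy(report_buf, user_notes, len);

    char secrets[256];
    int n = vfs_read("/data/credentials.json", secrets, sizeof secrets);
    if (n > 0)
        http_post(report_url, secrets, (uint32_t)n);
}
```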
As far as I know, the Mozilla Corporation is owned by the Mozilla Foundation, which is a non-profit with the mission to make the internet a better place, or something along those lines. So shouldn't we look at the boards of things like the Wikimedia Foundation or the Internet Archive instead of Bay Area CEOs for comparison?
It's also not a particularly good look when your own salary keeps rising while you're laying people off and market share keeps plummeting (in my neck of the woods, Firefox actually used to be dominant at 60%+)...
That said, I haven't looked at this in any kind of detail, and all I know of Baker is what her Wikipedia article tells me, which includes writing the Mozilla Public License, managing mozilla.org on a volunteer basis for a while and being instrumental in the creation of the Mozilla Foundation.
I wouldn't classify any of those salaries as exorbitantly large. Hell, I'm fairly certain at least some people shitting on them here earn more than that working at a for-profit.