Google first developed NaCl, and it was rejected because of exactly this problem. The main driver, I think, was that you can't predict the computing landscape: just because certain architectures are common right now doesn't mean there won't be new architectures 20 years from now, or that the architectures of today won't be dead by then.
The thing about the web, and web standards, is that web clients (browsers) are expected to be able to work with web apps, on a common "web platform." Users expect to be able to take an up-to-date web browser and point it at any arbitrary old website, and have it work. And web platform engineers agree that this is how things should be, and make sure that browsers do everything they need to do to make this possible.
But allowing a web app to deliver one of N different binaries, one per ISA from a bevy of random ISAs (with no requirement to support all ISAs, only whichever ones the site's author felt like deploying), means that the browser authors of 20 years from now, to meet users' expectation that arbitrary old web apps "just work", would need to ship N virtual-machine interpreters: one for each ISA anyone ever compiled web binaries for.
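To make that concrete, here's a hypothetical sketch (in TypeScript) of what per-ISA delivery would look like from the app's side. None of these names or APIs are real; they're stand-ins for what every native-code web app would have had to invent:

```ts
// Hypothetical per-ISA delivery; no browser API like this actually exists.
const NATIVE_BUILDS: Record<string, string> = {
  x86_64: "/bin/app.x86_64.nexe",
  aarch64: "/bin/app.aarch64.nexe",
  // riscv64? The author never built one, so those users get nothing.
};

async function loadNativeBinary(hostISA: string): Promise<ArrayBuffer> {
  const url = NATIVE_BUILDS[hostISA];
  if (!url) {
    // Every ISA missing from the table is a permanently broken site;
    // every ISA present in it is another VM browsers must ship forever.
    throw new Error(`no binary shipped for ${hostISA}`);
  }
  return (await fetch(url)).arrayBuffer();
}
```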
Basically, it'd be a not-quite-combinatorial explosion in VM/runtime implementation work, which would lower the quality any one VM/runtime could achieve (which is really bad, given that those very implementations are one of the main sources of security vulnerabilities used to attack computers today).
And even then, there'd still always be broken sites: ones that shipped native code only for a platform nobody ever bothered building browser support for, or that used an instruction available only in some ISA extension found in one particular proprietary chipset.
The web-standards people all agreed that, if you were going to have "object code for the web", it was much better to constrain the web to one "abstract machine" ISA. All the browser authors could then put all their effort behind implementing just one high-quality runtime/VM serving that abstract machine.
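And that's how it actually shook out. The sketch below uses the real WebAssembly.instantiateStreaming API; only the URL and the `add` export are placeholders, assuming a module that exports such a function:

```ts
// One abstract machine: the same .wasm binary runs in every compliant
// browser, regardless of the host CPU.
async function loadWasmModule(): Promise<number> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/bin/app.wasm"), // a single, ISA-independent artifact
    {},                     // imports the module needs from the host
  );
  // Exports are callable like ordinary JS functions.
  const add = instance.exports.add as (a: number, b: number) => number;
  return add(2, 3); // 5, assuming the module exports `add`
}
```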
The only dispute, after that, was what form the ISA would take. Google suggested LLVM IR (the bitcode format PNaCl already shipped) and was shot down. The WebAssembly group came up with its own proposal, and it got accepted, probably mostly because it was a proposal for a standalone formal standard rather than a de-facto part of something else.
Probably, any ISA that was standalone in a similar way could have been used instead of WASM. But that is a surprisingly rare quality in an ISA. (For example, neither JVM nor CLR bytecode is a standard independent of the platform/runtime it's a part of. You can't become a member of the "JVM ISA steering committee", only a member of the Java working group.)
I don't see this as a fundamental limitation.
Perhaps I should have said QEMU instead of VirtualBox.