I'm used to seeing browsers and kernels pwned left and right, but multiple VMware escapes? Is this how all VMs are nowadays, or is it something wrong with VMware in particular?
It would be interesting to compare the number of VMware escapes, before and after the entire US development team was laid off and replaced with a maintenance team (in China?).
Interesting perhaps, but useless. The number of successful attacks we see on a product is a function of the number of interested parties, the amount of collective experience (in the product) those parties can draw on, and the relative reward an attack can yield.
Assuming that the reward doesn't drop (and it certainly hasn't -- that's gone up every year as VMs have become more and more critical), the collective knowledge keeps growing and thus this will always trend towards more attacks, not fewer.
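That relation can be made concrete with a toy model (all numbers here are hypothetical, chosen only to illustrate the monotonicity argument):

```rust
// Toy model of the argument above: observed attacks scale with interested
// parties, their accumulated knowledge of the product, and the reward an
// attack yields. Every value is made up for illustration.
fn expected_attacks(parties: f64, knowledge: f64, reward: f64) -> f64 {
    parties * knowledge * reward
}

fn main() {
    let parties = 10.0;
    let reward = 1.0; // held constant -- the weakest assumption in the argument's favor
    let mut knowledge = 1.0;
    for year in [2015, 2016, 2017] {
        println!("{year}: ~{:.0} attacks", expected_attacks(parties, knowledge, reward));
        knowledge *= 2.0; // collective experience only accumulates
    }
    // Even with reward flat, the trend is strictly upward: 10, 20, 40.
}
```

The point is simply that if no input ever decreases, the output can't either, which matches the "always trend towards more attacks" claim.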
The layoffs were only a year ago and the product itself was in a mature/stable market, which was one reason for the layoffs. Pre-layoff, VMware's desktop hypervisor had existed and been attacked for more than a decade. Other complications are a shrinking PC market and new versions of Windows.
That's only half the equation. The other half is the complexity of the design, along with its assurance activities. It's why the security and separation kernels with user-mode VMMs had few or no flaws in pentesting, whereas VMware's techniques take on a pile of complexity, are built with less assurance (see the parent comment), and process malicious inputs in privileged modes.
Definitely designed and assured to be shattered. I'd like to see what results they get on mCertiKOS or Muen.
I was at CanSecWest, though I wasn't in the room while these vulnerabilities were being exploited. The claim I heard was that the QEMU toolset, compared to the VMware toolkit, is less functional but also less vulnerable. Still, the QEMU device drivers were the source of a significant number of vulnerabilities. Indeed, there was a talk, "Dig into the qemu security and gain 50+ CVE in one year". That talk was, frankly, rather scary.
It is disappointing to see that no matter how many exploit-mitigation and sandboxing techniques are used, Edge+Windows is still an exploitable combination (multiple exploits targeted it successfully). I am a bit disappointed since it was a new browser from the ground up, and they could have gotten things right. But then browsers and operating systems are large attack surfaces, and it is difficult to get everything right. That is where exploit-mitigation techniques come into play. Apparently they do not offer much either.
Secondly, it is interesting to see that all but one of the security teams that won the contest were Chinese. I am surprised at the absence of US/Russian/EU hackers. Perhaps they are selling their exploits at a much larger premium on the black market, to the NSA et al.
> I am a bit disappointed to see this since it was a new browser from ground up
It's really not. The browser does have some new innovations... But it isn't something new from the ground up.
EdgeHTML is a fork of Trident. They dropped legacy code, and fixed a ton of things... But still a fork. In fact, EdgeHTML was available as an experimental feature in IE11.
Chakra is a fork of the JavaScript engine that was running in IE.
Edge is nowhere near a brand-new browser. But it does look like they're on their way to getting IE right under the new name. (Though security still needs some more work, apparently.)
I wonder if it was the operations side or the legacy-software side that undermined the security potential of Edge. Microsoft did publish a design for strong browser security around the time Google did Native Client.
They're capable of making most of a browser immune to the attacks likely to hit it. They can also build a strong system for containment. These simply weren't present in Edge to the degree MS Research had designed. Something is blocking the transfer of the quite-practical tech that MS Research has been building.
Not to mention static-analysis tools that should detect such flaws easily. Covert-channel analysis, required in security certifications since the 1980s, can find most information leaks, and some tooling existed for it, too. Use-after-free prevention is trickier for static analysis, with only one mainstream language doing temporal safety without a GC. So, two out of three are easily prevented or caught with techniques a decade or more old.
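On the temporal-safety point (Rust is presumably the one mainstream language meant), the compiler rejects use-after-free statically rather than at runtime; a minimal sketch:

```rust
// The borrow checker rejects a dangling reference at compile time.
// This version does NOT compile:
//
//     let dangling: &String;
//     {
//         let s = String::from("freed");
//         dangling = &s; // error[E0597]: `s` does not live long enough
//     }
//     println!("{dangling}"); // would be a use-after-free; never builds
//
// The safe equivalent transfers ownership instead of borrowing:
fn take_ownership() -> String {
    let s = String::from("still alive");
    s // ownership moves to the caller; nothing dangles
}

fn main() {
    println!("{}", take_ownership());
}
```

No GC is involved: lifetimes are checked entirely at compile time, which is why this class of bug is "prevented" rather than merely "caught".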
Such architectures have been commercially deployed in embedded, mobile, and desktop systems for quite a while; the earliest one I remember that's still supported was around 2005 for x86 desktops. All were built by companies or CompSci groups much smaller than VMware in labor and budget, simply applying methods that had worked in high-assurance security, cutting assurance down where complexity or budget demanded, but only where necessary. The big, mainstream companies cut it way down to maximize profit on existing market share. Then they end up at Pwn2Own, or their customers end up on breach lists.