Microsoft using blink/chromium is a good thing IMO.
Once Apple is forced to open up iOS to other browsers, the blink/chromium monopoly will be complete.
The only check on Google's ability to dictate what a web browser is will be the influence of companies like Microsoft, who are invested in blink/chromium and have the resources to potentially fork it if Google does something they don't like.
I would not be surprised if we see a blink/chromium based Safari in the future so that Apple can join the party as well.
Your "dream" there fuels some of my nightmares. You may discount the potentially devasting threat of a possible massive ecological collapse from relying on one single implementation of web rendering because you don't think open source projects can die because "someone has the resources to potentially fork it", but open source is no guarantee.
Beyond that, even discounting the threat of ecological collapse due to a single codebase, most of the web's best standards have come from lessons learned from multiple implementations. Without multiple implementations we lose a lot of the guard rails against bad single-vendor "standards". Even if you think downstream players like Microsoft can act as a big enough check on Google's rubber stamp in standards bodies like the WHATWG, there's a huge difference between Microsoft developers evaluating potential standards because they have to build their own implementation of them versus just reading them in code review to accept an upstream PR they didn't need to write themselves.
Also, as close as WebKit and Blink still sometimes are to a single implementation, it's hard to argue that Apple isn't already a contributor to the WebKit/Blink/Chromium hegemony. Whether or not iOS opens up to other browsers right now hardly matters while WebKit and Blink remain so similar. Apple is sort of protecting everyone from a true V8 (JS engine) monopoly, but the renderers have always been close enough that it makes little difference to those terrible web developers who only test websites in Chrome, assume they work "everywhere", and thereby support a "Works Only in Chrome" agenda.
Can someone explain the whole deal with this "diversity" of browser engines?
Do people also like "diversity" in C++ compiler implementations, with "diverse" quirks and bugs unique to each implementation? Or the diversity of Python interpreters? What's the point, other than incompatibility and headaches?
> Do people also like "diversity" in C++ compiler implementations
Not a C developer (and I welcome C devs to chime in here), but my understanding is yes. People were really happy when Clang/LLVM emerged as a serious alternative to GCC as the de facto standard compiler.
> the diversity of Python interpreters?
Yes. Back when I used to hack on Python (a decade ago), it was pretty cool to be able to pick up IronPython and reuse a bunch of .NET assemblies in a language I liked.
Right now in TypeScript land, people are enjoying Deno providing competition to Node.
For GCC and Clang, it's a case of a newer/better compiler architecture overtaking an older one. At no point do I see the advantage of keeping several implementations around just for the sake of "diversity".
If it were just unequivocally a "newer/better architecture", they wouldn't coexist side by side in the first place; one would have entirely replaced the other by now. Obviously, the differences between the two are much more complex than that.
One reason they are expected to coexist for some time to come is exactly that a lot of (in particular) Linux applications and libraries require gcc specifics. The fact that there are "gcc specifics" locking those applications and libraries to gcc, so that they can't just "upgrade to the obviously newer/better architecture", is itself an argument for "diversity". C/C++ is "supposed to be" a portable language. If developers were keeping to standards, and if clang were indeed objectively the better architecture, then nothing should still be using gcc, right? Diversity is one way you encourage developers to stick to portable standards (because then they can use whichever compiler is fastest/best today and switch at will as the implementations compete to outperform each other).
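To make that lock-in concrete, here's a minimal sketch (the file name and program are invented for illustration): nested functions are a GNU C extension that gcc accepts by default but clang rejects outright.

    /* gnu_only.c (invented name) -- builds with `gcc gnu_only.c`,
     * but clang rejects it: nested functions are a GNU C extension
     * that Clang has never adopted. */
    #include <stdio.h>

    int main(void) {
        int base = 10;

        /* A nested function capturing `base` from the enclosing
         * scope. gcc compiles this by default; clang errors out. */
        int add_base(int x) { return x + base; }

        printf("%d\n", add_base(32)); /* prints 42 */
        return 0;
    }

Multiply quirks like that across a large codebase and "just switch compilers" stops being an afternoon's work.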
The history of C/C++ is a great example. It also has more than a half century of the ecology widening into a lot of mostly standards-compliant implementations, then one dominating for a while and crashing the diversity. The dominant one starts to do things less by the standards on one side, and on the other side developers get used to (over-)developing to quirks specific to that implementation. Then either a massive 0-day infects the entire ecology with very little resistance, or there's a portability crisis because a new machine architecture, new operating system, or something else arrives that the old dominant vendor is ignoring or trying to sabotage.
We've even seen hints that all is not paradise when the dominant vendor is "open source": there was a long run of years when gcc was extremely dominant, when there were concerns about platform-wide 0-days, and when the gcc developers were playing fast and loose with the standards. Certainly those were "open source community decisions", but it was still too easy to PR non-standard features "that felt good" without going through deeper standards review processes.
It's generally seen as a great thing that today we aren't in one of those "one vendor dominates" periods, and that we have both gcc and clang as competing, independent open source implementations, each with relatively high adoption and slightly different niches/portability goals/downstream uses. It's also done much to push commercial vendors back toward competing on standards compliance. (Microsoft's C/C++ compiler is more standards-compliant than ever, for example, and its STL is now open source.)
I agree. This isn't as scary as people say. The threat is that Google does something evil and the web needs to adopt it. Thing is, Chromium is open source. Microsoft or anyone else can immediately fork it. Crisis averted.
The threat isn't just that Google does something intentionally evil.
Imagine a WebKit 0-day that Blink inherited is discovered tomorrow, one that breaks ~96% of today's browsers as soon as the bad actor who discovered it weaponizes it (or, worse, uses it to deliver malware or as an RCE vector).
That's not something Google did intentionally. That's not something that forks can "immediately" fix. That's something forks could even make worse: everyone would be auditing different-but-similar codebases for the same bug, with political infighting, finger pointing, and accusations flying in a Mexican standoff between the forks.
Most importantly, that is a lot of devices at major security risk from a single ecology, one "family of forks" of a single codebase.
What other piece of software does something like 96% of internet-connected devices have in common? We've got a variety of operating systems (Windows, Linux, macOS, iOS, Android, etc.). We've got a variety of hardware stacks. We've got variety almost everywhere but this one massive bottleneck, and it's growing worse. There's still some hope that in the hypothetical WebKit 0-day scenario that diversity alone would keep things from getting truly bad, but if you believe in software security you've probably had "defense in depth" drilled into your head. Does "everyone uses the same browser codebase" sound like defense in depth?
That's the biggest nightmare scenario. There are lots of little iterations of it, including all the various little ways that standards might slowly get ignored or broken until some day the future web wonders where the standards specs even are and can't find its way out of the box it was trapped in. If it happens little-by-little enough ("boiling the frog", as the aphorism goes), no one sees it as a clear threat at the time and no one thinks to "immediately fork it" until it is already far too late.

To repeat something that needs repeating a lot in these conversations: the problem with IE6 wasn't that it was terrible and "behind"; the problem with IE6 was that it was amazing and forward-thinking and had boiled some frogs by building specifications too quickly, ahead of the standards bodies. People didn't notice the problems until after they'd built websites and apps with IE6 in mind, because it was great at the time. People didn't notice the problems until after Microsoft felt it had "innovated enough", took its ball, and stopped playing. People mostly didn't notice the problems until it was too late to get out of the box they'd been trapped in without realizing it. IE6 today has a reputation as a terrible browser stuck "behind" the standards, but that's not where it did the most damage. It did the most damage when it was "the best browser to use", "the most innovative and powerful browser", and "everyone uses IE6 and is happy with it".
With that reasoning, do you also go and reimplement standard libraries just to create enough variance that, if one of them has a security flaw, your code might not inherit it?
IMHO, some software (e.g. OpenSSL, rendering engines, etc.) needs to be heavily scrutinized, and trying to keep multiple implementations around for the sake of diversity makes no sense to me.
Of course if a new implementation of say a rendering engine with a radically different approach emerges I'm all for it taking over.
OpenSSL is, I think, another "case in point" example: Heartbleed was an amazingly awful 0-Day that afflicted OpenSSL and I can only imagine how much worse things might have been if Windows didn't use SChannel and FreeBSD (and thus iOS/macOS) didn't use LibreSSL (and if LibreSSL's fork auditing hadn't made a difference in that situation).
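For anyone who doesn't remember the mechanics: Heartbleed came down to trusting an attacker-supplied length field. The sketch below is a minimal illustration of that bug class in C, with invented names, not OpenSSL's actual code.

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only -- not OpenSSL's actual code. Echoes a
     * "heartbeat" payload back to the peer, trusting the
     * attacker-supplied length. */
    char *heartbeat_echo(const char *payload, size_t actual_len,
                         size_t claimed_len) {
        (void)actual_len; /* the vulnerable version never checks it */

        char *response = malloc(claimed_len);
        if (response == NULL) return NULL;

        /* If claimed_len > actual_len, this reads past the payload
         * into adjacent heap memory (keys, session data, ...) and
         * leaks it back to the attacker. */
        memcpy(response, payload, claimed_len);
        return response;

        /* The entire fix was one missing bounds check up front:
         * if (claimed_len > actual_len) return NULL; */
    }

Independent implementations like SChannel simply didn't share that code path, which is the whole "defense in depth" point.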
Even (especially?) highly scrutinized libraries seem to need diversity in implementation to avoid single point of failure problems.
I tend to attribute it a lot more to Chrome's many years of deceitful marketing and bundling itself as adware in other products, including Adobe products.
Depending on your view of Firefox OS and whether it was "Firefox enough", the "Mozilla Foundation doesn't focus enough on Firefox" complaint is relatively new, and Firefox's biggest losses in market share happened well before even the Firefox OS effort.
(And the Firefox OS effort was an attempt to keep a competitor to Android and its Chrome hegemony viable, so it was a direct reaction to lost market share. The current complaints that "Mozilla isn't doing enough Firefox" are all complaints about the various ways Mozilla is trying to diversify its revenue stream, and that, too, looks like a consequence of lost market share, not the cause of it.)