A Quantum Leap for the Web (medium.com/mozilla-tech)
495 points by Manishearth on Oct 27, 2016 | 132 comments



[disclaimer: I co-founded Mozilla Research, which sponsors Servo]

It's awesome to see the Gecko team continue to tackle big, ambitious projects now that Electrolysis is rolling out. And I'm so excited that they're betting big on Servo and Rust. Servo has really been taking advantage of one of Rust's promises: that you can reach for more aggressive parallelism and actually maintain it. I believe recent numbers showed that effectively all of Firefox's users have at least two cores, and about half have at least four. The more we fully utilize those cores, the smoother we should be able to make the whole web.

Over the last year, all three communities have been laying groundwork to be able to land Rust components in Firefox and share components between Gecko and Servo, and now it looks like that's teed the Gecko team up to commit to making use of some big pieces of Servo in the coming year. Some of the initial builds of Firefox with Stylo that Bobby Holley has shown me look really amazing, and WebRender could be a game-changer.

And the Servo project is just getting warmed up. ;) If you're interested in what they're up to next, check out Jack Moffitt's recent presentation from a browser developer workshop last month:

https://www.youtube.com/watch?list=PL4sEzdAGvRgCYXot-o5cVKOo...


I have to say, the work being done on Servo is really exciting—the first tectonic shift in browser engines to come along in years.

pcwalton's talk about WebRender earlier this year[1] was one of those rare technical presentations that left my jaw on the floor. In particular, the insight that modern browsers are just AAA game engines with a security model, so they should be architected similarly, changed the way I think about browsers. That game developers and Mozilla are both so excited by Rust's ability to safely write parallel systems at scale makes a lot of sense.

[1]: https://air.mozilla.org/bay-area-rust-meetup-february-2016/#... Previous HN discussion: https://news.ycombinator.com/item?id=11175258


> The more we fully utilize those cores, the smoother we should be able to make the whole web.

I wonder why the GC/CC are not multithreaded though. It seems like those are fairly isolated components, considering the entire application gets suspended so they can do their job, i.e. prime candidates for parallelism.

When forcing a collection on a large Firefox instance it can easily spend 20+ seconds collecting on a single thread, while a Java VM can churn through something like 1 gigabyte per second per core.

In other words, from the outside it looks like a low-hanging fruit that has not been plucked.


> I wonder why the GC/CC are not multithreaded though. It seems like those are fairly isolated components, considering the entire application gets suspended so they can do their job, i.e. prime candidates for parallelism.

> When forcing a collection on a large Firefox instance it can easily spend 20+ seconds collecting on a single thread, while a Java VM can churn through something like 1 gigabyte per second per core.

Several reasons why this isn't low-hanging fruit and isn't as valuable as it may seem:

1. Servo runs separate origins as fully separated JS VMs on separate threads. Because they're separate runtimes, the collections can already happen concurrently.

2. The isolated JS runtimes imply isolated heaps, so there is no need for global collection (and in fact no way to do it even if we wanted to with the current SpiderMonkey architecture). Small individual per-origin heaps are much faster to collect than large global heaps (see the sketch after this list).

3. While collection is happening, like in other modern browsers, the Servo chrome and scrolling are fully responsive. But Servo goes beyond that: animations, layout, and repainting can also happen while a GC is occurring. This means that CSS transitions/animations on the page (all animations, not just transform and opacity!) will still happen while the page is collecting, and you can switch tabs and resize the window while the GC is running. You can even interact with cross-origin iframes on the page, so GCs triggered by ads won't affect the content you're reading!

4. SpiderMonkey already has incremental and generational GC, so we've done most of the work necessary to reduce stop-the-world pauses already.

5. Making a single-threaded GC concurrent is a lot of work. It's not low-hanging fruit by any stretch of the imagination.

6. Furthermore, making a single-threaded GC concurrent usually involves a throughput loss, because now you need operations to be atomic that weren't atomic before.
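
To make points 1 and 2 concrete, here is a minimal sketch of the per-origin model (made-up types, not Servo's actual code): each origin gets its own thread with its own private heap, so one origin's collection never pauses another origin's script.

    use std::thread;

    // Stand-in for a per-origin JS heap; in Servo this would be a whole
    // SpiderMonkey runtime living on that origin's script thread.
    struct OriginHeap {
        origin: String,
        objects: Vec<String>,
    }

    impl OriginHeap {
        // A collection only walks this origin's (small) heap. Every other
        // origin's thread keeps running its script in the meantime.
        fn collect(&mut self) {
            let before = self.objects.len();
            self.objects.retain(|o| !o.starts_with("garbage"));
            println!("[{}] collected {} objects", self.origin, before - self.objects.len());
        }
    }

    fn main() {
        let origins = ["https://news.example", "https://ads.example"];
        let handles: Vec<_> = origins
            .iter()
            .map(|&origin| {
                thread::spawn(move || {
                    let mut heap = OriginHeap {
                        origin: origin.to_string(),
                        objects: vec!["garbage: detached node".into(), "live: listener".into()],
                    };
                    heap.collect(); // concurrent with every other origin's collection
                })
            })
            .collect();
        for handle in handles {
            handle.join().unwrap();
        }
    }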

That said, the GC is something that the SpiderMonkey team continues to make steady progress on, and that work directly benefits Servo. So fear not, we aren't neglecting the GC either :)


Writing a parallel GC surely is not "low-hanging fruit" by any reasonable measure. Java already has to deal with concurrency in their objects, and throughput is more of a concern for server-like workloads, so they are dealing with a different situation. In Firefox, most of our effort has focused on improving responsiveness and eliminating work (through things like compartmental GC). (Some GC sweeping is already done in parallel in Firefox.)

That said, I believe IE does some concurrent marking, so there is certainly room for improvement.


> and throughput is more of a concern for server-like workloads

Many Java applications also have to worry about latency, not just throughput. Parallel collections cut down on pause times, too, simply because they can get about the same work (in terms of CPU cycles) done in less wall time; i.e., parallelizing non-concurrent GC phases also improves responsiveness.


That would be hard to do without writing a new JavaScript engine, which would expand Servo's current scope pretty dramatically. My guess would be that as long as there are big gains to be made that can be done without rewriting SpiderMonkey, the Servo project would rather only tackle one massive project at a time :)


> all of Firefox's users have at least two cores,

As an end user, it concerns me slightly that the visible change will be that Firefox pegs two cores instead of just one.

An absolute explosion in JavaScript usage on the web, much of it low quality and haphazardly put together at runtime, combined with the convenience of tabs, makes this a problem. Mozilla may do everything right, but that doesn't necessarily help the end user.

Is there anything I could do easily to remedy this, without bothering with whitelisting JavaScript? And are there any activities at Mozilla concerning this, perhaps with identifying the most trivial cases of scripts spinning without doing useful work? Maybe pausing DOM changes for documents that aren't visible?


Disable JavaScript? Not that the web is massively fun without JavaScript, but it sounds like you have a very performance-dependent workload on your computer. I don't think it's anything most people will have to worry about though; CPU scheduling is pretty good most of the time, and people's JavaScript sucks less and less.


> Maybe pausing DOM changes for documents that aren't visible?

We already throttle background tabs heavily in various ways.


I would personally not be so eager to disclaim such impressive credentials, but I suppose you have your reasons.


It's telling that not one comment in this entire thread mentions security and exploitability, two areas where Firefox is not just terrible, but the worst choice not only amongst Chrome, Safari, and Firefox (1), but also compared to Edge and IE 11. Everyone is focusing on performance, as if that's the BIG issue these days. Talk about having your priorities screwed up.

Reading your comment, and the linked post, I'm left with the impression that this will not change in the foreseeable future. Every exploit shop considers Spidermonkey a security clusterfuck yet it's still in use. Multiple processes do absolutely nothing for security unless combined with sandboxing ala-Chrome. Continuing to use C++ rather than fully embracing Rust (or something even better than Rust) also does nothing for security. Iteratively improving things on top of a Javascript engine that's a security disaster and a C++ core will not give us a secure browser, as one can not build castles on top of sand.

At some point people need to realize that one has to scrap the pile of mud and start again, on solid foundations. Alas I feel that these lessons escape the Mozilla folks and thus their browser will remain low hanging fruit for adversaries.

In an age where Nation States can MITM __the entire planet on demand__ [QUANTUM] and the FBI delivers Firefox 0day through TOR exit nodes, this blatant disregard for security should be entirely unacceptable. I don't really blame Mozilla, but those who use Firefox and give Mozilla their share in the browser market. If we don't demand better, we shall never have it.

(1) http://cyber-itl.org/blog-1/2016/9/12/a-closer-look-at-the-o...


> Every exploit shop considers Spidermonkey to be a security clusterfuck yet it's still in use.

Can you elaborate why SpiderMonkey is worse than Chakra, JavaScriptCore, and V8?

SpiderMonkey has approximately the same security track record as all of them. I think you're conflating the lack of sandboxing and some mistaken security-related decisions in the Firefox chrome with some sort of intrinsic security problem in the JavaScript engine.

> Multiple processes do absolutely nothing for security unless combined with sandboxing ala-Chrome.

And sandboxing is being actively worked on right now. In fact, it's an utmost priority.

Nobody is talking about doing Quantum instead of sandboxing. Rather, Quantum is being done alongside sandboxing.

> Continuing to use C++ rather than fully embracing Rust (or something even better than Rust) also does nothing for security.

I don't agree. Replacing C++ code with Rust code reduces the attack surface. The more of the browser you write in a memory-safe language, the fewer opportunities for memory safety holes attackers have.

Do I want to shrink the trusted computing base as close to zero as possible? Absolutely! But that's a long-term journey, and I would love to see Servo components get some real-world production use along the way.

> At some point people need to realize that one has to scrap the pile of mud and start again, on solid foundations. Alas I feel that these lessons escape the Mozilla folks and thus their browser will remain low hanging fruit for adversaries.

There's only one browser vendor that is actively working on "scrapping the pile of mud and starting again", and it's the browser vendor you're complaining about. You claim that the lessons of security "escape" me, but I have been working day-in and day-out for years on a browser engine written from the ground up in a memory-safe language.


Do you see sandboxing at odds with performance? I know it causes a hit to RAM, but I have no sense for impact on CPU usage or user responsiveness metrics.

Once sandboxing is implemented, will there be a push to implement the privileged process in Rust?


It's not just the lack of sandboxing: SpiderMonkey is qualitatively worse than V8. The metric used is ease of finding exploitable bugs _without taking sandboxing into account_. I do not know where you get your security track record from, but there is a big asymmetry in public-vs-private information on the matter. Most of the research into exploitation happens behind closed doors and the general public is not privy to it.

Reducing the attack surface is meaningless when the attack surface is humongous, yet people keep reiterating the same old fallacies.

Mozilla has said nothing about starting again on solid foundations. All I see are iterative improvements on top of the same rotten core, with an emphasis on performance to boot. This is not progress.


> I do not know where you get your security track record from, but there is a big asymmetry in public-vs-private information on the matter.

No, there isn't. Security bugs are made public in both Bugzilla and chromium.org once enough time has passed. Both engines have been around for years and years, so there's been plenty of time to gather data.

Sorry, but I'm not going to just trust "I can't link to anything because it's secret".

The only security-related feature that V8 has that SpiderMonkey doesn't is limited constant blinding in the non-optimizing JIT only. This is a cosmetic feature that does little, because an attacker can trivially subvert it. See: https://bugzilla.mozilla.org/show_bug.cgi?id=677272#c58

> Reducing the attack surface is meaningless when the attack surface is humongous, yet people keep reiterating the same old fallacies.

How are we going to get the attack surface down unless we reduce it?

> Mozilla has said nothing about starting again on solid foundations.

1. What do you think Servo is?

2. Who else is talking about "starting again on solid foundations"?


Bugs are only made public when developers find out about them then fix them. Many exploits simply die to code churn rather than developers actually fixing what they understand to be a vulnerability.

I have no horse in this race, but I don't think you should assume you have all the information.


My knowledge may be out of date, but afaik one example of SpiderMonkey being worse is that it requires manually rooting objects; I don't think Chakra or v8 have this manual requirement that leads to security vulns.


> How are we going to get the attack surface down unless we reduce it?

Ideally you never introduce it. Alternatively, you never let it get to the point where getting it down is essentially a no-op.

>> Mozilla has said nothing about starting again on solid foundations.

> 1. What do you think Servo is?

A research project in a Mozilla lab that is not fully embraced yet. Parts of it may, slowly, end up in Firefox. How does this improve matters or introduce solid foundations? I have yet to see a concrete commitment from Mozilla that Servo will be fully utilized in Firefox. Even the linked article demonstrates the opposite. Even so, let us assume that Firefox uses Servo. We're still in security hell due to SM.

> 2. Who else is talking about "starting again on solid foundations"?

In practice, Microsoft did tremendous work with Edge.

Chrome is not perfect, but is as close to perfect as you can get while still using rotten tools (C++).

There is a shift inside Apple too. We will see how it manifests.


> Ideally you never introduce it. Alternatively, you never let it get to the point where getting it down is essentially a no-op.

I can't understand this logic at all. You seem to be denying that you can ever reduce software attack surface in existing software, which is obviously false. It's undeniable that seccomp-bpf dramatically reduced attack surface in the Linux kernel, for example.

> A research project in a Mozilla lab that is not fully embraced yet. Parts of it may, slowly, end up in Firefox. How does this improve matters or introduce solid foundations?

I would think the important thing that determines whether we're developing the software is whether we're developing the software, not whether there are immediate product plans. By that logic, AT&T never developed Unix.

> Even so, let us assume that Firefox uses Servo. We're still in security hell due to SM.

No. I've already explained why this is incorrect. Switching to V8 would make negligible if any difference in terms of security.

> In practice, Microsoft did tremendous work with Edge.

EdgeHTML is a fork of Trident! Its development was not "starting again on solid foundations"!

> Chrome is not perfect, but is as close to perfect as you can get while still using rotten tools (C++).

Chrome is a fork of WebKit, which is a fork of KHTML! Its development was not "starting again on solid foundations"!


> Even the linked article demonstrates the opposite

No it doesn't. Servo is still chugging along. Firefox wants to get Servo's advances early.

> I have yet to see a concrete commitment from Mozilla that Servo will be fully utilized in Firefox.

Because Servo is a long term project. And writing a new browser engine is hard, if you want to be 100% web compatible. Servo's getting there, but it will take time.

> We're still in security hell due to SM.

No, we're not. Content sandboxing is being actively worked on. Even if it weren't, pcwalton already mentioned that SM and V8 are roughly on par wrt safety features (aside from sandboxing).

> Alternatively, you never let it get to the point where getting it down is essentially a no-op.

You have yet to demonstrate why you think Firefox's attack surface is that bad. You linked to a blog post which uses this (http://cyber-itl.org/blog-1/2016/8/12/our-static-analysis-me...) metric, which is light on specifics (or reproducibility). ASLR and content sandboxing may be enough to bump Firefox back to the top. There's nothing there to convince us that the metric used maps well to real-world exploitability.


> Multiple processes do absolutely nothing for security unless combined with sandboxing ala-Chrome.

Seems like sandboxing exists (in some form) and is part of the plan? https://wiki.mozilla.org/Electrolysis#Security_Sandboxing

> Continuing to use C++ rather than fully embracing Rust (or something even better than Rust) also does nothing for security

Rewriting Spidermonkey in Rust is a major project in itself. JS engines have been highly optimized over the years and it's pretty hard to make a competitive new one. I would estimate that rewriting SM would be a project that's larger than Quantum and Electrolysis combined (I could be very wrong with this estimate).

> I'm left with the impression that this will not change in the foreseeable future.

There are folks who want to start replacing bits of SM with Rust code. Also, the build system platform support isn't yet in a state where you can write rust code and have it work for all supported platforms IIRC, so you can only use it for experimental things or nonessential features. Of course this will change by the time Quantum lands.

Not sure what the current status of SM oxidation is (there certainly is interest), but just because one project focused on speed exists, it doesn't mean that there aren't other projects focused on safety. This post and the comment you speak of are talking of the speed-focused project. You can't really draw conclusions about other, unrelated bits of the browser from this. Sandboxing seems to be pretty high priority, for example, but there's no reason for a post here to talk about this.

Also, security is still incremental. The castles on sand analogy only applies if an unpatched exploit exists in SM. This may be more common for Firefox over other browsers (IIRC this really isn't, it's just a matter of not having sandboxing, which I talked about above), but ultimately they get patched (except for 0days hoarded by malicious parties) and reducing the rate of exploits by using Rust elsewhere is certainly a plus.


They do seem to have Chrome-style Sandboxing in-progress, I guess we'll see how it turns out.

0day being used by various parties is exactly what I'm talking about here. Most of it will not get patched anytime soon and I dare say is orders of magnitude "bigger" than the exploitable bugs that are reported and patched. Yet you don't seem to break a sweat about it, in fact you are comfortably dismissing it under "malicious parties".

Doesn't that strike you as weird? I know people have trouble putting threats that are not fully visible in perspective, but there is enough information out there for everyone to establish an accurate-enough picture of what is happening. The entire Internet has turned into a domain of war, and we will live with Firefox for years (or worse, decades) to come.


> Note that I did say "Sandboxing ala-Chrome". The link you used demonstrates sandboxing for _plugins only_ which is practically useless, most of the Firefox 0day I've seen goes after Spidermonkey.

No, the link mentions "Content" all over the place, which is websites (and IIRC SM runs in the same process). It seems to be a work in progress, but that was all I was going for.

> And 0day being used by various parties is exactly what I'm talking about here. Most of it will not get patched anytime soon and I dare say is orders of magnitude "bigger" than the exploitable bugs that are reported and patched. Yet you don't seem to break a sweat about it, in fact you seem comfortable dismissing it under "malicious parties".

No, I focused on patched exploits because IMO they can cause more harm (especially to users who don't get the patch in time) over zero days. Zero days can still cause harm, but because they're hoarded it's often less harm -- they can mostly be used in targeted attacks, since in broader attacks it's less likely for them to work under the radar (and stay unpatched). I could be wrong here, sure (and whether targeted attacks are less harmful than broad ones is debatable too, since targeted attacks generally do much more). But I'm under the impression that 0 days in SM (or indeed any software) are not as common as you seem to believe (of course, data on this will be incomplete). Hence I focused on patched exploits (not knowing that you were talking about 0 days in particular).

I was not breaking a sweat about either type of vulnerability because content sandboxing is something being worked on. These vulnerabilities are still bad with sandboxing, but they are no longer something that is hard to incrementally improve on.


> No, the link mentions "Content" all over the place, which is websites (and IIRC SM runs in the same process). It seems to be a work in progress, but that was all I was going for.

You are right, I've updated my post.


> A first version of our new engine will ship on Android, Windows, Mac, and Linux. Someday we hope to offer this new engine for iOS, too.

More people need to put pressure on Apple to allow third-party browser engines on iOS. Fortunately, they're already getting sued over this, but just in case that doesn't succeed, there should also be bigger public pressure on Apple to allow them.

http://www.recode.net/2016/10/7/13201832/apple-sued-ios-brow...


I've had a Gecko port stood up and running on iPhone hardware several times in the past 6 years but we've never sorted out a real path to shipping. The most recent incarnation felt a lot nicer than Safari with our async pan/zoom architecture. Maybe I should just get Servo running and we can ship it as a tech demo. :-P


I'll do the CI work to keep it building :-)


I did a sad LOL at that line


This sounds to me like Mozilla is getting impatient with Servo. Servo was more than just a parallel browser engine; it was the only new web engine not based on a decades-old codebase.

It was a statement that it's feasible to hold off monoculture, because compatibility isn't impossible to achieve in new engines.


> This sounds to me like Mozilla is getting impatient with Servo.

I mean, yes. In a way. I think this was always part of the plan for Servo -- if things go well start uplifting ideas and/or code to Firefox. This isn't exactly impatience; it makes perfect sense to do this.

This isn't a change in gear for Servo, though. Servo is still chugging along. There are still no concrete plans (that I know of) for a Servo product, however Servo is working on stuff that it needs for it to be a product (i.e. it's not just focused on trying out researchy ideas to be used by Gecko), like proper SSL security. So there's nothing stopping Servo from being a product in the future, and Quantum doesn't affect this.

Quantum affects Servo in a couple of ways:

- There are now paid Gecko engineers hacking on bits of Servo (yay!). On the flipside, paid Servo engineers (e.g. me) are working on Quantum, but for the most part this involves improving Servo itself.

- Rough edges are being polished, web compatibility is being addressed for the quantum components. These were things that were always being worked on, but weren't always a priority for a given component. For example, there have always been optimizations that we knew we could do, but we hadn't done them so far since there were other researchy things to focus on. Now these are getting implemented.

- The build/CI system will probably have major changes to make it possible to smoothly integrate with Gecko. This doesn't really affect goals, just the day-to-day servo developer experience.

It doesn't affect Servo's goals or de-emphasize Servo itself. It just gets some of the advances that Servo has made to the public faster.


Servo is still being developed with the same manpower. Quantum is not de-emphasizing Servo. It would make little sense to do so, since the lack of legacy in Servo is part of what has given us the freedom to experiment with things like parallel restyling in the first place. And Quantum helps Servo, too—by giving us real-world Web compatibility experience with portions of Servo's codebase sooner, it helps us shake out bugs faster than we can with Servo alone.

(Disclaimer: As always, I speak for myself, not for my employer.)


It would make no sense to de-emphasize servo. It seems like it would make a lot of sense to de-emphasize Quantum/firefox/everything mozilla is wasting their time and money on that isn't Servo.

I hope that either mozilla is lying for political reasons about their lack of intent to use servo outright, or that the servo team forks from mozilla and takes funding from patreon (or snowdrift when it launches) and builds something great, without the burden of mozilla and firefox.


> It seems like it would make a lot of sense to de-emphasize Quantum/firefox/everything mozilla is wasting their time and money on that isn't Servo.

Here's how I think of it: What's the biggest risk to Servo? I think most people would answer that (besides it being a lot of work), as it's a big boil-the-ocean project, there's a lot of newly-written code that hasn't been battle-tested yet. How do we fix that? By getting early user adoption on pieces when they're ready, so we can shake out the issues. Quantum is the perfect way to do that.

Of course, I happen to care about Firefox's success too, so getting great features to Firefox users as soon as possible is also a big win from Quantum. :)


Competition in the browser market is much harder than in, for example, the market for programming languages, and even there big version bumps have huge costs. If you can avoid it, you don't want all your resources tied to Perl 6, PHP 6, Python 3 and so on, while your users are complaining because the usable version of your software does not improve any longer.


I think emphasizing Servo is the wrong goal.

Ideally, when rewriting a large legacy app you want to take the StranglerApplication[1] approach. Netscape tried rewriting its software from scratch and we all know how that ended for Netscape[2] - Netscape is no more. Luckily, Firefox came from that whole mess. But I assume a lot of effort was lost, going from Netscape to Firefox.

In an idealized scenario, Servo slowly starts replacing parts of Firefox: first the URL parser and media player, then slowly the DOM, CSS, and renderer, and finally it takes over and rewrites the JS VM.

The takeover is silent and gradual, and the user doesn't notice anything beyond speed improvements. It's like some kind of silent Borg infection.

[1] http://paulhammant.com/2013/07/14/legacy-application-strangu...

[2] http://www.joelonsoftware.com/articles/fog0000000027.html


I think the user needs to notice a lot of improvements, all at once, because Firefox really feels hopeless at this point. I don't know though; maybe a slow turnaround will work, or a sudden rendering integration will be the turnaround it needs.


Project Quantum is an opportunity to ship some of the Servo components in Firefox and gain millions of users, real-world tests of the technology, and hopefully dramatically expand our contributor base. As we mention in a post to the Servo mailing list (https://groups.google.com/forum/#!topic/mozilla.dev.servo/3b... ), Servo has a lot coming in 2017 unrelated to Project Quantum or Firefox.


Speaking for myself, I don't want "Servo components" in Firefox; I want Servo in Firefox. I want an engine as fast as Servo has been shown to be, not pieces of it that make Firefox incrementally better; I want a browser that is miles ahead. That's always what I found interesting about Servo.


At the same time, you want a browser that keeps working with the web at least at the same level as Firefox does right now.

Servo is getting us 80% there; the remaining 20% (quirks, non-standardized behavior, and web sites coded with only Chrome/Fx/IE in mind) is going to be increasingly difficult to cover.

On the other hand, some pieces of Servo are close to having "full" compatibility. That means we can put them in Firefox already.

The result is that web coverage alignment between Servo and Firefox increases, which is great for Servo, and performance and safety improve as well, which benefits Firefox.


It also wouldn't be good to have Servo in Firefox until unglamorous things like accessibility are implemented. There's a lot that goes into mature, production software that isn't there in a new research prototype. So incrementally introducing pieces of Servo into the production codebase is probably better.


Incremental improvements are fine, but they don't need a codename or a blog post. Incremental improvements are just something you always do in software development.


Incremental improvements that together amount to a significant rearchitecture are not the same as incremental improvements that are just fiddling around the edges.

But even the latter may need a name if they will involve months of time and a large enough group of people that all need to be able to refer to them. Blog posts are more debatable, of course.


There's a giant gap between your personal lack of interest and the notion that this blog post is something no one finds interesting.


> Speaking for myself


Perhaps in some comment to which I did not reply. The parent I am attached to had no such disclaimer but did explicitly say this is not worth a blog post.


Even if you don't care about Gecko, this benefits Servo. It lets us battle-harden core components like WebRender and the style system.


It also benefits the Rust language for the same reasons. Firefox is already shipping components compiled in Rust[1].

[1] https://hacks.mozilla.org/2016/07/shipping-rust-in-firefox/


I can tell you from where I sit at Mozilla, everyone is very excited about Servo. It just takes time to build things! :) It wouldn't make sense to wait for Servo to reach full web compatibility before starting to integrate it into production uses. This is really just a natural next step in the progression of adopting Servo at Mozilla.


Compatibility in new engines is ... hard. Servo is basically going to need to spoof the WebKit UA and duplicate a bunch of WebKit bugs (the Edge approach) or spoof the Gecko UA and duplicate a bunch of Gecko bugs. Some specs now have an explicit "does your UA say you are Gecko, or does it say you are WebKit" switch with behavior specified for both branches. :(


> Some specs now have an explicit "does your UA say you are Gecko, or does it say you are WebKit"

woah! that's truly tragic.

at the risk of ruining the rest of my week, may i ask where?


Sure. https://html.spec.whatwg.org/multipage/webappapis.html#conce... and the pieces of that spec that use it is the main place so far. There is also discussion about maybe using the same switch in https://github.com/whatwg/dom/issues/278 if it turns out that too many pages assume "not webkit" means XMLDocument.load exists.

It could be worse. When the ECMAScript committee discovered that parsing for function-in-a-block differs between browsers and that sites sniff and do different things in different browsers such that none of the browsers can change to each other's behavior, they went ahead and specified a "compatible subset" of behavior such that if you follow those rules you will get the same behavior in all browsers. That's fine, but if you _don't_, as web pages do not, then you're back to square 1: pages sniff and do different things. And the details are not written down anywhere, so a new implementation has no choice but to try to reverse-engineer the sniffing somehow and then also reverse-engineer the actual runtime behavior of function declarations of whatever browser's sniffing response they decided to fake. The good news is that this function-in-a-block thing is literally the worst situation I have encountered in web standards, so all the other ones suck less. ;)


Considering that Servo is an embeddable engine, why not keep the scope of Servo focused in building a standards-compliant engine?

Replicating other engines' bugs sounds like too much effort in the wrong direction :/

Although, after the -webkit-disaster I don't really expect anyone to like the idea of a standards-compliant engine :(


> Considering that Servo is an embeddable engine, why not keep the scope of Servo focused in building a standards-compliant engine?

That's what we've done so far. We haven't exposed any Gecko- or WebKit-specific stuff that I'm aware of. We have our hands full with the standard :)

(That said, we have implemented things that are unspecified but are de facto standards implemented by both Gecko and WebKit. That includes basic things like <button> and tables…)


Any efforts underway to turn those de facto standards into actual standards as Servo encounters them?


Yes, when we have time. For example, notriddle has been doing great work trying to spec hypothetical boxes: https://github.com/notriddle/ServoHypothetical


> why not keep the scope of Servo focused in building a standards-compliant engine

It depends on how you define standards-compliant.

> Replicating other engine's bugs sounds like too much effort towards the wrong direction

Well, web pages depend on those. So your options are either to standardize them or to have a standard that, when implemented, gives you a non-working product.

In practice, they're being standardized (just like XMLHttpRequest was a non-standard thing that ended up getting standardized). See https://compat.spec.whatwg.org/ and https://html.spec.whatwg.org/multipage/webappapis.html#conce... and the bits of the HTML spec that reference it, the bits of https://drafts.csswg.org/cssom/ that mention "webkit", and so forth.

So in the end, a "standards-compliant" engine will implement those standards, which are themselves to some extent a codification of old implementation bugs; pretty normal for standards...


> Considering that Servo is an embeddable engine, why not keep the scope of Servo focused in building a standards-compliant engine?

So far, we are (as far as I know)! We try very hard to avoid nonstandard behavior, and if we can't avoid it, try to push for its inclusion in the standard. Given that there are no users that rely on Servo, we can even wait for the standard to be changed before implementing it.

We often try to implement the standard to the word, with code that closely matches up to the standard. This has helped find tons of bugs in the standard (or in the tests). Nonstandard behavior in components shared with Gecko is off in Servo mode.

But if Servo is to be a product we probably will have to change our stance here eventually.


Isn't this announcement basically just saying, though, that Servo is going to replace Gecko piece by piece, rather than all at once?


Servo components will replace Gecko components Where It Makes Sense (tm).

Servo will remain the place where the most radical experiments happen and once proven, will "stabilize" their way into Gecko.


Servo has always been and still is a research project. It was never intended to go into production as a fully-fledged Mozilla product. Quantum is the initiative to bring a lot of those technologies into Gecko.


> Servo has always been and still is a research project. It was never intended to go into production as a fully-fledged Mozilla product.

That's not true. Servo is designed to be production-grade.


To elaborate on this "production grade" comment, Quantum is using the same code that's in Servo. I think the fact that we were able to build a production-grade style system so quickly (and one that is 4x faster!) is a testament to the advantages of Rust.


> That's not true. Servo is designed to be production-grade.

I never said that it wasn't. I said that it wasn't going to be shipped as a production Mozilla product.


> I never said that it wasn't. I said that it wasn't going to be shipped as a production Mozilla product.

That's also not true.

Maybe it will, maybe it won't; that's a decision that has to incorporate many factors other than technical ones. From my point of view as an engineering lead, I'm designing the engine to ship.


One key difference here is that this keeps Servo open to continuing to be a research vehicle for new technologies, without the exposure of compatibility for millions of users. If we ship Servo in a product, that does change the equation a bit.


I think part of what pcwalton and metajack are getting at is that Servo isn't "just a research vehicle" but rather intends to keep seeking adoption. I don't believe adoption and research are nemeses -- in fact, I'd say adoption is a core part of Mozilla Research's MO. I talked about this a little bit in a talk I gave this summer:

https://www.youtube.com/watch?v=9OHcJzJQ2Nk


They should use Yahoo's front page as their performance baseline.

Whenever I load it, the favicon starts to flicker, multiple movies (ads) start playing, and I can't tell whether scrolling has been badly hijacked by some rogue js plugin or if the performance of their video playback is just that bad.


Yahoo's home page explains so much about the company -- unable to maintain even the simplest and most basic features of their site. I'm still on Yahoo mail, and there were a few months this year where the basic search functionality didn't work.

Yahoo would probably make a great case study in corporate culture gone wrong.


> Yahoo would probably make a great case study in corporate culture gone wrong.

"would probably"? :)


There are so many major brand websites that are so awful that I want to grab anyone who admits to working there and yell 'have you tried using your own damn product lately because it sucks' but I'd rather not be returned to prison for my unique form of UI feedback.


Quantum wiki page with info on how to get involved: https://wiki.mozilla.org/Quantum


I hope that low power usage remains a key priority. Surprisingly, I don't see any mention of it in this article.


There is some discussion about power usage in this video by Jack Moffitt: https://www.youtube.com/watch?list=PL4sEzdAGvRgCYXot-o5cVKOo...


Does anyone know how this compares to the current implementations of competing browsers? I.e., is Firefox still playing catch-up in some respects, or is this leaps ahead of the competition too?


I think the performance of the style systems in Blink and Firefox is similar. Since Servo's style system is linearly scalable, we expect most users to get a ~4x speed improvement on styling.

Users don't care specifically about style performance, though, but about things like interactivity. We think Servo's style system will improve those things, but we don't have numbers for that on hand.

To give a concrete example, it takes Firefox ~1.2 seconds to restyle the single-page HTML5 spec with its current non-parallel style system. Firefox with Stylo can do this in ~300ms. That is close to a full second off of initial page load.


How does WebKit's JITing style system compare? (As far as I'm aware, nobody else has anything like it, though it is still single-threaded.)


I would eventually like to look into a CSS JIT. But I would not like to just copy WebKit—I would like to look into treating CSS as a global DFA and compiling accordingly instead of looking at local optimizations only.

As always, the problem with JITs, especially on the Web, is that it's hard to beat the interpreter when you take compilation time into account, and most selectors you see in the wild only apply to a small number of nodes.


For context, [here's an IRC discussion about implementing style matching as a single global DFA, and what its tradeoffs would be](http://logs.glob.uno/?c=mozilla%23servo#c550146).


Just curious, has anyone seen if there are performance improvements in Javascript DOM manipulation (eg adding elements or attributes in JS)?


How does it compare to Microsoft Edge?


Currently no major browser has significant components implemented in a safe language like Rust. And while there are some efforts towards granular parallelism in existing major browsers, none really took a "start from scratch and get it right" approach like the Servo components that Quantum intends to adopt (and that's very noticeable in Servo perf).

So on both of those aspects, the plan here is to go significantly further than any existing major browser.


Here's my experience with the current Firefox compared to Chrome:

- Firefox consumes way less resources than Chrome. I can open 20 tabs and it will consume a reasonable amount of memory.

- Firefox crashes way more often than Chrome. I get a crash every day, mostly because of aggressive JavaScript found in pages riddled with ads that abuse tricks in order to force the viewer to see ads (Adblock can help).

- Firefox is way slower than Chrome when it comes to JavaScript execution, no question, and it looks like it supports fewer HTML5 features than the competition.

If Servo can effectively limit Firefox crashes then the investment is worth it, because that's the most annoying thing with Firefox. But the amount of memory used by Chrome is just unacceptable, along with spawning 20 processes just because they can ... Chrome is a memory and resource hog.


I've never had my Firefox crash on me in at least a year (both on Linux and Mac OS). But I have multiple levels of adblock, so maybe it's really helping. :)

You could also try to wipe your profile and see how a frugal Firefox without any addons behaves. (Remember to backup your bookmarks to a file before wiping!)


> Firefox is way slower when it comes to Javascript execution than Chrome

If you have specific pages where you're seeing this, please file bugs on Firefox and add "bzbarsky" to the cc list for the bug!


Thanks for reaching out; will do if I encounter a case. I don't run Chrome daily, and my personal experience with Firefox is good, including with "big webapps" like Atlassian products (JIRA, Confluence), but I often read the comment that "big pages/webapps" are faster on Chrome than Firefox.

See https://news.ycombinator.com/item?id=12794971 for examples of such talk, but it's rarely precise enough to file a bug :/


Right, that's my issue. People keep claiming something is slow, but when I ask for an actual example they almost never come up with one, so I can't profile and fix it... ;)


"But nowadays we browse the web on ... that have much more sophisticated processors, often with two, four or even more cores."

Having batched GPU rendering/rasterization makes real sense, yes. When it is shown, the browser is the largest screen-space consumer.

4K displays (300ppi) increased the number of pixels that need to be painted by 9 times. Thus CPU rendering/rasterization is not an option anymore, yes.

But the browser is not the only process competing for those cores.

2 or even 4 cores ... You have more applications running in the foreground than that these days. Some of them are invisible but still CPU intensive.

In order to get significant benefits from parallelism in browsers, the number of cores should be measured in tens at least, I think, if not more than that: enough to run things in parallel, like a bunch of Cassowary solvers for each BFC container.

I suspect that the main bottleneck at the moment is the existence of methods [1] that force synchronous layout/reflow of the content. These are the things that kill parallel execution. The DOM API should change toward batch updates or other more parallel-friendly means (see the sketch below).

[1] https://gist.github.com/paulirish/5d52fb081b3570c81e3a
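
To illustrate what "batch updates" could mean, here is a purely hypothetical Rust sketch (none of these names are a real DOM API): writes are queued and applied in one flush, so no individual write can force a synchronous reflow in the middle of a script.

    // Hypothetical batched-update facade over a DOM; nothing here is a real API.
    enum Mutation {
        SetAttribute { node_id: u64, name: String, value: String },
        AppendChild { parent_id: u64, tag: String },
    }

    #[derive(Default)]
    struct UpdateBatch {
        pending: Vec<Mutation>,
    }

    impl UpdateBatch {
        // Record the write; nothing is measured or laid out yet.
        fn set_attribute(&mut self, node_id: u64, name: &str, value: &str) {
            self.pending.push(Mutation::SetAttribute {
                node_id,
                name: name.to_string(),
                value: value.to_string(),
            });
        }

        fn append_child(&mut self, parent_id: u64, tag: &str) {
            self.pending.push(Mutation::AppendChild { parent_id, tag: tag.to_string() });
        }

        // One flush applies every queued mutation and then triggers a single
        // (potentially parallel) restyle/layout pass, instead of a forced
        // synchronous reflow after each individual write.
        fn flush(self) {
            println!("applying {} mutations, then one layout pass", self.pending.len());
        }
    }

    fn main() {
        let mut batch = UpdateBatch::default();
        batch.set_attribute(1, "class", "expanded");
        batch.append_child(1, "div");
        batch.flush();
    }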


> 2 or even 4 cores ... You have more front running applications than that these days. Some of them are invisible but still CPU intense.

Occasionally that's true, but users typically interact with just one app at a time. Even if background tasks are running and taking up CPU time (which is not the common case—if it were then games wouldn't work!) the foreground app should get priority in using the CPUs. So background apps aren't relevant here, any more than background apps are relevant in a single-core scenario.

> In order to get significant benefits from parallelism in browsers, the number of cores should be measured in tens at least, I think, if not more than that: enough to run things in parallel, like a bunch of Cassowary solvers for each BFC container.

First of all, that's not true, based on our measurements. We've seen large improvements with as few as two cores. Second, CSS parallelizes just as well if not better than Cassowary: unless text actually has to wrap around floats we can lay out every block in parallel. Even when there's a lot of text wrapping around floats, we can lay out all the floats in parallel.

> These are things that kill parallel execution.

Even if we had batch updating APIs, existing browser engines would be in no position to offload layout and restyling to a separate thread. The DOM and render tree are too intertwined in typical engines.


"we can lay out every block in parallel"

I wouldn't be so optimistic, especially with flexboxes and grids.

      display: grid;
      grid-template-columns: 1fr max-content 1fr minmax(min-content, 1fr);
The above requires intrinsic min/max width calculations (full layouts de facto) before you can even start doing layout of cells in parallel.

The same goes for our good old friend <table>, by the way, where each cell is a BFC.


Intrinsic inline-sizes have been handled just fine in Servo from day one with a parallel bottom-up traversal. This is well-known; the technique goes back to the Meyerovich paper.
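
Roughly, the idea looks like this (a minimal sketch, not Servo's actual code, assuming the rayon crate for work-stealing parallelism): a box's intrinsic min/max inline-sizes depend only on its descendants, so sibling subtrees can be measured concurrently before the parent combines the results.

    use rayon::prelude::*;

    struct Block {
        own_min: f32,          // min-content contribution of this box itself
        own_max: f32,          // max-content contribution of this box itself
        children: Vec<Block>,
    }

    struct IntrinsicSizes {
        min: f32,
        max: f32,
    }

    // Bottom-up traversal: measure all child subtrees in parallel, then combine.
    fn measure(block: &Block) -> IntrinsicSizes {
        let child_sizes: Vec<IntrinsicSizes> =
            block.children.par_iter().map(measure).collect();

        // Grossly simplified combination rule for a vertical stack of blocks:
        // the parent's intrinsic sizes are the maxima over its children and itself.
        let min = child_sizes.iter().map(|s| s.min).fold(block.own_min, f32::max);
        let max = child_sizes.iter().map(|s| s.max).fold(block.own_max, f32::max);
        IntrinsicSizes { min, max }
    }

    fn main() {
        let tree = Block {
            own_min: 40.0,
            own_max: 120.0,
            children: vec![
                Block { own_min: 60.0, own_max: 200.0, children: vec![] },
                Block { own_min: 30.0, own_max: 90.0, children: vec![] },
            ],
        };
        let sizes = measure(&tree);
        println!("min-content: {}, max-content: {}", sizes.min, sizes.max);
    }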


You can make a super fast web browser, but that doesn't solve the fundamental issue: the web is not designed for performant applications. Resource loading, JavaScript, rendering... Solve that first before you build a fast engine...

Of course a fast engine is good. But don't forget the root problems with the web.


Yes. The article speaks of "zero latency", but that is simply not achievable with a network connection that has latency. My guess is that in current browsers, on average 90% of the latency is in the network (as opposed to rendering). So even if the render step was perfect, you would only get a measly 10% performance gain.


    we’ll be rolling out the first stage of Electrolysis to
    100% of Firefox desktop users over the next few months.
From my experiments with it, this still does not fix the problem that the JavaScript in all windows shares one core. A script running in one browser window still slows down the other windows.

A problem that Chrome solved years ago. So I think this is not really a leap for the web, just Firefox catching up a bit.

Firefox is my main browser. The way I deal with it is that I start multiple instances as different users, so they run inside their own processes. This way I can have a resource-hungry page open in one window (for example, a dashboard with realtime data visualization) and still work smoothly in another.


> So I think this is not really a leap for the web. Just FireFox catching up a bit.

To be clear, Project Quantum is the next phase of architecture, post-Electrolysis. We're also simultaneously working on multiple content processes (which is how Chrome often avoids inter-window jank), but not under the Quantum umbrella.

We think we can do better though, which is where Quantum comes in. The Quantum DOM project is designed to solve exactly the problem you're describing, while using fewer resources and scaling to more tabs. Stay tuned!


This sounds like it directly conflicts with sandboxing tabs, either for security or stability. Am I missing something?


Mozilla engineer Bill McCloskey describes how Quantum DOM will use both multiple OS processes and cooperatively-scheduled, user-space threads to preempt and throttle iframes and background tabs:

https://billmccloskey.wordpress.com/2016/10/27/mozillas-quan...
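
As a toy illustration only (nothing like Quantum DOM's real implementation), the cooperative-scheduling-plus-throttling idea looks roughly like this: tasks from several tabs share one OS thread, and the scheduler simply declines to run background-tab tasks most of the time, so a background tab can be throttled without giving every tab its own process.

    // Toy cooperative scheduler; the labels and the throttling policy are made up.
    struct Task {
        label: &'static str,
        background: bool,
        work: Box<dyn FnMut()>,
    }

    fn run_cooperatively(mut tasks: Vec<Task>, rounds: usize) {
        for round in 0..rounds {
            for task in tasks.iter_mut() {
                // Throttle: background-tab tasks only get every fourth round.
                if task.background && round % 4 != 0 {
                    continue;
                }
                (task.work)();
                println!("round {}: ran {}", round, task.label);
            }
        }
    }

    fn main() {
        let tasks = vec![
            Task { label: "foreground tab timer", background: false, work: Box::new(|| {}) },
            Task { label: "background ad iframe timer", background: true, work: Box::new(|| {}) },
        ];
        run_cooperatively(tasks, 8);
    }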


> this still does not fix the problem that the javascript in all windows shares one core

Yes, that's what makes it the "first stage".

The "leap" part is not Electrolysis; as you note that's just table stakes. The "leap" part is what we can work on now that the Electrolysis first stage, which was very labor-intensive, is done.


Servo has a solution for that: it runs all origins in separate threads. (Note that this goes further than Chrome does; Servo runs cross origin iframes off the main thread too, while Chrome does not.)

Gecko is solving this too, with Electrolysis and Quantum DOM. But because the architectures are so different at the DOM level (direct bindings vs. an XPCOM/direct binding hybrid, tracing vs. reference counting, off main thread layout vs. main thread layout, Rust serialization vs. IPDL, etc.) the Servo solution doesn't directly apply to Gecko. So Gecko and Servo are working on their own solutions mostly independently, rather than sharing code as in most of Quantum.


Parts of Quantum are addressing this, and Electrolysis will probably grow to fix this as well. You can already turn on multiple content processes in Firefox to get Chrome-like behavior (at the cost of some rough edges; there's a reason it's not currently the default). I think the setting is dom.ipc.process-count in about:config. I have mine set at 10.


Yes, I heard about it. For now, I disabled Electrolysis completely again, because I had pages freeze sometimes when I had it on. Maybe because it is incompatible with some plugin I use. My current solution of running multiple instances as different users works best for me so far.


> A problem that Chrome has solved years ago.

The trade-off is the crazy amount of resources Chrome uses, even on a multicore machine. 20 processes spawned and shit like this can bring down any computer. Chrome's resource usage is excessive.


You can alleviate this problem by increasing dom.min_background_timeout_value.


I don't understand what the difference is between Quantum and Servo. To me it sounds like a new name for the same thing. I recall Servo being promoted this way three years ago.


Quantum is about new architectural advances within Gecko. These advances can be achieved by taking code from Servo (Quantum Style), ideas from Servo (Quantum DOM), or in general new ideas (not sure if any of the quantum projects are of this form).

Servo is still its own thing. The timelines are radically different: Quantum is something you should be able to directly benefit from a year from now; Servo is something you should be able to directly benefit from (as a product) in the far future. You of course may indirectly benefit from Servo when its advances are applied to other browser engines (which is what Quantum is).


Servo is a project to build a new rendering engine in Rust, taking advantage of its security properties and ease of doing parallelism to do things that other rendering engines are not willing or are not able to do in terms of parallel execution.

Quantum is a project to significantly change some of the core parts of Gecko. Some of this will involve sharing some code with Servo, some will involve changing the Gecko code directly.

There is overlap between the two in the form of some shared components, which may be causing your confusion.


Pet peeve of mine: a "quantum leap" is literally the smallest change of state that is physically possible, but it's come to mean the opposite in popular use.


Time to shelve that pet peeve, because any physicist should be able to tell you that this claim is one of those hilarious "people keep pretending this is true" when it has nothing to do with what a quantum leap actually is.

In physics, a "quantum leap" is a synonym for atomic electron state changes. These don't take place as a continuous gradient change, they jump from one state to the next, releasing or absorbing photons as they do so: the changes are quantized, and any change from one state to another termed a quantum leap. And it really is "any change": they're all quantum leaps.

There is also no stipulation on direction; state changes go both up and down. So while there certainly is a lower bound to the value that we will find in the set of all possible quantum leaps in atomic physics (because you can't go "half distances"; the whole point of quanta is that it's all or nothing), the idea that a quantum leap is "one value" is patent nonsense; different state transitions have different values, can be anywhere between tiny and huge jumps, and can release massively energetic photons in the process.

And if the term has been given a new meaning in everyday common language, then cool. That's how language works. It doesn't need to be correct with respect to its origin, people just need to all agree on what the terms used mean. In every day language, understanding that "a quantum leap" means "a fundamental change in how we do things" (which is implied always a big deal, even if the change itself is small) demonstrates a basic understanding of conversational English. So that's cool too.


I am both trained in physics and working on Project Quantum, and right now I am so annoyed by reading all the replies to the blog post on various forums going "quantum means tiny y'all played yourself". -_-


In physics, a "quantum leap" is a synonym for atomic electron state changes.

Right. The smallest such change physically possible, which is exactly what I said.


Again with the pretending there is only one possible change here: a state change in high orbit excitation levels is nowhere near "the smallest such changes", yet we still call them quantum leaps because that's what they are. The distances and energies involved are HUGE at the atomic scale, which is the scale we must necessarily stay grounded in when discussing what a quantum leap "really" means. (where "really" is a misnomer because there is nothing that requires a scientifically precise term to be reflected in common everyday language. The only important part is that you don't use the meaning of the latter in the context of the first)


It's a barrier that can't be smoothly crossed, so it implies revolution over evolution.

I think you're accidentally mixing in some Planck with your quantum.


This was refuted by a physicist elsewhere in this thread: https://news.ycombinator.com/item?id=12807384


Doesn't Chrome/Chromium already run as multiple processes?


I believe it is "only" one process per tab on Chrome. This sounds like having multiple processes per tab (or at least, active tab), splitting up different kinds of work to "specialist" processes.

Clarification / refinement, anybody?


Chrome is one process per tab (well, per origin, because same-origin JS), and within a tab is mostly single threaded (at least, this is my understanding).

Servo has a one process per tab mode (off by default), but regardless of that within a single tab both styling and layout can be parallelized across many threads, and rendering makes full use of the GPU.

This means that the active tab will be much faster. You don't care about rendering/layout speed for inactive tabs, and usually not styling speed either (restyles usually get triggered by interaction, though if you have a CSS game or something in a background tab that might matter). So process-per-tab doesn't make use of the resources it could, unless you're parallelizing within the tab. I suspect most background tab processes will be asleep or doing much less work; so while it helps for isolation (security, and less jank caused by one badly performing tab), it doesn't help as much for response time in a single tab.


Chrome is working on isolating at origin boundaries, not really paying attention to it myself though.


There is this concept called "threads" which can be found in most operating systems.


Process, thread, task, whatever.

(I so remember the first time I was reading about threads in OS/2 and Novell back in 1989 or 90 - no risk not worth taking to squeeze a little bit more out of 10 MHz CPUs with OS's with crummy process management, I guess)


I wish I could see how much processing power/memory was being taken by each tab.


It's about damn time!


Looking forward to this!


Sorry, but posting this on Medium is a quantum leap backwards. That too, from Mozilla.


Heh. It's interesting to me that in the vernacular a quantum leap is a large change while in physics it's a very quick, extremely tiny event.


No! This meme is just wrong (and it does grind my gears to an unreasonable amount as a physicist). In physics quantum means discrete, it does not mean small. Hence the idiom "quantum leap" used as "a discontinuous abrupt jump in quality" is perfectly in sync with the way physicists use the phrase.


I'm not saying it means small, just that quantum leaps are nanoscale. And also, the state after the leap isn't better or worse than the state before, just different.


I still disagree. Quantum does not imply nanoscale - we are building bigger and bigger systems that exhibit inherently quantum behavior (for instance superconducting cavities used in quantum computing research are centimeter scale). Moreover, the jump is not necessarily in a spatial variable - it can be a very big jump in energy levels for instance, which is the case with laser emissions.

Sure, a (big or small) discrete jump does not necessarily imply a jump to something better, but that has little to do with my complaint about the meme that "quantum implies small".


I stand corrected. Thank you.


No, in both it's a discrete change. It just happens to be that in physics most discrete changes are at a small scale.


Maybe it's a clue that Mozillians are secretly building a handlink:

http://quantumleap.wikia.com/wiki/Handlink


"Although changes of quantum state occur on the sub-microscopic level, in popular discourse, the term "quantum leap" refers to a large increase.[5]"


Isn't that more or less what I said?



