Firefox 48 Beta, Release, and E10S (asadotzler.com)
242 points by davezatch on June 7, 2016 | 184 comments



Notably, it'll remain disabled by default if you're using extensions (pretty much everybody here who is running Firefox?): The groups that will have to wait a bit for E10S account for about half of our release users and include Windows XP users, users with screen readers, RTL users, and the largest group, extension users.

You can manually enable it, though[0]: However, if you want to permanently opt-in, you can do so with a simple pref change. Just go to about:config and toggle browser.tabs.remote.autostart to true. On your next restart, e10s should be active. To verify that it is active, go to about:support and look for a number higher than 0 in "Multiprocess Windows".
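For reference, the same pref in user.js form (a minimal sketch; the pref name is the one quoted above, and user.js lives in your Firefox profile directory):

    // user.js -- same effect as flipping the pref in about:config
    user_pref("browser.tabs.remote.autostart", true);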

I didn't realize they had this kind of fine-grained control over A/B-testing in their Firefox beta population, as Asa puts it: "We have all the knobs." I understand where they are coming from, but it feels a bit weird to have a piece of software running on my computer checking in with a remote server to enable/disable functionality.

[0] https://wiki.mozilla.org/Electrolysis


The way I understood it, he was talking about the release channel population, not just beta.

Here’s what that looks like. When we launch Firefox 48, approximately 1% of eligible Firefox users will get updated to E10S immediately. The 1% of release users should get us up to a population similar to what we have in Beta so we’ll be able to compare the two. About ten days after launch, we’ll get another round of feedback and analysis related to the release users with and without E10S. Assuming all is well, we’ll turn the knobs so that the rest of the eligible Firefox users get updated to E10S over the following weeks. If we run into issues, we can slow the roll-out, pause it, or even disable E10S for those who got it. We have all the knobs.


Any reason why it won't be enabled for people using extensions? This makes me worry that there's some significant incompatibility with extensions, and that when they do enable it for everyone, it'll break some of the extensions I use. I suspect that splitting the UI and content into multiple processes would make certain things very difficult.


It is exactly that. Some extensions (it used to be "many extensions" before E10S was enabled on the Nightly and Aurora channels) assume that they can call back and forth between page content and browser chrome, which is easy when they're all JS objects in the same memory space, but which doesn't work nearly as well between processes.
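As a rough illustration of the shift (a minimal sketch, not any particular add-on's real code; the message names and chrome:// URL are made up): instead of touching the page directly from chrome code, an e10s-aware extension loads a frame script into the content process and talks to it asynchronously through the message manager.

    // Chrome (UI process) side -- hypothetical add-on code
    let mm = gBrowser.selectedBrowser.messageManager;
    mm.loadFrameScript("chrome://myaddon/content/frame-script.js", false);
    mm.addMessageListener("MyAddon:Title", msg => console.log(msg.data.title));
    mm.sendAsyncMessage("MyAddon:GetTitle");

    // frame-script.js, running in the content process
    addMessageListener("MyAddon:GetTitle", () => {
      // only this process may touch the page; "content" is the page's window here
      sendAsyncMessage("MyAddon:Title", { title: content.document.title });
    });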

It might be worth downloading Firefox Developer Edition (which uses a separate profile) and installing your usual suite of extensions to check they all still work.

The only extension I personally know of that is weird under E10S is "It's All Text", where the edit button doesn't appear on textareas until you switch to a different tab and back. Other extensions like uBlock Origin, Tree Style Tabs, Open In Browser, HTTPS Everywhere, Document Font Toggle all work fine.


A big one that's problematic is LastPass. It can hang the browser for a long time, 10+ seconds or so. Very annoying.


Please try out the development version of lastpass, as most of the e10s compatibility work has gone into that:

https://addons.mozilla.org/en-us/firefox/addon/lastpass-pass...


Is there an e10s_okay extension or should I just run ff dev ed. and see what happens?

I'm running:

    Lastpass
    uBlock
    Zotero
I have others but I'd ditch them. I'm not keen at all on giving up the three above.


> "It's All Text"

I hope that gets fixed. That's one of the most useful extensions I have installed, and about the first thing I install when starting afresh in Firefox.


How about vimperator? Vimfx is OK as a stop-gap but that's one extension I wouldn't want to lose permanently.


Here is the Firefox bug report for Vimperator not working with e10s:

https://bugzilla.mozilla.org/show_bug.cgi?id=1100918

The Vimperator developers are working on it here:

https://github.com/vimperator/vimperator-labs/issues/211


Nope, vimperator doesn't work under e10s at the moment. See the tracking issue here:

https://github.com/vimperator/vimperator-labs/issues/211


Extensions that access stuff that "lives" in the content process can be rewritten or shimmed; but this hasn't been finished yet for all extensions. There's a list of extension status here: https://www.arewee10syet.com/


You can check the compatibility of popular add-ons here: https://www.arewee10syet.com/


They've said that for a long time. It's why they are moving to WebExtensions.

http://www.tomshardware.com/news/firefox-improves-security-n...


Evidently there are still a few lingering problems with NoScript (which appears to be the most popular extension not currently marked as compatible, according to https://www.arewee10syet.com/). That one's a big deal for me, so I may hold off on enabling electrolysis for now.


That's why WebExtensions is coming out.


That's the point of beta software. They're making it public, but it's still a release whose purpose is to investigate problems with the code before an official release.

I wonder if they're isolating E10S to stock Firefox as a means to avoid too many interactions while debugging, or for some other reason. I don't even wanna ask why they're still supporting Windows XP...


Because in many countries, XP is the most used OS, and Firefox is the only well supported browser on it. Hence it's a significant market share they can get without any competition fighting for it.


About 15% of Firefox users run XP, a little more than Windows 10 or OS X. Even if Mozilla dropped support for XP, I doubt those users would switch to some other unsupported browser. Firefox wouldn't lose those users (in Statcounter metrics), but they would be stuck on the same Firefox version forever.


Firefox supporting XP let us switch our servers to SNI-only, as we still have clients with users on XP. (IE on XP does not support SNI.)


Is E10S going to replace the current model? I mean - is the plan to drop the choice when E10S is stable/works for all known scenarios?

I'm asking because I might have a bit of a problem with my tab addiction (my phone regularly reaches 60+, my laptop or desktop probably sit around 2-3 times that for weeks at a time) and I don't think that'll translate well into a process-per-tab model.

Yes, I know about bookmarks, read-later tools and that this isn't the healthiest habit.


FF doesn't use process per tab. It will use one process for chrome and one for all the content rendering for all tabs.


It will in the future from what I understand. You can enable it right now in nightly.


You can set a certain max number of processes. I don't think there's a way to explicitly do process-per-tab or process-per-domain.


There isn't. It is planned to add an algorithm for grouping tabs into processes and dynamically adding/removing content processes, but that's low priority right now, because the current focus is on getting single-content-process e10s ready for the release channel.

More info in the e10s-multi bug and its dependencies:

https://bugzilla.mozilla.org/showdependencytree.cgi?id=e10s-...

(also a nice way to look for known issues that might affect you if you're interested in trying out e10s-multi)


What benefits does that provide over 1 process per tab?


Edit: I misread your comment. As RussianCow mentioned, it's to keep resource usage under control.

The main advantage is that browser chrome is still responsive during rendering. You can switch tabs, open menus, resize windows etc. Also, rendering is a little faster since you don't have to interrupt it constantly to keep the UI responsive. And if the content process dies, all your tabs are still "open" and you can reload the content.


I think the parent was asking about the advantage over Chrome's model of having one process per tab, not over the current single-process model.


Chrome doesn't have a model of one process per tab, fwiw.

It has a model of one process per tab until you have enough tabs, then tabs start sharing processes based on various heuristics.


Shot in the dark here, but Chrome was developed back when it was common to have a single-core CPU without any hyper-threading. An easy way to balance load in this scenario is to just make a bunch of processes and have the OS do all the heavy lifting with regard to scheduling, as everything has to share that one hardware thread. You also get the benefit of bad processes dying without taking down the entire browser. At the time Chrome was coming around, browser stability was a larger issue than it is today.

Now that everyone has a multi-core cpu, it makes more sense to not really worry too much about scheduling. The OS will just move stuff off a busy core and onto a less busy core dynamically. So why not just separate the UI front end from the rendering and call it a day? Why bother with so many processes when you have cpu cores to spare? The UI and renderer will run in their own cores when they get too cpu hungry. Also, if the renderer crashes, no biggie, the UI just re-renders everything.


But if you have a tab with heavy JS, that's going to slow down all the other content as well, since they must share a CPU. If each tab got its own process, the heavy one could get its own core. The same goes for crashes: with a shared content process, one crashing tab takes all the others down.


Maybe they can't do multi-process because of legacy code concerns.

Also, you gain memory savings as you aren't spawning all those processes which have redundant libraries and such loaded.


I've read many times that Chrome is a heavy resource consumer due to their one-process-per-tab architecture. Maybe others can say more ...


I've heard the 1:1 mapping was only true at first, they tried to merge processes by domain IIRC.


Presumably, you don't have the overhead of running a separate process for each tab, which adds up quickly when you have 50+ tabs open.


Significantly lower memory usage.


None, really. Process per tab is the end goal - this is just a step along the way.


When they first announced this model I ended up in the group that had this "feature" enabled. It was completely useless. I was almost ready to switch to Chrome because I could barely browse for a few minutes before it crashed due to excessive CPU and memory usage. And then I got another update which reverted back to the old single-process architecture.

If it's as messed up as it was back then, it looks like I'll have to switch browsers. I have a similar usage pattern to yours, with the added disadvantage that I group my tabs in windows, and every window seems to carry a big overhead.

Btw, I'm not "memory constrained", I always have 2 to 5 GB of free memory, but once the Firefox process is over 1.5 GB then trouble begins; at 2 GB it's about to crash, and over that it can sometimes survive as large as 2.5 GB, but I've never seen it use more -- it always crashes before reaching 3 GB.


It has improved dramatically since then. I'm not saying it's perfect, but I think you should give it another shot instead of complaining about how bad it used to be.


I'm still using Firefox, I just dread the day this feature comes back. It is now disabled in my settings; I just wish they'd make it optional instead of mandatory, because for my use case it brings serious stability issues.


I remember that phase, it was below-alpha quality. It's now okay, nothing to fear, only some occasional hiccups like asynchronous errors (you click somewhere at time T, and the click action is triggered on the element that was there at time T+x).


You can't use beta software and complain about stability issues.


What? You were going to switch browsers because you were having problems using a beta or nightly version? I have an idea... Use the release version!


Release has even worse memory issues, I had to go beta to get memory usage down and prevent crashes in the first place.


Memory issues? I have never had any memory issues with Firefox, and it's been almost crash free now for the last six releases.

It's 2016 now so maybe it's time to upgrade your box and its 4 GB of RAM.

EDIT: How is Firefox eating up to 3 GB of RAM on your machine? I've never seen mine go over 1.5 GB, and that's been after a week of not closing it (64-bit version/no Flash installed).


I've easily gotten it up to 4GB. Five windows, several tens of tabs per window. Webapps like GMail (2 instances of it), GCal (also 2), Facebook, Tweetdeck, Slack, etc. open all the time eat a ton of RAM, and just various complex websites/apps sitting open over time will often leak a bit.

I just restarted FF an hour or so ago, and haven't even had it reload most of the tabs that used to be open (FF restored them, they're just not loaded yet as I haven't tabbed to them), and I'm already at 1.5GB resident with probably 15 active/loaded tabs.


It doesn't sound like you are doing anything much different from me, except I don't use Facebook. I normally keep my browser open for weeks at a time and never break 2 GB unless I hit some crap website. Addons?


Could be. I haven't tried disabling any of them to see if they're causing high mem usage. Good next step, thanks.


200 tabs split across 12 windows, and each window seems to carry a big overhead. Btw, I said "2 to 5 GB of free RAM", so that means I have more than 4 GB total... pay a bit more attention before being pedantic.


I read your comments just fine, but you weren't making sense; now you've explained it. 200 tabs in 12 windows? Yeah, nobody cares that your browser is crashing now. You can switch to Chrome and maybe it won't crash as much, but it will use more memory.

Personally, I would reevaluate how you use your browser.


> Btw, I'm not "memory constrained", I always have 2 to 5 GB of free memory, but once the Firefox process is over 1.5 GB then trouble begins; at 2 GB it's about to crash, and over that it can sometimes survive as large as 2.5 GB, but I've never seen it use more -- it always crashes before reaching 3 GB.

Was this for a 64-bit build of Firefox?


Yes, 64-bit beta. Took a bit of digging to find it, but it was the first thing I checked once I began seeing these crashes. The only thing that fixed it was when it went back to the single-process architecture.



People blame Chrome for being a memory hog and it's probably exactly for this reason. I fully expect memory usage to go up for the E10S release as well... As a daily FF user both at work and at home, I'm not sure if that's a good thing.


Most people still claim Firefox is a memory hog, but from my observations Chrome uses a lot more due to its one-process-per-tab model compared to Firefox's single process for many tabs. It might not be significant for a small number of tabs, but as someone who frequently has 50 tabs open (I know, I have a problem), I'm concerned about how much this could increase things.

That said, memory usage isn't my biggest concern, I generally have no shortage of memory, and don't particularly care if my browser is using 1GB of RAM. If this improves stability and performance (which seems to have gotten much worse recently on OS X in particular), then it's a fair trade off.


> but as someone who frequently has 50 tabs open (I know, I have a problem)

How do people get work done WITHOUT having 50 tabs open? It's a pretty regular occurrence for me, between having MSDN, the PostgreSQL handbook, documentation for a half dozen libraries, my "read later" list, etc., there's almost never a day I am not drowning in tabs. Thankfully, the Tree-Style Tabs extension saves the day here!


I don't know why people do that. Having 50 of them open does not increase your brain's processing capacity; you probably still work with like 10 at maximum. My day typically consists of 6-7 pinned tabs and 10 dynamic ones tops. As a rule, if the tabs reach the end of a 1920x1080 screen, it's time to close something :) No tree-style required.


I am normally flipping between groups of them over the course of 15 minutes. I may not touch them all within the same window of time, but they all have a purpose. Beyond that, the tree helps me keep track of my thought process, as I typically open most links in a new tab, so I can see the hierarchy I am working with.


I'm not sure if Chrome has an option to disable it, but the thing I dislike the most about Chrome is when you open a past session and it reloads every. single. tab. Firefox is smart: it only reloads the page when you click it. This saves plenty of memory, but Chrome... oh boy.


Chromium/chrome has an extension that helps with tab-mania that works precisely because it's process-per-tab: Tabsuspender https://chrome.google.com/webstore/detail/the-great-suspende...



Memory usage is being closely watched as part of the e10s transition. Of course, e10s is not a free lunch, but according to early claims [1] it's 20% more than non-e10s Firefox and half the usage of Chrome's multi-process implementation.

[1]: http://www.erahm.org/2016/02/12/are-they-slim-yet/


I've been using nightly builds for about two years and can't remember when exactly I enabled E10S. When I enabled it, I noticed around 20 MB more memory usage. So imho this is a good thing: not that much memory for a lot more safety, and if Firefox crashes for some reason, all I have to do is click the "reload tabs" button.


So how is it? Does it feel better? Faster? Slower? Where? When?


It mostly feels smoother, i.e. less laggy. To me, that's most noticeable when scrolling, which comes from the added implementation of Asynchronous Panning and Zooming (APZ).

Basically, beforehand when you turned your scroll wheel, Firefox first sent a scroll event to the webpage, then the webpage could execute whatever it needed to execute on scroll, and then Firefox actually scrolled the page. With APZ, that's now handled in parallel, i.e. it already scrolls what's there and alters it at the same time with whatever the webpage wants to do on scroll.

This also applies for zooming the page, as you might have guessed from the name, and as far as I can tell also to resizing the window.
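A contrived sketch of the kind of page script that used to make scrolling feel laggy (hypothetical example, not taken from any real site): before APZ, Firefox had to wait for this handler before moving the page; with APZ, the compositor scrolls immediately and the handler's effects catch up afterwards.

    // Hypothetical page script with an expensive scroll handler
    window.addEventListener("scroll", () => {
      for (let i = 0; i < 5e6; i++) {}  // simulate heavy per-scroll work
      document.body.classList.toggle("scrolled", window.scrollY > 100);
    });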

Other than that, all animations in the browser UI are a lot smoother, too. For example, I can now just hold down Ctrl+T (which opens new tabs) and it happily chugs through playing the tab-animation hundreds of times with seemingly the only limiting factor being my screen's refresh rate. (And that's on a below-average laptop.)


I've used the developer channel and regular Firefox side by side the entire time and haven't noticed a difference to be honest. One extension I use (rikaichan) froze in e10s, but was fixed.


More performant in my experience - less lag and beachballing. Mac running El Capitan.


Isn't Chrome multi-process per-tab? Unless I'm misunderstanding, Firefox's e10s will create just 2 processes: UI / content.


The goal must be a single controller ("UI") process and multiple content processes for each tab, otherwise they'll be missing out on the advantages of a multiprocess model, namely, stability and isolation.

They're first going to do two processes, probably for debugging without too many moving parts, then multiprocess. See https://wiki.mozilla.org/Electrolysis#Schedule_and_Status and https://wiki.mozilla.org/Electrolysis/Multiple_content_proce....


It's not supported, but you can go into about:config and change dom.ipc.processCount to a number higher than one. I haven't had any trouble running at the number of hardware threads, minus one for the UI.
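For example (a sketch; dom.ipc.processCount is the pref named above, and 7 is just what "hardware threads minus one" gives on an 8-thread machine):

    // user.js or about:config -- unsupported, experiment at your own risk
    user_pref("dom.ipc.processCount", 7);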


Ok, now that I've read all the problems it could cause ( https://bugzilla.mozilla.org/showdependencytree.cgi?id=e10s-... ), I feel bad about recommending it.


> The goal must be [...] multiple content processes for each tab

According to the wiki page you linked, it doesn't seem to be the goal for them:

"The (practically) unlimited process count is on some platforms not a realistic goal, and having a separate process for each open site instances or even sites does not seem like a reachable goal (for now at least). Which means using content processes as a security membrane cannot be a goal either."

If this is true, I don't see any advantage to introducing e10s.


They're going to find out that managing processes by themselves is going to be very difficult (similar to M:N threading), and opt for a Chrome-like model where each tab is its own process.


That was the original Chrome model but they switched to M:N.


I don't think going full process-per-tab is the answer either. I think some kind of system that adds additional processes for heavy tabs while keeping as many as possible in a single content process would be a good middle ground.


I thought the plan was for Firefox to use the same process-per-tab model as Chrome, but this post does seem to imply that it's just 2 - "we’re using project Electrolysis to split Firefox into a UI process and a content process."

They also go on to say that "After that, we’ll be working on support for multiple content processes.", so I assume the plan is to have one process per tab eventually. I'm surprised they aren't there already though, considering this has been in the works for a few years now.


There was a period where electrolysis didn't receive much work, and then there was a second push to get it working. That's why it seems to be taking so long.

As for process-per-tab, I can't find anything official about it on the bug tracker. The problem with that option is it leads to quite high memory usage, and hides the existence of memory leaks (chrome really likes to eat my ram). We'll have to see how memory usage goes as they experiment with multiple content processes.


It is starting with just two for now, but will be increased after more optimization. There's a setting you can modify in about:config that will allow you to manually set the number of processes if you want to experiment with it.

source: https://www.reddit.com/r/firefox/comments/4hmii4/firefoxs_ap...


At least Chrome is relatively fast. The only reason I use Firefox is because of a couple of plugins, but otherwise I would dump Firefox and switch to something else (and the plugins barely affect the speed). I really don't know what has been happening to Firefox in the past few months/couple of years, but that's not the browser I remember.


Try disabling all of your extensions or using the “Refresh Firefox” command in about:support. These days I rarely notice a difference between the two except on extremely JavaScript-heavy sites – things like asm.js demos, not Gmail – when E10S isn't enabled and so the UI blocks.


Try about:config and change browser.sessionhistory.max_total_viewers to 5.

It is a breeze. http://kb.mozillazine.org/Browser.sessionhistory.max_total_v...
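In user.js form, that's (same pref, set once in your profile):

    // Caps how many pages Firefox keeps fully cached in memory for fast back/forward
    user_pref("browser.sessionhistory.max_total_viewers", 5);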


Process isolation for tabs could reduce the impact of memory leaks though, right?


Probably not. I have had Chrome tabs go to 10GB of RAM before.


RAM is cheap. Getting hacked isn't (especially when you consider ransomware these days). That's the reason Chrome is the most secure browser around, and Firefox isn't even invited to Pwn2Own anymore.

https://it.slashdot.org/story/16/02/12/034206/pwn2own-2016-w...


From further down the same page, one comment says that CVE counts for FF are better than Safari or Edge, though Chrome is still by far the best.

https://it.slashdot.org/comments.pl?sid=8737473&cid=51495357


That's nonsense. They did not target Firefox because it has not made any major (architectural) changes with regard to security since they last targeted it.

No major changes means that there will also likely not be any new security vulnerabilities and therefore they would almost certainly not find anything new.

This does not mean that Firefox is less secure, it just means that Mozilla has not innovated much. And if you look at actual statistics, you'll see that Firefox is not behind at all.

You could maybe make a separate case that this will lead to bad security in the future, as some innovation is inevitably going to be necessary, but with add-on signing, E10s, WebExtensions and Servo all up and coming, you'll have a hard time actually defending that case.


Great, so we have no open browser left anymore?

Chrome is at best visible source – Google controls development, not the community – and the others are even fully proprietary

Great fucking world.


How about making Firefox more secure (one of the goals of E10s)?


That can only be done if Firefox has users.

Which is hard to do when Google competes unfairly, even illegally – which they did, for example, by using AdSense to run Chrome ads everywhere without paying website owners the normal rates for displaying them, or by adding fraudulent "Your browser (Firefox) is outdated, upgrade to Chrome now" banners to Google Search.


Can someone answer why this is wrong instead of just downvoting?


Well, it's wrong because it's not true that it can only be done if Firefox has users. You don't need to care what 99% of the world's population is doing. As long as Firefox exists, you can use it.

You could say that without many users there won't be enough real-world exposure, but at the same time, Firefox will also be targeted far less, so you wouldn't have to worry about security vulnerabilities as much.

And yes, if at some point Chrome becomes Internet Explorer 2.0 and webpages target nothing else anymore, that could make Firefox hardly usable, but that's an entirely different nightmare and no longer relevant to the current situation.


But it isn’t – the number of open source devs working on FF depends on how many people use FF, and the number of full-time devs depends on funding, which depends on the amounts Yahoo or Google pay, which depend on how many users Firefox has.

With no devs, you can’t implement those features or the security easily.

And especially for security you’ll want fulltime employees.

Also, you’ll have to pay more than Google pays their Chrome devs to ensure the people will keep working for you.


We built Firefox 1.0 and shipped it to the world with zero revenue. Open Source is funny like that. Some people work on it because they love it, not because they're being paid.


Indeed, and one can surely maintain Firefox 1 or 2 easily with a bunch of volunteers.

But maintaining a whole OS with a whole virtualization layer, a whole sandbox system, several supported scripting systems, its own scheduler, its own graphics stack, and support for compatibility with any bug a competing system has?

Modern browsers are reaching complexities we’ve only seen in whole operating systems before. You don’t see a bunch of volunteers maintain everything from Linux to KDE at once; it’s hundreds of thousands of people working in a very different way.

To be able to compete against Google, be able to force them to implement features you pioneer, and force them to avoid implementing other features, to be able to keep up with them and reach better security than them, the current Firefox team is not enough.


> Chrome is at best visible source – Google controls development, not the community

Chromium is open source, Chrome is open core. Chromium may also follow a cathedral (rather than bazaar) development model, but that's only distantly related to open source or not (a bazaar is facilitated by open source, but not required for it).


Eh, it’s not really related.

Google even openly discourages forks, or any contribution that doesn’t fit their ideals.

It’s only truly open if development is controlled by a democratic community, or if it’s easily possible to be forked and competed with.

Which isn’t the case.

And for users open development is a lot more important than open source: Being able to get their own changes into the browser, ensuring the browser is made by the users, for the users.


Google even openly discourages forks...

Google may discourage forks, but there's nothing they can do to prevent one, and forking Chromium is no more difficult than forking any other project its size.

... or any contribution that doesn’t fit their ideals.

That's one of an OSS project maintainer's more important jobs. I used to maintain some decent-sized open source projects, and part of my job was to say "no" when someone wanted to add something I didn't think belonged (for whatever reason).


Forking Chrome is useless; Google can just put a bunch more engineers to work on it, add some features to Chrome, and extinguish your project again.

Forking Chrome would be like fighting against Embrace, Extend, Extinguish after you’ve already lost (which we have with Chrome, compared to what KHTML used to be like)


That's true of any large open source project, whether heavily backed by a company or not. You need both users and developers to make a fork successful.

Regardless, though, there's nothing that says the hypothetical developers of a Chromium fork couldn't devote some time to merging reasonable fixes & new features from upstream.


The main issue I find with E10s (nightly user here) is that when a heavy site is loading in one tab, the other open tabs will often show the dreaded "swirl arrow reload" animation until that site is loaded.


FWIW, I use multiple processes (the number of cores * 2 + 1, it's a magical number I grabbed from somewhere), and get that far less frequently.


Is that number of physical cores? Or would you use 8 with an i7?


I guess I use the full number of logical cores


My biggest issue with FF is Firebug - it just can't compete with Chrome dev tools. On the other hand, FF has one huge advantage - Pentadactyl.


I haven't used Firebug in decades, the native tools are more than enough (and way better presented than Chrome imo). If it's not enough there is Firefox Developer Edition[1].

But development aside: how can you use a browser that does not allow tabs to be displayed on the side? Tree Style Tab[2] on Firefox makes it the best browser for me.

[1]: https://www.mozilla.org/en-US/firefox/developer/ [2]: https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...


Firebug or the Firefox Developer Tools? The built-in DevTools have been pretty great for a while now.


Does Firebug have anything over the built-in FF dev tools at this point?


The built-in dev tools have improved drastically in the latest versions of Firefox and that's also why [Firebug.next](https://getfirebug.com/downloads) (aka Firebug 3) isn't a standalone panel any more, but is integrated with and built upon the native dev tools.


This is not important, but how did they arrive at "E10S" when they went about shortening "electrolysis"? Anybody know?


It's the number of letters between the first and last letter of the word. See also: i18n, l10n.

Origin:

"A DEC employee named Jan Scherpenhuizen was given an email account of S12n by a system administrator, since his name was too long to be an account name. This approach to abbreviating long names was intended to be humorous and became generalized at DEC. The convention was applied to "internationalization" at DEC which was using the numeronym by 1985."

http://www.i18nguy.com/origini18n.html


It's similar to the naming convention used by other long, hard-to-spell words for projects. First letter of word, number of letters in the middle, last letter of word.

e10s = electrolysis

i18n = internationalization

l10n = localization

k8s = kubernetes
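A throwaway sketch of the rule (a hypothetical helper, nothing official):

    // first letter + count of the letters in between + last letter
    function numeronym(word) {
      return word[0] + (word.length - 2) + word[word.length - 1];
    }
    numeronym("electrolysis");          // "e10s"
    numeronym("internationalization");  // "i18n"
    numeronym("kubernetes");            // "k8s"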


Interesting. I really dislike that though, as I'd prefer to just learn how to spell the word instead; the abbreviation probably involves even more mental acrobatics. I can see the usage for a product codename, to differentiate electrolysis from codename:e10s.


Could someone running e10s load up newegg.com and see if it freezes the browser for about 30 seconds? Separating processes is supposed to mean that chrome stays responsive, but that hasn't been my experience with it.


Loaded 2,875KB over 107 requests in 168 seconds and no visible freezing on newegg.com with e10s enabled.


So just me then :( I've tried it on two machines, with and without extensions installed. On my poor laptop it locked up for over a minute.


R/Firefox is doing a performance AMA today, asking/helping people to run a profiler extension while hitting flows that cause sluggishness. You should stop by and see if they can help you out and get a bug report started.


> 168 seconds

wtf?! It fully loads in around 4 seconds for me (Chrome stable x64, Windows 10)


The load time has nothing to do with Firefox or my operating system, it's a direct effect of bandwidth (mine is obviously limited). It's a content-heavy page with a huge number of requests.


Loads here in Firefox (not e10s, with about 100 tabs) in 2.962 seconds -- 232K loaded, with 70 images. No delays or freezes.

I'm also running uBlock Origin and Ghostery, which is blocking 7 trackers....


I'm trying Firefox about once a year because I want to like it for being independent and privacy minded. But every time I find myself going back to other browsers for seemingly minor reasons.

The most important one is that Firefox opens new windows a few hundred milliseconds too slowly (on Mac OS X). Just a little bit too slowly for me to just hit Cmd+N and blindly start typing a URL. I have to stop and pay attention to when the window is ready, hundreds of times a day.

Another annoyance, a much less important one, is that Firefox wastes a lot of screen real estate at the top.


You can configure Firefox to not show most toolbars, leaving only the URL bar with a few buttons on the same line. Make sure to click the option for small back/forward buttons.


I know, but that still uses twice as much vertical space as a tabless Safari window. It's not a huge problem though.



Thanks, but that makes my main problem with slowly opening windows even worse.


You open new windows hundreds of times a day?


I don't know about him, but I do. And this latency is indeed annoying. I'm still on FF out of sheer belief in FOSS in general and Mozilla in particular, but right now every single action I've tried is just faster on Chrome.


Yes


Can you explain why? I'm genuinely curious.


Windows as tab groups. Some of those are quite persistent, but many are very transient and don't live longer than a dozen minutes.

There are OSes with potent window managers. Ideally tabs should even be part of the WM itself† and not be reinvented across each app††, or disappear altogether (e.g. Chrome on Android, Exposé/Mission Control per-app window grouping)

† possibly even allowing tabs to be a mix of apps

†† features such as pinned tabs, the defunct Firefox tab groups/overview, Vivaldi are especially egregious examples.


The primary reason is that I search for everything instead of navigating to it on individual websites. That includes function signatures and other API stuff I need all day long.

And I don't keep windows or tabs open because they are hard to find again and start to pile up. So I open and close them a lot.

I don't use the bookmarks menu directly. I just open a new window and start typing. Either the page I'm looking for is in the auto complete or I hit enter and it goes to the search engine.

To sum it up, I have trouble remembering hierarchies and various navigation and UX ideas that web designers come up with. I have trouble visually grasping what's on a page (it's a kind of disability I always had). So instead I search.

In fact, that's also one of the great things about Firefox. Typing in its location bar lets me find other open windows and tabs. I love that.


> Typing in its location bar lets me find other open windows and tabs. I love that.

Prefix with % and it will return only that kind of result.


Because it's an easier workflow.

Open a new tab, type the address (or click the thumbnail) and go.

If you do it in the current tab, you are always in danger of erasing your current tab by mistake by mistyping the open-in-new-tab shortcut.

Another thing is that if you have your hand on the mouse, it's faster to click + than to type said shortcut.

What's more, FF can't hold as many open tabs as I'd wish without slowing down. So I have to clean up regularly, and so I reopen tabs I closed 10 minutes ago.


There's a plugin called TabsMixPlus - https://addons.mozilla.org/en-Gb/firefox/addon/tab-mix-plus/

This allows you to ensure that you open new tabs when you want/customises tab behaviour.


I used it a lot years ago, but not so much anymore.


>screen real estate

That's one of my biggest complaints about FF; using Chrome somehow feels cleaner because of this.


Huh? I just compared the two on Windows and Chrome wastes ~3 pixels more for the chrome at the top than Firefox does. Not only this, Firefox is customisable using style sheets! You can just reduce the unused padding yourself using, for example, Stylish.


Jesus! You are right, I am honestly confused now. I always thought FF used more space than Chrome.

Even now, looking at the two makes me feel as if FF is somehow bulkier. I am looking at the same page, and FF looks really huge EVEN IF IT'S NOT!!...

That's really odd.


On the Mac, Firefox uses a bit more space than Chrome. Safari uses only half as much.


I have all three browsers open on Mac OS with multiple tabs open in each. I measured the tab plus address bars of each:

    - Firefox: 79px
    - Chrome: 72px
    - Safari: 61px


You left out the most important case with only one/zero tabs where Safari gives up the space otherwise used by tabs and the other browsers don't.

That's about 90% of my browser windows and it makes Safari use only half as much space as the other browsers.


Wonder what happens when the UI process crashes with the backend process still running? Or vice versa? It seems like killing Firefox just became a whole lot more difficult.


I just tried - the "Firefox Web Content" process was automatically killed quickly after the "Firefox" process was killed. No surprise there, it would be weird if they hadn't considered that scenario.

If the "Web Content" process is killed while the main Firefox process is running, Firefox asks if you want to restore the tabs - which is half the point of e10s.

"killall firefox" would be easy to do anyway.


It's very likely that OS-level facilities will be used to interlock those processes in such a way that no tabs can exist without the UI to attach to... It seems fairly obvious that this scenario should be handled in some way.

You probably won't have to kill anything new.


I imagine it will still be straightforward on Linux/Unix.


For a couple of years now I haven't been able to use Firefox with the usual must-have privacy extensions. It's super slow and laggy compared to Chrome ;( I wish I could use Firefox, but I can't stand seeing it freeze completely every time I click on a link.


Firefox got much better a couple of years ago. In my experience, it's generally equal to or better than Chrome.


Will it be enabled on Linux too?


Yes.


Good, thanks!


I am still patiently waiting until unloaded tabs have a memory footprint of 0KB instead of 0.7MB.


Serious question, why does that matter? If you've got a thousand tabs open, that's 700 MB, or roughly 4% of your (probably) 16 GB of memory. Why worry?


I have a pretty old machine without memory upgrade options. Of course, it would not be an issue on a machine with 16 GB. I've been following the bugs with tab-related memory and perf issues for quite a while, but maybe you are right that Moore's Law fixed them by now and I should upgrade.


This is great. I've been using Firefox since Phoenix and it only continues getting better!


Sounds like even though it's been delayed for years, Electrolysis is still not anywhere close to being ready for prime time. And it's not even a full process sandbox. How many more years are we going to wait for that?


That sounds awesome!!! 7-years-late awesome, but still great news!


Congrats!


>Splitting UI from content means that when a web page is devouring your computer’s processor, your tabs and buttons and menus won’t lock up too.

Finally. Every browser I use does this when I load a huge reddit thread while RES is enabled.


Tried it, still not as responsive as chrome


In what way?


hangs on JS-intensive pages, like Jira


> Splitting UI from content means that when a web page is devouring your computer’s processor, your tabs and buttons and menus won’t lock up too.

Well, for that particular problem, there is a much simpler solution than splitting the application into two distinct processes. It's called multithreading. Moving the UI into a separate thread would not only be simpler than moving it into a separate process, but also wouldn't result in 20-50% memory usage growth.


That's what they're doing now and it's the problem, not the solution.

Multiple threads run in the same process address space. This creates security and stability issues. One website can affect the complete experience. That's exactly the problem they're trying to solve, the same way Chrome did, by using multiple processes. That way, for example, when one process misbehaves and gobbles up too much memory, the OS won't kill the entire browser but just the offending tab. (The OS won't kill individual threads.)


The author justifies the introduction of e10s using an example of inter-tab CPU contention. Yes, this is a real problem; I experience it using Firefox. But for that particular issue, a sufficient solution is multithreading.

As you wrote, separating each tab into a different process will bring security and stability benefits. But the blog post author does not mention any of them. The only justification for e10s he brings up is the CPU/UI blocking problem (which is not a justification for e10s at all, because it can be solved with multithreading).

And there is a reason why he doesn't mention security and stability benefits: They do not plan [1] (at least for now) on separating each tab into a different process. All tabs will still share the same content process. This means that e10s would not bring any of the security and stability benefits you wrote about.

So one may ask: what's the point of introducing e10s at all? I ask myself this question as well, not only regarding e10s, but also regarding many other Mozilla decisions from the last few years.

[1]: https://wiki.mozilla.org/Electrolysis/Multiple_content_proce...


Have you ever seen the source base of a browser like Firefox or Chrome? You must do small incremental improvements if you want your software to continue to work. The hardest part is disentangling everything from a single process architecture to multiprocess. That's what they're doing now. Switching from a single content process to multiple content processes is significantly easier.

I don't understand how you conclude that "they do not plan separating each tab into a different process" from the link you posted, given it opens with "After e10s is enabled for all users, the next step is to introduce multiple content processes." To answer your question, then, the point of introducing e10s is to make it easier to switch from a single-process to a multiprocess architecture.


Yes, but a few lines below they wrote: "The (practically) unlimited process count is on some platforms not a realistic goal, and having a separate process for each open site instances or even sites does not seem like a reachable goal (for now at least)."

So per-tab content processes (which btw are the only way e10s can result in security and stability benefits) are not a realistic goal for them.


I'm not aware of any browser that implements "per-tab content processes". For example, Chrome caps the number of content processes and multiple tabs in Chrome start sharing the same content process once enough tabs are open.


Technically speaking, it's a lot easier to do a separate process per tab, and let things crash just like any other running process. If they don't stop there and try to actually manage processes by themselves according to some messed up logic, then yes, I agree with you that there's no point to e10s and probably Mozilla.


> and let things crash just like any other running process

People always say that as if crashes were a regular thing happening every few minutes. I don't experience browser crashes for days, so I find it kind of odd that such a relatively rare failure case is considered a good justification for using a process per tab.


I stopped using Chrome a couple of years ago because of multiple crashes per day. The browser itself didn't crash, but individual tabs did. Or else Chrome tabs failed to load (giving just a pale blue background) presumably because it had run out of RAM.

I used OneTab for a while, to reduce memory use, but still found Firefox more efficient and more stable with hundreds of tabs, mainly because the vast majority weren't actually loaded until I needed them.

It's not hard to crash Firefox -- just keep loading an infinite page of gifs etc. But once you know that, you don't do it.

Under normal working conditions, I agree: none of the main browsers crashes often enough for it to be a problem.


e10s will bring some security and stability benefits. If one webpage crashes, all webpages crash, but you can click to reload. Additionally, this enables sandboxing, so that code from webpages can be run with lower levels of access. Sure, it is not perfect, but it is a large improvement from the status quo.

(Even process-per-tab is not perfect for security: a compromised page from one origin could open an iframe to another origin.)


They say that they want to move to multiple content processes, so I guess that their end goal is still one content process per tab.

Also, another reason for e10s is to decouple the rendering engine from the UI parts, in order to a) get rid of XUL and b) offer a path for replacing Gecko with Servo once Servo's ready for primetime.


> a) get rid of XUL

And at the same time break tens of thousands of extensions (which are the main reason why most Firefox users still use this browser).


> security and stability

That's shifting goalposts. GP quoted that it's about responsiveness and resource consumption. And resource consumption is obviously lower with multi-threading since more data structures can be shared.


That's not shifting goalposts, that's the goal from the start.

You seem to be throwing around a lot of technical terms. Could you elaborate on where exactly you see overhead coming from in a multiprocess vs. multithread architecture? And which data structures can be shared between threads but not between processes?

Scheduling overhead is identical and pure OS memory overhead is slightly higher for processes (they need more "accounting" data), but everything else can be optimized at the application level. Furthermore, shared memory lets you share data structures between processes just fine (heck, some data can be shared between processes and the kernel!), and since I assume the content processes will be dynamically linked, you're not even going to load shared libraries multiple times.


> shared memory lets you share data structures between processes

In theory maybe. In practice data structures contain pointers, which you can't just shove into cross-process shared memory. Which in turn results in copying things and serialization/deserialization overhead when doing IPC.

With multi-threading you can just share regular data structures (think std::map) and put them under locks or copy-on-write logic if they need to be modified.

Of course you can share some large blobs, like pixel data or loaded files. But the major data churn consisting of many small, nested data structures involves copying and (de)serializing.


A multithreaded model (for the UI and webpages) was investigated experimentally a few years ago (see https://bugzilla.mozilla.org/show_bug.cgi?id=718121 ), and I don't think it looked to be "much simpler". The major problem with moving to multiple processes is that JS running the UI has to be adapted to dealing with page JS that is "somewhere else". Because the JS execution model is based on a single thread, I don't think it would be so much easier to deal with JS on another thread than in another process. Requiring multiple JS runtimes would likely also increase memory usage even in a multithreaded model, though it does seem like it would be less than e10s.


If you're saying that splitting UI and content into different processes is different from splitting the two up into different threads, could you explain how that is different? I am especially interested in how it results into a 20-50% memory usage growth, especially now that memory usage by browsers is a hot topic.


Also the main process doesn't actually stay responsive for me with E10S, but I suppose I should try with extensions disabled.


I just don't see why you consider that to be simpler...


IPC is more complex than coordinating threads.


That's false.

IPC, shared memory, and credentials/descriptor passing are common patterns that exist in tons of software today. OpenSSH, Postfix, probably every iOS app.

Multithreading is great, but it's not suitable for a browser that may be fed malicious code and has a huge attack surface.


Those mechanisms existing does not prove that they have zero overhead and complexity increases over multi-threading.


To add more of an explanation to your comment.

Yes, in those simple terms, it would be 1) simpler to coordinate multiple threads in the same process compared to multiple processes, and 2) more efficient too.

1 is true because any system you might use to communicate between processes would involve an IPC layer and with a single process, you could use an identical system with the IPC removed.

2 is true because multiple threads in the same process can communicate directly without requiring IPC (which requires data serialization, shared memory for performance, and more system calls). i.e, sharing data has much higher cost using IPC. You can skip some serialization if you use shared memory, but you still need to set that up and manage shared buffers. (Different applications might share data in different ways so that the IPC overhead becomes negligible, but non-IPC could always be implemented faster.)

But the benefit gained from using multiple processes for a web browser (that is essentially a VM for running untrusted JS/web code) is security in the face of bugs. It comes from how operating systems provide tried-and-tested process isolation. If sites from separate origins are rendered or parsed in their own process, some types of browser bugs are harder to exploit to gain access to data from other origins. And for bugs that just cause crashes without security implications, you can crash a single tab and let other tabs continue, let the main UI continue.

On its own, that is good, but not great.

The big benefit is that the big 3 operating systems that browser developers are concerned with provide some level of per-process sandboxing. For example, one can launch a process with a limited set of privileges such that the only thing that process is allowed to do is communicate via IPC and compute stuff. The operating system enforces this for you. Provided you have a strong IPC layer, a process with such limited privileges can do very little damage if it is compromised.

Developing a strong IPC layer is not extremely difficult as it essentially boils down to reading a stream of messages from an untrusted source and detecting invalid messages without compromising yourself.
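As a very rough sketch of that idea (hypothetical message names and helper functions, nothing resembling Firefox's actual IPC code): the privileged side treats every message from a content process as untrusted input and drops anything it doesn't recognize.

    // Parent (privileged) side of a toy IPC channel
    const ALLOWED = { "load-uri": ["uri"], "set-title": ["title"] };  // whitelist of message types

    function handleMessage(rawBytes) {
      let msg;
      try {
        msg = JSON.parse(rawBytes);                  // reject anything malformed
      } catch (e) {
        return drop("unparseable message");          // drop() is a hypothetical helper
      }
      const fields = ALLOWED[msg.type];
      if (!fields || !fields.every(f => typeof msg[f] === "string")) {
        return drop("unknown type or bad fields");   // never act on unexpected input
      }
      dispatch(msg);                                 // dispatch() is hypothetical: only validated messages get through
    }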


I'm aware of all those things, but look up-thread: this was about the performance and resource consumption argument, where multi-process is the inferior choice. That's what this argument is about.

> And for bugs that just cause crashes without security implications, you can crash a single tab and let other tabs continue, let the main UI continue.

That's using non-security bugs as an argument for increasing memory footprint. It would be preferable to have those bugs fixed instead.

> Developing a strong IPC layer is not extremely difficult as it essentially boils down to reading a stream of messages from an untrusted source and detecting invalid messages without compromising yourself.

That's implementation complexity. But you also have to consider the performance overhead of proxying method access to arbitrary objects and bouncing each method invocation and its results back and forth. That's needed for backwards compatibility with addon code unaware of the process separation.

https://developer.mozilla.org/en-US/Firefox/Multiprocess_Fir...



