Hacker News
64-bit Firefox is the new default on 64-bit Windows (blog.mozilla.org)
253 points by cpeterso on Aug 14, 2017 | 93 comments



This is very good news for Emscripten / WebAssembly developers. One of the main limitations in asm.js / WASM based applications is the size of the emulated HEAP. It has to be a contiguous memory block, which the browser often fails to allocate (from my tests, ~10% of users cannot allocate 256MB for the HEAP). That's due to memory fragmentation.

Having 64-bit by default fixes that problem. That's a good move. Well done moz://a


It doesn't fix the problem, unfortunately; the JIT code in Firefox is effectively 32-bit, it can only operate on a 32-bit contiguous range.


It does fix the memory fragmentation problem, in that it's almost guaranteed you'll have a contiguous range you can allocate (assuming you have enough memory in the first place). In practice this has been a serious issue.

There is a separate issue of programs that need more than a 32-bit allocation for their memory. Not many apps hit that limit so far, and it's tricky to support anyhow since you need 64-bit ints for pointers and JS doesn't support that well. wasm64 will address this eventually.


It's easier to get a contiguous chunk of memory if you're able to wrangle it in the 64-bit space, then slice out a 4GB chunk for 32-bit code.


Why do you capitalise heap?


`HEAP` is the name of a large typed array variable that represents the original program's heap.


Must be a PERL programmer.


Why type moz://a ?


If you type mozilla in Zilla Slab Highlight Bold[0], it ligatures to moz://a, suggesting that mozilla is the correct spelling, and moz://a is a logotype.

[0]: https://fonts.google.com/specimen/Zilla+Slab+Highlight



I have not really written much Windows code, so forgive my ignorance, but: what is actually so hard about porting 32-bit -> 64-bit applications on Windows compared to Linux?


IIRC, in the case of Firefox the problem was not porting Firefox itself to 64-bit. That was done a long time ago. The problem was all of the 32-bit dependencies people had installed.

For example - if a user only had 32-bit Flash installed and they updated or installed 64-bit Firefox, they'd be annoyed that Flash "broke". That might not be the best or most current example, but it's that kind of thing that held up 64-bit default on Windows.


The biggest issue is Flash. You can't run 32-bit plugins in a 64-bit process (well, there are some potential workarounds, but nothing I'd want to do for production work).

A lesser issue is that the x86-64 JIT for JS was generally worse in performance than the 32-bit x86 JIT, although this has been fixed for a few years now.

It's not that Firefox couldn't be 64-bit on Windows, it's that the developers didn't feel it was ready to ship it to a large userbase.


> The biggest issue is Flash. You can't run 32-bit plugins in a 64-bit process

At the risk of sounding cynical, some people might consider that a reason to prefer the 64-bit version. ;-)


> The biggest issue is Flash. You can't run 32-bit plugins in a 64-bit process (well, there are some potential workarounds, but nothing I'd want to do for production work).

On Mac I believe both Firefox and Safari used a separate broker process to go between 32-bit plugins and the 64-bit browser. Is it that much messier to do on Windows?


This is a bit long, but I know the answer in detail because I'm the one who originally wrote the code to make 32-bit plugins work in 64-bit Firefox on OS X :) I'll never forget the moment I first got a YouTube video to play properly in 64-bit Firefox with 32-bit Flash.

It was much easier to do on OS X than on Windows. That's primarily because on OS X, at least at the time, Firefox was a Universal Binary, meaning there were complete 32-bit and 64-bit binaries for OS X in the same application package. We had been doing this for reasons I won't get into here, but they were unrelated to running plugins with mismatched bit-ness. That being the case though, we "just" (devil is in the details) started a 32-bit child process, which had all the same functionality (e.g. plugin code implementation) as the 64-bit process, and made sure that plugin API data going between the processes was always fixed length (e.g. int32 vs plain int) so there was no misinterpretation.

Mozilla could probably have done something very similar on Windows, but they weren't already installing both 32-bit and 64-bit binaries on every Windows machine. Doing that just for the sake of plugins would, at a minimum, have been a big bullet to bite. And there was/is no nice Universal Binary package equivalent on Windows.

Also, and this gets more nitty-gritty, the NPAPI plugin API on OS X was much easier to get working across two processes of different bit-ness. This was mainly because we had recently revised the OS X NPAPI (we called it Cocoa NPAPI or something like that) and got Adobe and many others to switch. The revised API was much easier to work with. The ancient Windows NPAPI was, and is, a total f*ing mess.


What I really dislike is that I have not seen any non-MS JITs that do Windows x86-64 function tables for SEH properly.


Windows is an LLP64 platform, meaning that the "long" type on Windows stays at 32 bits even when you're compiling 64-bit binaries. Getting 64-bit integer support for PHP on Windows took a major port of all basic type usage, moving away from longs to custom types with a higher degree of control. I don't know whether this was a problem for Firefox.


If you think about it, I think it's partly a fault of the language (AFAIK the C specification) for defining the basic integer types too loosely.

What is the advantage of having "int" and "long" types that only give minimum byte requirements (with associated storage classes, etc.) instead of more specified types like int32_t and uint8_t? In the end you either deal with fixed-size integers or you don't (using some bignum library), but if you do, you have to take their byte length into account in your code anyway, I would say (to deal with overflows and such).


> What is the advantage of having "int", "long" types that just give minimum byte requirements instead of more specified types like int32_t and uint8_t?

The advantage is that these more specific types might not even exist. For instance, some machines had 36-bit words, so a 32-bit type would have to be emulated, at the cost of extra instructions.


I think the minimum sizes were chosen because of a combination of (a) there was a much greater variety of hardware available then, and (b) people hadn't learned that fixing these sizes is better.


They were that flexible because you'd compile ONE program ONCE for (any ported) platform and that was that.

It took a little while, but eventually C got https://en.wikibooks.org/wiki/C_Programming/stdint.h (in C99 :( ).


> They were that flexible because you'd compile ONE program ONCE for (any ported) platform and that was that.

If only it were ever so simple, especially given UB and compiler-specific semantics across all those compilers and operating systems in the '90s (back when C was hardly relevant outside UNIX).


http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdi...

stdint even includes 'int with at least x bits' & 'fastest int type with at least x bits'


This particular problem was never really an issue for Mozilla, because the codebase has used fixed width types (first the NSPR types and later the stdint.h types) for 20 years.


To clarify: When compiling in 64 bits mode, the "long" type is 64 bits on gcc and 32 bits on msvc.

The other compilers vary. The C standard only specifies that int has to be at least 32 bits. That's one of the many issues of porting from 32 to 64 bits.


Sorry to nitpick, but the C standard specifies that int must be at least 16 bits, and that long is at least 32 bits. (Actually, it just specifies the minimum range, and tries not to be specific as to the bit representation).

https://en.wikipedia.org/wiki/C_data_types


Typo. I meant to write long, not int ^^


> When compiling in 64 bits mode, the "long" type is 64 bits on gcc and 32 bits on msvc.

That's not true for MinGW, and for good reasons (deviating from the ABI followed by the OS and all libraries would be stupid).


This actually makes porting easier, not harder, in most cases, since code semantics remain the same.

What makes porting complicated in general is dealing with pointers and handles. These are 32-bit or 64-bit, depending on the architecture. Unfortunately, it was pretty common for old code to assume that they're always 32-bit, and e.g. shove them into ints (WinAPI itself was guilty of this on many occasions).

Technically, such a thing was never portable - not even with longs - until C99 brought us intptr_t, there was no guarantee that any integral type was large enough to hold a pointer. But in practice, it worked, so people did it.


Differing 64-bit data models don't really matter when software depending on exact widths uses exact-width types. Unfortunately, MSVC didn't support those out of the box for many years, so each project had to include its own stdint.h replacement one way or another.


Here is a good summary of possible issues when porting 32-bit C++ code to 64-bit, from the PVS-Studio developers.

https://www.viva64.com/en/a/0004/


The OS doesn't really matter here. If the original developers made assumptions about the size of the pointers (4 vs 8 bytes) then it's a bit tricky but certainly not impossible.


This doesn't hold up, though, because 64-bit Firefox has been the default on 64-bit Linux for like 10 years, and it has probably existed on Windows for about the same amount of time.


There's an important difference between Linux and Windows, however: on Linux, "long" is always as wide as a pointer, while on 64-bit Windows, a pointer won't fit in a "long" or "LONG" or "DWORD".

But the real reason 64-bit Firefox is the default on 64-bit Linux, is that 64-bit Linux usually doesn't have the 32-bit compatibility libraries installed by default. On Windows, on the other hand, 32-bit compatibility libraries are always available.


Server Windows versions can have WOW64 disabled.


> I have not really written much of Windows code, so forgive my ignorance but: What is actually so hard about porting 32-bit -> 64-bit applications on Windows

Speaking strictly about things surrounding your code, as opposed to the code itself, there's stuff you just don't have on Linux.

On Linux pretty much everything is open source and provided by your distro, built for your architecture. The need for compatibility layers is absolutely minimal. This has a resounding positive effect in that quite a lot of things can be assumed fixed and in place.

On Windows, not so much.

Most applications are pre-built by others. Most are 32-bit. So Microsoft tries to maintain backwards compatible execution environments for both 32-bit code and 64-bit code in Windows.

This means that 32-bit processes trying to access system DLLs, will need to find 32-bit system DLLs somewhere. Same for 64-bit. So you have duplication of system-DLLs, in different folders. You can't just assume a standard, fixed path will lead you to the right place.

For the COM subsystem, all the available components an application can access are stored in the registry, alongside their physical path on disk.

You may need different DLL files for these COM components in 32-bit processes and 64-bit processes. So now you need the registry to point to different locations based on the bitness of the running application. So yeah. Portions (but not all parts!) of the registry are also duplicated with different references for different processes.

That means that when going from a 32-bit process to a 64-bit process, everything you've previously written to the registry... may not be there for you to read in your 64-bit process.

And you'd better make sure your installer is the same bitness as your process... Or you know what? That 32-bit installer won't be allowed to prime the 64-bit registry your application needs!

This may mean that all the COM components you registered in your installer are now inaccessible to your (and other) applications because they are 64-bit.

And I'm sure the list goes on. These are just the most obvious non-pointer based things I can think of which complicates matters.

In short: On Windows... Keeping everything 32-bit and closing your eyes to all terrible things mentioned above will ensure your application keeps working, because Microsoft put in the effort to guarantee that.

The second you step into 64-bit, you need to have all these things solved. There's no going halfway. So most people who have little to gain from 64-bit executables simply don't bother.


Windows is kludgy code built on top of kludgy code. From personal experience, there are different ways Visual Studio handles 32- vs 64-bit applications. A lot of memory issues tend to come up that don't appear in the 32-bit version. Plus there are different directories that are typically used, and potentially a lot of other issues.


From personal experience porting a fairly large and complicated multiplatform userspace application codebase from 32-bit to 64-bit on both Windows and Linux, I think the required effort is about the same.

Linux also uses different directories and has "potentially a lot of other issues". Windows uses LLP64 rather than LP64[1] but IMO that's just different not clearly/necessarily better or worse.

[1] https://blogs.msdn.microsoft.com/oldnewthing/20050131-00/?p=...


Holy moly! Is anyone else wondering why this took so long? It's almost 2018, for god's sake. I don't think I've used a 32-bit program in over a decade.

Yeah, Flash is 32-bit, but Flash has been dead for the greater part of that decade. Not sure how it could possibly be the case that Mozilla was still shipping 32-bit Firefox as the default until yesterday.


You must run a very limited set of programs to not have run a single 32-bit one in over 10 years. Even now, on my 64-bit Windows 10, I open my task manager and see over a dozen 32-bit programs running (mostly background processes). You might be running 64-bit Windows, but WoW64 (32-bit binaries on 64-bit Windows) is used constantly.


Nope, I just don't use Windows. I use Linux and macOS where, if you don't do anything too out of the ordinary, you are in 64-bit land 100% of the time. On my Mac I don't see any 32-bit programs running, and on Arch everything is pretty modern.


Windows still ships both 32-bit and 64-bit versions, and there are still a lot of people with older devices, such as netbooks, that are running 32-bit versions.

Consequently, when you're shipping software for Windows, it's beneficial to have a 32-bit version to capture that part of the userbase. But since 64-bit Windows will also happily run 32-bit apps, once you have a 32-bit version, there's no particular reason to have a 64-bit one. The only time you get something from 64 bits is when you need a lot of memory, which most apps do not. On the other hand, by having a single version, you simplify the acquisition and installation story for the users, cut your testing matrix in half etc.

Hence, most Windows software is still 32-bit.


Even on Linux or Mac, if you want to play steam games, you'll have to run 32-bit code.


I don't play games on my computer but if I did I think that is an OK divergence especially for older titles. This is a web browser we are talking about.


The venn diagram of Firefox users and people who believe Windows 7 32bit is still the best version of Windows and refuse to upgrade probably has a decent amount of overlap.

These are the realities of shipping for the minority not the majority. Users who are a blip for some companies end up being huge to others.


Windows 7 is still the most popular OS, but the adoption of 64-bit OSs is nowadays at around 78%.

https://hardware.metrics.mozilla.com/#goto-os-and-architectu...


Are there slam-dunk arguments why 64-bit is always better for applications? If so, I haven't seen them, and neither have various people I follow who seem to know what they're talking about.

Example: https://blogs.msdn.microsoft.com/ricom/2016/01/11/a-little-6...


When our CAD people work on large 3D models, Autodesk Inventor will happily gobble up 32 GB of RAM, and if they had machines with 64 GB, I suspect it would make good use of those, too. Also, I am told editing high-res graphics and video benefit both from the larger address space and the ability to use more RAM.

But I admit, these are the exception to the rule.

Having used both Linux and Windows in 32- and 64-bit versions - in a few cases on the same machine - I did not notice a difference in performance[1]. If the performance hit due to 64-bit were really that substantial, I could imagine the larger number of registers (and possibly larger caches) making up for it.

[1] Possibly, as so often in the days before SSDs, I/O was enough of a bottleneck that the difference between 32 and 64 bit code became unnoticeable.


There aren't. For most apps, producing 32-bit-only is preferable. If you're going to produce 32-bit and 64-bit, you should make available a stub installer that downloads the right one or an installer with both in it because most users have no idea if their OS is 32-bit or 64-bit. Mainly because they don't even know what that means.


Visual Studio, Dropbox, OneDrive, Steam


Because of plugins. Browsers have more plugins than most other software and the bitness needs to match between all of them.

And saying that Flash has been dead for the greater part of a decade is just completely wrong. You have to be really, completely out of the loop if you think that applies to any kind of reality for general users.


What is the crash rate improvement all about? I get the address randomization, but what changes in 64 that fixed crashes?


The limited address space combined with fragmentation due to non-compactable heap allocations (C/C++) can lead to unsatisfiable allocations long before you actually hit the 4GB limit. And many allocations are not fallible, which means the browser has to OOM-crash if they cannot be satisfied.


They frame it as '64 bit users with more than 4GB RAM' so a lot of those are OOM related. More discussion @ https://groups.google.com/forum/#!topic/mozilla.dev.platform...


To add to what has been said, some asm.js-based scripts (like games) need a relatively big (say, 128-256 MB) and aligned chunk of memory that is frequently not available on 32-bit machines even though there's a lot of free memory.


Is there any reason it needs to be a contiguous block of memory?


It's much simpler and much faster to make a chunk of code completely isolated in user space by masking the addresses it reads/writes. For example, masking with 0xFFFF is the same as taking the address modulo 2^16. Add another bit to the mask and you're doing modulo 2^17, and so on. That's why I think the block must be not only contiguous, but also aligned. I may be wrong about the alignment now that I think about it. But the contiguous part is clear.


Legacy code and development time.

When I write C++ code that deals with large datasets, I try to avoid large contiguous buffers as much as possible. For cache locality, an aligned 1GB buffer is practically the same as a set of 64 aligned contiguous buffers, 16MB each. You'll only hit RAM latency when crossing the boundary between the buffers, i.e. only 63 times through the whole 1GB of data.

One problem is std::vector. The standard says the whole vector must be contiguous, i.e. to split a large buffer into smaller chunks, one needs to implement a custom container on top of it.

Apparently, authors of C++ think the problem will go away by itself, with the switch to 64-bit platforms.


The problem does go away because of how addressing works. If you read my other comment, the address is masked for fast and safe sandboxing. If you have a huge virtual address space, it's practically guaranteed you'll have a contiguous chunk even when the actual memory is very fragmented.


For one thing, modern game engines often have custom allocators and a large array of performance critical bits of code that assume cache locality. So in general, having the memory involved be non-contigous could break lots of assumptions.


tab hoarding = hundreds of tabs = more than 3GB memory used = crash (max userspace memory with large address space aware is 3GB)


Unless I'm mistaken, the limit for 32-bit processes on 64-bit Windows is actually 4GB; the 3GB limit is when running on 32-bit Windows.

https://msdn.microsoft.com/en-us/library/aa366778.aspx


Only 2 to 3 GB are usable by the application, out of the 4GB addressable space. The rest is reserved.


On 32-bit Windows, sure; because the process has to share its 4GB virtual address space with the system (the system getting the upper 1GB/2GB, depending on if 4GT is on). But it is my understanding that on 64-bit Windows, 32-bit processes do not have to share any of their 4GB virtual address space with the system, and can use the entire 4GB.

https://msdn.microsoft.com/en-us/library/aa384271.aspx https://msdn.microsoft.com/en-us/library/aa366778.aspx


Though as noted in your MSDN link, 32-bit Windows application must use the IMAGE_FILE_LARGE_ADDRESS_AWARE linker flag to opt into a full 4GB virtual address space when running on 64-bit Windows OS. 32-bit applications that were only tested on a 32-bit OS might break if they see pointers above 0x80000000 so, in the name of backwards compatibility, Microsoft wisely kept the 2GB default. :)


>> 2 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE cleared (default)

>> 4 GB with IMAGE_FILE_LARGE_ADDRESS_AWARE set

While we are at it: the flag is a bit in the header of the .exe file. It should be set at compile time via the Visual Studio options, but it can really be set on any file with a hex editor.

Don't expect to be able to use the full 4 GB even with the flag. There is a lower hard cap that depends on the version of Windows.


win64 FF has been built with LARGEADDRESSAWARE for the last 7 years.

https://bugzilla.mozilla.org/show_bug.cgi?id=556382#c28


Yes, out-of-virtual-memory crashes (OOM) are, in theory, no longer possible with 64-bit applications. We still do see some OOM crashes from 64-bit Firefox users.

32-bit applications on 64-bit Windows can access a full 4GB virtual address space (if they are compiled with a special linker flag) without needing the 3GB Windows boot-time flag.


For a while there I thought Microsoft will ship Firefox as default and halt their browser efforts altogether.


Edge will go open source before that happens.


Big parts of Edge are already open source.


Exactly my point.


> If you prefer to stay with 32-bit Firefox after the 64-bit migration, you can simply download and re-run the Firefox 32-bit installer from the Firefox platforms and languages download page.

Any reason to do so? It'd have to be some fringe use case of '<4GB RAM system & user doesn't need fast codecs'. If it's a closed business environment I'd imagine they're on IE


The only reason someone would stick with a 32-bit browser is for performance reasons. 64-bit applications have a larger memory footprint from all the 64-bit code and pointers. We didn't see much performance difference when testing 2 GB Windows machines hands on, but Firefox telemetry analysis showed that user retention and crash rates were worse for users with <= 2GB than for > 2GB. Only about 1% of Firefox Windows users have < 2GB and about 5% have exactly 2GB.

Users with <= 2GB can still install 64-bit Firefox if they really want to by downloading the 64-bit full installer instead of the "stub installer" (a tiny downloader that detects 32- and 64-bit OS and downloads the appropriate full installer). Likewise users with > 2GB can download the 32-bit full installer, too. We wanted the Firefox installer's default to provide the best user experience, but still give users the option to run the (32- or 64-bit) application of their choice.

For comparison, 64-bit Chrome's minimum memory requirement is >= 4GB.


> For comparison, 64-bit Chrome's minimum memory requirement is >= 4GB.

Not officially (there is no memory system requirement): https://support.google.com/chrome/a/answer/7100626?hl=en


When Google announced the migration of 32-bit Chrome Windows users to 64-bit in May, they used 4GB as a minimum for 64-bit. That Chrome system requirements page doesn't specify whether there is a minimum memory requirement for 64-bit Chrome on Windows, but I would assume it is still the same.

https://chromereleases.googleblog.com/2017/05/stable-channel...


Presumably in case there are still bugs, since a lot of the Windows code probably was not really well tested for AMD64. I'd also assume 64-bit builds take up more memory due to having 64-bit pointers.

Maybe most importantly they just don't want to assume, at their scale, that there isn't at least one user that might need some time to be able to move to the 64-bit builds for one reason or another.


I for one have had serious stability issues with version 55. It always crashes on facebook and cnet. And the best part, the crash submission dialog which comes up in 32 bit firefox doesn't show up for 64 bit. Also, uninstalling 64 bit and reinstalling from 32 bit installer still installs 64 bit FF. I had to remove the user config before reinstallation.


For legacy internal systems, and legacy OS. There are more than you would imagine still in use.


Now I respect Mozilla and all that they stand for, but that graph is about as useless as graphs come.


Twice as secure, maybe 1/5th as many crashes. Totally bizarre choice to use a graph when they could have gone "2X as Secure!" "5X as Stable!"


Mozilla has come a long way. I remember them turning off the nightly 64-bit Windows Firefox build[0], then back on[1] after people complained.

[0] http://www.computerworld.com/article/2493395/internet/mozill...

[1] http://www.computerworld.com/article/2494189/internet/mozill...



I helped write this blog post. The early drafts used the term "64-bit Firefox" but we changed to "Firefox 64-bit" for SEO reasons. Apparently, 10x more people land on Mozilla's Firefox blog from web searches for "Firefox 64-bit" than "64-bit Firefox", so the voice of the people won. :)


Just wait for 2018!


Love the unit-less graph around abstract concepts.

https://xkcd.com/833/

That said, kudos for making this the default.


Firefox 55 feels very nearly Chrome-slick on Windows (it used to be randomly laggy). Kudos moz


Is this the last "modern browser" to use 64-bit as default?


Well, it's not Chrome, which needed the RAM; nor is it Safari, which only needs to support the Mac ecosystem; nor is it Edge, which doesn't need to support anything prior to Windows 10.

They only just dropped support for XP and Vista in February...


I am not convinced about the stability of 64 bit FF. Even with no add-ons, stability is poor. Hope it gets fixed as more users start using it and more crash data shows up.


I've been using 64-bit Firefox Nightly for some time with few crashes. The last time it crashed, I was politely asked (by the Mozilla developers) to do this:

1. restart Firefox, enter 'about:crashes' in the URL bar

2. find the report ID that corresponds to the crash

3. submit it as part of a bug report to Mozilla's Bugzilla

My crash problem was diagnosed within a day and fixed.


I started using 64-bit Firefox on Windows about a year ago, and carried over all my add-ons. It crashes way less than my 32-bit Firefox, which tends to die after allocating 1.8 GB of memory. What makes you say 64-bit FF is more unstable for you?



