There are plenty of mischievous things an attacker could do if you just verify each chunk separately, including reordering chunks, omitting chunks, or truncating the file.
Any sort of verification presumes that you have an initial trusted checksum against which to verify, so I don't believe this solves the separate problem of obtaining such a checksum.
IIUC your question here boils down to "what is the point of a hash tree?" as opposed to e.g. a list of individual chunk hashes. The answer is that a hash tree lets you verify an individual chunk by checking O(log N) intermediate hashes rather than having to look at the hash of every chunk.
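For illustration, a minimal Python sketch of that property (toy chunk size and helper names are mine; real formats add details such as domain-separating leaf and interior hashes): verifying chunk 5 of 8 below needs only three sibling hashes plus the trusted root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(chunks):
    """Hash each chunk, then hash pairs upward; returns all levels, leaves first."""
    level = [h(c) for c in chunks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Collect the O(log N) sibling hashes needed to verify one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1                     # the neighbour at this level
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(chunk, path, root):
    """Recompute the root from one chunk plus its sibling path."""
    node = h(chunk)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

chunks = [bytes([i]) * 1024 for i in range(8)]  # eight toy 1 KiB chunks
levels = build_tree(chunks)
root = levels[-1][0]                            # this is the single trusted checksum
assert verify(chunks[5], proof(levels, 5), root)
```

Note that a flat list of chunk hashes also lets you verify chunks in isolation, but then the trusted checksum has to cover the whole O(N) list; with the tree, one small root commits to everything.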
Thanks. That is meaningful, but I still don't understand how "the recipient can stream a video attachment, while still verifying each byte as it comes in" isn't basically also true for sequential hashing with periodic chunk hashes.
For no reason other than "legacy reasons" - much of the client-side crypto code in the current Firefox Sync is inherited from an earlier system that predates widespread acceptance of GCM as a best practice. If we designed it from scratch today it would almost certainly be using GCM instead.
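For readers unfamiliar with the distinction: GCM is an authenticated-encryption mode, so it gives confidentiality and integrity in one primitive rather than composing a cipher with a separate HMAC. A minimal sketch using the third-party `cryptography` package (key derivation and nonce management are simplified; the record contents are made up):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, derived per user
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with a key
record = b'{"history": ["..."]}'            # hypothetical sync record

# Associated data is authenticated but not encrypted, binding context to the record.
ciphertext = aesgcm.encrypt(nonce, record, b"collection=history")
assert aesgcm.decrypt(nonce, ciphertext, b"collection=history") == record
```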
Hi, Firefox Accounts developer here. You're correct in your understanding that that login flow is ultimately driven by a webpage, and this is a deliberate trade-off that we made in the interests of reach and usability of the system.
It's certainly a trade-off that not everyone is comfortable with, but we're confident it's the right one for the majority of our users. You can read some previous discussions on the topic in these bugs (and additional suggestions/feedback therein is definitely welcome):
>The reason the login form is delivered as web content is to increase development speed and agility
You saved some sprints but invalidated the purpose of the project. Very agile.
>Ultimately I think we can have web content from accounts.firefox.com be just as trustworthy as, say, a Mozilla-developed addon which might ship in the browser by default, which is a pretty high bar. We're not there yet, but it seems worth pursuing to try to get the best of both worlds.
The safety of the default installation is crowdsourced across all users and can't be targeted. The safety of the JS I load from Mozilla is not and I would have to verify its safety every time. Unless I'm misunderstanding something it can never be as trustworthy.
A more complete comment from rfk on the bug tracker:
> The reason the login form is delivered as web content is to increase development speed and agility. You're right that web content has a larger potential attack surface than code that's built into the browser, but using web content also brings other kinds of security benefit that may not be obvious. That agility meant that during the incident in [1] we were able to respond quickly and effectively to protect users' data, and to roll out an updated login flow containing an email confirmation loop. It means that when we ship two-factor authentication over the coming weeks, it will be immediately available to all users on all platforms. It means we can address Bug 1320222 in a single place and be confident we won't lock out older devices. And it means we can easily bring new Firefox apps like Lockbox into the Firefox Accounts ecosystem.
> Our approach has been to embrace the benefits of web content while trying to reduce the potential attack surface as much as possible. That includes some simple things like hosting the web content on its own server to reduce exposure to application server bugs, and shipping default HSTS and HPKP settings for the accounts.firefox.com domain. It also includes some in-browser measures to prevent interference with FxA web content, such as (the currently private) Bug 1415644. As a future step I'd like to see us implement content-signing for accounts.firefox.com and have it enforced by the browser, following the example of things like Bug 1437671.
> Ultimately I think we can have web content from accounts.firefox.com be just as trustworthy as, say, a Mozilla-developed addon which might ship in the browser by default, which is a pretty high bar. We're not there yet, but it seems worth pursuing to try to get the best of both worlds.
"Every time" for this use case is once per browser install, at the moment you perform the authentication with Firefox Sync, which is the same as the number of times you'd want to verify the binary right before authenticating.
The tradeoff they made here has essentially zero impact on the number of times you need to verify their code; it's just a matter of whether, at the moment you authenticate, you're verifying native browser authentication code or authentication code delivered through a website as JS.
A concern like the one raised in this thread is certainly valid for websites with expiring sessions, where you switch accounts and log in and out. And we certainly do need better tools around signature verification and version pinning for websites, like we have for binaries (content-addressed networks like IPFS may have good answers there).
But for this use case, it's not a practical concern by any measure, and all this alarmism seems really misdirected.
You're still not addressing the ease with which a targeted attack can be directed at a single user.
In order to compromise Firefox native code, they would have to compile malicious code and ship it to everyone. My distro maintainers would need to include the malicious binary in their repos, including a signed hash of the compromised binary, and I'd need to install it, at which point my package manager would verify the hash.
In order to compromise a single user's browser session, they'd simply need to fingerprint the user's browser and then serve them different content than everyone else gets. No hashes or signatures on javascript, no safety in numbers, etc.
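To make that asymmetry concrete, here is a minimal sketch of what targeted serving could look like on the server side (the fingerprint header, payloads, and port are all hypothetical; real fingerprints would come from canvas, HSTS probing, cookies, or IP addresses):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET_FINGERPRINT = "deadbeef"  # hypothetical fingerprint of the one targeted user

class LoginJS(BaseHTTPRequestHandler):
    def do_GET(self):
        # For illustration only: read a "fingerprint" straight from a header.
        fingerprint = self.headers.get("X-Fingerprint", "")
        if fingerprint == TARGET_FINGERPRINT:
            body = b"/* malicious build served to exactly one user */"
        else:
            body = b"/* the JS everyone else receives and audits */"
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), LoginJS).serve_forever()
```

The point of the sketch: the two responses are indistinguishable to everyone except the target, so "safety in numbers" auditing never sees the malicious variant.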
If someone is using a package manager that uses code signing then indeed, the binary is harder to attack than the JS (if only because the package manager would also need to collude).
However, a lot of people get their software from downloaded .exes or auto-upgrading installations. For them, JS or binary are equally vulnerable. (All it takes is a Mozilla signature.)
Besides, it is undeniably better to be vulnerable only to an active attack from Mozilla than to be vulnerable to a passive attack from them.
Most distributions disable auto-upgrade in Firefox, for many reasons (security and auditability being among the main ones), so you won't get auto-upgrade from a distribution.
And even the "download .exes from the internet" use case is precisely as secure as downloading JS from the internet that is verified once per install. To attack someone who has an auto-updating Firefox and downloaded it from the internet, you need to intercept and attack TLS -- but only when the upgrade happens, which is a fairly limited opportunity. The JS attack has the exact same properties if the above comment (that it only gets downloaded once per install) is true.
So it is strictly less secure in the optimal case, and no more secure in the sub-optimal case. Security really isn't a strong argument, then (the real argument is that it allows for more "agile development" -- which is an understandable argument if you cop to it being the only reason for such a design).
If you can attack TLS, the game is over; you can't trust anything. A huge majority of Firefox users use the built-in update mechanisms. Making life harder for the majority of users to improve the security of a select few is a questionable decision. And if you really insist, you can always install some add-on that calculates a checksum of the JavaScript resources.
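A minimal sketch of what such an add-on's core check might do (the script URL and pinned digest are hypothetical placeholders; a real add-on would hook the browser's network layer rather than re-fetching):

```python
import hashlib
import urllib.request

# Hypothetical: digest recorded the last time you audited the login page.
PINNED_SHA256 = "0" * 64
SCRIPT_URL = "https://accounts.firefox.com/app.js"  # hypothetical bundle URL

body = urllib.request.urlopen(SCRIPT_URL).read()
digest = hashlib.sha256(body).hexdigest()
if digest != PINNED_SHA256:
    raise SystemExit(f"login script changed since it was pinned: {digest}")
```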
> Making life harder for the majority of users to improve the security of a select few is a questionable decision
I agree in theory, though as an aside this isn't true for distribution packages, because they are usually GPG-signed with keys held in TPMs on the release machines. Of course, any other internet communication relies on TLS not being broken.
But another attack would be modifying one of Firefox's mirrors to host a malicious Firefox (not a TLS attack but an attack on a specific mirror). GPG detached signatures for distribution packages protect against this and many other such problems (obviously some attacks against the build servers of a distribution would be problematic, but the same applies to any software project).
Though to be fair, I don't know whether Firefox's auto-updater uses an update system like TUF, or a distribution-style update system (which is mostly equivalent in terms of security); either would protect against these sorts of problems.
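To sketch what a detached signature buys here (shown with Ed25519 via the third-party `cryptography` package for brevity; distributions actually use GPG/RPM/APT tooling, and the package bytes below are hypothetical): a mirror can serve the bytes, but it cannot forge the release key's signature over them.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held on the release machine
package = b"firefox-xx.y.tar.xz contents"    # hypothetical package bytes
signature = signing_key.sign(package)        # distributed as a detached .sig file

# On the user's machine: the public key ships with the distro, not the mirror,
# so a compromised mirror cannot produce a package that passes this check.
verify_key = signing_key.public_key()
try:
    verify_key.verify(signature, package)    # raises if the bytes were tampered with
except InvalidSignature:
    raise SystemExit("package does not match its detached signature")
```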
> Making life harder for the majority of users to improve the security of a select few is a questionable decision.
I don't understand how logins being built-in to the browser is making life harder for the majority of users. It wouldn't make a difference to them. It would make a difference to the development team, but one could easily argue that the development team should be willing to make life slightly harder for themselves in order to make Firefox users more secure.
> So it is strictly less secure in the optimal case, and no more secure in the sub-optimal case. Security really isn't a strong argument, then
I agree. I was arguing for having some form of e2e encryption (like Firefox currently has) as opposed to not having e2e encryption. I wanted to argue against the idea that, because the e2e was implemented in JS, one might as well not have it.
Then, regarding the gap between e2e in JS vs e2e in binary, my point was that JS is just as good in most cases.
> Most distributions disable auto-upgrade in Firefox, for many reasons (security and auditability being among the main ones), so you won't get auto-upgrade from a distribution.
Does that mean that the code is only signed by the package distributor, and not Mozilla? Because in that case, the package manager becomes a single point of failure. Then again, I guess that is always the case.
Still, it would be weird that, as far as Mozilla trust goes, a signed exe from the internet is better than a signed package from your preferred package manager.
In openSUSE our build system can be configured to auto-check the signatures of the source archives used for building, so you can check the builds to make sure that we are building from official source releases (assuming they GPG detach-sign their source tarballs -- something I recommend any software release manager do).
But most distributions do their own builds, and without reproducible builds being universally available -- not to mention that distributions usually have minimum compiler-hardening-flag requirements, as well as patches in some cases -- you couldn't reuse signatures for the final binary. Also, the entire package is getting signed, so the binary's signature wouldn't be sufficient (and checking it on install would be quite complicated as well).
> Still, it would be weird that, as far as Mozilla trust goes, a signed exe from the internet is better than a signed package from your preferred package manager.
I think that has always been the general case, since distributions are an additional layer of people maintaining a downstream copy of a project. But don't get me wrong: most distributions have processes that allow you to verify that builds of large projects like Firefox are produced from the real sources.
There's also usually several layers of human and automated review before a package upgrade can actually land in a distribution.
The vast majority of Firefox users receive updates from Mozilla via the auto-update mechanism, which would be vulnerable to compromise in a similar way.
(A Linux distribution could also be compromised and used in a targeted way of course)
>> then serve them different content than everyone else gets
To help my understanding: to achieve an attack like this, would the attacker need to circumvent SSL on the client, or take over the script-serving web server? Or is there another attack vector that I'm not seeing?
The attacker in this case would be Mozilla itself. No need for an MITM. In this hypothetical, a government agency contacts Mozilla and says "Here is a canvas/HSTS/other fingerprint. Please serve this malicious code when this fingerprint accesses the login."
The point is that Mozilla can single out individual users for targeted attacks, whereas they could not do that if they had to put the malicious code into Firefox itself.
Right, I see. So the barrier with Firefox itself is that the malicious code wouldn't get built into the product and served as an update. However, in that scenario, Mozilla could still serve a malicious update to a single user; it's just harder to fingerprint that user through the update channel.
With attack vectors it's also about ease of exploitation. In this case, the ease is high. If the person you are responding to compiles their own browser, the bar to put an exploit in there is already much higher. Yes, there are still attack vectors. And there always will be. The point is they're harder to access.
Your initial comment was pretty adamant that Mozilla had really messed up by delivering the code as JS. However, what is the attack vector that they've introduced by taking this approach?
It sounds to me like you're referring to a man-in-the-middle style attack. However, to the best of everyone's current knowledge, that's simply not possible with SSL.
It's only possible if the attack vector includes having already compromised the user's computer and installed a root certificate. At which point this is all pretty moot.
I think you have me confused with someone else. I have made no points except the ones in the post you are responding to.
In this case it looks like you're missing the fact that the JS on the server can be changed with great ease and low discoverability (it can be changed just for you, and it won't show up anywhere else).
> I think you have me confused with someone else. I have made no points except the ones in the post you are responding to.
My apologies, that's what I get for reading on my mobile.
> In this case it looks like you're missing the fact that the JS on the server can be changed with great ease and low discoverability (it can be changed just for you, and it won't show up anywhere else).
You raise a reasonable point. It is indeed something everyone should be aware of. It's mostly a matter of trust, not security.
However, the same is equally true of someone you trust changing the binaries, source, and/or hashes that are delivered to you, whether you got those from Mozilla or somewhere else.
I agree that we don’t currently know of easy attacks on SSL if you’re pinning certs (which it sounds like Mozilla does here). But all you need is a rogue CA to MITM SSL if you’re not pinning certs, so I don’t think “simply not possible” is an accurate description of SSL as generally used by the broad web-dev community.
The question is how hard it is to detect tampering. My Linux distribution builds Firefox from source and signs the build. The builds are also checked to be reproducible.
> there’s a world of difference between automatic updates from e.g. Debian and automatic updates from Mozilla.
In what way?
This is obviously somewhat anecdotal, but...
I'm the developer of Heimdall, software that flashes firmware onto Samsung phones. The software quite literally has the ability to replace almost every piece of software running on your phone. If it were compromised, it could not only own a user's phone, but also potentially everything a user accesses on said phone.
Sure, my software is open source, and I encourage anyone interested to inspect the code; I'm sure there are bugs. However, the `heimdall-flash` package in the official Debian repositories... I didn't make it, and I have no connection with whoever did. Now, don't be alarmed: despite being several years out of date, to the best of my knowledge it's a perfectly good package, and I'm thankful that the maintainer went to the effort. However, it would be so easy for someone to have published a malicious package. This is pretty powerful software; it has significantly more power than root on your mobile phone.
I love Debian, both philosophically and in practice. But does it really deserve your trust more than Mozilla?
It's perfectly normal for Debian packages to be maintained by people other than the original developers of that piece of software, isn't it? Debian has more than 60,000 packages but doesn't have 60,000 package maintainers – the roles are quite separate.
For example, Linus Torvalds doesn't maintain the Debian kernel packages. If whoever does were to put malicious code in the kernel packages, that would be very bad, just as if Heimdall were compromised, which is why Debian has a relatively small set of trusted package maintainers and doesn't let just anyone put code in the official distribution.
This is very interesting and thanks for clarifying, but if you concede that there is a security trade-off here for the sake of usability, then isn't this, by definition, not "Private by Design"?
As in: you chose principles other than privacy to guide your design?
Nobody purely chooses privacy or security to guide their design. An implementation of Firefox sync that was purely, 100% private by design would be airgapped, it wouldn't sync over a network.
Arguably, a private by design implementation of Firefox sync wouldn't even exist. You significantly increase your number of attack vectors by making your session available on multiple devices. What happens if your Android phone is compromised? Better to only have your session on one device.
Obviously I'm being hyperbolic here, but the point I'm getting at is that security isn't black and white, and you will always be making tradeoffs for usability, no matter what the context is.
What that means for "private by design", I dunno. Maybe it's just a buzzword. Maybe it's just a matter of degree. Other people can debate that if they really want to. But I do know that the moment you put doors on your house, it's less secure than it used to be.
The actually valuable question is, "is Mozilla's tradeoff good enough for usability that it justifies the decrease in security?" I'm not sure whether the answer to that is yes or no.
The privacy is at least verifiable, in the sense that users can at least look at the implementation themselves and (granted, with some difficulty) potentially detect changes.
This is much better than simply sending your password off to a third-party and having to trust that the company is doing what they say they're doing.
Sorry, but you (Mozilla) deliberately crippled your system to the point where Mozilla, certain key personnel at Mozilla, whoever operates the data centers, or anybody running some MITM middlebox - if they wanted to, or were compelled to - could easily and rather stealthily target specific users or large groups of users, because your servers (and any successful MITM) have the capability to hand out compromised code on a request-by-request basis that could sneak out the actual password, thus compromising all the user's data you store.
Then you go ahead and publish articles about how your system is "Safe" and "Private by Design", underpinned by "here is SOME MATH to prove it!".
And just now, you state that you "traded off" the "Safe" and "Private by Design" properties for "<unspecific marketing speech, something about reach and usability>", but are somehow "confident" that falsely advertising your broken implementation as having properties it does not possess is the right thing to do (for most users). wat?!
Why is the code that derives stuff from the password and the UI for entering the password not in the browser itself?
Surely not for the benefit of the users, as there is no real usage/UX difference for them between "https://accounts.firefox.com/" and "chrome://firefox-account/".
Could this be solved by creating a WebExtension that completely replaces the frontend with a bundled copy and then including it in Firefox? If you trust the browser and therefore the bundled extension, you don't need to trust the server at all.
FWIW, as a developer on the Firefox Accounts team, I strongly endorse the sentiment of this article. We've occasionally found ourselves merging microservices back together because the abstraction boundaries we designed up-front weren't working out in practice...
> Unmaintained critical infrastructure is bad news.
(article author here)
To add to @callahad's excellent points: unmaintained critical infrastructure on your security perimeter is even worse, and a service like Persona is about as security-critical as you can get!
Persona was (and will remain until the end of November) covered by Mozilla's bug bounty program, meaning that it has been getting regular security bugs filed against it. Most have been spurious, some have not, but each of them has been a fire-drill because Persona gates access to so many of Mozilla's internal services.
We have been able to respond effectively so far, because there's a core of ex-Persona developers kicking around other projects at Mozilla, who we've been able to pull back in for these critical maintenance tasks. But that's not sustainable indefinitely.
The only responsible choices for a security-sensitive service like this are (a) staff it properly, or (b) tear it down gracefully. I'm personally quite disappointed that we couldn't find a path to success for Persona at Mozilla, but I'm grateful we've at least found the resources to do (b).
Not that it really matters after this announcement, but it has always been possible to disable the suggested tiles, from the little settings gear icon on the newtab page.
But isn't this menu a bit deceptive? It's a settings menu with two options: disable suggested tiles, and show a blank page.
"Show blank page" can be set from the regular options page, while the "disable suggested tiles" option is not in the regular options. So the only unique feature of this menu is disabling suggested tiles.
It's as if this menu was created entirely to hide the option of disabling suggested tiles.
They force these so-called features on their users, but they don't get enough crap for doing so, simply because the features can be disabled. Most people miss the point that the majority of people who use Firefox probably don't even know what tiles were to begin with. (The same goes for Pocket, Hello, etc.)
> Firefox Accounts has an active userbase orders of magnitude larger than Persona's.
> We may be able to reintroduce a notion of federated identity into FxA at some point in the future
As a member of the team working on Firefox Accounts, here's one (hypothetical!) way that might play out in practice:
* grow FxA userbase to significant size, integration with Firefox to significant quality
* allow websites to add "log in with Firefox" via OpenID Connect and get a really slick experience for Firefox users
* influence the OpenID Connect ecosystem to be more of a level playing field for smaller IdPs (e.g. increasing adoption of IdP discovery and dynamic registration; sketched below)
* a win for openness on the web!
Not as big a win as widespread adoption of Persona would be, but a win nonetheless.
This sort of thing isn't exactly on our concrete roadmap; our short-term focus remains on supporting Mozilla's own service ecosystem. But be assured that it's on our minds.
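A rough sketch of what IdP discovery gives a relying party (the issuer URL is illustrative; any OIDC provider publishing the standard well-known document would work the same way, which is the "level playing field" point above):

```python
import json
import urllib.request

# Any OIDC issuer publishes its endpoints at a well-known path, so a relying
# party can support small IdPs without provider-specific integration code.
issuer = "https://accounts.firefox.com"  # illustrative issuer URL
with urllib.request.urlopen(issuer + "/.well-known/openid-configuration") as r:
    config = json.load(r)
print(config["authorization_endpoint"])
print(config["token_endpoint"])
```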
Why fall back to just OpenID Connect? You could use a multi-pronged approach to prop up BrowserID, using (continuing to use) OpenID Connect as a bootstrapping fabric. Maybe along these lines:
* Merge the login.persona.org fallback login provider and Firefox Accounts (grows both userbases!); you'd probably need a nice path forward for FxA users with Persona-supported email accounts, but that may just be a matter of selling "Connect your FxA account to your Gmail account for easier password management; here's how FxA will (not) use your Gmail information". (Worst case, you need to figure out the BrowserID protocol changes to allow individual email opt-out to a different provider. Maybe easy enough to do in the case of the bridge providers to Gmail/Yahoo, as login.persona.org is already mediating that...)
* Use the Firefox Accounts "brand" for the fallback provider; this potentially gives a needed distinction between the fallback provider and the platform/tooling (so long as you can do user acceptance testing to avoid confusing users), so that Persona <=> OpenID Connect becomes the developer brand and FxA the consumer brand
* Set up an OpenID Connect proxy to Persona/BrowserID and call that "Login with Firefox"...
A proxy could drop into existing OpenID Connect workflows but really be a wrapper around the BrowserID navigator.id. With Firefox Accounts as the main fallback, this is still seen as the "Login with Firefox" button. This would require fewer changes to the auth code of existing websites (drop it in next to your Google/Facebook buttons), but then as people get used to it you can start to encourage websites to "skip the middleman" of the proxy and directly use Persona/BrowserID navigator.id to back that "Login with Firefox" button.
Maybe the only twist here would be a way for other browser manufacturers/plugin providers to play in this space and keep the branding friendly. One idea might be to add a navigator.id.branding spec that, if implemented, could override "Login with Firefox" to show a "Login with Chrome" or "Login with Edge" button. The trick there, of course, would be balancing site CSS abilities against browser branding abilities. Doing so, however, would further reduce the need for consumers to know/learn/interact with the Persona brand, and at that point they are just associating that button with "the browser login button". On the other hand, if FxA is the fallback provider it seems fine to just always have it say "Login with Firefox", and it may be less confusing that way; but with the original goals of BrowserID in mind, it might be nice to have it be browser/plugin-configurable, solely for that reassuring "login with my browser" feel.