So that's pretty unfair.
1) They state they are working on a fix for normal, release channel users who don't want to run studies
2) They tell you to temporarily enable Studies to get the fix within up to six hours (it could be faster; six hours is just the stated upper bound, to set expectations)
3) You can explicitly install Nightly or 66.0.4 before it's pushed if you want a fix now
Yes, it's unfortunate. I'd expect them to meet it head on, push a tested fix in a timely manner, admit a mistake was made, explain publicly how/why, and apply the learning moving forward. Beyond that, what's your expectation?
Not saying that their current actions are wrong, just that the optics of it are terrible for them.
There was a chain of bad decisions that led them here, though: 1) thinking it's OK to disable software after it's installed (using cert expiration -- I'm OK with it if the cert was revoked, but that's a totally different discussion); 2) taking more control of people's local software than many people are comfortable with, especially considering that their main market is tech-savvy people, who tend to be more sensitive to this than most; 3) making some of these things opt-out rather than opt-in, giving the perception that they may value data collection and control more than their users' privacy.
For what it's worth, the (initial) mechanism for disabling add-ons (your 1) has been present since before Firefox 1.0. It was designed to quickly deactivate any malicious add-on as soon as it was detected, before it had a chance to do too much damage. In my books, that's a good thing.
Here, the mechanism that kicked in was the protection against add-ons that could have been signed with stolen credentials, which would make them clearly malicious.
Of course, it turns out that the problem was an expired cert, so a bug/human error. But generally speaking, I think that 1 is good.
> It was designed to quickly deactivate any malicious add-on as soon as it was detected, before it had a chance to do too much damage. In my books, that's a good thing.
I hate this attitude from security people so much. If, for the sake of fighting malicious code, you are crippling the software's usability or my user experience, you are the malicious code.
I hate that attitude from entitled users so much. If you don't want security, you're welcome to have a malware-ridden system, but don't think that this means all users should have to put up with malware-ridden systems.
I wish that were true, but in fact I have no way to disable this and similar amazing security enhancements. The monthly device-bricking Windows updates, for instance.
If I can't do anything with my hardened computer, I don't care if it is eaten alive by malware; it is useless anyway.
At work, as the guy who has to fight, on behalf of the sysadmins and the users, dozens of clueless security advisors who are hardening everything according to security best practices written by similarly clueless experts, I'm seriously astonished by how common this backward thinking is. If you are blocking access to all users' PDF files, for instance, you are the malware: you are disrupting business operations and annoying everyone.
This petulant antagonism ("You are the malware!", "No YOU ARE!") between users and security is contrary to everyone's interests.
Go sit in separate corners, both of you. Think really, really hard about how both of your jobs are critical to the long-term success of the business. Don't come back until you've meaningfully internalized that.
I'm having a hard time understanding what you're referring to. All I'm saying is that I had a perfectly working system, and now it no longer functions properly. Why anyone would see that as more secure is beyond me.
Similarly, when "security best practices" lead to hundreds of my users losing SSO access to their BI system, or hundreds of printers rendered unusable because a security update requires administrative permissions to reinstall the same drivers that were working perfectly so far, I don't care about your security benefits. They are supposed to defend against exactly the kind of damage you are causing. You are already hurting the organization with thousands of working hours lost, and everyone is frustrated, as a bonus.
> If you don't want security, you're welcome to have a malware-ridden system
No, I am apparently not. Microsoft, Apple, and others insist on making it difficult. At least I found out today that I can install unsigned Firefox extensions once I switch to a special "unbranded" build. I'm glad Mozilla, at least, still offers that.
Here's the thing: I disable a lot of the security stuff you're not supposed to disable, when I can. I use a Jailbroken iPhone. My Mac has SIP and Gatekeeper turned off. Windows Defender is turned off on my gaming PC, and I lower Microsoft's driver signing requirements to the greatest extent allowed. I also ran an unpatched day-1 build of Windows 10 for around four years, with the autoupdate system forcibly neutered. (I now run LTSB, instead.)
I have never been bitten by a virus, ever†. I don't know if that's because of all the security measures I'm not able to turn off or because I've been lucky or something else. I suspect it's because I don't run dodgy software. Or maybe my life is a lie and all my devices have been infected for the past decade, and I never noticed.
In the meantime, I'm not seeing the upside to software forcing hardened security.
---
P.S. While I share their frustrations, I don't endorse the GP's attitude. I know that a lot of people really are doing difficult work with the best of intentions.
† Except for a handful of times when I was testing suspicious software in a disposable VM. That doesn't count for obvious reasons.
Just because you haven't been affected by a virus doesn't mean you never will be. For example, the patch for the zero-day exploited by WannaCry went out over Windows Update a few months before WannaCry existed. I personally use Linux, so Windows Update doesn't exactly exist for me, but I still update my system whenever such updates are available. SIP and driver signing are both mechanisms to prevent the installation of rootkits. If you do get a virus, such mechanisms would prevent it from hiding itself or causing more damage to the system.
Not running dodgy software is indeed a very effective way to avoid viruses, but that doesn't mean you shouldn't take more security precautions if they are available. What if, for example, malware exploits a zero-day in your browser and successfully installs itself without any interaction from the user? Windows Defender and related antivirus could detect malicious activity from such malware and remove it on Windows, and Gatekeeper/SIP/driver signing and related systems could severely limit its impact.
Mozilla probably introduced extension signing to prevent less technically inclined users from having adware installed into their browser. X.509 introduces certificate expiration, and X.509 is the most widely used mechanism for signing things. The only reason you have this problem is because the folks at Mozilla forgot to renew that intermediate certificate. While I agree that it should be possible for users to disable extension signing, as users should be able to do whatever they want on their own system, you shouldn't blame them for forgetting to renew a certificate associated with an additional security measure built to protect users from malware.
> The only reason you have this problem is because the folks at Mozilla forgot to renew that intermediate certificate. While I agree that it should be possible for users to disable extension signing, as users should be able to do whatever they want on their own system, you shouldn't blame them for forgetting to renew a certificate associated with an additional security measure built to protect users from malware.
Well, we agree on everything, in that case. :) I have no problem with sensible defaults that can be adjusted, nor do I take great issue with Mozilla's specific mistake in letting the certs expire.
Edit: Also, just noting that I only lessen security measures when I have a reason, not because I'm rebelling against security practices or something. I use a Jailbroken iPhone because I run Jailbroken software, and I disable driver signing because I use unsigned drivers. It's not that I don't see the risks, so much as I'm very unconvinced the risks are worth the downsides, for me.
Sorry, but what is the difference between a secure system that allows central control and a malware-backdoored system?
All that differs is the promise of non-maliciousness. Which often does not hold up, because money is corrosive to those little centralized empires of "all-can-fail-but-me".
Security is diversity, as in having a non-centrally-controllable ecosystem that is not a monoculture. Your updates are the danger; your urge for control is the forest fire.
Linux is not secure because it's updated often. As package-maintainer takeovers have shown, that is even an attack vector.
It's secure because it's fragmented into a thousand small populations, which offer no financially interesting attack vector for a large-scale takeover.
Linux's diversity also makes it difficult to package and distribute non-malicious software, so I'm not sure that's the poster child for how to do security without compromising usability.
(The situation is admittedly getting better with Flatpak.)
It's just the IBM mainframe priesthood reasserting its dominance, lurching from the tomb of computing history to save us all from ourselves. Nothing to see here.
Today, more than half of my open tabs disappeared in an instant, and were not even an option to re-open until either I waited around ("up to six hours...") or manually installed the workaround. All of my in-progress work in any of those tabs? Gone.
That absolutely qualifies as crippled usability. The mere fact of such a thing being possible is a usability defect. On what basis do I trust that my work is not going to disappear on me like that again?
Firefox will continue to have bugs. All software will continue to have bugs. I'm so sorry that you lost some tabs in your browser but shit really does happen and acting like this is some violation due to overzealous security controls is inane.
I did not say "all trust". Please don't presume to inflate my explicitly stated position — especially while also minimizing the impact this incident had on me, and others. I did not merely "lose some tabs"; those, I could just re-open. I lost work. That data, effort, and time are gone.
If you think this clownshoery hasn't cost Firefox any trust, then you're being as naïve as you accuse me of being "absurdly overdramatic" and "inane".
Bugs are a thing, totally conceded. Sloppy certificate management is, too, but it's an entirely other class of thing. Deliberately conflating them is at least as disingenuous a debating tactic as pointing at Chrome, which is utterly irrelevant to this incident. That's straight-up "whataboutism".
Full stop, this was foreseeable. This was preventable.
EDIT: Phrasing.
EDIT 2: I won't respond further to the same kind of tone.
Alright, I apologize for the tone. It's unnecessary to make something like this into a heated discussion.
That said, the part I was referring to is:
> The mere fact of such a thing being possible is a usability defect. On what basis do I trust that my work is not going to disappear on me like that again?
The possibility of a bug happening is hardly a usability defect in my mind. Or if you want to call it one, it seems like a perfectly reasonable one - this was a defense born out of necessity when malicious extensions were more of a problem.
And I think that the "On what basis" question definitely implies a total lack of trust, but sure, maybe not. The basis is that this is a single instance of a failure over the course of the feature's lifetime, for a feature that has existed for absolutely ages.
I pointed to Chrome as an example of similar issues cropping up across codebases to show that these sorts of bugs do happen. I don't consider that whataboutism.
All bugs are foreseeable and preventable. Systems are complex. I think you're putting the issue in a very unfair light, even though it's very reasonable to be upset about time and effort that is lost because of the issue.
First, thank you for responding in a manner that invites a response, rather than demands refutation.
I understand your perspective, and appreciate your recognition of my own. That said, if you think I'm putting the situation in an unfair light, I think you're downplaying it at least as much.
In my eyes, this is no mere "bug"; it's an abject process failure. As a reply to another of my comments in this discussion suggests, this is more on the level of, "Oops, we forgot to renew our domain name...", than it is, "Gosh, we didn't validate the pointer returned by the frobnitz function, when the whoozle isn't initialized yet..."
Dealing with expiring certificates before they expire is covered in like the second week of Certificate Management 101, as it were. If it's necessary to stick an intermediate cert in there, then it's doubly so to keep it current.
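To make that concrete, here's a minimal sketch of the kind of expiry check that catches this class of mistake, using only Python's standard library. The 30-day threshold and the April check date are illustrative assumptions; the only incident-specific fact is that the intermediate expired on 2019-05-04.

```python
import ssl
from datetime import datetime, timezone

WARN_DAYS = 30  # illustrative alert threshold, not a Mozilla policy


def days_until_expiry(not_after, now=None):
    """Days remaining before a cert's notAfter date.

    not_after is the OpenSSL-style string you get from a parsed cert,
    e.g. 'May  4 00:00:00 2019 GMT'.
    """
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                    tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expiry - now).total_seconds() / 86400


# The intermediate in this incident expired on 2019-05-04 (UTC).
# A daily cron job running a check like this would have flagged it
# weeks in advance.
remaining = days_until_expiry("May  4 00:00:00 2019 GMT",
                              now=datetime(2019, 4, 10, tzinfo=timezone.utc))
if remaining < WARN_DAYS:
    print("renew soon: %.1f days left" % remaining)  # prints: renew soon: 24.0 days left
```

Wired up to a pager instead of `print`, this is the whole "second week of Certificate Management 101" exercise: enumerate every cert you've issued, compute days remaining, alert well before zero.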
> The basis is that this is a single instance of a failure over the course of the feature's lifetime, for a feature that has existed for absolutely ages.
The plural of "anecdote" isn't "data", but an existence proof is an existence proof. That the problem has gone from zero occurrences to one, no matter over what period, literally makes it infinitely more likely to recur, if you want to be that reductive...
As a former ops guy, I find the items you list resonate with me. On the one hand, I do feel for the developers and hope they come up with a fix soon. On the other hand, this is frustrating, and there were some bad decisions made here that a typical ops person would have pointed out, and been ignored. Ignoring the ops guys until something breaks is something that has been consistent in my experience. Anyway, for the sake of having an alternative to Chrome, I hope they fix this yesterday.
One of their biggest 'selling points' is that they protect your privacy. It's really, really off brand for them to be distributing a critical bugfix through a telemetry collection channel.
It's also entirely predictable that a non-negligible fraction of users, after enabling Studies and verifying everything works again, will... simply go on with their lives and forget to disable Studies.
I also don't understand why the certificate graph is not exposed through a user interface, so that the user can add and remove certificates, or enable and disable certificates at their own discretion. This should have been obvious when the certified add-ons were introduced. Then all they would have to do is host the certificate file on their own domain and everyone could follow the simple steps in the GUI to replace the expired certificate...
I might be wrong, I don't have data, but, as far as I can tell most users either use the installed browser or chrome (or whatever their tech savvy friends/relatives install for them).
> using cert expiration -- I'm ok if the cert was revoked but that's a totally different discussion
CAs can delete certificates from their revocation lists after expiration, which means that you can't tell the difference between a certificate that was never revoked but merely expired and a revoked-and-then-expired certificate.
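A toy illustration of that information loss — this is not real PKI machinery, just a dict standing in for a CRL, with made-up serial numbers and dates:

```python
from datetime import date

# serial number -> revocation date; a dict standing in for a CRL
crl = {1234: date(2019, 3, 1)}           # cert 1234 was revoked in March

cert_expiry = {1234: date(2019, 5, 4),   # both certs expire the same day
               9999: date(2019, 5, 4)}   # cert 9999 was never revoked


def prune(crl, cert_expiry, today):
    """CAs may drop CRL entries once the revoked cert has expired anyway."""
    return {s: d for s, d in crl.items() if cert_expiry[s] > today}


pruned = prune(crl, cert_expiry, today=date(2019, 6, 1))

# After expiry, a relying party checking the CRL sees the same thing for
# the revoked cert and the never-revoked one: no entry at all.
print(1234 in pruned, 9999 in pruned)  # -> False False
```

That's the whole point: once a cert is past its notAfter date, "absent from the CRL" no longer tells you whether it was ever revoked, which is why expiry enforcement can't simply be replaced by revocation checks.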
As an alternative perspective, I'm totally fine with FF disabling the extensions when the cert went invalid, and I'm also happy that it auto-updated itself to fix the issue. To me the optics are pretty good: a mistake happened and they were able to recover pretty fast, and my browser wasn't exploited by bad actors in the meantime.
Public perception. For instance, one of the first comments on their post is this:
> Why not just post a link to the fix that can be installed WITHOUT enabling Studies? This sounds like a clever plan to get more people to share their data via Studies…
I definitely don't agree with that guy, and I doubt that's a majority opinion, but asking people to use a workaround that benefits them (Mozilla) after they broke things for a lot of people is bad publicity for sure. For what it's worth I think Mozilla is doing the right thing here, just it's not going to make them look great.
I'm not sure I care how unfair the characterization is. I heavily use container tabs — ahem, "use containers" — and all of my open container tabs disappeared at once, with no indication of why or what to do about it, when this happened. I lost an absurd amount of work and state because of that. I only knew what caused it by inference, because I'd just previously read The Fine Article (which, btw, gave no indication that losing state like that was something I should expect, merely, "No active steps need to be taken to make add-ons work again"...)
I still prefer Firefox over all the other browsers, and will continue to use it, but the project has lost a lot of trust and goodwill over this.
The optics are indeed awful, and this was fully preventable. Firefox fucked up, full stop.
Based on the timing of initial tweets and blog posts on this fiasco, I'm pretty sure I was in the first 10%, if not first 1%, of people who experienced this. And I was in a plane at 36,000 feet trying to work on a cross country (U.S.) flight when suddenly about 130 tabs in 7 windows disappeared. Really, REALLY bad. Panic, frustration, confusion...
I was more than 50% sure that all was not lost forever, that it was some "glitch" (Extensions all showed the same bloody red status), but I was tweaked. I work in security (embedded systems, not computers/IT) so I have a very good understanding of certificates, TLS, PKI, etc. There are many ways things can get out of whack if the people in charge screw up.
Regardless, this is embarrassing, dare I say shameful (almost up there with "Ooooppsss... we just lost our domain - it expired and no one thought to renew it").
Come on, guys, get it together. Have a procedure, document it, practice it, stay in front of it.
EDIT: after installing the fixed XPI, I have to sadly report that all data has gone. All my carefully-managed containerized life was wiped clean. Heads should roll.
Complete shambles. And the worst thing is, I suspect it's all a plot to have more people opt in to the shitty telemetry. Otherwise, why not push an update through the usual channels? Had it been a security-related fix, would they have used "studies"? I bet not.
Switch to what? Chrome? Because you don't like having to re-opt-in to studies? That would be ludicrous given Google's privacy track record. Opera? They're owned by a Chinese investment firm now. Edge? MS's whole OS is based on data collection.
Have they stopped whitelisting Facebook and Twitter in their "tracking blocker" yet?
The BS coming from their blog post surrounding this whitelist makes me distrust them completely: "Loading a script from an edge-cache does not track a user without third-party cookies or equivalent browser-local storage" (...) "Given that most users on the web share IP addresses with other users because of NAT, it is unlikely this can be used to reliably track users"
Not only is it quite possible to know whether a user is behind CGNAT, meaning the tracking works just fine for millions of users, but carriers have been known to inject user IDs into the replies of users behind CGNAT.
The handling, or the bug itself? Sounds like the damage control is fine (although it's worrying that they have no way to distribute hotfixes more rapidly than this).
The bug in the first place, on the other hand, seems pretty negligent. Not that it's incomprehensible, just pretty stupid.
Lynx, Midori, Safari, IE6. Maybe Palemoon; the Samsung browser is another possibility. Maxthon was popular for a bit. Maybe you're super 1337 and run only curl or wget.
CVE-2016-9179 was published about Lynx in November 2016. Lynx took more than 5 minutes to release an update, with the fix included [1] in 2.8.9dev.11 not reaching a production release until July 2018 — almost two years after the CVE was published.
With a response time like that, I don't see how Lynx will satisfy their "5 minute fix" need any more than Firefox did.
I am positive my first comment on this thread was not an order of magnitude over the line, if we can quantifiably measure such a thing, and it got censored anyway. "I just switched my browser. Bye bye Firefox."
I mean if you want me to be reflective it's really not going to work if we're refusing to admit that either both of these comments should have been censored or the one above should not have been censored. But I get it, life's not fair and the squeaky wheel gets the grease.
Firefox zealots get to railroad commenters on HN, noted, I certainly won't engage in these discussions again.
No, we're not flagging you for your use of swear words; we're flagging you for your lack of constructive contribution to the conversation. You are clearly not interested in having a conversation.