
Webmasters should have thought of that before littering their websites with hundreds of off-site scripts and shipping all that data and behavior off to dozens of tracking companies.

"great products". Yeah, websites used to be much, much better before loading every bit of text with a remote javascript.

Here's a behavioral data point: Go back to making good websites and stop leaking private data everywhere. That's a great product.




>loading every bit of text with a remote javascript.

This update is only about a select list of "bad" domains, though. I don't think most users would want to block literally all third-party scripts. (Source: I use uMatrix set to do exactly that, and every other site I visit requires a complicated ritual of unblocking layers of scripts, frames, and XHR. Don't even get me started on static sites that display blank without scripts from a dozen CDNs. At least it's a good opportunity to rethink whether I really want to go there.)


What good is blocking Google tracking subdomains while neatly packaging the exact same data and sending it off to Google's CDN and Tag Manager?


My website is broken by this feature [1]. It does not leak private data, despite what Mozilla devs said here [2]:

> According to the original screenshot in the thread, your web page is sending an HTTP request to https://www.reddit.com/api/v1/access_token. If the user has previously visited reddit.com, this request will include the user's reddit cookies normally. Also, the HTTP request I mentioned before has a Referer header that points to the address of your web page by default in most browsers. So Reddit will be able to tell which user has visited which page on your site. In other word, Reddit will be able to see the user's browsing history, as if they had access to the user's computer.

> Note that nobody is blaming you or your site here.

[1] https://revddit.com/user/rhaksw

[2] https://groups.google.com/d/msg/mozilla.dev.privacy/XO84Ezrw...
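
For context, here is a minimal TypeScript sketch of the kind of request the Mozilla dev describes. This is hypothetical, not revddit's actual code; with fetch, the cross-site reddit cookies ride along only when the request opts into credentials:

    // Hypothetical sketch of the cross-site request described above.
    // With credentials included, the browser attaches any reddit.com
    // cookies it holds, and (under the default referrer policy of the
    // time) a Referer header naming the page that made the request.
    const resp = await fetch("https://www.reddit.com/api/v1/access_token", {
      method: "POST",
      credentials: "include", // reddit learns which logged-in user is asking
    });
    const data = await resp.json();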


I don't get it. The Mozilla dev explained, as you quoted, that the API access sends the reddit cookie to reddit while not being on reddit. That's leaking private data. "In other words, Reddit will be able to see the user's browsing history, as if they had access to the user's computer." You know who owns reddit, right?


> the API access sends the reddit cookie to reddit while not being on reddit.

A few things:

(1) Why does it matter in this case? Under what scenario can you imagine reddit abusing the knowledge that certain users are reading metadata about reddit accounts off-site?

(2) It seems to me Firefox could selectively choose not to send cookies and the Referer header in this case, rather than rendering entire sites broken. That way, sites accessing social media APIs could still function, no data would leak, and everyone would be happy (see the sketch after this list).

(3) Hundreds of sites are broken like this. An issue tracking them has been open for 5 years [1]. The list used to identify "tracking" websites is huge and not maintained by Mozilla [2].

(4) Due to this list, it is virtually impossible to build a web service that queries any social media site and runs on Firefox under default settings, which significantly limits the apps that can be built. The devs' recommendation was that I move the code to a server, which would be expensive to maintain and would reduce usefulness to users by hiding the code and introducing per-IP rate limits from the external API, in this case reddit's.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1101005

[2] https://github.com/disconnectme/disconnect-tracking-protecti...
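
The sketch promised in (2): these are standard fetch options a page can set today, and what a browser could apply automatically. Whether reddit's endpoint works without cookies is an assumption here:

    // Sketch of (2): make the same request but strip cookies and the
    // referrer, so nothing identifying leaks to reddit. The options are
    // standard fetch API; whether this particular endpoint still works
    // without cookies is an assumption.
    const resp = await fetch("https://www.reddit.com/api/v1/access_token", {
      method: "POST",
      credentials: "omit",           // never attach reddit.com cookies
      referrerPolicy: "no-referrer", // don't reveal the requesting page
    });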


I would argue that 99% of sites sending cookies and personal information to social media do so for tracking purposes.

You may be the exception. But it's a privacy tradeoff that benefits the majority.

And let me say: if webmasters had shown any respect for privacy in the first place, maybe this would not have occurred.


> I would argue that 99% of sites sending cookies and personal information to social media do so for tracking purposes.

You could click on the reports on this page [1] to find out which sites are broken. Maybe I'll do it when I have a chance.

Any site that uses an API published by a domain on the disconnect.me list is rendered unusable. That list is 3,000 domains long, so even if on average only one legitimate, non-tracking site accessed each domain, that would still be 3,000 broken websites.

> You may be the exception. But its a privacy tradeoff that benefits the majority.

I don't know that a tradeoff is necessary. It seems to me it would be possible to not send cookies for the 3,000 domains in the disconnect.me list when tracking protection is enabled (a rough sketch follows below).

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1101005
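
To make that idea concrete, a conceptual sketch, emphatically not Firefox's actual implementation, of stripping credentials for listed domains instead of blocking the request outright:

    // Conceptual sketch, not Firefox's actual code: for domains on the
    // disconnect.me list, send the request but without cookies or a
    // referrer, rather than blocking it entirely.
    const listedDomains = new Set(["reddit.com" /* ...the other ~3,000 */]);

    function stripIfListed(url: string, init: RequestInit = {}): RequestInit {
      const host = new URL(url).hostname;
      const listed = [...listedDomains].some(
        (d) => host === d || host.endsWith("." + d),
      );
      return listed
        ? { ...init, credentials: "omit", referrerPolicy: "no-referrer" }
        : init;
    }

    // Usage: await fetch(url, stripIfListed(url));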



