Getting some semblance of control over my browsing habits and enabling the implementation of multiple-proxy-based browsing strategies is one reason why I wrote chrome-private.sh [1].
I go through hundreds of disposable browsing profiles every day.
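The core of such a script fits in a few lines of shell (a minimal sketch, not the actual chrome-private.sh; `--user-data-dir` and `--no-first-run` are real Chrome flags, and the macOS binary path shown is one common location):

```shell
#!/bin/sh
# Launch an isolated Chrome instance with a throwaway profile,
# and destroy the profile directory when the instance exits.
profile="$(mktemp -d)"
trap 'rm -rf "$profile"' EXIT INT TERM

# --user-data-dir gives this instance its own profile directory;
# adjust the binary path for your platform (macOS shown here).
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
  --user-data-dir="$profile" --no-first-run "$@"
```

Every launch is a fresh instance with zero accumulated state, and the trap guarantees cleanup even on Ctrl-C.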
Firefox containers do the same thing with a lot less work. If you add the Multi-Account Containers extension and the Temporary Containers extension, you won't have to log back into everything all the time but will still get isolation.
Make sure to disable container sync when using this combination. It's easy to accumulate hundreds of temporary containers which somehow hoses your Firefox profile on Mozilla's sync server (and can also produce gigabytes of logs in your local profile directory).
It seems it's currently impossible to recover your profile once you reach "Maximum bytes per object exceeded" - the remote end won't even let you delete the offending data.
Not only that, you can effortlessly launch Firefox with temporary profiles to emulate what the script in the OP is doing. I think that Firefox's strict privacy setting + containers are all that are needed, though.
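A Firefox equivalent is just as short (a sketch; `-profile` and `--no-remote` are real Firefox command-line options):

```shell
#!/bin/sh
# Disposable Firefox instance: fresh profile, removed on exit.
profile="$(mktemp -d)"
trap 'rm -rf "$profile"' EXIT INT TERM

# --no-remote prevents joining an already-running Firefox instance;
# -profile points this one at the throwaway directory.
firefox --no-remote -profile "$profile" "$@"
```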
Well, there are definite benefits to being in full control of profile segmentation.
You have no idea what the extension is doing unless you audit it. And even then, there are second-order effects that could surprise you [1]. Plus, the extension writer has to maximally cover the set of possibilities where a site can store/retrieve data. These are a lot of unknowns you're trusting and assumptions you're making.
It's far simpler to treat the profile directory as contaminated waste and nuke it at will.
The only assumption you're making is that the browser implements a profile in a given directory properly.
Sure, there's probably less exposure from nuking the profiles directory, but I don't think you're being fair to containers:
- Containers are a feature built into Firefox, these extensions just expose a UI for it. The Multi-Account Containers plugin [1] is published by Mozilla. You don't need to trust anyone but Mozilla to use that base set of functionality.
- The container functionality in Firefox is the result of some work from the Tor Browser being upstreamed into Firefox [2]. It seems reasonable to assume that it's well-implemented.
- The limitations of the extension that you linked to don't seem any worse than your profile-segmentation approach. It's just saying that it's possible for multiple websites to get opened in the same container, which is similar to how you could end up opening multiple websites in the same profile.
I reworded my comment. My point remains that a lot of possible issues disappear if I just choose to destroy the entire directory. There is a clear difference between one Chrome instance (and all its associated windows/tabs) segmented into one profile directory and in-process segmentation. The issues at [1] seem to invoke unexpected behavior in that you're asking for a new temporary container but you can't know ahead of time if that's what you will get. I'm writing "seem" because it's not clear from the description how that interaction works. They could be better worded.
With segmentation enforced at instance boundary (rather than in-instance), there is no unexpected behavior of this sort. All links open in the segmented instance that the browser window/tab you're using belongs to. If you want a new container, you start a new instance and you know that's exactly what you will get. There is no possible "fail open" result. Note that I'm not saying the Firefox behavior you described is a major issue, just that it proves you can have unexpected scenarios.
Moreover, jedberg is correct in that cross-profile data leaking is possible (partly what I meant by "implementing profiles" properly), except that it's very easy to see if that's happening without auditing Chrome. Use a tool that records all filesystem operations (e.g. dtrace on macOS).
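On macOS you can capture Chrome's filesystem activity with fs_usage (dtrace needs SIP relaxed on recent releases, so fs_usage is the easier tool) and then flag anything touched outside the profile directory. The two-step recipe below is illustrative; the log path and filter pattern are assumptions:

```shell
# 1. Record Chrome's filesystem operations (macOS, needs root):
#      sudo fs_usage -w -f filesys "Google Chrome" > chrome-fs.log
# 2. Flag any touched path outside the instance's profile directory:
profile="/tmp/chrome-profile.XXXX"   # whatever your wrapper's mktemp returned
if [ -f chrome-fs.log ]; then
  awk -v p="$profile" \
    'index($0, p) == 0 && /Library|Caches/ { print "OUTSIDE PROFILE:", $0 }' \
    chrome-fs.log
fi
```

Any line the filter prints is a candidate for cross-profile data leakage worth investigating.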
At the end of the day, I choose one set of trade-offs over another.
Of course, all security is a trade off between convenience and privacy. But the hassle of a new profile all the time seems like a lot of extra inconvenience for little extra privacy.
So far in practice I've never seen the isolation fail. The combination of the two plugins seems to counteract each one's failures.
And Mozilla makes the multi-account-container plugin. I trust them a lot more than I trust Google to make Chrome not leak information across profiles.
I can say that I would prefer to use Firefox instead of Chrome, except that Firefox has no OSA scripting endpoints on macOS (a real shame) and I've written a lot of code that depends on Chrome APIs over the years.
Even if I did move to Firefox however, I would still implement a directory-based profile segmentation strategy.
It's an OS-wide scripting framework that a lot of (most) applications running on macOS support. The beauty of it is that it works on top of Apple Events and is layered. An application may inherit a standard set of exposed scripting endpoints and may also implement its own specific set of behaviors.
Even the standard inherited set of endpoints can be extremely powerful due to accessibility functions that are baked into the entire operating system.
Chrome offers both. Firefox offers nothing. Of course applications can additionally offer their own scripting APIs (Chrome has DevTools, I'm sure Firefox has something equivalent) but a major advantage of OSA is its stability and uniformity. It's a hidden treasure for power users and obsessive feedback loop minimizers. Alas, when it comes to regular users, the most Apple managed to do with it was Automator.app, a very constrained experience that did not really take advantage of the underlying power.
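To make that concrete: Chrome's OSA dictionary exposes windows and tabs, so ordinary shell scripts can drive it through osascript (macOS only; the second command requires enabling View > Developer > Allow JavaScript from Apple Events in Chrome):

```shell
# Read the URL of the frontmost tab:
osascript -e 'tell application "Google Chrome" to get URL of active tab of front window'

# Run JavaScript inside the current tab and capture the result:
osascript -e 'tell application "Google Chrome" to execute active tab of front window javascript "document.title"'
```

Firefox exposes no comparable dictionary, which is the gap described above.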
There is no hope of privacy on the internet today without some sort of "tumbling" strategy. It is wholly impossible for anyone to prevent all sources of their information from leaking, and once your data from one source is out there, the rest of your precautions are potentially useless. So the only viable strategy is to muddy the waters by creating enough spurious data that an attacker can't know what is real and what is fake.
Different concerns. Ungoogled Chromium removes Google-specific features from Chrome. That is unrelated to non-Google-specific browsing features such as providing cookies back to hosts on repeat requests in the same profile.
I'm doing something similar but less ephemeral. I have many web profiles, but they are dedicated: for example, I have a Google profile where I open Gmail, Drive, Photos, etc. I have an FB one, a reddit one, an HN one, etc. About 15. Then Firefox for my regular browsing.
That's pretty cool ... I'm left wondering what a visualization of the state accumulated by visiting different websites would look like, e.g. google, twitter, nytimes, etc.
It's also used for fingerprinting, because there are a lot of different GPU configurations with slightly different reported features and slightly different e.g. rounding, dithering, antialiasing, and other rendering behaviors.
Mostly for security, rather than fingerprinting, concerns.
Fingerprinting avoidance is a complex issue and it's reached the point where you can't simply disable JavaScript / WebGL and assume you're ok. One needs to run additional extensions to project a browser-view that blends in. You can use chrome-private.sh as a base layer you can build on to get there, but you are not going to get it by default (which is why I'm not making any anti-fingerprinting claims in the README).
"According to the suit, the company collects information, including IP addresses and browsing histories, whenever users visit web pages or use an app tied to common Google services, such as Google Analytics and Google Ad Manager."
I'd really appreciate some more technical details here.
Does this include web server logs that record incoming IP addresses?
Is the expectation here that Chrome would set a DNT header in incognito mode and Google properties would then obey that header?
Which there is a warning of every time you open an incognito window. It's not fine print either, it's one of about six bullet points. "Your activity might still be visible to ... Websites you visit." This one is going nowhere I suspect. Just because a lawsuit asks for big damages does not mean the plaintiffs are likely to prevail on their claims.
That's not quite how the law works. If I tell you to get off of my property, and you stand on my lawn, you're trespassing, even if I didn't put up a military-grade wall. Or if I have a basic chain link fence, and you climb over it, you're clearly trespassing in situations where without the fence, if you were to incidentally walk across my lawn, you'd be okay.
The point of digital trespass laws is very similar. Just because your technological measures are imperfect (as the bullets say) doesn't authorize you to circumvent them.
What's damaging in this case is that Google created the signalling mechanism, gave it to users, and then intentionally chose to circumvent it.
Courts are also not machines. A lot of this comes down to intent and reasonableness. If you're fingerprinting my browser when I'm in incognito, that feels like an intentional digital trespass which courts would probably recognize. If you're incidentally collecting my IP in your server logs, that feels okay. Programmers get caught up in this all the time -- they read laws and contracts like code (strict literal meaning). Lawyers read them looking at things like impact, intent, whether things are substantially similar, and so on.
> What's damaging in this case is that Google created the signalling mechanism, gave it to users, and then intentionally chose to circumvent it.
Absolutely not.
Incognito mode, as in every browser, means your history isn't recorded on your machine. And as GP said, Chrome even explicitly explains your ISP or websites may still track you.
And Chrome isn't doing anything to circumvent it.
Intent and reasonableness here is perfectly fine on Google's part. Incognito mode successfully prevents storing history on your machine. Analytics successfully track you. And nobody's being misled. Incognito mode has never been advertised or marketed as anti-tracking, because it's not supposed to be. It's just a convenience to clear cookies and history, nothing more.
If there is a law against Google collecting analytics data, then it's going to be illegal whether in incognito mode or not. That would be the situation with trespassing. There's a law against it.
But there is no such law against collecting browsing data. So there has to be some other legal theory under which Google would be liable. One example would be deceptive practices or fraud, where Google says one thing then does another. Unfortunately, we don't have the full text of the complaint yet as far as I can tell. But your "that's not how the law works" dismissal is actually going to be totally irrelevant to the complaint, because there's no legal comparison between trespassing and collecting browsing data.
With trespassing, if I cross your lawn, I'm probably okay. If I cross your lawn and there's a "no trespassing" sign or a fence, I'm probably not okay. It's vague.
There are equivalent laws for technology. For example, you have the confusingly vague CFAA. If I've indicated to you that I don't want you grabbing files from my computer, and you do, you've likely broken it. It's even called digital trespass. On the other hand, if you have a public FTP server up, I can grab files. If you have a public FTP server up, and you've told me I'm not permitted to access it in person, or in an automated banner, or in an automated banner which my browser never shows me, things get legally complex.
Pretending "private browsing" or "incognito mode" doesn't act like such a sign isn't very honest. Google disclaims this information might be visible to web sites, which is honest, but most indications are given that it's intended to help screen some of that. Intentionally working around incognito mode is almost certainly at least somewhat illegal.
While I share your skepticism of the suit, I think that that line may be seen as misleading. Google Analytics is NOT a website I visit, in general. Still, despite the Incognito mode, GA may well track me across the internet.
Personally, I always took Incognito/In-Private browsing to be just a "delete cookies and history on exit" mode. But the way it is presented may suggest to many people that it is significantly more than that, even with the disclaimers in Chrome. I would not hold my breath for a successful suit based on that, though.
Saying that "Google Analytics is NOT the website I visit" is the same as saying "React is NOT the UI I'm using" or "Stripe is NOT the store I'm buying from".
Modern day websites use readily available modules to build out functionality. Just because those modules were originally built by someone else doesn't mean that it's not part of the website you visit.
I am aware of this as a developer. But as a regular user, being told that visiting HackerNews in incognito mode won't prevent HackerNews from tracking me doesn't tell me that it also won't prevent Google and Facebook and who knows how many others from tracking me.
Basically, this is one of the key ideas behind the GDPR: that I should have a legally-enforced expectation that when I'm agreeing to share my data with X, I'm not implicitly agreeing to also share it with Y and Z; and that it is X's responsibility to see to this.
So sure, X is free to use GA, but as a user I should be able to expect that Google doesn't learn I've visited X's site.
And comparing GA to React is really disingenuous, especially in this context. One is an active monitoring solution that hoovers up data and sends it to a 3rd party, the other is a static library that is entirely run in my own browser, or sometimes on the origin server as well.
> My argument was that drawing a distinction between a "site" and modules that are part of that "site" but are from other parties is dubious.
It is not only not dubious, it is in fact enshrined in law. I brought up the GDPR explicitly to highlight this. Specifically in the context of tracking and personal data, there is a distinction between the site I am visiting and the legal entity that is controlling it on one hand, and other entities that it contracts to achieve its purposes.
If your understanding of 'a website' includes all of the 3rd party trackers that it may be using, then the wording becomes obviously correct. I would venture though that this is not the common connotation of the phrase 'you may still be tracked by the website you are visiting', which I believe most people would take to mean more 'the origin server', i.e. 'I may still be tracked by the 1st party entity who owns the site I am directly visiting, but I will no longer be tracked by other parties'.
In fact, by your definition of 'the site I am visiting', incognito mode offers no more tracking protection than regular browsing, as I can never be tracked by anything but the site I am visiting, including Google Analytics, Facebook, and any other ad networks that they chose to use; I am never tracked by any site that I am not currently visiting, obviously.
Incognito Mode states plainly that it DOES NOT prevent the website you are visiting from tracking you.
Another commenter said that this wording implies that it DOES prevent Google Analytics, since it is not part of the site.
My argument was that drawing a distinction between a "site" and modules that are part of that "site" but are from other parties is dubious (also, likely impossible).
Google is not the website being visited, and if it's not a big deal, then why doesn't it say, "Google will still track you using third party analytics, font requests, ad pixels, single sign on, and recaptcha"?
I mean, if nobody really cares, why not be more direct about it?
While they do warn you, it is about time that they implement a real incognito mode, maybe using Tor by default... The current incognito mode only protects your privacy from other people using the same computer, which is not really that helpful, since you could just create another user account to achieve basically the same thing (if your disk is encrypted).
They should because it would be the right thing to do... and Firefox is more popular than the Tor browser, so it would help spread Tor adoption... a lot of people don't understand the benefits of hiding your IP... so they aren't going to download the Tor browser.
It is for me. Tor runs as a daemon, then I have my Firefox proxy settings to always hit Tor.
Onion addresses work, and Firefox refuses to connect if the Tor daemon isn't running.
Paired with Firefox actually honoring my requests to purge all information upon exit (instead of chrome only "kind of" doing it), it certainly works out quite well for me.
It is based on Firefox but it is not Firefox... but anyways, unlike the Tor browser, maybe Firefox should not use Tor in regular mode (only in incognito).
Yes, it's quite interesting, and it borders on semantics. That is, people read "private" here the way privacy is understood in general: that you keep things to yourself. But private/incognito browsing in Chrome means privacy from other people using your computer, not from the biggest user-tracking company.
Maybe instead of calling it "incognito mode", which is totally inadequate, they should call it "temporarily disable browsing history".
Server logs with IP addresses are acceptable to most European privacy regulators if you only use them for a technical purpose such as debugging. And not keep them longer than needed for that.
So practically: logs are fine, delete them after a while. If you store the same information in a permanent database and use it for analysis you're in trouble and should have asked permission.
The fact that the user uses a private window or other means to indicate they don't want to be tracked probably makes this a clearer case.
> The fact that the user uses a private window or other means to indicate they don't want to be tracked probably makes this a clearer case.
That’s a very confusing statement. My server logs don’t filter incoming log entries based on user agent, and certainly not on whether you’re using a “private window” or not.
In addition, the goal of a private/incognito session is to be indistinguishable from regular sessions, otherwise websites can easily discriminate against private sessions (which they’re already trying as hard as they could).
Edit: Wow the number of people on this thread claiming websites should be able to opt people out of logging based on whether they’re using a “private window” (which websites should have absolutely no idea about) makes me question if I’m even on Hacker News.
Edit 2: Chrome sends an X-Client-Data header (which in a sense includes an installation ID, but allegedly has limited entropy) to certain Google properties, and rightfully got a lot of flak for it. It does not do so for incognito sessions. And now we have people arguing that Google should de-incognito incognito sessions to their analytics properties. Crazy stuff.
> Wow the number of people on this thread claiming websites should be able to opt people out of logging based on whether they’re using a “private window” (which websites should have absolutely no idea about) makes me question if I’m even on Hacker News.
Especially since "Websites shouldn't be able to tell if you're in incognito mode" has been highlighted in the past as a privacy ask, yes.
People can at least agree "website shouldn't be able to tell if you're in incognito mode" and "website should not track you if you are in incognito mode" are two mutually exclusive features, right?
How about: "Websites shouldn't do digital fingerprinting to intentionally circumvent incognito mode." Does this make sense to you as a legal argument?
And yes, this does mean that if it comes to litigation, a lot of this will depend not just on what you did but why you did it.
If you write analytics, unaware of incognito mode, you're probably okay.
If you write that same exact code because your boss comes in and says "shad, we're losing A LOT of user data to users in incognito mode. Could you do some kind of digital fingerprinting so we can still track them?" then you might be criminally liable for digital trespass -- you've intentionally bypassed my security mechanism.
That's the kind of feel-good law that ends up very unenforceable because it ignores technical reality. Not a fan.
If the exact same action does the exact same harm and is legal or illegal based on intent, enforcing that law is going to enrich a lot of lawyers but isn't going to practically rope in many company's behaviors.
That's not a proposal for a law. I'm not arguing about how the law ought to work. For better or worse, that's a description of how the legal system in the US works RIGHT NOW.
And yes, it does enrich a lot of lawyers.
Look up the CFAA cases, for a great set of example of how these laws can explode in this exact domain -- people charged with digital trespass who bypassed no or minimal technical measures. And it doesn't feel good either in most of those cases.
To be frank, though, if this gets applied to Google, it will feel pretty good.
Correct; it does not. Which is why steps were taken to minimize remote servers' ability to use secondary signals (such as access to localstore APIs) to make an educated guess about whether the user was operating in incognito mode.
It's not me who doesn't know how it works; it's the people who think "New York Times shouldn't be able to whine at you if you're in incognito mode to go buy a subscription" and "servers should be required to modify how they handle your traffic if you're in incognito mode" are compatible protocol features.
Detecting when users of any browser are using Incognito mode goes against the spirit of privacy, even if it’s not being done to directly track people as such, and any information that some browsers share and others don’t helps add to a browser’s fingerprint.
But you're not using the IP addresses in the logs to circumvent the fact that cookies are being deleted by your user's device right? Because it looks like that's what Google was doing, and that's actively circumventing the decision of the user not to be tracked which is illegal in many places.
I don’t use GA on my own websites, or analytics cookies, or tracking pixels, or whatever. But I sometimes use goaccess on my server logs which tells me the number of unique visitors (based on unique IPs) and such. Whether they visited in private sessions or not, of course. So I guess in a sense I’m “tracking” them through logged IP addresses, but it’s completely orthogonal to whether incognito mode is on.
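That kind of unique-IP count doesn't even need goaccess; assuming the common/combined log format, where the client IP is the first field, it's a one-liner:

```shell
# Count unique visitor IPs in a standard access log
# (assumes the client IP is the first whitespace-delimited field).
awk '{ print $1 }' access.log | sort -u | wc -l
```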
My interpretation of this article is it's related to incognito mode.
How is it circumventing a "decision of the user not to be tracked" when "private" modes usually explicitly state they can't/don't stop websites from logging information?
This is actually no longer true: Google recently added third party tracker blocking when you're in incognito mode. Of course, every other browser does this even outside of incognito mode, but the point stands that Google has actively added features to prevent websites from tracking you in this mode.
At a GDPR presentation that I attended recently in Norway, the message was that IP addresses must be masked if they go to permanent storage. For IPv4 it could be OK to clear the last byte of the address, but in general the message was to save the least amount of information. For example, if the IP is used for locating regions, then save the region, not the IP itself.
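That last-byte masking can be done at archive time with a one-line sed pass (a sketch; it assumes the IPv4 address starts each log line, as in common log format):

```shell
# Zero the last octet of leading IPv4 addresses before long-term storage.
sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\.[0-9]+/\1.0/' access.log > access.anon.log
```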
Unless Google associates your anonymous browsing data from incognito mode, with your regular browsing data from regular mode, I don't see a case here. And I really doubt they do that, it would be such a clear and egregious violation of the reasonable expectation of privacy.
The other possible complaint could be that websites still can collect information on user behavior on the website, even if it is more anonymous in incognito mode. This is expressly what incognito mode says on the tin. You can use it to avoid saving your weird porn history locally, but not prevent websites from knowing what anonymous visitors are doing on their website. If the average consumer isn't tech-savvy enough to get this distinction, I'm not sure what Google could do besides putting this explicit warning in every new tab.
Summary: seems like this case will go nowhere, but still makes for soundbitey headlines and gives people an excuse to rehash their usual gripes that "my data is the next oil"
(IIUC, most ad targeting is still based on you explicitly searching for something you want to buy and ads matching those keywords, or retargeting from a website you've already visited but abandoned your shopping cart at, not some all-knowing profile of your deepest wants and desires).
Disclaimer: I work at Google but nowhere near the Analytics or Ads teams.
I know that Google is making money through the use of my personal data. I wish they were required to state boldly on every page that my personal data is being used and that they are profiting from it. Finally, there should be some steps offered to remedy the above, such as letting me request deletion of the parts of the data I'd find convenient to have deleted, while preserving the parts whose deletion would be inconvenient, such as my ability to log in to sites I commonly visit, so the minimum requirements for easy access are still met.
And while I have no big conspiracy theories at the moment about how Google is doing anything evil, there is certainly no guarantee that something in the future won't impact me. For example, there could be bad actors working inside Google, Google could be acquired, or the government could take control in some way. These are all things that could be dangerous to me in the future if Google continues to preserve large amounts of my personal information.
FYI - Google does offer a series of tools for you to manage your data. If you are signed in there is my activity[1], and takeout[2]. There are options to control targeted ads[3] and auto-delete location and activity data older than 3 months[4].
Google is really good about takeout. It's really bad about maintaining my privacy. The opt-outs are limited in scope to the point of being almost meaningless. I don't mind targeted ads as much as I mind the data being collected about me. Google's privacy tools are a joke.
Google's security tools are a joke too. Google silently drops security support for Android phones after two years, and people unwittingly walk around with zero-day exploitable phones. Chromebooks are similar. If you want to maintain a secure Google Apps domain, you need to pay Google huge bucks. It benefits everyone, especially Google, if the Internet is safe, and Google's attitude here will come back to bite it.
My core issue is the unfulfilled promise of the panopticon.
I'm still waiting for the recommenders, personalizers which help me.
Not boost engagement. Not amplify tiny differences. Not catalysts for virality.
I was on the recommender, personalization team for a high end fashion retailer. Joining, I thought "Woohoo! Teach the computer tell me which dress shirt to buy! Pick the right t-shirts! Find tasteful but understated socks! Finally!"
It took me a while to peel back all the layers to reveal the team's secret sauce. Turns out there isn't any. The most performant algorithm was "stuff you've looked at before" (~70%), followed by "what's hot" and "what's new".
While most of our effort was put into all the Big Data Machine Learning Booyah stuff, I'd characterize the attributable "lift" as little better than noise. Terrible ROI. We would have been MUCH BETTER off improving the data quality, search features, and browsing experience.
(I had some other more radical ideas. A whole thesis built around authenticity and actual engagement. Way past StitchFix. Alas, too weird for the brick & mortar types. Imagine explaining TikTok influencers to your great aunt. But I'd be happy to have someone pay me for a brain dump.)
In conclusion, nothing this last decade has shaken my hunch that digital ads are a giant con job. At least outside of political advertising. (My bro has worked in ad tech for 15+ (?) years. Our spirited debate has never stopped.)
WARNING! We use your personal data to improve your experience using Google products. We believe this will lead you using the service more often and, in doing so, see more ads which we profit from.
Hah as if it did improve the service compared to listening to basic product feedback. YouTube recommendations alone seem to cause people to lose hair, and that’s not even touching ads.
But what am I talking about! Google KNOWS scale, I’m sure it’ll come together eventually.
... and to help funnel investments into promising web properties giving us an insurmountable competitive edge in search. While we could use this as an unfair advantage in any takeover talk, we have a motto and a friendly logo which clearly signal our benign intentions.
Side note, we are spinning out Google into a separate entity, because we are growing so big. The new conglomerate is called Alphabet. It's just a silly accounting action, nothing to see here.
(Disclaimer: I have a big personal gripe with Google, but I don't hate the company in general)
Something has been on my mind for a while.
I see lawsuits against Google collecting / selling personal data and ideas to combat its monopoly in search. What I don't see is a discussion about regulating companies that have data on the majority of the population.
I know for a fact that Google used search insights to inform strategy. By knowing what people search for and modeling our behavior, they have an unprecedented ability to forecast future events. I expect that Facebook and other, lesser-known companies do the same. I believe it is dangerous for a company to have this ability.
I am not an expert in public policy and politics. Would it make sense to have regulatory oversight over all companies that have data on, e.g., over 50% of a country's population?
This has been one of the chief complaints for GDPR, right? While GDPR provides consumer data protection, it also creates regulatory and compliance barriers for new competition entering those markets, with heavy penalties.
Regulations, a lot of times, tend to have the opposite of the intended effect. In this case, you'd need to define what is meant by "having data".
Is having an email or phone enough to qualify?
Maybe yes.
In that case, think of a rapidly growing startup, which breaches that mark (50% or whatever the law says) - and now has to comply with the law.
But the startup is not capable of compliance, because the law was made for behemoths like google.
This startup will go belly up and die soon.
Google's monopoly saved.
Alternatively, leaving it in the public domain for civil suits to be filed has a natural-selection effect. If a company is TRULY big enough, and has that kind of data, someone WILL sue.
> In that case, think of a rapidly growing startup, which breaches that mark (50% or whatever the law says) - and now has to comply with the law.
> But the startup is not capable of compliance, because the law was made for behemoths like google.
If I were 'king of the world' I would consider something like this, but would not have it be a binary 'must comply or exempt' but a spectrum of ranges from 'totally exempt' to 'totally regulated' depending on what percentage of 50% you had.
If you have 5% of user emails, you are responsible for the bottom 10% of regulations and/or you need to fully comply with the regulations for a sample size of 10% of your users.
IDK I need to give it more thought, but first, another zoom meeting awaits.
That doesn't sound very realistic to me. A startup doesn't just capture 50% of any significant market overnight. It will have plenty of time to hire some compliance staff as it grows.
Regulation and civil lawsuits don't serve the same purpose. Regulation is making the rules. Courts interpret the rules in light of a specific situation.
I agree that regulation can be counterproductive. It can create a level playing field or cement the dominant positions of incumbents. So let's have good regulation.
"By knowing what people search for and modeling our behavior, they have unprecedented ability to forecast future events. I expect Facebook and other, lesser known companies do the same. I believe it is dangerous for a company to have this ability."
I have thought of this as "search-based front-running". How much of it goes on, I would like to know.
> they have unprecedented ability to forecast future events.
Per Matt Levine's "anything can be securities fraud if you don't tell your shareholders about it", then having the "unprecedented ability to forecast future events" and not informing your shareholders about it could be argued as securities fraud in court. Something to think about.
One of my favorite articles on that front is where two Capital One fraud analysts used internal purchase data to invest in consumer-oriented brands. Yes, it became insider trading:
Key quote: The Huangs, who are not related, began with a $147,000 investment and together made more than $2.8 million from the trades, a three-year return of 1,819 percent, the SEC said.
I wouldn't be surprised if over the years employees have been making canny decisions on the stock markets. Facebook also has a lot of information via its portfolio of services to enable this too.
And because they didn’t perfect their chat and video calling app strategy, it’s fine for companies to have unrestricted access to personal data without any oversight.
No one is arguing in favour of that, but the statement that all the data lets them perfect their strategies and shut out competitors is provably wrong.
Google+ and Wave were both doomed because they were invite-only. If you have to ration invites, it isn't much of a social network. Invite-only successfully created artificial demand for Gmail because invited users could immediately communicate with non-Gmail users. That didn't work for Google+ or Wave since they didn't interoperate with services that people already used.
How poetic. This lawsuit brought to you by Boies Schiller, the law firm that aggressively intimidated Theranos whistleblowers. I would argue that their actions contributed to the suicide of Ian Gibbons, as well as threatening his widow with legal action after his death.
>I would argue that their actions contributed to the suicide of Ian Gibbons [...]
Anyone who has read Bad Blood [1] should have no doubt about this fact. They destroyed this man and led him to suicide in order to try to save their house of cards.
I finally got around to reading it recently, and I think that aspect of the entire saga was the most startling. Not the fraud, not Holmes' ability to spin to the public and her board, not the work culture that she and Sunny Balwani created for their employees, but the severity and frequency of the real and implied legal threats Theranos employed through Boies Schiller and associates on literally everyone that touched the company.
The way they tried to bully Tyler Shultz [1] was also fascinating. I was impressed by his ability to resist them. I think his parents mortgaged their home to aid in his legal defense.
He tried to warn his grandfather, a member of the board, that something was wrong at Theranos. His grandfather said "Tyler, they can't convince me that you're stupid, but they _can_ convince me that you're wrong." His grandson turned out to be right.
This lawsuit is idiotic. They're claiming Google is doing something wrong by tracking users from websites using ad tech when someone uses their browsers private browsing mode.
Actually I think it nicely highlights the contradictions inherent in Google's monopolistic position. The user clearly expressed an intent to browse "privately" in one Google product - Chrome. The fact that this doesn't then completely block Google Analytics or forward the preference for privacy to other Google properties is a choice that conveniently aligns with Google's ad business but ignores the intent of what the user is trying to do.
Alternatively, this is working as intended since Google isn't acting as a monopoly by tying disparate products together. You get the same experience in chrome incognito as Firefox incognito.
Google runs both Chrome and Analytics, and they do not explicitly warn that other Google products may store your history even in private mode. Some may interpret this as Google lying about what its private mode does. So the case may have merit in theory.
The confusion is that google makes chrome. Plain and simple. A feature (incognito) with a bypass (analytics) is a non-feature since they are both done by the same company. So the practical solution is for them to be not done by the same company.
If you start from the axiom that Google should be broken up, this is a reasonable conclusion. But punishing a company for keeping separate products separate would actually undermine most anti-monopoly law.
They are not separate in the monopoly sense though, are they? Chrome, presumably a cost center, exists only to facilitate Google's other businesses. Nobody has punished Google for keeping products separate; quite the contrary, it has always been about extending dominance in one segment to an unrelated one. So Google could say that they treat all trackers equally in incognito mode, and that would be correct. But if Google Analytics is the revenue center and the cost center is giving it a free pass, the argument that other trackers can also bypass incognito seems weak.
Why is it idiotic to expect Google to respect a user's wish not to be tracked? Half the world has laws that require Google to ask permission. Using private browsing mode is a very clear indication that the user does not want to give them that permission.
It's really not that hard. Google and Facebook just don't want to understand it.
> Using private browsing mode is a very clear indication that the user does not want to give them that permission.
How? Incognito is meant to keep your history clear from other people who use the computer. It explicitly does nothing about websites tracking your activity.
I was under the impression that it wasn't supposed to be possible for a website to tell that the page was loaded in incognito mode; in fact, I thought it was important that they not be able to. As such, how would the page with Google Analytics even know the user was incognito?
I think the OP doesn't mean it's idiotic for Google to respect a user's wish not be tracked. I believe he's referring to lawyers not understanding separation of browser and web services.
I was really hoping this would be a lawsuit about Google's collection of information, period, rather than a quibble like this. Please also remember that Boies etc. are the same scumbags (and that's the correct term) who defended Theranos, harassing those who informed the government.
That every other website in existence is behaving "badly?" Do you think Chrome should tell the website that the user is browsing in incognito mode so that websites will treat them differently? For example, a paywall website with a free article limit would simply not show content to users saying they are in incognito mode.
Enabling incognito mode in Chrome and Firefox even explains that it doesn't stop server-side tracking, so it's not even misleading.
There is DNT for example, which can easily be used to transmit the user's intentions automatically and without unnecessary interaction. The technical means are there, they just need to be enforced, which maybe this lawsuit will improve...
It's the website's responsibility to respect DNT=1, then not load the ga script. DNT is not widely adopted and there are no legal mandates for it. It is simply an expression of preference. Blocking third party scripts and third party cookies is something you can set in all browsers which will block GA.
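To make the mechanics above concrete, here is a minimal sketch of the server-side choice a site operator could make: check the DNT request header and only emit the analytics tag when the user has not opted out. The helper name and the exact snippet string are illustrative, not any real framework's API.

```python
# Hypothetical sketch: gate an analytics snippet on the DNT header.
# A site that wants to respect DNT=1 simply never emits the tag.

GA_SNIPPET = '<script src="https://www.google-analytics.com/analytics.js"></script>'

def analytics_html(request_headers: dict) -> str:
    """Return the analytics tag only when the user has not set DNT=1."""
    if request_headers.get("DNT") == "1":
        return ""  # respect the stated preference: load nothing
    return GA_SNIPPET

print(analytics_html({"DNT": "1"}))  # empty string: script never loads
print(analytics_html({}))            # the analytics tag
```

As the comment above notes, nothing legally forces a site to do this; DNT is purely an expression of preference, so the check is voluntary.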
It's not so idiotic. Google collects information that people don't want them to collect. That arguably objectionable, even if it's not strictly illegal. It will be interesting to see whether this case gets a sympathetic judge, but I imagine it will end up being dismissed like previous lawsuits of this variety. Nice bit of publicity for the law firm though.
I think the debatable point here is that the lawsuit implies that the problem is that Google is not integrating analytics and Chrome enough. Analytics works in a certain way, and chrome works in another. The expectation that because both products belong to Google, they should treat each other in a special manner (i.e. Site-side Analytics should be behaving differently when encountering a chrome browser) is pretty iffy.
Not my proposal. Regardless, a case is being made that Google is in a position to ensure its analytics services comply with the GDPR when a browser has been put into a mode clearly intended to opt out of tracking.
Other websites or services may be able to say they can't control how those browsers in such modes communicate that intent. Google cannot.
I agree with you, but I think the problem is a disconnect between what incognito actually does and what people think it does. That distinction is pretty obvious to people who have done any sort of web development, but it's hard to explain to the average user.
If the issue is on the collection side, what technical mechanism does Google have for identifying incognito traffic? There's already a game of whack-a-mole being played because news sites are detecting if you're in incognito mode, but that's in JS, so the access logs were already collected, and the hacks for detecting incognito are slowly getting fixed, so it's not a viable solution.
DNT is a header, but it's a preference and not legally binding.
The only thing Google did that was "wrong" is that incognito gave users the impression that they weren't being tracked externally, when in reality the only thing it does is not save your history and start with no cookies.
What you're saying only serves to support the argument that the tracking is wrong in the first place, IMO. Google doesn't have a right to follow everyone around the web and flex their muscle just because they can. Since they have a monopoly and nobody is even close to challenging them, someone needs to step in and regulate them.
There are data privacy laws that mandate users be able to opt out of tracking, and it should be pretty clear to a non-psychopathic company that a person using the private mode on their browser is opting out of any tracking.
Google is one of the biggest companies in the world. Do they really need to be wringing their users as hard as they can to milk out every last drop of ad revenue potential?
That's not what incognito mode is. It literally says on the first page when you open incognito window. People want to skip reading technical details when provided but then get angry when a company does the exact thing they warned about.
Also where does this end? Should car dealerships be allowed to analyze your data? Cars you've bought, cars you test drove etc.?
A disclaimer that a technical vulnerability exists isn't some kind of implicit invitation for a company like Google to use that vulnerability for their benefit. It's true that it's technically possible to track users using a private browsing mode, but it is not ethical.
If you have a car with a broken lock, that's not some kind of open invitation to burglars to steal whatever they want from your car.
You are asking that browsers announce to websites that the user has incognito mode enabled, which immediately opens the door to websites refusing to serve users running incognito mode at all.
I think that's entirely fair: Websites should not track users who do not want to be tracked, but websites should decide whether or not to serve clients which aren't to be tracked.
Well, ideally yes, but let's be honest here. If we can't trust websites to not abuse data, how can we possibly trust them to respect a polite request not to gather the data in the first place? I would personally rather have an incognito mode feature that I have confidence performs the task it claims to do without alerting the website, than one that loudly announces its presence, on the hope that it might be respected.
I don't trust Google, but I certainly don't want Google to trust random websites on my behalf.
Google is just a digital Eye-of-Sauron[0] and very difficult to avoid if you do any meaningful surfing of the open web. They have their fingers in many different pies.
There are even small subcultures on the web dedicated to avoiding Google by doing things like running 'degoogled chromium' and blacklisting various Google domains in their /etc/hosts file. Sadly all these mitigations don't work because Google already has a dossier on many people, and even if you don't have a Google account, Google keeps tabs on you via fingerprinting or other means and knows who 'you' really are (using simple correlation and heuristics).
Then this raises the issue of: what can be done? I prefer to just be nihilistic about it and accept that Google already has dirt on me, despite my mitigations (I have a bit of history blindly handing over personal data to Google for a number of years). I think young people these days in 2020 have a great opportunity to implement mitigations and are better suited than me to browse privately, since I'm already contaminated by Google. (I still mitigate however, but it's not enough).
> Google surreptitiously amasses billions of bits of information --every day -- about internet users even if they opt out of sharing their information, three consumers alleged in a proposed class action lawsuit.
a billion bits is only like 130Meg a day, seems like google is doing pretty good.
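The arithmetic behind that figure checks out; a quick sanity check:

```python
# "Billions of bits" sounds big, but a billion bits is only ~125 MB.
bits_per_day = 1_000_000_000
megabytes = bits_per_day / 8 / 1_000_000  # 8 bits per byte, 10^6 bytes per MB
print(megabytes)  # 125.0
```

So "billions of bits every day" is on the order of a hundred-some megabytes, which is indeed modest at Google's scale.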
My profile is also ridiculously inaccurate. Though I do use ublock origin, privacy badger, etc and disabled most tracking features in my google account. Also on VPN pretty much all the time. I'd really like to see the ad profile of someone who is completely transparent and doesn't spend any effort to obfuscate at all. I'd imagine it would be much more accurate.
Interestingly my Instagram ads are incredibly on point a lot of the time. You can't really hide your behavior in the app, so they get a really good picture, probably.
“We strongly dispute these claims and we will defend ourselves vigorously against them,” Castaneda said in an email.
I wonder what it is they actually dispute, when the claims are so basic? Most popular websites use GA, so of course Google is watching every single user action across the Internet, regardless of whether they have tried to 'opt out' via any methods, laws, processes, etc.
Obviously this article is written for the general public and not in legalese. But this line gives a hint:
> The suit includes claims for invasion of privacy and violations of federal wiretapping law.
The claims Google disputes is that they are breaking the law.
> I wonder what it is they actually dispute, when the claims are so basic?
The claims can be summarized basically, but proving that someone has broken the law in court is not basic. This basic overview of federal wiretapping laws is over 70 pages[1]. Laws are filled with specific minutia. Here is another good overview of privacy law[2]. Even though the statement "Google violated my privacy" seems simple and self evident, to prove it in a court of law there are tons of very specific criteria you have to prove.
Great response, thanks. The US is definitely a more difficult environment in which to prove that these are not just violations of privacy, but also of law. I wonder if these wiretapping laws are really the best route for attempting to prove this.
The claims are basic, the conclusions are ridiculous. The lawsuit is on the conclusions, which probably won't even fly, but some trolls managed to get their course-project-level PDF read by millions.
I have a personal gmail and a work gmail (G Suite), each using my full name, but both run in separate browsers using the same IP address. I always wonder if Google keeps a "master record" of people and all their associations, if not for security or to work with law enforcement.
In my experience in adtech, not only can Google do this pretty accurately, but other third parties as well (e.g. DMPs and the like.) Even if they couldn't make a deterministic association, they have enough data points to make a probabilistic association with high likelihood (ex: "Given all these data points, we're 95% confident that these two people are the same. Therefore we're going to attribute the actions to the same person.")
Now, to qualify my response a bit, this isn't necessarily for security / law enforcement, but mainly for better targeting parameters. Example: frequency capping of ads (buyer specifies that you only see an ad X number of times in a given time period) or more relevant targeting (you don't see completely different ads in different browsers as if you're from two non-overlapping demographic groups.)
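The frequency-capping use case mentioned above is easy to sketch: once sessions are attributed to one unified identity, the server just counts impressions per identity and stops serving past the cap. This toy version is illustrative only; the names and cap value are invented for the example.

```python
from collections import defaultdict

# Toy sketch of ad frequency capping: a buyer specifies that one
# (probabilistically unified) user sees an ad at most N times.
MAX_IMPRESSIONS = 3

impressions = defaultdict(int)  # unified user id -> impressions served

def should_serve(user_id: str) -> bool:
    """Serve the ad only while the user is under the cap."""
    if impressions[user_id] >= MAX_IMPRESSIONS:
        return False
    impressions[user_id] += 1
    return True

# The first three requests for the same unified identity are served;
# the fourth is capped.
results = [should_serve("user-x") for _ in range(4)]
print(results)  # [True, True, True, False]
```

This is also why identity stitching matters to ad buyers: without it, each browser looks like a fresh user and the cap resets.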
Facebook does this to build shadow profiles of everyone, even if they are not Facebook users. I'm sure Google does similar things. They certainly could.
IMO, that doesn't make a good trade off. The identity problem space is already super complicated due to ITP while the risk is very high. If you make mistakes, you will accidentally leak personal information to others while the benefits... are pretty minimal, probably having 1~2% more revenue from those people with dual accounts. Given that joining signed-in and signed-out identity is explicitly prohibited as a condition of Double Click acquisition, I doubt if there's any incentive to do this kind of joining for Google.
Probably. I recall Project Veritas showing hidden video of a Twitter dev speaking about databases that link all profiles of someone (using things like IPs, fingerprints, etc. to stitch the relationships together).
You're probably getting downvoted for discussing Project Veritas (which is a silly reason for downvoting IMO) but your point is still accurate: these companies are definitely running models to create probabilistic identity models in the absence of deterministic associations.
Deterministic = I specifically say I'm person X and am logged-in.
Probabilistic = I am not logged-in on this browser, but am on the same computer, same IP, and am logged-in on a separate browser at the exact same time under name X. Therefore I'm very likely person X.
It's not silly. A source that has been often shown to fabricate evidence and post straight up lies doesn't belong on HN. Many of their videos, including the "undercover interviews" with Twitter and Google have all been shown to have been edited/cut to completely change what the person was talking about or trying to say.
Nothing coming from PV can ever again be taken at face value.
I really hate changing the topic here, but I feel it has to be said sometimes.
It's _very_ frustrating when people immediately discredit someone or something because they don't agree with them, even if they don't have the full story.
Project Veritas does some good work, even if they're blasted in popular culture/mainstream media for being 'biased', 'alt-right', 'etc'. I bet you the downvotes you're getting are just because you mentioned Project Veritas.
They have some very outlandish views, but when they actually put people on the street or go undercover, they've revealed dirt on a lot of companies and people.
It's shocking to me to see society go from "let's look at ALL sides of the coin, no matter how egregiously offensive they are to me" to "fuck 'em, they're trash media, they suck, they shill and are racist, alt-right losers and I'm not going to look at anything they post because in my mind everything they do is bad!"
I don't even go to Project Veritas outside of what I hear in the media, but I still give it a fair look and make my own opinion.
I'm sorry but no. I'm all for looking at all sides, but Project Veritas is not that. They have done extremely shady things such as intentionally sending a fake victim to WP [0], as well as editing/slicing sentences (which they recorded without consent) to create an entirely different narrative, different from what the person was originally trying to convey [1].
Even a single one of these incidents is enough to completely throw everything you've done and said into doubt, let alone the half dozen that PV has behind it. There is absolutely no way you can take anything they say or do seriously after they've been caught time and time again lying and misleading.
It has absolutely nothing to do with how outlandish their views are, and everything to do with the fact that what they say or do cannot be trusted. I did give them one in the beginning, but they are way past being given a "fair look".
Definitely. They have logic that links data sets; all it takes is one slip-up for them to know you're the same person, likely with a confidence score for whether you are.
I don't think Google is doing fingerprinting (at least for Ads, AFAIK, and there aren't many reasons to do so in other products), and Chrome blocks fingerprinting in order to make life harder for Facebook and other competitors.
> “We strongly dispute these claims and we will defend ourselves vigorously against them,” Castaneda said in an email.
I've seen this _exact_ phrasing so many times in responses to lawsuits that I'm now starting to wonder if future lawyers receive this template as a graduation farewell package.
Lawyers generally copy each other. There is standard text which has gone through hundreds or thousands of litigations and hasn't done harm, and that's almost always used going forward.
A lot of oddball conventions, such as THE USE OF ALL-CAPS in specific places, which have no reason to be legally meaningful, but are always done.
Some of them do turn out to be important. Standard clauses build up over time.
I think plaintiffs are right in this case. Google is openly breaking a number of laws, including CFAA. This constitutes unauthorized access. CFAA is a broken law with an overly-broad definition of unauthorized access, which the tech industry abuses all the time. It will be nice to see them get abused back. Perhaps they'll have incentive to fix it.
I think it's worth noting the class-action lawsuit is asking for at least $5 Billion
I'll also encourage the use of DuckDuckGo.com for your search and [1] uBlock Origin, [2] Searchonymous (it prevents google from tracking your searches if you're logged in to an account), and [3] Google Search Link fixer (removes link tracking from Google Search links) in Firefox, but would happily update if anyone has better recommendations.
[1] https://github.com/atomontage/chrome-private