
This is Google not only intercepting people's smartphone traffic, but a lot more:

- Google will send you a router to intercept your entire household's internet traffic on all devices with a browser (https://support.google.com/audiencemeasurement/answer/757439...)

- Google will send you a device that listens 24/7 to audio in the room to figure out what you are watching on TV and listening to (https://support.google.com/audiencemeasurement/answer/757476...)

- Google's project includes tracking of desktop and laptop internet activity via a browser extension that can basically read literally anything you do online (https://support.google.com/audiencemeasurement/answer/757448...)

This isn't just trying to figure out what new up-and-coming apps are going to be the next big thing, this is Google building out very far-reaching profiles of your entire household, in return for some gift cards. This is signing away your family's entire digital life (and a significant part of anyone they interact with in a browser).




I'll go ahead and play the Devil's advocate because every constructive conversation needs one.

People who sign up for this already know what they are doing, and the program has a privacy section[0] saying that the data is only shared with Google, which is pretty much akin to having any Smart Speaker. Not only that, but it also says that the data wouldn't be used to "advertise to you or sell you anything", which is not the case with Smart Speakers.

In essence, from a privacy point of view, when compared to having a Smart Speaker, this is better I'd say.

[0] https://support.google.com/audiencemeasurement/answer/902874...


Not everyone who signed up, or who is signed up, is capable of giving informed consent (those under 18). There is also a lack of understanding of exactly how much information is being collected - it's one thing to say 'we will track all your traffic' and another to understand the long term implications of that action.

Also, I'd just like to call out how much of an outright lie the phrase "Your [...] data will never be shared or sold to anyone outside of Google" is. If Google were to go into bankruptcy procedures at some point in the future, that data is simply another asset which will be sold off to someone else, with no respect for the privacy policy it was collected under (not to mention how privacy policies can change and retroactively cover data already collected).


Typically when children participate in a study, it's their parents that have to consent. I don't know how scientific research would work any other way?

Also, what do we know about the contract they signed? It's probably not the regular privacy policy.

Informed consent is kind of a grey area. How much do you normally know about the consequences of your actions? How can you prove that you know what you're doing? Should your freedom to enter an agreement be restricted if you can't prove that you know what you're doing?

Legally, you prove consent by signing a contract. Sometimes people go above and beyond that to make sure people really and truly know what they're getting into, but there's a question how far you should go.

I don't think we can know just from reading this article how informed the people signing up were. They didn't interview anyone to find out.


Typically when children participate in a study, it's also been previously approved by an Institutional Review Board, and the IRB expects to either see that the potential harm is minimal, or that the potential benefit is easily great enough to justify the risks.

It's hard for me to see either of those cases being a slam dunk here. There's at least some potential for serious harm in the event of a data breach. And the potential benefit - an already rich company getting even richer - doesn't carry a whole lot of moral weight.


I believe IRBs are only required for federally funded research such as that performed by universities.

I’ve never heard of corporate market research being reviewed and approved by an IRB. Not sure how that would work since the board would hardly be independent anyway, but I’d guess the main reason it doesn’t happen is there is no law requiring it.


In most places, you need to be very, very sure that research subjects understand what they are agreeing to.

Passively reading something usually isn’t enough. The minimum is often some kind of interactive explanation that includes a way for subjects to ask questions. I’ve heard tell of groups giving short “quizzes” for experiments with weird requirements and side effects.

For kids specifically, you often need to get both the parents’ consent and the child’s assent. The details vary with the child’s age (older kids’ opinions get more weight) and the nature of the research (kids can opt out of basic research, but the parents might be able to override the kid’s desire to (say) avoid a shot if it’s part of a potentially lifesaving clinical trial).


> Informed consent is kind of a grey area.

I was surprised recently to learn just how grey.

The company I work for used to have a big full-page opt-in form for text messages. Then one day it was pulled from the web site. Why? Because our usually very privacy-conscious legal team decided that if anyone gives us their phone number anywhere, it counts as consent to receive text messages.

Very sad, and in my opinion slimy. I’m glad I’m not on the social media team.


That wouldn't be consent under GDPR so I hope your company doesn't have customers in the EU


Not a single one. Fortunately, the chances of it having an interest in Europe are indistinguishable from zero.


Facebook used account recovery phone numbers for advertising and analytics.


It has been my experience that people actually have no clue what they're signing up for. In a few reddit conversations today, I found people saying things like "sure they can see what websites I'm connected to, but my important information is encrypted, so I don't really mind". Sorry mate, they can see that too.


Encrypted data can be read if you install a root CA, which is what the Facebook app does, but the Google version does not appear to do that.

There’s an “enterprise certificate”—installing the enterprise certificate allows you to side-load applications. This is relatively benign. Both Facebook and Google do this, in both cases apparently a violation of Apple policy.

There’s a “root certificate”—installing the root certificate allows you to do MitM attacks and read encrypted traffic like messages, bank passwords, etc. The Facebook app appears to do this, and I would characterize this as reckless, irresponsible, and unambiguously unethical.


The Google version has a browser extension though, which is as good as a root cert.


It doesn't do anything on mobile as extensions are not supported, no?


User opt-in does not excuse violating Apple’s terms of use. Google _likely_ did not get Apple’s opt-in for this approach. If they did not, they will _likely_ see their enterprise certificate terminated for precisely the same reason.


Still playing devil's advocate: I am more perplexed by the necessity to have Apple's approval than anything else here.

Sure, this particular app is debatable... but I have also worked in the music streaming industry. While being super respectful of the users, we still sometimes had to wait months for Apple's approval.

Having a single agent being able to gatekeep what you can install on your phone at their own discretion is an issue since there will always be the temptation to prevent any competitor from getting in your space.


> I am more perplexed by the necessity to have Apple's approval than anything else here.

As I understand it, it’s because the apps were being distributed using a method that is supposed to be used only inside the company. Like for beta testing software, or for in-house applications used by employees only. Anything going to the general public is supposed to go through the App Store under Apple’s terms and conditions.


Is there any other way to distribute an app without having to ask for Apple's blessing, though?


One reason I use an Apple device is that I like knowing every app I use has been reviewed by them


That doesn't require Apple to lock down the platform. All it requires is for them to offer a store of Apple reviewed apps. You can choose those apps, other people can choose apps from other stores.

You're basically saying you like Blockbuster video because there's no porn and they only have the "edited for the Airlines" versions of movies.

Great, but other people would like to use their device for whatever they want.

Yes, I know you'll say "so buy a device from someone else". I don't agree with that any more than I think Ford should be able to make a car you're not allowed to drive anywhere Ford says you're not allowed to. If Ford did that, I don't think the answer should be "if you want to drive other places, buy a car from someone else". IMO the answer should be that it's illegal for Ford to control my car to that level.

I'm hoping Apple loses the case against their monopoly on the App store (although given the details of the case I don't think this particular case will succeed so I'll have to wait for another)


That's a complete non-sequitur. The complaint was that the owner of the device doesn't have the choice to install software that wasn't approved by Apple. The fact that you only want to install software approved by Apple is completely irrelevant to the question of whether you should be able to install whatever you want, because your ability to install whatever you want does not in any way affect your ability to only install software approved by Apple.

You only buying Nike shoes does not require that all other companies are banned from selling shoes, you only eating McDonald's burgers does not require all other restaurants to be closed, and you only buying software through Apple does not require that there are no other ways to install software either; that's just authoritarian bullshit.


Not really, no. You can distribute IPAs to users for them to sign locally, which Apple can't really control, but this is much more difficult.


Yes and no, see other replies for nuance. Apple controls software distribution for their platform, in exactly the same way (in principle) that games console manufacturers have done so for many decades. They do so in the same way Atari controlled software distribution for the 2600 in 1977 and Nintendo controls it for the Switch today, or that Volvo does for software performance packs for their vehicles. All these and many, many, many more are closed platform devices you buy, with optional software features you can purchase from the vendor.


Yes, all your users would have to be signed up for Apple's Developer Program ($99/year) or have a jailbroken device.


Does it matter?

It's their platform, they can choose the rules behind app distribution. They chose to allow it through methods controlled by themselves. If a third party wants to distribute apps, then they have to abide by the terms set by Apple.


Yes, but 'inside the company' and 'testing' are vague. On one hand you have Larry Page testing the latest version of the Gmail app - clearly within bounds - but what if it's a contractor using an app filled with analytics that won't be released to the public, then what if it's a focus group with 5 people, then what if it's a large study like this, etc.


That's precisely what TestFlight is for. Internal use, testing, and production each have their own avenue: enterprise certificate, TestFlight, and the App Store.


Any time you have a human element to a process, there are bound to be unforeseen consequences. For all we know, their review process could be devoid of SLAs for completing reviews, and agents could sit on reviews for months for arbitrary reasons because no one is following up on them.

We're a large NFP but not a particularly large organization, so I'm not sure if we get preferential treatment, but our business lead has a direct point of contact with an App Store representative and has used it to get extended information about, and from our perspective force through, app releases stuck in review.

He isn't the type of person to sit idly by when a process is taking an abnormal amount of time so part of me thinks it isn't a case of preferential treatment but rather the squeaky wheel getting the grease.

I'm sure having someone he can get on a phone and hold accountable goes a long way. I'm still unclear how he managed that.


“Still playing devil's advocate”

You are not playing. You are presenting arguments and attempting to sway opinion.

I’ll ask directly: do you not stand behind the arguments you’re making?


I didn't downvote you, but I imagine I know why others did.

You seem not to understand the concept of Devil's Advocate, while the person you responded to clearly does.

It's only the argument that counts. A person is thereby shielded from association with the argument and does not need to show their real intent.


As somebody who's read some of those privacy policies, I'm not sure I understand what I'm agreeing to.

For example, in gsuite they only claim to not mine your data for advertising purposes. Does that mean 'we only use your data to provide the service', or does it mean 'we use your data for everything but advertising'?

Many people I know barely even know how their data is or isn't used, and are shocked by recent news stories. That's not informed consent imo


Based on direct involvement in the process of writing one such policy for one such big internet co., my view is that the confusion and vagueness is largely deliberate. A small part of it is the result of too many people involved pulling in different directions.


> it also says that the data wouldn't be used to "advertise to you or sell you anything" which is not the case with Smart Speakers

1. Google's entire business model is advertising through collecting more complete user data.

2. Why would they collect this data if not to further their advertisement business?

3. I've never heard of companies collecting data that wouldn't further their business pursuits.

This data is either "digital oil" or "digital uranium" - both are still extremely valuable.


I think they mean the research data won't be used directly to target ads or try to make sales to you (standard guideline for market research), rather than the second order effect of allowing them to improve their overall ad targeting which then affects you.


Sure, data won't be used directly, it'll be laundered through ML. Doesn't change the final result - the data is used to sell you stuff.

Data laundering needs to be recognized as a thing.


Not necessarily - the data could be used to understand overall usage patterns and preferences in the target population as a whole, much in the same way as traditional surveys and focus groups. If indeed it is put into ML which then tries to match the data with the actual participants, then I agree that that is misrepresentation and probably criminal.


Real world experience indicates those assurances mean jack squat exactly as soon as the corporation needs them to.

They have enough paid devil’s advocates arguing for them disingenuously; no need to spread their propaganda for free!


Curious what data you have to suggest Google will break (or has ever broken) this assurance?


Apart from the several times they've been fined in the EU and various other countries for (summarising here) slurping data, lying about deleting it, favoring their own services, and so on?

From a very quick online search:

* https://www.theregister.co.uk/2012/04/30/google_slurp_ok/

* https://www.nytimes.com/2017/06/27/technology/eu-google-fine...

* https://variety.com/2019/data/global/google-fined-57-million...

Of course, such fines (etc) are subject to potential change over time because lawyers. But they're definitely not a bastion of good actions and credibility. :(


Does that matter? That data will be around forever (longer than Google). Do you trust every person (and lawyer) Google will ever hire from here on out, and every company who will purchase that data once Google is gone?


Actually Google anonymizes the data and deletes it after analysis (I have no idea if they do but since we're just making things up with no basis in fact my guess is just as valid if not more valid than yours)


A guess that Google will delete data is just as valid as a guess that they will keep it? Or more valid?

If you'd rather deal with evidence and reason than with guesses, I can understand that. But dude...c'mon.


More valid - here's something that might surprise you all: my 20-year-old anonymized data is as worthless to Google as it is to me, so yes, I don't care if someone gets a hold of it after Google no longer exists.


> People who sign up for this already know what they are doing

I've seen this argument a few times recently. It's not clear to most people what they're signing up to. They've clicked through a EULA, that somewhere in thousands of lines mentions a separate Privacy Policy which they would then have had to load, in order to decipher more thousands of lines of legalese to find the few lines of 'active ingredient' which would tell them what's actually being recorded.

Even when it does technically declare somewhere in there what "might be" recorded (usually in terms like "we record data such as [...]" which don't actually limit what they can record), the company itself will handwave it away as "we're just covering ourselves in case we have to write your name down if you call us for tech support" or similar inanities.

The fact that so many people were surprised about the Cambridge Analytica scandal when selling such access is explicitly Facebook's business model and everyone participated voluntarily should tell you everything you need to know about how much the average user of these services "knows what they're doing".

Devil's advocate is exactly the right word for defending these practices - they're like a Disney villain holding a giant contract scroll and a fountain pen dipped in blood. Don't worry about the fine print, just click ACCEPT, and I'll give you what you want, right?


Amazon Echo and Google Home only send data to the servers when they detect the wake word. That's a lot different than sending everything.


Sometimes they make mistakes (false positives for the wake word). Amazon even has a blog post about that https://developer.amazon.com/blogs/alexa/post/8a7980f4-340c-...


> People who sign up for this already know what they are doing

But do the people they interact with, or invite into their home, know what they are doing? Does Google presume to have consent-by-proxy here?


Makes me wonder if "smart" speakers generally are legally dubious in two-party-consent areas ...


So you agree that Google is the devil, and you thought that the devil needs an advocate? [0]

[0] https://xkcd.com/1432/


> Google will send you a router to intercept your entire household's internet traffic on all devices with a browser

Yes, shocking that requesting a router that will report "the sites you visit, device IP address, cookies, and diagnostic data" for market research will take a look at my household's internet traffic...

It's just Nielsen but from Google (Nielsen these days literally works by putting a microphone in your house and collecting all your internet activity). The novelty of this story doesn't have anything to do with these things, just the iOS app.


Exactly. The parent comment completely misses what the real issue in this story and the Facebook story is. It was never really about the research app itself, but rather about how they used Enterprise keys for non-enterprise use cases. The app itself is entirely legitimate, extremely clear in what it does, opt-in and completely separate from the rest of the ecosystem.


Anyone on HN understands the implications of this. But does your average user? I doubt it. They just know they can install a little box and get paid. Digital privacy means nothing to far too many people


I'd argue that the average person probably understands the actual implications better than the typical HN user.


I don't think so. For instance, people I spoke with on reddit who installed the FB app thought that due to SSL, all their communications would be encrypted, and FB would only see who they were talking to, not what. Of course, the entire point of the root cert is to break SSL.


What? How can you make that argument? The average person has absolutely no clue of the various ways websites are tracking them already, let alone the potential amount of data Google would be getting by aggregating all this through their router.


> Cookies

Yes sir, how could there be a problem here? And did all those third parties consent to having their communication intercepted and stored by, well, another third party?

No, this is some CFAA federal crime trojan stuff.


Sounds quite similar to the way TV ratings are done - is this really such an issue? It's very much an opt in service designed to measure audiences.

Unlike hidden terms in privacy policies it's made quite clear what's going on here.


Indeed, the real issue here is the use for Apple's Enterprise keys, not the app itself. The parent comment entirely misses the point of the article.


Your cable box isn’t decrypting your entire household’s tls traffic when it reports episode watch statistics.


Can you actually do that from a router? Wouldn't you have to put that TLS cert on every laptop and phone?


Not only can you not do it from a router, Google's Chrome browser is one of the leading reasons why you can't.

However, with the browser extension installed, they don't need to - they just read the content after your browser decrypts it.


The router could MITM and Chrome could allow it since they have Google CAs in their chain.

I don’t know if they are doing this, but it’s certainly possible to do this at the router (or the router dumps traffic to Google to decrypt).

It’s much easier for Google to do this since they make Chrome. Another company would have to adjust the cert trust on each machine.


> The router could MITM and Chrome could allow it since they have Google CAs in their chain.

That would require Google either sending the traffic back to Google and out again (slowing a lot of things down) or putting a signed private key on the router itself (a violation of CA agreements and a remarkably stupid thing to do in general). If they did that, it would not be difficult for someone to extract that key and certificate. This would be a huge security breach.


The decryption doesn’t have to happen in real time since it’s just analytics. Dumping all traffic off to Google doubles bandwidth, but it could be done in a way that minimizes slowdown for users.

I agree that it’s a security breach, but it happens all the time. Look at enterprise products like ForcePoint [0] that do deep inspection on HTTPS sessions because they have a custom CA installed on enterprise clients. Many companies do this.

Because it’s their router hardware, it would be possible to prevent anyone from extracting the intermediate MITM certs and keys. The data are likely sensitive, but that’s what they have already.

Tools like ForcePoint don’t put a “real” CA cert on the device. They typically create a new CA per device, install that into the downstream client’s trusted roots, and then generate MITM certs signed with this new CA - roughly the pattern sketched below.

[0] https://www.websense.com/content/support/library/web/hosted/...
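
To make that per-device CA pattern concrete, here is a minimal sketch in Python using the "cryptography" package. Everything in it is illustrative (the CA name, the hostname, and the lifetimes are made up, and it says nothing about what Google or ForcePoint actually ship): generate a throwaway "inspection" CA, then mint a leaf cert for an arbitrary site signed by it. Any client that has been made to trust that CA will accept the forged leaf.

    # Sketch of a per-device "inspection" CA (illustrative only).
    # Requires the "cryptography" package; names and hostnames are made up.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    now = datetime.datetime.utcnow()

    # 1. Self-signed CA, generated once for this one device.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Inspection CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name).issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # 2. Leaf cert for whatever site the proxy wants to impersonate, signed
    #    by the CA above. A client that trusts the CA will accept this cert.
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"bank.example.com")]))
        .issuer_name(ca_name)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=30))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"bank.example.com")]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )

    print(leaf_cert.subject.rfc4514_string(), "issued by", leaf_cert.issuer.rfc4514_string())

In a real inspection box the leaf is minted per connection and the CA key never leaves the device, which is exactly why extracting it would be such a big deal.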


Routing all your traffic through Google's networks wouldn't result in a noticeable network lag.

Corporations install root certs all the time. It wouldn't require violating CA agreements.


This means that Google has the login details for your bank. You are almost definitely in violation of the bank’s ToS by divulging them


It's most likely targeted only at relevant sites.


Is it a little baffling to anyone else that Facebook's entire plan if Apple killed their research-app Onavo iOS workaround was "loudly point out that Google does it too"?

Surely there must have been more to their contingency plan? They have great engineers and I'm surprised there wasn't more than a PR response up their sleeve. Just for example, in retrospect, why did they use the mainline Facebook iOS enterprise certificate to sign this app rather than a cert from one of their subsidiaries or acquisitions -- wouldn't that have de-risked a bit?


> Is it a little baffling to anyone else that Facebook's entire plan if Apple killed their research-app Onavo iOS workaround was "loudly point out that Google does it too"?

Not to me. That’s pretty much how I expect Facebook to operate these days.

Sadly, it’s not just Facebook. Pretty much every time any article posted on HN points out how Company X is misbehaving the thread is flooded with “But... but... Company Y does it, too!” It’s like the SV bubble falls back on the logic of a five-year-old whenever they get caught with their hands in the cookie jar.


This happened with Apple too in the past when the news reported on the working conditions at Foxconn. But to be fair if two 5-year-old kids threw trash on the ground and the adults only punished one of them... you can imagine what happens.


Yes, same thing with any criticism of China.


I'm pretty happy the way that Facebook is signaling the security of iOS by complaining that they have no other way to break into phones except by social engineering people to install root certificates.

This episode is a win-win.


Sorry to disappoint, but I think this is just one of many ways. Example: as an iOS dev you want to advertise on FB. For that to be effective you want to track conversions. The easiest way to do that, especially for a small dev, is to add the Facebook SDK to the app. And you're done - Facebook can potentially hoover up a lot of data from an app that has no obvious relation to it.


It appears to be a routine tactic for Facebook. I guess it must work, so why stop?

https://www.cnbc.com/2018/11/14/facebook-hired-pr-firm-that-...

> Facebook expanded its work with Definers Public Affairs, a Washington public relations firm, in October 2017 after enduring a years-worth of external criticism over its handling of Russian interference on its social network.

> Definers Public Affairs wrote dozens of articles criticizing Google and Apple for their business practices while downplaying the impact of Russia's misinformation campaign on Facebook.

> Facebook also used the firm to push the idea that liberal financier George Soros was behind a growing anti-Facebook movement.

https://www.nytimes.com/2018/11/15/technology/facebook-defin...

> Definers began doing some general communications work, such as running conference calls for Facebook. It also undertook more covert efforts to spread the blame for the rise of the Russian disinformation, pointing fingers at other companies like Google.

> A key part of Definers’ strategy was NTK Network, a website that appeared to be a run-of-the-mill news aggregator with a right-wing slant. In fact, many of NTK Network’s stories were written by employees at Definers and America Rising, a sister firm, to criticize rivals of their clients, according to one former employee not allowed to speak about it publicly. The three outfits share some staff and offices in Arlington, Va.

https://www.thedailybeast.com/facebook-busted-in-clumsy-smea...

> The social network secretly hired a PR firm to plant negative stories about the search giant, The Daily Beast's Dan Lyons reveals—a caper that is blowing up in their face, and escalating their war.


So what? It's opt-in. Is consent not enough anymore? Does a privacy maximalist mentality need to be imposed on everyone?


Yes, because these systems inherently violate the privacy of parties that didn't consent to the spying. This includes private individuals, and competing companies with whom the "consenting" individual interacts.

A thought experiment: imagine a corporate TOS including a clause that specifically prohibits use of devices/software that violates the provider's privacy. E.g. an end user's account can be terminated because they're using Google/FB/other "voluntary" spyware...


So how many individuals and organizations do I need to get permission from to install something on my phone?

All these privacy histrionics are supplanting all other individual rights, personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.


> So how many individuals and organizations do I need to get permission from to install something on my phone?

The problem isn't about installing something on your phone, it's about handing over every single private communication you have with others without getting their approval. It contradicts your first assumption that people have _opted in_.

> All these privacy histrionics are supplanting all other individual rights, personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.

Are you suggesting that no consent is necessary for you to put other people's private information up for sale?


Private correspondence is always like that.

If you send me a snailmail, I have every right to publish what you write to me (with a few exceptions, of course).

That's why we got all that email disclaimer nonsense.


You may have the right to publish some emails when necessary. It is a whole different thing to sell every bit of private correspondence to a third party in secret. Not only is it a betrayal of trust, a quick search on the internet suggests that it may be illegal in some cases[1].

[1] https://injury.findlaw.com/torts-and-personal-injuries/invas...


This is excessively reductionist logic. There's a huge gulf between, for example, the snarky reply below ala "forwarding emails is illegal" and "an app is en masse siphoning all communications between its users and others".

The privacy backlash is precisely about people becoming more aware of the latter class of behavior and rebelling against it. Complaining that "no one would be able to do anything" is a straw-man without relevance to the actual social conversation going on right now.


You seem to be operating on the assumption that you're defending something other than the status quo.

(A quick example of this brand of individualism that offers the individual right, non-declinable, to be analyzed and sanctioned by the government: https://news.ycombinator.com/item?id=18704330)


And that, kids, is why forwarding emails is illegal.


> Does a privacy maximalist mentality need to be imposed on everyone?

I see you follow Google closely; close enough to know that privacy concerns have hardly impeded its growth and dominance. Same with all other major tech companies.

Does a privacy minimalist mentality need to be imposed on everyone? (I'm asking rhetorically. In either form, it's not a substantial argument: it's a strawman. Privacy isn't a measurable quantity, and each person or community cares about protecting or revealing different things.)

Edit: Q.E.D. https://news.ycombinator.com/item?id=19039593


What does any of that mean? My point is against imposing one's POV on others, and that people are free to consent to stuff you might not like.


Consent has never been enough. It has always been about informed consent as a minimum, and free choice as the target ideal.


> Does a privacy maximalist mentality need to be imposed on everyone?

I kinda think so, yeah. Everyone flies past the privacy stuff, or accepts compromises, or is traded free Farmville points, or given extra coins, or a new shiny feature in exchange for it; their friends are on it, their celebrity is on it, etc.

People will opt-in for all those reasons not realizing or seeing what they gave up or its consequences.


People don't know what they're consenting to. In reddit conversations I've seen people say that they're fine with the app because "everything private is encrypted with SSL". They don't realize the whole point of this is to get around that SSL encryption.


What if they know?

This Google app, unlike Facebook's, does not decrypt traffic.

Now tell me who doesn't know what they consent to.


Well if they do know, then who cares. You do you. But, at least in FB's case, they don't.


We don't allow poor people to sell one of their kidneys for cash.

I know this is not the same as privacy, but consider it when saying "is consent not enough anymore"


Scams are opt-in.


How is that a scam? Are you saying participants didn't receive their gift cards? Are the terms agreed to false?


You were asking if "consent was not enough", as if such a thing was inconceivable. I'm just giving you one of many possible counterexamples.


There is consent and then there is informed consent.


Informed consent is enough. That's the problem though.


Man, this is interesting and complicated.

I'll submit this for consideration: you're right, it's 100% fine from an individual standpoint. But there's an aggregate effect of some sort that is a concern. Every move like this changes people's standards and expectations. Call it a "cultural shift", and call us "conservatives" along this particular axis. If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.

So what does that imply we should do? I'm not sure. Maybe simply push back as we are. Maybe try to impress onto opters-in just what they're selling, and maybe they'll reconsider. Maybe ask Google to simply brand this differently. Call it the "Truman Show Package". So at least everyone is aware that, while the data being collected is valuable, and while everything is 100% a-ok so long as everyone consents, this is NOT NORMAL and nobody should accept it as such.


Who exactly is the "us" and "we" in this scenario?


> If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.

So, people who fit that "if" clause. I assume it's a significant number of people here.


I "love" how the reward are gift cards, not actual money. How much more insulting can they be: your privacy is not even worth liquid currency. Some people probably need the gift cards, but it's as upsetting as people selling blood for gift cards.


Gift cards can be tracked. Money can’t :)


Isn't this some tax thing? If they pay you money, they have to file a 1099 with the IRS. If they give you a gift card, it's up to you to record it as a gift on your taxes.


You might be correct, good person. I'm curious if this is standard in the industry...


Well, think of the other, possibly bigger perk: they get your business and loyalty again in the future at a discount, since the card was given as an enticing amount (e.g. "$50", but after markup it only really costs the business $28).


It's the same when I've taken consumer surveys at conferences, taste tests in the mall, etc.


Also, some percent of users will forget to use the gift card, or be unaware of how to check the available balance, or use it down to the last penny.


Presumably, this allows for sending money to children who are unlikely to have a bank account.


Prepaid debit cards are totally a thing though.


Don't know why you're downvoted - sending prepaid debit cards has taken the place of cutting a check in a lot of instances. When I cancelled DirecTV years ago, they sent my refund as one.


...or just maybe, some people have different value judgements than you do. Those people also probably think statements like yours are paternalistic and condescending.


Great, just show me where FB got an ethics board approval on both the consent forms and the nature of the data gathered on underage youth, and then maybe I'll believe that people were properly told of the risks involved.


Would you like to elaborate on this with some facts or will you just throw shade with a hypothetical statement?


Is the idea that people have different values or even morals a wild idea? Else how could 2 people see abortion as either self-determination for women vs actual infanticide?


Some people have different points of view, for sure! I am objecting to the argument that:

some people have different value judgements than you do -> those people also probably think statements like yours are paternalistic and condescending.


Are they physical gift cards, or e-gift card codes? If the latter, one obvious advantage over other payment methods is that they can be delivered by email or by a web page.

I think most people would prefer that to receiving a check in the mail, or cash in the mail, or supplying banking details for a direct deposit.

They seem to have cards available for multiple merchants. If they have a decent selection there should be something available from a merchant a given subject actually buys things from. If so, a gift card is pretty much as good as cash.


Gift card code distribution is also much easier to automate and track. I'm in a small research lab and we recently moved from cash to gift cards. It's much easier to distribute both physically and digitally.


giftcards talk, money walks.


I wonder why they didn't insist on paying people via Google Wallet


> TV Meters also come with a camera, but that camera isn’t used, and doesn’t collect data.

I, too, like to add unnecessary sensors to devices which considerably increase their value, despite never using them.


> The TV Meter needs to hear the TV clearly so it can work, and people watching TV will need to see the screen while they’re watching TV.

And for some reason it's important that you can see the device even though the camera is turned off?


It could also be that right now the camera is not used, but in the future it will be. It would just require updating the terms and conditions. No need to send new hardware.


Chances are such an update will not be cognitively registered by many of these consumers, who originally read that the camera would not function when signing up for the product. It's shady as shit no matter how you slice it.


And people wonder why I didn't want Nest. I was already skeptical before it was bought out and I learned that without an Internet connection it wouldn't work.


My girlfriend has a Nest at her work which saved her last week when the power cycled and the heating system turned off. She was able to restart it from home.

I wouldn't be so thrilled to have a thermostat that requires internet to function at home. Or is that only for those networked features?


A dumb heater with a dumb mechanical timer connected to a dumb thermostat would have restarted too.


It wouldn't have been sufficient to save the building's pipes at -30°C in a restaurant full of embedded draft lines, however.

Nest did give her control over the entire building's system. IIRC it was actually an employee that shut the system off after a power cycle/outage thinking they were protecting the equipment, but failed to restart it. If she hadn't had the insight the system would have been off all night.

I think it has its perks.


Better still, the router hardware Google sells as a product (OnHub) requires a Google account to work [0].

So consumers are paying to buy a router that will allow Google to mine and link all browsing behavior to their Google account.

Also, if you own a Google Home, it won't work without all sorts of permissions being enabled at the account level, including web activity and app history [1].

Both of these fantastic privacy-violating products are available for purchase at an electronics retailer near you.

[0] https://on.google.com/hub/support/

[1] https://twitter.com/benthompson/status/864293485439893505


> Google's project includes tracking of desktop and laptop internet activity via a browser extension that can basically read literally anything you do online

What browser extension is this?


It's linked at the end of the paragraph :)


Google Chrome


I wonder why a router is really useful to them. With most sites using HTTPS, not much beyond DNS requests and plain IPs can be captured without forcing users to install a CA.


> With most sites using HTTPS, not much beyond DNS requests and plain IPs can be captured

Agreed, but don't minimize the value of logging all DNS requests. You can get an unbelievable amount of deeply personal information from having a list of every DNS lookup. As an experiment, fire up a pi-hole and look at the logs of your own requests. There will likely be a lot of info in there you wouldn't want public.
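
If you want a feel for how much DNS alone reveals without setting up a pi-hole, here is a rough sketch (Python with scapy, run with root privileges; the capture filter assumes ordinary unencrypted DNS over UDP port 53) that just prints every query name it sees:

    # Rough sketch: log every DNS query name seen on the local network.
    # Requires scapy and root privileges; illustrative only.
    from scapy.all import sniff, DNS, DNSQR, IP

    def log_query(pkt):
        # qr == 0 marks a query (as opposed to a response).
        if pkt.haslayer(IP) and pkt.haslayer(DNSQR) and pkt[DNS].qr == 0:
            name = pkt[DNSQR].qname.decode(errors="replace")
            print(pkt[IP].src, "looked up", name)

    sniff(filter="udp port 53", prn=log_query, store=False)

A few minutes of that output from your own machine makes the point.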


Okay, so this is a throwaway account whose only posts are in defense of Facebook.

Readers on HN, you’re supposed to be far more skeptical and employ your Young Reaganite “Trust But Verify” glasses before upvoting blindly like this. Even if the facts are correct, the talking points are clearly presented as (in the favorite words of so many on here) “submarine PR.”


> Please don't impute astroturfing or shillage. That degrades discussion and is usually mistaken. If you're worried about it, email us and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html



