This isn't just trying to figure out what new up-and-coming apps are going to be the next big thing, this is Google building out very far-reaching profiles of your entire household, in return for some gift cards. This is signing away your family's entire digital life (and a significant part of anyone they interact with in a browser).
I'll go ahead and play the Devil's advocate because every constructive conversation needs one.
People who sign up for this already know what they are doing, and the program has a privacy section[0] saying that the data is only shared with Google, which is pretty much akin to having any smart speaker. Not only that, it also says the data won't be used to "advertise to you or sell you anything", which is not the case with smart speakers.
In essence, from a privacy point of view, I'd say this is better than having a smart speaker.
Not everyone who is signed up is capable of giving informed consent (those under 18). There is also a lack of understanding of exactly how much information is being collected: it's one thing to say 'we will track all your traffic' and another to understand the long-term implications of that action.
Also, I'd just like to call out how much of an outright lie the phrase "Your [...] data will never be shared or sold to anyone outside of Google" is. If Google were to go into bankruptcy procedures at some point in the future, that data is simply another asset which will be sold off to someone else, with no respect for the privacy policy it was collected under (not to mention how privacy policies can change and retroactively cover data already collected).
Typically when children participate in a study, it's their parents that have to consent. I don't know how scientific research would work any other way?
Also, what do we know about the contract they signed? It's probably not the regular privacy policy.
Informed consent is kind of a grey area. How much do you normally know about the consequences of your actions? How can you prove that you know what you're doing? Should your freedom to enter an agreement be restricted if you can't prove that you know what you're doing?
Legally, you prove consent by signing a contract. Sometimes people go above and beyond that to make sure people really and truly know what they're getting into, but there's a question how far you should go.
I don't think we can know just from reading this article how informed the people signing up were. They didn't interview anyone to find out.
Typically when children participate in a study, it's also been previously approved by an Institutional Review Board, and the IRB expects to either see that the potential harm is minimal, or that the potential benefit is easily great enough to justify the risks.
It's hard for me to see either of those cases being a slam dunk here. There's at least some potential for serious harm in the event of a data breach. And the potential benefit - an already rich company getting even richer - doesn't carry a whole lot of moral weight.
I believe IRBs are only required for federally funded research such as that performed by universities.
I’ve never heard of corporate market research being reviewed and approved by an IRB. Not sure how that would work since the board would hardly be independent anyway, but I’d guess the main reason it doesn’t happen is there is no law requiring it.
In most places, you need to be very, very sure that research subjects understand what they are agreeing to.
Passively reading something usually isn’t enough. The minimum is often some kind of interactive explanation that includes a way for subjects to ask questions. I’ve heard tell of groups giving short “quizzes” for experiments with weird requirements and side effects.
For kids specifically, you often need to get both the parents’ consent and the child’s assent. The details vary with the child’s age (older kids’ opinions get more weight) and the nature of the research (kids can opt out of basic research, but the parents might be able to override a kid’s desire to, say, avoid a shot if it’s part of a potentially lifesaving clinical trial).
The company I work for used to have a big full-page opt-in form for text messages. Then one day it was pulled from the web site. Why? Because our usually very privacy-conscious legal team decided that if anyone gives us their phone number anywhere, it counts as consent to receive text messages.
Very sad, and in my opinion slimy. I’m glad I’m not on the social media team.
It has been my experience that people actually have no clue what they're signing up for. In a few reddit conversations today, I found people saying things like "sure they can see what websites I'm connected to, but my important information is encrypted, so I don't really mind". Sorry mate, they can see that too.
Encrypted data can be read if you install a root CA, which is what the Facebook app does, but the Google version does not appear to do that.
There’s an “enterprise certificate”—installing the enterprise certificate allows you to side-load applications. This is relatively benign. Both Facebook and Google do this, in both cases apparently a violation of Apple policy.
There’s a “root certificate”: installing the root certificate allows you to do MitM attacks and read encrypted traffic like messages, bank passwords, etc. The Facebook app appears to do this, and I would characterize it as reckless, irresponsible, and unambiguously unethical.
User opt-in does not excuse violating Apple’s terms of use. Google _likely_ did not get Apple’s sign-off for this approach. If they did not, they will _likely_ see their enterprise certificate terminated for precisely the same reason.
Still playing devil's advocate : I am more perplexed by the necessity to have Apple's approval than anything else here.
Sure, this particular app is debatable... but I have also worked in the music streaming industry. While being super respectful of our users, we still sometimes had to wait months for Apple's approval.
Having a single agent being able to gatekeep what you can install on your phone at their own discretion is an issue since there will always be the temptation to prevent any competitor from getting in your space.
> I am more perplexed by the necessity to have Apple's approval than anything else here.
As I understand it, it’s because the apps were being distributed using a method that is supposed to be used only inside the company. Like for beta testing software, or for in-house applications used by employees only. Anything going to the general public is supposed to go through the App Store under Apple’s terms and conditions.
That doesn't require Apple to lock down the platform. All it requires is for them to offer a store of Apple reviewed apps. You can choose those apps, other people can choose apps from other stores.
You're basically saying you like Blockbuster video because there's no porn and they only have the "edited for the Airlines" versions of movies.
Great, but other people would like to use their device for whatever they want.
Yes, I know you'll say "so buy a device from someone else". I don't agree with that any more than I think Ford should be able to make a car you're not allowed to drive anywhere Ford says you're not. If Ford did that, I don't think the answer should be "if you want to drive other places, buy a car from someone else". IMO the answer should be that it's illegal for Ford to control my car to that level.
I'm hoping Apple loses the case against their monopoly on the App Store (although given the details of the case I don't think this particular one will succeed, so I'll have to wait for another).
That's a complete non-sequitur. The complaint was that the owner of the device doesn't have the choice to install software that wasn't approved by Apple. The fact that you only want to install software approved by Apple is completely irrelevant to the question of whether you should be able to install whatever you want, because your ability to install whatever you want does not in any way affect your ability to only install software approved by Apple.
You only buying Nike shoes does not require that all other companies be banned from selling shoes, you only eating McDonald's burgers does not require all other restaurants to be closed, and you only buying software through Apple does not require that there be no other way to install software; that's just authoritarian bullshit.
Yes and no, see other replies for nuance. Apple controls software distribution for their platform, in exactly the same way (in principle) that games console manufacturers have done so for many decades. They do so in the same way Atari controlled software distribution for the 2600 in 1977 and Nintendo controls it for the Switch today, or that Volvo does for software performance packs for their vehicles. All these and many, many, many more are closed platform devices you buy, with optional software features you can purchase from the vendor.
It's their platform, they can choose the rules behind app distribution. They chose to allow it through methods controlled by themselves. If a third party wants to distribute apps, then they have to abide by the terms set by Apple.
Yes, but 'inside the company' and 'testing' are vague. On one hand you have Larry Page testing the latest version of the Gmail app, clearly within bounds. But what if it's a contractor using an app filled with analytics that won't be released to the public? Then what if it's a focus group with 5 people, then a large study like this, etc.?
That's precisely what TestFlight is for. Internal use, testing, and production each have their avenues: enterprise certificate, TestFlight, and the App Store.
Any time you have a human element to a process, there are bound to be unforeseen consequences. For all we know, their review process could be devoid of SLAs for completing reviews, and agents could sit on reviews for months for arbitrary reasons because no one is following up on them.
We're a large NFP but not a particularly large organization, so I'm not sure if we get preferential treatment, but our business lead has a direct point of contact with an App Store representative and has used it to get extended information about, and from our perspective force through, app releases stuck in review.
He isn't the type of person to sit idly by when a process is taking an abnormal amount of time so part of me thinks it isn't a case of preferential treatment but rather the squeaky wheel getting the grease.
I'm sure having someone he can get on a phone and hold accountable goes a long way. I'm still unclear how he managed that.
As somebody who's read some of those privacy policies, I'm not sure I understand what I'm agreeing to.
For example, in G Suite they only claim not to mine your data for advertising purposes. Does that mean 'we only use your data to provide the service', or does it mean 'we use your data for everything but advertising'?
Many people I know barely even know how their data is or isn't used, and are shocked by recent news stories. That's not informed consent imo
Based on direct involvement in the process of writing one such policy for one such big internet co., my view is that the confusion and vagueness is largely deliberate. A small part of it is the result of too many people involved pulling in different directions.
I think they mean the research data won't be used directly to target ads or try to make sales to you (standard guideline for market research), rather than the second order effect of allowing them to improve their overall ad targeting which then affects you.
Not necessarily: the data could be used to understand overall usage patterns and preferences in the target population as a whole, much in the same way as traditional surveys and focus groups. If it is indeed fed into ML that then tries to match the data back to the actual participants, then I agree that that is misrepresentation and probably criminal.
Apart from the several times they've been fined in the EU and various other countries for (summarising here) slurping data, lying about deleting it, favoring their own services, and so on?
Of course, such fines (etc) are subject to potential change over time because lawyers. But they're definitely not a bastion of good actions and credibility. :(
Does that matter? That data will be around forever (longer than Google). Do you trust every person (and lawyer) Google will ever hire from here on out, and every company who will purchase that data once Google is gone?
Actually Google anonymizes the data and deletes it after analysis (I have no idea if they do but since we're just making things up with no basis in fact my guess is just as valid if not more valid than yours)
More valid. Here's something that might surprise you all: my 20-year-old anonymized data is as worthless to Google as it is to me, so no, I don't care if someone gets hold of it after Google no longer exists.
> People who signup for this already know what they are doing
I've seen this argument a few times recently. It's not clear to most people what they're signing up to. They've clicked through a EULA, that somewhere in thousands of lines mentions a separate Privacy Policy which they would then have had to load, in order to decipher more thousands of lines of legalese to find the few lines of 'active ingredient' which would tell them what's actually being recorded.
Even when it does technically declare somewhere in there what "might be" recorded (usually in terms like "we record data such as [...]" which don't actually limit what they can record), the company itself will handwave it away as "we're just covering ourselves in case we have to write your name down if you call us for tech support" or similar inanities.
The fact that so many people were surprised about the Cambridge Analytica scandal when selling such access is explicitly Facebook's business model and everyone participated voluntarily should tell you everything you need to know about how much the average user of these services "knows what they're doing".
Devil's advocate is exactly the right word for defending these practices - they're like a Disney villain holding a giant contract scroll and a fountain pen dipped in blood. Don't worry about the fine print, just click ACCEPT, and I'll give you what you want, right?
> Google will send you a router to intercept your entire household's internet traffic on all devices with a browser
Yes, shocking that requesting a router that will report "the sites you visit, device IP address, cookies, and diagnostic data" for market research will take a look at my household's internet traffic...
It's just Nielsen but from Google (Nielsen these days literally works by putting a microphone in your house and collecting all your internet activity). The novelty of this story doesn't have anything to do with these things, just the iOS app.
Exactly. The parent comment completely misses what the real issue in this story and the Facebook story is. It was never really about the research app itself, but rather about how they used Enterprise keys for non-enterprise use cases. The app itself is entirely legitimate, extremely clear in what it does, opt-in and completely separate from the rest of the ecosystem.
Anyone on HN understands the implications of this. But does your average user? I doubt it. They just know they can install a little box and get paid. Digital privacy means nothing to far too many people
I don't think so. For instance, people I spoke with on reddit who installed the FB app thought that due to SSL, all their communications would be encrypted, and FB would only see who they were talking to, not what. Of course, the entire point of the root cert is to break SSL.
What? How can you make that argument? The average person has absolutely no clue of the various ways websites are tracking them already, let alone the potential amount of data Google would be getting by aggregating all this through their router.
Yessir, how could there be a problem here? And all those third-parties consented to their communication intercepted and stored by, well, another third-party?
> Router could mitm and chrome could allow it since they have google CAs in their chain.
That would require Google either sending the traffic back to Google and out again (slowing a lot of things down) or putting a signed private key on the router themselves (a violation of CA agreements and a remarkably stupid thing to do in general). If they did that, it would not be difficult for someone to extract the key and certificate, which would be a huge security breach.
The decryption doesn’t have to happen in real-time since it’s just analytics. Dumping all traffic off to google doubles bandwidth but could be done in a way to minimize slowdown for users.
I agree that it’s a security breach, but it happens all the time. Look at enterprise products like ForcePoint [0] that will do deep inspection on https sessions because they have custom CA installed on enterprise clients. Many companies do this.
Because it’s their router hardware, it would be possible to prevent anyone from extracting the intermediate MitM certs and keys. The data are likely sensitive, but that’s data they already have.
Tools like ForcePoint don’t put a “real” CA cert on the device. They typically create a new CA per device, install it into the downstream client’s trusted roots, and then generate MitM certs signed with that new CA.
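For the curious, that per-device flow can be sketched with plain openssl (hypothetical names; real inspection products automate this and mint the leaf cert on the fly per intercepted hostname):

```shell
# 1. Generate a per-device CA. This is what gets installed into the
#    client's trusted roots -- the step that makes interception possible.
openssl req -x509 -newkey rsa:2048 -nodes -keyout mitm-ca.key \
    -out mitm-ca.crt -days 30 -subj "/CN=Per-Device Inspection CA"

# 2. For each intercepted hostname, create a key and signing request...
openssl req -newkey rsa:2048 -nodes -keyout leaf.key \
    -out leaf.csr -subj "/CN=www.example.com"

# 3. ...and sign it with the per-device CA.
openssl x509 -req -in leaf.csr -CA mitm-ca.crt -CAkey mitm-ca.key \
    -CAcreateserial -out leaf.crt -days 7

# 4. A client that trusts the per-device CA will accept the forged leaf.
openssl verify -CAfile mitm-ca.crt leaf.crt
```

Note the forged leaf verifies only against the attacker-controlled CA, not the public roots; the whole scheme depends on controlling what the client trusts.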
Is it a little baffling to anyone else that Facebook's entire plan if Apple killed their research-app Onavo iOS workaround was "loudly point out that Google does it too"?
Surely there must have been more to their contingency plan? They have great engineers and I'm surprised there wasn't more than a PR response up their sleeve. Just for example, in retrospect, why did they use the mainline Facebook iOS enterprise certificate to sign this app rather than a cert from one of their subsidiaries or acquisitions -- wouldn't that have de-risked a bit?
> Is it a little baffling to anyone else that Facebook's entire plan if Apple killed their research-app Onavo iOS workaround was "loudly point out that Google does it too"?
Not to me. That’s pretty much how I expect Facebook to operate these days.
Sadly, it’s not just Facebook. Pretty much every time any article posted on HN points out how Company X is misbehaving the thread is flooded with “But... but... Company Y does it, too!” It’s like the SV bubble falls back on the logic of a five-year-old whenever they get caught with their hands in the cookie jar.
This happened with Apple too in the past when the news reported on the working conditions at Foxconn. But to be fair, if two 5-year-old kids threw trash on the ground and the adults only punished one of them... you can imagine what happens.
I'm pretty happy the way that Facebook is signaling the security of iOS by complaining that they have no other way to break into phones except by social engineering people to install root certificates.
Sorry to disappoint, but I think this is just one of many ways. Example: as an iOS dev, you want to advertise on FB. For that to be effective, you want to track conversions. The easiest way to do that, especially for a small dev, is to add the Facebook SDK to the app. And you're done: Facebook can potentially hoover up a lot of data from an app that has no obvious relation to it.
> Facebook expanded its work with Definers Public Affairs, a Washington public relations firm, in October 2017 after enduring a year's worth of external criticism over its handling of Russian interference on its social network.
> Definers Public Affairs wrote dozens of articles criticizing Google and Apple for their business practices while downplaying the impact of Russia's misinformation campaign on Facebook.
> Facebook also used the firm to push the idea that liberal financier George Soros was behind a growing anti-Facebook movement.
> Definers began doing some general communications work, such as running conference calls for Facebook. It also undertook more covert efforts to spread the blame for the rise of the Russian disinformation, pointing fingers at other companies like Google.
> A key part of Definers’ strategy was NTK Network, a website that appeared to be a run-of-the-mill news aggregator with a right-wing slant. In fact, many of NTK Network’s stories were written by employees at Definers and America Rising, a sister firm, to criticize rivals of their clients, according to one former employee not allowed to speak about it publicly. The three outfits share some staff and offices in Arlington, Va.
> The social network secretly hired a PR firm to plant negative stories about the search giant, The Daily Beast's Dan Lyons reveals—a caper that is blowing up in their face, and escalating their war.
Yes, because these systems inherently violate the privacy of parties that didn't consent to the spying. This includes private individuals, and competing companies with whom the "consenting" individual interacts.
A thought experiment: imagine a corporate TOS including a clause that specifically prohibits use of devices/software that violates the provider's privacy. E.g. an end user's account can be terminated because they're using Google/FB/other "voluntary" spyware...
So how many individuals and organizations do I need to get permission from to install something on my phone?
All these privacy histrionics are supplanting all other individual rights; personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.
> So how many individuals and organizations do I need to get permission from to install something on my phone?
The problem isn't about installing something on your phone, it's about handing over every single private communication you have with others without getting their approval. It contradicts your first assumption that people have _opted in_.
> All these privacy histrionics are supplanting all other individual rights; personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.
Are you suggesting that no consent is necessary for you to put other people's private information up for sale?
You may have the right to publish some emails when necessary. It is a whole different thing to sell every bit of private correspondence to a third party in secret. Not only is it a betrayal of trust, a quick search on the internet suggests that it may be illegal in some cases[1].
This is excessively reductionist logic. There's a huge gulf between, for example, the snarky reply below, a la "forwarding emails is illegal", and "an app is en masse siphoning all communications between its users and others".
The privacy backlash is precisely about people becoming more aware of the latter class of behavior and rebelling against it. Complaining that "no one would be able to do anything" is a straw-man without relevance to the actual social conversation going on right now.
You seem to be operating on the assumption that you're defending something other than the status quo.
(A quick example of this brand of individualism that offers the individual right, non-declinable, to be analyzed and sanctioned by the government: https://news.ycombinator.com/item?id=18704330)
> does a privacy maximalist mentality need to be imposed on everyone?
I see you follow Google closely; close enough to know that privacy concerns have hardly impeded its growth and dominance. Same with all other major tech companies.
Does a privacy minimalist mentality need to be imposed on everyone? (I'm asking rhetorically. In either form, it's not a substantial argument: it's a strawman. Privacy isn't a measurable quantity, and each person or community cares about protecting or revealing different things.)
> does a privacy maximalist mentality need to be imposed on everyone?
I kinda think so, yeah. Everyone flies past the privacy stuff, or accepts compromises, or is offered free Farmville points, or extra coins, or a shiny new feature in exchange for it; their friends are on it, their favorite celebrity is on it, etc.
People will opt-in for all those reasons not realizing or seeing what they gave up or its consequences.
People don't know what they're consenting to. In reddit conversations I've seen people say that they're fine with the app because "everything private is encrypted with SSL". They don't realize the whole point of this is to get around that SSL encryption.
I'll submit this for consideration: you're right, it's 100% fine from an individual standpoint. But there's an aggregate effect of some sort that is a concern. Every move like this changes people's standards and expectations. Call it a "cultural shift", and call us "conservatives" along this particular axis. If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.
So what does that imply we should do? I'm not sure. Maybe simply push back as we are. Maybe try to impress onto opters-in just what they're selling, and maybe they'll reconsider. Maybe ask Google to simply brand this differently. Call it the "Truman Show Package". So at least everyone is aware that, while the data being collected is valuable, and while everything is 100% a-ok so long as everyone consents, this is NOT NORMAL and nobody should accept it as such.
> If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.
So, people who fit that "if" clause. I assume it's a significant number of people here.
I "love" how the rewards are gift cards, not actual money. How much more insulting can they be: your privacy is not even worth liquid currency. Some people probably need the gift cards, but it's as upsetting as people selling blood for gift cards.
Isn't this some tax thing? If they pay you money, they have to file a 1099 with the IRS. If they give you a gift card, it's up to you to record it as a gift on your taxes.
Well, think of the other, possibly bigger perk: they get your business and loyalty again in the future at a discount, since the card was given in an enticing amount (e.g. "$50", which after markup only really costs the business $28).
Don't know why you're downvoted - sending prepaid debit cards has taken the place of cutting a check in a lot of instances. When I cancelled DirecTV years ago, they sent my refund as one.
...or just maybe, some people have different value judgements than you do. Those people also probably think statements like yours are paternalistic and condescending.
Great, just show me where FB got ethics board approval on both the consent forms and the nature of data gathered on underage youth, and then maybe I'll believe that people were properly told of the risks involved.
Is the idea that people have different values or even morals a wild idea? Else how could 2 people see abortion as either self-determination for women vs actual infanticide?
Are they physical gift cards, or e-gift card codes? If the latter, one obvious advantage over other payment methods is that they can be delivered by email or by a web page.
I think most people would prefer that to receiving a check in the mail, or cash in the mail, or supplying banking details for a direct deposit.
They seem to have cards available for multiple merchants. If they have a decent selection there should be something available from a merchant a given subject actually buys things from. If so, a gift card is pretty much as good as cash.
Gift card code distribution is also much easier to automate and track. I'm in a small research lab and we recently moved from cash to gift cards. It's much easier to distribute both physically and digitally.
It could also be that the camera is not used right now but will be in the future. That would just require updating the terms and conditions; no need to send new hardware.
Chances are such an update will not be cognitively registered for many of these consumers, who originally read that the camera would not function when signing up for the product. It's shady as shit no matter how you slice it.
And people wonder why I didn't want a Nest. I was already skeptical before it was bought out, and then I learned that it wouldn't work without an Internet connection.
My girlfriend has a Nest at her work which saved her last week when the power cycled and the heating system turned off. She was able to restart it from home.
I wouldn't be so thrilled to have a thermostat that requires internet to function at home. Or is that only for those networked features?
It wouldn't have been sufficient to save the building's pipes at -30°C in a restaurant full of embedded draft lines, however.
Nest did give her control over the entire building's system. IIRC it was actually an employee that shut the system off after a power cycle/outage thinking they were protecting the equipment, but failed to restart it. If she hadn't had the insight the system would have been off all night.
Better still, the router hardware Google sells as a product (OnHub) requires a Google account to work [0].
So consumers are paying to buy a router that will allow google to mine and link all browsing behavior to their google account.
Also, if you own a google home, it won't work without all sorts of permissions being enabled at the account level, including web activity and app history [1]
Both of these fantastic privacy-violating products are available for purchase at an electronics retailer near you.
> Google's project includes tracking of desktop and laptop internet activity via a browser extension that can basically read literally anything you do online
I wonder why a router is really useful to them. With most sites using https, not much beyond DNS requests and plain IPs can be captured, without forcing users to install a CA.
> With most sites using https, not much beyond DNS requests and plain IPs can be captured
Agreed, but don't minimize the value of logging all DNS requests. You can get an unbelievable amount of deeply personal information from having a list of every DNS lookup. As an experiment, fire up a pi-hole and look at the logs of your own requests. There will likely be a lot of info in there you wouldn't want public.
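To make that concrete, here's a minimal sketch (hypothetical, dnsmasq/pi-hole-style log lines assumed) of how much a per-device browsing profile falls out of DNS metadata alone, with no decryption of anything:

```python
from collections import Counter

# Hypothetical pi-hole/dnsmasq-style query log: timestamp, process,
# "query[A]", queried domain, "from", client IP on the LAN.
log_lines = [
    "Jan 30 09:01:02 dnsmasq[123]: query[A] mail.example-bank.com from 192.168.1.10",
    "Jan 30 09:01:05 dnsmasq[123]: query[A] forum.health-support.example from 192.168.1.10",
    "Jan 30 09:02:11 dnsmasq[123]: query[A] jobs.example-competitor.com from 192.168.1.12",
    "Jan 30 09:02:30 dnsmasq[123]: query[A] mail.example-bank.com from 192.168.1.10",
]

def domains_by_client(lines):
    """Group queried domains per client IP -- pure metadata, no payloads."""
    seen = {}
    for line in lines:
        parts = line.split()
        if "query[A]" in parts:
            i = parts.index("query[A]")
            domain, client = parts[i + 1], parts[i + 3]
            seen.setdefault(client, Counter())[domain] += 1
    return seen

# Each device's banking, health, and job-hunting interests are visible
# to whoever runs the resolver (or the router in front of it).
for client, counts in sorted(domains_by_client(log_lines).items()):
    print(client, dict(counts))
```

The domain names here are made up, but the shape of the leak is the point: who banks where, who is researching what, and which device in the house is doing it.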
Okay, so this is a throwaway account whose only posts are in defense of Facebook.
Readers on HN, you’re supposed to be far more skeptical and employ your Young Reaganite “Trust But Verify” glasses before upvoting blindly like this. Even if the facts are correct, the talking points are clearly presented as (in the favorite words of so many on here) “submarine PR.”
> Please don't impute astroturfing or shillage. That degrades discussion and is usually mistaken. If you're worried about it, email us and we'll look at the data.
- Google will send you a router to intercept your entire household's internet traffic on all devices with a browser (https://support.google.com/audiencemeasurement/answer/757439...)
- Google will send you a device that listens 24/7 to audio in the room to figure out what you are watching on TV and listening to (https://support.google.com/audiencemeasurement/answer/757476...)
- Google's project includes tracking of desktop and laptop internet activity via a browser extension that can basically read literally anything you do online (https://support.google.com/audiencemeasurement/answer/757448...)