This isn't just trying to figure out what new up-and-coming apps are going to be the next big thing, this is Google building out very far-reaching profiles of your entire household, in return for some gift cards. This is signing away your family's entire digital life (and a significant part of anyone they interact with in a browser).
I'll go ahead and play the Devil's advocate because every constructive conversation needs one.
People who sign up for this already know what they are doing, and the program has a privacy section[0] saying that the data is only shared with Google, which is pretty much akin to having any smart speaker. Not only that, it also says that the data won't be used to "advertise to you or sell you anything", which is not the case with smart speakers.
In essence, from a privacy point of view, I'd say this is better than having a smart speaker.
Not everyone who signed up, or who was signed up by someone else, is capable of giving informed consent (those under 18). There is also a lack of understanding of exactly how much information is being collected: it's one thing to say 'we will track all your traffic' and another to understand the long-term implications of that action.
Also, I'd just like to call out how much of an outright lie the phrase "Your [...] data will never be shared or sold to anyone outside of Google" is. If Google were to go into bankruptcy procedures at some point in the future, that data is simply another asset which will be sold off to someone else, with no respect for the privacy policy it was collected under (not to mention how privacy policies can change and retroactively cover data already collected).
Typically when children participate in a study, it's their parents who have to consent. I don't know how scientific research would work any other way.
Also, what do we know about the contract they signed? It's probably not the regular privacy policy.
Informed consent is kind of a grey area. How much do you normally know about the consequences of your actions? How can you prove that you know what you're doing? Should your freedom to enter an agreement be restricted if you can't prove that you know what you're doing?
Legally, you prove consent by signing a contract. Sometimes people go above and beyond that to make sure people really and truly know what they're getting into, but there's a question how far you should go.
I don't think we can know just from reading this article how informed the people signing up were. They didn't interview anyone to find out.
Typically when children participate in a study, it's also been previously approved by an Institutional Review Board, and the IRB expects to either see that the potential harm is minimal, or that the potential benefit is easily great enough to justify the risks.
It's hard for me to see either of those cases being a slam dunk here. There's at least some potential for serious harm in the event of a data breach. And the potential benefit - an already rich company getting even richer - doesn't carry a whole lot of moral weight.
I believe IRBs are only required for federally funded research such as that performed by universities.
I’ve never heard of corporate market research being reviewed and approved by an IRB. Not sure how that would work since the board would hardly be independent anyway, but I’d guess the main reason it doesn’t happen is there is no law requiring it.
In most places, you need to be very, very sure that research subjects understand what they are agreeing to.
Passively reading something usually isn’t enough. The minimum is often some kind of interactive explanation that includes a way for subjects to ask questions. I’ve heard tell of groups giving short “quizzes” for experiments with weird requirements and side effects.
For kids specifically, you often need to get both the parents’ consent and the child’s assent. The details vary with the child’s age (older kids’ opinions get more weight) and the nature of the research (kids can opt out of basic research, but the parents might be able to override the kid’s desire to (say) avoid a shot if it’s part of a potentially lifesaving clinical trial).
The company I work for used to have a big full-page opt-in form for text messages. Then one day it was pulled from the web site. Why? Because our usually very privacy-conscious legal team decided that if anyone gives us their phone number anywhere, it counts as consent to receive text messages.
Very sad, and in my opinion slimy. I’m glad I’m not on the social media team.
It has been my experience that people actually have no clue what they're signing up for. In a few reddit conversations today, I found people saying things like "sure they can see what websites I'm connected to, but my important information is encrypted, so I don't really mind". Sorry mate, they can see that too.
Encrypted data can be read if you install a root CA, which is what the Facebook app does, but the Google version does not appear to do that.
There’s an “enterprise certificate”—installing the enterprise certificate allows you to side-load applications. This is relatively benign. Both Facebook and Google do this, in both cases apparently a violation of Apple policy.
There’s a “root certificate”—installing the root certificate allows you to do MitM attacks and read encrypted traffic like messages, bank passwords, etc. The Facebook app appears to do this and I would characterize this as reckless, irresponsible, and unambiguously unethical.
User opt-in does not excuse violating Apple’s terms of use. Google _likely_ did not get Apple’s opt-in for this approach. If they did not, they will _likely_ see their enterprise certificate terminated for precisely the same reason.
Still playing devil's advocate: I am more perplexed by the necessity to have Apple's approval than anything else here.
Sure, this particular app is debatable... but I have also worked in the music streaming industry. While being super respectful of the users, we still sometimes had to wait for months for Apple's approval.
Having a single agent being able to gatekeep what you can install on your phone at their own discretion is an issue since there will always be the temptation to prevent any competitor from getting in your space.
> I am more perplexed by the necessity to have Apple's approval than anything else here.
As I understand it, it’s because the apps were being distributed using a method that is supposed to be used only inside the company. Like for beta testing software, or for in-house applications used by employees only. Anything going to the general public is supposed to go through the App Store under Apple’s terms and conditions.
That doesn't require Apple to lock down the platform. All it requires is for them to offer a store of Apple reviewed apps. You can choose those apps, other people can choose apps from other stores.
You're basically saying you like Blockbuster video because there's no porn and they only have the "edited for the Airlines" versions of movies.
Great, but other people would like to use their device for whatever they want.
Yes, I know you'll say "so buy a device from someone else". I don't agree with that any more than I think Ford should be able to make a car you're not allowed to drive anywhere Ford says you're not. If Ford did that, I don't think the answer should be "if you want to drive other places, buy a car from someone else". IMO the answer should be that it's illegal for Ford to control my car to that level.
I'm hoping Apple loses the case against their monopoly on the App store (although given the details of the case I don't think this particular case will succeed so I'll have to wait for another)
That's a complete non-sequitur. The complaint was that the owner of the device doesn't have the choice to install software that wasn't approved by Apple. The fact that you only want to install software approved by Apple is completely irrelevant to the question of whether you should be able to install whatever you want, because your ability to install whatever you want does not in any way affect your ability to only install software approved by Apple.
You only buying Nike shoes does not require that all other companies are banned from selling shoes, you only eating McDonald's burgers does not require all other restaurants to be closed, and you only buying software through Apple does not require that there is no other ways to install software either, that's just authoritarian bullshit.
Yes and no, see other replies for nuance. Apple controls software distribution for their platform, in exactly the same way (in principle) that games console manufacturers have done so for many decades. They do so in the same way Atari controlled software distribution for the 2600 in 1977 and Nintendo controls it for the Switch today, or that Volvo does for software performance packs for their vehicles. All these and many, many, many more are closed platform devices you buy, with optional software features you can purchase from the vendor.
It's their platform, they can choose the rules behind app distribution. They chose to allow it through methods controlled by themselves. If a third party wants to distribute apps, then they have to abide by the terms set by Apple.
Yes, but 'inside the company' and 'testing' are vague. On one hand you have Larry Page testing the latest version of the Gmail app - clearly within bounds - but what if it's a contractor using an app filled with analytics that won't be released to the public? Then what if it's a focus group with 5 people, then what if it's a large study like this, etc.
That's precisely what TestFlight is for. Internal use, testing, and production each have their own avenue: enterprise certificates, TestFlight, and the App Store.
Any time you have a human element to a process, there are bound to be unforeseen consequences. For all we know, their review process could be devoid of SLAs for completing reviews, and agents could sit on reviews for months for arbitrary reasons because no one is following up on them.
We're a large NFP but not a particularly large organization, so I'm not sure if we get preferential treatment, but our business lead has a direct point of contact with an App Store representative and has used it to get extended information about, and (from our perspective) force through, app releases stuck in review.
He isn't the type of person to sit idly by when a process is taking an abnormal amount of time so part of me thinks it isn't a case of preferential treatment but rather the squeaky wheel getting the grease.
I'm sure having someone he can get on a phone and hold accountable goes a long way. I'm still unclear how he managed that.
As somebody who's read some of those privacy policies, I'm not sure I understand what I'm agreeing to.
For example, in G Suite they only claim not to mine your data for advertising purposes. Does that mean 'we only use your data to provide the service', or does it mean 'we use your data for everything but advertising'?
Many people I know barely even know how their data is or isn't used, and are shocked by recent news stories. That's not informed consent imo
Based on direct involvement in the process of writing one such policy for one such big internet co., my view is that the confusion and vagueness is largely deliberate. A small part of it is the result of too many people involved pulling in different directions.
I think they mean the research data won't be used directly to target ads or try to make sales to you (standard guideline for market research), rather than the second order effect of allowing them to improve their overall ad targeting which then affects you.
not necessarily - the data could be used to understand overall usage patterns and preferences in the target population as a whole - much in the same way as traditional surveys and focus groups. If indeed it is put into ML which then tries to match the data with the actual participants, then I agree that that is misrepresentation and probably criminal.
Apart from the several times they've been fined in the EU and various other countries for (summarising here) slurping data, lying about deleting it, favoring their own services, and so on?
Of course, such fines (etc) are subject to potential change over time because lawyers. But they're definitely not a bastion of good actions and credibility. :(
Does that matter? That data will be around forever (longer than Google). Do you trust every person (and lawyer) Google will ever hire from here on out, and every company who will purchase that data once Google is gone?
Actually Google anonymizes the data and deletes it after analysis (I have no idea if they do but since we're just making things up with no basis in fact my guess is just as valid if not more valid than yours)
More valid - here's something that might surprise you all - my 20-year-old anonymized data is as worthless to Google as it is to me - so yes, I don't care if someone gets hold of it after Google no longer exists.
> People who sign up for this already know what they are doing
I've seen this argument a few times recently. It's not clear to most people what they're signing up to. They've clicked through a EULA, that somewhere in thousands of lines mentions a separate Privacy Policy which they would then have had to load, in order to decipher more thousands of lines of legalese to find the few lines of 'active ingredient' which would tell them what's actually being recorded.
Even when it does technically declare somewhere in there what "might be" recorded (usually in terms like "we record data such as [...]" which don't actually limit what they can record), the company itself will handwave it away as "we're just covering ourselves in case we have to write your name down if you call us for tech support" or similar inanities.
The fact that so many people were surprised about the Cambridge Analytica scandal when selling such access is explicitly Facebook's business model and everyone participated voluntarily should tell you everything you need to know about how much the average user of these services "knows what they're doing".
Devil's advocate is exactly the right word for defending these practices - they're like a Disney villain holding a giant contract scroll and a fountain pen dipped in blood. Don't worry about the fine print, just click ACCEPT, and I'll give you what you want, right?
> Google will send you a router to intercept your entire household's internet traffic on all devices with a browser ()
Yes, shocking that requesting a router that will report "the sites you visit, device IP address, cookies, and diagnostic data" for market research will take a look at my household's internet traffic...
It's just Nielsen but from Google (Nielsen these days literally works by putting a microphone in your house and collecting all your internet activity). The novelty of this story doesn't have anything to do with these things, just the iOS app.
Exactly. The parent comment completely misses what the real issue in this story and the Facebook story is. It was never really about the research app itself, but rather about how they used Enterprise keys for non-enterprise use cases. The app itself is entirely legitimate, extremely clear in what it does, opt-in and completely separate from the rest of the ecosystem.
Anyone on HN understands the implications of this. But does your average user? I doubt it. They just know they can install a little box and get paid. Digital privacy means nothing to far too many people
I don't think so. For instance, people I spoke with on reddit who installed the FB app thought that due to SSL, all their communications would be encrypted, and FB would only see who they were talking to, not what. Of course, the entire point of the root cert is to break SSL.
What? How can you make that argument? The average person has absolutely no clue of the various ways websites are tracking them already, let alone the potential amount of data Google would be getting by aggregating all this through their router.
Yessir, how could there be a problem here? And did all those third parties consent to their communication being intercepted and stored by, well, another third party?
> Router could mitm and chrome could allow it since they have google CAs in their chain.
That would require Google either sending the traffic back to Google and out again (slowing a lot of things down) or Google putting a signed private key on the router itself (a violation of CA agreements and a remarkably stupid thing to do in general). If they did that, it would not be difficult for someone to extract that key and certificate. This would be a huge security breach.
The decryption doesn’t have to happen in real-time since it’s just analytics. Dumping all traffic off to google doubles bandwidth but could be done in a way to minimize slowdown for users.
I agree that it’s a security breach, but it happens all the time. Look at enterprise products like ForcePoint [0] that will do deep inspection on https sessions because they have custom CA installed on enterprise clients. Many companies do this.
Because it’s their router hardware, it would be possible to prevent anyone from extracting the intermediate MitM certs and keys. The data are likely sensitive, but that’s what they have already.
Tools like ForcePoint don’t put a “real” CA cert on the device. They typically create a new CA per device, install that into the downstream client CA trusted roots and then generate mitm certs signing with this new cert.
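To make the mechanics concrete, here's a minimal sketch in Python (using the `cryptography` package) of the per-device scheme described above: generate a fresh CA for one device, then mint short-lived leaf certs for each intercepted hostname, signed by that CA. This is purely illustrative; the function names are mine, and it's not ForcePoint's (or anyone's) actual implementation.

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa


def _new_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def make_device_ca():
    """Self-signed CA unique to a single device/client."""
    key = _new_key()
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Per-Device Inspection CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    return key, cert


def make_leaf_for_host(ca_key, ca_cert, hostname):
    """Mint a cert for `hostname` on the fly, signed by the device CA.

    A client that has the device CA in its trusted roots will accept
    this cert as if it were the real site's -- that's the whole MitM.
    """
    key = _new_key()
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=7))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert
```

The key point is the scoping: because each device gets its own CA, extracting one device's CA key only compromises that one device, unlike shipping a single publicly-trusted signed key on every router.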
Is it a little baffling to anyone else that Facebook's entire plan if Apple killed their research-app Onavo iOS workaround was "loudly point out that Google does it too"?
Surely there must have been more to their contingency plan? They have great engineers and I'm surprised there wasn't more than a PR response up their sleeve. Just for example, in retrospect, why did they use the mainline Facebook iOS enterprise certificate to sign this app rather than a cert from one of their subsidiaries or acquisitions -- wouldn't that have de-risked a bit?
> Is it a little baffling to anyone else that Facebook's entire plan if Apple killed their research-app Onavo iOS workaround was "loudly point out that Google does it too"?
Not to me. That’s pretty much how I expect Facebook to operate these days.
Sadly, it’s not just Facebook. Pretty much every time any article posted on HN points out how Company X is misbehaving the thread is flooded with “But... but... Company Y does it, too!” It’s like the SV bubble falls back on the logic of a five-year-old whenever they get caught with their hands in the cookie jar.
This happened with Apple too in the past when the news reported on the working conditions at Foxconn. But to be fair if two 5-year-old kids threw trash on the ground and the adults only punished one of them... you can imagine what happens.
I'm pretty happy the way that Facebook is signaling the security of iOS by complaining that they have no other way to break into phones except by social engineering people to install root certificates.
Sorry to disappoint, but I think this is just one of many ways. Example: as an iOS dev you want to advertise on FB. For that to be effective you want to track conversions. The easiest way to do that, especially for a small dev, is to add the Facebook SDK to the app. And you're done - Facebook can potentially hoover up a lot of data from an app that has no obvious relation to it.
> Facebook expanded its work with Definers Public Affairs, a Washington public relations firm, in October 2017 after enduring a years-worth of external criticism over its handling of Russian interference on its social network.
> Definers Public Affairs wrote dozens of articles criticizing Google and Apple for their business practices while downplaying the impact of Russia's misinformation campaign on Facebook.
> Facebook also used the firm to push the idea that liberal financier George Soros was behind a growing anti-Facebook movement.
> Definers began doing some general communications work, such as running conference calls for Facebook. It also undertook more covert efforts to spread the blame for the rise of the Russian disinformation, pointing fingers at other companies like Google.
> A key part of Definers’ strategy was NTK Network, a website that appeared to be a run-of-the-mill news aggregator with a right-wing slant. In fact, many of NTK Network’s stories were written by employees at Definers and America Rising, a sister firm, to criticize rivals of their clients, according to one former employee not allowed to speak about it publicly. The three outfits share some staff and offices in Arlington, Va.
> The social network secretly hired a PR firm to plant negative stories about the search giant, The Daily Beast's Dan Lyons reveals—a caper that is blowing up in their face, and escalating their war.
Yes, because these systems inherently violate the privacy of parties that didn't consent to the spying. This includes private individuals, and competing companies with whom the "consenting" individual interacts.
A thought experiment: imagine a corporate TOS including a clause that specifically prohibits use of devices/software that violates the provider's privacy. E.g. an end user's account can be terminated because they're using Google/FB/other "voluntary" spyware...
So how many individuals and organizations do I need to get permission from to install something on my phone?
All these privacy histrionics are supplanting all other individual rights, personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.
> So how many individuals and organizations do I need to get permission from to install something on my phone?
The problem isn't about installing something on your phone, it's about handing over every single private communication you have with others without getting their approval. It contradicts your first assumption that people have _opted in_.
> All these privacy histrionics are supplanting all other individual rights, personal accountability has to break out of this permission loop, even legally speaking, otherwise no one would be able to do anything.
Are you suggesting that no consent is necessary for you to put other people's private information on sale?
You may have the right to publish some emails when necessary. It is a whole different thing to sell every bit of private correspondence to a third party in secret. Not only is it a betrayal of trust, a quick search on the internet suggests that it may be illegal in some cases[1].
This is excessively reductionist logic. There's a huge gulf between, for example, the snarky reply below a la "forwarding emails is illegal" and "an app is en masse siphoning all communications between its users and others".
The privacy backlash is precisely about people becoming more aware of the latter class of behavior and rebelling against it. Complaining that "no one would be able to do anything" is a straw-man without relevance to the actual social conversation going on right now.
You seem to be operating on the assumption that you're defending something other than the status quo.
(A quick example of this brand of individualism that offers the individual right, non-declinable, to be analyzed and sanctioned by the government: https://news.ycombinator.com/item?id=18704330)
> does a privacy maximalist mentality needs to be imposed on everyone?
I see you follow Google closely; close enough to know that privacy concerns have hardly impeded its growth and dominance. Same with all other major tech companies.
Does a privacy minimalist mentality need to be imposed on everyone? (I'm asking rhetorically. In either form, it's not a substantial argument: it's a strawman. Privacy isn't a measurable quantity, and each person or community cares about protecting or revealing different things.)
> does a privacy maximalist mentality needs to be imposed on everyone?
I kinda think so, yeah. Everyone flies past the privacy stuff, or accepts compromises, or is traded free Farmville points, extra coins, or a shiny new feature in exchange for it; their friends are on it, their celebrity is on it, etc.
People will opt-in for all those reasons not realizing or seeing what they gave up or its consequences.
People don't know what they're consenting to. In reddit conversations I've seen people say that they're fine with the app because "everything private is encrypted with SSL". They don't realize the whole point of this is to get around that SSL encryption.
I'll submit this for consideration: you're right, it's 100% fine from an individual standpoint. But there's an aggregate effect of some sort that is a concern. Every move like this changes people's standards and expectations. Call it a "cultural shift", and call us "conservatives" along this particular axis. If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.
So what does that imply we should do? I'm not sure. Maybe simply push back as we are. Maybe try to impress onto opters-in just what they're selling, and maybe they'll reconsider. Maybe ask Google to simply brand this differently. Call it the "Truman Show Package". So at least everyone is aware that, while the data being collected is valuable, and while everything is 100% a-ok so long as everyone consents, this is NOT NORMAL and nobody should accept it as such.
> If we don't want to live in a world where companies pull this sort of shit on _every_ user to the point that the privacy-conscious among us have no choice, we want fewer people around us to be okay with it.
So, people who fit that "if" clause. I assume it's a significant number of people here.
I "love" how the rewards are gift cards, not actual money. How much more insulting can they be: your privacy is not even worth liquid currency. Some people probably need the gift cards, but it's as upsetting as people selling blood for gift cards.
Isn't this some tax thing? If they pay you money, they have to file a 1099 with the IRS. If they give you a gift card, it's up to you to record it as a gift on your taxes.
Well, think of the other, possibly bigger perk: they get your business and loyalty again in the future at a discount, since the card was given as an enticing amount (e.g. "$50", but after markup it only really costs the business $28).
Don't know why you're downvoted - sending prepaid debit cards has taken the place of cutting a check in a lot of instances. When I cancelled DirecTV years ago, they sent my refund as one.
...or just maybe, some people have different value judgements than you do. Those people also probably think statements like yours are paternalistic and condescending.
Great, just show me where FB got ethics board approval on both the consent forms and the nature of the data gathered on underage youth, and then maybe I'll believe that people were properly told of the risks involved.
Is the idea that people have different values or even morals a wild idea? Else how could 2 people see abortion as either self-determination for women vs actual infanticide?
Are they physical gift cards, or e-gift card codes? If the latter, one obvious advantage over other payment methods is that they can be delivered by email or by a web page.
I think most people would prefer that to receiving a check in the mail, or cash in the mail, or supplying banking details for a direct deposit.
They seem to have cards available for multiple merchants. If they have a decent selection there should be something available from a merchant a given subject actually buys things from. If so, a gift card is pretty much as good as cash.
Gift card code distribution is also much easier to automate and track. I'm in a small research lab and we recently moved from cash to gift cards. It's much easier to distribute both physically and digitally.
It could also be that right now the camera is not used, but in the future it will be. It would just require updating the terms and conditions. No need to send new hardware.
Chances are such an update will not be cognitively registered by many of these consumers, who originally read that the camera would not function when signing up for the product. It's shady as shit no matter how you slice it.
And people wonder why I didn't want Nest. I was already skeptical before it was bought out and I learned that without an Internet connection it wouldn't work.
My girlfriend has a Nest at her work which saved her last week when the power cycled and the heating system turned off. She was able to restart it from home.
I wouldn't be so thrilled to have a thermostat that requires internet to function at home. Or is that only for those networked features?
It wouldn't have been sufficient to save the building's pipes at -30°C in a restaurant full of embedded draft lines, however.
Nest did give her control over the entire building's system. IIRC it was actually an employee that shut the system off after a power cycle/outage thinking they were protecting the equipment, but failed to restart it. If she hadn't had the insight the system would have been off all night.
Better still, the router hardware Google sells as a product (OnHub) requires a Google account to work [0].
So consumers are paying to buy a router that will allow google to mine and link all browsing behavior to their google account.
Also, if you own a google home, it won't work without all sorts of permissions being enabled at the account level, including web activity and app history [1]
Both of these fantastic privacy violating products are available for Purchase at an electronics retailer near you.
> Google's project includes tracking of desktop and laptop internet activity via a browser extension that can basically read literally anything you do online
I wonder how useful a router really is to them. With most sites using HTTPS, not much beyond DNS requests and plain IPs can be captured without forcing users to install a CA.
> With most sites using https, not much beyond DNS requests and plain IPs can be captured
Agreed, but don't minimize the value of logging all DNS requests. You can get an unbelievable amount of deeply personal information from having a list of every DNS lookup. As an experiment, fire up a pi-hole and look at the logs of your own requests. There will likely be a lot of info in there you wouldn't want public.
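If you want to run that experiment, here's a rough Python sketch of the kind of tally you can do. It assumes a pi-hole's dnsmasq-format log at its default path; both the path and the line format are assumptions based on pi-hole defaults, so adjust for your own setup.

```python
import re
from collections import Counter

# dnsmasq-style query lines look roughly like:
#   Jan 30 13:01:01 dnsmasq[2514]: query[A] www.example.com from 192.168.1.100
QUERY_RE = re.compile(r"query\[\w+\]\s+(\S+)\s+from\s+(\S+)")


def tally_queries(log_path="/var/log/pihole.log"):
    domains, clients = Counter(), Counter()
    with open(log_path) as f:
        for line in f:
            m = QUERY_RE.search(line)
            if m:
                domain, client = m.groups()
                domains[domain] += 1
                clients[client] += 1
    return domains, clients


if __name__ == "__main__":
    domains, _ = tally_queries()
    for domain, n in domains.most_common(25):
        print(f"{n:6d}  {domain}")
```

Even this crude count exposes health sites, banks, and IoT devices phoning home; add timestamps and per-client breakdowns and you can tell roughly when each household member is awake.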
Okay, so this is a throwaway account whose only posts are in defense of Facebook.
Readers on HN, you’re supposed to be far more skeptical and employ your Young Reaganite “Trust But Verify” glasses before upvoting blindly like this. Even if the facts are correct, the talking points are clearly presented as (in the favorite words of so many on here) “submarine PR.”
> Please don't impute astroturfing or shillage. That degrades discussion and is usually mistaken. If you're worried about it, email us and we'll look at the data.
Google in recent years has felt like the Lyft to Facebook's Uber. It does a lot of the same evil things, but uses just enough restraint and decorum to let Facebook remain the one in the hot-seat. It avoids scrutiny by simply being less icky.
> Google in recent years has felt like the Lyft to Facebook's Uber. It does a lot of the same evil things, but uses just enough restraint and decorum to let Facebook remain the one in the hot-seat. It avoids scrutiny by simply being less icky.
Many of Google's services have a great deal more basic utility than anything Facebook provides. I think that also provides additional cover for them.
When Facebook pushes for more data collection, it almost always looks way more self-serving than when Google does.
Google also tends to be much more up-front about data collection, and seems to have better practices regarding keeping it from third parties.
That still doesn't make it good, but it helps. Whereas Facebook literally created a shell-company to mislead both users and Apple (who was trying to act in the users' best interest).
Stop spreading fake news. The app was literally called Facebook Research - I know you want to believe that there was a lot of hiding going on but it is really not the case.
Go visit https://play.google.com/store/search?q=facebook&c=apps&hl=fr . Are you able to tell which apps are supported by Facebook and which are not? For example, "Onavo Protect" is about 80th in the list; who in their right mind would think "yup, this one is totally owned by Facebook"?
I wouldn't ever think that anything called "Facebook XXX" is a Facebook app, even more so if it's clearly sold by a shell company. It was hiding, even though it was, ironically, in plain sight.
I know you want to believe the worst about Facebook. But it doesn't change the fact that the app was literally called Facebook Research and when you downloaded it had Facebook logos all over it - to make it clear that the app was from Facebook. Dive deeper, get data and then make arguments vs. having your feelings take the better of you.
Also, re: "Many of Google's services have a great deal more basic utility than anything Facebook provides"
I would so gladly pay money for a version of Google's services which was 1) client-side encrypted, 2) not mined whatsoever by Google - even the metadata, and 3) ad-free. They really do make useful, high-quality software, but I refuse to give them my life's data. I switched to using Apple for everything cloud-related, and while Apple Maps isn't as bad as people think, and Safari, Notes, and Calendar get the job done, they're also not as good as Google's equivalents.
The thing about G Suite is that they're paid versions of ad-based apps. I'm skeptical that Google doesn't still "mine the hell" out of the activity in them; at most it just suppresses the ad display in certain places. To do otherwise would have Google 1) spending money on privacy features that would 2) reduce tracking-based revenue. Market logic says they'd decide to continue to track their paid users.
If Apple had to punish Google the same way Google punishes its users, it would have to ban ALL Google apps permanently for life, suspend all related accounts permanently, and then shut down its email, YouTube, and AdSense accounts with the money still remaining in them.
Finally, Apple would have to stop responding to requests for the ban reason and send out automated emails saying that their decision is final and binding.
Also, a company that offers a wide variety of services and collects your personal data to distribute internally among those services seems to get a pass. Compared to companies that are more vertical and benefit by selling your data rather than using it themselves. Even though you've lost your privacy either way.
Can you elaborate more on Lyft? Been using it instead of Uber due to their issues but I wasn't aware of problems with them. Not entirely shocked, however.
They have the same business model, same opposition to regulation, same generally bad arrangement with employees (sorry, "contractors") and riders alike. When my city (Austin, Texas) passed a law in 2017 requiring fingerprints and government background checks for ride-share drivers, Uber and Lyft left the city in protest (other ride-share startups sprung up in their wake and did just fine, so the law clearly wasn't a great burden on businesses).
However they did not, as far as we know, track government officials' locations to evade investigation (https://www.nytimes.com/2017/05/04/technology/uber-federal-i...). It seems you have to have a man-child as your founder and CEO before you'll do something that brazen.
I think that's just the HN bubble? HN threads always seem weirdly pro-Google and anti-Facebook compared to my real-life friends' views (even when some of those friends work at Google).
It used to be like that, I think. In the last year or so, the threads became both anti-Google and anti-Facebook, and to much stronger degree than regular people (including techies).
I.e. here people at least notice and care. Every other community I've been a part of - at workplace, at hackerspace, at home - everyone's just "so they do spy on you, who cares ¯\_(ツ)_/¯".
Yes, I think this comes with the tech side of this platform. Many developers still dream of a job at Google and would take it in an instant. You would need a lot of mental gymnastics to call out Google's evil practices regarding privacy at the same time.
In my non-tech circle of friends both are equally regarded to as evil spying megacorps.
Google's approach appears to be less sinister than Facebook's approach (it doesn't obfuscate what the research is really for), but I suspect Apple's decision today to revoke FB's Enterprise cert will force them to also take action on Google's Enterprise cert. Well played, TechCrunch.
I might be naive in this but I hope Apple does revoke the certs, not because of the privacy implications (which I still do have concerns about), but because I would like to see the rules applied evenly. We’ve seen Apple apply the rules differently to big vs small developers but this time it’s Google and FB and by revoking FB certs for being in violation of terms, they must unquestionably revoke Google’s certs lest they bring on another bias debate.
The rules should not be applied the same to everyone. Apple needs to look out for its long term interests and it can deal with small companies in ways that it can not deal with Google or Facebook without harming said interests. It's sort of like the quip, "Owe the bank $10,000 and they control you. Owe the bank $1,000,000,000 and you control them." In the world of business there are unequal relationships.
What enterprise certs are meant for and what they are used for in practice are very different things. Apple themselves bought out the Testflight service and integrated it into their toolchain and it was built on using enterprise certs to distribute to external users.
It has always been used as a release valve for the standard restrictions of the app store process.
> Apple themselves bought out the Testflight service and integrated it into their toolchain and it was built on using enterprise certs to distribute to external users.
This isn't true; when you logged in with TestFlight, they pushed a configuration profile to you that would collect your UDID. The developer would then register those UDIDs with Apple and sign the app with an ad hoc provisioning profile / normal developer certificate, as Apple intends.
You could also distribute enterprise apps through TestFlight, but they did nothing specifically to help you do that, they were just a hosting service in that respect. They certainly didn't abuse enterprise certificates themselves, although somebody using their service could, just as they could use any hosting service to distribute the apps OTA.
We're talking about the old TestFlight service from before Apple acquired them.
It's the new TestFlight service that distinguishes between internal and external testers. There's very little similarity between the old and the new TestFlight beyond their general mission statements. They don't work in the same way at all and the new TestFlight was built from the ground up.
Depending on exact definitions, the people in an outside compensated "research panel" could be seen as "employed by" the company doing the research. There's consideration being exchanged for services rendered.
I don't think lawyering over the terms is what's important here. Instead what matters is whether Apple thinks it is better for its business to pull Google's enterprise cert.
Apple holds all the cards here. If it wants to punish Google over this, it will, and Google won't be able to do a thing about it, terms or no terms.
Apple has immense discretion here, of course. But a punitive enforcement action with potentially large spillover effects, in impairing other apps, is being discussed.
Whether that's considered justified, and even whether such an action might create a legal/antitrust liability for Apple, could depend on the actual terms/definitions Apple provided.
In the long run, Apple can adjust their terms subject only to a few legal and market checks. But in the short term, Apple should respect the terms they've offered, contractually, to other entities.
I don't think antitrust is relevant in this case, because Apple doesn't have a monopoly in smartphones. In fact, if anyone does, it's Google, since Android accounts for 75% of mobile phones worldwide [1].
True, kind of — it depends on the definition of “market power.” There is a dominant market share test used by the courts, generally a market share over 50% is required while some courts require a much higher percentage.
> Further, Apple has a 100% monopoly control of the iOS App Store.
That's irrelevant. Best Buy also has 100% monopoly control of Best Buy stores. That doesn’t mean that Best Buy has a monopoly on consumer electronics stores, nor does Apple have a monopoly on computer app stores.
True, Apple does have a monopoly when it comes to the iOS App Store... but no more of a monopoly than Baskin Robbins has for ice cream sold by Baskin Robbins. That doesn’t mean you can’t buy non-Baskin Robbins ice cream somewhere else, nor does it require Baskin Robbins to sell Häagen-Dazs. If you want to sell your stuff at a store, you have to follow the rules of that store and pay the commission. Just like there are other places to buy software for a mobile device, there are other places to buy ice cream.
There are much greater implications for consumer welfare, and competition across many tech/service/product markets, in App Store control than in branded ice cream.
I've actually suggested, not fully seriously, that my employer give us the option of taking part of our pay in Costco gift cards.
A Costco member can buy and load gift cards, and then give them to a non-member, and that card will admit the non-member to Costco stores and allow them to buy things there on the card.
Some of us wouldn't use Costco enough to justify buying a membership, but would like to use them occasionally, and this gift card hack would allow that.
Sure would. If that would pass, I would simply create a club, let everyone join up and call them members of my organization and ship apps through it, bypassing Apple, the 30% and App Store review all in one shot. Considering how I've been treated at the hands of the iTunes support team, I'd do it in a heartbeat.
Kind of disappointing that so many people just want to downvote this thought, but no one can quote (or even seems to care about) the actual terms that Google (and Facebook) are alleged to have violated.
All hail arbitrary Apple, unbound by contract or law! Save us from ourselves, Tim Cook!
"(f) Allow Your Customers to use Your Internal Use Applications on Deployment Devices, but only (i) on Your physical premises and/or on Your Permitted Entity’s physical premises, or (ii) in other locations, provided all such use is under the direct supervision and physical control of Your Employees or Permitted Users (e.g., a sales presentation to a Customer);"
Thanks! But, depending on the definitions of 'Employees' and 'Permitted Users' (and perhaps 'Permitted Entity'), that section's wording doesn't necessarily make the Facebook/Google use prohibited.
FWIW, I am a paid-up, registered member of the Apple Developer program, but still can't access that page, as it is apparently limited only to enterprise developers. So I can't check the definitions/details myself.
That limitation also helps me understand why so few, here or in the journalistic coverage I've seen, seem willing to quote the exact section violated. Those curious and willing to answer perhaps cannot access the formal wording, whereas those with the rights to access the terms may fear they'd be violating some obscure provision of their Apple agreements by merely quoting it.
Apple just took Google’s money to be the default integrated browser/Siri search engine in iOS. Apple’s morality on privacy has a price. The public outrage will need to be huge. Heh.
I remember a while back them talking about how they use Bing with Siri and about how Apple anonymizes the queries so Bing doesn't know who's actually doing them. I would imagine they'd apply the same anonymization to other search providers too.
Reminds me of the anonymized AOL search dump from back in the day[1]. Needless to say, the dump was a trove of personally identifiable information despite being "anonymized".
My vague recollection is Apple would actually take the Siri request and route it to Bing for you, so Bing doesn't know your IP, and they would attach a unique identifier that they'd rotate every 15-or-so minutes, so that way Bing can correlate multiple requests done back-to-back but loses any correlation with requests outside that 15-minute window.
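For what it's worth, that kind of rotating pseudonym is cheap to implement. Here's a tiny Python sketch based on my reading of the recollection above; it is emphatically not Apple's actual code, and the window length is just the "15-or-so minutes" mentioned.

```python
import hashlib
import secrets
import time

WINDOW_SECONDS = 15 * 60  # the "15-or-so minutes" from the comment above

# Secret that never leaves the relay; without it, the search provider
# cannot link one window's pseudonym to the next.
_device_secret = secrets.token_bytes(32)


def current_pseudonym(now=None):
    """Same ID for all requests in one window, unlinkable across windows."""
    window = int((now if now is not None else time.time()) // WINDOW_SECONDS)
    return hashlib.sha256(_device_secret + window.to_bytes(8, "big")).hexdigest()[:16]
```

Attach that ID to each proxied request and the provider can correlate a back-and-forth session, but requests in different windows look like different users.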
For those billions, Apple links to the Google search results page in Safari when you type a query in the iOS system search field. That page includes Google search ads. According to the 2017 annual report, Google made $77 billion in revenue from “Google properties revenues”, which is mainly search ads but also includes other sources such as YouTube. Ads served by Google elsewhere (such as AdMob or DoubleClick) aka “Google Network Members’ properties” are only $17 billion. Search intent is an extremely strong signal for ads even with no personal information, although the ads are probably personalized if the user is logged into Google in Safari.
$9 billion is the “traffic acquisition cost to distribution partners” in 2017, which includes payments to Apple as well as other entities that send people to Google properties. This cost comes out to only 11.6% of the associated revenue. It’s likely that the amount of money paid to Apple is contractually linked to the revenue Google gets from ad clicks that can be tracked back to a link from Apple.
So there’s really no need for theories about the abstract value of user data here, it’s a simple referral fee for the ad clicks that are Google’s main business.
For actually serving Siri search results directly, I'm not sure what benefit the search provider gets. My guess is they're really after the "Search Web" fallback if you're not satisfied by Siri's initial synopsis, which will take you to the search page (and therefore the search provider will now know who you are).
OK, thank you for clarifying. It does read that way though, basically "Google was not abusing it like FB; thanks to TechCrunch they will lose their cert anyway".
Wow, this is making me realize how much power Apple holds as the arbiter of enterprise certs. They hold more power to quickly punish other corporations than even the government does, in a certain respect. I hope they continue to leverage that power for good--iOS has tremendous market share and Apple can entirely force other companies to play by their rules.
From what I've read, the Facebook research app installed a new root certificate on the device which allowed man-in-the-middle interception of all encrypted internet traffic. Which is obviously a huge issue.
From what I've read, the Google app does not appear to be doing this.
I am assuming the main reason that Apple revoked the enterprise distribution certificate for Facebook is due to this man-in-the-middle attack on encrypted traffic. The fact that they are solely circumventing the Appstore using an enterprise distribution certificate is a different issue.
Is my understanding of all this true? It would seem to me that Google isn't really doing anything that bad and Facebook has had its enterprise distribution certificate revoked for good reason.
“We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization. Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.”
I wonder if Facebook may argue over what constitutes an "organization" or "consumer", because the sum of all Facebook users is clearly something... and as much as those users consume, they are also the ones ultimately contributing to Facebook's profits... it's not so black and white after all.
One thing I'll point out is that Comscore has been doing this with websites since the early 00s. They packaged it as some sort of "free website antivirus protection" or some related BS, and when you installed it, it installed a root certificate in your browser, specifically so that Comscore could read all your SSL traffic for ecommerce purchasing analytics. I even remember that if you used their uninstaller to uninstall it, it left the root cert in place. IIRC Comscore at one point claimed millions of installed users.
Google is doing exactly the same thing as Facebook. Both are using Apple's enterprise distribution certificate to track what is going on on users' devices.
"in short, many people lured by financial rewards may not fully take in what it means to have a company fully monitoring all your screen-based activity"
I would like to see this spelled out. What are the risks? Why is this any worse than being a Nielsen family back in the day?
It seems like the most likely thing to happen to the people who signed up for this is: nothing. They helped a tech company with their research and got paid. That's it.
Nielsen data is a very small, very targeted piece of data.
Nielsen may be able to tell something about your family (critically, it is about a group of people instead of an individual person, which is a major difference).
Considering how most people use their phones, this basically gives Google all of your information. There is little to nothing they would not know about you.
Worse is them being able to get information about other people (thanks to them messaging you, while you have this tracker running) without their consent.
That, however, doesn't address the question - why do you think people can't freely choose for themselves what they're doing with their data.
It's one thing to do nasty covert data mining and another thing to go to a person and say "hey, I'll pay you to do A on THIS device with THIS app which you CLEARLY need to install and activate". One is dishonest; the other just seems like basic freedom in the modern world.
I don't necessarily have an issue with people being able to choose.
However I think there is an issue of actually understanding what is being done with your data.
But it still doesn't address that it isn't just your data being gathered up from your phone. You're consenting to at least some data from the people you text and email being grabbed up, and there is nothing they can do about it except not talk to you, assuming they even know.
But the point of this was market research. I highly doubt anybody gives a shit, even the uptight brigade here on hackernews, about whether company X knows I'm using company Y's app. It's not like FB wants to read your private texts. They want to know what apps you're using and how you spend your time so they can make their own products better.
The underlying assumption seems to be that people are incapable, unable and too dumb to choose for themselves, if they want to give their habit data in exchange for direct payment. Apple needs to come and save them from themselves.
This is my problem with many "privacy advocates" on the internet. It often feels like the end goal is not to help consumers, or to make it clearer what technology is doing, but to get rid of large tech companies on principle, or maybe return the internet to the glory days of the late 90's and early 00's, or just end consumerism in general. It just so happens that privacy is the most effective path to that goal.
They’re also giving away data on everyone they interact with without these people consenting to it. This is not only dumb but a violation of law in Europe. So yes, saying that people are “incapable, unable and too dumb” to act responsibly on their own agency is not something I would doubt. If not to “save them from themselves”, I appreciate it when Apple comes in to save my privacy from being violated by proxy.
Apple isn't specifically saving anyone here, they're specifically enforcing the terms of their Enterprise app distribution program.
If, however, you want to make a wider argument about whether or not it's reasonable for Apple to take a very opinionated view of how their main computing platform should apply security/privacy protections, you're very welcome to do so. I happen to greatly value the choices Apple has made for iOS, and I encourage everyone who disagrees, to go and use Android.
It has been pointed out elsewhere in the thread that they do have mics in them. It is possible that Nielsen is only listening for audio from TVs and phones, but Google also wants general audio for speech recognition. But I'm not sure.
I don't understand what's so objectionable here. There is ample disclosure, it's for adults, and those who opt in receive rewards. Is this really about everyone being concerned for Apple's terms, or is it just that it's "hip" to portray tech companies in the worst possible light?
> Join the Apple Developer Enterprise Program for 299 USD per year and get everything you need to start distributing proprietary in-house apps to your employees.
You really want a source? That's a completely obvious violation. Otherwise you'd see lots of apps by 3rd parties being used without going through App Store approval. Come on.
This kind of program doesn't just suck up the data of the people who opted in with (theoretically) clear knowledge of what it means, it also likely pulls in data about the other people they communicate with using their phone, and none of those people consented to anything.
Even if we were to pretend that isn't the case, why is it so necessary for companies to be able to suck up so much data about people? Why shouldn't we insist that companies respect our privacy in the same way we respect each others' privacy (I'm assuming that you don't go and peek into your neighbours' windows to figure out what they've been buying recently).
"Putting the not-insignificant issues of privacy aside — in short, many people lured by financial rewards may not fully take in what it means to have a company fully monitoring all your screen-based activity..."
Not putting those issues aside, it's oddly paternalistic of us to assume people selling their data aren't making a rational trade. Uninformed data collection is one thing, but when we start looking down our noses at folks who are willing to get compensated for letting a big company spy on them consensually? That starts to look a bit "We know what's best for you, and it's not to let megacorporations have your data" elitist.
What if people look at the sum total of what companies have done with big data and like what they see?
Yeah, this seems _way_ less nefarious than what's going on behind the scenes to everybody else without their consent. At least they get people to consent and compensate them in this case, whereas nobody asks you whether you'd like to be tracked on the internet, and there's no way to turn it all off.
If I were to guess, this stuff isn't just "here's a router and we'll send you a check". There should also be a huge and very detailed survey the user needs to fill out, so that Google could then correlate that data with similar demographics based on co-visitation.
This is also not new. Many years ago at Google I sat directly across the hall from the team that did demographic inference based on (IIRC) Nielsen panel data. I think it's safe to say that _every_ advertising company does this one way or another.
Basically at a high level they'd look at where Nielsen-paid tracked folks (about whom they knew everything) went on the web, and then looked at you, and their algorithm would guess your gender, age, income level, education level, etc etc.
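Mechanically, that kind of panel-based inference is ordinary supervised learning: the panelists' known demographics are the labels, their browsing is the features, and the trained model is then applied to everyone else. Here's a toy sketch with made-up data; the real systems are obviously far bigger and more elaborate.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" is the space-joined list of domains one panelist visited.
panel_visits = [
    "espn.com reddit.com ign.com twitch.tv",
    "allrecipes.com pinterest.com hgtv.com",
    "bloomberg.com wsj.com linkedin.com",
]
panel_age_band = ["18-24", "35-44", "45-54"]  # known from the panel survey

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(panel_visits, panel_age_band)

# Score a non-panelist from their observed browsing alone.
print(model.predict(["reddit.com twitch.tv ign.com"]))
```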
"may not fully take in" is pretty clearly NOT making that assumption, so you're doing a lot of name-calling over something that doesn't appear to have been said.
There was an update to the article: Google totally discontinued and disabled the app, apparently in under three hours. It'll be interesting to see how Apple responds given that this was a "first offense" and didn't appear to be actively trying for deception, but it was also still an abuse of the enterprise certificate, which the first version of Facebook's app wasn't.
I'd put an asterisk on "totally", considering the app is still available for Android[0]. They acted quickly when their hand was caught in the cookie jar, they didn't suddenly grow a conscience.
Oh, of course. Even the original iteration of Facebook's is still on the Play Store. I was more commenting on how scared Google was of sharing Facebook's fate.
Everything about this app, from how it was coded, how it was signed, how it was distributed and how it was documented on google.com was done with full knowledge of what was going on - it had to be.
You can't build, release, or document an app like this without knowing that you're outside the normal App Store process.
They know how many installations are active and how many people work for the org. When wildly out of line they could send a lawyer-gram and then shut it down if they don't get a response that accounts for the difference. Enterprise apps still phone home to the mother ship (Apple) and will refuse to run if, for example, the device doesn't have an internet connection for some number of days. (90 if memory serves.)
> Enterprise apps still phone home to the mother ship (Apple) and will refuse to run if, for example, the device doesn't have an internet connection for some number of days. (90 if memory serves.)
I would not be surprised if the list of apps that a user has is not disclosed to Apple, and the only thing that is exchanged is Apple's "blacklist" of revoked certificates.
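Speculating further about what that client-side check might look like (none of this is confirmed, and the 90-day figure is the parent's "if memory serves"), something like:

```python
import time

GRACE_PERIOD = 90 * 24 * 3600  # hypothetical offline allowance, in seconds


def may_launch(cert_id, last_checkin, cached_blacklist, fetch_blacklist):
    """Allow launch if the signing cert isn't on the revocation blacklist.

    Tries to refresh the blacklist from the mother ship; if offline,
    falls back to the cached copy, but only within the grace period.
    (A real implementation would persist the new check-in time.)
    """
    try:
        blacklist = fetch_blacklist()
        last_checkin = time.time()
    except OSError:
        blacklist = cached_blacklist
        if time.time() - last_checkin > GRACE_PERIOD:
            return False  # too long without a successful check-in
    return cert_id not in blacklist
```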
Ok, but this is now Apple having to police thousands of organizations. That's just not practical. I think they should handle it on a case by case basis.
Very few publicly traded super giant companies act with any spine. Most just employ this tactic.
Apple still seems to be one of the very rare companies with this much power who even half-heartedly tries. Not to defend Apple, of course, since they've done stuff that shouldn't be defended.
I'm very curious to see how this plays out, though.
I don't yet see any discussion here of how the assumed conclusion of this news would differ in its effect between the two companies, so I'll add this:
How do you expect the difference between Google and Facebook's internal ecosystems, and even their competitive stance (or lack thereof) with Apple, would affect the decision to revoke the enterprise certs?
Hypothetically, if Apple proceeded to revoke Google's enterprise certs on the same basis, wouldn't it have significantly less effect on Google's day-to-day operations? Certainly there are large teams of iOS developers working on iOS apps, but I would expect (with no basis but idle speculation) that most other Google employees outside those teams are on Android, and it would simply cause issues with Google's ability to develop apps for a competing platform.
If this is correct, then couldn't this act be interpreted as more of an anti-competitive move? I'm curious others' thoughts on this aspect of the recent sequence of news.
Is Amazon next? Microsoft? Who else would benefit from something like this, and have the ability to pull it off?
I am hoping this gains more traction and Google and Facebook get very publicly shamed.
However, I am curious if there is anything Apple can do to stop this without actually breaking the functionality of enterprise deployment certificates.
Let's not forget that the App Store is an anti-competitive lawsuit waiting to happen. If Apple is going to antagonize some of the biggest lobbying tech firms, there's no telling what the fallout would be.
>If apple is going to antagonize some of the biggest lobbying tech firms...
Google and Facebook would be the antagonists with the Enterprise Certificates, and Apple would be the protagonist in these scenarios, yeah?
In other words, it's not as if Apple is telling these companies to use their Enterprise Certificates to skirt Apple's review process and then pulling their certificates when they're discovered to be doing so.
Calling Apple the antagonist, at least in these scenarios, seems a bit disingenuous to what's actually playing-out.
Disclosure: Did the acquisition marketing for this product across a few countries.
I think it's unfair comparing the two. The Google product isn't grabbing the same amount of personal data. It was focused on device usage, apps, media consumption, etc. Personally, I was surprised Google was respectful enough not to be grabbing this data passively in the first place. This app was an add-on that was upfront about what it was doing and paid people for providing this access to their behaviour. And the desire to understand how people use products seems a reasonable thing for a company that wants to improve and create products.
Do people saying this is bad feel it's wrong to get people into a research group for an unboxing, to see how people unfamiliar with a product do things?
As long as it's opt-in and clear about what it's doing, this seems reasonable, and it feels like people are making a storm in a teacup... but maybe I'm missing something?
Sigh, another Techcrunch article. Another "Before you continue... TechCrunch is now part of the Oath family" interstitial page and no way to actually turn off tracking, just links to numerous privacy policies. No thanks.
I have an issue with calling enterprise certificate distribution a "back door". It is a good thing for specific uses. Facebook and Google misused it for something inappropriate, explicitly against the rules Apple set for it. It is hard to enforce this kind of distribution, but it is very useful for lots of people and companies that are not in the data collection business.
- Google will send you a router to intercept your entire household's internet traffic on all devices with a browser (https://support.google.com/audiencemeasurement/answer/757439...)
- Google will send you a device that listens 24/7 to audio in the room to figure out what you are watching on TV and listening to (https://support.google.com/audiencemeasurement/answer/757476...)
- Google's project includes tracking of desktop and laptop internet activity via a browser extension that can basically read literally anything you do online (https://support.google.com/audiencemeasurement/answer/757448...)