One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok, your name, phone number, email, etc. are all in the crowd.
And I buy this stuff. Every time I need customer service and I'm getting stonewalled, I just go onto a marketplace, find an exec, buy their details for pennies, and call them up on their cellphone. (This is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans.)
<< find an exec and buy their details for pennies and call them up on their cellphone. (This is usually successful, but can backfire badly -- CashApp terminated my account for these shenanigans.)
Honestly, kudos. The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider the current setup.
<< As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok, your name, phone number, email, etc. are all in the crowd.
And people laughed at Red Reddington when he said he had no email.
There was a post a long time ago from someone who has an email address and name similar to Mark Cuban's, but not quite. He got quite a few cold emails meant for Cuban. A lot of them were quite sad (people asking for money for medical procedures and such).
As we all know some of the "consent" pop-ups have a first page of general settings, and then a "vendors" page to further deselect all the "legitimate interests".
I recently noticed that a fraction of the "vendors" allow deselecting the "legitimate interest" but have the "consent" tick box marked and unmodifiable.
The following vendors have un-deselectable "consent" tickboxes:
Skimbit Ltd
Confiant Inc.
Lumen Research Limited
Visarity Technologies GmbH
DoubleVerify Inc.
Revcontent, LLC
Adssets AB
Integral Ad Science (incorporating ADmantX)
Mirando GmbH & Co KG
Polar Mobile Group Inc.
Rockabox Media Ltd
Telecoming S.A.
Seenthis AB
HUMAN
Papirfly AS
NEXD
One Tech Group GmbH
illuma technology limited
CHEQ AI TECHNOLOGIES
Adjust Digital A/S
VRTCAL Markets Inc
Cavai AS
Kiosked Ltd
Protected Media LTD
Oracle Data Cloud - Moat
Bannernow, Inc.
Jetpack Digital LLC
GeoEdge
Ensighten
IVO Media Ltd
Online Media Solutions LTD
Mobkoi Ltd
Redbranch, Inc dba Fraudlogix
Alphalyr SAS
Silverbullet Data Services Group
Stream Eye OOD
adbalancer Werbeagentur GmbH
Somplo Ltd
Velocity Made Good LLC
Vyde Ltd.
Adelaide Metrics Inc
Sqreem Technologies Private Limited
TMT Digital Inc
dpa-infocom GmbH
Brandhouse/Subsero A/S
streaMonkey GmbH
Alkimi
Zeit Agency ApS
Sitewit, Corp
AccountInsight Ltd
Aderize, Inc.
fraud0 GmbH
Channel99, Inc.
Videobot Ltd
Appstock LTD.
Dando online LTD
EMBRACE Systems GmbH
Hiili SL
YIELDBIRD SP. Z O.O.
Volentio JSD Limited
BEAPUP SOLUTIONS LTD
Public Good Software Inc.
Kidoz Inc.
DataDome SA
Sarigato Sp. z o.o.
Gesher Software LTD dba bridgeupp
Playdigo Inc
Sipo Inc
EliteAppgrade
SpinX Pte Ltd
Creatopy INC
Codevelop Technologies GmbH
Adgrid Media, LLC
ProgrammaticX LTD
Nitrouppi LTD
9 Dots Media Ltd
Vudoo Pty Ltd
Mobavenue Media Pvt Ltd
Carbonatix LTD
1) What is up with these?
2) Are these even legal under GDPR rules?
3) Does this not nullify arguments by certain 3 letter agencies that users "consent" to their tracking?
4) Who is behind these companies? Any idea on how to approach this from an investigative journalism angle? Can we figure out the headquarters, employee counts, CEO's of these companies?
5) If "undeselectable consent tickboxes" qualify as legally valid consent, doesn't this set a precedent for foisting off myriad types of non-consent as "consent"? Will this enable legalizing rape? Where does this Pandora's box end?
6) As far as I understand, an illegal contract is void. If the forms submitted by users contained undeselectable "consent tickboxes", then those forms no longer constitute legal contracts. Observe that this holds regardless of the preferences on all the other tickboxes: even if users were too lazy to deselect all the deselectable tickboxes, the mere presence of undeselectable tickboxes voids these forms as contracts. This means that all the other vendors didn't receive any consent either, since the specific submitted form-as-a-contract is void, even if the majority of the vendors had consent tickboxes that could be deselected. It would seem prudent for such companies to insist that the forms don't contain undeselectable tickboxes for any company, since those nullify the consent they hope to receive.
As a European resident, I have put in a complaint to the company for you. Should it be dismissed out-of-hand, I will forward a complaint to my national Information Commissioner's office. I will post any results.
out of curiosity: which company did you put in a complaint to?
about posting any results: I assume you are aware that after some time it is no longer possible to add comments to an HN discussion; will you post any progress as an HN submission?
> The rules should apply to the ones foisting this system upon us as well. This is probably the only way to make anyone in power reconsider the current setup.
Unless your problem is with the company doing the privacy violations, this doesn’t make any sense.
Where I live, which is not in the USA, I'm confident my doctor's office doesn't sell their contact list - or at least, not without statistical anonymisation and aggregation for research purposes.
They probably outsource processing the data and storing it to other entities, but that will be under contracts which govern how the data may be used and handled. I assume that's not what "sell the data" means in this conversation.
It would be such an egregious violation of local data protection law to sell patient personal details for unrestricted commercial use, including their contact info, and it would make the political news where I live if they were found out.
Also "not in the USA": I actually work on a medical-ish application these days (not the in-production version, mind, but a fork with new features that's entirely separate at the moment).
I have access to ... zero patient data. Our entire test database is synthetic records.
HIPAA is pretty much the only halfway effective privacy regulation the US has. It imposes strong regulatory, licensure, and even criminal penalties for violations.
It's formulated so that they can give those contacts away rather than sell them, but only to the rest of the medical goods & services supplychain that are involved in your care, who are also bound by HIPAA.
The worst dark pattern this has generated so far seems to be pharmaceutical company drug reps bribing your doctor to change what they would prescribe you.
The worst that's likely to happen without regulation, as far as I can tell, involves an associated provider just leaking UnitedHealthcare's full database of every patient and every condition.
Exactly this was tried by the likes of John Oliver and journalists/comedians of that caliber, running ads and gathering data from politicians in Washington.
>One big privacy issue is that there is no sane way to protect your contact details from being sold, regardless of what you do.
>As soon as your cousin clicks "Yes, I would like to share the entire contents of my contacts with you" when they launch TikTok, your name, phone number, email, etc. are all in the crowd.
Fortunately this is changing with iOS 18 with "limited contacts" sharing.
The interface also seems specifically designed to push people to allow only a subset of contacts, rather than blindly clicking "allow all".
The far bigger issue is the contact info you share with online retailers. Scraping contact info through apps is very visible, drawing flak from the media and consumers. Most of the time all you get is a name (could be a nickname), and maybe some combination of phone/email/address, depending on how diligent the person is in filling out all the fields. On the other hand, placing any sort of order online requires you to provide your full name, address, phone number, and email address. You can also be reasonably certain that they're all accurate, because they're plausibly required for delivery/billing purposes. Such data can also be surreptitiously fed to data brokers behind the scenes, without an obvious "tiktok would like access to your contacts" modal.
On android you can choose whether to grant access to contacts. And most apps work fine without.
GrapheneOS, which I use, also has contact scopes, so troublesome apps that refuse to work without access will think they have full access. You can allow them to see no contacts or a small subset.
There's also multiple user profiles, a "private space", and a work profile (shelter) that you can install an app into, which can be completely isolated from your main profile, so no contacts.
It surprises me how far behind iOS is with this stuff. Recently I wanted to install a second instance of an app on my wife's iPhone so she could use multiple logins simultaneously; there didn't really seem to be a way to do it.
The point is that it doesn't matter whether YOU grant access to your contacts. As long as anyone who has you in THEIR contacts decides to just press "share contacts" with any app, you are doxxed and SkyNet is able to identify you for all practical purposes.
You have two different points in your comment. Firstly, iOS has not been behind on having apps work if they don’t get access to a specific sensor or data. It’s on Android that apps refuse to work if they’re not given contacts access or location access and so on. Comparing the same apps on iOS and Android, I have found that Apple’s requirements for apps not to break when a permission is not granted is well respected and implemented on iOS apps. The same apps on Android apps just refuse to work until all the permissions they ask for are granted. YMMV.
I do agree that iOS is behind by not providing profiles and multiple isolated installations of apps, and it would be great if it did.
I think it's not properly appreciated that Apple fully endorses all of this. For two reasons: (1) the provision of the output of billions of dollars of developer time to their users for no up front cost (made back via ads) is super valuable to their platform; and (2) they uniquely could stop this (at the price of devastating their app store), but choose not to.
In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
>I think it's not properly appreciated that Apple fully endorses all of this. [...] they uniquely could stop this (at the price of devastating their app store), but choose not to.
A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps. Just look at f-droid. Most (all?) of the apps there might be privacy respecting, but good luck getting any of the popular apps (eg. facebook, tiktok, google maps) on there.
>In light of that, perhaps reevaluate their ATT efforts as far less about meaningful privacy and far more about stealing $10B a year or so from Facebook.
What would make you think Apple's pro-privacy changes aren't "about stealing $10B a year or so from Facebook"? At least some people are willing to pay for more privacy, and pro-privacy changes hurt advertisers, so basically any pro-privacy change can be construed as "less about meaningful privacy and far more about stealing".
F-Droid will never have popular apps because it requires them to be open source. In fact F-Droid does the build for you, generating reproducible builds and avoiding the risk of adding trackers to the binary that aren't actually in the source code. With F-Droid the code you see is what you get.
> A perfectly privacy respecting app store isn't going to do any good if it doesn't have any apps.
40 years ago apps were sold on floppy disks. 30 years ago they were sold on CD-ROMs. 20 years ago, DVDs.
Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing. Apps being blocked or banned from stores for choosing to not respect your privacy is a good thing.
>Online-only apps are a recent thing. A privacy respecting app store certainly can be a thing.
I'm not sure what you're trying to say. I specifically acknowledged the existence of f-droid as a "privacy respecting app store" in the quoted comment.
>Apps being blocked or banned from stores for choosing to not respect your privacy is a good thing.
"a good thing" doesn't mean much when most people haven't even heard of your app store, and are missing out on all the popular apps that people want. Idealism doesn't mean much when nobody is using it. Apple might not be the paragon of privacy, but they had a greater impact on user privacy than f-droid ever will. To reiterate OP's point: what's the point of having a perfectly private OS and app store, when there's no apps for it, and your normie friends/relatives are going to sell you out anyways by uploading their entire contact list and photos (both with you in it) to google and meta?
Or because they were tricked. E.g. LinkedIn's "Connect with your contacts" onboarding step, which sounds like it'll check your contacts against existing LinkedIn users but actually spam-invites anyone on your contact list who doesn't have an account.
Wasn't this also how some services would connect e.g. your bank accounts? They'd ask for your credentials and log into your bank to scrape its contents.
And I kinda get it, some services external to your bank can help you manage your finances etc. But it's why banks should offer APIs where the user can set limited and timed access to these services. In Europe this is PSD2 (Revised Payment Services Directive).
I think the key point is that they would take your Linkedin password and try to use that on your email without asking you, in case you reused passwords.
The linked wikipedia article below says that they asked you for your email password specifically -- is there any evidence that they would try to use your linkedin password itself?
This is how a load of emails were sent out from my Hotmail account to anyone I had ever contacted (including random websites) asking if I want to connect with them to Facebook. The onboarding seemed to imply it would just check to see if any of my contacts were already using facebook.
God damn this feature. About ten years ago I inadvertently did something in LinkedIn and ended up spamming everyone I knew with LinkedIn invites. It annoyed a lot of people.
Grapheneos lets you pick this for apps before they even launch. You can revoke their network access, as well as define storage scopes for apps at a folder level, so if an app needs access to photos, you can define a folder, and that is the only folder it can scan for photos.
I used that when submitting parental leave at work. I didn't want to provide full access to all my photos and files for work, so all they got was a folder with a pic of a birth certificate.
A big problem with GrapheneOS is the fact it only officially supports Google phones. Google is apparently incapable of selling those things globally, limiting availability.
There's also the fact hardware remote attestation is creeping into the Android ecosystem. There's absolutely no way to daily drive something like GrapheneOS if essential services such as banks and messaging services start discriminating against you on the basis of it. Aw shucks looks like your phone has been tampered with so we're just gonna deny you access to your account, try again later on a corporation owned phone.
GrapheneOS is amazing from a security and privacy perspective but it doesn't matter. The corporations will not tolerate it because it works against their interests. They will ban you from their services for using it. Unlike Google and Apple, they have no leverage with which to force the corporations to accept terms unfavorable to them.
iOS and Mac also let you do this, for photos, contacts and files.
Apple is also pushing developers toward using native picker components. That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
> That way, you don't need to request consent at all, as you only get access to the specific object that the user has picked using a secure system component.
This is an interesting contrast with the earlier philosophy of phone OSes that the file system is confusing to users and they should never be allowed to see it.
From a user perspective, photos aren't files. Music isn't files. Contacts aren't files. Apps aren't files. App data isn't files.
The only things that "walk like a file and quack like a file" are documents, downloads, contents of external storage, network drives and cloud drives, and some Airdrop transfers.
Yes, it's technically possible to use the files app to store photos, music etc, but if you do that, "you're holding it wrong."
Until the app's devs get wise to this and refuse to let the app function without network access. It could be as simple as a full-screen, non-closable message saying the app requires network access, with a button to the relevant setting to correct the issue.
Such "go away" screens are in violation of Apple's AppStore rules. You cannot make a permission a condition of using the app, and stop the user from using it if they don't grant that permission. The app should gracefully do as much as it possibly can without the permission.
Try signing in in any Google app without allowing data sharing with Safari. It's not possible. They don't let you.
It's kind of weird that Apple introduced this big fat tracking consent popup, but they don't really do anything to actually prevent cross-app tracking...
This holds for every app and every permission? Because I'm quite sure I recently used an app that closed when I didn't allow a permission. May be misremembering...
5.1.1 (iv) Access: Apps must respect the user’s permission settings and not attempt to manipulate, trick, or force people to consent to unnecessary data access. For example, apps that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos. Where possible, provide alternative solutions for users who don’t grant consent. For example, if a user declines to share Location, offer the ability to manually enter an address.
This wording is actually a lot weaker than I remember it back when I wrote iOS apps. The developer also was not allowed to exit the app or close it against the user’s intent, however I can’t find that rule anymore.
I agree with these guidelines (although they could be improved), although I think that some things could be done by the implementation in the system, too.
> For example, if a user declines to share Location, offer the ability to manually enter an address.
This is a reasonable ability, but I think that the operating system should handle it anyways. When it asks for permission for your location, in addition to "allow" and "deny", you can select "manually enter location" and "custom" (the "custom" option would allow the user to specify their own program for handling access to that specific permission (or to simulate error conditions such as no signal); possibly the setting menu can have an option for "show advanced options" before "custom" will be displayed, if you think it would otherwise make it too complicated).
> that include the ability to post photos to a social network must not also require microphone access before allowing the user to upload photos
This is reasonable, that apps should not be allowed to require microphone access for such a thing.
However, sometimes a warning message makes sense but then to allow it anyways even if permission is not granted; e.g. for a video recording program, it might display a message about "Warning: microphone permission is not allowed for this app; if you proceed without enabling the microphone permission, the audio will not be recorded." Something similar would also apply if you denied camera permission but allowed microphone permission; in that case, only audio will be recorded. It might refuse to work if both permissions are denied, though.
Yeah, "unnecessary" is the word that may as well render the whole section moot unless it's actually properly enforced. If I can remember I'll test it today and see how it goes.
Yeah like the ChatGPT app that doesn't work without a Google account. I have Google play on my phone, just no account logged in. I do have Google play services like firebase push which many apps legitimately need. But ChatGPT just opens the login screen in the play store and exits itself.
I'm always wondering why these idiots force the creation of an account with their direct competitor. It's the only app I have that does this. But anyway I don't use their app for that reason, only use them a bit through API.
>It's not. Apple still owns your stuff. There is no difference between Apple and other 3p retailers.
That could be taken to mean anywhere between "Apple controls the software on your iPhone, therefore they control your contacts" and "Apple gives out your data like the data brokers mentioned in the OP". The former wouldn't be surprising at all, and most people would be fine with it; the latter would be scandalous if proven. What specifically are you arguing for?
The "vulnerability" part doesn't seem to be substantiated. From wikipedia:
>The images were initially believed to have been obtained via a breach of Apple's cloud services suite iCloud,[1][2] or a security issue in the iCloud API which allowed them to make unlimited attempts at guessing victims' passwords.[3][4] Apple claimed in a press release that access was gained via spear phishing attacks.[5][6]
Regardless of their security practices, it's a stretch to equate getting hacked with knowingly making data available. Moreover, you can opt out of iCloud backup, unlike with whatever is happening with the apps mentioned in the OP.
Useless without limiting the kind of data I want to share per contact. iOS asks for relationships for example. You can set up your spouse, your kids, have your address or any address associated with contacts. If I want to restrict app access to contacts, I also want to restrict app access to specific contact details.
> (this is usually successful, but can backfire badly -- CashApp terminated my account for this shenanigans)
When I was at a medium-sized consumer-facing company whose name you'd recognize if you're in the tech space (intentionally vague), we had some customers try this. They'd find product managers or directors on LinkedIn, then start trying to contact them with phone numbers found on the internet, personal email addresses, or even doing things like finding photos their family members posted and complaining in the comments.
We had to start warning them not to do it again, then following up with more drastic actions on the second violation. I remember several cases where we had to get corporate counsel involved right away and there was talk of getting law enforcement involved because some people thought implied threats would get them what they wanted.
So I can see why companies are quick to lock out customers who try these games.
I wonder if it ever prompted a dive into exactly what happened to leave these customers thinking this was the most likely avenue for success? Hopefully in at least some cases their calls with CSRs were reviewed, and in the most optimistic of best cases additional training or policies were put into place to avoid the hopelessness that provokes such drastic actions.
That would require empathy from someone who is, right now, bragging about how they sicced their lawyers and the cops on customers they were fucking over.
I'm going to guess that the answer would be "nope, didn't care." That Cirrus isn't going to pay for itself, friend...and you can't retire at 40 without breaking a few eggs.
I remember when Google was locking accounts because people had the audacity to issue a chargeback after spending hours trying to resolve Google not delivering a working, undamaged phone they'd paid well over half a grand for. Nobody at Google cared, but when the money (that Google never fucking deserved in the first place) was forcibly and legally taken back, the corporation acted with narcissistic rage...
> That would require empathy from someone who is, right now, bragging about how they sicced their lawyers and the cops on customers they were fucking over.
How do we know they were fucking them over?
There's always going to be a subset of people who take any perceived slight as an attack on their honor, and will over-react. (I've had death threats for deleting a reddit post, fwiw.)
> So I can see why companies are quick to lock out customers who try these games.
Most of the companies who customers try these "games" against are places like Google and Meta that literally do not provide a way for the average customer to reach a human. None.
Those have got it coming for them, the megacorps' stance on this is despicable and far worse than the customers directly reaching execs who could instantly change this but don't because it would cut into their $72 billion per year net profit.
This is a case where laws simply did not catch up to the digital era. In the brick and mortar era it was by definition possible to reach humans.
I get that your company was smaller and probably did allow for a way to reach a human but that's not generalizable.
Long ago when Google tried to launch its very first phone somewhere in Europe I can distinctly remember that it was initially not allowed to because of some regulation that mandated a company selling telephones to have a customer service.
Can't remember if they eventually found a loophole or if the regulations were changed.
For most situations, Walmart and CVS have fine customer service compared to most. You just need to show up in person. Granted, their business model might make that a little annoying, but it's not even difficult for most.
The challenge isn't getting customer service. It's getting someone who isn't reading from a decision tree that conveniently doesn't include any paths where the corporation has fucked up.
My only connection to Amazon support has been for AWS.
Perhaps though this should be an example of good customer service where talking to a human is easy, and not lumped in with the likes of Google where its impossible.
Perhaps your experience with the online shop is different, but frankly they're in my "good" column, not my "bad" column.
This is not like for like: AWS assigns reps because the dollar amounts are significant compared to your monthly cell phone bill or a purchase at Amazon. It's not surprising that buying a car gets you better customer service than renting one.
Two companies so gigantic that they account for a great percentage of the "company interactions" the average Westerner has on a daily basis.
Anyway, I don't think it contradicts my point? Your company exist, mom and pops exist and there's a whole spectrum between them, so it's not generalizable.
Honestly, I'm not condoning people crossing the line into threats/abusive behavior, but it sounds like you worked at one of those companies that make it impossible to get ahold of someone, don't respond to customers, or have other poor customer service practices, and then are surprised people resort to this.
What's funny is that the exec I got on the phone was super supportive and helpful and was genuinely amused to hear from me and hear what was happening. He put me in touch with their "Executive Support Team" and it was after this that I guess someone realized they didn't like the route I had taken.
I feel somewhat vindicated after this announcement (though it does nothing to bring my account back):
> Accessing any kind of customer service for Cash App was a challenge, too, according to the CFPB. Block included a customer service number on Cash App cards and in the app's Terms of Service, but calling it would ultimately lead users to "a pre-recorded message directing consumers to contact customer support through the app."
As a result of sales drones getting hold of my number, I have to put my phone on silent and never pick up unless I recognize the number. Very unfortunate. What if there is an emergency with my kids?
If you're using iOS you can set certain contacts to bypass silent mode so that you still hear their notifications/calls. I know it doesn't help with unknown numbers, but just saying in case you're not aware. I'd be surprised if you can't do the same on Android.
Yes, thanks, I've configured that for kids and other loved ones. But I can't pick up anything else, even sales people from India manage to use a number that appears local (in The Netherlands for me), so I might miss a call from the kid's school.
I've just added the numbers of my kids school onto the list and it's been fine for me. I've never had them contact me from anything other than the schools number, but I'm in the UK and I would be very surprised if a teacher tried to call me from their mobile phone or something.
Oh wow, I knew this was a rampant problem in the US, but I didn't realise we had that at that scale here in the Netherlands as well. Hoping I can dodge the bullet a little longer...
> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies
The article author claims that you can't get this stuff for under $10k. Where do you find it for pennies?
As a test I downloaded it and got my wife’s full email and cell phone number easily from their free trial. And the full price would be on the order of pennies per contact.
The thing is...contact details aren't really private information, basically by definition.
The distinction is that contact-details privacy is based on the desire not to be interrupted by people you didn't agree to be interrupted by - i.e. it's a spam problem - and realistically solving this requires a total revamp of our communications systems (long overdue).
The basic level of this would be forcing businesses to positively identify themselves to contact people - i.e. we need TLS certificates on voice calls, tied to government-issued business identifiers. That would have the highest immediate impact, because we could retrain people not to talk to anyone claiming to be a business if their phone doesn't show a certificate - we already teach this for email, so the skill is becoming more widespread.
A more advanced version of this might be to get rid of the notion of fixed phone numbers entirely: i.e. sharing contacts is now just a cryptographic key exchange where I sign their public certificate which the cellphone infrastructure validates to agree to route a call to my device from their device (with some provisioning for chain of trust so a corporate entity can sign legally recognized bodies, but not say, transfer details around).
This would solve a pile of problems, including just business decommissioning - i.e. once a company shuts down, even if you scraped their database you wouldn't be able to use any of the contact information unless you had the hardware call origination gear + the telecom company still recognized the key.
Add an escrow system on top of this so "phone numbers" can still work - i.e. you can get a random number to give to people that will do a "trust on first use" thing, or "trust till revoked" thing (i.e. no one needs to give a fake number anymore, convention would be they're all fake numbers, but blocking the number would also not actually block anyone you still want to talk to).
EDIT: I've sort of inverted the technical vs practical details here I realize - i.e. if I were implementing this, the public marketing campaign would be "you can have as many phone numbers as you want" but your friends don't have to update if you change it. The UI ideally would be "block this contact and revoke this number?" on a phone which would be nice and unambiguous - possibly with a "send a new number to your friends?" option (in fact this could be 150 new numbers, one per friend since under the hood it would all be public key cryptography). I think people would understand this.
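A toy sketch of that "one number per friend" idea (the class, the HMAC-derived tokens, and the API are all illustrative assumptions, not a real telecom protocol - a real design would use public-key cryptography and live in the network, not on the handset):

```python
import hmac
import hashlib
import secrets

class ContactRouter:
    """Toy model: each friend gets a unique routing token (their "number" for you).
    Revoking one token blocks that friend without affecting anyone else."""

    def __init__(self):
        self.master = secrets.token_bytes(32)  # device-local secret (hypothetical)
        self.revoked = set()

    def issue_token(self, friend_id: str) -> str:
        # Derive a per-friend token; a real scheme would exchange signed
        # public keys here rather than derive an HMAC.
        return hmac.new(self.master, friend_id.encode(), hashlib.sha256).hexdigest()[:12]

    def accepts_call(self, friend_id: str, token: str) -> bool:
        # The network would validate this before routing a call to the device.
        return token == self.issue_token(friend_id) and token not in self.revoked

    def revoke(self, token: str):
        # "Block this contact and revoke this number"
        self.revoked.add(token)

router = ContactRouter()
alice = router.issue_token("alice")   # alice's personal "number" for me
bob = router.issue_token("bob")       # bob gets a different one

assert router.accepts_call("alice", alice)
router.revoke(alice)                  # blocking alice...
assert not router.accepts_call("alice", alice)
assert router.accepts_call("bob", bob)  # ...leaves bob unaffected
```

The point the sketch tries to capture: because every contact holds a distinct credential, "changing your number" for one person is a local revocation, and a leaked database of tokens is useless once revoked.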
What definition of contact details makes them not private?
Contact details (your phone number, email or address) are definitively private information, you should be the one that decides who gets them and who doesn't.
But it's not meant to be shared widely, for most people it's meant to be shared with consideration and/or permission.
Also, it's not just about "a desire not be interrupted by people you didn't agree to be interrupted by", it's about not having the data in the first place, for any reason, including tracking of any sorts.
Pre-internet/cell phone nearly everyone had their name/phone number/address in phone books. Libraries had tons of phone books. And you could pay for the operator to find/connect you to people as well.
Contact info being private is a relatively recent concept.
I think this could be one of the more legitimate uses of blockchain - distributed communications, contacts, and a refundable pay-per-call system to make spam calling uneconomical. Communication in general does desperately need an overhaul, phones are effectively useless as phones nowadays.
>> And I buy this stuff. Every time I need customer service and I'm getting stonewalled I just go onto a marketplace, find an exec and buy their details for pennies and call them up on their cellphone.
I find it funny how easy it is to find scammy websites which promise to remove your data (right...), but how hard it is to find the actual marketplaces where people trade this data. It also makes you think about what other systems have similar asymmetric interfaces for the public and the ones in the know (yes, I know there are plenty).
Assuming these marketplaces operate within the bounds of the law, would it break HN’s ToS to post them? I’d be interested in pursuing the same strategy.
And the combination of contacts is also unique enough to identify you, even though it changes over time. Add some fuzzy matching, take in another few bits of fingerprint like device type and country, and voila, no advertiser ID required.
PS: smart idea to use it for that purpose. If I failed to get proper service I'd just review bomb the company everywhere, and soon enough I'd get a call fixing my problem and asking me to remove them :)
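The re-identification claim above is easy to demonstrate: fuzzy overlap between contact sets, narrowed by a couple of coarse attributes, singles a user out. A minimal sketch (field names and the 0.6 threshold are invented for the example):

```python
def jaccard(a: set, b: set) -> float:
    """Similarity of two contact sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match(profile: dict, candidates: list, threshold: float = 0.6) -> list:
    """Return candidate ids whose contact overlap and coarse attributes
    (here just country) are consistent with the target profile."""
    hits = []
    for c in candidates:
        if c["country"] != profile["country"]:
            continue  # cheap fingerprint bits prune the pool first
        if jaccard(c["contacts"], profile["contacts"]) >= threshold:
            hits.append(c["id"])
    return hits
```

Even with contacts added and removed over the years, the surviving overlap plus a few fingerprint bits is usually enough to link old and new records.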
> there is no sane way to protect your contact details from being sold
I can think of one: make it illegal to buy, sell, or trade customer data. All transfer of data to another party must have a record of being initiated by the individual.
Yeah, I wonder if it might help to create a little newsletter for politicians and regulators. Send emails telling them exactly where they are, what apps they use, and so on. And send them the same information about their children.
Eh, California protects politicians from having their real estate holdings posted online by government, and afaik, most county recorders have decided it's easier to not let any of it be online than to figure out who is a politician and only restrict their information.
Of course, much of it is public information so businesses can go in person, get all the info and then list it.
To use a line sometimes attributed to Beria, “give me the man and I will give you the case against him”. By which I mean that I’m sure they will find some means of making you sorry.
I mostly connect through Signal. I do technically have a phone number that my close friends and family have, but it's a random VoIP number that I usually change every year or so. Surprisingly no one has really cared; I send out a text that I got a new number and that's that.
How? Most of the services I use, from Walgreens to banks to retirement accounts, require a phone number either for 2FA or just to verify that you’re you when signing up. After changing my phone number this year and having to go through the rigamarole for each service, I decided never again.
I've had limited luck feigning ignorance with a bank recently. "I don't know why I'm not getting a code" "No, I don't have another phone number" "I still can't log in to the web portal". They dropped the phone number requirement in favor of sending the OTP to email in the end, but it took way more effort than is reasonable. I tend to include a request to the CS person to pass along a request for TOTP/authenticator apps, but given that the request for a phone number is likely intentional, I doubt the feedback is getting very far. In my naive mind, if enough people do the same, maybe they'll get the message.
Yeah, companies are not dumb, and they know when you have a VoIP number vs a full account with an "accepted" company.
I can kind of see why they wouldn't allow 2FA to a number that could be easier to lose, but that's a weak argument. Of course they don't want someone from .ru to get a US number with all of the baggage that would entail.
There are flaws in their methodology. For half the companies, to change your number from A to B, you first must verify a NONCE with A, then verify a NONCE with B. This just means you have to possess two phone numbers for a period of time (weeks, or in reality months) while you change the long list of services over to the new phone number.
There is a simpler/better way, and that is to verify you control your email address before allowing you to do a NONCE with B.
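The two flows above can be modeled as a tiny state machine. This is a made-up illustration (the class, channel names, and storage are all invented): in the strict flow you must prove control of both the old and new numbers simultaneously; in the email-first flow, email verification stands in for the old number.

```python
class Account:
    """Toy model of a service's phone-number-change policy."""

    def __init__(self, phone: str, email: str):
        self.phone = phone
        self.email = email
        self.verified = set()  # channels the user has proven control of

    def verify(self, channel: str) -> None:
        # In reality this would be confirming a NONCE sent to the channel.
        self.verified.add(channel)

    def change_number_strict(self, new: str) -> bool:
        # Two-NONCE flow: old number AND new number must both verify,
        # so the user must hold both numbers at the same time.
        if {"old_phone", "new_phone"} <= self.verified:
            self.phone = new
            return True
        return False

    def change_number_email_first(self, new: str) -> bool:
        # Simpler flow: email verification replaces the old-number check.
        if {"email", "new_phone"} <= self.verified:
            self.phone = new
            return True
        return False
```

The strict flow fails anyone who has already lost the old number; the email-first flow does not, which is the practical advantage being argued for.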
Changing your number every year could mitigate this, as it would introduce entropy and stale data into the system. Done at scale, data lifetime would behave similarly to the automatic deletion of messages on WhatsApp, somewhat mimicking an in-person verbal conversation where only people's memories record what was said, and even their version gets changed every time memory recall takes place. Systems already exist in real life that protect privacy; it's just that we do a poor job of reproducing them with tech.
Changing your telephone number every year could be an artificial holiday like Valentine's Day or Halloween. It can be done if people deem it important.
I already do this. It gives me an opportunity to trim the contact list to people I actually talk to regularly, to whom I send the new number. It also shows me my footprint online, since I have to update the number. I only change it for places I actually use regularly or that are important.
I just block all unscheduled calls and calls from unknown numbers. If there isn't a calendar event and it isn't coming from a known family member or close friend, the call doesn't go through.
I also have multiple cell and virtual numbers and give different ones out to businesses, banks, friends, and family. Businesses that don't need to ship me stuff also get a different address than ones that do.
I don't register to vote anymore because they leak my residential info. When they can agree to stop leaking it, I will participate again.
I have done this as well. I once got a travel insurance claim rejected by some outsourced handler and found out who the CEO of the insurance startup was. I emailed him and magically it got resolved.
If you're US based, there's tons of data broker sites, and you can glue together the information for free as various brokers leak various bits (E.g. Some leak the address, others leak emails, others leak phone numbers). And that's by design for SEO reasons, they want you to be able to google someone with the information you have, so they can sell you the information you don't have.
Some straight up list it all, and instead of selling people's information to other people, they sell removals to the information's owner. Presumably this is a loophole to whatever legislation made most sites have a "Do Not Sell My Info" opt out.
What you do is look up a data broker opt out guide, and that gives you a handy list of data brokers to search. E.g.
Where are you buying this? Might be handy for a job search. Zoominfo basically doesn't have a b2c offering and I am not paying several thousand for an experiment in improving my career
You are very lucky. In China, virtually all websites are required by law to use your phone number (verified by SMS) to register and/or to use. And all numbers must be linked to your ID.
I can make calls from my phone/laptop, using VOIP.
I could receive as well if I wanted to, but I rarely need to be called, so I do not normally keep a number, and I could not be called when out and about anyway, because wifi-only, but you do get an answerphone, so people can leave a message.
I can relatively easily skip trace people, but where are you buying specific people's information? Do you mean you're skip tracing, or buying directly from data brokers?
You may be committing some type of violation of privacy laws if you're contacting them via phone and they're on the do not call list. Because they work at a company does not mean that you and the employee have a business relationship.
I'm really happy to see this level of detail and research. So many privacy-related articles either wholly lack in technical skill, or hysterically cannot differentiate between different levels of privacy concerns and risks.
People commonly point to Mozilla's research regarding vehicles' privacy policies. (https://foundation.mozilla.org/en/blog/privacy-nightmare-on-...) But that research only states what the car companies' lawyers felt they must include in their privacy policies. These policies imply (and I'm sure, correctly imply) that your conversations will be recorded when you're in the vehicle. But they never drill down into the real technical details. For instance: are car companies recording you the whole time and streaming ALL of your audio from ALL of your driving? Are they just recording random samples? Are they ONLY recording you when you're issuing voice commands, and the lawyers are simply hedging their bets regarding what sort of data _might_ come through accidentally during those instances? Once they record you, where is the data stored, and for how long? Is it sent to 3rd parties, etc.? Which of these systems can be disabled, and via what means? Does disabling these systems disable any other functionality of the vehicle, or void its warranty? Lastly, does your insurance shoot up if you have a car without one of these systems? etc.
The list of questions could go on almost indefinitely, and presumably would vary strongly across manufacturers. So much of the privacy news out there is nothing but scary and often poorly substantiated worst-case scenarios. Without the details and means to improve privacy, all these stories can do is spread cynicism. I'm really glad to see this level of discourse from the author.
Those aren't questions that have fixed answers. The data available is pretty far beyond what I'm personally comfortable with though.
One OEM I'm familiar with had such a policy. My org determined that we needed a statistical reference to compare against within a certain area. Some calls were made to the right people and shortly after we had a (mildly) anonymized map of high precision tracks for every vehicle of that brand within the area over some period.
I'll answer the "Does disabling it void your warranty?" question. The answer is almost always "no". Unless the modification you make to something actually directly or indirectly caused damage to it, companies in the US cannot "void the warranty".
> Lastly, does your insurance shoot up if you have a car without one of these systems?
This question I can answer with a reasonable degree of certainty; no, it does not.
Insurance companies increase rates for automobile coverage for many reasons, real or illusionary. But "does your insurance shoot up" strictly for not having a recording device in a vehicle is not one of them.
Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
> Do some insurance companies charge less when provided access to policy owner driving patterns which the companies infer reduce their risk? Sure.
> But that is a different question.
In what way? A discount for allowing surveillance is identical to an extra charge for disallowing it, unless the "base" rate is set externally somehow.
$5 for lemonade, $3 off if you skip the lemon == $2 for sugar water, $3 extra to add lemon.
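The equivalence can be checked mechanically. A tiny sketch, using the numbers from the lemonade example (the function name is made up):

```python
def final_price(base: int, adjustment: int, applies: bool) -> int:
    """Price after an optional adjustment (discount if negative,
    surcharge if positive)."""
    return base + (adjustment if applies else 0)

# Framing A: $5 lemonade, $3 off if you skip the lemon.
a_with_lemon = final_price(5, -3, applies=False)  # keep the lemon
a_no_lemon   = final_price(5, -3, applies=True)   # take the discount

# Framing B: $2 sugar water, $3 extra to add lemon.
b_with_lemon = final_price(2, +3, applies=True)   # pay the surcharge
b_no_lemon   = final_price(2, +3, applies=False)  # plain sugar water
```

Both framings produce the same pair of final prices; only the label on the adjustment ("discount" vs "surcharge") differs, which is the point being made above.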
That "better" analogy is a restatement of "$5 for lemonade, $3 off if you skip the lemon."
> In this case, the discount is "opt-in."
The base price is not a force of nature. $5 with the option to opt-in to a $3 discount sounds great, until you realize that just a month ago the price was $2 by default. They raised the default by $3, but allowed you to opt-out of that increase. Whether you label that "opt-in" [to the discount] or "opt-out" [from the increase], you end up in exactly the same place.
> A discount for allowing surveillance is identical to an extra charge for disallowing it.
I don't think this is necessarily true. You're right that there's an unknown base rate, but that means you can't say what you're saying as well. And if you have other companies that offer non-driving-pattern policies as well, and they're a similar price, you can see it's a discount not an added cost.
In fact, regardless, other companies are your best bet in combatting rising prices for any reason.
Yes. That is what "...unless the base rate is set externally somehow" means.
It is different initially, when only one company is offering the "discount" and they have not yet adjusted their base price upward. In fact, the people who want the discount will presumably flock to their service, which may even mean they won't raise the base price all the way up if it makes their costs lower. But if that works, the other companies will follow suit.
In short: there's a period of time when there's a difference, and you have a real choice. If the difference is real, it will get locked in to the entire industry. It's a positive economic profit, and those go away.
In theory you’re already paying the merchant fee in the “price”. So merchant found a way to improve margins and credit card companies found a new revenue source
Or, phrased in a less inflammatory manner: "Corporations can enter into contracts and engage in legal action just like people can". Even the much maligned Citizens United v. FEC basically boils down to "groups of people (corporations or labor unions) don't lose first amendment protections just because they decided to group up".
Except not everyone in a corporation has the right to speech. I'm prohibited by my employer to say anything on the company's behalf, but the C-suite and board are able to speak on my behalf. So, the company's leadership has a right to free speech, I don't.
You still have that right; you simply entered into a voluntary agreement with your employer not to exercise it in exchange for money. Happens all the time.
That's just life. Modern society obligates us to do things like feed, clothe, and house ourselves; those things aren't just going to happen because you exist. Getting a job is a sacrifice we make to fulfill those other basic obligations.
To discuss further would require us to go into the rabbit hole to debate whether capitalism is the right structure for society, but so far, everything else that's been tried has been worse.
Let's bring back indentured servitude, you have a right to not be a slave but you should still be able to enter into a voluntary agreement not to exercise that right.
That’s a facetious reply and you know it. Agreeing not to say certain things is practically a universal requirement of employment, for example, to preserve trade secrets. And indentured servitude is illegal.
>Except not everyone in a corporation has the right to speech. I'm prohibited by my employer to say anything on the company's behalf,
Yeah, that's how organizations typically work? You might have "freedom of movement", but that doesn't mean you can work in your CEO's office. Organizations also limit who has access to its bank accounts, but that doesn't mean it's suddenly illegitimate for companies to engage in transactions.
It makes me wonder: if everyone 'owned' their own data, could it be used as a form of UBI? Everyone has data from using services, everyone owns it, everyone can sell it to make a living just doing whatever they do every day.
This is only just a shower thought I had the other day though, there are probably many pitfalls when it comes to such an idea.
Unlikely. I'd think the most valuable data is generally the type that can be used to extract money from you. Targeted ads and such. So, your data's value would increase in proportion with your spending power.
I don't support UBI, but that's a fascinating idea. Unfortunately the data is worth micropennies for the individual, so it's only worth something in aggregate - like a class action settlement where you end up with a cheque for $0.34 in damages, which makes it not even worth your time. It'd only be good as the backdrop for a science fiction novel, or as an experiment by a well-known YouTube creator to see how little money it would make. I would read the hell out of that book and watch that video tho!
Connecting information to that kind of personal gains sounds dangerous. There is probably non-negligible abuse potential, like college kids legally printing money at weird scale.
You will never generate enough money from information about your consumption to fund your consumption. Obviously there's other data, but you get the point.
UBI isn’t meant to fully fund consumption. It’s “basic” income such as rent or groceries. I will accept that consumption data doesn’t cover consumption and that the value is already priced in but I don’t accept that it has no value or that UBI is meant as complete income replacement.
Honestly the path to "UBI" is probably just socialized/subsidized basic needs.
Build masses of government housing, make a healthcare public option with sliding-scale costs, and you're 90% of the way there - food and decent low-end broadband are frankly already cheap enough for the government to cover with maybe some "Don't gouge Uncle Sam or else" clauses and that's about everything.
IDK, I think almost all interesting data has no obvious single owner, because it gets created as a side effect of an interaction between two or more parties.
Take the transaction information from the example above. The record of you buying products X, Y, Z for total t=x+y+z at time T, with card C - both you and the store could argue they're entitled to it. It's about you and the money you spent and the products you received, but it's also about them and the money they received and the products that were taken off their inventory. Then the card issuer will interject, saying, "hey, the customer uses a card we provide as a service, so we're at least entitled to know which card was used to pay, to whom, when, and what the total amount was!". Then both your and the store's banks will chime in, and behind them, also the POS terminal provider.
Truth is, they all have a point. We like to think that paying for groceries with our watch is like a medieval peasant paying for fruit with metal coins at a town market. It's not. Electronic payments always involve multiple steps handled automatically, in the background, by half a dozen service providers linked by their own contracts and with their own legal reporting requirements, and each of them really do need to know at least some details about the payment they're participating in.
A simpler example: this comment. It's obviously mine. It's also a response to you, and it only makes sense in context of the whole subthread. Should anyone reply to it, they'll gain a stake in it, too - and then, arguably, everyone following this discussion has a right to read it, now and in the future. After I hit the "Reply" button, I can't in good conscience claim this comment is mine and only mine. This is why I'm personally against the practice of unilateral mass-deletion of comments on open discussion boards, as plenty of people do on Reddit, forever ruining useful discussions for the public.
(It's also why I like HN's approach to GDPR, which is, you can get your account disassociated from your comments, and you can request potentially identifying content be removed, but the site won't just mass-delete your comments automatically.)
This is fairly easily answered through legislation like the GDPR which classes this data as personal data if it’s associated with an identified or identifiable person.
A legislative body writing something down doesn’t mean society has agreed to it.
If someone journals and writes down everyone they met with locations and dates, they will laugh you out of the room if you tell them they are violating GDPR.
This also leads to stupid shit like people not being sure if they can point a camera at their driveway to catch vehicle break-ins.
Finally, classifying something as “personal data” because it’s about me still doesn’t make it “my data”.
Health data in the US is strictly regulated, very personal, but is definitely not mine. I cannot remove things from it or prevent it from being shared between healthcare institutions.
Is there any documentation on this to read further? I.e. what the different levels contain and how much on average is the cost reduction for the merchant.
The cost reduction is very small, it’s applied to interchange fees. I’ve been directly responsible for implementing this functionality on payment gateways for multiple processors because it helps reduce fraud holds as well.
Separate question, what are your ethics around the surveillance of Americans' economic activities by private actors? What "rights" are relevant in this space and which do you subscribe to?
I'm not going to debate you about anything, I just don't get the chance to ask insiders any of these questions.
My ethics are “this is unequivocally wrong without consent”.
Thankfully my work was on payment products that serviced businesses and government entities, so I did not really have to deal with that moral quandary.
However it gets muddier in other spaces as well. There are types of cards, like HSA/FSA that require something similar to level 3 data called IIAS that is used to determine what parts of your purchase are eligible. In the parts of the systems I have worked with, this is covered by HIPAA, but I have no idea if there are “clever” methods to sneak that data out of the chain elsewhere.
That just sounds like a standard cross-merchant loyalty program? I don't think there are many examples in the US, but once you realize it's a loyalty program you really shouldn't be surprised that they're tracking your purchase history. That's basically the entire premise.
In Germany, the major cross-merchant loyalty program Payback gives you one or two rounds of extra consent choices about the tracking, and the type we see here is absolutely not mandatory for participating. It does of course let them give you more personalized and useful coupons, but one can participate while declining that permission.
So called loyalty programs should be illegal on multiple fronts,
- Privacy: There's obvious tracking of purchasing trends. This derails into selling user data to everyone that makes people increasingly easy to track.
- Customer-dependent pricing / price discrimination: There's the obvious point that businesses want to charge each customer as much as they are willing to pay - you learn this in Econ 101 - but this differentiated pricing is just getting their hands into everyone's pockets, and it's awful for the economy. Free-market principles rely on perfect knowledge, and every step taken to make pricing harder to compare is an attack against the market's ability to self-regulate.
Price discrimination is not a priori bad. A fixed price with enough margin to support the business may be too high for price sensitive consumers. If you can charge more to less price sensitive consumers, you can, at the margin, make a little bit on these price sensitive consumers, and overall everyone is better off - more consumers are satisfied and their marginal willingness to consume a unit of the thing being sold is more equalized.
Yes, this is the reason why it's sort of illegal, but done anyways.
Honestly, beyond paying fewer fees on the bus as a kid, I'm pretty sure I'm being scammed every time I experience price discrimination.
I feel it's easier to make it illegal and give away reasonable credits to all consumers. I wouldn't discriminate in credits either, I'd rather have public transportation being free for all than claim to save money that society needs to spend anyway.
It doesn't help that lying about the price at any point just makes accounting harder, and creates space for wrong, uncompetitive pricing, or awful deals that would hurt business and society in the longer term anyway.
pricing is all made up to begin with though. you can't take the cost to make an item, add a reasonable amount of profit, and call that the "real" price. that's just not the reality of running a successful business. human psychology is far too complicated.
at the end of the day, prices are just a number you make up, and hopefully it's a big enough number that you stay in business. hopefully it's a big enough number that you get rich. but sometimes it's a fire sale and you just end up owing less money to your vendors.
> at the end of the day, prices are just a number you make up, and hopefully it's a big enough number that you stay in business.
The only requirement is to make up a single price for all the customers getting the same thing. It'll still be made up, and it'll account for business factors like risks, profits, etc.
I don't think everyone is better off; at best the "less price sensitive" are unaffected. But then you have to have some way of stopping arbitrage by the customers paying the lower price, through some sort of identity checks or restrictions. I think that's an unavoidable negative outcome, and it's not clear that it would always be outweighed by allowing more people to consume the product.
There are ways to adequately approximate that kind of price discrimination without detailed tracking though, like giving discounts to students, seniors, and people receiving various kinds of welfare benefit upon showing proof of status.
Yeah it isn’t as accurate as the privacy-invasive kind of tracking, since students and seniors can be wealthy and eligibility for welfare benefits doesn’t always consider assets or gifts from well-off family. But it’s accurate enough to give the economy most of the same benefit without the privacy downside.
I do think it’s fine for people to opt in to more tracking as a separate consent choice beyond merely participating in a loyalty program, for example to get more personalized and therefore more useful offers, but not as a condition of participation to merely receive at least standard offers and accumulate points. That’s how they generally work in Germany.
>I do think it’s fine for people to opt in to more tracking as a separate consent choice beyond merely participating in a loyalty program, for example to get more personalized and therefore more useful offers, but not as a condition of participation to merely receive at least standard offers and accumulate points. That’s how they generally work in Germany.
Sounds like that'll push retailers to switch from a system where they give points/discounts to everyone, to one where points/discounts are "targeted", which of course requires opting into tracking. Like I said before, the whole premise of loyalty programs is that you're being tracked in exchange for rewards. You really can't expect to have your cake (discounts) and eat it too (not being tracked).
my grandmother collected green stamps from the grocery store, which she saved for food discounts.. I don't think that there was any customer ID involved at all..
honestly, describing pervasive tracking of purchasing associated with govt ID as "normal" is.. it's a sickness, and parts of it are illegal now. It is not required or "normal" at all, from this view
> Sounds like that'll push retailers to switch from a system where they give points/discounts to everyone, to one where points/discounts are "targeted", which of course requires opting into tracking. Like I said before, the whole premise of loyalty programs is that you're being tracked in exchange for rewards. You really can't expect to have your cake (discounts) and eat it too (not being tracked).
As I said, in Germany you can indeed have your cake and eat it too in this regard, if you’re okay with the offers you receive being less targeted and therefore less appealing.
My understanding is that GDPR requires them to offer the option to decline the personalized targeting without being blocked from participation overall, and this is probably the same anywhere in the EU. But I don’t have personal experience with this in other EU countries and could be misunderstanding.
>As I said, in Germany you can indeed have your cake and eat it too in this regard, if you’re okay with the offers you receive being less targeted and therefore less appealing.
The "cake" in this case refers to the offers you had before GDPR came into effect and/or regulators started enforcing it. They might give opt-out people some token offers to appease regulators, but I doubt it'll be anywhere close to the offers they had before.
> They might give opt-out people some token offers to appease regulators
It’s not an opt-out situation. As per GDPR requirements, these programs have a specific opt-in prompt for personalized targeting, separate from the one which is for generally collecting and redeeming points as a member, and it’s not pre-chosen by default.
I think one can assume that many people will decline to opt in, especially in a culturally privacy-focused country like modern Germany, and since not opting in is far more behaviorally common than explicitly opting out, but also that many others will knowingly consent in exchange for the benefits. So I think they would generally want to give decent offers to both categories of people, since the non-consent group is large enough to matter. Of course the personalized ones would be better, otherwise nobody would want to give that consent.
Myself, I’ve consented to some but not all of the personalized targeting and information sharing from the loyalty programs I participate in here, after reading the descriptions of the requested consents in detail and making a conscious choice. In at least one case I converted a no to a yes after thinking about it longer. It’s good to have that transparency and control, and not to have the legalese surreptitiously remove your right to sue the store should that become necessary as is common in the US (forced arbitration is generally illegal here in B2C agreements).
As for the rest of your most recent comment, I wouldn’t know; I didn’t ever live in Europe before the GDPR.
It's the normal term, in that it has been normalized as such. But it is otherwise not accurate except in the barest, most monetaristically self-fulfilling-prophecy way.
I believe that's opt-in. At least it seemed to be when my landlord switched to Bilt.
There's a section of your Bilt profile that shows your other credit cards and whether you want them linked. It's pretty freaky to see them listed in the first place.
I definitely keep them off.
Bilt is ultimately a big points/reward program though, so you might get points for having them connected.
I still haven't figured out exactly what Bilt's business plan is, but the main part seems to be trying to get as much financial data on people as possible, and partnering with landlords to do so. And since it's how you pay your rent, you can't unenroll completely. (Unless you maybe mail your landlord a paper check?)
It was initially opt in for me, then they made it mandatory.
(Sure, I could pay by check, but consumer banking technology in the US already feels like it is lagging a decade behind other countries without voluntarily going further back. Paying by check every month would be quite inconvenient.)
I'd already decided to avoid bilt as much as possible, but reading this thread prompted me to try going a little further.
> Request to Know... The specific pieces of Personal Information we collected about you.
> You have the right to opt-out from having your Personal Information and Sensitive Personal Information sold to third parties. You also have the right to opt-out from having your Personal Information and Sensitive Personal Information shared with third parties for purposes of cross-contextual advertising
I’ve had to deal with Bilt [0]. In case you’re not aware, they have a “feature” called Instant Link that automatically pulls ALL of your personal and sensitive financial data from financial institutions, including your credit card accounts, balances, etc. They apparently do this via a partnership with a company called Method Financial [1].
It’s frankly the most intrusive thing I’ve ever encountered in any software I’ve ever used—I’m not sure how it’s even legal, but this is America where we have no real privacy rights.
Instead of giving you the option to opt in for them to get this level of access, they automatically enroll you into it when your account is created, pull your data, and then allow you to “opt out” afterward, which enables them to have access to your personal and sensitive financial data anyway. And since you literally must have an account with them if your building uses their services for rent payments, they’ve effectively rigged the system to force millions of folks to unknowingly give them access to their personal and sensitive financial data.
Anyway, in your Bilt privacy settings, there are some options you can disable (including Instant Link), and I recommend that you disable ALL of them, although given the dark practices of this company, I don’t even trust that those settings are actually honored.
Side note: Did you know about a company called Method Financial that somehow has real-time access to ALL of your personal and sensitive financial data? Did you know that this company you never heard of that has said access then sells that access to the highest bidder? Do you remember agreeing to any of that anywhere? Yeah, me neither (on all counts)…
Thanks for the heads up. Luckily I can go back to analog with certified funds to pay rent. I suspect, without evidence, this is due to the relatively strong tenant protections in Chicago.
You might want to look into the sophisticated and pervasive facial recognition technology used by major retailers. Paid by cash? It can still be tracked to you. For "fraud prevention", of course.
>Paid by cash? It can still be tracked to you. For "fraud prevention", of course.
They can already track you through your phone and/or credit cards. Why bother setting up a massive facial recognition system for people paying with cash when they only account for 10% (or whatever) of overall shoppers, and have less disposable income than average?
Word of mouth: retailers in China have been using face recognition technologies to identify key customers so that they can be greeted by name and handed their favorite drink upon entering the premises.
The trouble with "word of mouth" is that you can't tell whether something is actually real, or vaporware that some account executive dreamed up to close a deal.
I agree, which is why I qualified it. I was working at a retailer, building its cloud systems at the time. It was told to me by a colleague who claimed to have been told it by a peer from China at a conference.
Facial recognition on a small corpus of known faces (what everyone experiences on Facebook, their phones, etc) is an easy problem.
Walmart picking up a face walking into a store and matching it against 30 million possibilities is going to return so many false positive matches it’s going to be completely useless.
Facial recognition is illegal where I live, both for gov't and commercial uses. Several major cities in the US have banned it (e.g., San Francisco, Boston, etc.).
I'm assuming you're using your Bilt card when this happens.
Your Bilt agreement stipulates how itemized transaction data is shared (level 3 in payment terms; level 2 is "enriched" with subtotals/tax and merchant information, which is what you typically see with your normal bank).
Card networks (Mastercard, VISA) have different fee structures that incentivize more detailed information like level 3 for lower processing fees for merchants - here's more details on levels https://na-gateway.mastercard.com/api/documentation/integrat...
What's most interesting to me about that is that they are willing to disclose that data to your email provider. Amazon, for example, is pretty cagey about what you've bought when sending emails, probably because they don't want Google to be able to use that information to target ads to you. (Not because they are Good and care about your privacy, but because they think they're going to beat Google at advertising. How's that going?)
So yeah, I don't get why they would do this. It gives their advertising competitors valuable data for free, and it pisses off customers by telling them that they're being tracked when they shop at Walgreens. Strange stuff.
Oh, here I thought it was because every time I want to remember info about an order, it forces me back to their platform, rather than simply searching my email like I do for every other item I've ever purchased.
Loyalty cards are one avenue for data brokers to get your purchase history. Credit cards can also sell your purchase data. Currently the only safe-ish way to be anonymous is with cash. That may disappear with pervasive face recognition and cell phone tracking.
What’s most strange to me is why this Bilt company would pay for that data feed and somehow think it provides some value to you. It’s obviously just a creepy way of saying "we know too much about you".
This happened to me with square (block). I bought furniture, and they used square and required my email address for delivery. And then after that, anywhere I used square to pay for something using the same card, they would email me a receipt. I complained and they played dumb and never did anything.
Unfortunately the GDPR is largely toothless if a company without an EU presence chooses to ignore it.
I live in Ireland and my data is in the databases of several US data brokers. Those companies can't be forced to comply with the GDPR because they simply do not have an EU presence. You don't have to search far to find stories from people who made complaints to their local Data Protection office about such issues only to be told there's nothing that can be done.
HN rants about it because it’s not a good solution. It identified a problem but caused an idiotic fallout (cookie banners) and failed to actually put in a framework to enforce that companies aren’t just lying.
> failed to actually put in a framework to enforce that companies aren’t just lying.
That's not true. I work at a European company and we were contacted by the agency to give a complete list of the partners that we use, the reasons why each is justified, which routines we have for deleting old data, etc.
I guess in theory we could have lied and made up data, but only an idiot would risk lying to the government. Everyone at my company took it seriously and tried to provide as accurate data as possible. There were also several follow-up questions that had to be answered.
The mindset of lying to the government to "protect" your employer seems so far fetched. Why should an employee lie to the government? If it turns out that the company was in violation of GDPR the worst case scenario for the company is a fine. If the government finds out you are lying, the employee faces jail time. The trade-off is simply not worth it.
Maybe it's easier to lie to the government in some countries, but not in mine. The government agencies actually check and verify your claims.
The lie doesn’t have to be intentional. All it takes is a really simple accidental debug logging flag to collect what amounts to a GDPR violation.
The point is that no effort was made to implement a technical solution to protect privacy. So it’s upsettingly trivial to violate the GDPR unknowingly and any company that is even a little unscrupulous (of which there are hundreds) can easily ignore the law.
> The point is that no effort was made to implement a technical solution to protect privacy.
And you want the government to do that?
Why haven't the companies who at every turn shout about how privacy-conscious they are done that themselves?
It's now been 8 years of GDPR. Why hasn't the world's largest advertising company incidentally owning the world's most popular browser implemented a technical solution for tracking and cookie banners in the browser? Oh wait...
Yes, it’s their job. Building codes have technical specifications and don’t allow people to opt out. Airspace is very tightly regulated with technical specifications.
> Why hasn't the world's largest advertising company incidentally owning the world's most popular browser implemented a technical solution for tracking and cookie banners in the browser? Oh wait...
Because the government is the thing that is supposed to produce useful regulations, not an advertising company.
GDPR is like trying to solve smog by passing a law that says people can opt out of smog by staying out of the city. No regulations to actually reduce smog.
This literally just happened to me last week. I emailed them to ask them how to stop this:
> I understand you want to opt out of all points and rewards and not be tracked.
>
> We're constantly working to make Bilt as rewarding as possible. Currently, we don't have an option to opt out of points or rewards. To prevent your transactions from being tracked, the most effective step is to unlink your card from your Bilt account.
>
> To unlink the card:
>
> Go to the Wallet tab > Scroll down to the Your Linked Cards section.
> Look for the card you would like to unlink and tap View all benefits.
> Click the ellipsis [:] on the top right, then tap Edit > Unlink.
Gah, I hate this service and will avoid renting in buildings that use it in the future.
Hopefully exclude? By whom? At some point, somebody has to decide it was sensitive, and by what standards? Does Bilt decide not to use it after they were already sold the data? Does the aggregator, after it has already been sold the data by the harvesting seller? Does the harvesting app reduce the appeal of its data by deliberately excluding it? Does the harvesting app care to spend the money on doing that?
That's what I do, but I assume some stores like Target also track you by Bluetooth, facial recognition, etc, and can correlate any past or future cash purchases if you use your credit card once for maybe a large innocuous purchase.
What if landlords could reach their grubby hands into the data firehose their tenants spew out? I can save 5% on some useless shit at X store, you say? Sign me up!
Bilt as a concept is the biggest pile of late stage enshittification horse shit I’ve ever seen.
It would be amazing if you could build and send fake profiles of this information to create fake browser fingerprints and help track the trackers. Similarly, creating a lot of random noise here may help hide the true signal, or at least make their job a lot harder.
Unfortunately fingerprinting prevention/resistance tactics become a readily identifiable signal unto themselves. I.e., the 'random noise' becomes fingerprintable if not widely utilized.
Everyone would need to be generating the same 'random noise' for any such tactics to be truly effective.
A sufficient number of people would need to, not everyone. And if I were the only one then tracking companies wouldn't adjust for just me. Basically, if this were to catch on then ad trackers wouldn't adjust until there was enough traffic for it to work. Also, that doesn't negate the ability to use this to create fake credentials that aids in tracking ads back to their source.
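To make the idea concrete, here's a minimal sketch of what such noise generation could look like. The attribute names and value pools are hypothetical, not from any real tracker; the key design point is drawing fake values from the same distributions real users produce, so the noise is not itself a distinguishing signal.

```python
import random

# Hypothetical fingerprint attributes with plausible value pools.
VALUE_POOLS = {
    "screen_brightness": [round(i * 0.05, 2) for i in range(21)],  # 0.0-1.0
    "volume": [round(i * 0.1, 1) for i in range(11)],
    "headphones": [True, False],
    "free_disk_gb": list(range(1, 257)),
}

def randomized_fingerprint():
    """Generate a fresh, plausible-looking set of values (e.g. per tab/app)."""
    return {attr: random.choice(pool) for attr, pool in VALUE_POOLS.items()}
```

Regenerating this per browser tab or per session would break any soft-linking that relies on these values staying stable.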
Here's a real-life example:
You show up alone at the airport with a full-face mask and gray coveralls. You are perfectly hidden. But you are the only such hidden person, and there is still old cam footage of you in the airport parking lot, putting on the clothes. The surveillance team can let you act anonymous all you want. They still know who you are, because your disguise IS the unique fingerprint.
Now the scenario you're shooting for here is:
10 people are now walking around the airport in full-face masks and gray coveralls. You think, "well now they DO NOT know if it's ME, or some terrorist, or some random other guy from HN!"
But really, they still have this super-specific fingerprint (still fewer than 1 in a million people have this disguise) and all they need is ONE identifying characteristic (you're taller than the other masked people, maybe) to know who's who.
It's kind of how people used to make fun of the CIA types and "undercover" operatives.
Look for the guy wearing a conspicuously plain leather jacket and baseball cap. "Why hello there average looking stranger I've never met. Psss, 'tis a fair day, but it'll be lovelier this evening.'" "Oh ... it's Murphy the spy you want."
Also, found out the CIA declassified a bunch of jokes several years back in searching to respond. [1] Most are already dead links on CIA.gov, yet there's a few remaining. Nother one on people commenting on the CIA. [2] "These types are swin- Ask in Langley if they work for the CIA. Every- Ask in Langley. They will tells one knows them." 'You, it's the big building behind.'
The garbage in the last sentence of this comment is due to the second link including incorrectly OCR'd text from an image of a newspaper using a two column layout. Both links are very amusing.
I think this is a slightly different case no? If the ad network is using a very high precision variable to soft-link anonymized accounts, then randomizing the values between apps should break that.
Your analogy applies more to things like trying to anonymize your traffic with Tor, where using such an anonymizer flags your IP as doing something weird vs other users. I’m not convinced simply fuzzing the values would be detectable, assuming you pick values that other real users could pick.
I'm sure the ad networks do a lot more than use high precision variables for soft-linking.
These are professional networks with a ton of capital thrown behind them. They have pretty decent algorithms, heuristics, etc; and you don't make money (compared to the other data correlation teams) if you do simple dumb stuff. I'm certain they take into account those trying to be privacy-conscious, if only to increase their match rates to be competitive.
Swapping fingerprint details is different than your example since it happens immediately and out of view. You could change fingerprints very often/create a new set for every browser tab. Additionally, as I pointed out before, they won't adjust until there is enough usage and when there is enough usage then the random settings are hard to distinguish because it isn't 1 in 1m. I get that they will keep trying to track down things that make browsing specific, but that is what updates are for. We need to at least make it hard.
Unfortunately the fox is building the hen-house. They 'should' build products that improve my experience but they have very little incentive to do that when they get paid so much for the data they can extract. What would actually do it is regulations similar to financial regulations. OS/browser companies shouldn't be allowed to do business with data brokers. Then they would have one primary customer, the consumer, and competition would focus on the correct outcome. But 'regulation' is an evil word so we aren't likely to see anything like that actually happen.
Technically, information is the bits you DON'T know. Once you know the bits, it isn't "information" in the Shannon sense, in that it takes no energy to reset a message if you know all the bits, but takes N units of energy for N unknown bits of information. (See Feynman's lectures on computation.)
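The bound being alluded to here is Landauer's principle, which Feynman discusses in those lectures: erasing N unknown bits dissipates at least

```latex
E_{\text{erase}} \;\ge\; N \, k_B T \ln 2
```

where $k_B$ is Boltzmann's constant and $T$ the temperature. A message whose bits you already know can in principle be reset reversibly, with no such lower bound.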
It's also useful for making ads more effective & manipulation overall. As long as you can connect the data you track & buy, you can use Thompson sampling. In fact, why would we think knowing the name of a person is anything but bad business?
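For readers unfamiliar with Thompson sampling: here's a minimal sketch of how it might pick which ad variant to show, assuming a simple click/no-click feedback loop. Nothing here is from a real ad stack; it's just the textbook Beta-Bernoulli form.

```python
import random

class ThompsonSampler:
    """Each variant keeps a Beta(wins+1, losses+1) posterior over its
    click-through rate; we sample from each posterior and show the argmax."""

    def __init__(self, n_variants):
        self.wins = [0] * n_variants
        self.losses = [0] * n_variants

    def choose(self):
        samples = [random.betavariate(w + 1, l + 1)
                   for w, l in zip(self.wins, self.losses)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, variant, clicked):
        if clicked:
            self.wins[variant] += 1
        else:
            self.losses[variant] += 1
```

The more data you can tie to a user, the better the priors you can start from, which is exactly why linked identities are valuable to these systems.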
I'm in this industry, and I have knowledge about this.
It's important to point out that it takes a long time for uptake of new versions of ad SDKs. The general assumption is that it takes about 6 months after release of a new version for 50% of ad traffic to come from that version or newer. Also, for every version you release, approximately 1% of traffic will never upgrade past that version.
In that kind of world, over-collecting data makes sense, especially if you think nobody will ever find out. Like total / and free disk space. There's no good reason to need those, right? But let's say an advertiser comes to you and says "we want to spend $1M / day to advertise our 10GB game, but only to devices that could install it." All of a sudden it's useful to know that a device only has 8GB of disk space, or only 100MB of free space.
So OK, if we didn't collect disk space, now it makes sense to collect disk space. Let's add it to the SDK. It takes a month or two to release a new version of the SDK. 3 months to get any meaningful traffic from it, and another 3 months to get up to 50% of your traffic. Assuming the ramps are linear, 4 months of 0%, and then 3 months of ramping to 50%, 30 days per month, you'll make $22.5M in the first 7 months. But if you had the logic in there to begin with, you'd have made $210M during the same time period. That makes it an easy choice for the business folks.
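The back-of-envelope math above, written out (all numbers are the hypotheticals from this comment):

```python
# $1M/day campaign; a new SDK field sees 0% traffic for ~4 months,
# then ramps linearly from 0% to 50% coverage over 3 months.
DAILY_SPEND_M = 1.0   # $M per day
MONTH_DAYS = 30

# A linear ramp from 0% to 50% averages 25% coverage over those 3 months;
# the first 4 months earn nothing.
ramp_revenue = DAILY_SPEND_M * 0.25 * 3 * MONTH_DAYS
# Versus having the field in the SDK from day one, at full coverage.
full_revenue = DAILY_SPEND_M * 7 * MONTH_DAYS

print(ramp_revenue, full_revenue)  # 22.5 210.0
```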
There are answers to this, but they all have drawbacks. You could limit data that ad agencies can collect. This reduces the value of ads. And agencies have learned that some data (like location) is low-value and high-risk, so they're removing the ability to supply it. I think it'd be better to support a model where ad code can be updated independently of the app. This way we could push out bug fixes faster, and could remove our just-in-case collection, but Apple has no signs that this is coming soon, and Google's answer has been such a shit-show that we aren't considering it viable over the next 4 years.
Edit: To address screen brightness specifically, it's a very rough proxy for age of the user.
> But let's say an advertiser comes to you and says "we want to spend $1M / day to advertise our 10GB game, but only to devices that could install it."
I don't want to call you a liar, but having seen ads that are presumably targeted at me, it feels like a total fiction to say that anyone is actually capable or interested in doing this.
I get advertisements for just absolute nonsense garbage that has no bearing on my life, and no bearing on anything that could have possibly been collected from my device.
The closest thing is that when I was in Mexico for a week, some of my podcast pre-roll ads were in Spanish. (Which, I should note, I do not speak fluently enough to even understand.) Even now, the occasional ad I'm served on a podcast is in Spanish.
And that's it. They saw that my IP came from Quintana Roo, and (somewhat reasonably) decided that I need to hear Spanish-language content. Even when I physically moved back to the United States.
The mobile ad industry is weird, and has some perverse incentives. Good games don't advertise (they don't need to). Games that hook the users just enough that they can show them more ads tend to plow that money right back into advertising to get more users. Those are the ads you see 99% of the time, and they're not really targeted. They're just people who know that the average 15 second interstitial will net them $0.006 in revenue, so they bid for it at $0.005.
Are there whales that spend $1m / day in advertising. Absolutely, 100%. Are they running at all times? No. We typically see that kind of spend from a single advertiser around 30 days out of the year. They're short campaigns, typically around a launch of a big title, and they always try to target as narrowly as they can to maximize their impact.
You're right about it using IP geo-location to guess where you are and what language you want. We also use that to determine if we should show you the GDPR disclosures. But try looking at ads on a Xiaomi phone versus a Samsung and you'll see a different set of ads, because one of those purchasers tends to have more disposable income.
I believe some apps actually automatically brighten your screen when displaying a QR code for scanning, and then restore the brightness to its previous setting when you navigate away from the QR code.
I believe the Whole Foods app does this for its first screen.
Everything listed changes way too often to be useful for tracking. My guess is that it's for anti-fraud purposes. Someone setting up fake devices and/or device farms is likely to get similar values, which means they can be detected via ML or whatever.
> screen brightness, memory amount, current volume and if I'm wearing headphones
None of those are likely to change when you navigate from one website to another, with tracking/ads disabled, which is what they want to be able to track. Otherwise they'd just use their cookies.
One device visits a site where you sell ads. A minute later, a device with identical battery, volume, headphone, brightness, model number, browser version, and boot-time-to-the-second values arrives on another site you run ads on. There's a pretty good chance they're related, because the odds of all those matching by chance, on those two sites within that time window, are rather low: https://coveryourtracks.eff.org/
Plus it doesn't have to be perfect. It just has to be good enough in bulk to sell.
Combine this with IP, timestamp, and some behavioral patterns, and you’ve got an extremely robust tracking mechanism that operates outside of explicit consent mechanisms.
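A toy version of that soft-linking heuristic. The attribute names and values are illustrative, not from any real exchange; real systems would weight attributes by how much entropy each carries.

```python
# Two requests seen close together on different sites, with identical
# incidental attributes, are very likely the same device.
LINK_ATTRS = ("battery", "volume", "headphones", "brightness",
              "model", "browser", "boot_time")

def link_score(req_a, req_b, attrs=LINK_ATTRS):
    """Fraction of incidental attributes that match between two requests."""
    return sum(req_a[k] == req_b[k] for k in attrs) / len(attrs)

a = {"battery": 0.81, "volume": 0.4, "headphones": True, "brightness": 0.55,
     "model": "iPhone14,2", "browser": "Safari 17.4", "boot_time": 1700000000}
b = dict(a)                                       # same device, next site
c = dict(a, battery=0.34, boot_time=1699990000)   # a different device

print(link_score(a, b), round(link_score(a, c), 2))  # 1.0 0.71
```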
>If it was LTE, I bet the lat/lon would be much more precise.
False. Apps don't have access to cellid information unless they also have location permissions, in which case they can just request your location directly.
>the free apps you install and use collect your precise location with timestamp [...]
This is alarmist and contradictory, given that the author admits a few paragraphs up that the "location shared was not very precise". It might be possible for the app to request precise location via location services, but the app doesn't request such permissions (at least on Android; you can't check an app's requested permissions on iOS without installing and running it), so such apps are most definitely limited to "not very precise" locations.
>At the same time, there is so much data in the requests that I'd expect ad exchanges to find some loophole ID that would allow cross-app tracking without the need for IDFA.
At least in theory they're not supposed to do that, but it'd be hard to enforce.
"If a user resets the Advertising Identifier, then You agree not to combine, correlate, link or otherwise associate, either directly or indirectly, the prior Advertising Identifier and any derived information with the reset Advertising Identifier. "
Cell carriers will gladly sell that information to apps. You can make calls to them over the cellular network (even if Wi-Fi is active!) and they will hand it back to you. No location services is required to do this.
"Precise" has a specific meaning for iOS Location Services and this ain't it. Presumably it's just doing IP geolocation which could be the same post code, or it could be the wrong city entirely. I'd expect it to be much worse on LTE than WiFi.
>Eh. Zip code level location + timestamp is still pretty invasive, even if, pedantically, that’s not very precise.
That's basically sent to multiple parties (ISPs, transit providers, CDNs, analytics/advertising/diagnostics/security vendors) every time you visit a website. If this counts as "invasive" to you, you shouldn't be connected to the internet at all, much less buying a tracking device (a smartphone) and installing random ad-supported apps on it.
> Advertising Tracking ID was actually set to 000000-0000... because I "Asked app not to track".
> I checked this by manually disabling and enabling tracking option for the Stack app and comparing requests in both cases.
> And that's the only difference between allowing and disallowing tracking
This is revealing! I'd wondered about this. Apple's curious wording "Ask App Not to Track" leaves suspicious wiggle room: apps may not track by an ID, but could easily 'fingerprint' users (given how much other data is sent), so even without a unique ID, enough data is provided for them to know who you are 99% of the time.
Amended Dead Privacy Theory:
The Dead Internet Theory says most activity on the internet is by bots [0]. The Dead Privacy Theory says approximately all private data is not private, but rather is accessible on a whim by any data scientist, SWE, analyst, or db admin with access to the database, and by third parties.
Apple sets Advertising Tracking ID to 00000-0000 because it's the only technical control they have. However, apps are also supposed to respect the signal with regards to other methods of cross-site/app tracking and disable fingerprinting mechanisms.
It's not the only technical control they have - every single datapoint an app can gather is ultimately provided from the OS. They could let you disable access to metrics that have proven to be useful for fingerprinting.
They could also attempt to block known tracking code; all games with IronSource ads will run the same tracker binary, byte for byte. There are a lot of things they could do but don't, since in the mainstream they have a pretty good reputation when it comes to privacy.
They have other controls. For example, a game does not need to know your precise battery level (respect the low power mode setting), or precise screen brightness (respect the dark mode setting), or precise storage or volume (appropriate is sufficient). They really don't need to know if you're using wired or bluetooth headphones, and can request a specific entitlement if they have a valid use for that information.
99% of games do not need precise location (some exceptions are pokemon go, etc). They can request and receive an entitlement.
> There's no "personal information" here, but honestly this amount of data shared with an arbitrary list of 3rd parties is scary.
Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
Screen brightness, boot time, memory, and network operator could probably fingerprint any device all by itself.
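A rough upper-bound estimate of the entropy in that claim, assuming the signals are independent and uniformly distributed (both assumptions overstate it, and the cardinalities below are guesses):

```python
import math

signals = {
    "screen_brightness": 100,          # ~100 distinguishable levels
    "boot_time_minute": 14 * 24 * 60,  # minute precision, two-week window
    "memory_size": 8,                  # common RAM configurations
    "network_operator": 10,            # major carriers in one country
}

bits = sum(math.log2(n) for n in signals.values())
print(f"~{bits:.1f} bits")  # enough to distinguish roughly 2**27 devices
```

Around 27 bits would be enough to single out one device among ~100 million, so even with correlated, non-uniform real-world values, the claim is plausible.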
Automatic brightness probably helps honestly. It could help confirm whether someone is in fact in an area that has high levels of lighting around them (e.g., in a store, at a beach on a sunny day, etc.)
Every little piece of data that is gathered and used can help, even if it isn't immediately apparent.
Now I could be wrong on this, but I feel like advertisers don't need to know something is true about a user, they just need to be confident something is true about a user and that's where data points like screen brightness can be of help to them.
Kind of a joke, but it could be useful for determining if they should serve light-mode or dark-mode ads. But I suppose they could just detect if dark/light mode are enabled.
I find it fascinating reading Hacker News: the IT folk who build the software that enables and profits from the advertising, tracking, and personal-information-selling industry are the same people who complain the loudest about it. Unbelievable.
Probably because people like us have more visibility on the huge scope and consequences of this kind of privacy invasion. Most people don't actually see this with their own eyes. They probably know it's happening in the back of their heads but it's not 'real' to them. It's very real when you know you could technically run a report of all your users that also have grindr installed.
I'm sure most of us would prefer not to work somewhere that does it, but we need to eat too. And we have no input in this.
For example, recently I was given a presentation on a new IoT product at work. Immediately I asked why we're not supporting open-standards stuff like Matter as a protocol. And I was told that'll never fly with marketing, because they want all the customers to have eyes on their app for their 'metrics' and upselling. I told them fine, but I'm definitely not using this crap myself. But it was shrugged off. We are too few for them to care about. And it makes us very unpopular in the company too. So it's a risky thing to do that doesn't help anyway. The "don't fight them but join them and change from within" idea is a fallacy.
Yes, because everyone on Hackernews is identical and working on the exact same stuff. It's not like it's a few companies enabling this and each marketing department going like oooooh i want that.
There’s no code of conduct or rule book that anyone should follow so ethics is determined at the individual level. That quickly turns to, either I build it for them or the next guy will. Resistance is futile type thing.
Most other types of engineering have published rules and standards and industry credentialing including ethics tied into it and loss of credentials for an ethics violation would be career ending in many cases.
(I can only think of straw-man examples. Does the private prison industry have problems getting architects, civil engineers, electrical engineers? Does the pharma industry have problems getting chemical engineers for manufacturing addictive painkillers?)
Architects have to build to codes and have their plans signed off by an engineer that is very much liable for the basic safety of the structure.
I’m a CFO and the CPA credential helps a whole industry of accountants avoid outright shenanigans that would take place if we could report financials the way sales, marketing and some others would prefer. We also have a whole layer of audits to help make sure what we say is true, is true.
It’s obviously not perfect and there are always going to be bad actors, but having industry guardrails helps a lot more than is obvious. This is one of those things where the absence of data is the data: it’s pretty rare for a skyscraper to structurally fail, and Enron-type financial fraud situations are relatively rare. It’s hard to imagine how much worse things around us would be without checks and balances.
As for the pharma example, I think it’s a good point, but also a bit of a case study in where this should have worked but didn’t. Those meds are sometimes necessary. Just as technologists originally thought social media was beneficial to society, that belief could be revisited with the benefit of hindsight. It’s pretty subjective and opinionated, but I personally think R&D should be pretty loose; in pharma you have to be open-minded, as things are sometimes discovered while in search of something else. The business side of pharma, the salespeople pushing those addictive pain meds, should be able to push them (with an expectation of presenting accurate data on research, side effects, etc.). Prescribing physicians are ultimately the best check. Even when lied to about addiction stats, they didn’t perform the appropriate check and sound alarms / stop prescribing, as their profession would normally have done. Instead, as a whole, they leaned into the idea that pain should be managed more aggressively than it had been in the past, and they were very slow to act even once addiction had been identified as a problem. The confluence of all these things has caused the industry to become introspective and change some things in hopes of avoiding a repeat, just as Enron did for finance and household accident data does for building guidelines. Software remains the Wild West without something similar in place.
To circle back to the CPA example, as that’s what I’m most familiar with: it doesn’t tell me not to work in a particular industry. Like, private prisons need accountants. But it tells me what types of accounting practices are acceptable. A similar example in the context of this topic: you wouldn’t be told not to work for an adtech company, but in that employment you would be able to say that certain types of data sharing are decidedly inappropriate according to your industry standards, and that you would be putting your career in jeopardy by building a feature sales requested. Furthermore you’d have things like whistleblowing hotlines, and eventually other companies that couldn’t work with your adtech company because doing so would be considered an ethics violation on their part. Etc etc.
We might not be the same. Every time someone asks for tracking anything I complain and question a lot. People hate me, but if there is no real use case for storing all information we can get I will veto as much as I can.
The IT folks working in the advertising industry are much more the "who cares, everyone has all our data already anyway".
So you think individuals have control over how the industry works? The insight it gives devs is why some are so outspoken about it. This is a good thing.
A long time ago I had the idea to create an 'accountability server'. The high level idea was for it to generate unique credentials so that you could track to the source who sold your info. There are some ways to do that now, but I wonder if it is time to start exploring that idea again. If you exposed it as a VPN/proxy+app that ran on a server in your home, so that you could collect your own data and provide unique credentials on account creation, then I wonder how much that combination could figure out. Since it could act as a man in the middle it potentially could annotate credential source and see the ads and potentially track them to source. "This male enhancement pill ad is linked to your tire purchase." There is a lot of hand waving here, but I wonder if something like this could be built. The first step to stopping things like this is showing people who did it to them.
Wouldn't this require access to bid side data? The OP mentions it's pretty easy to get, but any company using this to expose advertisers is going to get their access cut off pretty fast. As the saying goes, "snitches get stitches".
My thought here is that there is likely a lot of leaked data in the ads themselves; that is one of the reasons you would need the VPN/proxy. Additionally, you could (potentially) create fake browser-fingerprint credentials on the fly to feed sites, and have the VPN/proxy track the ads that show up for those credentials. (Other credentials, like email addresses, could also be created for you by the app.) You don't see the bid data, but you may be able to control the tracking that spurs it, and you can see the results of it, so a setup like this could likely make some inferences.
I don't know this industry well and the tech here has long since eclipsed me, so I really don't know what is possible, but I imagine there are possibilities with this setup.
wow @apokryptein thanks for posting my article here... I'm shocked it's #1 rn.
if anyone has any questions regarding the post - I'm here to answer & talk!
Wow, what a beautiful article you have written. Usually I'm not much into reading, but for the first time in a long while I've finished a full article with such interest. Thanks for it!
I don’t know the answer to this, which is why I didn’t mention it in the post.
However, I could speculate that these data-broker companies scrape leaked [hacked and stolen] data from various panels and then match records on their end.
kinda OSINT for bad reasons.
The browser has less access to your system, and usually only if you give a specific website permission to use these features. Mobile operating systems are slowly changing that though.
What is checking the available web APIs supposed to imply? The comment is correct: the browser can't access your location without explicit confirmation from the user, and the same applies to the other web APIs. Or at least mention a handful of APIs that you know don't require confirmation, instead of just linking MDN.
The more APIs available for JS to interact with, the more granular and detailed browser fingerprinting can be. For example, how your browser renders WebGL can differ depending on what graphics card (and drivers) you have. The resulting values can be read back and stored to create a detailed fingerprint of who you are -- this could potentially be done by Google Fonts or AdSense or any number of the countless ad and analytics frameworks loaded on basically all websites.
Browse the source in the following directory to see a plethora of examples of how web APIs are used to fingerprint users -- and this is just one publicly-accessible library we can easily review the source code of (proprietary, obfuscated ones likely use additional methods): https://github.com/fingerprintjs/fingerprintjs/tree/master/s...
One example used in multiple places in the above repo is "matchMedia"[0] which was a Web API method added a while ago (well, many years ago) to give a programmatic result of whether a given CSS media query matches or not. This can be used to detect, for example, user preferences like whether the display is HDR-capable[1], or the Accessibility setting "reduce motion" is enabled[2].
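The core of this technique is combining many small, individually innocuous signals into one stable identifier. A toy simulation of that step (in Python, since the real thing runs as in-browser JS; the signal names and values here are just examples of what calls like `matchMedia` or WebGL renderer queries would return):

```python
import hashlib
import json


def fingerprint(signals: dict) -> str:
    """Hash a bag of browser signals into a stable identifier.

    In a real fingerprinting script the values would come from APIs
    like matchMedia("(dynamic-range: high)").matches, the WebGL
    renderer string, etc. Here they are stand-in inputs.
    """
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


a = fingerprint({"hdr": True, "reduce_motion": False, "gpu": "Apple M1"})
b = fingerprint({"hdr": True, "reduce_motion": True, "gpu": "Apple M1"})
# Changing a single preference yields a completely different identifier,
# while the same device/settings always hashes to the same value.
```

The more APIs a script can probe, the more entropy each user contributes, and the more likely the resulting hash is unique to one person.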
What is contained in the latest JS standard that lets you collect fine-grained information about your users without their consent? The web APIs that deal with sensitive data all require explicit user confirmation to be used.
At least on Android, the browser is limited by the Android permission system, i.e. if you don't give the browser GPS permission, it cannot pass location on to pages either. In addition, the browser will ask if you want to grant a page access to something like positioning data.
Furthermore, it is hard for a web page to run in the background and receive user data.
How much money is tied back to, or generated from, wifi AP SSID databases for geolocation ?
Because wow that would be simple to spoof and chaff and spam.
It's dinnertime here but if I had a few minutes I could make (my own house) appear indistinguishable from (Chase Center) from the perspective of SSID landscape.
It would cost nothing and is trivially easy. Even if they pair MAC addresses that's not a big hurdle. I'll bet relative signal strengths are not measured.
Google's Geolocation Service used to charge $4 per 1,000 requests to their cellular/Wi-Fi geolocation API. Essentially you send a list of Wi-Fi MAC addresses and their associated RSSI values, and you get back a latitude and longitude with an accuracy metric. It was surprisingly good when GPS wasn't available (sub-50-meter accuracy).
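As a sketch, the request body for such a lookup looks roughly like this (field names as I recall them from Google's Geolocation API docs; the MAC addresses and RSSI values are made up, and you'd check the current docs before relying on this):

```python
import json

# Request body for a Wi-Fi based geolocation lookup: a list of
# overheard access points with their signal strengths in dBm.
payload = {
    "considerIp": False,  # don't fall back to IP-based location
    "wifiAccessPoints": [
        {"macAddress": "01:23:45:67:89:AB", "signalStrength": -65},
        {"macAddress": "01:23:45:67:89:AC", "signalStrength": -42},
    ],
}

body = json.dumps(payload)
# POSTed to https://www.googleapis.com/geolocation/v1/geolocate?key=API_KEY
# The response carries {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
```

Note that with two or more APs and their relative signal strengths, the service can trilaterate rather than just snap to the nearest known AP, which is why accuracy gets so good.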
> There's no "personal information" here, but honestly this amount of data shared with an arbitrary list of 3rd parties is scary.
Why do they need to know my screen brightness, memory amount, current volume and if I'm wearing headphones?
> I know the "right" answer - to help companies target their audience better!
For example, if you're promoting a mobile app that is 1 GB of size, and the user only has 500 MB of space left - don't show him the ad, right?
Author jumps to the incorrect conclusion here. The answer is fingerprinting.
I think they are pretty clear if you read the documentation. Accessing the exact value of these always requires some privacy-related privilege on iOS and Android.
Without those privileges, all you can get is an approximation.
The thing I grokked, and think is important from this article, is that private browsing doesn't end this information flow. It only marks the JSON data blob as "asked not to be identified or collated", and it's substantively an honour system. There are penalties (a lawsuit against Google for misleading people about the fact data was still collected), but the walls to breach here are low, given that non-PII can be cross-matched to confirm "who you are" in some sense.
There is no such thing as "private" browsing inside the factory installed browser, with factory installed DNS, and any kind of location data, or other cross-collating information along with your IP. The loss of privacy may be contextual and somewhat statistical, but it would be wrong to assume you weren't identified.
What it does do is let you see how bidding mechanisms in services like flights and hotels change their bid when the same location as you comes to request service without the prior search cookie state. That's useful, I guess.
"find things at a different pricepoint" cookie monster mode?
“Private browsing” is just private mostly in the sense of your local device, as in, your browsing history and cookies aren’t saved to disk. That still has some use as cookies are more direct tracking than fingerprinting (e.g., keeping you logged in, or a Google analytics script seeing that you’re a logged in Google user with a real identity provided to Google).
“Ask app not to track” turns off Apple’s own device identifier, but doesn’t stop other types of identifiers from existing, as the article described by the way it showed how ad networks make their own device identifiers collected by various apps.
Apple’s “privacy protections” are nothing more than marketing.
“Ask app not to track” is a wash and privacy theater at best. One of the reasons I still run ad blocking on _all_ websites and at the network layer. Sorry “content creators” but you need to get your revenue from elsewhere (ie, sponsored content).
Now I want a phone that scrambles all of this data on a per app (or phone) basis.
Malicious app wants this data? Sure you can have it. But you will get randomized values for every bit of information — resolution, lat/lon, brightness, battery level (user can set range of 90-100%), ….
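A sketch of what that per-app scrambling could return (real enforcement would have to live in the OS or a root-level hooking framework, not a regular app; the ranges and fields here are illustrative):

```python
import random


def scrambled_battery(low: int = 90, high: int = 100) -> int:
    """Report a random battery level inside a user-chosen range."""
    return random.randint(low, high)


def scrambled_latlon() -> tuple[float, float]:
    """Report a random but syntactically valid coordinate."""
    return (random.uniform(-90, 90), random.uniform(-180, 180))


def scrambled_profile() -> dict:
    """One randomized bundle of the fields the article shows being leaked."""
    lat, lon = scrambled_latlon()
    return {
        "battery": scrambled_battery(),
        "lat": lat,
        "lon": lon,
        "brightness": random.uniform(0.0, 1.0),
        "free_ram_mb": random.randint(256, 8192),
    }
```

The key design question is whether values should be freshly random per query (which itself is detectable as noise) or stable-but-fake per app (which poisons the tracker's database more quietly).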
This is one of the many good reasons to avoid the Google app store, and most apps in general.
Let it be known: to me, having an app to do something which used to be doable on a website is a red flag. In any case, I refuse to install anything other than what I genuinely trust.
Which does absolutely nothing if your device or the app in question is permitted or otherwise not prevented from making DNS-over-HTTPS (or, less commonly because of its discrete port, DNS-over-TLS) queries.
I'm referring to devices and apps that are 'hard-coded' to query specific DoH servers/providers, therefore bypassing and regardless of any user-configured DNS server/s. And because DoH operates on outbound TCP/443, the lookups are indistinguishable from any other 'web' traffic.
Even some of the most popular desktop web browsers are configured to utilize DoH by default nowadays.
The most that a network administrator can do to prevent this is configure firewall IP blocklists of known DoH servers and NAT all outbound 53 (and 853) traffic to a desired resolver (like a local Pi-hole instance, for example).
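The admin-side decision described above can be sketched as a simple per-packet rule (the addresses here are illustrative; real DoH blocklists contain thousands of entries and need constant updating, and the local resolver address is hypothetical):

```python
# Illustrative subset of well-known public DoH resolver IPs.
KNOWN_DOH_SERVERS = {"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4", "9.9.9.9"}

LOCAL_RESOLVER = "192.168.1.2"  # hypothetical Pi-hole on the LAN


def should_block(dst_ip: str, dst_port: int) -> bool:
    """Firewall-style verdict for one outbound connection.

    Drop TCP/443 to known DoH resolvers (otherwise indistinguishable
    from normal HTTPS), and drop plain DNS (53) or DoT (853) that
    isn't headed to the local resolver -- in practice the latter is
    usually NATed to the resolver rather than dropped.
    """
    if dst_port == 443 and dst_ip in KNOWN_DOH_SERVERS:
        return True
    if dst_port in (53, 853) and dst_ip != LOCAL_RESOLVER:
        return True
    return False
```

The inherent weakness is visible right in the code: any DoH server not on the list sails through on 443, which is exactly why hard-coded in-app DoH is so effective at evading local DNS filtering.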
ignoramous probably meant that in order to block access to all IP addresses that it has not recently resolved, the firewall must also host or communicate closely with a resolver. This is a tautology, not a spec.
Facebook hard-codes IP addresses for when their domains are blocked. I found this out while using NextDNS alongside that logging functionality that iPhones have. It’s insane the lengths that they go to.
It's not insane at all. It is the entirety of their business model, so it makes sense that they will do everything possible to keep that sweet surveillance cash flowing.
Very interesting and disturbing research, definitely a wake-up call for me. Does anyone know of / can anyone recommend software that can block these sorts of requests from going through? I know of Pi-hole, which blocks ads, but does it also filter out these sorts of things?
You need to have a wifi only android phone, rooted, no google apps, and uninstall anything that talks to the internet. That includes analyzing network traffic, open ports, and so on.
I did this with a Kali Nethunter distro back in the day for "reasons", privacy not being one of them. This makes the phone very hard to use for regular things.
I would like to bring attention to this project. They aim to function in an application firewall like manner and manage to block connections by category, classified by domain name. Android only though, and the 'full' version is available only on f-droid due to some anti-adblock-like Play store policy. https://trackercontrol.org/
Long ago there was XPrivacy project for Android that allowed to granularly set permissions for each app & system service and ensure they won't get the real private data. It's no longer alive these days, I guess.
Can someone share their experience with the alternatives for the modern latest Android?
- It was a clean state of a somewhat old phone (iPhone 11, factory defaults + new apple id)
- A single (old) app was installed (Stack by KetchApp, 10-12 years old)
- Was sending out an update a second pretty much instantly (5 kB - ~300 KB every second)
- Within a minute: IP, Lat / Lon, country, phone model, carrier / network operator, vendor, OS version, connection type (wifi), headphone status (?), volume setting (?), screen brightness setting (?), battery status (?), CPU count, system RAM, free RAM allocation, free hard drive capacity, system boot time (?)
Might as well just screen-grab the Task Manager equivalent and hand it to them. They have better, quicker data about my current RAM allocation and free hard drive space than I do. It hands them when the system booted, for an ad? The headphone, volume, brightness, and battery items were just a "what?" kind of headshake about invasiveness. Somebody'd hand-wave that they need it (we want it, we want it). They obviously don't.
Edit: It's almost Remote Desktop, on an iPhone. Realtime (~1 Hz) RAM / ROM allocation. Not sure how many Apple users even know how to check their realtime RAM / ROM allocation. The free hard drive space especially is just asking for botnet downloads.
Edit: Right, and ... disabling tracking doesn't mean anything because numerous updates blatantly ignore the setting ("uc": "1", // User consent for tracking = True;) and it's just a flag while they still send your vendor specific customer identifier anyways.
Really interesting article, and great investigation; just disturbing how much leaks from an effectively clean phone.
I dislike this as a developer: knowing something like the headphone status could be useful for the functionality of the app, but some other unscrupulous person is just exfiltrating it! This is part of the reason I agree with Apple’s stand against apps with sub-apps / “desktop-like” environments, due to permission settings that aren’t fine-grained enough. There is a significant privacy downside to “superapps”, and now Elon is pushing for the X everything app.
Yeah and if you ask for permission for every little thing then users are going to get bombarded even when it's needed for legit purposes. It's a difficult tradeoff to make, even if you want to do the right thing (and I'm not really sure that Apple and especially Google really do)
> The headphone, volume, brightness, and battery was just "what" kind of headshake about invasiveness. Somebody'd hand wave they need it (we want it, we want it). They obviously don't.
Well, why the ad industry wants it is clear: fingerprinting and segmentation. Someone consistently low on battery? Push them ads for powerbanks.
This is actually part of what I find so wrong about this entire idea.
With all this fine granularity, it seems like ads would be incredibly relevant. Specifically about what you need with something that might actually result in a click-through to purchase a product. Especially if they get real-time updates on my hard-drive status and battery state.
I don't remember the last time I got an ad that was actually relevant. Pretty sure the last ad that was even clicked on was one of those little windmills that swirls crazily, cause it seemed like it might make a cool lawn ornament. Turned out it was tiny. Years of online purchases, and they don't even suggest stuff I want.
It is an excuse. Google doesn't choose ads for you; they shoot out this bundle of info about you and just display the highest bidder's ad. That means whatever ads you see are basically dominated by whoever overpaid the worst.
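A toy sketch of why "highest bidder" drowns out relevance: the classic second-price auction model many exchanges have used selects purely on bid amount, and relevance to the user never enters the decision except insofar as it moved someone's bid.

```python
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Pick the highest bidder; charge the second-highest price.

    This mirrors the second-price model: the winner is whoever bid
    most for this bundle of user data, not whoever is most relevant.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price
```

So a windmill-lawn-ornament vendor who mis-targets but overbids beats the product you actually want every time.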
Related: has anyone else noticed the practice of using cheap commodity 'living room' appliances to get access to your data? A while ago I bought a ceiling light for my daughters' bedroom, from a brand unknown to me. It had a built-in speaker controlled via Bluetooth, and dozens of light patterns and colors it could emit via a ring of small LEDs. My daughter was ecstatic watching the YouTube promo vid. It turned out that to use any of these features, you needed to install their app. Fine, okay, installing. Then the app demanded access to contacts and camera or it refused to connect to the ceiling light. Fine, okay, uninstalling the app and returning that crap.
The Reddit app has no permissions on my phone, but the feed suggests communities based on my location nevertheless. I've been traveling for the last two months, and every city I've visited has been suggested.
Just check https://ipinfo.io/ to see how close your IP points to your location. For most targeted content the city is good enough. And honestly if I'm one of 1 million people it's ok.
I wonder: to which extent are purchased/brokered app real-time location data feeds used by various intelligence services to target missile strikes in war zones? In e.g. Ukraine/Russia.
The leaked tools that NSA used, like XKEYSCORE, used publicly available data collection methods, including purchasing advertiser lists, to cross correlate all the data and form a profile. So anybody could do this stuff.
Anyone understand why an apparently accurate latitude/longitude showed up in one of those traces despite location services not being enabled for the app in question?
Phones send out probe requests to get a list of open wifis. If you have a static access point, with a known geo location, software can be running on that point to remember a mac address of the phone from a probe and store it. Thus enabling real time tracking.
I'm like 60% sure this is how they figured out who the bomber was in Austin, TX.
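The probe-request tracking described above can be sketched like so (the AP location, MAC, and log format are all made up; note that modern phones randomize the MAC in probe requests, which blunts this technique considerably):

```python
from datetime import datetime, timezone

# Hypothetical sighting log: a fixed access point with a known
# location records the source MAC of every probe request it overhears.
AP_LOCATION = (30.2672, -97.7431)  # example coordinates

sightings: list[dict] = []


def on_probe_request(source_mac: str) -> None:
    """Record that a device was within radio range of this AP."""
    sightings.append({
        "mac": source_mac,
        "location": AP_LOCATION,
        "seen_at": datetime.now(timezone.utc).isoformat(),
    })
```

Stitch the logs of many such APs together (as retail analytics firms have done) and a single device's MAC becomes a movement trail.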
This is also why some Chinese apps put everything inside a single app and request every permission there is, then track you through Wifi SSIDs seen by your device.
Apps that have to link hardware via Wifi sometimes do, they take complete control over wifi in order to create a wireless access point and make the device connect to it during setup. I think Nikon camera remote control does this, also Meta Horizon, with Meta Quest VR headset, IIRC.
There's also Wifi ranging feature, but it shouldn't need to expose SSIDs to the app, I don't know the API, it should be limited to giving a precise location:
That sounds like something that's also not that risky. Short lived, temporary access point with randomized BSSID/mac address should not be useful for long term tracking if done well.
It is not, if the developer only does what is expected. I believe when you have to perform this, the Android authorization asked to the user is complete control over the network adapter settings.
Thanks for asking. Came here to ask since I was curious about this too. I don't find any of the replies here convincing:
- List of open wifis: AFAIK, and in my experience, apps need special permissions to do anything at the wifi level. And yes, iOS location services use wifi info but it's disabled, that's the point;
- IP back to geo: then why not send the IP itself directly?
- Mozilla location services: same as above, why not send the info you send to Mozilla directly to the data harvester which can call Mozilla itself?
Basically, all these companies (ad networks, data brokers, big tech), in the absence of basic privacy laws (not to be confused with the 4th Amendment, which binds federal and state government only and does not restrain companies), act in wilful conspiracy with US government regulators, washing each other's hands like a monopoly. This data gets enriched and collated and sits perpetually on a permanent record.
One of the big WTF moments I've had in the "web vs. device" Ad-related privacy journey was realizing that even if you create an "anonymous" account on an app in your phone, your device ID is shared and can be recognized by Ad vendors.
Example:
- You're using a known account on a Mac to search for a shelf to buy
- You're using a anonymous account to browse Reddit on an iPhone
And the shelf ad pops up in the Reddit feed. Yep, as long as you logged in with a known account on both devices, they're now linked by device ID. And all you do on those devices (regardless of the account) can be traced back to you.
I read about this in "Chaos Monkeys" but it never really hit me until this experience.
This is typically just done through the IP address. That's why I get ads for my girlfriend's preferred eye cream brand despite the fact that we browse using different devices and the advertiser shouldn't have access to that data point.
This is why I am so against letting Web Browsers have access to so much device information. Every time, a web dev says they should be able to push notifications, or get battery information, or whatever, this is why they should be ignored.
Use NextDNS (https://nextdns.io) on your mobile phone as a Private DNS provider, and switch as many apps as allow it to be web apps, i.e. https://m.uber.com works just fine, and use Firefox on mobile and enable `about:config` via `chrome://geckoview/content/config.xhtml` , from there switch `beacon.enabled` to false.
Far less requires an actual app than most people imagine. It's the apps that leak so much.
Mobile phones suck as computers. NetGuard PCAP files are a must-read if you use a mobile phone as a computer.
One setup that works reasonably well is
NetGuard --> Nebulo --> DNSdist on own router
On phone,
(a) set DNS in Wifi to localhost, i.e., disable service provider DNS
(b) set VPN to Block all connections without VPN
(c) set Netguard to forward port 53 to Nebulo
(d) set Nebulo to run in non-VPN mode
(e) set DNS configuration in Nebulo to DNSdist on router
On router, point DNSdist at nsd or tinydns serving custom root zone containing all needed DNS data. Apps like NetGuard, Nebulo, PCAPdroid, etc. allow one to easily export the DNS data needed for the zone file.
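For illustration, a minimal custom root zone of the kind described might look like this (all names and addresses here are hypothetical; the real file would be generated from the DNS data exported by NetGuard/Nebulo/PCAPdroid):

```
; Hypothetical custom root zone served by nsd: only the names your
; apps actually need will resolve; everything else simply doesn't exist.
.                      IN SOA  localhost. hostmaster.localhost. (
                               1 3600 900 604800 86400 )
.                      IN NS   localhost.
localhost.             IN A    127.0.0.1
api.example-app.com.   IN A    93.184.216.34
cdn.example-app.com.   IN A    93.184.216.35
```

The appeal of this allowlist approach over a blocklist is that newly invented tracking domains fail by default instead of succeeding by default.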
There is at least one leak in this setup. Nebulo's "Internal DNS server" can only be set to Cloudflare, Google or Quad9. In theory this should only be used to resolve the address of the DoH provider and nothing else. But not allowing the user to choose their own DNS data source and forcing the user to keep pinging (querying) Cloudflare, Google or Quad9 is poor design. Those addresses are unlikely to change anyway.
Using a browser in place of other apps seems like a good strategy, but the browser "app" is far, far more complicated than many open source "apps" and much more difficult to control.
Firefox is not only filled with telemetry, almost no one compiles it themselves, it has more settings than any normal user can keep track of and it is constantly changing. Layer upon layer of unneeded complexity.
You connect to a special WiFi SSID and it compares your traffic to known tracking/ad domains (Pi-hole lists, mostly); the "food" is the packets being sent to those servers.
It's crude and has a fairly high false-positive rate, but it does have a chilling effect for me when exploring what data is going where.
NetGuard with DROP OUTBOUND policy once again proves helpful. The only app that shows ads that I have on my phone is a PDF scanner, and I don't allow it internet access.
Even just looking at the picture near the top (which is also repeated near the bottom): if you do not allow the app to track you, that only disables one of the pieces of information, not all of them. This is explained later in the document; it is not explained very well to end users, and I agree that it could be explained better. Perhaps "Allow app to track your activity..." could have an option to display a more elaborate description, explaining that it only affects the advertising tracking ID (and what that means) and has no effect on other methods of tracking.
And, looking further in the document, we can see there is more.
Some of them, such as IP address and timestamp it is reasonable to use for programs that access the internet (although it should be possible for the user to set up a proxy and/or adjust the clock in order to change these things, the server would still use its own timestamp anyways).
Available memory also makes sense to be readable (although ideally, the user should be allowed to limit the amount of memory available to specific programs, in order that there is enough memory remaining for other programs; the reported total memory should then include only the memory available to this program and not to all programs), and the same should be true of the number of CPU cores and the amount of available disk space.
Others probably should not normally be known by most programs (though some are useful for some kinds of programs), and even when they are, the operating system ought to allow users to reprogram what information is available and what filters, logging, etc. will be used.
The presence of wired headphones probably should not be accessible by software, and the redirection should be handled by hardware. Perhaps an exception makes sense if the settings need to be different, e.g. mono vs stereo, although even then, programs should only see those settings (and only if they have audio output), and the user should be allowed to override them due to preferences (e.g. some users might want mono even if connected to external speakers or headphones; on my computer sometimes only one speaker works and sometimes both, so it is useful to me to be able to switch to mono).
Furthermore, there is the consideration, if the advertisers/spies are stealing your power and network bandwidth and quota in order to do these things; then, that is theft.
Imo, the real takeaway here is that ad-tech isn't just tracking people: it's becoming a decentralised surveillance network where no single entity takes any responsibility. Even with "Ask App Not to Track", your IP, geolocation, and device fingerprint still end up being leaked! It shows that tracking isn't a feature anymore; it's the business model.
Android for sure, since version 8 I'm certain but probably even 5 or 4.x (so 10+ years ago)
Always annoys me when I want to use a WiFi scanner to determine the range of an access point in different locations for example and it needs me to turn on location access first before it can get WiFi data. The open source app doesn't have an Internet connection so there's no way for it to send back data to the mothership even if it had an SSID database baked into the apk. For me, and traditionally, the location switch is to turn on or off energy-hungry GPS hardware, not gatekeep when I trust apps to collect my location. I can set those to "only while in use", deny their Internet access, or just not install them if I don't trust them with the location permission
But all it takes is one app with that permission to tie you to all the others. And there are always apps that need your location at some point to provide useful data. At this point I’m not sure there’s any single app I trust.
This is a wonderful write up. The part that isn't clear to me is how they're getting the geolocation data if location services are turned off. Are they just going off geo-ip lookups? If you grant access to Bluetooth or finding devices on your local network, they can get more information to track your location. Absent that, how would they get better than geo-ip?
I'm surprised people think they have any kind of privacy - especially when using free services. They are not free. You pay with whatever data can be extracted from your devices and behavior.
Also, there's a looong list of companies who know the location of your mobile device, starting from the cell phone tower operator to Apple/Google and many in between.
True. But even paid apps have access to these data and can collect it without our knowledge. A genius called Stallman proposed solution decades ago. Free software aka. open source software. But outside of tech community, open source is not a known term. Maybe we should market it wherever possible, if we want true privacy and freedom.
The people who reported on the Gravy Analytics leak are 404 Media. They're an independent technical media group that has been reporting stories I haven't seen elsewhere. They're pretty awesome. I'm personally a paid subscriber. (I'm not affiliated with them, nor am I receiving compensation to say this.)
I clicked the link at the beginning of your article, that led to the Google sheet with the list of apps. That list had 12,373 lines, not “over 2,000”, fyi. And while most of the apps looked like small time games that I have never downloaded and would probably not download, I saw included there “Microsoft Office 365”. Interesting.
Here's my messy list of all the apps I recognized in that Google Sheet:
meetup, tinder, crunchyroll zynga/wordswithfriends, Microsoft Outlook com.microsoft.office.outlook, Weather channel, Microsoft 365 (Office) com.microsoft.office.officehubrow Opera Mini browser beta com.opera.mini.native.beta, BuzzFeed - Quizzes & News com.buzzfeed.android Tetris® Block Puzzle com.playstudios.tetris4 Sonic the Hedgehog™ Classic com.sega.sonic1px Grindr Flipboard: The Social Magazine Flightradar24 | Flight Tracker Bejeweled Blitz com.ea.BejeweledBlitz_row FarmVille 3 – Farm Animals Plants vs Zombies™ 2 com.ea.game.pvz2_row SimCity BuildIt 913292932 Tetris® 1491074310 Opera Mini: Fast Web Browser com.opera.mini.native TuneIn Radio: Music & Sports 418987775 Yahoo Mail – Organized Email com.yahoo.mobile.client.android.mail Angry Birds 2 com.rovio.baba Skip-Bo 1538912205 CamScanner - PDF Scanner App com.intsig.camscanner Rakuten Viber Messenger com.viber.voip Candy Crush Saga 553834731
Agreed; however, there are duplicates in that list, plus the same app listed for both iOS and Android.
If I'm not mistaken, I did a simple unique count on it (or took the 2,000 number from the 404 Media post).
Whilst I trust that the author did in fact look at the data of each request eventually, the screenshot they provided of Charles could not have been of the exact requests they intercepted given Charles is indicating that those are not yet SSL proxied (except for the 2 GET requests).
EDIT: please ignore, author did it differently to what I expected.
This technique doesn't work anymore on android because you can no longer add certificates to the system store and apps are free to choose to accept the user store CAs or not. That was changed in Android 7. For "security" they say. Security of Googles business model I'm sure.
I read an interesting newspaper article about how the police confiscated a hired gun's iPhone and found that he ran a search on the city his victim lived in. It is these little digital breadcrumbs that makes life easy for the prosecution.
Seriously if you are going to do illegal things never ever buy a smartphone.
I don't think there is a hope when it comes to our privacy and ads and our data being sold - none. Even if I'm somewhat off the grid or low in activities, the indirect way of targeting me still exists, by my family members, friends, people associated with me. I surrender.
There is hope. The upcoming US State privacy laws are resulting in IP addresses having the last octet blanked out, and IDFA's zeroed out, at least for SSPs and DSPs. Companies such as Apple and Google will still have this info since they control the OS.
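The IP coarsening mentioned above is a one-liner in practice; a sketch of what an SSP/DSP pipeline would apply before storage (the function name is mine, not from any particular law or vendor):

```python
def blank_last_octet(ip: str) -> str:
    """Coarsen an IPv4 address by zeroing its last octet.

    Turns a household-level identifier into (roughly) a
    neighborhood-level one, e.g. 203.0.113.77 -> 203.0.113.0.
    """
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    octets[3] = "0"
    return ".".join(octets)
```

A /24 still narrows things down a lot when combined with other signals, which is why this is a mitigation rather than anonymization.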
Would be interesting to know how much data leaks on a new iPhone with some of the iOS privacy settings enabled and a handful of popular apps installed (WhatsApp, Instagram, Google Maps, Uber, etc).
And then if you use a commercial VPN with DNS ad-blocking enabled, how much more does this help?
It seems still possible to avoid being tracked (protections, filters, degoogle, etc), and the business is not very interested in the minority willing to trade off functionality, practicality and ease for privacy.
For how long?
I don't understand how this isn't considered an incredible national security issue, e.g., what stops an actor buying data for high value targets known to use certain apps, like the President or Prime Minister of a country?
> This is the worst thing about these data trades that happen constantly around the world - each small part of it is (or seems) legit. It's the bigger picture that makes them look ugly.
I paid for PCAPdroid, a network monitoring app that uses Android's VPN API to monitor every packet sent and record which app made the request, to whom, when, and so on.
Among its paid features, you can block internet access per app, or block by country, IP, and host.
After browsing my network logs, it shocked me to see some apps I had absolutely no idea were spying so much.
Xiaomi Home? Yeah, I knew a Xiaomi app would be spyware. But Spotify, for instance - how could I guess it sends data to remote servers, including Facebook's, every few hours?
Until I find a replacement for Spotify (most music streaming apps do spy on their users, and I don't just mean learning what music you like), I can still block all the graph.facebook.com, tracking.eu.miui.com, ads.g.doubleclick.net traffic and so on.
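Host blocking of that kind is usually plain suffix matching against a blocklist. Roughly like this (my sketch of the general technique, not PCAPdroid's actual implementation):

```python
# Example blocklist using the domains mentioned above.
BLOCKLIST = {"graph.facebook.com", "tracking.eu.miui.com", "ads.g.doubleclick.net"}

def is_blocked(host: str) -> bool:
    """True if the host, or any parent domain of it, is on the blocklist.
    Checking every suffix catches subdomains like api.graph.facebook.com."""
    parts = host.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("graph.facebook.com"))      # True
print(is_blocked("api.graph.facebook.com"))  # True (parent domain matches)
print(is_blocked("example.com"))             # False
```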
It's open source, but the firewall is a paid feature; I highly recommend it if you're on Android.
There is even the possibility to decrypt packets and analyze them, although that requires root. I did it on another phone and yeah, it's similar to what the author found: every single bit of data - IP address, how long the phone has been on, the wifi connections, when I unlocked the phone, and so on.
Each piece of data taken individually is not important to me, but this stream of little data constantly going God knows where is creepy as fuck.
If you have the equipment (e.g. a spare Linux computer and WiFi router) and know-how, you can set up something like mitmproxy (which looks to have a very similar feature set to that Android app, though it likely takes more effort to set up) on your home network. That's what I did some weeks ago, and then did basically the same exercise you did (just for my whole network instead of one phone), looking at what's going on. And yeah... it's not good.
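A minimal mitmproxy addon for this "who talks to whom" exercise could look roughly like the following. This is a sketch of my own: the counting and the shutdown summary are my choices, and real use also requires installing the mitmproxy CA certificate on each device:

```python
# host_log.py -- run with: mitmdump -s host_log.py
# Tallies which hosts each client device on the network talks to.
from collections import Counter

try:
    from mitmproxy import http  # available when run under mitmdump
except ImportError:
    http = None  # lets the sketch be imported outside mitmproxy

class HostLogger:
    def __init__(self):
        # (client_ip, destination_host) -> number of requests
        self.hosts = Counter()

    def request(self, flow: "http.HTTPFlow") -> None:
        client_ip = flow.client_conn.peername[0]
        self.hosts[(client_ip, flow.request.pretty_host)] += 1

    def done(self):
        # On shutdown, print the 20 most-contacted (client, host) pairs.
        for (client, host), n in self.hosts.most_common(20):
            print(f"{client} -> {host}: {n} requests")

addons = [HostLogger()]
```

Even without TLS interception, the host names alone already reveal a lot about which trackers each device phones home to.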
Even if I trust some companies to be trustworthy, I can't possibly vet a gazillion entities getting telemetry requests, and not all of them can have their shit together, security, privacy or ethics-wise.
It made me ditch some Microsoft software, but overall escaping the spying feels like a lost battle, unless you go do spartan Richard Stallman-like computing (IIRC he had a pretty hardcore stance on the software he'd use).
Anyway, like most things it's a journey, not an on/off switch. First you become aware, then you make changes and the situation gets better; it doesn't have to be perfect to be better.
On my Android phone, I had to make a clear cut on which apps I could keep after seeing the logs. The apps from Google, Microsoft, Amazon - they are all gone. Even Play Services and the Play Store are replaced with Aurora.
That cuts at least 2/3 of the network requests.
Then you have the case of individual apps that use the Facebook SDK or another advertiser's. There are often alternatives in the open source community, and when there aren't, there are always less privacy-invasive alternatives on the store.
For instance, my default Samsung weather app was sending lots and lots of data. The alternatives on F-Droid were not to my taste.
I eventually found out about Weawow. It's not open source, but it doesn't require any weird permissions, has no ads, isn't constantly sending data in the background, and my logs say it only connects to weather.weawow.com.
I mean, it's fine.
After spending weeks with the firewall, I was able to identify the spying apps and replace most of them. My network log is now pretty empty when I'm not using the phone.
Posting here from an anonymized account about Meta. No one probably recalls that Meta stopped most of their background location services (remember Nearby Friends?) in the main application around 2021-2022 [1]. It was just not worth a repeat NYT story, given how much money they were spending on infra to collect locations.
But this is basically after they figured out how to do "good enough" location targeting using IP addresses and a bunch of the info this guy talked about. You don't actually need a lat/long; a 1-mile-radius/city-level area is good enough to run ads, and they have ALL of that.
This was why Meta's revenue dropped so much after Apple's move: they could not fall back to collecting precise location. This is the last game in town. If you shut this down, Meta's precise targeting will suffer gravely and ads will become flaky.
One last thing. You may ask, who are the businesses that need precise lat/longs? They are like this one [2]. These businesses are like whack-a-mole: they saturate the app market, steal data, get money, and shut down when someone yells, then come back a few months later rebranded as another app. They exist not just to collect data but to act as an arbiter of who gets eyeballs on IRL activities, to influence behavior at the top of the funnel (TOFU). In the Worst. Possible. Way.
I really find this fascinating - great article. As an experiment if you like, can anyone identify anything about me from this post? And if not, why not? Would love to know.
One thing people here are discussing a lot is privacy around contacts and sharing. Limiting access to contacts, completely or partially, is the wrong way to design such systems. There are two problems with this approach:
1. Having permission to read contacts is NOT a capability. Being able to run a function over them that by design cannot leak PII is infinitely more valuable, and is a capability.
2. Asking users to grant permission is broken by design: you are giving the user a very bad multiple choice: `(a) Creepy (b) Less creepy (c) Don't use the app`
Instead, if we granted only operation rights and hid the actual information, it would be so much better. We need a separation of data from function, to empower apps to give better choices to users.
When someone wants to install an app rather than go to a website that could do the same thing, they are advertising that they are up to something nefarious.
It looks like these all come from the Reddit embed in the middle of the article. Default uBlock Origin settings block 13 URLs (and more over time, due to Reddit's frequent pings), but disabling 3rd-party frames brings it down to 1 URL (since the original embed was blocked).
How is this legal? This is Apple's problem and they need to be sued into oblivion. If I have location tracking turned off on my phone, why is my phone still sending location data?
With GPS off, location can be triangulated from cell tower usage to within about 3/4 of a square mile (with smaller uncertainty in urban areas, where cell towers are closer together). I'd heard before that some data brokers do this, but in this article the writer mentions reverse DNS lookups on IP addresses, which they note are less precise (ZIP-code level).
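The reverse DNS part is easy to try yourself: ISP-assigned PTR records often embed a coarse region code in the hostname, which is why the signal is only ZIP-code level. A small sketch (needs network access to return real results):

```python
import socket
from typing import Optional

def reverse_dns(ip: str) -> Optional[str]:
    """Return the PTR hostname for an IP, or None if it has none.
    ISP-assigned names often encode a rough location in the hostname."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except (socket.herror, socket.gaierror):
        return None

# e.g. reverse_dns("8.8.8.8") usually returns "dns.google";
# residential IPs tend to return names like "c-x-y.hsd1.<state>.isp.net".
```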
Only if you don't turn WiFi off. To my understanding, even the "soft off" option in iOS stops the phone from beaconing and just listens, in order to collect data for building augmented location services. I don't know what Androids do. These days both of them also offer randomized MAC addresses to curtail such tracking.
Total BS. Do not give location permissions to untrusted apps. If the app insists on it, use the mock GPS feature on Android, which will spoof your location. Can we all please stop exaggerating the sloppiness of normies, with their pretentious acts of being shocked after not being cautious about their privacy? Privacy is not the default; you have to put some effort into it.
Starting earlier this year, I've had mitmproxy set up on my entire home network, and often have it on for all traffic. I put up an old NAS and I'm abusing it as a mitmproxy box for my home.
There would be so much to write about what I've seen; I've thought of making a blog post. I use mitmproxy to check on sketchy apps and to learn in general.
The information sent out is fascinating. I knew extensive telemetry is pretty much the norm these days, but it's another thing to see it with your own eyes.
My exercise has also made the typical "yes, we collect data/telemetry, but it's anonymized/secured/etc. and deleted after X days, so no worries" sound very hollow; even if a company follows its own rules in good faith, how am I supposed to trust the other 1000 companies that also collect data? If someone hacked my mitmproxy itself and downloaded all the payloads it collected, they would probably know me better than I do.
Random examples off the top of my head from mitmproxy (when I say "chatty" I mean they talk a lot to a server somewhere):
I had the GitHub Copilot neovim plugin. I didn't realize how chatty it was until I did this (not that I was surprised: obviously completions are sent out to a server, but it also has the usual telemetry + A/B experiment stuff). I had wanted to ditch that service for a long time, so I finally did after seeing this, and switched to a local setup, since the open stuff has mostly caught up. Also, it's not actually open source, I think? I had no idea (I thought it would just be a simple wrapper calling into some APIs, but: no PRs, no issues, and the code has blobs of .wasm and .node: https://github.com/github/copilot.vim)
Firefox telemetry, if it's turned on, is a bit concerningly detailed to me. I think I might be completely identifiable from some of the payloads if someone decided to really take a go at analyzing them. Also, I find it funny that one of the JSON fields says telemetry is off. Telemetry is actually on in the menu (I leave it on on purpose to see stuff like this); the JSON just says off for some reason. I'm not sure that telemetry was ever meant to be non-identifiable in the first place, though.
Unity-made software (also mentioned in the article) sends out a Unity payload at start-up that looks similar to the one in the article, although I didn't take a deeper look myself.
The author mentioned the battery: I've also noticed that a lot of mobile apps are interested in the battery level. I hadn't connected the pieces as to why, but the article mentions the Uber 4%-battery surcharge, and now it makes a bit more sense.
One app that has at least once been on HN with a high score starts sending out analytics before you've consented to any terms and conditions. One of the fields is your computer's hostname (one of my computers had my real name in its hostname... it does not anymore). Usually web pages say "by downloading you accept the terms and conditions", but this one only presented that text after you launch the app, before you get to the main portion. I never clicked it (still haven't), but I left the app to mellow in the background so I could snoop on its behavior.
Video games: the ones I've seen mostly don't do anything too interesting, but I haven't tried any crappy mobile games, for example. One Unity game on the laptop, Bloons TD 6, sends out analytics at every menu click, and a finished game sends a summary; it's the "chattiest" game so far, although it seems limited to what the game actually needs to do (it has an online aspect). The payloads had more detailed info on my game stats than the game shows, though; they should add those to the game UI ;)
Apple updates don't work through mitmproxy (they won't trust the certificates). Neither do many mobile apps (none of the banking ones did; now I know what a MITM attack would look like to my bank's app).
Some requests have a boatload of HTTP headers. I've thought of writing a mitmproxy module to make a top-10 list; I think some Google services might be at the top, from what I've seen. (I think Google has also developed new HTTP tech - is it so that they can more efficiently set even more cookies? ;)
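That top-10-by-header-count module is only a few lines as a mitmproxy addon. A sketch of the idea (my own, not an existing tool):

```python
# header_count.py -- run with: mitmdump -s header_count.py
# Collects (header count, URL) per request and prints the top 10 on exit.
import heapq

try:
    from mitmproxy import http  # available when run under mitmdump
except ImportError:
    http = None  # lets the sketch be imported outside mitmproxy

class HeaderTop10:
    def __init__(self):
        self.seen = []  # list of (header_count, url) tuples

    def request(self, flow: "http.HTTPFlow") -> None:
        n = len(flow.request.headers)
        self.seen.append((n, flow.request.pretty_url))

    def done(self):
        # Top 10 requests by number of headers, printed at shutdown.
        for n, url in heapq.nlargest(10, self.seen):
            print(f"{n:3d} headers  {url}")

addons = [HeaderTop10()]
```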
I think anything Microsoft-tied may be the chattiest software overall on my laptop, but I haven't done stats or anything like that.
Aside from mitmproxy, I'm learning security/cryptography (I've managed to find real-world vulnerabilities, although frankly very boring ones so far...), Ghidra, some low-level seccomp() stuff, qemu user emulation - things of that nature, to get some skills in this space. Still to learn: the legal side of things (ToSes like to say "no reverse engineering"), and how not to get into trouble if you reverse engineer something someone didn't like. I've not dared to report some things, and won't poke some APIs or even mention them, because I don't yet know enough to cover my ass.
Modern computing privacy and security is a mess.
I've worked a good part of my career at a DSP company (it would be in the box labeled "Criteo" in the author's article), so I have some idea what companies in that space have as data.
I realize this feels like a pipe dream, like a million miles away from our branch of reality in 2025, but I really think the entire online surveillance advertising industry needs to be burnt to the ground and (maybe, partially) rebuilt. Many of the problems we see nowadays are rooted in the fact that data is being collected and used to (supposedly) profitable ends.
Sure, there may be the occasional honest actor in the industry, but they're so marginal and outcompeted by dishonest and shady ones that it really doesn't matter. IMHO the right move is to simply ban any collection that's not strictly necessary. Kind of like GDPR but without the "if the user agrees" exceptions.
Reminds me of the regulation banning engineered stone in Australia: not because it's impossible to use safely, but because the regulator concluded that the entire supply chain was unwilling to, and disincentivized from, using the material safely, so the best move at that point was to ban it outright.
I think a big part of the ability of these shady companies to pull the brightest minds away from more clearly beneficial fields is that the flavors of ideology necessary to motivate people to take less lucrative work have been stripped from "business". There are far fewer appeals to art, history, cultural stewardship, or empire-building in things like transportation, medicine, and construction than there were in the past. Any flavor of "for the glory of God/the nation/the People/the Art" etc. has been pretty effectively stripped out of American business, and I think that's the only kind of thing that would motivate someone who could make $250k in adware to make $100k in something else.
People are now very well-trained to look out for their own bottom line, and take jobs accordingly.
Doing things because they increase some non-monetary value has fallen out of fashion for sure. A colleague of mine recently shared, in a group social setting, a sense of disappointment that his daughter was studying to be a doctor. I was, as far as I could tell, the only one to note that there is practical utility to having doctors.
But "be part of our mission" has been shown to be hollow over and over too. First and foremost, you as an employee are making the investors and CEO rich. The mission is usually exploiting the employee, even when it's not exploiting the world. Employees have recognized the real social ethic (money over everything) and are just playing the same game. Which is sad.
Ideally the people who see these choices would make alternative choices that will leave their grandchildren better off in the world. It has taken only a generation for the "greed is good" mentality to drop us into this fetid soup.
I think the phrase you called out--"be a part of our mission"--that most corporations (and, mimicking them, government agencies) regurgitate is itself the approach to socialization that causes people to feel less inclined to work for any non-profit reason. "Part of OUR mission" redefines the company as the entity to be loyal to, rather than casting the company as part of society itself. You can't replace constructs that tend to inspire people to heroism and selflessness with a corporate avatar and expect the fabric of society to remain similarly motivated. It does make a set few people a hell of a lot of money in the short term, though.
Ugh. Eye roll at the whole "make the world better" thing. So few SV products remotely make the world better; the purpose of the vast majority is to make money at all costs. I also disagree with what most people claim is making the world better: all in all, I think social media is a net negative for the world, for any good that might be found in it. Every SV thing after that is just chasing the rocketship-to-the-moon dream.
I have the same question. It did not seem easy for me to find a job where we are at least not writing malware (according to my judgement). But that's far from making the world better :|
It's funny how in recent headlines the NSA and FBI have been telling people to use secure messaging apps, yet the FBI is infamous for claiming the need for back doors into these encrypted apps. What are we to make of the opposing views? Are they really being benevolent toward citizens, or do they no longer need the back door, or do they have a back door already, whether intentional or not?
They're not homogeneous organizations. I'm not sure about the FBI, but AFAIK the NSA has always been in the awkward spot of being split between defensive and offensive missions. It wouldn't be particularly surprising to have one arm going "you should all use encrypted messaging, it's the most secure" while the other arm is frantically trying to break or backdoor said encrypted messaging.
The world is changing fast, and reasons for actions may be more complex and interesting than you assume.

Were they ever _not_ benevolent to US citizens as a whole, even if misguided? There may be last-ditch attempts to extend benevolence to US citizens as a takeover looms. If leaks from the Office of Personnel Management are to be believed, then right now the US government is in the process of a soft coup, being dismantled along lines of political loyalty. I expect those working in intelligence and law enforcement who support democracy see the writing on the wall and will act sooner or later.

Reliable end-to-end encryption is an important tool for citizens of a nation that may need to organise in a hurry. We might see new Edward Snowden-type revelations of programmes, naming key people or giving clear advice not to trust certain US-based entities or services. Civil servants may act professionally, as non-politically as they can, but in the end, if only to protect their jobs, they're going to come down on the side of democracy.
It seems they have changed their minds about surveillance back doors after some devastating attacks in which Chinese state actors (among others) used the back doors created for warrant compliance to get in.
But that was the pre-Trump NSA and FBI. Now the Chinese and the Russians just need to get some DOGE volunteer to give them whatever they want, since Elon now has root on all the government payment systems and is too undisciplined to do things in a secure way.
Maybe Steve Jobs was right all along. We don't need a smartphone with an app store: either first-party apps, with everything else on the browser, or apps that use the browser engine.
A while ago a coworker told me "why would you care about your privacy? all my data is already out there anyway, and what can even be done with it anyway?"
What would be the ideal response to such an absurd comment? At the time I found it hard to answer, because she surprised me with that opinion.
Edit to note: the explanation should be compatible with a professional context. I don't want to scare my coworkers or appear crazy/paranoid.
Losing privacy makes you more vulnerable to economic exploitation (price discrimination, salary negotiation, insurance premiums, etc). Therefore protecting your privacy is a form of economic self-defense.
Just ask for their email password and see what they say. Usually, though, this comment is just them trying to change the subject, because very few people know or care about any of this.
Seriously, anyone who ever says they have nothing to hide, show them this story.
"A Redding Police Department officer in 2021 was charged with six misdemeanors after being accused of accessing CLETS to set up a traffic stop for his fiancée's ex-husband, resulting in the man's car being towed and impounded, the local outlet A News Cafe reported. Court records show the officer was fired, but he was ultimately acquitted by a jury in the criminal case. He now works for a different police department 30 miles away."
There are a few examples I use when I hear such ignorant statements:
1. Not caring about privacy cuz you’ve got nothing to hide is like saying you don’t care about freedom of speech cuz you’ve got nothing to say.
2. If you don’t care about privacy, why don’t you poop with an open door, for everyone to observe?
Because I don't want the rest of the house to smell?
A different argument that appeals to some people: you might not have anything to hide, but what about the people who do? For the greater good of society, whistleblowers are needed to expose malfeasance by the corrupt, and it's going to be much harder for any of them to come forward if their reward is literally exile to Russia. If you're in favor of a slow slide into dystopia, go ahead and argue against all privacy. Whether a given situation rises to that level is a different but adjacent topic, but appealing to something people can believe in - such as not letting the rich and powerful get away with utterly corrupt dealings - is a way to find common ground with some. Not everyone cares about that, but it's an additional argument for privacy.
The problem is, I could not formulate anything in this way in a professional setting. I want my coworkers to understand, because I feel a bit uneasy working with people who don't, but I also don't want to scare them.