
That's a neat little snippet. Thank you for sharing it. :)


But there is a difference. Sure, the same technology might be integrated into several cloud storage providers, but I can choose whether to use a storage provider or not. If Apple activates this technology on your phone, you cannot opt out. Your phone will be searched.


Yeah, this is what really irks me. Cloud providers always struck me as "optional"; this adds a feature which can trivially be tweaked to simply scan your entire phone for wrongthink and send the results to the CCP.

If Google decides to implement this too, which doesn't seem unlikely given the optics of NOT implementing it, people who aren't tech-savvy won't really have a realistic way to opt out.

I just don't think there is much evidence that false positives are a big issue in practice, and the article is very much pushing that argument.


If you think Apple is going to install something onto your phone to scan everything for wrongthink and send it to the CCP, you shouldn’t be installing their updates or using an iPhone anyway. They can already do that at any time. This feature doesn’t make it any easier. On the contrary, this feature seems specialized for its purpose, and to slide down your slippery slope they would need some different and more general kind of spyware.


I think my "slippery slope" concerns are less technical and more overton window shifting. If you have this practice of scanning images in place, it's not as huge as a leap to suggest we should register hate propaganda images (for instance, infographics) and not allow Apple to lawfully host it on its servers and so on. You just get less and less steps away from justifying banning things like propaganda against the government or individuals in the government and such.

I would agree that on a purely technical level your phone is already pwned by Apple, so worrying about this on a technical level is closing the barn door after the horse got out. However, from a social perspective, one of the holdouts against photo scanning has stopped being a holdout, and doing things client-side makes it seem less wrong to do other things client-side.

Besides the privacy stuff, this also is a bit more of a slide towards software that you purchase being ultimately controlled by and for the benefit of parties other than yourself.


Apple is a US company and this feature is rolling out to US customers in partnership with US law enforcement. I don't think we have to invoke the CCP boogeyman to be worried about this.


This comparison of images to known hashes is only for Apple's iCloud Photo feature, which is optional.

That said, many hosted providers/social networks have similar features - they just have server-side implementations and might not have felt the need for disclosure.


Google already has this.


But this will be triggered only while uploading to iCloud, so if you are not using iCloud for photos the algorithm will never run? Or that's what I understood.


For now. The next step is to do on-device scanning with fuzzy hashes, whatever that means.


What does that mean?


Correct, Apple confirmed this hash-matching is only for their own iCloud Photos service (and appears to be designed with that service in mind).


Only if you use iCloud photos, so you do have a choice.

In practice any major cloud provider is going to do this or is already doing it. We need a better regulatory approach; it isn't practical to put the responsibility on providers.


"On device scanning" and "only if you use iCloud photos" doesn't make any sense does it? If it's really the case they're just preparing the ground for the next step which they hope won't get as much publicity...


Microsoft and Google hash your unencrypted photos on their own servers. Apple could easily do the same… I wonder why they didn’t…


Probably because they are also adding a feature to parental controls which, when enabled, checks images sent to the phones of your children 13 and under. If an image matches a known bad image, they give the child the option to accept it or not, warning the child that if they elect to accept it, their parents will be notified and given a copy of the image.

That has to be done on the phone because Messages is end-to-end encrypted. If they are going to have to have hash matching on the phone anyway for that, it makes sense to also use that for checking images that are to be sent to the cloud.


The detection of sexual images in kids’ messages doesn’t use the same hashing setup as the iCloud Photos detection feature.


No, there is a plan to also do on device scanning.


They've said the on-device scanning is done only when you are using iCloud Photos. So it's on-device scanning but only when you are deciding to share your photos with Apple... At least that's how it is for now.

https://www.macrumors.com/2021/08/05/apple-csam-detection-di...

"CSAM image scanning is not an optional feature and it happens automatically, but Apple has confirmed to MacRumors that it cannot detect known CSAM images if the iCloud Photos feature is turned off."


There are two things going on. I thought the iMessage stuff was on-device, but maybe I'm misreading things. It's unfortunately cloaked in secrecy…


> We need a better regulatory approach

We won't get one though, because the "think of the children!!!" crap is extremely pervasive and anyone going against it will be smeared as a pedo defender.


It is unrealistic to expect corporations, which exist solely via regulations of various governments, to act as if they are immune from government regulatory control. In fact, a lot of negative press for FAANG (add extra letters as needed) specifically makes the point that they are acting as if they are bigger than any government.

In that mindset, the answer is absolutely to combat and to attempt to change broken laws first.

Also, it isn't just "think of the children". For instance, there have been some _terrible_ proposals under the banner of Right to Repair. People tend to not want to invest the time in understanding the ramifications of the actual proposals, and instead vote for or against the concept. One of the reasons ballot measures are both empowering and terrifying.

Good regulations take time and care - and generally, less is more.


With cloud storage, they can manually review the picture server-side for false positives before handing it to law enforcement.

You are consenting to the upload and are aware that it can be searched, so it's not a legal problem.

Law enforcement will also review the picture in the cloud instead of busting down your door to search your phone, as in the whole scenario from the article.

When they do it client-side, they can't just upload your content on a hash collision (or maybe they do, which is also a problem in itself).


This is a lie. Did you try reading the actual announcement by Apple? It's right here: https://www.apple.com/child-safety/.

- The scanning will be performed only if photos are to be uploaded to iCloud.

- The database will be encrypted multiple times in a way that it can't be clearly read.

- There's no notification to Apple in any way in case of matches.

- Instead, each match result is again encrypted in a way that is inaccessible to Apple and uploaded together with the photo.

- If there are enough positive matches, they will eventually become able to decrypt them. That's when they will do a manual check, lock the account if the match is correct, and notify the authorities. (A toy sketch of the threshold idea follows below.)
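
To make the "enough positive matches" part concrete: Apple's published material describes a threshold scheme in which each match carries one share of a per-account key, and the key only becomes recoverable once the threshold is reached. Here's a toy Shamir-style sketch of that general idea; the field, the parameters, and the use of a plain integer as the "key" are made up for illustration and are not Apple's implementation:

    # Toy Shamir secret sharing: any `threshold` shares recover the secret,
    # fewer reveal nothing. In the real system the "secret" would stand in
    # for the per-account key that unlocks the match data.
    import random

    PRIME = 2**127 - 1  # a Mersenne prime, big enough for a demo

    def make_shares(secret, threshold, count):
        # random polynomial of degree threshold-1 with the secret as constant term
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        f = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret)
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(secret=123456789, threshold=5, count=10)
    assert reconstruct(shares[:5]) == 123456789  # 5 matches: key recoverable
    # reconstruct(shares[:4]) yields a useless value; below the threshold nothing leaks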


There are multiple features. This particular one is about client-side scanning before upload to Apple's iCloud Photo service, which is optional to use.

Presumably it is client-side so that they can do anonymization/encryption of photos on the server, and treat any data access outside the account (and the accounts the photo has been shared with) as an audited, cross-organizational event.

But if you want to use another hosted service, you can... and likely get their implementation of a similar system. Presumably this is US regulatory compliance.


Who would back up their phone to the cloud if they were involved in illegal activities?

This is going to bite more innocent people through false positives than criminals who already know how to get away with these things.


Presumably the cloud synchronization checks are not a feature Apple wanted to add, but one which they had to add under US regulations. Other providers have done this for years server-side, but Apple needed a different approach since the photos are E2E encrypted.[1]

It is not a ML model but a list of known image hashes, and is only enabled for US-based accounts, furthering my suspicions this was minimum-effort for regulatory compliance.

Note they _do_ have a feature (also announced today) that uses ML models, but it is meant for local filtering and parental controls/notifications. This feature is also US-only and the parental notifications policy is fixed and age-based. I believe this is both to fit into regulations (e.g. US recognition of rights based on age) and into cultural norms.

I suspect they will have different rules in different jurisdictions when this rolls out further in the future.

[1]: With separate key escrow HSMs for account recovery and legal compliance with e.g. court-ordered access.


this is false. it's only for photos uploaded to apple's cloud.

the tech runs locally, but only on those photos.


Why do some people seem to think that it will stay limited to that forever? It's not just you, it's multiple people in here who think that this is ok. They are our devices. They should not be running software that can get us arrested on our devices. It won't stop there. It never stops there when they can ask for more.


If you take their technical summary [1] at face value, they designed it to be limited.

Even if the hashing and matching happen on the local device, a match can only be revealed server-side. The hash database distributed to local devices will be blind-hashed with a server-side secret key, and the locally derived hash match will need to be decrypted with that key to be read by Apple. So theoretically, if the local device doesn't upload content to iCloud, no content match can be revealed, even if the hashing and matching has been done locally.

Of course, you also need to trust that Apple won't be uploading those locally derived hashes to iCloud without the user's permission if iCloud backups are disabled.

[1]: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
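
To make the blinding idea a bit more concrete, here's a toy sketch. It is not Apple's actual private set intersection construction (which uses elliptic-curve blinding and also hides the device's own derived hashes inside encrypted vouchers); the HMAC stand-in and all names here are purely illustrative:

    # Toy "blinded database": the device only ever holds server-blinded hashes,
    # so it can neither read the hash list nor learn locally whether a photo
    # matched; only the server, which holds the blinding key, can finish the
    # comparison.
    import hmac, hashlib, secrets

    SERVER_KEY = secrets.token_bytes(32)  # never leaves the server

    def blind(image_hash: bytes) -> bytes:
        return hmac.new(SERVER_KEY, image_hash, hashlib.sha256).digest()

    # Server side: blind the known-hash list and ship it to devices.
    known_hashes = {b"known-bad-hash-1", b"known-bad-hash-2"}  # stand-ins for perceptual hashes
    blinded_db = {blind(h) for h in known_hashes}

    # Device side: it holds blinded_db and a locally computed photo hash, but
    # without SERVER_KEY it cannot evaluate blind(), so membership stays hidden.
    local_photo_hash = b"known-bad-hash-1"

    # Server side again: only here can the uploaded photo's hash be blinded
    # and compared against the database.
    print(blind(local_photo_hash) in blinded_db)  # True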


because we’re not discussing hypotheticals, but real life.

and in real life governments elected by the people have been pushing for this for years. the result has been google and all the other cloud providers already implementing this. apple was the last big one to hold out.

will they expand this in the future? sure, whatever. the system is so broken, and i’m so powerless, that at this point in time it doesn’t matter what i want.

at least it will only apply to the US. the ROW is spared. at least for now.


You can opt out by not using iCloud for your photos. If you prefer, you can install Google Photos or OneDrive onto your iPhone and let your images be scanned in the clear in the cloud instead.


This is only applied to photos being uploaded to iCloud.


I might be mistaken, but how is it not the same thing?

- You don't like storage provider - you don't use it

- You don't like ecosystem/smartphone provider - you don't use it

On top of that, you can opt out even now by disabling updates. It just means that you won't have access to the newest iOS and you take the risk that at some point software developers will stop supporting the OS you decided to stick to.


>You don't like ecosystem/smartphone provider - you don't use it

It just doesn't work like this in our day and age. We are one ecosystem (Android) away from complete domination of this scanning technology. You could argue that I could use a Librem or something, but at that point all Librem users automatically become suspicious because "all major manufacturers have this, he probably has something to hide".


apples and oranges. unless you want to severely affect your quality of life, owning a smartphone is a pretty basic thing for adults

you could just as well live in the woods away from society, but that's obviously not the solution to bad laws


If only it were that easy. Social pressure to use certain services often trumps personal preferences. Sure, I can change providers from X to Y, but unless I successfully convince my mother to do the same, I'll end up having both X and Y, as my mom will still use X.


These banners are there to fool you into accepting all cookies. They are basically a dark pattern at this point. The GDPR and the so-called cookie law state that strictly functional cookies have implicit consent from the visitor. Even self-hosted tracking via cookies is considered functional. The GDPR/cookie law also does not require those banners. They only state that the user has to consent to every form of tracking.

So every time you see one of these huge banners it is the deliberate effort by the website owner to trick you into accepting the tracking.

https://gdpr.eu/cookies/


Nobody wants to argue with GDPR regulators about which cookies are "strictly necessary", and they certainly don't want to pay lawyers to review the purpose and use of every cookie.

It's not a trick, it's just that the easiest path for all sites to comply is to obtain blanket consent for everything.

Classic perverse incentive.


Because the bad seller is also tainting the product reviews. Is the camera (in the case of OP) really a five-star product? Or is it a five-star product because one seller basically bought x percent of the positive reviews?


I think the difference between the criticism of Chrome and the criticism of Firefox lies in the marketing. Firefox is constantly telling me that they are the good guys, the last bastion of privacy and user-first values on the web. But in reality they are doing a lot of questionable and shady stuff that they need to be called out for.


Well sure, call them out on that stuff.

But don't then finish that callout by saying that nothing matters and that trying to fight against browser monoculture is pointless.

The "final thoughts" section of this article is depressing to read. There is absolutely still a difference between Mozilla and Google, and it is absolutely still worth fighting to avoid a browser monoculture, even if there's only a small chance of success.

I definitely agree that Mozilla has problems, but that's not the thrust of this article. The thrust of the article is that Mozilla has problems, and therefore everyone's efforts to improve the web are meaningless.


> There is absolutely still a difference between Mozilla and Google

General attitude of these corporations is one thing, but is there really a difference between Firefox and Chromium w.r.t. privacy?


Yes, I think so. Firefox ships with a ton of anti-fingerprinting features that can be enabled in about:config (many of them lifted directly from Tor); containers are an intuitive way for people to isolate sites from each other; encrypted DNS is turned on by default (Chrome only upgrades to encrypted DNS by default if the current resolver supports it); and Firefox's addon API for adblocking is already slightly more capable than Chromium's, and will be much more capable once Manifest V3 ships.


It states in the first section of the Domain Name Registration Policy that you are eligible to register a .eu domain if you are an organisation or a natural person registered in the EU or in a country that has an agreement with the EU. Leaving the EU cancels the eligibility to register or maintain a .eu domain. So I really do not know why the EU is the bad guy in this scenario.

