You seem to be confused about what Apple is doing.
Nothing anywhere says that Apple scans the pictures you store on your own device. They only scan the pictures you upload to their cloud service.
The scanning is done on device before upload in Apple’s case, and in the cloud after upload in Google’s case, but either way it is only done to photos that are uploaded to their cloud services.
If you are really claiming Apple scans the photos you don’t upload to iCloud Photo Library, then you are lying or dissembling.
It's splitting hairs in the end. You're arguing that they only pre-scan it for bad stuff if you decide to upload it, so they don't end up with that material on their servers.
BUT THE CAPABILITY TO SCAN CONTENT OF PHOTOS ON DEVICE EXISTS. The argument is that tomorrow they can simply start sending scan metadata or captions of image content up to a server without you opting into cloud storage.
The capability exists on device. It's all baby steps.
It’s not splitting hairs to point out a lie about what Apple is doing. If you are trying to say that Apple is scanning files that are only stored locally, then you are also a liar. If you support the spread of that false information, then you are dishonest.
As to the capability on the device, the capacity to scan for CSAM is very narrow and is very hard to repurpose.
The capacity to upload images to iCloud Photo Library on the other hand has been there for years.
At any time Apple could add some other kind of scanner if they wanted to, and there would be no reason for it to use this mechanism. It would be terrible if they did, but it has nothing to do with this.
Anyone with programming experience can tell you that if all they wanted to do was check arbitrary files against a list of hashes, it would be a simple mechanism to write.
There is no way that this mechanism helps them scan for other things. It isn’t even a step in that direction, let alone a baby step.
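To make that concrete, here is roughly what a bare "check files against a list of hashes" scanner looks like; the digest and the photo directory are made-up placeholders, not anything Apple ships. The point is that this was always trivial to build, while Apple's announced mechanism (perceptual hashing plus private set intersection, per their technical summary) is far more specialized than this sketch.

    import hashlib
    from pathlib import Path

    # Hypothetical blocklist of SHA-256 digests of known-bad files (placeholder value).
    BLOCKLIST = {
        "0123456789abcdef" * 4,
    }

    def flagged_files(photo_dir: str) -> list[Path]:
        """Return every file whose exact bytes hash to a blocklisted digest."""
        hits = []
        for path in Path(photo_dir).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in BLOCKLIST:
                    hits.append(path)
        return hits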
Ok - if we are going to narrow the scope of the debate to the nature of the scanning, that is the core of my argument to begin with.
I'm not an expert in how CSAM detection works, but if it only acts as a block list against certain known images, it will be very ineffective.
The way I would expect it to work is to recognize the content of images. The SOTA on this is pretty impressive. Knowing the content of the images is what Google does in the cloud, and it's great. I can search my images for "green taxi" and it will find it. Water. Sunsets. Anything.
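For illustration, here is a rough sketch of what that kind of content tagging looks like with an off-the-shelf pretrained classifier. This is a generic stand-in, not the model Apple or Google actually run, and the photo paths are hypothetical.

    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.DEFAULT        # generic ImageNet classifier
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]

    def tag_photo(path: str, top_k: int = 5) -> list[str]:
        """Return the model's best guesses for what is in the photo."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = model(img).softmax(dim=1)[0]
        return [labels[i] for i in probs.topk(top_k).indices.tolist()]

    # Once every photo has tags, "search my library for taxis" is just a lookup:
    # index = {p: tag_photo(p) for p in my_photo_paths}
    # hits = [p for p, tags in index.items() if any("taxi" in t.lower() for t in tags)]

What the labels are is just a matter of which model (or which classification head) you load; the plumbing is the same regardless of what it is tuned to find.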
If Apple is introducing the ability to recognize photo content on the device (even if right now it is destined for the cloud as a pre-scan), it doesn't really matter that the model is currently only tuned to find child abuse.
Tomorrow, the hyper-parameters or the ontology could be expanded to search for anything. Political affiliations, location and timestamps (this doesn't even need modeling!), illegal objects or substances, etc.
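Location and timestamps really are the no-model case: they usually sit right in the photo's EXIF data. A quick sketch with Pillow (the filename is hypothetical):

    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    def photo_metadata(path: str) -> dict:
        """Pull timestamp and GPS fields straight out of a photo's EXIF tags."""
        exif = Image.open(path).getexif()
        meta = {TAGS.get(tag, tag): value for tag, value in exif.items()}
        gps = exif.get_ifd(0x8825)  # the GPSInfo sub-IFD, if the photo has one
        meta["GPSInfo"] = {GPSTAGS.get(tag, tag): value for tag, value in gps.items()}
        return meta

    # photo_metadata("IMG_0001.jpg") typically includes DateTime plus
    # GPSLatitude/GPSLongitude when location services were on.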
The deal is that Apple is saying "we are going to use your device to determine what is in your pictures." The circumstances and scope of those determinations can change, but the expectation that Apple will be doing it is now publicly established.
https://www.theguardian.com/technology/2014/aug/04/google-ch...