There’s a crack in the iPhone foundation and it could get a lot worse (macworld.com)
229 points by amrrs on Aug 12, 2021 | 285 comments



I take issue with most of the alarmism about this CSAM scanning. Not because I think our devices scanning our content is okay, but because of the implication that there's now a slippery slope that didn't exist before. For example, from the article:

> While today it has been purpose-built for CSAM, and it can be deactivated simply by shutting off iCloud Photo Library syncing, it still feels like a line has been crossed

Two simple facts:

(1) The system, as described today, isn't any more invasive than the existing CSAM scanning technologies that exist on all major cloud storage systems (arguably it's less invasive - no external systems look at your photos unless your phone flags enough of your photos as CSAM, which brings it to a manual review stage)

(2) Auto-updating root-level proprietary software can be updated to any level of invasion of privacy at any time for any reason the provider wishes. We aren't any closer to full-invasion-of-privacy with iPhone than we were before; it is and always has been one single update away. In fact, we don't know if it's already there on iPhone or any other proprietary system such as Windows, Chromebook, etc. Who knows what backdoors exist on these systems?

If you truly believe that you need full control and a system you fully trust, don't get a device that runs proprietary software. If you're okay with a device that isn't fully trustworthy, but appears to be benevolent, then iPhone isn't any worse than it was a month ago.

Until there's evidence otherwise, iPhone will continue to be as trustworthy as any other proprietary closed-source system. If you need more than that, please contribute to projects that aim to produce a modern, functional FOSS smartphone.


I agree with (2). But I do think there is a fundamental (human, psychological, emotional) difference between a tool running on a server and one running on my device.

The mental and ethical lines are clear when I hand content to a server, that's a contract I can understand and agree to. When it runs on my device, even if it's because of the option to send it to the server, that feels fuzzier. As Ben Thompson put it, it's the difference between Capability and Policy.

This is now rubbing in our faces that the privacy is only in the policy, not the capabilities of the software. And the capabilities of proprietary software are pretty knowable, through a combination of inspection and long-term observation.

The end result isn't any different today, but it highlights the political position that Apple and others are being put in. I honestly don't fault Apple for this on anything except their clearly poor messaging, but they are getting pulled from all sides by governments who don't like the idea of them actually implementing a fully closed and end-to-end encrypted system for user data.


> But I do think there is a fundamental (human, psychological, emotional) difference between a tool running on a server and one running on my device.

Strongly agree. A law requiring police cameras in rented trucks is an invasion of privacy and a loss of rights for the populace, but it's a whole lot better than having those cameras in your home.

But the sad truth is that handheld devices are actually more like rented tools than they are private homes. Apple devices are windows into SaaSpace; their local storage and compute capabilities are merely implementation details. They aren't anything like the PCs of yore.


> But I do think there is a fundamental (human, psychological, emotional) difference between a tool running on a server and one running on my device.

I agree, but it's frustrating to me because it's an illusion: if your device is running proprietary software, you don't control the device. At that point there's little difference between your device and a server in the cloud, aside from technical characteristics like latency and compute power.


I think Apple is getting so much heat for this because they have at least maintained an outward appearance of separation between device and cloud for so long.

Some of this was just due to being further behind on the SaaS ramp than MS and Google. Everyone expects a Google-services enabled Android device to basically be a cloud thin client, but less so an iPhone. And Apple has actually stood up for encryption and not putting in openly exploitable backdoor capabilities just to appease law enforcement, see their back and forth with the FBI a couple of years ago.

Apple seems to be coming up with awkward solutions as they try to do the "right thing" in some places (more end-to-end encryption) while governments / law enforcement don't want hidden data because of "the children", terrorism, etc. They announced this proudly as though they had finally found the holy grail to negotiate this quagmire, but the messaging did not go over at all the way they seemed to expect.

I don't think there's a path forward for fully open and non-proprietary software that everyone uses. It hasn't happened on the desktop so I doubt it will on mobile devices. And most people want their devices to be cloud terminals, not sovereign system states.

So I think it comes back down to a legal and political issue of what requirements governments put on these tech companies, how data and privacy protections are guaranteed (or forced to be broken), and how that gets enforced.


The difference is I supposedly own my device. This is a line we use for all sorts of things, such as, say, hacking charges.


You explicitly do not own the software running on the device. It's Apple's software running both on your device and on their servers.


Yeah, by even using the device at all, you already agreed to a very long EULA which includes provisions for whatever the vendor of the product wants to do. So, like you say, there's really no reason to have a shred of trust in proprietary software where you can never see for yourself what backdoors (etc.) might be there.


> The mental and ethical lines are clear when I hand content to a server, that's a contract I can understand and agree to.

This system will only work on photos you are going to upload to iCloud Photo Library. If you don't enable this, no picture will be scanned. In my view this is exactly the same as before, where this scan was running on the server.


I'm also surprised that this is the thing that gets people up in arms about iPhones. I mean, you can't even write a legitimate application for yourself and run it on your own phone without insane hoops to jump through. Not to mention that you can't buy an iPhone and decide to change the OS or do whatever you like with the hardware, even if Apple stops supporting it.

I have no issue with people who give up very, very basic user freedoms because they want to opt into the iOS "ecosystem", but I don't see how this particular thing would push anyone off the ledge.


Simple, the line that was crossed is scanning your local files versus scanning cloud files (even though they promise this system is disabled if you disable iCloud Photos).

Once this system is in place, it takes only 1 tiny adjustment/exploit to scan other private stuff on your devices, which you would never upload to a cloud.


> it takes only 1 tiny adjustment/exploit to scan other private stuff on your devices

This has always been true, there's no "Once this system is in place" qualifier necessary. This is the reality of running someone else's proprietary code at the root level on your device.


I respectfully disagree :) That's black/white thinking; by that reasoning, all software not inspected by yourself, and having some automated update system, can be considered unsafe. The world is not just black and white...


> scanning your local files versus scanning cloud files

Correction, scanning cloud-destined files, as a step of the upload pipeline. Local files not destined for the cloud are not scanned. It's a scan happening before the network stack of the pipeline, rather than after. But it still is already in the pipeline, on its way, at the point of scan (at least that's my understanding of their docs).
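
Roughly, as a sketch (every name here is invented; this is my reading of the docs, not Apple's actual code):

    import Foundation

    // Hypothetical sketch: the match runs as one stage of the iCloud upload
    // pipeline, just before the network step. It is not a sweep of the library.
    struct SafetyVoucher { let payload: Data }

    func makeSafetyVoucher(for photo: Data) -> SafetyVoucher {
        // assumed on-device matching step (NeuralHash plus blinded hash lookup)
        return SafetyVoucher(payload: Data())
    }

    func sendOverNetwork(photo: Data, voucher: SafetyVoucher) {
        // the network stack; only photos that enter this pipeline get scanned
    }

    func uploadToICloud(_ photo: Data) {
        let voucher = makeSafetyVoucher(for: photo) // scan happens here,
        sendOverNetwork(photo: photo, voucher: voucher) // then the upload
    }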


The real percentage of people who would care to do either of those two things is infinitesimally small. Not non-existent, just very small compared to:

All HN users in this thread < All tech journalists who are writing about this topic < All iPhone users


I agree with you, but you're painting the alarmists with a broad stroke. There's room for those who trust Apple with their data, are fine with the implications of server-side data storage, and are not fine with their devices performing the scan on offline data.


> There's room for those who trust Apple with their data, are fine with the implications of server-side data storage, and are not fine with their devices performing the scan on offline data.

Right, my point is that the latter (scanning of offline data) does not yet have any evidence of occurring, and we aren't "closer" to that being a reality than we were before: it is and always has been one software update push away from being a reality.


It's not so much that the slippery slope didn't exist before — Apple has just greased it.

> we aren't "closer" to that being a reality than we were before

I want to believe this but it feels very much like we are. And I don't think it's purely psychological or alarmist.

> it is and always has been one software update push away from being a reality

As a software engineer it's hard to shake the feeling that we're now just a patch release away from misuse, whereas last year we were a major update away.

It's still only “one software update push away” either way if you're counting releases, but it's not if you're counting tickets in a product backlog. It's easier for governments to expand an existing feature than to pressure companies to build one from scratch. Apple already built the feature security services dreamed of even though it had no legal obligation to (it has to report CSAM it finds; it does not have to actively search for it on your device).

It's also easier to normalise the use of client-side scanning when one company has shipped it (without talking to the rest of the tech industry, and after declining invitations to talk to Cyber Policy/Internet Observatory teams at Stanford who are trying to help the industry as a whole with an improved collective approach). That pressure and the likely additional implementations of client-side scanning we'll see further expand the potential for abuse.

I really enjoyed the Stanford discussion available on YouTube[1]:

> @8:07: …the other issue being that it wouldn't be terribly difficult to expand this client-side system to do all of the user's photos so we kind of have to trust the software to only apply this to things that are backed up to iCloud.

> @17:24: I think it's inevitable that some government — quite possibly the U.S. — is going to want to expand this to counter terrorism and, well, I think for a lot of people that sounds reasonable. The problem is that the definitions of terrorism are somewhat malleable…

None of this feels alarmist to me. Yes, proprietary software is only ever an update away from throwing us all down the slope. But the slope is not a pure binary thing that is or isn't — companies can increase the gradient by decreasing the work it would take to hurt us, and that's what Apple has done here.

[1]: https://www.youtube.com/watch?v=dbYZVNSOVy4


> As a software engineer it's hard to shake the feeling that we're now just a patch release away from misuse, whereas last year we were a major update away.

I don't agree. Putting aside the "NeuralHash" algorithm that Apple made to try and match CSAM, scanning the files on a device is remarkably simple, and something Apple already does as part of your device's standard functioning (indexing files, attaching labels to pictures based on content detected by AI models, etc).

Apple could have already implemented a secret image tag for "terrorist material" or "drug material" that is attached automatically to images, hidden from the user, and phoned home / reported to the FBI when a threshold is met. How would you know this system doesn't already exist? Literally all the components for this system were already in place.
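
To make that concrete, here is a purely hypothetical sketch (every name invented; I am not claiming this code exists anywhere) of how those existing pieces would compose:

    import Foundation

    // Hypothetical only: wiring existing components (on-device image labels,
    // a counter, a network call) into a hidden reporting system.
    var hiddenTagCount = 0
    let watchedLabels: Set<String> = ["hypothetical-banned-category"]

    func onImageIndexed(labels: Set<String>) {
        guard !labels.isDisjoint(with: watchedLabels) else { return }
        hiddenTagCount += 1
        if hiddenTagCount >= 10 { // assumed reporting threshold
            phoneHome()           // assumed reporting call
        }
    }

    func phoneHome() {
        // stub: would silently report to a server
    }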


It would be great to have information about how much work this took. The only thing I've seen mentioning timescale was the leaked Apple/NCMEC memo:

> Today marks the official public unveiling of Expanded Protections for Children, and I wanted to take a moment to thank each and every one of you for all of your hard work over the last few years.

So this doesn't sound like a feature where all the puzzle pieces were in place and they just needed NeuralHash.

Even if you casually “put aside” NeuralHash as you suggest, the amount of research and testing that has happened to ship the system they've described is not trivial.

I stand by the idea that this was never a point release away.

> Apple could have already implemented a secret image tag for "terrorist material" or "drug material" that is attached automatically to images

Apple cannot even consistently tag cats and dogs. There is no way it was ready to ship a feature that tags drug or terrorist material in a way that generates few enough false positives that agencies won't just turn it off.

I do agree with you that we have no way to know what's running on closed-source devices (or even open-source ones, unless we personally audit the whole process from dust to device).

For me, though, “you can't ever really know what's running on your device so why care about contentious new things you've just learned will definitely be running on your device” is not compelling.

I might swallow 10 spiders a year in my sleep, but if someone offers to feed me one I think it's fair to decline.


It only scans data that’s going online though.


Go ahead and take issue. You missed the part about them turning your own device against you and normalizing it as "oh it's just a little thing". This is huge and you shouldn't downplay it. Phones are at the center of our digital life and this is letting a government proxy in (a first step, likely to be followed by many more). I am quite glad that people are "overreacting" to it. It seems like you want us to just eat our grass like sheep. I will keep raising issues with it no matter how many times people say "alarmist" or "must be a pedo" or whatever is the insult of the day.


> that exist on all major cloud storage systems

Except for those cloud storage systems that have offered client-side encryption from day 1.

Pretending that server-side scanning is something naturally given and should be accepted as baseline biases the arguments that follow.


It isn't necessary to believe in a slippery slope to be very alarmed at this.

Their 1-in-a-trillion estimate of false positives is very likely total horseshit (there was a recent article dissecting that claim, from someone familiar with CSAM detection).

It also opens up a "swatting" avenue where if someone hacks your phone they can upload CSAM images and Apple will sic the authorities on you. Good luck with that.

Really we need to bring back comp.risks and start training ourselves to think a bit more cynically about the downsides of technology.


Agreed, and I'll add that the existing cloud systems have been running for a long time and have yet to slip into the various dystopian hypotheticals I've seen tossed around. Google isn't scanning Drive for documents containing anti-government sentiments. I'd be very interested in evidence that suggests otherwise.


CSAM scanning will surely be followed by copyright scanning


Arguments about CSAM aside, I think this is the final nail in the coffin of walled gardens and closed devices. If a company can unilaterally swoop in and throw the EULA out the window on a device you bought years ago, with absolutely no recourse, then what are we supposed to do?


Apple is altering the deal. Pray they don't alter it any further.


Final nail is when Apple starts preventing you from installing non-approved apps on your PC. It's coming, and the excuse will be security.

Sorry, you can't install this bit torrent client or this crypto wallet...


Apple is already doing that with iOS and iPadOS. Apps are already taken down from the App Store in China at the request of the government.


You know it can’t happen. If developers jump ship who’ll fill their app stores?


It's a catch-22 because developers want to sell their apps, so they will still comply.


They don't care about developers. Touch Bar ESC key, thinness instead of performance, etc.


You agreed to let them alter the terms of the deal. It's in the EULA.


>In short, Apple has built a CSAM detector that sits at the doorway between your device and iCloud. If you don’t sync photos with iCloud, the detector never runs.

If you don't want your photos scanned, turn off iCloud. You can disagree about whether or not that's anti-consumer, but you cannot argue they are changing anything about your device or its EULA (they are changing the EULA of a service).


That's today.

Next will come scanning of your phone's content. "Hey, if you never send email or chat, the scanner never runs."

And keyboard logging. "For the children".

Encrypt end-to-end all you want, it doesn't matter if your device is running spyware.


Sure, if theoretical new policies come into effect that fit the description of the parent, then there will be policies that fit the description of the parent.

We are in agreement.


I am curious how that works for iMessage. In the example screenshots they shared, they show their scan runs on iMessage too. Is there a way to disable that? Also disabling iCloud would be an option if they allowed for third party backup/sync apps to run background tasks.


>Is there a way to disable that?

It's a feature of Communication Safety in Messages. This is opt-in.

https://www.apple.com/child-safety/


You are right. Though that’s opt in “for now”.


> I am curious how that works for iMessage.

I don't know the details here, but you can disable iMessage and just send plain SMS, which in theory does not send the data to Apple. But then, part of the point of using iMessage is that the message contents are not sent to your carrier and have some encryption in place. I would guess your best bet is to find another phone manufacturer and use a different messaging service that provides better security if you want to opt out.


According to John Gruber this does work on SMS, but is a feature of the app, not the service:

> It’s also worth pointing out that it’s a feature of the Messages app, not the iMessage service. For one thing, this means it applies to images sent or received via SMS, not just iMessage. But more importantly, it changes nothing about the end-to-end encryption inherent to the iMessage protocol. The image processing to detect sexually explicit images happens before (for sending) or after (for receiving) the endpoints. It seems like a good feature with few downsides. (The EFF disagrees.)

https://daringfireball.net/2021/08/apple_child_safety_initia...


You may be confusing CSAM detection for iCloud Photo Library with a new parental control feature for under 13s in the messaging app.


Ah, so the CSAM detection doesn’t apply to incoming iMessages? (For now at least, I guess; who knows what Apple decides to do once this gets adopted.)


No they are different things. CSAM detection is looking for specific images from a database before upload to iCloud photo library: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

The child protection part can be enabled for under 13s if they’re in a family account. If enabled, the Messages app will try to detect adult images being sent and received and give a warning to the child; it can also let the parents know about it.


I really wish this would be the end of it. But 99% of consumers will keep buying into these walled gardens and will forget about all this within a week if they even heard of this in the news. Because the next iPhone will come in pretty purple that they absolutely must have.


I think when most people understand what the thing is doing they either don’t care or actually agree with it. There’s a large overlap between tech people and people who have strong privacy feelings. If you chat with normal people they are generally completely onboard with scanning for this type of material.

I had quite a hard time explaining to some people why the anger towards this wasn’t essentially pro-pedo, because they couldn’t understand why anyone would be against these people being found.


The final nail as in nobody will give a shit about it and Apple's revenue will continue to rise? Because this is what will happen, and what has been happening for decades.


Which part of the EULA is being thrown out?


Don't buy it


"Consumer choice" is not how you stop hostile all-pervasive monopoly machine learning algorithms. The number of times I've heard from smart people: "Don't use Google"; "Don't use Amazon"; "Microsoft are evil" (true) - now "Don't buy it". Running into the hedgerows and away from the mainstream of digital life is not an option for all but a tiny fraction of people. "Just get an Android phone!" is an even worse idea.


I have to disagree - privacy requires openness. Indeed, "just get an Android phone" from Google, OnePlus and several other major manufacturers is a viable solution - they allow you full control over the device without their permission.

You don't want Google? No problem, blow the ROM away and install Lineage without GApps, you can even mess with MicroG if you're reliant on some app that needs Play Services.


Apple became the biggest corporation in the world because they worked out how to build humane interfaces into the technologies that shape the future. That you believe Grandma will be just fine nerding around with system internals, or will care what a ROM is(!), shows just how far out of touch so many hackers really are. I'm sorry, but no: Android is not a solution.


Of course! I'm not suggesting it's a solution for everyone at all. Just realize that Grandma will never have privacy like that.

Corporations and governments combine to basically make privacy a niche thing for tinkerers only, you simply cannot mass-market the development attitudes nor technical and opsec skills needed to achieve a real degree of privacy on the user side.


It's impossible to have privacy just for you in a society that otherwise doesn't have privacy. Think Facebook: even if you don't share many details, your friends certainly do. They will post your photos together and happily tag you in them, they will write about how you hung out together, they will geo-tag your shared commutes, etc. Even if you don't share too many details with Apple/Google/Amazon/etc, your mobile carrier certainly does. Your Facebook/Twitter/TikTok does. Everyone around you does.

The solution to this problem does not lie in a technical plane, nor does it in digital escapism.


Haha, what an argument.

Grandma does just fine on Android and would be utterly confused by iOS.

It's all about what you're used to.


UX discoverability on iOS is atrocious. You have to know what the special gestures are just to switch and open apps, whereas Android has buttons.


In terms of avoiding CSAM detection they give you no more control than Apple. If you use Google’s Photo Library, your photos have already been scanned.


That's an option for the tech literate.

I want an option that is "grandma" compatible. In other words, it isn't really viable if you can't just buy it off the shelf, or have to dig into settings to opt out. Right now if I go to Best Buy (or insert your favorite retailer here), I can't buy hardware off the shelf that won't send my data to a remote server by default.


What if I don't want linux? Not many options available in modern phones...


Seems like a really... weird preference for a kernel on your phone.

A UI, sure, I could understand that, but a kernel?


Why would it be weird to prefer a BSD kernel rather than linux one?

I'm not too keen on contributing to linux monoculturization of IT anyway.


Well, almost. Both of them have their "fuck the user" quirks. OnePlus spent a goodly amount of time exfiltrating private data to their own servers until they got caught, and Google has a bad habit of disabling features like HDMI-out to force you to buy their other products like Chromecast.


You’re not running away from a mainstream digital life by buying a different computer, that’s just FUD. I’m nearly all Linux run and this has not decreased my ability to have a mainstream digital life…whatever those qualifications are, which seem to be specifically tied to software Apple allows you to use. These companies only want you to think life will be worse off by not buying their products.


Meanwhile in the real world, my brilliant friend, who cares about privacy, called me in a panic because she couldn't work out how to shift/right-click a video in her browser to download it to her hard drive, and you believe Linux is a viable alternative to the vast majority of ordinary people?


I never suggested that.

Shitty UI design is independent of the OS.


Regulate it.

The FBI/CIA/CCP are not the people we elect.

My representative is behind the charge to undo the app store monopoly. We can also push for less surveillance, less centralization of power, etc.

It seems if the OS vendor doesn't get to have a default browser, default app store, and default cloud store, that they don't get access to scan your files without asking either.

Legislate.


Agreed. Democratically decide where the lines are, or ought to be, then move those lines with amendments and revisions as needed. Judge those lines in the broader context of our legal system. We balance these issues in the physical world with warrants and judicial oversight - that's what we need here - sensible middle ground the democracy can live with.


That's becoming less of an option by the day and will only accelerate if Apple gets their way.


Literally just get an Android phone.


If you think it's not spreading to Android phones, you are naive. Apple set the precedent, now it's gonna come to every mainstream device.

The only option is to abandon ship.


For a split second I lamented the demise of Windows Phone. But they too would just follow. But I do lament the end of Firefox OS.


Sure, now find me a new car that doesn't have the ability to do OTA updates


I could not find any Toyota that does OTA updates. Honda has a website to do a USB update, but I could not find evidence that they do OTA either:

https://usb.honda.com/

Subaru lets you do it over your Wifi, but does not seem like they come with a modem:

https://techinfo.subaru.com/stis/doc/ownerManual/Gen4_FOTA_H...

Although, Mazda’s website does say this, so maybe it will not be for long before they all come connected to mobile networks:

https://newsroom.mazda.com/en/publicity/release/2021/202106/...

> Mazda intends to fortify our initiatives of development of fundamental software technology in order to be able to accommodate for next-generation Mobility as a Service (Maas) and update vehicle functions Over the Air (OTA)

> Five Japanese OEM companies 3 including Mazda will jointly develop standard engineering specifications of next-generation in-vehicle communication devices to push for a standardized communication system in order to provide safer and stress-free connected services sooner.


Assuming the marketplace is functioning, demand for this sort of "feature" (aka: non-feature) would imply a rational supplier willing to provide it.

The next step in that debate is "yeah but the big monopolies are making it impossible for a little guy to get in." Which is true.

We can agree the regulatory capture is bad.

In the meanwhile, Google is not openly saying they will run ML on your images on your phone. With Android, you don't have to sync to the cloud, and you could even replace or add your own camera option. You can side-load without jailbreaking, etc.

Now, that's not the most consumer-friendly option, but the advice for this crowd is still good - if you still have an iPhone and this is the last straw for you, there are plenty of good options that still exist, today. And then - let's fight regulatory capture and big government so a more dynamic marketplace can take root.


> Assuming the marketplace is functioning, demand for this sort of "feature" (aka: non-feature) would imply a rational supplier willing to provide it.

The assumption that a "functioning" market will do a good job of catering to even fairly popular wishes does not seem to hold true in the real world, including for cases in which I'm pretty damn sure it's not some kind of government interference causing it not to. It's utterly common for plain ol' commodities subject to no special government scrutiny or control and with many suppliers to provide no option for features or product-types that would surely have many buyers, simply because no-one expects the returns to be as high as doing something else with the same capacity.

AFAIK this happens for a bunch of reasons, including that information is very, very far from being perfectly shared in all parts of the market, that there are significant costs associated with quality information-gathering, and that efficient use of capital tends to cause production to cluster around tiny little bits of the possible product space (similar to how pharmacies like to build right next to each other, rather than spreading out to reduce travel-time-to-a-pharmacy in an area).


Sure, but I already have an Android. The concern I have is that the regulatory landscape will change if Apple opens Pandora's box. Google could just as easily do the same thing, hampered only by their complete inability to keep Androids updated. I have a Pinephone, but what happens if the Congress critters decide everyone should have this feature and networks should ban devices that don't?

The point is that we're REALLY playing with fire here.


> Assuming the marketplace is functioning

It's not. That's the problem.


Google’s photo library already scans for CSAM.


But not on your device.


No - it’s worse than that. They can scan for anything they like. They have made no promises about limiting what they do.


Source?



Nothing in those links says that Google can scan the pictures you store in your device using your own device to do the scanning.


You seem to be confused about what Apple is doing.

Nothing anywhere says that Apple scans the pictures you store on your own device. They only scan the pictures you upload to their cloud service.

The scanning is done on device before upload in Apple’s case, and in the cloud after upload in Google’s case, but either way it is only done to photos that are uploaded to their cloud services.

If you are really claiming Apple scans the photos you don’t upload to iCloud Photo Library, then you are lying or dissembling.

I assume that is not actually what you mean.


No, I said Apple uses MY phone to scan pictures before they reach their servers. Google does not use my phone to do that and scans them themselves.


> I said Apple uses MY phone to scan pictures before they reach their servers.

Now you appear to be making a false statement about your own words.

You only mentioned photos you store in your own device, and you said nothing about uploading to servers.

Here’s what you said:

> Nothing in those links says that Google can scan the pictures you store in your device using your own device to do the scanning.


It's splitting hairs in the end. You're arguing that they only pre-scan it for bad stuff if you decide to upload it, so they don't end up with that material on their servers.

BUT THE CAPABILITY TO SCAN CONTENT OF PHOTOS ON DEVICE EXISTS. The argument is tomorrow they can simply start sending scan meta data or captions of image content up to a server without you opting into cloud storage.

The capability exists on device. It's all baby steps.


It’s not splitting hairs to point out a lie about what Apple is doing. If you are also trying to say that Apple is scanning files that are only stored locally, then you are also a liar. If you support the spread of that false information, then you are dishonest.

As to the capability on the device, the capacity to scan for CSAM is very narrow and is very hard to repurpose.

The capacity to upload images to iCloud Photo Library on the other hand has been there for years.

At any time Apple could add some other kind of scanner if they want to, and there is no reason for it to use this mechanism. It would be terrible if they did but it has nothing to do with this.

Anyone with programming experience can tell you that if all they wanted to do was check arbitrary files against a list of hashes, it would be a simple mechanism to write.
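
For instance, a from-scratch scanner of that kind is only a few lines. A sketch (invented names, plain SHA-256 exact matching, nothing like NeuralHash):

    import Foundation
    import CryptoKit

    // Hypothetical sketch: flag any file whose exact hash appears in a list.
    let blockedHashes: Set<String> = [
        // placeholder SHA-256 digests of known-bad files
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    ]

    func flaggedFiles(in directory: URL) -> [URL] {
        let fm = FileManager.default
        let files = (try? fm.contentsOfDirectory(at: directory,
                                                 includingPropertiesForKeys: nil)) ?? []
        return files.filter { file in
            guard let data = try? Data(contentsOf: file) else { return false }
            let hex = SHA256.hash(data: data)
                .map { String(format: "%02x", $0) }
                .joined()
            return blockedHashes.contains(hex)
        }
    }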

There is no way that this mechanism helps them scan for other things. It isn’t even a step in that direction, let alone a baby step.


Ok - if we are going to narrow the scope of the debate to the nature of the scanning, that is the core of my argument to begin with.

I'm not an expert in how CSAM works, but if it only acts as a blocklist against certain images it will be very ineffective.

The way I would expect it to work is to recognize the content of images. The SOTA on this is pretty impressive. Knowing the content of the images is what Google does in the cloud, and it's great. I can search my images for "Green taxi" and it will find it. Water. Sunsets. Anything.

If apple is introducing the ability to recognize photo content on the device (even if right now it is destined for the cloud as a pre-scan), it doesn't really matter that the model is only tuned to find child abuse, as an example.

Tomorrow, the hyper-parameters or the ontology could be expanded to search for anything. Political affiliations, location and timestamps (this doesn't even need modeling!), illegal objects or substances, etc.

The deal is that Apple is saying "we are going to use your device to determine what is in your pictures." The circumstances and scope of those determinations can change, but the expectation that Apple will be doing it is now publicly established.


> I'm not an expert in how CSAM works,

You don’t have to be. It’s easy to find docs on this exact technology with Google.

> but if it only as a block list against certain images it will be very ineffective.

It is not.

> The way I would expect it to work,

You are basically completely wrong. Read up on it and then we can discuss it.


So it’s worse. They don’t wait until they’ve had several hits before they get any knowledge of it, they have a list of people who might have these images in their library.


You dropped your "/s"


> This all leads me to believe that there’s another shoe to drop here, one that will allow Apple to make its cloud services more secure and private.

> If this scanning system is essentially the trade-off that allows Apple to provide more privacy for its users while not abdicating its moral duty to prevent the spread of CSAM, great.

Even assuming this other shoe drops, then this logic implies that there is a moral duty to implement other types of scans to prevent other serious crimes. Surely Apple shouldn't abdicate its moral duty to prevent murder. There's really not much of a limit to this argument. As long as the level of false positives is similar to that of the technology now being implemented, there is a moral duty for Apple to deploy the surveillance methods.


> Even assuming this other shoe drops, then this logic implies that there is a moral duty to implement other types of scans to prevent other serious crimes. Surely Apple shouldn't abdicate its moral duty to prevent murder. There's really not much of a limit to this argument

Over 70,000 Americans die from overdoses each year, which is nearly double that of yearly vehicle fatalities. If Apple is already scanning photos and messages, why not save tens of thousands of lives while they're at it by detecting heroin and fentanyl dealers, too?


Exactly. How about we put it this way: "CSAM is a problem, but corruption within politics is more of a problem. Apple, morally, really has no choice BUT to monitor all politician communications and police communications looking for certain suspicious or risky phrases."

The camera is facing the wrong direction...


This is a very kind, and naive take on the issue. Case in point:

> But would it be able to say no to China?

In order to keep selling on the Chinese market, they already agreed to host their Chinese customer data inside China with a local cloud provider, and censor content according to Chinese requirements, and fully comply with the CCP requests. So once the "tank man" photo is added to the list of banned pictures, there's no doubt they'll happily comply.


China is a nice canary. If it's legal there then it's probably no good.


Every overreach in surveillance of the last 30 years has been justified as a way to stop child porn or terrorism. Law enforcement deliberately chooses these issues because they know they are an easy way to manufacture consent for greater surveillance. Mission creep isn't hypothetical. Many of the provisions in the Patriot Act are now completely entrenched and still used today for drug enforcement and petty crime.


If you read old spy novels, you'll read that one way foreign spies recruit domestic spies is by first asking them to do innocuous stuff ("send us some newspaper clippings"). Once they have them doing that, it is easier to get them to break the law and pass over _real_ secrets.

In other words, "just the tip".

This move by Apple is the proverbial tip. Eventually you can bet Apple will let governments upload not just hashes of CSAM, but "hashes" of faces they'd like to monitor. Or objects in images (maybe weapons, or chemicals, etc.)

I am so glad I don't use any Apple products.


The foundation is made of sandstone and full of cracks and mud from the beginning; you never had root or the ability to replace the kernel. Apple has always been able to do whatever they want and they've rented that privilege to carriers and law enforcement in the past.


I really hope what comes out of all this is developers and money flowing into Linux phones. Pinephone looks good in theory, but it sounds like it is not there yet.

We need more options than just Android or iOS. Sure, Android is "open source", but we all know it's limited without all the Google things also installed.

We need a true third competitor.


I hope for this as well. The best time to build up freedom-focused big-tech alternatives is before things become dire, not after.

Having had time to cool off, my response to this is to continue using Apple devices and supporting their privacy-focused way of CSAM scanning rather than the worse alternatives.

BUT, I'm going to start buying and supporting alternatives in parallel, and eventually start contributing to them.

If my biggest concern is government overreach, yet I think that we're "not there yet", then it's incumbent upon me to be a small part in helping to make sure the alternatives are ready once we do "get there".


This has finally pushed me into getting a Liberapay account and donating to a few projects, mostly related to XMPP and mobile Linux.

My reasoning being that in the FOSS world there's not a single company you can pump funds into (Pine64 does not profit much or at all from their devices, and Purism is just one company, not yet shipping phones in large quantity), but individual developers scattered all over the world DO contribute and DO need our support.

It's the only way, until we get more Pines and more Purisms (minus the drama).


> In short, Apple has built a CSAM detector that sits at the doorway between your device and iCloud. If you don’t sync photos with iCloud, the detector never runs.

Is this correct? I have just read conflicting things about this online, I think even Ben Thompson wrote that it runs on the device too?


Both statements are correct--for now. It runs locally on device for photos synced to iCloud. Presumably it's a slow boil: they'll make it run on all photos in a year or so, once the press from this one has blown over.


Thanks (to you and the other responders) for clarifying; that makes sense.

Bit of a shame... I wish it just sat elsewhere on their server before cloud syncing and then I could just disable cloud sync + use the phone without worrying.


There's the rub. I think folks would feel better about it if Apple-owned equipment was doing the scanning, even though arguably that is worse for privacy.


> Presumably it's a slow boil: they'll make it run on all photos in a year or so, once the press from this one has blown over.

What is the basis for this assumption?


History....


But this move matches Apple's current stance on backups. They have access to iCloud backups, but do not have access to local ones. This has been the case for a while now. This mirrors the situation with iCloud photos. In other words, maintaining the privacy of non-synced photos would be consistent with their previous stance on local backups.


It is correct, but it’s also true that it runs on the device. It’s actually an interesting trade-off as it’s more private in some ways, at the cost of potentially being less private if misused in the future.


Wait if this is really true, why are people freaking out? Wasn't Apple already scanning photos uploaded to iCloud? And now it just happens on-device?


No, they were only scanning iCloud email. They were lagging well behind other services in identifying this content, finding only a few hundred instances compared to millions by Facebook, for example.

The system only works with iCloud Photo Library, it needs a server side component to continue the process.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

This is a very interesting read, and personally I think they’ve gone to extreme lengths to make this system as private as it could be.
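
The threshold design is a good example. Modeled very loosely (this ignores the actual cryptography, private set intersection plus threshold secret sharing, and only shows the counting logic with invented names):

    // Loose model: individual vouchers reveal nothing to the server;
    // voucher contents only become readable once a match threshold is crossed.
    struct Voucher { let isMatch: Bool } // in reality the server cannot read this flag directly

    func serverCanDecrypt(_ vouchers: [Voucher], threshold: Int) -> Bool {
        // with threshold secret sharing, decryption below the threshold is
        // cryptographically impossible, not merely forbidden by policy
        return vouchers.filter { $0.isMatch }.count >= threshold
    }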


It doesn't matter what the first version does. They will not stop there ... Russia started with "protecting children" too... look what they do now, 10 years later. Imagine what they will demand from Apple with such capabilities. And Apple will most likely cooperate ...

" The authorities are now moving not just to reduce the influence of foreign tech companies but also to force them to promote Russian services — as with the new regulation mandating government-approved apps on all new smartphones.

Apple has agreed to this "

https://time.com/5951834/russia-control-internet/


> look what they do now 10 years later

Care to elaborate? I am genuinely curious


>... In 2012, Russia began blacklisting and forcing offline websites with the purported goal of protecting minors from harmful sites ...

> Government critics have been targeted; Navalny’s Live Journal blog, which published investigations about corruption in Russian politics, and other political opposition sites were blocked. (Roskomnadzor said they were banned for calling on people to illegally participate in mass events).

>The “sovereign Internet” law required Internet Service Providers (ISPs) to install Deep Packet Inspection (DPI) equipment, which has been used by some countries, like China, for censorship. DPI equipment enables Russia to circumvent providers, automatically block content the government has banned and reroute internet traffic.

>It has required search engines, including Google, to delete some results...

>some have buckled under Roskomnadzor’s threats to block them if they don’t comply with censorship orders. In 2018, Facebook-owned Instagram, which has 54 million users in Russia, complied with the regulator’s requests to remove posts connected to corruption allegations by Navalny. In a tweet Navalny accused Instagram of submitting to “illegal censorship orders”. “Shame on you Instagram!” he wrote.

>After Roskomnadzor threatened to prosecute social media sites for encouraging minors to join the January protests, the regulator said TikTok deleted 38% of its related content, while YouTube and Russian social media site VKontakte removed half.

All from the link I've provided above already in the previous post: https://time.com/5951834/russia-control-internet/


Thanks; I missed that link.

The statement about DPI is wrong though: DPI is ineffective without government-mandated root CAs.


Yeah, they are wording that poorly; it’s definitely on the device. This article feels like the most charitable read of the CSAM system.


An interesting thing in this whole tempest is the notion of the CSAM itself. Specifically, there are more than a few people whose argument against Apple seems to be, implicitly, that CSAM is something of a moral panic. It exists but not at the scale that Apple needs to embed this technology in every device. Other posts alleged NCMEC lies about stats, ostensibly to justify their budgets (if CSAM is not a growing problem, then they don't need more funds). Others insist that the existing means of policing don't work in an increasingly technical world; and the bad guys are good at this stuff. I don't know which side is right - or if there even is a "right", here.

To me, then, the whole thing at least partly rests on those questions. If CSAM is a moral panic, then this technology is not just bad, but doubly so. Apple got played by the Feds, or maybe they really have been dreaming of backdooring their devices and finally had the rationale they needed.

But if CSAM isn't a moral panic but an actual growing problem that needs a technical solution (or assist), what are we to do? Just shrug and say "well it's a hard problem, maybe one day we'll solve it" and hope for the best?


This article on HN today

https://tutanota.com/blog/posts/wer-wird-belauscht/

"Politicians regularly claim that they need to ban encryption to protect the children. But who is actually being monitored?"

Apparently it's drugs, by orders of magnitude over any child-related issue.

That seems to imply that it is a "moral panic". If they said they were doing it for drugs, or drugs plus other stuff, then that might be more convincing.

Moreover if the article is to be believed - such mechanisms are used at hugely higher rates for things they are not pitched as being for. So that breaks the narrative of 'we'll only use it for X'.


A few years from now, most people will be totally okay with Apple letting random governments coerce them into putting “illegal” non-CP-related photos into their database in the name of “national security” or what not. All it takes is one tragedy for the population to give up their principles and rights in the name of a tiny bit of “safety”. Just a decade ago, freedom of speech used to be something most people would fight for, even for things they opposed and found offensive. Nowadays, people make excuses for why companies coerced by governments are totally fine with censorship because “it’s just a private company”. The “it’s a private company” principle is how censorship is done in authoritarian countries too - private companies simply do the job of the governments. I have zero hope for the future of the free internet. Soon journalists like Julian Assange and whistleblowers like Snowden will be even easier to shut down and track down in the name of “national security”, which Apple has made easier to do.


Can anyone recommend a good framework or analysis of the barriers to open-source, general-purpose computing on mobile devices? I have read with interest the various open-source mobile operating systems posted to HN, but don't feel like I have a good sense of the landscape. Switching to open source feels urgent (mobile has become so important and so privacy-invasive), but it also feels like this is a steep uphill battle; that mobile has been culturally positioned as not-for-open-source.

I'd love to see this problem analyzed in terms of layers. My sense is that one barrier is walled-garden apps. I'm happy to give these up. But I also have the sense that there are hard barriers involving firmware (especially cellular modems?), identity (SIM cards are designed to contain secrets you can't access), and connectivity (you can't decouple a phone number from its client as easily as you can with an IP address).


(Sorry to reply twice, but my noprocrast timer is still running out.) If you want it in wiki/git form: postmarketos.org is the project you're looking for as far as stuff on the application processor goes. Most people prefer to treat modems as black boxes due to the belief that modem firmware needs to go through regulatory testing (I'm not too sure that's true, but I've only done WiFi stuff).


Purism, the makers of the Librem 5 phone, have addressed these issues many times in their news/posts: https://puri.sm/news/


Non-free graphics drivers and the incompleteness of gnome-calls and libpurple-sms. Those two are really the main issues.


IMO this was bound to happen the moment users accepted that they didn't have root on their hardware, and handed the keys to the manufacturer. It's a slippery slope. Apple is as much in control as we are. We are all just sliding down the path set years ago.

Today it's scanning your photos. Tomorrow it will periodically turn on the camera, do on-device processing to check for child abuse, and only send the recording to the FBI if it thinks it found something. Otherwise the recording will be deleted.

Once you cede control over your property, this is bound to happen sooner or later.


> With its on-device CSAM scanner, Apple has built a tool carefully calibrated to protect user privacy. If building this tool enabled Apple to finally offer broader encryption of iCloud data, it might even be a net increase in user privacy.

> But tools are neither good nor evil. Apple has built this tool for a good purpose, but every time a new tool is built, all of us need to imagine how it might be misused. Apple seems to have very carefully designed this feature to make it more difficult to subvert, but that’s not always enough.

> Imagine a case where a law enforcement agency in a foreign country comes to Apple and says that it has compiled a database of illegal images and wants it added to Apple’s scanner. Apple has said, bluntly, that it will refuse all such requests. That’s encouraging, and I have little doubt that Apple would abandon most countries if they tried to pull that maneuver.

> But would it be able to say no to China? Would it be able to say no to the U.S. government if the images in question would implicate members of terrorist organizations? And in a decade or two, will policies like this be so commonplace that when the moment comes that a government asks Apple or its equivalents to begin scanning for illegal or subversive material, will anyone notice? The first implementation of this technology is to stop CSAM, and nobody will argue against trying to stop the exploitation of children. But will there be a second implementation? A third?

I’ll add this: Is our outrage at Apple enough to stop governments from requiring that they implement features like this?


Can someone please explain why we are against even the smallest reduction in end-to-end encryption by a single vendor you can avoid using (iCloud), but then talk about cryptocurrencies, for example, as "enabling black markets and economies"?

Don't you see that it's two sides of the same coin, and end-to-end encryption can easily enable any sort of privacy coins and anonymity? We as advocates need a consistent position, otherwise we just have a gaping double standard.

If you're worried about anonymous transactions in crypto, because they can lead to, say, money laundering or financing terrorism, why are you not worried about end-to-end encrypted communication by those same terrorists?


If that makes you feel better, I advocate for both.


I picked up a cheap fire tablet on prime day, mostly to talk to the cars. The UI on that thing is a horrid mess, and I just don’t want to bother wrapping my head around it.

But AAPL’s actions of late have me wishing there was a 3rd option for a phone.


This is the best Apple can do. Apple can’t say to governments that client-side scanning is impossible, because it obviously is possible. In fact it’s quite easy. There is no magic technical fix, but at least Apple has preserved e2e encryption of cloud data. This is a political/societal issue; Apple is subject to the rule of law like everyone else. Contact your elected officials if you want it changed.


No, there's no regulation or law requiring this sort of action. Everyone is speculating it is Apple's response to a foreseen future where there are laws requiring such technology to preserve some amount of privacy, but it absolutely is not a current requirement in the US.

They could have implemented true E2EE but they chose not to in deference to law enforcement and they could have fought the battle there. The amount of speculation being done on Apple's behalf here is baffling.

I think Jason's take is pretty measured and it's pretty clear that he's uncomfortable with this, especially given his long history of reporting on Apple, but he is also pushing this false dichotomy.


You're assuming that if there's no explicit law there is no other avenue governments have to put pressure on Apple. This seems to go against previous stories that have come out about the relationship between eg government and google.


Right, which is why there's no bite behind their remarks about resisting government imposition. They could have implemented E2EE and fought the government about it openly, but they didn't.

Now, I really doubt the only reason they didn't implement E2EE was government pressure, because making E2EE an invisible feature (which Apple loves to do) is a difficult task. I will, however, plainly state that the choice between spyware that scans images on your phone with E2EE in the cloud, or neither feature existing at all, is a false dichotomy that is being pushed as a narrative by a lot of people.


All arguments aside for a moment, do they just not see the logical inconsistency here?

>Apple’s Head of Privacy implies that it’s because it was ready, but that’s a bit of a dodge—Apple has to choose what technologies to prioritize, and it prioritized this one.

Apple didn't have to prioritize this ahead of anything in order for it to be ready to ship. They could have been working on it in the background for some time.


Working on it in the background would have made it a higher priority than everything they weren't working on.


How long until phones are so not controlled by their users any more, that it becomes a plausible defence to say “that wasn’t me, it must have been the <app or OS vendor> that caused my device to do it”?

Conversely, how might Apple implement non-repudiation of user actions? If they control the platform utterly, how can they prove that a user really did something?


This detection and scanning are going to establish possession of illegal material, not a specific action.


Why does the scanning code have to run on the device?

Why can't it run in the cloud, only when a device tries to upload something?

Then users could just turn off the cloud services and retain their naive notion that what's on their phone is really private.

When are we going to see a cloud service provider that matches iCloud and other service providers but promises total anonymity?


I feel very uncomfortable with the potential misuse of this technology and the fact that this is simply Apple-stamped spyware. Another way to go around this is to not update to iOS 15. Let's launch a movement, something like #NotUpdating, to send a clear message that our devices are not meant to spy on us.

#NotUpdating #NotUpgrading #BoycottApple


By Apple moving this process from the cloud to the device, they have the opportunity to make the process transparent and auditable. That would actually be an improvement. Everyone could see what was being looked for, to the point where certain groups could even inspect the image behind every hash.
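
For example (one possible mechanism, my speculation, not something Apple has announced): ship the matching database as a single blob and publish its digest, so anyone could verify that every device carries the same, publicly known list:

    import Foundation
    import CryptoKit

    // Speculative sketch: a publicly verifiable fingerprint of the on-device
    // hash database. If the digest were published, auditors could confirm
    // all devices ship the identical list.
    func databaseFingerprint(_ database: Data) -> String {
        SHA256.hash(data: database)
            .map { String(format: "%02x", $0) }
            .joined()
    }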


What if this own goal was done on purpose as a way to prove the business cannot sustain this level of intrusion? It could be a piece of a larger game to build a case against Congress trying to force its hand further. The outrage has got people politically motivated. It could be weaponized against the surveillance state.


I strongly disagree with the author. Apple did not build this feature for "increased privacy" or "better cloud security". Sorry to say, but that's a desperate believer right there.

Why did Apple do it, though? To create precisely this kind of cognitive dissonance. It's not Apple that's observing you. It's the government. Apple has basically created an API for government surveillance and can now say "see, we don't collect the data. We don't even know what big brother wants to know about you."

Apple just side-stepped the question of how to reconcile trust in proprietary software with surveillance requests from the government. And everyone who tells you that it's not guaranteed that this feature will be extended to texts, spoken language, and other content is not worth talking to.

We now live in a world, where our personal computers come with "AI" that detects illegal use of those computers and automatically turns us in to the authorities. This general pattern won't disappear, ever. In a couple of years you will have to carefully consider every word you say in a car, every movie you watch, every website you visit, every program you write, because some neural net will use it to create a fingerprint that might be classified as illegal behavior.


I think the author, amongst others, speculates that Apple will announce E2E encryption of your iCloud data, such as backups, in the future. This is not an unreasonable speculation, nor does it imply support of the whole CSAM scheme.

BTW: I've been an Apple fanboy since the mid-80s, but I find the whole concept of my phone suspecting me of child pornography appalling. It's the thin end of the wedge, and IMHO Apple have made a huge mistake.


“It’s the thin end of the wedge”

This is the biggest worry. Apple essentially waved their hand in their FAQ and said something to the effect of they’d never do something like that.

Yes you would. You’ll get called to congress and told in no uncertain terms what you’re expected to do, you’ll say no, the law will be changed, then the gentleman in a suit that people don’t say no to will call to your nice headquarters for you to walk him through his shiny new surveillance tools.


Yes, Apple's defense is literally the "If you're not doing anything wrong you have nothing to worry about" line that is universally recognized as laughable.


Not only that, when they suggest that morally they would or would not do something they’re either being dishonest (which to be fair I don’t think is the case) or genuinely naive as to the limits of their control here, even over their own products.

Once the tool exists, they’ll add whatever features governments in major economies tell them to or they’ll be forced into adding them.

All this said, I don’t blame Apple alone here and I have mixed feelings about it still - as I really do believe in their fight against CSAM. I just don’t think this is the magic solution to solving CSAM while preserving an innocent user’s right to privacy that they’re selling it as.

I’m also certain there are engineers who worked on this system who also share these concerns.


Congress has already gone a few steps into your hypothetical. The EARN IT Act[0] proposed to ban E2E encryption, essentially by making companies criminally liable for any CSAM they can't scan for. While EARN IT wasn't passed, I wouldn't be surprised if Apple saw the writing on the wall and figured they'd get out ahead of things.

I also feel it's important to note that every major image host is doing the same thing Apple is doing, but server-side on unencrypted images. What Apple is planning to implement is, out of context, an improvement; in the same way that Google's FLoC is an improvement over ad tracking based on third-party cookies. Get a trusted client to cooperate in some smaller act of self-surveillance and we don't need the server to do full take.

That being said, I don't think we even need to rely on governments threatening Apple into doing surveillance. The scanning scheme relies on paying human moderators to basically look at CSAM all day and check if it's a valid match. Big tech moderation is already understaffed and overworked as is - so it's almost certain that the people involved will just forward everything as their sanity degrades. Apple will almost certainly wind up accidentally or unknowingly forwarding along bogus CSAM reports, no coercion required.

[0] https://en.wikipedia.org/wiki/EARN_IT_Act_of_2020


> I've been an Apple fanboy since the mid-80s, but I find the whole concept of my phone suspecting me of child pornography appalling.

In The Cuckoo's Egg, Cliff Stoll tells about having his computer page him when it detected a hacking attempt. Telling about one time when his pager went off, he said, "My computer wanted me."

Now we have so many notifications and suggested actions from our phones and computers, it feels like we have become computer peripherals. Our computers want us to do things. They have agency, and we're responding to it.

But at least we still had agency, too. The computer had the initiative, and we could choose to respond, or not. Now, though, the computers are limiting our agency. They are becoming the controllers, and we the controlled.

I don't think it's just that Apple has made a huge mistake. I think humanity is making a huge mistake. We are allowing tech to turn us into children.


I agree Apple is giving in just like they did in China. However, they were already doing this on the server. Wasn't that already the thin edge of the wedge?


At this point "my phone" is a pretty meaningless distinction. It's just Apple's phone as a service. For whatever reason, people have drawn this (to me arbitrary) line: it's decided that it's not your MySpace page or your Squarespace site just because someone else is distributing it on your behalf, but once you hold it in your hand that rule can't apply, because reasons.

It's been a decade and a half of software being not the user's agent but some company's agent that, in some cases, you buy the hardware for.

Unless how we talk about software changes this is just the world we live in now. How many apps do you have that are genuinely extensions of your intent as a user and not company kiosks? Of probably about 100 apps on my phone I think I have maybe 5.


It is unreasonable speculation, insomuch as it begs the question _why_; they're unrelated propositions. There's no legal framework where you're only allowed to encrypt data if you check whether it's child porn first.

My biggest fear, seeing how this has come up this week, is that the Apple community, which built me from a waiter into a programmer, has become susceptible to the same kind of irrational tribalism we see everywhere else: "surely this bad thing is happening because it's part of a secret plan to fix the other bad thing!"


E2E encryption of iCloud data is completely useless if it also comes with a feature to selectively upload some of that data unencrypted for snooping purposes. Even setting aside the issue that child porn will turn into child porn plus terrorism plus the cause du jour, the imperfect nature of the scanning requires a review process. This is more like YouTube Content ID than it is like an md5sum of known CSAM files. That's the violation.

Another recent article points out that even the hash databases include bad matches (a rhesus monkey was mentioned).
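
To make the difference concrete, here's a rough Python sketch of the two matching styles. The 64-bit hashes and the distance threshold are invented for illustration; this is not NeuralHash, just the general shape of exact vs. perceptual matching:

    import hashlib

    # Exact matching: flip one byte of the file and the digest
    # changes completely, so only identical files ever match.
    def exact_match(path, known_digests):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() in known_digests

    # Perceptual matching: similar images get nearby 64-bit hashes,
    # where "nearby" means few differing bits (Hamming distance).
    def hamming(a, b):
        return bin(a ^ b).count("1")

    def perceptual_match(img_hash, known_hashes, threshold=10):
        # Tolerates re-encoding and resizing -- but that same fuzziness
        # is what lets a rhesus monkey match something it shouldn't.
        return any(hamming(img_hash, k) <= threshold for k in known_hashes)

That fuzziness is exactly why a human review step has to exist at all.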


I find myself already contemplating the websites I visit, the videos I watch and the things I post, even anonymously, because my interests include military electronics among other things. Even when I'm not worried about attracting unwarranted attention, I still hesitate, because my life is being hoovered up by big data, and if I accidentally watch one clickbait video, suddenly the algorithms are hell-bent on feeding me nothing but fake news, alt-right indoctrination material and conspiracy theories...


I mean it would be great if you could configure your device to just refuse to even load these banned items. I don't want that shit on my things.


Who decides what the banned items are? Can we be sure these items won't suddenly change, when an authoritarian government gets voted in?


If they're things that will get me imprisoned, why would I want them? If there's an authoritarian government, I'm not planning on protesting by viewing child porn.


Sorry, you seem to have visited hacker news more than 30 times. This website is frequented by criminals who share hacking tools and information about illegal encryption technologies used for CP. You have been reported to the authorities.


Don't cower in fear. Learn to use about:config, VMs, VPNs, and TOR.

Not everyone is able to do this, but you're technical enough to be spending time on Hacker News.

(Also, this doesn't preclude lobbying for restrictions on government and corporate surveillance. It's not a dichotomy)


What about smartphones? Increasingly, an up-to-date and unmodified Android or Apple smartphone is required to interface with the modern world. Want to use a Linux phone? Sorry no banking app for you. No check-in for flights. So you either need to accept massive day-to-day inconvenience or subject yourself to surveillance and control. This era of computing is a disaster from the perspective of user control. You know what, Stallman was right.


> Increasingly, an up-to-date and unmodified Android or Apple smartphone is required to interface with the modern world. Want to use a Linux phone? Sorry no banking app for you. No check-in for flights. So you either need to accept massive day-to-day inconvenience or subject yourself to surveillance and control.

On my desk right now there are five Android phones (two of them don't even have SIM cards). I use them for different things. I wouldn't dream of signing in to all of them with the same Google account.

If you want to retain some degree of privacy and use Instagram on your phone, you need to get another phone for that.

They're cheap.


First, having a Free computer doesn't prevent you from also having a corporate terminal. Devices are inexpensive these days, especially used devices that can have Free software installed onto them.

Second, phones are a terrible platform for privacy-preserving technologies, both currently (mobile OS dumpster fire) and intrinsically (lower resources, poor input, well-known network identity). Anybody who cares about digital privacy should have a real computer (laptop/desktop) that they use for the bulk of their activities.


And that is why having an open web is important.

Not being locked-in into apps is critical nowadays.


How does the open web make it better? Then your data is swimming on a server outside your device, on an unknown operating system, with god knows who looking at it.

Someone earlier astutely used the metaphor “the thin edge of the wedge”; the movement to the “cloud” via the open web over the last decade was the thinnest part of the wedge.


Without one, we would be forced to use apps on a locked-down OS, unable to switch to any up-and-coming alternatives at all.

Even to the extent the web is open today, I'm able to run Linux on all my PCs/laptops and don't have to keep an extra machine running a proprietary OS in order to participate in modern life.


My bank has a customer portal website. My airline has a check-in website. Using a website instead of an app is a minor inconvenience, certainly not a "massive" one.


That's not a realistic "solution" for any more than a minuscule fraction of users.

If we don't want to live in a surveillance-based dystopia, we need to push back against it politically as a society, not play cat-and-mouse as individuals.


> we need to push back against it politically as a society, not play cat-and-mouse as individuals.

And for that to work, we have to counter the usual "but think of the children!!!" narrative with one of our own - prevent CSAM creation at the source. I have written about how to get started previously: https://news.ycombinator.com/item?id=28114076


I’ve been a loyal VPN user for a decade plus. The problem is the web’s becoming increasingly unusable for anyone who comes from a VPN IP. Every single search and website click results in 90 seconds of barely decipherable captcha challenges.


Presumably the widespread proliferation of (ironically!) iCloud Private Relay and the like will help alleviate this.


You're better off suffering that and letting it inform your behavior (avoiding sites that implement that trash, to the extent you can) than continuing to buy into the surveillance web and making it ever harder to separate yourself.


How do you know if the VPN you're using is trustworthy, though?


I've recently tried Express, Proton and Nord. They all seem to have the same issue. Don't know if anyone has any suggestions of VPNs that don't lead to captcha hell, but I'd appreciate any pointers if there's one out there.


What you say is agreeable and true. But privacy is a collective problem for society! If I have privacy for myself, and can express myself freely, but essentially nobody else can, what kind of society do I get to live my so-called free and private life in? A cyberpunk dystopia, that’s what.


There's something weird about this line of thinking. You know that the very people who invented the boogeymen you're afraid of being associated with are also the ones who want to do this to you? It seems like you already bought into their propaganda for justifying surveillance but you don't like the result.


> It seems like you already bought into their propaganda for justifying surveillance but you don't like the result.

I think one can arrive at the opinion that they would prefer not to have their recommendations flooded with stuff from alt-right or conspiracy YouTube/Twitter/whatever without buying into propaganda.


I believe the demonization of the term "conspiracy theory" is propaganda. Aren't we in a thread discussing a theory that organizations other than Apple will eventually conspire with Apple to utilize this invasive new technology? I find it so confusing that the idea that different factions may be working together in non-public ways is such an impossibility to so many people.


Sure, fine. But one would not arrive at using terms like that if they were not being exposed to and buying into the propaganda being put out there. The proof that they're affected by the propaganda is them using the jargon of it. Before that, no one was going around saying "God I hate it when I see conspiracy theories!" It wasn't something on anyone's radar.


I can see that, but that sense of the term (which is, I'm confident, the main one in use by a long shot) dates to, like, the aftermath of the JFK assassination and early Roswell-related groups, no? The term itself is older, but I mean the colloquial use of "ideas expressed by kooks and crackpots with excessively-complex and unlikely explanations for things, with an intense resistance to even clear evidence that they're wrong, and far too strong a prior that their unlikely idea is correct". IOW I think you lost that fight by the 70s, at the latest—it's been on people's "radar" since then.

Meanwhile, groups/people/movements that completely match that description are real and it's handy to have a term for that behavior, even if you think it's applied too broadly (and maybe it is). We've got the similarly-abused word "cult", but abuse aside it's still handy to have a way to refer to groups that actually do closely fit the colloquial, pejorative meaning of the term, and there's no way you're going to get people to stop using the word that way.


>This general pattern won't disappear, ever.

It should disappear due to active pushback, because freedom, democracy and human rights cannot coexist with such surveillance.

Privacy is where independent thinking happens and is generated, and if it's not guarded properly, the whole idea of democracy is not possible. More on that here: https://news.ycombinator.com/item?id=28084578


Yes, it should be obvious by now why Apple must back down.

But to convince Apple, there need to be plenty more articles about CSAM and where it leads.


For some reason, people living from paycheck to paycheck are expected to be "good citizens", but if you make billions (and pay much less in tax, percentage-wise), your obligation to not be a shitbag is, well, waived.

Everyone here is hating on Apple, but looking at what Peter Thiel is involved in - oh, that's FINE. He is rich, he is better than you, obviously. Don't believe me? Watch the Peter Thiel shrine that is HN downvote this.

https://www.vice.com/en/article/5db4ad/google-bans-safegraph...


And yet... https://en.wikipedia.org/wiki/Mass_surveillance_in_the_Unite...

Apple just created a product that slowed the government down; the government wouldn't stand for that, and so Apple compromised.

America talks a good talk about pushing back against surveillance, but if it doesn't affect people day to day, no one will care.

Hell, almost a million Americans are dead and about half care more about being able to go to a bar than the easily preventable deaths.


Honest question though: what _should_ Apple do here? I'm moving away from Apple due to this decision, so I'm no fan of it... but I also don't think it's on some corporation to spend its money to defend my freedom.

Corporations have done nothing but repeatedly invade my privacy, exploit my data and attempt to squeeze money out of me at every turn. The relationship has seemed adversarial for quite a few years. So what should I expect from Apple here?

It sort of feels like the alternative is for Apple to stand up to governments. To fight for the people... But this seems super, super hopeful. A desperate plea founded in wishful thinking that corporations have any incentive aligned with ours.

So what, in your eyes, is the right solution? Apple letting us down is par for the course to me. But I also don't see an alternative where I'm not just blindly hoping some corporation will defend my freedoms for me. I've lost all trust in literally every other corporation... honestly, should I have trusted Apple?


Apple should do nothing. Child pornography is a problem for law enforcement. It should not involve blanket and universal surveillance of personal devices.

Today it's child pornography. What about tomorrow? Let me make a case. First, let me make something clear at the outset. I am 100 percent against child pornography. For all I care, castrate all the adults involved, when you catch them.

Now, with that made clear, let's take a look at how bad, comparatively, pornographic photos of underage individuals on a person's phone are—and my apologies to those who cannot stomach the following discussion.

The photos have already been taken. The harm here is indirect. The harm is that possession of photos means that you have become part of the market for child pornography, and provide incentive for the production of future materials. The direct harm occurs when the materials are being produced, and the direct harm is categorically worse than dissemination and possession of the photos.

Compare that to something like terrorism. Recruiting terrorists and planning terrorist activities is, I would argue, a greater and more direct harm than mere possession of child pornography. (Leave aside the production.) So, how can someone accept that this is okay for child pornography and not terrorism? And once we've accepted it's okay for terrorism, who is the "potential" terrorist? What is terrorist contraband?

And now that we've accepted sniffing for terrorists, what's next? Election "manipulation" that "threatens" our democracy? Public health "misinformation"?

This is the Panopticon.


> Today it's child pornography. What about tomorrow?

The latest Antipope.org post (discussed here yesterday) suggested we extend the child-pornography treatment to possession of unauthorized cryptocurrency.

https://news.ycombinator.com/item?id=28145918


Ah, thank you! Yes, I see. Users of cryptocurrency are monsters who, because of the carbon footprint of mining, threaten the very planet. Why, that goes beyond mere hatred for humanity—you have to include the entirety of the Five Kingdoms.

https://www2.palomar.edu/anthro/animal/animal_3.htm

How could anybody be against criminalizing it!


> For all I care, castrate all the adults involved, when you catch them.

Interesting. I’m not familiar with a criminal justice system with a false positive rate lower than NCMEC’s perceptual hashes.


I think this is the best comment I’ve seen to date on this topic. From beginning to end, word for word.


> For all I care, castrate all the adults involved, when you catch them.

I doubt it would suffice. Chemical castration, perhaps.


> Apple should do nothing.

Is this a viable option when governments are pushing them to do something? To give governments access, etc?

I.e., why should Apple fight governments here? They seem to have next to no incentive to do that, in the capitalist sense. Our society doesn't typically financially support moral stances, so is fighting the government, which "doing nothing" surely entails, a realistic stance? It seems insanely hopeful to me.


> Honest question though: what _should_ Apple do here?

Implement encryption and say the same thing to the FBI as the last time they wanted Apple to unlock something: sorry, we can't.

It's not like encrypted cloud storage, without the provider holding the keys or doing client-side scanning, would be new ground. Providers for that already exist and have existed for years.


But then doesn't that invite exactly the types of people this kind of technology is trying to keep away? If everything is truly 100% encrypted and there's nothing that can be done about it, then iPhones just become tools for criminals, and there's no way Apple wants that kind of image tied to its brand. Maybe we messed up by making the conversation only meaningful between governments and corporations. So how do we change that, or turn the tide, when voting with our wallets is the only option?

I don't really agree with this system either but I, at least, appreciate the sentiment that they're trying to solve this problem while also providing E2E encryption (assuming that the internal info is correct and the end goal is full E2E encryption).


> just become tools of criminals

Or activists, journalists, doctors, lawyers, victims of police overreach, etc. If this is a question of framing then shouldn't apple be more than qualified to handle it?


Apple can’t fight this battle. No one can. The best you can do is make sure you are well versed in the current revision of wrongthink. Make sure you stay up to date. Adhere to the correct beliefs, actions, and words. Start now!


Doubleplusgood advice, Citizen.


> Honest question though: what _should_ Apple do here? I'm moving away from Apple due to this decision, so I'm no fan of it... but I also don't think it's on some corporation to spend its money to defend my freedom.

Honest question: where are you going that has fewer privacy issues than this?


For the phone, no idea. For my laptop I'm switching to the Framework Laptop and Linux (for work). For my phone, god only knows.

I _want_ to support some Linux-only phone, completely off of Google/etc. However, I suspect the UX will be too poor, lacking basic functionality that I deem essential. I may still buy one to support the effort, but I think I'm going to have to buy an Android-based phone and then heavily de-Google it. I don't like this option because it still supports the Google ecosystem, but it may be a necessary evil while the Linux-phone market hopefully grows, and while I keep buying into that market to "do my part".



CalyxOS is the one I'm debating going with. It seems a more realistic option (compared to Purism). Does CalyxOS recommend phones to buy for its OS?


I think only Snapdragon SoCs are fully supported. So, yes, the Google Pixel phones and the Xiaomi Mi A2 are currently recommended: https://calyxos.org/get/

I flashed mine 2 weeks ago and I am very pleased. Without the Google Services Framework you are restricted to the F-Droid store, though (not a problem for me personally). And there are several apps that won't work even if you install the GSF and the Aurora Store: banking apps, online check-in for flights, etc. -- again not a problem for me personally; I fly maybe once every 5 years and use online banking from my browser...


Librem 5?


What makes the Framework laptop so great is that it provides a user-controlled system without being ancient and crippled. It stacks up pretty comparably to a standard XPS or MacBook. As much as I respect Purism's efforts the Librem 5 is so much less functional than an iPhone it's almost a different class of device.


Some of the open source phone projects are interesting. But I stopped carrying a phone entirely about 3 years ago.

HN is a weird crowd, so this obviously isn't a solution for 99+% of people. But I'm choosing to opt out entirely. Living in the 90s is annoying when coordinating with unreliable people, but this isn't some kind of impossible feat. I don't even think about it anymore; it's a non-issue. Just get on with it.


> Honest question though: what _should_ Apple do here? I'm moving away from Apple due to this decision, so I'm no fan of it... but I also don't think it's on some corporation to spend its money to defend my freedom.

They are sitting on ~$200B in cash. If they want to continue to be viewed as protecting users' data and privacy, that is exactly what they should do: spend some of that money in court, defending the privacy of their users. But they won't, and they will lose the thing that differentiates them for some subset of their users.


Even $200B is minuscule when compared to the money that governments can spend to make this a requirement for everyone. People need to push back on the situation that led to Apple even thinking they needed to do something like this. We need to go against the root cause, not the solution one company came up with to try and fight against that cause.


It's not about what Apple should do. This story is not so important to their business, marketing-wise it's a minor annoyance. It's all about what users/you should do. Support, participate in, and use open/self-hosted, private and properly secure services and devices.

Yes, it's hard, cumbersome, and not "there yet" for a long time to come on a lot of fronts, but it's the only way out of the corporate multiverses.


And we know that all of that is feasible, because Linux on Desktop is a great thing now.

Years ago it was something that only nerds and techy people would use and now one can easily install it for any family member and never look back.


> Honest question though: what _should_ Apple do here?

How about not moving forward with this system?

Corporations have to obey the law, but they don't need to be zealous and build spying infrastructure when they aren't required to.


If this is a preemptive response to full E2E encryption, does that change your opinion in that regard at all?


Not who you asked, but: it wouldn't change mine. My phone is one of the most private things I own, it stores private thoughts, communications and pictures. If someone goes out of their way to try and sniff data in there, that's a no go. I might even go to the extreme of saying that I'm willing to even give child pornography consumers some uncomfortable freedoms, if it guarantees that my freedoms remain untouched.

(and all of that would be before I even consider false positive rates)


If this were the case, I'd think Apple would have mentioned it by now to defuse the situation.

But even if that were the case, the tool will now exist for any dictatorial government that wants to spy on "troublemakers". The Chinese government, which has already gotten full compliance from Apple, would be foolish not to use it.


I don’t think that trusting a corporation is any good either. Imagine the alternate scenario where the government is benevolent and has no interest in surveilling its citizens. Companies in this case still have an incentive to gather as much data as they can from their customers because the ability to manipulate people into patronizing their products is a competitive edge.


The execs should quit (if they care about privacy).


I remember that a decade ago, Freedom of Speech was a pretty common sentiment shared here and pretty much everywhere online. Today, though, that freedom is no longer even prized over the rights of private mega-corps, sometimes doing the bidding of the government.

On one hand it's encouraging to read your take, but I have no faith that in another 10 years, the government scanning all our devices for "misinformation" that the then regime deems as such, will be anything but hand waved away by the majority of SV employees as "it's the law".


“They’re just consequences for abusing your right to privacy,” they’ll eventually explain.


The proposal is the exact opposite of how you characterize it.

> Apple just side-stepped the question of how to reconcile trust in proprietary software with surveillance requests from the government.

They actually inserted themselves into the process, not side-stepped it. They could have kept the status quo (no scanning) and kept supplying iCloud backups to governments at high rates. As part of this, they're also forced to add a greatly expanded human review function.

> We don't even know what big brother wants to know about you.

They’ve explicitly stated what they’re scanning for and went further to say they will not scan for other types of content. You may not believe them, but this is not a content-neutral or blind justification.


You're suggesting that Apple's motivation was "precisely" to enable general widespread government surveillance? Why would Apple have an interest in doing that? What evidence do you have that this is their motivation, how do you know what motivates them?


I would suspect it's because governments repeatedly want this surveillance and, as the GP comment correctly pointed out, this gives governments what they want while Apple sits on the side and acts innocent.

Given that pressure from governments for this type of surveillance is, I think, well agreed upon, Apple giving in while avoiding direct responsibility seems quite the incentive in my view.


I also wonder about it. Maybe the agencies had something in their hands to put pressure on the Apple people who decided this. Maybe optimized taxes, maybe lovers; we will never know.

For sure there had to be a lot of pressure. I don't believe Apple did this just because it was a beautiful morning one day.


Makes me think further... are there any documented cases of a three-letter agency being on the board of any company?

I'd guess they tried it once or twice, and it was too slow, so they went for more forceful means.

This is just reasoning from what they've already done in the past.


An SRE friend told me that they're pretty confident a three-letter organization has attempted a serious breach at a data center. There's a good chance that Apple has negotiated a deal so they'll stop attempting hacks in the future.


“A SRE friend” is “pretty confident?”

That’s a very bold claim to make on hearsay. Does your friend work at Apple? Are they just speculating, or do they have actual evidence? Why do you believe them?

C’mon, people, I know this is a knee-jerk issue, but exercise some basic critical thinking skills.


Basic critical thinking skills would tell you that Apple wouldn't spend millions of dollars developing something that isn't making them any more money. The most obvious explanation is that the government is somehow pressuring them.

As far as the attack goes, I have a few more details, but I'd rather not reveal them since I could be violating confidentiality somehow. I understand why you'd be skeptical.


So the gov't got exactly what it wanted, and instead of getting Congress to actually pass a law, it effected a much simpler solution by force.

Winning!


> Why would Apple have an interest in doing that?

To follow the law? You can already read a bagful of stories where it caved to law enforcement pressure. It already complies with much more invasive legal systems in China or India, like any other major tech company. The measures there are presented as "anti-terrorism", maybe a less marketable term in the US currently.


> We now live in a world, where our personal computers come with "AI" that detects illegal use of those computers and automatically turns us in to the authorities.

Wait a second, but only if you're uploading photos to iCloud? They were presumably already doing this once you upload them, so what is the big difference? I have totally missed this detail in a lot of the coverage, and it changes my view of it.


It's my understanding that they weren't actually checking photos on their servers against the CSAM database, which is why Apple's reporting of CSAM on their network has historically been orders of magnitude less than other companies -- Facebook last year reported ~20M, other companies reported tens or hundreds of thousands, and Apple reported, IIRC, 265.

I'm aware the general sentiment here is "don't assume governments are putting pressure on Apple to step up their reporting, this is all on Apple bad bad bad bad bad," but sorry -- those numbers do indeed lead me to strongly suspect governments are putting pressure on Apple to step up their reporting. And I don't think Jason Snell (the author of the linked article, who probably knows more about Apple's world view and internal politics than almost anyone outside the executive team does) is wrong to theorize that Apple sees this CSAM-hash-detection system as a way to meet that pressure while protecting user privacy as much as possible, because they've convinced themselves that "protecting user privacy" means "do as much on device and as little in the cloud as possible".

To be clear, I don't think this is a good idea -- letting the camel's nose in the tent through a different opening is still letting the camel's nose in the tent. The irony, though, is that I think Apple's inventively bad solution comes from the most canonically Silicon Valley mindset ever: all problems are engineering problems.


Yeah it's kind of odd, because now law enforcement/the courts can say "well, you have the capability to scan content on device, you just choose not to in some cases", which would seem to open Apple up to being forced to do it in far more cases.

I think we have to look at this through the lens of governments around the world attempting to ban E2E encryption. E2E encryption has both security and privacy benefits, and IMO most people are far more affected by the security benefits, given the frequency of data leaks/hacks. So maybe Apple is saying "let's preserve some of the E2E encryption benefits", but show it can be compatible with LEO demands.

I think the unfortunate reality may be that that is the least bad option.


I think there are a lot of great arguments to be made against this system, and the road that it is going down.

But I think your arguments would be improved by actually reading and understanding the current system Apple is proposing. Which, for example, does not include automatic reporting to authorities based only on a hash detection.

At the end of the day this is mostly an issue of political and legal requirements. Apple is doing what they think they need to do to cover their bases as they wrestle with governments over things like E2E encryption policies.


Think of it this way. Ever see the movie where they put neck shackles on people that blow up if they do something wrong? The Running Man is an example. With local device scanning, I feel Apple is putting a neck bomb on me, while promising they won't blow it up unless I do such and such. I can't take the neck bomb off, and I never should have let them put it on in the first place. My head is at the mercy of Apple now and forever, because it's locked on. Now Apple can change the terms of service: boom. Apple has a bug: boom. Why in the hell would I want this neck bomb on me?!


Sure. I'm not saying this is good. I'm saying there are a lot of kneejerk arguments being made that directly conflict with Apple's description of the system.


The problem is that you have to trust their description when the technology is a black box. And trust that it won't change in the future.


Sure. But like I said the causes are mostly political and legal.

A utopia of GPL-only mobile devices with no proprietary cloud service connections doesn't seem like a valid option. At least not for a large enough share of users to make a societal difference.


>We now live in a world, where our personal computers come with "AI" that detects illegal use of those computers and automatically turns us in to the authorities.

Nah. Everybody else will live in that world because they are addicted to convenience. Personally, I'll be scrapping my iPhone and buying something with free software.


For my edification, what are you going to buy? I want to look at it and if it fits my requirements for a device, I might get right in line behind you.


I don't know yet, I'm aware of the Librem 5 and Pinephone, the former being more attractive to me because of the isolated baseband modem. But let's be real, the user experience of these devices doesn't come close to an iPhone, so it will be a tough transition I reckon. But one I deem necessary.


I looked at the Pinephone, and "user experience not coming close" is an understatement. Looking at the table of OSes, it looks like getting it working would be like installing Linux on a laptop circa 2008. The Librem 5 looks a bit better, but a 6-month lead time for a $900 phone with a 720p screen and 802.11n WiFi? I'm going to continue my search.


The author reads nothing like a true believer, at least not compared to coverage from MacRumors or other mainstream media sites, which keep mindlessly vomiting up the "but Google's been doing basically this for ages, so it must be harmless" type of logic, intentionally misinterpreting every relevant point from critics.


Google has been doing this for ages in their cloud. As have all the other major clouds.

Apple's the first to do this on your own device. Sure, only content that would have been uploaded to iCloud. For now. But this is the start of that particular slippery slope.


I'm thinking about encrypting all data in the cloud and disabling the platform-specific image upload/backup altogether. I'm not sure how I would a) automate the backup process from our phones to the cloud and b) make it somewhat convenient to look at and share pictures among the family.

Any form of automation and convenience goes against the main point, which is privacy.


I use Cryptomator for this. On Android it allows auto-uploading to any of the major Android-enabled clouds, while encrypting. You do need to keep your vault unlocked for the auto-upload to work, though; otherwise it will need to catch up whenever you unlock it. Not sure if it has this feature on iOS.

If you share Cryptomator and the vault with your family/friends, they can access it, but it's not possible to look at the pics on the web that way.

The app is also paid but it's a one-time fee and not expensive.
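
For anyone who just wants the underlying pattern rather than a specific app, it's "encrypt locally, upload only ciphertext." A minimal sketch using Python's cryptography package (this is not Cryptomator's vault format, and the filenames are placeholders):

    from cryptography.fernet import Fernet

    # Generate once and keep it yourself (offline backup!);
    # whoever holds this key can decrypt. The cloud provider can't.
    key = Fernet.generate_key()
    box = Fernet(key)

    with open("IMG_0001.jpg", "rb") as f:
        ciphertext = box.encrypt(f.read())

    # Hand the ciphertext to any dumb storage client; the provider
    # only ever sees opaque bytes.
    with open("IMG_0001.jpg.enc", "wb") as f:
        f.write(ciphertext)

The trade-off is exactly the one mentioned above: no server-side thumbnails and no web gallery, because the server can't read anything.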


Imagine if your luxury car came with cameras in all the compartments, sending an alert to the authorities that what looks like a baggie of cocaine was detected in the glove box, blowing away any protection from unreasonable search.


> 'Apple ... can now say "see, we don't collect the data. We don't even know what big brother wants to know about you."'

I don't know what Apple is thinking, but I doubt it's that, because people who care about being surveilled are very unlikely to be persuaded by that argument. (And obviously people who don't care much about being surveilled are unlikely to care very much whether such surveillance happens on device or in the cloud.)



I can actually see it as something that encourages parents to have fewer qualms about giving their kids iPhones. I don't know how the thing works but if it makes it annoying for my kids to take/send nudes, then great.

Edit: I guess at worst Apple's just sending a cryptographic hash of unencrypted data so that known child porn can be flagged. It's basically like sending a hashdeep record of the data. It's an interesting concept with a lot of implications for tracing the spread of files within networks without knowing what they contain. Presumably, if it were expanded to all files in cloud storage, LE could say something like "Give me a list of everyone that has object XDFAECXDZE" when LE knows that XDFAECXDZE is Catcher_in_the_Rye.pdf and Apple does not.
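
Roughly, in Python (all names and digests here are hypothetical): the manifest is filename-to-digest pairs, and the LE query is a reverse lookup across everyone's manifests:

    import hashlib
    from collections import defaultdict

    def manifest(paths):
        # Like a hashdeep record: filename -> content digest.
        out = {}
        for p in paths:
            with open(p, "rb") as f:
                out[p] = hashlib.sha256(f.read()).hexdigest()
        return out

    # Provider-side index: digest -> users known to hold that object.
    index = defaultdict(set)

    def ingest(user, user_manifest):
        for digest in user_manifest.values():
            index[digest].add(user)

    # "Give me a list of everyone that has object <digest>."
    def who_has(digest):
        return index.get(digest, set())

The provider never needs the file contents to answer the query, only the digests, which is the whole tracing concern.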


>I don't know how the thing works but if it makes it annoying for my kids to take/send nudes, then great.

My understanding is that this only works for known abuse images, and even then only if the files are identical. So Apple has no way to know if high schoolers are sending nudes to each other.


Very well articulated. In worse prose, it would be "you will have less and less privacy as you depend on more technology, because the govt will want more and more power."


> We now live in a world, where our personal computers come with "AI" that detects illegal use of those computers and automatically turns us in to the authorities.

No, we don't. Your refusal to understand how any of this actually works, your desperation to conflate this process with the worst hypotheticals, does more damage than Apple ever could.


Are you refusing to see how this opens a Pandora's box of surveillance on a device where many people store their most personal communications, memories, and other data?


If everything is a Pandora’s box, then nothing is.


Scary.


Dystopian.


You could just get a cell phone that you can turn off. I think they still make them, but get them while you can. Eventually even that won't work, because your car will probably be able to observe you just as easily.

I have an old Razr flip phone I might start using again. That and an iPod touch with no cellular.

This is here to stay. Time to adapt to it.


Can’t you “turn off” anything by just sticking it in a Faraday cage to block it from phoning home?


You can block the signal, but since the new scanner is now on the device itself, it can still snoop and phone home later. I want a phone that I can use as a phone still but be able to control when it does stuff and what stuff it does. I don't need a portable computer if it's just gonna spy on me all the time.


> Apple has basically created an API for government surveillance and can now say "see, we don't collect the data. We don't even know what big brother wants to know about you."

This is straight-up a lie. Comparing hashes is not anything crazy, nor is it an API for big brother.


Except the list of hashes is a black box. Today it's child porn, tomorrow it's anti-government or something similar. It's a slippery slope that's only going in one direction.


Agreed, but as of today we are at the top of the slope. Maybe we slip, maybe we don't.

As implemented, I agree with Apple.


Where there is a slope, riders will come. The only way to prevent people from riding the slope is to never build the slope in the first place.


“Anti-government” is absurdly vague. Not how this system works.


> “Anti-government” is absurdly vague.

That's the point. It's not a list open to the public; it's secret and controlled by a few. It can contain whatever they want it to contain.

> Not how this system works.

That's absurdly naive.


The only naiveté is self-inflicted, by reading clickbait articles instead of reading how the system actually works. When people start worrying about Apple scanning for abstract subjects like "anti-government", they're worrying about some hypothetical system, not the one that's been built.


No, we’re worried about the precedent it sets.


Scanning for CSAM on a server was a precedent for scanning on a device. Everything is a precedent for anything.


Your entire iPhone is a black box. If you don't trust Apple, there are so many points in the chain where they could backdoor you. The fact they are out in front of this is a good thing.

Also, I'm willing to sacrifice a tiny bit of privacy to stop child porn collections.


This[1] comment explains it very well. It just cements why I moved back to Linux on my laptop, and will be moving to something running open source software on my phone, and open hardware on all my devices.

[1]: https://news.ycombinator.com/item?id=28159353


But Apple doesn't report to authorities based only on a hash match.


They hash the files on your device.

The hashes are then sent out for comparison against the hash table.

Right now they are only comparing against the CSAM table, but they can add others later.

Also, how are the hashes sent? Can they be intercepted?


Yes, but there is a manual review step. A hash match does not automatically generate a report to the authorities.

If the Apple reviewer looks at the images that were flagged and doesn’t see CSAM, they don’t report it.


A human at Apple reviews thumbnails of the flagged images to see if they are actually CSAM. You do not get reported just because of a hash match.
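
As described in the comments above, the flow is roughly: count matches per account, do nothing below a threshold, and above it route the flagged thumbnails to a human before any report goes out. A simplified sketch; the threshold value is a placeholder, and `looks_like_csam` stands in for the human reviewer:

    MATCH_THRESHOLD = 30  # illustrative placeholder, not an official figure

    def handle_account(match_count, flagged_thumbnails, looks_like_csam):
        # Below the threshold, nothing is decryptable or reviewable.
        if match_count < MATCH_THRESHOLD:
            return "no action"
        # Above it, a human checks the flagged thumbnails.
        if any(looks_like_csam(t) for t in flagged_thumbnails):
            return "report to NCMEC"
        return "false positive: no report"

Note that everything below the threshold depends on the cryptography working as claimed, and everything above it depends on the reviewer and on what's actually in the hash list.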


>Comparing hashes is not anything crazy.

If they just hash photos, and if they only use the NCMEC database and if they only look at photos being uploaded and if the government has no access, then it is not anything crazy. But by implementing the system they invite scope creep at every one of those ifs.


This is exactly how they describe it. If it changes, then I’ll change my position.


The whole feature could be killed with one loaded question from a journalist: "Are there really that many pedophiles on your platform?" That should be the default response whenever a government or corporation decides to "protect children" in an Orwellian way.


I'm pretty sure pedophiles will be looking for another platform following this news. Now any platform that doesn't scan for CSAM will be associated with it.


Why is everyone assuming this is Apple's choice rather than the best option they've been handed by the USG? Google has scanned for years, and Apple has clearly come under pressure on other fronts.


Between this and chat control, the future suddenly looks very bleak for those who care about owning their privacy.


Hahah, if you ask Tim Cook he probably would tell you that there are a lot of cracks in the iPhone foundation. Alas, that's why this guy was paid that much...

But anyway, media being media, it's not bad to waste a few minutes feeling good about oneself for being able to form an opinion about such a huge and powerful corporation.


Counterpoint: If E2EE is, in practical terms, unbreakable, and if the endpoints are hardened, perhaps Apple's client-side scanning initiative is simply an expected result of frustration by law enforcement in criminal investigations. A law enforcement person told me that the CSAM issue is more widespread than the average person might expect.


This is how a micro-Communism starts, so far within one strong monopoly. Two things need to happen right away: new hands-off electronic data protection legislation, and Apple's monopoly dismantled. Not to forget the right to repair for ordinary users.


Micro "Communism"?! You are aware of what communism means...

https://en.wikipedia.org/wiki/Communism

So no - not communism.

Fascism? Authoritarianism? Perhaps?

Agree we need right to repair for ordinary users and also agree we need data protection/privacy legislation.


I hope people who are mad at Apple and who also criticize people for using the ‘slippery slope fallacy’ reassess their criticisms of those that try to predict negative outcomes. Slippery slope means something different than you think it means.


I've heard some people saying we ought to try to escalate our definition of what "end to end encryption" means, to make the definition exclude platforms where your data at rest is no longer secured by you, the end user.

It's an interesting argument.

My gut reaction is that end to end is still a good definition as is, and that we shouldn't update it. This highlights a separate and very real issue: it illustrates that encryption still interacts with multiple other layers of the platform, which are themselves security issues.

But we should try to rally around this. There's no catchy pro-user term like "encryption" here, or "end to end". Maybe "omniversal encryption", to imply across all time and space, or "end to end, you to me". Finding the right name for this weakness, for what we want and will not accept anything short of: that's the flag we should find and fly.


What if pedophiles and child molesters weren't demonized, but instead treated as victims of some mental disease that requires rehabilitation and treatment so they can be normal members of society?

Just kidding, hunt them down and send them to hell.



