EFF Joins Global Coalition Asking Apple CEO Tim Cook to Stop Phone-Scanning (eff.org)
549 points by DiabloD3 on Aug 21, 2021 | 209 comments



> [...] and the parental notification system is a shift away from strong end-to-end encryption.

That particular statement doesn't make much sense to me. The parental notification system is just a frontend action (one of many, like link previews and such). What does that have to do with iMessage's encryption?

I can see an argument about a shift away from privacy (though it only pertains to minors under 13 receiving sexually explicit images). But I think it's misleading to say that it relates to iMessage's encryption in any way.


It is not about the encryption, it is about what an "end" is.

Generally, we consider the "end" to be the end user. If someone else can see the message along the way, it is not end-to-end anymore from a user perspective, even if it is from a network perspective. And Apple has complete control over your device through software updates. So whether the leak happens on your device or on the network is a mostly meaningless technical detail.


But the message isn't being sent anywhere. The parental notification stuff doesn't do that.

And obviously both the sender and receiver software processes the unencrypted messages in a variety of ways.


As a parent, I consider myself the "end user" of my child's device, not my child. That I might be looped in on any messages sent or received by this device is not at all a leak, it's a convenience—much like how I can receive messages sent to me on my phone and my laptop.


When someone sends a message to your child, the message is for your child, not for you. Your child is the "end point" and you are an eavesdropper.

This is a case where I think it is justified, as long as your child is a minor and you are his legal guardian. But as acceptable as it is, you are still a spy and the app is spyware.

The fear, justified or not, is that the same feature that can be used for parental control can also be used on adults without their consent.

Personally, I think that right now the fears are overblown, but I also think that Apple got the backlash they deserved. Privacy is not to be taken lightly; it is something that both protects freedom and helps criminals, and it is a strong political stance. It is not just a marketing tool against Google.


I'm a parent of young ones and I wouldn't want control of my kids' devices or messages. Good parenting is about preparing your kid for the world, not jumping in whenever there is a fire. Frankly, I find this attitude very creepy. At what age do you "turn off" access? If you can't handle your kid being online, then don't pay for their device or phone plan.


It’s also about building trust and a relationship with your child.

I also think this behavior is creepy and wouldn't do it myself. But interestingly, it is preparing them for a world in which there really are no secrets anymore.


If the world is full of goblins and demons you don't become a demon to "prepare them". You are their rock for as long as you're around, and when you're gone they'll be the rock for their children or other friends and family.


> But as acceptable as it is, you are still a spy and the app is spyware.

I vehemently disagree that a feature could be classed as spyware if it clearly and unambiguously declares exactly what it's going to do prior to offering a choice of whether or not to do it. By that logic, the mere presence of a "report phishing" button in any cloud email service would be sufficient to declare it spyware.

But fine, whatever. You're welcome to call any form of parental oversight of a child below the age of thirteen "spyware" if you like, so long as you accept that it's entirely my choice as a parent. This feature isn't on by default. If your opinions as a parent are different, don't turn it on. It's not your place to deny me access to this opt-in tool which can help keep my child safe.


When I was a child, my mother refused to buy me any computers. I had to earn money mowing lawns and raking leaves to buy a laptop.

Once I had a laptop, she told me she wished there was software she could install on all of her children's computers that would allow her to view what each of us was doing on her TV.

I considered that spying - even if it was intentional on her part. I detested how few rights I had as a child.


> I detested how few rights I had as a child.

That is called being a teenager, I think.


Minors deserve no privacy? Is that what you're saying?


If you're talking about teenagers, I would say there's valid debate to be had. Minors, hell no. There's no debate. Parents are entitled to know—I'd personally go further and say have a responsibility to know—what's going on in their life.


Teaching kids that they have no right to privacy is a great way to raise people who don't respect the privacy of others.

Parents don't have an inherent right to know every single thing in a child's life. By that token, if a child is gay, and their parents are rabid anti-gay bigots, they still have a "right" to know whether their child is seeking out information relevant to LGBT teens. I'm sorry, but I wholeheartedly reject that premise.


Teenagers are minors. Doesn't sound like a very trusting environment.


I guess there's a language difference. I'm Australian.


The dictionary defines a minor as "a person under the age of full legal responsibility" which I think teenagers would still fall under, even in Australia?

Understandably the term might be used differently or more loosely elsewhere.


I'm also Australian but I know what "minor" means.


I would have you know that in some countries this would be illegal. A child has a right to private conversation just like any other person in many law systems.

And yes, it is obviously spyware.

The report phishing analogy is also completely wrong because it does not include any surveillance. You report the phishing and send the message to some authority. It is the same as going to the police after receiving a threatening mail, whereas the spyware is like the postal service opening each of your letters.


You appear to be confused, or conflating this with something else. None of that describes the system Apple has implemented for iMessage. If the child doesn't want the adult to see anything, that's a choice available to the child.


Once given a device, a child's technical literacy will quickly surpass that of their guardian, and as such, any form of 'protection' will be just as quickly circumvented. There is no technological panacea that will keep your child safe, because that is not how predation happens in the real world.


I can't agree with this more. As a child, my technology skills were several orders of magnitude greater than my parents' -- and the horrible spyware the school installed on its computers was a red rag saying "crack me" to a rapidly growing bull. These measures, arguably well-intentioned though they are, fundamentally teach kids that they don't own their devices and that it's okay to spy on people. Good parenting is fundamentally different from that.


Somewhat related.

In my HS computer science class, all it took was asking the teacher to use her computer to format a floppy because "my floppy drive was broken" and I needed a new disk. She agreed, and not 15 minutes later I had my workstation booting with no management or oversight.


This was certainly our experience when we were children. Near universal, I'd wager. But I don't think it's nearly as true for parents born in the eighties and nineties.

And to the extent that it might be true, it's still not a valid excuse for objecting to the existence of any such tools.


Not true today. Thanks to locked-down and dumbed-down smartphones, many parents of the PC era are actually more tech-inclined than their children.


The parent paid for the device, don't they then own it? Or can kids sign up for phone plans and buy $1,200 devices these days? Normally, if I buy a device, I want control. If I choose to get alerted to porn being sent to my 10-year-old, that should be permitted.

I don't care if the sender hasn't given consent; I don't give them consent to send porn or naked photos to my kids!


Who pays doesn't matter. Unchecked and applied to adults, that line of thinking gives rise to terrible things, including slavery.

Children are a different story, they are not free, they are under the control and responsibility of their legal guardians. In fact, the status of a child and the status of a slave are not so different, that's why there are very strict laws regarding child labor, because otherwise it could easily end up in actual slavery.

Here the problem is not parental control, that is fine; the problem is that the backdoor is now in place and could potentially be used against adults, with all the nasty implications. Again, I think the fears are overblown, but I understand that it makes people uncomfortable, especially since they may have bought an overpriced device on the premise of privacy. Most of the most egregious privacy violations start with "think of the children", even more than with terrorists and covid.


This is just such a crazy position.

If I buy a car it doesn't matter that I own it? If I rent it to you I can't set the terms of that rental?

This is just nuts from the EFF.


There is an enormous difference between a situation where both parties are independent adults, and where the parties are a parent/guardian and their pre-teen child. Trying to make analogies between these is fraught with error.


You have the fructus but no longer the usus and associated rights. If you own an apartment it does not give you a right to set up webcams to look at the tenants.


You do if those "tenants" are minors.

See: baby monitors.


It doesn't matter how much the device cost or how much you want control or how justified your access is, if you are able to intercept private messages sent between two other people then those messages can't reasonably be considered encrypted end-to-end


How does this work if you buy your spouse/SO a device? Or a parent/grandparent? Even a gift for a friend?


You are making this out to be more complicated than it is. A gift is a gift. A device "assigned" to someone by an administrator (e.g. a work laptop) isn't a gift. The gift of a device with administrative strings attached is just that.

Furthermore, by law the parent still effectively owns anything "given" to a pre-teen child. If the parent wants to take the device away from them and sell it, they're well within their rights to do so.


So now you’re the IT administrator of your family and you’re assigning devices to your employees? It’s a gift when it’s your spouse but not when it’s your child?

> Furthermore, by law the parent still effectively owns anything "given" to a pre-teen child.

Now that’s very broad. Which country/state/etc.? I think if you actually looked instead of assuming, you’d find that people under 18 can definitely own things.


If you buy the device, you own the device. Software too. It's just that the OS can disable itself for any number of reasons.


To protect the children, should Apple require verification of parental status? I.e., upload a birth certificate, photos of parents and child, whatever is necessary to establish true legal guardianship.

This surveillance is a huge can of worms. Apple should stay far away.


The EFF's challenge is when people abuse the system (i.e. a partner requiring their partner to register as a child), or abusive parents.

From the letter: "Moreover, the system Apple has developed assumes that the "parent" and "child" accounts involved actually belong to an adult who is the parent of a child, and that those individuals have a healthy relationship. This may not always be the case; an abusive adult may be the organiser of the account, and the consequences of parental notification could threaten the child’s safety and well being. LGBTQ+ youths on family accounts with unsympathetic parents are particularly at risk."

Should Apple require verification of parental status? I.e., upload a birth certificate, photos of parents and child, whatever is necessary to establish true legal guardianship. Still doesn't protect the children from abusive parents, though.

These features need to be killed off yesterday.


It's very creepy that you want to eavesdrop on your child in that way, in my opinion. This is the micro version of governments wanting ubiquitous surveillance.


Partially agree. There was some weirdo on another Hacker News thread who monitors their 17-year-old daughter's phone and has no plans to stop until Apple removes the ability on her 18th birthday. Crazy. That said, I think it would be reasonable to check up on your 13-year-old to ensure there are no pedophiles messaging them, etc. I'd probably draw the line at 14, after having an open chat about risks, paedos, impersonation, social engineering, etc.


Do I want to eavesdrop on my pre-teen child? Hell no.

Am I entitled to know when my pre-teen child has received a message that contains a potentially explicit attachment, or text which matches the traits of child grooming? Hell yes.


> And Apple has complete control over your device through software updates.

This has been true of all operating systems with integrated software updates since the advent of software updates. In this respect, nothing has changed for over a decade.


They went into this in more detail in 2019: https://www.eff.org/deeplinks/2019/11/why-adding-client-side...


EFF would probably argue that technologies like safe browsing are also a shift away from E2E encryption. They were strongly against email spam protection in the 90s for this reason.


You mean when they said blanket banning email lists is bad and shouldn't be done? Because yeah that's still true.


That was in the 2000s and a different issue and not opposed on privacy grounds. There they argued they were defending freedom of speech. In the 90s the argument was against early probabilistic models.


I remember those arguments against spam filtering!

Now we all accept it as annoying but necessary?

My guess is this may meet a similar fate.


Framing the EFF's position as being against browser safety or spam protection is disingenuous.


I didn’t say they were against either. I said their opposition to iMessage parental control features as a change to E2E encryption translates directly to other messaging client safety features that may reveal the content of communication to a third party. (Nobody in the discourse seems to take their FUD about iMessage seriously, given that the focus has largely been on known-CSAM detection.)

In particular, you can be concerned about E2E encryption being compromised while still believing parents should have some transparency over whether their kids are receiving explicit images. Not clear EFF offers a solution that meets both, but I did not say they are opposed to the latter.


Does Tim Cook have a choice here?

I would be surprised to hear that the genesis of this idea was inside Apple, rather than one or more governments pressuring Apple to add this functionality for them.

It is also likely they even suggested that Apple should market this as anti-pedo tech to receive the least pushback from users.


If Apple is performing searches of users' private data due to pressure or incentive from the government, that would make Apple an agent of the government from the perspective of the Fourth Amendment. As such, these warrantless searches would be unlawful.

If what you suggest were true, we should be even more angry with Apple: it would mean that rather than just lawfully invading their users' privacy, they were a participant in a conspiracy to violate the constitutional rights of hundreds of millions of Americans.


The time to be angry with Apple was years ago, when they launched their false marketing campaign claiming privacy on their closed devices. A lot of people fell for it, and were happy to believe whatever they said. All the while they have been two-face-timing by turning over user data to governments (including the US) and putting user data on Chinese servers anyway, with the highest data turnover rates, actually. Everyone was happy to turn a blind eye to these happenings as long as it didn't affect them.

We should be angry with _ourselves_. What's happening now is that this has hit closer to a lot more people, who are now dissecting every detail, blaming others, performing mental gymnastics, and launching 'open letters' so that their brand identity and perceptions aren't proven wrong. Convincing them to roll back their recent changes will not somehow make Apple devices private, when it was never private to begin with.


What can I, as an individual, do to protect my digital privacy then? Would that mean ditching the Apple ecosystem, and going full linux laptop & phone? As much as I would love to buy a pinephone, they just don't seem like a truly viable alternative...


There's been a few relevant threads on how to switch. I have these two handy since I started them:

Non-surveillance phones: https://news.ycombinator.com/item?id=28164208

Non-surveillance laptops: https://news.ycombinator.com/item?id=28216287


Thank you! These are both great posts.


I've seen this argument before. Yes, perhaps I could have switched my OS back then.

I don't really buy the implication though. "The time to be angry is years ago"; "We should be angry with ourselves". If these are true, what are you saying? That we should suck it up and be quiet? That we missed our chance and need to deal with it?

Fairly sure this is one of those logical fallacies used on young children who forgot to use the bathroom before a long road trip.


Americans, Chinese, Europeans, everyone. This is an equal-opportunity privacy rights violation. Investigatory and spy agencies around the world must be thrilled that this is moving forward.

Of course, we’ll accept it and the dullest of us will even cheer it as “doing the right thing” while stating that they “have nothing to hide”.


The latest episode of The Daily podcast [0] from The New York Times said that Apple executives were told by members of Congress at a hearing that if they didn't do something about CSAM on their platform the federal government would force them through legislation. And it's not a completely idle threat; just look at the proposed EARN-IT Act of 2020 [1], which would pretty much outlaw end-to-end encrypted services without a law enforcement backdoor.

[0] https://www.nytimes.com/2021/08/20/podcasts/the-daily/apple-...

[1] https://en.m.wikipedia.org/wiki/EARN_IT_Act_of_2020


Government by threatened legislation is much worse than government by actual legislation. Legislation is public, legislation can be opposed, legislation can be reviewed by the courts, and so forth. Allowing yourself (and your users) to be controlled by threats of legislation is allowing democracy to be discarded.


Wow good take. Appreciate this perspective. Let the system do its work!


Be that as it may, no proof has been presented that Congress would mandate client-side scanning on users' devices.

But it still doesn't change that Apple needs to stop on-device scanning.


Sure there is a choice.

They could open source the OS allowing independent auditing and independent privacy focused builds to emerge so people have a way to opt out, as they have on Android via projects like CalyxOS.

That is of course if privacy was actually a serious Apple objective.


Apple doesn't have a history of complying with every request from governments and police. They care more about their bottom line, IMHO. So I was surprised by this "feature". I don't see how it sells more phones for them. Actually, it could scare away some customers because of false positives. Everybody with small kids risks a match on some photos of their children.

If Android also implements something like that I could end up with a Linux phone as my main phone and an Android one at home for the 2FA of banks and other mandatory apps. No WhatsApp but I'll manage.


I wrote a message explaining why I won't be posting any more neuralhash preimages and an index/summary of my posts on the subject:

https://news.ycombinator.com/item?id=28261147

Unfortunately it got flagged after getting 41 upvotes. :(


It may have been, but not anymore. Currently on the front page and not flagged!

edit: and now it's disappeared...


Thank you for all that you did here.


> As we’ve explained in Deeplinks blog posts, Apple’s planned phone-scanning system opens the door to broader abuses. It decreases privacy for all iCloud photo users, and the parental notification system is a shift away from strong end-to-end encryption. It will tempt liberal democratic regimes to increase surveillance, and likely bring even great pressures from regimes that already have online censorship ensconced in law.

What’s clear is the potential for future abuse mandated by various governments (though that could easily be the case even without all these CSAM measures... not a new threat). However, the other things are erroneously lumped in with it. It only weakens privacy for iCloud photos if you have CP or photos in a CP database, and it doesn’t prevent E2E with texting… it just gives an additional parental control on top of the numerous existing parental controls (with which kids have no privacy already), and is totally unrelated to the CSAM effort.

I admire the EFF in many ways, but I wish they’d be more exact here given their access to expert knowledge.


> (though that could easily be the same even without all these CSAM measures.. not a new threat)

Apple has historically avoided being pressured by governments to allow this kind of surveillance by arguing they can't be forced to create functionality that doesn't exist, or hand over information they don't have. It's the argument they made in the San Bernardino case.

If they release this, it'll be much harder to avoid giving governments that existing ability for other, non-CSAM uses.


Ehh. I'd argue Apple has a mixed record. It's well known that iCloud backups are unencrypted and often handed over to authorities. In Jan 2020 [1], it was reported they planned to encrypt backups but dropped the rollout due to pressure from the FBI. I'm surprised they don't get called out, because while the difference between this and San Bernardino makes sense to me from a technical standpoint, from a practical standpoint and to the layman it seems hypocritical.

In this case, I actually kind of buy Apple's argument that this will make it harder to kowtow to governments (assuming they encrypt iCloud photos at some point). Right now they can scan iCloud data and hand over photos and accounts without users' knowledge. They can do that without informing users (like they currently do with backups). With this in place, the database and logic ship with the OS. They would have to implement and ship any changes worldwide, and users would have to install that update. The alternative is Apple silently making a server-side change affecting specific customers or countries. With that said, I do understand people's concern over the change.

[1] https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


iCloud backups _are_ encrypted, but have an HSM-escrowed key process on court order.

People like to pretend crypto is always E2E or nothing, but escrowed keys do mean that the process of checking cloud-backed data has an auditable release process, that the keys to your data are outside the cloud-hosted infrastructure, and that there is no support for blanket data scanning.

> assuming they encrypt iCloud photos at some point

iCloud photos are already encrypted. See above.
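
A toy illustration of the escrow idea described above, using Python's `cryptography` package: one symmetric key protects the backup, and that key is wrapped separately for the user and for an escrow key that would sit in the HSM and only be used under an audited, court-ordered release. Everything here is illustrative; it is not Apple's implementation.

    # Toy key-escrow sketch: one data key, wrapped twice -- once for the user,
    # once for an HSM-held escrow key used only under a court-ordered release.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # lives in the HSM

    data_key = Fernet.generate_key()                      # symmetric key that encrypts the backup
    backup = Fernet(data_key).encrypt(b"photo library")

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_for_user = user_key.public_key().encrypt(data_key, oaep)
    wrapped_for_escrow = escrow_key.public_key().encrypt(data_key, oaep)  # releasable on court order

    # Escrowed release path: the HSM unwraps the data key; the cloud never holds it in the clear.
    recovered = Fernet(escrow_key.decrypt(wrapped_for_escrow, oaep)).decrypt(backup)
    assert recovered == b"photo library"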


The request from the FBI in the San Bernardino case was to change a passcode limit constant and retry timeouts. Those are about as trivial to implement as any of the convoluted government coercion database attacks against CSAM detection being proposed here.


The difference here is that Apple would have had to develop a new feature for them, test it, and waste millions on lawyers to protect themselves from accusations of tampering with the evidence (which a software update definitely is, and who knows what the FBI wanted in that software update, maybe even to insert a fake SMS into the SMS database, or many other things a good defense lawyer could bring to the jury).

Here it's different... let's say there's a new WikiLeaks, photos of secret documents... and again, first a few journalists get the data and start slowly writing articles, and the FBI just adds the hashes to the database. Then they can find out who has the photos, with metadata even who had them first, before the journalist, and they can find the leak.


> FBI just adds the hashes to the database

This is the crux of the argument right here, and I have yet to see a detailed description of how the FBI would go about doing that.

The most detail I’ve seen is in this thread, which suggests that it would be difficult for the FBI to do it, or at least do it more than once.

https://twitter.com/pwnallthethings/status/14248736290037022...

Has anyone seen something like this in the other direction? Something that walks through “the FBI would do this, then this, etc. and now they’ve coopted Apple’s system”?


But the FBI can’t just add the hashes to the db. That’s why it’s the intersection of two dbs in two jurisdictions… to prevent exactly that kind of attack. Then they need to pass a human reviewer as well.


Five Eyes


It hasn't been announced yet who else's db will be used. If it's not a 'five eyes' member... then what?


You are suggesting that another country could launder the request on behalf of the USA in order to circumvent the 4th Amendment. Okay then, let's play that out.

Australia contacts Apple and demands they augment CSAM detection so that every iPhone in the USA is now scanning for image hashes supplied by Australia.

Apple says no.

End of hypothetical.


https://www.lawfareblog.com/legal-tetris-and-fbis-anom-progr...

“ The Australian Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 (TOLA) allows government agencies to issue “technical assistance” and “technical capability” notices to providers of communications services. The notices require that the providers give the authorities help in conducting criminal enforcement intercepts, and that they make changes in their systems to ensure that they can give that help.”

“In any event, the FBI chose a curiously roundabout way of getting access to ANOM messages. For ANOM devices operating outside of the United States, “an encrypted [blind carbon copy or BCC]” of each message that a user sent was transmitted to a server located outside of the United States, which then decrypted and reencrypted the message with an encryption key known to the FBI. Those reencrypted messages were sent to another server that was owned by the FBI, outside of the United States.

In the summer of 2019, the FBI started negotiating to build the legal structure that would make this technical architecture work. In essence, the FBI went looking for a third country that would host the BCC server and could lawfully accept all of the decrypted messages and send the copies to the FBI. As the affidavit notes, “Unlike the Australian beta test, the third country would not review the content in the first instance.” This would have been a fascinating negotiation. Both participants wanted to make criminal cases and avoid privacy scandals. The U.S. would want to be sure that the third country had full legal authority to intercept the contents of every ANOM message, and that the country was also willing to share the full ANOM take with the U.S. in something like real time.”


None of that explains how Australia could ever successfully coerce Apple into performing widespread surveillance of US citizens. The fact that Australia is a "five eyes" country doesn't make it any more plausible than if the demand came from China or Russia.


It explains how Australia could coerce Apple into performing widespread surveillance of Aus citizens. Then it’s trivial for the US to coerce Apple into switching that functionality on in the US.


Any widespread warrantless surveillance of the private physical property of US citizens, performed at the direction of the US Government, would be an absolute clear-cut unambiguous breach of the 4th Amendment.

I'm not saying the US Government wouldn't care that it's unconstitutional—we know they'd ignore the constitution when they can get away with it. But they'd also have to convince Apple's lawyers to go along with unconstitutional surveillance. Don't you think Apple would be itching for another opportunity to prove their strength against a government? Especially now? Apple would love nothing more than to have more opportunities like they got with the San Bernardino iPhone.


You haven't heard of how GCHQ would happily hand over intelligence they had on U.S. Citizens?

I know for a fact there are efforts at creating fusion centers across national boundaries. The question you need to ask is not if but what will make you interesting enough to mobilize against.

Thou shalt not build the effing Panopticon, nor its predecessors. Is that so hard not to do?


I've no doubt that GCHQ would hand whatever they like to whomever they like. But why would Apple comply with a demand from the GCHQ to spy on US citizens?


They will come up with some kind of fuckery to get around the rules, like some crypto ridiculousness that makes files that match a certain hash go to servers in a different jurisdiction? Far-fetched, I know...


The passcode limit constant is enforced by the secure enclave. I don't know if it's been proven that the secure enclave component of the device can be changed without the device being unlocked. I'm not even sure it's possible for any operating system updates to occur on a device which is locked.


Not on the iPhone 5C which was the phone used by the terrorist and did not have an SE. Locked iPhones can be updated from DFU, but I think SE firmware can’t.


The technical feasibility of the FBI’s request was never the question, nor the basis of Apple’s objection.

Of course it’s even easier for Apple to say “no” to the government if they literally cannot do what the government is asking.

That’s the basis of the EFF’s objection to Apple’s plans: they think that by implementing this CSAM system, Apple will turn an impossibility into a possibility.


We were never in a world where Apple was unable to do what the government was asking. Nothing impossible has been made possible, and Apple has made a stand against small changes - like changing constants - before.


It's not "impossibility". It's cost. The FBI cannot force Apple to do free work to circumvent security. Can they force them to add one more hash? Is that work?


I’m not a lawyer, but at the very least Apple would have to train its reviewers to recognize new image types, redact or update all statements where they explicitly drew a line in the sand about what the feature searched for, likely produce a custom build of iOS, update its knowledge base to distinguish between the root hash of the CSAM database and FBI Imagery database, perhaps re-present consent, add code to target this new database (or subset of the full database) to US users, etc. I’m not a product manager either, but superficially simple changes are often quite complex!


I think once Apple has established a pipeline for adding content in a known pattern and at a known cost, it lowers the legal bar (although I am not a lawyer). The government can take your stuff, they just have to pay you for it. I'm not sure if adding the N+1 image to the next dataset is similar to that or not.


> Can [the FBI] force them to add one more hash?

No. Because the hash causes searches to be performed on citizens' private property, this would be an unambiguous, indisputable, clear-cut violation of the 4th Amendment.


What if they ask nicely?


Not really.

Apple simply tells the DOJ that if any non-CSAM content is added surreptitiously to the DB, Apple will drop CSAM scanning. Also, if the DOJ makes any request to scan for non-CSAM by court order or warrant, Apple will likewise drop support for the technology.

Apple is making the good faith effort here. If the DOJ makes a bad faith effort, Apple is in no way required to continue to participate.


In this scenario, how would the first criterion be met? "Content is added surreptitiously to the DB"?

From my understanding, Apple blindly accepts the list of hashes from their trusted sources.


Apple will see the results of the matched images however. Given that Apple believes their algorithm will generate an impossibly small quantity of false positives, if they start to see political images, sensitive documents etc. show up, that tips them off the DB is corrupt.

All of these processes will be subject to discovery in the very first trial generated by this technology. If there is evidence that governments are polluting the DB with non-CSAM content, then warrants issued on evidence from the DB can be overturned. Prosecutors in DOJ and FBI etc. have a strong incentive to ensure that DB doesn't turn into a free-for-all dragnet because it could work to invalidate their ability to use evidence from legitimate CSAM cases.


> It only weakens privacy for iCloud photos if you have CP or photos in a CP database

Or people who have photos that hash the same as CP.


Or possess tampered photos that were engineered to be a hash collision.


You’d have to not only have over 30 hash collisions, but also have it collide with another secret hash function, and then also have a human look at it and agree it’s CP.

So what’s the actual realistic issue here? This keeps getting thrown around as if it’s likely, yet not only are there numerous steps against this in the Apple chain, this would already be a huge issue with Dropbox, Facebook, Microsoft, Google, etc who do CP scanning according to all of the comments on HN.
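
For what it's worth, that chain of checks can be written down as a toy sketch. Nothing below is Apple's actual code: hashlib stands in for both perceptual hash functions, the databases are plain sets, and human review is reduced to a callback.

    import hashlib

    MATCH_THRESHOLD = 30  # Apple's publicly stated threshold of matching images

    def perceptual_hash(data: bytes) -> str:
        # hashlib stands in for NeuralHash; the real thing is a similarity-preserving
        # perceptual hash, not a cryptographic one
        return hashlib.sha256(data).hexdigest()

    def should_escalate(photos, primary_db, secondary_db, reviewer_confirms):
        # 1) on-device match against the blinded CSAM hash database
        matches = [p for p in photos if perceptual_hash(p) in primary_db]
        if len(matches) < MATCH_THRESHOLD:
            return False  # below threshold: the vouchers stay sealed
        # 2) server-side check against an independent, secret perceptual hash
        confirmed = [p for p in matches if perceptual_hash(b"v2:" + p) in secondary_db]
        if len(confirmed) < MATCH_THRESHOLD:
            return False
        # 3) human review of low-resolution derivatives before anything is reported
        return all(reviewer_confirms(p) for p in confirmed)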


> You’d have to not only have over 30 hash collisions

That's trivial. If the attacker can get one image onto your device they can get several.

It's very easy to construct preimages for Apple's neural hash function, including fairly good looking ones (e.g. https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue... )
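
For anyone wondering how those preimages are produced: the approach in the linked issue amounts to gradient descent on an unrelated image until the model's output bits match a chosen target hash. A rough sketch, assuming you have the extracted NeuralHash network loaded as a differentiable PyTorch module (the `model` object is an assumption here, not an API Apple provides):

    import torch

    def make_collision(model, source_img, target_img, steps=1000, lr=1e-2):
        # hash bits we want to reproduce, taken from the target image
        target_bits = torch.sign(model(target_img)).detach()
        adv = source_img.clone().requires_grad_(True)   # start from an unrelated image
        opt = torch.optim.Adam([adv], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model(adv)
            # hinge loss pushes each logit toward the sign of the target bit
            loss = torch.relu(1.0 - logits * target_bits).mean()
            loss.backward()
            opt.step()
            adv.data.clamp_(0.0, 1.0)                   # keep pixel values valid
        return adv.detach()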

> collide with another secret hash function

The supposed other 'secret' hash function cannot be secret from the state actors generating the databases.

Also, if it has a similar structure/training, it's not that unlikely that the same images would collide by chance.

> also have a human look at it and agree it’s CP

That's straightforward: simply use nude or pornographic images which look like they could be of children, or ones where without context you can't tell. It's a felony for them to fail to report child porn if they see it in review, and the NCMEC guidance tells people to report when in doubt.

Besides, once someone else has looked at your pictures your privacy has been violated.


If this really was such a problem, then as I said, we’d have been getting reports of this over the past 10+ years it’s already been in place at big cloud providers. So where is all of this ruining of peoples lives by uploading CP on their devices?

Also if you’re a gov actor trying to frame someone, why bother with a pre-image when you could put the real images on it?

None of that is new today — all that’s new is Apple is joining the effort to scan for CSAM, and instead of doing it on server they’re doing it on device right before you upload in a way that attempts to be more secure and private than other efforts.


What do you mean? People are arrested all the time for having CP on their machines, I see it in the news frequently. Impossible to know how many of them could have just been framed, no one is giving the benefit of the doubt to an accused pedo. And it never goes to trial due to the possibility of enormous prison sentences. If you’re innocent would you risk 100 years in federal prison going to trial or plead guilty and only face a few years?


Many people seem to miss that the automated scanning makes framing much more effective.

Say I sneak (pseudo-)child porn onto your device. How do I get the authorities to search you without potentially implicating myself? An anonymous tipline call is not likely to actually trigger a search.

With automated mass scanning that problem is solved: All users will be searched.


> it’s already been in place at big cloud providers.

I think the big cloud providers scanning your private data is of dubious ethics, but it's like complaining that your mail carrier is reading the content of your postcards.

So long as you send unencrypted data to a third party your privacy will be limited, regardless of what our laws or norms say. People usually know this, and so many do avoid uploading things to these places or encrypt what they upload.

When it's your device itself doing the scanning, ahead of any encryption, that protection goes out the window.

Sometimes the same violation of privacy is made more acceptable by a clear boundary that you can stay on one side of to protect your privacy. Your devices vs. someone else's devices is the clearest historical boundary in this case, and Apple is breaking it.

I don't think it's unreasonable to expect the erosion of the private boundary to have an effect. And we can't say that the scanning by providers does nothing; the convictions prove otherwise. We can only hope that all those convictions were deserved; the nature of this crime is such that it's hard to prove someone wasn't framed.


> So where is all of this ruining of peoples lives by uploading CP on their devices?

It's already happening. Except we just choose to SWAT people instead, since it's faster, easier, and there's effectively no liability on the behalf of the caller.


> So where is all of this ruining of peoples lives by uploading CP on their devices?

Once the capability is in place on everyone's devices, how are we supposed to guarantee it will never be used maliciously? Just say no to the capability.

> Also if you’re a gov actor trying to frame someone, why bother with a pre-image when you could put the real images on it?

Because the capability for this is now built-in in everyone's phones.


> That's trivial. If the attacker can get one image onto your device they can get several.

At which point everything you brought up about attacks on the hash function is completely irrelevant because the attacker can put actual child porn from the database on your device.


The apple system is a dangerous surveillance apparatus at many levels. The fact that I pointed out one element was broken in a post doesn't mean that I don't consider others broken.

My primary concern about its ethics has always been the breach of your device's obligation to act faithfully as your agent. My secondary concern was the use of strong cryptography to protect Apple and its sources from accountability. Unfortunately, the broken hash function means that even if they weren't using crypto to conceal the database, it wouldn't create accountability.

Attacks on the hash-function are still relevant because:

1. The weak hash function allows state actors to deniably include non-child-porn images in their database and even get non-cooperating states to include those hashes too.

2. The attack is lower risk for the attacker if they never need to handle unlawful images themselves. E.g. they make a bunch of porn images into matches; if they get caught with them they just point to the lawful origin of the images, while the victim won't know where they came from.


"it just gives an additional parental control in addition to the many numerous parental controls (with them kids have no privacy already)"

Wait, sending data (of matching CP hashes) to law enforcement is parental control?


Yours is a very fair negative reaction. The EFF's letter includes a portion where it seems to be concerned only about alerting parents [1]. I think many parents would find that reasonable. However, the fact that the information will also be sent to the government [2] is just plainly an abuse of privacy, goes outside of the relationship between parent and child, and I do not imagine that parents would find that reasonable.

> [1] Moreover, the system Apple has developed assumes that the "parent" and "child" accounts involved actually belong to an adult who is the parent of a child, and that those individuals have a healthy relationship. This may not always be the case; an abusive adult may be the organiser of the account, and the consequences of parental notification could threaten the child’s safety and wellbeing. LGBTQ+ youths on family accounts with unsympathetic parents are particularly at risk. As a result of this change, iMessages will no longer provide confidentiality and privacy to those users through an end-to-end encrypted messaging system in which only the sender and intended recipients have access to the information sent.

> [2] When a preset threshold number of matches is met, it will disable the account and report the user and those images to authorities.


Your [1] and [2] refer to separate systems. The parental control does not send info to authorities.


Apple announced two separate things in one press release: a CSAM-scanning system, and a parental control that uses AI to attempt to detect nude pictures in iMessages and alert the parents. The latter system does not send any info to Apple or any authorities.


> ...if you have CP or photos in a CP database...

Which database? I get the impression that people think there is a singular repository for thoroughly vetted and highly controlled CP evidence submission. No such thing exists.


It’s an intersection between a US db and a not-yet-chosen non-US db, which will then have a human reviewer verify it’s CP before sending it off to the authorities.


> What more could one ask for?

An independent audit of both the secret secondary perceptual hashing algorithm and the chain-of-custody policies/compliance for the "US db" and the disconcertingly open-ended "not yet chosen non-US db"?


What's the point of that? If you don't trust Apple, why would you use Photos.app in the first place? They already have 100% control over that, and can spy as much as they want to. No need to go by way of the CSAM database, that would be absurd.


I've never been a customer of Apple but I'll try and imagine the experience... I might trust them to assemble hardware and write software for my consumer needs - but that doesn't mean I trust them to competently reason about me potentially being a pedo. That is only a small part of a much larger point, but it is reason enough alone.


Apple's responsibility ends with notifying law enforcement, at which point presumably there would be a subpoena for evidence and a trial.

The concern people have is logic on their phone snitching on them, with scenarios based on authoritarian regimes setting the baseline of what is scanned/reported.

Apple is not serving as judge, jury and executioner (unless there is an electric shock delivery system being added to the iPhone 13)


That could be the most obnoxious nitpick I've ever heard, do you honestly think anybody is worried about Apple doing anything beyond submitting a false report - and that you are being helpful by pointing out that they can only do that very thing? Do you think that might be why I said "competently reason about" instead of "send the Apple genius death squad"?


My point is that people seem to think that this feature somehow makes it easier for Apple to spy on you.

In fact it doesn’t make any difference at all, since they already have full access to everything you do on the phone, so if you don’t trust them you shouldn’t use an iPhone.


Do you ask for the same audit at Facebook, Google, Microsoft, Dropbox, and countless others who are already doing this and have been for years?

I do not share your same concern of some abused db _today_.


Neither Google nor Microsoft scan pictures people have on their devices running Android or Windows. I'm not sure how that's even applicable to Facebook and Dropbox.


You’re asking for an auditing chain presumably due to concerns about governments putting in photos of things that aren’t CP. Apple is only doing this for photos that get uploaded to iCloud, with this neural hash that gets uploaded with it. The actual verification of a CP match occurs on the server due to the hashes being blinded. So in many ways, it’s very similar to what these other cloud providers effectively do — search for CP matches on uploaded data.

If you’re concerned about non-CP being scanned for, then you should already be concerned about that with everyone else. Thus if you’re asking for auditing of Apple, then you should widen your request. If you do, then sure, I can understand that. If you’re not concerned about the existing system, then I think you’re being inconsistent.

Most people in comments to me seem to be inconsistent, and are being overly knee-jerk about this… myself included, initially.


Microsoft, Google, Facebook, Dropbox, etc. all scan photos which are cloud hosted for CSAM. Apple's new system scans only photos which are cloud hosted for CSAM.

This would be the source of the inconsistency - that the code to do scanning is on the client side of the uploader rather than the server side of the uploader does not change any abuse scenarios, but exclusively serves "slippery slope" arguments of full on-device surveillance.


>does not change any abuse scenarios

It absolutely does. When the scanning is server-side, companies can only scan the files you choose to send to their servers but with a client-side system in place, all it takes is a change in company policy and a few simple directory config changes to result in ANY file on your device being scanned.

Slippery slope or not, the client-side system massively lowers the bar for abuse and that cannot be ignored.


Except it's only occurring while simultaneously uploading to iCloud. The hash is metadata that goes along with it.


We should definitely start. Also, do those companies implement such subversive technology in devices they sell you?


I don't use those services for this reason. Apple was the last remaining option that didn't do surveillance.


Sure, but I don't expect it from third party cloud platforms - in the same way I wouldn't expect accountability from a garbage man who reports to the police after finding evidence of crime in my garbage. Apple is, for some insane reason, trying to establish the precedent that the contents of your Apple product are now part of the public space - where expectation of privacy isn't a thing.


But that isn’t true. This is only photos uploaded to iCloud.


And all it would take is a confidential change in policy before all photos are scanned, and the same "save the children" argument would be used to save face if the decision ever leaks.

"Pedophiles are taking pornographic photos of kids on their phones and then sharing them outside of iCloud. But wait, you don't want us to compare each photo you capture against ML models capable of identifying child porn? Why are you defending and siding with pedophiles?"

The only surefire way to avoid a slippery slope is to keep away from the edge.


[flagged]


Lie? I don't take kindly to such words, because you're ascribing malicious intent where there is none. Please check your tone... HN comments are about assuming the best in everyone.

This is only applying to photos uploaded to iCloud. Every single thing talks exactly about that, including the technical details: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

The hash matching is occurring on device, but only for iCloud photo images:

> Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the database of known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines whether there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.

Read that PDF. You'll see everything in it is designed for iCloud photos only.
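
To make the quoted description a bit more concrete, here is a heavily simplified sketch of the client-side flow. The real system uses private set intersection with elliptic-curve blinding and actually encrypts the payload; a keyed hash and plain JSON stand in for both here, and none of these names are Apple's.

    import hashlib, json
    from dataclasses import dataclass

    @dataclass
    class SafetyVoucher:
        blinded_id: str        # compared server-side against the blinded CSAM table
        sealed_payload: bytes  # NeuralHash + visual derivative; only openable past the threshold

    def neural_hash(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()   # stand-in for the perceptual hash

    def make_voucher(image_bytes: bytes, server_blinding_key: bytes) -> SafetyVoucher:
        h = neural_hash(image_bytes)
        blinded = hashlib.sha256(server_blinding_key + h.encode()).hexdigest()
        payload = json.dumps({"neural_hash": h, "derivative": "<low-res thumbnail>"}).encode()
        return SafetyVoucher(blinded_id=blinded, sealed_payload=payload)

    # The voucher is uploaded alongside the photo; nothing is evaluated or reported on-device.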


It is not a lie. The scanning is done on device, but photos are not scanned unless they are going to be uploaded to iCloud. Apple has explicitly stated this.


Oh, well if Apple says... I'm sure their statement somehow completely aligns with all the potentially conflicting interpretations one can draw from their PR, their stated objectives, and the implementation details observed, and it always will - forever.


Apple has released fairly detailed technical summaries of their system, far beyond what could be hidden behind "conflicting interpretations" of material written by a PR department. Have you read them? Are you claiming that Apple is lying?

If your contention is that Apple is lying now, then you have no reason to think Apple—or any other corporation for that matter—hasn't been lying about your data security for the past decade. Who knows, maybe Google Chrome is sending everyone's passwords in plain text to the NSA.

If your contention is that Apple might turn evil in the future, that charge could be levied against any other company at any time. It's functionally unfalsifiable.


> Apple has released fairly detailed technical summaries of their system...

... that don't address implementation details - like what background processes get hooked into for hash generation and under what circumstances. They are also relying very heavily on secrecy, the secret local db being a good example. Why does that matter? Because their assurances against false positives depend on you assuming that the threshold counter only applies to content being flagged with a reasonably low rate of false positives and subsequently stored on their cloud, which permits additional safety assuring verification steps - and I don't see any reason why you should assume that.

> Have you read them?

I have.

> Are you claiming that Apple is lying?

Yes, but not about their intent (which I don't really care about, and is immaterial anyway) - they are lying about their ability to execute the program as they've described. Take for example their hybrid perceptual algorithm approach, which supposedly provides some increased measure of protection against adversarial attacks. We know for a fact that their primary algo is hopelessly vulnerable to hash length extension attacks, which makes the generation of false positives trivial. The second algo that supposedly addresses that is a secret, which should immediately raise red flags for anyone familiar with infosec. But I wouldn't be surprised if that safety turns out to be Microsoft's PhotoDNA - because it is already commonly used in the CP cataloging realm, and Apple would have more than one reason to not want to advertise something like that. First, PhotoDNA is a blackbox that has no independently conducted research available for public scrutiny. Second, it would mean they designed their system totally backwards - as PhotoDNA employs a high pass filter to guard against extension attacks, but at this point in the flagging process (as Apple has described) that filtering protection can't be employed to guard against extension attacks... so it provides no additional protection to speak of. Third, it was invented by a competitor.

> ...hasn't been lying about your data security for the past decade.

You are forgetting about all the cries for not ascribing malice to stupidity, and how this case involves a very different kind of cover for action. When calc.exe sends tiny encrypted fragments to telemetry.microsoft.com and it has no means of using a hidden channel to receive anything outside of itself - I'm irritated, but not alarmed. When PhotoAgent is chilling in the background - occasionally opening a RW handle to some persistent encrypted db, a db that also gets opened by another process prior to establishing a network connection to thinkofthechildren.apple.com, I become suspicious.

> If your contention is that Apple might turn evil in the future...

My contention is that they can't avoid making mistakes, and that assurances addressing concerns related to consequence of said mistakes depend entirely upon secrecy. History has shown how relying on secrecy and infallibility plays out, Apple's defenders are ignoring that.


> which then will have a human reviewer verify its CP

No, it won't. The human reviewer only sees very low resolution thumbnails, to check there's a "match". The content is not verified, so the two DBs could contain anything.


> It only weakens privacy for iCloud photos if you have CP or photos in a CP database

Who controls what is in the database? What independent oversight ensures that it’s only CSAM images? The public certainly can’t audit it.

What is stopping the CCP from putting pro-Uighur or Xi Winnie the Pooh images into the database? Or the US from using this to locate images that are interesting from an intelligence perspective (say, for example, pictures of Iranian uranium centrifuges)? Apple says they will only add images that more than one government requests? All it would take is a few G-men to show up at Apple with a secret court order to do this, no?

So… China and Hong Kong? The Five Eyes nations?


Their use of a highly vulnerable [1] "neural" perceptual hash function makes the database unauditable: an abusive state actor could obtain child porn images and invisibly alter them to match the hashes of the ideological or ethnically related images they really want to match. If challenged, they could produce child porn images matching their database, and they could hand these images to other governments to unknowingly or plausibly deniably include.

...but they don't have to do anything that elaborate, because Apple is using powerful cryptography against their users to protect themselves and their data sources from any accountability for the content of the database: the hashes in the database are hidden from everyone who isn't Apple or a state agent. There is no opportunity to learn, much less challenge, the content of the database.

[1] https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...


They have to come from the intersection of two databases from two jurisdictions. So already that’s out as you suggest. Then you’d have to match _nearly exact photos_, which isn’t a vector for general photos of some random minority. Then you’d need 30 of such specific photos, a match with another secret hash, and then a human reviewer at Apple has to say yes it’s CP before anything else happens.

I think there are plenty of reasons to be concerned about future laws and future implementations, but let’s be honest about the real risks of this today as it’s currently implemented.


Every step you've described is unfalsifiable: you just have to blindly trust that Apple is doing these things, and that e.g. authoritarian regimes haven't compromised Apple staff with access to the data.

> They have to come from the intersection of two databases from two jurisdictions.

My message directly answered that. A state actor can modify an apparent child porn image to match an arbitrary hash and hand that image to other agencies, who will dutifully include it in their database.

> Then you’d have to match _nearly exact photos_

It's unclear what you mean here. It's easy to construct completely different images that share a neuralhash. Apple also has no access to the original "child porn" (in quotes because it may not be), as it would be unlawful to provide it to them.

> but let’s be honest about the real risks

Yes. Let's be honest: Apple has made a decision to reprogram devices owned by their customers to act against their users' best interest. They assure us that they will be taking steps to mitigate harm, but have used powerful cryptography to conceal their actions, and most of their supposed protections are unfalsifiable. You're just supposed to take the word of a party that is already admittedly acting against your best interest. Finally, at best their protections are only moderate. Almost every computer security vulnerability could be dismissed as requiring an impossible series of coincidences, and yet attacks exist.


> A state actor can modify an apparent childporn image to match an arbitrarily hash and hand that image to other agencies who will dutifully include it in their database.

Even if a state actor constructs an image that is a NeuralHash collision for the material they wish to find, that only gets them through one of the three barriers Apple has erected between your device and the images being reported to a third party. They also need to cause 30 image matches in order to pass threshold secret sharing, and they need to pass human review of those 30 images.

Arguably investigation by NCMEC represents a fourth barrier, but I'll ignore that because it's beyond Apple's control.
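
For anyone unfamiliar with the "threshold secret sharing" step: the idea is that the key needed to open the match previews is split so that fewer than 30 shares reveal nothing. A minimal Shamir-style toy (Python, toy parameters only, nothing like whatever Apple actually deploys):

    # Minimal Shamir secret sharing toy: t shares reconstruct, t-1 reveal nothing.
    import random

    P = 2**127 - 1  # a Mersenne prime; real systems use proper field parameters

    def make_shares(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=42, t=30, n=100)
    assert reconstruct(shares[:30]) == 42   # any 30 shares suffice; 29 do not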

> You just have to blindly trust that Apple

This has been true of all closed source operating systems since forever. Functionally, nothing has changed. And whatever you think of the decision Apple has made, you can't argue that they tried to do it in secret.


The invocation of 30 images as if it's a barrier confuses me. I created a bunch of preimages and posted them on GitHub; I could easily create 30 or 3000, but at this point all I'd be doing is helping Apple cover up their bad hash algorithm[1].

I pointed out above that the attacker could use legal pornography images selected to look like child porn. This isn't hard, and doing it 30 times is no particular challenge. I didn't use pornographic images in the examples I created, out of good taste, not because it would have been any harder.

[1] I've made a statement that I won't be posting any more preimages for now for that reason: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...


If someone is trying to frame a known individual, the 30-image threshold may not be a significant barrier, I'll grant you that. But if you're enlisting Apple's algorithm to perform a dragnet search of the citizenry (e.g. for leaked state secrets), then this mechanism cannot be effective unless the material in question comprises at least 30 photographs.


I'll grant you that!

I have some residual nitpicks on that point: many leaked data troves are much larger than that, though it is a material restriction.

The 30-image threshold isn't leakless. Even if you only have one hit, it still gets reported to Apple. The software also emits a small rate of "chaff", fake hits that help obscure the sub-threshold real hits. But the voucher stream could still be used to produce a list of possible matches (anyone with targeted material, plus the people who emitted fake matches), a pool of potential targets much smaller than the whole population, ripe for enhanced surveillance.
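
A tiny made-up simulation shows why even a sub-threshold voucher stream is useful for target selection (the chaff and match rates below are invented, since Apple hasn't published them):

    # Illustrative only: synthetic ("chaff") voucher rate and match rate are guesses.
    import random

    USERS = 1_000_000
    TARGETED = 50                # users actually holding a targeted image
    CHAFF_RATE = 0.001           # hypothetical per-account synthetic voucher rate

    flagged = set()
    for user in range(USERS):
        real_hit = user < TARGETED
        chaff_hit = random.random() < CHAFF_RATE
        if real_hit or chaff_hit:
            flagged.add(user)

    print(len(flagged), "accounts look interesting out of", USERS)
    # Roughly 1,050 accounts instead of 1,000,000: a much smaller pool for follow-up.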


This still relies upon some degree of compliance by Apple in order to acquire the stream of vouchers. Or alternatively a working security breach of Apple's systems. Either way this represents an additional, non-trivial barrier to overcome.

It would be interesting to know what the "chaff" rate is and whether any intelligence agency could stomach that amount of surveillance, particularly since it's by no means certain that any of the flagged hits are real. In fact it seems to me that it's very unlikely indeed, especially if the material is in any way radioactive. After all, finding a match this way requires quite a few assumptions (a rough back-of-envelope follows the list):

1. The target owns an iPhone;

2. The target has enabled iCloud Photo Library;

3. The target has a photo library small enough, or is paying for sufficient iCloud storage space, that the flagged images are included in the (sub)set stored in the cloud;

4. The target has imported the flagged images into their photo library rather than to iCloud Drive or any third party app like SpiderOak, Mega or Tresorit.
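
Multiplying those through with purely illustrative numbers (all invented, just to show the shape of the argument) gives a sense of how fast the odds shrink:

    # All probabilities below are invented placeholders; each independent
    # assumption multiplies the odds of detection down.
    p_owns_iphone        = 0.5
    p_icloud_photos_on   = 0.8
    p_images_synced      = 0.7
    p_imported_to_photos = 0.3

    p_detectable = (p_owns_iphone * p_icloud_photos_on
                    * p_images_synced * p_imported_to_photos)
    print(f"{p_detectable:.2%}")   # about 8% with these made-up numbers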


>This has been true of all closed source operating systems since forever. Functionally, nothing has changed.

And that whatabout argument is supposed to justify the creation of a client-side scanning system that would massively lower the bar for abuse?


> Every step you've described is unfalsifyable: You just have to blindly trust that Apple is doing these things, and that e.g. authoritarian regemes haven't compromised Apple staff with access to the data.

So third party auditors as well?


Better than nothing, but how much can you trust the word of a third-party auditor in a world where government intelligence agencies conspire to spy on each other's citizens because their laws expressly prohibit spying on their own... where the (one-time) world's largest supplier of cryptographic hardware was covertly owned by the CIA and shipped backdoored units for decades, where our national standards bodies standardize backdoored random number generators, and where the US government used a sham vaccination campaign to collect genetic samples from a whole community in order to locate and assassinate a single terrorist?

When it comes to keeping data private, we face adversaries whose only rule seems to be what they can get away with. Against that, full transparency in the construction of our security infrastructure really needs to be the starting position.


>and where the US government used a sham vaccination campaign to collect genetic samples from a whole community in order to locate and assassinate a single terrorist.

What story are you referring to here?


The National Center for Missing and Exploited Children's database, intersected with a not-yet-determined database in another jurisdiction, and then human moderators at Apple.

So what you're describing is a concern for the future that could exist anyway (and maybe already does at places like Google, which has _zero_ auditable processing of data and already scans for CP against the same database above). But let's not pretend that's today.


> It only weakens privacy for iCloud photos if you have CP or photos in a CP database

Or if someone hacks your device and uploads one of those photos to your iCloud.

(I still have no idea why people aren't pointing this out more aggressively -- phones get hacked probably every minute of every day, and acquiring those actual photos isn't that difficult once you're willing to commit felonies -- hash collisions are a distraction)


Because that would already be a problem for Facebook, Google, Microsoft, etc., which host photos that hacked phones could be uploading today.

And we’re just not seeing that being the case. Because all these providers have been doing this for so many years, including the nearly 17 million photos identified by Facebook last year, you’d figure there would be a lot more noise if this was really going on.

In fact, I would venture to say it's far easier to hack a Facebook account than iCloud, which has many on-device security protections that a cloud login without mandatory 2FA (and often SMS-based 2FA when it is used) lacks.


We had SWAT teams for a long time before SWATing became popular. The publicity that this has gotten is only going to increase the chances that all these services start getting abused. And who is to say that it hasn't happened already and been entirely successful, but nobody believed the victim.


Suspected CSAM is always reviewed—the actual files, not a hash or reduced-resolution version—by members of law enforcement before an arrest warrant is issued.

Police and prosecutors have to do that because they have to attest to the judge that it is actually CSAM. And, unlike any private party, law enforcement is legally authorized to possess and review CSAM, so they don’t run any risk (aside from the risk of seeing horrifying images).

Unlike SWATing, the police don’t have to go to a person’s house to review suspected CSAM that is submitted by a service provider like Google or Apple. So it’s possible that there are existing collisions for PhotoDNA that make innocent files trigger an alert. But no one would know externally because once law enforcement reviews the file and sees it is a false positive, they just ignore it. The account owner would never know. The service provider might do forensics if they interpret the incident as an attempted attack.


So you think we’re going to see a rise in people uploading CP to others cloud providers?


I'm honestly surprised that's not a much more common way of griefing (either specific or random targets), or of disabling accounts to deny a victim access to their e-mail after an attacker has gotten in, reset passwords, and started abusing the linked accounts (with the added benefit that the victim may be too busy being arrested to deal with the fraud).

Probably because the group that has, or is willing to handle, CSAM is small enough that it doesn't overlap much with the other groups (e.g. account hijackers, or people who want to grief a specific person and have the necessary technical skills and patience to actually pull it off). For criminals it may not be worth the extra heat it would bring, but from what I've heard about 4chan, I'm surprised there isn't a bigger overlap among "for the lulz" griefers.


Yes. I would bet the large majority of people here who are now quick to point out that other cloud providers have been doing this for years didn't know that fact a month ago. We're now well armed with that information due to arguing about this Apple issue. The fact that you're so quick to inform me of the facts is precisely why I think it's more likely to happen.


With all the news about CSAM being scanned for on iPhones, who is even going to keep such content on iPhones anymore? The coverage has been so prominent that plenty of people outside tech have heard about it. If you are a person of interest who consumes or distributes such content and are even a tad aware of the news, you aren't going to use iPhones or iCloud storage anymore.


One would hope, but this severely overestimates the common user’s awareness of such features or their implementation on their devices. Check out this survey from 2019, where a majority of iPhone users of all stripes could not identify the phone they use:

https://www.androidauthority.com/smartphone-users-survey-201...

To put it bluntly, this fantasy that any significant portion of people who live east of Tahoe even knows, much less cares, about CSAM scanning is plainly absurd. All the headlines and breathless coverage about the fatal mistake Apple is making has resulted in… Apple’s stock soaring to record highs.

Some creeps will get caught. Some will switch to another way of sharing that material. Apple will get shielded from liability for hosting this content in iCloud.


Apple will then have a list of all the people who have an iDevice but disabled iCloud in August, 2021.


That is not evidence, that is inference (of what, I am not even sure... prudence, perhaps).


I disabled iCloud the day of the announcement. This feature will be used to attack journalists, whistleblowers, and the vulnerable in our society.

I then bought a System76 laptop to move away from my MacBook.


And how does that relate to CSAM? Are we going to start from the premise that everyone is guilty of something?


Screw Apple. Walked into Huaqiangbei with an Ubuntu USB stick yesterday to test a bunch of laptops and ensure all the hardware worked. Bought a Lenovo. Screw Apple.


Nice! For others who aren't near an electronics hub like that, System76 and Purism offer Linux-first laptops rather than stuff that just happens to be compatible. Support them!


Lenovo? After Superfish? Interesting choice.


Had a look at online pricing: I couldn't order a System76 to China without huge cost and a long shipping time.

Basically all electronics manufacturers have bad software now. It's assumed and rampant. Firmware on TVs, preinstalled apps on phones and tablets. You can't buy current-grade gear for a reasonable price without unwanted muck.

As it stands I still had to pay the Windows tax.


The logical argument against this tech is solid, but I despise it because it materializes the theological idea of "always being watched".

There's always the option of joining the government and being on the side that takes advantage of this tech to take out rivals, make your job easier, etc., but the implementation of always-watching devices is a hill to die on.


EFF lost credibility on this issue when their first statement conflated the iMessage safety for kids feature with the CSAM scanning. It’s clear that they are willing to fearmonger to raise money and aren’t committed to the cause of educating users and advocating a nuanced position. I cancelled my recurring donation.


I feel like this story keeps showing up on HN.

Does anyone have a link to previous discussion on this topic? Might help avoid endless re-hashes of the same arguments and misinformation.


This is a new development, please don't try to suppress news.


Dessant, this is a bad take. Encourage others out of the loop by pointing them in the right direction. Done properly it will improve and enable new discussion.


I'm not trying to suppress news.

There have been a ton of letters, calls to action, complaints and outrage about apple's actions.

Many of the conversations are HIGHLY repetitive.

I would strongly urge folks to review past discussions before re-hashing things (again) or focus on new elements.

And these headlines about a "Global Coalition"? I hope folks realize there is also a "global coalition" of governments working to do much more to get access to your information: Australia, China, the EU (yes, go on about GDPR, but actual privacy? forget about it), the UK, etc.

There may also be a "global coalition" of users who like apple's products.

A better headline would be "EFF and 90 other civil liberties groups write open letter to Tim Cook"?

Edited to list prior topics on HN about this:

Apple's plan to “think different” about encryption opens a backdoor to your life

EFF: https://www.eff.org/deeplinks/2021/08/apples-plan-think-diff...

2260 points|bbatsell|16 days ago|824 comments

Apple Has Opened the Backdoor to Increased Surveillance and Censorship

EFF: https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-w...

618 points|taxyovio|10 days ago|303 comments

Tell Apple: Don’t Scan Our Phones

EFF: https://act.eff.org/action/tell-apple-don-t-scan-our-phones

161 points|sunrise54|4 days ago|30 comments

Is EFF opposition really new news?

I've also noticed this weird formulation more often: "Please stop xxxx" rather than an actual discussion of a topic. Isn't it ironic to try to suppress a conversation by claiming someone else's comments are suppression?


I think it’s important to distinguish the comments from the submission. I agree the comments are mostly repetitive, but the submissions do, in fact, cover newsworthy facts that should not be dismissed.


This is at least the fourth piece about the EFF objecting to Apple's work.

I was just suggesting that maybe it's worth reading the comments on the FOUR prior articles about this EXACT entity complaining about Apple over this specific issue (ignoring the other 30+ HN posts on it).

This is not "suppressing the news".


Your question is valid. People out of the loop need a way to catch up.

I created this account explicitly because of this new feature. I've been posting constantly about it. If there's a new article on this, I will try (and encourage others) to post a summary like the one you did at the top of the article for anyone who hasn't been following.

Open to any questions you have about this from a "how do I catch up" perspective.


If the human element to this story was “cat gifs”, you might have a point.

The human element here is “world governments and multinational corps beholden to nobody want to insert the first of possibly many back doors into your emotional life”

Sit down. You’re just looking for attention for your ability to notice a basic pattern.


I find it very ironic that the people complaining about "suppressing news" or "violation of rights" are writing comments along the lines of "sit down and shut up".


Your comments aren't news and we aren't the government suppressing your speech, so I'm not sure what you're going on about.


They're not telling you to stop any meaningful discussion about the issues.


“Person on internet can perform basic arithmetic, demonstrates by counting instances of repeat content.” is a headline that:

1) we’ve seen over and over

2) adds little to the context at hand

You're not wrong. You're just late to the game in spotting patterns of reposting (it happens; deal with it yourself) and you're undermining the context (putting it on others to do better by your expectations).

You have a body and mind that can be trained to find and reflect on more interesting patterns and ideas. Have at it. Don't bemoan all of reality not conforming to your current state of cognitive awareness.


You can't stop it. Once a topic becomes popular on HN, any plausibly related submission will instantly garner enough upvotes to put it on the front page. Eventually it will fade somewhat. Same thing happens with Boeing-related topics, for example.



Wow - 11 stories with tons of upvotes (lots more with fewer).

A fair number are other "open letters" as well.

Helpful.


I've come to abhor the term `misinformation`. I know your intent, but Twitter/FB have been using it in nefarious ways. Anytime I hear it, it triggers censorship-related thoughts for me.


I agree! It's been misused to mean things someone doesn't agree with too often :)

In this case we have been getting tons of claims, with almost no foundation, that what Apple is doing will see people charged with child porn felonies, etc.

We've also had lots of claims based on a failure to actually read about what Apple is doing (i.e., that it will scan all content, not just photos scheduled for upload to iCloud, etc.).


Sure, but as someone who agrees with you, I’m not sure we can call that ‘misinformation’. It’s just poor understanding and bad arguments.

I have seen a little misinformation relating to this subject - direct lies, fake news, fake links etc, but a negligible amount.


All good points. Yeah, I think we should stop using the term, given it's lost a lot of its meaning. But a fair bit of the discussion of this topic has been badly misinformed.


I agree, but that is a feature of this particular problem.

For example, I see a lot of people who are simply wrong about how the hash matching works. One way to be wrong is to think it’s a cryptographic hash. Another way to be wrong is to think it’s just a perceptual hash match.

The problem is that neither of these is crazy. The actual solution is far more complex and non-obvious than most people would suspect.

I think this is a genuine problem with the system. It is hard for almost anyone to imagine how it could be trustworthy.
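
As a rough illustration of the two ways of being wrong: a cryptographic hash changes completely under a one-byte edit, while a perceptual hash is designed not to. A toy "average hash" in Python (using Pillow; this is not NeuralHash, just the simplest possible perceptual hash) shows the contrast:

    # Toy contrast between a cryptographic hash and a simple perceptual hash.
    # This is an "average hash", not Apple's NeuralHash.
    import hashlib
    from PIL import Image

    def crypto_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return "".join("1" if p > avg else "0" for p in pixels)

    # Re-encoding or slightly editing an image flips the SHA-256 entirely,
    # but typically changes only a handful of bits of the average hash.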


Apple has been pretty clear that a human will review things. That is (I hope) a feature of any system that leads to child porn charges. If not, it should be: AI gets things wrong and can be tricked (as can humans, but usually in different ways).

But agreed, the technical bits may not be obvious (though Apple released what I thought was a pretty darn complete paper on how it all works).


Sure - but who wants a human reviewing their private photos? I don’t.

Unless you understand the technical bits, you have no way of knowing how rarely this is likely to happen in practice.


Not me, but if they only review after a flag, that's the best you can do, I think? Facebook works this way too: users flag photos and someone looks at them and deals with them.


‘No worse than facebook’ is probably the most damning thing you can say about a tech company.

The best you can do would be to not review people’s private photos at all.


Apple is clearly headed down a path where they require intimate knowledge of your data.

That AI assistant lives in the cloud in an Apple data center, not on your device.


Just EFF'ing sign it.


Here is a quote from the EFF complaint about Apple alerting parents when kids under 13 are sent porn:

"The recipient’s parents will be informed of the content without the sender consenting to their involvement."

Are parents really outraged by this? Are people sending porn upset about this?

My own view as a parent is that if you send porn to my kids, I shouldn't NEED your consent to be alerted to this. I paid for the device, so it should do what I want, and if I turn this feature on then it should alert me.


For ideological consistency, I can only hope the EFF has a lecture prepared about two party consent for kids who show their phones to their parents if they receive a dick pic.


I think they are coming at this from the view that, because iMessage was one of the first actual E2E encrypted products, showing the pic to the parent (after the picture has reached the end-user device, been decrypted, and is ready to display) should require consent. I.e., the encryption has been "broken".

That said, I don't find it very compelling. The parent paid for the device. The parent is responsible for the child. The parent should be allowed to control the device (including setting up a kid's account instead of an adult's, which is what triggers this, etc.). So I want to preserve MY right both to the device and to take care of my child. The EFF seems to be focusing oddly on the rights of someone who DIDN'T pay for the device.


The picture is not sent to the parent. The parent is notified that the child viewed a sensitive photograph (after the child was warned), and the parent can then view the photograph on the child's device.
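
Based purely on Apple's public description (every name below is made up, not Apple's API), the on-device flow is roughly:

    # Hypothetical sketch of the iMessage child-safety flow as publicly described.
    def handle_incoming_image(flagged_by_on_device_classifier, child_age,
                              parent_opted_in, child_taps_through):
        events = []
        if not flagged_by_on_device_classifier:
            return events + ["show image normally"]
        events.append("blur image and warn child")
        if child_taps_through:
            if child_age < 13 and parent_opted_in:
                # Notification only; the image itself never leaves the device.
                events.append("notify parent")
            events.append("show image")
        return events

    print(handle_incoming_image(True, 11, True, True))
    # ['blur image and warn child', 'notify parent', 'show image']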


Right, but the sender just wanted to send the photo to the kid, and now Apple is providing a path for the parent to see the picture.

Listen - I don't get the EFF argument any more than you do.


I think we're on the same page; I just wanted to provide more detail about how far-fetched their argument seems when you examine the details of what was built.


After the EFF lied about the system in the first place, does anyone care what they’re asking for now? What they’re really asking for is donations.


What did the EFF lie about? Missed that…


Two-thirds of their original letter was spent describing a parental control over sensitive-content detection in iMessage as an end-to-end encryption backdoor. At best, that is highly cynical. At worst, it is an intentional conflation of different features to raise a false alarm.


It's not cynical at all. It's what EFF has been warning about since 2019 and before. (Disclosure: I worked at EFF during this period. We were extremely concerned about the potential role of client-side scanners and the concerted push being made at that time by the intelligence community to present this as a "solution" to end-to-end encryption's privacy features.) https://www.eff.org/deeplinks/2019/11/why-adding-client-side...

It's also the consensus position of a very large number of other digital rights groups and infosec experts. Do you believe they are also cynical and raising a false alarm?


Your link and most of the concern are about the known-CSAM detection announced for iCloud Photo Library, yet, again, two-thirds of the original letter was about iMessage. Point me to the expert consensus that the parental control features announced were a threat to end-to-end encryption.

The iMessage feature is a parental control where kids over 13 who receive images classified on-device as sensitive have to click through to unblur them; kids under 13 do the same but also have their parents receive a notification that such an action was taken. In neither case does the parent receive a copy of the image. The EFF described it as follows:

“Whatever Apple Calls It, It’s No Longer Secure Messaging”

and

“Apple is planning to build a backdoor into…its messaging system.”

The Center for Democracy and Technology who wrote this letter they have co-signed said:

“Plan to replace its industry-standard encrypted messaging system creates new risks in US and globally”

I, respectfully, don't see much evidence that these are consensus views. Furthermore, I don't see how you can characterize this feature as a backdoor without also believing that safe-browsing checks on links received in iMessage are an encryption backdoor.


This is pure speculation, but could it be that Apple believes the COVID tracking functionality has shifted power from governments to Apple?

Many governments (e.g. Germany) wanted location-based rather than token-based tracking, with central storage of location data, pretty much up until the point where Apple and Google said it wouldn't happen.

This is based on my perception of German tech media coverage of the issue.


> the covid tracking functionality has shifted power from governments to Apple?

IMO this is off topic, but I think it absolutely has done that, and has demonstrated it. I'm surprised governments haven't screamed about it more loudly, but maybe they didn't want to do that in an example where they were clearly on the wrong side (pushing for privacy-violating approaches).



