"But [Apple executives] said they expected any [implication by malicious actors] attacks to be very rare and that in any case a review would then look for other signs of criminal hacking."
Oh, all right: Apple, already being in possession of hard evidence of a hideous crime and already being required by law to forward such evidence to proper authorities, will also - pro bono publico! - sacrifice a significant amount of time of a significant number of their in-house computer forensics experts, each enjoying a significant billing rate, to relentlessly look for "other signs of criminal hacking", until there is no significant doubt that the accused is, indeed, guilty. We're all safe, then.
This is a harmful distraction from the massive issues with Apple's proposal. If you wanted to frame someone for possession of CSAM, similar stunts can be pulled with Google, Facebook, Instagram, and Microsoft today. Yes, the scope here is broader and some people don't use any of those, but still... It's silly, and it makes the tech community look like a fringe minority of screeching conspiracy theorists.
And this is a problem because Apple's proposal is really really awful. Apple is normalizing scanning your private phone for files and reporting them. They built the technical capability to do it for any photo, and they will be under enormous pressure to expand it both in the US and abroad. And the fact that they did it will be used to pressure other companies into doing the same and to legitimize laws that require scanning for any content the government can justify.
Apple built a surveillance mechanism that is incredibly powerful. One no government could ever force a company to design and build. But once it's built, the only thing stopping it from being abused is Apple's pinky promise they won't let it happen. If you believe that legal norms, big tech companies and some quasi-governmental nonprofit like NCMEC will stop such an abuse if it happens... where have you been living the past few years? Because it sure isn't the US, the UK, Turkey, or China.
Is this actually different from other cloud photo apps? If you use Google Photos then your photos will likely be scanned by Google. If you use Apple’s photo app then their app will do the scanning.
There seems to be a vague idea floating around that this is built into the OS or the device just because the scanning happens on the device, but it’s not clear that’s the case. Apple doesn’t make the distinction between OS and app clear either.
Factually, not yet. That will change and I will explain how in a moment. But first there's a major difference between doing it on device vs in the cloud. It changes how we think about privacy and builds a capability to scan phones (not particularly limited to iCloud) into the device. That's a capability no Western tech company could ever be forced to build for more illicit usages, but now it exists.
Second, Apple almost assuredly will encrypt iCloud after this. So now we have the precedent of scanning encrypted messages. And that will then feed legislation that congress has been attempting to pass for years to kill any right to meaningful end to end encryption for messaging. https://blog.cryptographyengineering.com/2020/03/06/earn-it-...
Ironically, the difference is that Apple is doing it at the client layer so that they can't do it at the server layer; the user's iCloud [edit: photos, not all of iCloud] is encrypted at rest against Apple accessing it.
This approach makes mass-sweeping of all server-side stored data harder to accomplish (whereas in, say, Google Photos, Google can break-glass server side to get into someone's private data, so they could hypothetically do a mass-scan if the government demanded it).
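If I understand Apple's technical summary correctly, the piece that makes this work is threshold secret sharing: every uploaded photo carries an encrypted "safety voucher" holding a share of an account-level key, and the server can only reconstruct that key (and see anything about the matching photos) once enough vouchers correspond to known hashes. Here's a toy sketch of just the threshold part using textbook Shamir secret sharing; the field, threshold, and share counts are illustrative, not Apple's actual scheme.

```python
# Toy sketch of threshold secret sharing (Shamir), the mechanism Apple's
# technical summary describes for its "safety vouchers". All parameters here
# are illustrative; this is not Apple's actual implementation.
import random

PRIME = 2**127 - 1  # a prime large enough for a toy secret

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = random.randrange(PRIME)            # toy stand-in for the account's inner key
    shares = make_shares(key, t=30, n=1000)  # e.g. one potential share per photo voucher
    print(reconstruct(random.sample(shares, 30)) == key)  # True: 30 shares recover the key
    print(reconstruct(random.sample(shares, 29)) == key)  # False (overwhelmingly likely): 29 do not
```

On paper, the point of the construction is that below the threshold the server holds shares it cannot combine into anything useful.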
Right but it's easier to just not use Google Photos. It's harder to opt out of your phone. I realize they "said" that device scanning will only be used if iCloud is enabled (right?). But ToS changes constantly and who knows what the future holds.
It is only applied to photos which are going to iCloud. If they change that, then we should be really worried. The current method is a pure improvement only if we leave all speculation out of it.
I am generally in agreement with you (and have made similar arguments, if you look at my post history), but the "expand in unspecified ways" is a bit ominous. Committing to only scanning photos that are being synced to the cloud (effectively, keeping parity with what everyone does, just doing it on device at the time of upload instead of in the cloud) would be really welcome here.
It's not at all hard to avoid using an app on your phone. I have an iPad and I use Google Photos. I've never used Apple's photo app.
This is what I meant by mixing up (and blurring the lines between) app-level and OS-level capabilities. It might not actually be mixed up technically, but it seems to be the user perception.
Yes, and even more ironically, that's precisely the problem. Because it makes mass sweeping of client-side content viable -- both technically and morally -- in a way never possible before. The only thing stopping scanning of the entire phone for anything, now that Apple built the technical capability, is Apple's willingness and ability to resist pressure from the US, UK, China, and others to use it.
Very true. Of course, that's always been true since they manufacture the hardware and the OS for the hardware. They're optimally positioned to hide any type of behavior they want in the full stack of the product.
The only thing stopping your phone from keylogging your password to a server in the NSA somewhere if it recognizes a specific trigger pattern is Apple's willingness and ability to resist pressure from the US, etc.
>The only thing stopping your phone from keylogging your password to a server in the NSA somewhere if it recognizes a specific trigger pattern is Apple's willingness and ability to resist pressure from the US, etc.
Think of what would happen if you tried to make your average Silicon Valley dev team design, implement, and test a surveillance system they didn't want to build and that was immoral. They'd resist in an infinite number of ways that would delay the project virtually forever. Short of summary executions, I bet you could not get a nice, efficient, effective system.
On the other hand, once the dev team has enthusiastically built the system that scans for any image, it's entirely easy to say "Now, make it look for these images." They have no avenue for resistance other than an up-front no. And a government that wants to do totalitarian things knows many ways to force a yes.
Apple (and the other FAANGs) do not employ average Silicon Valley dev teams.
In general, a company at that size would approach this problem by figuring out who in the company is willing to take on an unsavory challenge like this and then forming a skunkworks out of them, slightly sequestered from the rest of the company.
I'm not saying Apple has done it, or that they're incentivized to. But it's trust-turtles all the way down. Either we trust them to say "No, you can't use our tech to harm our users," or we don't.
> Think of what would happen if you tried to make your average Silicon Valley dev team design, implement, and test a surveillance system they didn't want to build and that was immoral.
It wouldn't be that. It would be defense contractors sitting at Lockheed or a few blocks from DARPA whose daily bread is making a Tech Sandwich whenever the Broad Agency Announcement for one shows up on sam.gov, or on the DARPA page, or the variety of procurement sites that the government doesn't expose to the internet. If they want it, they can get it -- no persuasion of liberal tech-bros needed.
There is actually evidence (iOS 15 beta) that they added an option to recover your backup from recovery keys. This strongly suggests that E2EE is coming.
> the user's iCloud [edit: photos, not all of iCloud] is encrypted at rest against Apple accessing it.
This is false. They present a web interface showing the photos. The UI isn't locally generated entirely using JavaScript to decrypt the data. The only way this can happen is if Apple has the decryption keys.
iCloud Photo Library has never been private. Apple has always been able to view your photos.
How can the photos be encrypted at rest where Apple can't access them? If I buy a new iPhone all of my iCloud photos show up on it. That means that Apple can access them somehow.
While photos aren't end-to-end encrypted (at least today), the fact that they show up on a new phone isn't proof of non-encryption. E.g. keychain passwords and iMessage messages are end-to-end encrypted (except in iCloud backups) but show up when you buy a new phone.
(Caveat: if you have iCloud backup enabled - which it is by default - the backups aren't end-to-end encrypted. This feature is basically on the convenience side of convenience vs privacy / security - too many consumers would irretrievably lose their data if iCloud backup weren't enabled by default.)
> Apple is normalizing scanning your private phone for files and reporting them.
”Antivirus cries in corner as forgotten...”
I know, iOS has no built-in AV (unlike macOS), but still, it is a bit laughable that many existing tools provide this same power, and only now is it a concern. On a black box system. I will hold off, and join the mass of pitchforks and torches only when there is actual evidence of them expanding beyond their promises or using these features for something other than what they are meant for. They knew the risks when bringing this feature and know the cost when it is proved to be misused.
Yes, you can turn AV into the same thing. But no one has been advocating for that or passing laws that would make AV both mandatory and required to report you to the police.
This has been going on for CSAM scanning for a while now. The latest version was called the EARN IT act [0]
The argument is not that this is a slippery slope where you might misstep. If that were the case, yes, AV would be analogous. Instead, it's that people are actively trying to push you into the spikes at the bottom of the pit, so don't build things at the edge of the pit where the handrail is a pinky promise not to let others push you.
> don't build things at the edge of the pit where the handrail is a pinky promise not to let others push you.
People are afraid that the use of these tools will be expanded in secret (hidden) beyond what it should be. From that point of view, legislative motives and discussion do not matter, because the capability exists for many tools at any moment; hence the same speculation has applied before.
However, if you want to apply surveillance publicly, then we indeed need a legal basis for pushing specified tools as mandatory. Expanding it beyond CSAM will be quite a slow process, and implementing something like that publicly before the legal basis exists is a path to destruction for any company, because people can switch to another company.
Is Apple now making that legal process faster?
While it feels like Apple is now closer to the edge of the pit, from a technical perspective there is no difference yet. The tools have existed and the system is closed source.
Question is still the same; ”would you spy for us”? I don’t think that answer has changed from Apple because they changed the location of the image scan.
So, the question is, will legislation change towards more surveillance. Whatever the result is, I think it would have happened whether Apple added this feature or not; the feature is not the moral excuse. People find a way. In China, it is simply illegal for Muslims not to install certain apps.
I just don't understand how a 60 year old gay man doesn't see how insidious this all is, and somehow trusts law enforcement to be such a benevolent actor.
The privilege that comes from being as rich as he is overpowers anything else. From his perspective the system is always on his side (the good guy side), so that's his perspective for society in general.
Tim Cook isn't in charge. They're starting the process of complying with, acquiescing to, the multitudes of politicians in the West that are demanding a change in the super structure of online privacy and how it's treated.
Anyone here think Yahoo executives actually decided whether the company joined PRISM or not? Those executives also were not in charge. There is a bigger boss in DC, radically more powerful, and most everybody here knows what they're after. They're sick of waiting, and they're going to attempt to make another big surveillance move during the relative calm of the Biden Admin (they couldn't do it effectively under Trump; there was too much chaos, and the government wasn't functioning very well). What program is actually being put into place right now - that Apple is probably joining up to, as with PRISM - that we won't find out about for many years?
It's going to get a lot worse across the board over the coming decade.
You got downvoted into greyness on your comment despite the clear, bleeding obvious fact that just a few fucking years ago, several major tech companies got outed by Snowden's leaks for having done exactly these kinds of things at the government's behest for years, while those who claimed such a thing was happening were considered paranoid. It's absurdly blind to think that the same isn't possibly happening again quietly under somewhat different conditions now.
OP is probably getting downvoted for insinuation without explanation:
> There is a bigger boss in DC, radically more powerful, and most everybody here knows what they're after.
OK, so tell us! Who is the mustache-twirling villain, and what are they after? I'd love to know. Be specific. Is it Joe Biden? If so, what's the end result? He gets money? OK, draw the lines for us between this technology and Joe Biden getting money. If it's somebody else leading this conspiracy, who specifically? What specifically are they after? Without these details, OP's post is just an episode of the X-Files: Something's out there, and you know what it is, I just won't tell you!
> Who is the mustache-twirling villain, and what are they after?
No need to try to be silly to avoid reality.
Just read all the Snowden revelations. Those very same agencies, whose mission statement is to spy on everyone, are still working their jobs of spying on everyone. It's not like they were disbanded just because Snowden revealed a tiny slice of what they were doing then.
Your strawman is absurd. Nobody needs to claim any moustache-twirling villain or some narrow conspiracy to see how the OP makes extremely valid points.
We factually know that a number of powerful three letter agencies work in tandem with each other and with other agencies in other allied governments around the world to maintain as much secretive information scooping as possible about the affairs and communications of civil society. This was blatantly revealed in 2013 for all the world to see, and nothing has happened since then to indicate that it has stopped, beyond some moderate adjustments to procedure.
If anything, a number of governments are now trying to normalize their efforts to the public enough that they can do it more overtly (see recent attempts at arguing for backdoors on major E2E-encrypted communications apps as an example). The simple inertia of massive state security and surveillance budgets is enough to explain much of it without resorting to any sort of James Bond-type villains, let alone pinning any specific blame on Biden as anything more than just one more president among many heads of state and previous presidents, who supports the status quo of the mission-creeping institutions around him.
This isn't even and shouldn't be viewed as a partisan-politics thing. It happens almost equally regardless of specific party in power. Obama was tacitly complicit in it just as Bush was, and Trump's administration was likely no different either, except for its internal chaos causing certain rather unique administrative problems. What evidence indicates anything sincerely changing under Biden now?
That you could be so condescending about these very established tendencies despite the absolutely concrete revelations of their existence is what's strange and disappointing.
It's obviously not Joe Biden. It's the intelligence and law enforcement agencies of the USA, i.e. CIA, NSA, FBI, who despite perhaps not having "mustaches" to "twirl" have definitely done things like "spy on all Americans' phone calls" and "overthrow democratically elected leaders of South America to preserve American business interests."
Joe Biden couldn't lead himself out of a paper bag; I think we all know that if ever there was a figurehead president, he's it.
Not sure why the source of things like PRISM and all has to be a single evil person - I think you watch too many Bond films. I can't articulate any sort of form to the driving will behind what we've seen happen to the security state over the last twenty years in particular, but we do know that privacy continues to erode, and that agencies basically dictate agendas to the press through "leaks" and even by putting retirees directly on their payrolls. That much is known. I don't know if it's productive to attempt to characterize this phenomenon as some Scooby-Doo mystery to be solved and unmasked. It's probably more important to oppose the policies, politicians, and press that seem to align with an agenda that promotes the bargain of us turning in freedoms for a promise of safety.
Obviously it's not Joe Biden. And it's not an "evil" conspiracy of mustache-twirling villains, either, although it may seem that way if you don't agree with their worldview, which is not exactly secret.
We know there is a global elite who have virtually unlimited financial resources and massive influence over nearly every key institution on the planet, including intelligence agencies. Some of them publicly attend meetings like the one in Davos. And yes, that includes names like Rothschild, Soros, and Gates. These are incredibly smart and hardworking people, and because of their powerful positions, they have an enormous responsibility over the governance of the planet. It's not all about money; this group has control over monetary policy, and are therefore above the fray of being divided into abstract economic units.
This group is making decisions like enacting free trade policies that reduce opportunities for the American middle class in order to more equally distribute opportunities to developing countries. Or creating surveillance networks to prevent catastrophic events which may include nuclear, biological, or cyber attacks that could threaten the global order and feasibly be deployed by a small group of individuals. Obviously, these policies give this group an enormous amount of control over populations, but it's easy to argue it's for the greater good.
Something like CSAM scanning is more like a Noble Lie used to manufacture consent for a vital tool needed to advance their agenda, for lack of a better word. The tech community, for the most part, knows it's bullshit. But the media will control the narrative (or just ignore it), and the tech community will sound like paranoid nerdy pedophiles.
If you feel like CSAM is an overreach, then please continue to pay attention as the cyber-pandemic narrative gears up. Although anti-vaxxers are currently in the spotlight, you may have more in common with them than you think.
OK, we're getting somewhere. 1. Who are the conspirators, 2. What are they doing, 3. What is the end result, and 4. How does it benefit the conspirators?
So, for 1. you mention a shadowy "global elite" but also name Rothschild (which one?), George Soros, and Bill Gates. OK. Another poster points to the CIA, NSA, and FBI. All right.
For 2. It's "deploy CSAM scanning on cell phones". I guess this means Tim Cook has to be another conspirator.
3. What is the end result? Now, we're getting hazy. But, don't worry, most conspiracies start getting vague at this step. You say "manufacture consent for a vital tool needed to advance their agenda." What does that mean? What is the tool and what is their agenda? You hint at "equally distribute opportunities to developing countries." Is that what's going on here? How do you connect the dots between CSAM scanning and that? Or, maybe it's "prevent nuclear, biological, or cyber attacks." How does CSAM scanning do that? What is the end game?
Finally, 4. How does this all benefit the Rothschilds, George Soros, and Bill Gates? Beats me, these people already have everything. What benefit would motivate this shadowy conspiracy? This is where most of these conspiracies totally break down: drawing the line back to how the conspirators benefit.
The Rothschilds are also reportedly working with PG&E, Jerry Brown, and Solaren to use space lasers to start wildfires, resulting in high speed rail in California. Even if you could connect those dots, I don't get how it benefits the conspirators.
Apple hasn't made this argument, so this isn't why they're doing it. Don't create artificial reasons. If we were to base this on hypothetical legal requirements, Apple might as well remove all forms of encryption too. But they're not.
The presence or lack of an argument by Apple that they're being coerced by intelligence/LE agencies is not demonstrative of anything. You know that gag orders are a thing. You know that the programs revealed by Snowden were not the subject of disclosures by the participant organizations or any argument such as you're expecting Apple to make under the same conditions.
All we know is what they do. It's not a stretch by any means to suspect that when an American corporation adopts uncharacteristic policies that violate their customers' privacy, the government is probably involved. Difference here is that unlike TrueCrypt, Apple can't just shut down. Hell, even the rumor of Apple doing this has beneficial impact to law enforcement. The press release may be the product.
Let's take the effort at the most altruistic face value - defeating child porn mongers. If you're trying to herd such people to a very narrow set of solutions that you can monitor and/or control, simply undermining trust in the alternative options may be enough without actually implementing invasive tools being discussed. The comedy in some of this is how hammy the next act can get - if I were a cynical man, I'd expect a technology to come out in response to Apple's move, or maybe it's out there already but suddenly gets a new push for mindshare. I'm sure the security world has a term for solutions like this. Not really honey pots, but more like fly paper. A product the agencies have either straight up owned or surreptitiously gained control over, marketed to people who would use it for ill.
Has Apple even once admitted knowing of, or that they participated in PRISM even when official documents were released showing that Apple joined PRISM just after Jobs died? Nope. None of the companies listed did, in fact they all publicly denied it because they are not allowed to by national security "law".
So, how can Apple make any argument when they legally aren't allowed to talk about it? Do you think Cook believes in your privacy so much he will go to prison for revealing it? Nope, he's the one who let them in Apple to begin with and "privacy" is a sales pitch everyone laps up. They've been compromised for over a decade. There is no security. It's all lies on top of lies. Have you ever heard of what happens to people when they reveal top secret government spying programs? Snowden? Drake? Binney? Manning?
I mean, if it's true of course they wouldn't. No company was saying they were doing things because of prism, either. If there is a forced government cooperation they will be under a gag order.
> Apple has announced they are only rolling this out to iPhones in the US. That makes it look like a legal requirement to me.
I'm just some guy with a beard, I honestly don't know anything about anything- in my eyes, it looks like less of a legal requirement and more of a stepping stone. Granted, those things aren't mutually exclusive and I'm not privy to any information others don't have, I'm just cynical.
What is most interesting (read: horrifying) to me is that people will argue that this is not a 4th amendment violation because it's a private company, but will later then argue that it is not Apple's fault because the government pressured them.
If the government is pressuring Apple to violate someone's privacy and Apple does so, that makes Apple's actions in that case a state actor and makes the actions unconstitutional.
Upvoted, not because I agree, but because we have seen time and time again that only founders are willing to stand up against these sorts of things. (I mean Steve gave the biggest middle finger to PRISM.) Managers are always looking at shareholder responsibility: if they don't comply, there might be severe consequences.
But I still don't think it is being forced upon them by the government, because the whole thing sounds and smells very Apple. Doing it for the children.
While turning a blind eye to the atrocities committed by China. In which case they're doing it for the money. Moral righteousness only seems to reflect upon the markets not driving growth.
There's no "orthodoxy". The community has various opinions—it's as simple as that. Certainly the majority view on HN is not pro-Apple on this particular topic...have you missed the all of the dozen or more major threads?
As for claims that the mods are censoring you... if you only knew how little we care what your views are. We stopped having the energy for that a looong time ago.
If your comments are getting downvoted or flagged, you should try posting more substantive comments and make sure that you're following the site guidelines: https://news.ycombinator.com/newsguidelines.html. That will take care of 95% of the problem, and if you experience some of the leftover 5%, we can look into it.
Keep posting your ideas anyway. It's not like we get to turn in our karma for a free eraser or anything. Speak your mind, the orthodoxy usually comes around eventually anyway.
The downvotes are going to the people phrasing things using weird conspiratorial language. You can get much more support by simply saying “I think Apple is doing this because of government pressure behind the scenes”.
Maybe he has no choice. Maybe the government has threatened Apple? We don't see anything behind the curtain. One thing is certain. I've lost trust that the iphone is my device.
If that's true, then doesn't that act as evidence that they are being forced in some way? Yahoo [1], and others, were forced with other programs, with threats of treason for talking about it [2].
It’s almost certainly his billion dollar net worth insulating him from the worries of the plebs, it’s my standard assumption when someone asks: “Why is (billionaire) seemingly out of touch?”
Apple is basically admitting their system is vulnerable to abuse but wishing/praying/hoping that people won't abuse it. Optimism isn't a security measure. This is like leaving your production database with default credentials and saying it's OK because people will do the right thing and follow the rules.
seems like it almost confirms this system was created to make it easy to ruin people's lives, because you know plenty of bad actors will take advantage of this
I read all the pitfalls of this and am convinced it's a bad idea, but what do you mean about people using it to ruin people's lives? How would someone misuse it in this way?
Imagine one of these no-click zero-day iMessage exploits, but instead of taking over your phone it opens a share link and adds a gallery of illegal photos to your own device or cloud storage.
They're not going to go through Apple if they planted CP on your device; they're going to take you into custody saying they got an anonymous tip or something along those lines, or just tip off the police to do the job for them, because it worked in the past and requires less effort. And maybe I'm wrong on this, but there is no additional gain from going through all the hoops when you can do it with an anonymous call.
Also, if you have full access to somebody's phone, it does not matter if Apple is scanning for CP or not, you can do much more sophisticated things.
Also, 0-day no-click exploits are not something random script kiddies are running around with, dropping CP on random phones; they are way too valuable.
Certainly, Twitch pranksters and 4chan trolls have shown it's practically frictionless to SWAT someone and get the cops to raid their house, much easier than any convoluted hacking exercise engineered by a Hollywood script writer.
> there is no additional gain from going through all the hoops
The only thing I could see would be getting the victim locked out of their iCloud etc accounts with little/no immediate recourse, as from what I have read the process seems to be to lock their accounts as the CP detection alert is sent to law enforcement.
> Also, 0-day no-click exploits are not something random script kiddies are running around with, dropping CP on random phones; they are way too valuable.
I agree, 0-days are the far high end and wouldn't be used for things like this especially by trolls, but I have to imagine there are methods in the lower-hanging fruit levels available to malcontents to sneak content like this onto unsuspecting users devices. Like, custom ringtone/emoji apps and download packs, random QR codes that lead to downloading a place's menu but also a suspect image, or other usual suspects.
This could be done with iCloud today, given they already scan server side. Both cases require getting a user to save photos to an iCloud syncing location to trigger the scan.
Nobody has put down a convincing difference for this attack yet, but people sure do love repeating it.
I've read this whole situation as a signal to China and other authoritarian regimes that Apple has finally seen diminishing returns from the "Apple is more secure" angle and is now looking elsewhere for growth. It's just business.
I think what we're seeing is Apple betting on using cryptography as part of the product design phase. Apple devices already do weird things like wake up to announce their physical location so that users can find their devices. The thought of a powered down or suspended laptop waking up to announce its location isn't something I particularly want, but Apple users seem to like it.
Anyone who has spent any time in spaces that are strongly encrypted and focused on privacy knows how quickly they become havens for the sort of material that Apple doesn't want associated with its brand. How many "Apple protects child predator" news stories do you think Apple can withstand while still remaining a luxury brand?
Apple's goal here is to have the reputation for end-to-end encryption and privacy while simultaneously not being seen as a phone for child predators. They don't have a lot of options if they want to thread that needle.
I've thought about this space quite a bit, and all options suck. Client side scanning is really the only choice with reasonable tradeoffs. The other option is scanning encrypted photos on cloud using secure enclaves to do the scanning. My guess is that when the tech makes that possible Apple will move in that direction.
I agree that this isn't the best for privacy nuts like me. But the iPhone isn't a blackphone, it's a luxury handbag. The phone isn't for privacy nerds, the privacy is there to make other mobile OS's look cheap and tacky.
They deserve to be raked over the coals for this, there's no world where their current design is a "good" or "right" one.
Child abuse is a serious problem, but building a surveillance panopticon is not an acceptable solution to it. Better investment in education, health care, and reporting hotlines are the way forward to stop this issue at its source.
I think one of the problems here is that reality and perception don't align:
- Apple has over a billion devices out there.
- Child abuse is a rare problem, but with over a billion devices, there will be enough of it for a lot of newsworthy stories.
- Child pornography takes just one abused child for an arbitrary number of viewers. Arguably, by the time you're limiting the number of viewers, most of the harm has been done.
On the whole, I'm not quite sure how the Apple plan will protect actual children from rape (except to somewhat reduce the secondary harm of distribution). I can clearly see how it will protect Apple from bad press, though -- people won't use iPhones to record that.
On the other hand, an investment in education, health care, reporting, and enforcement could significantly reduce the amount of child abuse, but with 7 billion people in the world, no expense would bring it to zero. So long as it's not zero, the potential for bad press is there. Indeed, usually if something happens a few times per year, it receives more bad press than if it happens a few times per day.
Apple has every incentive to (1) be seen as doing something and (2) do things which protect its brand value. Apple has no incentive to invest in education, health care, reporting, and enforcement. Those seem like good things to do, but if anything, if a scandal comes up, those sorts of things are used to say "See, Apple knew, and was trying to buy an out."
As a footnote, if we value all children equally, a lot of this is super-cheap. This is a good movie:
And the problem it portrays could probably be solved with the same finances as the salaries of a few Apple engineers, and a focused, targeted effort to identify child prostitutes, help their families with the economics which force those kids to become child prostitutes, and get those kids into schools instead.
I'm guessing the $100k raised from this film will do more to protect kids than this whole Apple initiative will do.
- Ford has a large number of cars out there.
- Drunk driving is a rare problem but with a large number of cars there will be enough cases for there to be newsworthy stories.
- Drunk driving just takes one driver to create an arbitrary number of deaths.
We would not accept having breathalyzers in every car.
Or to bring it closer to the child abuse problem: would we accept cameras that take pictures of the occupants of the car to make sure that the minors in the car are not being trafficked?
There's a stipulation just above that portion of the bill where the Secretary of Transportation can determine that it is not possible to 'passively' determine if a driver is impaired and decline that rule, so long as they issue a report to Congress as to why.
And I trust Buttigieg to give the issue a solid looking over, but aren't breathalyzers pretty well established as a positive indicator of driver impairment?
Though requiring the driver to blow into a straw doesn't seem particularly "passive"--whatever that means.
That is less invasive than making you blow. But there will always be edge cases.
Imagine a medical condition that makes it look like you are impaired. Now, you have to go to the dealer with a doctor's note to get this system disabled. Or when you want to rent a car.
Or, if there is a case where driving impaired would be better than the alternative. You and a friend are camping in the woods out of cell range, you both have some beers, then one of you trips and gets a deep cut on the leg. Now you have to wait a couple hours before he can drive you to where you can get cell signal; hope you don't bleed out.
> We would not accept having breathalyzers in every car.
Funny you would bring that up. I think the new infrastructure bill requires that for cars built after 2029 (or some other "future, but not that far" date)
That's not the same thing at all. This would be like your car reporting you to authorities if you get into it drunk, turn the key, and step on the gas. It does nothing unless you've committed a crime.
All of the photos that you upload are scanned and hashed. All of the hashes are either sent out for comparison to the database or checked locally. (I do not know which.) That means that for every picture you want to upload to iCloud, you must prove it is not abusive material.
So the equivalent is that for every single trip you take, you must prove you are not under the influence.
That's not true. The photos you upload are hashed, yes, but they're not scanned. Only the hashes are compared and that's done locally. Apple never gets any of your content so your equivalency is completely false. Signatures only get sent if the hashes match known CSAM. Therefore, it's like your car reporting you if and only if you've broken the law.
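To make the "hashed, not scanned" point concrete, here's a rough sketch of what on-device hash matching looks like. I'm using a simple average hash as a stand-in for Apple's NeuralHash (which isn't public), and the blocklist values are made up; in the real design the comparison is also blinded through a private set intersection protocol rather than done in the clear like this.

```python
# Rough sketch of on-device perceptual-hash matching. Average hash stands in
# for Apple's NeuralHash (not public); the blocklist hashes are invented.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale and threshold each pixel at the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of hashes of known illegal images (values invented).
BLOCKLIST = {0x81C3E7FF00183C7E, 0xF0F0F0F00F0F0F0F}

def matches_blocklist(path: str, max_distance: int = 4) -> bool:
    """True if the photo's hash is within max_distance bits of any known hash."""
    h = average_hash(path)
    return any(hamming(h, bad) <= max_distance for bad in BLOCKLIST)
```

The property being traded on is that perceptual hashes stay stable under resizing and recompression, unlike cryptographic hashes, which is also why near-miss false positives are possible at all.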
The equivalent is that for every single trip you take on public roads, you must prove you are following the public road rules - like you do with having to first obtain a driving license, registered car, car insurance, MOT (in the UK), road tax (UK), medical approval if you have certain health conditions.
If you're going to pay to use a hired car, expect to have to show the car hire company sufficient proof that you won't expose them to unnecessary risks. If you're going to pay to use a hired server to store your photos, why shouldn't you demonstrate to the owner that you aren't going to misuse their services or break their terms of service or break the law?
If you want to drive your car on your land, it doesn't need any of that.
So we should mandate a scanner in the car that makes you input your planned route, takes a driver license, has a camera to do facial recognition. It will then connect to a DMV database that verifies the information is correct and then to the insurance database to verify coverage. Check the tax database to make sure that has been paid, check with a medical database to make sure that you don't have any conditions as well as making sure that you have not been prescribed any medicine that says not to operate heavy machinery.
If you are going to hire someone else's car[1], you will need to provide them with your driver's license and the person at the desk will do "face recognition" to check whether it's your license, and they will check with some kind of database - at least their own to see if you've been banned from their premises, maybe a DMV one or their insurance to see if you have points on your license for previous driving related convictions which will affect their decision to lend you a car. Since it's their car they will deal with tax, but they will ask you if you have medical conditions which will affect your driving (or make you read the terms and sign that you haven't). And they will do all this in advance of you hiring their car, and after you're done they will check over the car looking to see if you misused it, and will keep a record of use so if they get informed about a speeding ticket or parking fine in future, it goes to you to pay it.
So ... this is your hellish dystopia, your "boot stomping on a human face forever", Hertz rent-a-car?
[1] analogous to you using Apple's iCloud servers.
Trucks have tachographs which track that drivers aren't driving too long, and are taking sufficient breaks.
> "It will then connect to a DMV database that verifies the information is correct and then to the insurance database to verify coverage."
Wouldn't it be nice to know that if you're in an accident, the other party can't simply say "I'm not insured lol" and drive away and leave you and your insurance to pick up all the costs?
> On the whole, I'm not quite sure how the Apple plan will protect actual children from rape (except to somewhat reduce the secondary harm of distribution).
You bring up the distinction between "possession offenses" (i.e., a person who has CSAM content) and "hands-on offenses" (i.e., a person who abuses children and possibly, but not necessarily, produces CSAM). Detecting possession offenses (as Apple's system does) has the second-order effect of finding hands-on offenders because hands-on offenders tend to also collect CSAM and form large libraries of it. So finding a CSAM collection is the best way to find a hands-on offender and stop their abuse. Ideally, victims would always disclose their abuse so that the traditional investigatory process could handle it -- but child sexual abuse is special in that offenders are skilled in manipulating children and families in order to avoid detection.
I think that the case of USA v. Rosenchein [0] is a good example because it shows the ins and outs of how the company->NCMEC->law enforcement system tends to work and how it leads to hands-on offenders. It's higher profile than most, perhaps because the defendant (a surgeon), seems to have plenty of resources for fighting the conviction on constitutional grounds (as opposed to actually claiming innocence). But the mechanism leading to the prosecution is by no means exceptional.
No. This is not true, and I think I provided a good reference to that effect (it's really quite a good documentary too). A US surgeon engaging in child abuse is a statistical anomaly in the world of child sexual abuse. The best way to find child sexual abuse is to hop onto an airplane, and go to a region of the developing world where child sexual abuse is rampant.
It's not at all hard to find such places. Many children are abused at scale, globally. I think few of those kids are getting filmed or turned into CSAM.
I'm also not at all sold on your claim that hands-on offenders tend to collect CSAM materials either, but we have no way to know.
I am sold on the best way of reducing actual abuse involves some combination of measures such as:
1) Fighting poverty; a huge amount of exploitation is for simple economic reasons; people need to eat
2) Providing social supports, where kids know what's not okay, and have trusted individuals they can report it to
3) Effective enforcement everywhere (not just rich countries)
4) Places for such kids to escape to, which are safe and decent. Kids won't report if the alternative is worse
... and so on. In other words, building out a basic social net for everyone.
We already live in a police state. The federal, state and local infrastructure and resources are mind bogglingly massive. They have laws granting them near carte blanche rights and actions.
We are citizens of our country and we deserve a dignified existence. We are supposed to have rights, and they're being worn away, formally and informally, by our governments and megacorps acting like NGOs.
I'm sympathetic to the overwhelming horrors of drunks, drunk driving, violent actors, child abuse, child porn, economic crimes, etc.
I've done my calculus, and I got my vaccine and I wear my mask in the current circumstances of our pandemic. But applying a similar calculus to what Apple plans to subject a huge portion of our population to, by dint of their market share in mobile and messaging, I personally can't accept the forces at play in this Apple decision, and I'm continually baffled by those who think this is overblown.
Have you imagined what a near-future Mars colony will be like? You can't live on the surface, so it will be as high-tech and enclosed and cramped as a space station; an air-tight pressure vessel with no escape. It will have limited energy and resources so there will likely be rationing. It will be vulnerable to any pressure breach or loss of power, so it can take no risks with mechanical failure, bad actors, disease spread, etc., so it will likely be covered in sensors and surveilled all over. It will likely be funded in large part or entirely by private investors. Musk has estimated $500k for a ticket to go there and people have estimated $3Bn/year for 30 years to keep a base running with no economic return from that.
No government, no police, no Wild West "run them out of town" option. You think they're going to want to spend $500,000 return flight cost to send potential criminals away or just "let them be" in an environment like that?
The idea that you might be able to go there and "demand your freedom" without being a billionaire owner of the colony is ill-thought-out. Subjects will have no leverage and no options, and leaders will have billions sunk into it and demand obedience like a Navy Submarine.
Yes, I've thought about it. I was kind of hoping for a better suggestion.
However, I'd rather voluntarily subject myself to a dictatorship like that than believe all my life I have rights that are sacred, only to look up and find myself in an authoritarian panopticon.
I do harbor fantasies of some day collaborating on a new system of government, or at least laying the groundwork. It's not going to be Musk's planet forever, and the first generation of Martians will be volunteers who want the project to succeed. Which makes it more like the 13 original colonies than the Wild West.
> Child pornography takes just one abused child for an arbitrary number of viewers.
This is the thing that privacy advocates seem to ignore. Measures taken to reduce child abuse won't reduce the circulation of whatever CSAM does get created.
Some even seem to think, a la the ACLU, that viewing child abuse material is a victimless crime, and only the creators of the CSAM should be punished.
I think it’s actually a good way to look at the problem from a different, broader, perspective that isn’t the average HN user and privacy minded individual standpoint. Also, it interprets Apple’s decisions in the wider framework of their B2C business. Apple’s privacy engineers don’t have the luxury of being radical like their critics when it comes to taking a decision like this.
Given this state of things, have they picked the lesser of two evils to solve the thorny problem of CSAM detection? I think it’s fair to say yes, they did, while still criticizing them for it (which is what they were of course expecting anyway).
Nope. If scanning were implemented off the device, in iCloud, as everyone else does it, maybe. But this is an intrusion of privacy on a new "on-device surveillance" level, and Apple deserves the hostile reaction.
No form of apologetics or "technical" explanation can remove this from reality now.
They are betting heavily on their "core" demographics to trust them automatically and without any form of critical thinking.
If this implementation has no effect on Apple's bottom line, things are over.
We will live in a badly implemented version of Minority Report.
If Apple wants to get the same detection ability as server-side, they'll have no choice* but to expand and lock down client-side much more than they publicized. At which point this method is not the lesser evil at all.
* Think about what happens to CSAM uploaded to iCloud before NCMEC tags it. This has to happen for each new CSAM, since NCMEC can't tag what it doesn't see yet.
Surely Apple and NCMEC want to be able to catch these perps (which they easily would have with server-side). Doing it client-side requires expansion of scanning to do much more.
> Given this state of things, have they picked the lesser of two evils to solve the thorny problem of CSAM detection? I think it’s fair to say yes, they did,
The first option is not to encrypt data at all (the current state; server-side encryption does not count); the second option is to use end-to-end encryption with a hidden backdoor. They found a third way: lock themselves out of most of the data, so that, for example, the FBI can't ask them to show some arbitrary images.
>Apple’s privacy engineers don’t have the luxury of being radical
Not doing anything anti-consumer that the law doesn't force you to do is "radical"? I know you're not an astroturfer, but I had to double check because this is textbook astroturfing tactics.
Apple simply does not have to do this, as far as I'm concerned it's obvious they're either currying political favors or being incompetent. It's perfectly fine if they want to run it on their own unencrypted devices, they absolutely don't have to overstep into their user's devices.
But the point is, once you accept something noble and difficult like "preventing CSAM" as your primary overriding goal, then there's nothing that's too far or too extreme if it will help you with your noble goal.
Five years ago, the idea of Apple scanning photos on your phone would have been absurd.
Five years from now, what will people think about hotels installing AI-powered cameras in every room? The vendor swears they only start recording when they detect an act of abuse. It sounds absurd now, but where do you draw the line?
Many (maybe five) years ago Apple launched on-device neural networks to categorize your photos, and this scans all of them, whether they are in the cloud or not. The difference is that we don't know where this information is stored. Still, nobody is worried about that. It is a feature which allows more than this newly added CSAM functionality. If someone wants to misuse that in secret, there is no difference between now and the future, because all we have is trust. Speculation suddenly rises when common political reasons are mentioned.
It does not really matter if the scanning happens on device or iCloud in this situation, because you have to always trust their closed source system.
Google has scanned your images since 2009 in the cloud, unencrypted, but now that Apple makes the situation better, it is suddenly bad. All the tools have been out there already. There are really no other options to get more privacy than this, but people refuse to see that.
Well, there is voting. Vote for people who put privacy over everything. That would make everything easy.
> Google has scanned your images since 2009 in the cloud, unencrypted, but now that Apple makes the situation better, it is suddenly bad.
At the moment Apple's scanning policy is about the same as it was before. They claim they're only scanning photos if iCloud photos are enabled. The change they're advertising is doing the actual scanning process locally.
The problem is twofold. The first is Apple went from scanning only explicitly uploaded content to local content. Since they've decided to intrude on local content once "for the children", it's not out of the realm of possibility (if not likely) they will make further intrusions in the future for prima facie noble reasons. Are third party apps going to be restricted from saving data unless they allow access to Apple's CSAM scanner? Will it start scanning texts or e-mails tomorrow, letting any rando flood a person's phone with CSAM and get them arrested? Adding a local scanning system like this is a slippery slope.
The second problem is the opaqueness of the system. This has multiple sub-problems. While the NCMEC has a laudable goal, involving them in the CSAM scanning process involves an outsize level of trust I don't think they have earned. They have law enforcement's unfortunate disdain for personal privacy coupled with a fanatical devotion to their cause. They believe their actions are always correct and just so long as they supposedly serve their goal of "protecting children".
Due to the opaque nature of their content library it's not crazy to think repressive regimes will get self-serving content added to the source libraries for CSAM scanning. There's plenty of places where homosexuality is punishable by death and even mildly anti-government content will land you in jail. Obviously you and I can't go look at NCMEC/ICMEC CSAM libraries to check for falsely added content. So how are we supposed to trust a system run by fanatics to not have simple errors?
Which leads to the other opaqueness sub-problem. Apple's design is interesting, if not laudable, but is closed source and full of black boxes. PhotoDNA, NeuralHash, and the like are not published algorithms anyone can verify. We don't even have a way of knowing if some image we have has tripped a false positive, and we have to trust Apple's unknown "threshold" isn't 1. So not only does the public, the subject of these new intrusions, have no way of auditing the database, but they have no way of auditing the code or process. A stupid bug in the scanning system could get a user reported to Apple, which we then have to trust not to forward them to law enforcement (and not to have additional bugs in their reporting system) and ruin their life.
So I am concerned with scope creep and bugs/false positives. I can live with a bug that causes video playback to stutter or a black box system in Maps that gives me the wrong hours for a restaurant. It's much harder to live with bugs that can get me arrested or even killed thanks to trigger happy police. Apple's system might be technically adept but their promises of future behavior aren't trustworthy since they've already changed their behavior with this new system.
> At the moment Apple's scanning policy is about the same as it was before. They claim they're only scanning photos if iCloud photos are enabled. The change they're advertising is doing the actual scanning process locally.
The major difference is that they no longer have access to other images as they used to. They leave the device encrypted. Images used to be plaintext in the eyes of Apple.
> The problem is twofold. The first is Apple went from scanning only explicitly uploaded content to local content. Since they've decided to intrude on local content once "for the children", it's not out of the realm of possibility (if not likely) they will make further intrusions in the future for prima facie noble reasons. Are third party apps going to be restricted from saving data unless they allow access to Apple's CSAM scanner? Will it start scanning texts or e-mails tomorrow, letting any rando flood a person's phone with CSAM and get them arrested? Adding a local scanning system like this is a slippery slope.
Emails have been scanned for a long time in the cloud already. The rest is only speculation and against what they have told us. It might be hard to trust, but in closed systems it is all we have. We should be worried when they actually say or start doing that.
> There's plenty of places where homosexuality is punishable by death and even mildly anti-government content will land you in jail.
It is fair to not trust third parties (NCMEC/ICMEC), but Apple is responsible for making the algorithm and testing that. Misuse must be part of their tests at this level. iCloud photos used to be plaintext so this hasn't changed from that perspective. If there is evidence that they are scanning other images outside of iCloud as well, then we should get the pitchforks and torches.
> We don't even have a way of knowing if some image we have has tripped a false positive, and we have to trust Apple's unknown "threshold" isn't 1. So not only does the public, the subject of these new intrusions, have no way of auditing the database, but they have no way of auditing the code or process.
This isn't true, since all the math of their system is public and available here: https://www.apple.com/child-safety/pdf/Apple_PSI_System_Secu...
But the code is as closed as it has always been. You have the same level of trust as for iMessage E2EE or even the screen lock of your phone.
Due to the way the system is expected to behave (it only looks for existing matches from the provided data, with certain modifications), it is certainly possible that a 1-in-1-trillion false positive rate is reachable, because they can validate it during development. They are not developing some AI to match totally new wild images. There is human validation, so nothing automatically triggers the police.
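For a sense of how a match threshold gets you there: if each photo independently has some small false-positive probability against the hash list, the number of false matches in a library is binomial, and the tail shrinks extremely fast as the threshold rises. The library size, per-image rate, and thresholds below are assumptions I picked for illustration, not Apple's published parameters.

```python
# Back-of-the-envelope: probability an account with n photos reaches a match
# threshold t, assuming each photo independently false-matches with probability p.
# All numbers below are illustrative assumptions, not Apple's published figures.
from math import comb

def prob_at_least(n: int, p: float, t: int) -> float:
    """P(X >= t) for X ~ Binomial(n, p), using the pmf recurrence to avoid overflow."""
    term = comb(n, t) * p**t * (1 - p) ** (n - t)  # P(X = t)
    total, k = term, t
    while k < n and term > total * 1e-18:
        term *= (n - k) / (k + 1) * p / (1 - p)    # step from P(X = k) to P(X = k + 1)
        total += term
        k += 1
    return total

if __name__ == "__main__":
    n = 10_000   # photos in the library (assumed)
    p = 1e-6     # per-image false-positive rate (assumed)
    for t in (1, 5, 10, 30):
        print(f"threshold={t:>2}: P(account flagged) ~ {prob_at_least(n, p, t):.3e}")
```

With these made-up numbers, a threshold of 1 would flag roughly one account in a hundred, while a threshold of 30 is astronomically unlikely to be crossed by accident, which is presumably why nothing is reportable until multiple matches accumulate.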
Apple isn't required to stop redistribution of existing material; they're required to make a report if they have actual knowledge of users possessing or distributing apparent CSAM.
This is different from what your comment implies in two ways. First, they do not have an obligation to actively look for CSAM; they only incur an obligation if they find it. Second, the obligation applies to apparent illegal content rather than known illegal content. What qualifies as apparent could end up in court.
> Apple isn't required to stop redistribution of existing material; they're required to make a report if they have actual knowledge of users possessing or distributing apparent CSAM.
This isn't that simple. If NCMEC comes with the properties of CSAM (e.g. hashes) and asks the provider to remove specifically those from their cloud, it is hard to remove them without looking for them. This is different from an obligation to actively look for CSAM in general.
Can you cite a statute that requires a provider to look for hashes when NCMEC asks them to?
If NCMEC told a provider that a specific URL (or similarly unique identifier) contains CSAM, the provider would be obligated to destroy the associated file or be guilty of possession/distribution because at that point they know what they have. That's different from NCMEC providing hashes that could identify files the provider may or may not be storing.
> I've thought about this space quite a bit, and all options suck... I agree that this isn't the best for privacy nuts like me. But the iPhone isn't a blackphone, it's a luxury handbag.
Privacy isn't a toy for nerds, though. It's not even a luxury item. It's a need and a right of all people. There is a good option: keep people's stuff private. It's the only option.
I'm relatively extremist on privacy and user autonomy in general. But I hesitate to say that privacy is so fundamental that people shouldn't be _allowed_ to decide they're okay with it.
It's been a while since the ruckus about privacy from techie types penetrated the public discourse, and I think this is a very good thing. The non-tech-savvy people, if anything, overestimate the degree to which their privacy is compromised, convinced that every sound they make within earshot of their phone is scraped for ad targeting.
But not one of the people in my anecdotal dataset changes their behavior on this basis, nor even seems to be particularly bothered by it. I don't think you can even chalk this up to technical ignorance. Bush's warrantless wiretapping had something like 40% approval, and that wasn't even transparent or consensual!
It really does appear there are a massive number of people out there who look at the current cost/benefit tradeoff of compromising their privacy and decide that it's worth it. Awareness is still important, but I don't agree with your suggestion that everyone be effectively coerced into accepting the tradeoffs that you or I accept.
> massive amount of people out there who look at the current cost/benefit tradeoff of compromising their privacy and decide that it's worth it
I don't agree with this characterization - it's too willful. To me, it seems more like a helpless coping mechanism. Since they "overestimate the degree to which their privacy is compromised", they resign themselves to not being able to do anything to protect their own privacy. The phone is listening to them, the satellites are tracking them [0], websites are recording them - basically every electronic device they encounter is not under their control. Their privacy is already gone.
Then, they watch TV and see actors using surveillance systems to capture Really Bad People. Since they've already resigned themselves to the collection, the only thing they have left is to hope that said surveillance results in things that are good and just. And when you try to bring up real-world problems, they revert to coping mechanisms of how it doesn't bother them - because if it did, they're still ultimately powerless to change anything.
To cross this divide, I think we need to give people actionable, packaged-up solutions they can adopt to protect their privacy. Part of the difficulty is that most people use their phone as their primary communication medium, and the phone ecosystem is a privacy dumpster fire. I don't have a recommendation for increasing phone privacy besides LineageOS+microG, and also: stop using your phone so much - do most of your communicating from a real computer running Free software.
Incidentally this is why this Apple news is so terrible - they had seemed to plot a course for more user privacy. Even with Apple retaining control, it could have let people see there can be boundaries. But now they've basically thrown away user empowerment in favor of putting a government agent on every phone. And so we're right back to the understanding of "everything I do is surveilled".
[0] I'm obviously describing their perspective. I've tried to explain to people that GPS satellites do not themselves track you, but rather let your phone figure out where you are. And by them taking an interest in the software on their phone, they could prevent it tracking their location. But I generally hit a wall of cognitive dissonance where the "satellite tracking" was really just some talking point, rather than something they think they could prevent.
> I don't agree with this characterization - it's too willful. To me, it seems more like a helpless coping mechanism. Since they "overestimate the degree to which their privacy is compromised", they resign themselves to not being able to do anything to protect their own privacy. The phone is listening to them, the satellites are tracking them [0], websites are recording them - basically every electronic device they encounter is not under their control. Their privacy is already gone.
I don't doubt that some contingent of the market feels this way, but I'm positing the existence of a large section of the market that truly doesn't really care that much about privacy. There's a reason that privacy advocates spend so much time arguing against "if you're doing nothing wrong, privacy doesn't matter", and it's because so many see big institutions (tech cos, banks, gov't) as detached institutions that for the most part do the right thing. It's the same reason that most people don't have a coherent sense of government's monopoly on legitimate violence and coercion: instead of grappling with the nuances and trade-offs of this bargain, it's easier to just model them as "the good guys".
Naturally, I'm going off of my perception here, as there aren't well-defined statistics that would give us a more reliable sense of the attitudes towards privacy that affect (or don't affect) people's purchase decisions. But a high enough proportion of my non-tech-employee acquaintances are unbothered by privacy concerns that I have to at least acknowledge that they likely represent a non-trivial segment of the market.
> they're still ultimately powerless to change anything.
This doesn't comport with my experience with these people. One finds niche cases here and there where the trade-off for privacy/autonomy provides a pretty decent ROI. I've occasionally been asked about some of these decisions of mine. In those conversations, the people I'm talking about don't look at these trade-offs and decide that the effort isn't worth the privacy benefit: they hear that the benefit is privacy and immediately go "oh this isn't relevant to me".
> The other option is scanning encrypted photos on cloud using secure enclaves to do the scanning.
This doesn't work because secure enclaves only move trust from the software developer to the hardware manufacturer, who has the code signing keys to update the firmware on the secure enclave. Which in this case would still be Apple, or someone equivalently [un]trustworthy and subject to external coercion.
Yes, thank you. This is the only theory that explains observed reality.
I'd add that they probably consider the scheme to be better than the alternative (which is how others do it, including Google IIRC), namely checking photos once they have been uploaded. They have gone to some lengths to do more on the device instead of uploading user data, in Siri for example, but also Photos.app face recognition etc.
But many security researchers and analysts seem convinced this system will sweep up innocent people. Will it take only a single arrest of someone who happens to get a copy of their iCloud account to their lawyer, instantly proving their innocence, before Apple is destroyed? Or will it take two?
There's no way the Chinese govt won't abuse it to track down whistleblowers (i.e. pictures of leaked documents) or dissidents who have pictures that are unapproved by the authorities (Winnie the Pooh memes, Tiananmen Square, etc).
"Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it."
(Reminder: if you don't trust what they say, you can't trust that they haven't been doing this for years already).
Yeah, and we have to remember that Americans might not be the true literal-target audience for this. Catching a few CP creeps in the USA might be sort of like a field demonstration of a weapons system. After the demo the system goes on sale to its true customers, namely regimes with "re-education camps" or who like to dispose of critical journalists with bone saws.
Many other American companies have done business with totalitarian regimes over the years. Maybe there's too much money in that market for Apple to pass up. Given the growth of totalitarian strong men across the world it's probably a growth market these days.
Payment may not be overt. It could also come in the form of access to markets. The deal might be that Apple must demonstrate the ability to help a regime hunt down dissidents before it can sell domestically, or they could be offered a break from otherwise onerous import or sales taxes.
I stopped using stock Android and went back to iPhones because I thought Apple cares more about privacy than Google does. Not exactly correct in all cases I know (ie. they both suck in terms of privacy), but it seemed like Apple users are buying in, so it might work.
Now I think my next mobile OS is going to be GrapheneOS.
Like others have mentioned, this is as big of a warning as anyone's going to get to get out of that locked-in ecosystem. On that note, the outrage is kind of useless if you don't skip buying the next iPhone. You should fully own what you fully pay for.
> Because in the EU [apps] are required after the PSD2 directive mandated "strong" auth requirements.
No. I recently set up an elderly neighbour's online banking access. She has a laptop for some clerical work and email, but uses a dumbphone only.
All the bigger banks I have been using offer a hardware device to generate authentication codes. These usually come with some sort of camera (there are multiple systems) that reads a code off the screen, and they require your bank card.
I am sure not all the banks offer this, but it's so much better than some stupid app.
Anything that completely blocks Play services is a win in my book.
Sweden is pretty much a willingly fully cashless society and you can still function normally without banking mobile apps, or with a stay-at-home-usually-off phone that you can use for bank app when needed, though at that point you can just use the website.
Banking apps don't like rooted phones either, even if they do have GApps. My bank's app, though, isn't much different from their mobile web experience, so I just use that.
You might be able to hide root using Magisk(?). It at least worked the last time I tried, and the SafetyNet checks passed without issues on LineageOS.
For such things, you can have a dedicated device that runs the locked down proprietary stuff. Motorola sells some very cheap Android devices that would work for this purpose.
I'm amazed that banking apps are the make or break example for so many people when choosing a non-apple/non-google mobile OS. How much banking is everyone doing on-the-go?! Just use the website! Use the mobile website if you must!
All the banks I use (in Brazil) either only offer a mobile app to access the account, or they require the bank's app to use the site (the app generates a 2FA token; you can't use a third-party 2FA manager like Authy).
Most banks only allow check deposit by mobile (as opposed to desktop), but yes, desktop can do a lot. I'm incorporated, and some of my clients (usually new ones) will pay with a check, so without mobile I'll have to visit a branch/atm to deposit those.
And my banks still use SMS MFA, so I could login online using a dumber phone.
So you can't log in to a bank website in the EU without having their app on your phone? WTF. Really? When did that happen? I know the UK isn't in the EU anymore, but NatWest (a UK bank) doesn't need me to have an app -- I can get an SMS code.
No, but (at least in my country) most banks have used the PSD2 directive as a way to phase out HW tokens and force their proprietary apps to customers. But PSD2 does not really say this is required to comply.
The only alternative they offer is SMS-based 2FA which, unsurprisingly, often has an additional cost.
A cheap/used tablet is like $50-$100. Put your online banking apps and other surveillance-based-apps on it, and generally leave it at home. Mine has a red label on it that says "Full Take".
You're then free to secure your mobile device with things that will best protect your location and communications, without worrying about lazy/invasive apps complaining.
Also if you get mugged, an attacker can't make you sign into online banking and see that you have a bunch of money sitting in your accounts.
That is an interesting insight about the banking apps. The hypothetical mugger could make you log in on the web version, though, so you would also need to keep your two-factor banking items off the personal phone too.
Why only "for now" ? To me, compartmentalization is the future, as proprietary apps and platforms become ever more locked down. You'll have your "business device(s)" that engages with that world, that you're forced to compromise your Freedom for in various ways. And you'll also have your Free computers that run subversive freedom-preserving software that you can treat as trustable extensions of your own mind.
Back when everyone had a single device, trying out Linux used to be such a trepidatious affair because you had to write down all the installation steps, make sure you had the install media in good order, and hope that you'd come out of it with a computer that still booted some OS. These days $20 will get you an independent machine capable of running Linux, and you can tinker to your heart's content without affecting your existing environment.
"For now" because they are making it harder and harder to do it, and so, it's not a stretch to imagine a day where you simply can't do it. Or you can, but it will be rendered meaningless. For examples, look at how hard it is on phones to change the OS and even then, we're at the mercy of the AOSP remaning open and up to date. On desktop, Secure Boot is yet another wrench thrown into general computing. Or, nowadays, how do we really separate two browsing sessions? A private window is a good shot, but still there's ways to connect that to your normal browsing session.
The protocols and methods that can make the current ecosystem free feel like holdouts from a world that's passed already. I really really wish that the future is not closed, that we can still host our email and websites in 50 years, but when all key players want a closed and surveilled ecosystem, it's hard to imagine that remaining open is not a struggle.
These developments are precisely why I think compartmentalization is the future. The lockdown trend is happening, for off the shelf productized solutions. Simultaneously, it has never been easier to obtain computing hardware that can run code of your choosing.
So to the extent that you need to engage with the locked down world, you need a locked down terminal that behaves like everyone else's (to within some margin that they keep trying to shrink). And to the extent you want freedom (computational autonomy), then you need a real computer that lets you run whatever code you want.
If the Internet became locked down to specific protocols, that could be a different story. But the same bifurcation seems to be happening to the network. MITM webapps are going censor-happy and implementing IP blocks and eager captchas, yet it has never been easier to set up a VPS running whatever protocol you want.
There's probably no reason for 5G: it's expensive and most people have little need for it, since 4G is already very fast. Phone data is limited and expensive, and even on an unlimited plan it slows down with heavy use, so there's no point in a faster connection. It also adds a high cost for the modem and isn't yet available in most locations.
Don't you think, though, that trusting a company based on what they say is less reliable than trusting a company based on what they do? Apple is a closed ecosystem, therefore you have only been able to trust what they say, not what they do.
Apple is smart enough to determine what's in a photograph, but if I paste text into a sentence in iOS I still need to manually add spaces and format punctuation.
I will never forget the time iTunes deleted my music library, or its inability to deduplicate identical songs.
I hate hate hate iTunes. My biggest gripe with iPhone has always been that I don't have direct access to my OS files. I have to spend an inordinate amount of time 'syncing' my device instead of just copying a few tracks to it.
This is why I don't have my large collection of CDs on my phone and just use Pandora.
Going back to iPhone 3 days, iTunes did not allow me to import my CDs onto my device and play them as entire CDs.
If I want to listen to Mozart's "Eine kleine Nachtmusik" or Pink Floyd's "The Wall", it's a nightmare. iTunes is song-based, not album-based, and the above, like many others, are works you pretty much listen to in order as recorded. In some cases (The Wall, the Brandenburg Concertos, etc.) the works span multiple CDs.
I stopped using iTunes and storing music on my iPhone because of this. I don't enjoy music the way Apple seems to think you should. I have no clue if they fixed this since iPhone 3 days. I would not be surprised if they have not.
In sharp contrast to this, I have no problem playing single or multi-CD works as intended using Windows Media Player on my desktop, where I have my entire CD collection stored.
This, for me, is the single reason I would instantly jump into a Windows phone if Microsoft got their heads out of their asses, committed to doing a good job and integrated a phone experience with the desktop. They would have to regain my trust, but as a life-long user of both Apple and MS desktop products, I would absolutely welcome a better phone experience than Apple has delivered over the years. I really want to abandon iPhone and go to a good Windows phone, but MS does not seem interested in creating that opportunity.
Oh, yes, and to address iCloud: back in the early days it managed to delete not only whatever I had in iTunes (which I own on CDs, so I don't care) but all of my contacts. Thankfully I had my contacts stored in my prior phone (I think it was a Blackberry). After disconnecting from iCloud I entered them manually and never again enabled iCloud, all the way up to my current iPhone X.
Apple did worse than delete my iTunes library. It just got corrupted in such a way that most of my music was unavailable, and the metadata got scrambled in bizarre ways. I would have e.g. Daydream Nation but only 5 tracks, and the artist was Bob Dylan, and the cover art was a Freddy Gibbs album. Playlists had tracks removed - sometimes completely emptied out. It was just like that one day, and all I'd done around that time was play music.
The same kind of problems existed on my Mac, iPhone, and the web UI, but each one had its own set of fucked up metadata.
I had the same music library since iTunes 1.0, moved from one Mac to the next for almost 20 years. I wanted my play counts and the last time I played things and all of the nice metadata that iTunes used to have.
Worst of all, I couldn't restore from a backup because the cloud library becomes the canonical library as soon as you enable it. I tried restoring my library from a backup, but as soon as the cloud library synced, it would screw everything up again. As long as I was offline, my restored backup was in perfect condition.
Apple support was useless of course. They just told me to delete and add my music again. That's thousands of songs, and doing that deletes the metadata I wanted to keep. All I wanted was for them to reset my cloud library as if I'd never synced anything, so my working library could sync up. The only way to do that would be to stop my subscription and then subscribe again.
I spent probably 30 hours trying to fix it but ended up just accepting that my personal metadata was gone, other than existing on the last working backup. Doing what Apple support suggested was not reliable either. Deleting music in one UI had unpredictable effects in another. Adding the music I own, ripped from CDs, failed often when Apple tried to match it and then get the various libraries in sync. I had to remove and import some albums 3 or 4 times before it was consistent across devices and the web UI.
My library is still fucked in a lot of ways, but the music I want to hear most often is mostly there.
The thing that prevents me from just going back to syncing to my phone is that some music is only available from their music subscription, and that can't be synced over a wire.
They're also the only streaming service that will actually upload music it can't match. I'd move to Tidal or Spotify or somewhere else if any of them offered that. I want to be able to hear my obscure music in the same app as I use for big label music.
I have had something similar. I was trying to import/convert a bunch of FLAC files (which Apple can't read and requires to be converted to Apple Lossless).
But I tend to be cloud-averse, so my library is on a NAS.
When my Mac mini blew it up, I had to restore my entire music library. Then every single file could not be found. So I deleted the entire iTunes library (which deleted the music folder structure; I thought I told it not to, but it could have been my fault). So I restored the directory structure again, and then had to re-import and rebuild the library.
All playlists were gone. Play counts, favorites, checked songs (because iTunes loves to convert things and make duplicates) were gone and needed to be redone.
And now it seems it won't recognize FLAC files anyhow.
I have been meaning to set up something like Navidrome and be done with it, but I don't really like the thought of another self-hosted server.
I think the question is: since it is a secret database that contains pictures that are illegal to view, how can you be sure that everything in the database is actually bad?
Don't like it? Call your Senator. Apple has to comply with the law.
"You’re going to find a way to do this or we’re going to do this for you.
We’re not going to live in a world where a bunch of child abusers have a safe haven to practice their craft. Period. End of discussion."
- Sen. Lindsey Graham
In which photos are scanned on the user's device. This appears to be a report of a new press conference after the initial announcement? Does anyone have a transcript of this press conference?
Nothing Apple says about this modern-day surveillance tool will make me more accepting of it. If you think this isn't about establishing complete control of your communications, you are a fool. If you think this is about protecting the children, you are a bigger fool.
I do not want AI making such decisions affecting humans, no matter how good it is. I also don't want John from Apple looking at my profile and assigning me a score on a scale of 1-10 of how "pedo" I am likely to actually be.
What I actually want is for people to stop thinking that technology will solve every human problem we have.
You have to be either naive, conceited or just lazy (avoiding the real work) to actually believe this is possible.
Just an FYI for everyone: you can use a local backup system with iOS. Fully encrypted local backups over WiFi (connect your iPhone to your Mac or Windows iTunes, use full backups - encrypted, and enable backups over WiFi).
Your phone will back up when charging overnight on the same WiFi network as your designated backup Mac/PC. The backup files are encrypted with a separate password chosen when you set it up, so security doesn't rely solely on keeping your backup computer secure.
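For anyone who would rather script this than click through iTunes/Finder, here is a rough sketch of the same idea using the open-source libimobiledevice tools instead of Apple's own flow. This is an assumption-heavy illustration, not Apple's supported path: the idevicebackup2 subcommands should be checked against the version you have installed, and the directory and password are placeholders.

```python
# Hedged sketch: drive encrypted local iPhone backups via libimobiledevice's
# idevicebackup2 instead of the iTunes/Finder checkbox. Subcommand names are
# assumptions to verify against your installed version; the directory and
# password below are placeholders.
import subprocess
from pathlib import Path

BACKUP_DIR = Path.home() / "iphone-backups"      # hypothetical location
BACKUP_DIR.mkdir(parents=True, exist_ok=True)

# One-time: turn on backup encryption with a password independent of the
# computer's own security (same idea as the "encrypted backups" checkbox).
subprocess.run(
    ["idevicebackup2", "encryption", "on", "placeholder-backup-password"],
    check=True,
)

# Recurring (e.g. nightly from cron/launchd while the phone is paired):
subprocess.run(["idevicebackup2", "backup", str(BACKUP_DIR)], check=True)
```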
Which is handy for us geeks who stereotypically would have a higher chance of running a computer 24/7 and so backing up locally can be "just as painless as iCloud" for us. However it doesn't really help the other 99.9% of iOS users.
Even if they did, you then have the chance of the user forgetting the password they used to encrypt the data, simply because you only need the password when you a) want to change the password or b) need to use a local backup.
Side note (which, OK, isn't entirely unrelated to backups :-P): when I took my phone in for a battery swap (Apple did it for free so I didn't bother replacing it myself) they asked if I had backed up my phone, as there was a tiny chance they would have to wipe it. I said I had. When they were booking it into the system, the person questioned me on the backup because his software wasn't showing one - he was only looking for iCloud backups.
It's great that this exists (and it's good that you're pointing it out!).
But it's pretty basic, unfortunately. If Apple would just spend a bit more effort, running iPhone backups to your Mac via WiFi regularly would be totally viable. It still is, but it's not very convenient.
For example, I cannot exclude certain categories from these backups. I'd like to exclude photos, since they are already on my computer and I don't want needless duplicates of them. Same with e.g. downloaded podcast episodes. Similarly, it seems like I can't back up my contacts, since those are already in iCloud.
> I'd like to exclude photos, since they are already on my computer and I don't want needless duplicates of them. Same with e.g. downloaded podcast episodes.
The biggest iPhone is 256 GB. Hard drive space is cheap. Very cheap. So cheap that I would rather backup programs stopped trying to be so clever to save me space - or stopped allowing me to be that clever.
Who needs SWATing when you can send a CP pic (either real, or a hash collision as per the thread a few days ago) from a virtual overseas number/service and get an FBI van to show up as well?
What about injecting code into a public website to download same pic into local browser cache without user’s knowledge?
The simplicity of the attack vectors here that would trigger the “manual” investigation is just dumbfounding and ripe for abuse/misuse.
The reported response from Apple offers little reassurance:
> The executives acknowledged that a user could be implicated by malicious actors who win control of a device and remotely install known child abuse material. But they said they expected any such attacks to be very rare and that in any case a review would then look for other signs of criminal hacking.
What triggers them to look for signs of criminal hacking?
Does every manual review process involve such checks?
Are they searching device backups for indicators of compromise [IoC]?
What if there's no device backup or device image to scan?
What if the scan fails to notice IoC?
What if the device was compromised after the last backup?
What if the device was compromised via physical access?
What if the device isn't compromised and the material was pushed maliciously or via drive-by download?
It's dangerous to assume that all material on a network-connected device arrived with the consent of the user when it can accept incoming messages from strangers, trick people into downloading files, or be compromised without your knowledge.
“That isn't mine” is going to be a tough defence if you can't even take measures to log where content came from.
Client-side scanning seems to amplify this issue (which could still happen with cloud storage) because at least cloud storage doesn't generally ship with or integrate deeply with messaging apps, social media, a web browser, QR codes, App Clip Codes[1] etc.
The impact might be fairly low right now with the current proposal (images would have to be uploaded to iCloud, so cached browser images don't get scanned as far as we know), but the existence of the non-consensual scan in the first place is worrying, because it means such attacks are only a policy change away.
> The executives acknowledged that a user could be implicated by malicious actors who win control of a device and remotely install known child abuse material.
Since Google has been scanning your account for kiddie porn for the past decade, wouldn't this apply equally to Google accounts?
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account
No, but the person who sent that message could get in trouble.
In the case you linked to the person was reported for sending email to a friend with attached CSAM, not for receiving it.[1]
Apple's system scans images client-side if they're due to be uploaded to iCloud. That process can happen without user consent or action. For example, WhatsApp and other messaging apps save images to photos, which are auto-synced to iCloud. (If you use WhatsApp and iCloud you'll find your Photos section full of memes from WhatsApp group chats when you log in at icloud.com, for example. This was a surprise to me at first.)
So the risk of malice seems higher with Apple's system than with the long-running PhotoDNA implementations backing Gmail/Google Drive/OneDrive etc.
Gaining access to someone's email and sending attached CSAM is likely to cause them more issues than receiving it. But that's harder because you need their login info and not just their email address/phone number, which is all that an attacker potentially requires to trigger action from Apple's automated scans.
> The investigation was apparently sparked by a tip-off sent by Google to the National Center for Missing and Exploited Children, after explicit images of a child were detected in an email he was sending.
Right, the sender isn’t going to use their own email address in an attempt to incriminate you. My point was that receiving material by email from a stranger doesn’t make you liable for its contents (unless there is a record of you requesting the content). It makes the sender liable (if they can be traced).
Apple’s approach does not seem to provide the same safeguard. Your account will be flagged for review if there are n flagged images destined for upload on your device. The description of the process does not mention if or how provenance or intent to receive those images is established.
I mean, you'd think that would be how it works, but say a system found the image stored in your mail's temp directory and notified the police: do you think they would be that interested in finding the person who sent it, or do you think they would think, "You had kiddie porn on your phone, that's against the law. 30 years." Win.
Is there even a way to get iCloud or Google photos on the iPhone to only upload photos taken with the camera, to not spam one's photo account with chat garbage?
I was trying to figure out a way, but got side-tracked on the issue; then my phone got stolen and I lost a bunch of family/baby pictures (thanks Google/Apple).
Gmail blocks incoming messages that contain CSAM, so you don't actually have this concern. It's similar to if someone tries to send you an email with an attachment that has a computer virus. It will never reach your Gmail account - not even your Spam folder.
(In the virus case, they also do a second scan when you open the message - with updated virus definitions to catch new viruses).
Since Google is saving a history of what you purchase from third party merchants by scraping invoices and receipts sent to you through your Gmail account, it's safe to say that they are scanning your emails.
Google scans all photos in the cloud (Gmail, Drive, Google Photos) for CSAM and has for a long time. It just doesn't show contextual ads against email anymore, since those all sucked.
> This material is prosecuted under a "strict liability." It doesn't matter how you got it, you're liable.
You're overselling it.
First, there is a statutory affirmative defense: if I obtain CSAM and "promptly and in good faith" delete it or report what happened to law enforcement, liability does not attach.
Additionally, federal laws are clear that you have to knowingly receive CSAM. That's not just a legal flourish or a word – knowledge is an element that a jury or judge will rule on. If I ask you to send me an illegal video and you do, we've both knowingly violated federal law. If you send me to a webpage that purports to offer me a job, but actually has images hidden with CSS to poison my cache, I've not knowingly received anything.
Something slightly different but very related happened to a senior police officer in the UK. She was sent a WhatsApp message by her sister containing horrific CP. It was captioned with a message asking people to circulate it to identify the adult in it, and probably those who sent it around (including the sister) were acting in good faith, but it was still illegal to send or even possess it. No doubt the originator of the caption was a deliberate troll.
She was found guilty of "possessing an indecent image of a child". [1] She tried to argue that she hadn't noticed the message, but it's not surprising that wasn't believed, given that she had immediately replied to her sister saying "please call". She was sentenced to 200 hours of community service, and was originally sacked from her job but recently reinstated after appealing. [2]
It seems that she wouldn't have been in trouble for receiving the message ... so long as she had immediately reported her own sister for distributing it, even though it's clear that she hadn't deliberately done anything wrong. (In fact the sister had contacted her to ask what she should do about it. Presumably her answer would have been "don't have already sent it to me!")
To be fair, this is partially because the laws in the UK are, I think, fairly bonkers strict about CSAM - mere possession, whether you've looked at it or not, whether you downloaded it or not, whether you even know it's there or not, etc., is counted as criminal.
As I mentioned in the last paragraph, it seems that she would've been cleared if she'd been able to convince the jury that she didn't know it was there. And she would've been cleared even if she had seen it, so long as she'd reported it (although that would of course have got her own sister in trouble, even though the sister was acting in good faith).
> At the same time, because of the First Amendment, child pornography offenses are not "strict liability" crimes like statutory rape: in order to convict a defendant, the government must prove that the defendant knew the material involved the actual abuse of a child
That varies by jurisdiction. Some US states require criminal negligence or offer affirmative defenses with regard to the defendant's belief as to the victim's age.
I think it's surprising that society does not want to talk about CP and is content with just locking up whoever they find and throwing away the key. That's a pretty shambolic response to something so common. No offense, but we spend way too much time and resources on arguably useless social issues instead of hard questions like CP and what causes it. Even the academic literature is sparse, but I would argue we need more people finding answers, and we might learn something about the human condition - rather than putting so much money and intellectual capital into crap like cyberbullying or transgender pronouns or mental health. Not that those aren't important, but they are low-hanging fruit. We need to get our priorities straight and tackle the hard questions instead of this absurd head-in-the-sand approach to uncomfortable topics. FFS.
(It's a pity your comment was downvoted when it was the only meaningful reply. As always, we'll never know why. Maybe the downvoters didn't get the sarcasm. Or maybe they think handing your sister to the police when she asks for your help is the right thing to do...)
Yes. This reminds me of when typing or receiving certain text would make an iPhone crash - but now having your account deleted makes it a feature. For example, WhatsApp automatically downloads media to the camera roll, which then gets uploaded to iCloud. Of course that can be turned off beforehand, but this is like what happens with backing up: people want to back up, but don't invest the time in it - until it's too late, they've lost their data, and now they want their stuff back.
- Apple: “Back up your phone to iCloud, it will be safe there.”
- 5 minutes later: “We’ve wiped your account because of a photo of (porn actor here) which is not CP, but she was technically a minor at the time she filmed.”
- “Also we’ve wiped your iPhone because we couldn’t knowingly let you keep that. Good luck contacting your parents, we’ve deleted your contacts. Good luck! PS: We’ve reported you to the police.”
We have a Tumblr set up for family to view pics of the kids. Several photos and videos of our kids when they were under 2 were taken down either temporarily or permanently by their CP algo.
These were pics or videos of the kids in the bath or without a shirt. In none of them could you see bum or bits - just a semi-naked baby.
Algorithms like this get things wrong all the time.
That was just an MD5 collision - an image that has the same MD5 hash as some other image (in this case some CP). This is an uncommon yet possible thing - see this example[0].
Yeah, vaguely talking about MD5 as "broken" is common and misleading. There are very particular known attacks.
Obviously nobody should be using MD5, but it can be useful to understand there are circumstances where it's basically reliable unless you have an extremely sophisticated attacker.
Yes, hash collisions definitely occur. There is no such thing as a collision-free hash, and MD5 is definitely broken.
Even though the author says there were 3 million MD5 hashes the second time, the first time he calls them SHA1 and MD5 hashes (and SHA1 is considered weak too).
I wonder what kind of hashes Apple is planning to use. Will it be whatever is made available to them or will they only accept (what is now considered) secure standards?
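For anyone wondering what matching against a hash list even means in the simplest case, here is a minimal sketch of a cryptographic-hash blocklist check - illustrative only, not any provider's real pipeline, and the "known bad" bytes are a stand-in. It also shows why plain MD5/SHA-style matching is brittle: a single changed byte produces a completely different digest, which (along with the collision attacks mentioned above) is part of why providers moved to perceptual hashes such as PhotoDNA or NeuralHash.

```python
# Minimal sketch of a cryptographic-hash blocklist check - illustrative only.
import hashlib

known_bad = b"stand-in bytes for a known illegal image"   # placeholder content
blocklist = {hashlib.md5(known_bad).hexdigest()}           # hashes are shared, not the images

def is_flagged(data: bytes) -> bool:
    return hashlib.md5(data).hexdigest() in blocklist

print(is_flagged(known_bad))             # True: an exact copy matches
print(is_flagged(known_bad + b"\x00"))   # False: one extra byte, entirely new digest
```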
Which may contain the hashes of their photos: because they've been taken down in the past, they have probably been added to certain blacklists that may have been integrated into the black box of NCMEC's database.
NCMEC's CSAM database already includes images that are not necessarily illegal. If _your particular_ photos have been flagged in the past, they may well be part of the database.
> NCMEC's CSAM database already includes images that are not necessarily illegal.
How could this be the case? If it's been determined to be CSAM then it is, by definition, illegal.
If it were true that the database is likely to contain legal material, how would we possibly know about it, given that the contents of the database are secret?
> How could this be the case? If it's been determined to be CSAM then it is, by definition, illegal.
Certain images are CSAM by _context_. They do not necessarily require those within the image to be abused, but rather that the image at one time or another was traded alongside other CSAM.
> If it were true that the database is likely to contain legal material, how would we possibly know about it, given that the contents of the database are secret?
Tools like Spotlight [0] make use of the database, so certain well-known images are known to flag. Such as Nirvana's controversial cover for Nevermind.
> Certain images are CSAM by _context_. They do not necessarily require those within the image to be abused, but rather that the image at one time or another was traded alongside other CSAM.
At the risk of sounding like a broken record, how can we know this is actually true? Every description of the NCMEC database's contents that I've seen is incredibly vague, and as of 2019 it seems like there were fewer than[1] 4 million total hashes available. I would think that if it genuinely did include innocent photos of people's kids, the number would be much higher.
> ...certain well-known images are known to flag. Such as Nirvana's controversial cover for Nevermind.
I've heard this multiples times now, but I've never been able to find any evidence of it actually happening. The only instance I could find was one where Facebook removed[2] that Nirvana cover once for containing nudity.
If you're sending other people photos of your children that are explicit enough to prompt someone to bring them to the attention of child safety groups like NCMEC, and they look at them and agree it's worth their time to investigate, the first you hear of it isn't likely to be after it eventually comes full circle through Apple's CSAM processes.
Remember, this isn't a porn detector strapped to a child detector.
Hypothetically that's possible, although all three steps you listed are exceedingly non-trivial. The notion that an attacker could pull off two of those steps, let alone three, is borderline fanciful. In addition, their target must also meet the necessary prerequisites:
• has an iPhone;
• has children;
• took photos of their children which could be mistaken for CSAM by a sloppy reviewer;
• is of sufficiently high importance to justify the effort.
And after that insane effort, all you've done is inconvenience your target for a little while until child safety people investigate your family situation and discover that the photos which got flagged were not actually CSAM.
Immediately after the investigation process discovers the hash fraud, Apple will start delving into exactly how their hash algorithm failed in this instance, improving it to mitigate this exploit. So this target had better be worth it!
If this was a plausible exploit, surely it would have already happened to people with Android phones since Google has been doing pretty much the exact same scanning of customer images for over five years. (The only difference with what Apple is now doing is where the hashing is performed—but this makes no functional difference to the viability of your hypothetical exploit.)
I haven't seen anyone claim that any of this algorithm was "created with ML". I'm interested in learning more so do you have a citation for that?
Regardless, it's not both. Setting aside how the algorithm was created, it's incorrect to say that an algorithm "created with ML" is itself an ML algorithm.
NeuralHash was so named because it was optimised to run on the Apple Neural Engine for the sake of speed and power efficiency.
The image is not fed directly into the hashing function, like taking an MD5 hash of a file or something.
Rather, the image is first evaluated by a neural net that looks at specific visual details, and has been trained to match even if the image has been cropped or anything like that. The results of the neural net evaluation are what is then input for the hashing function.
This is explained in detail in Apple’s documentation they released with the announcement.
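As a rough illustration of that two-stage design, here's a conceptual sketch - emphatically not Apple's actual NeuralHash. The embedding function is a crude stand-in for the trained network, and random hyperplanes stand in for the locality-sensitive hashing step; the names and dimensions are all assumptions.

```python
# Conceptual "neural embedding -> locality-sensitive hash" sketch, not Apple's
# real NeuralHash. embed_image is a crude stand-in for the trained network.
import numpy as np

EMBED_DIM = 128   # assumed embedding size
HASH_BITS = 96    # assumed hash length

rng = np.random.default_rng(0)
hyperplanes = rng.standard_normal((HASH_BITS, EMBED_DIM))

def embed_image(pixels: np.ndarray) -> np.ndarray:
    """Stand-in for the neural net: the real model maps an image to a feature
    vector that barely changes under cropping, re-encoding, small edits, etc."""
    flat = pixels.astype(float).ravel()
    return np.resize(flat, EMBED_DIM)   # crude placeholder, not a trained CNN

def perceptual_hash(pixels: np.ndarray) -> str:
    """Hash the embedding, not the raw bytes: similar embeddings land on the
    same side of most hyperplanes, so near-duplicate images share hash bits."""
    v = embed_image(pixels)
    bits = (hyperplanes @ v) > 0
    return "".join("1" if b else "0" for b in bits)

# Two re-encodings of the same photo should give near-identical hashes,
# unlike MD5/SHA, where any changed byte flips the whole digest.
img = rng.integers(0, 256, size=(64, 64, 3))
print(perceptual_hash(img))
```

The matching against the blinded hash database then happens via the PSI protocol linked elsewhere in the thread, so (at least as described in Apple's documents) neither side learns about individual matches until the threshold is crossed.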
Who did you switch to? As I assume you are aware, Google has already been doing this, as has Facebook. Apple was simply the last of those to start doing it. Facebook reported 20 million instances of CSAM to NCMEC last year alone.
Who says you need any of the above? Cloud storage is overrated. When is the last time you lost files? I have stuff from an old Lexar jump drive from the early 2000s doing just fine.
For more sensitive materials, back up when the data changes and store the copy in a disaster-proof safe.
I think the fear of losing things is a problem. People take so many photos anyway and who even looks at all of them? Memories are great and we should cherish them but… this is one of those cases where folks don’t need to rely on big tech.
Gasp, are you saying anyone worried about getting caught simply could encrypt their photos first and this system won’t work? So an extra step for the bad guy, a system that is invasive for all users, and a system that is easily avoided by the bad guy. What are you doing Apple, this feels like a cheating partner here.
Then in this case you could still use an Apple device, considering that if you don't use iCloud Photos, there's no scanning on your device anyway.
I think that radical stances and refusing dialogue, however critical, are something that in this case won't really go anywhere.
No, someone can still attack you by creating an iCloud account and pushing CP. There is no way to mitigate such an attack after purchasing an Apple device, as far as I can tell. And Apple pretends their devices are secure, so they have an incentive not to discover compromised devices (as if they could), even though it's clearly a problem with Pegasus and probably many other non-consumer-grade exploits. I think the only answer is a phone that cannot back up to the cloud at all, which is what I suppose I have to shop for now. Hopefully this attack hits some senator or Apple exec first. I don't want to back up my phone, and at this point I don't want a camera or location services. I want security, which Apple no longer offers.
>No, someone can still attack you by creating an iCloud account and pushing cp. There is no way to mitigate such an attack after purchasing an apple device as far as I can tell.
Could you elaborate? Totally unclear to me what kind of attack you're talking about.
I think they're saying that if someone can completely hack your phone so as to have remote control of it, they can sign you up for an iCloud account and add CSAM to it.
This seems... implausibly convoluted. If you have full remote control of someone's phone, Apple or not, you could do all sorts of incriminating things "as them", and I don't think Apple's new system noticeably increases your risk from this.
It would take the flick of a switch for someone to ruin your life for a crime you could never explain yourself out of. Nobody will ever believe that you were framed because that means other convicted predators could also have been framed. As soon as your name hits an index-able news article, guilty or not, your life is over.
Well, the obvious option if you've subverted someone's phone so you can do whatever you want with it, and have access to illegal stuff, would be to store it on the phone and submit anonymous tips about the person to the police. Or upload it to random image-sharing websites, or Facebook, or email it to their coworkers with some "I found this on X's phone and thought you should know" note attached, or whatever.
I'm just saying that actually getting the attention of authorities is the most trivial part of this suggested attack. Apple's new stuff is a vector for that, sure, but anyone who is in a position to exploit it could easily do so in other ways as well.
Most of the HN crowd presumably isn't actually worried about CSAM detection itself - it's the on-device scanning, where you lose control over your own hardware.
There's also GrapheneOS, which excludes Google APIs completely and is additionally hardened down to its memory allocation implementation, at the cost of performance and app compatibility[1].
LineageOS and CalyxOS should as well, unless you opt in. I guess they would still use the Google captive portal detection? Is that what you're referring to?
> and is additionally hardened down to its memory allocation implementation
That's really interesting. Do you use GrapheneOS? Is it easy to lock the bootloader on Pixel devices?
It is just Android minus the nosy bits, it works just fine. I've used AOSP-derived distributions since 2011 and never felt I was missing out on anything, au contraire. Longer battery life, no ads, no spying other than through the radio firmware (which is part of all devices from all manufacturers using all operating systems [1]), no nonsense.
[1] I seem to remember that RIM (of Blackberry fame) made devices which used combined radio and systems firmware so those would be an exception to this rule
It's all I've ever used. I think it works great but I think your experience will depend heavily on your expectations.
I don't use any proprietary apps; I only install apps from F-Droid or build them myself.
But if you do, you're going to have a different experience. Let's say you want to run Whatsapp. From what I can tell you basically have three options:
1) Install google apps.
When you install your ROM you will also download a gapps bundle and install it. This gives a very vanilla Android experience but with the ability to uninstall whatever you want, root, etc. You can open the Play Store and install WhatsApp. Everything should work OOTB. However, you're running all of the Google services including Google Play Services, so privacy-wise this is not significantly different from stock Android.
2) Install microg
When you install your ROM you can also install microG. This is an install-time option in CalyxOS. microG replaces many of the Google APIs. You can install WhatsApp through Aurora Store, which can install apps from the Play Store. WhatsApp will use the microG FCM implementation. FCM is Google's notification service: it lets your phone make a single persistent connection to receive notifications, allowing for better battery efficiency because you don't have many apps activating the radio. FCM just signals that an app has a notification; it doesn't carry the contents of the message. Unlike Play Services, microG registers the FCM connection with an anonymous identity.
So Google knows your device is running WhatsApp and when you get notifications, but not what they are.
3) No gapps / no microg
Don't do either of the above. You won't get push notifications with whatsapp. Many free/libre apps have alternative notification schemes involving separate persistent connections. This is less power efficient but works without involving google. I use Signal and Element like this and my battery still lasts >24 hours.
I've been using it as a daily driver for 2+ years now (LineageOS without gapps, or even microG). I use the F-Droid store for my app needs, and the occasional proprietary app I download with Aurora Store, or use whichever APK hosting site seems the least shady. I sometimes use MS Teams - it complains on each start about needing the Google framework, but works just fine regardless. I also played a game that had in-game purchases, and it worked fine until I opened the in-game store, when it froze. Otherwise perfectly playable.
From the f-droid store I use a ton of apps, games, mostly utilities. For navigation I like Organic Maps.
None of the systems, current or proposed, scan local files. They all work on cloud storage. You could simply not use iCloud, and none of this change would affect you. Also, I don't believe anything in iCloud is encrypted, so they could have scanned it at any time.
On-device hash generation is 'scanning local files.' The fact that this process is only initiated for photos flagged for upload to iCloud doesn't change the fact that it is being done on-device, and it increases the capacity for surveillance significantly.
Yup. Good luck telling repressive regimes that the technology doesn’t exist. How is the hash list to be trusted, especially in foreign countries? Who will be reviewing the images in foreign countries?
>The lawsuit is brought on behalf of four American Muslim men with no criminal records who were approached by the FBI in an effort to recruit them as informants. Some of our clients found themselves on the No Fly List after refusing to spy for the FBI, and were then told by the FBI that they could get off the List if they agreed to become informants. Our other clients were approached by the FBI shortly after finding themselves unable to fly and were told that they would be removed from the List if they consented to work for the FBI.
>A House representative said Thursday she is requesting an investigation after learning a CNN reporter was put on the federal no-fly list shortly after his investigation of the Transportation Security Administration.
>In my case, I started having trouble flying after I blew the whistle in the case of “American Taliban” John Walker Lindh, the first terrorism prosecution in the United States after Sept. 11. As the Justice Department ethics attorney in that case, I inadvertently learned that my e-mail records had been requested by the court. When I tried to comply, I found that the e-mails, which concluded that the FBI committed an ethics violation during its interrogation of Lindh, had been purged from the file. I managed to recover them from the bowels of my computer archives, gave them to my boss and resigned. I also took home copies in case they “disappeared” again. Eventually, in accordance with the Whistleblower Protection Act, I turned them over to the media when it became evident that the Justice Department withheld them from the court.
It isn't exactly a knee jerk; it has been quite likely that this sort of thing would happen sooner or later.
This is exactly the point at which to act, if anyone is going to do anything. Apple is going to start scanning my phone looking for reasons to put me in jail. I don't want my phone's CPU time spent looking for reasons to imprison me, and I don't want to be funding it either. This system will make mistakes.
Exactly. Despite countless occurrences of automated systems getting things wrong--there is no such thing as AI, remember, just fallible developers and their fallible formulae--somehow the naive continue to trust in these systems. It's insane, and those of us who do know how insane it is are left to pay the price for the naivete.
It's harder to take back policies like this than it is to object and get them stopped initially.
Also people have a habit of 'forgetting' about it later. Until stories of how it is misused are found. And then it's another attack vector we need to be conscious of.
And that’s how France still has VAT & revenue taxes.
Revenue tax? Have to pay for that expensive WWI war effort, you understand? For all the good it did.
Same with the VAT. Have to rebuild after WWII, you understand.
We also have an "Exceptional and Temporary Contribution" (CET), recently renamed to "Technical Equilibrium Contribution" (still CET. Smart one, that one).
A funny one, for a change?
When the Germans invaded in WWII, they changed France's timezone to theirs. After the war, we still called it "the German time". There were talks of going back for a few years…
Guess who still has noon at 2pm in the summer, decades later?
Change, no matter how ridiculously small or sensible, even when nobody benefits from the status quo (i.e. the damn timezone), is horrendously difficult.
Thus one should always assume that once it’s here, whatever "it" is, it’s here to stay.
You would think money would go into the "backend": caring for kids where the state is responsible for everything BEFORE more money goes into the frontend: finding more kids to throw into the hellhole that is child services.
Without the "backend" being in order and working well, raising well-educated, stable kids, the frontend is completely immoral. "Saving" kids from abuse, only to throw them into a slightly different kind of abuse ... if any person did that (e.g. a guy marrying a woman (or I guess vice-versa) with that resulting in that person abusing their new spouse's kids) would be considered a despicable crime. Somehow child services, who do the exact same thing (and they use violence to do it) is not a despicable crime.
Somehow just because the state does it, makes such things all a-okay.
But frankly this is merely the hole in the justification, all this should merely tell you one thing: any government that doesn't work hard to fix the child services backend does not have children's interests at heart when making these sorts of laws (and mostly they're making budget cuts in the backend, of course). Because fundamentally these laws throw children into the child services system. THAT is the real effect these efforts have on the actual children behind this. THAT is what is meant by "saving kids".
And if that system is full of abuse, how is that any better than what paedophiles do? It's not.
Which means the state is not attempting to help abused or disadvantaged children. In fact, they're doing the opposite.
> We also have an "Exceptional and Temporary Contribution" (CET), recently renamed to "Technical Equilibrium Contribution" (still CET. Smart one, that one).
Just an example. During a heatwave some summer over a decade ago, many elderly people died.
So what did the government do? They instituted a "day of solidarity", of course!
What does it mean? If you are salaried, then you get to work an extra day, during a holiday of your company’s choosing, and not be paid. The day’s salary will go to a public fund dedicated to helping promote the autonomy of elderly people. And your employer gets an extra day of employees supposedly producing value out of it.
Many people instead take the day, either on their paid leave or their Work Time Reduction days (RTT).
That’s on top of all the other social "contributions" (sounds better than taxes), of course.
Payslips used to be quite funny to decipher[1][2]. They’ve simplified those a bit since then; mostly by regrouping items[3].
This will happen with or without iCloud; the photos in iCloud are already not end to end encrypted and could easily be scanned on the server side because Apple can read all of them today.
The only reason to do this clientside when the data is already readable on the server is to do it to images that aren't hitting the cloud.
> The only reason to do this clientside when the data is already readable on the server is to do it to images that aren't hitting the cloud.
Or to eventually e2e encrypt all of iCloud. Or because Apple doesn't want to decrypt images server-side if they don't have to. Etc.
But the point is that currently, only photos that will be uploaded to iCloud Photo Library will be scanned. Making definitive points about possible future scenarios isn't particularly insightful, especially because the current system isn't much of a precondition of those scenarios.
None of this is happening "currently"; both of these claims are speculation about future changes based on Apple's statements.
Apple has made 3 announcements and released one research paper and held a press conference. Now we have to reconstruct what is likely going to be the truth from their carefully crafted statements.
Yes, and I mean the same system, based on the same statements from Apple.
Clientside scanning will happen even without iCloud. Apple expects and pressures all users to use iCloud, defaults it to on without interaction or consent, and does not test the non-iCloud paths very well. You can't even set up a HomePod to be a simple Wi-Fi speaker without iCloud.
It's an accurate and objective description of the situation; there is no opinion involved. If you think facts are over the top, perhaps the situation is actually outrageous.
I remember WhatsApp used to save each received image to the iCloud Photo Album. I remember one day going to my album and seeing several memes and pics I had received but never saved.
Having 3rd party apps that have access to the photo album being able to do that makes it a bit risky to have iCloud.
Combined with the unpatched remote-root-via-phone-number disclosed in the Pegasus leak this boils down to a single-click "destroy this person's life" tool.
Honestly I'd rather get shot dead by a SWAT team than implicated for something as atrocious as what this tool is looking for. I imagine many people with a family would feel the same way.
It's an abomination that will destroy innocent people. The engineers behind this no doubt think it's fool-proof because they believe they're leagues smarter than any of those pesky naysayers ("hey, we're Apple").
If we've learned anything about Apple this year (as if we needed the reminder), it's that their software is nowhere near as flawless as they seem to think it is.
> If you assume that cops will just arrest people without doing any further research... Then yeah.
Like when they arrested & charged someone for a poor facial recognition match that never had a hope of passing human review? [0] Just glancing at the original photo would have stopped that. Or checking his rock-solid alibi. Neither of those things happened.
Oh, the police will get a search warrant, and find exactly what they were told would be on your device. The police aren't in the business of discovering your innocence. It's then up to you and your lawyer to prove you didn't put it on your device. Meanwhile your life will fall apart as you get fired, your wife divorces you, you lose all custody of your kids, etc.
None of those attacks would work against the system as described by Apple. The only photos scanned are items in your photo library prior to upload to iCloud. Your browser cache is not scanned.
Hash collisions would fail human review. About the only consequence I can think of for hash collisions is that the person at Apple who performs the human review step has a slightly nicer day because they were about to look at an image... and then it wasn't CSAM.
> Hash collisions would not pass the human review. About the only consequence I can think of for hash collisions is that the person at Apple who performs the human review step has a slightly nicer day because they were about to look at an image... and then it wasn't CSAM.
I truly wish I could subscribe to this optimistic view. Experience tends to show this to be unlikely.
Two factors combine against it:
1. There is no negative consequence for a mis-flag (to the reviewer)
2. This setup is a tool, and like many tools, inventive humans will find a way to subvert it in the name of convenience. I am referring to NSLs under the U.S. Patriot Act as an example. Since CSAM is such a toxic thing (let's stipulate that CSAM itself is unequivocally bad), there is less tendency to examine a flagged item closely for, well, actual CSAM-ness.
Again, I'm only pointing out how this conflicts with Apple's description of their system. I'm in no position to know whether their description is accurate or how it will actually operate in the real world.
For the sake of argument, let's assume you're correct and Apple's review team are lazy shits who don't look at the images. Okay, so Apple then sends the report onto NCMEC. What are they going to do when they open the report and it turned out the images Apple reported were hash collisions?
My understanding (from someone who would know but said this in a Chatham House rules space) is that NCMEC is already incredibly underfunded, understaffed, and backlogged. Similar incentives apply to them. They're a nonprofit: a private organization who has significantly fewer dollars than Apple does.
The critical follow-up question is what do NCMEC do with their backlog? Unless they're dumping this backlog directly at the feet of law enforcement, I don't see how this changes the equation.
That may be true in principle, but irrelevant with respect to Apple's CSAM process. Unless the exact material is explicitly catalogued by NCMEC or another child safety organisation, there won't be a hash match.
This isn't a porn detector strapped to a child detector.
You are rather desperately trying here to downplay a massive security fuckup by Apple as if it's perfectly fine. One of the main selling points of Apple, heck for many the most important one, was just blown to pieces a couple of days ago. It's NOT okay for Apple to send your images further.
The only argument left missing here is 'you have nothing to hide anyway, right?'.
I would be able to accept an inferior OS incapable of true multitasking and with very limited options to set. Closed system with no sideloading. I would even accept a lousy zoom on flagship cameras compared to, well, any competition. Proprietary connection port. Mediocre battery life. Overpriced accessories. But start removing security, and that's one step too far.
This (pervasive, over the past couple days) idea that Apple (of all major tech companies, lol!) will be capable of manually reviewing tens of thousands of automated detections per day is... nuts.
The "system as described by Apple" doesn't comport to reality, because it relies on human review. If you remove the human review, the system is fucked.
But no company on the planet has the capability to sanely and ethically (to say nothing of competently or effectively) conduct such review, at the scale of iOS.
Can they even, legally, review anything at all? I mean, it's highly likely there will be actual CP among the matches, viewing of which is - AFAIK - a crime in the US.
That is somewhat unclear at the moment. They don't get to see the actual image in your library, they see a derived image that's part of the encrypted data uploaded by your phone as it analyses the images.
I don't believe any of the information they've released thus far gives any actual detail about what that derived image actually is.
One might guess it's a significantly detail-reduced version of the original image, which they would compare against the detail-reduced image that can be generated from the matching hash in the CSAM database.
Tens of thousands of automated detections per day? Unlikely. More likely tens per year. Remember, this isn't a porn detector combined with a child detector. It is hashing images in your cloud-enabled photo library and comparing those to hashes of images already known to child abuse authorities.
In addition, consider how monumentally unlikely it is for any CSAM enthusiast to copy these illicit photos into their phone's general camera roll alongside pictures of their family and dog. This is only going to catch the stupidest and sloppiest CSAM enthusiast.
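To make the "comparing hashes" part concrete, here is a minimal perceptual-hash sketch in Python (assuming Pillow is installed). Apple's NeuralHash is a neural-network-based construction and works differently; the `known_bad_hashes` set and the distance threshold below are invented purely for illustration.

```python
# Minimal perceptual "average hash" sketch (illustrative only; NOT NeuralHash).
from PIL import Image

def average_hash(path, size=8):
    """Downscale to 8x8 grayscale and set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # 64-bit integer

def hamming(a, b):
    return bin(a ^ b).count("1")

# Hypothetical database of known-bad hashes and a closeness threshold.
known_bad_hashes = {0x1F2E3D4C5B6A7988}
MAX_DISTANCE = 4

def matches_known(path):
    h = average_hash(path)
    return any(hamming(h, bad) <= MAX_DISTANCE for bad in known_bad_hashes)
```

The point of a perceptual hash (as opposed to SHA-256) is that small edits to an image should still land within the distance threshold; there is no content analysis of any kind, only proximity to an already-catalogued image.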
That doesn't seem to be the same kind of detectors at all.
"21.4 million of these reports were from Electronic Service Providers that report instances of apparent child sexual abuse material that they become aware of on their systems."
So those 20M seem to be images that Facebook looked at and determined to be CP. Apple's system is about comparing hashes against already known CP.
For the record: I don't support Apple's system here, but it's not the same kind of detection at all. Let's try to not make up random facts.
> The vast majority of Facebook NCMEC reports are hits for known CSAM using a couple of different perceptual fingerprints using both NCMEC's and FB's own hash banks.
That's a summary number of many kinds of reports, of which CSAM hash matches would be one part.
That summary number also includes accusations of child sex trafficking and online enticement. I wouldn't be surprised if reported allegations of trafficking and enticement were in excess of 99.9% of Facebook's reporting. But since they don't break it out, I can only guess.
Given that guesses aren't useful to anyone, it would be interesting if you know of any statistics from any of the major tech vendors, of the reporting frequency of just CSAM hash matches.
> The vast majority of Facebook NCMEC reports are hits for known CSAM using a couple of different perceptual fingerprints using both NCMEC's and FB's own hash banks.
Fascinating. Thank you for providing the clarification. I still find that number to be perplexingly huge. If it's indeed correct, one hopes that Apple know what they're getting themselves in for.
Thanks for the kind suggestion, but I'm not going to concede anything on the basis of an assertion made by one person in one tweet, with zero supporting evidence, zero specificity, zero context.
Assuming that number is correct, it means there are orders of magnitude more reports than there are entries in the CSAM database. So even if I conceded that Facebook were reporting over 10 million CSAM images, how many distinct images does this represent? More than four? We have no idea.
How many of those four were actually illegal? Remember, there's a Venn diagram of CSAM and illegal. A non-sexual, non-nude photograph of a child about to be abused is CSAM but not illegal.
This is a serious topic; you don't seem to be taking it seriously.
Ten thousand iOS users doing something stupid or sloppy per day (noting they don't have to be stupid or sloppy in general for that to happen) would not hit the "monumentally unlikely" criterion for me. Also, this is not counting the false positives, which are the premise of this thread.
I don't know about anyone else but I've never had any issue with regular porn sloppily falling into my camera roll. And that's just regular legal porn. Maybe I'm more diligent than others but regardless, it's just not something that happens to me.
Being sloppy with material which you know is illegal? Material which, if stumbled upon by a loved one, could utterly ruin your life whether or not authorities are notified? Material which (I optimistically assume) is difficult to acquire and you'd know to guard with the most extreme trepidation? We're seriously expecting tens of thousands of CSAM enthusiasts to be sloppy with their deepest personal secret and have this stuff casually fall into their camera roll?
A false positive will not have any effect. The threshold system they have means that they won’t be able to decrypt the results unless there are many separate matches.
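For readers wondering how "can't decrypt unless there are many matches" can be enforced by cryptography rather than by policy, here is a toy Shamir-style threshold secret sharing sketch in Python. It only illustrates the threshold idea; Apple's actual safety-voucher construction is more involved and its parameters are not fully public, and every number below is made up.

```python
# Toy Shamir-style threshold secret sharing (illustrative only; Python 3.8+).
import random

PRIME = 2**61 - 1  # a Mersenne prime used as a toy finite field

def make_shares(secret, threshold, num_shares):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# The server can only recover the secret once it holds at least `threshold`
# distinct shares; with fewer shares it gets an unrelated value.
shares = make_shares(secret=123456789, threshold=5, num_shares=20)
print(reconstruct(shares[:5]))  # 123456789
print(reconstruct(shares[:4]))  # some unrelated number
```

The analogy: each match contributes one "share", and only once the threshold is crossed can the decryption key for the flagged material be assembled.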
> Hash collisions would not pass the human review. About the only consequence I can think of for hash collisions is that the person at Apple who performs the human review step has a slightly nicer day because they were about to look at an image... and then it wasn't CSAM.
The whitepapers provided by Apple do not say what the human review consists of. They could just look at the hashes to make sure there isn't a bug in their system.
> The whitepapers provided by Apple do not say what the human review consists of.
At minimum what we know is that each flagged image generates a "safety voucher" which consists of metadata, plus a low-resolution greyscale version of the image. The human review process involves viewing the metadata and thumbnail content enclosed in each safety voucher which cumulatively caused that account to be flagged.
The data is not sent to a "police group", it is sent to NCMEC.
From Apple's FAQ:
Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?
No. The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year. In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.
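As a back-of-the-envelope illustration of how a match threshold can produce a tiny per-account error rate, assume (hypothetically) a per-image false-match probability p, a library of n photos, and t independent matches required before anything can be decrypted. The numbers below are invented, not Apple's, and real false matches may well be correlated rather than independent:

```latex
% Illustrative only; p, n, t are hypothetical, not Apple's parameters.
P(\text{account flagged in error}) \approx \binom{n}{t}\,p^{t},
\qquad n = 10^{4},\; p = 10^{-6},\; t = 30
\;\Rightarrow\; \binom{10^{4}}{30}\,(10^{-6})^{30} \approx 10^{-92}.
```

The independence assumption is what does the heavy lifting there: an adversary who can manufacture matches, or a family of visually related images that all collide, breaks it entirely.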
One obvious problem with human review is steganography.
The picture can look normal to the human eye, but if it contains hidden content (in the least significant bit of each pixel, for example, so that the hash is unchanged), forensic software will definitely notice, raise some flags, and extract the hidden offensive content automatically, leaving the human reviewer no other choice but to report you.
If Apple says they are not going to look for hidden content, then they are just handing out a free pass, which renders the whole scanning thing pointless.
I'm confused what scenario you're positing here. Given the widespread adoption of encrypted communications, steganography is of no use to traffickers of CSAM. Steganography generally serves only one purpose, which is to transfer material in public view with plausible deniability, such as leaking material out of a military facility which has exceedingly robust data protection processes.
Apple have explicitly said that their hash algorithm is only concerned with visible elements of the image.
I'm speaking about the adversarial scenario of an attacker trying to frame a target. They just need to get an image onto your phone that carries hidden content and whose hash collides with the database.
Traffickers and consumers of CSAM know that their content is illegal to possess and store, so they sometimes use steganography software to store the offensive data inside an innocuous photo library. That way they can browse their private collection through the lens of the steganography software, and they don't have a suspicious encrypted file that would attract the attention of someone they share the computer with.
You seem to be confused. As you said yourself, steganographic concealment would, by its very nature, not change the perceptual hash of the visible image. If the visible image doesn't match a known hash, the steganographically modified version isn't going to either.
First you generate an innocuous image that collides with a bad hash. (This is easy because perceptual hashes are not cryptographically secure.) Then, in a second step, you hide some offensive content in it via steganography without changing the hash. Then you send the image to the target.
The target stores it in their cloud, it gets flagged because of the hash collision, so it gets a manual review. The manual review runs the image through some forensic software, which catches the steganography (because the attacker will have chosen a weak scheme), reveals the hidden offensive content, and the target gets reported.
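A minimal sketch of why naive least-significant-bit steganography leaves a perceptual hash untouched, in Python with Pillow. The `ahash_bits` function is the same toy average hash sketched earlier, not NeuralHash, and `cover.png` is a hypothetical innocuous image:

```python
# Toy LSB steganography sketch (illustrative only).
from PIL import Image

def ahash_bits(img, size=8):
    g = img.convert("L").resize((size, size))
    px = list(g.getdata())
    mean = sum(px) / len(px)
    return [1 if p > mean else 0 for p in px]

def embed_lsb(img, payload_bits):
    """Hide bits in the least significant bit of the first len(payload_bits) pixels."""
    out = img.convert("L").copy()
    px = out.load()
    w, _ = out.size
    for i, bit in enumerate(payload_bits):
        px[i % w, i // w] = (px[i % w, i // w] & ~1) | bit
    return out

cover = Image.open("cover.png")                 # hypothetical innocuous image
stego = embed_lsb(cover, [1, 0, 1, 1] * 64)     # hide a 256-bit payload
# +/-1 changes to a handful of pixels are wiped out by the 8x8
# downscale-and-threshold step, so the perceptual hash stays the same.
print(ahash_bits(cover) == ahash_bits(stego))   # True in practice
```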
The manual review process only involves a severely transformed (low resolution, greyscale) version of the image which is attached to the safety token. The ability to decrypt any original files only occurs if the human review process confirms the presence of CSAM.
I don't have a lot of info on the quality of the visual derivative.
But since a human is supposed to look at it, it should have enough detail to distinguish subtle cases like the age of the people in the picture; otherwise it's even more concerning.
If some human has enough info to make this call, then the low-res greyscale visual derivative should still raise some flags if it goes through forensic software, as steganography software usually offers some resistance against common compression artifacts.
I don't know exactly what's in the safety token, but we do know that it's grayscale and low resolution.
Allow me to be hypothetical for a moment; let's assume that the image has all chroma data stripped, it's downsampled to 1 megapixel, and then compressed to around 100 kilobytes using JPEG or HEIC. That would be sufficient for performing careful human review but would completely demolish any steganography.
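Continuing the hypothetical, here is a quick sketch of why that kind of visual derivative would wreck a naive LSB payload, again in Python with Pillow; the grayscale/downscale/JPEG parameters are guesses taken from the comment above, not anything Apple has published:

```python
# Sketch: simulate a "visual derivative" and show LSB payload bits get scrambled.
import io
from PIL import Image

def visual_derivative(img, max_side=1024, quality=50):
    """Hypothetical derivative: grayscale, downscale, lossy JPEG round-trip."""
    g = img.convert("L")
    g.thumbnail((max_side, max_side))
    buf = io.BytesIO()
    g.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def extract_lsb(img, n_bits):
    px = img.convert("L").load()
    w, _ = img.size
    return [px[i % w, i // w] & 1 for i in range(n_bits)]

stego = Image.open("stego.png")      # hypothetical image carrying an LSB payload
derived = visual_derivative(stego)
# After resampling and JPEG quantisation the recovered bits are effectively
# noise; nothing resembling the embedded payload survives.
print(extract_lsb(stego, 64))
print(extract_lsb(derived, 64))
```

More robust steganographic schemes exist, but they trade capacity for survivability, and a reviewer is in any case judging the visible content, not the bit-level residue.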
Messaging apps like WhatsApp will save to your photo library though (unless disabled).
So any photo sent to you would be scanned. If someone sent you a bunch of files, that might trigger a manual review and would most likely flag your account.
I wouldn't expect that immediately deleting them would stop the review process.
That is why they talk about having a manual review process. So that when someone wealthy or politically connected triggers the system there is a review.
I haven't used WhatsApp, but I'm tempted to call bullshit on that. I've never used any messaging app on iOS which saves photos to your photo library. Doing so would make no sense and would surely be infuriating. It's also worth noting that apps on iOS can't save to your photo library unless you give them explicit permission.
WhatsApp does by default save received images to your photo library (as opposed to e.g. iMessage). You can turn that off, though. And the permission to read from a user's photo library (to e.g. post images) includes the ability to write to it.
Step 1: Get copies of pictures of the target's kid in the bath from their phone/SNS
Step 2: Manipulate the pictures so that their hashes collide with CSAM
Step 3: Get the pictures back onto the target's phone so they get scanned.
If it were me, I would try and get a series of photos from the target, and manipulate several that look most borderline. That way it looks like more than a one off.
Now if there is an Apple review, the person who views them will see some suspect pictures and would confirm.
Now the target would have to get someone to review the original pictures vs the modified pictures. Good luck with the defense.
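For the "manipulate pictures so the hash collides" step, here is a toy hill-climbing sketch against the simple average hash used in the earlier examples (Python with Pillow). It only demonstrates that perceptual hashes are not collision resistant in general; it says nothing about how hard this is against NeuralHash specifically, and the file names are hypothetical:

```python
# Toy hill-climb: nudge an image until its 64-bit average hash equals a target.
import random
from PIL import Image

def ahash_bits(img, size=8):
    g = img.convert("L").resize((size, size))
    px = list(g.getdata())
    mean = sum(px) / len(px)
    return [1 if p > mean else 0 for p in px]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nudge_towards(img, target_bits, steps=100_000):
    work = img.convert("L")
    px = work.load()
    w, h = work.size
    best = hamming(ahash_bits(work), target_bits)
    for _ in range(steps):
        if best == 0:
            break
        x, y = random.randrange(w), random.randrange(h)
        old = px[x, y]
        px[x, y] = max(0, min(255, old + random.choice((-12, 12))))
        d = hamming(ahash_bits(work), target_bits)
        if d <= best:
            best = d          # keep changes that move the hash closer
        else:
            px[x, y] = old    # revert changes that move it away
    return work, best

source = Image.open("innocuous.png")            # hypothetical starting image
target = ahash_bits(Image.open("target.png"))   # hash we want to collide with
forged, distance = nudge_towards(source, target)
print("remaining hash distance:", distance)     # 0 means a full collision
```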
You are absolutely correct that neither automatic nor manual review is ever going to be 100% accurate.
I would like to believe though that for this system to fully fail an innocent person, the following would all need to have failed:
1) Coincidental CSAM hash collision
2) Incorrect manual review by Apple
3) Incorrect subsequent review by NCMEC
4) Inability of a lawyer to obtain the original image for presentation during a trial/appeal
which seems kind of unlikely? (although it's certainly the case that once steps 1, 2 and 3 have failed, the person's reputation is likely damaged even if they are able to prove their innocence in court).
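If you treat those four safeguards as roughly independent (a generous assumption, since an Apple reviewer and an NCMEC analyst looking at the same misleading thumbnail may well fail together), the chance of the whole chain failing is just the product of the individual failure probabilities. The numbers below are made up purely to show the shape of the argument:

```latex
% Illustrative only; the p_i are hypothetical failure probabilities.
P(\text{all four safeguards fail}) = p_1\, p_2\, p_3\, p_4,
\qquad p_1 = 10^{-6},\; p_2 = p_3 = 10^{-2},\; p_4 = 10^{-1}
\;\Rightarrow\; P = 10^{-11}.
```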
The wider question here is, should 100% accuracy be the bar by which we judge this? I don't think we expect the law enforcement system to be 100% right, hence principles like the presumption of innocence and right to appeal, and even then it gets things wrong sometimes.
There are known cases of police faking AI-generated evidence[0]. There's no reason why Apple would be immune to such things. And the recent British Post Office scandal shows that even without manipulation, false faith in technology as evidence can destroy hundreds of lives. The low chance of an error making it through that whole chain of checks also increases trust in the system, which makes the rare false positive that does slip through even harder to fight.
And all this is assuming it will never be expanded from CSAM to other content. Apple is already rolling out a censored version of iOS in China.
> Who needs SWATing when you can send a CP pic (either real or with hash collision as per the thread few days ago) from a virtual overseas number/service and get FBI van to show up as well?
You are talking like collisions are trivial to make.
I bet they have had deep conversations in this area.
First, you would need a real hash to even try (and those are hidden). Second, to use real material, it must already be in their database to trigger anything, which already tells the police a lot about the sender. It is quite easy to prove that someone just sent it to you. And one photo doesn't trigger anything. Besides, the sender must know that those photos will go automatically into the cloud for this to mean anything.
> What about injecting code into a public website to download same pic into local browser cache without user’s knowledge?
At least US legislation is precise that the user must willingly obtain/download CSAM, and that must be proved. So this is not harmful for the user in the end.
A lot of speculation, but it does not really lead to consequences. Almost every system can be targeted for abuse, but whether that really means something is a different story.
> At least US legislation
> does not really lead to consequences
Except that a trial, even one ending in an innocent verdict, will SUCK, generate terrible news stories about you, and poison any Google search for your name with CSAM stories.
Just because you assume attack vectors are simple doesn't mean they are. First of all, why would Apple forward a report about something that isn't CSAM to the NCMEC?
I wonder how long it takes until they add a feature to Safari to scan all the <img> <video> <canvas> elements for possibly illegal content. Would be very convenient considering Safari is the only browser engine on iOS.
That's fine™. You are just going to redirect blame to the original source, provided you have enough Apple Cash on balance to pay the lawyers and stay out of jail while sorting this out.
Well, this morning I got my Pixel 5 delivered. Installed CalyxOS in 5 minutes. Locked bootloader.
The experience is not that bad. In-app purchases aren't working, GPay doesn't work either. And the camera is, well, bad. Apart of that everything seems to be smooth and fine.
Try it and donate the iPhone price difference to the Calyx Institute.
You don't even have to give up your old iPhone or update its OS.
Maybe someone can comment on this: Does Google scan the cloud photos of its users for CP? Have we seen an uptick of false positives/SWATings since they started doing that?
Apple is - rightfully and understandably IMO - criticized for their plans, but does anyone know how Google handles this?
Google and FB both scan storage for several different types of contraband, and also have triggers and thresholds for things that use too much bandwidth (e.g. pirated software download links that are shared widely, etc.).
Yes they do. The reason apple has done this is because they lagged behind other providers considerably in detecting this sort of content. Facebook for example are reporting millions per year compared to a few hundred for Apple.
Instead of scanning your whole library, they came up with a way to do it on device, which is the main difference from other services. If you don't enable iCloud photo storage, the system can't work at all.
I never realized that that's what's being done, but now it's so obvious, since nothing we upload to GDrive/iCloud/Dropbox is encrypted without additional effort and reduced convenience.
I use Boxcryptor for Dropbox and it prevents them snooping on my files but if I started using that for pictures, I'd lose all convenience of being able to look at it in the Photos app, create shared albums, etc. It's a pity. I wish there was an encrypted photo service that let me share photos and create albums.
It's not even that I have anything to hide but I'm scared that one day, police will knock at my door because of a hash collision or a borked logfile or whatever. What else can I do other than me and my family becoming digital hermits?
>It's not even that I have anything to hide but I'm scared that one day, police will knock at my door because of a hash collision or a borked logfile or whatever.
Easiest thing is not to worry about that and just use the services as normal. You’d have to trigger the system multiple times before there was even a chance of having police involved and even then there’d be no actual evidence if you don’t have that content.
I have some respect for privacy absolutists that want to go down that path on principle, but it sounds like a massive pain in the ass with no upside for most people.
-Apple "lagged behind" because it built private and secure services it could not monitor by design. This is a feature, not a bug.
-Facebook's reporting overwhelmingly flags burner accounts signed up via tor etc, only absolute idiots would post actual CP on their real name account on facebook.
-Apple's solution is highly invasive and dangerous, and your statement about "only running with iCloud upload" is false. It took less than a week for Apple to announce that they will open these APIs to 3rd party apps.
> It took less than a week for Apple to announce that they will open these APIs to 3rd party apps.
That just means they are allowing other apps to scan for CP if they want to (or if they are required to by law). As controversial as it may be, I would trust Apple's implementation way more than I would trust a random photo editor app's implementation.
Right? There are people seriously saying they’d rather 3rd party apps ship all your images to whatever YC startup has integrated with NCMEC as a service with god knows what privacy assurances.
I don't care about the technicalities. The issue is that we would be constantly watched against a government-defined blacklist. They could find all "troublemakers" with a simple query. This gives immense power to governments, and completely destroys any notion of individual freedom.
If you support Apple on this, you support totalitarianism.
Google and the rest do the same thing, but on their own servers.
You want Google, for example, to hold false positive data on their servers forever where it can be subpoenaed and misused?
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for murder and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
What’s to say the logs of the scans being performed on my device won’t be uploaded and stored off my device forever anyway?
The point I was trying to make was that, privacy reasons aside, their motivation for doing it on the user's device is scummy. Why don’t they mine Bitcoin on my iPhone while they are at it?
If there is a false positive, I don't want that fact to ever leave my phone, instead of residing on Google's servers forever, where it can be subpoenaed and misused.
Apple's approach here is far superior from a privacy standpoint.
>1. Only if you're uploading files are the files matched. 2. Only if the matches are very close are they considered matches. 3. Only if you have multiple very close matches is Apple able to decrypt the low-res versions of the images themselves. 4. Only if a human reviewer discovers any of the decrypted low-res images to be illegal content is any of your information shared with anyone else.
It’s only superior from a privacy standpoint if you completely trust Apple and humans to get this right. I don’t.
At least with server-side scanning, your images are scanned when you are actively sharing photos with other users or to the internet, thus making it more difficult to distribute CSAM.
If iMessage was serious about preventing child abuse, they should be introducing mechanisms to prevent actual abuse from occurring on their platform.
> How can you prove that this will always be the case?
How can you prove that Google isn't intentionally turning in a huge number of unnecessary false positives because of their well known aversion to hiring human beings when flawed machine learning models are cheaper?
Easily. Currently I have a choice to not use their services and I exercise that choice.
I don't trust Google and I don't trust Apple. Apple can perform the same process on their iCloud servers but choose not to. The way they are approaching the problem speeds up the erosion of privacy.
In other words I reject the false dichotomy you are presenting.
I reject Google spying on everything you do, including buying a copy of your credit card transaction records so they can spy on everything you do in the real world, just as they already spy on everything you do online.
>Google has been able to track your location using Google Maps for a long time. Since 2014, it has used that information to provide advertisers with information on how often people visit their stores. But store visits aren’t purchases, so, as Google said in a blog post on its new service for marketers, it has partnered with “third parties” that give them access to 70 percent of all credit and debit card purchases.
Yes, because it’s a mechanism to prevent actual abuse from occurring on their platform and to report offenders directly. When one of the participants of an E2EE conversation reports the conversation, the messages are sent up in a way that lets Trust and Safety read them and report to the authorities.
What is messed up about that? The method of reporting is in the hands of the user, not an ML algorithm. The ML algorithm would prompt the kid to stop and think about what is happening, before actual abuse occurs. I assure you Stamos is speaking from a place of experience, having had to prevent these kinds of things.
Any discussion that starts with "Here's a graphic account of sexual abuse" is not a real discussion.
It's just like trying to start discussing the Patriot act by starting with a recording from a plane on 9/11 (an irrelevant appeal to emotion that is so outsized it interferes with the dispassionate ability to weigh alternatives).
For all we know taking away cp from pedophiles makes them more likely to try it in person. Go after the creators.
He then suggests a method for going after the creators, i.e. people who livestream their abuse over secure connections, or use them to conduct the abuse.
It's a far better proposition than assuming everyone is guilty and mass-scanning photo libraries.
I was in complete agreement with most Apple-related comments until I saw this group of knee-jerk reactions to a reasonable attempt at discussion. wtf
This is one of those things where you can align with the intent --child abuse is a horrible thing-- and yet, at the same time, cringe at the prospect of what doors we might open.
I don't use iCloud. I have no need for it. Then again, most people on HN do not fit the profile of the average Apple user. When you are technically capable, some of these things don't have the same value they may have for your parent, uncle or grandma. In my case, I had a couple of problems back in the iPhone 3 days and just opted to ignore it completely. Today, my iPhone X isn't using iCloud and all is well.
That said, I have seen people do things like take pictures of tax and other documents and message them to others. I can't possibly imagine what people take pictures of and unwittingly keep in their phones and on iCloud. ID, paychecks, that wart in their crotch, anything. The average user has no clue how any of this works. It's magical. And, yes, it's simple. And, yes, it comes with potential consequences.
And now, all of it is up for evaluation for potentially criminal activity? By an anonymous team with no legal accountability to anyone? Without, and before, being accused of anything?
> "But they said they expected any such attacks to be very rare ..."
Well, ransomware has been rare almost forever, then suddenly became the norm.
> "and that in any case a review would then look for other signs of criminal hacking."
Good luck finding a malicious app that downloads child porn from an encrypted remote server, plants it on the target device, sends an example "by mistake" to social media using the owner's credentials, then deletes itself.
This is crazy. Child porn traffickers will find a way to circumvent this, while it would offer governments just another weapon against people they don't like.
Also they completely ignore that we're talking about child porn; if someone is wrongly linked with the subject for just one second by the media, no matter how many times the news is rectified afterwards, their life may be ruined forever. It's not like being accused of avoiding taxes or theft; any mental association with things like child porn or rape does not go away easily.
Any technology that could be (ab)used to plant evidence in such cases would be the ultimate weapon to destroy individuals without actually killing them. Better not to have it than to risk that it ends in the wrong hands.
I have come to see one more nuance in this viewpoint. If people are spreading CP by signing into the same Apple account from multiple devices and using iCloud to automatically share the photos, I think that's a different situation than a single person signing into one computer and one phone that are mostly used on the same networks together.
Not that I've really changed my view that I wrote before, just there is a bit of grey here.
A) that would be clever, and if the bad guys are that clever, they will easily find another way to share their stuff.
B) Apple could simply make a policy that your iCloud will trigger a scan/review after x devices are logged into it within a certain amount of time, in order to make sure you aren't using iCloud as a distribution platform for something. This whole scanning-locally-on-device business wouldn't be necessary.
Wow. This could be messed up for attorneys, DCS, social workers, etc. They allude more to child pornography, but I hope it doesn't extend to physical abuse.
Those photos are usually taken on phones by spouses, doctors, schools, etc. to be passed to the above on their phone for evidence for a DNN or similar case.
Glad my kids have aged out of baby bath photos.
And those poor people who I know are going to have to provide an auditing safeguard. I hope they take care of their mental health.
In every thread about this, someone makes this same false assumption: no, Apple is not scanning for naked children or children in pain. It's generating a hash to be compared against hashes of NCMEC-verified CSAM pictures (and while some HN commenters claim the DB contains non-CSAM, that has not been verified nor ever reported on by a news publication) and it does indeed only scan photos destined for iCloud Photos (which I theorize is the only part keeping this system legal[0]).
Auditing it would mean looking at CSAM, which is illegal. If it was widespread, I would expect at least one of the manual reviewers that are able to legally view the CSAM would contact major publications (under the promise of staying anonymous) and whistleblow on this issue.
To be clear, I'd expect whistleblowing if these manual reviewers were tasked to 'accept' CSAM submissions that aren't CSAM.
There's no point in clientside scanning, then. The photos in iCloud are already not e2e and are completely readable by Apple (and are regularly turned over to the USG without probable cause or a search warrant).
What Apple are saying is they don't want to scan your library in the cloud, or have records of the outcome. There's a lot of process involved in them trying to have as little information about it as possible.
Apple has previously attempted to end-to-end encrypt device backups and voluntarily declined to launch the feature (after already doing design/code on it) when the FBI asked them not to (not under any legal compulsion).
Apple as a whole organization doesn't care about end user privacy. People claiming such are ignorant of the facts and are repeating Apple's marketing narrative.
Of course Apple cares about privacy to the extent it benefits their business. They are not a data company, for them user data are a liability.
Legal compulsion or not, FBI was somehow obviously able to force them to abandon iCloud e2ee and now NCMEC (or whoever) was able to force them to do this.
EARN IT seeks to deal with the scourge of online child exploitation by coercing service providers to more aggressively police such content on their platforms. https://www.congress.gov/bill/116th-congress/senate-bill/339...
Similar laws in UK and others.
Maybe this will short circuit the need for a government backdoor to snoop in icloud photos?
Q2) Didn't people already agree to no illegal CP in the iCloud TOS? Doesn't all this just move the scanning from Apple's servers to distributed ARM processors?
Q3) Is that more environmentally friendly or less? I am sure it is cheaper for Apple to have the iPhone scan than to add additional servers, cooling, space, etc.
If one doesn't use iCloud Photos, this does not affect them, for now.
This is Apple's answer to not decrypting/unlocking phones for authorities. They found a way to keep our data private, while still being able to detect criminal activity. Oddly enough, most ISPs already scan for CP on the wire. So I'm not even sure this is a necessary next step.
I'm confused. I was under the assumption that they were only going to do client side detection, is this article claiming they will be running the scans in the cloud as well?
No, they are putting this in place to avoid doing this in the cloud against your whole library and having access to all of the information that generates.
How is this supposed to be helpful? Wouldn't a perpetrator simply turn off iCloud syncing for their photos? Why would they even store them in the photos app in the first place?
Exactly. Especially considering it's been announced publicly.
Cynical take- as pedos move to other means of storing and sharing CSAM, there will be far fewer photos flagged for review which requires fewer reviewers to be paid by Apple. If they wanted to do this for the greater good as some users claim, wouldn't they have been far more successful in catching criminals if they kept this system secret?
Disclaimer: I'm not in support of keeping it secret nor even the system itself, but this is a question worth considering when viewing the situation through the greater good lens.
The worst part of this is that it's entirely possible (see: inevitable), and there's no way for us to hold Apple accountable. Once Apple starts hashing the rest of their userspace, the government will have access to a low-resolution transactional history containing the signature of every file you ever saved, made or shared. The fact that this data even exists is a sign that it will be abused.
I don't use Apple devices, but I'm wondering -- are the type of people who use iPhones significantly more likely to be engaged in exchanging child pornography than people on Android?
Perhaps there's a real problem here that needs to be addressed (though not in this way that opens the door to all kinds of surveillance)?
Google (and every other major tech company) already are doing exactly what Apple is. The only difference is that Apple is going to start doing the computing of hashes on the device for photos that are about to be uploaded, vs waiting for them to be uploaded to iCloud.
To me this signals that they are going to start allowing E2E encrypted photos on iCloud but they need to compute the hashes on device to comply with the law because they cant hash them once they are encrypted.
Another thought I've had: I'm not sure hiring tens of thousands of people to look at porn and child porn is really the future solution set that we all want. We'll have another subclass of our culture, like military vets, who'll have trauma and PTSD as part of their job experience.
What? No, it’ll be a small amount of contracted resources that become second-class Apple “employees” who aren’t allowed to say they work for Apple. That small group will be responsible for taking on the work, which will be way more than the small number of resources can handle, then Vice will write an article about how traumatic the experience is.
A natural extension of these systems would be to enforce copyright, no? Of course it would be bad press to sue anyone possessing copyrighted content. A more measured response would be to have it disappear from iCloud, with a message that it has been put in the memory hole.
I think they're being a bit careful with the wording. They are noting a false positive rate, not accuracy. One way to get there would be to only report images with 99.999999% confidence. This would obviously not report a lot of stuff that is actually illicit material too.
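For anyone parsing that wording: a false positive rate and accuracy are different quantities, and you can always buy a lower false positive rate by demanding more confidence, at the cost of missing real material. In standard terms (these are textbook definitions, not Apple's published metrics):

```latex
% Standard definitions, not Apple's published metrics.
\text{FPR} = \frac{FP}{FP + TN}, \qquad \text{recall} = \frac{TP}{TP + FN}.
% Raising the decision threshold lowers FPR but also lowers recall, so a
% "one in a trillion" false-flag rate says nothing about how much real
% material the system misses.
```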
I find a marked difference on HN about the attitude towards end-to-end encryption, and anonymous transactions in cryptocurrencies.
The former can enable the latter. And so much more. Organizing sex trafficking, terrorism and so forth. Nevermind the copyright protection stuff.
The latter can enable tax evasion, money laundering and financing unsavory activities. States don’t want people to be able to do that.
Yet many on HN applaud attempts to doxx everyone and every transaction in crypto, calling it a scam/for criminals, while at the same time decry any attempts to lessen encryption, however subtle or careful, of personal files and communication.
What is a consistent position on both these topics, given that there are dangers on both sides of the argument? I tried to present the core issue here:
Maybe this is an unpopular opinion, but for anyone who loves the Apple ecosystem: you can have an Apple device with minimal private stuff and a secondary non-Google phone for private stuff.
Can someone please explain to me how this comparison would work? It seems so trivial to alter any image containing CP slightly such that its hash doesn't compare anymore?
I know this is a joke post, but I think in all seriousness it's going to take a crisis like this to get these laws/society changed.
"Thousands of developers swept up in CP ring!" that later turns out to be malware planting CP would go a long way towards fixing this issue.
I really am surprised nobody has made a worm that's sole function is to hit every FBI honeypot in existence and archive it to hidden folders just to prove a point.
What if Apple deliberately does this to frame someone? Or a government official? This Apple news really changed the way I approach my backups and data.
"The disclosure came in a series of media briefings in which Apple is seeking to dispel alarm over its announcement last week that it will scan users' phones, tablets and computers for millions of illegal pictures."
Yes, this definitely "dispels" my alarm. Thanks, Apple.
I don't know whether to laugh or cry.