Anything that includes client side scanning is a slippery slope to fully controlling your device. Will it be illegal to somehow disable the client side scanning? If so then how long until you are breaking the law when you turn off the government scanner — or are caught “installing a new hard drive” in your computer, etc.
Is the problem that people can send encrypted things back and forth to each other? Requiring that companies put snooping software on people’s devices is basically the thought police. Not hyperbole but the actual thought police. Today it’s saving the children, tomorrow it’s basically any problem the governments of many nations want to try to solve.
> Will it be illegal to somehow disable the client side scanning? If so then how long until you are breaking the law when you turn off the government scanner
And once they've normalized "your computer will spy and inform on you", is there any reason to think that won't expand to things which aren't colloquially "computers" but in fact are now computers?
What about "smart houses"? All your IoT toys are computers. Once phones, laptops and PCs as mandatory reporters has been normalized, is there any reason to think all the other microphones and cameras already in people's houses won't become mandatory reporters too? If they make it illegal to disable client-side scanning on computers, might they also make it illegal to remove the crime-detecting cameras in your own home?
Modern cars already narc on people, logging and uploading GPS traces that can be fed into police dragnets, just like phones. Cops can ask for a log of who's been inside a 'geofence' and where does that data come from? Phones and cars reporting on their owners, generally without their owners knowing anything about it. The 'slippery slope' isn't actually a fallacy if you have enough datapoints to legitimately draw a trend line. And I think we certainly do.
I remember talking to my dad about 1984 -- the book, not the year -- and being somewhat surprised that he didn't find it prescient, or honestly even much of an interesting book, whereas when I read it, it definitely filled me with a sense of dread. I suppose "in his day" many of its predictions had not yet come true, and he didn't quite grasp how now they most definitely have.
Can you provide a source for that last paragraph (the cars uploading GPS traces and cops asking for logs)? I'm interested in knowing more, e.g. which country this happens in, are there any checks and balances, is this constitutional?
> The Jews will hide behind the stones and the trees, and the stones and the trees will say: Oh Muslim, oh servant of Allah, there is a Jew hiding behind me, come and kill him.
My friend, these are dangerous thoughts! Once we have our wifi-enabled brain interfaces you will get punished for thinking this. Please train yourself to forget about it now before you get in trouble.
But will we be punished for thinking it? Or will the chip simply cut off such trains of thought in the first place? :) "Revolutionary new brain implant improves mental health by suppressing damaging* trains of thought"
There have already been experiments in altering personal biases and beliefs through transcranial magnetic stimulation [0][1]. Direct electrical stimulation will eventually permit a greater degree of control.
> We presented participants with a reminder of death and a critique of their in-group ostensibly written by a member of an out-group, then experimentally decreased both avowed belief in God and out-group derogation by downregulating pMFC activity via transcranial magnetic stimulation. The results provide the first evidence that group prejudice and religious belief are susceptible to targeted neuromodulation, and point to a shared cognitive mechanism underlying concrete and abstract decision processes.
The former, in a couple centuries. Centralized monitoring of common thoughts will be a solved problem, and the means to circumvent it won't be known by many. This ideal oppression machine will last for a few centuries.
The only client-side scanning proposal we’ve ever seen (Apple and NCMEC’s 2021 photo scanning proposal) didn’t even address encrypted messaging. It worked on private photo libraries on your phone. I think it’s very important to reiterate that the targets here aren’t communications between criminals: it’s your private data.
It would scan photos you were uploading to iCloud, not private photo libraries on your phone. I'm sure you'll agree it's important to correct such a misunderstanding as one of those is a lot more invasive than the other.
It would scan the private photo library on your phone. There is no “public” photo library on my phone, and the default photo library contains extremely personal photos that I consider private and have not shared with anyone else, so I see no value in torturing language to pretend that this is not my private photo library. You are correct that it would only scan my private photos conditioned on a switch being turned on that would also cause those photos to be uploaded to my private cloud backup account. However this does not make that data any less private to me, and it is very different than scanning my photos in the cloud.
Since late 2022 Apple has enabled Advanced Data Protection, which encrypts all photos before they’re uploaded to cloud storage. With ADP on, my photo library is “private” not just in the common-language sense (it contains extremely personal data I have not shared with others) but also in an opinionated technical sense that these files are accessible only to me. If Apple’s CSAM scanner was deployed today, it would be scanning those photos in cleartext on your phone before unreadable data was sent up to the cloud. You could argue that Apple was making a trade: “hey, it doesn’t matter whether we can read the private data you’re storing, the price of sending even unreadable encrypted private data to our infrastructure is that you must run local software that scans the private photos on your phone,” and that’s a trade you might accept or reject on the merits. However I think it’s extremely important to say it exactly this way and not play language games. Apple was going to mandate local scanning of private photos as the cost of using their infrastructure even to store opaque private bits.
To add more detail to that, Apple's proposed CSAM scanning worked by computing a hash value for each photo on your device and then comparing it to a list of known CSAM image hashes downloaded from Apple. Entirely on your device, aka the "client," as in "client side scanning" (to clarify, Apple's cloud is not the client, your personal device is). Then if you had photos that hashed to a value on the known CSAM hash list (and this isn't MD5 or some similarly broken hash algo, so that would only happen if you either engineered a hash collision or actually had CSAM content), they'd be sent over for a human to look at. Multiple photos, too, because a single match could well be a false positive.
It did a great job of freaking people out when they heard about their photos getting scanned, and it could be defeated by making a 1 pixel change to any photos a pedo would hide on their phone (since any changes to the image would totally change the hash).
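To make the match-and-threshold flow described above concrete, here is a minimal sketch in Python. It is purely illustrative: the hash strings and threshold value are made up, and Apple's actual design layered NeuralHash, private set intersection and threshold secret sharing on top of this basic shape.

    from typing import Iterable, List, Set

    # Illustrative only: hash strings and the threshold are invented for this example.
    MATCH_THRESHOLD = 3  # several independent matches required before anything is surfaced

    def scan_before_upload(photo_hashes: Iterable[str], known_hashes: Set[str]) -> List[str]:
        """Return the matched hashes that would be queued for human review,
        or an empty list while the match count stays below the threshold."""
        matches = [h for h in photo_hashes if h in known_hashes]
        return matches if len(matches) >= MATCH_THRESHOLD else []

    known = {"a1", "b2", "c3", "d4"}                      # stand-in for the downloaded hash list
    print(scan_before_upload(["a1", "b2", "zz"], known))  # [] -- two matches, below threshold
    print(scan_before_upload(["a1", "b2", "c3"], known))  # ['a1', 'b2', 'c3'] -- threshold reached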
>could be defeated by making a 1 pixel change to any photos a pedo would hide on their phone (since any changes to the image would totally change the hash).
This isn't the way those hashes work. A 1 pixel change would still hash similar enough to be matched. Maybe there are adversarial 1 pixel changes that could break the hashing, but I doubt it.
Even cropping, watermarks and other manipulations like that would still match. "Perceptual hashing", very different to cryptographic hashing. It's basically checking if an image looks "similar enough".
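For intuition, here is a toy average-hash in Python, one of the simplest perceptual hashes and far cruder than Apple's neural-network-based NeuralHash (the "photo" and numbers below are invented). It shows why a one-pixel tweak generally leaves the fingerprint unchanged, unlike a cryptographic hash, where any change flips roughly half the bits.

    import numpy as np

    def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
        """Block-average the image down to hash_size x hash_size, then threshold
        each block against the global mean to get a 64-bit fingerprint."""
        bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
        blocks = img[: bh * hash_size, : bw * hash_size].reshape(
            hash_size, bh, hash_size, bw
        ).mean(axis=(1, 3))
        return (blocks > blocks.mean()).astype(np.uint8).ravel()

    def hamming(a: np.ndarray, b: np.ndarray) -> int:
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(512, 512)).astype(float)  # stand-in "photo"
    tweaked = original.copy()
    tweaked[100, 100] += 50  # the "1 pixel change"

    # The Hamming distance is tiny (usually 0): the fingerprints still match.
    print(hamming(average_hash(original), average_hash(tweaked)))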
I believe this is why they needed multiple matches, because otherwise there must have been too many false positives.
This may be too oversimplified, but imagine that in a series of CSAM images, there might be, for example, a wall or furniture or something, that could appear similar enough to a wall in one of your own photos. That's a match, off to the gulag with you!
At the time of the announcement in 2021 there were no encrypted photos in iCloud. There was "private photo library only on your device" and "photos shared with Apple (not private)". The scanner would not have scanned private device-only photos.
> "If Apple’s CSAM scanner was deployed today, it would be scanning those photos in cleartext on your phone before unreadable data was sent up to the cloud."
Or enabling Advanced Data Protection could have disabled the scanner, we don't know. Even if it went the way you said, you could still not use iCloud and have private photos on your device, whereas your phrasing is trying to imply that there would be no option to do that.
Apple has been working on end-to-end encrypted iCloud since at least 2018 [1]. In fact they’ve been gradually implementing it since 2015. They finally deployed ADP in 2022. It is ludicrous to believe that in 2021 they designed a client-side photo scanning system whose only conceivable purpose is to be part of an end-to-end encrypted backup system, and yet also believe that system was not intended to be turned on as part of their ongoing (and ultimately successful) encryption rollout.
I think we will have to agree to disagree about the idea that turning on cloud backup suddenly makes my private photo library “not a private photo library.”
Yeah, important to mention that. Still, I don't want my phone to even be capable of doing that, nor do I see the reason behind it when iCloud could just do the scanning itself. That's one big step closer to the described full-private scanning (and just a flag flip away).
> "Still, I don't want my phone to even be capable of doing that,"
Not capable of what, running software? Communicating with a HTTPS endpoint? Having library code? Running stuff in the manufacturer's interest rather than your interest? All those things happen already in some form or other, and there isn't a cutoff to make the phone incapable of it without hobbling the phone.
> "That's one big step closer to the described full-private scanning (and just a flag flip away)."
iPhone already does scan offline private photos for face and object recognition purposes. And run big blobs of unknown Apple-provided code. It's only your trust in Apple that makes you think it doesn't report anything back now - and there's nothing at all stopping them from being arm-twisted by the authorities into making that scan look for something the government dislikes and report on it, as you say a flag flip away. It already does send your location, your surrounding WiFi signals and your voice when you use Siri unless you toggle the privacy settings, and that all came in quietly on regular updates.
Apple walked a fairly narrow line when they announced it, and they publicly stated that if the authorities asked them to extend the scope of the scanning, they would refuse.
I don't know why they chose to do it on the endpoints rather than in the cloud, but acting like doing it on the cloud would give you any level of protection from them putting intrusive software on your phone is not reality. (Same with Google, Samsung, et al).
Not loaded with trained models on illegal content and wired up to alert the authorities if it finds a match, with presumably several teams within Apple built around that feature. I'm thinking about more than the technical aspects of this.
> It's only your trust in Apple that makes you think it doesn't report anything back now
Yeah, exactly. I trust them enough right now to run tons of stuff without my knowledge on my phone. I don't have the time or knowledge to audit my phone, even if it were Android. If they announced that new feature is going live like it's a thing customers are meant to be ok with, I'd trust them a lot less.
The supposed argument was that they wanted to keep the scanning they do in iCloud now (I believe they do it) and yet make iCloud encrypted so that they can't see the images once they leave your device.
Did Apple actually say they wanted to do e2ee iCloud photos when they announced CSAM scanning, or were people only speculating this? I don't remember / can't find an announcement on that. Also curious if there's some law preventing them from doing e2ee without the scanning.
I believe it was speculation based on them saying they wanted e2ee (and now it's available IIRC).
It honestly seems to me like they thought they could negotiate a middle ground without pissing off the Feds or the customers, but they maneuvered it quite badly.
Oh cool, didn't know that. It's this new "advanced data protection" feature that makes everything in iCloud e2ee except the classic mail/contacts/calendars combo that wouldn't really work with that. https://support.apple.com/en-us/HT202303 is a nice resource on this, and I wish more companies would publish things like this.
> It would scan photos you were uploading to iCloud, not private photo libraries on your phone
There was no way to separate the private photos from iCloud-uploaded photos. It was all-or-nothing, like Android permissions: “Allow govt to scan all your private pictures, or do you wish to have no backup?”
It was perfectly feasible to design the ability to have private photos, but Apple chose not to. Or Apple, in collaboration with the government, chose not to.
The wording "scan your phone" misleads by implying that the photos are scanned because they are on your phone, when really the photos would be scanned because you are sharing them with a third party, by that third party. Yes it was all-or-nothing; what it wasn't was all, full stop: there was a "nothing" option, and that nothing would have kept your photos private.
(Yes it was feasible to design the ability to have a local photo store which isn't uploaded to iCloud, separate from other photos which are, and call it "private photos", but that's another matter).
It would start at only scanning content that was going to be uploaded to iCloud. There's literally nothing stopping the process from scanning all images whether they're going to be uploaded to iCloud or not. Such an expansion would use the exact same justification as the iCloud-bound content scanning.
It's a slippery slope that ends up with your phone/computer snooping on texts, call contents, or anything else and then submitting your "crimes" to the authorities.
They could have designed it to do that in the first place, and not announced it at all, just hidden it away in a point release. They didn't.
> "Such an expansion would use the exact same justification as the iCloud-bound content scanning."
One of the justifications was that Apple are/could be legally responsible for criminal images hosted on their servers, that exact same justification wouldn't apply to offline content scanning.
> "It's a slippery slope that ends up with your phone/computer snooping on texts, call contents, or anything else and then submitting your "crimes" to the authorities."
Not "crimes" in quotes, crimes without quotes. Generally people think law enforcement is important, especially regarding crimes against children. "A slippery slope" which leads to criminals being punished is not the argument-winning logical fallacy you think it is. The argument against it is around invasion of privacy, rights not to self-incriminate or to remain silent, ownership of device and software, freedom from unreasonable search without prior evidence, whether you can be found guilty by algorithm, et al.
> Not "crimes" in quotes, crimes without quotes. Generally people think law enforcement is important, especially regarding crimes against children. "A slippery slope" which leads to criminals being punished is not the argument-winning logical fallacy you think it is.
You're missing my point. Content scanning will start with the Four Horsemen of the Infopocalypse[0]. Then it'll move on to "crimes" like blogging about a public figure[1].
This is a person who is linking to an article that literally says the opposite of what they are claiming it says.
> The media erroneously reported [a statement from September 2021] as Apple reversing course.
Linking to an article from December 22, 2022 in which Apple is quoted as literally saying “We have further decided to not move forward with our previously proposed CSAM detection tool for iCloud Photos.”, ie actually and literally reversing course.
If this person cannot comprehend an article, should I trust that they actually checked that what they were seeing was what they were paranoid about?
In short, paranoid people are good to use as indicators for further investigation, but are rarely to be trusted as sole sources—even less so when they attempt to ascribe intent.
"erroneously" is referring to his later claims about Apple still continuing to scan. I guess it's a poor choice of words, but I don't see much problems with that.
Your note about that domain is interesting though. I don't regularly use Apple devices, so I'm not particularly concerned by that anyways.
If you turn on ADP those photos will be encrypted with a key Apple doesn’t have. They will still be private photos, of course. Even turning on unencrypted backup doesn’t make my private photos not-private.
You say that sharing your photos, unencrypted, with a third party you don't trust, doesn't make them "not-private" and you accused me of twisting language.
Yes there can be private conversations between two people, but there can't be private conversations between two people where one of them isn't trusted but can hear the conversation and simultaneously, what, can't hear it to keep it private?
What's especially fucked up in that case is that Google suspended his account and didn't restore it even after the cops closed the case and said that it wasn't a crime.
So the lesson is clear - avoid Google as much as possible and use services from separate companies. Email from one company, Chat/IM from a different one and so on. So that if one of your accounts gets suspended for one reason or another - it would not affect the rest.
I look at it as, split apart cloud things that are totally unrelated to the rest. Like there's no way my YouTube creator account would be the same as my personal Gmail/calendar/whatever. And don't put photos in the cloud at all unless it's e2ee. Chat/IM and email in the same place would make sense cause they're related things, except nobody uses Google's chat products.
It's nice how iPhones do the neat AI photo search and montages all client-side, with my Mac backing things up. They're really catering to the less common use case there.
Not the only one. At the same time, they announced a separate scanner for incoming inappropriate photos over iMessage as a parental control feature. Unlike the photo library scanner, this one actually got released. https://www.apple.com/child-safety/
That feature did not report users to the police. It did notify parents automatically, however, if you were a child in a family account. After some feedback from child protection advocates they removed the mandatory parental notification and made it a button that the child has to press.
> Today it’s saving the children, tomorrow it’s basically any problem
This is how most rights get taken away, not just encryption. Also, we're talking about countries that already have pretty restricted speech. Encryption has to consistently remain popular to survive there, and there are plenty of ways to undermine that.
And it's completely obvious bullshit. Consider any more salient issue affecting children: the leading cause of death for children under 18 is car crashes. The response to this large and growing problem is either to blame the victims (they shouldn't have been in the road / they should have walked a mile out of their way to get to the spot 100 feet across the road / they weren't wearing construction vests & waving flags / walking to school isn't allowed in the first place) or to wave it away as the cost of doing business.
Were there a shred of consistency in actually advocating for children's quality of life, I could forgive those who are duped by these underhanded tactics. As it stands, there's no actual concerted "save the children" bandwagon that we're being invited to hop on to.
The pretext of child abuse prevention is used as a plausibly-deniable dog whistle to automatically discredit and demonize anyone who dares speak out against the proposed law.
It's got nothing to do whether the law would be actually effective at curbing such abuse, whether it will have harmful (seemingly-) unintended consequences, whether other solutions would be more effective or have less potential for unintended consequences, or whether the problem is actually widespread enough to even worry about compared to other risks that affect children.
Very well said, but even if we were talking about just CSA, this is still bullshit: the one thing you can do to help prevent child sexual abuse is to talk to children about sex, taboo-free, and have quality sex ed starting in primary school. Emphasising consent, explaining what sex is and how it works, how it's not okay for an adult to do it with you, how it's not okay for anyone to touch you without your express consent, inculcating healthy views about relationships and sex from an early age, etc.
Unfortunately this is uncomfortable for some people (understandable, it's pretty awkward to talk with a 7yo about sex), but it's what politicians should be pushing if they were really about "saving the children". Monitoring cell phones is not it.
> As it stands, there's no actual concerted "save the children" bandwagon that we're being invited to hop on to.
Maybe one should be created, with a prioritized list of issues, and loudly inform people that if issues at the bottom are being prioritized the proponents may have other motives than the ones they claim.
Impossible. If you rank avoidable deaths by quantity, men dominate all of the top 10 categories, so I'm pretty sure no one would want to make this list anyway.
If there were a political bandwagon around reducing child deaths, I'd probably still suspect ulterior motives. Also, if it were based on just death counts, I view murder differently from accidents. It's like when people try to compare lung cancer and 9/11.
Agreed on all counts. Even discussion of the statistics on firearms-related mortality tends to sidestep the proportion of suicides. People seem forever to be more interested in pushing their own positions than the hard work of solving the problems, even just one person at time.
I definitely agree surveillance is happening and I don't mean to do a whataboutism, but why such emphasis on the Western internet? Are you implying this does not happen elsewhere? If we take a counterpoint to the West, India and China both have significant surveillance of their citizens, and they account for another third of the world's population.
Let's just not, because the ones carrying the baton of "liberty" are the ones that ought to be measured by that same stick.
China and other countries don't fall into the hypocrisy of saying that your info won't be part of the dragnet. That is not the case with said Western governments, which then proceed to decry the evil non-Western countries for doing the exact same thing while expecting their populations to somehow do something about it. It is indeed nauseating.
In some places they say you are free and have rights, but you don't.
In other places you know you don't have certain rights and act accordingly, because people are not stupid, if they know there's a potential danger ahead, they become more cautious (or they carefully comply, because there are no alternatives).
I would say the first group of countries is more dangerous, because it gives people a false sense of security, lowering their natural defenses, while corporations profit from knowing everything about them, things they said they should never have had access to in the first place, because you have rights, right?
Because king pooh doesn't spend disgusting amounts of effort talking about how China is a shining beacon of freedom, how "we are NEVER gonna be like THEM", "that doesn't happen here".
The West talks big, but in reality Western countries are the biggest, slimiest, dirtiest hypocrites the world has ever known.
How will it work on computers? Will browsers do the client-side scanning? Will Apple and Microsoft implement it in their OS'es? What about Linux, will Linux be forbidden? (let's not get in the discussion that Linux is the kernel, you know what I mean).
Did some minor research, apparently it's for all providers of email, chat and messaging apps.
edit: How will it work in practice? Say I make some Open Source messaging app. Now I need to add some government-approved algorithm to detect malicious content and then feed the results to some government endpoint. I guess the government will provide me with some key/certificate to ensure that my reports of malicious content are legit. But how will this work if it's all public? The signing machinery can be abused to file false reports. I have no clue how this will work in practice. The death of Open Source email, chat and messaging apps?
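For what it's worth, here is a Python sketch of that signing concern, under the assumptions above (the report format, the key handling, and the use of the `cryptography` package's Ed25519 API as the signature scheme are all my own stand-ins, not anything from the actual proposal): if the client has to carry a key that makes its reports "legit", anyone who can read the open-source client, or dump the binary, can extract that key and sign bogus reports.

    import json

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical scheme: a signing key shipped with the client so that its reports
    # can be authenticated by the receiving endpoint. In an open-source (or merely
    # inspectable) app, that key is extractable by anyone.
    signing_key = Ed25519PrivateKey.generate()  # stand-in for the "government-issued" key
    verify_key = signing_key.public_key()

    def sign_report(report: dict) -> bytes:
        """Produce a signature the endpoint would accept as a 'legit' report."""
        payload = json.dumps(report, sort_keys=True).encode()
        return signing_key.sign(payload)

    # Nothing ties the signature to a real detection: a forged report verifies fine.
    forged = {"reporter": "my-messaging-app", "user": "someone_i_dislike", "matches": 5}
    sig = sign_report(forged)
    verify_key.verify(sig, json.dumps(forged, sort_keys=True).encode())  # raises no exception
    print("forged report verified")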
Richard Stallman sleeps on a couch in Terry Gilliam’s Brazil. The police find out SSH and console emacs exist, as do /tmp and multi-user unix. Hilarity ensues as they knock down the door and arrest him.
Years later, we find out the mind crime courts use ~IRC over SSL~ (edit: emacs org mode over sshfs) to organize their docket, and they eventually have to give RMS access to a libre terminal from his jail cell, so he can help them finish his own processing.
At this point, the backstory is established and our story begins…
Except the EU is pushing for chat client interoperability[0], and side-loading[1], so everyone should be able to switch to open source apps that don't have this Kafkaesque "your guilt is decided by the AI" feature.
So, a law in France prevents POS (point of sale) software from allowing users to modify or delete transactions and other data. To make sure they don't, software needs to be certified.
For three years (2016–2019), open-source software couldn't be certified, but since 2019 they consider any 'major modification' of the software by any user, including the end user, to require certification of the forked software. So you can use and modify open-source software for your POS, under that condition, if you want to use it for professional purposes (though I have no idea how it's enforced).
Interesting; I think this should be more or less the same all over Europe, but it apparently is not. The Swedish tax agency has never required such a certification, but the manufacturer is required to self-certify the POS; in addition, you need a serial-connected certified control unit that stores all the raw receipts (actions) of the POS.
Swedish info; if you have the same for France I would be interested. It's not easy to translate the PDFs into something usable, sadly.
I'm sorry, it appears your OS image isn't signed. Please submit it to the certification authorities along with the inspection fee of $25,000 to ensure that it complies with all necessary regulations.
Computers are going to become more like cell phones with locked bootloaders. TPM is already a mandatory feature thanks to Windows 11.
Well then we'll build our own computers at home. Sam Zeloof [1], a high school student at the time, demonstrated the possibility years ago. Are they going to outlaw electronics knowledge? At that point we're beyond a dystopian society, we're post-apocalyptic.
No, but they might ban "bespoke" devices.[0] Maybe you're smart and brave enough to build and maintain your own illegal device, but good luck trying to persuade other people to do the same so that you can communicate with them securely online. Also, the government might force ISPs to block access to devices that don't pass remote attestation checks.
Then chip makers will make money selling chips without secure boot. And when those are outlawed, people will start to make their own bootleg chips. This is already starting to happen, it's just in its infancy. They won't be as nice as the current chipmakers but at least they wouldn't be beholden to crazy legal restrictions.
Governments have access to what is off-limits to normal citizens and hackers. All they need to do is tell the phone/router/CPU/chipset/NIC manufacturers: "if you want to do business here, from now on you put into your firmware this small blob that will help us catch pedophiles and terrorists", and see how quickly they comply. Open Source software would be tolerated because the hardware runs at a higher privilege level, and if you tamper with it at the production level to insert backdoors, no Open Source operating system or software can prevent them from working.
This sort of argument seems similar to "but I can manufacture my own gun in my garage machine shop" That's great for you, but it says little about the ultimate efficacy of a policy on a general population level. Japan's gun ban is generally effective, even though you occasionally have somebody who successfully makes their own homemade gun. And regulation requiring computers to spy on their owners could become generally effective, even if a few people like yourself have the technical know-how and inclination to opt yourself out.
I don't think such controls on software and data are as enforceable as those on firearms. An older router that can run OpenWRT is way easier to procure than a 3D printer or a CNC machine.
Why not? Anybody can make a homemade shotgun with standard tools and hardware store parts, but very few people do. Very few people simultaneously have both the technical know-how and the inclination, therefore gun bans are generally effective by simply banning the easiest and laziest way of getting guns (buying them.) Even people with overt criminal intentions rarely make their own guns.
I think the same will likely be true for legally-enforced client side scanning. It is already the case that few people simultaneously have both the knowledge and inclination to "jailbreak" their phones. Throw in stiff legal penalties for doing so and even fewer people will do it. A few people still will, but if most people don't then the ban will still be effective even though it's possible to squeeze through the cracks. In both cases, instructions for circumventing the law may be found online by anybody that cares to look. But most people won't.
> Like piracy? It's illegal and fines are often hefty so shouldn't it deter people from pirating movies or downloading roms etc?
Yes, and it mostly works! Bans on piracy work well already, most people don't torrent games or movies. And locked down platforms exist, demonstrating the technical feasibility of even greater control. Software piracy in particular is much more difficult on the sort of computers that manufactures deliberately design to be locked down, like modern video game consoles and iOS devices. It is still possible, but has been made sufficiently difficult to stop the majority of the population from doing it.
Plex works on iOS and video game consoles, doesn't it? You can install VLC or similar software on your iOS device and watch a downloaded mkv. People are often caught when selling or seeding a torrent, which is easily avoidable by using a VPN or seedbox, not when playing a rip of Dune.2020.4k.en.it.h264.mkv on their phone.
Piracy has been mitigated through better services at competitive prices offered by the likes of Steam, iTunes, Spotify or how Netflix used to be and not at all by law enforcement.
> Plex works on iOS and video game consoles, doesn't it?
As long as that remains permitted by Apple/Sony and your government, yes. If either of them decide to ban Plex or VLC, it will become effectively impossible for most people with normal levels of motivation and technical know-how.
> You can install VLC or similar software on your iOS device and watch a downloaded mkv.
Presently, you can. And yet presently, relatively few people do.
These sort of bans aren't ever 100% effective; you'll probably always be able to squeeze through the cracks if you try hard enough. That guy in Japan managed to make a homemade shotgun that was good enough to kill the ex-PM, but the simple fact remains that gun control generally works in Japan. And so do anti-piracy measures even today, before the full technical means of authoritarian control have even been brought to bear.
Locked bootloaders exist and mostly work. The fact that you can presently buy computers without locked bootloaders doesn't change the fact that the technical means of control have been demonstrated to work. Political policy is all that protects us today.
Game piracy doesn't work on game consoles. In order to pirate you have to jailbreak it (and risk bricking it in the process). Jailbroken consoles also don't work online. And not all firmware versions are vulnerable. Basically - it's a mess and most console players don't bother.
And as far as I know - XBox One, Series X and PS5 haven't been jailbroken at all.
Piracy may have been displaced into Netflix password sharing, or diminished and replaced by bona fide streaming subscriptions (why pirate if you can get what you want at a low price). But falling real incomes and Netflix tightening up on 'free' users could see more pirating.
Software has close to zero cost and a very low barrier to entry. It's also much harder to enforce; the police can't really "raid" your computer (yet) the same way they could an illegal gun-making workshop.
> the police can’t really “raid” your computer (yet)
> yet
Key word.
The technical means for that sort of thing have also been demonstrated. The only thing holding us back from a highly effective digitally-enabled police state is political policy. Software piracy is presently easy on some platforms, but much more difficult on others. With the right political impetus, those controls could be extended to the presently free platforms. "Just buy an Android" doesn't work when the law requires Google to implement the same sort of controls as iOS. "Just buy a PC" stops working when the government permits or even compels Microsoft, Dell, etc to implement locked bootloaders like a Sony Playstation and only permit applications to run if they've been signed by an organization accountable to the law.
Same as my WRAP boards from a few years earlier, but as with any other device out there, you may indeed protect yourself but have no guarantees the people you're talking to can or will do the same, unless they're privacy conscious and know how to protect themselves. Unfortunately all it takes is one of the endpoints to be compromised.
Yours however is still a very valid point on why we should keep at hand an old device from before every chip could contain a backdoor, if only just for texting over serial port, just in case, although this could bring other problems like not having enough power for heavy encryption.
The "invisible supply chain attack by intelligence agencies" angle is a plausible vector, but doing so pervasively and repeatedly in a democracy with open records is unlikely.
To keep a secret that spans supply chains, across multiple companies, many with substantial international ownership and/or interests? Not gonna happen.
How many common people really understand room 641A though? There's the case to be made that the level of signal propagation and uptake is still "low enough" where even though it is public, it is still effectively secret.
From a CAP theoretic point of view, the info is Available, but there is still a hefty Partition in that there is a significant degree of the population that isn't Consistent on this fact.
My experience is that a large percentage of ordinary people have heard about 641A, but the overwhelming majority of them think it's just another crazy conspiracy theory.
It's a sad joke that child protection is the driving argument for surveillance. The actual numbers are _horrifying_, but almost nothing is done about it even in "developed countries". None of the organizations looking into actual violence against children is advocating for such measures. It is a completely fake and bullshit argument.
> Indiscriminate messaging and chat control wrongfully incriminates hundreds of users every day. According to the Swiss Federal Police, 80% of machine-reported content is not illegal, for example harmless holiday photos showing nude children playing at a beach. Similarly in Ireland only 20% of NCMEC reports received in 2020 were confirmed as actual “child abuse material”.
All machine flagged reports must be checked by a human. Somebody will check your photos.
The point is that overwhelming majority of sexual abuse happens in families, and that online sharing of material is only a very small fraction of an overall "epidemic". If politicians were serious about addressing child abuse problems, they would follow the advice of child protection organizations including the CDC and WHO (and all the others). The only area where this comes up again and again is online surveillance.
Weird. I thought you were joking but you're not. What do some second-rate American actors have to do with EU Digital policy? They have no tech knowledge and they aren't even European.
These people perhaps have hidden financial motives (e.g. share holdings in scanning software companies), or they lack the technical expertise to judge the impact, and they may actually think what they want to do is a good idea.
In any case, they must be stopped at all cost. Freedom is priceless in the literal sense of the word, or people would not be willing to die for it.
Germany has a historic responsibility (after two totalitarian regimes that spied on its citizens in its past), which thankfully means a substantial part of the population was educated in school enough about the dangers that they would not support any party who let that kind of nonsense creep in, probably even regardless of which party is in power.
> Germany has a historic responsibility (after two totalitarian regimes that spied on its citizens in its past), which thankfully means a substantial part of the population was educated in school enough about the dangers that they would not support any party who let that kind of nonsense creep in, probably even regardless of which party is in power.
I wish it were so, but alas no. We have strong historic privacy laws, data protection authorities with teeth and a working court system constantly overturning new anti-privacy laws. Most of us Germans simply neither care about privacy nor understand why "it all has to be so hard". The parties most people here vote for are also the parties that constantly enact laws eroding our privacy, only to then have them overturned by the courts. If not for those safeguards, Germany would be well on its way to turning into an authoritarian police state again.
The only reason Germany is opposing this right now is that we currently have both the Liberals and the Greens in the government coalition. The Social Democrats (SPD) would have just winked it through and the Christian Democrats (CDU/CSU) would have fiercely supported it.
If what the article states is true, and they are pushing their own commercial product alongside their crusade to rid the Internet of CSAM, then that casts the whole endeavour in doubt.
What does some random EU bureaucrat know anything about it?
The commission is even worse. Most countries just send their loudest/most incompetent/etc. idiots there to get them out of the way for a few years. Just look at the commission president…
At least they are an EU bureaucrat, not an American actor. They are paid to make these decisions, whether I agree with them or not. A random American actor is just butting their head in where it doesn't belong. It's as if Daniel Radcliffe were trying to influence the American Congress - even if he had the best intentions in mind, he should still be told to piss off.
Parliament is elected. The Commission isn't, like any commission in any country anywhere in the EU (and outside of it if you include the UK). If you don't like the Commission then make sure your vote for the Parliament matters.
Commissioners and the President of the Commission are appointed (elected) by the Parliament, which is in turn elected by the people.
Same way the American president is elected by "electors" who are chosen by the parties, which are voted for by the people. People only vote directly for Congress in the US.
It's the same thing for the European institutions, I don't see the problem here.
The people vote in the Parliament. The Parliament decides to build a road. The Parliament appoints Jim to build the road.
Is Jim democratically elected? Or does he have some democratic mandate?
In most countries this would be corruption. Government projects like that have to go through an open bidding process and even after that Jim would not be considered to have any democratic mandate.
The difference between Jim and an EU commissioner is which job he is appointed to.
I think in practice the EU commission isn't that bad (so far) - almost all of the people on it are/were big name politicians in their respective countries. I might disagree with a lot of what they have to say, but they did/do represent a sizeable portion of the voters in their country.
No parliament appoints any Jim to build any road in democratic countries.
Jim's company needs to win a regular tender and have to respect a very long list of regulatory and financial requirements.
Jim's company is a supplier, commissioners are regulators.
Much like in my country (and many others in Europe) ministers are appointed by the Prime Minister who is not elected, but appointed by the parliament, by a majority of the votes.
It doesn't make the Prime Minister and all the ministers equal to an average Joe who's been called to fix a squeaking door.
>Jim's company needs to win a regular tender and have to respect a very long list of regulatory and financial requirements.
Yes, because without this bidding, the process falls into corruption. No such process exists for appointing EU commissioners. There's no "regular tender" that the candidate has to win.
>ministers are appointed by the Prime Minister who is not elected, but appointed by the parliament, by a majority of the votes.
The ministers answer to the PM. They are effectively one governmental unit, EU commissioners are not part of it. If the government gets a vote of no confidence then the government is dissolved as a group. EU commissioners can't be recalled mid-way through their term though, which isolates them from this.
> No such process exists for appointing EU commissioners
I'm really struggling to understand why you keep lying about this.
First of all, there is no such process for any politician anywhere in the world.
Because they are not building roads that people drive their kids on, and they are not solely responsible for building them; in democratic countries there are a lot of people contributing to a synthesis: there's the government, there's the opposition, there are third parties (so called because... you know!)
Secondly, "there's no regular tender" is bullshit.
Of course there is; nobody would be put forward as a commissioner if there were no grounds.
I wouldn't be chosen by any party, ever!
Because I have no chance of winning the tender.
Thirdly, and most importantly, there's a vetting process in place in the EU, and any of the commissioners there has more enemies than supporters, including those who are competing for the same seat.
So please stop your anti-EU propaganda and talk about the points you think are problematic in EU, not some fantasy issue that does not exist.
> The ministers answer to the PM. They are effectively one governmental unit, EU commissioners are not part of it.
Just because the mechanism is slightly different doesn't mean it's wrong.
In France the President is elected directly, in Italy it isn't, in the USA it's somewhere in the middle. It's the process that makes institutions democratic, not the system you choose.
> EU commissioners can't be recalled mid-way through their term though, which isolates them from this.
It's the same for parliament members in Italy unless they either die or resign, so what?
Commissioners are not appointed by the parliament; they are appointed by the governments of the member states. They often use it as a way to get rid of influential but incompetent politicians for a few years.
Candidates for the remaining Commission portfolios have to go through a tough parliamentary vetting process too.
The European Council, in agreement with the Commission President-elect, adopts a list of candidate commissioners, one for each member state. These Commissioners-designate appear before parliamentary committees in their prospective fields of responsibility. Each committee then meets to draw up its evaluation of the candidate's expertise and performance, which is sent to the President of the Parliament
After the President and Commissioners have been approved by Parliament, they are formally appointed by the European Council, acting by a qualified majority.
The commission is elected by the European Parliament from among the candidates presented to them. The Euparl is elected directly by the European people with proportional representation.
That's just false… They vote for the president of the commission, that's true. However all the members are appointed by the governments of the member states. The parliament only gets a vote on whether to confirm the commission in its entirety. They don't get a say in picking individual members.
You don’t get to vote for the members of the commission they all are appointed.
And the parliament is a joke: it can't even propose legislation, they just rubber-stamp what's sent to them by the commission. They also don't get a say in appointing individual members of the commission, only the president. However, the current president was probably the most incompetent high-ranking politician in Germany, so maybe they shouldn't get a vote on that either… the parliament is just a way for local parties to send noisy politicians away for a few years by giving them a cushy job, or just a place to retire for older popular ones.
Hardly anyone in Europe takes it seriously which is reflected in its composition…
They're not doing it to "protect" europeans, they're simply hoping the eu marketplace is big enough that they can make US companies change their software in ways that would violate (at least) the 1st and 4th amendments if the government demanding it were in Washington.
Wasn’t sure if you were being serious but sure enough:
“Thorn, a U.S. 501(c) (3) organization founded by Hollywood star-turned venture capitalist Ashton Kutcher and his former partner Demi Moore, has been a central force lobbying for the legislation.”
According to this article, it seems they want to sell it directly to chat applications.
"There's a company called Thorn that is lobbying for the scanning contract and would love to get a government mandate for its software to be installed into your chat clients," he said.
Apparently Apple didn't want to pay and developed their own in-house, only to scrap it after complaints from the public.
On a more serious note though I'd also like to know. I never paid attention what Hollywood actors do in their spare time, but it's well known around the world that there are politicians and lobby groups pushing for authoritarian measures under the guise of doing it "to protect the children".
I’m a bit confused, is the German government formally opposing client-side scanning requirements or not? The article is about civil society groups voicing their concerns at a parliamentary hearing and notes that the parliament doesn’t have a say in EU legislation. But it specifically says the government wants client-side scanning removed without any specifics on that part.
Though it's not the whole government who's against but really the two smaller parties in the coalition: the FDP (liberal party, 11.5%) foremost, and to some extent currently also the green party (14.8%)…
… whereas the largest party in the government coalition, the SPD (25.7%) of chancellor Scholz, is not only largely in favor of such client-side scanning (of course there are also exceptions within the party), but also the party that holds the relevant ministry (interior) and thus the participation in the EU-side negotiations.
The current coalition contract kinda forces the SPD to oppose such client side scanning at the EU level - and we'll see to what extent they keep their word or try to play foul against the contract, but there is no doubt imho that if the next government was again a "grand coalition" of SPD and CDU without the liberals to block such stuff, then such client side scanning would be waved through by the same SPD that currently is contractually bound to oppose it.
The danger of such attacks against our liberties is still very much there, and it takes a constant, watchful fight for our liberties to prevent the authoritarian statists from getting through with such stuff. They never stop trying to push through ever more of their liberticidal ideas.
> The current coalition contract kinda forces the SPD to oppose such client side scanning at the EU level - and we'll see to what extent they keep their word or try to play foul against the contract
You can't 'play foul' against coalition protocols. The moment you do, the government falls.
You can (if you're the bigger coalition partner with a big power imbalance), and in fact it's not that rare at all and it's exactly what the CDU did quite blatantly last time they were in coalition with the FDP at the federal level under Merkel. Of course, the smaller coalition party could theoretically react to such an infringement against the coalition contract by ending the coalition…
… but that would usually hurt them enormously more than their infringing bigger coalition partner, especially if the infringement of their partner against the coalition contract is about something that the majority of voters doesn't care enough about, if it's something that the bigger coalition partner can publicly spin as TINA(*)-necessary due to changed conditions or anything they can sell as a "crisis" or if the polls are currently less favorable for the smaller party than at the last elections.
Ending a coalition is something that would typically come at an extremely high price for the smaller party who'd do it. Which is why it happens much more rarely than unilateral infringements against coalition contracts by the bigger coalition partner.
Yeah, reads like clickbait that is intentionally confusing "Germany, the country" with "Germany, as represented by these six people who were heard by a parliamentary committee yesterday".
That's the same, no? They are its representatives. They act with Germany's power. It's like saying this is Trump and not the USA, when his ratification carries the power of the US. It doesn't matter to the other side. (I took Trump as an example as I often see him used as an example in such cases)
No, these are just experts explaining their opinions to a small group of MPs, but they're (a small) part of the legislative branch, not the executive (which does the negotiations at the EU level), and parliament usually votes along with the government. So whatever 10/736 members think isn't "what Germany will do". It might be, but it probably won't.
They're not the same. There is a big difference between heads of state and elected representatives, and an even bigger difference between heads of state and the rando activists/etc. that elected representatives might invite to share their point of view.
Heads of state, like the POTUS, are meant to officially embody the state itself. So if Trump while President says a thing, it is reasonably conventional to describe that as "America said..." But that isn't what happened here. In this case you don't have a head of state saying anything about the subject. The article is about various people (including "IT experts, civil libertarians, law enforcement officials and even child protectors" who aren't even elected representatives at all) giving their opinions to German Parliament. This is not a "Germany says.." situation.
It's bad enough with the amount of private data already scraped legally by websites, without sanctioning the removal of privacy.
Honestly, the "for the kids" line is BS, as we know. They say it's for the kids, even as they parade a group of well-meaning people around to raise awareness that there's a problem (IMO, honestly, double or treble the number of police or IT people around the world working to penetrate the vile pedo groups) - but instead such proposed actions are almost always for other, more powerful interests who see a fraction of the web as a major problem because of some perceived idea that they lose money to this fraction's activities.
The reality is that any group up to no good will simply migrate to a protocol that permits sending a file from a USB stick or other external source, but such a file will be encrypted with encryption unlike any previously known. Then, by the same process as the good work done presently, the kids will be saved: agencies will slowly penetrate such groups, discover the encryption and member contacts...
At the end of the day, as many here have already said, it's a slippery slope. Some people are happy to use devices where they don't really own the content, to do as they please; they put up with Google running their phone apps... When I couldn't clear my cheap Android phone's disk space of the junk which left no room for anything else, without having to do a reset, that was the point I gave up on smartphones - mine now exists only to take texts and calls and create a wifi hotspot.
As demonstrated by iOS, the technical means to effectively frustrate the installation and use of Free Software already exists. We (the tech industry) have already built the walls of our own prison. All that remains now is for politicians to herd us in and slam the gates shut.
The point is that the technical means for control have already been demonstrated by iOS, not that anybody is forced to use iOS specifically. Governments could require that Google and other manufacturers implement similar controls. The walls of this sort of prison have already been designed and shown to work, all that remains is the political will to herd people in and lock the gates.
Locked down computing devices existed well before the iPhone was a thing. Pretty much all gaming consoles are locked down. I'm fine with them existing as long as a non-locked down option is available.
And controls like that would mean death to open source OSes like Linux, because you can't develop and test an operating system on a locked down device.
You're still not getting it. iOS and the locked down game consoles that preceded it all demonstrate the technical feasibility of these controls. All that prevents it is political policy.
I didn't say iOS was the first to do it. I didn't say that you are presently forced to buy such devices. And I certainly didn't say that desktop Linux would survive the sort of totalitarianism the tech industry has invented the means to implement. You're missing the point so severely that it's hard for me to understand where my explanations could be falling short. Are you trying to get a rise out of me?
The other thing that iOS demonstrated is that not only it is technically feasible, it is also socially feasible - to the point where the majority of people willingly use such devices in some countries.
It’s cool if you can still install your open source messenger of choice, but whom are you going to message if all messengers that don’t have client side scanning enabled are banned from the official app stores?
> I really need to be able to control which software runs on MY devices
Well if you're running an Intel powered device, there is the Intel Management Engine[0], which is a below-ring-0 backdoor with unfettered access to everything. It even runs MINIX! It's not really your computer.
I hope that one day Germany will gain some influence in the European Union, to counter the influence of Great Britain and protect end-to-end encryption.
> to counter the influence of great britain and protect end-to-end encryption.
In 2016, a voting majority of the UK population decided to give up their valued influence in the EU, and we miss them dearly (not sarcasm - they were a much-needed voice for common sense). This event is commonly referred to as "Brexit" or Britain's exit from the European Union, and eventually from the European Council, which it once was a founding member of.
They never wanted to ban memes. In fact, the so-called "meme ban" and accompanying "link tax" were already approved back in 2019, and implemented in several countries, including Germany. By now it must be clear that this directive does not in fact ban memes or tax links, and that that was always an exaggerated reading by internet zealots.
I don't use Tutanota anymore (main reason: no bridge to other clients), but I'm not mad about having paid up-front for another year. Thanks for keeping this topic visible.
If the image doesn’t leave the device and only the hash does… what is stopping someone from uploading existing public images and getting a whole lot of innocent people banned?
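A toy sketch may make that worry concrete. This is a generic "average hash", not any vendor's real algorithm (PhotoDNA and NeuralHash are proprietary, and far more robust); the Pillow package and the in-memory sample images are assumptions for illustration. The structural point stands regardless: the match step only ever compares fingerprints against a list someone else curates, so nothing in the mechanism itself distinguishes genuinely abusive material from whatever ends up on that list, and collisions or list manipulation are exactly the attack surface being asked about.

    # A toy "average hash", standing in for proprietary perceptual hashes such as
    # PhotoDNA or NeuralHash (whose details are not public). Requires the
    # third-party Pillow package; the sample images below are made up.
    from PIL import Image

    def average_hash(img, size=8):
        # Shrink to size x size greyscale, then set a bit for each pixel
        # brighter than the mean: a 64-bit fingerprint of the image's layout.
        pixels = list(img.convert("L").resize((size, size)).getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        # Number of differing bits; a small distance is treated as a "match".
        return bin(a ^ b).count("1")

    # Degenerate but telling: two clearly different images, identical fingerprints.
    dark = Image.new("L", (64, 64), color=40)
    light = Image.new("L", (64, 64), color=200)
    print(hamming(average_hash(dark), average_hash(light)))  # 0 -- a "match"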
I don’t understand these laws. What if I don’t want client-side scanning? I’ll just get a Librem or PinePhone, or a Pixel 6 with GrapheneOS. How are they going to stop me? Think about it really, how are they going to stop me? The implications are pretty insane if you ask me.
It's funny that I was saying the EU was going to implement this like 10 years ago, and people were calling me a crazy conspiracy theorist, saying the EU would never do anything like that and that the EU is totally not evil.
Look at how the Overton window is moving.
Today it's a thing, nobody calls it a conspiracy theory anymore, and suddenly people no longer talk about the good EU.
Tomorrow you'll have these scanners on your device.
From then on, your life will be micromanaged by bureaucrats and you'll become a slave.
Because the ideology the EU is built upon is slavery.
I've seen at least half a dozen cases in the US of people arrested for child abuse material, and all of them turned out to be because of Google scanning their messages (not just emails). There was even a case where it was a photo sent to the child's doctor because the child had a rash; Google's algorithms flagged it, and that was enough for the police to get a warrant for ALL of the user's Google account.
Uhhh yeah, obviously I don't want my devices spying on me. If Apple had gone ahead with their ridiculous plan I would have dropped them, if I hadn't done so already the year before (I went from macOS to FreeBSD for more control, and I had dropped iOS years before that).
I was a child before the internet and was used for pedo hunts before the internet, usually having to try on underwear in the UK store Littlewoods. The number of adults that attracted, who struck up conversations with my state-employed masonic parents, was quite astounding!
Ergo, I think parents should be put under the spotlight!
When you consider innuendo and not just the initial message, you'll find a lot of communication takes place this way, but innuendo isn't taught to most people, so they are oblivious to what's really going on around them, and it's really quite disturbing.
Throw in a variety of drugs which can make people forget things or put them into a chemical trance, if it's not hypnosis, and people of all ages can be manipulated into actions they wouldn't otherwise take. At what age can you start hypnotising kids? Some of these drugs are found on pharmacy and supermarket shelves with no checks if you pay cash.
This is why I say people need to have 24/7 unhackable surveillance on them at all times, in order to prove what's been done, as victims, especially those who were drugged, won't know or realise what's been done to them, sometimes for decades, if at all.
Drugs/chemicals have been used to hack people for millennia.
And some parents just see their kids as cash cows; after all, the mindset used to be to have a big family so they could look after you when you got too old, and this was before the socialist elements of the state in today's sense introduced things like state pensions and benefit payments.
That's why I say the state is virtue signalling when it claims to be protecting kids but doesn't teach kids how to protect themselves, or teach them the law so they know which activities are criminal. This isn't anything new either; it's been going on for thousands of years, but history gets sanitised under the pretence of not giving anyone any ideas.
Will CSAM be killed by AI art? Hard to believe that producing it the conventional way can be economical when you can make an almost identical product without risking serious prison time (or any at all in some jurisdictions).
"Controlling the output of generative AI technology is simple. We will create context for its use. First we will censor any use related to social taboos. Then we will censor anything else that we desire. If anyone complains, we will accuse them of wanting to engage in or promote social taboos."
For once, we're blocking the right thing. Good that the CDU/CSU is no longer in charge of the Interior Ministry, but still Nancy Faeser (SPD) is barely better than the Conservatives.
You might be able to handwave some things away in politics. The politicians are either too old, too lazy, etc. They're just politicians trying to find a nice box to put everything into, because otherwise you can't make laws. It's the fundamental problem with legislators who take a salary and are not volunteers for a short period: when you need people to justify their pay, they start finding heuristics, no matter how awful, to create more laws.
The problem is that the precedent, globally, of killing encryption is well documented. There is no good solution that doesn't harm everyone. Here in the States, the Clipper Chip [0] was the textbook example of politicians trying to legislate mathematics. You wouldn't even be able to do something like "give us a copy of your private keys", because then you'd go down the path of playing whack-a-mole with every distribution, every slightly recompiled GnuPG, etc. It's an intractable problem. We, in the US, would've gone a long way by stripping Dorothy Denning's CS PhD from her [1] after her outspoken support of such measures. Instead she has received many awards for her "work" in the field of rights erosion.
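To make the whack-a-mole point concrete: strong authenticated encryption is a handful of calls to freely available libraries that anyone can recompile or reimplement, so an escrow or "hand over the keys" mandate only binds the compliant. A minimal sketch, assuming the third-party Python `cryptography` package; the message is of course made up:

    # This much code, using a freely available library, already gives you
    # authenticated symmetric encryption with a key only the parties hold.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # never leaves the people communicating
    box = Fernet(key)

    token = box.encrypt(b"meet at the usual place")   # ciphertext to send
    assert box.decrypt(token) == b"meet at the usual place"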
The US seems to have settled on making attempts at Clipper 2.0 every decade or so. In the meantime, encryption is legally considered a weapon, which is how the DAs get their fill. Germany appears to have flat-out opposed it... but it's only a matter of time. The EU will force them to bend the knee, because historically they always have. It's a fantastic effort. Unfortunately, it's being made by one of the biggest pushovers in Europe.
There's no hope for the technical among us. The people with power who do understand, the technocrats, are behind these efforts. The people who don't understand are behind these efforts. It's only the intractability of the problem that makes legislating it dangerous. Once someone clever enough makes it tractable, there won't be encryption anymore. Pre-crime is the way the world has worked since 9/11, and encryption is #0 on the list of things to legislate to death. In the US, there are likely hundreds of billions of taxpayer dollars being spent to store every last bit of communication in Utah for this eventuality. The EU has a similar program. Those tax dollars have to be justified somehow. So when you ask "who would support this"... just follow the money.
I find it unlikely that taking away someone's Ph.D. would accomplish anything positive.
How do you envision this would work in general - an angry Twitter mob demands that academic degrees are revoked, and when the mob gets sufficiently large and angry, the university who awarded the degree buckles under the pressure?
If not a Twitter mob, then who makes these decisions? The Central Committee of the Party? The Committee for the Promotion of Virtue and the Prevention of Vice?
I'd imagine it would be similar to how the (former) doctor who kicked off the anti-vaccine thing was struck off the medical register, which involved a whole board of his peers reviewing his claims and actions and determining that he caused irredeemable harm. The problem in this case is that CS is such a new field that we don't really have boards that will scrutinize to that extent in an academic context, at least as far as I know.
The ACM has a strict code of conduct. If an engineer commits an atrocious error their PE will be stripped. Violating the computing rights of literally the entire planet should be similarly egregious.
It is not Twitter mobs. It's about holding people to a standard and not allowing them to corrupt the meaning of computing for financial, or tyrannical, gain. In recent history we have done almost nothing to hold anyone accountable for their actions, with academia being the most impervious to such punishments.
The ACM and ABET would make the decision. The same people who issue the accreditations to the schools that award the degrees. Yes, these organizations are generally spineless cowards, but in a perfect world it would be them. Iron-fisted responses to tyrants are the only way you can ensure the purity of a field and freedom from their destruction. I assume you will take this to its natural conclusion and say any CS degree holder working for the NSA/Military/FBI/etc. should also be similarly stripped of their title. To that I say: yes, if they are wilfully violating the computing rights of others, we as a society cannot allow such people to hold the credential. Otherwise a code of conduct is simply a list of suggestions, in which case it should not exist at all.
I cannot find information on Wikipedia about Denning's PhD being stripped away from her. She's listed at Purdue as having one. Where can I read about this alleged stripping of her PhD?
> Hartmann either does not understand encryption, or client-based content scanning. There's a good chance Hartmann doesn't understand either :-(
I think that is debatable and depends on your viewpoint. If you only look at the technical process of end-to-end encryption, it is indeed not weakened by client-side scanning, since the scanning happens prior to encryption. If you, however, look at it from the perspective of the use case (sending information privately, without information leakage), it is weakened. Client-side scanning only makes sense if, in the case of a match, some authority is informed. That is by definition information leakage. At first glance it looks like an OK compromise in the case of CSAM, but once the technology is in place it can only go downhill from there, and the next step is usually terrorism, followed by capital crimes. The latter two categories can be abused, depending on definitions which differ heavily between European countries. Also, if the technology is forced into place by the European market, there is very little stopping other, less liberal countries from using the same technology against whatever they don't like. So it's the usual slippery slope argument, with the additional caveat that a lot of child protection agencies are not convinced it would make an important difference and think the resources should be allocated elsewhere.
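A minimal sketch of the flow described above, purely to show where the leak sits relative to the encryption. Every name is an illustrative stand-in: SHA-256 substitutes for a real perceptual hash, BLOCKLIST for the centrally supplied match list, and e2e_encrypt()/transmit() for the real messaging layer.

    # Illustrative ordering only, not any real scanner's implementation.
    import hashlib

    # Hypothetical fingerprints supplied by some central authority.
    BLOCKLIST = {hashlib.sha256(b"known flagged image bytes").hexdigest()}

    def e2e_encrypt(data: bytes) -> bytes:
        return data[::-1]  # placeholder for the real end-to-end encryption

    def transmit(ciphertext: bytes):
        print("sending", len(ciphertext), "encrypted bytes")

    def send_image(image: bytes):
        fingerprint = hashlib.sha256(image).hexdigest()
        if fingerprint in BLOCKLIST:
            # The leakage: a third party learns about the content before
            # end-to-end encryption is ever applied.
            print("reporting fingerprint", fingerprint[:12], "to an authority")
        transmit(e2e_encrypt(image))  # E2EE only covers what happens after the scan

    send_image(b"holiday photo bytes")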
False positives / false negatives are important to note. It is likely that they're saying that out of 100,000 scanned files, 10,000 to 20,000 will not include child sexual abuse.
The problem is that for it to actually work you will need to take control away from the users over all their computing devices. Otherwise they can simply circumvent it.
It would mean a model as closed as iOS, but for all mobile and desktop platforms.
I will personally never let that happen. This is way too draconian a measure to solve a problem that will not go away anyway. The predators will just go offline again or find workarounds, while it will be severely restricting all citizens in their computing freedom.
You would have to look at it from the perspective of the base rate (given how low the incidence is) to properly understand what that means for the wider population: https://en.wikipedia.org/wiki/Base_rate_fallacy
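A back-of-the-envelope version of that point, with openly assumed numbers: the roughly 10% false positive figure mentioned above, an assumed 99% detection rate, and an assumed prevalence of one genuinely abusive item per 100,000 scanned.

    # Bayes' theorem with assumed numbers; only the ~10% false positive rate
    # comes from the discussion above, the rest are illustrative assumptions.
    prevalence = 1 / 100_000   # assumed share of scanned items that are abusive
    detection_rate = 0.99      # assumed P(flagged | abusive)
    false_positive = 0.10      # P(flagged | innocent), figure discussed above

    p_flagged = detection_rate * prevalence + false_positive * (1 - prevalence)
    p_abusive_given_flag = detection_rate * prevalence / p_flagged

    print(f"{p_abusive_given_flag:.4%}")   # about 0.01%

Under those assumptions, roughly one flag in ten thousand points at actual abuse material; essentially everyone investigated would be innocent, which is the base rate fallacy the link above describes.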
If every agency and government with access to this technology could be trusted to only use it for CSAM it would be the easiest no-brainer to approve and turn on immediately.
But they can't. It can easily be used to target people for political and social reasons just as easily, and once that Pandora's Box is opened it can never be closed.
For the most topical example, imagine it being used in a conservative US State to target images of people not conforming to the gender they are expected to.
Even if the government exercised restraint and only used this technology in the manner they presently advocate for, a 10% false positive rate, presumably each resulting in an invasive investigation, is way too high.
> On 2 December 2015, at the Theresienbad swimming pool in the Austrian capital Vienna, a 10-year-old boy was raped.
> The perpetrator, ..., claimed that he was motivated by not having sex for four months
> In October 2016, the Austrian Supreme Court overturned the man's conviction of rape, ordering a retrial, while upholding his second charge of aggravated sexual assault of a minor. The rationale was that the prosecution had not provided evidence that the man did not know that his victim did not consent
> In May 2017, judge Thomas Philipp reduced the sentence to four years in a final decision by the Supreme Court, saying that the rape was a "one-off incident" and "you cannot lose your sense of proportion here"
This is shocking and disgusting, but it should not be taken as a motive to spy on everyone. Spying on everyone implies a general presumption of guilt covering the whole population, which is unlawful in most jurisdictions (proportionality) and also unconstitutional in many (e.g. Germany).
Let's just say the EU is not very family-friendly compared to individual member states. I do not think this discussion belongs on HN; I am just pointing out the BS in this law.
No. It is an agenda imposed on all EU citizens without direct representation.
It goes top-down instead of bottom-up, unfortunately, and that is why there are so many fights. It is also increasingly invasive, even of the sovereignty of individual countries' constitutions.