Europol sought unlimited data access in online child sexual abuse regulation (balkaninsight.com)
217 points by t0bia_s on Sept 30, 2023 | 197 comments



It’s naive to think this is about child abuse. That’s just there to frame the discussion. Europol is not going to stop at analyzing images for CSAM - it’s not even the real goal. The goal is complete government oversight.

And no, I am not into conspiracy theories. It’s the same as with attempts at “breakable encryption”: it’s offered up under the guise of fighting child abuse/terrorism/<insert-terrible-thing-everybody-has-to-agree-is-terrible>, but the goal is to increase government access to all communications between civilians.


Behind the proposal are several Big Tech companies with a lot of money, seeking lucrative government contracts. One of them is Ashton Kutcher's Thorn, which seems to have a very good rapport with EU Commissioner Ylva Johansson.

https://www-svd-se.translate.goog/a/ona477/kandisbolaget-tho...

https://www.lemonde.fr/en/les-decodeurs/article/2023/09/26/t...


https://cdn.netzpolitik.org/wp-upload/2023/09/2023-09-28_LIB...

"...Those aforementioned media reports point to alleged close working relationships between the European Commission and a broad network of tech companies, foundations, security agencies and PR agencies, including Thorn and WeProtect Global Alliance, indicating possible undue influence in the drafting of the proposal. Of particular concern are the allegations that the solutions laid down in the legislative proposal to fight CSAM supposedly replicate the solutions designed by those groups, contributing thereby to furthering their economic interests..."

This is, of course, the same Ashton Kutcher who recently wrote a letter advocating leniency for a Scientology member sentenced to 30 years in prison for rape.



This is the same Ashton Kutcher who was a part of Thorn before he wrote a letter advocating leniency.

Reducing him to a single act would only make sense if he were consistent in bad judgement.


When you advocate for a person who is at the very least a proven double rapist (with countless other allegations that haven't yet been seen by a court of law), describing him as a good person who treats people with "decency, equality, and generosity", it justifiably raises serious questions about the sincerity of your anti-abuse advocacy and who you presented yourself as.

It was a calculated decision, and it is reasonable to think he had enough information from lawyers and friends to know how it would be received and how it would look. In spite of this, he still decided to go to bat for his convicted double rapist friend - a man who, beyond his crimes, had his connections in Scientology ruin the lives of many women brave enough to speak up against him.

It's not just the single act, it's that it was an act that he had a lot of time to think through and fully understand. It was an act which the years of his advocacy should have given him an understanding of the damage these types of men do to women and girls when they're free. That's what makes it so egregious for me.

Advocating for a convicted double rapist isn't a crime of passion, a poor choice of words, or a drunken fight.


From my understanding, he didn’t know the letter would be made public. Which I find to be even more insidious.



The thing that gets me is that "paedophile" is an easy bogeyman that lets us point fingers, because literally everyone (no offense, ace folk) has at one time been attracted to minors. And we are embedded in a patriarchal society that puts an emphasis on youthful femininity as a commodity. So pushing the "paedo" button is the same thing that cults do: leverage shame to gain authority.


I'm not even sure complete oversight is the goal. All of the individuals involved might even have the purest of intentions.

But time and time again the outcome of such actions (when left un-countered) does seem to end up with more spying on civilians.


Even the purest of intentions can't, by themselves, justify any particular decision or action. The decision or action itself must be evaluated on its own merits. Otherwise you find yourself excusing everybody from abusive partners to dictators (which we see everywhere, unfortunately).


The persistence in pushing this through regardless of all the push-back suggests that there is no "pure" intention out there. Zero.


> But time and time again the outcome of such actions (when left un-countered) does seem to end up with more spying on civilians.

A good example of this in the UK is the Regulation of Investigatory Powers Act [0], which governs anti-terrorist surveillance and investigation, but which ended up, amongst other things, being used by local councils to track whether children lived in particular school catchment areas, to track fly-tippers, etc.

[0] https://en.wikipedia.org/wiki/Regulation_of_Investigatory_Po...


It's the same with benevolent aristocracy. I'm sure there have been good rulers in the past who considered it their duty to serve the interests of the people and be benevolent; however, 80-90% (a conservative estimate) are not. They see themselves as the pinnacle of humanity: "why not? Why shouldn't I do whatever I want? The peasants are wrong!" It's the same with all positions of power, and it's why there have to be counterbalances like courts and laws to protect privacy.


Complete oversight is a pure intention to some people.


Some if not most evils in history started with someone truly believing it's the path to righteousness. And good or bad, the larger one's reach, the more likely they truly believe they are a savior, the hero, the solution.


You are so conspiratorial! I just can't believe that anyone would want total control of everyone else's data, while remaining entirely anonymous and unaccountable themselves! /s

I actually think that it's coming to the point that the gains of technology, even large bits such as 'the internet' or mobile phones, are so inhuman that it's not worth it. I think modern tech can easily be characterised as almost entirely subsumed by governmental and corporate control. If we thought giving banks licenses to print money was bad, giving these nefarious entities control over everyone's data will be far worse.


I am all for privacy and so on; however, you have to admit that with current technology alone it's basically impossible to track or prevent certain types of crimes - and it's becoming more and more difficult. I am decently sure that terror attacks are prevented on a monthly/yearly basis - luckily. That doesn't make the news; when something does make the news, it's too late.

Child pornography is another beast that's difficult to solve - again, I am quite sure that they are able to find groups etc. more frequently than we think. When they don't, I don't even want to think about it.

What tools do current investigators have? If you were a detective investigating a case, what would you do or use?

Basically everything useful is either 1) illegal, or 2) technically almost impossible (e.g., decrypting HTTPS, which would still take years to brute force). You could try to ask for favors left and right to get a search warrant or install spying devices and "fix" (1), but how the hell do you break (2)?

Communication is happening online and a "military grade encrypted channel" has basically become the standard way of communicating with each other. The few who still use outdated clients or protocols - shame on them.

So, yeah, I really do understand that governments want access to data for their own nasty purposes; however, what can we do as a society to make it 1) difficult for the average Joe to use military grade encryption for criminal ends, and 2) easier for police to find criminals?

I know a knife can be used to cut bread or to kill someone; however, we're talking here about groups of hundreds of people sharing child pornography - that's not a knife, it's a freaking weapon of mass destruction in the hands of people who shouldn't ever, ever have access to one.

No, I don't want the Chinese way, but what can we do to avoid getting there?

Can you think of something?


Using the encryption argument:

Encryption is a solved problem. Easy-to-implement solutions are available everywhere. (Yes, post-quantum cryptography is not yet solved; that's irrelevant for the current discussion.)

That means that even if it were technically possible to create "legally-breakable-encryption", criminals would simply use real encryption and law abiding citizens would be stuck with broken "encryption" and be vulnerable to government surveillance... and to said criminals. I.e.: there are only downsides. And government experts know this just as well as anyone. Which makes their agenda very clear.

More generally, imho, the benefits have to outweigh the costs: even if breaking all encryption offered a small benefit in fighting bad guys, that still would not outweigh the cost of a total loss of privacy (think China) for everyone.
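
To make the "solved problem" point concrete, here is a minimal sketch of authenticated symmetric encryption, assuming Python and the widely available third-party `cryptography` package (any number of equivalent libraries would do):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()         # 32 bytes of key material, one call to create
    f = Fernet(key)

    token = f.encrypt(b"meet at noon")  # AES-CBC plus an HMAC integrity check under the hood
    print(f.decrypt(token))             # b'meet at noon'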


Talk to children respectfully.


How does it help in this case? I don't understand...


Very well said, it’s unfortunate that they can simply pick this as a pressure point and the public might accept it.


Same for digital ID. Or digital currency. Soon there will be lobbying for removing cash - of course, because it is used by criminals and cannot be tracked. Who cares that you need an electronic device and a connection to make any digital transaction. Or who cares that it is absolute surveillance by the state. But it is for our safety, of course. You need to sacrifice your freedom for the safety of others!


Do we actually have any proof of this?

I see this argument often, it's basically taken as an obvious truth in tech circles at this point, but I've seen very little evidence of that happening so far.

To be clear, I think these anti-CSAM laws are pretty terrible, but I wouldn't attribute malice where simple incompetence suffices, as the saying goes.


Why not both? Incompetence AND lust for power. It's evident all over humanity, as far back as we have recorded history. You can see it in everyone from an assistant manager at your local convenience mart all the way up to the presidency and prime ministership. Every bit of liberty you cede to these jokers is one you likely won't get back. I don't want that for my children or descendants.



Stop using Hanlon's Razor. The only people it cuts are the people who use it. Stupidity is weaponized so as to become indistinguishable from malice.


[flagged]


> Do we actually have any proof of this?

It's entirely possible that the people currently advocating, and the current would-be users will only target our current definition of "bad people".

On a long enough time horizon, every power will be abused. The "bad people" will be redefined, or the ends will justify the means.


Let's not pretend the socialist side was any better:

The Soviet Union was on the extreme side, as is China.


Apple’s ill-thought-out on-device CSAM detection, and thankfully its removal from Apple’s plans after heavy pushback, comes to mind.

Any time a specific ability is granted or is available to law enforcement, it will expand to broaden surveillance on everything. CSAM is horrific, despicable and devastating. But the way law enforcement treats it as an easy “in” to push for more surveillance and more powers for itself is shameful.


It’s not shameful that people try to grab power to stop evil; it’s natural. What’s shameful is that we don’t have more political checks against these types of power grabs, and that citizens who could stand up and fight against abusive surveillance are increasingly apathetic.


Mostly because they can go for those power grabs again and again, ad nauseam. After a while the populace is just bored of fighting, and only a few people care.


The problem is that we have to complain loudly every goddamn time - get the media involved, protest, write articles, etc. - to get them to pull back.

They only have to succeed once, and the law is written and stamped.


I know, sigh. Which is exactly why we need more structural defense against this behavior. I think it’s about time for a modern day constitutional convention.


People who prefer liberty over tyranny have to win almost every time; it only takes one failure to lose liberty for a hundred (two hundred?) years.


The statement is that it is shameful to grab for powers that invade the privacy of law abiding citizens.


The quandary is that powers that "invade the privacy of law abiding citizens" are also powers "to stop evil."

Pretending they aren't is part of the problem, as it empowers those who would push them to publicly advertise the latter good in a vacuum of silence from the tech side.

Something like 'Personal privacy is more important than maximizing law enforcement efficiency, including against CSAM' is a more honest, complete position.


> to stop evil

It's not at all obvious whether stopping CSAM is really the primary goal of some of the people who are pushing these regulations. It seems just like a justification to invade personal privacy.


You are speculating about intents some other people might have.

Let's take the issue in a vacuum first. Either you hold your privacy as more important than X, or you are willing to compromise some of your privacy in the name of X - there's no third option. E.g., in the case of airport security or CCTV in public spaces, suddenly everyone is OK with compromising personal privacy in the name of personal life and safety.

Now finally let's get back to those other people whom you suspect of having an agenda to surveil everybody. If they honestly tried to combat child abuse, how do you see them going about it?


"If they honestly tried to combat child abuse, how do you see them going about it?"

Detective work, stakeouts, researching who makes this stuff, convicting the actual producers, convicting people who do direct abuse... In general, taking real steps instead of reading everyone's diary and then doing nothing.


> Detective work, stakeouts

So, physical surveillance. Idk if you are aware, but this means physical surveillance on everyone, because with Tor you can't narrow this stuff down geographically. Would you rather be physically surveilled?

> researching who makes this stuff, convicting the actual producers, convicting people who do direct abuse

First, that already happened back when they used to expose identifying details. Those days are over.

Second, more importantly, what you described does nothing about resellers, aka the people who keep the abuse economy running and make money from it.

And please. Hash matching is not diary reading.

And on the off chance my diary happens to have a 1:1 collision with a known CP video, I would not mind someone being able to look at it if it meant they could also look at the actual thing and identify the reseller/perpetrator. How can you think differently?
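
For what it's worth, the shape of hash matching is easy to show. A minimal sketch using exact SHA-256 hashes (real systems such as PhotoDNA or Apple's NeuralHash use perceptual hashes so near-duplicates also match, but the lookup has the same shape; the hash below is an illustrative placeholder, not a real database entry):

    import hashlib

    # Hypothetical database of hashes of known illegal material (placeholder value).
    known_hashes = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def flag_if_known(file_bytes: bytes) -> bool:
        # The scanner learns only whether the hash is in the set - it never "reads" the file.
        return hashlib.sha256(file_bytes).hexdigest() in known_hashes

    print(flag_if_known(b"my diary entry"))  # False: a diary will not match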


> Idk if you are aware, but this means physical surveillance on everyone, because with Tor you can't narrow this stuff down geographically. Would you rather be physically surveilled?

Presumably the content is actually produced at some specific physical location.

> Hash matching is not diary reading.

The article is not talking about hash matching, though. Quote from one of the Europol officials:

“All data is useful and should be passed on to law enforcement, there should be no filtering by the [EU] Centre because even an innocent image might contain information that could at some point be useful to law enforcement,”


> Presumably the content is actually produced at some specific physical location

Yeah, and how do you find that location? If you are opposed to any measure that compromises your digital privacy, physical surveillance is the only way left.

> The article is not talking about hash matching, though

Sure, in the context of this subthread you are correct. But remember when Apple tried to do it with hash matching? They published a white paper detailing their algorithm. Remember how everyone here instantly whined about total surveillance? That was just last year. The sentiment is always the same: "my privacy may not be compromised, even if it concerns the safety of helpless victims whom I don't care about".


> in the case of airport security

Not everyone is OK – lots of people argue it's security theatre.


Let's see if you think it's a security theatre next time you fly from Jordan to Israel. Or actually anywhere within the US, where gun carry is allowed...


Some of it is theatre, but screening people to make sure they don't have a bomb is not theatre - people will do this, as has been shown.


It is a bit weird in that screening started only after bombings... A useful metric would be to see how many bombings/hijackings were thwarted (which I would guess is many).


No, screening started after hijackings in the 1970s.


> in the case of airport security or CCTV in public spaces, suddenly everyone is OK with compromising personal privacy in the name of personal life and safety.

These are not really comparable. Even without CCTV you can't really expect that no one will observe you in public areas (it's just that the cost of doing so would be significantly higher).

Also, it's something you have much more control over, and it's significantly less intrusive than monitoring personal communication. An equivalent would be the government opening and reading every single letter you sent or received back in the days when people still sent them (or having the option to, which to be fair is something they probably had, though it was prohibitively expensive to do at scale). That is not something most people living in free societies found acceptable.

> If they honestly tried to combat child abuse, how do you see them going about it?

By actually directly targeting it, as the other comment describes? Instead of using "think of the children!" as a veil to justify unlimited government surveillance.

> You are speculating about intents some other people might have.

Yes. Are you implying there is something fundamentally wrong with that? Do you always accept everything politicians say at face value? If so, perhaps you're in the market for a bridge?


> not comparable

Airport security literally checks the inside of your body (if they want to) through X-ray or other means. How do you consider this not comparable in privacy invasiveness?

> By actually directly targeting it as the other comment describes

Please give your own take. That comment didn't contribute anything useful.

> Are you implying there is something fundamentally wrong with that?

I can't believe this is a question. You realize you are putting your own thoughts in another person's head?


> How do you consider this not comparable in privacy invasiveness?

How is that comparable to having access to someone's personal communication? What's so particularly private about the 'inside' of anyone's body? Physically checking the outside seems much more invasive. But yeah, overall I agree that compromises can and should be made in certain cases when the potential harm to society might outweigh certain individual rights (I don't see how that might be the case in this situation).

> Please give your own take. That comment didn't contribute anything useful.

I don't agree and to be fair more or less the same can be said about your previous comment.

> You realize you are putting your own thoughts in another person's head?

No. I'm trying to infer what thoughts might exist in another person's head when they do or say certain things. I don't really understand what you are implying (that we should assume no politicians have any hidden agendas and that they are all perfectly honest?).


> What's so particularly private about the 'inside' of anyone's body

Seriously? If your body is not private to you, then what's so particularly private about your communication?

> I don't agree

That's not an answer to "how would they go about it if their goal was to actually combat child abuse, as opposed to some conspiracy to surveil that you imagine"

> I'm trying to infer what thoughts might exist in another person's head when they do or say certain things

Exactly. It is what you think they think, not what they think, and that says more about your mind than theirs.


It's absolutely shameful. These people are not naive children. They know exactly the ramifications of the tradeoff, which indicates that they are not doing so in good faith, but rather at the behest of vested interests.


Or they're mentally unsound of some variety.


Pedantic, but I would still say it's shameful, although expected. Like it is shameful to have your car stolen if you leave it unlocked with the keys in clear view on the driver's seat, although it is expected.


> try to grab power to stop evil; it’s natural

Those who seek this kind of power over others are themselves EVIL by definition


Also, that would have set the precedent for the police to use Apple as a proxy to put software on your phone/computer to monitor everything you do and scan for them, while they lean back in their office chairs and wait for a hit. The same would have been extended to any and all government bureaucracy. No warrant, no innocent until proven guilty; guilty until proven innocent is their credo.


Apple announcing that stupid CSAM scanning started all of this; they proved it's possible.

An absolutely stupid and disastrous move.


> Apple announcing that stupid CSAM scanning started all of this

Except all of the cloud providers are already scanning uploaded photos for CSAM and have been for years. Trying to blame Apple for this is insane.


I’m particularly tired of this point. Tech companies did a lot of unsound things in the pre-mass adoption days — for example, letting admins read stored content without any access controls. We don’t claim these things are standard or desirable just because they once happened. Moreover, these things rarely entered public awareness.

In the very early days of cloud computing (AKA the 2010s) when “upload to cloud” typically involved clicking a button and sending a photo to a server, a subset of cloud companies began scanning photos for CSAM content. Many of these companies exclusively scanned shared content rather than unshared repositories, because the purported goal was to stop distribution (allegedly Dropbox did this, recognizing that “upload” was an automated feature of their system and “share” represented a user choice.) A few companies were blurry on the distinction and just scanned everything, perhaps because it was technically easier.

What’s important is that this was never widely advertised to users, nor was there ever any sort of public debate about whether it should be SOP, particularly for “uploads” produced by default-on cloud backup software like iCloud. When people say “Apple started this”, what they mean is that the first real instance of widespread public debate around this feature I know of was in 2021, when Apple very publicly announced their plans, and the feedback from customers was apparently so negative that they abandoned the idea. Moreover, Apple “started this” in a second sense of the term: they developed the first system capable of scanning end-to-end encrypted photos by conducting the scanning on-device, thus providing a technology demonstrator for the ideas in the new EU regulation.


This isn't comparable.

Apple was going to scan local photos, not cloud stored ones.

No matter the source, no matter the app.

And the phone will report you to the police if the algorithm marks any local photos as matching ones reported by the police, using a "neural hash" which has a non-zero amount of hash collisions.

https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...


> Apple was going to scan local photos, not cloud stored ones.

At the point of upload to the cloud service (where they would be scanned anyway).

> if the algorithm marks any local photos

No, it required a threshold of N photos to match before they were submitted for human verification.

> which has a non-zero amount of hash collisions

Hence the threshold and human verification step.
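
A rough sketch of that threshold logic (names and structure are mine; Apple's published design additionally wrapped matches in threshold secret sharing, so the server could not even count matches below the threshold):

    # Reportedly around 30 in Apple's published design.
    THRESHOLD = 30

    def needs_human_review(upload_hashes: list[str], known_csam_hashes: set[str]) -> bool:
        """True only once enough uploads match to justify human verification."""
        matches = sum(1 for h in upload_hashes if h in known_csam_hashes)
        # A single collision (false positive) never triggers review on its own.
        return matches >= THRESHOLD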


So, imagine your new house having cameras and microphones all over the place, that you cannot turn off, recording 24/7, but "only locally". If there's screaming, could be TV, could be just an argument, could be rough sex, drama practice, or maybe even violence and murder, it will mark those recordings and after a few repeats it'll send them to a person to look at your private recordings to see if it's just some bdsm play or if you're murdering your wife. Oh, the police officer looked at the video and it was just bdsm? Ok, continue until the next threshold.


> If there's screaming

... that matches a specific fingerprint.

But also this is a flawed analogy because the scanning is not 24/7, only when you are uploading to iCloud. It's more like "letting people in for dinner and them seeing blood splatter on your walls; after a few visits with different blood splatters, they might well suggest that someone have a look and check it's not just an accident-prone haemophiliac living there."


I'm an artist with unconventional religious convictions you insensitive clod!

You're missing the point though. Broadband sensors acting on another's behalf are the problem, because all that'll happen is that more and more liberties will be taken with the concept of ownership/post-purchase monetization, and then god knows who is watching what. Hell, it's one security exploit away from becoming a home invader's wet dream.


> At the point of upload to the cloud service (where they would be scanned anyway).

So scan them there? Why should the phone scan local photos? And iCloud is enabled by default - guess who's going to disable it if that had been implemented?

> No, it required a threshold of N photos to match before they were submitted for human verification.

Yay, private photos leaking to company employees because of a flawed algorithm, makes perfect sense.


They can't scan them in the cloud because, unlike other cloud storage services, the data is encrypted before leaving the device and they don't have access to what they are storing. They still don't want to host bad stuff though so they tried to come up with a way to still scan somewhere while not making the encryption in the cloud useless for everyone.


Possibly your understanding of the motive is correct. But your understanding of iCloud security is not. Apple did not offer end to end encryption of photos until after. And it is not the default now.


> Why should the phone scan local photos?

To avoid doing it in the cloud? Then you can turn on end-to-end encryption on uploaded photos.

> private photos leaking to company employees

Where N of them have matched known CSAM hashes at the point of being uploaded to iCloud, they will be presented for human verification, yes. How is this worse than the photos being scanned in iCloud and being flagged for similar verification?


You can turn on end to end encryption without scanning. Apple did. And Apple's modified key escrow ruled out end to end encryption. End to end means end to end. Not end to back door.

Known CSAM hashes is incorrect. The sources of the hashes are known to contain false positives. And true positives are not limited to depictions of sexual abuse.


Because iCloud is a cloud service; this is my phone scanning my photos.


> Apple’s ill-thought-out on-device CSAM detection, and thankfully its removal from Apple’s plans after heavy pushback, comes to mind.

Calling Apple's CSAM detection "ill thought out" seems to me to be letting the perfect be the enemy of the good.

Remember when everyone was up in arms about W3C standardizing DRM? Yes, the world would be better without DRM. But the DRM exists and will continue to exist. Arguing for not standardizing DRM isn't arguing for DRM to not exist, it's arguing for it to not be standardized. The encrypted media extensions let me watch DRM-protected content on Linux instead of being locked out of it all because none of the proprietary DRM everyone's using works on anything but Windows/OSX. Linux and Firefox and a DRM blob so I can watch Netflix is better than being forced to use Windows 10 with all its telemetry, Microsoft proprietary DRM, and a browser from Google or Microsoft to watch things.

Yes. It would be better if the government stayed the hell away from my files. Apple not implementing their CSAM scanning didn't get rid of the underlying issue or argument. It just means that it's no longer the tech industry setting the standard for how this will work--it's going to be the government.

Apple's implementation[0] was about as privacy preserving as we can hope for. It only scanned media that was about to be uploaded to their cloud service. It did the scanning on device and used crypto to ensure, mathematically, that they couldn't even access the _hash_ of matching images until enough images matched to cross a threshold. They had client devices feed in fake matches to obscure even the number of potential matches that occur before reaching the threshold. At no point in any of this is it possible for them to retrieve even the hash of an image that does not match, even after you've passed the threshold.

Would it be better if none of this happened at all? Sure. Is this a _fuck_ of a lot better than what we're seeing Europol pushing for? Absolutely.

Apple cancelling their scanning was a short term win. This isn't a new problem, and this isn't one that's going away. We can throw all the technical solutions we want at it, but it's not a technical problem. ("Sorry, can't scan user's content it's all end-to-end encrypted!". "Don't care. You wrote the encryption. Work around it.". "It's impossible!". "Okay, enjoy your new regulation that all encryption has to have a backdoor HTH HAND.")

I'd rather lose the battle and win the war. Let's put the most privacy-preserving CSAM scanning we can in place and take that card out of play. Let the regulators come out and try and explain how "Well yeah, you're scanning every image for CSAM but, uh, it's not enough. We do really need to see _all_ the images people have on their phones!". We're not making an argument, we're drawing a hard line and standing in place. The Europols of the world are not going to stop pushing. Apple's big, but not "override the EU" big.

When push comes to shove, we will lose. The EU _will_ respond with regulations. Maybe not now, but it should be obvious the way the winds are shifting. I'd bet my left testicle their vision for this is much more onerous than what Apple was proposing.

But hey, at least we can tell our children (away from our phones or any other electronics, and probably standing somewhere deep in the woods) that we were proudly defiant to the end.

[0]: https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


The question being debated right now is not “which precise scanning technology should we use,” but rather: “should we scan private photos and messages for CSAM and other illicit content.” By proposing a scanning system that scanned user-private, unshared photos Apple announced that they felt the answer was “yes.” Everything else is a technical detail.

And to be clear, once you’ve established the capability and the principle of the thing, the technical details will not remain static. The EU regulation already requires scanning for novel CSAM content and “grooming conversations,” because the people proposing this tech think hash-based photo scanning is insufficient. Having conceded the need to scan users’ private data Apple would have found itself mired in a long-term losing argument about specific technologies, one that the public wouldn’t understand or care about. And the other side would have the force of law behind them.

What precisely was Apple’s plan to maintain this “balance”, then? Refuse to obey the law? Leave Europe? To paraphrase an apocryphal Winston Churchill quote: there is one point at which you can defend your stance on principle; once you abandon that, everything else is just haggling over price.


> Remember when everyone was up in arms about W3C standardizing DRM?

I'm really surprised how well that actually turned out.

Whatever you say about DRM (and I'm no fan of it myself), at least W3C's version is as open as it can be. You're prevented from making a copy of whatever you watch (which, to be entirely honest, is a very reasonable precaution in the age of streaming), but you can still write browser extensions[1] or even custom Electron apps[2] that interact with the video element in other ways. If you want automatic skipping of intros, changing playback rate where such functionality isn't supported, access to community subtitles or subtitle-related tools useful when learning foreign languages[3], automatic subtitle reading (with a synthetic voice) or even a completely custom Multi View interface, it's all possible. You can even do synchronized playback across multiple users[4], as long as all of them are authorized to play the relevant media. You could have achieved none of this if you had to use a Flash-based player with no programmatic access to its state whatsoever. It's the best compromise we could have hoped for.

[1] https://chrome.google.com/webstore/detail/netflix-extended/g... [2] https://multiviewer.app/ [3] https://chrome.google.com/webstore/detail/netflix-dual-subti... [4] https://www.teleparty.com/


Is it still a slippery slope fallacy when someone at the bottom of the first slope argues for further slopes?


That's exactly it. We know Apple plays by the rules. EU demands it and they make all phones USB-C, CCP demands it and they host iCloud stuff in PRC. If they are required to give out data to fight CSAM, they will comply. Hopefully Apple would try to not make it "free for all" but whatever they do it will probably be hidden behind relevant regulations, as opposed to a solution they tried to push.


It's very naive to think that Apple's solution would remain as described in that paper. All it takes is a push for 'proactive searching for images not in the database' through e.g. models predicting whether an image is CSAM or something and you have countless cases like [0]. This needs to be wholly unacceptable. Once the system is put in place, expanding it is a much easier pill for the public to swallow.

You're just advocating for frog boiling.

https://www.nytimes.com/2022/08/21/technology/google-surveil...


The CSAM & Apple debacle shows we can win some of the time. I'm not going to give up. Apple's CSAM scanning was a huge precedent for having government-sponsored software on your phone, running 100% of the time, scanning your device. That is just plain fucking awful, and it was new and precedent setting. It had to be fought tooth and nail, and your argument was just "don't let the perfect be the enemy of the good". I'm sorry to be so blatant, but that's what it was. Yeah, I read the PDF and understood what they were doing; it was a crack in the armor of having government surveillance software on your phone and computer 24/7, and that is huge.


I don't think they would have stopped at Apple.


This is part of why I think politically there is no point trying to co-operate with these people and convince them to maybe not collect so much.

The only way to stop them doing this is for folks in the right places to make it technically impossible.

On a similar note: the fact that it's taken years to roll out TLS ECH and DoH, which would make a lot of passive surveillance of the internet much more difficult, is only enabling bad-faith actors like Europol et al.


> The only way to stop them doing this is for folks in the right places to make it technically impossible.

Sure. Then they'll pass legislation making it a crime to implement technical measures preventing such data collection, and simply lock up everyone you are talking about.

What's the real solution?


Steal the private communications of politicians and post them in public


A US Supreme Court Justice's and an ex-Prez's crimes are pretty public; did it matter?


Given how many simultaneous criminal cases Trump is now defending: yes.

A better example would be Snowden, given nobody was done for lying to Congress, but even then he changed things by revealing so much.


At the transport level: The EU has a lot of power, but the power to force the IETF to withdraw an RFC globally? That would be a reach even for them.


Everyone mocked the Australian PM when he said that the laws of Australia applied in Australia, and not the laws of mathematics, but he was correct.

This is peak nerd-delusion to think that the state will somehow be stopped by your cypherpunk schemes. It was already a delusion 20 years ago, and now to make things worse, all those guys that used to hang out in those hacker spaces promoting those attractive but silly ideas work for big corporations and governments.


The state will be stopped by widespread use of anonymous strong encryption.

It's the widespread part that confounds cypherpunks, and why PGP, Signal, Let's Encrypt are important despite the bikeshedding they attract from purists.


> This is peak nerd-delusion to think that the state will somehow be stopped by your cypherpunk schemes.

Nonsense. Encryption is legal.

We got rid of the ITAR restrictions on encryption. We prevented Key Escrow and the Clipper Chip Mandate.

We won.

PS, governments hate bitcoin more than anything else on earth, and yet it is still worth half a trillion dollars. We're still winning.


Both can be true. Encryption is not a panacea.


In a fight between law and maths, maths can't be arrested, can't be put on trial, can't be detained at the border. It is intangible, and can be anywhere, even inside the unreadable mind of a traveller.

The only way for any state to prevent the use of crypto they can't break is to wind back all the things that can perform it, meaning all computers, not just all internet banking and other things that are everywhere now and can't be used safely without it.

States are free to do so, because a state can outlaw physical devices, seize them at the border, etc. — but that is what it would take to do this. I doubt they will, but that's the only option.

Unfortunately, we also have the problem that political factions both native and foreign regularly try to undermine states; doing so in secret is a necessary but not sufficient part of this, and thus getting past crypto is IMO absolutely necessary[0] to keep any state from being usurped.

Fortunately (from the POV of a state) "getting past crypto" can also be Van Eck phreaking, not just weak crypto.

Unfortunately for everyone, just as any crypto backdoor is almost certain to be exploited by criminal gangs to get valuable information, so too are the non-crypto surveillance possibilities: not just Van Eck, there's more than one way to use wifi as a wall-penetrating radar to violate your privacy; laser microphones can listen in on you remotely for pennies; smart dust is just about starting to be a serious possibility rather than a tech demo.

My current vibe here is that each new invention creates a power vacuum that takes 15 years to properly fill, and we're currently creating new tech too fast for either states or organised crime to fill the gaps.

[0] despite the previous "but not sufficient" because Swiss cheese defence: https://en.wikipedia.org/wiki/Swiss_cheese_model


> States are free to do so, because a state can outlaw physical devices, seize them at the border, etc. — but that is what it would take to do this. I doubt they will, but that's the only option.

Another alternative is forcing these devices to be designed in such a way that installing unauthorized crypto tools isn't possible.

We're already very close to this point. PCs have Secure Boot, which prevents installing non-approved operating systems. Windows 11 won't boot unless it is enabled. It also requires a TPM, which can prevent modification of system and user files by moving the hard drive to an unencumbered computer. Windows SmartScreen really doesn't want you to run apps not certified by Microsoft, although it is still possible. Web browsers are doing more and more to prevent you from visiting websites not secured by TLS, outright blocking some APIs if HTTPS isn't enabled.

The tech is here, all it takes is a regulator to tighten up the screws. It's not unimaginable for the EU to ban all motherboards with Secure Boot that can be disabled, to force Microsoft to refuse uncertified apps, to force Microsoft-certified browsers to require TLS with a specific set of root CAs, and to require those root CAs to only issue certificates to those the EU deems worthy. The EU isn't terribly likely to do these specific things out of right-to-repair concerns, though those concerns could probably be assuaged if the certification was done in a fair way by a third party, possibly the government itself, instead of tech companies.

This way, you can have perfectly secure crypto with your bank while still giving the EU the ability to access your messages at need.


> Another alternative is forcing these devices to be designed in such a way that installing unauthorized crypto tools isn't possible.

On the plus side, this does mean no more JavaScript and no more Excel spreadsheets. Unfortunately we'd have to ban nice things too, as those are only two of the things you'd have to ban to make this happen.

Don't get me wrong, the government behaviour you describe is plausible — turning those screws to make it harder is highly likely IMO — I'm just saying such limited things will never actually allow them to achieve their goals, and that unless they want to outlaw possession of computers at least as advanced as the Z1 from 87 years ago[0], they need to do their surveillance in a different way that doesn't break crypto.

(And that everyone else being able to do that surveillance necessitates substantial social change, but that's a different topic).

[0] https://en.wikipedia.org/wiki/Z1_(computer)


> On the plus side, this does mean no more JavaScript and no more Excel spreadsheets

Probably true about Excel (or at least non-cloud Excel), but not JS.

You can apply the App Store model but for websites. Require ID to get a TLS certificate, block anything which doesn't do TLS, allow certified websites to execute arbitrary code with a few technical restrictions. If somebody violates the law and is discovered, through either manual or automated means, they can be blocked via TLS revocation lists.


> to wind back all the things that can perform it, meaning all computers

Beware, they are attempting this.

It's a big project; UEFI, secure boot, and the end of General Purpose Computing. But they will throw absolutely everything they have into this Hail Mary plan, and the chip fabs are a chokepoint...


When I said "all computers" I wasn't being metaphorical.

Do you have something that can XOR two blobs of data? It doesn't matter how - JavaScript on a web page, or an app: if it can XOR, it can do a one-time pad, which is unbreakable encryption.

The hard part of that way of encrypting things has always been sharing the key, but if you're in a criminal gang, or if you're actually trying to undermine a government, you can share the key in person.

None of the things you've listed are even remotely sufficient to prevent unbreakable cryptography. Strictly speaking you don't even need computers: even a handful of transistors soldered up right would do this.
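
For anyone who doubts how little machinery this takes, a one-time pad really is just XOR. A sketch (the security rests entirely on the key being truly random, at least as long as the message, and never reused):

    import secrets

    def otp(message: bytes, pad: bytes) -> bytes:
        assert len(pad) >= len(message), "pad must be at least as long as the message"
        return bytes(m ^ k for m, k in zip(message, pad))

    pad = secrets.token_bytes(32)         # shared in person, used exactly once
    ciphertext = otp(b"attack at dawn", pad)
    print(otp(ciphertext, pad))           # b'attack at dawn' - XOR is its own inverse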


> Everyone mocked the Australian PM when he said that the laws of Australia applied in Australia, and not the laws of mathematics ...

Oh come on. That was his flippant response to a smartarse question.


RFCs are not legally binding. Most politicians have no idea what an RFC is. They don't care about the actual technical details. They will just order a result without being aware of how unrealistic and harmful it is. You might be an educated technical citizen, but if you don't comply then there will be all kinds of legal trouble.


The premise is more optimistic than you are suggesting:

1. Mass spying is unpopular. The only reliable support base is a small-ish group of unlikable busybodies.

2. Reducing civil liberties tends to come back to bite the people who implement it. The best part of the Trump backlash is watching the intelligence apparatus come down on the Republicans. Karma in a nutshell, they were one of the major enablers of all that stuff after 9/11. All these ideas like free speech and private communication are ultimately to protect politicians.

3. It is practically difficult to stop. Any country that tries to stop encrypted messengers would have to cripple their own economy by bringing in such limited computers that they can't do anything. And they'd be hopelessly vulnerable to foreign espionage.

This is not that hard of a political fight. They tried to ban strong encryption back in the PGP era and look how that went - SSL is everywhere, encrypted protocols are everywhere, we have cryptographically based assets and every company is encrypting everything they can lay their hands on at rest. The ban-encryption camp has a track record of complete failure. And The Children have been Thought Of and are living in the best era ever to be children.


The solution is to do it before it's made illegal.


Politician 1: We will pass this legislation to make Technology X illegal.

Politician 2: Look! The RustyMondayCo startup already made Technology X!

Politician 1: Drat! Foiled again! Now we can’t possibly outlaw it!


The solution is more like making sure Technology X is ubiquitous before they can do anything. That makes it significantly harder to outlaw without undesirable political and economic side effects.


Agree. This is precisely why they have stopped attacking cryptography itself like in the previous crypto wars. Now they aim for bypassing cryptography so that they can still claim that the data is kind of end-to-end encrypted.


But funnily enough I remember people complaining about DoH and how it was bad for privacy, etc...

As a start, moving away from UDP is an improvement


If you were in government, how would you propose to limit child porn?


If I were a government, I would let the police do its job without mass surveillance. Also, not all problems can be solved. Or, if I were a government, I would command all companies and individuals to drop what they are doing and work non-stop on cancer research.


And how do you know upfront that the mass surveillance is worse than the alternative, e.g., a country run by organized crime, children and young women being exploited at large scale? Do you think police can be effective without mass surveillance if the other side has all this technology at its disposal?

Cancer is high on my list of problems that cannot be solved: we're getting older, old age comes with the decline of your body, and spending money on a lost cause is a huge waste.


> And how do you know upfront that the mass surveillance is worse than the alternative

That's a false dichotomy. In my current country, where politicians are accountable and mass surveillance is not (yet) a thing, the country is not run by organized crime, and children and young women are not exploited. But I have lived most of my life in a country with mass surveillance (Cuba - and no, you don't need client-side scanning; a sufficiently high number of police riding bicycles will do just fine). The chilling effect on public speech and thought has brought untold misery. All politicians are corrupted party-folk chosen from above. Young men and women enthusiastically jump at the opportunity of being sexually exploited for a chance to escape the country. That is the bottom of the dumpster where the slippery slope of totalitarianism takes us.

> Cancer is high on my list of problems that cannot be solved, we’re getting older, old age comes with decline of your body, spending money on a lost cause is a huge waste.

I would recommend you visit the aforementioned Cuba. A cancer prognosis there is better than it was during the Middle Ages, but far worse than in a first-world country. So, cancer is not unsolvable; it's just that there are different levels of progress across times and places. The same goes for aging, but there we have this cultural brick you have so brightly illustrated that says even trying to do something is a "huge waste".


The way you prevent the country from being run by organized crime is by spying on the governing bodies, not the private citizens (those typically don't run the country). In practice, it means we should have transparent data about the people running the country.

As a rule of thumb, good politicians tend to fight for transparency in the government and privacy for their citizens. Bad politicians push the opposite view.


Prevent the country from being run by organized crime by spying on the people running the country, not the public.

If you invert it and instead allow those running the country to spy on the public, but not the other way around, then when organized crime takes over running the country they have an insanely powerful tool to use to stay in power and accomplish their evil goals.


> Also, not all problems can be solved.

This is something it took me a long time to integrate.

Politicians, especially politicos with their backs against a wall, face demands to "do something about it" that they can't resist. The result is often a treatment that is worse than the disease.

If that's true, the corollary is that we shouldn't mock politicians when they come up with "solutions" that are ineffectual - provided they're just ineffectual, and not Trojan horses for some more insidious plan. Perhaps the best answer to "something must be done" is to do something - anything - that doesn't cost too much and doesn't do much harm.


Always keep in mind that such oppressive legislation backed by silly arguments is possible because the people at large are essentially OK with this, and many of them even support it outright if it means getting "tougher on crime" or whatever.

The true enemies of freedom aren't shadowy cabals scheming in back rooms – they are your neighbors, your coworkers, some of your friends, and possibly even some of your family members.


One of my problems with representative democracy is that there is never a candidate who ticks all my boxes, i.e. an actual representative. If I vote for someone to keep my digital liberties, they want to ban nuclear power instead, or whatever. It's all a tradeoff, and I don't think it's good that the tradeoff needs to happen so early in the process of representing me.

I like Switzerland (where I currently live) because they have direct voting on topics, mixed with electing representatives. There's a filter so the people don't have to vote on everything, but for the big questions, there's already a system in place to ask the people. That must have sucked 100 years ago, when communication was more limited, but today I think every nation should move to it. It also keeps the people engaged (voting on 3-4 topics every quarter), not just something that happens every four years.

Maybe that would have saved Sweden from the extreme sides of (nationalist) politics, and from banning investment in nuclear for 40 years before ripping that up. Mind you, the nuclear disinvestment was based on a (non-binding) ballot vote, but my problem is that no one dared challenge it, or had a process to re-ballot the question, in 40 years. Even in light of new climate information.


> because the people at large are essentially OK with this,

Only because they are not informed about what this is actually about. It's all about framing.

* Do you want the police to have better tools to fight child abuse? -> Of course!

* Do you want the police to see what private messages you are sending to your loved ones so that they can ensure that you are not a pedophile? -> Hell no!


> Do you want the police to see what private messages you are sending to your loved ones so that they can ensure that you are not a pedophile?

Most people don't even care anymore; they know they're always spied on when they use Messenger or Instagram (covering the whole age spectrum here). They'll always hit you with "I don't have anything to hide".


That's what _you_ and _I_ think.

Most conversations I have with non-techy people, they end up saying "Yes" to both.


Anecdotal example - I was talking with a (usually very reasonable) friend, and she was convinced Google (via Android) listens to her all the time (with a microphone) and suggests products based on her real life conversations with her friends[1]. What creeped me out was that it was not presented as a conspiracy theory but something completely mundane, almost a remark ("you know how they spy on everyone - like yesterday I was talking about Greece with some friends and today I get plane ticket ads - you know, the usual"). Not even a bit outraged. Crazy for me, but clearly people don't feel about this issue as strongly as we do.

Later her friends agreed that it happens for them too and didn't believe me when I suggested other possibilities.

[1] In reality, most likely ad targeting based on her online habits works as intended.


Test it. Pick a medical condition, randomly. From the musty pages of an old-fashioned book, so there's no digital connection.

Then mention that disease a few times in the presence of your Alexa, Siri, or Google whatever. Talk about the disease to your wife on a phone call once or twice (make sure she's in on it, so she knows never to type the disease into a search engine).

See if you start getting ads for treatment for that disease.

It's a little crazy, but I'm not convinced it's impossible. I've had something similar happen to me a couple of times, but not under rigorously controlled circumstances, so maybe it was Google searches by a relative or friend who overheard me, and I was marketed to by association? Maybe somebody with that disease visited my house and their location was broadcast and linked to my wifi?

Unsettling, any way you cut it.


I've had many conversations exactly like yours, literally. To reasonable people being convinced that's normal, and me trying to explain how it's probably a coincidence or ad targeting.


> What creeped me out was that it was not presented as a conspiracy theory but something completely mundane, almost a remark

There are ads utilizing this trope now


These two questions are at the root of this conundrum, but you're not being completely accurate.

What exactly is your justification for regarding your right to privacy in your communications with your loved ones as more important than the right of the police to fight child abuse?

I believe most people would not agree with this exact proposition, but with a different one: that once the police are given the power to fight child abuse, which can only be given by allowing them to access the private communications of anyone suspected of being involved in such crimes, then there will be abuse of power, and they will use that access to also fight other crimes or even for political gain, as tends to happen in authoritarian states.

I would absolutely be willing to forego my right to privacy under certain circumstances, given strong enough guardrails to prevent future abuse, if that would allow child abuse and other hideous crimes to be prevented - it would be immoral not to. However, like most other people in tech, I have enough knowledge to understand that preventing abuse is not possible at all with current technology - once the power exists to break into communications, anyone with enough motivation and resources will be able to use it, not just the intended recipients of such power, unfortunately.

If you really want people on the other side of the debate to understand you, you need to stop being so simplistic - there are very good justifications for their positions if you remove the practical limitations of being able to stop abuse - limitations which they do not understand, and which I suspect a lot of people in tech also fail to comprehend.

I would even go as far as to say that future technology may change this: it may be possible to have completely abuse-proof technologies in the future which, if they existed, would make me change my position on this matter.

For example, something that uses blockchain technology to make it cryptographically impossible for the police to access someone's communications without a warrant? And making that warrant only usable by the police if it was also approved by a number of different, independent groups, including groups advocating for privacy (something like a "smart contract" could do this?)?

You can say these are stupid ideas, and I would probably agree... but my point is that this may not be impossible, and perhaps people who are really concerned about privacy, while also understanding why the police may need this sort of power, should actually be trying to find ways to do this properly instead of just repeating the mantra that no, this is impossible and we'll have to live with child abuse, terrorism, etc. forever. Something like the sketch below is what I have in mind.
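
To make that concrete, here's a rough Python sketch of the approval logic I'm imagining. Every name in it is made up, and the genuinely hard part (making the data cryptographically unreadable without the quorum, e.g. via threshold cryptography or secret sharing) is waved away:

    # Toy sketch of a multi-party warrant scheme; all names are hypothetical.
    # A real design would split the decryption key itself (e.g. Shamir
    # secret sharing), so no single party, police included, could read
    # anything without the full quorum of approvals.
    from dataclasses import dataclass, field

    REQUIRED_APPROVERS = {"court", "privacy_ngo", "data_protection_authority"}

    @dataclass
    class Warrant:
        suspect_id: str
        approvals: set = field(default_factory=set)

        def approve(self, party: str) -> None:
            if party in REQUIRED_APPROVERS:
                self.approvals.add(party)

        def usable(self) -> bool:
            # Valid only once every independent group has signed off.
            return self.approvals == REQUIRED_APPROVERS

    def release_communications(warrant: Warrant, vault: dict) -> bytes:
        if not warrant.usable():
            raise PermissionError("warrant not approved by all required parties")
        return vault[warrant.suspect_id]

    w = Warrant("suspect-123")
    w.approve("court")
    w.approve("privacy_ngo")
    assert not w.usable()  # still missing one independent approval
    w.approve("data_protection_authority")
    print(release_communications(w, {"suspect-123": b"<ciphertext>"}))

Of course, in this toy version the "vault" is still a trusted party that could be coerced; removing that single point of failure is exactly what the blockchain/threshold-crypto machinery would have to handle.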


> What exactly is your justification for regarding your right to privacy in your communications with your loved ones as more important than the right of the police to fight child abuse?

Violation of privacy is harmful. The police are not supposed to cause unnecessary harm. Given that the vast majority of people are not child abusers, there will be a great amount of harm for no gain. And most child abusers will find ways to evade the surveillance. This is not even remotely close to a reasonable bargain.

> If you really want people on the other side of the debate to understand you, you need to stop being so simplistic

We were talking about the general population and how they perceive the same topic given different framings. Most people think in simplistic terms when it comes to topics that they don't actively engage with.

> there are very good justifications for their positions if you remove the practical limitations of being able to stop abuse - which they do not understand, and I suspect a lot of people in tech even also fail to comprehend.

Are there very good reasons to fight child abuse? Of course. But if non-technical people believe that there is a magical technology that can deliver what politicians claim, then you have to challenge them to explain where that belief comes from. And precisely because they don't understand the technology, they have to admit that they can't form a well-founded opinion on it. You might not be able to make them understand why the practical limitations make this a bad idea, but you can make them understand that there is an important gap in their knowledge on the topic. And something that everyone can understand is: not every solution is actually a good or even effective solution.


A similar thing happens with the word "security".


>The true enemies...

The "true enemies" are the folk scheming in backrooms who hope to succeed by the ignorance of the rest of the folk you listed. Are my neighbors, coworkers, etc. a problem in this battle? Yes, absolutely, but they're not the ones acting with malicious intent.


It's not ignorance, it's malice. "The people" are much smarter, but much more evil, than commonly assumed.

And ironically, the false idea that the population is ignorant of every important issue is yet another argument for invasive, controlling regulation of everything...


The majority of the population does not have the background to consider the implications of, say, banning E2EE, but they do have the background to understand that CSAM is bad. Thus, when told that banning E2EE might make CSAM harder to distribute, of course they're going to prefer it. That doesn't make them malicious or evil.

The entire point of representative democracy is that the elected representatives are the ones who work with subject matter experts to reach solutions that work best for the people. Thus, it is the representatives who are, at worst, malicious/evil for not listening to subject matter experts in favor of their political games.


That is, of course, the banality of evil.


Only if your definition of evil is so broad that it includes everyone, including yourself, thus making it a completely meaningless word.


Yes, that is, in fact, how the banality of evil works. Everyone is capable of doing evil without "feeling" evil; if they can self-justify, it's perfectly fine.


Who says SMEs/experts should always be listened to? For that matter, a damn representative better damn well listen to their constituents first.


> The majority of the population does not have the background to consider the implications of, say, banning E2EE, but they do have the background to understand that CSAM is bad.

This is all still just framing and you're only falling into the trap.

One of the many strong arguments against this kind of government surveillance is it's the sort of thing authoritarian governments use to commit atrocities. People can certainly understand that Nazis are bad and technologies that protect people from Nazis are good. Now all you need is to point to the proponents of the scanning and ask why they want to help Nazis.

It's the same tactic they're using. And then they use counter-tactics, like weaponizing Godwin's Law even in cases when you actually are discussing authoritarian government policy.

It has nothing to do with the nature of the issue and everything to do with the fact that the proponents of these measures are professionally trained propagandists who know exactly what they're doing.

Maybe we need a corollary to Godwin's Law. Let's call it Lovejoy's Law:

As the length of a policy debate increases, the probability that someone implores you to Think Of The Children approaches 1, and the person to do this loses the argument.


I've heard Nazis used encryption.

Ergo, encryption helped Nazis.


This is exactly what I mean about framing. The Nazis used infamously broken encryption. We have to make sure our codes can't be broken, unlike the foolish evildoers. The Japanese were never able to defeat the Navajo code talkers, and then we won the war.


It's ignorance. I can't expect Grandma to fully understand the importance of E2EE, but I can absolutely expect her to vehemently oppose CSAM. Same goes for the Joe and Jane Blows that have no technical background.


No. People are driven by fear, real or unfounded. Only a very small part of the population could be labeled as inherently evil and that part is usually psychologically impaired.


By deduction, the true enemy of freedom is universal suffrage.


You are quite possibly correct. Late-modern monarchies, absolutist on paper, were certainly granting more freedoms to their people than those people themselves would have likely voted for, had they been given the power to do so. I can only imagine what an "Enlightened Absolutist" European monarchy would look like in the 21st century. Pity that proto-fascist faux-democracies are all that is left.


It's ironic to me that the EU led the charge in cookie notices and is now pushing this. Although one is privacy from corporations and the other is privacy from the government.


It's called authoritarianism.


Definitely, it's this. The EU treats companies like they're horrible monsters, and then looks away when it does far worse shit.

It (the EU) is authoritarian toward companies, suppressing them with penalties and bureaucratic laws, but it allows itself (the governments and member states) unending tolerance in whatever it does.


> then looks away when it does far worse shit.

> but it allows itself (the governments and member states) unending tolerance in whatever it does.

Could you clarify which horrible things the EU has done that you're referring to?


https://www.privacyaffairs.com/gdpr-fines/

And what's coming soon is even worse: the Digital Markets Act, which allows the EU to fine companies up to 10% of their yearly revenue.

That's just the obvious stuff that I can write about. I live here and have personally experienced how Germany treats businesses.


Ah, thanks for clarifying it: you're against GDPR! Difficult to sympathize, sorry. Being against encryption backdoors is very different from being against any legislation meant to actually protect privacy, and I find that the two are actually conflicting positions.


Then you really don't understand GDPR. GDPR is not about protecting privacy, GDPR is most of all about giving the legislator the opportunity to rip into businesses. Have you actually read GDPR? I have.


I don't think you understand GDPR, as it is about protecting privacy. It is however true that a nice side effect of GDPR is giving legislators the opportunity to rip into businesses that don't comply with it. 10/10, can't wait until the DMA and DSA get kicked into gear too.

Are you saying that you liked how Facebook and Google could cross-site track every single (logged-in or not) European and use that for targeted ads as well as share it with the NSA?

Thank GDPR (and Apple) for stopping that around ~2018.


> Are you saying that you liked how Facebook and Google could cross-site track every single (logged-in or not) European and use that for targeted ads as well as share it with the NSA?

Yeah, a couple of small good things in GDPR don't outweigh the bad and the damage. Not even close.


Skimming through your previous posts I think we differ too fundamentally to have a reasonable discussion about this.

I'll state that I'm happy that your libertarian/anarcho-capitalist views on regulations, taxes, and government are an extreme position here and not the status quo.


It's definitely not the status quo. But you know, the world is in a shitty position (war, climate crisis, ecological disasters, etc.), so maybe the status quo is to blame after all, and maybe we do need change.


I have read and helped implement GDPR in my country. It's the greatest thing that has happened to online privacy: finally a law that has teeth and can influence big tech. It empowers me as a citizen and makes me feel my data has value again.

I have high hopes the Digital Markets Act will be as good.


You're crazy. Meanwhile our economies are literally beginning to shrink, our enemies are becoming stronger than us and then going to war against us (Russia, Azerbaijan, Serbia, China, North Korea), and the world is on fire. But I am so happy your data is safe, like anyone gives a shit about you anyway.


You should consider trying to be a bit less aggressive if you're actually trying to improve the world. It's because of some people having such extreme, aggressive behavior that conflicts (such as the ones you're complaining about) start.


I don't think I'm being aggressive. I think you should be less presumptuous.

It's hard to imagine that his data is so important, that he's so wanted, that people care about him so much, that GDPR will make any difference whatsoever in his life. On the contrary, the world is burning, the clock is ticking, and you're complaining about me being aggressive on HN?


> EU led the charge in cookie notices

you got it backwards

eu: cookies/tracking is bad, don't do it. if you _really_ need to, you have to ask.

big-tech: fuck this! of course we need to track users. we're just gonna ask everyone all the time and put the blame back on you.


Why would that be ironic?

It's one and the same thing.

They disregard property rights and order everyone about.

The reason this seems strange to you is that you didn't care when they did that to website operators.


It's a distinction between method and goal.

'Authoritarian supporters of privacy' is a position it's possible to take.

Which raises the question of whether democratic/authoritarian or surveillance/privacy is the more important characteristic.


> “All data is useful and should be passed on to law enforcement, there should be no filtering by the [EU] Centre because even an innocent image might contain information that could at some point be useful to law enforcement.

Aha, you first. Oh, this lady only meant our data, not theirs.

Stuff like this happens because people at large lose their shit, or at least pretend to, when it comes to "child safety". It's impossible to even have a straight conversation. It's straight up dangerous.


> When Lauren McCluskey, a University of Utah senior and track star, first alerted campus police to the fact that someone had accessed compromising photos of her and was attempting to extort her for $1000 in 2018... But she forwarded the photos and threatening messages she’d received to campus police anyway, and now, nearly two years after her death, the Salt Lake Tribune reports that the officer who received them saved them to his personal cell phone and later flaunted the photos to male coworkers.

Can't wait for the competent police officers at my local station to call me in order to have full access to my phone based on some photos I sent to my mom regarding her niece (my daughter).

This will have a Streisand effect like no other.


> my mom regarding her niece (my daughter)

Is this a typo? I can't imagine how this would work without some familial intermingling.


> In the same meeting, Europol proposed that detection be expanded to other crime areas beyond CSAM, and suggested including them in the proposed regulation.

And there it is, just as anyone would expect. It’s never just about Protecting The Children™.


I was wondering why there is suddenly so much vitriol online lately about "protecting the children". I guess it is just the same old social engineering trick.

Like the war on terror and the drug war and all kinds of "icky things" before it. Just excuses and a red herring to implement something dubious.


Children are the perfect political weapon to get people to accept any tyranny. If you oppose this stuff, they'll treat you like a pedophile.

https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...


There's plenty to actually protect children from. Doesn't mean it doesn't also get used as a fig leaf for other things, but if you think it's only ever a fig leaf you are sorely mistaken.


What are authorities going to do when there's an image generation AI model that can output infinite amounts of CSAM? Are they going to keep trying to enforce these laws, which have good intent (at least by some advocates) but end up being a disaster that barely puts a dent in the problem, while massively increasing government power and decreasing privacy?


I've used this example (CSAM-making AI models) as an illustration of the difference between ethics and morals many times. It would be theoretically ethical to use these models. It would, however, be immoral. Since governments are allergic to morality these days, it's going to get a lot worse before it gets better.


Why would it be immoral?


I think the argument is that the models are trained on real pics?

The other argument could be that it increases the chances of someone moving on from the consumption stage to the abuse stage.


The second part could also go the other way round. If there is easy access to CSAM, maybe there will be less abuse. I could see either case being true, or that the availability of such material does not change the incidence of abuse at all.

But having to train models on real materials is a major moral issue.


> The second part could also go the other way round. If there is easy access to CSAM, maybe there will be less abuse. I could see either case being true, or that the availability of such material does not change the incidence of abuse at all.

This is actually my opinion, but I have no hard data on it. Is it feasible to apply the same logic as with video games, where realistic killing is depicted and the argument is that this doesn't make people killers in real life?


Related (from today): https://news.ycombinator.com/item?id=37716928

South Korea has jailed a man for using AI to create sexual images of children


It's not new information that this kind of content is illegal, in many countries even if it's only computer-generated.

The issue is that hash-based scanning will be rendered irrelevant by AI-generated images because there could be an unlimited number of them, and they could be produced by general-purpose image generating software that isn't itself illegal.
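
A minimal sketch of the problem, assuming a hypothetical hash database (real systems use perceptual hashes such as PhotoDNA rather than plain SHA-256, but the limitation is the same):

    # Hash-list scanning can only flag images that someone has already seen,
    # hashed, and added to the database. This database entry is hypothetical.
    import hashlib

    KNOWN_ILLEGAL_HASHES = {
        hashlib.sha256(b"a previously catalogued image").hexdigest(),
    }

    def flagged(image_bytes: bytes) -> bool:
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_ILLEGAL_HASHES

    print(flagged(b"a previously catalogued image"))           # True: known image
    print(flagged(b"freshly AI-generated, never catalogued"))  # False, always

A model can emit an effectively unlimited stream of distinct images, each with a hash (perceptual or cryptographic) that no database has ever recorded, so the database never catches up.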


On the topic of scanning a device for CSAM: even if this were extended to computers, and Microsoft/Linux/etc. coded it into the OS, how would this even work for images that only exist in RAM and never touch a hard drive? (Though I'm not sure how that would work exactly.)

Especially in cases like isolated virtual machines running old versions of, say, Windows or Linux, where child molesters and pedophiles wouldn't have to worry about the scanning software.

I can't pretend to be an expert on the subject, but isn't a VM in a separate area of RAM where it doesn't bleed out?


The chatcontrol law does not require scanning on the OS level.

The chatcontrol law requires scanning on the ISP level and on the internet service level.

That means:

* ISPs are required to check for connections to flagged servers. They also have to check for illegal URLs (yes, HTTPS will make that pointless; see the sketch below this list)

* Services like chat/e-mail/file-sharing services need to effectively check whether a user is sending illegal material (no, the law doesn't take into account the complexity of open-source ecosystems and the many options you have for client software and server hosting)
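
To illustrate the HTTPS point, a toy model (the blocklist entry is hypothetical): with TLS, an on-path observer such as an ISP sees at most the hostname, via the SNI field or the preceding DNS lookup, never the path or query string, so full-URL blocklists have nothing to match.

    # Toy model of ISP-level URL blocking; the blocklist is hypothetical.
    # Plain HTTP sends the full URL in cleartext; with HTTPS an on-path
    # observer sees only the hostname (TLS SNI / DNS), never the path.
    BLOCKED_URLS = {"http://example.org/flagged/page"}

    def visible_to_isp(url: str, encrypted: bool) -> str:
        hostname = url.split("/")[2]      # crude hostname extraction
        return hostname if encrypted else url

    def isp_blocks(url: str, encrypted: bool) -> bool:
        return visible_to_isp(url, encrypted) in BLOCKED_URLS

    print(isp_blocks("http://example.org/flagged/page", encrypted=False))  # True
    print(isp_blocks("http://example.org/flagged/page", encrypted=True))   # False: only "example.org" leaks

The ISP could of course block the whole hostname, but that is a much blunter instrument than the per-URL checks the law seems to assume (and encrypted SNI/ECH hides even the hostname).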


The hypervisor or host OS can just read the memory.


Don't AMD server-class processors have some way to encrypt VM memory so the host can't read it?

Looking around quickly, it seems to be Secure Encrypted Virtualization (SEV):

https://www.amd.com/en/developer/sev.html


I see.

That said, continuous RAM scanning would be incredibly resource intensive.


Even if it were eventually struck down under the Charter of Fundamental Rights, that ruling would take years to arrive, and by then their access would be de facto entrenched. They know this as much as anybody.


I thought General Monitoring was illegal in the EU?

How could this not run afoul of that?

I thought the EU had learned something from the Data Retention Directive.

>What does the Prohibition of a General Monitoring Obligation mean? The prohibition of a general monitoring obligation means that companies cannot be obliged to introduce measures that will result in blanket monitoring of the activity of users of their service, nor obliged to seek out illegal activity.


It is, according to the EDPS. https://www.euractiv.com/section/law-enforcement/news/eu-wat...

Unfortunately, the Commission doesn't care about passing illegal legislation (the DRD, the three EU-US transfer agreements, etc.), and the CJEU is not fast.


It seems like the thread is split between those who value privacy more than preventing child abuse and people who don't believe child abuse and crime in general can actually (gasp) motivate lawmakers to pass laws.

Actually, the view of those lawmakers in this thread seems to be some sort of caricature of self-interested future despots, as if most of commenters here are not living in democracies.

It would be astonishing, if it were not routine on HN...

Sure, maybe the moral tradeoff is worth it and privacy is the top priority. Then I'd recommend focusing first on all the other areas of life that violate privacy. Ban airport security, CCTV, etc. After that, feel free to move on to banning measures that protect victims who, unlike most of you grown-up functioning adults, can't even stand up for themselves. And address the gravity of that tradeoff, or you're just making nonces out of yourselves.


If this were 1023 A.D., we would be talking about heretics, and an Inquisition that must oppress every man and woman to protect them from themselves.


Thankfully it isn't.


Depends greatly on where you live.

Furthermore, a lot of the same forces that are trying to get rid of end to end encryption are explicitly trying to take their parts of society back to that.


This is conspiratorial nonsense in democratic countries where most of this audience lives.


Police (in basically all countries) have the goal of making their job as easy and efficient as possible. That will be priority one, as it is with most of this. You can't really blame Europol for this: they will ask for as much as they can get, since they see themselves as the ultimate good guys who "could never abuse their power". However, citizens, politicians, and the rest of us peasants have to push back; that's just part of the contract, and anything else is living in a dream world. It's not about child abuse, it's about concentrating power and making their lives as easy as possible.


The police should be laughed out of a courtroom if any case they are trying to investigate is based solely on the potential evidence found on a mobile device, or only through the interception of encrypted communications.


Blackbeltbarrister [1] has a good overview of what these regulations are attempting to do: legislate end-to-end encrypted inspection tech into existence. Either it's a smokescreen for something more sinister (requiring backdoors or weaker encryption) or it's a delusional attempt by regulators to enforce oversight over encrypted traffic. Either way, something has to give.

[1] https://www.youtube.com/watch?v=NUPeyqsVcU4


It is known which priests were child rapists, and the police didn't do much about it. Despite their low interest in the past, now they are suddenly very concerned about child sexual abuse and want mass surveillance.


The problem is the music and movie industries, which will want to use the technical systems set up for human traffickers, child abusers, murderers, you know, the really nasty stuff.

A line must be clearly drawn in the usage of such "military" grade systems.


Good. If you don't have online child sexual abuse to hide, you have nothing to worry about, since for all the other things people do have to hide, such access will ignore that (up to direct threats to safety and national security concerns).

If criminals get easier access to online CSAM, then law enforcement should get easier access to user data. If you think that trade is unfair, take it up with the criminals.


I do send my mom photos of my 2-year-old taking a bath and stuff like that. "If you have nothing to hide, you have nothing to worry about": what a stupid argument. Letting criminals dictate the way I live sounds like such a wonderful idea. If you have nothing to hide, can you post your credit card here?


This is a discussion about bean soup recipes and you are telling me you don't like beans. Mediocre argument.

Be very careful with sharing explicit photos online. Your settings and encryption must be very strong, or people other than your mom may get access to it.

If these others then get investigated for child abuse, it should come out to the investigators that they joined a semi-private Facebook group and have access to beach photos of children. Their credit card number also should show up in registering for CSAM forums.

It is a wonderful idea to protect you and your family by taking precautions. Lock your doors. Don't accept requests from people you don't know, educate your mother on online security, and think long and hard about if it is worth it to send your mother a picture of your naked child, for there is a possibility this will end up everywhere.

Everyone has something to hide and to worry about:

> It came in 2003 when Townshend, now 74, was arrested for using his credit card to access a website offering child pornography, though no images were downloaded.

> Rock star Pete Townshend reveals today how his arrest on child pornography charges saved his life after it indirectly led him to discover he had cancer.


No, I have much to worry about. I don't want people to read my private conversations, private photos, and anything else I don't want to be public.

They can catch them without such stupid measures, which will make any secure communication impossible for the masses.


I am fine with people (especially if they are licensed professionals) reading my private conversations and photos, but I worry about people (especially licensed law enforcement) accessing my data on online child abuse.

We can not always get what we want.

Instead, we entrust the police with a monopoly on violence, physical detainment, and violation of privacy, and hold them accountable if they can be shown to disregard their duties and responsibilities. As a civilian, your duty is to weigh your personal sense of discomfort against the societal benefits of improved child abuse detection.


>As a civilian, your duty is to weigh your personal sense of discomfort against the societal benefits of improved child abuse detection.

No. Not your personal sense of discomfort... this isn't a case of privileging one person's selfishness versus the whole world. You have to weigh the harm to _all_ of society from loss of privacy, against the societal benefits of improved child abuse detection.

Including the future harm to those children as they live their lives with lost privacy.


You have a social contract signed by you. Worry about you. The government gets to worry about harm to all of society from loss of privacy.

Civilians are not supposed to worry about future children as they live their lives with abuse. They get too emotional.


During the Cold War, everyone had spies on the other side.

Russia is certainly still trying that now (I assume the US is too, but haven't heard of it recently).

Giving any group legit access to this risks those spies having a convenient and easy way to find anyone with dirty laundry (even mild, legal stuff) and blackmailing them into helping the spies.

This problem still exists even if we don't have the system for legit access, I'm only saying an official system makes it worse.

We still have to alter our societies so that nobody has anything secret to be ashamed of. This necessarily means making society radically more transparent, and I think the only way this is possible is to also make society radically more inclusive and tolerant. Why also tolerant? Because I've heard people typically commit 3 felonies a day (don't trust random factoids), and at that rate transparency without liberty turns the whole nation into a prison.


Blackmail and coercion of civilian assets for intelligence work is both possible and a risk.

But it won't happen through an Interpol investigation and leave a formal audit trail.

It is just turning "think of the children!" into "think of the sexual blackmail!".


Given the LOVEINT that we only found out about because of Snowden, given the sexual blackmail used to coerce gay people during the Cold War, and given rule 34, I fully expect officially sanctioned mechanisms to be abused in this way.



