One would have to assume all of these things are points of negotiation aimed at getting Meta to disable E2E encryption on FB Messenger and WhatsApp, on top of the public allegations made for years prior that providing encrypted channels directly supported pedophiles.
(It is also interesting to me that in ~2010, such discussions used to use terrorism, and now it’s CSAM/pedophilia.)
> (It is also interesting to me that in ~2010, such discussions used to use terrorism, and now it’s CSAM/pedophilia.)
Nah, the CSAM/pedophilia argument goes way back. It's just that the terrorism argument has largely disappeared because the emotional impact of 9/11 has mostly faded (terrorism was much more rampant in parts of the previous century; I can't recall the decade, but if I had to guess it was the 70s or 80s).
The thing is, tools can be used for good and bad: a knife, a car, the Internet, social media.
The problem is that if we live in a society where your online identity is pseudonymous at best (remember the recent Rotterdam hospital shooting; the shooter was found on 4chan), and there is post-moderation instead of pre-moderation, you are going to have garbage content.
A lawsuit like this is used to prove negligence in proportion to the number of users versus the amount of content, i.e. it will result in more active moderation by Meta. In that regard, it was hosting providers in the Netherlands who were terrible at removing CSAM. And almost all of them have improved.
Does that mean the content is removed from the Internet? No, that's impossible. But it sure as hell got more difficult for the perpetrators to distribute the content. Which is a decent compromise.
In 2010 threatening people with terrorism was already almost a decade out of date.
Eventually, if you keep threatening people with an outcome that they can easily detect, they'll notice that it's not happening. It's better to threaten them with something invisible.
The implication isn't that the allegations are false. The implication is that the problem is imaginary. Facebook can easily be the largest site of an imaginary problem.
> If we stick the debate around the truthiness of the allegations, it only takes a few bad examples to lose that argument.
That is true, but the argument here is over why people stopped appealing to terrorism to support their policy preferences.
I'm sorry. I don't see the difference between a problem being imaginary and allegations of a problem being false. If you allege a problem exists, and it's imaginary, then your allegation is false.
So I'm just saying we can oppose the policy preference regardless of why they stopped appealing to a problem, and regardless of that problem's "realness".
> I'm sorry. I don't see the difference between a problem being imaginary and allegations of a problem being false. If you allege a problem exists, and it's imaginary, then your allegation is false.
The allegation here, as seen in the headline, isn't that there's enough pedophile activity on Facebook to rise to the level of being a problem somewhere.
It's just that there's more on Facebook than there is in most other places.