Yes, because it’s a mechanism to prevent actual abuse from occurring on the platform and to report offenders directly. When one of the participants in an E2EE conversation reports it, the messages are disclosed to Trust and Safety in readable form, so they can review them and refer offenders to the authorities.
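For the curious, this is roughly how "message franking" works, the scheme Facebook Messenger published for E2EE abuse reports (Stamos was Facebook's CSO, so it's likely what he has in mind). A minimal Python sketch with made-up names; the actual encryption of the message body is elided, only the reporting commitment is shown:

    # Hypothetical sketch of message franking for user-initiated
    # abuse reports in an E2EE messenger. Not any platform's real API.
    import hmac, hashlib, json, secrets

    SERVER_KEY = secrets.token_bytes(32)  # known only to the platform

    def sender_frank(plaintext: bytes) -> tuple[bytes, bytes]:
        """Sender picks a fresh franking key and commits to the plaintext.
        The key travels E2EE to the recipient with the (encrypted)
        message; only the opaque commitment goes to the server."""
        franking_key = secrets.token_bytes(32)
        commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
        return franking_key, commitment

    def server_tag(commitment: bytes, sender: str, recipient: str) -> bytes:
        """Server never sees the plaintext; it MACs the commitment plus
        routing metadata so a later report can prove the message really
        transited the platform."""
        meta = json.dumps({"from": sender, "to": recipient}).encode()
        return hmac.new(SERVER_KEY, commitment + meta, hashlib.sha256).digest()

    def verify_report(plaintext: bytes, franking_key: bytes,
                      commitment: bytes, tag: bytes,
                      sender: str, recipient: str) -> bool:
        """Trust & Safety checks (1) the revealed plaintext matches the
        commitment and (2) the tag is the server's own, i.e. the report
        is genuine and not fabricated by the reporter."""
        meta = json.dumps({"from": sender, "to": recipient}).encode()
        ok_commit = hmac.compare_digest(
            hmac.new(franking_key, plaintext, hashlib.sha256).digest(),
            commitment)
        ok_tag = hmac.compare_digest(
            hmac.new(SERVER_KEY, commitment + meta, hashlib.sha256).digest(),
            tag)
        return ok_commit and ok_tag

    # Nothing is revealed unless the recipient chooses to report:
    msg = b"example abusive message"
    fk, c = sender_frank(msg)                            # sender side
    t = server_tag(c, "alice", "bob")                    # server side (blind)
    assert verify_report(msg, fk, c, t, "alice", "bob")  # on user report

The point is that the server stays blind to content; decryption for review happens only when a participant opts to report, which is exactly why it's compatible with E2EE.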
What is messed up about that? The decision to report is in the hands of the user, not an ML algorithm. The ML algorithm would only prompt the kid to stop and think about what is happening, before actual abuse occurs. I assure you Stamos is speaking from a place of experience, having had to prevent these kinds of things.
Any discussion that starts with "Here's a graphic account of sexual abuse" is not a real discussion.
It's like trying to start a discussion of the Patriot Act with a recording from a plane on 9/11 (an irrelevant appeal to emotion so outsized that it interferes with any dispassionate weighing of alternatives).
For all we know, taking CSAM away from pedophiles makes them more likely to try it in person. Go after the creators.
He then suggests a method for going after the creators, i.e., people who livestream their abuse over secure connections, or use those connections to conduct the abuse.
It's a far better proposition than assuming everyone is guilty and mass-scanning photo libraries.
I was in complete agreement with most Apple-related comments until I saw this group of knee-jerk reactions to a reasonable attempt at discussion. wtf
https://twitter.com/alexstamos/status/1424054568275439617