Many defend Telegram by likening it to a neutral conduit, akin to TCP, claiming it merely provides a service and bears no responsibility for the content. However, this comparison fails because TCP is a simple protocol with no ability to control or monitor content, whereas Telegram holds keys for most data and is capable of content moderation. Unlike E2EE platforms such as Signal, which cannot comply with data requests without breaking their encryption and whose home jurisdictions generally do not compel backdoors, Telegram has the ability to cooperate and refuses; that refusal shifts it from being unable to act to willfully aiding or sheltering criminal activity.
In this context, Durov's arrest isn't unjust: Telegram knowingly allowed illegal content to thrive while ignoring legal obligations to assist law enforcement. Refusing to provide data under lawful requests when you are able to is tantamount to facilitating, or even protecting, criminal activity. This glosses over the complexities of cross-jurisdictional law enforcement, but the general point stands.
By the way, I’m not a fan of censorship, but I do believe that a platform’s baseline for moderation should be compliance with the current laws in each jurisdiction, rather than the founder’s personal moral judgment.
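To make the key-custody point concrete, here's a minimal sketch in Python using the `cryptography` package. It's purely illustrative and an assumption on my part, not how Telegram or Signal actually implement anything (real E2EE uses a key agreement between clients; a symmetric key stands in for that here):

    # Sketch: why "holds the keys" matters. With server-side encryption the
    # operator can decrypt (and therefore moderate or hand over) messages;
    # with end-to-end encryption it only ever sees ciphertext.
    from cryptography.fernet import Fernet

    # "Cloud chat" model: the operator generates and keeps the key.
    server_key = Fernet.generate_key()          # key lives on the operator's servers
    server_box = Fernet(server_key)
    stored = server_box.encrypt(b"user message")
    # A lawful request (or a moderator) can be satisfied, because the
    # operator can always decrypt what it stores:
    print(server_box.decrypt(stored))           # b'user message'

    # E2EE model: the key exists only on the clients.
    client_key = Fernet.generate_key()          # generated and kept on-device
    client_box = Fernet(client_key)
    ciphertext = client_box.encrypt(b"user message")
    # The server only relays `ciphertext`. Without `client_key` it cannot
    # decrypt, so complying would mean breaking the scheme itself:
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)  # wrong key, must fail
    except Exception as exc:
        print("server cannot read E2EE content:", type(exc).__name__)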
> TCP is a simple protocol with no ability to control or monitor content, whereas Telegram holds keys for most data and is capable of content moderation.
What?
And how do governments around the world block websites, services, or even the entire external web (as in China)?
> Telegram knowingly allowed illegal content to thrive while ignoring legal obligations to assist law enforcement
What? You think Telegram must read, and must have the means to read, the contents of every chat on its platform?
Forcing people to de-anonymize their speech and enforcing state censorship (“moderation”) is not an appropriate baseline, and it says more about the corruption of France than about Telegram. At this point, how are they any different from the CCP? Each wants to paint its censorship and authoritarian tactics as moral, legal, and justified.
"Lol, are we just calling everything ChatGPT now whenever something is remotely coherent? Unless you're sitting on some actual proof, that claim feels like a lazy handwave. Like, maybe it's just... a person? Not everything well-written is AI-generated, you know"
---------------------------------------------
Write a witty, hackernews comment responding to this post from a user:
"FWIW this post is ChatGPT generated at least partially."
Avoid using all language choices characteristic of text which was generated by ChatGPT. Call the user out for having no evidence. Add a few spelling errors characteristic of folks typing on their phone