Network monitoring for unauthorized/unusual access: reading more into how this works, I don't think it would actually change anything. You could probably still discern scripted vs. manual shells; it would just be a bit harder.
Is there a market for publishing a retrospective of the process that resulted in a "late-found flaw"? I would expect even uninteresting data to be published, if only to prevent someone from going down that rabbit hole in the future.
I don't see them drawing any distinction between C and C++ or Rust there either. It really sounds to me like most of their low-level experience is in C, where the contrasts they appear to be drawing really do apply.
I don't understand. Isn't this already the case in America with Section 230? (The original post is about the Netherlands, so this is a tangent.) It's just that no one actually acts as a platform.
I'll daydream with you, though mine is different. The optimal social media, in my view, is one tied to your real identity. Moderation would only be applied under court order by the relevant jurisdiction, and only for views of the content within that jurisdiction. For example:
1) An American posts content critical of Indian officials. That content is restricted by order of an Indian court, and no such order is additionally given by an American court. It would be hidden from view within India but not within America. The inverse would also be true.
2) An Indian posts content critical of Indian officials. That content is restricted by order of an Indian court. America (or any other nation) has no duty to protect that speech and thus no claim over it. That content is censored everywhere.
Additionally, everyone would have client-side filters, which could be published. Emphasis on "published" because the publisher would be accountable for their words just as much as a newspaper within their jurisdiction, though they wouldn't need to say much (e.g., a list of people I [dis]like). Unique identity and nationality are the only filter criteria I can think of right now. More complex examples:
1) An American publishes a list of politicians who have made inflammatory public statements. They have evidence of this for each person on the list and make no additional assertions about their behavior. People not interested in such content could subscribe to the filter. (I guess people interested in chaos could view the inverse.) No court is willing to censor this list because their statements are protected speech.
2) An American publishes a list of men who have committed sexual crimes (such as in the original post). They assert it as fact, not as alleged crimes. They include someone who has not been proven in a court of law to have committed that crime. They can be sued for libel and possibly forced to remove the person from the list or reword the list description.
Anonymity between the user and the social media service wouldn't exist, but it might between users. The service could be mandated by a jurisdiction to unmask a user, or otherwise to ensure that the accused does not fall within that jurisdiction.
> Isn't this already the case in America with Section 230?
IMO, Section 230 is too relaxed for large-scale social media, but not relaxed enough for some other applications. It basically allowed big tech to moderate the public square like modding a game, to grow it much larger, to move people into biased groups, to keep a record of all conversations, and to train autocomplete bots on them.
I do not want more restrictions on existing apps. My point was that I want every local community to have its own online forum that's only accessible locally (through an RSA cert that rotates monthly, perhaps). They could even build minigames and cute events for themselves, which would boost local morale and the local economy.
I think your thoughts on real online identities are interesting, but I do not believe in either censorship or total free speech. It's like the halting problem: I don't think there is a fixed list of rules that can always tell you the correct answer (to censor or not).
That's also why it's important for local communities to make their own decisions.
There's a dead comment sibling to this one talking about how group admins can see through the anonymity. I don't understand why it is dead. It wasn't even rudely stated. Is it just flat wrong, or did it cross some unknown social boundary by pointing out a fact? Was it edited posthumously?
Edit: And when I refresh, it's not dead... Apparently I don't understand this system.
A quirk in my config that I haven't bothered to fix makes my formatter run post-write, so the file on disk isn't formatted until the next save. I've unfortunately picked up the habit of typing <esc>:w<cr>:w<cr> in rapid succession.
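For what it's worth, a minimal Vimscript sketch of the fix, assuming the formatter is currently hooked to BufWritePost and exposed as a (hypothetical) :Format command; moving it to BufWritePre means a single :w writes the already-formatted buffer:

    " Assumed current setup: the formatter fires after the write, so the
    " buffer gets formatted but the file on disk does not (hence two :w's).
    " autocmd BufWritePost * Format

    " Sketch of a fix: run the (hypothetical) :Format command just before
    " the buffer is written, so one :w saves the formatted result.
    augroup FormatOnWrite
      autocmd!
      autocmd BufWritePre * silent! Format
    augroup END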
> Having bad leaders in an authoritarian system (there are many in the military) effectively amplifies the problem they create which is likely where this bias comes into play.
Additionally, good leaders in an authoritarian system can be more effective. It's just that no one wants to make the gamble for society at large.
It's more or less necessary for grunts, where ultimately someone will be ordering another to endanger themselves. I don't think modern society has enough bloodthirsty people to field a military composed entirely of willing participants. We do have enough who think they're bloodthirsty to field our "voluntary" forces.