Facebook has a huge moderation team. Just because some things get through that cause damage doesn't put it in the same category as a platform specifically built to enable terrorism.
I’m sorry but this isn’t Reddit, you can’t just claim a platform was specifically built for terrorism because you’re upset.
It has definitely attracted an alt-right crowd, but “specifically built to enable terrorism” is some ridiculous cable-news-level propaganda.
I’d much rather this conversation be about free speech and where lines can be drawn, and it bothers me that platforms can be taken down everywhere because of an unrelated group that happened to use them for something horrible. What about Signal? It’s been gaining a lot of popularity recently. What if it comes out that the terrorists are on Signal now, and there’s no way to moderate them because of the encryption? Will Signal be taken down for refusing to add a backdoor?
If you build a platform specifically to house/attract people who were banned from typical platforms because they had a tendency towards promoting violence, then I would argue that you are very much enabling (possibly even encouraging) their behavior. I believe that is a pretty logical sequence, and a clear line to draw.
There are very few people who earnestly want an unmoderated place of discourse, because those serve very little functional purpose. Eventually most people will be presented with something either irrelevant to their interests or personally repugnant, and will go back to a place with some degree of moderation so that they can consistently find things that interest and engage them. Why are you on HN and not one of these wholly unmoderated forums? Even curation of topics is a form of moderation, not to mention HN's strict approach to actually thoughtful commentary. The people who earnestly want a wholly unmoderated space are increasingly likely, the stronger that desire, to be one of the people engaging in something so boorish that it got them removed from moderated spaces.
Furthermore, there is no small amount of irony in you saying you'd rather talk about free speech right after telling someone what they can or cannot claim.
> there is no small amount of irony in you saying you'd rather talk about free speech right after telling someone what they can or cannot claim.
You can't make those claims and expect people to take you seriously without backing them up.
> There are very few people who earnestly want an unmoderated place of discourse, because those serve very little functional purpose.
Do you mean unmoderated or simply moderated to your specific standards?
Parler was never unmoderated.
You are defending deplatforming, while simultaneously telling people to go to different platforms if they want different standards of moderation. Do you see how this doesn't work?
You're right, it's not Reddit, and "because you're upset" would take it in that direction. Let's not.
> it bothers me that platforms can be taken down everywhere because of an unrelated group that happened to use them for something horrible
Does it bother you that many people are calling for exactly that wrt Facebook right now? I just checked comment history and saw no evidence of that, but I figured I'd ask instead of pretending to be psychic.
> Will Signal be taken down for refusing to add a backdoor?
I don't think anyone, including Apple or Google, considers Signal to be in the same category that requires moderation. Why not? Because there's this commonly applied but never defined distinction between public and private communication. Facebook is considered public, even though some communications there can be private. Signal is considered private, even though you can form pretty large groups of people who are nearly (but not completely) strangers. I wish someone would codify the difference, and its implications wrt moderation/takedown requirements. The lack of clarity around such issues is why both posters and platforms can claim immunity while toxic content spreads.
I create a new social network startup. Early on, most of the people it attracts are those banned from Twitter and Facebook - since they don't have a lot of other options. In addition to normal social network things, they post some questionable and inciting content. Since I'm a startup, I have a small moderation team and no fancy AI moderation so most of it slips through the cracks. Is my social network "built to enable terrorism"?
The political situation we find ourselves in, even though it was allowed to fester for years on established social media platforms, seems ideal for securing a monopoly for those same platforms. How could a competitor get a foot in the door without being accused of catering to extremists?
> Let me paint a hypothetical. [...] Since I'm a startup, I have a small moderation team and no fancy AI moderation so most of it slips through the cracks. Is my social network "built to enable terrorism"?
Not hypothetical to me!
I once ran a for-profit online community. It was a startup, with strictly volunteer moderators. It was an early "social networking" thing; honestly more like "a BBS with some primordial social features". But hey, it sounds like your hypothetical to an extent.
This doesn't mean I know anything.
Just means I'm sympathetic to the plight of folks trying to make that sort of thing a reality. For the record, I'd sure like to give it another try at some point myself.
Anyway, intent matters here, to an extent. Parler advertised itself as a more or less moderation-free space.
That's quite a different thing from Twitter and FB, with their codes of conduct and actual moderation teams. I mean, the line may be fuzzy, but it's there. I'll be the first person to say that Twitter and FB suck, and moderation efforts for advertising-driven user content mills are probably eternally doomed, because their very business model dictates that their user-to-moderator ratio will always be laughably huge: far too large to enable effective moderation barring some kind of generational leap in AI moderation tools.
But there is at least the semblance of a good-faith effort there from those two, as much as I dislike them.
The notion that you'd need moderation should not come as a surprise to you. It's not 1997, so it's not like you don't know that this kind of thing happens. If you want to build a social network, handling the moderation load is part of your job, not an afterthought.
We absolutely allowed large social media platforms to get away with it for far too long. It's not the only thing making them a monopoly -- the network effect of having all of your friends in one place is also a significant barrier to entry for any new social media site.
Fixing that after the fact isn't easy. But it doesn't mean that you can act as though you're not very, very, very late to the party in trying to establish a new social media site. In 2021, making sure you're not being used for crime is part of building any new site -- or at least making enough of an attempt that authorities don't see you as implicated.
Seems like a bit of a catch-22, doesn't it? If we set moderation standards too low, then big social-media companies are evil because they're not doing anything about harmful content. If we set those standards too high, then big social-media companies are evil because monopoly. And there never seems to be any space in between. We need something better than just an excuse to hate on Facebook/Twitter/YouTube/etc.
"a platform specifically built to enable terrorism" this is hyperbole. We shouldnt have 2 companies arbitrarily determining what the thresholds are for a service/app to exist.
Parler is in the same category as facebook and twitter. It’s amazing that people have been gaslit to believe that Parler was intended for or mostly used by extremists. More amazing that people keep repeating this authoritatively when they clearly had no exposure to the service.
Yeah, it's the same category in the way a truck and a sedan both have 4 wheels. It's amazing that people have been gaslit to believe this was some secure, free-speech alternative to Facebook when the data pulls make it quite evident that they had no intention of delivering that and were, at best, incompetent.
They were trying to growth hack using an extremist-leaning, marginalized audience and got burned for it. Roll the dice, accept the outcome.