Hacker News

In principle, who could be opposed?

In practice, how will this actually work out?

True story. I have a friend who got a temporary ban from Facebook (I think 90 days?) for commenting on a story about a Texas billionaire paying to hunt endangered animals, "How much would it cost to hunt Texas billionaires?" That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

Moving on to this policy, I'm happy to see neo-Nazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as a separatist cause motivated by racial animosity. Will we see Facebook ban discussion of a future such measure on those grounds? And will it remain banned even for people who support it on economic grounds, because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?

Facebook is going to have some difficult conversations ahead. And the more of these lines that they draw, the more difficult boundary cases they will run into.




In principle, I am opposed. I don't think hiding unwanted dialogue is very helpful in the long run. It just makes it fester somewhere else that isn't as visible. If Facebook feels they have to do something about it, then I'd suggest just flagging it. That gives people more information that they can use however they see fit. There is also the thought of collateral damage, like your friend ran into.

I also oppose it on general grounds, in that I'm a big proponent of freedom of expression. Facebook can do what they want - we're not talking about the government here, and I understand that - but I would rather they didn't go this route. I don't want to live in a society where what can and can't be said is strictly regulated, either by force or by general consent. I seem to be one of the few who still feels that way, though.

There is also the practical concern of what happens if ideas of what is and isn't acceptable change over time. Who knows which of the opinions you hold now might become anathema at a later date? For example, the opinion I'm expressing now used to be a lot more common than it is today.


> In principle, I am opposed. I don't think hiding unwanted dialogue is very helpful in the long run. It just makes it fester somewhere else that isn't as visible.

Let's prod at this:

- Should ISIS recruitment videos and propaganda be allowed (nay, encouraged) because deplatforming them will make them put their recruitment pamphlets elsewhere? Where else do they put their recruiting materials?

- Should we publicly and loudly encourage people to self harm, ideate suicide, etc. because if we don't, they'll just find secret places to do it?

- Should we consider white supremacist recruiting materials "dialogue" at all? If we're talking about dialogue, as in a formal debate between Richard Spencer and pretty much anyone else, one on one, that's potentially interesting (also embarrassing for Spencer). On the other hand, a video posted by a white supremacist isn't dialogue. It's even less dialogue when they can delete comments they can't aptly respond to, and when the people there are already interested (Richard Spencer videos were never going to cross my feed). You're calling this dialogue, but it's really a one-sided dog and pony show with maybe some unlucky sacrifices. Perhaps we shouldn't ban dialogue between white supremacists and normal people, but you're not advocating for dialogue; you're advocating for Facebook (and whomever else) to support (distribute, platform, etc.) white supremacist propaganda and theater.

>I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent.

How do you propose to create a society where people aren't allowed to dislike you? That's what you're asking for, essentially. "Freedom from consequences" is the common way of putting this, but really what you're asking for is an infringement on my freedom of association. If you piss enough people off, that'll come back to bite you. What's the alternative? That people can't hold you accountable for your previous words? That quickly devolves into the society of 4chan, and I'm not sure why you'd want to live in that.

>For example, the opinion I'm expressing now used to be a lot more common than it is today.

How certain of this are you? Did black people or women have the freedoms you suggest 100-150 years ago in the US? Was anyone advocating for that?


> Should we consider white supremacist recruiting materials

How do we define "white supremacist recruiting materials"? A significant number of people have said to me, un-ironically, that opposition to immigration is an instance of white supremacist speech. Same with opposition to affirmative action (they even called a crowd of mostly Asians opposing affirmative action white supremacy in action). And I live in the Bay Area, the same place where Facebook is headquartered. That's the problem with trying to define ideological blacklists. Once you put a category onto the blacklist, everyone will try to push their political opponents into that category. This is how you get things like "learn to code" becoming a ban-worthy offense (but only when directed at journalists).

Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".


>Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".

Do they define ISIS? That didn't seem to cause as much concern.

As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.

I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that was happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.


> Do they define ISIS?

ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.

> As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.

"Preventing people from trying to kill people" was already against their terms of service. The whole point of this announcement is that Facebook is expanding its prohibited categories beyond "preventing people from trying to kill people".

> I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.

As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist. Others have told me that supporting any expansion of immigration restrictions is white nationalist. A few have even told me that opposition to affirmative action (even among groups in which Asians are the ones primarily opposing it) is white nationalism. I did not get the sense that they were saying these things ironically or in jest. Are these views tantamount to "preventing people from trying to kill people"? I live in San Francisco, which, while not exactly the same environment as Menlo Park, is still in the same metro area as Facebook's HQ. There's a significant possibility that folks with similarly liberal definitions of white nationalism exist at Facebook.

Also, the way you say you would "probably try to distance myself from the things that were causing those mistakes" really makes it sound like the chilling effect this has on discussion is a feature, not a bug. A significant number of people suspect that tech companies' expansion of prohibited speech categories is becoming a means of partisan manipulation. Statements such as yours likely reinforce this belief.

> Your view appears to be that it's better to just prevent people from voicing their confusion.

I am not trying to prevent anyone from voicing anything. The issue is that a significant number of people do confuse (or deliberately label) mainstream political views with "white nationalism" and "white separatism". Thus, Facebook's banning of these things is very likely to be seen as - and perhaps actually be implemented as - a means of suppressing legitimate political discussion. It probably would have been better to keep their prohibited categories the same, police certain circles more aggressively, and keep their policy to - as you put it - "preventing people from trying to kill people".

There's already enough suspicion that Facebook is acting in a partisan manner, and more stuff like this is going to inspire ever greater calls to enforce stiffer regulation on tech companies and perhaps even breaking them up. This announcement seems like a shot in the foot for Facebook.


>ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.

Are you suggesting that groups like the Daily Stormer, the National Policy Institute, etc. are not organized political groups? The National Policy Institute is quite literally a lobbying organization.

>As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist.

Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.

Something about everywhere you walk smelling like shit and all.


> Are you suggesting that groups like the Daily Stormer, the National Policy Institute, etc. are not organized political groups? The National Policy Institute is quite literally a lobbying organization.

Facebook's announcement does not specify these groups in particular, and does not seem to indicate that its definition of "white nationalism" and "white separatism" will be nearly as clearly defined as banning recruitment to the Caliphate.

> Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.

> Something about everywhere you walk smelling like shit and all.

Sorry to burst your bubble, but calling people white nationalists almost certainly isn't going to change their views. Quite the opposite: all it accomplishes is lessening the severity of these terms, making it such that people roll their eyes when they see the label thrown around. And it further alienates people like me, who do support immigration and affirmative action etc., but are increasingly turned off by the ever-diminishing threshold at which terms like these get thrown around. Furthermore, it makes it even harder to distinguish between actual white nationalists and people with legitimate views who have been painted with the white nationalist brush.

Something about crying wolf and all.


[flagged]


> So your concern is literally just "I don't know how this will be enforced." If so, why not just...wait, and voice your concern when you have evidence of facebook abusing this, instead of doing what you're doing now, which looks to me a lot like what you disdainfully refer to as "crying wolf".

After that, the damage has been done and trust in Facebook will have been diminished. And there is a very big difference between "crying wolf" (as in, actually mislabeling something benign as something dangerous) and voicing concern based on previously observed behavior. Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.

> Sure, but there are few enough white nationalists now that I don't particularly need to worry about them. The issue is if their mindshare grows. And calling them out keeps it that way.

You seem to be missing the core point I've been making. The issue is not with actual white nationalists, who, as you point out, are few and far between. The issue is with mainstream views getting consistently labeled as white nationalist, which introduces several issues. One, it makes distinguishing between the former and the latter more difficult, making it very likely that mainstream views get banned under the label of white nationalism. And two, it makes it so that people are less concerned with claims of white nationalism, thus making it more acceptable for actual white nationalists to operate openly.

> See, I know a lot of people, and the only ones who ever say things like this are ones on the internet. I've never met a real actual human who is so concerned at the thought of someone calling someone else a name that they're going to stop supporting immigration reform or affirmative action.

You're right, but you're refuting a straw man. Alienation doesn't mean ceasing support for particular issues. More often than not it means refusing to identify with political groups. This isn't speculation, this is backed up by evidence. Record numbers of people don't identify with either Democrats or Republicans [1].

> Like I said, you seem really, really concerned about not being called a white nationalist. If you really do support things like affirmative action and reasonable immigration policy, I'm not sure why you have such a concern. Either no one is calling you a white nationalist, in which case, again, why do you care about the nonexistent strawperson who might do so? Or, the one person who is, is so fringe as to be easily ignored.

Yet again, you're talking about things I never wrote. I do not get called a white nationalist and I don't think I ever have been. But I do see co-workers and former classmates call mainstream views white nationalist, and I do see how it makes political discussion toxic and non-productive. These aren't strawmen; these are people I work with every day who are adopting stances that make it impossible for them to engage with people with opposing political views beyond hurling insults. Even though they aren't calling me a white nationalist, they're still calling my conservative family members and friends white nationalists when they say things like supporting the border wall makes someone a white nationalist. This isn't healthy for a democracy, and it makes me not want to identify with the groups that they are a part of.

And to circle back to the original point that was made, the prevalence of mainstream views getting called white nationalist makes it a real possibility that Facebook will start banning mainstream political views. If "build the wall" starts getting banned as white nationalist, as many of my co-workers want it to be, then Republicans are going to be very eager to bring down the regulatory hammer on Facebook, and perhaps on big tech companies in general.

[1]: https://news.gallup.com/poll/225056/americans-identification...


>Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.

So, you have historical evidence of facebook mislabeling things that are clearly not white supremacist as white supremacist?

If not, then yes, you're absolutely crying wolf, because you're ascribing the behavior of some entity to some unrelated entity, apparently based on geographic location (fun fact, the people enforcing FB's policies are probably in Austin or Phoenix[1], not MPK).

Also, your poll is outdated[2]. The numbers are back up. And looking at broader trends, more people identify as liberal than ever before[3]. So all of this stuff that you think is causing a shift towards conservatism, or at least away from identifying as a Democrat, isn't: people are more willing to identify as Democratic and liberal now than in 2016 or 2012. I don't think these things are related, but since apparently you do, I hope this helps you understand that your reaction appears to be the minority reaction, and that the public shaming you so despise is working.

[1]: https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

[2]: https://news.gallup.com/poll/15370/party-affiliation.aspx

[3]: https://news.gallup.com/poll/245813/leans-conservative-liber...


Allow me to summarize the story of the boy who cried wolf, because you seem to not have the correct understanding of the story:

In a village there exists a boy that is tasked with protecting a flock of sheep by shouting "wolf!" if he sees a wolf, to alert the townsfolk to come to his aid. Out of boredom (or self-satisfaction of his ability to get a reaction out of the townsfolk, depending on the variation) he shouts "wolf!" despite not seeing any wolf. After a couple instances of false alarm, the townsfolk no longer heed the boy's alarm and do not come to his aid when a wolf really does attack the flock.

The boy knew that there was no wolf attack, but claimed that a wolf was attacking the sheep anyway. I am doing no such thing.

Facebook has only just announced this policy, so no one has observed how they are enforcing it. This is obvious. The post itself states that the policy will only go into effect next week. That you are asking me for historical evidence regarding a policy that has yet to go into effect does not indicate that the original post was read in much detail (though it does make your previous statement that this announcement is about "preventing people from trying to kill people" a lot less surprising).

I am not claiming, and have never claimed, that Facebook is using the guise of white supremacy to ban mainstream politics. Again, this policy isn't even in effect yet. I am, however, highlighting the fact that a significant segment of Facebook's workforce (tech workers in the Bay Area) espouses a view of white supremacy that does categorize things like opposition to affirmative action and immigration as white supremacy. Will this impact the enforcement of the policy? I don't know, but my take on this situation is that it's a significant risk.

Also, I'm not sure why you're claiming that my data is outdated. My data was from 2018, at which point independents were at 44%. The latest figure on your linked Gallup poll is 42% - not very far off. It's still significantly above the historical average of the mid 30s.


If you want an analogy, then a white nationalist terrorist organisation like, say, Atomwaffen or Combat 18, would be the equivalent to ISIS. There's no problem banning those - indeed, we already have laws for that.

On the other hand, banning white nationalism as a whole would be more like banning Salafi Islam as a whole, instead of specifically ISIS, al-Qaida, al-Shabab, Boko Haram etc.


>There's no problem banning those - indeed, we already have laws for that.

This is not correct. It is not illegal to be a member of, or recruit for, Atomwaffen. It's illegal to commit crimes. But associating with a group isn't a crime (for good reason!). That doesn't mean that we shouldn't take steps to keep people from associating with people who will cause them to commit crimes.

Or should we just go whole hog and encourage MS-13 to post recruitment videos on YouTube too?

As for Salafism, no, that's akin to something like the WBC, which, while reasonably considered a hate group, hasn't been banned from anywhere as far as I know.


My understanding is that membership in terrorist or rebel groups is illegal (and in fact can be grounds for US forces to kill US citizens without trial, as with several US citizens killed while members of al-Qaeda*). But membership in groups that are... let's just say "unsavory" is not itself illegal. I don't know enough about groups like Atomwaffen to determine whether they belong to the former or the latter.

\* Which did stir up some controversy, but is not all that surprising. The precedent for this dates back to the US Civil War; it would be ridiculous to claim that the Union Army criminally murdered hundreds of thousands of US citizens on the battlefield without trial when the latter were fighting for the Confederacy.


What about this for an idea (and I'm not sure I love it, but it's just spitballing): We allow expression of any _ideas_, but we put a limit on _recruitment_. I.e., anybody gets to say "White people need a home country" or "the Caliphate shall reign supreme", but nobody gets to say "come to the meeting at 10th and Main, 5 o'clock". That way we at least know what's on people's minds, so we can engage them, but we're putting a limit on helping them grow the movement (until and unless we decide they have some sort of valid point, which is part of the point of letting them speak).


The recruitment rarely happens on the platform. The "normal" process is:

1. Get exposed to $radical_group on social media

2. Go searching for $radical_group because their ideas are intriguing/edgy/whatever

3. Find the actual site of $radical_group, and now they control the entire process.

Think of it this way: you see a poster for some cool band that says "RadicalPandas's music will change your life. We have concerts, but they're secret and we can't tell you where they are because the man wants to keep us down". If anything, that sounds even more edgy and intriguing. So instead you just have to tear all of the posters down.


People with mental problems are always going to exist. Using them as props to keep chopping away at the boundaries of free speech is foolish.


The really radical stuff doesn't happen on open platforms, they're more of a tool for early exposure.

The actual planning and such happens on private sites with secret forums.

As a real-world example, a Danish right-wing nationalist network called ORG was exposed in 2011, after having allegedly been active since the mid-80s. Its members counted a number of well-to-do individuals, as well as members of the police and known violent white nationalists. The group ran and had control over several sports clubs, including martial arts and gun clubs, which they used for training members in street fighting and tactics.

They ran a database of basically every politician and public figure who ever espoused left-wing views, and thousands of ostensibly left-wing citizens as well, labeling them as "traitors" to be "dealt with".

At least one of their members was from the armed forces in some security-related capacity, and they had reasonably good opsec procedures in place.

I have no doubt that even after being exposed and having its members publicly named, the group or a new equivalent still exists, with new secret sites and a heightened level of paranoia towards possible infiltrators.

By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.


Do you have a way of going about this that isn't authoritarian? (Extending the definition of "authoritarian" to Facebook acting within its own platform). EDIT: Sorry this was really vague. By authoritarian in this case I mean that some authority has to decide what is and isn't an acceptable opinion (as opposed to, say, deciding on an impartial process that somehow weeds out bad opinions).

And somewhat related, does this mean you have as dismal a view of humanity as it seems - that you don't want people exposed to dangerous ideas? This all seems necessarily paternalistic. EDIT: I can't imagine, for instance, trusting in democracy with such a view.


I agree with KozmoNau7 here; the many proposals for constructing some system that will somehow weed out the "bad" ideas seem misguided, in my opinion. The reason is that these systems necessarily treat all ideas as equally valid inputs by requiring that anything be allowed. And this despite bountiful evidence to the contrary - evidence that fascism and racism lead to horrendous consequences if left to spread through disingenuous tactics and deception.

In fact, we humans already form a decent system for weeding out bad ideas/opinions, we just don't listen to our own past experiences on the matter. We found out that fascism is putrid and yet now we try to come up with a new system that will weed it out instead of just chucking it into the garbage bin and moving on.

It's not paternalistic or a dismal view of humanity or that "humanity cannot be trusted with such a dangerous idea", it's because the people advocating for it constantly lie about it and intentionally trick/indoctrinate others into following. Under the right circumstances of me growing up, I fully believe someone could have deceived me into believing it, so I don't think I'm better than anybody who got sucked in. We can trust in democracy as long as we take proper precaution against things that prey on the freedom of expression and association in order to remove those rights from others.

Put another way, what real benefit is there to "freedom of speech, except for advocating fascism" as opposed to "freedom of speech, no exceptions"? The US already doesn't have pure unrestricted free speech because there are exceptions made for outlier situations where free speech is not protected in efforts to secure the safety of others. That hasn't led to total collapse or censorship.


I completely agree with what you wrote. I think too many people erroneously presume that free speech somehow guarantees an audience and acceptance, and when neither one happens then they feel their rights are being violated.

Personally, I feel there should be legal protections for all speech, but not protections from social ramifications. By this logic, I am totally fine with the idea of punching fascists. Which itself could be hit with assault charges, but then, how many juries would disagree with the reasoning for the violence? Let the masses decide for themselves, basically. It's what bothered me about the firings of both James Gunn and Roseanne, despite their polar opposite politics. The companies that employed them cared so much about what the public thinks or feels that they couldn't be bothered to let the public decide for themselves whether to support either celeb. (And I know Gunn has since been rehired, but this happens so often that my point stands.)


In the case of ORG, it was exposed by a leftist research network that is primarily anarchist or non-authoritarian communist in nature. As far as I know this research network has no central authority, but rather a horizontal democratic structure.

I lean towards anarcho-communism myself, generally close to the original communist definition of libertarianism. So my answer to which authority should run things is "none". Facebook runs their own ship and are free to choose which content they will platform. While I would prefer a complete absence of hierarchy in all aspects of life, I think we can agree that this is probably not going to happen at FB.

I also strongly support anarchistic deplatforming and generally hindering fascists, nazis, racists and other bigots from spreading their noxious views. We have to realize at some point that some views simply aren't worth spreading.


> By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.

Do you have any evidence to support that statement? Would you expand on why you think so? I would estimate that by driving them further underground, you make them seem cooler in the eyes of angry teenagers. You also make them more radical, because underground their opinions are not exposed to contradicting views. In my opinion, we should do exactly the opposite: we should give them a platform to speak and oppose them. Isn't that what an open society is about?


The number of people being radicalized by ISIS decreased, and their social media reach disappeared.

There are a few studies in this thread that conclude similar.

The whole "sunlight is the best disinfectant" line is only a trope, not a truth.

See also any conspiracy theory and the anti-vax movement. When groups appeal to fear, not logic, as a recruiting tool, you can't logic people away.


Who's "we"?


For exploratory purposes, just imagining we can make an agreement. Don't worry, I don't like the concept of collective decisions either.


Just to put this out there from the beginning, I don't know what the exact appropriate balance between censorship and freedom of expression is. Also, FB can do what they like and don't have to listen to me.

With regards to ISIS, if their recruitment videos are publicly available, then people can contest them publicly too. They can explain why this viewpoint or whatever might be appealing to certain demographics, but look at this evidence for what happens to those people once they join.

The self-harm / suicide question is harder for me. I still think it's a good idea to have this happen in the public sphere so people can expose those who encourage such things in other people. However, if an individual is being targeted specifically then maybe there should be a line there. I don't know.

As for white supremacy, yes, I said "dialogue" when what I really meant was "speech". I don't know who Richard Spencer is, but if he's blocking comments then people can still post rebuttals in other videos.

I guess my main point is that it's better to confront such things openly than to try to sweep them under the rug.

There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something. The danger, in my opinion, lies in trying to bury it. Then when people stumble upon them, or are recruited and given a login somewhere, they have access to only one side of the argument and are effectively in a bubble. In the public sphere where this plays out openly, they have ready access to conflicting information on the same platform.

I don't want a society where people aren't allowed to dislike me. I don't quite get where you're coming from. I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.

I wasn't alive 100 to 150 years ago. However, growing up 40-ish years ago, I remember hearing such things as "I don't agree with what you say but I will defend to the death your right to say it". Not something you hear much, any more.


> There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something.

At scale, this is exactly the case. If the probability of the average person becoming radicalized is greater than the probability of the average radical de-radicalizing, then open discourse will, on average, increase the number of radicals until those probabilities equalize.
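The claim above can be sketched as a toy two-state population model (the conversion rates below are invented purely for illustration, not empirical estimates):

```python
# Toy two-state model: each step, a fraction p_in of non-radicals is
# radicalized and a fraction p_out of radicals de-radicalizes.
# All rates here are made up for illustration.
def radical_share(p_in, p_out, r=0.0, steps=10_000):
    """Iterate r' = r + (1 - r) * p_in - r * p_out until it settles."""
    for _ in range(steps):
        r = r + (1 - r) * p_in - r * p_out
    return r

# When radicalization outpaces de-radicalization (p_in > p_out), the
# radical share climbs toward the fixed point p_in / (p_in + p_out),
# no matter how small the starting share is.
print(radical_share(p_in=0.02, p_out=0.01))  # -> about 0.667
```

Under these assumptions, the equilibrium depends only on the ratio of the two rates, which is the point: whether open discourse shrinks or grows a movement turns entirely on which conversion rate is larger.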

So now I ask you: how successful have you seen discourse at deradicalizing Jihadis, white nationalists, westboro baptist church members, anti-vax parents, or flat-earthers? Is it more, or less, successful than those groups at radicalizing new people?

>I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.

Someone says "I want to diddle your kid". You tell them to never speak to you again. They say it to someone else. That person blocks them too. The same person keeps telling everyone that they're interested in child molestation. Free association says that we should all be able to shun that person because yikes.

But you say something different. That we shouldn't shun the potential kid-diddler. We should continue embracing them and their views, because shunning them censors them. And all views should be cherished and protected.

Or maybe you aren't saying that, but then it's very tricky, because if everyone blocks them on Facebook, they've been deplatformed. So why is that okay if Facebook banning the person isn't? Or maybe it's that proclaiming support for child molestation isn't okay, but proclaiming support for genocide is. In which case we're right back where we are now, just that your definition of "for the most part" is "expression of child molestation is bad, but white supremacy is fine". This brings me to my final point:

>I don't agree with what you say but I will defend to the death your right to say it

This only works when what you're saying isn't a threat to me. See how willing people 30-40 years ago were to defend to the death the right of black power groups to call for black empowerment. (Hint: people got so scared that the Republicans banned guns.)

If I start saying "we should kill all free-speech absolutists", how long are you going to defend my ability to say that? Only as long as I don't have power. Once I have the ability to actually carry out my threats, say, if I'm the president, do you really want me saying I'm going to kill some subset of society? Are you going to defend my right to say that? What if you think I might actually carry out what I'm claiming?

As long as the speech isn't a threat, people are okay with defending it. This is why no one really cares about flat-earthers: they aren't killing anyone; they're just the easy butt of jokes. Same with, until very recently, anti-vaxxers. Jenny McCarthy is kooky and their stuff is nonsense, but what's the worry? Well, measles, the loss of herd immunity, etc. -> hmm, maybe we should stop this kind of thing.

Sometimes it's possible to make the fix reactively. Kicking unvaccinated children out of school stops much of the harm that unvaccinated children cause, and in many cases forces parents to vaccinate their kids despite their views.

Unfortunately for explicitly violent groups, there isn't an easy reactive solution. You can't un-shoot someone.

As soon as people are threatened, they stop defending the speech. "Expression of white supremacy is acceptable, but child molestation isn't" is just an expression of what you find threatening. What you're seeing is actually a shift toward more people speaking. Those who were voiceless before are speaking out and saying hey, this was really shitty before, let's not go back, and all the people saying otherwise are actually, truly threatening to me, so perhaps let's have them not say these things.

And on the other side you're seeing people threatened by this change in how speech is considered, and saying "hey, actually, let's legislate these platforms so that I can keep spewing my garbage". White supremacists feel threatened by censorship; black people don't. That should tell you all you need to know.


The problem is who defines what hate and a hate group is. It is usually the people you don't want doing so: people who want to tell you what to think. E.g., to some, Ben Shapiro, Jordan Peterson, and Thomas Sowell are considered white supremacists.


That is a general problem, but it doesn't address the argument. We already consider ISIS recruitment videos and suicide ideation groups bad, so where do we draw the line? If it's so difficult to define what should not be allowed in the open, then I don't know what can be disallowed, including ISIS recruitment videos, in the name of freedom of speech.


Talking about ISIS is a red herring. Are they considered white supremacists now?

If we are to judge by social media companies' past behavior [1], it's regular conservative and liberal opinions that will be considered hate speech or white supremacy.

[1] https://www.google.com/amp/s/www.foxbusiness.com/technology/...


The comment I originally responded to wasn't about white supremacy, but about censoring discourse. ISIS propaganda is as much "discourse" as white nationalist propaganda is, so why didn't you defend ISIS when it was deplatformed years ago? I realize that's kind of a gotcha, so I'll ask a similar question: given that it appears you feel that any deplatforming based on ideology is bad, do you criticize facebook for having deplatformed ISIS?

To pre-address your other comment, neither white nationalist propaganda nor ISIS propaganda necessarily includes calls to violence. So the incitement of violence standard applies to neither.

If not, what differentiates ISIS from White nationalists? Both are violent groups that wish to kill people different from them. As far as I can tell, one just looks a lot more like me. They're both dangerous, and we shouldn't let either recruit people on facebook. What is the flaw in that line of reasoning?

As for that case, it was dismissed earlier this month[1], apparently because the plaintiffs didn't actually have any examples of wrongdoing on the part of companies, and just filed the suit to raise awareness of freedom watch's advocacy[2].

[1]: https://www.pacermonitor.com/public/case/25495855/FREEDOM_WA...

[2]: https://law.justia.com/cases/federal/district-courts/distric...


ISIS has a clearly defined religious teaching, and a core part of their message is using power as well as violence to push their religious teachings onto others to create a worldwide Islamic caliphate.

On the other hand, what is being called alt-right and white supremacist by mainstream organizations is bullshit:

- The Economist called Ben Shapiro alt-right today: https://mobile.twitter.com/TheEconomist/status/1111248348114...

- The Guardian says Jordan Peterson is supposedly alt-right: https://www.google.com/amp/s/amp.theguardian.com/science/201...

- Bret Weinstein is supposedly alt-right

And ridiculously enough, Ben Shapiro is a Jew who is the biggest hate target of the alt-right, and Jordan Peterson is hated by the alt-right, including Richard Spencer and Vox Day.


You're including the alt-right in an argument about white supremacists. Could you perhaps answer my questions without talking about the alt-right, who aren't being discussed by anyone but you?

White supremacists also advocate violence against other races, often to subjugate them. This is highly similar to ISIS.

Or are you claiming that all members of the alt right are white supremacists?


ADL defines alt right and white supremacy to be the same: https://www.adl.org/resources/backgrounders/alt-right-a-prim...

The SPLC also says they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...

The opinions of the SPLC and ADL are especially relevant because mainstream media and social media companies such as Facebook seem to have relationships with them, as well as with similar orgs, to define actions against people.

So no, we are talking about the same poorly defined term, because these orgs are an authority to Facebook.

The reason it's bullshit is that these orgs don't relate it to truth and evidence; they use their power to slap this label onto anyone who disagrees with them and punish people for things they didn't do.


>This vague term actually encompasses a range of people on the extreme right who reject mainstream conservatism in favor of forms of conservatism that embrace implicit or explicit racism or white supremacy.

Unless you're of the opinion that "racism" and "white supremacy" are synonymous, the ADL does not define them as the same.

>So no, we are talking about the same poorly defined term.

To be clear, the "alt right" isn't white nationalist, it is, however, a white nationalist recruiting tool. Sort of like how you don't just start out as a random person and then the next day you become Jihadi Jane. You're slowly radicalized. The alt right to full-on white supremacist pipeline is the same way.

You start off interested in self help, so you read 12 rules for life; then get caught up in Peterson's weird ideas about western culture; then pretty soon you're seeing not just his youtube lectures, but other lectures about western culture; and then videos about the decline of white/western culture; and then you're watching Richard Spencer; then you shoot up a church.

Now absolutely, granted, not everyone follows the entire path, very few people end up radicalized, but while I'll absolutely agree with you that Peterson isn't a white supremacist, he's absolutely a useful idiot for them.

But again, let's stick to people who have been accused of being "White Supremacists" since you're the one using the term alt-right, not Facebook. Which innocent people are getting accidentally confused for "white supremacists" specifically?


From the same ADL source on white supremacy:

> White supremacist Richard Spencer, who runs the National Policy Institute, a tiny white supremacist think tank, coined the term “Alternative Right”

Or from the SPLC source:

> The racist so-called “alt-right,” which came to prominence in late 2015, is white nationalism’s most recent formulation

Both clearly use them in the same breath.

We can argue semantics about one being the tool for the other, etc. But that doesn't change the fact that they are tightly related by organizations that are an authority to Facebook.

The SPLC and ADL are very aware of their power, and have related many of the prominent dissenters of wildly different ideologies to what they claim is the alt-right: Maajid Nawaz (a Muslim; they lost a lawsuit on this one and apologized), Jordan Peterson (an individualist liberal), Ben Shapiro (a pretty normal conservative), Dave Rubin (a classical liberal), etc.

The SPLC and ADL are, by this evidence, too often bullshitters who don't relate what they say to truth, but instead to their ideology, which is not preoccupied with truth-seeking. It is really a shame their views are elevated like this as a justification for Facebook's and other organizations' abuse of power.


You still haven't given an example of someone being called a "white supremacist". Would you please? Or admit that you are fearmongering.

Again, Facebook didn't use the word "alt-right". They're talking about white supremacists. You're arguing that these are the same thing. So if that's the case, you should have ample examples of people calling Jordan Peterson a white supremacist as well.

If not, then maybe they aren't the same thing, and your equivocation is unwarranted and you should stop attempting to confuse the subject.


With regards to examples, I can do better than showing how people use them in the same breath. Here [1, 2] are examples where the SPLC specifically uses alt-right and white supremacy to describe the viewpoints related to Jordan Peterson, Ben Shapiro, Dave Rubin, etc. in the same article.

I've also shown that even the authoritative sources Facebook trusts use alt-right and white supremacist in the same breath when defining the terms. These articles are no accident; this is what they believe.

This is not fearmongering. This is people who don't mind using their power to accuse viewpoint opponents of things they didn't do, claiming with no evidence that they hold reprehensible viewpoints.

Edit: can't reply due to comment depth limit, but the debate here is about Facebook suppressing people like Jordan Peterson, Ben Shapiro, and Dave Rubin using arguments made by the SPLC and ADL such as the ones in [1, 2]. Why do they have the right to suppress other people's viewpoints on dubious grounds with no recourse?

It is pretty clear from your arguments that you agree they are related, or as you say, "a white nationalist recruiting tool", regardless of how you otherwise view them. The process Facebook is instituting will therefore suppress these viewpoints based upon no evidence, by subjectively mischaracterizing them, with no recourse for this abuse of power.

[1] https://www.splcenter.org/hatewatch/2018/06/07/prageru%E2%80...

[2] https://www.splcenter.org/hatewatch/2016/08/25/whose-alt-rig...


Right, so both of those articles said exactly what I said: there's a process of radicalization, and people like Rubin (and Peterson) feed into that process.

Once more: please give an example of someone calling Jordan Peterson a white supremacist, or alternatively admit that no one has done so and you were fearmongering.

Not someone saying JBP associates with white supremacists, or serves as a useful idiot for them; that's all well known. You claimed that people called Jordan Peterson a white supremacist. You still haven't justified that claim, and that's because, quite simply, it's false.

You're fearmongering. No one has called Jordan Peterson a white supremacist; that's just a factually incorrect statement. They've said he unintentionally helps white supremacists, that he occasionally associates with them, takes selfies with them, sure, but for all your trying, you still haven't been able to find someone actually calling JBP a white supremacist.

So perhaps, just maybe, your worry is misplaced.


I'd like to more clearly find out where, and if, we disagree where it matters most. Do you think it is justified to ban or suppress content on social media from people that the SPLC/ADL view as part of the radicalization process, e.g. Jordan Peterson, Ben Shapiro, and Dave Rubin?


>Do you think it is justified to ban or suppress content on social media from people that SPLC/ADL view as part of the radicalization process? E.g. Jordan Peterson, Ben Shapiro and Dave Rubin.

I certainly do, but evidence (e.g. historic enforcement patterns) indicates that Facebook does not.

Note that my opinion on this isn't global and without nuance. There are things that Jordan Peterson does and says that aren't, at all, controversial and are even occasionally interesting and thought provoking. Not everything he does radicalizes people. Same with, I assume, Rubin and Shapiro, although I don't pay them enough attention to know or care either way.

However, that's all beside the point. Facebook doesn't appear to think that they're worth censoring. I'm not sure why my opinion is at all relevant.

You still haven't given me that example, by the way.


Huh? How is it a red herring? Their propaganda is considered speech that is worth censoring. Maybe specifically naming ISIS makes this too easy. How about generally censoring radical Islamist propaganda that doesn't necessarily name ISIS specifically? The point that you're not really replying to is that people are apparently willing to censor speech for various reasons, one of which is its link to a particular ideology. Is this okay or is it not?


Free speech doesn't cover incitement to violence; it is already illegal, and that is why it's a red herring. Advocating violence against innocent people is not primarily ideology, but someone being a bad human being who should go to jail for breaking the law.

You really don’t need hate speech and white supremacy policies to prohibit speech that is already illegal, but you need it to suppress speech that some people might disagree with when they want to abuse their power to suppress it.


Regarding the first paragraph, I know, but that isn't what's under discussion. What is under discussion is censorship of speech due to ideology, perhaps because it leads to violence even though there are no direct calls for violence. White supremacy and radical Islamism both need not call for violence, but the ultimate conclusion of each ideology is violence. It sounds like you don't want to censor either, which I think is consistent.


You are still assuming a clear definition of what white supremacy is, which Facebook doesn’t provide.

How do you think about the false positives?


The ADL defines the alt-right and white supremacy to be the same: https://www.adl.org/resources/backgrounders/alt-right-a-prim...

The SPLC also says they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...

These orgs provide the best definition we can get right now, because Facebook relies on others to define the term instead of providing clear definitions, and these orgs are an authority to Facebook. If there was ever an attempt at using power while abdicating responsibility, that is it.

The reason why any action built upon this is bullshit is that these orgs don't relate it to truth and evidence; they use their power to slap this label onto anyone who disagrees with them and punish people for things they didn't do.


You were given a definition of white supremacy below and elsewhere so I won't repeat others. It just seems like under this sense of freedom of speech we shouldn't censor radical Islamism or white supremacy if we can't ban speech merely due to ideology leading to violence. That is a consistent and fair sense of morality I think.


That is a bit extreme from my worldview, and not necessary for a more balanced view. I am arguing that Facebook's process is biased, and that Facebook's authoritative sources are ideologically biased in a way that is not evidence-based, and unaccountable when it comes to fixing their own mistakes. A better process, with an adversarial appeal involving authoritative viewpoint opponents, would have compensated for some of this.

To figure out if we disagree where it matters most: do you think it is justified to ban or suppress content on social media from people that the SPLC/ADL view as part of the radicalization process, e.g. Jordan Peterson, Ben Shapiro, and Dave Rubin?


Free speech in the US does actually cover incitement to violence; in Europe it doesn't.


Incitement to imminent lawless action is illegal: https://en.m.wikipedia.org/wiki/Imminent_lawless_action


Well, ISIS is a criminal organization for reasons unrelated to their speech. Once they have become one, it's not unreasonable to make their recruitment videos illegal to distribute to other people, on the basis that doing so amounts to conspiracy to recruit people to commit crimes (but mere possession and watching them should still be legal, because in and of itself that doesn't help them).


What happened to Heather Heyer was also not speech. What happened to Clementa Pinckney and 8 other African-Americans at the Charleston church shooting was not speech. What happened at the Tree of Life Pittsburgh synagogue shooting was not speech.

Shouldn't the same argument apply to the videos that radicalized those murderers?


No. Why should it?


My parent comment had said that conspiracy to recruit people to commit crimes is not an unreasonable basis for making it illegal to distribute recruitment videos for ISIS, because they are a criminal organization due to their violence, not their speech. That implies that that should also not be an unreasonable basis for making it illegal to distribute recruitment videos for violent white nationalist organizations.

Does that answer your question?


Yes, thank you for explaining. I misunderstood the context of your question.


If you can prove that any particular video did so, sure, I don't see a problem with that.


Are you blind? White supremacists kill more people in the West than ISIS does.


You can improve this comment by removing the question and providing a source for your claim.


Rubbish. You're claiming that "white supremacists" kill more people than the Bataclan murders, the Ariana Grande concert bombing, and a host of other atrocities (Sweden, etc.) carried out by self-proclaimed ISIS supporters.

That's not only wrong it's highly offensive.


You are offended by an unsupported claim, and proceed to make an opposite unsupported claim. Now I'm offended not by your assertion, but by your lack of substance.

I googled for "extremism murder statistics". This is only the US [1], and I have not looked at the methodology, but it supports the GP's claim, not yours. Please discuss, but do so using substance, not master suppression techniques.

[1] https://www.adl.org/murder-and-extremism-2018


Well, I think the problem lies in that all his examples were from outside the US, and not only during 2018, which is the time and place of your example. Here's another source covering only the US that seems to agree with Wildgoose [1] (pdf), spanning 2010-2016. This source certainly doesn't count as many deaths: your link says there were 313 deaths from right-wing extremists from 2009-2018, with 50 in 2018 alone, while this one, covering only 2010-2016, counts 140 deaths total. I'm not sure what the discrepancy is in not accounting for all the deaths from your link. This link states that from 2010-2016, 68 deaths came from Jihadist-inspired terrorists while 18 deaths came from white nationalists and extremists. Again, it would be nice to see data for the whole of the West.

Just to give you another source from the same people, which seems to agree with Wildgoose's parent, at least in its reasoning (albeit by making a few qualifiers on the dataset that bend it in favor of that reasoning) [2], covering 2001-2016 in the US.

[1] (pdf): https://www.start.umd.edu/pubs/START_IdeologicalMotivationsO...

[2]: https://www.start.umd.edu/pubs/START_ECDB_IslamistFarRightHo...


Wait, are you claiming they’re not? Lol


I do not have to claim that an untruth is such. It is rather those peddling untruths, and statements not related to truth at all (bullshit), who have to confuse everyone else to a sufficient degree to twist their worldviews for their purpose.

Unfortunately, trying to control others' viewpoints hurts you as much as anyone else, because you are making yourself, and what you can learn from others, less adaptive to reality.


Is it trying to control others’ viewpoints to say that the KKK is a racist, white supremacist group? I thought we’re all in the marketplace of ideas here. You are literally telling me what to think, which is what your original comment purported to be against.


Show me one example of a white nationalist action by Jordan Peterson or Ben Shapiro, and I’ll be right there beside you fighting it. It’s on you to justify your claim.

The marketplace of ideas is not about throwing out unfounded claims and expecting others to treat them like propositions founded in truth-seeking. Rather, a claim will be treated as the bullshit it most likely is unless you supply evidence for it.


> It just makes it fester somewhere else that isn't as visible.

That's a win in this case. It already festers somewhere else. White nationalists would like to spread their ideology outside of their normal bubble. They often talk about "redpilling normies." It's harder for them to do that if "normal" people have to explicitly seek out that kind of speech.


>It already festers somewhere else

excluding people from a conversation doesn't help them learn why/how what they think is wrong.. those "hidden" places that let the wrongthink fester only create more actions brought on by wrongthink (guess what they talk about? wrongthink)

i don't understand the logic of removing someone's ability to converse if they don't understand why they're wrong.. they can't ask questions

someone explain to me how banning content/people = deradicalizing


White supremacists use Facebook for the purpose of recruiting and radicalizing people who are not yet white supremacists. The point of a ban like this is not to deradicalize people who are already white supremacists, it is to make it harder for people to become radicalized in the first place. People do not typically seek out white supremacists; white supremacists are always looking for people who are vulnerable to their message. Facebook seems to have come to the conclusion that their platform is just too useful to such terrorist groups and have decided to ban them before the problem gets worse.


>it is to make it harder for people to become radicalized in the first place

I've been in fb group chats with holocaust deniers etc. (mans literally inboxed me a youtube video, I never bothered clicking the links he sent but it was funny)

"banning white nationalist content" is a cute headline but doesn't hit the problem of seemingly benign normal users that would never "reveal their power level" slowly radicalizing their friends with content not on their site lol


I said "harder" not "impossible." Yes, white supremacists are still going to use Facebook and are still going to try to radicalize people, but it will be harder.


"harder" not "impossible."

The Rhetoric Tricks, Traps, and Tactics of White Nationalism https://medium.com/@DeoTasDevil/the-rhetoric-tricks-traps-an...

the overt white nationalism content you think of doesn't really exist, i guess maybe if ur a boomer it still lingers? idk


I sure saw a lot of overtly antisemitic, racist, white supremacist imagery in that article...


those memes are really old, there's more subtle content that radicalizes people

it's less orchestrated and just friendly up to a point


>i dont understand the logic of removing someones ability to converse if they dont understand why they're wrong.. they cant ask questions.

I don't think I've ever seen someone already radicalized reason their way out of it through conversing on the internet. I'm guessing the calculus is that exposing this content to people makes it easy for them to be suckered in, but we don't see the opposite effect.


You should read this awesome New Yorker article from several years ago: https://www.newyorker.com/magazine/2015/11/23/conversion-via...

That said, I think I generally agree that conversion via social media is unlikely. Still, I’m also not sure about my position on banning people. But the above article is a great read.


The Phelps people are disgusting, but they are not terrorists. The thing that is missing from most conversations about white supremacists is that their hate goes beyond words. These are terrorist organizations whose members have committed one violent, murderous act after another, year after year. Banning white supremacists is no different from banning ISIS.


Makes sense. But Westboro is radical, and the above poster had been talking about radicalization, not terrorism.


Most people do not critically evaluate content, especially if their own ego or group identity is involved.


I was waiting for the "only idiots get suckered into wrongthink" comment


Then I'm an idiot too. There are a whole slew of cognitive biases that cause us to prefer our in-group.


welcome to the club!


Actually, their point isn't "only idiots" more than it is "everyone." Cognitive biases are real and hard to overcome and we all can succumb to them.


1 hour later

that's how little time it can take to slowly expose people to content, and how useless it is to "ban white nationalist content"

v cute headline tho


Outbound content is dangerous. Ban outbound content to make it "harder" but not "impossible" to expose "idiots" to "radical content": https://imgur.com/o14rjegrhk21.png


> It just makes it fester somewhere else that isn't as visible.

Is that true? Would we have the anti-vax movement if it weren't for Facebook & co.?


We had the anti-vax movement before Facebook.

And if it were banned, they'd just spin it as the pharma companies paying to suppress the truth. It would only further confirm their beliefs.


That "spotlight is the best disinfectant" is a very convenient narrative. I do realize in the end we all want to believe in whatever is convenient for us (notably including anti vaxxers). But is there actual scientific evidence supporting that particular narrative to work?

Through all of mankind we had strong mechanisms to form consensus, including social repercussions. Those don't work anymore, since what would have become outcasts in earlier generations can now easily (for example, on Facebook) find like-minded communities and fulfill their social needs, get approval, etc. It'll be interesting to see how different the realities people believe in can get before society breaks apart. I'd prefer we don't let that happen.

The question is what actually works. Censoring them might reinforce their belief, but I can accept giving up on some of them if it effectively stops the spread.


> Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

This is a dangerous, dangerous path to go down if you belong to any kind of Enlightenment-inspired ideology. What kinds of things were suppressed the hardest? Sexual deviancy. Questioning authority. Questioning religion. Do you really want that kind of society? I think maybe we can stand some anti-vaxxers...


> > Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

> This is a dangerous, dangerous path to go down if you belong to any kind of Enlightenment-inspired ideology.

Mechanisms to form consensus does not necessarily mean rule by mob, quite the opposite. Positive consensus mechanisms can be trust in the scientific method, institutional credibility, and acceptance of reason. These mechanisms can support an enlightenment ideology, not prevent it.


I wasn't talking about mob rule. I was talking about an orthodoxy that's ruthlessly enforced by those in power. Cause that's what it was before.

"Freedom of speech" is not an accident. Enlightenment thinkers have been pondering this for 200 years and more and objections have been successfully adressed over and over again. It's as much of the type of consensus you describe as we will ever have. 'But computers' is not sufficient to just do away with it.


Pretty naive if you apply this conclusion to many countries in the world.


> The question is what actually works.

Make them pay a material cost: link vaccination to welfare benefits/family tax rebates, works well in Australia (search no jab, no pay). Also make an up to date vaccination record a requirement for enrolment into schools.

They will complain and might keep on spouting shit, but at a cost of ~$10k/child/year they'll change their actions pretty quickly.


This is what the free market would do, but I thought we didn't want that because "everyone should have affordable healthcare". Isn't an anti-vaxxer part of "everyone"? It's kind of spooky: progressives convince themselves they have no in-group bias by making their in-group unreasonably large, but the price for that seems to be that they dehumanize people who have a different view.


> Through all of mankind['s history,] we had strong mechanisms to form consensus, including social repercussions.

So, you're saying this mechanism is a good thing? Because mechanisms that reinforce the current social consensus, whatever it might be for that era, tend to maintain the status quo. And a desire to maintain the status quo is a big chunk of the philosophy of, yep, that's right, ... conservatives.

I've often said progressives aren't liberals because they don't agree with liberal philosophical values of the Enlightenment. Their "maintain the current societal consensus" argument is the strongest evidence of that yet.


>Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

And throughout history people lived to be 30 and died of plague wallowing in their own excrement thinking the devil did it.

If your ideology needs thought control to work it needs to be taken out at the back of the sheds and shot. The second a society stops discussing ideas openly is the second it starts sliding towards a new dark age.


In fact, the anti-vaxx movement is more than a century old:

https://www.historyofvaccines.org/content/articles/history-a...


Maybe it wouldn't be as big, but I can assure you nothing gives a conspiracy theory more credence among its followers than being deplatformed.


This is what people said about Alex Jones and his network. When he was deplatformed there was a surge of downloads for his app from white supremacists trying to show support and people gave your exact argument that now he has been legitimized.

But what actually happened was that those numbers were inflated by his followers, and without the echo chamber of the larger base he had on YouTube, his numbers eventually plummeted. Instead of being a regular in the news, he occupies a small corner of the internet without access to the larger, impressionable, ever-refreshing base he once had.

He has now resorted to trying to sneak videos back onto YouTube.


[flagged]


MLK was heavily persecuted in the sixties for his views and message by private and public entities. What point are you trying to make?


> MLK was heavily persecuted in the sixties for his views and message by private and public entities.

And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?


>And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?

Because one was an indispensable paragon of civil rights and an avatar of anti-racism, and the other is a shrieking whackjob entertainer peddling freeze-dried survivalist food while spreading lies, mental illness, and grief to families of murdered kids.

Hey, you're the one who brought up history.


And when the next generation's MLK shows up and gets persecuted again by the powers that be, except now there are even better deplatforming tools to silence him? Whoops, game over.

Hey, you're the one who failed to think things through. The tired old "b-but deplatforming will only ever be used against _bad_ people" argument is exactly a failure to appreciate the lessons of history, of which MLK is an excellent example.


>Hey, you're the one who failed to think things through.

No, I'm the one who corrected an utterly insane equivalence drawn between a commercial figure who drives grieving fathers of murdered kids to suicide and a man who was literally killed for demanding equal treatment for all people. Again: if you bring up history, you are now forced to deal with historical results and categories. Deal with them. Don't just pretend you are.


I don't see a valid counterargument in your response?

It's ironic that, on a site where the UNIX philosophy is widely appreciated, so many fail to appreciate the wisdom of the old quote "UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things." So too with Alex Jones, MLK, and freedom of speech.


I upvoted this comment because I saw people were downvoting it.


I wonder what would happen if they flagged it the same way my thermostat flags my monthly energy usage.

“Most of your peers consider this post racist neonazi propaganda.”


> Facebook can do what they want, we're not talking about the government here and I understand that

That isn't so clear: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=139661


Scott Alexander's theory is that having an open forum has become incredibly difficult, maybe tending towards impossible. All the incentives line up to make moderation steadily more costly or steadily more draconian over time, or both:

https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

It's not clear how to preserve a forum for free expression while spammers or trolls seek to suck up all the available bandwidth for their messages. You can end up with a sort of spectrum allocation problem, where if everyone is allowed to crowd the same frequency then communication on it becomes impossible. But once you start moderating content, it's really hard to find a satisfying balance. And on the margins, some trolls are indistinguishable from some people with really unbelievable opinions.

I don't know what the answer is, it seems like a really complicated problem.


From the release:

> Our own review of hate figures and organizations – as defined by our Dangerous Individuals & Organizations policy – further revealed the overlap between white nationalism and separatism and white supremacy.

It doesn't appear that this content is being banned because Facebook surrogates find the politics in question abhorrent (though they likely do so find). It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence. They collected evidence of this before acting.

The line is somewhat blurry, but it does exist. On one side, we have a sensible use of editorial discretion; on the other, we have censorship. Is Facebook going to cross the line at some point? Almost certainly. Basically every individual or entity to have editorial discretion has both made mistakes and abused that power.

It remains to be seen to what extent that will happen here. If anything, they've erred on the side of permissiveness so far (e.g. how long it took to ban ISIS). If that changes, we should call them out at that time.


> It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence.

That would be believable, if it was applied as such across the board - e.g. also removing Christian Dominionists, Stalinists, Salafi Muslims, NoI etc.


Facebook already deplatforms radical Islamic preachers on a regular basis.

I doubt they have enough data to show the links for the other groups.


Are you sure about that? Try searching for "الموت للغرب" on facebook (Death to the west)



This is "terror content" though, not the ideology behind it. I tried looking around, and there are plenty of pages preaching the ideology. It sounds like they're okay so long as they don't directly call for violence, showcase violence (execution videos etc), or try to recruit for either of those, even though the end goals are inherently violent. Most white supremacist content is similar in that regard.


I'd note that "Death to the West" has a very mixed set of meanings.

Often it isn't meant as a threat, but instead is a political statement calling for the end of the hegemony of the West. However, sometimes it is meant as hate speech (often when associated with calls for the destruction of Israel).


I don't know about Salafi Muslims specifically, but Facebook already does censor radical Islamist groups like ISIS. This move is trying to be more across-the-board by including white supremacy under more of its aliases.

As for the others--when's the last time a Stalinist drove a car into a crowd or committed a mass shooting?


[flagged]


The argument being pushed here (whether they themselves understand it or not) is basically that "seeing something a little bit to the right > leads a little more to the right > then even more to the right > NAZI!"

But it's not at all! Viewing racism as a left/right thing is wrong altogether. While it isn't obvious in US politics today, the left has often had a problem with race too.

Racism should be abhorrent to both left and right, and that should be independent of immigration views.

Which hilariously is a concept born almost exclusively out of modern leftist and current "_blank_ studies" thought/ideology.

Well actually Facebook has data showing this, so I don't think that's correct.


Because Facebook can misapply its policies does not mean it should have no policies, it merely means it should implement them better.

Obviously there are other kinds of content that Facebook could ban, but merely because those things exist doesn't mean this shouldn't also be banned.


"It merely means it should implement them better": this is the theory of every government committee, ever. In practice, generally, _it doesn't happen_.

I believe private companies are usually better, but still highly dependent on size and a number of other factors.

I definitely don't have faith that FB will wield this well, and the worst part is it will probably be pretty hard to find out from the silenced parties when they f!@# up until much later.


Why not just have their policy be the First Amendment, abiding by the same rules as the country that created them? Why should corporations take on censorship powers that are prohibited for the government? Why should a private for-profit corporation controlled by one man have more power than a transparent institution controlled by democracy?


Governments have vastly more power over their citizens than any corporation. Facebook cannot imprison or kill users that violate their rules, and being banned from Facebook is not all that big of a deal (I have not had a Facebook account for more than a decade, and honestly I feel no particular need to go back). Moreover, if you required private for-profit corporations to adhere to the first amendment, you would quickly run into problems. Scientific journals could not exist -- the first amendment comes with no requirement for scientific rigor. Newspapers could not exist -- the first amendment has no requirement for journalistic quality (hence the legality of fake news).

The framers of the constitution could certainly have included a requirement that private institutions follows the same rules as the government. There were certainly big and powerful non-governmental organizations at the time -- the Catholic Church, for example.


I think the standard answer is that corporations are less restricted because they are not able to enact those policies through force. The government can fine you, imprison you, etc. regardless of what you think; however, you don't have to use Facebook in the first place.


That is the default answer, but there comes a point when you have to reconsider it in light of the facts of a particular situation.

Take email for instance: first class mail is protected by the 5th amendment, and a warrant is required to open and read first class mail. Move communications to a private digital platform and the presumption of privacy is completely flipped into a presumption that all your emails will be read and stored for future reading. When all communications go online that leads to a very significant change in how the law applies to private communications and that is exploited by the government just as much as it is exploited by the corporations. After many years of there being a free-for-all where law enforcement and intelligence claimed they could view anything online without a warrant there was finally some push back by courts to reassert the rule of law, but email still has substantially less privacy than first class mail.

Speech on public forums is a similar case. The 1st amendment applies to the town square but if a corporation creates a digital town square then it gets the right of censorship. Then the government applies pressure on the corporation to censor on their behalf and the government begins to exercise a power it did not previously have. That was not a problem when online platforms were small and inconsequential, but when they grow to billions of users globally, then the power to censor speech on those platforms becomes quite influential on the exercise of real world power.

Corporations do not have police powers but they are subjects of governments that do, so once they get power they can always be made tools of the people that hold power over them.


No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future? Or what if the decision maker misreads your intentions or sarcasm, or if you don’t believe your speech to be hateful but someone else does?

The whole point of defending speech you don’t agree with is that it promises arbitrary rules won’t be used when YOUR speech runs afoul of someone else’s feelings.

Let me phrase it to you like this... if Mark Zuckerberg, whom you may generally agree with, leaves Facebook and is replaced by, let’s say, Donald Trump and his assistant Mike Pence, do you still think it should be Facebook’s policy, as a service provider, to gauge what offensive speech is?

Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.


"Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with."

The problem is not that I personally disagree with white supremacy. The problem is that white supremacist organizations are a significant and growing terrorist threat, and they are using Facebook to recruit new members. This is no different from the various platforms that banned ISIS.


Actually, while arguably it may be growing (kind of hard to tell with single-digit and zero yearly numbers), it isn't all that much of an issue overall, or even relatively speaking.

https://www.bbc.com/news/uk-47626859


Did you read your own link?

3 of the 4 graphs against time showed right-wing extremism increasing, in Western Europe and North America, the UK, and the USA, respectively. The only graph that didn't show that right-wing extremism is a growing problem instead showed that it has been consistently bad in Germany, worse than the other two categories combined (in that graph, the other two categories were left-wing attacks and "Not Identified").

What's arguable about it?


It rises through animosity mostly. It had been on the net for years without any significant effect. Rumors have it they tried to recruit some weeaboos, but they were too smart.

The poster is pretty much correct.


Did you mean to reply to someone else's comment? This doesn't appear to have anything to do with my comment.


> No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future?

Because I'm not a white nationalist? Facebook already bans child pornography, pro-ISIS content, and doxxing. Does anyone seriously believe that Facebook is slippery sloping to banning all speech?

But let's say that Trump and Pence do indeed take over Facebook and make it so they ban all users that don't loudly praise Trump. If they do, honestly, so what? Facebook can't imprison you or legally remove your property. The worst they can do is ban you from their platform. Which, well, that sucks, but there's lots of sites on the Internet. In fact, you could go ahead and legally make a non-censorship Facebook.

...of course, sites like that already exist, in this world, not in the Trump/Pence Facebook world. And they're dominated by child pornography, white nationalism, and doxxing. No one at large uses them because they're horribly toxic and disgusting. Almost like some rules about content are actually helpful for sites on the Internet.


It seems you and I go to different websites; and I’m quite happy about that.

But I believe my comment said legal things. Having a different opinion is still legal, for now. You jumped to all sorts of illegal things.

And my context was what if “the bad people” are suddenly running these services or in positions of power and relative to them, you are the one with hate speech? Perhaps it shouldn’t be up to them, or you, or anyone to police what someone else MIGHT find offensive.


Pro-ISIS speech isn't illegal but it's still banned. Pornography isn't illegal but it's still banned. So the legality argument here isn't particularly compelling.

> Perhaps it shouldn’t be up to them, or you, or anyone to police what someone else MIGHT find offensive.

You have got to be kidding. If it's on a platform I control I can make any kind of legal judgments about the content I find acceptable. And if you're banned, again, so what? Facebook is not a government or even a government-adjacent entity: I have no right to participate on the site, and they have no obligation to platform my speech.


Now I'm curious about what non-censorship sites you are using? Because the only ones that I am aware of are full of terrible people. If you have one that isn't over-run by the bad guys, please share it because others here might want to use it too.


> No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future?

I don't.

My view on this action is a view on this action, not every potential future action Facebook might take in the future that is loosely analogous.

> if Mark Zuckerburg who you may generally agree with leaves Facebook and is replaced by let’s say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider’s policy to gauge what offensive speech is.

Yes.

I also think that they would do a horrible job of it, and the board of Facebook should not choose them for that job. I also think Donald Trump does a horrible job directing the policy of the executive branch of the US government, but I don't think that fact means the President of the United States should not direct executive policy.

If I thought that no one should have any responsibility or authority if Donald Trump would fail in the responsibility or misuse the authority, then, well, I wouldn't allow anyone to do anything.

> Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.

It's absolutely Facebook's job to decide what messages they are willing to relay on their platform. That's a direct consequence of the First Amendment.


The part where you insist you would still be OK with this if people you strongly disagree with were making the rules is intellectually dishonest.


> how do you know it won’t be used against you in the future

I think it's foolish to wring your hands over the arbitrary machinations of a multibillion dollar corporation's online platform. They don't care about you or me and frankly they have no obligation to do so. This logic is akin to complaining that McDonalds puts too much lard in the fries, maybe that's true, but if that's the case then don't give McDonalds your business; the fact that there is a McDonalds on every street corner doesn't mean they are obligated to modify their business process to be within the parameters of your approval.

> if Mark Zuckerburg who you may generally agree with leaves Facebook and is replaced by let’s say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider’s policy to gauge what offensive speech is?

That's Facebook's prerogative. I don't really use Facebook, but if I felt like Facebook was unfairly targeting me I'd stop using their platform, if they elected Trump to the board of directors I'd stop using their platform. The solution to all these Facebook problems is very simple: stop using it.


Sarcasm and bullying aren't mutually exclusive.


Indeed. In fact, it's a rare bully who won't protest that they're "only kidding!" whenever they're called on their bullying. And in their mind, they probably were only kidding.


> That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

That line is grey and I'd argue that your friend's comment, while obviously hyperbole, does not respect human life.

There was an article posted about cyclists that points out the danger of dehumanizing others; that comedy/hyperbole help groupthink achieve it.


So we are banning jokes now?


I think it's pretty reasonable to ban jokes about killing people. Maybe it was an overreaction here, but I certainly try to avoid even jokingly saying things that could be interpreted as a death threat if taken seriously, especially on the internet where it's easy for tone and context to get lost.


Just the unfunny ones


Of course we are. Where have you been the last 5 years?


> That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

That's what bullies usually say when you confront them.


A one off joke, or even mean remark, is not bullying.


Why should we tolerate any mean remarks at all?


How much would it cost to hunt [insert group of people here]?

Even if someone commented that as "sarcasm", it would look pretty much like bullying to me.


Nope. You misunderstand what bullying is. You can't bully an abstract group.


In principle, who could be opposed? In practice, how will this actually work out?

We've had Nazis and Klansmen effectively shut out of public discourse for decades. They aren't on TV. Their ads don't run in newspapers. If you go shopping in a pointy white hood and a swastika armband, the store (or the shoppers, more insistently) will likely ask you to leave.

Facebook (and similar) are the ones with the odd new 'principle' thing they tried which is 'it's ok for the hyper-overt bigots to shit everywhere with no consequences'. In practice, it has not worked out well so they are belatedly changing it.


Blegh, the worst is coming in here and reading people criticizing and outright banning the use of sarcasm, or implying that they know your friend better than you and that she definitely literally meant what she said.

This is the end of freedom and, along with Europe's recent decisions, the beginning of all the worst dystopias we have already heard, read, and watched about.


I don't like recent trends either, but you really need to read more.


In principle, Facebook has never branded itself a bastion of free speech.

In practice, we've seen a lot of examples across YouTube, Facebook, and Twitter where things are blanket deplatformed under the guise of policies like these, and it's concerning.

One thing is for certain - Facebook can't possibly do a worse job than YouTube has at policing content.


If only Facebook focused its products on encrypted, private conversations, they wouldn't have to worry about / could claim to be incapable of policing content...


I promise you, people would still want to police communications like the kind that are being discussed here.


Corporations and organizations have always had the ability to police their content to their own desires. A more interesting question is when that becomes ethically wrong: are people against big companies policing content _in general_ (and thus likely against the first statement), or are people against big companies policing content that violates the popular definition of free speech once they reach a certain size?

I mean, we're discussing this topic on a site that highly polices content, albeit one that does so in different ways, but such different ways that it attracts the very commenters in this thread to read and respond to content rather than visiting other sites, or any at all.


Until there are objective, rational ways to determine what content to ban and what not to ban, I’d prefer that we stay clear of filters based on “I know it when I see it”.


If we did that with pornography we'd still be swimming in porno ads.


> Knowing her, it wasn't. It was sarcasm.

Did she know that the person she directed that comment to did not know her?


You don't have to know anyone to read that comment as satire. I suppose Facebook would also have banned Jonathan Swift for his "A Modest Proposal". https://en.m.wikipedia.org/wiki/A_Modest_Proposal


But people do literally post on Facebook that they're going to kill someone and then follow up and do it. Remember Brenton Harrison Tarrant?


A tiny minority... who would've done it anyway even without Facebook.


Surely you see there's a difference between a Modest Proposal and posting a Facebook comment suggesting someone should die with no other context informing people it's a joke?


Doesn't the context of a billionaire hunting endangered animals inform us that it may possibly be not-serious?

Surely?

On the other hand, there's bound to be people who believe fair-trial-judicially-decided capital punishment would be fair retribution for anyone who intentionally kills endangered animals.

I'm going to say something like: I'm against capital punishment because it tends to kill innocent people at least occasionally, not because it isn't often deserved.


The point of a comparison is that there is a parallel, not a difference. There is always a difference.

What is the parallel? In both cases, people who did not recognize the author's point took offense. (And yes, a lot of people were seriously horrified by Swift.) And in both cases the author's actual point was close to the direct opposite of the one that those people thought. Some people found it obvious, others didn't. To people who found it obvious, it can be surprising that it wasn't obvious to others. People who didn't find it obvious think it a horrible thing to say.

In this case the comment is directed against the justification that the billionaire offered that it is OK for him to kill the animal because he paid lots of money to do so. And her point is that just because you pay lots of money to do a wrong thing, doesn't make it OK to do that wrong thing. To see it, put the billionaire in the animal's shoes. How much money would it take to make hunting OK? Obviously no amount of money would suffice! Just as the billionaire's having paid lots of money didn't make his hunting OK either.

That said, this one actually gets complicated. The money from these hunts goes to anti-poaching efforts. So the billionaire kills one animal, and his money saves others. Which still makes the billionaire a shitty person, but there is a utilitarian argument for allowing it.

That said, how would you feel if we were talking about hunting children dying in a famine instead of black rhinos on a preserve? The same utilitarian argument applies, but I think most would be for putting the billionaire in jail. How you feel about that is likely close to how that friend feels about what actually happened.


Ultimately I'd prefer nobody was threatening to kill anyone in any context on a social media site, no matter how witty they think they're being. It doesn't seem like it should be such a high bar to set, but apparently it is.


The thing that makes it an obvious joke is that it's a simple reversal of a previous statement. That is a very common pattern for jokes.


Define "directed that comment to".

I am quite sure that Texas billionaire Lacy Harber does not know her.

I am also quite sure that the person who shared the post she replied to about his hunting an endangered black rhino did know her.

It would also be a safe bet that some friends of friends who saw that comment did not know her.

I would say that she "directed that comment to" the second. It is impossible to tell who reported her, or what they thought.


> It is impossible to tell who reported her, or what they thought.

Wouldn't Facebook know these things?

Under what circumstances do we have a right to confront our accusers?


The whole point of an anonymous complaint system is to allow accusers who might be intimidated by the idea of confronting you to have a way to get their concerns met.

Which means that anyone who has created an anonymous complaint system has traded off your right to confront an accuser with the accuser's right to not be intimidated and decided in favor of the accuser.

However there is usually another counterbalance, such as having the accusation silently disappear unless a neutral third party thinks that there is a point to the accusation.


> In principle, who could be opposed?

Ends vs. means, buddy. Ends vs means. Every single political topic on HN seems to be arguments between people who can't separate the two, and those who do.


Or take some Spanish politicians' comments re: Gibraltar, basically right-wing nationalists who want to use Brexit as an excuse to annex Gibraltar.


Seriously tho, how much would it cost to hunt Texas billionaires? And world billionaires?


> Moving on to this policy, I'm happy to see neonazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as being a separatist cause motivated by racial animosity.

No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".

> And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?

This is a non-issue and just scaremongering, not all that different from "they'll allow sex with children next!" from the anti-gay idiots (no one is confused between gay rights and "pedo rights").


>No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".

You clearly haven't been paying attention to modern-day discourse over issues like illegal immigration. If you listened to some people (unironically the same people pushing for stuff like this), having an issue with illegal immigration is basically treated like a confession of guilt. And that's even ignoring some of the newer types of arguments they are making: that merely holding an opinion that could be seen (solely as determined by them) as problematic or a "whistle" (in effect, anything to the right of left-of-center, or non-PC) is a "gateway to extremism", so "hate speech" needs to be banned next.


I have been paying a great deal of attention, and it turns out that there is more than an insignificant overlap between actual "kill all Jews" Nazis and regular anti-immigration crowd.

Furthermore, as soon as the literal neo-Nazis get criticized, the anti-immigration crowd either outright jumps to their defence or shrugs any concerns off. Trump's "very fine people" remark is a good example of that, or a recent article which "proved" anti-conservative bias on Twitter by pointing out that people like David Duke (former KKK grand wizard) or Richard Spencer (literal neo-Nazi who wants to forcibly eject all Jews and Blacks from the US) got banned from Twitter.[1]

So, if you don't want to be treated like a duck, then don't walk and quack like one. How else am I supposed to interpret an article defending literal neo-Nazis as "Conservatives treated harshly on Twitter"? Look at the data that is presented[2]: it literally includes the American Nazi Party. Now, if you want to make the argument that we should allow these people because "muh free peach", then okay, but read that article again: it just talks about "Conservatives" and "Trump supporters" (aside: there are other problems with this "study" as well, such as not including various Liberal accounts that were banned for unstated reasons).

I also agree that sometimes anti-immigration views are brushed aside as "racist" far too quickly, and it annoys me as well. But this kind of confusion is a bed of your own making.

I am not as pro-immigration as you'd might think based on the above, I think there are some real problems caused by both legal and illegal immigration, and that they should be addressed. But it's plenty evident that the anti-immigration right has long since been infested by some very nasty people, which is doing a great disservice to anyone else, especially those with anti-immigration views.

I suggest you first make some effort to purge the toxicity before complaining about "PC".

[1]: https://quillette.com/2019/02/12/it-isnt-your-imagination-tw... [2]: https://docs.wixstatic.com/ugd/88a74d_d231bdbfb13c4b9ab77422...


I don't even need to write up a full reply, because you went and proved my point perfectly.

Idiots like David Duke and Richard Spencer make weak strawmen here. And I'm not even going to play into that game.

The problem is that paying better attention to whom you... defend(?), works both ways. Certain groups and people are literally believed and "protected" by the mainstream media by default, when they are clearly coming from a pretty strong POV.

>your own making

No it's a silencing tactic, no more and no less.

When one side stops calling everybody else nazis, then maybe we can all come together and have an adult talk about "toxicity" but that's not going to happen as long as some keep acting like little children on the playground calling others names to shut down debate.


I'm not shutting down any debate, I am pointing out there are literal neo-Nazis being defended under the banner of Conservatism. Be angry at Quillette, not me. There are plenty more examples. You can side-step that all you want with vague accusations against "mainstream media".

And "one side calling everybody Nazis"? Really? Did you forget that Dinesh D'Souza wrote an entire book calling the left "Nazis"? That there were ACA protesters with Obama defaced as Hitler? Never mind shining examples of "adult talk" such as "Assume the Left Lies, and You Will Discover the Truth"[1].

[1]: https://www.nationalreview.com/2019/03/left-lies-trump-russi...


“Joking” about murder or gun violence should be banned. To use satire to critique hunting when gun violence is a bigger issue is tone deaf.


That's really a perfect time to use satire.


If people weren’t shooting up schools and places of worship every other week... still no. It’s still callous.


> In principle, who could be opposed?

I am opposed in principle.

White nationalism (or black nationalism, or yellow nationalism) should be able to speak on a social platform like Facebook.



