Hacker News

If you want to make this argument maybe pick someone besides literal nazis to make it with.

Last I checked we kinda had a bit of a scuffle from 1940-1945 that established, "This ideology is not okay and responding to it with violence is acceptable."

They're getting booted off the platform most people use to talk to grandma, not getting literally booted in the face. Unlike what they advocate for.




> literal nazis

Are "literal" nazis only who this new rule will be applied to? The accepted definitions of both "literal" and "nazi" seem to have drifted significantly in the last few years.

> not getting literally booted in the face. Unlike what they advocate for.

Funny, this type of story about people advocating for literal violence against non-whites is extremely common on the internet, I've heard it recounted I don't know how many thousands of times in the last few years, yet I've literally(!) never once encountered an actual instance of it anywhere I browse. Ironically, I have encountered hundreds of examples of violence being advocated against white people, although in the vast majority of cases the proponents themselves were white.

Could you possibly provide any examples of such statements on relatively mainstream sites, and also note whether you encountered it organically or found it via a google search?


> Are "literal" nazis only who this new rule will be applied to?

White nationalists are literal Nazis, so... yes?

> Could you possibly provide any examples of such statements on relatively mainstream sites, and also note whether you encountered it organically or found it via a google search?

Were you asleep for Charlottesville or all the coverage of it? White nationalists literally marched with torches chanting "blood and soil" and "we will not be replaced." What do you think those messages mean to non-whites? What do you think they're advocating for when they say those things?


> White nationalists are literal Nazis, so... yes?

a) No they aren't.

b) Even if they were, you have no way of knowing how this rule will be applied.

> Were you asleep for Charlottesville or all the coverage of it?

Would you consider the Charlottesville incident "mainstream", or common enough (and manifest in some form on Facebook) that Facebook must police user-posted content on its site?

Try not to treat this like an internet argument that must be won. Try not to exaggerate the frequency of rare incidents like Charlottesville (as far as I know they are rare; I am seeking a reasonable, fact-based argument in order to change my mind), or to conflate the specifics of what happened there with unsavory words that don't literally (actually) meet the claim of advocating violence.

Make no mistake, I'm definitely playing Devil's advocate here, but if the facts are on your side, making this case (that people are literally advocating literal violence) should not be overly problematic.

Or, if you would like to lower your claim to something like "frequent and possibly increasing dog whistling" (without arguing about the technical boundaries of that) on Facebook, that seems like a fairly reasonable claim.


> a) No they aren't.

Indeed they are! Or at least there's no difference meaningful enough in their political ideologies to draw a distinction. Obviously I'm not saying they're members of a political group originating in 1930s Germany, but when someone calls someone else a "Nazi" colloquially, that's not what they mean either.

> b) Even if they were, you have no way of knowing how this rule will be applied.

So we shouldn't have rules because they can be misapplied?

> Would you consider the Charlottesville incident "mainstream", or common enough (and manifest in some form on Facebook) that Facebook must police user-posted content on its site?

Yes? Or what of Christchurch? Or the Quebec City mosque shooting? Or the Pittsburgh synagogue shooting? These incidents (and the literal dozens more like them) received tons of mainstream news coverage and the perpetrators 100% subscribed to white nationalist views, exactly the kind which Facebook seeks to deplatform. They believe that by removing these people from their site they might reduce incidents of these shootings. Do you think they're wrong?

What amount of violence needs to happen connected to one ideology for Facebook to ban said ideology? Facebook has banned child pornography, doxxing, and pro-ISIS content from their platform too. How much of that has to be posted before a ban is justified?

You may not like that Facebook can make decisions like this, but it is entirely their right to choose what to platform. If you don't like it, you can make your own platform. That it doesn't have the audience Facebook does is your problem, not Facebook's.


> Indeed they are! Or at least there's no difference meaningful enough in their political ideologies to draw a distinction.

Is that so? The Nazis were a large, organized group with a distinct leader, a fairly specific shared ideology, and a track record of action - you would have us believe the same is true (no meaningful difference) of modern-day white nationalists? And that there's evidence of this?

> but when someone calls someone else a "Nazi" colloquially, that's not what they mean either

colloquially (adverb): in the language of ordinary or familiar conversation; informally.

This is what I mean about the common misuse of the words "literal", "Nazi", "alt-right", and in my mind this excuse is similar in quality to the "I'm not really racist, I have black friends" argument. Have you ever stopped to consider if this type of "dog whistling" is contributing to the problem in any way? Do you care?

> So we shouldn't have rules because they can be misapplied?

Your words, not mine. The question was "Are "literal" nazis the only people this new rule will be applied to?"

Have you no concern for possible second order effects of poorly thought out and biased censorship? And when I say censorship, I'm not referring to the First Amendment.

> Yes? Or what of Christchurch? Or the Quebec City mosque shooting? Or the Pittsburgh synagogue shooting?

For every single "white nationalist" incident you cite, I can cite several similar racially/religiously motivated crimes committed by non-white people. If the incidents are the problem, why is the enforcement (and news coverage) racially biased?

> These incidents (and the literal dozens more like them) received tons of mainstream news coverage

The coverage was mainstream, but is this behavior or these beliefs mainstream/common? How sure are you that these incidents are caused by incitement on social media? Have you ever considered that both the incidents themselves as well as the increase in "dog-whistling" propaganda on social media (I've yet to see examples of advocating violence) might both be effects of something else? Do you care?

> and the perpetrators 100% subscribed to white nationalist views, exactly the kind which Facebook seeks to deplatform.

Is that so, or might that only be what you presumed or were told? I haven't read the whole manifesto, but this fellow seemed to have much more complex and nuanced beliefs than those held by "100%" of "white nationalists".

I assume this is a "white nationalist" propaganda site of some sort, but to me the actual words of what this madman wrote are what actually matters, not who publishes them:

https://www.thenewamerican.com/world-news/europe/item/31759-...

Is what's written there mainstream white nationalist views? Personally, I have absolutely no idea, so I'm willing to consider any evidence you have supporting that idea.

> They believe that by removing these people from their site they might reduce incidents of these shootings. Do you think they're wrong?

They might be right, they might be wrong, unlike you I make no claim to know. What if they are wrong, and censorship aggravates these already mentally unbalanced people even more, driving them to organize (perhaps in the physical world) underground where it's difficult to see what they're up to or thinking? Is this worth considering when setting policy?

> What amount of violence needs to happen connected to one ideology for Facebook to ban said ideology?

By that standard, we should be banning a lot more speech.

> You can not like that Facebook can make decisions like this, but it is entirely their right to choose what to platform.

I believe free speech is crucially important, and any curtailment should be carefully and logically considered, including a healthy discussion before any decisions are made. In this case, I can envision a plausible scenario where it makes the problem even worse.

> If you don't like it, you can make your own platform.

Again, there is a distinct difference between the First Amendment and the general principle of free speech. That so few people consider this important seems risky to me. I suspect this type of attitude is in no small part what got Donald Trump elected.

> That it doesn't have the audience Facebook does is your problem, not Facebook's.

And if social engineers like you are miscalculating, if you underestimate the complexity of sociology and human psychology, it might become a problem for all of us.

For the sake of improving the quality of discourse, can you sense any legitimate concerns in what I'm saying, or do I seem to you like yet another ignorant racist, little different than those who might frequent Charlottesville rallies?


> common misuse of the words "literal", "Nazi", "alt-right"

It's not misuse if it's common usage, it's simply usage you disagree with. Obviously when people say "Nazi" they don't mean "members of a 1930s political party," they're making a statement about something else. If you find the usage of these words confusing that's on you, not the words or the people using them.

> Have you no concern for possible second order effects of poorly thought out and biased censorship?

Not particularly: if those things happen, let's protest those instead of wringing our hands and worrying about the poor slippery slope.

> And when I say censorship, I'm not referring to the First Amendment.

Deplatforming is not censorship. You are not guaranteed an audience for your speech. White nationalists can still say whatever they want, and they can even say it legally. But they can't say it on Facebook, and it's not Facebook's responsibility to let them any more than it is to let pro-ISIS people post their content.

> If the incidents are the problem, why is the enforcement (and news coverage) racially biased?

This is irrelevant to the discussion at best and disingenuous at worst. Facebook can ban multiple kinds of content, including pro-ISIS content (and it does). White nationalism being banned is what we're discussing here.

> How sure are you that these incidents are caused by incitement on social media?

I believe that social media is helping incite these incidents, so I do legitimately believe the amount of people participating in hate crimes will be reduced as a result of this. Do you really think that people aren't radicalized by communities on the Internet? Communities that might exist on Facebook?

Of course, I also think it doesn't matter if that's true or not. No discourse of value is being lost by banning white nationalism, and Facebook already bans a plethora of other (totally legal) content because they don't think it is appropriate on their platform. If you want to read white nationalist speech, it still exists. This merely means you can't read it on Facebook anymore. How this is a tragedy I simply cannot understand.

> Is that so, or might that only be what you presumed or were told?

Yes, it's so. Read the manifesto.

> Is this worth considering when setting policy?

No. People can (and will) do anything in response to anything; maybe white nationalist groups will make Zuck their #1 enemy over this. You can't control the actions of other people, especially crazy people, so catering your policies to them seems absurd.

> For the sake of improving the quality of discourse, can you sense any legitimate concerns in what I'm saying, or do I seem to you like yet another ignorant racist, little different than those who might frequent Charlottesville rallies?

I don't really see many legitimate concerns arising from banning white nationalism from Facebook. As I've said multiple times, Facebook already bans other kinds of speech they disagree with. I get defending free speech is a thing, but Facebook is not the government and they can ban whatever speech they like. They don't control the entire Internet or even a majority of it, and they can't prevent you from hosting your content somewhere else, so... I just don't see what's being lost here, except white nationalist content on Facebook.

And even if they do go ahead and decide to ban other content, I mean, A) let's cross that bridge if we come to it and B) who cares? It's just Facebook, they don't control the entire Internet. Make another site if you want to discuss what they deplatform.


This line of conversation is my point. You're debating someone who is acting in bad faith. You've already lost the thread because he's just going to come back with more circular bullshit.


Could you please explain how I'm acting in bad faith? Perhaps cite a few key exchanges to demonstrate.

> he's just going to come back with more circular bullshit.

Asking people to justify their claims, and pointing out falsehoods or logical errors in their responses, is "circular bullshit" now?

I clearly stated I am playing Devil's advocate.

I concede points where they are valid.

I make no claims (other than conversational ones) without backing them up; how many claims has OP made without providing evidence?

The discussion is curtailment of free speech (the general principle, not the First Amendment - try to get someone to even admit there's a difference without changing the subject) - I am arguing that we better have very good reasons for it, supported by evidence. Fashionable popular opinion is not evidence.


> It's not misuse if it's common usage, it's simply usage you disagree with.

Your or my personal agreement on what the definitions should be is not the problem. The meaning has changed for only some users of these words, resulting in widespread misunderstanding and, I would argue, in further (and completely unnecessary) political polarization. Words not having shared meanings seems sub-optimal to me.

> Obviously when people say "Nazi" they don't mean "members of a 1930s political party," they're making a statement about something else

But no one knows what statement they're making because we no longer have a shared dictionary, so people resort to their imaginations.

> If you find the usage of these words confusing that's on you, not the words or the people using them.

This suggests the problem is my lack of ability to understand, when the problem is that there is no longer a common definition. There are certainly multiple popular definitions, depending on which political ideology you subscribe to.

> instead of wringing our hands and worrying about the poor slippery slope.

It's not me who's "wringing my hands" - I'm not promoting or supporting any policy change, you are. I'm simply asking questions.

> Deplatforming is not censorship.

https://en.wikipedia.org/wiki/Censorship

"Censorship is the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient". Censorship can be conducted by a government, private institutions, and corporations."

Well look at that, once again our definitions don't match. Care to share the URL for the authoritative source you're using?

> You are not guaranteed an audience for your speech.

As I said: there is a distinct difference between the First Amendment and the general principle of free speech. You only have a "guarantee" of free speech in the limited context of the First Amendment.

> This is irrelevant to the discussion at best and disingenuous at worst. Facebook can ban multiple kinds of content, including pro-ISIS content (and it does). White nationalism being banned is what we're discussing here.

I thought the principle behind your argument was the harm - maybe the principle is race after all?

> I believe that social media is helping incite these incidents, so I do legitimately believe the amount of people participating in hate crimes will be reduced as a result of this.

A perfectly reasonable theory, what evidence of this do you have? Are you well read on the topic, or is this just a casual opinion that sounds about right?

> Do you really think that people aren't radicalized by communities on the Internet? Communities that might exist on Facebook?

I certainly do, but I'm concerned about everything that could lead to radicalization. History is full of examples demonstrating the strange and complex behavior of individuals and civilizations. Are we over-simplifying this situation? Should we freely discuss such things with open minds?

> No discourse of value is being lost by banning white nationalism

Completely agree, but is that all there is to it? Can you think of any possible undesirable reactions (regardless of the soundness of the logic underlying the motivation) to this type of policy?

> If you want to read white nationalist speech, it still exists. This merely means you can't read it on Facebook anymore. How this is a tragedy I simply cannot understand.

The tragedy is if this further inflames emotions, "righteously" or not, and someone shoots up another church. That's what I'm worried about.

> Yes, it's so. Read the manifesto.

The link I included sure didn't make this fellow sound like just another stereotypical redneck racist as you claim, at least my understanding of that stereotype. But never mind, let's once again not bother ourselves with messy details when vague stereotypes will suffice.

>>> They believe that by removing these people from their site they might reduce incidents of these shootings. Do you think they're wrong?

>> They might be right, they might be wrong, unlike you I make no claim to know. What if they are wrong, and censorship aggravates these already mentally unbalanced people even more, driving them to organize (perhaps in the physical world) underground where it's difficult to see what they're up to or thinking? Is this worth considering when setting policy?

> No.

Well then, this answers my "Do you care?" questions as well.

> People can (and will) do anything in response to anything; maybe white nationalist groups will make Zuck their #1 enemy over this. You can't control the actions of other people, especially crazy people, so catering your policies to them seems absurd.

If everything is random, why even bother with policies like this, or any at all?

> I don't really see many legitimate concerns arising from banning white nationalism from Facebook.

I obviously disagree, but if you believe the reaction of extremists is not worth considering, at least you're being logically consistent.

> I just don't see what's being lost here, except white nationalist content on Facebook.

If any future negative reactions (in part due to this type of policy) of extremists are excluded from consideration, you are completely correct.

> who cares?

People attending a future church service might, if there is any validity to my belief that people aren't as simplistic as you think they are. But we'll never know for sure, because like you the media seems to prefer to not bother with messy details of why people commit atrocities when a simple stereotype works (and certainly sells more papers as an added bonus).

> Make another site if you want to discuss what they deplatform.

A perfectly fitting conclusion.


> This suggests the problem is my lack of ability to understand, when the problem is that there is no longer a common definition.

The only misunderstanding seems to be yours. We can clearly identify white nationalists as actual Nazis, no redefinition required.

> Censorship can be conducted by a government, private institutions, and corporations.

No one is saying that corporations can't censor things. I am saying that this is not censorship. Facebook deplatforming you is not the same as censoring you.

> maybe the principle is race after all?

The only thing we're discussing here is the deplatforming of white nationalist content from Facebook. Bringing other races or religions and their extremism in is totally tangential at best, and actively disingenuous at worst.

> Are you well read on the topic, or is this just a casual opinion that sounds about right?

There are many examples in the news recently. Here's one I read just the other day: https://www.washingtonpost.com/politics/2019/03/22/trumps-rh... White nationalist speech creates hate crimes; the correlation appears clear to me. Of course, it also seems logical to me without any sources, so sources are just a nice bonus.

> Can you think of any possible undesirable reactions (regardless of the soundness of the logic underlying the motivation) to this type of policy?

Essentially your argument seems to boil down to: we shouldn't create good policies because crazy extremists might have bad reactions to them. But crazy extremists have bad reactions to everything. We should be trying to deplatform and deconvert extremists instead of catering to their sensitive tastes. If that offends them and causes them to lash out, that is unfortunate, but they'll do that anyway. At least if they do it to this there might be less of them in the future.

> If everything is random, why even bother with policies like this, or any at all?

While you can't control how people react to what you do, you can do what you believe is right and hope it has a good outcome in the future. Facebook apparently agrees with me.


> The only misunderstanding seems to be yours. We can clearly identify white nationalists as actual Nazis, no redefinition required.

a) With no standard definition of the phrases?

b) Some people may be able to, but can Facebook accurately and fairly police speech at scale, given that there is no standard accepted meaning of many of these terms? Remember, we're not dealing with people wearing white hoods at a rally; we're dealing with speech, which is subtle. Sure, everyone can agree at the extremes, but close to the middle it gets complicated. It's like "I know pornography when I see it."

> No one is saying that corporations can't censor things. I am saying that this is not censorship. Facebook deplatforming you is not the same as censoring you.

https://newsroom.fb.com/news/2019/03/standing-against-hate/

"Today we’re announcing a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram, which we’ll start enforcing next week."

I'm not saying they don't have a right to do this, it is their platform after all. I'm not even saying that it is necessarily or certainly a bad idea. You and I disagreeing on this is fine and healthy. But how can you interpret "you can't say <x>" as not censorship? I asked you earlier for the definition of the word you're using, and you didn't reply. I ask again: please tell me the definition you're using for "censorship", with a link to the source.

> The only thing we're discussing here is the deplatforming of white nationalist content from Facebook. Bringing other races or religions and their extremism in is totally tangential at best, and actively disingenuous at worst.

No, that's the only thing the article is discussing. The HN discussion, and our thread in particular, are discussing broader principles of free speech and fairness, possible downsides of these types of decisions, etc.

>>> I believe that social media is helping incite these incidents, so I do legitimately believe the amount of people participating in hate crimes will be reduced as a result of this.

>> A perfectly reasonable theory, what evidence of this do you have? Are you well read on the topic, or is this just a casual opinion that sounds about right?

> There are many examples in the news recently. Here's one I read just the other day: https://www.washingtonpost.com/politics/2019/03/22/trumps-rh.... White nationalist speech creates hate crimes; the correlation appears clear to me.

"To test this, we aggregated hate-crime incident data and Trump rally data (a different variable than our topic of conversation, but again, no need to bother ourselves with precision or details) to the county level and then used statistical tools to estimate a rally’s impact. We included controls for factors such as the county’s crime rates, its number of active hate groups, its minority populations, its percentage with college educations, its location in the country and the month when the rallies occurred. We found that counties that had hosted a 2016 Trump campaign rally saw a 226 percent increase in reported hate crimes over comparable counties that did not host such a rally. Of course, our analysis cannot be certain it was Trump’s campaign rally rhetoric that caused people to commit more hate crimes in the host county."

You can find a correlation in data for anything you want to support, see: http://www.tylervigen.com/spurious-correlations

Now, stating that doesn't prove your claim wrong; I'm just pointing out that the one piece of evidence you finally provided is little more than an op-ed piece. We should be collecting more and better data on these things if they're important, so we can set evidence-based policy.

Erring on the side of caution (as Facebook is doing) is fine, but there's no need to tell lies in the process as far as I can see.

> Of course, it also seems logical to me without any sources, so sources are just a nice bonus.

This sentence explains this conversation, as well as the general state of political conversation in 2019: facts and evidence are considered completely optional.

> Essentially your argument seems to boil down to: we shouldn't create good policies because crazy extremists might have bad reactions to them.

No, that is one small point of my overall concerns (my "argument", if any, is that you won't provide any evidence for your claims, or assert that none is necessary when curtailing general free speech), and it's an important point. People are becoming incredibly politically polarized to the degree that it is causing strange behavior. Some people lose the ability to engage in logic & evidence based conversations on particular topics, others shoot up churches. Shit is pretty seriously fucked up and doesn't seem to be getting better. Being cautious and thoughtful about non-obvious risks seems like a good idea to me, not something to avoid.



