This is an A+ study which seems to confirm that banning subreddits can be an effective way to silence their inhabitants. "Fat hatred" as an epidemic has largely disappeared since 2015, for example.
It's nice to get some hard data to counter the theory that if you ban a subreddit consisting of undesirables, they'll simply invade other parts of reddit and continue. In reality, the other parts of reddit aren't nearly so tolerant.
So what's the implication? Well, ban judiciously. Getting rid of places where hatred festers is like putting out a fire. But it's obviously very tricky to do this.
See /r/physicalremoval for an example of a sub that was just banned for inciting violence.
> Getting rid of places where hatred festers is like putting out a fire
I assumed this was the mental model of those who would suppress "hate speech". Here's the problem: if there's a nugget of important truth hidden somewhere in there, something people won't say in public but which is vital, then you're not putting out a fire, you're putting a lid on a boiling pot. You can try to stop the steam from escaping but eventually it'll blow up in your face.
I agree. I see those as symptoms (assuming there's anything there at all). The important question is whether there's something driving them to that state.
> The important question is whether there's something driving them to that state.
No. There are millions (probably even tens of millions) of people who share circumstances with these racist/homophobic/misogynistic asshats and who never fall into these behaviours. You cannot excuse any of the *isms with economic anxiety or similar nonsense.
It might be that we miss some key insight into Ebola or virology by trying as hard as possible to contain and eradicate it. Still, we restrict ourselves to studying it in a lab, rather than treating an outbreak as the place to find a possible nugget of truth.
It's one of the most frustrating parts of discussions about this topic for me: we're constantly supposed to weigh the value of hypothetical goods and somehow judge our commitment to freedom of expression by how far we're willing to go in defense of possible outcomes we can't even clearly articulate.
>We bring it out in the open, where it can be seen and dealt with. Like a boil that can never be cured so long as it is covered up but must be opened with all its ugliness to the natural medicines of air and light, injustice must be exposed, with all the tension its exposure creates, to the light of human conscience and the air of national opinion before it can be cured.
"I had hoped that the white moderate would understand that law and order exist for the purpose of establishing justice and that when they fail in this purpose they become the dangerously structured dams that block the flow of social progress. I had hoped that the white moderate would understand that the present tension in the South is a necessary phase of the transition from an obnoxious negative peace, in which the Negro passively accepted his unjust plight, to a substantive and positive peace, in which all men will respect the dignity and worth of human personality. Actually, we who engage in nonviolent direct action are not the creators of tension. We merely bring to the surface the hidden tension that is already alive. We bring it out in the open, where it can be seen and dealt with. Like a boil that can never be cured so long as it is covered up but must be opened with all it ugliness to the natural medicines of air and light injustice must be exposed with all the tension its exposure creates, to the light of human conscience and the air of national opinion, before it can be cured."
Sibling comment has the context and, really, these are two different things. Letting a nasty forum stay open is not at all the same as protesting segregation.
The bringing-out is itself a kind of context. That kind of critical/analytical context is far, far different from simply setting up a forum in a huge social media site where impressionable people sometimes congregate. It's like the difference between a biology lab where students perform dissection and a dirty, stinking, public restroom where people graffiti on the wall and poop in the urinals.
> In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful.
Section 6.6 of the paper.
I'd argue that this paper caps off the pathway by which hate subs concentrate, get sequestered, fester, and are finally eradicated from a single platform. The hate just moves elsewhere and then begins to adapt.
Also - suppose the study were turned on its moral head, and people applied its lesson to banning subs focused on "social equality", "improving oneself", or "making a better world". What would the implications of that be?
In the hands of an attacker, what does this paper say?
It's too bad that they didn't have access to data about other sites like 4chan and Voat to see if those sites saw an increase correlated with the number of people who left Reddit.
> "Fat hatred" as an epidemic has largely disappeared since 2015, for example.
As a person who sometimes reads Reddit, it's really surprising to me to hear that's the case. It seems like the same kind of invective comes out every time the subject comes up.
It's way less prevalent than it used to be. I remember how for a while right after FPH became popular pretty much every post that hit the front page would have "found the fatty" or similar comments. Thankfully that shit mostly stopped (or at least seemed to) after FPH was banned and this paper shows it was a real change in the culture of Reddit rather than just my perception of it.
Is this not just a matter of "the joke of the times?" Whatever is currently trendy can almost certainly be found referenced in every front page thread. Currently it's DJT, previously it was the FPH, etc. The masses shift targets, but fixating on a current trend seems pretty steady.
You're confusing making fun of / criticizing with hatred. The fat acceptance movement is absolutely batshit insane: it promotes unhealthy lifestyles, causes early deaths, costs billions of dollars, and increases everyone's insurance costs.
I have yet to see anything of scale on reddit/voat that represents hate of fat people, as opposed to hate of the lifestyle or the dumb ideas that the movement comes up with.
> I can make fun of my wife and my kids and my friends, call them funny names. It doesn't mean I hate them.
That may be true. But it's also very clear to those that visited the sub that FPH was far beyond just "funny names" and definitely lived up to its name.
I'd say the comparison between "fat people should stop feeling bad about being fat and live their lives" and "inferior races should be exterminated" as similar positions is specious.
That's not apples to apples. Nazism is deadly, as is obesity (neither universally). Accepting people regardless of weight seems safer, and reaching out to people with questionable beliefs strikes me as productive.
Obesity is obviously deadly and fat "acceptance" is just as disastrous as meth acceptance. In fact, obesity kills far more than meth.
But my greater point is that we are obviously all okay with negative social stimulus (shaming and the like) to curb bad behaviors. This is how virtually all social conditioning works and it's part of how we have a civil society. I don't think it's a good idea to decide some negative behaviors should be immune to negative social stimulus. Apply the same rule to everything, one way or the other.
"fat acceptance is deadly" ≠ "obesity has costs to the economy"
Edit to add: 'yjftsjthsd-h points out that this isn't an apples-to-apples comparison. (https://news.ycombinator.com/item?id=15224303) In this entire thread a lot of people are unknowingly or willfully talking past each other. That's the tough part about heated issues like this. Actually taking the time to break down and listen to each other to understand where we're coming from first—even if we don't agree with the conclusions on the other side—is fundamental to us having a useful, constructive discussion.
Obesity is a negative for both the individual and society. The fat acceptance movement is terrible because being fat isn't some sort of permanent, genetic condition that's out of people's control. You control what you put in your mouth and how much exercise you get, it's a choice.
Should we have an ignorance acceptance movement, where we don't push people to educate themselves? That's a choice as well.
The fact remains that 95% of people going on diets to lose weight aren't successful in keeping it off long-term and the only method that seems to show significant success is surgery. I find it hard to believe that the only problem here is that fat people aren't aware they're fat. Harping on it in the way you're advocating just discourages people from doing anything healthy, since, you know, if getting slim is all that matters, why bother exercising if you're still fat at the end?
Except fat people already know all those things. And also most of what this is about isn't that level of neutral, but at least not hurtful criticism. Most of it is of the "ugh let's all point and laugh at the gross fatty" variety.
Only, it would seem, in Western culture. In places like Japan, fat shaming is absolutely brutal and is a significant part of the reason that they have less obesity. (Source: Conversations with Japanese people, in Japan, about why Westerners were so fat.)
I've spent time in Japan too (I majored in Japanese and I lived there for a year), and, while Japanese people don't have very much tact about body image issues, I find self-reports like that very unconvincing. Japanese people will also wax poetic about the uniqueness of their having four seasons, which would probably be news to most of my fellow New Englanders.
There are a lot of other things different about Japan (most people walk more and drive rather infrequently, food portions are smaller, the actual foods eaten are not the same, for instance) than just how tactless or not Japanese people are about weight. I bet if you looked at Nikkei with Japanese immigrant parents you would see rates of obesity that looked much more similar to American ones than Japanese ones too.
In addition, although it's a slower creep, Japan is seeing average BMIs go up.
I know "correlation/causation" feels like a dead horse, but I would want far better evidence that there's an actual link and not just confounding factors (ex. diet).
Well, according to the summary of this the level of hate speech usage in other subreddits didn't change. So basically Reddit got rid of r/fatpeoplehate and the stuff that went on there, but the culture in the rest of the site remained the same.
Reddit is still a cesspool and it's not like those people vanished. But there's a difference between giving those people a home, even if the intent is to quarantine them, and forcing them into the general population where hopefully they get shouted down. It's annoying when they're more visible but they become less of an organized movement.
The danger of censorship lies in the idea that while the initial generation of censors may have benevolent intentions, a future actor who inherits those powers to stifle communication or censor ideas may use them to subtly shift discourse or the overton window, push an agenda, or control people.
Why do free speech proponents fear the precedent being set by Reddit or Twitter to ban those they consider hate speech? They fear it because of the idea that over the course of years or decades that same power of censorship (or even just cultural idea that censorship is OK) may slowly move away from being used on obvious bad actors and hate speech and into the censorship of groups that don't deserve it. Today's well-intentioned stifling of hate speech is the same set of tools and culture that could enable totalitarianism tomorrow.
Edit:
I was asked who I'm referring to as a free speech proponent.
I would cite Zeynep Tufekci as being a really interesting author in this space as her works specifically examines how technology and social media is changing the politics of dissent and revolution. Here's an interesting article discussing her book:
"With the rise of social media in 2005, the networked public square shifted from blogs and websites to “massive, centralized platforms where visibility [is] often determined by an algorithm controlled by a corporation, often with the business model seeking to increase pageviews.” Traditional mass media, once the gatekeepers of social movements, were replaced by a few, very powerful “chokepoints” that monopolize ad dollars and users while encouraging surveillance. Facebook’s “real name” policy, for instance, can snuff out a movement before it even begins.
...
To be effective, censorship in the digital era requires a reframing of the goals of censorship not as a total denial of access...but as a denial of attention, focus, credibility...Censorship by disinformation focuses on attention as the key resource to be destroyed...Rand Corporation researchers refer to this method of propaganda as the “firehose of falsehood.” The primary goal is simple: to confuse and overwhelm the “audience”...The result is a frayed, incoherent, and polarized public sphere that can be hostile to dissent...”
This goal can unfurl in a number of ways, but platforms always play a central role. We have witnessed it with Russia’s army of trolls. With the stream of fake news that clouded Trump’s election. And with the governments of Turkey and Russia, which have maintained their grip on power by demonizing “alternative sources” of media—the kinds of sources where movements are likely to flourish—while ramping up control of traditional media. In other instances, platforms have made behind-the-scenes-deals with repressive regimes in order to allow access to their users. In places where Facebook is the internet, social movements have little recourse but to submit to the terms of the deal.
...
Because “these platforms own the most valuable troves of user data and control the user experience, they wield power to decide winners and losers for people’s attention."
I wish I could find the exact book, but there was a philosopher writing about anti-semitism after WWII.
I really want to give you direct quotes but I'm about to run to a meeting, I'll search harder afterwards.
The gist, as written in the ~1950s IIRC:
1. Anti-semites are immune to argument or criticism, because they are "Just Joking." They will spew hate speech and throw every argument they can at you, logical or otherwise, and outright lie, because they are "just forcing a discussion." If you actually pin them down and try to challenge them, they'll laugh you off. "I'm just starting the discussion here, are you really taking me seriously? Hahahahaha loser, triggered!"
2. It is acceptable to argue the possible benefits of levying an import tax on wheat from Algeria, because there are genuine positives and negatives to the transaction. However, it is not acceptable to argue over whether "All Jews should be killed." A standpoint that suggests the outright destruction of an entire people, or their enslavement or removal of freedoms, is so heinous as to not even be worth discussing the possibility of merit. In other words, anti-semitism is a garbage philosophy that our zeitgeist should not permit. It does not fall under the protection of "free speech"; it is simply rejected wholesale.
My point: Restricting hate speech is not a slippery slope for freedom of speech. I personally believe there is no universal morality, but if I had to pick, I'd argue that the best outcome for the human race would be a culture that purges all racist and other arbitrarily prejudiced mindsets that judge entire populations on untenable grounds (race, gender, etc).
It is not a slippery slope if you set clear boundaries.
Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past. It is not that they are afraid of being convinced. They fear only to appear ridiculous or to prejudice by their embarrassment their hope of winning over some third person to their side.
I'm pretty sure I've heard this argument made about the campaign for our recent President as well. HRC had to use words well because she, and her supporters, believe in them.
While DJT and his supporters believe in "LOL - just kidding - can't you take a joke? Don't be so uptight." So he was free to play with the truth. His supporters aren't trying to convince themselves - they know their arguments to be BS - instead they're trying to find the secret code to convince enough bystanders.
What's interesting to me, reading through this thread and particularly Sartre's description of anti-Semite thinking, is that the exact same thoughts are voiced in the rightist spheres I inhabit, but referring to the left. Particularly regarding discourse and respect (or lack thereof) for it.
In fact, it gets interesting when I think about speakers being no-platformed of late. When, IDK, Richard Spencer or someone gets protested away from some university, is this an example of:
- blatant disrespect for words, as shown through Spencer's poisoning the well as Sartre describes, or
- blatant disrespect for words, as shown through him not being allowed to speak?
It is a strange world indeed where both sides, referring to the same incident, take completely different positions, both in the name of free speech.
Yeah, totally agree on that. The worst part is that you can't offer a solid argument without being called biased by the other side.
I think this is one where you just have to call them wrong and tell them to do one if they disagree. Hate speech just isn't something that we have to accommodate, nor do we have to give credence to the arguments for it.
> What's interesting to me, reading through this thread and particularly Sartre's description of anti-Semite thinking, is that the exact same thoughts are voiced in the rightist spheres I inhabit, but referring to the left. Particularly regarding discourse and respect (or lack thereof) for it.
Probably because of postmodernism, which is perhaps best interpreted as "defense against the dark arts" for the left. The "disrespect for words" and meaning itself, a hallmark of postmodernism, has its origins in propaganda techniques developed by corporations for marketing purposes, and was weaponized by the right long before it ever got picked up by the left.
Ah, Karl Popper, interesting to see his work about tolerance make a resurgence of attention; it seems to come up in various political discussions from talking heads. He's right that lax tolerance welcomes the intolerant, and then the whole system crumbles. In the end we all have to make a stand for something; every robust structure needs a sound skeleton to stand upright.
Reading the "Bad Faith" section of the wikipedia entry was especially interesting when you compare the profile of your average alt-right person with the mindset Sartre is describing.
It's tough. I'd encourage you to take a step back and reflect on arguments used from many positions. Currently a lot of discourse is breaking down and polarization increasing because of a lack of reflection and understanding one's own biases and the arguments one's making and where they're coming from. No one has a monopoly on bad faith (unless you're considering humanity as a whole). I've found Jonathan Haidt's The Righteous Mind[0] to be really insightful and useful in this regard, particularly if one has a goal of effecting meaningful change.
I'm not saying anyone has a monopoly on bad faith. Your rebuke isn't overly harsh. If your criticisms were more pointed I may have a more directed response... but I'm not sure exactly what you have in mind when you say "discourse is breaking down", etc.
(EDIT: I'm perfectly aware I'm generalizing and speaking of stereotypes when I say "average alt-right person", but Sartre wasn't exactly describing separate individuals either...)
Are those generalizations useful and constructive, particularly in this case? I'd argue strongly no. In fact, I'd argue they're actively counter-productive. One of the self-described reasons for the feeling of alienation that many have expressed is exactly this type of generalization. If you have a goal of working against, this, it seems that you're actually reinforcing it when you do so.
I think if those generalizations let us draw a useful parallel to a historical example, we can note the similarities (and differences) and apply lessons learned from that era to our own.
(Not that Sartre included a chapter called "How to Have Avoided The Whole Affair" in his book...)
I do not buy the idea that a standpoint can be 'too extreme to be worth discussing'. Interracial marriage would have been one of those standpoints as recently as the 1960s. If you find yourself incapable of articulating actual reasons, not emotional appeals, for why certain things are wrong, that is a personal failing and certainly you shouldn't tackle those issues yourself. But those who have thought about these issues should be able to bring them forward and deal with them objectively and dispassionately.
The number of people who seem to operate on this mindset of "killing loads of people would actually be good, but its too nasty to speak of" is really fucking disturbing. There are objective, rational reasons why such things would be monumentally destructive. I also encounter a disturbing number of people who are earnest racists, who believe that certain races have inherent biological advantage over others, but who believe that they are not racists because they find acknowledging that as inhumane. They don't use epithets and they support affirmative action and diversity and such out of DUTY. All the while never being able to learn that they're WRONG.
I agree that nothing is too extreme that it shall be completely purged from all discussion. Is that what we are talking about here though? It's not like r/fatpeoplehate and r/CoonTown were communities of academic discourse regarding obesity and race. These were communities where people (probably mostly teenagers) spewed hatred for overweight or black people 24/7. Together I think these contrasting ideas... (1) hate speech should be mitigated, but (2) retaining free speech is paramount... engender a few interesting points:
1. Purging these communities from reddit does nothing to remove the underlying fact that given an anonymous social forum, some people will readily participate in hateful discussions targeting overweight people and minorities.
2. When removed from reddit, do these people find different alternative social outlets to share their hate (i.e. after the ban, did 4chan etc. experience a measurable increase in fat/racist posting)?
3. If #2 is yes, is it better or worse (and in what ways) that these people and their discussions are forced to move into forums that more readily accept, or even champion, hate speech?
4. Is there a net benefit or net detriment to keeping these shithead communities within an ecosystem like reddit where they, on one hand, can recruit more moderate minds, but on the other hand, are subject to the ridicule of more moderate minds?
I think it's entirely possible that there may be a net benefit to segregating problematic groups now, but it may not necessarily remain that way in the future. We are undergoing growing pains in our society stemming from a vast easing of the cost of self-publication along with massive siloing of opinions, such that people are not always presented with credible alternatives to their own point of view (or credible alternatives are rendered less credible through lack of trust).
In the same way that I think those being born now will have different views on privacy, spreading personal information and identity than those in their teens now, and those have different views on those topics than those in their thirties (the pendulum swings slowly, but I'm confident we'll settle on being more private than early adopters of social media have been, but less private than those that lived without it), I think future generations will come up with their own solutions to the problems of credibility, fake news, and the other myriad problems insular social network sub-groups have exacerbated.
Put another way, these hate groups are using tactics that are proven, but that have wider reach and more effectiveness given our current reality, so the only way to effectively fight the virality of that in the short term (until people learn to mitigate the worst effects on themselves) might well be to isolate the behavior. It's like any other virulent disease in that respect.
From my experience, many of the people that inhabit these hate groups often browse subs like r/incel (involuntarily celibate). It may be the case that simply quarantining these groups works to drop their numbers, because members of these groups will simply not reproduce and not spread their toxic ideologies onto their kids.
While funny, that's actually an interesting point. If the main spread of the ideology is through viral memetics, to the point that spread through cultural familial indoctrination is very low, then quarantining the practitioners in some manner is doubly effective. This works even without them having problems procreating, as the same outcome would be observed if offspring are less likely to have similar beliefs (which I'm not sure is true).
I suspect if there is high overlap between the involuntarily celibate sub and the fat shaming sub, there is perhaps something deeper going on in the psyche of these people related to their own body-related issues.
Regarding 3: it is very easy to create an echo-chamber on reddit. Readers on /r/the_donald can stay on that subreddit, or on a multireddit, and completely ignore mainstream subs. They also rigidly enforce the echo chamber by banning outside perspectives.
There is a very deep irony in these subreddits and their champions holding high the banner of free speech while simultaneously exercising their right to censor any dissenting opinions from the purulent conglomeration of hatred.
The sad thing is that when people say they champion free speech for even the most violent and malicious groups of people that means they tacitly support those same groups clamping down on speech they dislike. Because ultimately that is their end goal and you can see it through their actions and speech. A fundamental misunderstanding of what constitutes freedom of speech has led down it being used as a weapon against free speech.
> 2. When removed from reddit, do these people find different alternative social outlets to share their hate (i.e. after the ban, did 4chan etc. experience a measurable increase in fat/racist posting)?
Yes. See Voat.
> 3. If #2 is yes, is it better or worse (and in what ways) that these people and their discussions are forced to move into forums that more readily accept, or even champion, hate speech?
Not really. I think echo chambers are good for peace of mind and general contentment. I think they also tend to accelerate extreme viewpoints and further entrench the balkanization of the Internet. It's like watching the Tower of Babel live.
> 4. Is there a net benefit or net detriment to keeping these shithead communities within an ecosystem like reddit where they, on one hand, can recruit more moderate minds, but on the other hand, are subject to the ridicule of more moderate minds?
Something new can only be birthed from the meeting of two or more different things. If we value free speech and a common identity/norms, then it is required for us to be able to reconcile our differences in some manner.
> I'd argue that the best outcome for the human race would be a culture that purges all racist and other arbitrarily prejudiced mindsets that judge entire populations on untenable grounds (race, gender, etc).
Except most people are not good with nuance. Distinguishing justified judgment from unjustified is pretty much hopeless. Just read any thread anywhere discussing any politically charged issue, like sexism and racism. Purely factual statements are always labelled sexist or racist if they don't support a particular narrative, and sexist or racist sentiments that follow that narrative are applauded. And of course actual sexism and racism is similarly absurd.
The point being that the "untenable grounds for prejudiced mindsets" is sufficiently slippery in and of itself that you haven't escaped the slippery slope argument. People will understand "untenable grounds" to mean what they want it to mean, and we're right back here where we started.
The argument for unrestricted free speech then is, do you have more confidence in the position that speech of type X is unequivocally morally wrong AND that whoever is enforcing morally righteous speech understands the proper nuance not to overreach, or do you have more confidence in the position that people ought to be able to speak about their beliefs without violent reprisal? Because I think these are mutually exclusive.
"It is not a slippery slope if you set clear boundaries."
How could those boundaries ever be clear if they aren't near absolute? Definitions, interpretations, and the cultural mindset change constantly. Prejudice today is different than yesterday, and will change again tomorrow.
I see a remarkable tendency of people to assume that liberal / progressive ideas will always remain free to express, so there's no need to protect all speech. I don't see how this is true.
Except anti-Semitism isn't just "kill all Jews". If you're the ADL, then it might be "Israel sucks, screw those guys". They'll label it anti-Semitism, lump it in with "garbage philosophy" and shut down discussion. Exactly what people are worried about.
So sure, feel free to censor "kill all Jews" but know it is 100% a slippery slope, in no small part because once you are allowed to delete opponents that meet that criteria, you'll try to expand the criteria.
> I don't know how you could be aware of what's going on in academia and believe this.
This talking point is the epitome of filter bubble bias.
First, this is usually characterized as a somewhat recent liberal attack on speech. But the most blatant and egregious examples of speech restrictions at colleges and universities in the USA all come from right-wing christian evangelical colleges. If you think being a conservative at Berkeley is bad, try being liberal (or even conservative but non-christian) at Wheaton or Hillsdale. The only examples I know of students actually expelled for blatantly political reasons all happened at christian colleges.
Second, nearly all universities are infinitely more supportive of free speech -- in policy and in practice -- than other employers or businesses.
This is especially true for private secular universities, who often embody the values of free speech without any legal obligation to do so. MIT can tell anyone they want to "get off my property" but in practice allows, hosts, and even encourages an extreme diversity of viewpoints.
You can probably build a case against my assessment if you spend all day scouring the past decade for counter-examples. In fact, that work was (literally) already done. But "cat everything | grep 'my view point'" does not a preponderance of evidence make.
Religious colleges are restricting speech mostly due to the religious nature of the institution though. A church isn't required to give equal time to atheists in its pulpit, and it's not "political" if the expulsion is due to differences in religious doctrine.
I wager that many of the expulsions at those colleges are due to said liberal holding a position counter to religious doctrine; like homosexuality is not immoral, or premarital sex is okay.
Let's try this: "liberal-leaning institutions are restricting speech mostly due to the (choose:political OR diverse OR learning-focused OR religious!) nature of the institution though. A private entity of any sort isn't required to give equal time to people it disagrees with in its space."
> and it's not "political" if the expulsion is due to differences in religious doctrine. I wager that many of the expulsions at those colleges are due to said liberal holding a position counter to religious doctrine
See, you've already conceded the only viable response to my above rewriting. Most all controversial differences in religious doctrine ARE political. Your distinction is one without a difference. The political nature of these differences in doctrine is what makes them worth firing over in the first place!
Unless you want to point me to the Mathematics professor who got fired for critiquing the finer points of the Church's justification of its position on transubstantiation. (This century, please :-) )
If you want to critique Hillsdale AND Harvard et al, go right on ahead.
This is delusional. How many colleges are out there like Wheaton (?) or Hillsdale? Is the ratio less than 1000:1?
And anyway it doesn't address the point. Restrictions on speech at Hillsdale don't come from rules against "hate speech". The real problem with "hate speech" is it doesn't actually mean anything. You can keep redefining it until only speech you like gets spoken.
> How many colleges are out there like Wheaton (?) or Hillsdale? Is the ratio less than 1000:1?
Yes. In fact, I even said so in the post you're replying to. In the very next paragraph, I state that "nearly all universities" are extremely welcoming of free speech.
In other words, most places are great, and the vast majority of places that aren't great wrt free speech are cloisters of conservative christiandom, not liberal utopias.
My point is, if you go through each university and evaluate its attitude toward controversial speech acts on a case-by-case basis, you'd be hard pressed to make the argument that the data set, in aggregate, supports the "liberal censorship on college campuses" narrative.
In other words, the only way to reach this conclusion is by living in a filter bubble where you a) ignore the vast majority of circumstances where universities -- especially non-public universities -- welcome controversial speech; and also b) ignore the fact that the most obvious examples of educational institutions which do not welcome controversial speech are all staunchly conservative.
Stop and think for a second. How many actual concrete examples of censorship on college campuses can you think of? 10? 20? 100? There are literally millions of political speeches and politically charged courses every semester on college campuses.
Taking millions of data points and filtering out 10 or 100 of them, and then generating a perpetual outrage machine out of your teeny tiny artisanally crafted sample set, is the definition of a delusional filter bubble.
> Restrictions on speech at Hillsdale don't come from rules against "hate speech"
1. Why in god's name does it matter what you call it? A restriction on speech is a restriction on speech. Either you value free speech or you don't.
2. Here's one plausible definition of hate speech: "behavior that -- on the part of individuals or student organizations -- violates the bounds of common decency and civility... or that disrupts the climate of academic reflection and discourse proper to serious study."
Guess which college categorizes this behavior as a valid reason for expulsion.
How many instances does Evergreen rate? How many instances of self censorship do you think such an atmosphere creates? Do you think actions like this only result in one person being afraid to speak up? Does it only matter if someone is willing to go to a journalist and make themselves a target?
> I guess you could go through the case log of FIRE.
As I stated above,
>> You can probably build a case against my assessment if you spend all day scouring the past decade for counter-examples. In fact, that work was (literally) already done
In that comment, I was referring to the FIRE database.
So I already addressed the rest of your argument:
>> But "cat everything | grep 'my view point'" does not a preponderance of evidence make.
And in the post after that:
> There are literally millions of political speeches and politically charged courses every semester on college campuses. Taking millions of data points and filtering out 10 or 100 of them, and then generating a perpetual outrage machine out of your teeny tiny artisinally crafted sample set, is the definition of a delusional filter bubble.
I'm glad organizations like FIRE exist!
I agree places like Evergreen have bad cultures.
But the "liberal threat to free speech on college campuses" is 1) extraordinarily over-blown to the point of absurdity; and 2) places undo emphasis on liberal institutions when the most heinously anti-free-speech institutions are all conservative.
You still haven't explained what a "preponderance of evidence" looks like. You keep issuing blanket dismissals, asserting that the liberal threat to free speech is overblown.
The claim at hand: "colleges are restricting conservative speech".
I make two counter-points:
1. Writ large, Colleges remain some of the most liberal institutions when it comes to free speech.
2. The emphasis on "liberal" is misplaced since all the best exemplars are religious universities. (I think we agree on at least the second part of this claim).
Regarding 1, I'm not sure what a "preponderance of evidence" looks like for this claim, but I'd eat a shoe if even 0.001% of controversial speech acts that happen on college campuses result in any action, let alone something that actually affects anyone's life in any material way.
Combining FIRE's cases and disinvitations lists, I have something far south of 1000 total data points. People talk about controversial things a lot at colleges and universities.
Delusion definition[1]: an idiosyncratic belief or impression that is firmly maintained despite being contradicted by what is generally accepted as reality or rational argument, typically a symptom of mental disorder.
Note that GP chose to describe "filter bubble bias" which is a neutral observation, whereas P chose to describe GP's idea as "delusional" which carries the connotation of a mental disorder.
Both definitely support your point that for holding different views than the majority, people were attacked and/or suppressed. Here's the thing though: Wheaton is an evangelical college. Is it expected that an evangelical Christian college is going to be open to or supportive of a professor saying that Muslims and Christians worship the same God? Or that marriage is a union between more than just a man and a woman? I think that's kind of like going into r/The_Donald and then posting pro-Hillary comments. It's not the kind of venue that is established for dissenting views. They're built for proselytization. Furthermore, associating evangelical universities with academia as a whole seems like a mismatch considering how specific their focus is.
On the other hand, one can argue that secular universities such as Berkeley and MIT have a much different expectation. It's easier to argue that as they are secular universities (and in the case of Berkeley public and home of the FSM movement), they are meant to represent a much wider set of views.
But when you see dramatic shifts in the political leanings of professors, then followed by conservatives being attacked on campuses that are dominated by liberal thought you have to wonder. I'm not sure what qualifies as a preponderance of evidence in this case according to your words.
That being said, I would also argue that there is plenty of persecution of liberals in deeply conservative areas as well. It's not really about conservative vs liberal, so much as it's about how majorities operate and deal with minorities.
> Furthermore, associating evangelical universities with academia as a whole seems like a mismatch considering how specific their focus is.
This is just tautologically false. IDK what else to say about this particular quote.
I don't really understand the rest of your first paragraph. Why does MIT have a special duty to respect the speech of e.g., neo-luddites or young earth creationists where Wheaton has no reciprocal duty?
I believe free speech is imperative to higher education. I also think the current hysteria about liberals attacking free speech on college campuses is 1) ridiculously overblown at 99% of institutions; and 2) hilariously misdirected given that the worst offenders are conservative christian institutions.
> On the other hand, one can argue that secular universities such as Berkeley and MIT have a much different expectation
Berkeley absolutely does. They are publicly funded. This muddies the waters some because Berkeley almost certainly allows some speech -- on the left and the right -- which, if not for constitutional necessity, it might prefer to silence.
So the comparison is unfair both to Wheaton and to Berkeley. Unfair to Wheaton because it doesn't have a legal requirement to allow speech. And unfair to Berkeley because it can't quietly turn people away (if leftist-Milo wanted to give a speech at Wheaton we would never hear about it, but Milo can generate an entire news cycle out of Berkeley because they're obligated to not turn him away.)
This is why I put special emphasis on a private secular vs. private evangelical comparison. It's fair to both sides.
I'm not sure why MIT has any special obligation to allow free speech that isn't shared by Wheaton. Care to explain?
> But when you see dramatic shifts in the political leanings of professors, then followed by conservatives being attacked on campuses that are dominated by liberal thought you have to wonder. I'm not sure what qualifies as a preponderance of evidence in this case according to your words.
Look, secular universities just don't discriminate on the basis of political belief. Outside of a few very niche, very tiny, very not-influential departments, this just doesn't happen either explicitly or implicitly. When it comes to hiring new professors, teaching matters. Research matters. Personal political beliefs do not.
EXCEPT conservative christian colleges, which uniformly want a whole damn essay dedicated to convincing them candidates are christian enough and conservative enough to work there.
Liberals are over-represented on college campuses and in the faculty. There are a lot of reasons for this. (I conjecture that one major factor is the 5-7 year pledge of poverty that precedes an academic career. But again, lots of reasons.)
But IMO there is a very important difference between not correcting for an existing bias in your hiring pipeline, and actively and explicitly introducing political bias into your hiring pipeline. I assert the latter rarely if ever happens at liberal-leaning universities, but is absolutely part of the explicit hiring policy at many conservative-leaning universities.
> It's not really about conservative vs liberal, so much as it's about how majorities operate and deal with minorities.
My personal experience -- and I think the evidence backs me on this -- is that liberal universities are far more welcoming to conservative viewpoints than the other way around.
> This is just tautologically false. IDK what else to say about this particular quote.
Re: the rest of your first paragraph, this seems like a huge double standard. Would you be happy with MIT throwing out every climate skeptic and neo-luddite if only they wrote "advancement of science and technology" into their charter?
Not really. Religion, as in organized religion, exists to promote a specific belief system. Science has a much different mission in its search to describe the world accurately. A dissenting view there can potentially be the truthful path.
If MIT threw out every scientist who opposed global warming, they'd be little better than a religious college.
> This is why I put special emphasis on a private secular vs. private evangelical comparison. It's fair to both sides.
> I'm not sure why MIT has any special obligation to allow free speech that isn't shared by Wheaton. Care to explain?
MIT has an obligation if it wants to present itself as a college that promotes education in general. Wheaton is there to teach people about a Christ-filled life. MIT makes it clear they value innovation and groundbreaking ideas and discoveries. There's no way to do that effectively without considering as many points as possible.
> Look, secular universities just don't discriminate on the basis of political belief. Outside of a few very niche, very tiny, very not-influential departments, this just doesn't happen either explicitly or implicitly. When it comes to hiring new professors, teaching matters. Research matters. Personal political beliefs do not.
Freddie has written a lot on this subject in general. In fact he's been routinely attacked and DDOSed for critiquing the liberal culture both on campus and in the institution. He's posted some really controversial stuff about how bad the research is in the liberal arts. It's also worth noting that he's hardly a conservative and is quite soundly a liberal.
Money quote:
"And while I think conservative students can mostly get by fine on the average campus, I really can’t imagine going through life as a conservative professor, particularly in the humanities and social sciences. Is that a problem? That depends on your point of view. But if it’s happening, shouldn’t we talk about the fact that it’s happening?"
> "My personal experience -- and I think the evidence backs me on this -- is that liberal universities are far more welcoming to conservative viewpoints than the other way around."
I don't know how to measure the merits of expulsion/suspension vs. social ostracization. I also have known a fair number of conservative people at Berkeley due to my efforts to reach out to other points of view in my student days. I don't think "welcoming" is the word that they would use when it comes to liberals and conservative viewpoints.
> MIT... science... There's no way to do that effectively without considering as many points as possible.
Respectfully, exactly the opposite is the case. Ask any mathematician at a top university how many crackpot letters they have to throw out every year. Ask any biologist how much Real Work they would get done if they had a department half-full of young earth creationists.
Universities are bastions of free speech, but they are also inherently exclusionary. And places like MIT regularly let down that exclusionary guard to invite speech that explicitly interferes with the efficiency of their institutional truth-finding mission. An admirable thing, IMO.
At universities, free speech serves a political purpose far more than it serves a scientific purpose.
I don't buy your distinction between Wheaton and MIT. Free speech pre-dates the scientific method and exists independently of scientific inquiry.
Also, at places like Wheaton and Hillsdale, politics are part-and-parcel with religion. Demanding religious belief, while allowing a broad spectrum in which to express that belief, is one thing. But the divide between politics and religion at these places is tenuous at best.
> I really can’t imagine going through life as a conservative professor... particularly in the humanities and social sciences
In the sciences no one cares. Except Wheaton et al, who are afraid of the liberal atheist branches of mathematics or something...?
Humanities and social sciences vary by field. IDK a lot about most of them. But the one person I know from Purdue -- where Freddie is from -- is in the humanities and very conservative (and doesn't hide it or anything).
Maybe Freddie should've taken a leaf out of your book and talked to more people while at Purdue -- the unimaginable was right in front of him ;-)
> I don't think "welcoming" is the word that they would use when it comes to liberals and conservative viewpoints.
Sure, "more" is relative and being a minority always sucks.
Still, MIT might make you feel alone. Hillsdale will just expel you.
> I don't buy your distinction between Wheaton and MIT. Free speech pre-dates the scientific method and exists independently of scientific inquiry.
You can publish research that challenges the status quo so long as free speech is respected. Do you think it's a coincidence that Socrates challenged the physicists in Athens? (Granted, that was one of the reasons they executed him.) Or Aristotle? I can't think of better examples of early science, and those came up in the first place we know of that practiced direct democracy.
> Also, at places like Wheaton and Hillsdale, politics are part-and-parcel with religion. Demanding religious belief, while allow a broad spectrum in which to express that belief, is one thing. But the divide between politics and religion at these places is tenuous at best.
Again. They're evangelical colleges. What do you really expect?
> Humanities and social sciences vary by field. IDK a lot about most of them. But the one person I know from Purdue -- where Freddie is from -- is in the humanities and very conservative (and doesn't hide it or anything).
> Maybe Freddie should've taken a leaf out of your book and talked to more people while at Purdue -- the unimaginable was right in front of him ;-)
Yes, one person is a preponderance of evidence. Not countless articles of students and professors being harassed and attacked for their conservative views in an environment that is supposed to be about higher truth. And some of those views are becoming less and less conservative over time.
"For years and years I have denied the idea that campus is a space that’s antagonistic to conservative students. I thought Michael Berube’s book What’s Liberal About the Liberal Arts? was the last word on the subject. I still reject a lot of the David Horowitz narrative. But as a member of the higher education community I just have to be real with you: the vibe on campus really has changed. I spent years teaching at a university in a conservative state recently and I was kind of shocked at how openly fellow instructors would complain about the politics of their students, how personal they go when condemning their students who espoused conventional Republican politics. I encounter professors all the time who think that it’s fine for a student to say “I’m With Her” in class but not for a student to say “Make America Great Again” — that’s hate speech, see — despite the fact that both are simply the recent campaign slogans of the two major political parties. Yet those profs recoil at the idea that they’re not accepting of conservative students.
I hear people say that they won’t permit arguments against affirmative action in their classes — hate speech, again — despite the fact that depending on how the question is asked, a majority of Americans oppose race-based affirmative action in polling, including in some polls a majority of Hispanic Americans. The number of boilerplate conservative opinions that are taken to be too offensive to be voiced in the campus space just grows and grows, and yet progressive profs I know are so offended by the idea that they could be creating a hostile atmosphere, they won’t even discuss the subject in good faith."
"The idea that we need any intellectual diversity at all invites immediate incredulous statements like, “you’re saying we should debate eugenics?!?,” as though the only positions that exist are the obviously correct and the obviously horrible. The idea that you’re supposed to read the publications of the antagonistic viewpoint has been dismissed as a relic. People call for conservative books to be pulled from library shelves; they insist that the plays of conservative David Mamet have no place in the contemporary theater;"
> Sure, "more" is relative and being a minority always sucks.
> Still, MIT might make you feel alone. Hillsdale will just expel you.
I don't think forced resignation and expulsion are much different.
I've already conceded that most universities are majority liberal and that being in a minority is always uncomfortable.
The rest of your anecdotes are just that. So one guy has some bad coworkers. BTW, my response is more than just any old anecdote -- It's a refutation of the single anecdote that you're offering. Same time, same place.
Do people get harassed and fired for political beliefs? Absolutely. And the fact that organizations like FIRE fight back when this happens is great.
But the "liberals attacking free speech on campus" thing is a bunch of politicized bullshit. At most non-religious universities, this isn't happening in any meaningful sense.
Let's put this in perspective. FIRE has less than 1000 cases over the past decade that I can find on their website. Literally millions -- probably tens of millions -- of controversial speech acts happen on American campuses every year.
Universities are still bastions of free speech. If you're conservative, you might find a lot of people who disagree with you. But the odds are extraordinarily small that you'll be silenced in any meaningful way. Like literally a one in a million+ shot.
I'm not saying that it's OK when it happens. I'm just saying the whole "liberals shut down free speech on campus thing" is over-dramatized echo chamber bullshit, that you only notice because there's an entire cottage industry generating outrage every time that 1/1000000 event happens.
this reply probably doesn't matter since this thread is a few days old, but here goes anyway:
> I encounter professors all the time who think that it’s fine for a student to say “I’m With Her” in class but not for a student to say “Make America Great Again” — that’s hate speech, see — despite the fact that both are simply the recent campaign slogans of the two major political parties.
here's the thing: "Make America Great Again" is a dog-whistle slogan, pining for a time that was racist. it is, to a lot of people (myself included), an inherently racist slogan. so, the problem is that a statement which was "simply the recent campaign [slogan]" of a major political party was, in fact, racist. i know it seems crazy to some people, but one of our major political parties ran an overtly racist candidate, on a (barely) covertly racist platform.
EDIT: sometimes the platform was overtly racist too. though it tried its best to make it palatable for people who wouldn't want to identify as racist.
the power to 'purge' actors exhibiting what in practice can be arbitrarily and fluidly defined (eg, cultural appropriation) is not a power i want anybody to have.
>the power to 'purge' actors exhibiting what in practice can be arbitrarily and fluidly defined (eg, cultural appropriation) is not a power i want anybody to have.
It's a power everyone has. You should read Habermas (https://en.wikipedia.org/wiki/Public_sphere). The entire purpose of civil society is to draw the boundaries around what's acceptable and create the norms that allow people to work out their differences.
Allowing your public sphere to be filled up with people hostile to the concept of being able to work out their differences leads to a breakdown in social order and the rise of totalitarianism.
When control of the modern public sphere is vested in the owners of social networks and internet infrastructure, that balance of power becomes rather lopsided.
>When control of the modern public sphere is vested in the owners of social networks and internet infrastructure, that balance of power becomes rather lopsided.
Absolutely, but I'm not sure that's germane to this topic. That's an argument for why we should have open standards and be opposed to monopolies, it's not an argument for not having moderation.
As it stands though, many social networks are dominated by the perspectives of a minority of technically savvy people with an abundance of free-time and a willingness to spam propaganda. They get away with it because these social networks were designed as ad platforms, so spamming propaganda in peoples' faces is literally why they exist and they don't want to invest too many resources into letting you control who gets to shove things in your face and who doesn't.
> so heinous as to not even be worth discussing the possibility of merit
Right, so, what's the clear boundary here? If it's specifically mass murder, ok. "Arbitrarily prejudiced mindsets", however, is not well defined, and therefore subject to slippery slope effects.
Almost all prejudices start with some nugget of truth, if only a correlation with the causation backwards. Further, humanity frickin loves lumping people trying to tell the truth in with those who build nasty stereotypes on said truth. In fact, there is no sharp line between them. For instance, take the guy in this thread getting downvoted to bedrock for pointing out that being fat is bad for you. Is that person being hateful? (I can't tell, but I collapsed the thread, so maybe there's real evidence in there I missed.) When do you start restricting that sort of thing, just pointing out awkward facts? Good luck coming up with a clear boundary we can all agree on, because oh yeah, you have to get consensus on that or you've solved nothing.
Notice below I stated that I believe racism is unhealthy for the human race as a whole - I can back my argument up.
A racist may be capable of making a racial purity argument to the same point; however, the racist's argument will infringe on real freedoms (give certain people less because of the color of their skin), whereas my argument is to snip that mindset out of the culture as a whole.
I get what you're saying, all morality is arbitrary, and yes I agree. However, I've made a decision, and you don't exactly get a lot of points for a morality argument as a perfect nihilist. At some point you have to put some skin in the game, and that's what I'm doing here.
For a good example of the slippery slope in action wrt anti-Semitism and hate speech, look at how the BDS movement is attacked as both of these things.
Note: I don't support BDS. But I don't consider it to be either anti-Semitic or hate speech, and the fact that many people who support hate speech legislation genuinely believe that BDS does qualify as hate speech puts me firmly into the anti-hate-speech-laws camp. Simply put, I don't trust democratic majoritarianism to provide a definition of these things that I would consider acceptable, much less good.
Furthermore, given that hate crimes are directed towards minorities, and hate speech legislation is passed by majorities, I would posit that in any case where that speech actually makes a difference (i.e. where it can translate to tangible action), the majority will always immunize itself. Meanwhile, it will use such laws selectively against minorities (e.g. see all the right wing politicians referring to BLM as "hate group").
As objectively abhorrent as anti-semitism is, I unfortunately think the issue is not so cut-and-dry for a variety of reasons.
First, just to respond to those 2 points:
1. There's a spectrum of anti-semites. On many parts of the Internet, you will see many anti-semites who won't even hide under the veil of "just joking". They will tell you to your face that they sincerely believe that either the Holocaust was a hoax, or that it was a good thing. (Or it was a hoax but should happen for real.) Many of these people don't come to these beliefs purely through childhood indoctrination but rather exposure to communities and information sources which, to them and in the context of their existing worldview, seem rational and accurate and polarize them further. When you debate an anti-semite on the Internet, you'll often find that a good percentage don't argue in bad faith and actually do cite what they consider to be credible evidence of their claims. This is in the form of links to websites that look superficially scholarly and professional, images and infographics that claim to be backed by studies, and long documentary-style videos. No, the evidence is not good at all (and any accurate information they do manage to present may support something or other but doesn't support their anti-semitic claims), but many genuinely and truly think the evidence they're linking you is damning.
These aren't all random idiots purely looking for an excuse to hate on Jews. Many are brainwashed from what they read and are exposed to, just like any extremist ideology or cult. They're conspiracy theorists.
You can't fight that sort of thing with censorship: you're proving their point when you do that. "They want to hide this information because they know we're right. They're trying to hide The Truth from the goyim masses. We have facts and data on our side, and all they can do is censor and ban us." As futile as it almost always is to actually have a logical discussion or argument with such an individual, rhetoric must be fought with rhetoric, plain and simple. You can't fight rhetoric with "your argument is too ridiculous or evil to be dignified with either a response or even an allowance of existence".
2.
>However, it is not acceptable to argue over whether "All Jews should be killed." A standpoint that suggests the outright destruction of an entire people, or their enslavement or removal of freedoms, is so heinous as to not even be worth discussing the possibility of merit.
First, only a subset of anti-semites believe or say that all Jews should be killed. I know that's a weird thing for me to say. Any kind of anti-semitism is of course despicable and irrational, even the lightest forms of it, but the distinction still needs to be made. It's much easier to talk to and potentially convince someone who merely believes in some common conspiracy theories and harbors low-level ill will towards Jews compared to someone with deep, zealous animosity and murderous intent towards some, most, or all Jews.
Second, as beyond disgusting an argument as that is, even such statements as "all Jews should be killed" must be (in a broad sense) "open for debate" if you truly believe in freedom of speech. If anti-semitism and other fringe ideologies can just be dismissed as banned topics, anti-semites (and far-right people who are against anti-semitism but find censorship to be anathema) will become more empowered, more secluded, more polarized, and potentially more dangerous.
Let's try to analyze what could lead a person to genuinely say or believe something like "all Jews should be killed". Let's assume they're being sincere, and aren't saying it just to be ironic or edgy or to piss people off. Many of these people have above average IQ and seem otherwise kind of normal. So how could this happen?
From what I've seen and experienced, the most common reason for this kind of sentiment is a delusional fear: they actually, truly believe that Jews are actively trying to gang up and exterminate them. Not just them in particular, but all Western Europeans/Americans/capitalists/right-wingers/white people or whatever their particular flavor of Nazism/fascism is. Of course, this is a ridiculous and false delusion, but in their mind it is a clear and present danger, and in their mind all of the evidence actually points to this being true in reality. They really believe they are in some kind of existential war with an organized "Eternal Jew" - a struggle for survival. And they believe that expelling or killing Jews is purely an act of self-defense.
Take a moment to just accept the craziness of it all. These people aren't playing with the same cards as us. They're fundamentally working off of a different impression of reality. They're on an unsupported fork of the "global/history" repo. All of their ideas and thoughts can be traced deterministically from initial delusional seeds that they accepted and integrated long ago.
I think the best explanation for most of these kinds of anti-semites is that they are a delusional/psychotic community of interrelated cult-like groups. And sure, there's generally no convincing crazy or delusional people, but you definitely aren't going to convince a, say, schizophrenic that he/she just needs help by decreeing "it is now illegal to think or say the things you are currently thinking and saying". (And no, I'm definitely not saying most anti-semites have schizophrenia or something, but I also would not be surprised if there's a disproportionate amount of overlap. I'm just using schizophrenia as an analogy.)
It's not easy, but racists and sexists can sometimes be convinced to soften their views or even change their stances. For example, Daryl Davis, a black man, managed to befriend many KKK members and leaders and convince them to completely rethink their entire ideology. (https://en.wikipedia.org/wiki/Daryl_Davis#Career_as_writer_a...) He treated them with decency, kindness, and an open mind (even though they did not deserve it), and many of them eventually came to treat him the same way and reject much of their ideology.
Or take the case of the son of the founder of Stormfront. Stormfront is (or was, before their domain registrar revoked their registration) one of the biggest and most fervent anti-semitic/racist/extreme-right communities out there. Derek Black, the founder's son, helped maintain the website until he ended up befriending some Jewish people in college. After merely being exposed to the fact that they and their families were kind, ordinary people for a few years, he renounced his racist views and dissociated himself from the website and his family. (https://www.washingtonpost.com/national/the-white-flight-of-...)
Both of these situations required years of exposure to opposing viewpoints presented in a respectful way to change their views.
They will never, ever be convinced to change their views if your response is solely to prevent them from discussing certain things or to ban them from places or even to advocate for (or ignore) violence towards them. (One might say "but they advocate for violence towards minorities", but as explained, that can be an oversimplification. And even when it is literally true, two wrongs don't make a right).
>My point: Restricting hate speech is not a slippery slope for freedom of speech. I personally believe there is no universal morality, but if I had to pick, I'd argue that the best outcome for the human race would be a culture that purges all racist and other arbitrarily prejudiced mindsets that judge entire populations on untenable grounds (race, gender, etc).
Again, we're entering dangerous territory. A private community/company, even one acting as a de facto common carrier of communication like reddit, certainly has the right to ban people who make heinous speech like this. But as a society, ideas like "we must purge all racists and arbitrarily prejudiced mindsets from our population" are untenable, unsustainable, and may be playing right into the other side's hands even aside from the slippery slope arguments and how exactly we should or could set clear boundaries of racism and prejudice. Even if we could theoretically set these clear boundaries (and we can't), this is still folly. The founding fathers and their predecessors in the Enlightenment knew this all too well. Freedom of speech really is an all or nothing thing. If something happens purely within the bounds of speech (e.g. the speech is not directly inciting or calling for a crime to occur), it must be permissible under the law.
I think this paper shows that this did help improve the overall balance of toxicity in reddit. But it does not (and pretty much cannot) show how this affects the greater ecosystem of hate speech online. No doubt the vast majority of the people who posted in those banned subreddits have been using many other websites and applications before the ban and are continuing to organize and communicate after the ban. They're just not doing it on reddit anymore.
And this is not something I would normally ever include in a comment, but since this is such a sensitive topic and since my comments here could plausibly be seen as some kind of tacit concern trolling that's secretly supporting anti-semitism or Nazism, I do feel it's relevant to mention that I'm Jewish myself and am extremely aware of the dangers and evils of anti-semitism and fascism.
If 'blatant sexism' is watered down to include discussing different averages, people are going to start reasoning that different averages can't be that bad to discuss, so sexism can't be that bad either.
It’s something of the same principle that was expressed in the post:
"If the elites, the technocrats, the 'Cathedral'-dwellers, were willing to lie to the masses about humans being blank slates—and they obviously were—then why shouldn’t we assume that they also lied to us about healthcare and free trade and guns and climate change and everything else?"
...
Elsewhere, I have seen this phrased as "One man's modus ponens is another man's modus tollens." One person asserts A -> B and mentally holds (A), therefore concludes (B). Someone else hears A -> B, mentally holds (~B), therefore concludes (~A).
-----
If you can accurately model the (~B) -> (~A) inferences that are happening, you have much, much better odds of being able to communicate effectively with these people.
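For concreteness, the two inference patterns being contrasted can be written out explicitly. Here is a minimal sketch in Lean (just the textbook rules; nothing here is specific to any particular argument in this thread):

    -- Modus ponens: from A → B and A, conclude B.
    theorem modus_ponens (A B : Prop) (h : A → B) (ha : A) : B :=
      h ha

    -- Modus tollens: from A → B and ¬B, conclude ¬A. This is the listener
    -- who is certain B is false, hears "A implies B", and concludes not-A.
    theorem modus_tollens (A B : Prop) (h : A → B) (hnb : ¬B) : ¬A :=
      fun ha => hnb (h ha)

Both rules are valid; the disagreement between speaker and listener is entirely about which premise - A or not-B - each is more committed to.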
As someone who frequently has reasonable exchanges with alt-righters I largely agree with your post.
To me the hatred from the right is somewhat self-aware. They know that their views are not welcome and they have no pretense of being inclusive. Segregation of cultures is their shtick after all.
But they are willing to debate things openly, at least on the internet. They might call you a shill, bluepilled, or even a race traitor, but I have not heard that something is unthinkable to discuss.
In fact, on 8chan you can find furry porn (which would be considered degenerate by Nazi standards) and seemingly serious discussions of how the purity of one's race and culture can be preserved, side by side.
When people from the left then come and say things such as "it is not acceptable to argue over [...]", the thoughtcrime bells in my head go off and I get the visceral feeling that the "distinct but equal" norm on the right might actually not be so bad compared to the "absorb and unify everything, then stamp out whatever falls outside the Overton window" norm on the left. Because I want to be able to argue hypotheticals outside the Overton window without accidentally being mistaken for someone who might perform acts outside the window.
The fact that you feel you need to litter your post with a lot of virtue signalling out of concern about being mistaken for a Nazi sympathizer strikes me as a sign that something has gone wrong. Are people incapable of distinguishing between an argument being made and (non-)endorsement of that argument?
> but since this is such a sensitive topic
This is also something that I do not understand. As long as there are actual wars in the world, global warming, starvation etc. this seems to be an inconsequential topic to me. If we can discuss nations attacking each other, death penalty, sweatshops, massive extinction of many valuable species, etc. then we should be able to discuss groups within one nation hating each others' guts at a sub-war-threshold without putting on satin gloves.
These "culture wars" are way too overblown considering what they represent on the hierarchy of needs.
My peppering of reassurances that Nazis are bad wasn't so much to protect me or my post (you would have to be very willfully uncharitable to interpret my post as pro-Nazi) but just an attempt to tailor my response to an audience which is likely skewed to the left and is a bit sick and tired of more simplistic "but free speech!" arguments.
But part of it was certainly based on how commonplace it is these days for people to jump on a word and assume bad intent. It is unfortunate that that's the world we now live in, where you can't even discuss concepts from a meta level without accusations that you secretly support evil ideologies. The culture wars are likely to only get worse at this point, with no end in sight. I don't really think they'll culminate in a civil war, but if we're not careful, anything's possible.
> usually the tool that societies reach for to kill ideas is to kill the people that hold them. 99 times out of 100.
That's not normally the tool that democratic societies use. The US didn't wholesale kill its segregationists in the 60s. Western Europe didn't kill its Nazis, or its Stalinists. Ayn Rand wasn't pushed down the stairs. I'm trying to think of an example where a modern democratic country _did_ do this. We didn't kill the misogynists or the racists or the homophobes; we just let them slowly fade into irrelevance. It's not done yet; it's a process.
Societies who do try to kill off bad or opposing ideologies rarely actually do that well with it. How well did the Soviet Union really do with getting rid of its ethno-nationalists, say?
Edit: I suppose one example would be covert action, there has obviously been some of that. Rounding people up wholesale doesn't really happen, though.
I mentioned modern democracy. I'd have serious trouble considering those to be modern democracies at the time. Universal suffrage seems like a minimum requirement.
Or look further back at labour history... in fact, I specifically want you to look at Europe ~1790-1855, around when the craft guilds collapsed and mass protests started happening.
Police, as we know them today, were literally created in this period in response to these crowds of lower-to-middle class workers who were disrupting society.
> I'd argue that the best outcome for the human race would be a culture that purges all racist and other arbitrarily prejudiced mindsets that judge entire populations on untenable grounds (race, gender, etc).
Phrasing. You can't just come out and say purging--you have to dress it up a little more.
The term is often used for non-violent action. For instance, it'd be reasonable to say that the US democrats purged their racist 'dixiecrat' faction; it took a while, but they have been eliminated.
>"I personally believe there is no universal morality, but if I had to pick, I'd argue that the best outcome for the human race would be a culture that purges all racist and other arbitrarily prejudiced mindsets that judge entire populations on untenable grounds (race, gender, etc)."
Out of all the possibly noble things you could pick, you pick prejudice as the item that would be universally-forbidden in your moral code? I understand that race-relations, group-bias topics and intersectionality are in vogue and centre-stage at this moment, but we need to aim higher as a society.
Pretty much, yes. I've made known my views on freedom of expression. I'm a very strong proponent, to the point where I've recently been forced to defend Nazis.
Politically/socially correct speech doesn't need to be defended. However wrong someone is, the liberty to express their views, within the bounds of reasonable law, should be a social goal, not just an amendment that restricts the government.
What is socially acceptable today may not be socially acceptable tomorrow. What power is used against your enemies today may be used against you tomorrow.
Oh, to be clear, Reddit can do whatever they want on their site. I was speaking to the larger issue.
And, yes, I believe in freedom of expression. Even ideas that are hateful.
No, no I'm not white. So, I'm not saying this because I want to say hateful things. In fact, I don't hate anyone. I don't even have anything controversial to say, except to support the liberty of free speech.
People do have the liberty to express their views in the sense that they have no trouble from the law with what they've said online. They do not lose their right to vote because of it. That's freedom of speech.
However, an organization censoring its own platform is something that's very healthy for any organization of a decent size. It helps to keep the mission and culture of the organization consistent with what it's designed for. In Reddit's case, it decided that it would not harbor hate speech because it didn't want that to become part of its culture. This is very similar to a group of friends not inviting you to drinks after you repeatedly said that you think genocide is a good solution.
It's censorship! Which, if done by the government, can be bad. But it's something that we all do when we want to change the culture of our group. It is very puzzling to me that so many smart people on hacker news do not understand why it's beneficial.
I'm astounded that on HN, of all places, you've been downvoted for what was considered obvious a generation ago, and obvious to anyone who has lived under authoritarian rule.
Usually, as I've said much the same before, I get downvotes, and then someone comes along and actually reads what I wrote, as opposed to knee-jerking to the idea that I support Nazis. In fact, I support basic human rights for everyone. Some people are uncomfortable with that idea. It's okay, I have karma to spare.
I'm not in it for the points. I'm in it to express the idea of liberty. Many people only want liberty for themselves and those they approve of, or so it seems.
Edited to add: See? It's back in the positives. It will bounce around a bit, but liberty is still popular. I think some people don't read what I wrote, so much as they insert ideas about what they think I wrote. Another post in this thread suffers similarly. It's all good.
On the other hand, where does speech stop and actions start? Where does liberty stop and mob assault start? Are death threats covered by freedom of speech?
Freedom of speech is an important right and necessary part of true liberty, but the people in the situations in question aren't exactly engaging in reasoned debate.
Speaking of which, "only want liberty for themselves and those they approve of"? Really?
It becomes unlawful when there is a credible threat explicitly stated; beyond that, there are libel and slander laws. Death threats, when credible, are illegal and justifiably so. See my first post.
And yes, really. See the replies to my post. Note, specifically, the comment regarding votes. I stand by my assertion. The evidence supports it.
"...but the people in the situations in question aren't exactly engaging in reasoned debate."
In this case, I agree. But like "obscenity" or "pornography", determining what is reasonable or not is a slippery slope. Promoting concentration camps? Not reasonable. Protesting military action? Probably reasonable. What about protesting military action against a country that runs concentration camps?
(As North Korea does run these types of prison camps, and the chance of military involvement with the DPRK seems higher than ever, this is not a completely academic scenario)
Since Charlottesville, there have been two changes in public discourse that I've noticed. First, free speech even for despicable positions is now equivalent to Nazism in the eyes of many. Second, Westboro-style language is now acceptable if the victims are the South. Plenty of people being unironically joyful over the hurricanes.
The "power" to ban people isn't up for debate, though. Reddit have that power, whether they exercise it or not. If they did not exercise it today, it could still be inherited and abused by a future actor tomorrow. There may be arguments about how to change that, but arguing Reddit should not ban users for their speech would not be one of them.
Further, it's hard to imagine a world where websites cannot ban users. Reddit's "power" extends over their own property and who they admit to it, I can't imagine it being possible to run a website without that "power" - how would you deal with spammers and so on? How could you possibly compel a website to admit all people whether they want to or not?
This is a shallow reading of Zeynep Tufekci's work (which I agree is among the sharpest analyses out there). She very explicitly calls out sites like Twitter, Reddit, and Facebook for not doing enough to curb hate speech and harassment. Some quotes from her latest book:
"I later realized that the attackers had also been organizing online, using the same affordances as other activists for positive change—but only to attack female writers who touched upon gender-related topics. They were using Twitter’s ease of organization and willingness to let them operate freely to target the freedom of speech and assembly of others. Like many platforms, Twitter had wanted to remain “neutral” but, as is often the case, rights of one group—the group who wanted to silence women or minorities—clashed with rights of women or minorities (especially outspoken ones) to freely use the site to speak and assemble. A stance of “neutrality” meant, in reality, choosing the former over the latter."
"As with many of the issues I study, it is difficult to have a coherent and unified normative view or a simple rule that would apply in all cases that all doxing is good or bad by itself. There are always trade-offs. These judgments have to be made in the context of whose rights are allowed to trample whose, what ethical values will be protected and which ones disregarded. Will we protect children’s right to be free of sexual exploitation, or the rights of adult men to anonymously gather and exploit? But will we also protect the right of dissidents around the world to be able to post pseudonymously? There is no single, simple answer that covers all ethical and normative questions that come up for platforms and their policies, without the need to judge many of these cases individually, rather than applying blanket rules."
Today's hate speech comes from the same culture that could enable totalitarianism today. In fact, much of it was an enabler of totalitarianism in the past; people are still circulating the Protocols of the Elders of Zion, for example.
(also, we fought the "never 'censor' anything, ever" war on USENET decades ago over spam. You have to have the power to 'censor' spam if you want to have a forum that isn't mostly spam. How few genuinely unmoderated spaces are left? Would the moderators of fatpeoplehate have deleted spam or off-topic content?)
I mean, in the first place Reddit and Twitter were never real common spaces. Perhaps the bigger problem is how much of the communication landscape is controlled by private actors accountable to no one.
Reddit's founders have stated otherwise, though they've also been less than consistent on the point of free speech.
Private / commercial actors do have accountability, though it can be difficult to trace.
I'm neither arguing against limitations on public discussion nor defending private fora as equivalent to public spaces (I disagree with both views). But I'm noting that on a strict interpretation of your comments, neither statement is true.
Lack of moderation isn't "freedom" for all, though: it simply empowers those with the fewest compunctions to bully and harass others off the site through informal means.
Right. And if we're going to talk about the social value of free expression, that has to include consideration of how certain unconstructive behaviors undermine that value. And how people may go out of their way to defend those unconstructive behaviors if they serve to push out of the conversation the people with whom they disagree.
Who are these free speech proponents that you are talking about?
Reddit and Twitter are private organizations, not government institutions. Free Speech proponents like John Stuart Mill are fine with free speech (as a philosophy) limiting the speech of individuals if the expression causes harm, such as organized mass harassment and threatening of individuals (which is what fph was banned for).
I mean, I am certainly given pause by this tidy argument. If private actors not subject to the rules control the means of transmission (or if speaking out in the wrong way threatens your livelihood) then freedom of speech starts to look mostly theoretical. I'm not sure I have an easy answer to the problem myself.
However, I can't help but notice that HN is obsessed with some sort of conspiracy against Nazis and misogynists, yet does not at all seem concerned about the actual pronouncements by the current president of his intentions to criminalise certain criticism of him.
Thank you. I've realized I've been hearing warning klaxons every time a new paper on social media comes out.
I've started posing the question: what would happen if someone used this paper and decided to ban subs like "twox", "for equality to all", or "improve yourself", and so on?
And the fear of that must also be tempered by reading the paper itself, since a key part is being ignored in many comments so far
Section 6.6
> In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful.
So it's not like we've cleared the thicket; we've just put it in a jar not owned by Reddit.
The study itself seems to be over the 10-20 day period immediately before and after the ban.
> can be an effective way to silence their inhabitants
Although that is true, the authors note that they merely shifted to alternative platforms. It's not really getting rid of the problem, just moving it somewhere else.
People don't seem to understand that a lot of negative behavior exists in a context, and if you remove the context, the negative behavior can largely disappear. It doesn't necessarily just go elsewhere.
This gets to a Harvard study I now can't find that suggested that anonymity online is a red herring. Anonymous and whatever the opposite is (named? nonymous?) people are equally likely to follow community norms -- anonymous people are actually slightly more likely to. The problem is not the anonymity, the problem is the community norms, which in the banned subs were horrible.
> Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown “migrants,” those subreddits saw no significant changes in hate speech usage. In other words, other subreddits did not inherit the problem
Right, but why does Reddit care? Their moderation approach effectively addressed the business problem they faced. Maybe in the long term Voat will grow to become an existential threat to Reddit, and Gab to Twitter. But it sure doesn't look that way right now. A reasonable business manager at a firm like Reddit might look at a new company like Voat as a gift.
It doesn't seem like a reasonable organizational goal for any firm anywhere to eliminate white supremacy from the entire Internet. It's hard to imagine how that might be accomplished. Certainly, Reddit was never in a position to do it. Why would they look to solve any more than the part of the problem they were directly engaged with?
Voat doesn't have a real userbase outside of the hate groups that have made it their home, and they've been in serious financial trouble for a while. They would have gone under by now if not for a recent cash infusion they received from /r/The_Donald, and even that will only last them a few months.
Hate groups were so prevalent on Reddit for one main reason: discovery. Being on Reddit made it easy for them to proselytize and recruit. Hate subs show up in Reddit's search function and in views such as /r/all. If someone posts a dogwhistle or even an outright racist statement on a default, you can click on the poster's username, see their post history, discover any hate subs they post to, and get sucked into their propaganda. Just typing the name of a subreddit creates a direct link to it... so if you tell a racist to "go back to /r/CoonTown", you're inadvertently advertising the sub (which is why AutoModerator has been a godsend, as you can automatically remove posts that contain links to known hate subs).
But if all the hate groups are over on Voat, all of that goes away. Redditors can't randomly stumble on Voat communities, and it's harder to advertise Voat communities on Reddit. Voat is its own tiny, isolated little private island for hate groups, and it's very hard to proselytize from there.
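To make the discovery mechanics above a little more concrete, here is a rough sketch in Python of the kind of rule the earlier comment attributes to AutoModerator. This is purely illustrative: it is not AutoModerator's actual configuration syntax, and the ban list below is just the example sub names from this thread.

    import re

    # Illustrative ban list; in practice this would be maintained by the mods.
    BANNED_SUBS = {"coontown", "fatpeoplehate"}

    # Matches "r/Name" or "/r/Name" anywhere in a comment body.
    SUB_PATTERN = re.compile(r"/?r/([A-Za-z0-9_]+)", re.IGNORECASE)

    def should_remove(comment_body: str) -> bool:
        """Return True if the comment links to or mentions a banned sub."""
        mentioned = {name.lower() for name in SUB_PATTERN.findall(comment_body)}
        return bool(mentioned & BANNED_SUBS)

    # e.g. should_remove("go back to /r/CoonTown") -> True

The point of a rule like this is exactly the discovery argument above: even a hostile mention of a hate sub is free advertising, so removing the link removes the pathway.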
Same user name on Voat for reference if you want to get further into this debate.
I have been on Voat since close to the beginning (3.2 years), because I was banned from Reddit for a disagreement with an admin.
Now you claim that Voat is full of racists and hate groups. I won't deny that there is quite a lot of that. The thing is, it isn't nearly the amount you want to make it seem. The larger base isn't ever going to participate within those groups. They will look into it.
We have also not been kept afloat by cash donations from the Donald. We have been kept alive because of the efforts of the admins and many good people who want to develop the site in order to make it cheaper to host.
I can't speak for others but I do know that we do not go looking for new members within Reddit. We do however get many people that have been banned from various places within Reddit. That is what it is all about.
You won't understand that, because you want to show a side that would make a vast majority knee-jerk and react the way you want, keeping them under your thumb.
By all means come over and see us. Check it out and post your views within /v/whatever because within that subverse your submission will not get deleted unless it is spam.
Keep in mind spam to "US" is when someone is trying to sell something without paying for an ad. I encourage everyone to experience what true freedom is. You may like it.
That's a little alarmist. I don't like the MakeupAddiction group (after all their whole stance on Maybelline is just terrible ;P) ... should I have a podium to punish those who like that sub or those who participate in it? (Note: I'm all for slapping down people who brigade, but guilt by association is terrible)
They might not disappear, but it may be harder for them to continue growing. Hate speech subreddits had the chance to bubble up into the larger community when a funny or popular post gained traction, and maybe help to normalize their behavior for a new audience. Voat, on the other hand, has turned into a hard pill to swallow for new users. The front page is filled daily with hate speech targeting black/jewish/fat people along with women and immigrants.
But they are (a) no longer reddit's problem, largely and (b) banished to Voat with the paedophiles and the more obvious Nazis, where they will languish in obscurity, because no-one really goes there. They can no longer propagandise to a wide audience on Reddit; they're stuck with a small, already awful audience on Voat.
How good is that corpus of data? Does it show the removed comments of users? (As a non-mod you cannot see the comments that were removed in a sub) Mods use automod to automatically remove posts based on keywords.
Don't be too sure. Glancing at the link, all they test for is the usage of the same words that were used in the banned subreddits - but those are unlikely to be used, as they would quickly get the account banned. Instead, presumably, the accounts would use a more subdued approach, but with little change in intent.
The approach also fails to account for users who go to alternative sites.
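For anyone who hasn't read the paper: the measurement is essentially lexicon-based, i.e. counting how often terms from a manually curated hate-speech word list show up in a user's comments before and after the ban. A minimal sketch of that idea in Python (assumed for illustration; the terms below are placeholders, not the authors' actual list, and their pipeline is more careful than this):

    # Placeholder lexicon; the study curates sub-specific term lists.
    HATE_LEXICON = {"term_a", "term_b"}

    def hate_term_rate(comments):
        """Fraction of whitespace-separated tokens found in the lexicon."""
        tokens = [t.lower() for c in comments for t in c.split()]
        if not tokens:
            return 0.0
        hits = sum(1 for t in tokens if t in HATE_LEXICON)
        return hits / len(tokens)

The blind spots the parent comments point at fall straight out of this: if users switch to new slang that isn't in the lexicon, the measured rate drops even when the intent is unchanged, and users who simply move to another site disappear from the denominator entirely.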
Fire is one metaphor, though I lean far more strongly to epidemiological parallels.
Hatred is an operational state, as is illness. Both progress as information is spread through a population. That requires reservoirs or sources, and vectors of transmission, as well as susceptible carriers.
By disrupting the reservoirs, limiting the vectors, and inoculating the carriers, you can treat epidemics of both disease and antisocial ideology.
I'm not sure the fire analogy is that apt. I imagine the hatred in those people still exists and if it can't resurface on Reddit, it might resurface elsewhere. Of course a study to back this up would be nice, but this is my theory.
“We are what we pretend to be, so we must be careful about what we pretend to be.” - Kurt Vonnegut Jr.
I read comments like yours - "I'm not really a bad person, it was just a prank, Bro!" from time to time and I think it betrays a real lack of understanding on how human interaction works.
It is one thing to make a joke about fat people in some private setting among friends who know you and will take your words ironically. But to make the same joke in a public sub that was explicitly set up to ridicule fat people conveys a very different meaning. It means that you condone the sentiment behind the sub and agree with those users (perhaps a minority perhaps everyone else on there except you, who knows?) that do harbor malice towards others. By joining in you give such people strength and encouragement.
Reddit was right to ban those subs. You were wrong to participate.
(This doesn't make you a bad person in general, we all make mistakes.)
If you are chatting among friends in real life, they can see your body language and hear your tone of voice. If someone takes offense, you can immediately backpedal.
Submitting a comment to a webpage (either public or protected) is more like writing graffiti on a public wall. Nobody knows or cares what your intentions were.
I see it more like standing in a corner with your friends talking about whatever (in this case a specific topic). Sure, someone can walk by, overhear the conversation and be offended. But to call that making public comments in some form is missing the point of the distinction between public and private.
I think the argument is that it's a space that is accessible by everyone without any sort of gatekeeping, that makes it a public space. Whereas a forum such as somethingawful would be considered more private as there is a subscription fee for membership.
To me the issue isn't the lack of literal gatekeeping, it's the intended audience. If I'm having a conversation in public with my friends, I still have a reasonable expectation that my conversation is "private" as long as I make an effort to keep it among those in my group. That someone could walk by, eavesdrop and overhear something objectionable doesn't meaningfully alter this dynamic. I see subreddits as similar to this. The fact that there is a specific URL that indicates its topic is enough gatekeeping to warrant the conversations as "private", in terms of who is the expected target of the conversation. In the case of fatpeoplehate, the insulation of the content of the sub makes it reasonable to expect that consumers of the content will be "in on the joke". Objecting to the content because it's in a "public space" and therefore not appropriate for public consumption doesn't make sense.
I think the "fun" comes from the sport of seeing how far they can go in provoking other people by taking extreme behaviors and speaking about them in a casual, normalizing tone.
Which, needless to say, is childish, wasteful of everyone's time on account of its disingenuousness, and doesn't reflect well on one's personal character or maturity.
>Thinking black people are inferior is a political view.
I disagree. Thinking that trickle-down economics is a solution to poverty is a political view. Outright judgement of entire populations on illogical and untenable grounds is simply racism.
I have taken the mindset of rejecting these mindsets wholesale - I won't give credence to racists anymore. It is a garbage, untenable, unarguable viewpoint, and those that are within it have simply been failed by the system. It makes life much simpler :)
> Thinking black people are inferior is a political view.
No, that's not a political view. That's plain old racism. Politics has to do with money and power. Thinking black people are inferior is white supremacy. Don't conflate that with reasonable positions taken in politics.
I don't think it makes sense to deny that it's both. One can hold abhorrent political views -- that doesn't make them not political. What were the Nazis if not a political movement based on white supremacy?
Playing night elf Druids in computer games doesn't hurt others. Your participation on the forums did. You like hurting people for fun? How do you sleep at night?
In some multiplayer games just playing them hurts other people up to the point where they experience something very similar to a mental breakdown, all because you're on their team and you're performing poorly.
It all comes down to the emotion management. My countrymen - and by extension me - have been called thieves and equated to animals countless times. My race has been blamed for all the evils in the world. Yet, I don't care - none of this has any potential to hurt me, it's just words on the internet and I'd rather follow the famous mantra of Tyler The Creator than pretend I'm being hurt somehow.
Well... I think you can have "fun" without it being at anyone's direct and unwilling expense for reasons beyond their control.
That aside, even if most of the users "weren't taking it seriously" it creates the kind of environment where being hateful is tolerated and celebrated... not exactly the kind of community anyone should want to encourage or participate in.
My experience with online communities is that while you don't have to have fun at other people's expense, it's a large subset of what people like to do. For better or for worse a lot of people like to say or do things that push boundaries. Sometimes it's actual hate and sometimes it's not. This kind of behavior goes way back with humanity in general. Trickster heroes have existed in mythos across all cultures and while they do things that are at the expense of others, rarely is it out of malice.
For those who can't watch: A female streamer is playing PUBG when she is "downed" by other players. Said players then play the ISIS theme that has been played on ISIS execution videos over voice chat before shooting her character in the head. Are these players tolerating and celebrating ISIS or are they deliberately being offensive for humor? Or some other motivation?
For good reason people find this behavior objectionable, but at the same time I don't see how you're going to force people to stop doing things like this.
People drain swamps and other areas where stagnant water collects to deprive mosquitos of places to breed and grow and affect the population at large, not because they want to visit those places.
Please take some time to reflect, and interpret others' comments charitably by arguing against their strongest interpretation, not their weakest. It'll help you understand others better, make your own arguments stronger, and let you contribute more productively.
That seems like a great analogy until I remember that we built DC on a drained swamp.
Again, their statement was that if it involved pain then it was unacceptable. I pointed out boxing; they said it was consensual. I pointed out that the user consented when they visited the URL.
I mean, come on... The two big ones were coontown and fatpeoplehate. What the hell did they think was going to be there?
It's not the content of the subreddits alone, but they inevitably "leak", or actively encourage harassment of people on other subreddits. This is how they can be harmful even to people who never go to the banned subreddit.
Ridicule is not a healthy way to correct 'bad' behaviors. Even aside from the fact that I fundamentally disagree that people should feel bad for being obese (beyond the typical externalities of excessive eating and a sedentary lifestyle), your comment is baffling to me.
The social costs of the negative outcomes of ridicule directed at obese people - lifelong stunted self-confidence, eating disorders, depression, suicide, etc. - FAR outweigh the few cases where people like yourself might find this productive/helpful.
Do you really believe that ridicule is a reasonable way for society to deal with this (or any) issue?
>Do you really believe that ridicule is a reasonable way for society to deal with this (or any) issue?
Isn't that exactly what's going on with the backlash against hate groups? Like it or not, ridicule is an effective social tool for behavior modification.
"The nail that sticks up gets hammered down" works extremely effectively in many cultures. I'm not saying there's other downsides to using that tool, but it is extremely effective.
S.Korea & Japan are in the lowest three obesity rates for a reason. (Third being Italy)
It's not a hard thing to figure out, honestly.
I'll vigorously defend your right to express your opinion. That doesn't mean I have to agree with you and I may disagree loudly.
Aggressive social conformism is just everyone disagreeing with you, loudly. It's the kind of thing we should be doing to racists, etc.
I liked it in the beginning, but one time there was a post of a fat dude working out in the gym, and everyone was mocking him relentlessly. I posted "hold on, I thought we were making fun of fat people doing nothing about their condition, this guy is making an effort."
The response was downvotes and "nope, all fat people are human garbage and should be mocked."
Was done with the subreddit after that. It started as an activist subreddit challenging the "love your body" movement that was trying to propagate unhealthy fat acceptance; it ended as a hate subreddit. Shame.
A woman became obese after a fecal transplant -- hinting at the complexity of how obesity works in the body, experts say.
The unnamed woman weighed 136 pounds -- but gained 34 pounds over the next 16 months -- going from a healthy body mass index to an obese one, according to a case study published in an Oxford Journal called Open Forum for Infectious Diseases.
"The patient actually said this: 'From the moment I had the fecal transplant, I felt like a switch flipped in my body,'" said. Dr. Colleen Kelly, a gastroenterology at the Warren Alpert School of Brown University. "She felt like prior to the fecal transplant, she had never had to worry about weight."
>No one is holding a gun to your head, forcing you to eat too much.
How you recognize what "enough," "too little," and "too much" are is based on a complicated set of hormonal and nervous system cues. When that system tips too far one way or another, you're inevitably going to start gaining weight or have trouble building muscle.
The effect of commenting dismissively like this on a complex and emotional issue is that of trolling. We need you to be more thoughtful and to please resist flamebait like the guidelines ask.
I think some conclusions are being drawn too soon, from a study which cautions
> In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful.
From section 6.6 -
> Recent work has shown that some banned subreddit users migrated to other social media sites like Voat, Snapzu, and Empeopled [29]. The banning of r/fatpeoplehate and r/CoonTown led to the rise of alternatives on Voat.co, for example, where the core group of users from Reddit reorganized. For instance, in another ongoing study, we observed that 1,536 r/fatpeoplehate users have exact match usernames on Voat.co. The users of the Voat equivalents of the two banned subreddits continue to engage in racism and fat-shaming [22, 45].
> In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To be clear, from a macro perspective, Reddit’s actions likely did not make the internet safer or less hateful. One possible interpretation, given the evidence at hand, is that the ban drove the users from these banned subreddits to darker corners of the internet.
As I recall, it took time for CoonTown and FPH to metastasize, and that metastasizing process may well be REQUIRED to reach the critical mass which results in a successful cleansing.
Further, this study is based on specific keywords, which the researchers took great pains to curate - but the human ability to hide improves with practice: the lexicon shifts to words that won't be easy to track with basic lists.
FINALLY - people should flip this on its head: should reddit or any site be compromised, and subs which "fight for equality and freedom and goodness for the world" be banned, what would this study imply?
> This is an A+ study which seems to confirm that banning subreddits can be an effective way to silence their inhabitants.
Was a study needed for that? It really isn't a good study, because it doesn't say anything.
> It's nice to get some hard data to counter the theory that if you ban a subreddit consisting of undesirables, they'll simply invade other parts of reddit and continue. In reality, the other parts of reddit aren't nearly so tolerant.
But they did invade, that's why all of reddit became censored.
> So what's the implication?
There are no implications. Censorship works. There are reams and reams of studies that show that.
It's why you won't see much Tiananmen Square stuff on the Chinese internet, any LGBT stuff on the Russian internet, or atheist stuff on the Saudi internet.
People are celebrating this study which essentially says "water is wet". Yeah we know.
This is a bit like studying crime in one neighborhood in isolation.
Putting cops in neighborhood A may reduce crime there, but it may just increase crime in neighborhood B.
I actually liked having offensive subreddits on Reddit because there were so many people that devoted themselves to arguing with the people that posted there.
Now those people are gone and are off to internet spaces where nobody disagrees with them. This will reinforce their opinions, not change them for the better.
I think that there's a cohort of people (mostly teenagers and early 20s?) who won't actively seek out hate based sites, but if they're available might check them out, especially if it's not just direct hate, but focused on jokes at <group that is easy to attack>'s expense. At first for fun, but eventually it normalizes the behavior for them, and a small percent get radicalized for real.
The alt-right/theDonald overlap with hate movements is a great example of this. What percent of those Charlottesville marchers and their supporters started doing it just for the memes, or just to light up smarmy internet liberals?
In that sense, pushing the most radical groups off a giant forum like Reddit into silos like Stormfront can have a net positive effect.
Angela Nagle recently talked with Ezra Klein on his podcast, and that was one of the major points she made - how the alt-right's use of irony and satire draws in a lot of people who are then exposed to the larger ideology.
Is the harm then a) that these people are exposed to the larger ideology, or b) that these people don't have the critical thinking tools or habits to see these ideologies for what they are?
I think it's (b).
In life we are exposed to a lot of bad ideologies, and the only thing that can help us navigate these ideas is our own thick skin and critical thinking tools and habits. The only way you develop these are through exposure and practice.
I think that if you think the solution to ideological waves taking over governments is merely better education, you will be woefully unprepared when the next wave of fascism, or whatever other ideology, sweeps the world.
"I think that there's a cohort of people (mostly teenagers and early 20s?) who won't actively seek out hate based sites, but if they're available might check them out, especially if it's not just direct hate, but focused on jokes at <group that is easy to attack>'s expense. At first for fun, but eventually it normalizes the behavior for them, and a small percent get radicalized for real."
I see it as a form of inoculation: on Reddit, a casual visitor has a better chance of seeing counterpoints and ridicule.
Now those groups have scattered like roaches into safer spaces that are less likely to tolerate dissenting opinions, and can claim persecution.
<personal bias, no time to research this for sure> The extremists weren't going to be swayed anyway by people arguing with them, or even ridiculing them on the internet. If anything, the focused subreddit with its own moderation team can create extra legitimacy by voting down arguments and dissent, or deleting them entirely. Spend some time on thedonald, and see how much actual useful debate there is on the posts attacking Hillary's body.
It's the large, non-participatory cohort of people who may have never read the comments that are being helped by banning these groups, because these behaviors seem less 'normal' and 'legitimate' if you don't see them, especially if you're not exposed to them as jokes and pithy quotes.
Some of the coontown people will slink off to the KKK blogosphere and private message boards, but my point is that many of those with only mild biases won't, and so will lose the chance to be radicalized.
Arguments with extremists aren't for the benefit of the extremists; they're for the benefit of onlookers who perhaps haven't considered counterpoints.
And if a community deletes or down-votes dissent: well, that happens but it also becomes a matter of public record and can also inform. Case in point: the moderation of /r/politics.
Nothing is served by "saving" Joe User from being exposed to unpleasant speech in an open forum, due to the fear that he might be adversely affected or even damaged by the experience. This is anti-vaxxer logic, on a meme level.
I think you're really idealizing these subreddits here. The way reddit works, the downvoting of dissent will never be seen; that's the problem. (Generally a pro of reddit, but not in this case.) Dissent will get sorted out of the comments section by default, and the arguer banned. The nature of these extremist groups is not to have an open forum for discussion.
There is not a public record to be examined and ruminated on, on the contrary it's a situation where you can go and see "oh 151,400 subscribers to fatpeoplehate, this is normal behavior", or (more likely) people who never view comments will just start to see jokes about killing the obese on their front page feed.
There's a difference between 'saving people from unpleasant speech' and providing an echo chamber for hate speech alongside the company's incredibly popular other content.
I don't see the connection to antivaxxer logic, but I'll give you the benefit of the doubt there?
Just to be clear I'm not advocating that we ban all 'unpleasant speech' from the internet, but I do think siloing it off so you really have to be looking for it works, and that there's significant evidence to back that up.
In reddit's case in particular, many of the banned subreddits were using smart tactics like brigading and vote manipulation to amplify their numbers, and push the 'this is normal' message even further, although that's somewhat orthogonal to what we're arguing about.
Reddit isn't completely a black box: subs such as /r/undelete and /r/longtail track deleted threads, and deleted comments can be reviewed ( https://www.reddit.com/r/howto/comments/5en53l/how_to_see_de... ). Additionally, the disparity between upvotes and comments -- especially for threads upvoted far more than their peers -- can indicate when voting shenanigans are taking place.
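As a rough illustration of that upvote/comment-disparity heuristic (my own sketch, not a method from the paper or from those subs): flag threads whose upvote-to-comment ratio sits far outside the norm for their peers. It's a weak signal on its own, but it's cheap to compute.

    from statistics import mean, stdev

    def suspicious_threads(threads, z_cutoff=3.0):
        """threads: list of (title, upvotes, num_comments); returns outliers."""
        ratios = [up / max(com, 1) for _, up, com in threads]
        if len(ratios) < 2:
            return []
        mu, sigma = mean(ratios), stdev(ratios)
        if sigma == 0:
            return []
        return [t for t, r in zip(threads, ratios)
                if (r - mu) / sigma > z_cutoff]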
A casual visitor wasn't terribly likely to see counterpoints and ridicule on those parts of Reddit, though, because they banned them with an iron fist. FPH in particular was very consistent in banning anyone who empathised with its targets or questioned any of its dehumanization or its tactics. It was essentially a safe space for hate.
Ironically, for years and years criminologists made exactly the argument you did: preventing crime in one place just moves it somewhere else. So, police largely didn't bother, throughout the largest crime wave in history (1970-1995 or so).
Then in the mid-90s, some new research suggested that making crime more difficult to commit in one place actually prevents crime everywhere; criminals, like most people, are fundamentally lazy, and putting up inconveniences makes them more likely to just not commit any crime at all. In the 20 years or so since, crime rates have fallen by more than half.
Some redditors probably did go off somewhere else where they could spew invective. Most just went back to looking at gifs of dogs falling down stairs or whatever.
You genuinely think the type of people who subscribe to subreddits like coontown and fatpeoplehate are going to have their mind changed by people on the internet?
I've had no success changing the minds of people who think that being trans is a mental illness despite citing numerous peer reviewed articles. Maybe I'm going about it wrong, but I'm not sure it gets more clear than pointing to a bunch of scientists that directly contradict their understanding.
I post on /pol/ without agreeing with most of their views. You're unlikely to immediately convince anyone of a diametrically opposed point of view at any given time. But you can bring nuance to a very one-sided, memed and stereotyped "discussion".
For example, there is a large overlap between pro-fossil-fuel and nationalist attitudes. It is fairly easy to get some concessions about renewables from them once you point out that renewables mean energy independence and not giving money to the Middle East.
If you have no scruples about adopting their arguments for the sake of arguing, you can also make your argument explicitly anti-Jewish by pointing out that Israel would be far less important strategically if the US did not depend on oil as much. It seems like people have trouble applying logic of this kind because they fear being seen as endorsing the arguments they use, but in the end it's just a form of playing devil's advocate.
If you have a thread where the general tone is that all blacks should be killed then maybe an admission that this would not be a smart move and better solutions can be found is already progress. Small progress, but still important, nudging things in a better direction bit by bit. And yes, board culture does change over time, imperceptibly to most, but it does change.
So patient, persistent effort might yield a small shift in the attitudes of some in the audience.
At what cost in those who see the basic terms of debate validated by engagement? When you're arguing that only some black people should be killed instead of all of them, what harm are you doing by legitimizing the basic question of whether blacks should be killed?
You are assuming that there is a question to legitimize. If you have someone convinced of the issue, then there is no question. By engaging in discussion you are simply providing an alternative viewpoint. You don't have to present yourself as an opponent; you can present yourself as a member of the ingroup with a slightly more moderate position that might also be more palatable to the public. There are many strategies.
It is also important to realize that there are many passive readers of such discussions. I recall moot (rip) saying something along the lines that there is a 1:10 ratio of posters to lurkers. So by presenting a more moderate position you're also showing people who might not like the left that there are alternatives to the most radical voices.
In the past, even in many brutal wars, belligerents generally recognized the need for parley. This is not even a war; it is a disagreement about social norms and laws. If you villainize the other side to the extent that it would be immoral to even speak to them, you're only fueling the polarization instead of creating a continuum. Look at Europe. The extreme right exists there too, but between the left and the right there are many different currents represented in parties, which lowers the activation energy for people to gradually switch camps.
> you can also make your argument explicitly anti-Jewish by pointing out that Israel would be far less important strategically if the US did not depend on oil as much
Wait a sec, can you explain how this is anti-jewish? As a person of Jewish ethnicity who does not support many actions of the Israeli government, I fail to see how this makes the argument anti-Jewish. I do concede that it might make it more appealing to those with anti-semitic sentiments.
> I do concede that it might make it more appealing to those with anti-semitic sentiments.
That is what I meant to say.
That said, both /pol/ and the Israeli government tend¹ to lump anti-Zionism and anti-Semitism together: in the former case because of the purported¹ global Jewish conspiracy, in the latter case because it makes a cheap knockout argument against anyone questioning their treatment of Palestine.
So I phrased it through that lens.
¹ do I really have to pepper those qualifier words everywhere?
> You genuinely think the type of people who subscribe to subreddits like coontown and fatpeoplehate are going to have their mind changed by people on the internet?
Yes, because I've personally met such people.
> I've had no success changing the minds of people who think that being trans is a mental illness despite citing numerous peer reviewed articles. Maybe I'm going about it wrong, but I'm not sure it gets more clear than pointing to a bunch of scientists that directly contradict their understanding.
Changing minds takes a lot of empathy and skill (and usually time) to pull off consistently in person, much less online; I'm not surprised that a strategy of throwing scientific articles in people's faces (which they will probably never read) would be unsuccessful. Regardless, even if you are successful, you will probably never know unless you have a long term relationship with the individual.
I also have direct experience with the opposite: an older friend (late 50s), socially isolated, who over the last year or two has self-radicalized into a hardcore Muslim hater.
A few years ago he was just a plain ol' gentle soul, and he liked to watch those militant atheist videos on YouTube (I have no opinion about those). I guess one of them had some sort of anti-Islam recommended video that caught his eye, because I remember the day we went out for lunch and he told me about a video he had watched about Islam.
I have quite a few Muslim friends so I spent some time gently dissecting what he was saying. None of what I said stuck. Over the following year, he got deep into the rabbit hole, until how awful Muslims are is all he'd talk about.
I had to end the friendship, despite my efforts to talk him out of that, because that garbage had literally become the entirety of what he was into/wanted to chat about. (Objectionable and boring, hah.)
The jump from militant atheists to Islamophobes is not a big one. A lot of New Atheists, including very prominent ones like Bill Maher, Sam Harris, and Richard Dawkins, have a tendency to single out Islam as particularly worth hating.
- making a statement to new users and new forums about what is acceptable and what isn't. Sometimes it's warranted to take some action even if it's only symbolic.
- making it inconvenient to be an ass. People are lazy and habitual. With luck, their preferred forum for hate disappearing means they pick up knitting instead of looking for another forum (in the same way that preventing crime in one area doesn't move it but actually reduces overall crime).
Until recently, the desire to have your genitalia cut off in order to pass as a member of the opposite sex was considered a form of body dysphoria. Can you point to the scientific breakthrough that changed that? I've been under the assumption that it was a social change.
In some cultures, e.g. areas of India, the concept of a third sex has been well established; those in such communities often find transsexualism strange yet accept the idea loosely described as "male body, female mind".
So I'd be interested in that pointer showing that acting out genital removal is not "aberrant". Does the research also show it's not a mental illness to want to become deaf, an amputee, etc.?
Direct me to one of your previous discussions if you like.
> I've had no success changing the minds of people who think that being trans is a mental illness despite citing numerous peer reviewed articles.
It was considered one until 2013, when the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) dropped it as a mental illness:
https://www.scientificamerican.com/article/where-transgender... Not that I think this has any relevance: many things once considered mental disorders are now considered personal traits. The point is that it's not an outlandish thing to believe, just an outdated one.
That's the point of the study. The concern was that by banning particular subreddits, they'd simply disperse the activity of those groups across the site. But that's not what happened, if you study the data site-wide.
After spending ~15 years arguing with the same, repeating dissenting opinions, with no evolution or change in those arguments, I'm ready to retire to my echo chamber, where evolution and global warming are real, and the earth isn't 10,000 years old.
We are having a conversation on Hacker News. Any sampling from HN's user base, including your sample of 1, is biased. Please correct me if I'm wrong, but there's a high probability that:
1. you have at least average intelligence
2. you can entertain logical arguments, since software development involves a lot of mathematical logic
3. you are heavily exposed to American culture and San Francisco's progressiveness, even those of us who aren't living in the US ;-)
4. you have a good education, either from an Ivy League school or because you are an efficient autodidact
5. you are middle class and have your basic needs satisfied by your salary
Don't get me wrong, but there's a high probability that you're living in a bubble, just like most of us here, and that you're really not representative of the average population.
Right, but the mods here don't allow heated debates on controversial subjects. They view them as pointless or even harmful. Further, they seem to have accepted a sort of broken windows theory of moderation that says that any such discussion has to be stopped, lest it leak out into other threads.
It's the other way around: the radicals have a need to engage the rest of society to convert them to their cause. That forces some degree of moderation; they know that if they are too far out, no one will listen to them.
A better metaphor would be that if you confine the Christians to their ghettoes soon someone will take up arms against the infidels.
>Now those people are gone and are off to internet spaces where nobody disagrees with them. This will reinforce their opinions, not change them for the better.
I agree that 'curing' them should be a goal but stopping the spread via quarantine is useful on its own. Forcing them off into their own corner is disrupting their recruitment channels which means there's less to cure tomorrow.
It seems most people are terrified that the bigots might be right. That's the only thing that makes any sense. It's the only situation in which censorship makes more sense than discussion. If the bigots are wrong, you just correct them, lampoon them, etc. There must be many people who suppose the bigots cannot be corrected, that the truth is horrible and must be hidden for the good of all. Those people are wrong. And they will do radically more harm than good.
Good intentions are the opposite of good actions. (An old German saying I quite like.)
No, we are arguing opinions here. FPH was not conducting double blind scientific studies on the efficacy of shaming as a weight loss intervention for the obese (AFAIK). And, in fact, if that study does show promising results, I may admit they are right and help them in their noble effort to solve the obesity crisis.
But what I hear them saying is, "Hey, it's cool to hate and harass half of the US population based on appearance!" I am one of the people you reference who thinks that that "cannot be corrected," but that's because it's not a falsifiable hypothesis! Neither is it "the truth"; it's just their opinion. And it is something that I think bored, mostly teenage people may buy into due to "digital peer pressure," which is admittedly a somewhat harebrained new concept, but one based on real effects.
Though with rampant censorship these days, few people have to build up any kind of argument to counter these bigots. It's a lot easier to hit a downvote button, or a ban button, than to present a reasoned argument at all.
What a fantastic place the internet is evolving into. A haven for, well, some kind of speech. Not free, of course. Restricted and safe speech. Perhaps speech licenses and internet IDs soon? If you say something bigoted we can throw you in jail and fire you. Harass your friends and family too. It's the only way to protect a free and open society.
In The Tipping Point, Malcolm Gladwell discusses an epidemic of suicide in Micronesia. In the same way, volatile minds will read hate speech and come to believe it. I know a lot of people who hate Trump just because a lot of others they know hate him. A lot of people call themselves atheists because a lot of educated people they have spoken to claim to be atheists. Of course, this is just correlation, not causation.
I disagree with any conclusions drawn that censorship has solved any kind of problem here.
Let's take the fat people hate subreddit. Okay, you ban every user and the subreddit itself. Some people make new accounts, and a new subreddit about hating fat people, so you ban all those too.
Did that convince anyone that their opinions were wrong?
Absolutely not. If anything, it convinced them, in their eyes, that they were onto something and had to be silenced.
All it's done is driven people with those bigoted opinions away from Reddit. These people still exist (until we start exterminating them I guess) and will continue to speak their bigotry wherever it is they do end up. All that's really accomplished is just cleaning the website of speech you don't want anyone to see. Kind of like sweeping dust under a rug - still there, can't see it though. Out of sight, out of mind.
You're leaving out a major argument for enforcing general community standards: does being part of a larger group make discovery enough easier to help recruit new members?
In the case of Reddit this argument is especially worth consideration given how common it is for people to promote bigotry with memes & other jokes or selective filtering (remember the guys who only post negative news stories featuring black criminals?). It's very plausible to believe that many users, especially younger ones, hit something like /r/news or /r/funny and end up on a hate subreddit without realizing how far off the mainstream it is. That seems especially worth studying given the common anecdotal accounts from parents who found their kid managed to start out in a Nintendo forum and end up in pretty dark places.
> I actually liked having offensive subreddits on Reddit because there were so many people that devoted themselves to arguing with the people that posted there.
The real problem with having "offensive" subs removed is that now they are going to target the next layer of speech.
You see this on reddit. Now that they have succeeded in banning coontown, fph, etc., they are targeting political subreddits. Hell, during the election, there was a push to have the_donald banned.
Then what? Christian subs? Food subs that advocate meat consumption?
It is encouraging a culture of outrage and censorship. Which is very scary.
This uses the pushshift.io dataset, which is collected by @jasonbaumgartne on Twitter, independently of Reddit, Inc. Huge props to him for collecting it; anti-props to the authors for not giving him a better acknowledgement in the paper.
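For anyone curious, the same data is also exposed through a public search API. A minimal sketch, assuming the endpoint and parameter names haven't changed since then (they may have):

```python
# Sketch: pull a handful of recent comments from a subreddit via the
# pushshift.io search API. The endpoint and parameter names are my
# understanding of the API at the time and may have changed since.
import requests

resp = requests.get(
    "https://api.pushshift.io/reddit/search/comment/",
    params={"subreddit": "science", "size": 25},
    timeout=30,
)
resp.raise_for_status()

for comment in resp.json().get("data", []):
    print(comment.get("created_utc"),
          comment.get("author"),
          (comment.get("body") or "")[:80])
```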
I wouldn't assume it's malice either, but I'm glad it's being mentioned.
There's a trend right now where researchers and reviewers have no respect for data, even as research uses more and more data. They don't think of citing the data they use any more than citing the air they breathe. This leads to lots of research built on bad data, as producing good data is not rewarded.
I've seen my own data set become a footnote, appear as the only uncited source on a slide, and be attributed to other people who worked on a related project but had nothing to do with producing the data.
If that's the case, then the data is biased and flawed. The data does not reveal independent subreddit bans, or posts that existed in subs but were removed by mods or deleted by the user.
I find the effects on "invaded" communities post-ban to be most interesting.
There seems to be no indication of any uptick in hate speech within communities that were "invaded" by former subscribers to the banned subreddits.
Similarly, there was a dramatic decrease in hate speech overall among the subscribers of the banned subreddits after the ban.
To draw an un-scientific and unsubstantiated inference, perhaps the power of crowd-psychology and echo chambers is responsible for more hatefulness than we yet can prove.
In some regards I would like to hope this is the case.
That doesn't mean much. People wear masks, self-censor, or adopt more oblique language. For example, despite using a throwaway right now, I am still not phrasing things quite as I would like, out of concern that I would not be taken seriously.
"hate speech communities" still consist of people. Those people can engage in other hobbies without injecting their political views into everything. Similar how vegans and meat-eaters can dine in the same establishment and atheists and christians can be buried on the same cemeteries.
I would even argue that dedicated places for hate speech lead to exaggerated expression from mostly reasonable people. Think of Trump promising the wall. People cheered him on for that, but that doesn't really mean they want a wall. They might be perfectly happy with stricter immigration policy or enforcement of existing immigration laws too. But shouting "build the wall" is a group ritual for them. Virtue signalling works for both sides.
Edit: After skimming the paper I noticed that the authors were also identifying subreddit-specific lingo. I wouldn't expect this to carry over unmodified to other subreddits. You can still denigrate particular groups without using those particular keywords.
Not to mention this completely ignores the fact that most subreddits have active moderators who remove hate speech.
If a user is forced to discontinue use of a subreddit where it's allowed and continues to use subreddits where it's not allowed, then of course you're not going to see an uptick in hate speech. You've done nothing to change their mind, however.
Reddit also has some absolutely fascist mods. For example, it's well-known that if you post anything that is against illegal immigration (and it gets too many upvotes!) /r/worldnews mods will go through and ban you after the thread dies down.
I was once banned there, 48 hours after a polite discussion of the economic costs (citing various sources), for "hate speech". I got too many upvotes, so it was noticed. Appealing to a mod resulted in him "examining my posts on other subs, and given who you voted for the ban will be permanent".
Actual hate speech is one thing, but far-left mods of major subs are using the banner of hate speech to silence well-informed opposing views.
Aside from a small number of general site-wide rules, reddit is based around the idea of "your subreddit (meaning, you're a mod of it), your rules". And nothing about it requires that every single rule be exhaustively defined up-front; moderator discretion is ultimately the sole definition of a subreddit's rules.
If you don't like a subreddit's rules, you're free to create a competing subreddit on the same topic but with different rules, and try to attract people to it.
I moderate a medium-sized (200k-ish subscribers) subreddit, though, so I know how unpopular this idea is with some people.
Though the authors clarified that only the hate speech specific to the banned subreddits has disappeared. I think that's still a strong result, but maybe the users might have gone on to spread hate about different topics. (I think that may be unlikely though, since they probably aren't just indiscriminately hateful but focus on a certain topic.)
> Similarly, there was a dramatic decrease in hate speech overall among the subscribers of the banned subreddits after the ban.
It decreases because almost every subreddit bans hate speech.
> To draw an un-scientific and unsubstantiated inference, perhaps the power of crowd-psychology and echo chambers is responsible for more hatefulness than we yet can prove.
Actually, it has been shown that censoring and stifling people leads to them becoming more zealous.
It's especially true if people feel persecuted or marginalized.
I'm not surprised to see a study that confirms censorship works as intended, just more concerned about the future societal structure we're building.
I grew up with free speech as the ideal. Free information, free speech, and freedom in general. Reddit itself was about that at the beginning. That zeitgeist is gone now - I just hope praising free speech doesn't become "hate speech" before I pass.
I thought the point of censoring hate speech was not to purify content but to prevent actual violence etc from occurring.... have we seen a decrease in violence against fat people since 2015? If not, I’m not sure what was really accomplished other than a slightly less annoying site...
There is a contingent that is very interested in re-decentralizing the Internet. There's also a boom in building (or claiming you intend to build?) a decentralized crypto layer on top of the Internet. I don't know whether this will ultimately succeed, but at least the dream is still alive in some form.
You call it censorship. But to what extent is Reddit -- or any person or organization -- required to provide a platform for speech or expression they find abhorrent or contrary to their values?
Would it be OK with you if I put a bumper sticker on your car? "Silence hate speech!" or "Jews will not replace me!" or whatever it is that you might find uncomfortable?
Sure. Then the question simply becomes whether it's OK to suppress ideas on a public forum because they are deemed by some to be a low-quality contribution.
It seems to me this is dancing around semantics to avoid the question, especially considering the problematic nature of the distinction between something that one considers low-quality vs. something one disagrees with.
They're fundamentally different questions. I have conversations with a lot of people on a lot of topics, some of those conversations are more substantive than others. Even if I disagree with someone it's easy to tell when a new idea has been presented and how well it has been articulated. My understanding of this voting system is that we are meant to use it to encourage meaningful conversation, not to motivate our own agenda.
Is it possible to abuse this system? Of course it is. That's why I am engaging you in this conversation: to perpetuate the idea that the voting system is actually a meta-vote about the quality of the conversation.
You must have grown up in a very strange time & place, because it's almost certain that in any reasonable definition, the freedom to speak and publish, and to reach an audience, is much larger today than ever before.
In this case the person literally ran over another walking, talking person. How exactly are you justifying giving a platform to a person who supports doing these things with a Nazi rationale?
One person did that. Just like one AntiFa asshole hit a guy with a bike lock. You can't take one action committed once by an individual and use it as a cudgel to eliminate speech you find distasteful.
This study seems to have failed to compensate for the most basic of confounding variables. For instance, their control group was specifically chosen not to be a control in the nominal sense, but rather other groups that were likely to be banned or otherwise reprimanded. Why? They showed that the so-called control group and the treatment groups had a slight decline in overall posting activity, but failed to consider the trends for the site at large. What was the overall trend, per user, at the time? Many of their control groups were also banned, some within the window of time they sampled!
The authors also seem to fail to account for the fact that the entire site began automated censorship of posts with undesirable keywords or from users who had posted in undesirable subs. Many of these posts cannot be recovered even from the API, which directly taints their data. This also resulted in two further confounding issues: a very non-zero number of users who stayed on decided to post under new names, and to use language less likely to be tagged by automated censors.
I think the important question is what effect this censorship has had on trends in such beliefs, rather than whether the specifically censored words continue to appear. The answer to the former is interesting and is the real question about censorship. The answer to the latter is rather self-evident and has little to do with whether censorship is effective, which is itself separate from the question of whether it's desirable.
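To make the complaint concrete: the comparison you'd want is something like a difference-in-differences against the site-wide baseline, rather than looking at the treated cohort in isolation. A toy sketch with made-up numbers:

```python
# Toy sketch (all numbers are made up): compare the change in per-user
# posting activity for the treated cohort against the site-wide change
# over the same window, so a site-wide decline isn't attributed to the ban.
pre  = {"treated": 14.2, "sitewide": 9.1}   # hypothetical posts/user/month before the bans
post = {"treated": 11.0, "sitewide": 7.8}   # hypothetical posts/user/month after the bans

treated_change  = post["treated"] - pre["treated"]      # -3.2
baseline_change = post["sitewide"] - pre["sitewide"]    # -1.3
did_estimate    = treated_change - baseline_change      # -1.9

print(f"raw drop for treated users:          {treated_change:+.1f}")
print(f"site-wide drop over the same period: {baseline_change:+.1f}")
print(f"drop attributable to the ban (DiD):  {did_estimate:+.1f}")
```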
Reddit takes users' content on the assumption that there is value in it. Where there is liability and no value, they are not interested in the content (makes sense).
I think the future is in a decentralized reddit-type system where content creators and curators are rewarded for their contributions fairly.
It works for people being edgy online, but can we censor glorification of a more serious problem than “fat shaming” - say, gang violence - and have it reduce acts of “hate”? I’m not convinced similar censorship/propaganda can stop more serious problems which entail actual violence... or do we really consider people making fun of fat people a serious problem which, once solved, improves society in a significant way?
> I’m not convinced similar censorship/propaganda can stop more serious problems which entail actual violence
I think a more useful framing of this is whether and how it changes the situation. I don't think anyone is arguing that this stops it, but it's not hard to imagine that given these two alternatives ((a) provide forums where a given behavior is allowed and (b) censor forums where a given behavior is allowed) that it might have some impact.
And let's be clear: it's not like this is a single-argument function or that there's only a single value that's being optimized. If it were only so simple :)
Very interesting result, somewhat encouraging to me, because it seems to cautiously indicate that you can actually do something about those echo chambers that may start as jokes but then actively normalize hate and discrimination, possibly pulling formerly neutral users in too. (And keep in mind that those hate echo chambers are not places of honest discussion, since they regularly ban any opposing viewpoints and only strive to keep the hate train going.)
> "In a sense, Reddit has made these users (from banned subreddits) someone else’s problem. To
be clear, from a macro persepctive, Reddit’s actions likely did not make the internet safer or less
hateful. One possible interpretation, given the evidence at hand, is that the ban drove the users
from these banned subreddits to darker corners of the internet."
Could these groups now become seriously dangerous when they convene and discuss in the shadows?
I think an interesting extension of this study would be looking at how other sites were affected by these bans as well. There could well be a mass exodus to a different site, like 4chan or Twitter, where the community reforms.
This seems like a case of using an easily available proxy end point and cheerfully ignoring broader social issues. We are in the middle of an obesity crisis and should be looking at the issue of "Fat People Hate" through the lens of morbidity and mortality.
The typical post on Fat People Hate consists of a picture of a morbidly obese person overflowing the seat of their mobility scooter. The comments echo each other as the commenters say how disgusting the sight is.
One topic for empirical investigation is whether the person in the photograph responds to the shaming by losing weight, or perhaps even gaining more. But typically the person involved is unaware of the existence of Reddit. There is a break in the causal chain and nothing to investigate.
Another topic for empirical investigation is whether the (presumably) slim participants in the Fat People Hate sub-reddit maintain a healthy weight. One conjecture is that participants are exposed to extreme images of morbid obesity and become complacent about maintaining a healthy weight. They go on to become overweight and a proportion get fatter still and suffer serious health consequences.
An alternative conjecture is that some of the participants feel social pressure in their daily lives to clean their plate, eat up, and not be a broccoli-munching spoilsport. Fat People Hate gives them an opportunity to construct and maintain an identity as someone who doesn't get sucked in by their local social pressure to overeat. They unexpectedly maintain a healthy weight, contrary to predictions based on their place in society.
We can articulate an ideal for research in this area. It involves tracking down the participants in FPH and discovering records of their body weights. The important questions are: While they participated in FPH, did they buck national trends towards obesity? After FPH was banned, did they gain weight?
We are actually interested in counting diagnoses of type 2 diabetes and toes amputated as a consequence of diabetic neuropathy. When we ask about the effectiveness of the ban on FPH, we are actually asking about weight gained and lives lost. But when research question 1a asks "How were their activity levels affected?", they are not talking about walking or jogging; they are talking about speaking.
Perhaps I missed it, but this paper seems to ignore usage of alt accounts. Many users have more than one reddit persona and may have simply stopped using their FPH persona rather than leave the site as the authors assumed.
> Perhaps I missed it, but this paper seems to ignore usage of alt accounts.
It's also mistakenly attributing the lack of "hate speech" by these accounts to the banning of subreddits, when it's actually that most of reddit now censors "hate speech" and bans accounts that post it.
They actually did tweak the /r/all "heat" algorithm partly to prevent /r/The_Donald from dominating it (because T_D users were gaming the old algorithm by mass-upvoting a firehose of shitposts).
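For context on what "gaming the old algorithm" means: reddit's classic "hot" ranking (from the long-since-open-sourced codebase, not necessarily what runs today) scores a post as roughly log10 of its net votes plus a recency term, so a firehose of new posts that each grab a few hundred quick upvotes can crowd a listing almost as effectively as a handful of genuinely huge threads:

```python
# The classic reddit "hot" score, as published in the old open-source
# codebase (translated to plain Python here); newer ranking code differs.
from datetime import datetime, timezone
from math import log10

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def epoch_seconds(date):
    return (date - EPOCH).total_seconds()

def hot(ups, downs, date):
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = epoch_seconds(date) - 1134028003
    return round(sign * order + seconds / 45000, 7)

# log10(10000) - log10(500) is only ~1.3, which the formula treats the same
# as ~16 hours of extra recency (1.3 * 45000 seconds), so freshness dominates
# once a post clears a modest vote threshold.
print(hot(500, 0, datetime.now(timezone.utc)))
print(hot(10000, 0, datetime.now(timezone.utc)))
```

That log compression is exactly what mass-upvoting a steady stream of shitposts exploits.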