Reddit and the Struggle to Detoxify the Internet (newyorker.com)
401 points by smacktoward on March 12, 2018 | 717 comments



We tend to think we’re more in control of our behavior than we actually are. That is, our brains are operating from habit a large amount of the time, and the idea of the CEO brain consciously deciding our every action is largely an illusion.

This is why emergency exit doors must open outwards. People have died in fires in theaters trying to push on doors that said “Pull”. The crash bar on an emergency exit door is what’s known in design as an affordance.

When you present people with a door handle that affords “pulling”, they will pull even if the sign says “push”.

Now consider the fact that the primary affordance of social media is the “reaction”. Is it a surprise that content that garners a reaction will trend towards the outrageous?

If proactivity is the road to a more fulfilled, more civilly minded life and society, maybe we need to think of our affordances. Because we’ve made it awfully easy to be reactive, and awfully cumbersome to be proactive.

Edit: The comments below correctly point out an error; what I should have said was that there’s a danger in putting a “pull” handle on an emergency door that pushed outward, because of the confusing affordance.


This is an insightful analogy.

To stretch it a bit, I'd say there's an important difference between a fire door and reddit. The fire door knows its job is to prevent people burning to death. Reddit... does it know its job is to prevent outraged reaction?

I think this is one of the things that made facebook so problematic on politics: it can't tell the good likes, comments and shares from the bad ones. I'm not sure they really had a concept of better and worse. Some stuff isn't allowed, but otherwise?

Imagine one post, where Mrs X invites neighbors to meet a local candidate at her house for revolutionary thoughts and biscuits. Another post, where Ms Y rants about Macron voters, Trump, taxes and kids these days. Both are political. One is actually democratic and participatory. The other is cheap, nasty, unproductive and divisive. Does Facebook, in any meaningful sense, value one over the other? Does reddit?

Reddit has its non-censorship values. I respect that. It's important that someone does. I also think they want to house the weird, and I respect that too. But, I think unrestricted speech may be an insufficient value, like nondiscrimination or atheism. It's not enough to build on. You need positive values too.

Free speech is also problematic, when taken as 'all speech is equal.'


>Reddit... does it know its job is to prevent outraged reaction? //

Ha, ha, ha.

Cause that'll increase pageviews.

Outraged reaction is exactly what all news media is going for because people rant and rave and in passing see more adverts. It's just they want to sanitise the topics according to their advertisers wishes.

Reddit doesn't have non-censorship values, it's heavily censored; not all from the top admittedly, but the sanitisation that's gone on in the last few years is huge as Conde Nast have moved to make it a more tempting platform for advertisers.


Well, yes that's the obvious (and cynical) assumption. It's definitely true as a real pseudo-economic force on the internet and media generally, but let's not just assume reddit are following that "interest" blindly.

If you think reddit doesn't have those values, I guess we disagree. I don't see any way of coming to that conclusion apart from fundamentalism: the view that it's either absolute or it's bullshit.


I'm not a Reddit user but there are plenty of stories out there about how the corporate managers have permanently closed some offensive forums. To be clear, I'm not claiming that Reddit management did anything wrong; they're under no obligation to spend money spreading toxic content. But obviously they don't value non-censorship as a corporate value.


Let's see - Vanity Fair, Wired, GQ - there seems good reason to think that Conde Nast are willing to compromise and go with the clickbait, a lot.

https://www.wired.com/story/bad-actors-are-using-social-medi... - is Reddit the first entry? Is it even in the list? Nope.

I think Conde Nast are prepared to stoop pretty low in the search for dollars before integrity.


To be fair, Conde Nast also owns Ars Technica, which, in my experience, has a fairly exemplary reputation as far as avoiding clickbait and running good articles.

That said, it certainly seems to me that, even if not as an explicit business decision, Conde Nast has no problem with their properties sensationalizing their media for views/clicks/etc.


Increased pageviews, yet many users also hate constant outrage and leave the platform entirely. It is not obvious that a platform with civil, high-quality discussion would lose in the marketplace (not obvious it would win either, though).


> But, I think unrestricted speech may be an insufficient value

It looks like you are imagining that every piece of speech has an objective "value" which can be determined (maybe it's hard to do, but if we throw enough "big data" magic dust onto it we can get at least close) and that speech can then somehow be sorted by value.

I think it's completely wrong from the premise up. The value of speech is a subjective measure. Some people value the invitation to meet candidate X; some people value the rants of Ms Y. In fact, if the salaries of talk radio hosts and late night comedians tell us something, way more people value rants than measured, polite discussion. Thus I think "value" exists only in the eye of the beholder, and trying to objectify it would only mean dismissing the values of one part of the audience and emphasizing the values of some other part. I can see why a site may want to do it and why one may want it to happen, if he or she happens to belong to the latter part, but I see no real justification for it.


> Thus I think "value" exists only in the eye of beholder, and trying to objectify it would only mean dismissing the values of part of the audience and emphasizing values of some other part of the audience.

And yet we need to do it. As you observed, "way more people value rants than measured, polite discussion", and I'd claim that this is a problem. The "value" may be subjective, but the consequences of both "kinds of speech" are real, and so it would be great to incentivize the kind that leads to a more stable, more just society, and disincentivize the one that causes thoughtless destruction.

Yes, I'm aware that "disincentivize" is getting dangerously close to "ban", but I feel we need to try and walk that fine line, if we want to have a society that's better than just random.


> And yet we need to do it

Who are "we"? And what is "it"? Do you just take on yourself the mantle of decider for the whole world who is worthy and who is not? Or how is it determined? Who is worthy to wear that mantle? I don't think any human is.

> I'd claim that this is a problem

Maybe human nature is a "problem". But what are you going to do, replace humanity with a better species? Do you have one in mind? Beyond that, I don't see how declaring it "a problem" helps anything, unless there's a fix to this problem. History teaches me that all attempts to create a "new, better human race" didn't just end badly - they ended so terribly that when people point at it, other people get offended that you dare to compare their ideas to that. So, we have to deal with what we have now - and have had for millennia. Given what we did with it - well, not ideal, but certainly there has been some improvement.

> it would be great to incentivize the kind that leads to a more stable, more just society

How do you know what leads to more just society? Maybe rants would lead to more just society faster? As for stability, stability is only good when we're at the optimum. Are we?

> I feel we need to try and walk that fine line, if we want to have a society that's better than just random.

Which fine line? Everybody has their own fine line, that's the point. You could maybe find a bunch of people whose fine line is similar to your own, if you don't look too far into the future (and if you do, what you get is https://www.youtube.com/watch?v=WboggjN_G-4) - but pretending there's some line that's good for everybody is just willful blindness. And out of all possible ways of building a better society, I don't think dismissing people that have different views as something that doesn't matter is the best one to start with.


> Maybe human nature is a "problem". But what are you going to do, replace humanity with a better species?

I agree that the root of the "problem" currently lies within people and not process. I agree that changing people wholesale is not easy and not desirable. I agree that there has been some improvement.

I also think human nature is a product of the environment. The way people behave on different websites, in different countries, and in different social situations shows this rather clearly. There is no fixed set of anything that constitutes the whole of how people act. Put someone in a nudist colony, and the environment changes, and the way they act changes (with time). If a reddit user starts going to 4chan, the environment changes, and the way they act changes. Put someone who follows the "always defect" strategy in a community of "always cooperate" people, and the environment changes, and the way they act changes. Put a racist in a racially diverse community of acceptance, and the environment changes, and the way they act changes.

If you accept this idea, it follows that certain environments can be better for society as a whole. Case in point with Reddit: they decided an environment without bestiality and certain violent elements would be better. Maybe they were wrong, but I don't think so. I'm not suggesting I have a wonderful theory of what the best environment is, only that there are better and worse ones. The problem of what we value is hard, but that doesn't mean it's not worth trying.

This ties the loop back: human nature is a product of environment, and environment is a product of humans. We have the power to create environments that make thoughtful discussion easier and hate harder. We can put energy toward solving the problem by changing the environment. HN has an environment I am very fond of, despite being here for only a couple years. I appreciate the work that has gone into making the comments an insightful, respectful, and generally nice place.

We can't replace humanity. We can't change people without changing the conditions they exist in. We can change the circumstances of our struggle in order to grow together as a species.


Picking out only a very small thing that you said:

> I don't see how declaring it "a problem" helps anything, unless there's a fix to this problem.

This seems wrong to me.

The process of (as a group) clearly identifying things that are problems, and coming to agreement that they are problems, and coming to agree on whether they are important, is of fundamental value, regardless of whether we have solutions at hand.

Without this step, folks will either be ignoring problems because they don't know about them, or proposing "solutions" to things that others don't even see as problems, neither of which can lead to any good...


This post is basically saying the problem is not worth looking at because it's too scary...


No, it is not. It's saying that if you're looking at something as if your private point of view is objective reality, and dismissing the very existence of other points of view, then the problem may not be where you're looking for it.


"How do you know?", "What are you going to do?" "Everyone is different so we can't know anything for sure!" are very much defeatist, knee-jerk responses to someone doing something extremely important:

Presenting what the problem is.

The first step to solving a problem is understanding it. It's not solving it. Trying to solve a problem immediately is like trying to write code before you fully understand the requirements.

If you lack the mental fortitude to simply look at a problem without having an immediate solution to it, you're not going to be able to solve major, ugly, nasty, uncomfortable problems like this.

But, inevitably, these problems will show up and knock on your door. Running away from them is not a good plan.


What if I don't believe in trying to control people at such a low level? What if I think people should be allowed to be as ranty as they want without having to worry about being "disincentivized"?


Then I'd like to talk with you, see if you have an alternative way of keeping the society from self-destructing.


So we've got desires to identify with groups, and we've got desires to share criticisms of groups, and we struggle to find a balance. If we never criticize, we stagnate; if we never identify, we "self-destruct". We can come up with a "quick solution" such as keeping more of our criticisms to ourselves, but that's certainly not a goal you want to pursue overzealously. You need to be able to share criticisms.

We can tell people it's not attractive to be ranty, but I'm not very comfortable going further than that, at the risk of wandering into thought policing.

Look at some modern political opinion; I'm sure you've seen it as much as me. Ranting is cool. Calling everything under the sun "problematic". The tricky thing is it's not always wrong. There's surely a nearly infinite list of "problems" one could identify, and some of them are truly important. But we just need to turn down the heat on the criticism for just a second. You can't just ask partisans on both sides to listen to each other more. We need to make it less cool to be blindly partisan. We need to increase the value of being able to identify with anyone. And we need to make it really uncool to judge hundreds of millions of people you've never met with deep assumptions.

I don't know, it's an interesting problem and I haven't thought of it this way very much. All I'm sure of is that this growing lack of interest in protecting free speech is about the only topic in modern politics that I give a shit about.


> We can tell people it's not attractive to be ranty, but I'm not very comfortable going further than that, at the risk of wandering into thought policing.

I don't want to see it going much further than that either. I was thinking more along the lines of making it so being thoughtful is "sexy" and being ranty isn't, the way today owning a car is "sexy" and smoking isn't.

Free speech has its positive and negative consequences on stability and happiness; I do not want to fight free speech, I'm looking for ways to reduce the negative consequences. I'll protect your (and mine) right to rant about whatever you want, but I sure as hell would like the general policy discussion to involve less rants and more thoughtful cooperation.


> Free speech has its positive and negative consequences on stability and happiness; I do not want to fight free speech, I'm looking for ways to reduce the negative consequences. I'll protect your (and mine) right to rant about whatever you want, but I sure as hell would like the general policy discussion to involve less rants and more thoughtful cooperation.

I agree that free speech is an essential aspect of what makes us human, and that it comes with many positives and negatives.

In implementing a plan to mitigate the negatives, though, I much prefer a private entity such as Reddit censoring whatever it wishes, since if people believe it has become too harsh they can simply leave. I'm wary of letting an entity like the government (whose constituents can't easily just leave) get involved, as it allows for many conflicts of interest. These conflicts could be instances where the ruling party or minority parties push to label an opposing belief as more divisive, or where the ruling majority seeks to 'disincentivize' a minority or outside belief/religion by saying it is offensive to what they deem our values.

I feel like we should push for the civilizing of speech to be a societal change, not a policy-based change.

On a slightly different note, people have been saying that language and civil discourse have been going to hell for a very long time. George Orwell rather famously wrote an essay titled "Politics and the English Language" in the mid-20th century, wherein he detailed how society was moving towards using unclear and imprecise language to pander to the many without being forced to use falsifiable statements. Anthony Burgess wrote "A Clockwork Orange" in the 1960s, underscoring the main characters' savagery in part through their use of 'barbaric' dialect. William Langland wrote in 1386 that "There is not a single modern schoolboy who can compose verses or write a decent letter." While civil and educated discourse is an important issue, people have been saying its decline will lead to the downfall of society for a very long time; in many cases language is just changing, and the entrenched powers dislike having to cope with that change.


I do. I propose we do exactly what we've been doing for the last 10,000 years when human society didn't self-destruct. Do you have any other proposal that has a similar or better track record?


Human societies self-destructed plenty of times over the past 10,000 years.


Those 10,000 years have involved very, very little in the way of free speech for most people - I don't think going back to the days of lèse-majesté and the Inquisition is what you had in mind, even though those institutions certainly provided stability in a sense.


Facebook and YouTube already do stuff like this: "too many people are just having fun clicking like on funny images instead of typing long comments, so let's increase the newsfeed and recommendation penetration of posts and videos with more comments... oh, shit, but now we are promoting flame wars as the best way to get more comments is to troll people".
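To make the failure mode concrete, here's a minimal sketch of that kind of heuristic (hypothetical Python with invented numbers and field names, not anyone's actual ranking code). Weighting comments heavily is exactly what floats flamebait to the top:

  # Hypothetical feed-ranking heuristic; every number and
  # field name here is invented for illustration.
  posts = [
      {"title": "cute dog photo",      "likes": 900, "comments": 40},
      {"title": "thoughtful essay",    "likes": 300, "comments": 60},
      {"title": "political flamebait", "likes": 200, "comments": 800},
  ]

  def engagement_score(post, comment_weight=5.0):
      # Comments get extra weight because they "signal discussion";
      # trolling is the cheapest way to generate them.
      return post["likes"] + comment_weight * post["comments"]

  for post in sorted(posts, key=engagement_score, reverse=True):
      print(post["title"], engagement_score(post))
  # The flamebait wins by a wide margin despite having the fewest likes.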


While my policy might seem similar to theirs, were I in their shoes I wouldn't be trying to promote content just for generating comments. Number of comments isn't a particularly useful measure of anything other than how much discussion the content creates; it's no indication of the quality of that discussion.


>Reddit... does it know its job is to prevent outraged reaction?

Says who? Let's get a little more objective first. Its job is to make money, presumably, if not encourage participation by any means necessary. Its job isn't to moderate. Its job is to allow the creation of sub-communities that can be moderated in any conceivable way. Most moderators aren't interested in reducing reactionism, they're just interested in reducing whatever they or their community doesn't like.


>unrestricted speech may be an insufficient value, like nondiscrimination or atheism. It's not enough to build on. You need positive values too.

I agree with this statement, except for the inclusion of atheism in the list. Atheism is literally the lack of a belief. You may as well say "The lack of belief in astrology isn't enough to build on. You need positive values too."

Unrestricted speech and nondiscrimination argue for something. Atheism literally argues for nothing.


> Free speech is also problematic, when taken as 'all speech is equal.'

Free speech is not problematic, as long as everyone has the right to speak and falsehoods can be debunked - there should be no "safe place" for the exchange of ideas, whether they are good or not. Starting by saying that 'free speech' has a problem is a very, very dangerous place to go.


> there should be no "safe place" for the exchange of ideas

What do you think a "safe space" is? If you are arguing that there should be platforms where people can speak without being shouted down when the audience strongly disagrees with what they're saying, that is a safe space. In order to construct that space, you have to deny some rights of the audience to speak in that context.

And this is the whole problem with naive free speech advocacy. Unrestricted free speech is not possible any more than unrestricted freedom in general is possible. People cannot possibly hear every single person's viewpoint, so some people will always be denied a platform to speak to some other people.

The question is how best to structure our societal discourse. What values are important, and how do we protect them? And the question needs an answer more complicated and nuanced than "free speech". Because when we don't acknowledge the complexity of this question, we become blind to the de-facto decisions that we're making about which speech to prioritise.


> If you are arguing that there should be platforms where people can speak without being shouted down when the audience strongly disagrees with what they're saying, that is a safe space. In order to construct that space, you have to deny some rights of the audience to speak in that context.

No, I am not asking for that kind of platform. I am saying to let people express what they want to say, and the only restriction on Free Speech should be "direct incitement to violence" (such as publicly calling for someone to be lynched), as recognized in US constitutional law. Everything else should be able to be said and be heard, and debated between people as long as they want to debate. And of course you will be responsible for what you say, as an individual, and you will have to face the consequences of your words. But it goes both ways.

Restricting Free Speech puts power among the ones in control of Speech. Allowing Free Speech is the only thing you can do to allow even the marginal points of view, even unpopular ones, to be heard.


>Everything else should be able to be said and be heard

But what does that actually mean? If all you're saying is that the state should not stop them, then relatively few people disagree with that, but the argument usually goes further. There are many ways people's speech can be limited without the involvement of the state, be that de-platforming, protests, or economic or social limitations.

>Restricting Free Speech puts power among the ones in control of Speech.

This is true, but the reality is there will always be restrictions on speech. It is not physically possible to let everyone speak to everyone, or even just those willing to listen. However we structure our societal discourse, it will always privilege some speech over other speech.

We have to engage with, and be ready to criticise, the implicit decisions being made about what speech is privileged and why. Because if we don't, then we cede power to those who already restrict speech with these decisions.

Just saying "don't restrict speech", and thinking that gives everyone a voice, is incredibly politically naive.


> Just saying "don't restrict speech", and thinking that gives everyone a voice, is incredibly politically naive.

I don't see how something that is a fundamental piece of Western Civilization (at least in most of the English-speaking world) can be "incredibly naive".

> It is not physically possible to let everyone speak to everyone

No, but first not putting any filter on the nature and contents of speech, as long as it is not violent, is something we should stand for. The "How" is irrelevant.

> However we structure our societal discourse, it will always privilege some speech over other speech.

This should be an individual's choice to make, as in what "speech" you want to listen to. When you go on social networks for example, it should be expected and natural to find people who share different views, no matter how revolting they might appear to you. And we should find comfort in the fact that they are allowed to be expressed, because in turn we are allowed to express ourselves just as well. So in fact, there is no intervention needed by any state actor - on the contrary, free speech is the tool that enables us to discourse and experiment with ideas. Just shutting the door, or filtering inconvenient speech is not making it go away, and certainly will end up having no positive effect towards those who profess such speech, because it could further solidify their opinions and prevent them from being receptive in the future.


Free listening is just as important a concept as free speech. I get to decide who and what I listen to.

It may be good for me to hear an uncomfortable truth, but ultimately I get to decide whether I listen or not. If I want to live in an echo chamber, I can do so, but it should be my choice. Equally if I want to hear uncomfortable opposing views, I should be able to. (Somewhere. Not necessarily on Reddit.com. The site's owners can publish whatever they want.)

This is true in practice, because as a last resort I can put my fingers in my ears and say “la la la not listening” loudly. So frankly, advertisers should give up trying to force me to see their adverts — they can't — and persuade me to listen instead.


Actually, it's designed to open outwards because when there's a crush of people, there's no room for the door to open inwards.

Also, affordances aren't to support habit or unconscious behavior. Affordances are to imply meaning or how to use something by their design (whether digital or physical).


> Affordances are to imply meaning

That's actually called a signifier. https://ux.stackexchange.com/a/94270


I disagree. A signifier gets into semiotics, but more explicitly. e.g. the word "push" is a signifier, but a large bar that makes it seem like you can push it is a perceived affordance.


Speaking of them as signifiers isn't useful here, but yes, these are signifiers. You said yourself they convey meaning. The implication of their use wouldn't happen without signification.

One could even write a book about the discourse of affordances. It would be boring and irrelevant, but it wouldn't be wrong.

Sorry, just defending the mechanics of meaning.



Good points in your comment, I'd probably use the words impulsive vs. self-controlled (instead of reactive/proactive).

Asking people to build user-interfaces that promote self-control seems a bit naive though. There's a reason self-control is identified as one of the primary "fruits" of a Christian life (Galatians 5:22-23), since it's exactly the opposite of default human behavior.


Crash bars offer an unthinking affordance: if a crowd presses up against crash doors, the doors open automatically, relieving the pressure (at least as long as the doorway or passage itself is not overwhelmed).

Automatic negative feedback mechanism.



Yeah, I think this is close to the right interpretation.

Much of internet usage is, essentially, a giant slot machine. We've studied the things that make gambling addictive and intentionally incorporated them into social platforms to keep eyeballs on. The goal is to generate a reflexive subconscious impulse to check for new inputs with specific relation to the platform. A good high-level overview of this is the book Hooked, about how to build addictive computer interfaces. As we've "gamified" and "optimized for engagement", we've created an apparatus that many have difficulty understanding or overcoming.

This concept cemented itself for me as I watched an Amazon FBA seller just impulsively hit "Match Price" over and over again, despite the fact that it was leaving him with margins of less than 25c per unit. He couldn't be reasoned out of this and insisted he had to match the price of the lowest seller, even though his units would've sold at much higher margins over the ensuing weeks if he hadn't.

I realized then that really, he was simply gambling. He hits the button, sees a corresponding spike in sales, and sees a corresponding increase in his "income", despite the fact that letting it increase at a slightly more moderate rate would've easily netted 3x-4x more money (not talking years here -- just a few weeks to sell out as non-cheapest). He does it because he likes the impulse. He likes pressing the button and correlating that action with numbers that say he increased his money.

Facebook is the slot machine; Likes and Comments (broadly "engagement") are the currency; Your Posts are the input. You drop in a post and find that bland, mature, predictable posts don't generate a bright animation or engagement. You respond by posting more and more stuff that has high-engagement quality, which is, of course, the more controversial things like religion, politics, and identity.

The check on this is that one's own identity is tied back to it, but that generally just means that you take the things you conceptualize as positive self-image to an extreme that warrants reaction, in particular in these controversial fields that generally have high-yield output from the slot machine.

So Republicans are put on a path toward cartoonish Republicanism, Democrats on a path toward cartoonish Democratism, and so forth. It has a distorting, reinforcing echo-chamber quality on everyone. It encourages people who are more moderate on certain issues to disengage, it encourages us to select them out and weaken our bonds, it makes it easier for our own little world to become all-consuming and feel all-encompassing. It has an overall corrosive effect on public discourse and relationships in general.

The more I think about it, the more I think the physical constraints of "real life" on dialogue and relationships, like being in a room with someone who can punch you if you sink too low, or making instinctive reactions and adjustments based on a conversation partner's body language, are much more valuable than we've assumed.


And for any that are skeptical about the gambling link, think about the now-ubiquitous pull-to-refresh gesture on mobile interfaces -- does it remind you of anything... like a slot machine perhaps?


Even worse than gambling--the output isn't random. You can actually affect the result, a lethal combination for gambling behaviors. With slot machines, it's at its worst when the gambler believes there's a system that works. In this case there actually is one, making it capable of drawing in folks who would not be at risk of being pulled into a slot machine.


> Now consider the fact that the primary affordance of social media is the “reaction”. Is it a surprise that content that garners a reaction will trend towards the outrageous?

Well, maybe, but is there a way it could be otherwise?


We are on rails. Even "The Road Less Travelled" is another road, ignoring the infinity of directions we could actually take.

We can't even notice the other possibilities, let alone consider them.


>This is why emergency exit doors must open outwards. People have died in fires in theaters trying to push on doors that said “Pull”. The crash bar on an emergency exit door is what’s known in design as an affordance.

Is this true? I always assumed it was because it's pretty much impossible to pull open a door when you are being crushed by people pressing against one another trying to escape.


Yes, and recently.

>An inward-swinging door - three times cited as a code violation by West Warwick inspectors and three times replaced by club managers - was blocking the exit closest to the stage.

https://www.bostonglobe.com/metro/2013/02/15/series-errors-s...


For some context in case anyone is curious, two sentences later the article reports that "Bouncers opened it immediately after the fire broke out".

The door in question's main problem appears to have been that bouncers wouldn't let people use it, not its orientation.


AFAIK you're correct: a vertical handle is an affordance that indicates a door should be pulled, but the reason is mechanistic. A crush of people will open the door with a push-release but won't with a pull handle - whilst the bar affords pushing, some numpty will try and pull it, but the people behind them will soon ensure that they push the door, with their broken ribs if necessary.



I think it's true to some degree; I've seen people push on doors that say "pull" even when there is no fire.

But I think your assumption is probably more correct, past a certain point they have no option to pull anymore.


The push/pull "mistake" is usually caused by poorly designed doors. Like putting a vertical grab handle on a door that pushes open.


It's the same sort of myth as "why are manhole covers round?" and there are lots of attributions for why that ignore history for a just-so story.


I don't understand the reference. Why are manhole covers round? I always thought it was to prevent the cover from falling into the hole. Is that not the case?


They are only round when they cover a round hole. I've seen a number of covers that are other shapes, often square.

There are engineering reasons to prefer a round hole, but there are sometimes other considerations which push a different shape.


> Why are manhole covers round?

Manhole covers exist in several shapes, as illustrated on the Wikipedia page.

https://en.m.wikipedia.org/wiki/Manhole_cover


That is the argument, but actually any lipped shape can give this guarantee. Even historically, manhole and utility covers had to be somewhat flush with the ground, so they've always had to be lipped for this.

You can see lots of utility covers that are hinged, square, or other options, but all are lipped for safety. While a circular profile makes the lipping easier, it doesn't seem to influence many utility covers.


You can easily put a square or a rectangular utility cover into the hole, because the length of the side is less than the diagonal. This isn't possible with a circular cover because the diameter is uniform.


The parent’s point is that a lip prevents this for every shape, by making the hole smaller than the side-length.


Only with a sufficiently large lip. The diagonal of the hole has to be less than the shortest side of the lid. As I picture it in my mind this is an excessively large lip.
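For a rough sense of scale (my own back-of-the-envelope numbers, not anything established in this thread): take a square lid of side s over a square hole, with a lip of width w on each edge, so the hole has side s - 2w. The lid, turned on edge, fits through only if the hole's diagonal sqrt(2)*(s - 2w) exceeds s, so preventing fall-through requires w >= s*(2 - sqrt(2))/4:

  import math

  s = 1.0  # side of a square lid, arbitrary units
  # A lip of width w on each edge shrinks the hole to side s - 2*w.
  # The lid (turned on edge) fits through only if the hole's
  # diagonal exceeds the lid's side: sqrt(2) * (s - 2*w) > s.
  w_min = s * (2 - math.sqrt(2)) / 4
  print(w_min)  # ~0.146: the lip must be about 15% of the side length

So on a square cover the lip needs to eat roughly 15% of the side on each edge, which does seem like quite a large lip.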


If you have a very rectangular shape then the lip would have to be large.

Stop for a moment next time you walk in an urban environment. Look at the circular utility covers you see as opposed to the square ones, and compare the features of the circular ones vs the non-circular ones.

One of the reasons this myth irritates me so much is that everyone is so certain that they know exactly the answer but their actual daily experience doesn't line up with the results at all.

I've heard several people offer explanations from a geometric safety option (which is attractive for free-standing covers) to the simplicity of manufacturing (e.g., that it's very easy to make circular molds and get even density compared to square molds) to simply what the contractor suggested. I've also heard people suggest that metal cylindrical templates were something very common to manufacture for a variety of industrial uses.


I'm not certain, that's why I asked the question in the first place.

I'm not sure what features I would look for because the lip on a rectangular cover would be under the lid. How do I know that the rectangular covers have sufficiently large lips to prevent the lid from falling in? I'm willing to accept that in some cases the lids are rectangular and there is a risk of them falling in.


If only this were true for certain utility holes in my neighborhood with rectangular lids. I've sprained my ankle a few times. I mean, they won't fall in accidentally, because of the lip. But you can lift one out and put it in the hole, due to negligence or a taste for vandalism.


> You can easily put a square or a rectangular utility cover into the hole

Then why do rectangular (including square) and triangular manhole covers exist? Rectangular ones are quite common, triangular less so.

Grandparent is correct: lips are what, in practice, prevent manhole covers from falling in, not being circular (which many are not).


Well maybe in some cases the risk of the lid falling in is acceptable. Or maybe there are other mitigations in place.


I don't know why rectangular utility holes exist, I assume cost.


FYI, I've personally dropped a rectangular manhole cover into a properly lipped hole. They do fit all too well.


Sewer grates are frequently square. For what it's worth.

(Not always, but quite frequently.)

They have the same gravitational-gradient dynamic as manholes.


What's the myth? There are good reasons for round manhole covers.

Also fun fact: That used to be a Microsoft interview question.


That is not where the myth originated, but it is where I heard it first.


Thanks - you correctly caught an error and I’ve edited the post to address it. The danger is in the confusing affordances, and the point was about that and not the direction of the door swing (which as you’ve correctly pointed out has other more primary considerations.)


Don’t worry about that. Just retweet it as a fact, pronto.


> Now consider the fact that the primary affordance of social media is the “reaction”.

Questionable.


Ironic!


I can't wait for the word "toxic" to drop out of favor. It implies that the person is not just giving a bad opinion, but is in fact fundamentally flawed and dangerous. Arsenic isn't toxic because it had a bad day at work, arsenic is by its nature a deadly poison and you can't change it, only avoid it. The vast majority of people labeled "toxic", though, are just humans with a variety of opinions and beliefs, some you may agree with and some you surely disagree with. They may change these beliefs, but not by being vilified and told they are intrinsically bad.


Poisoning a conversation is a thing. And if you've participated in online discussions I don't see how you can honestly say it doesn't exist.

You're implying that in every thread where a troll shows up to derail the conversation we all have to stop what we're doing and give thoughtful responses to the troll to show them the error of their ways. But then the only possible conversation is debates with trolls. But that gives too much power to the trolls. Sometimes you just have to shut the trolls up so you can talk about what you want to talk about.


> Poisoning a conversation is a thing. And if you've participated in online discussions I don't see how you can honestly say it doesn't exist.

"Poisioning a conversation" is a bad metaphor that conflates two different things which do exist, but need to be dealt with in two different ways:

1. Baiting: trying to say something horrible to anger people for their own entertainment. The proper response to this is simply to ignore it: if you aren't entertaining the baiter gets bored.

2. People saying things they actually believe, even when those things are genuinely terrible. Responding to these people prevents their beliefs from going unchallenged, and is the only way we can possibly hope to change those beliefs.

If it were just group 1, you could just ban those people and that would be fine. But the problem with that is that sometimes people are actually in group 2, and engaging those people and correcting them is part of arriving at shared values in a functioning society. The tendency to accuse people who are genuinely expressing their (awful) opinions of simply baiting so that you can ban them is problematic for open discussion.

> Sometimes you just have to shut the trolls up so you can talk about what you want to talk about.

Contrary to what you're saying, I think it's very possible to have conversations about what you want to talk about while letting these people say what they want: isn't this what comment trees exist for? On Reddit and HN, person A and person B want to have a conversation, person C can say whatever they want, and it doesn't affect the continuity of A and B's conversation as long as A and B respond directly to each other's posts, and not to C's posts. Every platform I know of supports private messages.
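As a toy illustration of that point (a minimal sketch, not Reddit's or HN's actual data model), a comment tree hangs each reply off its parent, so C's branch never interleaves with A and B's exchange:

  # Toy threaded-comment tree; purely illustrative.
  class Comment:
      def __init__(self, author, text):
          self.author, self.text, self.replies = author, text, []

      def reply(self, author, text):
          child = Comment(author, text)
          self.replies.append(child)
          return child

      def show(self, depth=0):
          print("  " * depth + self.author + ": " + self.text)
          for r in self.replies:
              r.show(depth + 1)

  root = Comment("A", "original point")
  b = root.reply("B", "direct response to A")
  b.reply("A", "continuing the thread with B")  # A and B's conversation
  root.reply("C", "unrelated rant")             # C's branch stays separate
  root.show()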

Underlying what you're saying is an assumption I'd like you to reconsider: why is it that you think that a public conversation in a forum where anyone can respond should only be about what you want it to be about?


It's not a bad metaphor. If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome. I've never observed that a no-moderation community is free of trolls, and ignoring trolls often results in them just taking over because they just start talking to each other and everyone else leaves.

These things are called poison because even a small amount can cause serious problems if left unchecked, and the effect creeps out across an area (in how many people are hit) like poison spreading.

I think what you're missing is that a very large amount of people simply do not enjoy a certain style of discourse to the point that they'll opt out of a community that doesn't ban that discourse. You may not like it, you may think those people are weak or something, but the rest of us want to talk to them, and we want them to feel welcome.


> It's not a bad metaphor. If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome. I've never observed that a no-moderation community is free of trolls, and ignoring trolls often results in them just taking over because they just start talking to each other and everyone else leaves.

Which form of trolls from my post are you talking about? I insist that we not pretend these are the same group of people.

I think that good moderation filters out the baiters and lets the people who believe what they're saying stay. And contrary to what you're saying, I don't think that such communities end up with just trolls. There were plenty of reasonable conversations on Reddit before Conde Nast took over and dropped the banhammer.

> I think what you're missing is that a very large amount of people simply do not enjoy a certain style of discourse to the point that they'll opt out of a community that doesn't ban that discourse. You may not like it, you may think those people are weak or something, but the rest of us want to talk to them, and we want them to feel welcome.

I'm not missing that--in fact, I don't enjoy talking to people with hateful beliefs either.

But the alternative you're proposing is an echo chamber where you don't have to hear those people, but they still believe what they believe, and those beliefs become our leaders and laws. If we ignore bigots on the internet we get bigots in office.


I think you're very confused. The idea that rational people have to engage thoughtfully with irrational bigots is pure nonsense. It is not the duty or obligation of anybody to engage with those who hate them.

The reason we get bigots in office, by the way, is not because trolls are banned. It's because powerful interests want bigots in office. Bigotry sells. It's very easy to screw people over if you can distract them by having them hate on some out-group. It is naive to think that talking with bigots will change this.

And here is the point: social change doesn't proceed through rational discussion. Never has, never will. Real change requires organization and solidarity and protesting and marching and uncompromising demands.

If you want to waste time engaging with trolls have at it. You will find that these people have nothing but contempt for discussion and no interest in being swayed by logic. For the rest of us we have far better things to do and banning trolls and bigots is the obvious choice.


> I think you're very confused. The idea that rational people have to engage thoughtfully with irrational bigots is pure nonsense. It is not the duty or obligation of anybody to engage with those who hate them.

"Have to" and "obligation" in a general sense are things I try to avoid saying. They don't exist in my belief system, and I apologize if I mistakenly said otherwise.

What I'm saying is that if we want bigots to change, we can't just expect it to happen.

> The reason we get bigots in office, by the way, is not because trolls are banned. It's because powerful interests want bigots in office. Bigotry sells. It's very easy to screw people over if you can distract them by having them hate on some out-group. It is naive to think that talking with bigots will change this.

I think you've confused cause and effect here. Some powerful interests certainly see bigotry as an end goal, but I think most powerful interests who support bigotry see it as a means to an end. As you said, bigotry is a distraction to achieve other goals. Bigots are easily manipulated if you don't care about bigotry: you just pretend to be a bigot and that gets you power, and then you can do what you actually want to do. If there were not bigots to be manipulated, powerful interests wouldn't push bigots into power.

> And here is the point: social change doesn't proceed through rational discussion. Never has, never will. Real change requires organization and solidarity and protesting and marching and uncompromising demands.

Organization and solidarity and protesting and marching aren't incompatible with rational discussion, and in fact none of these things work if they aren't a means of putting forward a rational discussion.

Modern protest movements need to read Martin Luther King's writings and understand what he really did. Every single protest he led was carefully designed to make a point in the rational discussion of the time. The bigoted viewpoints of the time (that people of color were violent, dangerous, less intelligent, etc.) were struck down one by one on public television by MLK's protests. Bigotry is based on lies, and MLK made it impossible for people not to see the truth. When bigots feared people of color would be violent, he showed them people of color peacefully being beaten. When bigots feared takeovers by blacks, he showed that people of color only wanted normal things like sitting where they wanted on the bus and drinking from the same water fountains. He didn't simply try to talk over the people he disagreed with; he listened to their concerns and showed those concerns to be invalid.

Harvey Milk, as far as I know, didn't write about his tactics, but they are clear in what he did and said. When bigots saw homosexuality as a foreign, unusual, threatening thing, he encouraged people to come out so that bigots could see that gays were normal people all around them. When bigots saw homosexual culture as an invasion of their neighborhood, he showed it also brought economic benefits ("You don't mind us shopping at your liquor store." "We both pay taxes for your child's school").

Can you explain to me how you think protests work to change policy? If all they are is simply trying to yell your opinion louder than your opponent, why should people in power care? If protests don't persuade anyone, what's to stop everyone voting for the same people and getting the same bigots in power? If our only tool is escalation, they'll just escalate back, and they can escalate further because they have guns. :)

> If you want to waste time engaging with trolls have at it. You will find that these people have nothing but contempt for discussion and no interest in being swayed by logic. For the rest of us we have far better things to do and banning trolls and bigots is the obvious choice.

If by trolls you mean people who are saying inflammatory stuff to enrage people for their own entertainment, sure, engaging with them only entertains them.

But if you're talking about people who are just trying to live their lives and think that bigotry is the way to do that, I very much doubt you have tried talking to these people, because this has not been my experience at all. If you approach talking with someone about their bigotry as if they were a human, with compassion, and address the actual fears and hang-ups that cause them to be bigots in the first place, people do change. It doesn't always happen quickly or at all, but sometimes it does. And more importantly, I've never seen it happen any other way.


> Which form of trolls from my post are you talking about? I insist that we not pretend these are the same group of people.

I haven't found it particularly worthwhile to distinguish people who say terrible things to troll from those who say them because they believe them. They're very often the same group, because reasonable, empathetic people are going to neither say nor believe those terrible things.

If you cannot carry a conversation with consideration for the other people in it, I do not want you in my community.

> There were plenty of reasonable conversations on Reddit before Conde Nast took over and dropped the banhammer.

I perceive Reddit as a very good example of the kind of community I _don't_ want, because any discussion that is deep, complex, or otherwise not aligned with the popular view is impossible there, so I'm afraid we're at an impasse.

> But the alternative you're proposing is an echo chamber

Luckily, I haven't proposed an alternative, so it's rather interesting what imagined alternative you're making that statement about...


> I haven't found it particularly worthwhile to distinguish people who say terrible things to troll from those who say them because they believe them. They're very often the same group, because reasonable, empathetic people are going to neither say nor believe those terrible things.

> If you cannot carry a conversation with consideration for the other people in it, I do not want you in my community.

Okay, you can want whatever you want, and I understand why you want that. I also have the gut reaction, when someone says something bigoted, to avoid the person so I don't have to see it, or to respond with vitriol and ostracization, because that's what feels good in the moment. But if people continue to insist on putting their heads in the sand and taking actions that feel good rather than actions that actually address the problem, these problems are only going to get worse.

> I perceive Reddit as a very good example of the kind of community I _don't_ want, because any discussion that is deep, complex, or otherwise not aligned with the popular view is impossible there, so I'm afraid we're at an impasse.

I have had fairly in-depth conversations and said plenty of unpopular stuff on Reddit all the time, so I'm not sure what you're basing this on.

> Luckily, I haven't proposed an alternative, so it's rather interesting what imagined alternative you're making that statement about...

You said, "If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome."


> But if people continue to insist on putting their heads in the sand and taking actions that feel good...

I'm not sure how you go from "I don't want trolls in my community" to "it just feels good, you're putting head in the sand". I'm not putting anything anywhere, I know exactly what I am doing. I don't want trolls in my community.

> these problems are only going to get worse

Not in my community they won't.

> I have had fairly in-depth conversations and said plenty of unpopular stuff on Reddit all the time, so I'm not sure what you're basing this on.

This is a subjective thing obviously, but it's not like it's some new sentiment I made up; plenty of people find Reddit a terrible place to have interesting conversations. In particular, a thing you'll see mentioned often is that shorter, less complex posts are often more liked than longer, more complex posts requiring a lot of effort to write.

> You said, "If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome."

Which is not a proposal. It's a statement on consequences. A proposal looks like this: "To have a well functioning community, you need to have this, this, and this, and not that". I've said nothing of the sort. Communities are complicated and require design, and there's a lot of variety within communities besides just "free for all" and "echo chamber".

You have a conversational style which seems to like to presume that the person you're speaking to is doing something they never claimed they're doing (keeping their head in the sand, or suggesting an echo chamber), which might be why you find Reddit tolerable, because this is very much the kind of interaction I find really annoying and could do without. It's always easy to feel right about everything if you just put words in the other person's mouth.


> If your community is ignoring trolls instead of banning them, it's effectively sending the message that the trolls are very welcome.

I have never seen it play out that way in practice.

> I've never observed that a no-moderation community is free of trolls

I've never observed a community with moderation and no trolls. You will not ever eliminate them completely, so a better approach seems to be to ignore them and just ban outright spam.

And I did observe an almost unmoderated community that had trolls; nobody cared about them, and everything was fine.

> very large amount of people simply do not enjoy ...

I think you are doing a lot of projection here. Maybe people agree with you, but considering how broken your arguments are, I would not take your sweeping generalizations seriously.


The issue is that many in group 1, baiters, intentionally and effectively mimic those in group 2, earnest people. So sadly, although they are different, there is no way to reliably distinguish them from their posting behavior.


No, the parent was simply saying that “trolling” (or, having a bad opinion) doesn’t imply a character flaw, and that we shouldn’t use language that makes that implication, lest we start to think of people who have bad opinions as irredeemable. People can do wrong without being fundamentally bad, broken people.

If we treated, say, driving a car the way people treat Internet discourse, you would be dragged out of your car and stoned to death the first time you cut someone off.

Yes, sure, ignore posts that seem like trolling. Filter or block them, even. But perhaps you could give the people behind those posts a second chance, before writing them off for life for one post.


You know, nobody has actually been lynched for either trolling or reckless driving. Maybe some were killed, if they crossed the wrong kind of victim, but that's the risk you assume when being a jerk.

On the other hand, I would fully support a law that gave a temporary suspension of people's driving licenses if there were a reliable way to tell that they have a habit of cutting other drivers off, and made it a permanent ban on operating any kind of vehicle for repeat offenders. Again, it would be greatly inconvenient for them, but there must be some point where the rights of the public supersede the rights of individual assholes.


Is it not easier to ignore trolls and let them realize they will not get attention that way, even without being censored? And if they are not ignored, then perhaps they are more than the simple trolls we imagine them to be?


Implying dishonesty is toxic!

Oh haha well done...


Spoken like someone who hasn't spent much time on reddit recently. There has been a massive surge in white nationalist, alt-right, and other far-right hate ideologies online (and particularly since the creation of the_donald, a subreddit which has shockingly avoided the ban hammer for an inexplicably long time given its obvious explicit purpose: to be an echo chamber to spread hate, scream slurs, threaten people with death and genocide, and dox enemies).

In literally any part of reddit, if there is any post on anything that could be construed as racial or about any gender, even an innocuous one, hordes of extremist right-wing trolls descend upon it and spew horrifying screeds of hate. They abuse people into silence. They are toxic. These things leak. And toxicity is real. Hate begets hate; saying nasty, horrible things to people and advocating for genocide are not innocuous "beliefs that other people might have", they are unacceptable behavior in civil society.

If you do this in real life you are ostracized, beaten, you lose your job, you are abandoned by your family and friends. And this is good. Social signals and actions to prevent "toxic" behavior have existed since forever. But the Internet is the property of a few companies who are loath to enforce those same social rules. Sometimes, they get pushed far enough that they feel they have to.


You're mixing beliefs with tactics/behavior. The behavior you described is "toxic" no matter what your beliefs are. I agree that the lack of repercussions and social feedback online lead to an increase in people acting like this and it is a problem for pretty much all public forums. However, it is neither constructive, nor is it truthful, in my opinion, to attach this behavior to a single group, side, or set of beliefs. All you'll end up doing is driving moderates of said group further to the extremes. You can call out ideas you think are bad and you can call out behavior you think is bad, but "other-ing" an entire group based on the worst actions at the fringes of their membership just isn't going to change any minds. It only widens the divide.

Edit: The exception, of course, is if the behavior that is at issue is actually encouraged by a foundational belief of the group.


>Edit: The exception, of course, is if the behavior that is at issue is actually encouraged by a foundational belief of the group.

I'm glad you added this; I agree with you in general. I'm not interested in "other-ing" right-wing people, Republicans, moderates, or conservatives, but I'm very interested in "other-ing", e.g., neo-nazis or Klan members. I am not worried about neo-nazis becoming more extreme (is that even possible?) and I also am not willing to let their sensibilities or concern for their feelings dictate any part of my or society's behavior.


Exactly. Giving a massive benefit of the doubt that their complaints can be non-partisan, some alternative terms that might better capture their concerns are:

  - Polarizing
  - Extremist/Fundamentalist/Zealotry
  - Trolling
However, while these sorts of things are legitimately "toxic" to a welcoming discussion environment (i.e., a large social site's revenue stream), that's being conflated with being "toxic" to society in general, which is itself an extremist perspective. :-P


I would say that all of those things are toxic to society in general. Those behaviors just tend to be more muted out in the real world where acting shitty is more likely to have immediate consequences.

I can't think of any non-contrived situations where those behaviors aren't equally damaging to social bonds, whether in an online community, a government, a family, or an organization. They all reduce enlightenment, are always irrational, and always increase human misery, even if only a teensy bit in the most benign cases.


Yes, you agree that they're "toxic" to communication, which I stated. But I wouldn't say they, for example, reduce "enlightenment"; they're a product of not being "enlightened" (for whatever measure you use).

All of these are products of the failings/weaknesses of humanity, not causes of them. Their presence is to be expected and crops up in everybody, not just in some "toxic" subset of people that can be excluded.


This is true, but “toxicity” seems to arise much more frequently in some people than others.

In Robert Sutton’s book, “The No Asshole Rule”, he describes what it takes to be a “certified asshole”:

> A person needs to display a persistent pattern, to have a history of episodes that end with one “target” after another feeling belittled, put down, humiliated, disrespected, oppressed, de-energized, and generally worse about themselves.

Put another way, in the series “Justified”, Raylan Givens opines (paraphrasing here):

> If you come across an asshole in the morning, well, you just met an asshole. If you’re coming across assholes all day long, maybe _you’re_ the asshole.

Assholes at work create a genuinely toxic work environment. People get sick, quit, and even commit suicide.

It can be argued, with some merit, that this differs from the Internet in that the assholes are usually in a position of power to abuse their subordinates, while on the Internet - at least in chat rooms and the like - people can withdraw from hateful environments by just closing the tab on the browser.

That’s not my point, though; I’m asserting that people who show a “persistent pattern” of promoting hatred of particular groups, inciting violence, and convincing people of harmful information through lies, half-truths, and myths, deserve to be labeled as “toxic”, and can be far more dangerous to society than a common or garden corporate asshole, because their messages can - and do - influence millions of people towards antisocial or, at very least, irrational thoughts and activities.


I think what this boils down to, and is extremely apparent in those quotes, is communication skills & style.

People with very controversial beliefs (and in a free & diverse society, everything is controversial in some relative axis, hence we need to extend measures of freedom to each other) can still act civilly, or they can flail and be problematic on forums. That's not a feature of their beliefs, but a feature of their behavior (stubbornness, arrogance, etc). People who have fully "correct thinking" in some scope can also be disruptive, poorly behaved members of discussion-based communities.


There are in fact system dynamics in which specific behaviours are unhealthy. Toxic, if you will.

Consider these, generally, hygiene factors.

There are substances which are toxic in specific concentrations or circumstances, which are otherwise healthy: oxygen, CO2, water, vitamin D, salt, nutritional iron. Certain forms of discourse.

My view increasingly isn't that there are things, but interactions or behaviours. A thing isn't, but is what it does or how it behaves.

(This ... tends to simplify numerous ontological questions.)


> My view increasingly isn't that there are things, but interactions or behaviours. A thing isn't, but is what it does or how it behaves.

The "belief in things" is itself a cognitive simplification we make, probably for the sake of efficiency. To use a programming analogy, toxicity as a concept is a function of at least three arguments - toxic(what, to-what, context) - but we attach it as a label to the first argument and store it there.

Compare e.g. with beauty, itself a function of at least two arguments - beautiful(what, to-whom). But we usually assume to-whom = "human like me", and stick the whole thing as a label on a thing, because 99% of the time, that's the correct thing to do (incidentally, the quote "beauty is in the eye of the beholder" is literally a reminder that the concept of beauty is a function of multiple values).

Confusing the "arity" of concepts seems to be the cause of quite a lot of misunderstandings between people.
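To make that concrete, here's a toy Python sketch - every name in it is invented for illustration, not a real model of toxicity. It just shows the "arity confusion": a three-place concept gets two of its arguments quietly frozen and then masquerades as a one-place property.

  from functools import partial

  class Context:
      def __init__(self, known_harms):
          self.known_harms = known_harms

  # The honest, three-place concept: toxic(what, to-what, context).
  def toxic(what, to_what, context):
      return (what, to_what) in context.known_harms

  HUMANS = "humans"
  EVERYDAY = Context(known_harms={("arsenic", HUMANS)})

  # Everyday speech freezes two of the arguments and wears the result
  # as a one-place label on the thing itself:
  toxic_as_label = partial(toxic, to_what=HUMANS, context=EVERYDAY)

  print(toxic_as_label("arsenic"))  # True: reads like a property...
  print(toxic_as_label("oxygen"))   # False: ...but the hidden arguments
                                    # are still doing the work.

The partial() is doing exactly the "label-sticking" described above: once the defaults are baked in, it's easy to forget they were ever arguments at all.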


What's your meaning of arity here?

"Things" with some bounded shape or form (or other properties) may be related to perceptual apparatus.

We see, or hear or smell or taste or feel, etc., the boundaries, emissions, or other perceptible manifestations of objects or phenomena. If you will, those are their interfaces.

As in other domains, an interface may reveal, or conceal, some more complex back-end, inner working, or larger system.


Arity, as in number of arguments to function.

As in, beauty(who to-whom &optional context) confused as beauty(who).


Got it.

I've been thinking on Aristotelian categories over the past couple of years. Thought occurs that all of them are relations, though with varying degrees of dependence on the observer.

Related, a ~1835 essay on value by W.F. Lloyd notes that all value is relative. That's not a universally held view, but is, I believe, correct.


I kind of agree with you, but I think there's two threads here that need to be teased apart.

I agree that people get considered toxic because they're expressing views that are unpopular within the group in question. All groups have their sacred cows and enemies (where the belief in the truth of these views is often spread socially rather than developed intellectually), and people tend to not look kindly upon someone going against the standard line.

The second thread is that toxicity is not just a matter of the opinions being expressed but also the way they're being expressed. E.g., when people are going out of their way to be rude to others, or when they're not arguing in good faith.


I suspect the two are being conflated, and not entirely accidentally. In many social circles, going against someone's sacred cow is viewed as essentially no different from going well out of your way to be rude or argue in bad faith. This offers a signal advantage - it means it becomes acceptable to silence people who you disagree with on the basis that they're behaving badly.


On that analogy though, the poison is in the dose.

Small amounts of copper are required in your diet to keep your body functioning. However, large doses of copper are toxic.

Same goes for most drugs.


I believe the metaphor of "poisonous chemical" has run away with you. Is it about the listener ingesting knowledge that is harmful ("grokking"), or about the speaker making a mistake, and thus everything that speaker has done or will do being a mistake?


I submit to you, a subreddit I've found in the past week that is absolutely horrifying, and I think stands quite counter to the "variety of opinions" model you put forth.

This isn't anything visually graphic, but the opinions expressed in this subreddit make me believe that these people are preparing for a violent insurrection, full stop.

https://www.reddit.com/r/CBTS_Stream/

CBTS stands for the Calm Before The Storm. These people follow an anonymous online poster named "Q" who posts vague, short posts on some other site (I'm not sure where), and then lots of other subscribers repost these and form general conspiracy theories that all revolve around the deep state, the NWO, and other nefarious groups colluding to remove Donald Trump from office. It's not your standard fare related to the ongoing investigations in congress or the special counsel; these people are the dangerous combination of paranoid, gullible, and angry. As an outsider just perusing, it's obvious that this place is crawling with charlatans and con artists who understand that they are addressing a crowd of people who are prone to believe an idea simply because of its tantalizing ramifications if it turned out to be true. Anyone can spout the most harebrained idea, and three people will show up to give vague, outrageous stories that confirm it.

I don't have an answer, but surely there must be a level of fomenting anger and general mob action that deserves some sort of modulation/regulation.


> the opinions expressed in this subreddit make me believe that these people are preparing for a violent insurrection, full stop.

What makes you think so? I've looked at it (admittedly, I didn't spend too much time) and it looks pretty standard fare for a subreddit - or any other forum like it. Its slogan is "BE LOUD. BE HEARD.". People who are preparing violent insurrection don't want to be loud and heard. They want to be silent and invisible until they have enough people and materiel to overthrow the government. People who want to participate in a democratic debate want to be heard. There's no point in being heard by the other guy if the next thing you're planning to do is shoot him (well, maybe if you demand surrender, but I see no such demands there, and it would be weird to do that on Reddit). The only point I can see in being loud and heard is if you're trying to convince somebody, or at least gather support - e.g. for winning an election, or for pressuring an elected representative into doing something by showing them how many people demand it. All of that is part of the normal democratic process.

Even if the people there hold some unacceptable views (I have no idea if they do, but let's assume for a minute that they do), that doesn't mean they are planning violence. What is the evidence that they are?


I mean preparing for a violent insurrection that they believe is about to be started by someone else. I don't think they are planning to start it. So many of the posts allude to events that are imminent. My fear is not that the logical progression of this group is a preplanned and executed terror attack of some kind, but that they are being radicalized, and are not far off from a mob of Manchurian candidates. Perhaps that's a very loaded and hyperbolic phrase, but consider this: imagine a hypothetical situation where Donald Trump resigns, or is impeached, or for whatever reason leaves office on a day other than Jan 20, 2021 or Jan 20, 2025. The mass demonstrations that would inevitably ensue would be powder kegs. I don't think it's hyperbole to say that those protests could quickly devolve into mob violence on a large scale. This subreddit is essentially prepping people for that day and, like a doomsday cult, insinuating that the day is coming very soon.


It may or may not be true that they are expecting a doomsday of sorts, but there are a lot of conspiracy theorists who have predicted imminent doom literally for decades, and when it never comes they aren't bothered by it even a little. If that day never comes - and most likely that's exactly what would happen, if history teaches us anything - there would be no harm done other than a bunch of folks wasting a lot of time on the internet talking about stuff. Mob violence has actually been a rather rare occurrence in political demonstrations, and the violence that has happened lately was mostly driven by antifa. That's pretty much the only movement right now that openly uses mob violence and achieves some political success by it - cancelling speeches, shutting down events, etc. Are there any other examples?

I see a lot of explanations - especially on the left - of how expressing certain views is akin to violence. If we had tons of actual violence happening - or clearly imminent - we wouldn't need any speeches about how words are similar to violence. It would be clear to us that there's actual violence, and there would be a lot of pointing to it instead. So I take it as a sign that there's actually not much violence to be pointed at, if we are pointing at words instead.


I'll wholeheartedly agree that there is no smoking gun for any of my theories. I don't equate this speech with violence. My point is just that situations that result in mob violence can deteriorate at a rate far faster than rational voices can prevail. People are programmable. The level of fervor I see on some of these subreddits tells me that someone is programming these people. People are using the tools of psychology to achieve goals that would not be achievable otherwise. In the two years since the programming of this particular group of people began, all of the protests I've seen from Trump's base have not been airings of grievances, but more of "let's go gloat in public and see if we can trigger the left." Trump's base is triggered by schadenfreude, not anger. My point is that there is a singular event, Trump leaving office prematurely, that large groups of the right will inevitably interpret as a coup. If that very situation arises, then the calm, see-it-coming-a-mile-away, clear signs of trouble you envision suddenly become hundreds of thousands of people in crisis mode.

I realize that there is a spectrum of using psychology to influence people, and I can't tell you where the line is of too far, but can we agree that the line exists?


> I don't equate this speech with violence.

Oh, I don't say you do. I am saying that so many people's need to do it suggests there's a distinct lack of violence to point at; otherwise they'd be pointing at it instead of pointing at words. And since these people are highly motivated to find anything to point at, their failure to find it suggests maybe there's indeed not much political mob violence to find.

> The level of fervor I see on some of these subreddits tells me that someone is programming these people.

For some definition of "programming", maybe. But by that definition, everybody who debates on the internet "programs" everybody else participating in the debate. It's just a nefarious-sounding way of describing mundane things, like writing "contains chemical compounds!" on food packaging.

> People are using the tools of psychology to achieve goals that would not be achievable otherwise

Not sure what you mean by "otherwise". People communicate. Some of them use knowledge of human psychology to make their message more persuasive. It's not something that appeared today or yesterday or this century or this millennium. Is it harder to convince somebody of something if you ignore human psychology? Of course. But there's nothing nefarious about it - it's like saying "people are using tools of physics and chemistry to achieve goals that would not be achievable otherwise". Sure they do, all power to them! That's why we spend all the big bucks financing the sciences!

> My point is that there is a singular event, Trump leaving office prematurely, that large groups of the right will inevitably interpret as a coup.

That would largely depend on the manner of said leaving, I'd assume. For example, if he becomes gravely ill or suddenly dies, that sounds unlikely. If Democrats win majorities in the Senate and House in the next election and decide immediately to impeach Trump "because he's bad", without proof of any real crime committed - that sounds much more likely. But that would be a consequence of highly inappropriate behavior resulting in a loss of trust in the democratic system by citizens. The cure for that is not to behave like that. If there's no such behavior, then history shows there would be no significant violence. I heard rumors that Bush would cancel elections and institute martial law, then that Obama would cancel elections, and no doubt I'll hear about Trump cancelling elections, and then whoever is elected after Trump will cancel elections too. There's always talk like this, because it's easy.

> I realize that there is a spectrum of using psychology to influence people, and I can't tell you where the line is of too far, but can we agree that the line exists?

Not really. There's no "using psychology", just plain old persuasion, and no persuasion is "too far".

Well, of course, if you use violent methods like torture, you can also achieve psychological effects, but if we're talking about persuasive speech alone, there's nothing "too far" in that. There's no words that can make robots out of people, and in fact convincing somebody to change their mind on a political question by just throwing words at them is really hard. Possible, but hard. People may be "programmable", but not very easily. Usually if they become convinced of something, there are a lot of reasons for it and a lot of background to it, not just some nefarious article on some forum.


"There's no words that can make robots out of people"

How do you explain cults? How do you explain the effects of advertising? How do you explain the uniform levels of discipline achieved by basic training? How do you explain phone scammers? How do you explain the success of the public relations industry? How do you explain Bernie Madoff? How do you explain cigarette smokers? How do you explain the effects of what we refer to as echo chambers? Every one of these consists of people being programmed or brainwashed in one way or another.

You hear the word brainwashing and immediately think of someone that's hypnotized, or a zombie - the typical Hollywood trope. But it's a far more common thing.

If you're interested in reading about this, there is a book by a psychologist named Robert Cialdini called Influence: The Psychology of Persuasion.

https://www.amazon.com/Influence-Psychology-Persuasion-Rober...

One interesting persuasion trick is to start with extreme opening bids in negotiation, and then back off to what you really want. This is how the actors of Watergate were able to convince others to go along with the plan to break into the Watergate. The original plan was far more involved, with a $1,000,000 budget, and included kidnappings. G. Gordon Liddy used this as an extreme opening bid, and eventually convinced everyone that what eventually took place was a reasonable compromise. After all, it's not like they kidnapped anyone, and they only needed $250,000.


Most of your litany of questions can be answered by noting that nobody ever became a cultist only by reading, on a screen, words about what the cult believes.

Take Scientology for instance. I have access to the whole of their printed words online, yet I am not a Scientologist. You wouldn't be either. You can sub out "Scientology" for any other cult or any other odious group (including race nationalists, terrorists, etc.) and the statement remains true.

The reason being that nothing is being engaged here other than reading, writing, and thinking. No money is changing hands, no leader is demanding my obedience at literal or metaphorical gunpoint. In the case of a web forum the absolute worst thing that could happen to me is that I'll get rude comments or be unable to participate further.

The only difference between discourse and propaganda is the aim of the people doing the talking. This is a subjective value judgment on what you think of the speech, and when it comes to analyzing it, this amounts to noise. Are you trying to propagandize at me? :)


The flaw in your argument is the assumption that obedience must be demanded at gunpoint. In actuality, obedience can be gained through a combination of continuously putting people under stress and framing yourself as the only viable solution to the stress. It's how you train dogs, and how you train Marines.

Narcissists and con artists don't need to threaten violence to achieve their ends. The threat of violence is simply one of the most effective methods of placing someone under emotional stress. It is once these people are under emotional stress that their defenses are down and they are vulnerable to brainwashing.


@Karunamon: I'm not sure why, but I have no reply link under your latest post, so I'm posting as close to it as I can.

I don't think it's controversial at all to say that both Facebook and Reddit are brainwashing people. Not to say that the companies themselves are doing the brainwashing, just that they are effective platforms for anyone to do it on. Do you really not know anyone who literally lives on one of the two sites? They are both highly addictive echo-chamber services that people willingly return to, hoping that the next page load will contain the unicorn story that confirms everything they want to be true. It's the dopamine cycle that makes social media interesting in the first place. These are two of the four most visited sites in America. I can't control for people's previous experiences, or say that Reddit or Facebook are the exclusive causal factors, but I think the mere fact that they are excellent at getting users to self-select for interesting content, at the expense of content that may challenge their opinions, is enough to count these services as brainwashing platforms. People are figuratively screaming into the void "I want to be entertained!!!" Trump answered the call. The repeated dopamine cycle of discovering outrageous content and then eventually petering out to boredom is the stress. Once you are sufficiently stressed out, you are far more susceptible to believing that the media is lying, the FBI is biased, and the intelligence community and government at large are filled with evil actors with their own agendas. I mean, how interesting would that all be, right?

Another point to consider is that I don't necessarily think any one organization has caused all of this to happen. Internet addiction has obviously existed nearly as long as the internet. My theory is that Facebook and Reddit effectively teed up millions of people for someone else to come along, and capture their minds.

As I've said throughout this thread, I'm not suggesting a minority report situation, or that circumstantial evidence should depose a president, or that censorship is the answer to any of this. What I'm more lamenting is that the charlatans are winning, and their methods are nearly impervious to defense in a modern democracy. As best I can tell, all I can do is try to convince people of what I think is at play, and hope that it resonates.


Re commenting: That happens after two or three replies - you have to click on the timestamp to get the permalink to the comment to be able to reply to it.

Re everything else: This is a reply that doesn't do justice to the effort you put into it, but I think with your definition of "brainwashing", we've made that term functionally useless, and I think bringing partisan politics into it apropos of nothing has made any further honest conversation on this matter impossible.


Fair, but how do you continuously put people under stress using only words on a screen? The comment you originally replied to asserted that "There's no words that can make robots out of people", and so far you've provided counterexamples that use many more things; none of which are restricted to internet comments.

Keeping the original topic in mind, we're talking about online comments. Not cults, not marines, not anything other than words on screens.


> Fair, but how do you continuously put people under stress using only words on a screen?

Once someone has become afraid of something, it doesn't just vanish after the event that caused it is over.


> How do you explain cults?

People have a strong drive to belong to an in-group. There are multiple experiments showing that assigning random markers to random people and making them participate in certain activities leads to "group cohesion" effects, with people assigning deep meaning to those markers despite them being completely random. A cult is just people taking this to the extreme - likely because they didn't find satisfaction for their in-grouping drive elsewhere.

> How do you explain the effects of advertising?

Which effects, specifically? It's mostly brand recognition, aka the availability heuristic - if you have brands A, B and C and you've heard before that A is good, you're more likely to choose A than the unknown B and C - plus just plain informing people that things like brand A exist. And a touch of in-grouping ("if you drink Coca-Cola, you are in a group of cool people"). And a tad of signaling ("if we have money to buy an expensive ad spot on TV, we must be a successful company that can afford to create a good product, won't disappear tomorrow, and values its reputation, so we won't cheat you").

> How do you explain the uniform levels of discipline achieved by basic training

They are not that uniform, but again, it's in-grouping - plus the fact that other guys will be literally shooting at you (though research shows a lot of this shooting is much less targeted than previously thought).

> How do you explain phone scammers? How do you explain the success of the public relations industry?

That's plain persuasion, with various ethical fences either present or absent.

> Every one of these consists of people being programmed or brainwashed in one way or another.

Again, if by "programmed" you mean "persuaded of something they were most likely inclined to believe from the beginning, due to selection and grouping effects", then sure. People can be persuaded to buy a shampoo, especially if they wanted to buy one already; people can be deceived, especially if they're already out looking for something the deceiver seemingly offers; and people can be blind to arguments, especially if those arguments challenge their prejudices.

The only different thing here is tobacco smoking - that's physical addiction, which is a different mechanism.

> If you're interested in reading about this, there is a book by a psychologist named Robert Cialdini called Influence: The Psychology of Persuasion.

Yeah, I know about Cialdini. The book has a lot of nice tricks, but it's not magic - much less magic than it's made out to be. And yes, reframing and anchoring is one of the tricks. If you watch Trump carefully, you can see all these tricks employed; he does it all the time. He didn't invent them, of course - expensive stores have been putting items with outrageous prices on prominent display for ages, to make regular item prices seem lower in comparison. It does confuse some heuristics for people. But anybody is capable of approaching the prices - or Trump - rationally and seeing the actual price. There's no "programming" to prevent it - if one makes minimal effort, one can always do it.


Most of your explanations amount to spreading the trees out far enough and saying, "see, there's no forest here". You agree that all of the underlying mechanisms perform their component tasks, but disagree that there are cumulative, persistent effects of prolonged exposure to combinations of them.

My point is that there are large swaths of conservatives that have been radicalized by the last 25 years of Fox News and Rush Limbaugh. And now they are being told that mainstream media is lying about everything.

It's so complete in many people that they don't hear themselves when they lay out their political motivations. The number of people whose primary goal in achieving any particular political end is to anger their political opponents is staggering. It's like a catch-all: if someone can't be convinced of the traditional conservative stance on an issue, they can fall back to "well, at least liberals will lose their minds".

This radicalization is primarily the cumulative effect of conservative media and its vilification of the left.

Cults can't exist without a confident authority figure dominating the flow of information to the adherents. These things don't just happen out of a vacuum, without a leader in on the scam. Just because reading Dianetics is not 100% effective in converting people to Scientology, that does not mean the book isn't an effective tool of persuasion that, combined with other effects like in-group psychology, can get people to join a cult that will bankrupt them without a second thought.

With regards to advertising, I mean the use of sex, patriotism, or other emotionally charged concepts to trigger positive associations with the subject being marketed. I don't claim that all people are susceptible to all instances; I mean that enough people are susceptible, and it has persisted over a long enough period of time, that it has produced a significant number of people with seriously warped views of how government works, how the 20th century played out, and what the powers that be are planning to execute imminently. I think there are literally people out there being primed to support whatever totalitarian aspirations Trump may have, and being convinced of the righteousness of his cause. Do you really think that hundreds of thousands of people are being facetious when they refer to Donald Trump as "God Emperor"?

With regards to basic training, the effectiveness can be attributed directly to the process I describe. Convince people that they are in danger for long enough; reinforce this with threats and screaming. After the people are sufficiently scared, give them a path to escape the danger. Apply the imaginary danger in proportion to how far people stray from your prescribed path. You posit that since the danger is controlled and not as real as the grunts are led to believe, the effect is somehow negated. It works in the military because you are isolated from any contrarian information by the military itself. There's no source of information to tell the trainees that it's a controlled, safe environment. It works in political echo chambers because the people have isolated themselves willingly. There's no _trusted_ source of information to temper the vitriol from the echo chamber.

Your model of people and how they take in and react to information ascribes a lot more agency and rational decision-making than what I posit. Your argument is that since everyone technically has the tools available to them to become educated enough not to fall prey to devious persuasion, devious persuasion is automatically defanged - because, of course, all people avail themselves of all education.

Tobacco smoking, specifically nicotine addiction, is not a different mechanism at all. In fact, it is a highly illuminating example of how insidious constant bombardment with persuasion can be. Nicotine has only the slightest physical withdrawal symptoms. The cravings smokers exhibit are a product of the brainwashing. People don't wake up in the middle of the night from cigarette cravings. They've convinced themselves that smoking a cigarette removes stress, instead of ensuring its perpetuation. When they crave the cigarette, they fantasize about how much they will enjoy it, but when they actually smoke it, addicts are surprised 20 times a day to find that they don't enjoy it at all. But this surprise doesn't free them from their mental prison. They are of the opinion that cigarettes modulate some completely unrelated stress in their life. Think of talking to a smoker and just telling them how dumb they are by repeating obvious facts that we all agree upon. How does that work out for you? They immediately put up defense mechanisms and dig in. But in their private moments, they know that everything you said was true. I posit that many Trump supporters are in the same predicament. Faced with the prospect that their entire world view is wrong - that they aren't the woke geniuses they've assured themselves they are, that they are just Bernie Madoff investors, tools of Vladimir Putin - the choice to double down on supporting Donald Trump is no choice at all.

Trump supporters are addicted to him like nicotine, and they will irrationally defend the nicotine even as they cough and wheeze, because, after all, what's more emasculating and cuckold-like than admitting you've been catfished by a charlatan to the very people who have been screaming this fact at you for over a year?


I guess this was longer ago, but the Malheur National Wildlife Refuge takeover by the Bundys was essentially a militia challenging the government to armed conflict. I'm well aware that the case against these people ended in a mistrial because of exculpatory evidence that was withheld. That doesn't change the fact that conservatives have threatened violence in recent political protest.


I suspected somebody would bring up the Bundys. However, given that the whole dispute is about grazing fees on some piece of federal land, I can't really take it as a political statement. Yes, the Bundys proposed some political theories as justification for their actions, but if there weren't $1M in grazing fees at stake, I don't think any of it would have happened at all. The fact that violence was threatened is bad - but it is hard to take this as a genuine example of politically motivated violence.


Well, luckily for all observers, the Bundys erased any doubt that they were making political statements with their other armed standoff with federal authorities. You refer to the elder Bundy's grazing-fee dispute, which played out at their Nevada ranch; that one actually involved a grievance the Bundys themselves held. I refer to his son Ammon Bundy, who took up the cause of another father and son from southeast Oregon who were jailed for starting fires on federal land while clearing brush. They were originally sentenced to some amount of time, which they served. A judge later determined, after they were already released, that their sentencing was somehow improper and that they had to serve more time. The father and son willingly complied and turned themselves in. Ammon Bundy and his fellow Yeehawdists took over a National Wildlife Refuge and essentially tried to provoke another standoff. They want federal lands to be wide open to do whatever they want with, to strip resources, and effectively to get something for nothing. They see their inability to do whatever they want, and the consequences of doing so, as tyranny of the government.


> I submit to you, a subreddit I've found in the past week that is absolutely horrifying, and I think stands quite counter to the "variety of opinions" model you put forth.

I'm not familiar with that subreddit, but I think I'm familiar with the mindset you are describing. Nothing in your description seems to set it apart from the "variety of opinions" model the GP proposed. Other than your feeling that these ideas are "beyond the pale"[1] what exactly differentiates this subreddit from being a group of people whose ideas you disagree with?

> These people follow an anonymous online poster named "Q" who posts vague, short posts on some other site

I thought the comments by Occams-shaving-cream in this thread were a good take on Q: https://www.reddit.com/r/conspiracy/comments/82qpk5/in_case_.... He suggests that Q is essentially a marketing team within Trump's campaign, casting out ideas and seeing what resonates with potential voters. He hypothesizes that it may not have started this way, but given the obvious utility of having such a mechanism, it likely has become one by now.

> I don't have an answer, but surely there must be a level of fomenting anger and general mob action that deserves some sort of modulation/regulation.

Maybe, but it seems possible that attempts to censor discussion may paradoxically fan the flames. Given a group of paranoid conspiracy theorists who believe that the "powers that be" want to silence them, "modulation" seems equally likely to make them believe even more fervently that there are powers who do not want certain knowledge to be known. And they're right! The difference would seem to be only that they believe this knowledge is truth, and you believe it is a dangerous lie. Public discussion seems like the best way we have of resolving this dispute. Trying to stamp out an idea like this seems more likely to produce violence than to prevent it.

[1] Did you know that this phrase originally meant "outside the reach of English law": https://englishhistoryauthors.blogspot.com/2013/03/the-origi...


I don't disagree that just about any regulation of these groups will be interpreted as more conspiracy.

I guess the crux of my argument is that these people are the victims of psychological warfare. I'm not throwing out the word brainwashing for emphasis; I truly believe a lot of these people are actually brainwashed. For context, I don't believe it's controversial at all to say that boot camp in the American armed forces - and most assuredly in other countries as well - is brainwashing, through and through. Put people in a prolonged state of stress. After sufficient time, tell these people you can end all of the stress if they just follow your directions. Run till you puke. Have trained soldiers screaming in your face. Be woken at all hours of the night to both run till you puke and have trained soldiers scream in your face.

Instead, many conservatives in America have been fed a constant diet of outrage/dopamine cycles about how evil Bill and Hillary Clinton, Barack Obama, and anyone associated with them are. They're given 30,000 emails to peruse that are eventually framed to "unveil" a pedophile ring tied to all of the current players in the democratic establishment.

Then most importantly, a billionaire with no political experience, and no political capital to lose whatsoever, comes along and trolls, and proves wrong, nearly the entire national media for over a year. Nearly every week, he makes some offensive remark that leads every veteran of any election anywhere to believe he has committed political suicide. Thus, a year and a half of reporting that Trump will soon quit the race and has no chance to win. But since, unlike all of those previous politicians, Trump has no "betters" to please, no one in American politics had anything with which to pressure Trump to do anything he didn't want to do. So when Trump won the election, the brainwashing was complete. Trump had led them out of the "prolonged stress" of Barack Obama's 8 years, and the year-and-a-half-long prospect of Hillary Clinton being President for 4 years after that. Everything he said about the lying media turned out to be "correct". Witnessing Trump win under the unique circumstances in which he won has had the psychological effect of making him a nearly godlike figure in the eyes of his base. He has cut through political correctness, sexual assault claims, and ethics concerns, taken his red meat to the supreme court, and won. In the eyes of someone already inclined to pull for the guy on their side, Donald Trump became nothing short of Luke Skywalker. If you think this is hyperbole, here's another example of how people can be brainwashed by witnessing uncanny success against all odds, especially when that success stands to improve the brainwashed's lives immediately:

There is a known email scam where the scammer emails a sufficiently large pool of marks the winning team for a single National Football League game every week, ahead of the game, so the marks can bet on it. Every week (out of 16), the scammer simply takes the group of people who "won" the previous week's game and splits them in half, telling half that Team A will win and half that Team B will win. Obviously, no one will listen to someone who picks losers, so assume that every week, half of the marks leave the scam. If the pool is sufficiently large (only 32,768 people), after 15 weeks there will still be marks who have been given the winning team ahead of time, for 15 consecutive weeks, by some anonymous stranger. It's not controversial to see why those marks may have been brainwashed by the process, and would be willing to pay large sums of money for that 16th pick.
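(The arithmetic checks out: 32,768 is 2^15, so fifteen weekly halvings leave exactly one mark who has seen nothing but correct picks. A throwaway Python check:)

  marks = 32768              # 2**15 people get a week-one "pick"
  for week in range(15):
      marks //= 2            # the half told the losing team drops out
  print(marks)               # prints 1: one mark saw 15 straight wins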

Obviously, Donald Trump didn't run this scam, but it goes to show that people can be casual observers of information and be presented with just the right information to be essentially brainwashed.


Wow. You seem to be experiencing your own conspiracy theory. Not every popular idea has some puppet master controlling it. It's completely understandable that supporters of a widely ridiculed underdog would gloat about his success. It was just another election, and just another president, not the end of the world. America hasn't collapsed.

Nonetheless, I don't want to censor your ideas. Let people make their own minds up. McCarthyism censored communist ideas because they were too dangerous and people might get brainwashed. Was that a good idea too? I thought the whole free-speech ideal of America was to keep political ideas out in the open, where they can stand and fall on their own merits, rather than silencing groups of violent supporters like you get in Venezuela or Egypt, which leads to revolution after revolution.

For an example of what might be a dangerous idea of the left, how about the popular one that blacks are poor because whites oppress(ed) them? That sounds like a recipe for endless failure. Don't work to improve yourself - just get angry at the bogeyman. They did it in Zimbabwe and it ruined them. They're doing it now in South Africa. And American blacks are listening to the left's mantra of oppression too. All it can do is make people angry and hateful - what if it leads to race riots?


If you don't believe there are massive psychological exercises successfully influencing, and arguably radicalizing, people on the internet - specifically on Reddit - you're being willfully ignorant. Or a Trump supporter, but I repeat myself.

I realize that the topic of brainwashing is loaded enough that any rational discussion of it can easily be derailed by someone simply making the strawman you just made: comparing my arguments to claiming the end of the world or the collapse of America.

Isn't it ironic that you can easily spot a culture of victimhood in others - the psychological effects, the futility of it - but, I imagine, you don't see the same thing in your average revanchist conservative? The level of hatred and frothing at the mouth over 8 years of Barack Obama, fed by people like Donald Trump, Sean Hannity, Alex Jones, and other charlatans, created a victim complex that seems to persist even now in many Trump supporters.


Wow... instead of debating the pertinent points raised, you resort to name-calling.


Let's not pretend you're not attempting to gaslight me. I _returned_ name-calling, as my interlocutor decided the easiest way to dismiss my claim was to frame it as Chicken Little proclaiming that the sky is falling. Then I debated his points.

Let's break it down point by point:

- lopmotr framed my comments as a conspiracy theory. I retorted that there is obvious evidence all over reddit that supports my claims.

- lopmotr brought up examples of victimhood as the causes for the decline of states, and drew a line to what's happening in America with blacks as equivalent. I retorted that conservatism has this same victim complex, and that it's ironic that people can see it in others much more easily than they can see it in themselves.


As someone who is sympathetic to your argument, I disagree with your tactics. 'lopmotr' suggested that you might be falling prey to the same psychology that you are seeing in others. While it's probably impossible to be completely polite when doing this, I don't think he was trying to be rude. As you might guess, self-identified "conspiracy theorists" don't necessarily consider "conspiracy theory" to be purely an insult.

You, on the other hand, compromised your otherwise reasonable argument by flippantly claiming that approximately half the US voting public is "willfully ignorant". This is gratuitous "name calling", and el_cid was right to call your attention to this. I didn't vote for Trump, but have intelligent friends and relatives who did. Regardless of whether their choice was wrong, insults like this are counter productive to changing anyone's mind. So stop it.

Personally, I think you are right about much of the behavior you see on the right, but seem to be missing (or at least not mentioning) the equivalent online psychological tricks that mislead the left. As your penance, here's a story from someone on the right detailing how he sees some of the matters you are referring to: https://imprimis.hillsdale.edu/the-politicization-of-the-fbi.... I thought it was an interesting read that I haven't seen in the mainstream press.


I'll admit that this theory is pretty out there, and I'm not going to continue to expound about it here. I'll also admit that I can easily get pretty worked up, and veer into the partisan, hair on fire rhetoric that is the very subject of my ire. I pledge to do better, and read your article.

I developed the roots of this idea over December '16, after several conversations with one of my lifelong friends whom I hadn't seen since a few months before the election. He's an attorney, and someone I know to be extremely bright. He voted Trump. The level of spite and schadenfreude in all of his arguments, and his repeating the phrase "I've never been so sure of anything in my life" in regards to his confidence in Trump to fulfill his campaign promises, was very jarring. All of these traits were completely foreign to my friend before Summer '16. He hadn't gotten less intelligent in any other avenue of his life.

I only came to this theory through the realization that susceptibility to public relations tactics, weaponized persuasion, whatever you want to call it, is not a matter of intelligence at all.


"Let's not pretend you're not attempting to gaslight me." "you can easily spot a culture of victimhood in others"

I think you should focus on continuing to build your case on reddit, and then you should definitely return here in a few years once it's ironclad!


The subreddit you refer to has since been banned:

"This subreddit was banned due to a violation of our content policy, specifically, the prohibition of content that encourages or incites violence and the posting of personal and confidential information."

https://www.reddit.com/r/cbts_stream


wow. I wonder what falls out of that.


Totalitarian ideologies such as Communism, Fascism, and Islamic Fundamentalism are inherently, irretrievably toxic. They place the entire future of the human race at risk. I don't favor censoring them but we need to ceaselessly oppose them and shun their proponents.


For, uh, purely scientific purposes I've noticed the posting volume on pr0n subreddits like /r/gonewild and uh, a few dozen similar subreddits, is perhaps 100 to 1000 times the sheer posting volume of a controversial subreddit like /r/the_donald. The sheer volume of relatively R rated pr0n is perhaps 1000 times the volume of everything else.

There seems to be tap dancing around the issue that reddit is a 1960s Playboy magazine fifty years in the future. There's just enough excellent articles to keep the advertisers amused, but the fact has to be faced that 99% of the traffic is young men looking at scantily dressed young women. A little submarine PR controversy about subreddits that statistically nobody reads keeps things looking legit, whereas all the traffic and money is over at /r/randomsexiness.

Just like the old saying about Playboy, I only read it for r/ama not the nekkid ladies.

I'm not complaining; I'm just pointing out that reddit is THE most successful pr0n site out there with the most brilliant strategy I've ever seen. Please don't confuse it, and its achievements within the pr0n industry, with legacy news media or anything like that.


I think it's disingenuous to try to classify Reddit as only a porn site, or to report on its communities in that context - by virtue of its size and structure, individual (or groups of) subreddits are nearly as disparate as separate websites, and should not be generalized about based on Reddit as a whole; while the porn communities may be Reddit's largest section, that does not preclude functionally distinct bodies such as writing groups, political activists, and racist gatherings from existing, each with its own identity.

While there is doubtless overlap that influences demographics and discussion, that influence does not preclude non-porn subreddits from relevance or discussion.


> I think it's disingenuous to try to classify Reddit as only a porn site

I completely agree; there are zillions of different experiences depending on the subreddit, and many of them can amaze you. For example, I have asked very specific questions in subreddits like sysadmin and networking on how AWS and Google Cloud handle layer 2 network protocols like Ethernet, and received correct answers that I never got posting in the dedicated AWS or Google Cloud groups outside Reddit.


I run a Reddit post scheduler (https://laterforreddit.com/), and I have noticed a substantial bias towards posting in various adult subreddits.

I suspect in my case that this has a lot to do with how people who are browsing porn use Reddit, versus how people use the rest of the site at large. Most subs moderate posts and discussion actively and have norms (and moderators) that act against posting frequently, cross-posting and self-promotion. On the other hand, most porn subreddits are quite happy to accept x-posts, self promotion, etc, because their goal/approach is to provide more porn faster.

This seems to be an emergent property of Reddit rather than a deliberate strategy, but they seem happy enough to keep the revenue coming (as am I).


> reddit is a 1960s Playboy magazine fifty years in the future [...] Just like the old saying about Playboy, I only read it for r/ama not the nekkid ladies.

That's a thought-provoking way of putting it, though I'd add that the /r/gonewild subreddit and its variants make it considerably more meta. A lot of people, particularly females, are posting explicit imagery of themselves - revealing a hidden culture of exhibitionism, as the vast majority of them do it for little profit beyond comments and upvotes. There are people on Reddit who do actually make money from posting explicit content, but the vast majority of users who do it are in it for a sense of self-worth from the "updoots" and the thrill of exposing themselves relatively anonymously.

Taking your observations about advertising into account, it's a complex ecosystem. The closest thing I've seen to Reddit is Usenet, but in an age where digital cameras are ubiquitous.


Do you have any data on it mostly being girls doing it for the thrill? It seems like every time I like a photo and look at the poster's profile, they've got something else going on where the free photo was just marketing. Tons of underwear selling.


> It seems like every time I like a photo and look at the poster's profile

No offence intended but could it be that well-proportioned, model-like subjects are more likely to have their profile viewed, so it's a case of sample bias?

No, I don't have any hard data, so my points are anecdotal; but I tend to be disinterested in subjects that are more model-like, as the point for me is to look at everyday people getting their kit off. And from what I can tell, those everyday people make up the lion's share of self-posted content.


> revealing a hidden culture of exhibitionism

Not only do many females seem to enjoy exhibitionism on subs like /r/gonewild, there's also a not-so-hidden culture of exhibitionism that takes place in real life that "strangely" seems to have so far escaped being raised in the ongoing gender wars discussions. Human beings are very complex beings.


> culture of exhibitionism that takes place in real life that "strangely" seems to have so far escaped being raised in the ongoing gender wars

I'm not sure what discussing a lack of clothing or dressing provocatively would achieve - it's never an adequate defence for poor behaviour or assault by a third party. Any nuanced debate about the topic is difficult and likely to be a no-win scenario.

One of the few reasonable points to be made is that revealing clothing can be inappropriate in professional settings; however, I think other females need to be the ones who encourage appropriate attire in the office. A male delivering the message would lead to anger and resistance, as it would be viewed as a form of control rather than something formed from consensus.

This might be an overly simplistic way of looking at it, but female exhibitionism is not dissimilar from a guy flashing how much power and resources they have. It's a signalling system, and it attracts both wanted and unwanted attention, but in a civil society assault and harassment are never acceptable no matter how provocative someone's behaviour might seem.


[flagged]


> the same action (wearing revealing clothing) is considered just fine and don't you dare even question it you victim-blamer when committed by one gender, and sexual assault when committed by another.

Well, no, wearing revealing clothes isn't considered sexual assault when done by either sex. Though I suppose if you mean that male public upper body nudity is treated as acceptable whereas female public upper body nudity is treated as criminal and a sex offense, but were just using slightly hyperbolic language about “sexual assault”, you'd have a point.

Though that seems to be in the opposite direction of the fantasy you are trying to sell.


> It might open people's eyes to some of the hypocrisy in the current public dialogue, just one of many issues being that the same action (wearing revealing clothing) is considered just fine and don't you dare even question it you victim-blamer when committed by one gender, and sexual assault when committed by another. Oh and by the way, "all we want is to be treated equally'.

I'm not seeing apples to apples in your argument - it'd be valid to say the popular female opinion (whatever that is) is hypocritical if their stance was that men couldn't wear revealing clothing, but you're making the point that they're hypocrites for wearing provocative clothing and then being upset if they're treated indecently or worse. You could argue their exhibitionism is inappropriate, but that is subjective, shaped by culture, and entirely different to saying their position is hypocritical.

If I'm rich and wave my money as I walk down the street, it's still a crime to rob me even if you feel as though I was asking for it.


> I'm not seeing apples to apples in your argument - it'd be valid to say the popular female opinion (whatever that is) is hypocritical if their stance was that men couldn't wear revealing clothing

Isn't that their stance? If a man went out in public with 1/3 of the flesh of his penis showing, everyone would be cool with it? The reality is, he'd be arrested for indecent exposure. In this case, the equivalent of what women do on a regular basis is quite literally illegal.

> but you're making the point that they're hypocrites for wearing provocative clothing and then being upset if they're treated indecently or worse

No I'm not.

> If I'm rich and wave my money as I walk down the street, it's still a crime to rob me even if you feel as though I was asking for it.

I'm not saying otherwise.

This conversation is actually not a terrible example of my overall point.


It's not very obvious, to me at least, what your overall point is.

> If a man went out in public with 1/3 of the flesh of his penis showing, everyone would be cool with it? The reality is, he'd be arrested for indecent exposure. In this case, the equivalent of what women do on a regular basis is quite literally illegal.

I don't understand what specific double standard you're alluding to. I don't think a woman with exposed genitals is going to escape an indecent exposure charge.

Upthread you liken men wearing revealing clothing to sexual assault, but I've never seen this as a talking point in the gender discourse.


Is that relevant? If only 1% of Reddit traffic is destroying civilization then civilization is still being destroyed.


> There's just enough excellent articles to keep the advertisers amused, but the fact has to be faced that 99% of the traffic is young men looking at scantily dressed young women.

Then again, these young men probably aren't paying attention to the ads (unless maybe they're porn ads!), so those subreddits aren't a target audience.


If I may ask, how do you jump from posting volume:

> 100 to 1000 times the sheer posting volume

to traffic:

> 99% of the traffic

?

Also, just to clarify - do you consider porn illegitimate or bad? Your tone suggests it, but I may have misread, so I just want to make sure. If so, why?


Porn is not a banned word.


That's legitimately hilarious. Thanks for sharing this tidbit.


What tidbit? His totally made up statistics?

Because I've noticed that the amount of posting on mainstream subreddits is maybe 5-7 gazillion times more than on the porn ones, so he's full of shit.


Thanks for mentioning statistics and then using "gazillion" in your post; that really makes it seem more valid.


So we've tried:

- Real identities (Facebook comments)

- Voting and self moderation (Reddit, HN, etc.)

- Strong moderation (Reddit, HN)

They all result in toxic comments, trolling, an echo chamber, or worse, a complete lack of participation. There's no real solution to this problem. However, if you create consequences or a cost to commenting, you'd eliminate toxic comments and trolling at the cost of less participation and an echo chamber (though you could argue that not all participation is equal and you're mainly removing poor participants).

There's no perfect way to do this because even if you made a subscription necessary, for instance, you may just create an echo chamber. As part of the solution you'd need to prevent the creation of new accounts to circumvent any punishment received.

I'd say the most straightforward solution is that you have a forum and you get an account. Physical mail is sent to your house in order to get a single account. Then, regular moderation practices would be taken seriously as there's no way to create another. The community would be left with those who care enough to not be banned. The problem is that the moderators themselves may be corrupt or wrong.

Thoughts?
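To make the mail-verified idea concrete, here's a minimal sketch of the storage side, assuming a relational backend; the schema, the address normalization, and the mailing step are all hypothetical stand-ins:

  import sqlite3

  conn = sqlite3.connect("forum.db")
  conn.execute("""
      CREATE TABLE IF NOT EXISTS accounts (
          id            INTEGER PRIMARY KEY,
          username      TEXT UNIQUE NOT NULL,
          address_norm  TEXT UNIQUE NOT NULL,   -- one account per postal address
          verify_code   TEXT,                   -- code physically mailed to the address
          verified      INTEGER DEFAULT 0,
          banned        INTEGER DEFAULT 0
      )""")

  def register(username, address, code):
      """Reserve an account; fails if the address already holds one."""
      norm = " ".join(address.upper().split())  # naive normalization stand-in
      try:
          conn.execute(
              "INSERT INTO accounts (username, address_norm, verify_code)"
              " VALUES (?, ?, ?)", (username, norm, code))
          conn.commit()
          return True   # now mail `code` to the address to complete verification
      except sqlite3.IntegrityError:
          return False  # address (or username) is already taken

Keying bans to the address rather than the username means a banned user can't just mint a fresh account; they'd have to move house. (Shared addresses like dorms and apartment buildings are an obvious wrinkle this sketch ignores.)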


We really just need better public education, as well as clearer separation between information and entertainment.

Facebook/Reddit/Twitter/etc promote a regression towards the mean understanding... viral content reflects and amplifies the "average user's" sentiment and values. Acceptable for entertainment, but inherently prone to misinformation, propaganda, and demagoguery. Education requires valuing subject matter experts. Opinions which may not be widely held, or even popular, but supported by people who are vetted as being knowledgeable on the subject.

Traditional media could be regulated because they were largely centralized, but centralization also created an establishment that policed counter-culture ideas. In contrast, the internet is anarchic. Online anonymity impairs delegation of trust... any idea can be published, so every individual must rationally evaluate what they consume. Attempting to regulate away undesirable behavior on the anarchic internet is just cat-herding. At best, you create a walled garden for a select few.

As I see it, the paths forward are either:

* public education, emphasizing civics/rationality, to support distributed self-regulation

* centralizing with state regulations

I want the former but the latter seems most likely, considering how the underlying networks are consolidating, and increasing awareness of how amplified public ignorance creates political/economic instability that hurts those with power.


We really just need better public education

That's great, but it's a slow cultural change. Well-educated countries can still fall into extremism, which is driven by emotional and atavistic factors as well as economic and political ones, and can't simply be dispelled with doses of Rationality (tm). Arguably, the failure of rational utilitarianism to engage with this aspect of humanity, and its tendency to dismiss everything that can't be quantified as irrationality, exacerbates the growth of toxicity.

On a more practical level, the US is a country where part of the population rejects the theory of evolution on religious grounds, historical narratives are intensely contested, and political life is objectively and increasingly polarized. Educational change happens over generational timescales, and if it were as simple as making education available, all our social ills would have been dispelled long ago.

Of course education and critical thinking skills are essential for a healthy social body, but when I see people saying 'we just need better education' I feel like I'm on a bus that's headed towards a cliff edge and well-meaning people are suggesting that the solution to this is better driving lessons.


My intent was to describe the system (social media is a hyper-efficient, anarchic, consensus-based information exchange) and a root cause of its undesirable output...

There may not be a solution that preserves the open internet, if this system is fundamentally incompatible with social realities.


"We really just need better public education"

I agree 100% as long as you put me, or people of my worldview, in charge of the curriculum and personnel.


Yes. In today's age of near-universal literacy, "uneducated" is just a euphemism for views disliked by the side in control of the education apparatus.

While thought-policing media, schools, churches, and any other possible venue of "indoctrination" may "work" to a superficial extent, it mostly just destroys the credibility of your authority and leads to stunning implosion and destabilization. See: the Soviet Union.

Like it or not, you can't just beat "undesirable" views out of people, either with schools or social media moderation.


Education is not indoctrination.

Education teaches critical thinking, science, history, numerical literacy, and the general skill and toolset to differentiate fact from falsehood, rhetoric, and manipulation-- regardless of where it is coming from.

Education is an immune system for the mind. Generally it is the manipulators who don't like an educated populace, because it decreases their power. They tend to be the ones labeling education as "indoctrination".


>Education is not indoctrination.

Right, in principle, it's agreed that "education" is "good knowledge" and "indoctrination" is "bad knowledge" and/or "fake news".

As long as you think that the inoculations being administered in the school system are valid, you'll call it education. Once you stop thinking that, you'll call it indoctrination.

So you're not really arguing anything. Every side calls training that biases you toward their preferred narrative "education" and training that biases in the opposite way "indoctrination". Is your point that "sometimes people disagree"?


Do you understand the difference between knowledge and critical thinking?


> Education is not indoctrination.

This tends to last until someone realizes that an educational system is a wonderful indoctrination tool for advancing their goals. Often enough, that realization is followed by their acting on it.

This isn't new. "Give me the child for the first seven years and I will give you the man."


Being able to distinguish fact from fiction is not a skill that is near universal, and it should be. Seems like a straw man to attack "thought policing" and "indoctrination." We're talking about critical thinking and logical reasoning.


This argument is completely disingenuous. Your average person is capable of critical thinking and logical reasoning -- those who aren't are generally wards under the care of another person. Normal people just think logically and critically in reference to local optima, and that's not something that we can or should try to program out of them.

That quality is also known as adaptability and it's crucial to successful survival and prosperity, for exactly the same reason that it's useful in mathematics: global optima are generally difficult to deduce, if they can be conclusively and authoritatively determined at all.

Saying Side X is "not being logical" or "can't think critically" is virtually always just a cop-out. It says you either a) don't understand or b) don't want to admit the validity of some of their concerns.

Most of the time when the other side's argument is understood, the disagreements are a matter of priority and/or credibility, not nonsensical thinking. And those priorities are usually determined intrinsically; values as such can't really be programmed or taught. They're the result of the years of experience each individual has endured in the real world.

A good example of this is that many engineers are known for a just-the-facts, no-frills approach. This is because engineers tend to prioritize facts and correctness over aesthetic and emotional value. Other people who don't do this aren't objectively wrong -- they just put different weights on the considerations, leading them to different conclusions.

Another example is outlet credibility. Your average Fox News viewer may believe that MSNBC is propaganda secretly dictated by the shadowy figures in the background, and vice versa. If you believe this, the logical conclusion is to dismiss or at least discount the perspective of the propagandist.

You cannot "prove" that one side is propaganda and the other side isn't, because it is impossible to definitely deduce the intentions and motives of other people. Reports that say reports from MSNBC were more frequently errant are of no value because you can just say "Oh yeah, says who? The same shadowy figures?" to that.

It is important to understand that humans hold a variety of totally non-falsifiable beliefs -- things that cannot be definitively proven one way or the other, even if you try, like the state of mind of the speakers around us. These have to be approached at that subterranean level to be understood, let alone addressed.

All we can do is understand that our own perspective is not the default or de-facto correct one, and that other people are entitled to their own assumptions and unfalsifiable opinions just as we are. They're entitled to their own credibility heuristics and decisions about who is worth trusting. People are free to make their own decisions and conclusions, whether we agree with them or not.

Understanding that is critical to learning that it's OK to disagree with people, without having to pretend that they're insane just to preserve your own ego and self-worth.


> All we can do is understand that our own perspective is not the default or de-facto correct one, and that other people are entitled to their own assumptions and unfalsifiable opinions just as we are. They're entitled to their own credibility heuristics and decisions about who is worth trusting. People are free to make their own decisions and conclusions, whether we agree with them or not.

For opinions, perhaps. There are also people who reject facts. I don’t consider rejection of evolution or young-earth views as legitimate. Thus, those who cling to these views are empirically wrong.


>There are also people who reject facts.

Most people don't reject facts, they reject certain interpretations of facts.

For example, some people believed epilepsy came from evil spirits. They didn't deny that the person was shaking on the ground. They just had a different explanation for it than we do now.


Empirically? One suspects a different adverb would have been more correct in that sentence.


Do you contend that human beings have not empirically measured the spherical, or roughly spherical shape of the Earth?


When you build a small house, you don't account for the curvature of the earth. Same for when you walk down the street.

When you build a runway for a plane or a long bridge, you do.

A model is not necessarily useful in all contexts. People still use the flat earth model in useful ways because it's simpler to assume the earth is flat in some situations. Of course, once you go beyond the capabilities of the flat earth model your numbers will wildly diverge into the realm of useless while the round or spherical models provide useful numbers for longer.


If parent had been talking about chemistry or physiology or any subject that can be explored via controlled experimentation, I wouldn't have complained. Instead the topics were geological and evolutionary history, which seem very much not "empirical". Not that I suspect that those sciences are wrong in any sense, but words have meanings.


I misread gp as saying "flat Earth", and not "young Earth". My apologies. I would agree that even if we can point to things like nylonase or the speed of light coupled with known distances to stars, those are deduced facts. Whereas astronauts have empirically observed the spherical nature of earth.


> A good example of this is that many engineers are known for a just-the-facts, no-frills approach. This is because engineers tend to prioritize facts and correctness over aesthetic and emotional value.

And yet engineers are over-represented (compared to people with other degrees) amongst Creationists and conspiracy theorists and, I would guess, terrorists. I think engineers value simplicity and direct causation more than facts or correctness.


No, it's not disingenuous at all. It is not a cop-out to say that people who believe in conspiracy theories, people who don't understand facts, people who are highly opinionated about things they don't understand, etc. are not behaving logically. They can have valid concerns and still be behaving irrationally. You clearly think these are mutually exclusive but they aren't.

Appreciate the multiple snide attacks, though.


> people who believe in conspiracy theories

Is there not such a thing as conspiracy fact? Aren't some conspiracies, in fact, real? It seems both sides of the political aisle have pet conspiracy theories these days, so it's really hard for this to hold water anymore.

> people who don't understand facts

As another commenter said, people will usually agree on the clear and present facts, e.g., Donald Trump won the presidency. Where you'll find more disagreement is on rationale: either he won because he gave a voice to the discontented American working class, or he won because he worked in cahoots with Vladimir Putin to subvert American democracy.

People don't refuse to acknowledge the obvious state of affairs. They have different interpretations, based on different values and credibility heuristics, of the likely impetus for that state of affairs.

>people who are highly opinionated about things they don't understand, etc.

aka virtually everyone. How many of us know enough to hold our own with the experts in something that we're "highly opinionated" on? If we can in anything, it's very narrow. Are all of our other opinions invalid now? Humans use credibility heuristics to try to determine who is right about something, and then they follow based on that.

> are not behaving logically

I dunno, it sounds logical to me, at least in the practical sense. If we pretend we live in a world of infinite resources and time, you might be right, but considering the constraints of reality, the logical approach seems to be to have and express opinions in the moment according to one's best judgment, since everyone else is going to be doing that too. Just gotta try not to be too haughty about it.

> They can have valid concerns and still be behaving irrationally. You clearly think these are mutually exclusive but they aren't.

I agree someone can have a valid concern and also behave irrationally. I don't agree this is what you started out saying, though.

>Appreciate the multiple snide attacks, though.

No offense intended. The edit deadline has passed, but I didn't think I put any such things in. My apologies if you felt I was being condescending or passive-aggressive.


I'd like your opinion about this particular subreddit:

https://www.reddit.com/r/CBTS_Stream/

These 20,000-odd people unequivocally lack the type of critical thinking skills the GP is referring to. I find it hard to believe that they are all under professional care. These people are straight out of The Da Vinci Code, or National Treasure. They truly believe that they have uncovered a massive conspiracy to overthrow the current American government, and they are organizing to stop it.

Many subreddits choose a sort of mascot that defines their subredditors. For instance, people who subscribe to the tongue-in-cheek /r/evilbuildings are "6509 villains plotting", where they post pictures of buildings that have a nefarious appearance, with no conspiracy in the comments. /r/CBTS_Stream has "21,333 Operators". As in mercenaries/militiamen.

These people are rabid Trump supporters, seem to have a strong fundamentalist Christian bent, and appear to be extremely gullible and susceptible to any sort of theory that involves revenge upon the previous administration. They even have their own prophet, "Q". Everything from occult references, to Nazis, to big pharma killing off holistic doctors, to arranging Trump's tweets into an 11x11 grid and then playing word search to reveal a secret message. These people swear that Donald Trump's televised rallies are chock full of encoded messages and symbolism, both in what Trump is saying and in the clothes/posters of supporters in the background. These people buy toothpaste from Alex Jones because it doesn't contain fluoride. These people believe that all mainstream American history since the American Civil War is a lie created by the perpetrators of this current hoax they have uncovered. They also believe that Trump has already secretly met with Kim Jong Un, and will soon unveil a world-saving peace treaty, and that it will "make the libs' heads explode".

The truly sad part of this is that a lot of these people are also members of other subreddits dedicated to people who have escaped Mormonism, or Jehovah's Witnesses, or similar groups. So these people have already thrown off the shackles of psychological warfare once. But they believe now that they are "woke", and seem completely beyond talking down.

Good luck explaining to these people that they are being radicalized by Russians, or whoever. Good luck getting any of these people to not believe that any censorship is obvious proof that the sleuths are hot on the case, and that the global elite are silencing them.


> This argument is completely disingenuous. Your average person is capable of critical thinking and logical reasoning -- those who aren't are either wards under the care of another person.

Oh please, you think the average person has sufficient critical thinking skills to read the newspaper and pick out the parts that are "stretching the truth", use specious reasoning, or commit various other logical fallacies? You must roll with a different crew than I do.

> Your average Fox News viewer may believe that MSNBC is propaganda secretly dictated by the shadowy figures in the background, and vice versa.

If they had critical thinking skills, wouldn't they be able to get a pretty decent handle on the degree to which they are propagandists?

It sounds to me like what you're saying is, most things within this realm are not knowable, except for the parts that are. The world is complex and confusing, but I don't think it's that confusing.


I understand and agree with some points of your criticism, but I disagree with the claim that we can't beat undesirable views out of people. We can't do it completely, but it's not a binary thing, and I believe we really can do a lot to educate people. And not a political education, but teaching them about their own biases. Teaching them to be critical, to not just ignore evidence when it goes against their views, to be fair to others, etc.

I don't know, I don't think we have actively tried yet.


> Like it or not, you can't just beat "undesirable" views out of people, either with schools or social media moderation.

You absolutely can. An example: https://www.theguardian.com/commentisfree/belief/2012/sep/22...


That is indeed an inspirational example.


I don't mean it to be inspirational, only to indicate that persistent propaganda and organized information warfare can indeed drive ideas fully out of the population.


Broken clocks are right twice a day.

Religion sucks, but Soviet Communism's anti-religious nature doesn't excuse its foibles.


We've already got people with the correct worldview in charge of curriculum and personnel. The problem is that there are still dumb-dumbs that sometimes think there are valid alternatives to our worldview. That's why we need better education.


We should incentivize them to have the correct worldview. E.g. if you’re a CEO you should make sure that your workforce holds the correct opinions by reminding them that they can be fired on the grounds of being a bad “cultural fit”.


I can't tell if this post is ironic or not


"correct worldview".

People like you scare me.


Sorry--it is satire, but the comment represents literally how it comes across to me when I see claims that "better education" will effectively bring about less toxic discussions. The implication is clear: If only people were rational and educated, like me, they wouldn't think the way they do, and then we would all agree.


How about this: first teach advanced critical thinking skills, so people have the skills (if not the will, that's another problem) necessary to see through propaganda from both sides.

I have a feeling a lot of people would have issues with this approach though.


How do you teach critical thinking?


By giving people things to think critically about, and ensuring that they respond in an appropriately thoughtful manner.

Of course this doesn't work when politics is taboo.


I think emotional maturity is more important than critical thinking. People in our culture have this life-or-death anxiety over being right, especially in social groups. You see it all the time on social media. Person 1 makes a throwaway Facebook post which contains some kind of factual error. Person 2 points this out. Person 1 feels personally attacked and becomes emotionally invested in "winning." The more pushback person 1 gets, the more they stand their ground and will scorch the earth to save face. Where is all this intellectual insecurity coming from?


It comes from the fact that when you say anything incorrect online, there's an infinite number of people who will call you out on it. Your intellect is always on trial. You have to convince a jury of the entire planet that your opinion is valid.

Take the same comment or opinion and air it among three friends in person (or a very tight social network). You only need to convince two or three people who likely trust and respect you already, and who are not inclined to want to spend an infinite number of hours debating such trivia across all time zones.


Why not just engage in conversations on the principle of charity and good faith? There's also the concept of steel-manning other people's arguments to help extend good faith.

Not every conversation has to become a burned bridges and salt the earth affair. If the other person is just trying to "win" then disengage from the argument. If the other person is arguing with you in good faith then maybe you're wrong or have something to learn from a new perspective.


But... an infinite number of people aren't reading every page on the web, all the time. Even on Reddit, you're only really interacting with the limited subset of users who choose to comment, out of the limited subset who read a thread - which is still possibly bigger than a circle of friends, but smaller than any significant fraction of the human population.

There is the perception that "the entire world" is watching you on the web, criticizing your every move, but that's not a fact.


I guess I'm just not sure what the curriculum would look like. Is there something you could point to as an example of a course doing this well?


I would start with someone who is skilled in both critical thinking and education, or am I misunderstanding the question?

Logical fallacies would be one place to start, you can see examples of this all day long on reddit for example.


Well, my understanding is that "critical thinking" is already very commonly considered to be part of various course curricula. If it's not being taught, then we'd need to do something differently.

I've been hearing claims of the need to teach "critical thinking" since I was in high school. To me it always came across as one of those things that can't easily be taught, particularly in a traditional academic setting. Everyone agrees it should be taught, but if there were a clear way of doing it, we would.


There's plenty of material out there that isn't remotely touched upon in a traditional education.

https://en.wikipedia.org/wiki/Classical_logic

https://plato.stanford.edu/entries/logic-classical/

https://distancelearning.ubc.ca/courses-and-programs/distanc...

> If it's not being taught, then we'd need to do something differently.

When reading the news, forums, or overhearing conversations, do you not regularly encounter people who obviously have no significant skills in critical thinking?


> There's plenty of material out there that isn't remotely touched upon in a traditional education.

Right. I took a logic class for my undergraduate degree. It's actually the source of the "modus" in my username. I guess to me that's a far cry from what people refer to as "critical thinking." Being able to identify textbook logical fallacies isn't the same thing as rationally and objectively forming a judgment about something.

It's certainly a helpful part, but I doubt most would remember it any better than geometry or 1800s history.

> When reading the news, forums, or overhearing conversations, do you not regularly encounter people who obviously have no significant skills in critical thinking?

I do, but it's rarely a clear-cut example of misunderstanding a logical fallacy. More often than not, it's the blind acceptance of supporting evidence while rejecting opposing evidence. Or assigning way too much value to a poorly-sourced news story. Or approaching the issue with a different worldview / values. Or any number of other biases that affect decision-making.

To be clear, though: I agree it's clearly not being taught. I'm just not convinced you can take a bunch of high schoolers, put them in a room, and after X weeks of doing something, they'll be critical thinkers. I agree you could probably teach them logical fallacies well enough to pass a test on them, but that's not the same thing.


Do you think we've reached the absolute apex of having a well-informed citizenry?

If not, if critical thinking doesn't work, what could we do to improve this situation?


> Do you think we've reached the absolute apex of having a well-informed citizenry?

Of course not.

> If not, if critical thinking doesn't work, what could we do to improve this situation?

I'm not sure "well-informed" and "critical thinking" are even relevant to each other, but putting that aside, I genuinely don't know. That's why I asked how you teach critical thinking.

It's possible people are bound to retreat to their biases and it's a futile effort. I'm just not convinced attempting to teach people "critical thinking" will work, because it hasn't.


> because it hasn't.

Implying it's been tried, and failed.

Where has widespread teaching of critical thinking been tried?


I've seen it in numerous course syllabi and mandates. I'm not sure how to cite that, though. Here are a few examples where it is assumed the existing education system / teachers claim to be teaching critical thinking.

> Public school teachers and administrators will tell you that one of the mandates of public education is to develop critical thinking skills in students. They believe that curricula are designed, at least in part, with this goal in mind. [1]

> Common Core, the federal curriculum guidelines adopted by the vast majority of states, describes itself as “developing the critical-thinking, problem-solving, and analytical skills students will need to be successful.” [2]

> Many teachers say they strive to teach their students to be critical thinkers. They even pride themselves on it; after all, who wants children to just take in knowledge passively? [3]

Are you willing to acknowledge educators / curricula commonly claim to teach critical thinking? To me it's always come across as something claimed to be taught pretty much everywhere. Yet we both seem to agree it's not working.

We could try teaching critical thinking differently and potentially meet some success, but that doesn't change how it's been claimed to have been taught for some time with poor results.

[1] http://argumentninja.com/public-schools-were-never-designed-...

[2] http://www.newsweek.com/youre-100-percent-wrong-about-critic...

[3] http://theconversation.com/lets-stop-trying-to-teach-student...


> Here are a few examples where it is assumed the existing education system / teachers claim to be teaching critical thinking.

> Are you willing to acknowledge educators / curricula commonly claim to teach critical thinking?

I'm not in denial of some sort, ffs. I'm frustrated at watching our society come apart at the seams because the vast majority of the population seems to be incapable of intelligently reading a newspaper article, and will fall for seemingly any trick in the book.

Of the examples of "critical thinking education" listed above, do any remotely approach the critical thinking specific education I'm talking about here?: https://news.ycombinator.com/item?id=16572861

People are absolutely inundated with propaganda nowadays, like at no other time in history, with social media being the most powerful weapon by far. We are graduating our children and sending them intellectually defenseless into this new world. I don't know if the average human mind can be brought to a level sufficient to cope with the propaganda created by world-class experts in persuasion who are working for a variety of deep-pocketed entities, but at least we could try.


> Of the examples of "critical thinking education" listed above, do any remotely approach the critical thinking specific education I'm talking about here?

Well no, but my claim wasn't that your suggestion has been tried. It's that other people have been claiming they've been teaching critical thinking for some time, and it's not working.

I agree it's a problem--I just don't think a class in logic will do it. I'm not sure it's teachable at all, and even if it is, I'm not sure those same skills won't be ignored the moment the argument questions one's identity or becomes emotional.

Is it worth trying? It's easy for me to say "sure," but it's not on me to implement, and I'm certainly not sure how to assess whether it'd be successful.


Judging solely on the number of HN commentators who are absolutely incapable of detecting irony or satire, and indeed who may feel those are entirely out of place on HN, the average citizen isn't capable of considering two mutually contradictory propositions at the same time, let alone becoming "well-informed". The various exhortations in this thread to "just teach them!" bespeak a similar innocence. We have a rather large number of trained professionals engaged in the teaching already, so such pleas should at the very least be accompanied by considerations of why those efforts have not yet sufficed.


Dialectics are hard, man.

But to be fair, the longer we get into the current era of politics, the harder it is to distinguish between earnestness and satire. Young people who watch the movie Network today don't see Howard Beale as satirical, because there are too many people like him today who are deadly serious.


Critical thinking, specifically, is taught on a widespread basis?

What country are you writing from, and could you give some specific examples?


Most of our high schools despair of teaching mathematics to the level of algebra, to most of their students. Many haven't yet despaired of conveying literacy to those same students, but the outcome is by no means certain. I would consider both of those prerequisites to "critical thinking", no matter what particular idiosyncratic definition of that phrase you might prefer. Therefore I suggest that we aim lower, for a sort of animal suspicion that comes naturally to all humans. The result, from the perspective of political harmony, will be the same: hundreds of millions of critical thinkers would not magically all arrive at the same conclusions on any set of topics. In a perfect world of critical education, not only would you still disagree with most people's conclusions, but you would also still disagree with how they arrived at those conclusions.


Philosophy


You’d think that, but Wittgenstein minted his career on calling philosophers out for not thinking critically enough in debates.


Even if you hadn't misinterpreted the GP, this crosses into personal attack, which is not allowed here. Please read https://news.ycombinator.com/newsguidelines.html and stick to those rules when commenting on HN.


I think moduspol was being sarcastic.


> .../Reddit/... promote a regression towards the mean understanding... viral content reflects and amplifies the "average user's" sentiment and values

Comparing the experience in a niche subreddit vs. a default subreddit, it's clear that the real problem is allowing casuals in. If people have to go out of their way to participate, you wind up with only the ones who care to do so. And they have, in their reputation, something they don't want to lose.

But if people are allowed to participate by default, you get the enormous masses of people who, collectively, by virtue of their shifting roster, are immune to moderation. And you get people who set out that morning to share as many opinions as possible, instead of the people who set out to participate in that community exclusively.


I’ve witnessed this in well-meaning subreddits. Lots of arcane rules for posting. No-warning bans and removals mean I don’t try again.

On the subreddits I manage, I allow broad participation with no barriers. But they aren’t popular enough to have enough trolling to break me down and add hoops.


The clear separation between information and entertainment is BRILLIANTLY said. It seems straightforward, but it's really important. Why is Twitter, a cesspool of memes, rap, sports, and Kardashians mostly used by kids under 25, the primary means of breaking stories? Why is so much news on there? Anytime profitability, entertainment, short attention spans, and news all get entwined, the resulting outcome is clear.

Facebook, Reddit, Twitter, and Snapchat all started as entertainment and switched to clickbait reaction/outrage-culture news, and normal news has turned into a mess as well.

I say regulations wiping the trending news feed off Facebook, Twitter, and Snapchat are a start. Perhaps any group that meets set-in-stone criteria, regardless of political affiliation, gets some federal funding to make up for the cost of producing real journalism. That journalism must be fact-checked, and we hold them accountable.

People are desensitized to everything now. When shocking news is made every 15 minutes and the world is so connected, we become numb to so much, and that is incredibly dangerous.


I wish this were true, but the amount of fake news spread by trusted news networks, such as newspapers quoting Russian propaganda bots, implies that education isn't sufficient. If a professional cannot discriminate between truth and lies on Twitter, how can an average person?


The economic incentives are misaligned. Fake news helps with page views and other KPIs. If there were an actual enforceable cost associated with misrepresenting the truth, we'd see a slowdown in fake news.

The restoration of the Fairness Doctrine would also help stymie some of the biggest promulgators like Fox News *

* you can search for "fox news viewers misinformed" and encounter studies and results like http://publicmind.fdu.edu/2011/knowless/


The analogy of food nicely shows that some sort of race to the bottom does not necessarily occur: there's still organic or other high-quality fresh food even though McDonald's has been around for a while.

Similarly, there are still excellent news sources. The Economist is often cited in these discussions, and the New York Times is also vigilant in their reporting and the correction of errors when they occur[0].

What we’ve seen is a breakdown in trust of institutions, largely disconnected from actual mistakes on their part. People will quickly demand proof and invoke conspiracy theories when, for example, the three-letter agencies accuse Russia of interfering in elections. They have learned to invoke the “appeal to authority” fallacy too well, without offering an alternative. Because you cannot evaluate a news story without in some way deferring to the reputation of the publisher.


The breakdown in trust of news institutions has many sources, so correcting factual faults addresses just one part. Omission and selective use of facts, misleading context, and misleading language seem to carry a higher penalty for trust in today's environment, where it is very easy to provide the original source whenever a slightly biased news article is published. A factual error is very binary, true or false, while omission and selective use of facts give room for much more outrage and distrust of otherwise well-established news institutions.

The Economist and the New York Times may have good practices in regard to errors, but there is a clear difference between their reporting and the findings of independent fact-checking sites. To make matters worse, even those examples of "excellent" newspapers tend to have a clear and open political alignment. With increased political polarization, this then results in a rather natural distrust of news institutions, even those that are vigilant in correcting errors after they have occurred.


I disagree with the fast food analogy because that has obvious and direct personal costs, while infotainment negatives are subtle and externalized.

Re: trust and appeal to authority; your example made me realize people are drawn to grand conspiracies because unverifiable theories are infallible... luring in people unfamiliar with probabilistic reasoning and consilience.


Organic food is not higher quality. Organic just means the producers cannot use some arbitrary list of farming practices (some good, some bad).

Sure, McDonald's is not good, but there is also plenty of organic food that is equally bad (or worse).


> Organic food is not higher quality.

That depends. Sometimes European organic veg is preferable to Chinese industrially farmed veg when your local supermarket offers only those two choices. This is definitely true of garlic: Chinese garlic tends to be notoriously bitter and lack juice, but Spanish organic garlic is very sweet, pungent, and juicy. Now, the fact that the European organic choice was made according to the limitations of organic farming may well be irrelevant to its goodness, but there is a strong enough correlation with quality to guide consumers, and it was likely chosen by your supermarket as an alternative to the Chinese imported product precisely because they wanted to cover the organic segment.


That is not a property of organic, though. Non-organic farmers are able to produce at least as high a quality as organic farmers (nothing an organic farmer does is prohibited for the non-organic farmer, while there are a number of things the conventional farmer can do to increase quality that are prohibited to organic farmers). Of course, just because they can doesn't mean they do.


At the same time, I feel like we are seeing a breakdown of trust in institutions because of actual mistakes that, in years past, would have gone unnoticed.

Although this distrust does have negative impacts on our society, I view it as an overall good thing.


>We really just need better public education, as well as clearer separation between information and entertainment.

Nope, this won't work.

People will always need a place to voice their vile comments in a cowardly manner.


[flagged]


Taxes don't pay for private education; let the free market decide how to run those schools.


Reddit does not actually have strong moderation.

Forum communities with actual strong moderation, e.g. Something Awful and ResetEra, have near zero issues with hate speech, Nazis, and other things that reddit has let fester.


I'm a regular on Something Awful.

One of the major factors that keeps it relatively clean is that user registration costs $10. That's a strong financial disincentive against trolling, bots, etc.

I really believe that successful online communities of the future will have paid signups.


Also, SA's userbase consists largely of older, tech-savvy people. It's been around for nearly 20 years now and I bet their registration peak was ~2004 (:files:). So it's pretty likely that the median age of a poster there is ~35-40.

I'd totally pay $10 for a less shitty reddit clone.


The Something Awful forums provided my first real exposure to 'internet culture'. I find myself reading their forums more and more often lately because discussions there seem less likely to devolve into an echo chamber. An account there is well worth the registration cost imo.


I think part of it is the lack of voting and the presence of easily-identifiable avatars: participants have an incentive to post things that generate maximum engagement and discussion with other specific users as opposed to maximum instantaneous agreement. In this regard it mirrors real-world social interaction much more closely than reddit/fb/twitter.


The other factor is not being shy about banning people. Heavy moderation.


I believe you are correct. If I pay for something, that suggests I am the customer and not the product.


Haha, if you pay for SA, it only suggests that you are a sucker who paid 10 bucks to post on SA for a couple of days before getting banned. They are very ban-happy there.


I have had an SA account for years and never been banned. I don't appreciate being called a sucker and I think the price is fair for what I get.

When I use Facebook I am the product, not the customer. This means the platform is optimized to put my eyeballs on advertisements or provide data about me to marketers. It is not optimized to provide high quality conversation.


Good for you but it's hard to take you remotely seriously when you try to relate SA and high quality conversation.


Are you trying to be ironic on purpose?


SA is basically 4chan except you have to pay 10 bucks to join. I don't really see the joke here.


The "joke" is that you complain about the lack of high quality conversation on SA, and yet your posting style here is extremely shitty.

No wonder you got banned so quickly there...


What, am I supposed to list my reasons with citations (of course) to justify my opinion about some historical relic of a site? And I didn't get banned from SA; I just know a lot of people who did.


If you got banned after a couple of days that means you broke the rules in a big way.


Well, that's because reddit is simply a platform, right? Reddit undeniably has tools to allow for strong moderation. See /r/AskHistorians for a perfect example of this.


Strong moderation would mean that moderation is not optional, always present and always consistent.

Having a toxic community that internally declines to moderate is not strong moderation.


"Tools to allow for" is a hell of a phrase. Yes, moderation is possible on reddit but the idea that the tooling for such is in any way "good" is in error. It's enough to make moderation possible with sufficient application of effort. The fact that it's so rare is a strong indication of how useful those tools are.


Reddit tries to be just a platform, but it fails in two ways:

1) Namespace of subreddits. The subreddit which snags the most obvious name for a topic has a much better chance of becoming canonical for that topic than competitors with worse names.

2) Cross-subreddit identity and supporting tooling. For example, I can easily search for all recent posts made by a particular user, but I can't easily search for all posts made by a particular user within one specific subreddit. This sort of thing promotes "cultural leakage" across subreddits and makes people think of all of Reddit as one community with one culture.

Related: see https://news.ycombinator.com/item?id=16573842 which summarizes a study that finds cultural leakage across subreddits.
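For what it's worth, the per-subreddit search gap can be papered over client-side. A rough sketch against reddit's public JSON listings; the endpoint and field names here are from memory, so treat them as assumptions:

  import requests

  def user_comments_in(user, subreddit, pages=4):
      """Page through a user's recent comments, keeping only one subreddit."""
      headers = {"User-Agent": "per-sub-search-sketch/0.1"}  # reddit rejects blank UAs
      after, keep = None, []
      for _ in range(pages):
          r = requests.get(
              "https://www.reddit.com/user/%s/comments.json" % user,
              params={"limit": 100, "after": after}, headers=headers)
          listing = r.json()["data"]
          keep += [c["data"] for c in listing["children"]
                   if c["data"]["subreddit"].lower() == subreddit.lower()]
          after = listing["after"]
          if after is None:  # no more pages
              break
      return keep

That this requires fetching everything and filtering locally, rather than a first-class query, is itself a small example of the cross-subreddit identity design described above.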


We haven't really tried "real identities". We've come close.

Real identities would require verification, which sites like FB only do after the fact.

I fear that a huge repercussion of the election issues is that we will get there. A real ID may be required for you to post comments on all websites. And I'm not sure how I feel about that. Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know. But at what cost?

I really do wonder what the root of trolling is. What makes a "normal" person take on an alter ego/view/counter-view, solely based on lack of face-to-face interaction...


Most of the problems of the Internet are mimicked in real life. That's why I don't think "real identities" would solve much, to be honest. Before trolling, for instance, there was the art of the prank (some harmless, and some downright mean -- just like trolls!), and the term "rabble rouser" seems to date back at least a couple hundred years. Some people in real life interact with others in rather toxic manners, in one form or another.

I mean, you get toxic behavior even on something like Nextdoor, where you pretty much know it's the neighbors across the street. Technology has just made things more convenient -- social media removes any curation, and technology also has made some means of harassment much easier to execute.

Myself, personally, I avoid social media that encourages toxic behavior (which usually means smaller, special-interest-type sites; social circles that you know; etc.). This involves some degree of moderation or self-selection.

I don't see a good way around limited moderation for Reddit either, which is unfortunate in that it is hard to moderate something that size well (it's usually inconsistent and often arbitrary-ish).


Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know.

That is not at all clear, especially as a percentage of comments. There are plenty of sociopaths who are perfectly willing to troll under their real name. Meanwhile, more reasonable people may quite rationally be worried that expressing any opinion on a controversial issue will lead to online mobs trying to get them fired from their jobs, kicked out of school, or otherwise ostracized. Not to mention scenarios like being a gay teenager in a very socially conservative environment.


Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know.

I don't think it would. Lots of people post horribly objectionable material under their own names on a regular basis, depending on their level of financial security, peer group, and social milieu.

What makes a "normal" person take an alter-ego/view/counter view, solely based on lack of face to face interaction...

Some people are horrible, and are just as unpleasant in real life as they are online.


> Reality is that it would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know

Is this true? I've seen various studies saying that the opposite is actually true, but I can't currently find any of those studies. Does anyone have any sources?

EDIT: some sources, though I don't know the strength of their validity:

  - https://techcrunch.com/2012/07/29/surprisingly-good-evidence-that-real-name-policies-fail-to-improve-comments/

  - http://www.slate.com/blogs/future_tense/2014/07/17/google_plus_finally_ditches_its_ineffective_dangerous_real_name_policy.html


> A real ID may be required for you to post comments on all websites. [I]t would drastically reduce comment trolling, if your real identity was viewable and searchable by all you know. But at what cost?

The cost is that it would prevent people from anonymously reporting abuses, which means that fear of retaliation will have a chilling effect. We've already seen this where people get death threats, houses burned down, etc, when they do things like report sexual assault.


People should be allowed to be anonymous but only explicitly so. The problem now is we have people lying about who they are and using multiple accounts and other forum moderation abuse to push viewpoints, often paid to do so.


>at what cost?

I can see it coming at the cost of making stolen identities worth even more.

As it is, websites completely suck at keeping our 'anonymous' identities secure. Our emails and passwords are hacked so commonly that there are websites dedicated to tracking it. Now you're just adding 'real identities' to the brokered data. Any real trolls will be able to use this data from the dark net while pretending to be you. Even worse, since real names are required, any employers who look for you on the internet will see your "I'm an anti-gay right-wing pro-Russian" profiles online and say "you are not a cultural fit for our company". You will have to take the time and effort to clean up what is said about you. Since it's a real identity, it's not going to change, and they already have all the information they need on you.

Good luck in that terrible future.


A lot of this is not "trolling" but genuine hate speech. People will post hate speech under their own bylines on news sites, all day long.


There are two kinds of people who use real names on FB: those who never post offensive content anyway (the vast majority), and those who simply don’t care if offensive content is associated with their real identity (a small minority).

Those who post offensive content and do care, find it completely trivial to get a fake name. FB’s enforcement of their policy is basically nonexistent. This is the second-biggest group of users.


> A real ID may be required for you to post comments in all websites.

I honestly don't think this will ever be the case. Because there is a lot of profit from an unmoderated comment store. Also because it'd be monumentally hard to actually make all sites compliant.


> So we've tried

4chan also exists and it works and is marvelously non-toxic once you realize that any insults hurled at you are impersonal because they can only attack what you have immediately posted previously. Your attack surface is tiny, assuming basic information hygiene.


One of my biggest issues with reddit is post history. Oops, this person posted something we don't like 3 weeks ago; dismiss his post and attack him. Oops, this person posted somewhere we don't like; ban them from the 46 subreddits I moderate. Oops, this person posted for the first time today in a default sub they have been subscribed to for 3 years; ban him for brigading.


I imagine there are a few trade-offs when flicking the "post-history" switch either way.

Post history On: You get to follow a user's comment history. If you read an insightful comment by them and want to read more, then having a history is nice. Users are people.

Post history partially On: e.g. comments could decay to anon after some period (cue the sites that collect all post data and match it to users); see the sketch after this list. Slightly increases the cost of doing a deep dive on a user's history. Users are people, fading to ideas.

Post history Off: Lowered attack surface for people who are actively trying to find an argument with you. Less pressure to have a persona consistent with the typical one of any particular community. Users are ideas
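The "partially On" option is easy to prototype at display time. A minimal sketch, with the cutoff and the field names invented for illustration:

  from datetime import datetime, timedelta

  DECAY_AFTER = timedelta(days=90)  # arbitrary, made-up cutoff

  def display_author(comment, now=None):
      """Show the author on fresh comments; fade old ones to anonymous."""
      now = now or datetime.utcnow()
      if now - comment["created_at"] > DECAY_AFTER:
          return "[anonymous]"
      return comment["author"]

As noted, this only raises the cost of a deep dive: anyone who archived the page while the comment was fresh still has the author mapping.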


There are more options.

For example, you can make multiple identities easier. I make use of that here via Firefox containers; it is quite nice.

You can make anonymous the default and track the user under the hood, allowing them to claim a post at a later point if they feel confident enough to attach their name to it.

You could limit who can see the user identity. Or only display something more vague than a specific identity, a profile-lite behind a one-time ID.


The only real consistency I've seen between toxic communities and healthy ones is size. When you try to call 100k people a "community", it's a bad community where the toxic people have a louder voice than the good ones. If the community is a core group of <1000 participants and maybe 10000 spectators (following the 90-9-1 rule of online communities), it can be good. When it grows larger than that, it needs to be split up or shut down.

The only way I see to save reddit is to set a maximum size for a subreddit, and shut down or otherwise isolate every subreddit that grows bigger than the maximum threshold.
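A toy version of that threshold check, using the numbers from the paragraph above (the constants are illustrative, not empirical):

  MAX_PARTICIPANTS = 1_000   # core posters, the threshold suggested above
  MAX_SPECTATORS = 10_000    # lurkers, roughly per the 90-9-1 ratio

  def over_threshold(subscribers):
      """Flag a subreddit whose subscriber count implies too large a core."""
      return subscribers > MAX_PARTICIPANTS + MAX_SPECTATORS

  print(over_threshold(100_000))  # True: split it up or isolate it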


I believe the Something Awful forums involve a cost to participate, something like a one-time $10 fee that was designed to remove low-content or negative-content participators. Might be an interesting case study of your hypothesis.


Unfortunately, while this works, it does not align with the goals of the companies running these forums-- to get as big as possible, to get the biggest valuation and/or slice of advertising dollars as they can. Even small monetary barriers to entry decrease participation substantially, and that's just not acceptable.

This is why I'm quite pessimistic about the current situation: the current in-vogue business model of surveillance/advertising capitalism demands massive size beyond what can be moderated, and thus makes this problem inevitable. And it only gets worse when the most toxic users are the most profitable, viz. Twitter's refusal to ban Donald, even though by any reasonable interpretation of their TOS, he breaks it every other day.


It worked, but not 100%. Many people were still willing to pay $10 over and over and over again, for whatever reason, to re-register and keep posting (badly) just to get banned again.


The number of people committed enough to being trolls that they will keep paying you is small enough that they don't dominate the discussion.


At least you get money out of the trolls, though. Just donate some of it to an anti-bullying campaign.


Metafilter does this as well. A one-time payment to post along with heavy moderation will remove/keep away most of the toxicity.


Ten bucks as ante for the entertainment?

Cheap, if you ask me.


See other comments about metafilter.


I think your suggestion would lead to an echo chamber. Take a look at Metafilter and how samey they've gotten. One thing to remember is that even small barriers implicitly give mods much more power.


It just occurred to me that forum moderation is essentially politics, and the failure modes of web forums follow the failure modes of the political systems their moderation emulates.

The most common form is anarchic moderation (i.e. no moderation). When a forum's small, unwritten social rules keep things under control. However, as the forum grows, that breaks down, and things become more chaotic.

Metafilter essentially has an authoritarian moderation culture: the rules of discussion are both made and enforced (selectively or not) by one group over a subject group. There's a wall to keep outsiders out (the paywall). It avoids chaos, but its failure mode is ossification, devolution into an echo chamber, and eventually desertion, as public forum behavior comes to more-or-less rigidly reflect the opinions and preferences of the moderators.

Reddit's somewhere in the middle of the above two forms. There are anarchic hordes in the less moderated reaches, and little authoritarian kingdoms without walls to keep the hordes out.

I don't think anyone's tried real democracy in a forum (with elections, politics, checks and balances, and the time investment that all entails). It'd be interesting to see how such a forum would fare, and whether it could avoid chaos without becoming an echo chamber. Democracy isn't the public-opinion-style voting we see in forums today, but instead actual accountability of the moderators to the users.

Not claiming this is a novel insight, but it's new to me.


LambdaMOO tried switching to democracy around 1993, with "LambdaMOO Takes A New Direction". Basically, the mods ("wizards") instituted a petition system for technical changes and ceded all social decisions to a separate arbitration board. The resulting three and a half years of chaos is summarized in the last post on http://meatballwiki.org/wiki/LambdaMOO

In the end, the wizards published "LambdaMOO Takes Another Direction" and took back control, concluding:

> Over the course of the past three and a half years, it has become obvious that this [refraining from making social decisions] was an impossible ideal: The line between 'technical' and 'social' is not a clear one, and never can be.

A great deal has been written about this experiment (and a Web search will find much analysis, along with full text of LTAND and LTAND2), and there are a wide variety of perspectives on why LTAND failed, but one conclusion that nearly everybody seems to reach is that attempting to give a community democracy with no higher guidance is almost guaranteed to be a recipe for disaster.

Reflecting on all of this, I have no idea how the US founders managed to get something that worked at all, much less as well as it does. (And notice that it took them several tries to get it right.)


Freedom fighters rebelling against authoritarian regimes also start out with the goals of empowering the populace ("power to the people!"). They cast off their shackles, stage a coup and seize control...only to realize after some amount of chaos that people are incapable of governing themselves, and the former champion of freedom becomes the new dictator.


I completely agree. My premise is that you can only remove two of the four -- echo chamber, trolling, toxicity, community. If you accept this premise, the only logical conclusion is to remove trolling and toxicity. Without community the point is moot, after all, so really you're choosing two to remove from echo chamber, trolling, and toxicity.


Hmm, I think I would choose to remove toxicity and echo chamber. I don't mind a certain degree of bad-faith behavior if removing it would come at the cost of having discussions with people different from me.

I think maybe a good solution would be to (1) pay mods and (2) make everything they do transparent. This will give you better mods to start out with but also gives users the power to notice mod overreach before it spirals out of control.


What's the difference between trolling and toxicity? The words have seemed interchangeable in most contexts where I've heard them used.


Within a narrow community, a little bit of "echo chamber" is worth it for the quality of the commenting. I'd rather get downvoted a little and have to debate my minority opinion than have it drowned out by spam and bots here.


Well, your comment presupposes a trade between a little echo chamber and a lot of spam. Sure, but my comment is more about moving beyond pure spam to stuff like trolling -- call it low-level bad-faith behavior. I'm okay with some of that in exchange for a community which is a bit more diverse.


An echo chamber is the only real option. Computers are unaware of concepts of good or evil, so the best a moderation algorithm can do is enforce a certain viewpoint. The question is: which viewpoint? The viewpoint of the consensus of users, or the viewpoint of the community leadership?


I don't think moderation should be done by algorithm. As soon as you give the task back to humans, you're much more capable of shades of gray and thoughtful, real moderation. Humans have been moderating public spaces for thousands of years; we're more than up to the task if a little bit of care is put into the implementation.


But humans are VERY very slow at this. Even our world-class moderation systems (the legal systems of many countries) are excruciatingly slow, often taking months or years for a single decision.


MetaFilter is more of a non-forum than an echo chamber. There's almost 0 discussion; people go there for the links, not the conversation.


Have you ever... looked at the comments on MeFi? Some posts get lengthy, complex discussions, on subjects related to the link at hand; some do not. There is also Ask MeFi, where you can ask questions and get answers from other users (ads shown against this section to visitors without an account used to be a major portion of MeFi’s revenue until Google did some stuff that lowered MeFi’s search ranking). And there’s MetaTalk, which is for talking about the site and has its fair share of “hey let’s hang out and talk” posts.

I mean, yeah, it’s structured mostly around links, and you can certainly use it as a source of Interesting Links. But there’s conversation and community there if you look around a little.


I was a member there for years. There's a community, which is highly normative, but they also resist implementing threading or commenting by reference precisely to keep the focus on submissions.


I'm still a member! For the record, we resist implementing threading or commenting by reference because it makes for an unreadable discursive shitshow when trying to follow busy, active discussions.

Staying relatively on topic is an unrelated aspiration and one that we mostly let flex a lot depending on the specific thread and context.


This is remarkable for the terse confidence with which it misapprehends the actual structure and content of the site.


There is a fundamental approach that has not been tried yet:

- Incentives

The real world figured that out long ago. If you find something that is truly useful & timely, you would be willing to pay real money for it.

Google Answers (answers.google.com) tried an approach wherein a price can be put on a question, and any legit reply which answers that question can claim it. 'Reputation' definitely still plays a role in this, but the system is flexible enough to allow a newcomer to attempt answering a question & stake a claim to the funds.

The real world has many of these aspects sorted out, like calling a plumber or a carpenter from your neighborhood to get your work done. The problem is we have embarked on creating a 'global' network (aka FB) without first having adequately understood how to create strong family & community networks.
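
For what it's worth, the bounty mechanic is simple enough to sketch. This is a toy model in the spirit of what's described above, not Google Answers' actual system; every name and number in it is made up.

    from dataclasses import dataclass, field

    @dataclass
    class Question:
        text: str
        bounty: float                      # money escrowed by the asker
        answers: list = field(default_factory=list)

        def submit_answer(self, author: str, body: str):
            # Anyone may answer -- reputation helps, but newcomers can stake a claim.
            self.answers.append((author, body))

        def accept(self, index: int):
            # The asker accepts one answer; its author claims the escrowed funds.
            author, _ = self.answers[index]
            payout, self.bounty = self.bounty, 0.0
            return author, payout

    q = Question("How do I fix a leaking P-trap?", bounty=5.00)
    q.submit_answer("newcomer42", "Unscrew the trap, clean the washer, re-seat it.")
    print(q.accept(0))  # ('newcomer42', 5.0)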


Stack Overflow utilizes this really well. You get more privileges the more trusted you are.


skeptics stackexchange is a good example. Lots of people complain that too many comments are deleted, but it's an absolute lion's den of controversial issues. The mods do a very tough job and handle hate speech better than anywhere else I've seen.


Reputation will mean different things to different people.


I think you're underestimating the cost to society of your parenthetical. To the degree that "the personal is political", sharing the details of one's circumstances — especially as a marginalized or disenfranchised individual — can reveal ostensibly unique struggles to be widespread societal problems. Twitter does this, and has been good for highlighting shared experiences. We'll lose a platform for that very important, seemingly trivial disclosure if we improperly disincentivize contributions. We need to keep "poor participants" in the common conversation.

Secondly, I notice no mention of deliberate, paid propagandizing, i.e. professionally divisive sock-puppets employed by sock-puppet firms.

Any serious discussion of threats to a healthy public discourse must address deliberate attempts to undermine the legitimacy of the common voice.


HN generally works very well. The echo chamber problem is due to allowing downvotes IMO. In my experience that simply leads to minority viewpoints being downvoted. Instead, downvotes should be removed, and people should be allowed to flag abusive comments.


I think Hacker News' "echo-chamber-ness" is extremely exaggerated. It only feels like that if you're in the minority viewpoint in a thread (which happens to me, too). However, it's not echo-chambery if dissident viewpoints live side by side with dominant viewpoints, even if the latter are 80% of the thread replies and upvotes.

An echo chamber is arguably more about actively suppressing dissident viewpoints. Reddit is infamous for moderators doing that simply by deleting comments under some pretext like the 'spirit of the subreddit'. With Facebook there's a first-and-last-name-and-picture-visible shaming that can be scary and damaging, repulsing the opposite viewpoints. At a greater extreme, you can help foster an echo chamber by organizing a large group of people to scream and picket and threaten a speaker who has the wrong views, reminding all the others of what happens.

HN to me is an oasis. Even if I get downvoted when I have a minority view. I still feel as if intelligent arguments are considered.

With Reddit, how many intelligent comments are there? The English grammar alone is awful, full of shortcuts, cliches and new millennial-speak. But worse: the responses are short. One-liners. And even worse: argumentation is ad-hominem and emotive.

In summary, I think Reddit is about emotional expression, and HN is about (an attempt of) rigor and rationality.


I've seen a lot of very interesting, usually quite short comments on politics-related threads in the past few weeks, that were posted less than ten minutes prior and already grayed out and marked "[dead]". In each instance, the user was not using inflammatory language at all, yet HN was implicitly saying "yeah we're not going to allow discussion on this topic." I'm sure there's Very Good Reasons(TM) for this but it always feels like wasted opportunity for interesting, out-of-the-box discussion.


> In each instance, the user was not using inflammatory language at all,

Downvotes are not only for inflammatory language; a comment can be a negative contribution to the signal-to-noise ratio, and even violate the commenting guidelines, without using inflammatory language.


There might be a lot of reasons for that and we'd need to see specific links to say why, or make a good guess.


Ah. HN is special: they punish political discussions. It's unfortunate, even tragic in my opinion. They allow it sometimes if there's specifically a tech or science-related topic very very closely attached.

I avoid poking the moderator lions (I used to post political articles maybe a year or more ago), but I do wish HN would have another view of that particular topic. It's rather unavoidable that adults (and we are adults), highly-educated ones at that, would sometimes slip into politics when science or tech news (or legal news about tech or science) is discussed.

But yes, you're generally right about that.

I think emotive political discussion is useless, but rational policy discussions aren't useless.


On top of downvotes, you can say very toxic / abusive / condescending things and get away with it if you share the "correct" viewpoint, but unpopular views have to be exceptionally polite to avoid biased moderation. You can't bluntly refute or critique a questionable (but popular) argument without being accused of lacking civility...


> if you share the "correct" viewpoint, but unpopular views have to be exceptionally polite to avoid biased moderation

Where HN has been falling short (lately, in my observation) is where discussions about the ethics of certain business models get lost via the "buried" option or killed off completely.

You cannot come to HN to discuss the potentially negative ecological or economical impact of a YC company. The voting rings will literally send your comment or post to the void: buried or killed off completely. HN does still post lots of interesting links, but for truly interesting discussion that isn't (for lack of a better word) tainted by bias, I prefer Reddit these days.


Other areas where I see this happening on HN:

- discussing the risks of psychoactive drugs.

- pointing out flaws in overhyped press releases about the next wonder drug/treatment

I guess you're right that you can avoid getting downvoted by being exceptionally polite and spending about 15 minutes crafting a response saying "crap science, uncontrolled trial, possible placebo effect", but sometimes I just don't have the time and energy for that. I'd prefer it if people here didn't automatically assume I'm full of shit when I point out a flaw in an argument without writing my response absolutely perfectly the first time.


> "You cannot come to HN to discuss the potentially-negative ecological or economical impact of a YC company."

People say negative things on HN about YC companies all the time. We moderate HN less, not more, when YC or a YC-funded startup is at issue. That doesn't mean we don't moderate it at all—that would leave too much of a loophole—but we do moderate it less. This is literally the first principle that we tell everyone who moderates Hacker News. You can find many posts I've written about this over the years via https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme....


Thanks for the reply.

What I meant is that one cannot start a discussion of such things without being willing to lose lots of points and karma: observations that AirBnB might be doing more harm than good to cities with "housing crisis" issues, or that Uber and Lyft are actually harming public transportation ridership and putting more automobiles on the roads (creating congestion).

Two issues I've seen brought up here that get downvoted into oblivion. Why risk that? It's far easier for people to jump on the "attack the poster" bandwagon... as they have done to me in this thread.

Granted, I've been reading HN for over 11 years now, and the site is not the same as it used to be. A lot of interesting posters have left. Probably I need to lower my expectations for what to see when I come here.


It's hard to say why specific comments have been downvoted. Often it's because they break the site guidelines in ways the author didn't notice. Sometimes it's simply not fair, and other users need to (and often do) fix that by giving a corrective upvote.

Plenty of comments arguing that Airbnb/Uber might be doing more harm than good routinely get heavily upvoted, so I'd question your overall generalization.


The real issue is the binary choice. I might disagree with a comment but acknowledge it's a valid, well-thought-out argument. On the flip side, I might agree but acknowledge it's a poorly formed argument.

Something might be totally off topic or funny, but if it made me laugh, do I downvote it?

Slashdot's model of tagging posts was a pretty good idea, I think, and allowed one to filter out the 'funny' or 'offtopic' comments.


> I might disagree with a comment, but acknowledge it's a valid well thought out argument.

So, upvote and respond.

If it's a net positive contribution, you shouldn't be downvoting.

> Slashdot's model of tagging posts was a pretty good idea

It's a good model for a customizable user experience, and a bad model for a community. Those two goals are often opposed.


I disagree. Upvoting is like a high five; downvoting, especially on Reddit, is used to bury something people don't agree with. Downvotes, IMO, should require some sort of intellectual effort as to why you are actively burying a comment or post, and thus should require a reply.

Dragon, you commented on and downvoted something you are doing right now, which is participating in a comment system. No? At least you had the decency to reply, which most Redditors don't. Which makes Reddit toxic.


I never downvote a comment, on reddit or here, simply because I disagree. I find such behaviour (subjectively) wrong, since it does not encourage discussion in an open, civilized manner (on reddit, all that happens is that you get 30 comments deep and you just downvote each other's comments to 0 while being increasingly aggressive).

I reserve downvotes for when a comment is being needlessly toxic, doesn't contribute to the discussion, or is otherwise not helpful for an open discussion.

I think the best cure for "downvote to disagree" is to firmly hold to the principle that the opposing side of the argument has the best intentions to the extent of their knowledge, and that at the end of the discussion all participants should have learned something. You should also always be willing to change your mind on what you argue about.

Always.


HN works very poorly, and worse by the year. I've been using HN since 2009, and I've been on a wide variety of discussion platforms going back to usenet. HN has had its moment in the sun, and that has largely passed.

Voting on HN barely has an effect, and I suspect that the average votes per comment on HN has gone way down year over year. People just don't vote on posts as often as you'd think, not anymore. A related problem is that commentary doesn't go on for very long. In the usenet days you could have a good thread that would last for months and months and continue to spawn good and interesting commentary; a flash-in-the-pan thread might only last a few days. On HN the window of commentary for a post is rarely more than a day, and typically only a matter of hours. It's just people strafing comments into the void and then disengaging. Long comments typically don't get read, don't get upvoted, don't get commented on, etc.


> ...I suspect that the average votes per comment on HN has gone way down year over year

OK, I'll bite! A cursory look at the data shows a clear increase in average votes on comments from 2007 until 2012, which is the only year with a dip, followed by steady growth until the present all-time high.


Huh. Is this total votes or votes per comment? I'm curious what the median number of total votes on comments that have at least one vote is.


I wonder if (or how) one should take population into account, too. We might figure that a majority of the people read only threads that are on the frontpage, so, if the number of people on HN has doubled, then each thread will get viewed by 2x as many people, and, if the new crowd has the same likelihood of voting on each comment as the old crowd, then you'd expect 2x as many votes, assuming no change in comment quality. Instead you might want "votes per comment, divided by number of users".

Of course, there are lots of "all else being equal" implicit assumptions there. First, if the population doubles but stories move off the frontpage in 0.7x the time, then you'd only get 1.4x as many votes—and this is one of InclinedPlane's points. Second, the newer crowd could be significantly more, or significantly less, active. To control for these two things, the measure you might use instead is "votes per comment per pageview", or "votes per comment per second a user spends on the page". Third, there might be more comments posted—well, duh, it would be weird if the new users never posted any comments.

Fourth—and I think this is another thing InclinedPlane wants to focus on—comment quality could have changed. Comment sorting is relevant too, because I'm sure lots of users don't read everything. If we suppose that, due to an increase in population, we get 2x as many comments but they have the same quality distribution, and if we suppose the best comments always go to the top, then the average quality of the top n comments should increase; you can see something like this in extremely popular Reddit threads, where the top several highly upvoted comments are clearly optimized for something (often clever jokes). If we suppose a decent population of users only read the top n comments, and always use the same function that maps "quality of a comment" to "probability of upvoting", then, when the set of comments doubles and (by assumption) the best rise to the top, we'd expect these users to generate more upvotes overall, and hence "average votes per comment viewed" should go up. (It's also possible that people's standards would rise. But I think people's changing standards would lag behind the changes in what they're viewing.) That said, for the comments that aren't in the top n, the fraction of people that view them (and consequently might comment on them) would go down.

The question of how long threads sit on the frontpage is relevant, both for comment exposure and for InclinedPlane's point about conversation longevity. (There are also pages like "new" and the no-longer-linked-at-the-top "best".) I wonder how best to quantify that... perhaps "the frontpage tenure of the thread with the longest tenure of all threads on that day".
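
If anyone wants to play with this, the normalizations are trivial to compute. A toy example, assuming all numbers are invented; real figures would have to come from HN's data:

    def votes_per_comment(votes: int, comments: int) -> float:
        return votes / comments

    def votes_per_comment_per_pageview(votes: int, comments: int, pageviews: int) -> float:
        # Controls for both comment volume and how many people actually saw the thread.
        return votes / (comments * pageviews)

    # Hypothetical year-over-year figures: raw votes per comment doubles,
    # but once you control for pageviews the per-view rate is unchanged.
    print(votes_per_comment(2000, 500), votes_per_comment_per_pageview(2000, 500, 10_000))    # 4.0 0.0004
    print(votes_per_comment(8000, 1000), votes_per_comment_per_pageview(8000, 1000, 20_000))  # 8.0 0.0004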


> HN generally works very well

After reading a few discussions over the last few days, I was thinking to myself that HN was better than ever, and very good (with one serious shortcoming). Even the echo chamber is much better than I remember.

EDIT: The shortcoming, IMHO, is the abandonment of politics. HN is the ideal place to solve that problem, with a sophisticated audience open to and interested in experimentation and problem solving. The goal of propagandists is not to persuade you but to paralyze you; to shut down real discussion and debate. HN is, unwittingly, capitulating and cooperating with them. HN is another success for them.


> The shortcoming, IMHO, is the abandonment of politics. HN is the ideal place to solve that problem

No, it's really not, as HN demonstrates most of the time it interacts with politics.


> HN is the ideal place to solve that problem

That's an illusion, for reasons I attempted to describe here:

https://news.ycombinator.com/item?id=16443431


Thanks for responding.

> That's an illusion, for reasons I attempted to describe here

That implies that it's an unsolvable problem, if I understand correctly. There's no reason to think this problem is any more difficult than all the other 'unsolvable' ones, and this one is particularly, I would even say extremely, valuable to work on.

I don't believe we could simply introduce political topics and it would work due to some HN magic. It would take serious work and experimentation to find a solution, but I think HN is better suited than other places to do that work. And a solution could change discourse in the country and the world, at a time when the Internet discourse problems SV has invented have become a very dangerous weapon for some and are tearing society apart.

I realize that "we" means you and sctb more than anyone, and so it's a request and encouragement. I still think it's the most valuable thing HN could do, potentially world-changing. Previous generations had books and leaders that changed the course of history; this time it might be software or a software-based technique that turns the tide. I hope that at least you will keep it in mind.


I don't know that it's impossible. But if I'm certain about anything re HN, it's that it would be unwise to try to make it be that, for the same reason we don't do radical medical research on living humans.

Our first responsibility is to take care of what we have. The way to take care of a complex system is to be sensitive to feedback and adapt. We can apply that principle here. Look at what happens when the political taps get opened beyond a notch or two. Discussion becomes nasty, brutish, long, and predictable. That's what we want less of, so opening the taps all the way is not an option. For similar reasons, closing them all the way isn't an option either.

I don't disagree completely. I think there's a chance HN can slowly develop greater capacity in this area. But it would need to be very slow and not something we try directly to control. Anything as complex and fragile as HN needs a light touch.


Well... downvotes are definitely abused. They're not supposed to be used to express disagreement, and they are. All. The. Time. And it stinks to be on the receiving end of that.

But I'm not sure that they should be eliminated. The alternative is to leave moderation as the only way to deal with bad (abusive, off-topic, trolling, unintelligible) posts. I'm not sure that having people flag every bad post they think they see, and letting the moderators sort it out, is really the optimal way to do things.


> They're not supposed to be used to express disagreement

That's a common misconception. Downvoting for disagreement has always been ok on HN: https://news.ycombinator.com/item?id=16131314

I think people have the wrong idea about HN downvotes because they think Reddit rules apply to HN. It's a bit like how in Canada we think we have Miranda rights because we've seen it on American TV.


I would argue that while it's totally okay to do it, I'd find it better if people used it less for simply disagreeing. In my experience, the resulting discussion ends up being of poorer quality because of it (and gets less exposure due to being pushed down and hidden once it hits a certain threshold).


Such evidence as I'm aware of points in the opposite direction: HN without downvotes would be like a body without white blood cells. Disease would quickly kill it.

The problem with your argument is that it doesn't reckon with just how lousy bad internet comments are, or how many of them there are, or how completely they take over if allowed to. To a first approximation, bad internet comments are the entire problem of this site.

It's easy to imagine an HN that would be just like the current HN, only with some negative X (e.g. bad downvotes) removed. Most of these ideas are fantasies, because the thing you'd have to do to remove X would have massive side effects. You can't hold the rest of the site constant and do that.

https://news.ycombinator.com/item?id=16131314


>bad internet comments

Bad != disagree. I think we all agree that it should be ok to downvote and hide "bad" comments. However the problem is that many good comments are downvoted simply because people disagree with them.

I think it might be better to remove the downvote and replace it with "flag", so people can flag bad comments (spam, abusive, pointless, etc). At least that way people would need to think a little before the comment gets flagged, which would hopefully result in fewer minority viewpoint comments getting hidden.


I'm not arguing that we should never downvote; if someone is writing garbage, I will happily downvote them. But maybe people are a bit too quick to downvote when they disagree...


Sure, but people have been saying that on HN for many years.

I think you'd have more success if you ask everyone else to upvote unfairly downvoted comments.


I try to do that, yes, though I always (perhaps wrongly) hope that people will change...


I stand corrected. I agree with zaarn, though - overuse of downvotes for disagreement is not helpful for having a real discussion.


This is why 4chan always had it right: anonymous + minimal moderation. Instead of echo chambers you get extreme contrarianism. The upside of this is that non-conforming viewpoints aren't buried and low-effort trolling just gets ignored.


Reddit does not have strong moderation.

Some subreddits do have strong moderation; some subreddits think they have strong moderation but they have fucking idiot mods who call down trolls; and there are some subreddits that have permissive moderation and those subreddits leak.

> However if you create consequences or a cost to commenting you'd eliminate toxic comments and trolling.

You genuinely don't. There are forums where you have to pay real money to be able to read and post, and where acting like a jerk will get you banned. They haven't eliminated the jerks. About the only advantage is paid mods, which ensures some consistency.


I apologize -- I meant that the platform itself offers strong moderation. Unlike Facebook, for example. I agree with your point, though.


It closes subreddits at the owners' whim; that seems strong. It doesn't need to be always exercising that power to have demonstrated that it has and uses that power.


My friend introduced me to a project that he'd been working on that touches on some of these issues.

https://doxa.network/

While I can't come to a conclusion on it (whether or not it's a good/bad idea) - I wonder what others in the HN community think about it. I apologize in advance if this is a bad place to comment, I've been mostly a lurker thus far!


I think this has been posted before, actually. The underlying concept, I get. But I'm not sure how it'll ever become a 'product'.

In addition to all that, I still fail to see how a blockchain would solve any of these fundamental issues.


Reddit and HN are both missing two key ingredients on the moderation side: Transparency and accountability.

There's nobody watching the watchmen, essentially. That leads to a lot of frustration, anger, mistrust, and abuse.


In particular, there is no guarantee that moderators are any good. Whoever registered a subreddit first "owns" it and moderates it themselves/chooses additional moderators. That's it.

This means there was a "gold rush" in the early days, and if some shitter is sitting on prime real estate (like brand names of products) there's nothing you can do about it. If they decide to close the subreddit by making it private, nothing you can do about it. If they go inactive and are still squatting on prime digital real estate, nothing you can do about it... and if they later get hacked but are still inactive, doubly nothing you can do about it.


> Physical mail is sent to your house in order to get a single account.

That's how NextDoor works and it seems to work reasonably well.

There are some big communities that seem relatively healthy to me. For example, Instagram is easily my favorite social network. I see photos that my friends take and that's about it. I pull it up and am done with it in a minute or two.


Your list consists entirely of push media. Some examples of pull media that "play well with others", or at least are not actively anti-social, are podcasts and email listservs. Of course it's easier for advertisers to monetize push media.

In the long run push media is always going to be troll-ier and offend more people than pull media.


I believe that heavy moderation is necessary to weed out toxic stuff.

HN seems to be doing a great job in that regard (in my opinion), and it is a model other websites can learn from.


I think one way to help prevent echo chambers is to have term limits on moderators. So many moderators become sour towards their own communities but feel an obligation to stay involved. Term limits might help with that and encourage new users to become moderators themselves.


Relate 'karma' to the ability to post at all?

More karma, more posting; less karma, less posting. Everyone starts every month/week/day with only so much, with no roll-over per timeframe. Modify it so that 'popular' threads cost more to post in. You can give karma to others via an upvote and take it away with a downvote, but still no roll-over. Troll/shill accounts would still get upvoted en masse, but less so, and it would be 'easier' for mods to tell. (I'm sure you can model this without too much effort vis-à-vis the prisoner's dilemma.) You'd have to pick and choose which threads to comment in. Posting content would work similarly, but with a slight modification to the cost of posting.
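
Roughly, in code (a sketch only; the budget size, the reset period, and the popularity surcharge are all numbers pulled out of the air):

    WEEKLY_BUDGET = 20   # karma allowance; unused karma does not roll over
    BASE_COST = 1        # cost to post in a quiet thread

    def post_cost(thread_comments: int) -> int:
        # Popular threads cost more: +1 karma per 100 existing comments.
        return BASE_COST + thread_comments // 100

    class Poster:
        def __init__(self):
            self.budget = WEEKLY_BUDGET

        def try_post(self, thread_comments: int) -> bool:
            cost = post_cost(thread_comments)
            if self.budget < cost:
                return False          # out of karma until the weekly reset
            self.budget -= cost
            return True

        def weekly_reset(self):
            self.budget = WEEKLY_BUDGET

    u = Poster()
    print(u.try_post(thread_comments=450))  # costs 5 karma; True, 15 left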


The NRA has its own social network where users must perform certain tasks before they can post or comment. You have to tweet at your legislator or share things to your personal Facebook, to show fealty to the community, in order to earn enough karma to talk to others in it.

If you're curious: https://www.bloomberg.com/news/articles/2018-03-01/the-nra-h...


Jesus, that is effective.


Are you saying this would create less of an echo chamber?


I'd think so. More popular threads would 'cost' more to post in, and there would then be fewer posters in them as a result. 'Brigading' would be difficult.


On the contrary: popular opinions would gather more karma, thus allowing their holders to share their ideas more often, while unpopular opinions would quickly have their posting ability removed via downvotes. Soon enough only the prevailing popular opinion would be found.

I had the same idea at first but I don't think it would work in practice


Hmmm, you're right. Perhaps a sliding scale then? The cost to up/downvote increases exponentially?


> However, if you create consequences or a cost to commenting you'd eliminate toxic comments and trolling at the cost of less participation and an echo chamber (though, you could argue that not all participation is equal and you're mainly removing poor participants).

Twitter actually does this, but only for some types of accounts. Ever see the "see more" button under replies? That's where people rated as low quality go. Twitter has a sort of rating for figuring out whether someone is low quality, and such accounts usually get hidden. Even their likes and retweets get hidden.

It hasn't seemed to help overall on Twitter, though I have noticed I get fewer assholes replying to me.


I'd say hierarchical credibility. More authority. Give the leaders power to bless others with disproportionate power, who in turn have the ability to bless others with disproportionate power.

Remember that Reddit ultimately only imparts one vote per person, regardless of whether you're an admin or moderator or whatever. End that system. We've already established that these systems are not libertarian - the leadership has an opinion (in /r/Science the opinion is that you must post good science) - and we want to empower it to enforce that opinion.

Not only that, but provide negative feedback. If you endorse a terrible person, make it impact your credibility.
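
A bare-bones sketch of that endorsement scheme; the weight-sharing and penalty factors below are arbitrary choices, just to show the negative-feedback loop:

    class Member:
        def __init__(self, name: str, weight: float = 1.0):
            self.name = name
            self.weight = weight               # vote weight; leaders start higher
            self.endorsed = []

        def endorse(self, other):
            other.weight += 0.5 * self.weight  # blessing shares your authority
            self.endorsed.append(other)

    def sanction(bad, everyone):
        # Negative feedback: endorsing someone later sanctioned costs you too.
        bad.weight *= 0.1
        for m in everyone:
            if bad in m.endorsed:
                m.weight *= 0.8

    mod, alice = Member("mod", weight=10.0), Member("alice")
    mod.endorse(alice)                                   # alice's weight: 1.0 -> 6.0
    sanction(alice, [mod, alice])
    print(round(mod.weight, 1), round(alice.weight, 1))  # 8.0 0.6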


It is wildly inaccurate to call HN or Reddit “strong moderation” — the tooling alone is abysmally bad, plus Reddit can’t put the “post anything you want as long as it is not literally illegal” genie back in the bottle so easily after a decade.


I don't see the connection between the quality of the tooling and whether or not there's strong moderation. HN is pretty strongly moderated and reddit inherently allows for very strong moderation. Do you think HN is not strictly moderated or that reddit does not allow for strict moderation?

I'll agree that reddit doesn't seem to expose easy to use moderating tools, though.


I feel like HN is such an echo chamber it doesn't require strong moderation.


Real identities don't work when FB threads aren't indexed by Google.

If googling myself found comments I'd make on "platform X," you can be sure I'd carefully consider my comments on that platform.

Having anonymous commenting is important, but if you want a less toxic platform for mainstream comments, real identities + indexing is a good start.

The real WTF moment for online discussion is yet to come. When ML chatbots are able to comment indistinguishably from humans, purely as a matter of time and scale, comment threads are eventually going to become chatbots arguing with chatbots, drowning out actual discussion by humans.


This nightmare scenario has natural stable states, I think. At some point, it will cease to be profitable to propagandize at machines, no? We should work to construct a society that negatively incentivizes bot proliferation, but it can't be a purely financial incentive. Free speech must not have a price tag, or else "stop, thief" may become impossible to shout.


IMO the problem is size/scale. If you're too small, then you just have people going through the motions of talking to each other in an attempt to make the platform seem real so more people will join it. If you're too big, you won't be able to curate effectively. There's a sweet spot in the middle: quality starts heading up around maybe 20 or 30 users (the point at which it becomes impractical for everyone to know everyone), and when it starts heading back down depends on the culture, but go down it will.


I'd like to see what impact posting limits would have. For instance, let every user make 2 submissions and 10 comments per month. One of the issues on every forum is that it takes a lot less time to make a ton of low-effort comments than it does to make one thoughtful post. Another is that a small minority ends up making the vast majority of the posts (even when that minority is well intentioned).
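
A sketch of how simple that cap would be to enforce (the limits and the in-memory counter are placeholders, of course):

    from collections import defaultdict

    LIMITS = {"submission": 2, "comment": 10}   # per user, per month
    usage = defaultdict(lambda: {"submission": 0, "comment": 0})

    def may_post(user: str, kind: str, month: str) -> bool:
        counts = usage[(user, month)]
        if counts[kind] >= LIMITS[kind]:
            return False            # quota exhausted until next month
        counts[kind] += 1
        return True

    print(may_post("alice", "comment", "2018-03"))  # True; 9 comments left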


Another key factor is the community's reaction to low-effort posts. Many users of social media are not looking for, or willing to put in the time and effort to digest, more in-depth discussion, and will happily skip over more thoughtful posts in favour of the next meme or bite-sized thought. On a platform such as Reddit, where votes per unit time are paramount, this results in the burying of most longer discussion.


Set up a forum and try it, then you too can be the mayor of an internet ghost town.


You still have Sybil attacks.


We’ve tried real identities plus serving as many ads against user conversations as possible (Facebook), and “algorithms” designed to keep users on as long as possible regardless of the cost to their mood, or the world around them. Perhaps “social media” should stop trying to be a way to make lots of money.

Naaah. Gotta monetize every inch of everyone’s life. It’s the American way.


I think Nextdoor follows this model, and it's still notorious for trolling/internet fights.


I’m amazed at how much trolling there is on Nextdoor. It’s not hate-speech level, and most posts seem pretty cordial, but considering the community venue aspect I’m just floored at how reflexively shitty people can be the minute there’s a divergence of opinions.


Yeah, Nextdoor is astonishingly vile given the non-zero chance of actually running into one of your fellow members at the grocery store.

My hypothesis is that Nextdoor primarily appeals to those who already have that kind of territorial "they're all out to get us" mindset since territoriality is literally what the app is about. It's the angry "get off my lawn" old man of social networks and it attracts exactly those kinds of people. Mix in some Internet depersonalization effects and you get something pretty nasty.


It's a virtual home owners association in my experience.


My Nextdoor community gets a lot of hate speech. They have filters that catch the words you know, but they don't have a way to stop the ideas.

My neighborhood is 50%+ non English Speaking Chinese. There are posts almost everyday that say things like..

"Some people in this neighborhood need to OPEN THEIR EYES and stop hitting the gate with their cars. I'll post this in the appropriate languages so everyone and the community can appreciate the significance of my message. Ching-Chang Chow... Bang Bong, Bing bong bing."


That must depend on the neighborhood - I don't see any of that.


There was a podcast (I think Freakonomics?) saying that sports forums had the least toxic political discussions on the internet. This was probably due to the participants' common interest in a sports team overriding their political views.


>Thoughts?

Imagine a blank graph. Randomly place 1 million circles on it, no two with identical boundaries. This is our hypothetical Venn diagram of people's preferences for how communication happens online. Drop another million circles to represent how those people will actually act online.

It doesn't matter which points on the graph are labeled "toxicity," "echo chamber," "uncomfortably friendly," or "sparse comments, experts only." It only matters that the circles are not all identical.


Metafilter set the gold standard: accounts require a one-time payment to discourage sock puppets and they have full-time moderators who’ll jump in before behavior gets out of control and tell people to tone it down.

I don’t know an easy way to scale that model.


It seems like if moderators of subreddits were given the ability to a) limit posting rights to paid users and b) ban at will, most subreddits would see a much improved commenting culture.

It would also help reddit monetize.


> Real identities

Real real identities (i.e. government-issued digital IDs) have never been done. I am sure they will come eventually. The political process is just very slow compared to the pace of technology.


The purpose of using real identities (government ID) is not to facilitate debate and sharing of opinions, but to punish and neuter debate and limit sharing of opinions.

Compare the outcomes of totally anonymous reputation-based forums such as HN, reddit or 4chan with near-real-identity forums such as Facebook or LinkedIn.

There is a very open flow of ideas and debate on 4chan and other reputation-based forums. There is at least as much hate speech and trolling going on on Facebook as on HN, yet Facebook has near-real identities. LinkedIn has at least as much spam and criminal phishing going on as HN, yet LinkedIn has near-real identities.

HN would not be a better forum if everyone had to register with their government ID. The main benefit would be to make it easier to ban one person from accessing the forum, and silence that individual.

My vote is for reputation-based forums.


South Korea tried this for a while, but I believe it eventually gave up on it. China effectively manages it transparently.


No one will use it. Anonymous commenting is much more fun.


> There's no real solution to this problem.

I disagree. There IS a solution, but no one likes it.

Segway: Football, aka soccer, had a problem back in the day. Games became (more?) boring because teams would go up a goal and just play "kick the ball to the goalie". The goalie would pick up the ball, bounce it, pass to a player who kicked it back to the goalie, rinse and repeat. Then they instituted the back-pass rule - https://en.wikipedia.org/wiki/Back-pass_rule - where, basically, if the ball is passed back to the keeper he can't use his hands, only his feet. A generation of goalies had to learn to play with the ball at their feet, and the bit people enjoyed happened more often.

Rugby Union has had a similar evolution, although more intense. Rugby's goal is to have a game that can be played by people of all shapes and sizes, and they mostly succeed. George Gregan was a great player at 5'9", and most second rowers are over 200cm/6'7". But rugby always gets boring, because the coaches start playing boring rugby (lots of kicks, lots of positional play, less running). So rugby, every few years, overhauls the rules. For a year or two the game is exciting again, IMHO the best sport in the world; then it is boring all over again as coaches play it safe. We then get another overhaul.

I think the soccer back-pass rule and rugby's ever-changing rules are models of how changes in rules can dramatically affect the quality of an activity, but I don't think they are doable for online sites at scale. Instead, the only solution I can see working is to constantly change to new platforms.

I started in SEO circa 2001, and there were heaps of forums run on phpBB et al. They had all recently started, so they were figuring themselves out. The forums all interacted, but they all had their own feel and their own rules. Over a few years they developed "personalities", and everyone could find a place that suited them. This "personality" then morphed into a kind of groupthink, and the forums grew into echo chambers where the rules were strictly enforced to keep out the others, and each forum became hostile to all the evil others, and they devolved from fun places to hang out into the same old same old.

SM then came along, with sites like FB, Digg and reddit being born, and this process started again. A new place, no real rules yet, and it was the same exciting process of discovery. Over time, the bad parts set in, and these places became stale echo chambers filled with all the bad bits everyone talks about.

That's why I think the only real solution is to tear it all down and start afresh, because this process has repeated several times now. I think that partly explains why SnapChat and the other platforms exist, and why, IMHO, SnapChat et al will grow to a point and then fail to grow anymore, as the "freshness" fades and all the nastiness intrudes. Unless sites can figure out a way to change this process, which, after almost two decades of watching it repeat, seems either unlikely (the pessimistic view) or too soon to tell (the optimistic one).

TL;DR when a platform gets entrenched, it starts to exhibit more nasty traits, and a new platform started afresh is the only solution.


just fyi I enjoyed your comment and anecdote but the word you're looking for is "segue" not "segway."


Not quite correct.

We've tried:

- Real identities (Facebook comments)

- Voting and self moderation (Reddit, HN, etc.)

- Strong moderation (Reddit, HN)

All within the context of rampant financialisation, land policies returning us to feudalism, and continuous lying about the banking bailout, including removing the one candidate who was going to take on the banks (Bernie).

Maybe it's not "the internet" that's the problem. Maybe the dissemination of information as we slide into this rentier hellpit is causing people to be pretty pissed off?


How can I read up on these land policies?


Here's what I know or believe I know about feudalism. It's almost entirely from the book https://www.amazon.com/Pre-Industrial-Societies-Anatomy-Pre-... , which while excellent does not deal with feudalism in any depth.

- European feudalism was an unusual system in that the government had no taxing power. The lord who owned land held the taxing power ("feudal dues") over that land, and the king funded himself by collecting feudal dues from land he owned personally, rather than e.g. by taxing the dukes. This might contrast with a more advanced state of civilization in which the caliph / emperor / whatever collected taxes directly from everywhere by virtue of being the supreme ruler, and paid a salary to his lower administrative functionaries. Or it might contrast with a system where the use of land wasn't much of a source of taxes. Or both of those latter things might be true simultaneously.

The US system of property taxes has a lot in common with the system I've described above, and some obvious differences. Similarities:

- The federal government ("king") can't assess property taxes. Only the states ("local lords") can do that.

- People other than the government cannot own land outright, but must pay the property tax ("feudal dues") for its use every year.

Of course, rather than the federal government receiving tax income based on federally owned land, it instead double-taxes the citizens of the states. (But on mercantile revenue rather than on land.) This is arguably worse than the feudal system.


How are devolved local states equivalent to private landlords? States capture land value via land tax and socialise this. Private landlords capture it and keep it.

Night and day.


Are you referring to a feudal lord as a "private landlord" or a "devolved local state"? Both would be more or less fully appropriate.

"Devolved local state" is slightly more accurate than "private landlord", because unlike a landlord in a more commercialized society, a feudal lord was not legally able to sell the land he owned. He was legally able to govern it.


My interest is that the feudal lord was living off the backs of others. The state taxing land to build hospitals is not the same thing.


You might enjoy Stop, Thief!: The Commons, Enclosures, and Resistance https://www.goodreads.com/book/show/17802312-stop-thief


Lords often got their start in lording by building a bridge.


I would read the heck out of a book that argued this.


I don't like The Economist; however, a while back they did a piece on land value tax. Maybe HN will find it more palatable coming from that journal:

https://www.economist.com/blogs/freeexchange/2015/04/land-va...


The real solution is making blatant lies and misinformation illegal.


Without perfect information and transparency, there's no way to make lies illegal.


And who determines what's true and what's not?


At the risk of excessive red tape, the judicial system has been doing a reasonable job of determining facts in most modern countries.


It actually doesn't. Example: did OJ kill anybody?

The judicial system determines whether someone is guilty based on evidence that it itself filters.

What about more complex things? What if I say "P = NP"? Is that a lie?

And then, do you want a trial for every comment? That won't work for even a fraction of comments.


>Like what if I say "P = NP"? Is that a lie?

That can't be determined to be a lie unless someone solves the problem. It's only a conjecture or assertion until then.

>And then, do you want a trial for every comment? That will not work even for a fraction of comments.

It only really has to apply to political statements made by the most powerful office holders, and only when contested, and only when there is an imminent intent to deceive and impact policy.

It's not really an all-or-nothing situation. It's just a matter of how much can be achieved at a reasonable cost.


This problem is easy to solve: give users the tools to do their own filtering. But services like to pretend that it's a hard problem because they don't want to actually cede that control to their users. The whole point of controlling the communication networks is to control what and how people say and hear.


It’s surprising how much social media bullshits on this issue. RSS had the standard and solution a decade ago.

Being able to filter even by simple regex would be so much better than Facebook/Google’s “algorithm.” But it would also allow us to filter out ads.

It would also allow for community-driven white/black lists.

But again, it wouldn’t allow Coke and HBO to make you look at stuff.
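
As a sketch of how little machinery this needs: a community-shared blocklist of regexes applied client-side before anything renders. The patterns and the feed format here are invented for illustration.

    import re

    BLOCKLIST = [
        re.compile(r"\bsponsored\b", re.I),       # ads
        re.compile(r"you won'?t believe", re.I),  # clickbait
    ]

    def filter_feed(items):
        return [i for i in items if not any(p.search(i) for p in BLOCKLIST)]

    feed = ["Sponsored: try Coke Zero today",
            "Ask HN: favorite RSS readers?",
            "You won't believe this one trick"]
    print(filter_feed(feed))  # only the Ask HN item survives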


And Usenet and email had the solution a decade or two before RSS. The march of progress is in dumbing things down so the average person doesn't have to think, and the intelligent person isn't permitted to think.


Harrison Bergeron


I actually use some combination of Tampermonkey and uBlock to filter out Facebook ads (I forget which finally did it, since they work hard to make those things tricky to block). Haven't thought about using it to filter out offensive posts, but that's because I just unfollow people if they pass a completely subjective tolerance barrier.

By virtue of delivering content to your browser, Facebook has put the power in your hands. It's how you use your browser that determines what you see. ;)


Indeed. Social media users don't get those tools because they are not the customers; they are the product.


The people who downvoted you are idiots.


I didn't downvote, but it may be that those that did are tired of seeing this same "insight" on every single story about a free service.



I don't think it's that easy.

Having a toxic sub-community leaks into other communities. I've noticed that myself: the /lit/ (literature) board on 4chan has been taken over by /pol/ users.

It has been shown that when Reddit banned two toxic subreddits, the users of those subreddits either left completely or drastically reduced their hate speech in other subreddits:

>Within the frame RQ1 and RQ2 provide, we find that the ban worked for Reddit. Many more accounts than expected discontinued their use of the site; and, among those that stayed active, there was a drastic decrease (of at least 80%) in their hate speech use

See http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf

In other words, even if you filter your own subreddits to the ones you 'like', chances are that the toxicity leaks into your favorites.

(This is the same paper quoted in the OP's text)


I strongly disagree, although unfortunately I can't really propose a better solution.

Filtering just enhances the echo chamber. It's true that social networks are proactive about it and "pre-filter" things, but I think they're actually getting pretty good at it; they give the users what they want. And what we want is content that agrees with us, makes us feel right and confirms our biases. People effectively "filter" by selecting who they follow, what they subscribe to, etc.

Furthermore, I don't think filtering "toxicity" in huge open communities can really work, because most people out there are terrible at "not feeding the trolls". I've tried /ignore-ing trolls on IRC, for instance, but then you're left with the messages of other people engaging the troll, and it's somewhat even worse because you don't have the context. Then you try to block the people replying to the troll, but then you have people replying to the people replying to the troll...

Personally I feel it's mostly a non problem, I don't have twitter, I don't have a reddit account, I don't have facebook... I'm only active on smaller communities like HN. I think humans simply suck at behaving correctly in large crowds, especially anonymously online. There's no fixing reddit or facebook, they've just become the modern equivalent of scribbling a swastika on a bathroom stall. Cat gifs, pun threads, racist comments, it's the same thing really, instant gratification without any effort, just watch the karma go up or down. If you don't set any bar for acceptable content then you're bound to be overrun with low-effort content.


That sounds great for people who are already invested in Reddit but terrible for newcomers who don't want to do a bunch of work to get Reddit into a usable state.


Reddit has default subreddits. It could easily have default filters that users could change/augment if they wanted to.


> Reddit has default subreddits.

Didn't it get rid of them a year ago?


I don't think that's so easy, or a good enough solution for everyone, but I agree it's a good tool. In particular, I think it would also be more effective if we had better internet accounts, which were more portable and could be used with more services.


Reddit actually has this, and it essentially saved my experience of the site and kept me around as a user. I browse r/all frequently to discover new content and then use their subreddit filtering as essentially a robust negative keyword list.


Yeah, it's a platforms-vs-protocols issue. I much preferred the Usenet model and wish there were some capital/effort behind rehabilitating it. The open web comment protocol idea seems promising, but I have yet to see it implemented anywhere.


Absolutely. The internet is a tremendously powerful tool for politically motivated entities to control exactly what content and ideas hundreds of millions of people consume, in a very ironically authoritarian and anti-neutral way, despite their having done a fantastic job of convincing people otherwise.

Also, that is why I enjoy playing Valve's online games and not Blizzard's or Riot's. No politics or babysitting, just the ability to mute everyone if you so choose. The "no BS" approach is also quite apparent in the game design itself.


Most of us solve it by not visiting the political subreddits.


This is absolutely it. You can mark the day that Facebook, Twitter, Instagram, et al. started to suck, and it's when they removed control of what was in the feed from the users.


Twitter has keyword filtering at the user level and it's magical.


Just the past week:

- New Yorker publishes this article

- Mayor of London at SXSW called for tighter tech regulation.

- UK Culture Secretary declares social media "broken", states he wants to time-limit children's access to social media and "impose stricter checks on the age limits when children can first set up a social media profile"

- Tim Berners-Lee said "we must regulate tech firms to prevent 'weaponised' web"

- The Economist prints an article "On Twitter, falsehood spreads faster than truth" (reporting on a recent study by MIT’s Laboratory for Social Machines)

- A report is delivered to European Commission: "Final report of the High Level Expert Group on Fake News and Online Disinformation"

- The House of Lords International Relations Committee concluded tech firms were negatively affecting our society

Blimey...


Yep, I agree. Politicians & their PR firms are getting the beehive stirred up, swaying public opinion. It's the governmental version of http://www.paulgraham.com/submarine.html


- moral panic over video games in the news again

- Florida legislature declares porn a health risk rather than debate guns after a school shooting


I had to scroll all the way down to your comment before somebody finally got the meta view. Just amazing how the usually brilliant minds of HN are baited into this moronic censorship discussion.


It's worth noting that instead of community management, Reddit has been soft-pivoting to become a new Facebook with features like group chat, new profile designs, and a News Feed-esque view on the official mobile apps.

Incidentally, Facebook itself hasn't solved the problem of its community/fake news, hence the recent algorithmic changes to the News Feed to surface more content from friends, and a public push toward Facebook Groups. To be like Reddit.

It's a never-ending cycle.


They couldn't make enough money with their current monetization model, so they're copying the arguably most successful player in their market (FB). It'll be interesting to see how it plays out, given reddit's "anonymity", its lack of real social connections (friends, family, that cute person who sits next to you in econ), and its lack of any kind of networked apps (WhatsApp, Instagram, Messenger, ...).


Bah, I recently deleted my five-year-old account with nearly 4000 comment karma on Reddit.

Reddit is toxic. Saying something benign but mildly disagreeable caused some people to get emotional, and I do mean exceedingly emotional, as if you had just crapped in their car and had sex with their girlfriend. I am not talking about politics, but subreddits like javascript and programming.

Worse than being toxic, Reddit is an echo chamber. Many users treat vote counts not as mere validation but as proof of righteousness or correctness, which means opposing votes read not as disagreement but as vilification.

I just found myself becoming progressively more depressed contributing to conversations there. In the end it seemed to almost never be about the conversation but about being RIGHT + 1!!!

Since deleting my account a month ago I am a much happier person.

Hacker News has voting as well, but with an incredibly important distinction: the votes are hidden from all but the respective contributors, so votes can never be used as any kind of vindication by ignorant people.


There are definitely reddit communities where this isn't true, though these are largely small subreddits that focus on some particular topic.


This is often mentioned, but the examples keep changing: any small, nice community that grows too large will fill with noise and become unpleasant. It's the reason I've been moving to lower-volume, higher-intimacy websites recently, hoping to filter out the noise. Unfortunately, it's not really easy.


Reddit used to be a place where you could post links to anything on the internet (kind of like this place) and comment on it (without signing up with your email). Now it's a place more like Facebook, with algorithms based on your personal taste, geolocation, forced email, shadow banning, banning of politically incorrect subreddits, endless political posts, and "Futurology" and "Uplifting" news articles. Free speech on reddit died a long time ago.


Reddit has a common frontpage (two, actually: /popular and /all) and user-customizable list of subreddits.

> algorithms based on your personal taste,

only for recommending subs to try

> geolocation

only for recommending subs to try

> forced email

no, it doesn't

> shadow banning

extremely rare, and per-sub, not reddit-wide

> banning of politically incorrect subreddits

extremely rare; most of why Reddit is publicly despised is because it allows politically incorrect subreddits.


> extremely rare, and per-sub, not reddit-wide

from your list, that's the only item that's not correct:

there's a site-wide shadow ban that only admins can apply. that means if you post something, nobody will see it, but to you everything shows up as normal. subreddit moderators can approve your comment and it will show up. this started as a measure against spammers, but it seems normal users get shadow banned too.

then there's the normal ban, where you can be banned from a subreddit (you can't post but you can browse), and there's the reddit-wide ban (your account gets basically deleted).

and there's also the AutoModerator "shadow ban", which uses reddit's AutoModerator to delete a user's comments as soon as they get posted.


How do you know the latter are rare, given that they're not publicly disclosed? You only potentially find out about the former if you're a moderator, or perhaps the person banned. Bans of subs you only find out about if you're in the sub, or if you watch every sub to check for bans. Or is there a ban list from reddit I don't know about?


Reddit only used free speech as a draw until it got enough users to make money by pushing agendas. It's been pushing agendas with heavy censorship for years now, but some people only started noticing around the 2016 US election, when they didn't even try to hide their obvious bias anymore.

In an age where any opposing opinion is considered "toxic" by some, I don't think trying to "detoxify" should be a goal. If you don't like the opinions of a subreddit, don't go to that one.


People attribute to malice what is best attributed to entropy.

The old reddit allowed actual pedophiles to describe their issues in AskReddit threads.

There's even a dark AMA discussing a relationship that would make most people ill.

----

Unfortunately what social media is exposing, is weaknesses in how human beings agglomerate.

If someone makes a forum for dead baby jokes - distasteful to many, but not harmful in private - it eventually attracts people who think that dead baby jokes are the norm.

Or in worse terms: a sub for making off-color jokes soon gets overrun by people who think off-color jokes are not a risqué deviation from normal behavior but actually are normal behavior.

----

On top of this, the internet and social communities are extremely low on contextual information - which is critical for most people to understand a conversation in real life.

This means that the only way someone can know what you mean when you are being vague is if you make a constant effort toward tonal and conceptual accuracy in your comments.

Obviously this is challenging, leading to a cascade of misunderstandings, which only serve to polarize groups more.

----

There's a lot of research being done on how people behave online, and it's just a grim picture.

How Community Feedback Shapes User Behavior (https://arxiv.org/abs/1405.1429)

The spreading of misinformation online (http://www.pnas.org/content/113/3/554.full)

The effect of the ban on hate subs on reddit (http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf)


It's been relatively rare that unpopular opinions offered in good faith are punished with anything other than downvotes on reddit. People don't post opinions with the intent to convince; they post with the intent to deceive or create a reaction, i.e., trolling. That has been the big wave that has taken over; it's just taken over the conservative political sphere far quicker.

The other wave is the circle-jerk: a relentless and all-encompassing embrace of confirmation bias, institutionalized to keep a fundamentalist purity of thought. This is beyond a normal echo chamber; these are high-powered ideology amplifiers.

We've never had the concept of unlimited freedom of speech, and most of the time that it's been curtailed, it has been to protect against those who use it in bad faith ("fire in a crowded theater", etc.). That is the toxicity. IMO Reddit is going to learn this lesson far too late, and allow a group that doesn't even want to be there to take down the whole site when they leave.


It's not at all rare, and it's endorsed by the admins. They LITERALLY changed their algorithm to keep the Donald Trump fan subreddit from appearing in the /all filter and their posts from ever reaching the front page.

When you change your searching and indexing algorithms to suppress fans of a specific politician but not fans of other politicians, that's not really good faith.


> Reddit used to be a place where you could post links to anything on the internet (kind of like this place) and comment on it (without signing up with your email).

There's no money to be made with that.

Hacker News is different because it's just a funnel and an online playground for folks at YC. The day they decide that the signal-to-noise ratio has dropped past their threshold, this place will be gone faster than the Arc server can shut down.


Actually, there was money to be made with that.

Reddit generates enough to continue being Reddit through the gold program.

What happened was Reddit got sold, and that large amount of money raised expectations about "making money", in particular, equating massive traffic to massive potential dollars.

Reddit would have been just fine, and would have seen few real financial issues had it not been burdened with providing a return for a rather large sale price.

Just because it can be sold doesn't mean it should be.

There is an inherent conflict between users' goals and how they derive value from a site like Reddit, and the builders, who may or may not align with what attracted users in the first place.

It's a difficult problem, and one that Aaron spoke to a few times. The idea of sustainable community discussion was solid, and clearly resonant with a lot of people and probably the broadest demographics around.

Reddit appeals to everyone from teens on up. Kind of amazing.

Its current owners paid a price high enough to remove the sustainable discussion idea from the table, favoring a more monetized model.

That has had ripple effects. Not all good, though some are.

Today, Reddit has factions. The vast majority use Reddit casually, often on mobile. The basic app works, but not well.

There is a much harder-core set, a good third or more (I'm one of them), who just use the old-school browser interface, which works just fine on modern mobile.

This third contains many of the community-minded people. There is growing angst over changes and a steady divergence in directions, leaving many who understand Reddit and its users well in an increasingly rough place.

I don't see it ending well.

Frankly, someone is going to do the sustainable discussion thing again, and maybe get it right. Making gazillions won't be in the cards, but a very nice living will be.

Should that happen, whoever does it will own discussion big time.

The strength in users funding user discussion comes from the absence of that inherent conflict that always manifests when money gets too big and expectations fail to align with what users see as the real value.

Maybe people being the customers, paying for one another's discussion rather than for ad delivery, is worth exploring more. It won't be as sexy as something that sells, or that will IPO in a way that sets up founders for life, but it may well be the perfect place to deliver answers to some of the harder questions related to large-group discussion dynamics.

I have been experimenting with the idea of not making a bunch of rules that define how shitty people can be to one another.

The stated principle is that being a good human carries no risk. The more shitty someone is, the greater their costs and risks get.

Risks include bans, time outs and such.

Costs include various forms of enforced compliance being required to post.

Remedies for being a good human are basically a return to normal peer status.

In particular, a gradual escalation of the cost to contribute, long before a time-out or ban happens, seems to send people a signal. A lot more is recoverable.

I see others modeling these things, demonstrating de-escalation, and in general trying to recover things before any admin-type action is taken. I also see them mentoring newbies toward the safe and positive. Very important, as that kind of distributed person-to-person action does not require tools and cannot be bought.

This very strongly suggests community norms can be used instead of explicit rules to manage things toward value and away from noise.

Ever notice how alphas looking to start shit have a keen interest in the rules? Of course! They want to know what they can get away with.

Take that away and their behavior does change.

A secondary part of all this involves evaluating risk reward for trolls and others making noise.

I'll not put it here, as my thoughts are incomplete and some work is in process, but I will say that tools, automation, filters, and rules are all indirect means, and do not address the risk-reward inherent in causing grief.

I will also say we can't do much about the low risk inherent in just making an account to chat, but we can definitely impact the reward.

There are more ways to get at this than we see out there right now.


Yeah - if reddit had a sustainable model in place of a VC model, it would be a different story.

The best example for reddit is a national park, imo.

As for rules -

Rules are not the first line of your defense.

The first line of defense is the price of admission into your community.

I have seen a large number of forums now, and frankly most of the heavy lifting is done by the type of people and the topic.

For example, boring technical/professional forums and intellectually or otherwise taxing topics result in a self-selection process that reduces noise.

This is also supported by research on how different types of information are spread through networks - http://www.pnas.org/content/113/3/554.full

A classic example of how barriers to entry make an impact - take a look at the dwarf fortress forums around 2008-2010.

The game is arcane, and to grok the game you need to be willing to put in an unusual effort at the time.

The resulting main boards are pretty good; the signal-to-noise ratios are healthy.

----

The next major rule: general topics are bad. They allow people to farm tangential credibility, and anyone with an opinion can speak.

The worst offenders are topics on religion (/identity) and politics.

If you allow pol on your forum, you are fighting a losing battle. Pol has the lowest barrier to entry, but actual policy and politics are governed by complicated facts and hidden information.

Politics affects everyone, and is designed to be associated with hot button topics.

This single-handedly will poison and polarize your community.

----

Between these two rules, you can get enough positive starting runway to create a healthy community for a while. Eventually the community will grow old and degrade, but you will avoid a whole host of other problems.


I disagree with your first line of defense, but only in the context of more general discussion.

You are on solid ground otherwise. :D

Norms, established members, and the concepts of owning our end of a discussion, as well as weighing what others say, can do wonders against general fuckery.


What's your argument against the first Line of Defense/barrier to participation?


By the way, it's entirely appropriate and desirable for anyone to speak, given a broad or general audience, purpose, or topic.

Avoiding those doesn't get us answers to hard questions. It simply pushes the problems out of scope.

In some contexts, better understanding one another does a whole lot of good.

That is also a focus of mine personally.

My comments should be taken with that idea as context.


First, and again, you are on solid ground with specific discussion topic focused communities.

Assuming a more general discussion, and/or one where growth and participation are desired, having strong norms in place immunizes the community against nefarious players.

When someone new shows up, the idea is to give them some rope. Let them self identify, and self select too.

When most of the community gets this, raising a bad actor's cost and reducing their reward both pack a big punch.

You aren't wrong. What you put here does work, but it has a lot of friction, which may be undesirable.

See, a nefarious actor has a low risk and generally low cost to entry.

Why enter?

They also have a high reward potential, and it's that favorable ratio that consistently attracts them. Above someone mentions $10 as a barrier to entry.

$10 is cheap ass compared to the fun one can have! No joke. In my study of this, I've met many who would pay that easily.

If that friction, however high it needs to be, is desirable, fine. Raise the cost enough and the outcome will be a low-noise, but also low-churn, low-growth community, generally speaking. A very compelling but specific topic can alter that too.

Otherwise, the community, its founders, moderators, and others of status or influence, however that is all structured, only has control over the reward part, and can have some control over the cost too.

See, the topic of discussion here is more general chatter. We have more focused things well on the way to acceptable signal-to-noise. What we don't have is general community discussion, say politics or other broad topics, in that same state.

It's that which I have focused on for a very long time. Personal interest.

Norms and various discussion devices can deny nefarious actors joy, and can frankly make them regret their intent. At the same time, consistent calls to join the community, based on their own arguments framed in positive ways, can work wonders.

One discussion norm is weighing what one gets. When a clown calls you an ass, that can be laughable just as easily as it can be a basis for righteous indignation.

The vast majority of the time, righteous indignation is the response. "How dare you!" From there, the reward gets pretty good for the clown.

What happens when the response is to treat it as laughable? One actually laughs, or does something basic like rating the shit for what it is?

I can tell you the reward on all that is much lower. When they realize that is a norm? Low joy, high pain potential, particularly when they figure out they really are seen as a clown, or ass.

All these things, and I'm leaving a lot out, as it would be a short book to model, get at reward.

Ask yourself, what does a reward look like?

Say it's attention, as another example.

Return that with endearment, and most often they will leave, or join and become a member with some genuine basis. They leave because the moment they are familiar, their behavior normalized, understood, it's boring. No fun. No joy.

They join, because they find out there really is a common basis and they didn't realize it. Often, an ask to join will actually work.

A few will just spew forth, and so raising their cost makes great sense.

Simple things, like, say, not allowing them four-letter words, carry a very high, often funny cost that can always be lifted on good behavior and a demonstration of better intent.

Best of all, the community can see that and act accordingly; it's all public and largely transparent. Often some members will step in to help, and that works amazingly well when the basic norms behind it all are in place and solid.

Many other simple, subtle, always recoverable things can be used in tandem with strong community norms to inhibit noise, while promoting signal.

The best part about this kind of approach is that it's well distributed. Model it for a few, they apply it and model it for a few more, and soon most active participants are largely immune.

An "infection" won't spread, and where it does, the cost tools are used to marginalize impact.

A lot really depends on community intent and whether friction to entry is desirable, or not.

The bigger problems cited here are most painful on general discussion, and most discussion is actually general.

Having more specifics out there is good, but not ultimately a solution for the general need and value humans find when they interact.

This is tribal. People seek tribes. A good tribe stands strong because its members understand how to do that and why it matters.

An investment in those things can be very effective, and that was my point to make. The things you mention can also do that.

You aren't wrong. There are just cases where other ways may well deliver better results, or be better aligned to the community needs and goals.

Say those goals aren't to grow old and decline.


> Ever notice how alphas looking to start shit have a keen interest in the rules? Of course! They want to know what they can get away with.

The problem is, rules are also a way for someone who was wronged to appeal. While rules are a line troublemakers will walk right up to in order to push boundaries, when rules are unwritten, communities often become a reflection of the will of the moderator rather than a community of diverse opinion. Codified rules are one of the steps we have to encourage diversity of thought, but of course there needs to be rule enforcement...


Unwritten rules have upsides too, though I rarely see them being a reflection of the moderation unless the moderation is being very heavy-handed.

I've been in communities that were very pleasant and open about a lot of things, where the only rule was, quote, "don't be a dick". I've also been in communities that, despite extensive rules that would make fellows from my law course cry with joy, and despite being fully oriented toward left/liberal values, were cesspools of hatred.

I think in both cases it's more a matter of what the moderation will do with what they have. "You reap what you sow" probably applies in some way.


It does apply. People will circumvent systems.

There must simply be a commitment to health and purpose of community.


When there are only unwritten rules, people trying to circumvent them will, usually and in my experience, get caught and stopped by the moderation. The lack of written rules usually means the moderators have leeway in what they do, so they can easily plug holes.

On the other hand, with written rules it's also harder for moderation to abuse their rights.

Either way, there will always be this commitment; the community usually figures out what it needs by itself when the decision becomes necessary.


It's entirely possible to appeal sans a set of rules.

The appeals are rooted in norms, and the process is a dialog, kept recoverable, not a trial.

Explain to me how a set of rules isn't a formalized expression of the will of those who created them?

There is more to all of that, in particular, security and agency.

But, those details aside, norms operate much like rules, and are far more resonant, and community owned than rules alone are.

Finally, the organic and well-distributed nature of norms tends to check varied and manipulative enforcement of rules. A primary example might be a flare-up vs. nefarious intent to make noise and/or cause grief.

Think family vs. Robert's Rules of Order.


I'm not disagreeing with you, just saying that there are both benefits and costs to having rules.


Indeed.

I'm not entirely convinced rules are indicated for all discussion forms and communities.

It's a human problem, and using humans and human ways has advantages.


I’ve also noticed recently that Reddit has gotten extremely aggressive towards people using it without an account or using it on mobile without installing the app. I’m addicted to it enough, thank you, without having a dedicated app for it.


There's no forced email for Reddit accounts. New accounts are mildly painful, with some built-in delays on posting.


I made a new account about a week or so ago. Their new user form looks very much like you need an email to create an account, but you can just press continue without adding one, despite there being no indication of that. If I hadn't already known I didn't need an email, I would have either given them one or left the site and not returned, depending on how I felt giving them my email address.


Since they request it, they can probably claim that those not giving it are not authorised under the CMA/CFAA or the appropriate act in whatever jurisdiction. Would be interesting to see that tested.

Just because I have an honesty box for payments doesn't mean you're not obligated to pay for whatever goods/services.


Why do we think we'll succeed at "detoxifying" the internet? To me it seems like wanting everyone to say things that don't differ from what everyone else says. That doesn't happen off the internet.

Off the internet, most people interact with those who are more or less like themselves, so there's less chance of disagreement. With anyone who differs significantly from you, you hold back your real thoughts and/or stew silently. We just have to accept that echo chambers are an inevitability on the internet with censorship and moderation, and discontent is an inevitability otherwise.

If people knew how to talk diplomatically or to get along, this wouldn't be a problem in the first place. Maybe that's something that can be taught, but it's not a problem that gets fixed on the internet, nor is it a problem caused by the internet. Fix it at home; this may also take care of the trolling problem.


> Why do we think we'll succeed at "detoxifying" the internet?

We won't. Honestly, most people don't want success in any form. They want validation, comfort, and instant gratification. If you have to direct success toward a broad enough group, it will never be achieved unless you temper your expectations. This applies to all things. It even applies to choice of career, income expectations, and hiring decisions, as well as to expecting more mature behavior online.


Rather than censorship, just give readers better filtering tools. This was largely solved on Usenet years ago and has been largely solved in email; it's just Twitter, FB, etc. that seem to have a hard time solving it.

If you made certain sets of filters the default, or available in groups, and did filtering based on your best beliefs about the desires of readers (based on keywords, senders, and maybe other characteristics of the conversation), it would go a long way.
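
To make that concrete, here's a minimal sketch of the kind of client-side filter I mean. The field names, example posts, and share-a-filter idea are all invented for illustration; this isn't any real Twitter/FB API:

    import re

    class FeedFilter:
        # Hypothetical post format: {"author": str, "text": str}.
        def __init__(self, blocked_senders=None, blocked_keywords=None):
            self.blocked_senders = set(blocked_senders or [])
            # Compile once; \b keeps "cat" from matching "category".
            self.patterns = [re.compile(r"\b" + re.escape(k) + r"\b", re.I)
                             for k in (blocked_keywords or [])]

        def allows(self, post):
            if post["author"] in self.blocked_senders:
                return False
            return not any(p.search(post["text"]) for p in self.patterns)

    # Filter sets could be shared and subscribed to, like ad-block lists:
    politics = FeedFilter(blocked_keywords=["trump", "brexit"])
    raw_feed = [{"author": "alice", "text": "New kernel release"},
                {"author": "bob", "text": "Trump tweeted again"}]
    feed = [p for p in raw_feed if politics.allows(p)]  # alice's post only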


I think this is the real solution. Instead of giving people usable, client-side filters, we've let companies push us into using shitty, broken website filters so that they can control the flow of information. Client-side filters work great, and they are how we generally solve these types of problems with other communication methods, and do so relatively successfully.


How would you do that in such a way as to prevent a Twitter army from piling on someone? Or at least make it so they don't feel they're being bombarded?


At some point, go whitelist or raise thresholds very high (like, on Twitter, only people you follow, or blue checks, show up in your timeline if abuse is detected, as a very rough pass; one could easily do something far better).
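
Rough shape of that, with made-up field names and a made-up threshold; the abuse score would come from some upstream classifier, and none of this is a real Twitter API:

    # Sketch only: fall back to a whitelist once a pile-on is detected.
    def visible_replies(replies, viewer, abuse_score, threshold=0.8):
        if abuse_score < threshold:
            return replies  # normal mode: show everything
        # Pile-on detected: only people the viewer follows, plus
        # verified accounts, as a very rough first pass.
        return [r for r in replies
                if r["author"] in viewer["following"] or r.get("verified")]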


wildcard filters on content and metadata go a long way


I can't be the only person who is just not bothered by this kind of thing at all, am I? I'm way more worried about political indoctrination in upper education than trolls on the internet.


I'm the same, and I take it a step further: I personally believe that the push to associate emotional triggers with words, phrases, and news events to the point where seeing someone else "say the Wrong Thing" causes anything resembling emotional distress, is the most powerful weapon of social destruction in use today. The solution to "toxicity" on social media isn't better moderation, better filtering, better banned words and phrases lists... it's being able to see 280 characters on the screen, representing ideas you wholeheartedly disagree with, and not get worked up about it as your eyes move to the next thing on the list and you go about your life as usual. Sadly, the current structure of things incentivizes the former behavior and disincentivizes the latter. Corporations (especially news corporations) and politicians see no problem with conditioning people on a mass scale, and few people seem to question the narratives being spun all around them instead of just blindly believing whatever their social sphere agrees with.


Then you are running in the wrong circles, where you can't just reject any opposing viewpoint as "Russian bots".


In the past month I've had two comments accuse me of being a Russian bot or a shill on Hacker News. People really do just see what they want to see.


I'm not quite sure I understand this comment. Can you expand?


Aside from the degenerate cases like “kill all white men” or “liberalism is a mental disorder”, one man’s “toxic comment” is another man’s “common sense”. And as frustrating as it may be to SV digital overlords, they don’t get to decide which is which, at least not without unpleasant repercussions down the line.

So there’s really no way to solve this. There is a way to contain it somewhat: create homogeneous bubbles. Which is what we’ve been observing over the past few years.


Or, rather than giving up or creating "homogeneous bubbles", you could look at things on a case-by-case basis, as Reddit is doing. At the end of the day, the privately run company does get to decide whether content warrants removal, since they are running a business and advertisers don't like advertising in a space associated with extreme or harmful content. They also have an incentive not to take the moderation too far, since doing so would put the company's most valuable asset, its user base, at risk.


Trouble begins when you try to define “harmful” content outside the most egregious cases. Politics inevitably creep in. Worse, these days some people think that “words are violence”, that being offended gives them any rights (and thus creates the vicious cycle of being offended by more and more inane things), and that everyone who voted for Trump is a Nazi. How far are you willing to go, as a private company, to please your advertisers? And how far will your user base allow you to go before deserting en masse?


As long as there are no consequences to being a dick to people over the internet, "toxic internet" is here to stay.


Ignoring bots/ads/spam commentators, I think the emergent toxicity in online discussion is a result of a fast-paced feedback loop and (ultimately) low empathy.

All people commenting online, whether liberal, conservative, depressed, mentally ill, whatever, have ebbs and flows to their behavior. Online discussion exposes the sum of the negative aspects in the behaviors of all actors. Instant feedback loops (esp. 1-to-many interactions) speed up the rate of negativity.


If you can't be a dick to people in a virtual online world then perhaps you feel you have to test your ideas by being a dick to them IRL?

This appears to be the parallel of the catharsis argument used for why ultra-violent FPSes don't encourage, provoke, or increase violent behaviour.

[I don't really buy it in either case, but I'd argue it online to see what other think.]


The only consequence for being a dick should be people realizing it and not taking you seriously. IMO, that's one of the things you shouldn't really outsource, unless you want someone else to pretty much decide everything for you.


I agree. There are strong psychological and sociological inhibitions that the internet strips away. Turns out those inhibitions are a good thing, and likely evolved for a reason.


What if a sampling of your comments floated around you in AR? Dystopian? Probably, but it would force you to face consequences.


I think there are definitely layers here. One of the things happening is that people who ultimately support all sorts of awful things like genocide and racial cleansing get upset when people are rude to them on the internet, as if that were the real problem. There are a lot of priorities out of whack.


Sticks and stones.

More concerning is the large scale propaganda and marketing.


This has nothing to do with "getting your feelings hurt".

If someone is a dick to you IRL, you just go about life without them and it's of zero consequence to you. Imagine if they stayed in your life against your will and just yelled dumb shit at you from the sidelines.

Not everyone is built as strong as you, it bothers some people.


That's untrue, in that what you describe is the kind of trolling we would wish for, because it's benign.

Trolls actively target forums for support groups. Today they can attack groups and classes of people.

At this point, expecting people to "grow a thick skin" is not an option, because a major aspect of their life is fighting for their rights against people who are attacking them, or people who have hurt them.

At this juncture the troll loses all rights over the definition and interpretation of how their action should be perceived.

Instead most normal people just see it as vandalism and stop caring about whether it was for the lulz or not.

---

And as stated in the article - NOT responding to a troll who is attacking someone makes you complicit.


Your definition of 'complicit' appears to be flawed.

Choosing not to be involved is clearly not the same as actually starting the behavior to begin with.


Being someone who chooses to take offense and get upset about every little thing is their problem, not everyone else's.


Generally speaking, you and I agree on that much.

However, we're talking about how filters and algorithms won't stop people from being dicks to each other.


"propaganda and marketing" are "words", the kind that, as you imply, cannot hurt you.

To say otherwise is "free speech, only if I like it"


Exactly, but only as long as you take precautions that no one can escalate online harassment to something that's a real-life problem.

I'm pretty comfortable online, but mainly because I only post under frequently-changed pseudonyms and never leak much personal info. If someone gets obsessively mad at me, I can just abandon the account and be done with it.


Reddit has zero problem leaving posts that threaten the lives of an entire religious group on their front page. If they were serious, certain subs would be nuked.


You realize it's a perfectly reasonable objective for a site to be a mostly impartial, user-moderated forum, right? I don't consider any view I encounter on reddit to be endorsed by the platform. If people want content moderated with a specific slant, they can just subscribe to the subreddits that suit their tolerance; there are plenty that are heavily moderated.

Personally, I love seeing things that I disagree with on reddit...it's honestly one of the last places left that gives me the feeling of exposure to different bubbles.


The problem with Reddit is that, while it's not endorsing the content, it is facilitating it. You can let the white supremacists in your neighbourhood use your garage for their weekly meetings and plausibly claim that you're just being neighborly while disagreeing with their purpose. But I'm still going to judge you for helping them out, for making their organizing easier and cheaper, and for smoothing the road for them.

When Reddit banned r/fatpeoplehate and r/coontown in 2015, among others, it actually succeeded in reducing the amount of hate speech on Reddit:

Working from over 100M Reddit posts and comments, we generate hate speech lexicons to examine variations in hate speech usage via causal inference methods. We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%. Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown “migrants,” those subreddits saw no significant changes in hate speech usage [0]

By making it easy for participants of r/coontown to participate, Reddit actually contributed to the output of r/coontown.

[0] http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf


Letting white supremacists use my garage for weekly meetings is not the same as me building an establishment for the explicit purpose of facilitating conversations of all kinds, and then not kicking out a white supremacist minority for using the facility.

My concern with censorship is always 2 fold:

1.) You always have to assume the person in control of censorship is the person who doesn't agree with you. Donald Trump has, since he has been in power, indicated on multiple occasions that he would like to restrict the press's capability to criticize the president. Kim Jong Un shares a similar sentiment. Would we like that? There are many Trump supporters who would see absolutely nothing wrong with it. There was a moral majority in the US for a long time that would have preferred to restrict the capability of others to talk about what they considered to be sinful things, such as homosexuality or communism.

2.) You have to realize how quickly things become conflated. Ben Shapiro, a conservative pundit, went to Berkeley to speak, where student groups attempting to ban him from speaking had strung up a huge sign reading "We say no to your white-supremacist bullshit." It's important to note that Ben Shapiro is an Orthodox Jew who is harassed daily by white supremacists online...and has never said a single thing advocating for racial supremacy. In fact, he is hugely opposed to racial, gendered, or national identity politics of any kind. Students wanted to ban him for hate speech simply because they associated him with what they perceived to be hate speech.


Building an establishment to facilitate conversations of all kinds generally does not obligate you to allow every specific conversation. Were it my establishment, I wouldn't permit members of NAMBLA to organize their sex-trafficking trips to Southeast Asia, or white supremacists to organize the next Unite The Right rally. I wouldn't feel at all like I'd betrayed my purpose in facilitating more, and more different, conversations by specifically banning some that are obviously problematic in and of themselves.

I also wouldn't feel like I was censoring anyone. My not facilitating their conversation is not the same as my preventing them from having it. I would also be mindful that the presence of nazis and pedophiles in my establishment was itself a barrier to other communities taking advantage of my facilities; if members of those other communities started showing up after I banned them, I'd probably consider it a net win for freedom of expression and conversational facilitation.

Also, you're soft-pedaling Ben Shapiro by calling him a "conservative pundit", as if he's in the same inoffensive league as Ross Douthat or David Brooks. He was an editor of Breitbart News, which is now quite openly the voice of the "alt-right"; he departed and got on their enemies list because he was smart enough to realize that Trump was going to damage the cause for a generation--which is why they're going after him, because they were quite happy to have an orthodox jew on staff while he was agreeing with them. He's said a variety of things about Arabs and black people that are flat out racist. He's not quite the bomb-thrower that Milo is, but he still drops nuggets such as "Arabs like to bomb crap and live in sewage". Calling him a white supremacist might be hyperbole, but it's not simply conflating conservative views with nazis. He's a racist asshole who advocates for ethnic cleansing. Maybe white supremacy isn't his conscious goal, but when he facilitates that end, it becomes a bit irrelevant if he's actually written that down on his "A better 2018" list.


I think that study is good but I add personal nuance when interpreting its obvious curative suggestion.

It's not the ban alone that worked so much as the full identification of those groups, the banning of those subs, and the negation of those subs' immediate survival attempts.

The whole process is what worked, not just a simple ban.


The parent commenter specifically said "post[s] that threaten the lives of an entire religious group". Hardly comparable to something someone can simply disagree with.


Those are also specific posts, but most of the posters who share the sentiment of the commenter typically advocate for the removal of entire subreddits.

Also, to be frank, I think that posts of that nature are exactly the kind of thing that someone can and should disagree with. Suppression of those beliefs by silencing them just means they are moved away from platforms through which their views can be better contextualized for them. The prevailing attitude that some people are just not worth debating says more about the person who holds that belief than it does about those they're accusing of never budging. I have had my mind changed many times, as well as changed the minds of others.


I suppose I'm saying that disagreement is just a start when it comes to content like what was brought up. Of course you and I disagree with a suggestion we eliminate a people group. But it's not simply another unit of thought in an objective marketplace of ideas. The Overton window came up a lot in 2016, and I think it's useful both descriptively and prescriptively: certain things should be outside the bounds of civilized discourse. Implicit in the notion that these ideas should be confronted rationally is that their adherents are simply misguided and miseducated, and that sure and steady appeals to reason and empathy will persuade them, as if all white supremacists rooting for genocide are waiting for their American History X moment. But I don't think you have to look hard to find intelligent and otherwise rational people advocating vile things. The obstacle isn't reason; it's something beyond that.


I think it's extremely patronizing and disingenuous to lump posts calling for the extermination of members of a religious group, or calling for the execution of refugees, in with "things I disagree with".


I go on reddit every single day, and I have never seen a post on the front page advocating for the genocide of an entire religious group. I've seen a handful of posts linked from /r/againsthatesubreddits, but the irony is that the only reason those posts made the front page is because they were linked from the meta subreddit.

That being said, for posts of that nature that do reach the front page, I stand by what I said. If it is the policy of reddit to be entirely noninterventionist and simply allow the communities to self moderate, that is completely reasonable, and I don't think any of those views should be interpreted as being encouraged by reddit.

The obvious question, given an open platform, should be why exactly a community like that is able to exist successfully within a system that votes content up or down.


Yeah – until it affects their PR or advertising, they don't care. There are some vile, disgusting subs out there.


Could you give examples? I've been there a lot and have never seen this (though I've been told "people like you should be killed" a few times because of my religious associations, so I don't doubt it; just curious about specifics).



The only way to detoxify the internet is to get rid of the "social" aspect, starting with (gasp) comments. My online experience has been that much better since adding comment blockers to my browser.


Personally I'm guilty of often seeking out comments before or soon after viewing an article or video online. As social creatures, I think it's reasonable to want, even need, others' opinions. The effects of receiving these opinions may not always be positive, but I think they are necessary.

Pragmatically, comment sections can have bots, ads, and instigators. I think there is a better solution than outright getting rid of online discussion. There's room here for innovation (not even technical innovation; I think there's low-hanging fruit here in terms of comment-section design).


Similarly, I use comments as a quick screen to filter through the PR/targeted journalism of stories. Not perfect and will eventually stop working as media companies wise up.

Similarly, I will frequently check Wikipedia before the company/film/whatever's own site. It has flaws, but it's frequently better than official channels.


It doesn't even have to be targeted journalism; sometimes people write junk because they make mistakes or aren't aware of counter-arguments... not even scientific papers are safe from this. Cunningham's Law[1] to the rescue :)

I can feel that hostile internet comments take a toll on my overall happiness, but I don't want to be ignorant of valid counter-arguments to the content I read.

[1] https://meta.wikimedia.org/wiki/Cunningham%27s_Law


Reddit has a large user base and a myriad of subreddits where different solutions have been tried. So far, the best I've seen is strong moderation: clear rules and swift enforcement.

AskScience is a shining example of a high quality subreddit, although the comment section usually looks like a graveyard with 90% of the comments removed.


I'd like to see moderation decoupled from the forum namespace.

So you'd subscribe to a forum namespace in order to see posts about a topic (say, AskScience), but you would then also subscribe to whichever moderators you want. The moderation wouldn't be inextricably tied to the namespace.

Anybody can post anything in any namespace (and anybody can declare themselves a moderator of any namespace), but people will only see posts if their chosen moderators allow it. You could have all of the same moderation powers that exist at the moment, except any given moderator's actions are optional to any given user's view of the forum.
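
A toy model of those mechanics, assuming moderators publish removal lists that each client applies locally (the names and data structures here are invented for illustration, not a spec):

    # Each self-declared moderator publishes the post IDs they've removed.
    mod_removals = {
        "mod_alice": {"post3", "post7"},
        "mod_bob":   {"post7", "post9"},
    }

    def my_view(namespace_posts, my_mods):
        # Hide a post if ANY moderator I subscribe to removed it.
        hidden = set().union(*(mod_removals[m] for m in my_mods))
        return [p for p in namespace_posts if p not in hidden]

    # Two users in the same namespace see different feeds:
    askscience = ["post%d" % i for i in range(1, 11)]
    strict = my_view(askscience, ["mod_alice", "mod_bob"])  # 7 posts survive
    firehose = my_view(askscience, [])                      # all 10 posts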

We wouldn't need a situation like with r/bitcoin and r/btc where disagreements about moderation resulted in a splinter group creating a separate namespace: those who disagreed with moderator X would simply unsubscribe from X's moderation.

I don't know if this would result in a better forum, but I'd be interested to see how it's different, and I don't know of anywhere it's been tried.

(And in case someone out there is granting wishes... can I also have it decentralised e.g. using IPFS's pub sub?)


Yep, that was Usenet. But in those days we didn't have third-party recommended filters; there's no reason why they wouldn't work now.

Maybe it's time to remake Usenet, minus binaries. Binaries, piracy, and their data load per server are what killed Usenet.

(Yes, I know it's still living on in the paid-service world. But gone are the days when your ISP ran a machine.)


> Binaries, piracy, and their data load per server are what killed Usenet.

Binaries and piracy were two of the biggest reasons to use Usenet.

Spam is what killed it.


I respectfully agree and disagree.

Spam was and still is a massive headache, although Bayesian filters were really starting to take off. But spam was annoying at best.

What caused ISPs to quit running Usenet servers was that they saw them as a pirate haven and a lawsuit magnet. There was every impetus to stop supporting piracy and to shed the bandwidth costs a Usenet server incurred.

Sure, piracy was a great draw for using it... but it is also why it fell. These days it's time to move to IPFS. That place is ripe for piracy, and super simple to share.


> clear rules and swift enforcement.

But then somehow a few "bad guys" get to become moderators and everything goes back to s*it. As someone said above, it's all about the incentives. No one has enough reason or available resources to want to take over an obscure subreddit with almost no real-world influence, but when there are state actors involved you can be sure that things will turn sour. My most recent such experience is with /r/syriancivilwar, which used to be decent enough two or three years ago (even if some of the posters had actual ISIS flairs), but since the troops-on-the-ground involvement of both Russia and Turkey, the subreddit has become an echo chamber for those interests.


It's also a narrow-topic subreddit with a specific goal.

That kind of moderation works under those environmental conditions, similar to how some drugs work on certain body types and fail on others.

Broader topics with few general boundaries tend to be a lot harder to pin down.

A classic example would be: where does porn stop and art begin?

In forum terms, politics and general opinion topics have more subjective moderation.

---

Overall though, I agree: manual moderation is pretty much the best way forward.


I'm surprised there isn't a moderation-as-a-service business out there.


I assume the irony of writing that in a comment is not lost on you.


It might if he blocked it.


I agree: in areas of general topics (news, current events, et cetera), the comments are cancer. As more comments are posted, the probability of racism/sexism/ageism/horribleness approaches 1.

The comment section changes a lot if you stick with tech sites and people's blogs, or project sites like github/gitlab/hackaday.io. In those cases, people are 99% of the time pretty decent. They may demand a feature on a FLOSS git* page, but aside from entitlement attitudes, it's relatively non-poisonous.


While I agree with you, and I also run a comment blocker, I find it difficult to believe that sites are going to willingly give up the engagement that comments bring.


Well of course not, I'm just telling you how to fix it!


Lots of sites are giving up on comments. They add nothing of value to advertisers.


>The only way to detoxify the internet, is to get rid of the "social" aspect. Starting with gasp comments

are you talking about the "social" comments, or commenting in general (including this site)?


I've been kicking a design around in my head for the past few weeks which retains most of the outward form of something like Reddit, on the grounds that it has proved to be a model that can attract users, but redefines the upvote and downvote process: an upvote now means "I want to see more of this poster's comments, and transitively (and more weakly as the links go out) what that user sees", and a downvote now means "I'd like to see less of this poster's comments and transitively what they upvoted". The difference here is that an upvote no longer means "I think everyone should see more of this comment", but rather "I think I should see more of this commenter".
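
For concreteness, here's a one-hop sketch of what that scoring might look like. The decay constant is just a knob, deeper hops would decay further, and nothing here is worked out beyond illustration:

    AFFINITY_DECAY = 0.5  # how much weaker each transitive hop counts

    def personal_score(author, my_votes, votes_of):
        # my_votes: {poster: +1/-1}, my direct up/downvotes on posters.
        # votes_of: {poster: {poster: +1/-1}}, those posters' own votes.
        direct = my_votes.get(author, 0)
        # Upvoting p pulls in what p likes; downvoting p pushes away
        # what p likes; each hop counts for less than a direct vote.
        one_hop = sum(w * votes_of.get(p, {}).get(author, 0)
                      for p, w in my_votes.items())
        return direct + AFFINITY_DECAY * one_hop

    # I upvoted alice and downvoted carol; both of them upvoted dave:
    personal_score("dave", {"alice": 1, "carol": -1},
                   {"alice": {"dave": 1}, "carol": {"dave": 1}})  # -> 0.0

Comments then get ranked by my score for their authors rather than by one global total, so there's no single gestalt for a topic to be captured by.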

I think it ought to be unidirectional to prevent obvious attacks. I'm not too worried about the "filter bubble" because I think you are inevitably in a filter bubble, and it's on you to manage it, not expect random sites on the internet to somehow pop you out of it (they basically can't, by definition).

I think I'd still want categorizations for subject, just to give things some sort of focus. The result is that instead of /r/politics or /r/$ANYTHING being captured by some sort of single monolithic gestalt, multiple simultaneous gestalts could be running side-by-side. You'd want some deliberate mixing, both to keep things interesting and because I think you'd want to ensure some "heat" in the simulated annealing sense to ensure that things don't get too isolated and things keep moving. (Communities themselves keep moving, after all.)

The net effect of this is that while trolls would still exist, anyone not interested in trolling should naturally filter them out of their view relatively quickly, while the trolls are left with each other to troll at. Since they cannot be eliminated, all you can really do is fence them off. One of the other interesting side effects is that you ought to get some other interesting communities in there, too: the community of people who thoughtfully comment, the community of "doers", etc. One interesting possibility is that this may even allow "celebrities" to have a "social network" that nominally anyone can join, but which requires sufficient social proof that it isn't just "anybody with a Twitter account and an itchy trigger finger can get the first reply below $CELEBRITY's tweet", wrecking the utility of social media for the celebrity. I haven't quite solved the question of how to initialize the communities once the system gets going, but it's probably feasible.

What stops me from coding this up is that whenever I start really fleshing this out into an implementable system, I realize it violates one of my life rules: "Never engage in an endeavor in which the worst case scenario is complete success." I don't want to run a community site like that. So for me personally, the outcome matrix boils down to "Failure -> waste of time, Success -> oh shit". Because even if this did miraculously fix all social issues, which I am hardly naive enough to suppose, there's still legal compliance, DMCA compliance, etc., all sorts of things I personally have no desire or drive to do.

However, by all means, feel free to "steal" my idea. While I am by no means an expert in this field (the aforementioned paragraphs are pretty much the sum total of my thinking at this time), I'll even spot you some free "consulting" in the form of playing intelligent-rubber-duck to bounce ideas off of if you'd like. And let me know if you get to deployment.


IIRC, Advogato did something like that a million years ago.

https://en.wikipedia.org/wiki/Advogato


Well, it wouldn't be the first time someone had the right idea decades before the world was ready. (No sarcasm; fully serious.)

Though you'd want to open it up to people beyond open source developers. And it's possible the idea of using the "up/down arrows" that everyone is familiar with may still be worthwhile; it's been too long since I've seen the advogato interface to remember if they had that. And finally, a "likeability" metric that was less mathematical and a bit more rough & ready might be a good thing; mathematical constructs are really good at maintaining the properties you design in, but are often extremely fragile. I'd want an algorithm with some knobs I can twist, not a single algorithm. But by all means read the papers in question; why rediscover those things the hard way?


I've been thinking about a similar system for a while.

I think you can avoid a lot of the success-as-worst-case issues by designing this as a decentralized protocol.

Each 'user' is a cryptographic identity: all votes and posts are authenticated by the user's signature. When you follow/upvote a user, your client-side software increases that user's weight in your feed. All content in the system is content-addressable by hash. All 'likes' are signatures on a content hash.

When you connect to another node, you request all recent signatures for people you follow/have upvoted. Client-side software can use various schemes for weighting, totally up to each individual user (e.g. how many upvotes are equivalent to following someone? how much time decay in scoring do you want? how deep do you want to traverse followed users' social graphs? etc.). The weight for each followed user is multiplied by that user's score for a piece of content, then summed across all followed users to create an ordinal list of content: a 'feed'. In this way, if multiple users you follow all like the same piece of content, it would probably be scored higher than a piece of content liked by only one user you follow, so long as your weighting for all those users is equal.
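
Concretely, the client-side scoring could be as simple as the following; field names are invented, and signature verification and networking are elided:

    import time

    def rank_feed(likes, follow_weights, half_life=86400.0):
        # likes: [{"content_hash", "signer", "timestamp"}], with
        # signatures assumed verified upstream. follow_weights:
        # {signer: weight}, chosen entirely by this user's client.
        now = time.time()
        scores = {}
        for like in likes:
            w = follow_weights.get(like["signer"], 0.0)
            if not w:
                continue
            # Exponential time decay: one of the per-client knobs.
            decay = 0.5 ** ((now - like["timestamp"]) / half_life)
            h = like["content_hash"]
            scores[h] = scores.get(h, 0.0) + w * decay
        # The ordinal feed: content hashes, highest combined score first.
        return sorted(scores, key=scores.get, reverse=True)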

Ideally:

- Nodes should be able to support multiple users, and host a webserver that provides access to the network to those who are not technically sophisticated enough to host it themselves.

- User data should be portable and transferrable to different nodes.

- Nodes could implement their own email+forgotpassword userflow to abstract the crypto complexity away from laypeople, though this complicates account transfers and risks account theft by unscrupulous nodes.


And yet here we both are, commenting, on the internet ...


This is a comment though.


HN doesn't allow such negativity though. Reddit flourishes on it.


It really depends on the subreddit, doesn't it? I only visit subreddits like r/boardgames, and while there is the occasional rude person, overall those subreddits are welcoming, uplifting places. Some parts of Reddit are horrid and some parts are beautiful, just like real life. I just don't go to the horrid parts, just like I would avoid the horrid parts of a city.


I would encourage you to read any thread on harassment or the treatment of women and marginalized people in the tech industry and then revisit that thought.


Any thread about Facebook, and most threads about Apple are cesspools of negativity. I think you've got some rose tinted glasses when it comes to the kind of discourse that HN offers.


>HN doesn't allow such negativity though.

You do read this site, right?

My opinion is that getting rid of voting is step one to making both sites infinitely better.

Extremely draconian human moderation is step two.


> “Does free speech mean literally anyone can say anything at any time?” Tidwell continued. “Or is it actually more conducive to the free exchange of ideas if we create a platform where women and people of color can say what they want without thousands of people screaming, ‘Fuck you, light yourself on fire, I know where you live’?

This comment, by Reddit's General Counsel, says a lot about their approach. They believe that minorities/women will be subjected to nonstop violent comments unless Reddit pursues proactive and forceful moderation, free speech be damned. I think (hope) that's a very cynical view. The author of the piece does a great job juxtaposing this quote with the results of r/Place, however, in pointing out that it did not devolve into swastikas and "toxic" content when left to its own devices.

But reddit moderation is not just about quality content. I think many commenters here miss this. Moderation of a hugely successful website is not just about the technical issues or the logistics. Huffman, the CEO, is quoted as saying he believes Reddit can sway elections. So now moderation is about power. Imagine the temptation: you're one click away from destroying the popular, impactful subreddit of a political candidate you detest. One click to sway an election, maybe.

Edit: spelling


Minorities and women are subject to nonstop violent comments.

God help you if you are a moderator and a member of a popular community and known as member of one of those groups.

And it's even worse if you are a mod, a minority member, AND a mod of a minority community.

----

The best way to think of social media/reddit is as a beautiful cleave-sort process.

People form a community, and then they have to deal with attackers or outsiders. Eventually a drama event occurs and people get expelled.

These people who are expelled form their own community and this process continues ad infinitum.

On top of it, some people will independently create subs just to target you, or create subs that inadvertently target you (from the old forum days - Nazi forums would actively harass Jewish forums).


> you're one click away from destroying the popular, impactful subreddit of a political candidate you detest

I hope they put that behind at least two clicks.


I don’t know if you can do this. The problem with social news sites is not that the trolls are winning; it's that moderators aren't properly compensated, because no website has the capacity to moderate at that scale. Also, to get advertisers on your platform you need a moderated forum that aligns with those advertisers. The internet has always had free speech, and with that come trolls, by definition. The difference now is that there are competing ideologies associated with websites that want to control vast parts of the internet. We have websites that want your pictures, attention, and comments for free in exchange for promoting other people's products. On the site operators' side, they need to do deals with advertisers who don't want controversial topics, so the advertisers force discussions on the platform to align only with them. Or the site operators do it themselves. The troll problem is not going to be solved by Reddit alone.


> To its devotees, Reddit feels proudly untamed, one of the last Internet giants to resist homogeneity.

Waitwhat? Reddit is a huge hivemind, with each subreddit having more or less the same views, since people who do not agree with the narrative of that subreddit are quickly downvoted so much that they can't post more than once a day or so.


That's not my experience. There seem to be subreddits on every conceivable topic; you can subscribe to whatever variety you want and get a wildly varied range of views/politics/whatever.

There are even 'change my view' type subreddits for people looking to understand and debate the 'other side' of their arguments.


The comment you are replying to becomes more valid the larger the subreddit, regardless of its subject. You wouldn't think subreddits like javascript and programming would be hotbeds of rampant emotional insecurity fueled by deep echo chambers, but they certainly are.


I wasn't too clear in my phrasing. Although Reddit as a whole has relatively diverse views, each subreddit has a very specific hivemind.


And one fact they want to ignore: many of the politically oriented subs are managed by PACs, which worked for years to get their people into moderator positions, if not to create subs as they needed them. Then, through mailing lists, text alerts, and such, they know which stories to post and act on.

Reddit is anything but the wild west anymore; it's been co-opted by those who know how to manipulate opinion about products and persons.


I don't think this is all that specific to Reddit, though. I mean, even /pol/ was 'infiltrated' by the likes of CTR (although this of course doesn't work, since 4chan is aggressively anti-everything).

Furthermore, the fact that all moderated political (and even many nonpolitical) forums are pushed and pulled towards the extreme fringes is understandable, even to be expected: those who take moderator jobs are those who are most driven about the subject at hand, and those are most often the ones with a fringe opinion.


All this centralization has created all these problems. We need to federate our systems, perhaps even at the app level. I worked at OpenTable for a short while, and their architecture was interesting. Each PoP in the old version is its own system that pushes data to a central store, though they're replacing it with a new generation of centralization, presumably due to engineering maintenance concerns. Similar with their regional frontends; it seems they're being consolidated.

I am coming to think nested intranets and special-purpose VPNs might be a good approach to global commerce and expression. I love building massive-scale no-touch systems, but they're never that perfect once they start being used. For example, you usually end up with more severe and harder-to-solve issues with one massive DB than with smaller systems holding derived data and links. And on the frontend a lot of hard problems go away; beyond i18n and the like, even cyberbullying becomes tractable.


Would it be sufficient for every user to just have client side configurable filters? Or is this movement more about denying "toxic" people a platform to begin with?

If it's the latter case, I can't get behind it. Free speech means everyone should have a platform to speak. As society transitions away from using our mouths to using our keyboards, the first amendment implies that speech should be protected on the common medium - whether verbal or text.


Filters can work after the fact, but they don't prevent a lot of it in the first place.

As for "toxic", the article is referring to things like /r/CoonTown, or brigades like the one that went after Leslie Jones on Twitter a number of years ago. The people behind those already have a platform to speak: The Internet. They're not entitled to space on another platform like Reddit or Twitter, and they're not entitled to force their speech in front of someone else's face.


CoonTown is obviously offensive, but easily avoided (i.e., not a problem).

I casually followed the Leslie Jones incident and don’t see the problem. The brigading seemed easily avoided by Jones and was actually amplified by media covering how bad it was.

My twitter feed doesn’t show mentions by randos, so Jones should have been able to easily filter out at the client. She may have for all I know.

So both your examples seem easily solved by client filters to not force speech in faces.


"Coon town is obviously offensive, but easily avoided (ie, not a problem)."

It is until it starts to leak.

"I casually followed the Leslie Jones incident and don’t see the problem."

I honestly cannot see how one can see the incident, and cannot see a problem with thousands of posters telling an African American woman that she's an ape.

"The brigading seemed easily avoided by Jones"

That is blaming the victim.

"So both your examples seem easily solved by client filters to not force speech in faces."

Except they're not.


“I honestly cannot see how one can see the incident, and cannot see a problem with thousands of posters telling an African American woman that she's an ape.”

Let me clarify. Obviously, thousands of people calling someone an ape is a problem. But it's one that is easily avoided by disabling Twitter replies to your account, so it's not an unavoidable problem for anyone it happens to.

So when I say "I don't think this is a problem," I mean it's a problem that can already be solved, and not one worth trying to fix by stopping the thousand assholes.

It’s not blaming the victim to point out that changing a setting would save them pain. It's not Leslie Jones's fault at all that people are jerks. But maybe she didn't know how to configure Twitter. If so, wouldn't it be a shame if no one wanted to help her learn, for fear of being accused of victim blaming?

I equate this to worrying that people are sitting in their living room gossiping about you and saying mean things. Somehow you have the magic ability to listen in; you do, and you are upset. Certainly you can try to make those people stop being jerks. Or you can stop listening. Or maybe do both.

The problems seem solvable if your goal is not to encounter jerks harassing you online, since you can filter them out. If your goal is to have no jerks at all, then it's definitely not solvable - but that's a stupid problem to solve through authority. The way you solve jerks is through love and education.


> It is until it starts to leak.

It doesn't really leak - not until the honeypot gets shut down and the denizens are forced to hang out elsewhere.


Someone did a quantitative study of the /r/fatpeoplehate ban recently, and their conclusion was that the honeypot theory isn't valid: banning hateful subs causes a reduction in overall hateful behavior instead of just shuffling it around.



The mob tends to be OK with hate if it's directed towards someone they dislike - e.g. you can find on /r/canada's frontpage people calling Doug Ford's kids fat and ugly.


You are making the problem seem worse than it actually is. Account age and inability to delete comments make an anonymous reputation pretty realistic on HN at least. Just in this thread we see examples of toxic comments being downvoted into invisibility (oblivion).

Or is the problem that /r/the_donald exists? Just don't go there if you don't like it. And don't go to /r/trees if you don't like marijuana.


The problem is that reddit is one site. You can live in other "neighborhoods" on reddit that are not /r/the_donald, but you are still forced to live in the same city. That means people from there will show up in all of the other sub-reddits, voting and commenting away and twisting every single part of the site to be what they want. It's exhausting to put up with, and if you don't notice it then count yourself lucky. There are many regional city/state/province sub-reddits that have come under concerted "attacks" by /r/the_donald posters. Folks from there will re-post links (or share via chat) to "controversial" posts in other sub-reddits to coordinate brigading, for example.


>It's exhausting to put up with

The sub in question has had polls and surveys posted to it, popular ones which have remained on the front page of it, and they've received paltry engagement (~10,000 engagements).

You're making it out as though that one entity has corrupted the site, which is a plainly hysterical claim based on the evidence.

>Folks from there will re-post links (or share via chat) to "controversial" posts in other sub-reddits to coordinate brigading, for example.

Do you have any evidence for this? Because discord groups to disrupt Reddit aren't unique to T_D and the hyper-leftist ones far outnumber those on the right, as with their meta-subreddit parents.

It's disappointing to see Reddit drama appearing on this site as often as it is. I remember a time when HN wasn't really on the Redditor's radar. What good times they were.


> It's exhausting to put up with

It would be more exhausting to put up with a system of censorship.

There will always be people around you that say or do things that you do not appreciate. Getting offended at them is something you do to them, not the other way around.

The more we act like people have power over us, the more power we give them.


This theory has long since met with reality and lost.

I mod - and it turns out that trolls target groups of people.

So, for example, seeing a bunch of posts targeting women or minorities is going to affect people who have been abused for being part of those groups.

Indiscriminate massed assaults are extremely hard for people to deal with - and even if I am not the target I would expect myself to step in, or some authority to step in.

So yes, in a very real way they have power over a large number of people, just the same way you have power over someone weaker or more vulnerable than you.

The old chan philosophy worked on a smaller net, if it even worked then.


>>Or is the problem that /r/the_donald exists? Just don't go there if you don't like it.

I think the issue with r/the_donald, and other extremist subreddits (and online communities in general), is that they generate lies, whip each other into a rage-induced frenzy over those lies, and then spread that hatred elsewhere on the Internet - some of it even spills over into real life via certain conservative news outlets. In addition, anyone who even slightly disagrees is banned with prejudice.

This makes the problem difficult to ignore.


Then why not remove r/politics for the same reason? Or r/latestagecapitalism? Crazies from that subreddit have the nerve to post in other parts of Reddit.

The biggest problem, imho, is that there is an attempt to 'deplatform' dissenting views. (That term was coined by the people on the left actually doing it.)

Quite simply it boils down to vilifying your opponents, classifying dissenting views as 'toxic' or 'hatespeech', and screeching loudly and as often as you can to get the opposition removed. If they move, do the same to them on the next site.

There is apparently big money pushing this as an 'online strategy' to remove opposition. Heck, I can buy as many Reddit accounts and votes as I like.

We need to push back and see it for what it is - an attempt to remove any wrongthink and get people thinking as a single, controllable mass.


> Reddit was also an important part of Trump’s strategy. Parscale wrote—on Reddit, naturally—that “members here provided considerable growth and reach to our campaign.”

So it was for Hillary's and Bernie's campaigns, which each had a major presence on Reddit - both much bigger than Trump's.


I quit reddit for good on Saturday. There are two major problems with it:

- Downvoting creates echo chambers,

- No accountability for moderators.

I think the downvoting should be removed completely. Downvoting is basically giving people the power of a mini-censor and the people love it.

Subreddits can be completely censored by moderators and there's no way to know this is happening. Maybe reddit can develop a way to judge the behaviour of the moderators. Like showing what posts they removed, for example.

I've been part of plenty of internet communities that don't become like this. Even completely public forums and IRC channels. I guess HN isn't as bad as reddit because we have an above-average intelligence crowd, but there are some opinions I am careful not to mention here as well.


How do you filter out user sentiment without downvotes?

I think there are flaws in downvoting, but they are smaller than in systems with only upvotes (e.g., Facebook).

Upvote-only systems encourage extreme views that attract attention, because if 5% like and 95% dislike a post (theoretically), it is ranked higher than one that 4% like and 0% dislike.

What we really need is a score based on likes as a percentage of views; that would be more meaningful.

Mixing in some sort of cost to post/vote would help as well, because people would be more judicious with likes. Currency is hard to work out - Steem is a big fluff-fest. Maybe something where you get credits every day and can spend or bank them.
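As a strawman sketch of that percentage-of-views score (the smoothing constants here are made up):

    PRIOR_VIEWS, PRIOR_LIKES = 100, 5  # made-up prior so low-view posts don't dominate

    def view_normalized_score(likes, views):
        # rank by like-rate per view instead of raw vote counts
        return (likes + PRIOR_LIKES) / (views + PRIOR_VIEWS)

    # 50 likes on 10,000 views (~0.5%) now ranks below 40 likes on 1,000 views (~4%)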


I've never actually used Facebook so I haven't seen the effects of only upvoting. But up/downvoting is not meant to signal agreement or disagreement anyway. It's meant to signal whether it's a useful post or not. But people treat it as the former which is why it creates echo chambers. I think the Facebook "upvote" is meant to signal agreement. Does it also make the post go to the top?


Something like that.

Ultimately it doesn’t matter what up/down vote is supposed to mean. It matters what the outcome is.

Up means prioritize, down means deprioritize.


Completely agree about downvoting. It really does nothing for me as a user, as I am able to reply to something if I feel strongly enough about it. A great way to go about detoxifying social media is to limit negative interactions as much as possible.

Upvoting, on the other hand, is valuable as it helps cut through the noise. A great example was the thread a while back about WhatsApp encryption where the first comment was from Moxie.


I detest this smug mentality that so-called trolling is trivial to identify and differentiate from legitimate but unpopular opinion.

I feel strongly about it because it creates opportunity for censors, both in the form of moderators and communities themselves, to stifle debate under the guise of "detoxification." This kind of communal policing is ground zero for the echo chamber phenomenon.

We live in a reality where there will always be those who abuse platforms of communication for nefarious purposes. We also live in a reality where we are able to temper our own outrage, and we must if we wish to maintain healthy, open dialogue.

Offense is taken, not given.


And I detest this smug mentality that this subject is concerning things that are mere differences of opinion, like we're discussing tax policy. That's not the case in the least.


You don't seem to understand that terms like "toxic" are as arbitrary and subject to abuse as the word "obscene".

In fact, I'm not even sure as to what you're getting at. Are you seriously suggesting that what is toxic is somehow universal?

It does not bother you that dozens of serious contemporary, controversial topics, with severe consequences for our society, are regularly censored on websites like reddit under the guise of so called toxicity?

You aren't bothered by such vague conceptions of offensiveness being used by corporate giants to push their narrative at the expense of others? Of the danger of self-censoring group think on community moderated boards?


"It does not bother you that dozens of serious contemporary, controversial topics, with severe consequences for our society, are regularly censored on websites like reddit under the guise of so called toxicity?"

I think you're gonna have to provide an example for this. An example of something that is being labeled as toxic, but wouldn't appear to be.


The problem is not that the topics themselves are toxic, but that only one point of view is identified as such. I only mention these "flamebait" topics as examples, not to discuss them or choose a side.

Gamergate

Gender Equality/Equity

Affirmative Action

White Privilege

The various subjects in James Damore's recent memo [note, he was fired for honest, sourced feedback that was explicitly requested of him]

Politics in general (especially anything remotely pro-Trump or pro-Republican)

If you pay attention to dynamics on reddit, for example, certain sides tend to be stifled, regardless of any potential merits. I am not choosing sides, just offering examples where certain opinions are immediately identified by communities as toxic, and no discussion is even attempted.

Hell, there is a growing movement among feminists, at least a vocal minority online, which claims that our modern concept of masculinity is "toxic."

The very word is turning into a catch all for "opinions I don't like," which seems to happen any time one attempts to censor information.

Finally, consider this: if the topics I've brought up are so clearly, as you claimed, not merely "differences in opinion," then why do they attract such heated discussions, with contributors on both sides, even on HN?


I suspect I might be feeding the troll, but I'll give you the benefit of the doubt.

Gamergate might have very initially been about 'ethics in games journalism', but it didn't stay that way very long. It was very quickly infested with racist, sexist trolls. All that stuff with the lady that supposedly slept with someone to get good reviews, or whatever? Turned out that was total bunk. That was proved within a couple of weeks.

Guess what, KotakuInAction still exists. It doesn't really matter whether the initial idea was right, though. If they'd been right initially, their later actions would still be wrong. And they were wrong initially in reality, and it hasn't stopped them from continuing their shit.

For quite a long time, I dismissed the criticisms of Gamergate because I really had nothing to do with it. I read the initial couple of news articles and watched a couple of videos with some outrage and kind of went 'yeah no shit, IGN gets paid to give good reviews, not news'. I then basically ignored them for years. I would hear that they were toxic this or problematic that and I dismissed it, because I thought their initial points seemed solid enough.

Then when I actually went back and looked at it, it had basically turned into TheRedPill. KotakuInAction and GamerGate are really not worth defending.

---

The rest of the topics are pretty similar: all differing levels of controversial, and with echo chamber subreddits heartily happy to advocate for them and heartily happy to advocate against them.

The word 'toxic' is only used by one of those sides, but the other side generally uses far worse language to describe their 'opponents' if you want to use that term. I think being described as 'toxic' is a little better than being described as 'worthy of being shot' or whatever other horrible things people in TRP, The_Donald and such tend to say.

Plus sometimes echo chambers are actually good. When you want to discuss the finer points of something, being able to just say 'this is not a subreddit for debating the merits of ideology X just because it's called /r/X, go to /r/DebateX please' is fine. One of the biggest problems with reddit is that in comparison to old-style forums it's so ephemeral. Other than the (maximum 2) stickies and sidebar links there's little in the way of permanent material in any subreddit, so you end up having the same introductory surface-level discussions again and again and again.

Reddit is a social news website first and foremost. It's not good at discussions at all.


This is stupid. Facebook, Reddit, and Twitter are trying to make the internet this beautiful, family-friendly place. This is bullshit; we as humans are toxic. This is the way we function. This is the way we react to things. Censoring all of this might affect things we should not censor. Fuck family-friendliness if it has a chance to affect free speech.


> which brand themselves as strongholds of free speech and in practice are often used for hate speech.

Um, hate speech is free speech.


Hate speech is used to silence people. Hate speech is not free speech.


You're referring to threats, which are not free speech. Hate speech is free speech.


As an anthropologist, I find the way social media has evolved since its dawn to be incredibly fascinating.

The original point of social media was to bring people together and build communities. Now, more than a decade removed from its inception, it is doing the exact opposite. It's tearing people's communities apart, making people more polarized, and driving huge wedges into the very foundation of this country.

And yet, with all the privacy concerns, people continue to use it more and more, seemingly unaware of the consequences of doing so. It's like a train wreck you see coming a mile away. Yet you continue to stare, thinking or hoping it's going to get better.


> The original point of social media was to bring people together and build communities. Now, more than a decade removed from its inception, it is doing the exact opposite. It's tearing people's communities apart, making people more polarized, and driving huge wedges into the very foundation of this country.

"Corporate" Social media are businesses, they are not here to bring people together but to sell influence. Businesses like Google,Twitter or Facebook have been very good at that. Social media are no different than TV or any ad sponsored business. The customer is the advertiser. How do you think these businesses are making billions without their users paying anything?

Venture capitalists decided to support these business models with massive amounts of money, allowing them to reach critical mass. The whole "bringing people together" thing was just a marketing ploy during the growth period, in order to acquire an audience. But that's not how VCs got their return on investment. Remember when Facebook was a website acting as a private network for families and friends? They quickly pivoted away from that because it doesn't make that much money.


With the benefit of hindsight, I think “free-form platforms” like reddit or youtube may be the road to mediocrity.

Youtube is still the major player even though twitter, Facebook & netflix grew around them. It is less opinionated on how video should be consumed. Less opinionated on what should be consumed. Anyone can post. Anyone can see it.

Problem is, youtube doesn't really have a purpose that it is trying to serve - at least not an obvious one. It has a lot of content. It's searchable (I hate that the web is no longer searchable...). It's the obvious place to put stuff. It's everywhere. But…

The platform does not distinguish between the lazy mashup with a clickbait headline and the youtuber making original content for people who would really miss it and could not get it anywhere else. No preference. No values. No taste. No goal. Just views. One piece of content is as good as the next. Monetisation reinforced this, as did the partial rollback.

Go to the youtube homepage now. Is the content good? Read the comments. Is the conversation interesting?

In the middle is Facebook. It started off highly opinionated. Use real names (big deal at the time). Connect to your real friends. Watch their videos and read their posts.

Over time though, Facebook has lost opinions. Stuff is stuff. Views, likes, pokes, plays and comments. Features that get users doing more stuff, they win. This caused their political pickle. Political baiting gets lots of views, likes, pokes, plays and comments. So, as they “optimized” for more stuff, that’s the stuff they got.

On the other end of the spectrum are wikipedia and stack overflow (for example). Highly opinionated. Wikipedia is for XYZ. Stackoverflow is for ABC. If you want something else, go someplace else.

HN is opinionated too. It takes effort to avoid comment threads full of bullbaiting, and even more to avoid a flood of one-liners.

So Reddit… Reddit did avoid uniformity; that's a nice way of putting it. It has values. It has things (e.g. censorship) that it does not want to do. But I feel it still lacks purpose. What does Reddit actually want - what's the ideal comment or post or subreddit?

Personally, I would love more HN-like places, more wikipedia like places, places with a job to do.

gdubs has an excellent comment about what Reddit is doing wrong (promoting impulse gut reactions), and I suspect it's related. When you don't have an opinion about what you are trying to do or be, you'll tend to end up with whatever is cheap. It happened to youtube and it happened to Facebook.


I don't know about youtube. There's a lot of really interesting, useful stuff from people who just want to share a bit of what they know about some niche topic. If you know what you're looking for, youtube is a great resource. But if you're just trying to browse to find fun stuff in your free time, then I guess it's not as useful.

I agree that a lot of the bigger channels get worse as they make videos primarily to get views. Although I think this is true of most content producers: You only have so much good material, but it's hard to quit when there's still money to be made at the margins.


I don't disagree, but I think we might be talking past each other.

Youtube definitely does have a lot of really interesting, useful stuff. There's some stuff that's fantastic. That's the stuff I want them (youtube) to value more highly. Some (most) of it is interesting and useful to people who aren't me. I'm not even commenting on youtubers disingenuously leaning into controversy or whatnot to get views.

What I'm saying is that there is a lot of stuff that isn't. Youtube does not seem to have an opinion about what is good or bad, at least not beyond view count. Have a look at their homepage. Is it good? Does it give a special place to the things youtube does uniquely well? To me, it just looks like a random collection of videos and a lowest common denominator theme.

Structuring youtube as a video db, with search... that's great. I like that. It means you can narrow down to the point where the lowest common denominator is relatively high, but I feel like youtube would prefer me to just watch some scandal outrage news clip or a show clip or something.


The principle that mass platforms breed mediocrity is an old one.

Dwight MacDonald's "A Theory of Mass Culture" (1953) is some decades old, but still among the more recent additions to this field.

https://is.muni.cz/el/1421/jaro2008/ESB032/um/5136660/MacDon...


Algorithmic/centralized vs Human/community moderation is the largest divide between Facebook, Twitter, and Google vs. Reddit, HN, Twitch, Mastodon, etc.

I’m thoroughly in favor of the latter. Algorithms can be used as tools to aid human moderators, but context, discretion, and diversity are important. In the case of bad communities, you have a much more tractable problem in terms of tracking/banning/avoiding what goes on - as both a platform and as a user.


I just started reading a book that got me thinking about this - Conspiracy: Peter Thiel, Hulk Hogan, Gawker, and the Anatomy of Intrigue [0]

If you can remove just a couple of the worst actors from the internet, does it have an outsized benefit? Are those people defining "acceptable behavior" and by example giving more reasonable people permission to behave that way? Interesting questions regardless of the specifics of the Thiel/Gawker case.

[0 affiliate]: https://www.amazon.com/Conspiracy-Ryan-Holiday/dp/0735217645... [0 non-affiliate]: https://www.amazon.com/Conspiracy-Ryan-Holiday/dp/0735217645


A lot of commenters here are mentioning that better education could help solve this problem. I agree.

What if there were a new type of "teacher" who replaced or augmented regular mods on sites like Reddit?

Universities strive to hire professors who will teach students to be open-minded, consider the sources of information, and just think better in general. The prestige of each university is largely based on this factor. We tend to measure this by looking at the research accomplishments of the professors, or by looking at how many great thinkers/leaders come out of those universities.

What if sites like Reddit did the same thing, and tried to build their prestige based on excellent moderation? Could they be measured by how many great ideas/movements come out of the platform?

How do universities prevent the "echo chamber" effect where a few mods or a few users can take the whole conversation in one direction?


"How do universities prevent the "echo chamber" effect where a few mods or a few users can take the whole conversation in one direction?"

They turbocharge group think. Probably not the best example of an open minded community.

A distant cousin of my wife went to a seminary; he claimed that, within a relatively narrow range of beliefs, there was a lot more intellectual freedom than at universities - along the lines of teaching how to engage devil's advocates and heresies, and how to convert non-believers. More of a carrot-and-stick approach than the university's, which is solely stick-style - who can most emphatically say "I can't even," and so on.


One more opinion for the heap?

People are too sensitive about seeing their snowflake opinions validated. Upvote and downvote do not map to right and wrong. What makes an opinion popular has as much to do with rhetoric and appealing to a million fallacies as it does to facts or logic. This is human nature, not an inherent problem with the internet.


Social media sites are learning the hard way that they're media properties like any other media property, and that they have to have strong editorial control over their media property to be a proper media business.

Reddit is a hive of working-class populism, which is incompatible with any advertising-oriented business. Advertisers don't want their ads next to shitty toxic content. They want their ads next to elite, well produced content. You think Calvin Klein wants their beautiful fashion ads next to a photo of a dead body?

Sure, they can get 10-cent ads for your neighbor's garage sale to place next to the photo of a dead body, but to get the $5 million ad buy from Procter & Gamble, they'll need editors to raise the quality of their content.

Detoxification is central to any ad-based business.

How quickly they learn this is the question for Reddit.


Reddit initially rose to prominence because of bad decisions on Digg's part. Reddit is nothing without its users. Any decision that potentially drives users off its platform can't be made lightly.


As a media business, they can be a lot more profitable with fewer high-quality users than hordes of low-quality toxic users.

A fashion magazine only has 1 million subscribers, but can make $400 million/year in revenue, basically printing money.


> As a media business, they can be a lot more profitable with fewer high-quality users than hordes of low-quality toxic users.

I see no evidence of that.

> A fashion magazine only has 1 million subscribers, but can make $400 million/year in revenue, basically printing money.

A fashion magazine doesn't outsource its content production to its subscribers.


> I see no evidence of that.

I would encourage you to look at the evidence.

> A fashion magazine doesn't outsource its content production to its subscribers.

And?


>>> As a media business, they can be a lot more profitable with fewer high-quality users than hordes of low-quality toxic users.

>> I see no evidence of that.

> I would encourage you to look at the evidence.

I would encourage you to provide it.


What fashion magazine does this?


Magazines like Vogue, InStyle, Harper's Bazaar might sell 2,500-3,000 ad pages a year, each page costing $100k-$200k. $100-$150 CPM is typical.

It helps to know that fashion is a $3 trillion industry.

Outside of fashion, you can look at the financial industry that can also charge $100+ CPM.


> Reddit is a hive of working-class populism

Is this . . . I don't even know how to respond to this. Have you ever spent more than three seconds with a working-class person?

I can assure you that cat memes and intersectional feminism are not mainstays of the working class.

> which is incompatible with any advertising-oriented business

Tell this to the ad agencies selling Busch and Coors Light.


> cat memes

Do you not have any working class friends on Facebook? Compare their meme posts to your elitist friends.

> intersectional feminism

This is the discussion about Reddit, not Tumblr

> Busch and Coors Light.

Have you ever seen them advertise in anything other than pristine media property?

I would encourage you to look at people in the real world, instead of basing your beliefs on your own online behavior.


It's not that the "trolls are winning", it's that people are allowing the trolls to bother them. Trolls have always existed; it's our heightened sensitivity and inability to just shrug them off or laugh in the face of their obscenity that's letting them "win".


Yeah this is bullshit. Trolls have always existed, but shrugging them off was never a solution.

In message boards I used to frequent, trolls were suspended without question and banned for repeat offenses. Now, when trolls get banned, there is an outcry from the troll, and from those in line with them, about censorship and violation of their free speech.

The problem is trolls are given too much room to play and speak.


The conflict is that reddit originally touted itself as a meta-community, where such moderation was applied per-subreddit. If you didn't like the topics/policies of one community, then start another right alongside.

But the desire of investors for widespread palatability and the media's latest push for censorship have perverted the site into creating unified "community standards", across what should be considered independent communities.

Reddit itself gained much of its popularity due to the mass exodus from Digg over their censorship of one simple number! Users inherently do not want to be censored in what they can communicate about, and so the cycle will be with us until we finally scrap this hack of using centralized websites in lieu of end-user software - centralized structures can never remain free of top-down control.


The other problem is that several subreddits were known to "leak," where the subscribers to some subreddits would go out and spread that kind of thing across the rest of the site. If the racist content stays in their subreddits, it's still terrible that it's there, but at least it can be firewalled off. But when the users of those subreddits start spreading that content through the rest of the site, it's much more difficult to avoid it.


For me the issue is the broad application of the 'troll' label and liberal banning. I was all aboard the detoxify train until I was banned from a subreddit where I had posted for five years.

I didn’t know for what comment or behavior. Messaging the mods, I was told that it was obvious what comment, and that I was a troll.

This was confusing to me. I never went back. Now I am skeptical of labeled trolls unless I can assess behavior directly.


> Yeah this is bullshit. Trolls have always existed, but shrugging them off was never a solution.

> In message boards I used to frequent, trolls were suspended without question and banned for repeat offenses.

That must be selective memory, or at least not generalizable: on some pretty major message boards (e.g. Slashdot), trolling became a prominent subculture.

In fact, one of my major memories of numerous early message boards is that trolling was an integral component of the forum culture. Trollish things would frequently be said, and you spotted the newbies and outsiders based on how they responded. As you learned the culture, you'd learn not to get trolled, and maybe occasionally troll yourself.


> The problem is trolls are given too much room to play and speak.

That's the problem with your mindset right there.

We are talking about people. To decide that other people cannot say things you do not like to hear is to deny them their liberty. That is clearly worse than "trolling".

Getting offended by a person's words or actions is not them doing something to you, it's you doing something to them - or rather, to yourself.

So if you can't - or shouldn't - compel other people to think and act in a certain manner, what do you do?

The answer is simple: show them the door.


No, trolls have not always existed in the form they've taken since the emergence of the internet. Self-censorship is a wonderful thing, and the way it works is: you say dumb things (especially as a kid), you get slapped by your parents/friends/people around you. By age 18 or 21, you know instinctively what you can and cannot say around other folks. At least the vast majority of folks do.

On the internet however, there's no real sense of human interaction and the repercussions are usually minimal.

Real life trolling is probably 1/10k of internet trolling.


Ok so some random person is saying something you find offensive or hurtful. You can block them so you don't have to see what they post.

It's a decentralized solution to a decentralized problem.


Not everyone finds the blocking option in time.

Not everyone has the self control to avoid the comments in the first place.

Not everything you've seen can be erased and forgotten (there's a reason NSFL exists).


You don't seem to have any clue about how prolific trolls can be, and how persistent their threats are.


I would encourage you to look up the GamerGate scandal from a few years ago, and update your knowledge on the topic of online trolling/harassment.


So your answer to people you don't like is corporal punishment.

Do you honestly believe that is the best method to improve discourse?


> So your answer to people you don't like is corporal punishment.

A) Who said it's my answer?

B) Just because I listed corporal punishment, it doesn't mean it's the only form of punishment used.

That's how society works, especially for kids. Kids are jerks to each other. After a childhood of interacting with jerks, we learn what we can and cannot say. We react to stimuli, and a huge chunk of those stimuli is negative, i.e. a form of punishment.

Does it improve discourse? I believe it does - not all of it is negative. It forces us to learn and to better ourselves.

Obviously it shouldn't be the only form of interaction. You also need the carrot, not only the stick.

But the internet proved that the "stick" is there for a reason. You either have a "carrot" for everything (impossible, and also doesn't work, since people get used to rewards and become desensitized to them after a while) or you need at least the threat of a "stick".


Ah, the old "We just need to change human behaviour" solution. Yeah, that would definitely work if it worked.


Stopping trolling is the same. People will always troll. If there’s a text input on a site, it will get trolled.


When's the last time you saw someone smoking a cigarette?


10 minutes ago? I get your point though, smoking has gone down. Would you be open to raising taxes on trolling?


No one needs to change anything. The internet existed for years without this being an issue.

If you encounter someone on the internet who is annoying you, most platforms give you the option to block them. You do that, then move on with your life. It's not hard.


That is an incredibly naive view of harassment in an era of doxing, swatting and revenge porn.


The internet did not exist in its present form, extent of influence, flexibility, etc., without this being an issue for any length of time. Don't lie to yourself.


And how do you do that when they come at you by the thousands?

And do so in a way that doesn't boil down to a blame the victim solution of simply leaving the platform?


Trying to get everyone to not be bothered by trolls is a massive and neverending undertaking.


It's also impossible. You're not going to have a close conversation with a group of people when a little kid is jumping up and down and screaming for attention in public. Ignoring them doesn't work for either trolls or bullies. Either they run the show, or you do something about them, end of story.


> Ignoring them doesn't work for either trolls or bullies.

This goes against decades of received wisdom: DNFTT, anyone?

For what it's worth, I agree with you, and I always thought simply not feeding trolls was pointless: There's something similar to a broken window effect in terms of overall tone. If someone comes to a site, sees a lot of negative comments and general asshattery, they'll file that place away as "where the asshats are" even if those comments are being studiously ignored by the regulars. That means the only ones who want to comment there will be the ones who want to act like what they see around them: asshat trolls.

I also think that ignoring them until they go away is a bad strategy: Even if you ignore one or two of them successfully, there's dozens if not hundreds of them waiting to join. You can't outwait them all without the community degenerating due to the broken window effect I mentioned above.


I kind of took your first sentence and started replying to it by saying roughly what you said afterwards - I thought that was your main point! My typed reply:

> Do not feed the trolls only solves the problem in smaller communities, or at most just holds off the effects of the troll (topics spinning out of control, pointless arguing, accusations and recriminations) until they can be moderated. It's not a solution for a community, and never has been.


DNFTT worked in an older era.

It doesn't work now, with the diversification of the net and its use by vulnerable groups.

Having users on a generalist forum suddenly exposed to flashing lights meant to induce a seizure, or ambushed by images of dead people, or attacked for being women or members of a minority group?

Yeah, you cannot expect people to shrug that off without also expecting a large mass of humanity to be essentially living without a normal emotional response.


But policing the expression of wide (and fluidly defined) swaths of thought that some believe to be "toxic" isn't?


I just don’t think saying “people shouldn’t let themselves be bothered” is at all a viable solution. Automated troll detection is clearly a hard problem, but I’m not convinced it’s unsolvable. Whereas getting people to not be bothered is, unless people start living forever and have no offspring, by its very nature a commitment without end.


Yet much more modest than the policing of content that Facebook and the like seems to be interested in.

That goes double when it’s two clicks to permanently dismiss someone from your attention. The best troll repellant is and always has been to ignore them.


Back 15-20 years ago, forums didn't like personal abuse or being off-topic. Now, the problem seems to be "dangerous ideas" or "might influence people to believe something I don't". It's changed from staying reasonable to trying to influence real world politics by silencing dissent. Why not have freedom of political ideas on a political forum? Stop worrying about influencing voters in the "wrong" way and meddling in elections and all that nonsense.

Even on HN, there are "wrong" ideas that you can't even hint at, even when they're on topic and you're being non-abusive. It's dominated by aggressively enforced political opinion.


I'm an old BBSer. I was on message forums seeing the entire array of so-called problems of today, which people summarize as "toxicity." I saw the full gamut of what can happen, including court-ordered restraining orders, personal harassment in real life, fights. But I also saw communities of what were (in those days) teenagers and young adults (sometimes older adults) finding self-moderation through trial and error and experience.

We were anonymous in those days and no, it wasn't more toxic, it was less. I credit those days with teaching me to learn to write correctly, to learn a style of approach to the world which is a mix of rational and skeptical, and with being exposed to numerous philosophical and ethical viewpoints that I would NEVER have been exposed to in the actual toxic environment that I experienced as "high school in America."

Now, this was in the mid 90's. Fast forward to the Internet revolution, we start to see the beginnings of message forums online. Fast forward a little bit, we're shocked to find an extremely low level of intelligent engagement, and a high level of emotive hissy-fitting, anger and knee-jerking. Worse, there's a set of interests and focus areas that belong to the masses. My BBS friends and I quickly abandoned our years-long dedication to messaging. It was finito.

Because suddenly, a computer nerd niche was exposed to the masses, and the interests of the masses overwhelmed this niche, to the point that intelligent message forum discussion boards became extremely hard to find. RIP intelligent message forums (that I could uncover, anyway), circa early 2000's.

Now, to me Reddit is the pinnacle of that. I've never written a single item on Reddit but I've read hundreds of posts. And to me they are:

-- Very short, one- or two-sentence one-liners: me-too's, "this!", etc. Blanket dismissals. There's nothing inherent about anonymity or any other element of the medium itself that leads to that.

-- Controlled by autocratic "mods" who delete or otherwise punish views that aren't held by them. The political ones are classic examples. Newsflash, guys: there's nothing inherent about politics that should lead to that, and it's not the medium that leads to that degree of censorship. Echo chamber my butt - that's not an echo chamber, that's top-down dictatorship. And evil at that. An echo chamber is when people self-select (in my view) and bounce and reinforce each other's ideas.

-- Emotiveness. This is number 1. In any discussion, whether discussing a muscle pain in one subreddit, Trump in another, or altcoins in a third, what you find is people viewing the message medium as a chance to express their feelings. And so when one person disagrees, it's almost an attack, or a failure to acknowledge the other's feelings, if you look at this psychologically. But it's perceived as "toxic." What you call toxic is a manifestation of people who are not USED to rational back-and-forths - they didn't qualify for the debate team, if you know what I mean - and they think that it's about expressing personal feelings. Almost like they are VOTING on what is true and false.

Folks, I assert this: it's not the anonymous aspect. It's not "toxicity." There's no "cure." There's only people, and levels of education and intelligence (yes, I'm sorry, intelligence plays a role here).

I was going to talk about Facebook but I deleted the paragraph as it's a whole other set of variables in addition to these.

If there's ANY cure it's education. Teach people what argumentation is - ad hominems being the first thing they should learn is naughty. Next, a code of conduct for MODERATORS. If you have moderators who behave like the Stasi, intercepting private correspondence and throwing out the bad ones, that should be number one on the hitlist.

It's about (a) education on argument logic and (b) ethics on what moderation is (and isn't). Again, my BBS peers learned this when we were 13-18, gradually.


The article reminded me of another proposal for how to fix reddit: http://chuqui.com/2015/07/fixing-or-replacing-reddit-some-qu... I see problems with this, but there are some valid points. Maybe mastodon.social is heading there:

So I wouldn’t host the stuff. But I could build an easy to install environment that would be a standardized system that could be installed on effectively any hosting site. Start with WordPress, WordPress’s P2 theme, a forum plug-in, the Disqus commenting system and a couple of weeks of hacking some custom work, and you’d have something that could be easily installed and run by a non-geek on any hosting service that supports WordPress.

There are big advantages to this: If someone really wants a topic to exist, they can get it going for well under $100 (including domain name) and keep it running for $10-20/mo. Most of these sites will be very low traffic and a lot of them will in fact be pop-up and collapse as people figure out running sites is work and audiences don’t appear by magic — but the good ones will thrive and grow, and for most of these, it’ll be cheap enough to operate that most people can run them out of pocket. By building it as an independent site, though, that person would have the option of doing advertising, or running a Patreon or GoFundMe, or find other ways to pay for hosting the content of the site.

It also shifts the liability for the existence of the content to the owner and host of the site and away from the central authority. If that person wants to find an offshore hosting service that doesn’t care what the content is, that’s up to them. So you’ve removed the need for a central authority to have to censor to protect its own interests.

You still need a way for people to find these topic-specific sites. Enter the central authority. It hosts a directory, much like the original Yahoo! directory was. Building something like this is dirt cheap and easy to host, so someone (like, ahem, Reddit) could do so at low cost so that it doesn’t have to host those subreddits any more but could still support them by hosting a directory of them where they could be found.
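To make the directory idea concrete, an entry could be as small as this (the fields are my guess at a minimum, not anything from the proposal):

    # Hypothetical record in the central 'Yahoo!-style' directory
    entry = {
        "name": "fixedgearbikes",
        "url": "https://fixedgear.example.com",  # independently hosted site
        "topics": ["cycling", "fixed-gear"],
        "contact": "admin@fixedgear.example.com",
        "nsfw": False,  # lets the directory filter without hosting the content
    }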


Given that reddit is a Y Combinator alumnus, did anyone at YC contemplate the current mess of the internet back when they applied? I'm curious about the questions asked back then and how they would be answered now


Reddit doesn’t owe anyone anything.

How about asking kids' parents to detoxify their kids rather than solving the problem way down the chain of command?

People are shitty online because they have anonymity. Now instead of going ahead and removing that, it might be worth going backward even more and going to the person.

People need to be nice on the internet; the internet does not need to be nice to people. Detoxifying the internet is no different from censorship. I can say with certainty that as soon as we ask companies and governments to detoxify, it will get misused, [un]intentionally.


So it's going overboard to police what people say in a public forum, but it's not going overboard to police what parents teach their children in their own home? You're more afraid of censorship in the public space than state-mandated parenting practices in private?


> state-mandated parenting practices.

I hope I can become a good parent and can teach my kids to be good people by the time I am ready. If you need the state to tell you that, you probably should not be having kids. Having kids is a privilege and unfortunately also a right.

Edit: In all seriousness, what I was saying is that we should be teaching kids to be good people rather than acting good on the internet.

Also, we should be smart enough to know that all this article is doing is spewing shit on Reddit to generate pageviews. That is all journalism is nowadays.


People are shitty even without anonymity. There are enough people on Facebook who say horrible things, or push for horrible and hurtful ideas.


I'd argue that a cross-internet reputation service would fix this problem.

1. The reputation system is affected by voting on all sites in which you participate.

2. The history of your participation across sites is viewable in your history (ala reddit, HN)

3. Your reputation is displayed with your participation wherever you participate.

Sites could then set reputation limits, only allowing users above a certain reputation to participate - increasing the cost of toxicity.

The biggest problem would be getting the walled gardens to adopt the system.
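A toy model of the idea, with per-site scores and a naive sum for the global number (all names and numbers are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class Reputation:
        per_site: dict = field(default_factory=dict)  # e.g. {"foxnews.com": -128}

        def global_rep(self):
            # naive aggregation; real weighting would be per-site policy
            return sum(self.per_site.values())

        def can_participate(self, minimum):
            return self.global_rep() >= minimum

    rep = Reputation({"foxnews.com": -128, "washingtonpost.com": 131})
    print(rep.global_rep())        # 3
    print(rep.can_participate(0))  # True under a zero-threshold site policy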


Allow me to poke holes:

- You've just created opportunities for sites who don't participate to attract customers. "No reputation requirements, we don't judge you". Great for curated niches, bad for mass market.

- People with diverging but still mainstream political views getting their reputation docked by those in the other camp who control the site. It can flow both ways. Reputation starts to get linked to the echo chamber, in that those who don't echo the same views get docked.

- Someone hacks you and starts messing up your rep across multiple sites.


A lot of other posters in this thread seem to agree that anonymous groups work better. I tend to agree. But getting everyone on the planet to switch to anonymous environments will be incredibly difficult and take a very long time due to entrenched financial interests. I could hardly get my wife to switch to Mastodon (even being able to keep her identity there), let alone my mother-in-law, or my rarely seen co-worker.

So for point 1: I'd say let "no reputation" sites attract customers. If people want to participate in completely anonymous groups they are responsible for the groups they choose and the opinions in them. Good on them.

But for the rest of the people who lack the awareness, knowledge, or motivation to seek out these communities, we have a responsibility to find a way to make non-anonymous communication less toxic.

For point 2: Knowing a user's reputation for the site you are visiting, as well as their global reputation, helps the reader make a more informed decision about the worth of that person's comments. Let's use politics. Assume a user with liberal tendencies argues intelligently with conservatives in Fox News comments, where his comments are downvoted because they diverge. He also comments thoughtfully on r/SandersForPresident, where his views are appreciated and upvoted. When I see this person's comments on Fox, I will see his Fox News score is -128 and his overall score is 3. A troll, on the other hand, may have a -348 on Fox News and -828 overall. From this I can conclude conservatives don't like the first user, but some other people do. I can also conclude the second user is a troll, or never participates in groups that share a somewhat similar viewpoint. All of which you can see in their history, where their scores for various sites are listed; perhaps you could even bookmark scores for specific sites you want to see. Of course, this is binary, and could be improved with dimensionality, but it is still better than nothing at all.

For point 3: Hell, what if someone hacks your bank account? I think a bigger concern would be someone automating downvotes of your account or upvotes of their own, which would require more thought.


Single-dimensional "karma" numbers are part of the problem.

If you allow multi-dimensional karma, then you have the problem that aggregation mechanisms are going to be site-specific (in karma systems, 1 + 1 is not necessarily equal to 2, even before considering multidimensionality), and you end up drawn back to the individual sites hosting the concept anyhow.
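To make the "1 + 1 is not necessarily 2" point concrete, here is a toy illustration; both aggregation rules are invented for the example, not taken from any real site:

    import math

    def linear_karma(votes):
        # Site A: every vote counts fully, so 1 + 1 = 2.
        return sum(votes)

    def dampened_karma(votes):
        # Site B: diminishing returns, so piles of votes count for
        # less than their raw sum.
        total = sum(votes)
        return math.copysign(math.log1p(abs(total)), total)

    votes = [1, 1]
    print(linear_karma(votes))    # 2
    print(dampened_karma(votes))  # ~1.10 -- "1 + 1" is not 2 here

Any cross-site aggregator has to pick one such rule, and whichever rule it picks will misrepresent the sites that use a different one.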


While that would be a huge problem, also think about communities where real-life anonymity is a necessity. The recent Twitter idea to verify every user only works when people are willing to put their full name out there, but as we've seen on Facebook, that doesn't really stop anything. These are not really CEOs of respected companies; they're nobodies playing with the little bit of power they have in their lives.


So you're suggesting the Black Mirror "Nosedive" approach?


When I watched that, I thought of "Facebook Hell Universe", because I think that's what Zuck would want: a reputation system under his control, with everyone under its control.


I realize my first comment was not as detailed as it should have been; I've expanded on it in the comments.

Black Mirror is in the same vein, but a different concept:

Getting loans or buying cars via a single reputation score is different from giving people a tool to curate the content they wish to consume.

By having access to a "user's reputational history" across multiple sites, you can see where their reputation is coming from. Then you, as a consumer of internet media, can choose which site reputations to put value on, letting you curate and avoid toxicity, or live in your echo chamber if that's what you want.

It's like asking a couple of friends if they'd loan their weed whacker to the new guy down the street; they say he acts kind of shady but has always brought back their garden tools. Same thing: you ask users of a couple of sites how this other user has acted. They say he made some jokes, but also had some valid points here and there.

We don't then say, "Oh, they made crude jokes; charge them an extra $1000 for rent."


Even if you can make that work, it will only lead to the tyranny of the majority. You're filtering for lowest-common-denominator opinions. I wouldn't want to be part of that kind of sterilized community.

And how will you solve brigading? All it takes is two large groups disagreeing on something, and your reputation goes down the drain if you're in the slightly smaller group.


Your reputation will go down with THOSE people. If they refuse to agree, they refuse to agree. There is nothing toxic about that.

An account's "history" and "reputation" will be tied to each site. A liberal who posts regularly in echo chambers on both sides of the aisle will have a middle-of-the-road reputation. On Fox News, for example, you'd see next to their comments:

FoxNews.com Rep: -123 Global Rep: 38

The high global rep is due to high-scoring comments on Washington Post articles, where you would see:

WashPost.com Rep: 161 Global Rep: 38

I'd expect someone with a reputation of 0 to be an honest individual earning votes on both sides of an argument. Someone with a global rep of -1833 is probably a troll, or someone who never interacts with "like-minded" people, which could of course be seen by looking at the history of where the reputation came from.

You, as a consumer, would most likely flag particular site reps to show on all pages, so regardless of the site you're on, you'd see a user's reputation like so:

Current Site Rep: 3 FoxNews.com Rep: 283 WashPost.com Rep: -87 Global Rep: 221

And you could, of course, dig into their reputation history, and even hide or star particular users you notice are trolls or noteworthy to you personally.
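As a sketch of how that badge line might be assembled under the same hypothetical scheme (the site names and numbers are lifted from the examples above):

    def rep_badge(site_scores, current_site, flagged_sites):
        # site_scores: {site: rep} for one user. flagged_sites: the
        # sites this reader has chosen to always display.
        parts = ["Current Site Rep: %d" % site_scores.get(current_site, 0)]
        for site in flagged_sites:
            if site != current_site:
                parts.append("%s Rep: %d" % (site, site_scores.get(site, 0)))
        parts.append("Global Rep: %d" % sum(site_scores.values()))
        return " ".join(parts)

    scores = {"news.ycombinator.com": 3, "FoxNews.com": 283,
              "WashPost.com": -87, "other.com": 22}
    print(rep_badge(scores, "news.ycombinator.com",
                    ["FoxNews.com", "WashPost.com"]))
    # Current Site Rep: 3 FoxNews.com Rep: 283 WashPost.com Rep: -87 Global Rep: 221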


My intuition screams that this is a bad idea. It's like an internet credit rating and a popularity contest combined.


That will only work if you insist on tying the reputation to identities.

Even then it probably won't stop people being awful (there's plenty of awful stuff posted by identifiable people). But if people can be pseudonymous, it won't work at all.


Absolutely! You don't have any contact information here, but feel free to contact me offline if you wish to collaborate on a piece about this topic or just to share some ideas. You can reach me at petermcneeley@hotmail.com


I think, somewhat counterintuitively, the ability to "anti-like" or "Report as bad take" could actually make the internet a much more positive place.

The insulation from negative perceptions of those outside your in-group causes polarization and rewards unpopular but extreme opinions.


This, to me, sounds like the zombie apocalypse we were promised.



I'd take toxicity over censorship any day. The media machine and politicians have been pushing hard for the latter recently.

I wonder when being toxic will become illegal.


What are your thoughts on requiring or at least encouraging verified accounts - somehow putting a verified name behind the poster?


I would argue we don't need to detoxify the internet; stopping trolling is missing the point. However, we should stop the spread of misinformation. In fact, sites like reddit and facebook rarely create information; they just consume it blindly without questioning the source.


I'm less sure about Facebook (as I'm rarely on it), but there is a large amount of information created (or at least initially disseminated) on reddit. Places like /r/DIY, /r/woodworking, and countless others are chock-full of original content and helpful users willing to answer questions and get into a dialogue down in the weeds.


> Struggle to Detoxify the Internet

The internet isn't toxic.

It's the people who are toxic.

These people are just as noxious in real life even if some of them hide it when their identities are known.

Can anything be done to detoxify the people, or do we just treat them like spam and filter them out?

And what happens next when millions of rabid voices are suppressed?

The toxic people don't cease to exist, we just won't be able to see them as well.

Perhaps we'll find the social and political environment of 2022 to be much darker and more dangerous than that of 2018.


> It's the people who are toxic.

I half-agree with this. The internet doesn't make a genuinely kind and empathetic person into a troll. But I think it does amplify certain bad little impulses that are latent in pretty much everybody -- the temptation of quick and cutting putdowns, mob and tribal mentality, lobbing rhetorical bombs then ignoring the consequences...

There is a qualitative difference between a back-and-forth on Twitter (or even HN) and a back-and-forth in real life. There's way more trust in good faith in the latter.


> These people are just as noxious in real life even if some of them hide it when their identities are known.

I don't think so. A lot of people, and this is especially true with Twitter, get noxious because they've found an audience there to "amuse" and entertain, which might amplify the temptation for "troll behavior", especially when that audience is of the same political/cultural leaning as the speaker. Social media inflate egos. Some people who might feel insecure in real life find a community there where they can feel like someone, not by doing something positive but by being mean and condescending to whomever they deem their enemy.

However social media didn't create these divisions, they just amplify them.

IMHO Twitter is a proof that even with real identities, people will engage in toxic behavior provided they feel supported by a large audience. Of all the social media I found Twitter to be the nastiest of all.

By contrast, aside from some brigading, Reddit communities are often isolated and self-contained and don't "leak". It takes a user actively going to a sub to see its content, while Twitter is constantly pushing stuff at its users, even the nastiest of it.


It's not Little Boy that created a runaway supercritical explosive nuclear reaction, it was uranium that created a runaway supercritical explosive nuclear reaction.

Except that that uranium had to be mined, refined, separated, shaped, and placed into a gun-type fission physics package.

The toxic people you mention existed before Reddit (and Facebook, Twitter, YouTube, or other various online media). However they weren't manifesting the same effects as we're seeing now, for the most part, until Reddit (and Facebook, Twitter, YouTube, or other various online media) came along.

It's not the components that have changed, but the relationships, vectors, and mediation between them.

We could, of course, note that there were earlier periods in which we saw similar types of situations evolve. And I could respond in turn that many of those instances were themselves the result of changes in the media landscape: AOL and Usenet, 24-hour cable television news, talk radio, FM radio, CB radio, handheld megaphones, Xerox machines, mimeographs, television (terrestrial, cable, and satellite), radio, large-scale public address systems, high-speed printing presses, widespread literacy, population concentrations in cities, railroads, and more.

And from these: the 2003 Iraq War, the Gingrich "Contract on America", the Rwanda massacre, the Yugoslav civil war, the Reagan and Thatcher revolutions, the 1960s/70s Vietnam War protests, the 1950s and 1960s Civil Rights movements, McCarthyism, Father Coughlin, Fascism and Nazism, the Russian Revolution, the Revolutions of 1848, the Chartist Movement, the French Revolution, the American Revolution.

Changing how a system receives, transmits, and processes information will fundamentally change that system itself.

The Internet is, in significant part, toxic.


> These people are just as noxious in real life even if some of them hide it when their identities are known.

I suspect most of them are rather pathetic in real life.


I definitely have to disagree with you here. Human behaviour isn't really built in. There are obviously some natural behaviours in humans, but most human behaviour is a result of our surroundings.

When humans are in material need, they act in a way differently from when they aren't in material need. Nobody is surprised by that.

When humans are on reddit, they act differently from how they act on an old-style phpBB forum. No surprises there either.


And Reddit has been toxic since day 1.

https://motherboard.vice.com/en_us/article/z4444w/how-reddit...

They were fake news before fake news was a thing.


Decent, moral people avoid people and places like reddit altogether in their everyday lives, just as they would when walking down the street.

A news analyst, on NPR a few years ago, called reddit a "Frankenstein's monster they can't control".


Does detoxify mean getting rid of mean speech?


There was a post on /r/The_Donald not long ago calling for shooting all refugees in the US on sight. Do you feel that is just "mean speech"?


People would downvote them or argue it out.

If that person were never able to share that opinion, they would never get feedback that it is an unacceptable opinion; and might therefore act on it.


Moderators of T_D prevent that. Unless you subscribe to the subreddit, you're unable to vote, and despite constantly railing against "safe spaces", they instaban anyone who goes contrary to the group.

That, and I find it impossible to believe that someone who is able to post that online wouldn't already know that it is an unacceptable position.


It depends on the discussion, and I'd need to see the link. While calls for violence are obviously illegal (the FBI steps in), there are many possibilities where the discussion could be useful (e.g., convincing people to better understand refugees).

Largely, I've been desensitized to complaints about T_D, as they typically result in hyperbole (e.g., that the election was a direct threat to people's lives).


"It depends on the discussion"

You're going to have to provide a context in which this wouldn't be a terrible and unacceptable thing to say, because I can't think of one.


The question is not whether it is a terrible thing to say, but whether saying it should be banned.

Here’s a super simple example.

Jerk: "I think we should shoot immigrants."

Non-jerk: "That's a bad idea because immigrants have souls."

Former-jerk: "Good point. I changed my mind and no longer think shooting immigrants is a good idea. In fact, I will now dedicate my life to helping the needy, and everyone who reads this thread should as well."

It would be bad to ban a thread such as the one above.


A few minutes reading /r/The_Donald shows that there is a vast difference between saying "that guy on the internet hurt my feelings" and "this is a threat to civil society."


So what does detoxify mean exactly?


> Some of the conspiracy theorists left Reddit and reunited on Voat, a site made by and for the users that Reddit sloughs off. (Many social networks have such Bizarro networks, which brand themselves as strongholds of free speech and in practice are often used for hate speech. People banned from Twitter end up on Gab; people banned from Patreon end up on Hatreon.) Other Pizzagaters stayed and regrouped on r/The_Donald, a popular pro-Trump subreddit. Throughout the Presidential campaign, The_Donald was a hive of Trump boosterism. By this time, it had become a hermetic subculture, full of inside jokes and ugly rhetoric. The community’s most frequent commenters, like the man they’d helped propel to the Presidency, were experts at testing boundaries. Within minutes, they started to express their outrage that Pizzagate had been deleted.

This is a critical thing that we as a society need to recognize about censorship and political correctness. When we censor people or ostracize them for saying things we don't like, even when what they say is legitimately awful, these people don't disappear, change their minds, or stop voting. They go elsewhere, become more entrenched in their awful beliefs, and because we've pushed them all together, they become more united. They become stronger. And because they become stronger away from us, we don't notice, and we're blindsided when they flex their political power.

Underlying this mistake are some important truths:

1. People who say hateful things are human beings worth engaging with. No, I don't like reading a lot of what people say. It's easy for me to forget that they are human beings with their own struggles and traumas that cause them to believe the awful things they believe. When we dismiss them as trolls, we're dehumanizing them. Such a society doesn't leave room to be wrong and learn--if you're wrong, you're dismissed--and it dismisses the people who are the most dangerously wrong. It's not our responsibility to educate people, but that's irrelevant to the fact that if we don't educate people no one will. We need to engage people who believe awful things, try to understand what needs cause them to believe those things, and try to address those needs with compassion and courage. Truth is the antidote to hate.

2. Free speech doesn't just matter in a legal context. Free speech is protected in the US constitution because it's important in a free society. If we're going to let the discourse of our society move into privately-owned platforms like Reddit/Facebook/Twitter instead of publicly-owned platforms like street corners where newspapers are sold (or, the rest of the internet) then we have to value free speech on those platforms as well.

Too many people are stuck on the idea that the Trump election was an anomaly, that in November Congress will change and in 2020 we'll have a new president. I see no reason to believe this will happen. We have changed nothing about our behavior, and we're hoping the ones who elected Trump will change.


>2. Free speech doesn't just matter in a legal context. Free speech is protected in the US constitution because it's important in a free society. If we're going to let the discourse of our society move into privately-owned platforms like Reddit/Facebook/Twitter instead of publicly-owned platforms like street corners where newspapers are sold (or, the rest of the internet) then we have to value free speech on those platforms as well.

I think this comparison makes little sense. Either you're comparing newspapers and social media platforms or you're comparing the street corner and the internet. The powers that be should not be censoring the street or the internet. But the owners of newspapers have always exercised some level of control over their content, just as social media platforms exercise varying levels of control over what is posted to them.

I don't like it, but I didn't like newspaper control over newspaper content much either. Expressly ideological newspapers that are more like newsletters are one thing, but newspapers that purport to be objective and are not are quite another.

I don't see echo chambers on reddit as being much different from the echo chamber that is the University [Insert Here] Society's newsletter. Echo chambers have always existed. People sometimes just want to surround themselves with opinions they agree with.


> Huffman can no longer edit the site indiscriminately

There's precisely zero proof of this.


Wasn't there a case where he did precisely that by editing a post of a Trump supporter?


This sentence in the article is in response to that incident, claiming they've removed that ability.


He made the change in the database. They didn't remove functionality.


It's described in the article...

EDIT: I misunderstood OP; I thought the comment was saying there was zero proof of Huffman's ability to tamper in the first place. Agreed, there's no way to prove he can't do this anymore.


Mind quoting the passage that describes the measures in place? I just read the entire article and absolutely zero proof is provided that Huffman can no longer edit things.

It mentions Spezgiving, followed by Huffman's apology, then moves on to banning subreddits, to the meeting about banning subreddits, and ends at /r/Place and the worry that it'd be nothing but swastikas. Absolutely nowhere does it describe measures in place to prevent Huffman, or anyone else at Reddit for that matter, from editing posts.


Thanks for clarifying, I misunderstood OP's comment, just updated my own.


Obviously not, there is no way to prove it.


i.e., you can't prove a negative.


Tampering can be detected with internet archives and data dumps of Reddit submissions/comments.


Anybody with unrestricted access to a SQL server can manipulate whatever they want.


And how many people are running bots/scrapers to do this? Also, how do you know whether the OP edited it or an admin did?


There are multiple sources of internet archives and reddit-specific archivals.

Re: edits; check whether the post's sentiment changes. I'm not worried about admins fixing typos/grammar.

Also, investors understand trust is a prerequisite to profit... editing shenanigans are intolerable.
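As a sketch of what that archive-based check could look like, assuming you already have an archived snapshot and a live copy of the same comments (e.g., from periodic data dumps); the similarity threshold is an arbitrary stand-in for real sentiment comparison:

    import difflib

    def flag_edits(archived, live, threshold=0.8):
        # Both inputs are {comment_id: body_text}. Returns the ids
        # whose text changed substantially between snapshots --
        # ignoring small typo/grammar fixes, per the point above.
        suspicious = []
        for cid, old_body in archived.items():
            new_body = live.get(cid)
            if new_body is None or new_body == old_body:
                continue
            ratio = difflib.SequenceMatcher(None, old_body, new_body).ratio()
            if ratio < threshold:  # large rewrite: possible tampering
                suspicious.append(cid)
        return suspicious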


Toxic statements = statements from a different political orientation

Why are people so willing to embrace Orwellian doublethink? My prediction is that such a sword will decapitate undeserving people on all sides of the political aisle.


"Toxic statements = statements from a different political orientation"

Wrong. Saying so is dangerously minimizing the problem.


Have you been following what’s going on in the UK with their anti-“hate” speech laws?

Tweet something that someone decides is offensive to them, get a visit from the police? I’d say it’s a very rational concern that these regulations would be weaponized against “wrongthink”.


I'm not saying the UK's way is the way to go. But trying to handwave away the problem of toxicity with "it's just people who can't handle differing opinions" isn't right either.


The number of actual prosecutions or even visits from the police is tiny, and when you look at the actual cases they tend to be racist death or rape threats.


"Arrests for offensive Facebook and Twitter posts soar in London" [1]

"Arrests for 'offensive' Twitter and Facebook messages up by a third" [2]

"British Police Arrest At Least 3,395 People for ‘Offensive’ Online Comments in One Year" [3]

"Police arresting nine people a day in fight against web trolls" [4]

"More than 3,300 people were detained and questioned last year over so-called trolling on social media and other online forums, a rise of nearly 50 per cent in two years, according to figures obtained by The Times.

About half of the investigations were dropped before prosecutions were brought, however, leading to criticism from civil liberties campaigners that the authorities are over-policing the internet and threatening free speech...."

[1] - http://www.independent.co.uk/news/uk/arrests-for-offensive-f...

[2] - http://www.theregister.co.uk/2016/06/02/social_media_arrests...

[3] - http://www.breitbart.com/london/2017/10/14/british-police-ar...

[4] - https://www.thetimes.co.uk/article/police-arresting-nine-peo...


None of those articles mention the content of the posts.


There are a few examples in there. Here are a few more [1]. Mostly it's ill-advised mouthing off, or blowing off steam, as was the case with the guy delayed for hours at an airport. These are not "true threats" or incitement, which are charged differently.

[1] - https://www.makeuseof.com/tag/killallwhitemen-twitter-trolli...


Dangerously minimizing? What's dangerous is a culture addicted to simulating political action by managing online comments. Instead of putting resources into making your local garden great, to mangle Nietzsche's Ecce Homo, people get worked up over actions they have no control over instead of working locally to change conditions where they could have an impact.

And you lack imagination and historical hindsight if you think the concept of "toxicity" can only be wielded by whatever you consider to be the forces of light. So to speak.


"Dangerously minimizing?"

Yes. Pretending that the problem of toxicity is just people who can't handle differing opinions is to be willfully ignorant of the problem. As such, a reasonable conversation can not be had.

"And you lack imagination and historical hindsight if you think the concept of "toxicity" can only be wielded by whatever you consider to be the forces of light. So to speak."

Nowhere did I say that in the least.


You're spoiled beyond belief if you view "toxic" comments online as a serious problem. If it truly is a problem, then the larger problem is tying your emotions to what anonymous people say. You live vicariously through an image of how offended someone else might be, instead of tending to the garden you can actually protect and uphold.

Do you think the poor bastards in Skid Row feel better because some (literal) white knight typed a defense against toxicity?

Do you think someone... let's say Tim Wise! Do you think Mr. Wise has actually interacted with black communities while he lives in a 98% white neighborhood? Do you think Mr. Wise would help some homeless junkie who's trying to figure out how to work his phone so he can message an estranged daughter on Facebook? That's part of what I did in Skid Row. There are issues of knowledge, skills, and plain diet that would offer much better benefits than speaking out against some notion of "toxicity" created by people who have no skin in the game, bleating their stupid moralism.


Sorry, but this idea of "sticks and stones" hasn't been true for a long, long time. You may be able to shrug off terrible, racist, bigoted slurs hurled at you, but people who have that slung at them day after day after day might not be able to, and I'm not going to blame them for not wanting to take it anymore.


Then why are they on the racist slur subreddit? Same with Twitter, you don't have to follow people who insult you, or the people who retweet the insults to you.


Why are you assuming that stuff stays in its little corner? It doesn't. Especially with Twitter, it's very easy for anyone to message that kind of thing to you.


You could argue that toxic comments on the internet are the only reason the current president got elected in the US. Should said president lead us to a nuclear holocaust, or any number of other real possibilities due to his proven negligence, toxic internet comments would have a very real impact on the material conditions of citizens in this country.


Huh? What if Clinton had won and started a nuclear holocaust? Who's to blame then? You must be opposed to all political advertising for the party you don't like because somehow you're able to see that it's bad and the other half of the population isn't. That's being hopelessly blinded by partisanship.


I'm opposed to all political advertising period, and would have also considered a Clinton presidency non-optimal, though likely more stable than what we have now.

I'm fine with the downvotes, but it should be pretty obvious that it would have been Marco Rubio, or some other sanitary GOP member, who got elected if it weren't for the Russian troll brigade on the internet whipping up a frenzy for Trump.

That's my only point.


> it should be pretty obvious that it would have been Marco Rubio, or some other sanitary GOP member, who got elected if it weren't for the Russian troll brigade on the internet whipping up a frenzy for Trump

You raise a wonderful point. Personally, I don't think a "Russian troll brigade" was singularly more effective than all the other attempts to affect the election, but if there was an influence, it certainly may have been greater in the GOP primary than in the general election.

But despite having read an awful lot of election analysis, I don't think I've ever heard anyone else make this point. I've only ever seen the issue framed as Trump vs Clinton, with almost all the proponents of the "Russian hacking" narrative being on the pro-Clinton side.

Besides yourself, does there exist a group of pro-Rubio (or perhaps more interestingly pro-Cruz) supporters who believe that Russia successfully influenced the election in support of Trump? Is there somewhere online I could read more about this worldview?


You know, that's a good question. I read a lot, and I pay attention a lot, so for me the dots get connected rather naturally. There isn't much out there talking about Russian spamming taking place during the primary. I did find this, though: http://www.mcclatchydc.com/news/politics-government/congress...

That being said, my family is loaded with dyed-in-the-wool Republicans, and I remember them all being blown away by the fact that Trump was winning the primary. None of them could believe it, and they all hated him during the primary!

After Trump won, and it was him vs. Clinton, the narrative switched. They loved Trump at that point.

It was very chilling. It became clear to me that these folks were heavily influenced by some sort of propaganda because their minds changed like the wind!


The article is largely about removing flagrantly racist content (the subreddits they listed as examples were like r/KKK and r/CoonTown) and various illegal content like bestiality videos. It seems disingenuous to classify these as simply "different political orientation".


As much I disagree with racists, I believe that a healthy society must allow them to speak.

I tried to find a quote from Dershowitz’s book on defending Nazis but gave up due to paywall.

Toxic is not an objective measure for speech. But like in real life, toxicity depends on the dose. Life doesn’t eliminate toxins, it reduces them to manageable levels (or dies).

There’s also the practical effect that banning is irreversible. Toxicity is not terminal, but banning doesn’t allow for redemption.


Agreed. But consider how the people being banned will see things.

I don't suggest we cater to them, just that we understand their way of thinking and reacting.


I think you replied to the wrong comment.


Yup. I moved that one to its intended parent, and detached this one from https://news.ycombinator.com/item?id=16572878 and marked it off-topic.


[flagged]


> pro-conservative or anti-liberal

hmm... you might be on a different part of reddit than I am.


Most people are idiots. Consider that half of the population (hypothetically) has an IQ score below 100. Idiots did different things together 100 years ago, and 1000 years before that, groups of idiots would get together to raid and rape and pillage. They still do that in other places in the world. Now they come onto the internet to send hateful things and porn to one another. Most of the biggest idiots do not go on reddit. When the idiots on reddit act up, management cleans it up a little bit. Is there an issue that needs to be fixed here? I think anyone even claiming to have a solution may belong to the aforementioned group.


"Since the early 20th century, raw scores on IQ tests have increased in most parts of the world. When a new version of an IQ test is normed, the standard scoring is set so performance at the population median results in a score of IQ 100. The phenomenon of rising raw score performance means if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade."

https://en.wikipedia.org/wiki/Intelligence_quotient

You assume too much about half of the human population.


I suspected that I shouldn't have put that in my post, or should have qualified it with a statement about how completely useless a metric it is. My general point was that probably around half the planet is humping a tree as we type at our computers and pontificate about human potential and dignity, regardless of a useless score that slightly correlates with things.


IQ isn't useless. It does correlate with violence (rape and pillage?) and of course it's a powerful predictor of academic performance, future income, and even life expectancy.


As soon as it gets mentioned to make a point, it incites a completely orthogonal discussion about what it means, its validity, socioeconomic bias, racial bias, etc


The human brain is a fantastic computer. Most people are geniuses, constantly crunching stochastic phenomenal trajectories, and navigating ever-changing real and virtual terrains. The problem isn't how smart we are, but on what we focus.


Great point. Then perhaps we should aspire to be like gorillas instead: instantaneous, implicit martingale and Brownian-motion calculations in their minds, subconsciously, exactly like us, except they don't shitpost on reddit.


Historically, "The term "idiot" was used to refer to people having an IQ below 30." So not on that definition.


Oh, my. Where is the tl;dr for this one?


OK, so how did r/Trees end up focusing on marijuana, and r/MarijuanaEnthusiasts on trees? Was r/Trees first, and then tree lovers did r/MarijuanaEnthusiasts as a joke? Was there a war?

Edit: tone


Not one word about the HUGE Antifa presence or the doxing by left-wing conspirators and agitators. This is exactly why I don't read the New Yorker.

I would consider reading it if they gave light to both sides of this issue. But they don't. This is not journalism.

Edit: u/spez was caught editing user comments on certain subs he publicly disagrees with. How he's still CEO is beyond me. At least he admits he's a troll.

Edit 2: Downvote all you want; all I'm saying is FACTS. Here are more FACTS: https://motherboard.vice.com/en_us/article/z4444w/how-reddit...

Reddit culture starts at the top, and the top is Steve Huffman. He should resign. He's completely in over his head.


[flagged]


From the guidelines: “Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”



