This is a dangerous path to go down. Much better to allow anyone to post anything they like that's not illegal. If you don't like it, unfriend/unfollow/block them.
Why is this bad? Because hate speech is not clearly defined. What may be fine today may qualify as hate speech tomorrow. Eventually posting anything not super positive or encouraging could be considered "hateful" or "triggering" and get you in trouble.
Exactly. And there are so many ways it can go wrong.
That's a problem with big companies. They need to please a mass of people who are very different from each other. Eventually they settle on the lowest common denominator and eliminate more and more things that could offend.
Among them you will find:
- new ideas that you can't yet tell are good or bad, that need assessment but will feel uncomfortable.
- old ideas becoming relevant in our new context (or that we are now evolved enough to accept) that have long been put aside out of habit.
- ideas that are important to one part of the population and unsettling to another.
- disruptive or creative things, art, that just need to exist for humanity to keep growing, even if they're not always good.
- bad things that people need to know exist. You can't exercise criticism if you don't have the material to criticize. People need to be able to read "Mein Kampf" so we can argue against it.
If you remove everything that is offensive to anyone, the only thing left is a white sheet. Wait, not necessarily white, I didn't mean to imply... Well yes, sheets are made from trees, I realize it has an impact on the planet, but... OK, then there's nothing left. What do you mean I can't think outside my white-sheetist stereotypes? What if I identify as a nihilist? You are offending me!
These quips about PC culture miss the mark. The rise of social networks / platforms catering to people who express their distaste for certain speech stems from their desire to further their revenue streams. I haven't heard of anyone leaving a platform because there wasn't enough offensive content.
Expect this behavior from every site that harvests user data and relies on ad clicks to pay its shareholders. The monetization of self-expression -- a concern that dates back to at least the Usenet era -- is the culprit, not the speech critics.
"I haven't heard of anyone leaving a platform because there wasn't enough offensive content."
I have left both Twitter and Github recently, not because there "wasn't enough offensive content", but because I find thought-policing even more offensive than the allegedly "offensive" speech.
If it weren't for the fact that Facebook is used by many of my older relatives, I'd be out of there, too, and it won't take much more to get me to leave.
So... now you've heard of someone. I do not think I am unique.
When those that agree with you start affecting said companies' bottom line, we'll call it a movement. Until then you appear to be in the severe minority.
The Internet is not governed, it is bought and sold; if you have any intention of effecting change, you do better to acknowledge that, at least as a start.
Realize, too, that the ability to "leave" these services is something of a privilege. Every job I've ever applied to in the tech world asks for a LinkedIn and, I've observed for programming positions, at least a presence on GitHub. It's hard to explain your free speech objections on your intro letter.
The majority don't care about democracy and trade freedom for security every day.
The majority don't care about health and trade a healthy body for a cheeseburger every day.
The majority don't care about ecology and trade pollution for saving themselves a walk every day.
Decisions affecting humanity cannot be based on the majority's behavior. They should be based on the majority's vote, because then you give people an occasion to step back and think, to discuss and argue. But don't base them on the status quo.
"Every job I've ever applied to in the tech world asks for a LinkedIn"
I've never seen any job that asked for a LinkedIn account, nor am I acquainted with anyone who ever got a job through a LinkedIn account. While there are undoubtedly exceptions, I don't personally know of any. In fact, that's why I left LinkedIn (long before I left Twitter and GitHub) -- way too much company spam, not enough actual usefulness.
"Realize, too, that the ability to "leave" these services is something of a privilege"
Is there anything that isn't a "privilege" nowadays?
"It's hard to explain your free speech objections on your intro letter."
There's no need to "explain" anything. Just refer them to your own public code repo, which any programmer worth his or her salt should be able to set up in a short time using an Amazon S3 bucket and one of the many utilities that exist for manipulating those buckets. Presumably the company you're applying to just wants to see some code samples. There's nothing magical about GitHub.
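For what it's worth, the whole workflow fits in a few lines of Python. This is only a sketch with made-up names (`iter_site_files`, `publish_repo`, and the bucket are all hypothetical); the upload step assumes boto3 and AWS credentials, neither of which this thread prescribes:

```python
import mimetypes
import pathlib


def iter_site_files(root):
    """Yield (local_path, s3_key, content_type) for every file under root."""
    root = pathlib.Path(root)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            key = path.relative_to(root).as_posix()
            ctype = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
            yield path, key, ctype


def publish_repo(root, bucket):
    """Upload a local code tree to a public S3 bucket (needs boto3 + AWS creds)."""
    import boto3  # deferred import so the helper above runs without AWS installed

    s3 = boto3.client("s3")
    for path, key, ctype in iter_site_files(root):
        s3.upload_file(str(path), bucket, key, ExtraArgs={"ContentType": ctype})
```

The point stands: a static bucket plus a file walker gives you a browsable code dump without depending on GitHub.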
I agree with you that these are private companies which can, rightfully, facilitate or inhibit conversation or speech in any way they desire: I wouldn't have it any other way. And I don't agree with the leanings of Facebook & Twitter management.
Having said that, their right to exercise editorial controls on their platforms does not mean they are somehow above criticism when they elect to do so in stupid ways. As the original comment in this thread suggested, their moves in regard to speech are short-sighted and detrimental to the overall health of society, especially given the pervasiveness of these platforms. Still, I can recognize their right and call them out on their foolishness at the same time. I only cross the line when I demand that force be used against them (meaning a law written to force their hands); then I would be in the wrong. They don't have a responsibility to further discourse and I don't have a responsibility to hold my tongue when they fail in this regard.
They don't have a responsibility to further discourse and I don't have a responsibility to hold my tongue when they fail in this regard.
I would argue that anyone who takes it upon themselves to become the central hub for most discourse (or to organize the world's information) has an implied responsibility to act as a good steward of the empire they seek to control. We have no obligation as members of the public to accept whatever controls a land-grabbing (I use this term here non-judgmentally) empire builder wishes to impose; they sought to turn us into members of their empires, so we should at least have the right to demand they behave equitably, and perhaps even beneficently.
What is the source of this implied responsibility? What is the reason behind it? We might hope that they would do the right thing, but our desire doesn't bind them to deliver our wishes to us... even if it is in their best interest to do so.
When we use a Facebook or a Twitter, we (the public) accept the product they offer. True enough, if we find out we don't like the deal we can threaten to walk away if there is no change. But these companies don't have a responsibility, implied or otherwise, to better us, cater to any of us, or fulfill someone's vision of social good. It would be nice if they delivered on these things (of course my vision of what that means) but it is not their responsibility to do so. If I don't like that with sufficient intensity, my one valid recourse is stop using their services.
What is the source of their right to seek an empire in the first place? At root it's the social contract of the society whose members they seek to enlist.
The source of the implied responsibility of beneficence is that, without it, the acts of a corporate empire are likely to be destructive to the long-term interests of the society. By (for example) controlling the social network, they leave members of the society little real choice but to participate, and thus the implied responsibility of beneficence in exchange for that removal of choice.
"The squeaky wheel gets the grease." I have heard "Apple/Facebook/Twitter/Google...etc. are private companies and they can do whatever they want" as an argument far too many times for my liking. The same goes for products that are free. It doesn't automatically mean that we lose the right to criticize. It's necessary and healthy to raise objections and demand features that we'd like to have. It's in these companies' own best interest to listen to their customers and make decisions with their customers in mind (too). Even if these companies don't take heed, someone else will. Maybe another company would build what we are asking for. Maybe another social media company could pop up with constitution-like protection of freedom of speech.
The difference is that nobody is asking for these companies to be banned or to cease operations. They are free to do what they want and we are free to ask what we want.
Let's follow that logic: the internet infrastructure is owned by private companies (i.e. Comcast), so shouldn't Comcast be able to censor and throttle anything that goes over their network?
Infrastructure isn't a free market. If it were, there would be a hundred different cable lines hanging outside your house, like what happened in the 1800s.
So, in order to prevent that situation, we allow one company a monopoly on the market. In exchange, that company is forced to follow common carrier rules, such as not interfering with speech.
An entrenched monopoly dominating an industry primarily controlled by network effects isn't really a free market either; free markets don't exist outside of college econ textbooks.
We all agree to use the same company for infrastructure, just like we all agree to use the same couple of companies for social media. It's just that when the social media companies do bad things, people pop up and say "they are private companies in a free market, they can do whatever they want" -- because these companies are new and cable companies are old.
I think the point here is that Twitter and Facebook increasingly look like a piece of infrastructure - it's hard to reach the general population without them, it's hard to opt out of using them, etc.
But increasingly, the internet for people is Facebook and Twitter and Instagram, etc.
The issue is that it's hard to get the message out with a few brands dominating the internet. How will one get the message out about a website if he or she can't post about it on a platform like Twitter or Facebook?
Agree! I am all for playing nice but I am also an advocate for freedom of speech. I fear we are losing that. The whole safe space movement was stifling freedom of speech and now this will hurt it even more.
Safe spaces do not stifle free speech. Just as your ability to limit who enters your home (if you have one) does not stifle free speech, safe spaces do not stifle free speech.
I would argue that they actually increase freedom of speech, because they give certain individuals the freedom to express themselves without fear of retribution. Something that can be rather difficult to achieve for some people.
Of course, there are problems with the safe space movement. For example, they often highlight racial/ethnic/gender differences between people. In my analysis, it seems that marginalized people have realized the world is not going to change any time soon, and that they will continue to be marginalized – which is the motivation for creating a safe place.
Should Facebook/Twitter be a "safe space"? Probably not – but Facebook has been very successful at creating such safe spaces with their Groups platform, which seems to be gaining more and more traction.
There are countless examples of "safe-spaces" being argued for on campus to shut down campus speakers, debate, not allowing video of protests, etc. Would you like links for reference?
But first, since you're defending the concept, how would you define a safe space... and how is the space kept "safe"?
The definition of a "safe space" is one where free speech is not allowed. We have had safe spaces for centuries, they are called homes. At home, you are free to shield yourself from whatever speech, ideas, or behavior you find offensive. The whole crux of a free society is the freedom to be boorish, crude, or otherwise offensive. That's what freedom means. It means there is no authority that decides what forms of speech or expression are not allowed. We already have laws against assault and harassment, and those are enough.
Safe spaces are echo chambers. Only people who align with their own views are allowed, so those views are the only ones expressed. It's "Free" as long as you say what they want to hear.
The parent poster said he was for defending free speech, why would he want to switch places to somewhere where it's restricted?
What is your objection even? That because free speech is restricted in e.g. North Korea we should start restricting it elsewhere?
Or are you implying that oppressed demographics(e.g. homosexuals, black people, etc.) are having their freedom of speech stifled and therefore restrictions in free speech as a result of the 'safe space' movement are justified?
That's the only reasonably understandable(albeit flawed in my personal opinion) interpretation of your comment I can really make.
How is it flawed? A TOS that discourages hate speech and enforces some sort of standard seems far less stifling than intimidation by rape and death threats.
It's funny to see people get bent out of shape at the idea that they might have to watch what they say, when the consequence of saying something offensive is just that a platform might not let them publish everything they want. Meanwhile, there are large groups of people who already have to watch what they say, because the consequences of saying something another group disagrees with are rape threats, death threats, and harassment campaigns.
>It's funny to see people get bent out of shape at the idea that they might have to watch what they say
I strongly believe in free speech and I will absolutely be 'bent out of shape' when people or groups try to stifle it.
>there are large groups of people who already have to watch what they say because the consequences of saying something that another group disagrees with are rape threats, death threats, and harassment campaigns.
Please provide me a modern-day example of this in the US (where these companies are located).
Could you be more specific? Who's? If we're talking about an online context, barring the kind of censorship we're talking about here, that's strictly false. A tweet is a tweet is a tweet.
If everyone abandons all of these platforms (and any replacements that pop-up) due to death threats what does that do to free speech online?
If tons of people are self-censoring or avoiding speaking at all for fear of being doxxed and getting death or rape threats, do we really have free speech?
The number of real rape and death threats is minuscule. Of the millions of times someone has received a comment they considered a threat, how many times has something happened in the real world? Your odds of being killed by a meteorite are better. There's no reason to take that kind of stuff seriously unless there's specificity involved.
There's a cottage industry of throwing rhetorical bombs and trolling 4chan in order to get threats. The people complaining the loudest about threats are reluctant to actually get the police involved, and they never stop trolling because they realize the threats aren't real.
tldr: Internet threats are no reason to even consider curtailing free speech.
There is a difference between the two. Getting doxxed is caused by social regulation going awry. Getting removed by a company is that company controlling the narrative. Neither is good, but one still allows for the freedom of expression.
I'm sorry but I fail to see a difference between the two. In either case users are leaving/avoiding a platform due to abuse and this free speech on the platform is lessened (except, of course, for the harassers).
"You will be judged by the company you keep" is starting to bite these platforms, and they're trying to do something about it.
>If tons of people are self-censoring or avoiding speaking at all for fear of being doxxed and getting death or rape threats, do we really have free speech?
Yes we do. If you self-censor because you are afraid of the consequences of your speech, that makes you a coward. Far better for cowards to cower in fear silently than for everyone to be forcibly silenced to placate cowardice.
So if you think a movie is good, but you're afraid to post that because there are people making death threats to people posting the "wrong" opinion then you're in the wrong for being a coward?
Because banning death threats is forcibly silencing people?
>So if you think a movie is good, but YOU'RE AFRAID to post that because there are people making death threats to people posting the "wrong" opinion then you're in the wrong for being a coward?
You answered your own question. It's already a crime to make death threats. If you are afraid to speak your mind or otherwise do what you think is right because you are afraid of the consequences, whether they be crimes or not, you are a coward.
I learned at the age of 12 that you do -NOT- have freedom of speech on the internet. I was on some internet forum causing a ruckus and one of the moderators told me to stop posting or whatever. I countered with "free speech!" and got banned off the forum.
Most of the internet we choose to use is a collection of walled gardens. People can kick you out of their garden, that's how it works.
I don't think anyone on here was saying or implying that that's NOT how it should work. In fact, at least one commenter stated that they wouldn't have it any other way.
We're discussing a normative conclusion: even though "people can kick you out of their garden, that's how it works," perhaps people shouldn't choose to kick others out of their "garden" over disagreements and free speech issues (though they're always welcome to do so).
I know it's frowned upon to discuss downvotes, but I think it's relevant to the conversation.
Godwin-evoking private forum moderators exercising their right to police their turf aside, I always find self-moderating upvote/karma systems on forums interesting. Ostensibly the point is to allow users to control the tone of the site and promote interesting, relevant content. In practice it is used to bury unpopular opinions and register disdain.
Some comments are justifiably downvoted for being of spectacularly low quality, whilst many others are hit for not reflecting prevailing opinion - essentially we (internet commentators in general, not specifically HN, although it's hardly exempt) actively squash Free Speech on a daily basis.
It is not stifling freedom of speech. Not one iota. Remember, you do not have the right to force someone to listen to you.
And remember, most of the stuff we're talking about here are twitter eggs saying they're going to rape someone's family to death because a video game was delayed. No reasonable person would consider that protected speech.
Who is arguing that listeners be "forced" to hear hate speech?
The question at hand is: do we want to be permitted to hear hate speech? Twitter and Facebook (and the law in some jurisdictions) are saying: "no, you may not encounter these subjects where we have the power to stop you, because it may inspire bad things within you -- either personal pain, or worse yet, radicalization."
Your second point is that none of this content is any good, anyway -- that we're better off getting rid of it.
FB and Twitter can do what they like, and there is something to be said for removing low quality content in order to achieve a better product. But "hate speech" seems much broader than that.
In Schenck and Abrams (terrible SCOTUS decisions that have been abandoned in practice), WWI-era courts decided that anti-war protesters couldn't disseminate anti-war rhetoric to their fellow immigrants (in Yiddish, btw). This was "like shouting fire in a crowded theater." Some things are just too dangerous to say out loud.
Are we so certain of our current taboos that we really don't want to hear from the fringe? Are we really OK with giving up our right to hear controversial viewpoints and decide their merits for ourselves? Because that's what these rules/laws do -- they don't force you to hear anything. You are always free to block, ignore, etc. They prevent you from encountering the content in the first place -- someone gets to decide for you what is and isn't "safe," because they're afraid you'll be influenced by it.
For me, I don't want to be kept ignorant of something because someone decides I'm better off not hearing it. Even if that speech is violent, racist, conspiratorial, or just plain nutty. I can decide when to divert my attention. It's part of being an adult -- learning that words can't hurt you.
You are wrong in that it's only a matter of "unfriend/unfollow/block them" - that is only a solution for people who might get one or two harassers, but there are people who just get bombarded with it - just look at the GamerGate thing for a prominent example.
What happens when those harassers report posts in the masses and get people silenced? It's a mess when you try to define whatever you think is not right. I had a friend tell me he tried to post a dream he had on Facebook and Facebook didn't let him post it, something about his dream was deemed hate speech somehow. A dream of all things. Lots of issues with this.
Then there should be settings for you to not allow communication except from those in your friends circle.
I agree that there are ways around that, but there will always be ways around it. Pretending that filtering speech will make the harassment go away is pretty naive.
If your suggestion is "people who are the target of persistent harassment should remove themselves from public discussions", that seems at least as contrary to the ideal of free speech as removing the harassers.
I'm not really sure where you got that idea, I never said that.
EDIT: I see what you mean. I think there's a huge difference between temporarily choosing to not engage in public discussions, and being shut out of said discussion.
You already can make your account private to only your friends on most of these services. I guess giving up the attention isn't worth it for some of these people.
What's wrong with a curated forum for speech, especially if the guidelines are a part of the terms of service? Why does everything need to be all about free speech? No one forces you to use these tools, if they start censoring too much stuff, users can move onto another service. I'd much rather use sites where I can post without worrying about getting bombarded with hate messages. HN is such a kind place compared to twitter and Facebook.
>Every dictatorship has ultimately strangled in the web of repression it wove for its people, making mistakes that could not be corrected because criticism was prohibited.
-RFK, "Value of Dissent" (21 March 1968)
The problem is that of blindness and group think. Every major advancement of human rights has been against the "terms of service" and were extremely uncomfortable ideas for the majority of people.
Getting your feelings hurt every now and again can be used as a catalyst for personal and moral growth just like exercising causes pain but increases health.
This is not about no-platforming, where the private company sets the terms of conduct for its users. This is about freedom of expression being encroached upon; the government (EC) is telling companies owned privately by shareholders, and used by its citizens, what opinions they can and can't express without fear of ramifications.
What should happen when a person's twitter account is mentioned in an article that creates viral outrage, and they are bombarded with death threats, hate speech, and so forth?
There's nothing wrong with Twitter providing tools to help someone being harassed. That, in and of itself, does not prevent free speech. If someone's Twitter goes viral and they don't want to deal with messages directed at them, then give them a whitelist where they can add the people they want to receive comments from, and drop the rest. This only affects the view of the affected person while not constraining what can be seen by the public.
Your comment is a complete non sequitur. That such a thing doesn't exist today does not invalidate my point, which is that there are technical solutions that do not rely on the platforms inserting themselves as judges of what is and isn't hate speech.
This policy about removing hate comments didn't exist yesterday and is still not in place. By your logic it isn't a solution either and can't be discussed because it's not yet in place.
I think the fact that these tools don't exist speaks to Twitter's priorities, though, and in a discussion of editorial control of social platforms that seems like a salient point.
This would be best. It's sort of telling that Steve Huffman, CEO of reddit, had this to say about reddit's content policy enacted about a year ago:
>When you draw really clear lines in the sand at a site like Reddit, “there will always be some a—–,” Huffman said, “looking for loopholes.” He eventually came to the conclusion that virtually every other major social site has come to: that content guidelines for online communities work best when they’re “specifically vague,” giving the contours of clarity on what sort of content is forbidden, while affording those in charge of enforcing the rules some leeway with when, precisely, the rules apply.[1]
That's the CEO of reddit saying that selective enforcement should be the default, and that users should not lean on the actual, written rules to determine what is acceptable and unacceptable.
They could put a toggle in settings that anyone can use. Call it 'Airplane mode' or 'Do not disturb'.
When enabled, the 'Notifications' link/page disappears from the UI and the user no longer receives messages from anyone who they haven't messaged before.
Better to make it a whitelist in case somebody no longer wants to receive messages from somebody they've corresponded with in the past. Otherwise if they've ever corresponded with a troll or have had a falling out with friends then the filter is ineffective.
The more I talk about this the more it sounds like a score file[1]. Obviously it would need to be made more user-friendly to be really effective.
Implementing a per-user configurable whitelist is much different than making value judgements about someone's post bereft of context and history and virtue-signalling with a swing of the almighty banhammer. It puts control in the hands of the user, who is trusted to make their own judgement.
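A per-user whitelist of that sort is almost trivially small. Here's a minimal sketch (the `Inbox` class and its method names are mine, not any platform's API): when whitelist mode is on, messages from unapproved senders are silently dropped, and only the recipient's view changes:

```python
class Inbox:
    """Per-user message filter: when whitelist mode is on, only senders
    the user has explicitly approved get through to notifications."""

    def __init__(self):
        self.approved = set()        # senders the user has opted in to
        self.whitelist_only = False  # the 'Do not disturb' toggle
        self.messages = []           # what the user actually sees

    def approve(self, sender):
        self.approved.add(sender)

    def deliver(self, sender, text):
        """Return True if the message reached the user's view.
        The public copy is unaffected; only this view filters."""
        if self.whitelist_only and sender not in self.approved:
            return False  # silently dropped; the sender isn't told
        self.messages.append((sender, text))
        return True
```

The design choice that matters is that filtering happens per recipient, so nothing is removed from public view and no one has to rule on what counts as hate speech.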
I think it's useful to talk about a real concrete example, so thank you for this comment.
Ideally, people wouldn't send death threats/hate speech. This seems practically unenforceable from Twitter's side.
So what if Twitter had a special feature where it could detect if you are receiving an unusual spike in notifications (and possibly even use sentiment analysis). Then it would present you with a nice, warm message:
"We noticed you're receiving an unusually high number of notifications. These messages may be abusive, hateful, or triggering."
Then provide options for turning your profile private, or a special new setting that completely hides @-notifications from people you don't follow.
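The spike-detection half doesn't need anything fancy either. A rough sketch (class name and thresholds are hypothetical, and the sentiment-analysis part is left out): compare the current notification count against a rolling baseline and flag large multiples:

```python
from collections import deque


class SpikeDetector:
    """Flag an 'unusual spike': the current interval's notification count
    exceeds `factor` times the rolling baseline average."""

    def __init__(self, window=24, factor=5.0, min_baseline=10):
        self.history = deque(maxlen=window)  # counts from past intervals
        self.factor = factor
        self.min_baseline = min_baseline     # floor so quiet accounts aren't flagged

    def observe(self, count):
        """Record one interval's count; return True if it looks like a spike."""
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        spiking = count > max(baseline * self.factor, self.min_baseline)
        self.history.append(count)
        return spiking
```

On a spike, the platform could then show the warm message above and offer the private-profile or hide-notifications options, all without judging any individual message's content.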
An obvious candidate for a next step in this slippery slope is criticism of religion. Will comments criticizing fundamentalist beliefs (young Earth creationism, etc.) be deleted within 24 hours?
Identity politics is normally associated with the nominally "left," but really it's the right and especially the religious right who have been experts in the weaponized use of identity politics in the last few decades. The entire Reagan and Bush eras were built on the use of identity politics to mobilize the "base" and keep critics on the defensive.
Liberals invent identity politics plays, but conservatives take it to production at scale.
I expect to see right-wingers totally co-opt this bandwagon real soon now.
The Larry Flynt principle still stands: the measure of freedom of speech is the freedom afforded the most offensive or idiotic speakers. I don't support haters, but I support their right to spew it. Unfriend and block exist for a reason.
This was my first reaction - how do you define Hate Speech? It's similar to the quote about pornography, "I know it when I see it." But today's hate speech was yesteryear's vernacular.
My inclination is this is still a good thing to do. Facebook doesn't need to provide a public forum for hate mongers.
These are private companies, though. They can police their content however they want and might determine that more restrictive rules are better for the product than loose ones. The host simply accepts the consequences of making the wrong choice.
They are not in the business of providing a platform for free speech. Just look at reddit for an example. The quality of their product has suffered because they allowed (and continue to allow) racist and misogynistic subreddits to proliferate. Thus, these segments of their user base also proliferate. At various points, they decided to take action. Whether those actions improved their product or not is debatable.
Social media platform users have no right to free speech on the site. They do have a right to start a competing website that allows free speech, though.
> Social media platform users have no right to free speech on the site. They do have a right to start a competing website that allows free speech, though.
And then the new website becomes popular, and then authoritarians start demanding that the new website ban speech they don't like, and then people who want free speech say that we don't want that and you come back to tell us that they're a private company that can do what they want.
They're also a private company that can do what we want.
> This is a dangerous path to go down. Much better to allow anyone to post anything they like that's not illegal.
> Why is this bad? Because hate speech is not clearly defined.
Legality by which jurisdiction?
If someone posted the one-line Perl script for DeCSS, would that be blocked? In Turkey, I believe posting anything defamatory regarding Atatürk is illegal. What about posting Nazi propaganda in Germany?
Not the OP, but HN is a trade forum that addresses a specific audience/niche. It makes sense to cull the submission load to something that fits the forum's designated purpose. Twitter and Facebook are supposed to be platforms for discussing/sharing anything one finds interesting, so restrictions are less sensible there.
But again, who draws the line? I don't feel threatened by somebody saying that to me on the net; I don't censor it even on my own comments. Maybe somebody will feel threatened by a comment about going to "see what you've done with your shitty new restaurant." Are you going to remove that too?
And are you really sure the line will not be moved afterwards? Because this is such an opportunity for politics, but also for any entity with a moral or economic agenda.
And will the censorship be done by humans? Or will a machine block a quotation from a book or a movie that contains the offending phrase?
... so "the line" is basically whatever European Union says is "illegal hate speech" as per this announcement. (The announcement makes it seem that it is largely targeted at terrorism and racism, at least for now).
I agree with you, though, in that personally I'm not the biggest fan of "hate speech" laws. They seem like a ripe target for political abuse.
I think the "true threat" standard is a good bar, though. It would stop things like the prosecution of a guy here in the UK who joked about blowing up an airport because his girlfriend's flight was delayed.
I disagree. Banning threats like that would not have any negative consequences. If anything, it would encourage people to post more, as they no longer have to worry about dealing with that huge mountain of shit, which is incredibly demoralizing and discourages people from posting.
Threatening to kill/harm someone is already illegal if the person on the receiving end reasonably fears for their safety as a result. If people are so afraid, they should consider the already existing methods we have of deterring criminal behavior.
You do not have the right to free speech on Facebook/Twitter/Whatever. You have the right to speak your mind, but no one is obliged to give you a podium to do it from.
This issue is less about Facebook, Twitter, etc. choosing to adopt a code of conduct than it is about having limitations on speech imposed on them by the EC. For this reason, it's about freedom of expression and not about self-driven no-platforming.
This is an unfortunate truth. It's just so frustrating that the network effect means that moving off of Facebook means losing a whole bunch of connections that won't go elsewhere.
It wouldn't be this way if we fixed our technology access and intellectual property laws. There are a lot of phenomena in the ethos right now that people find perplexing and internally recognize don't really make sense, but they have a hard time figuring out what causes it. Those people are intentionally deprived of the information necessary to make the connection to the laws that enforce extant monopolies.
The network effect works the way it does because we have granted super-restrictive monopoly rights that prevent competition.
There was once a company that tried to allow you to export your data from Facebook. It was called Power Ventures. They were sued and shut down by Facebook. The corporate veil was pierced and the founder was found personally liable for $3 million in supposed damages.
The technology exists, but we've made laws for companies like Facebook to make sure that their monopolies are not threatened.
When you create things with Facebook, Facebook owns that data. It's in the ToS and they have the right to tell you what you can or cannot do with it. If that's a problem for you, don't use Facebook.
Networks exist because not everyone is an admin and web developer and wants to build their own website, and unless you own the pipe to the servers, the servers themselves, the software running on the server, and so on you have exactly whatever rights your provider of choice says you have and not one iota more.
This is not correct. Courts in the US are very dubious of automatic rights transfers, and I doubt a ToS that included them would fly. ToS boilerplate usually grants the company an unlimited license to the content, but they do not own it. The original rightsholder, which, in the case of status updates, family photos, etc., is usually the person who posted it, continues to hold the copyright, and thus should be allowed to download it as he or she sees fit. But a judge ruled that Power Ventures had violated a variety of tech-access and intellectual-property laws by creating a scraper that allowed just that: easily exporting your data out of Facebook.
>Networks exist because not everyone is an admin and web developer and wants to build their own website, and unless you own the pipe to the servers, the servers themselves, the software running on the server, and so on you have exactly whatever rights your provider of choice says you have and not one iota more.
That's not technically correct, but we'll accept it for purposes of argument because it's not super far off for some access modes. I'm saying that it doesn't have to be that way. A lot of people hear about these things, acknowledge the injustice and weirdness in them, and then just move on with their day, seemingly believing that it's something that can't be altered. We can alter it. We don't have to grant Facebook an arbitrary monopoly. We don't have to allow them to restrict us from gathering the data that we rightfully own just because we've shared it on their platform. Let's change it.
It's difficult to change and difficult to make people aware of this situation because, obviously, incumbent interests want to keep their monopolies, so they spend and lobby and produce and proclaim to try and keep them, and to make people think that there's nothing that can be done about it. It's not that way in real life.
The reason that Facebook and Twitter receive this scrutiny is that information posted to them is public, persistent, viral and forced into many people's view by feeds. I think SnapChat counteracts all these features.
I was recently hanging out at some nightlife spots, casually talking to random millennials at bars and such and realized that nobody was using Facebook or Twitter. They were all using SnapChat, almost compulsively.
I asked them why and they said it was because their mom was on Facebook (!public). They also liked that everything disappeared after 24 hours (!persistent) and that the app alerts you if people screenshot your posts (!persistent). They liked that you couldn't repost stuff, only share pictures the app itself took, and only with your friends (!viral), and that you only had to look at their stuff if you actually wanted to; it didn't come in on a feed (!feed).
I think Facebook and Twitter are going to be cleaned out soon except for self-promotion and commercial information. At this point I assume that Facebook is public too. People desperately want privacy. I think a rising trend in the industry is the notion of forgetting. People want services to forget. They want to live in the present and not have everything remembered about them forever. They also don't want to be bombarded by feeds since they are full of distraction, advertising and ideology. The only thing they are interested in posting to these public services is things that they want to share publicly, like self-promotion or commercial/job related stuff.
> I was recently hanging out at some nightlife spots, casually talking to random millennials at bars and such and realized that nobody was using Facebook or Twitter. They were all using SnapChat, almost compulsively. (emphasis added)
I am nearly 40 and don't want my nightlife posted on Facebook where my ex-wife, parents, child, or potential employers might see it, either. Why do you expect millennials to behave differently?
I can't speak for all millennials, but I am currently attending university and have many millennial friends and contacts. Almost to a person, they all have Facebook. The only two people I can think of that don't are a 30-year-old and a guy in his 50's who is a conspiracy theorist. Facebook is used as a way to add soft social contacts as well as to coordinate and interact with a larger group of people. Facebook is the "public" social face, with things like Snapchat offering a more private face. And I don't think that difference is specific to millennials. My 50-something ex-girlfriend and my 23-year-old current girlfriend use these services almost identically, as do I.
There's no privacy on SnapChat either, in terms of privacy from data collection. As soon as SnapChat is pushed to really monetize the platform, it will slowly start to implode. People I know don't usually post much on Facebook, or even Instagram... but those are their "public" personas, and they will post things that improve their public image (usually). They do sign in, look at what people are doing, and occasionally post. SnapChat is the "private" persona, so posts are going to be more varied. Twitter, meanwhile, can be an anonymous platform for some, which neither SnapChat nor Facebook really offers.
They are all different components of what people want, with some audience crossover.
I don't use SnapChat, mostly because the CEO killed off the 3rd party app for my platform of choice (Windows Mobile.)
> People want services to forget. They want to live in the present and not have everything remembered about them forever.
This is a mindset I'm having a hard time understanding. For me it's the opposite - I'd love to have everything I interact with be a) persistent, and b) searchable by me. I dislike the idea of ephemerality, of stuff that happened going away without a way to recall it. Maybe I'm the weird kind who still has his SMS archives from 10 years ago somewhere on his computer...
The mindset is that the services should forget, but if I want persistence it is up to me to remember. I have the same sort of archive covering 20 years or so of computing, but it's mine, not in someone else's custodianship, and that's the distinction. Just because some random web service has a breach doesn't mean that messages/thoughts/dreams/jokes I sent someone in confidence should be exposed to the world unless I permit it.
It's closer to wanting to be able to archive every letter you receive in the post, not the post office keeping every letter and you having to rely on them for the safety of your messages. A lot of these services also keep changing the rules of inclusion in a community over time: things that were private are suddenly not, or people just have to start over in a new social group.
I see the distinction now. Yes, I guess I have the same attitude - I want ownership of my data. A way to keep it permanent (and searchable) for myself. Cloud services remembering stuff is mostly convenience, but in case of personal stuff (as in things posted on social networks), not something very desirable.
Technically you can still share a photo taken outside of Snapchat (on iOS at least).
You just can't add it to your public timeline. It can only be sent to your friends/contacts list. But if you wind up doing select-all on the friends list, it pretty much works out the same way as sharing to the timeline view.
Snapchat also supports traditional text alongside pictures. Mainly you have this thing called a "story" that people can subscribe to; it covers the last 24 hours of pictures, video, and text that you send to it. I recently tried using Snapchat with a few people. They get annoyed that they can only see your pictures for a few seconds and then can't view them again, and that text disappears after 24 hours.
From Wikipedia: Snapchat is primarily used for creating multimedia messages referred to as "snaps"; snaps can consist of a photo or a short video, and can be edited to include filters and effects, text captions, and drawings.
It also has a fairly sophisticated face recognition system (accessed, strangely, by long-pressing your face in a live video window, but only while using the rear-facing camera; it's almost like an Easter Egg but isn't) which it uses to do face swaps and other types of animations on top of video.
I don't know if the app is designed to intentionally hide features to create some sort of "discovery" experience, but there's a lot more to it than is obvious at first glance.
For those who aren't upset by Snapchat's goals broadening, it's a great utility for many different means of communication. Chat features and multimedia, along with an expiration date on all content, are something many people can appreciate in a world of public social networks.
Yeah, I don't think this will work out well. Twitter removes a lot more negative posts from those on the right side of the political spectrum than the left, and seems genuinely biased toward treating bad behaviour from the former as somehow worse than the latter on ideological grounds.
And with stuff like the trust and safety council on Twitter and the news topic biases on Facebook (as well as documented examples where they worked with governments to remove speech the government didn't like, such as that about immigration in Germany), I suspect it'll just cause yet more political divisions and drama on these sites.
That's a standard of proof that's impossible to meet.
Consider:
- Twitter is structured in such a way that direct censorship can't be distinguished from technical difficulties. (Did this hashtag disappear from autocomplete because someone removed it, or because the backend index is messed up?)
- Differentiating between "left" and "right" political posts is not possible from an objective, programmatic standpoint.
Political identity doesn't work either. If someone who's an unabashed Republican tweets about, say, abortion, there's a very high likelihood that whatever they say about it is negative regardless of the content of the message, but how do you get a computer to understand that without a lot of false positives?
- Nobody from the companies is ever going to admit to hiding/removing content for political reasons. (Barring whistleblowers of questionable veracity, c.f. Facebook)
That only leaves generalities and user reports to work from, which have their own reliability problems. We're talking sociology/political science at the end of the day, and it's opinions all the way down, with very little hard data.
They send tweets in real time to data miners. The delete requests don't show up until later. All you need to do is break their terms of service by not honoring the deletes.
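The timing gap described here can be sketched as a toy stream consumer: tweets arrive first, delete notices arrive later, so a client that has already archived a tweet still holds a copy when the delete comes in. This is a minimal illustrative model - the `Message` type, the `consume` function, and the stream format are assumptions for the sketch, not Twitter's actual API.

```python
from dataclasses import dataclass

@dataclass
class Message:
    kind: str       # "tweet" or "delete" (hypothetical stream format)
    tweet_id: int
    text: str = ""  # delete notices carry no text

def consume(stream):
    """Archive every tweet seen; record which ones were later deleted."""
    archive = {}    # tweet_id -> text, captured before any delete arrives
    deleted = set()
    for msg in stream:
        if msg.kind == "tweet":
            archive[msg.tweet_id] = msg.text
        elif msg.kind == "delete":
            # A ToS-compliant client would purge archive[msg.tweet_id] here;
            # an abusive one simply keeps the copy it already received.
            deleted.add(msg.tweet_id)
    return archive, deleted

# The delete notice for tweet 2 arrives after its content was captured.
stream = [
    Message("tweet", 1, "hello"),
    Message("tweet", 2, "regrettable post"),
    Message("delete", 2),
]
archive, deleted = consume(stream)
```

The point of the sketch is that nothing in the protocol itself enforces deletion; only the terms of service do, which is exactly the loophole the comment above describes.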
What viewpoint are you endorsing? Given your standard, CM30's assertion is neither verifiable nor refutable.
The parent comment is valid. hackuser isn't asking for a double-blind study with p=0.05 and n=[all Tweets ever produced] ; she or he is asking for any evidence whatsoever to substantiate the claim.
But few people want to participate in message boards filled with hate speech. Any public forum needs rules of decorum, just like any public or social setting in real life - few would want to participate in forums where others behaved like 4chan message board posters.
>But few people want to participate in message boards filled with hate speech.
I predict that more people will want to visit if they see it as the only outlet where they can freely express their opinions on a topic. The vast majority of people don't seem able to sustain silence when they have a differing opinion; they need an outlet.
If they cannot express that on Twitter/FB, they will find another way.
Funnily enough, this will have a much larger negative effect on the "hate speech" movement's goals than the existing presence of so-called "hate speech".
Plenty of people belong to forums and social networks with reasonable rules, though - ones that, say, remove actual personal attacks but use common sense about what should be removed, rather than promising to 'remove hate speech in 24 hours' under some general 'framework'.
I'm not sure these rules won't be widely seen as reasonable. The forum we are participating in, Hacker News, removes hate speech much more quickly than that, though it doesn't provide an SLA with a response time.
It's not about left or right removal quantity. It's about hate. And there are a lot of easy cases on social media. I don't know what they'll do with the not-so-easy cases. Maybe the same as before.
What he is saying though is that (there is at least a perception that) right wing hate is removed far more frequently than left-wing hate due to biases that cause those with their finger on the button to perceive it as somehow more justified.
I think there are cases that should be easy on both sides and I would hazard a guess that they simply aren't dealt with the same. Sure, hate being removed is objectively good, but if only one side of the hate is removed, it will cause friction.
It will cause friction and it doesn't matter what you are doing. Hate will produce friction. Counter speech is friction. Removing or blocking will cause friction.
These actions are being taken BECAUSE OF right-wing hate on Facebook/Twitter, where people post that they should kill other people: politicians, refugees. That's why they focus on this problem right now. Does this mean left-wing hate speech at the same level will not be deleted? I don't think so.
The New Black Panthers are vastly outnumbered by bigoted and racist readers of right-wing news sources such as the Drudge Report and Fox News. The objective truth is that hateful right-wing views are openly disseminated in the media, and that's going to result in a lot more right-wing hate being spread online. If this bothers you, then take it to the right-wing news sources that push these views and profit from them and make them so prevalent in the first place.
"The New Black Panthers" is not even close to what would come to mind for me when I think about left-wing hate. I specifically didn't mention any of the groups that do come to mind because I have no interest in getting in to a shouting match with someone who identifies with them.
Said groups could, however, be characterized by a mob mentality that leads to attacking people's livelihoods, harassing people's families, sending death threats, etc., usually driven by social media - and hopefully something we could both agree is over the top, unacceptable, and indeed openly hateful.
My comment was ridiculously measured. I didn't even pick sides in the actual topic. I definitely didn't pick sides politically. You seem to have chosen one for me, although I regret to inform you you have assumed incorrectly.
Your comment was not measured. You made it impossible to have or continue a productive conversation. If I would have been the person you apparently think I am, you would not have won me over. You would have alienated me further. But I suspect that's part of it all. Some people don't care about productive conversation. They care about being "right" in front of other people who also want to be "right".
What if a black person says "To all my n*ggers out there, x,y,z" on twitter, vs. a white person saying the same thing. Will the white person's tweet be removed, or both?
Interesting point. How can anybody know? If I have a penis, under today's standards I can claim to be female. If I have Caucasian ancestry, can I claim to be black? Why not?
Because all of these factors will necessarily have to be taken into account when we come up with our speech rules, don't you see?
Sure it's complex, convoluted, and controversial.. but as long as we put the right people in charge to make the decisions for us, it will all work out fairly.
"We", meaning the readership of this site probably doesn't (I hope?), but it is a common trope that what you are allowed to say (meaning: be accepted without immediate backlash) is based entirely on your identity.
I'll not go into that concept further for fear of igniting another tired flame war.
Don't forget that there's a vast number of African Americans who do not condone the free and casual use of the n-word as happens these days. I believe their main objection is that it takes away from the meaning and history associated with it. If we consider another N-word (Nazi), it's also used popularly, but only to mean something very negative and not in the sense of "my buddy".
The meaning of speech depends on context, always. If I say something by myself, in the privacy of my home, it's much different than if I say the same thing with a microphone to an angry mob. Yelling "fire" when a building is on fire is different than yelling it in a crowded theater where there is none (a classic example from law).
Similarly, if a white person publicly says something discriminatory against black people, it's a much different context than if a black person says the same thing. As a simple example, imagine two black people in a room of 40 white nationalists: if one black person says to the other, "hey n-!", that's much different for the person hearing it than if one of the white nationalists says the same thing.
There's a lot more to that word, including black people taking power for themselves and away from racists by using it (really a brilliant, innovative way to push back), but that's too much to write here and now.
The hashtag #KillAllMen is an active phenomenon on twitter, tumblr, and facebook. Feminists make excuses for it.
Here's an ArsTechnica article on climate change[1] where the very first comment advocates the murder of those who disagree with the liberal position. It has a positive score and many others in the thread defend it.
Liberals can dish it out too, and moderators let them get away with it.
He didn't refer to an entire ethnic group as "rapists and murderers". He was specifically referring to... the ones who were rapists and murderers. (Drug cartels, coyotes, and other criminals.)
He was claiming the Mexican government was purposefully or tacitly sending their problems across the border.
You can disagree with this claim, but don't misconstrue what his message was.
He's also cited this article[0] in reference to the point he's trying to make.
Right, you can make excuses and quibble and apologize for his hateful demagoguery as much as you like (his quote of course as we all remember suggests most immigrants are rapists and murderers and criminals, but concedes that it's possible some might be decent), but the point here is that there's a pretty significant difference that you'd have to be willfully blind to not recognize.
One side of the spectrum goes out of their way to spew hate and abuse and incite violence. The other preaches tolerance and respect. You can't really play the "there are two sides therefore the two sides are equivalent just opposed" game here, I think even if you still could manage to do it pre-Trump, Trump killed it by choosing to be objectively worse by all measurements and accounts.
There is a name for this: it's censorship. And in the long term it just doesn't work. Look at the EU: almost all countries there have very restrictive laws against hate speech and the spreading of racial/religious hatred, and yet far-right political parties are winning all over Europe... By banning people and ideas from mainstream social networks they'll just push them underground, which will make them even more appealing to the young... Being banned on FB will be a way to show that you are cool...
The nuance in the US is that this is censorship by corporation, not by government, so extant laws against censorship (which are aimed at the government) don't apply.
I think it's past time that corporations for social media of a certain "size" be recognized as public fora, and a body of law be developed regarding their behavior and responsibilities. Facebook, Google, etc, should not be censoring. Where this gets tricky is that both the corporation and the internet tends to span borders, so jurisdiction becomes very complex. Someone in country A uses a service in country B to interact with someone in country C. Which country's laws apply?
That is the problem. In the absence of legitimate power, illegitimate power is operative, and today that is Facebook and Twitter; they are serving de facto as quasi-governmental entities regulating interactions between citizens, since real governments have been unable to do so.
Removing hate speech means you have to define it in the first place. As soon as that line is drawn in the sand, it gains the ability to move. When it moves, it means anyone can eventually gain the power to move it. When that happens, it means that good and bad people can move it. When bad people eventually move it, you're in a pinch. That's why you don't draw in the sand.
But that means you need to define good and bad people in the first place. On a serious note, I completely agree with you: right now hate speech is about race, gender, and body shaming, which is the current line in the sand. Soon nationality, sub-races, religion, regions and sub-regions, individuals, comic characters, and pretty soon actual corporations will be added to the things you can't hate on.
Sorry, but every statute is a matter of consensus and moves around quite a lot, especially over the course of history. This is how every legal system operates; this isn't a serious argument against anything.
Of course it is, that's the whole point. Its why the 1st Amendment guarantees all speech (despite the Unconstitutional restrictions imposed over the years by our broken legal system). You either have free speech or you don't - there is no middle ground.
Then, a few years down the road, it’ll have slid down the slippery slope to include anything that could hurt someone’s feelings. This has been what’s happened to all of society over the past few decades and Facebook and Twitter will be no different.
People on Twitter and Facebook have already had temp bans for supporting Trump. That's no defense of Trump, but the minor things I've seen them being labeled as against Twitter/Facebook policy have been very minor. It's worrying to me as a free speech absolutist.
I used to agree with your basic sentiment. I'm still a strong believer in free speech, including speech that causes discomfort to those who hear it.
However, we live in a world that focuses violence on specific individuals, depending on their race, class, gender, religion, and a number of other factors. Our everyday actions are part of that system. Sometimes we help focus that violence, usually without realizing it. This is true of victims of violence as well as those who are not targeted.
What is our responsibility? Ignore the situation?
I'll tell you this: Feelings do get hurt, but the problem is not hurt feelings. The problem is the poverty & oppression baked into our system. A system that supports some people (definitely me) while making life incredibly hard for others.
Is removing hate speech going to stop these problems? Absolutely not.
Would I have any problem removing hate speech from the discussion community I run? Absolutely not.
Would the good people at Ycombinator have any issue removing hate speech from Hacker News? Absolutely not.
Each decision being made will be evaluated for its merits. "It's only a small change" doesn't actually work as rationale with this many people watching.
There is no slippery slope, it's a... grippy slope, because of all the... eyeballs.
Evangelical Christianity is a fantastic example of the slippery slope these sorts of actions create. I doubt anyone here would say most evangelical churches violate the legal ideals of free speech, but they've been extremely vigilant in maintaining groupthink on many controversial topics, and anyone who goes against the group ideals is immediately shunned and treated as "other". This has created a feedback loop where they are now so far detached from mainstream Christianity* that many are trying to classify it as its own sect, and in some cases its own religion. Thus there has been practically no progress in evangelical theology within the past 20 years (Revelation fear-mongering notwithstanding), and as a result it has led to some seriously unscientific beliefs (that aren't even compatible with their own faith!) and severely detrimental societal interactions with the greater world.
It's extremely interesting to find that those who look upon those same Evangelicals with contempt and hatred are following the exact same path.
* inb4 someone who's never been to an evangelical church service tries to claim evangelical christianity is mainstream
> As I understand it, "evangelical" just means the church tries to get more members
"Evangelical" is a label that was adopted by a specific group within Christian Protestantism that was reacting against specific elements that were seen as problematic within Fundamentalism (it was essentially a dissident offshoot of Fundamentalism) -- the term had some prior use, which in part inspired the label selected, but that's now pretty much its exclusive use when used to describe a subset of Christianity.
In some respects, some of those disagreements have faded over time; there's a lot of overlap in terms of both religious ideas and the relation of religion to civil society between Fundamentalists and some subset of Evangelicals, though the range of Evangelical views is still broader. It's not uncommon for people outside of either movement to use "Evangelical" and "Fundamentalist" interchangeably.
"Evangelical" christianity has come to describe a certain group of mostly Protestant believers largely found in the South and Midwest US (https://en.wikipedia.org/wiki/Evangelicalism).
Interestingly enough, they're more of a political identity in some circles as beyond a few (relatively minor) theological differences with denominations like Baptists, Methodists, or "non-denominational" groups they mostly identify based on social/political issues (gay marriage, abortion, religion and government, etc. etc.) rather than theological ones (although they do their best to tie the two together).
"Evangelical" actually means something quite different from "evangelizing".
The "evangel" is the "eu-angelion", the "good news", the "gospel". Or, more specifically, the gospels, the first four books of the New Testament.
To evangelize, to be "evangelistic", is to attempt to bring the "good news" to others. It's a matter of what you do.
To be "evangelical" is (so at least members of the groups that go by that name would say) to be grounded in the "good news", the gospels, the New Testament. (As opposed to, say, later church traditions.) It's a matter of what you believe and where those beliefs (purport to) come from.
The term "evangelical" has a complicated history, going back to Martin Luther and beyond. It's pretty much always denoted a strain of Protestantism, and in the modern-day US it means roughly "fundamentalist but not too fundamentalist". (On the other hand, in non-anglophone parts of Europe it mostly just means "Protestant".)
Evangelicals tend to be evangelistic, not least because the New Testament tells them to be. But you can be evangelistic without being evangelical; as you say, that would describe any church that tries to expand its membership, or any individual Christian who tries to persuade others to convert.
An example: Facebook removing actual statistics on campus rape (i.e., "hate speech") and kowtowing to the completely false SJW/feminist narrative that 1/4 college women are raped.
Ehh... This is more like "don't post stuff that offends me and makes me scared for my safety".
That's the real danger with censorship. Anyway, Facebook as a social platform is on its way out, but the brilliant fucker that Zuck is, he is buying up everything that remains relevant.
This is a direct consequence of the world wide web turning from a self-publishing culture (blogging + RSS) to a posting culture (Facebook, Twitter, Instagram, Snapchat, etc.)
It's trivial to exert political and social pressure globally. I'm not trying to be polemic nor alarmist but, when you centralize the primary social exchange for almost 2 billion people on one network, these are the effects.
People need to start thinking for themselves and see that by using a service like these they are only seeing part of the whole.
To be fair, most people probably won't care too much about what's outside the walls of the garden. And further, the walls of the garden aren't terribly high nor hard to come over. It's easier and simpler than ever to start a blog. It's easier still to make a Facebook account.
People who care enough to make the trivial amount of effort to publish outside the system should, and we as developers should make it even easier, safer, and lower-friction: Let's Encrypt, cheap and easy static hosting, static site generators, etc.
I find it interesting that, in this thread, most of the reasons to support this type of thing resort to the lowest common denominator as the reason to support it.
"This could be used to silence opposing viewpoints!"
"You mean like rape and death threats? Why do you support that? You're a disgusting racist bigot!"
As if that's the only type of free expression the naysayers are talking about. It's kind of sad really.
Imagine the surprise when some of these people supporting such things find out it's their turn to have their opposing viewpoint stifled.
"With the right of free speech comes the responsibility to extend that right to others. Because if we leave who is heard to simply 'who can speak the loudest'... we won't find out when we're wrong."
Having watched both of the available trailers for the new Ghostbusters movie, I think I will very much dislike it and have no intentions of watching it. By the measure of many prominent online media publications and verified Twitter users, this alone makes me a misogynist, hate mongering, intolerant man-child [1].
But Twitter et al. support that sort of hate speech, while my opinion would be happily suppressed because of all of the hate people would stamp onto it.
The first trailer was terrible. I think almost objectively terrible. It's clearly a ghostbusters movie, but it didn't seem to give me any reason to want to see the movie. It made it look like a cash-in-remake.
The later trailers have looked much better and give me some hope. We'll see what reviews say.
The original movie is a pretty great film, and I enjoyed it. I loved the cartoon as a kid, so I'm interested in seeing what happens with the franchise.
If you're not interested in the movie because you don't want a remake, or you thought the trailer didn't look good, that's fine.
If you're expressing that you hate the movie only because they changed the cast for this iteration... then you start to fit into the 'angry-man-boy' category. Same if you fit in the previous category but use lots of sexist terms and slurs to describe the new movie.
And then there is group three, the worst of the groups. These are the people sending hate messages, death threats, rape threats, etc. You can call for a boycott for whatever reason you want, people will probably think you're childish. But if you're sending death threats because someone changed a fictional character in a movie you haven't even seen yet? Yeah, you've earned that label.
As for AVGN, I haven't watched the video. Given everything that's happened around the movie, it certainly immediately raises my "angry-man-boy" suspicions, but again, I haven't watched it. At this point the nutcases are getting so loud over this (and every other topic) that it gets really easy for people with more reasonable viewpoints (especially if not presented well) to end up lumped in.
Solution? Ban the nutcases. No one needs to be able to send death threats via Twitter. No one needs to be able to receive them. Death threats aren't free speech, they're already exempted (assuming they are in any way reasonable).
There is plenty of room for subtlety in discussions of this kind of stuff. Can white people use the n-word? What if group X suddenly decides that phrase Y is evil?
I don't care. We can decide that later. For now, let's just ban the death threats, physical abuse threats, etc. and see where that gets us. I bet it improves things a ton.
I'd say I want to see someone make a good case for allowing death threats but this is the internet and I really don't, because someone will try and I'll think that much less of humanity.
That's because if you dislike the trailer, it's obviously because you dislike women (or at least movies empowering women), which makes you misogynistic.
Not liking something that has women in it is not the same as not liking something because it has women in it, but thank you for trying to tell me what I'm thinking.
For example: Not liking a gathering of wives of KKK members is not the same as not liking all women.
Except that isn't the case here. The EU has declared, arbitrarily defined "hate speech" illegal. This isn't just the big name volunteering their services.
Thank you. It's nice to see a post here that isn't trying to conflate "things people don't want to hear" with "death and rape threats" and assume they have to be dealt with the same way.
Those kinds of corporations are utilities. Your electricity provider cannot cut off your supply for using said electricity to post racist shit on the internet, and neither should Facebook and Twitter be able to.
This call for action seems rather unorganized and vague.
What does this even mean:
"The companies also agreed to promote "independent counter-narratives" to fight hate speech, including content promoting non-discrimination, tolerance and respect."
Are they trying to stop ISIS or please offended snowflakes? Doing both is a heroic task which will leave millions disenfranchised. Will this open the market for new social media platforms?
It's still a bit vague, but it will give you a better idea. I think the focus is terrorism and outright racism more than offended snowflakes. From the release: "The European Court of Human Rights set out the important distinction between content that 'offends, shocks or disturbs the State or any sector of the population' and content that contains genuine and serious incitement to violence and hatred. The Court has made clear that States may sanction or prevent the latter."
> Will this open the market for new social media platforms?
My gut reaction to this was "Can't be a bad thing; let all of the hatred currently vented on Twitter go somewhere else". But this is how Stormfront essentially works, and it's a bad thing: an echo chamber of "kill all the [insert group]" leads to thinking that's acceptable, or even a good idea. It's why mass shooters are announcing their plans on 4chan.
People need to be told that their ideas are wrong, not given a new playground where everyone agrees with them. Kudos to Twitter for saying death threats and jihadism are wrong and not supported on their platform.
Social media is not directly subject to the first amendment. However, there is a legitimate problem if entities that monopolize a communication channel apply subjective censorship; it's a threat to the first amendment, an American principle, if the masses can't actually exercise free speech. It can be argued that online speech has now been monopolized by a few entities; while I don't think that's entirely true, there's a nugget of truth in there. If Facebook, which now wants to be more of a media company than just a place for cat pictures, controls an obscenely large portion of online communication, that means they (a small number of people, in reality) can have far too much influence on political discussion, freedom of expression, etc. While the side of me that's libertarian says that people can simply not use Facebook if they don't like it, saying that alone is not going to stop the masses in the center of the bell curve from subjecting themselves to censorship, which indirectly subjects myself and others to that same censorship. What could make this worse is the continuing implosion of Twitter. If we want people to retain some modicum of responsibility (i.e. just unfriend/unfollow/block), censorship that oversteps reasonable boundaries needs to be challenged, and our government can do that if these companies impose on free speech. (Whether or not that actually happens, idk.)
We need to flag it, not delete it. Pretending this stuff doesn't exist isn't doing anyone any favors. It's far better to have a public record of who is posting this stuff. Let them advertise themselves, and let's deal with them. That's progressive. Censorship is for communists.
For a blanket solution I would suggest a "safe feed" default where tweets flagged as hate speech are removed from your feed. Also have a hate counter in profiles so the number of hate tweets is public information. Then if the user wishes to delete them and have no hate on record, they can. What the manual mod team would need to do is review unflagging requests. Flagging can be done by the community, and most of them will be right. Unflagging and malicious flagging is the right problem to focus on, not whether or not to delete something (minus cases where it's the law).
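The "safe feed" plus public hate counter described above could be sketched roughly like this. All names here (`Post`, `Profile`, `safe_feed`) are illustrative assumptions, not any platform's real API:

```python
# Hypothetical sketch of the "safe feed" idea: flagged posts are hidden
# for users on the safe-feed default, and each profile exposes a public
# count of its currently-flagged posts.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    flagged: bool = False  # set by community flagging, pending review

@dataclass
class Profile:
    name: str
    posts: list = field(default_factory=list)

    @property
    def hate_counter(self) -> int:
        # Public information: number of posts currently flagged.
        return sum(1 for p in self.posts if p.flagged)

    def delete_flagged(self):
        # The author may delete flagged posts to clear their record.
        self.posts = [p for p in self.posts if not p.flagged]

def safe_feed(posts, safe_mode=True):
    # With safe_mode on (the suggested default), flagged posts are
    # removed from the feed; turning it off shows everything.
    return [p for p in posts if not (safe_mode and p.flagged)]
```

The moderation team's work then reduces to reviewing unflagging requests, as suggested, rather than deciding deletions.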
I've had remarkably similar ideas rolling around in my head for the past couple of years, watching various viral controversies work their way through Twitter, Reddit, and Wikipedia. I disagree with the idea of a "hate counter," though, because it would encourage people to self-segregate and I think it's wrong to reduce people to their worst attributes. Everyone has the potential to make a positive contribution in at least some niche.
I would just add that my ideal system would have a plethora of highly objective, undeniable flags for a variety of potentially disruptive behaviors; posters would be required to disclaim or "pre-flag" their own posts; and readers would have the ability to toggle which flags they're willing to view at any given moment. The punishment for failing to disclaim a flag would be a period of probation (increasing with repeated violations) where the user's posts are automatically disclaimed for that flag. The punishment for malicious flagging, as judged by a group of users with a long history of proper use of a particular flag, would be probation with regard to the ability to use that flag on others' posts.
I expect that the "off-topic" flag, along with the ability to rigorously define the topic in any given context, would get a lot of use, as would "name calling," "assigning motives or attributes to a group" (aka "broad brushing"), and "expressing incredulity" (aka the starting-your-comment-with-"Um..." flag).
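The pre-flag/probation mechanism described above could be sketched as follows. Everything here (`FlagSystem`, the flag names, the probation length of ten posts) is an illustrative assumption, not a spec:

```python
# Rough sketch: posters must disclaim ("pre-flag") their own posts;
# missing a flag earns probation, during which posts are auto-disclaimed.

class FlagSystem:
    def __init__(self):
        self.violations = {}  # user -> count of undisclosed flags
        self.probation = {}   # user -> posts remaining on auto-flag

    def submit_post(self, user, text, declared_flags, true_flags):
        # While on probation, the user's posts are auto-disclaimed.
        if self.probation.get(user, 0) > 0:
            self.probation[user] -= 1
            declared_flags = set(declared_flags) | set(true_flags)
        missed = set(true_flags) - set(declared_flags)
        if missed:
            # Failing to pre-flag earns probation that grows with repeats.
            self.violations[user] = self.violations.get(user, 0) + 1
            self.probation[user] = 10 * self.violations[user]
        return {"text": text, "flags": set(declared_flags) | set(true_flags)}

def visible(post, hidden_flags):
    # Readers toggle which flags they're willing to view.
    return not (post["flags"] & set(hidden_flags))
```

Punishing malicious flagging would be the same probation mechanism applied to the ability to flag others' posts.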
The only possible way this scales is if they run on a report-based system.
Result: A group of mildly-dedicated trolls can pull anything they want from Facebook and Twitter by bombing it with flags. These events are minuscule in the grand scheme of the millions of posts per day the sites get, so nothing will ever be done about the abuse, leaving those who complain to look like a loud, perpetually dissatisfied minority.
Hate Speech is now the Worst Thing Ever® on the internet, so Facebook and Twitter now get to claim "Look! We did something! Witness us!"
Whatever happened to the principle "I may disagree with what you say, but I will defend to the death your right to say it"? This is about putting a lid on views of their citizens that European governments don't like. Disappointed that Twitter and Facebook are giving in to this in Europe when they wouldn't in China.
This makes me uneasy, but not for the reason other people are saying. By committing to a 24 hour response time, they seem to be committing to a response strategy that doesn't involve any investigation. There's a risk that they'll end up removing important speech, and things that are only borderline in their questionability.
Defining hate speech is a fuzzy problem. Are we talking about a union of definitions from everyone, or an intersection, or is the answer the ubiquitous "it depends"?
again, just to reiterate my unwavering unpopular viewpoint: having a single entity (or 2-3 colluding entities) control vast swathes of the internet is fucking insane.
* 1 - 2 last mile providers per area.
* google virtually entire world search engine (pagerank variations used in every major one)
* facebook controls most of the worlds personal account info / verification
* twitter realtime sentiment and data broadcast.
facebook and twitter are less dangerous than time warner and google, but facebook is still massively dangerous because a real fb account is more provable than an ssn: it's 10 digits vs. 10 years of your digital life, including any part of your life that was important to someone else. I still don't get this:
> I have a backup of my application data on aws in several regions, I also have a replicated backup on azure, and I push to tarsnap every quarter just in case.
This viewpoint is considered pragmatic and best practice.
> We should allow Google & Baidu to essentially control the onboarding, searching, verification, security, document storage, mobile, transmission, DNS, and CA on the internet.
this is fine. If you don't like it, use duckduckgo or bing's version of pagerank.
American tech companies shouldn't indulge censorship, even if it seems well intentioned, the scope is promised to be limited, and the pressure from governments is immense. They also shouldn't try to pick political winners by promoting the social campaigns and positions of the EC, as they've promised (citation below).
What is legitimate speech should not be determined by whoever is in power. Europe has tried that on several occasions. It never goes well. Just imagine if Donald Trump could declare what speech was considered legal and illegal. Perhaps we're getting a preview from Erdogan in Turkey right now.
This situation appears to be an attempt to suppress dissenting views (largely on immigration, border control, and refugee/asylum policy) following shifts in public opinion away from European integration and establishment incumbents like the CDU/CSU and SPD and toward nationalism and political parties like AfD in Germany, FPO in Austria, and FN in France. Since November Merkel has been pressuring Facebook to sanitize the platform of negative views toward migrants. It is troubling that leaders can punish their citizens for views that break from their policies. State police have even conducted house raids on those promoting "incorrect opinions" toward migrants on social media (citation below). We are not talking about physical threats, we're talking about opinions. Facebook and Twitter should not enable what has become, in a very literal sense, thought police.
If there are window-licking asshats posting nonsense on social media, let's challenge their views and win the debate. Democracy is about more speech, not less. I hope Facebook, Twitter, Google, and Microsoft haven't drifted so far from their original principles that they've lost sight of that.
> American tech companies shouldn't indulge censorship
I'm all for free speech, but US companies should follow local laws. US tech companies aren't above the law, especially when they do business in foreign countries (selling ads is doing business). It sucks, but going against the laws doesn't work in the long run. You can get away with it when you're a young US startup and you don't have offices in France, Germany, or Austria, but as soon as you do, you are effectively subject to local laws.
The website the speech is published to. They don't try to define hate speech in the absolute, just define what they don't want to be associated with. Yes, it's a slippery slope, but it's just this website's opinion.
As much as it's annoying, people tend to forget those websites are private companies, not charities or government entities. They can do whatever they want as long as it's legal. They don't in any way have humanity's interests in mind; it's silly to expect them to take a stand in any direction other than the one benefiting them.
Hate speech is already illegal in most of EU (you can go to jail for it). Trouble is, this was not enforced properly on social networks, they went "we don't care" and law enforcement didn't know what to do. This became a big problem during the refugee crisis, and several extreme far-right parties across Europe started using social media as their preferred medium (and the Russian propaganda machine is using it too). And they won big in several elections. So first Germany made a deal with Facebook and now the EU did it for all EU countries. I really do hope this will work to stop the propagation of hate speech at least a bit, otherwise several countries will be governed by neonazis in a few years, and even the EU itself might break. This is serious.
> I really do hope this will work to stop the propagation of hate speech at least a bit, otherwise several countries will be governed by neonazis in a few years, and even the EU itself might break. This is serious.
That's hilarious. You basically said "People who don't have the right opinions should go to jail!". Maybe if your governments had attempted to explain why the far-right is wrong instead of suppressing them, they would have been less radical. You can't just sweep the thoughts and emotions of your people under the rug, even (especially) if they are unpalatable to you, and hell, to me as well.
No, I did not say that, and did not mean that, they just have to stop the messages from spreading massively (right or wrong does not really mean a lot now). Trouble is, social media is new, it is very effective in spreading a message - any message really - and the society (and state institutions are a part of the society) did not yet adapt to it. The controlling mechanisms, the culture around the new medium, are not really there yet. So this is the dangerous time when fringe forces are trying to exploit it for evil purposes. And it is not impossible they will succeed in this, like they succeeded in radicalizing the masses in the 20's and 30's with different media.
What is the difference between Hate Speech and Newspeak? When do we cross over the line from good intentions to thoughtcrime? Is the best way to end racial, religious, or sexual discrimination to hide it or to lambaste those who would propagate it?
It is absolutely stunning and sickening that so many otherwise intelligent people cannot (or refuse to) grasp the simple concept that banning any speech or free expression is the opposite of having "free speech". It doesn't matter if the government (or some other entity) designates that speech as "hate speech", offensive, false, defamatory, or anything else. You either have free speech, or you don't. You cannot partially restrict free speech. Restricted speech is not free. Freedom is dangerous. If we were all locked in our own individual, soundproof cells all the time, nobody would be in danger of being assaulted or offended. That is not a free society. In a free society, you are constantly at risk.
It would be bad enough if censorship, fascism, and bondage were being inflicted upon us by domineering overseers using violence and sheer force. It is far more disgusting and disheartening that so many of my fellow human beings are actually embracing and endorsing this authoritarianism. Unfortunately, it looks like the human race is going to get what it's asking for.
I think there are a number of things that can probably be done to address harassment on social media which don't significantly infringe on free speech.
One possibility might be to allow users to set a rate limit on replies to a particular message for people they haven't white listed.
(So, like, if more than 20 "strangers" reply to a public post that one makes, it wouldn't allow further replies. But only if the person who made the post wanted it like that.)
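The opt-in stranger rate limit above could be sketched in a few lines. The names (`allow_reply`, the default limit of 20) are just illustrative assumptions matching the example given:

```python
# Hypothetical sketch: once N non-whitelisted accounts have replied to a
# post, further replies from strangers are rejected. Whitelisted accounts
# are never limited, and the limit only applies if the poster opted in.

def allow_reply(post, replier, whitelist, stranger_limit=20):
    if replier in whitelist:
        return True  # whitelisted accounts bypass the limit
    strangers = {r for r in post["replies"] if r not in whitelist}
    return len(strangers) < stranger_limit
```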
Other possibilities include checking whether many people arrived at an account via the same other account mentioning it. There would be ways around that, but it seems like it could still help. (Also as an optional thing.)
Some of these are kind of weak but I think some things could be done.
Note that none of these prevent anyone from saying anything to anyone who wants to listen. It just makes it harder for people to say things to someone who doesn't want to hear them.
You don't understand the 1st amendment bro. The 1st amendment only stops the US Government from censoring speech. Private companies can do whatever they want.
So the EU (the government) directing a private company to censor things is OK with you? Or did you not understand what I was referring to/making a comparison of?
I completely understand this move. It seems logical enough, and it's much like what Reddit has done/is doing banning certain hateful subreddits. Advertisers don't want to advertise with companies who are known to host hate-speech or anything of that caliber. However, a move like this is essentially going down the rabbit's hole: you can never get to the bottom. Who decides what constitutes hate speech? No doubt, certain groups will claim they are being censored much like what happened recently with Facebook and conservatives. It may seem like a perfectly good move, but it's just not viable in the long-term.
Especially when sometimes this is being forced by a group of people who seem to be professional complainers. First they cut out 10% of the users for this violation, then another 10% of the smaller group for that violation, then another 10% of the new smaller group for another violation, and so on and so on. Eventually you're just left with the complainers, who'll just move onto another platform to start over with the complaining.
> Big tech companies have also met U.S. government officials earlier this year to discuss how to stop ISIS from recruiting terrorists on social media. The Obama administration had asked the companies to develop techniques to detect radicalization, and block pro-ISIS messages, photos and videos.
Assuming the messages could be accurately detected, wouldn't it make more sense to allow them to continue and flag them for spying, and find the people sending them? If they're blocked from Twitter, they'll just move on to some less easily accessible platform.
Meh, I've seen Facebook and Twitter's standard response to racially motivated death-threats. Deciding it's within the terms of service faster isn't going to change much.
Goooood luck. I have reported so many neo-Nazi scum posts, people, and groups, and guess what? Not a single one got blocked, despite swastikas, refugee hatred, and other stuff that's outright illegal in Germany. It took over two years to block "Anonymous.Kollektiv", a fake "anonymous" site spreading extreme right-wing hate, and it isn't even certain that Facebook blocked them; many people suspect Mario Roensch shut it down voluntarily after he failed at opsec and subsequently got outed in the mass media; now he's on the run with multiple warrants.
Meanwhile, Facebook threatened to block "Kein Mensch ist illegal", a 122k-liked pro-refugee page, and its administrator got suspended for 30 days for posting a sourced, valid image (http://kein-mensch-ist-illegal.org/fb.jpg) which shows that 90% of religious muslims like democracy and 28% of right-wing AfD allied people hate democracy with further 61% being frustrated with democracy. All in all, nothing that violates any FB rule. It took mass-media outrage for FB to rescind the block.
As long as FB repeatedly and openly rather allies with neo-Nazi illegal crap and instead bans admins and pro-refugee pages (because Nazis regularly organize "flagging" contests), I won't trust them.
Actually, it would be Huxley rolling in his grave. Orwell was about government/"the system" oppression. Since these are private citizens/companies doing this, it's more inline with Brave New World.
Well, kind of. In BNW the population was also controlled by the government, but not through warmongering, fear, and hate; rather through a pleasure-inducing drug that made people numb to anything happening outside their daily routine (and eugenics, mostly).
So I think banning is more Orwellian. But sure there are a lot of people who wouldn't care about censorship on the internet, even if it would affect them too.
It's fascinating to see what criteria they will consider hate speech.
Facebook is usually fast to act on any reports, so I've been able to test what they allow and what they have removed.
A black girl making fun of Asians and calling them stupid and "chinky ass" was left up.
A white guy calling black people "ghetto" was removed.
A white guy calling Mexicans "wet backs" was left up.
I'm not sure what they are counting as hate speech, and what they aren't, as those posts appear to have been moderated by an actual human. I guess it depends on the time of day, and the attitude of the person judging the content.
Considering that the definition of "hate speech" has already morphed into anyone addressing race, sex, gender, national origin, religion, etc., it seems we have already arrived at such a vile place as you described. It absolutely boggles the mind to realize that we are living in a world that is imploding on itself through self-immolation, self-doubt, and self-destruction.
I guess the cry-bully crowd will be happy once the west and the civilization and peace and prosperity it has brought to the world and provided them is left in a crumpled heap an we are all so fortunate as the rest of the noble savage world that does not indulge in the "privilege" of freedom, liberties, free speech, individual rights, self-determination, competition, etc.
The blessings of socialist nirvana will soon be upon us, and we will all be equally destitute once everything has been torn down in infantile fits of social rage and tantrum akin to what led up to the dark ages.
Do you feel better after trying to be passive-aggressively insulting? Let me guess, you have no comprehension of the fact that your comment is precisely the kind of infantile tantrum I was referring to; in the absence of argument or fact, you resort to flaccid attempts at insult to make yourself feel better, in an attempt at abating the discomfort that comes from the cognitive dissonance of reality penetrating the artificial boundaries of your self-delusion.
Don't worry, I don't expect you to understand that and very much expect the same kind of defensive reaction you just exhibited. Don't worry, you can continue fooling yourself into believing you are correct.
Do these count as American companies? If so, is there not some violation of the First Amendment?
> The First Amendment (Amendment I) to the United States Constitution prohibits the making of any law respecting an establishment of religion, impeding the free exercise of religion, abridging the freedom of speech, infringing on the freedom of the press, interfering with the right to peaceably assemble, or prohibiting the petitioning for a governmental redress of grievances.
The relevant phrase is bounded by the phrase "making of any law". The Bill of Rights operates only against the Federal government, except those rights incorporated against the States by the 14th Amendment, and things which were applied to private action by the 13th Amendment.
In broad strokes, no – the First Amendment only prohibits the government from abridging speech. Companies have no obligation to protect your speech, but many act with the spirit of free speech.
Not American but I'm pretty sure the 1st amendment doesn't apply to private companies. You can go out in the street and yell hate speech if you want, you don't have a right to force them to allow it on their platform.
One of my favorite quotes on people being mean on the internet, courtesy of Tyler the Creator: https://twitter.com/fucktyler/status/285670822264307712?lang...