The rationale they gave is that hate speech appears in these apps because some of the microblogging sites that can be accessed via the Fediverse host this kind of content. Based on this rationale, I look forward to Google Play removing Chrome, Firefox, and all other web browsers from the store as well.
This sort of decision by Google does make me rather uncomfortable (the entire situation is uncomfortable... https://www.theverge.com/2019/7/12/20691957/mastodon-decentr...). But it's worth understanding why the situation may be a bit more complicated than is described above. What seems to be happening is not an absolute ban on Fediverse apps, but a ban on specific implementations that make it easy to join specific communities which encourage hatred and real-world violence. Other implementations block these instances, and I believe are not banned.
Whether or not this is a good thing is a complex question. If you happen to be the target of this hatred and violence, and feel it is an existential threat to your livelihood, you might believe that it is a good thing to make it more difficult for those who are engaging in this behavior to enlarge their communities. On the other hand, if you believe eliminating communities by platform fiat is an existential threat to your livelihood, this may seem like a very bad thing.
(You might also think it's hypocritical, since you can access most of these communities via a browser. Google also controls the browser, and does make it difficult already to access some sites https://developers.google.com/safe-browsing/v4 . However, it does seem to have a higher bar for browsers than for social apps (e.g. malware, csam, iirc); some have suggested that there are legal reasons for this, I'm curious to learn more on this, but I have not seen any substantiation yet.)
This isn't quite safe harbor. It's not like the app was removed for one user posting one piece of bad content. If what the poster above said is true, it's closer to an app having a user who regularly broke the rules, and the app refusing to ban that person.
I disagree with your comparison. This app can connect to arbitrary domain names. It's getting blocked because the developer isn't proactively filtering the list of domains a user can connect to.
That's wild & I can think of zero precedent for it.
I'm not sure what you mean by justification. I think I simply lay out some context and a set of conflicting perspectives.
That said, if you don't want Chrome and Firefox to be content aware, then you should argue that safe browsing should be eliminated from Firefox and Chrome. That is a self consistent position, but it may not be consistent with e.g. avoiding dramatic growth in botnets, ransomware, organized crime etc.
Actual safe browsing comes from content-unaware tools like NoScript. And yes, I did spend half an hour going through about:config and neutering everything related to 'Safe' Browsing(R)TM(C)LLC.
> but a ban on specific implementations that make it easy to join specific communities which encourage hatred and real-world violence
So basically, Google only supports the Fediverse if, like itself, it engages in censorship. The Fediverse exists not to encourage hate speech, but to discourage censorship. Hate speech is the inevitable result of allowing humans to say what they like. Some people will choose to be nasty. Many people believe the greater good is the free flow of information, and that adults are more than capable of filtering out and avoiding those information sources which make them uncomfortable. Instead, Google wants to treat everybody like children, and be the helicopter parent that swoops in and removes anything objectionable.
>but a ban on specific implementations that make it easy to join specific communities which encourage hatred and real-world violence.
As you state, one can access these specific communities in a number of ways, including Google Chrome. If the community is the issue, go after the community, not an ActivityPub app that can access content from these and other communities.
Should Google also ban RSS reader apps that don't actively block RSS feeds from sites Google doesn't like?
Oh, please don't suggest banning RSS apps - Apple is already doing that, they removed Pocket Casts and Castro because they allow access to Podcasts that offend Chinese censorship, while Apple's own podcast app remains because it blocks those particular podcasts:
It's a bit silly to emphasize specific communities if this results in a ban of the entire app or network. ~all apps and networks have some communities like that. I don't think this is a complex question at all, this is just bad.
When the cathedral supports real-world violence it's good. When you support real-world violence it's bad. They want you dead, but will settle for your submission.
Safe browsing doesn't include sites for encouraging hatred and violence, etc. Only malware, social engineering, and "harmful"/"unwanted" applications. If they start including those sort of sites in their safe browsing lists, that would make your point here more relevant.
(Of course, some people get hit by safebrowsing unfairly. But I think in most cases, it is because someone compromised their site and used it for a malicious purpose, and then they struggle to get Google to remove it within a timeframe which is reasonable.)
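For the curious, that scope is visible right in the shape of a Safe Browsing v4 Lookup request. A minimal sketch (the client id and URL are made up, and this only builds the request body rather than calling the API): the `threatTypes` enum covers malware, social engineering, and unwanted/harmful software, and there is no category for hateful or violent content.

```python
import json

def build_lookup_request(urls):
    """Build the JSON body for a Safe Browsing v4 threatMatches:find call."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            # The full set of URL threat categories the v4 API checks for;
            # note that none of them concerns speech or ideology.
            "threatTypes": [
                "MALWARE",
                "SOCIAL_ENGINEERING",
                "UNWANTED_SOFTWARE",
                "POTENTIALLY_HARMFUL_APPLICATION",
            ],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["http://example.test/"])
print(json.dumps(body, indent=2))
```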
In other words, the developers of these apps need to all run their own Fediverse nodes, _but not federate them to any others_ because otherwise users may be able to access content from nodes that Google doesn't like! Because each dev having to vet every instance out there is the only other option and that's practically impossible.
It concerns me that the company taking down the apps also owns my entire mobile OS from browser to network stack, as well as the DNS resolver, the search engine, and the email client I use.
So Google somehow knows which apps block certain instances, instances that generally get a reputation among others and are quickly blocked? That's not believable.
And right after that we can remove any FTP client that uses the FTP protocol to download content Google doesn't like. We should scan all apps that use a common, published protocol to make sure the protocol is not being used to consume objectionable content. /s
The app is not the service; the protocol is not the platform.
I think you might have misread my comment; I wasn't suggesting whether a course of action was correct or not, but just explaining how it could technically be feasible. I interpreted the comment I responded to as not understanding how it would be possible for Google to have done this a certain way, and I was theorizing one possible way they might have done it.
Ah, then yes, apologies - I did not mean to put words in your mouth. Technical feasibility is likely easier than imagined; most Mastodon servers use the auto-generated list that appears on their "about" page - easily scraped if not available through the API - here's the list on the instance I moderate, for example:
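(For anyone wondering how the scraping might look in practice: newer Mastodon versions can also expose the moderated-server list via `/api/v1/instance/domain_blocks` when the admin makes it public. A minimal sketch of parsing that response; the domains below are made up, and this parses a sample payload rather than hitting a live server.)

```python
import json

def parse_domain_blocks(payload):
    """Parse the JSON returned by GET /api/v1/instance/domain_blocks,
    keeping only the fields relevant here: which domain, how severe."""
    return [
        {"domain": b.get("domain"), "severity": b.get("severity")}
        for b in json.loads(payload)
    ]

# Illustrative response shape (fake data, not from a real server):
sample = json.dumps([
    {"domain": "example.bad", "severity": "suspend", "comment": "spam"},
    {"domain": "other.bad", "severity": "silence", "comment": ""},
])

for entry in parse_domain_blocks(sample):
    print(entry["domain"], entry["severity"])
```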
It wasn't that long ago when virtually everyone understood that "hatred" was completely subjective. Trying to remove all communication channels because of the potential for "hatred" means nothing but total silence.
They do it with YouTube now as well and demonetize ANYTHING with firearms in it. Doesn’t matter if you’re a hunter or trying to sell people on a new product.
All these disparate media sources that we yearned for back in the cable-only days have finally turned to dogshit.
I think the reason would be that with browsers they don't control the ecosystem enough to get away with it. I actually agree with the ban if your framing is correct (not having looked into it any further), but if they did this in Chrome, people would just use another browser to access these sites. You can sideload apps as well, of course, but it's much more of a hassle than doing it on PC, where people are used to software distribution not being as centralized.
Free Speech Extremist. Shitposters Club. No Agenda Social. Lets all love Lain.
There are a ton of instances that much of the Fediverse blocks, but if you set up your own server and follow people on those instances, it's not 80% hate speech and racism as others would have you believe. Yes, there is some of that, but there are also weebs, and anime and political discussion and weird gaming discussion and videos not posted anywhere else and memes, and the great diversity of thought we used to have on Reddit before it became a monoculture.
There are also straight up anarchist instances that justify violence and destruction of the state like Rage Love, Anticapitalist Party, and others.
It's a very big space, with new players entering and leaving every month.
Banning apps because they do or don't have block lists greatly misunderstands how the Fediverse works.
Which I have always suspected was the real reason they killed off Google Wave in such a hurry, even though we were told they found it useful for collaboration within Google.
> If you happen to be the target of this hatred and violence, and feel it is an existential threat to your livelihood, you might believe that it is a good thing to make it more difficult for those who are engaging in this behavior to enlarge their communities.
I’m indeed being threatened by various hate groups (one of them actually tried, and almost succeeded, to kill an acquaintance), but strangely enough they are never removed by Google or any other big corporation. Worse, each time I voice even a slight complaint about them, I am the one being censored. Some of those groups even sometimes get official support from the GAFAM. This is a really odd and unfair world.
I think it matters because, sadly, I’m at the point where I need to evaluate a death threat to decide whether it is reasonable to fear it.
It’s really unfortunate when someone fears for their life and I don’t want that for anyone.
However, lots of people fear for reasons that I don’t think are actually from threats of violence.
I had a friend explain how they literally feared for their life. When trying to console them I learned that the thing that was making them afraid was a friend’s Facebook post about a restaurant that supported some Bible group. Their reasoning was that the Bible group was anti-gay, and they might end up killing them for being an ally of gay friends.
Because of this they feared for their own life and wanted the friend to stop talking about it.
Now of course, there are multiple lame things about Bible groups being jerks, but certainly nothing to make this person think their life was in danger or directly threatened.
I’m not sure how to specifically help that person, but after several episodes like this, I don’t pay much attention to them when they say that they get death threats.
Maybe I’m just jaded, but lots of people talk about death threats, and I’m sure they perceive them as such. But having the details of the threat helps to differentiate the really dangerous people trying to kill others from the multitudes of people saying “DIAF” who aren’t trying to kill, just being jerks.
Exactly. This is Google drawing the line on where this "hate speech" is from and they believe that such "content" can be accessed via the Fediverse.
To see how ridiculous this sounds, Google might as well completely take down the entire social media and internet browsing category on the Play Store since I keep seeing the same content from both extremes on all these platforms.
Just wait until you tell them to take down their own browser, since you can find this "content" with a simple search. They will soon realise that "drawing the line on hate speech" is tougher than solving leetcode CS questions.
> Just wait until you tell them to take down their own browser
Well, they're taking the address bar away, bit by bit; they have SafeSearch; and they have AMP. It's a very slow erosion, but there will come a point at which going outside of the list of officially acceptable sites will become more difficult - first with mandatory warnings, then maybe with mandatory reporting to law enforcement or whomever, and eventually not at all.
Yes, it sounds like a "slippery slope" argument, but we're a few steps down the slope now, and any argument that encourages us to climb back up has to point out where things may go if we don't resist.
It sucks that this requires us to defend the rights of people to speak whom we may intensely disagree with, but that's the crux of the matter. Either we become mature enough to understand that people will have discourse we dislike, and avoid it or engage with it as we see fit, or we continue to hide behind authority figures who will purport to keep us safe by controlling what we can say and think.
The fascinating part of this is that Google has officially claimed the mantle of arbiter of what is allowed on the internet (they are not exactly a gatekeeper yet, but given how people have trouble accessing information outside the FB, Apple, and Google gardens, they are well on their way).
edit: Trouble in a sense that it is inconvenient for them.
No, and there are laws and tomes of case law that reinforce that no matter how much curation they do, an interactive computer service will not be held liable for user generated content.
I get what you're saying, but I don't think this is because Google cares about hate speech. Google is simply using hate speech as an excuse to get rid of apps that it doesn't like. Deciding which apps you like and which you don't isn't that hard of a line to draw.
Google is politically very very left. I'm guessing that someone at Google may have browsed these apps, decided they didn't like what they saw, and pressured to have the apps banned. Obviously they can't do that to large players like Facebook, but small apps, they can easily crush, and nobody's going to do anything about it.
Imagine if Google decided to just block certain websites on Chrome, or if big tech got domain registrars to drop 4chan, or whatever humor websites they don't find amusing.
Not allowing a party's political ads would be clear favoritism in a political situation. You might not like Trump, but it's quite the jump to say Republican ads are "hate speech." In fact, that's quite literally weaponizing the phrase "hate speech" to censor political opinions you don't like.
Which is the whole point of hate speech laws. You can't market censorship of one party, but who could ever oppose censoring hate? Then redefine hate to be anything you'd like censored, and... that's what we have now.
Any time anyone complains about censorship, roll out the excuse of Holocaust denial, regardless of what's actually being censored.
Yes it's why freedom of speech is a thing. Ideas are meant to compete and the power to decide which ideas are acceptable is an absolute power that completely corrupts a society.
I said "pretty much" to underscore that I was generalizing and that there are many nuances there. But that is how conservatives self-identify. You will find that in their music and their art and their culture in general. I'm not inventing hateful terms to turn you against them as OP was.
> Trump just negotiated a peace deal between Israel and UAE
There are lots of Christians who favor Israel but ultimately hold anti-Jewish opinions. This is weird, yes, but support of Israel should not be mistaken for support of Jews (the reverse is also true, but only incidentally).
> Trump's son in law is Jewish.
So were Emil Maurice and Erhard Milch, at least according to German law. Didn't stop Hitler and Goering from making exceptions for them.
I have quite a few conservative friends. Most of them don't support Trump, because Trump isn't a conservative. He doesn't stand for God or individual freedom, nor does he stand for loyalty to the country. Don't conflate conservatives and Trump supporters. They're not the same thing, and trying to present a bait and switch between classical conservatism and Trumpism is a bad faith argument. This is why you see so many conservative politicians that are no longer in office supporting Biden over Trump. Because Trump doesn't extol conservative values. Instead he represents a populist and proto-fascist wing with more than a couple white supremacist tendencies.
I travel all over the country. And I'm from a highly conservative area in Florida. Trump has 95% support in the Republican party. He's got tremendous support and enthusiasm.
Trump isn't racist at all. He was friends with Michael Jackson, Jesse Jackson, Whitney Houston, etc. He actively tried to help Whitney from overdosing (https://www.rollingstone.com/music/music-news/mike-love-to-t...). He calls as many victims' family members as he can, regardless of race. He calls every fallen soldier's family, regardless of their race or creed. He just pardoned a black woman who was put away for a non-violent drug crime in the 90s. He signed criminal justice reform and established opportunity zones in poor neighborhoods to encourage business investment. He supports school choice to help poor communities who are stuck with broken schools. He's got 11 members of his cabinet who are Jewish (Mnuchin, Friedman, etc).
That was a mistake, presumably. It's likely this is too. The deep desire on the part of posters here to assume malice and scream CENSORSHIP is really off-putting.
I actually don't know anything about the Fediverse, but if it's like other pseudonymous obscure communications media, it's probably filled with awful stuff. It's not that hard to imagine a naive reviewer who doesn't understand the architecture being confused if they get a report showing screenshots of the app with the content available in it.
The only reason Podcast Addict has been restored (multiple times) is that it's high-profile, and the owner raised enough stink to cause widespread (enough) outrage about this. Otherwise, whether through malice or incompetence, it would be gone forever.
>The only reason Podcast Addict has been restored (multiple times) is that it's high-profile, and the owner raised enough stink to cause widespread (enough) outrage about this. Otherwise, whether through malice or incompetence, it would be gone forever.
This is my concern. These apps are not content hosts, they are akin to Web browsers or RSS readers, but they are small, one-person endeavours that don't have the clout to get Google to notice the difference between the content providers (the individual Mastodon servers) and the ActivityPub client app that these apps represent.
I know one of the devs is thinking to not push the issue as he's worried about his other apps on the same developer account.
The discussion has veered off into censorship issues, but this is a simple 230-ish problem: these apps are not the Mastodon servers that (presumably) some people have had issues with. They are agnostic client readers of ActivityPub statuses.
There is no way, nor any legal requirement, for a browser like these apps to be held responsible for the million possible bits of content it could consume.
It's not a desire to scream about censorship. It's more about how the rules are arbitrarily enforced, and how every app's fate is in the hands of two big players, so you're SOL if they ban you. Even if the ban is a mistake, good luck getting it reversed unless you're going viral.
Honestly, it's somewhat telling that automated app review can get messy fast and that Google should invest in Apple's approach to app review (but I also agree that the poster is extrapolating the app denial into something much more than what it is).
The fediverse is very split, you have some servers that are run by people who post straight up Nazi symbolism on their admin accounts, and you have some servers that have admins who will happily participate in piling on someone for appearing Insufficiently Woke. I block both kinds on my server because I just want a nice quiet place to talk with my friends, and that's a definite segment of the Fediverse too.
This much is key to observe: this isn't a partisan maneuver by Google, as much as people may want to slot it into that. It smells much more like a control maneuver: a perceived competitor.
To the big tech cartel, period. Don't think for one second that Google, Apple, Facebook, Twitter, Microsoft, Adobe, and their friends aren't having one big handshake party over this kind of crap.
Really? It's not okay to say it's censorship when it is? I'll admit I'm wrong iff the apps are reinstated without having to impose additional restrictions.
There is a specific exception to web browsers, so Mastodon app(s) could probably classify as one by prominently displaying the web url of a post above the post.
One of the problems we have in tech (as an industry and on social media) is that we allow individuals who make poor and harmful decisions to hide behind the collective of a company or organization. And we continue applauding them for the great work they do in areas removed from the political. But these days innovation acts as a shield where we let the innovators get away with it and reap praise as individuals (the inventors of golang, the teams who standardized QUIC, the people doing Netflix propaganda about their simian-devops-army, Facebook's React, Amazon's DSSTNE...) - all of them have engineers who wear these things like a badge and are proud to give talks. Yet when they are responsible for projects that violate human rights, remove the Taiwanese flag from their app, or censor speech as in this case, then we're never talking about people; it's always the opaqueness of the firm that hides these abuses.
We need a list of these lizards so we know when to throw tomatoes and rotten eggs at them whenever they give a talk or share feel-good posts on LinkedIn.
people should be ashamed instead of proud when they write "disclaimer I work at X"
When the hell has that /not/ been the case in society? Institutions have always been shields and your dehumanization and desire for shaming ironically shows exactly why they serve that function - they don't want to be subject to the whims of random mobs who aren't a part of them.
The tech industry is a place where people generally prefer to talk things out rather than yelling and shaming. I think that's worth protecting, even if we see short term gains that might be available from defection. After all, once Google realizes the norms have changed, won't they be able to leverage their resources to find people who yell louder and shame more frequently than you?
The old days were never as controversy-free as most people remember. There was a time not that long ago when common techie opinions like "Internet piracy isn't a big deal" or "shooter games are fun and kid-friendly" were seen as quite immoral in some circles, and calling your forum "Hacker News" was kinda subversive. If we're headed back to that kind of environment, just with a different set of moral issues enforced by a different set of people, that seems solidly OK.
That's needlessly confusing. Half the posts on https://www.reddit.com/r/TIHI/ are "hated speech" while being miles away from anything that would get called hate speech.
Guess they should ban Chrome, since hate speech appears on the web. All these Fediverse things can be accessed via web apps and progressive web apps (PWAs) too.
It's pretty rich that Google claims to be removing these apps for hate speech when their own search engine so prominently returns results about their victims from sites like Kiwi Farms and Encyclopedia Dramatica.
(throwaway since the former name searches themselves to find new targets.)
You joke about that, but I wouldn’t be surprised if in 5 years (or maybe 1 year?) open browsers are banned and only “allowed” browsers are used that allow access to “allowed” websites and content.
It always starts like that. People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government. Eventually the definition of hate or whatever it is that’s offensive is very removed from the original meaning, and now we all bear the brunt of the sensible people who with best intentions wanted to make things better.
Hanlon's razor has hobbled everyone's ability to see what the establishment is doing. This isn't about good intentions gone wrong. The loss of control of the narrative due to the internet has been a severe setback for the powerful, and they have been slowly clawing it back by limiting access to alternative media.
Various think tanks, NGOs, board members with multiple irons in the fire, foreign interests, and the government itself exert a lot of influence on large players to shut down harmful narratives. Most visible was when the deplatforming activity started with threats from lawmakers against outlets if they didn't remove certain content. You've also got various orgs with CIA connections acting as "fact checkers" on Facebook. The influence happens in subtle and many ways.
>The loss of control of the narrative due to the internet has been a severe setback for the powerful
How was the narrative controlled by the powerful before the internet became a part of everyday life? Specifically, how was the narrative controlled in the US in the decades before 1993?
I'm asking for recommendations of books written by historians, journalists and other serious people. (Understanding the situation decades ago is probably a lot easier than understanding the current situation -- partly because the powerful will take pains to hide their controlling actions from the public.)
In the US I get the general sense that politicians and holders of government offices have never been able to exert a lot of control of the narrative with the result that journalists and the prestigious universities have so much influence that they are best thought of as essentially part of the government.
That suggests that the efforts of the establishment to rein in the big social media companies will prove largely ineffective with the result that Facebook and Google will probably join the New York Times and Harvard as parts of the de facto governing structure of the US.
That's exactly what "reining in" looks like. Instead of being an alternative to, e.g. the New York Times and opposing the next Iraq war, social media just becomes yet another rah-rah cheerleading mouthpiece of whatever opinion the "serious people" hold.
I don't know if you consider Chomsky to be a "serious person", but Manufacturing Consent does go into how the people actually in charge of the government (professional civil servants, corporate lobbyists, etc) manage to make it seem as if their opinions are infallibly correct and countervailing opinions are thinly veiled crankery. What social media did (at least in its early days) was give everyone the ability to manufacture consent at a scale that previously was only the domain of the large media corporations. The establishment media is obviously threatened by this and are working to ensure that the new media follows the same guidelines as the old, even if that means censorship.
Of course, that's not how the establishment media phrases it (and probably not even how they believe it). They see it as "protecting" the people from unsavory "Russian fake news". In reality, though, that's just a lie they tell themselves and tell us to justify their continued hold on the ability to decide which opinions can be held by "right-thinking people". If they were truly interested in "the marketplace of ideas", they wouldn't be pushing so hard to make platforms as centralized, controllable and censorship friendly as they are.
Manufacturing consent online is dependent on the rules of the game. Russian-bot-syndrome is a fraud issue, in this case a state actor manipulating the 1-person-1-voice assumption of the game. If we're making a marketplace metaphor, then this market is being tilted toward actors with the resources to pay for bots, workers, or influencers. It's not like it's limited to Russia; Bloomberg was somewhat showy during the primary about paying people to tweet for him, and it's safe to assume any other well-resourced actor with an interest in manufacturing consent is doing the same thing.
The censorship debate is an indicator of game-rule collapse in social media. The platforms are reaching for top down control because they can't cook up a better way to reduce fraud and (let's call it) low-quality behavior. Ironically this method reduces the overall authenticity of the platforms and counteracts the intent of the censorship, and thus you get game-rule collapse.
It was an interesting situation because in some ways, I prefer the transparency involved? But it's sort of like any sale of an account, even if temporary - it seems disingenuous by nature. Almost like how an MLM makes you sell to your friends.
Has anyone demonstrated how many people these "Russian bots" have influenced? There's enough craziness online before throwing in non-linear warfare. I'd imagine it's far less influence than, say, Charles Koch has. Why is it ok that he has an outsized influence on our "democracy" but such a terrible thought that Putin has influence? It's not like either person will act in our collective interest.
It's connected to the same reason we only allow US citizens to be involved in US politics. You at least assume that a citizen has a direct interest in the country's domestic well-being. We talk about russian bots nationally because violating that norm is a cultural scandal, but there's plenty of discussion around outsize influence by corporations and the rich in social media as well. It's not really okay for anybody.
By the publishers of newspapers. The Winter Soldier hearings, essentially a mass confessional of war crimes witnessed and participated in during Vietnam, flat out weren't covered at all on the East Coast, for one.
I really don't get that logic. Of course, everyone agrees that hate speech is bad (and so are a lot of things, but I digress). But how is it not censorship when one of the world's most powerful companies does it? Do they get a free pass because they are governed by shareholders and make a lot of money? I can see how it's not censorship if Bob does not want people to post things he disagrees with on his cat picture forum with 200 users. When a few massive companies effectively control the possibility of reaching 95ish% of the audience on the Internet, it's censorship in the very worst sense of the word, and I don't see how it's possible to support it without being an unequivocal opponent of free speech.
But does everyone agree on what hate speech is? That's the danger. You can just claim any opinion you don't like is hate speech. You can say endorsing a particular candidate is hate speech, and those people can justifiably be censored, their views declared invalid (and in some places, they can justifiably be killed).
It was once considered offensive, in many places a crime, to say homosexuality is morally okay or that the Bible should be translated into German and English or to say God doesn't exist.
There is no distinction between "free speech" and "hate speech," because making one requires you to qualify the former. There are exceptions in many countries, but they are for very specific things: child abuse and advocating specific violence against individuals.
Well a bunch of people are running around now saying speech is violence...so in the not too distant future we might be saying someone was murdered by words.
It is interesting that you say that. In English, we do use phrases like "X was destroyed by Z" (I forget the exact idiom, but kids seem to be using it -- god I feel old), where no actual destruction beyond a verbal attack took place.
I know you were referring to something else, but it got me thinking that we are already using the phrase. Our legal system just does not allow a lot of 'word damage' to be adjudicated.
I'm not gonna lie: when I was a child decades ago, it was well known, even among children's books of the time, that that line is a load of horse shit. There are tons of books where that exact phrase is used to show that ignoring verbal abuse is wrong and emotionally damaging.
There have always been limitations on freedom of speech, including speech that incites violence. So your example, while deliberately hyperbolic (I don't think anyone would say that words literally murder people), has always been a normal thing.
It's not hyperbolic at all, or have you not seen the "Silence is Violence" rhetoric everywhere? It could literally come from Orwell's world of "War is Peace, Freedom is Slavery, Ignorance is Strength"
The book, The Coddling of the American Mind, does a great job of showing how the goalposts for what is and isn't violent have been moved considerably in the past few years in academic circles.
Finally, violence is okay, so long as it's against the "wrong people," like the professor who was put on probation for assaulting an opposing party member with a bicycle lock, or the guy in Charlottesville who was fined $1 for assault.
> It's not hyperbolic at all, or have you not seen the "Silence is Violence" rhetoric everywhere? It could literally come from Orwell's world of "War is Peace, Freedom is Slavery, Ignorance is Strength"
I’m aware of the “silence is violence” slogan. It means that inaction in the face of injustice is tacit support for the status quo. It doesn’t literally mean, for example, that all people are being violent while they are sleeping, or that people who are unable to speak are being violent. I’m sure there are some people who use the slogan in preposterous ways, but that’s true of all slogans. You’re looking into this way more than necessary. There’s a pretty clear reasonable interpretation of the slogan if you’re willing to look for that interpretation in good faith.
That interpretation is entirely too generous. The expression "Silence is Violence" is explicitly intended to compel speech, and its clear meaning is that if you don't speak, you are contributing to the violence against minorities.
This is not an extreme example. The expression has always been used (at least in the current climate) to mean, you agree with us, verbally and visibly and loudly, or we attack you.
Edit: If you think the above example is not an example of what "silence is violence" means, by all means, explain why rather than just flyby downvoting.
That example is a crowd intimidating people with the intent to compel speech, of course, and they’re using the slogan “silence is violence.” But those are two different things. You could pick any slogan you want and have a mob recite it while intimidating people into agreeing. That’s not an indictment of the slogan.
It's not just a slogan. That is the actual end result of such an ideology.
Silence is not Violence. Silence is the opposite of violence. Silence is stopping, thinking, looking at all the evidence, carefully evaluating, and coming up with a sound decision.
This slogan says: "Be outraged immediately without knowing any real facts about the situation"
It's literally DoubleSpeak. You are literally, right now, using DoubleThink.
Silence is not the opposite of violence. Peace is the opposite of violence.
I interpret the quote "silence is violence" to mean by not speaking out against violence, you implicitly support or contribute to it. People may disagree if this is true, but it certainly doesn't feel Orwellian.
In Germany, instead of the "silence is violence" slogan, people often use the famous Niemöller quote/poem, but I have always understood the slogan to express the same sentiment.
First they came for the socialists, and I did not speak out—
Because I was not a socialist.
Then they came for the trade unionists, and I did not speak out—
Because I was not a trade unionist.
Then they came for the Jews, and I did not speak out—
Because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
That would start the discussion of "when does an example become the standard" which I don't really want to go into. Suffice it to say I do not watch the news, I very rarely visit Twitter and do not follow anyone, and that is the only way I have ever seen that expression used - in the news, on Medium, on FB, on anywhere, when I've come across it. "Agree with us or you are violent."
I don't think there's a generous way to interpret that expression. Silence is de facto not violence. Violence requires physical action.
> Suffice it to say I do not watch the news, I very rarely visit Twitter and do not follow anyone, and that is the only way I have ever seen that expression used - in the news, on Medium, on FB, on anywhere, when I've come across it. "Agree with us or you are violent."
Have you Googled the term? Apart from the first page or so being dominated by that very recent event of the crowd intimidating people and many other people conflating that event with that slogan, you'll find plenty of articles about what it means: that choosing to not speak out about an issue helps support the status quo. In fact, I've generally seen it used to try to persuade people who don't want to support the status quo that staying quiet or trying to "not be political" is in fact supporting the status quo.
> that choosing to not speak out about an issue helps support the status quo
I mean that's just fine, and a perfectly fine point to make - and one with which in fact I agree; I have railed against police and prosecutors' offices for years, having been on the ass-end of their horror myself.
But if that's what one means to say, then say that; because the word 'violence' has a specific meaning not captured by "don't support the status quo".
This is a long way of saying I generally don't like slogans :/
For example, some say that meat is murder. I don't think we should be outlawing meat, and thus in the eyes of the ones making such a statement, I'm supporting some forms of murder remaining legal.
But you aren't concluding, because people disagree on precisely what qualifies as "murder," that there should be no laws against murder. That is the argument proposed in the earlier comment about why there should not be laws against hate speech.
No, there's a set of statutes that lay out what murder looks like, and ultimately it is up to a jury of your peers to determine if what you did satisfies the criterion. That's in fact exactly why the jury system was invented, because reasonable people can disagree, so the assumption becomes that "if a reasonable plurality of people DO agree, there's a good chance it is a good enough standard by which to act."
The subject of murder is not an appropriate analogy here, really.
Why is that not perfectly analogous? The law can describe what is and isn’t hate speech, and courts and juries can decide individual cases when necessary. This is the same for all criminal laws. The fact that not all people will agree what is and isn’t a violation of a given law at a given time is simply not a valid argument for why a given law shouldn’t exist.
That's not the question. The question is whether there should be any laws against murder, given that people disagree on precisely what constitutes murder.
It's a question of what is meant by the term censorship. In the strictest sense, moderation and censorship are very often the same thing. If for instance, I post something terrible in a comment on here, and the administration of HN deletes that comment, then that's censorship.
However, when most people talk about censorship they're using it not in the strict sense, but rather as shorthand for someone violating their First Amendment rights. That is really only a violation when it's a government entity doing it, although people don't typically differentiate between the government and any large organization, which is legally allowed to censor you on its own platform or property.
There's a larger discussion that needs to happen with regards to censorship. There are two extremes at play here, on the one hand there's the absolute freedom stance of literally nothing censored (only example I can think of for this is maybe the dark web, but really everyone censors if only a little), even shouting fire in a crowded theater or posting child pornography. On the other extreme is the absolute censorship of someplace like China, where only permitted thoughts and expressions can be posted. The US and most of the rest of the world tends to fall somewhere in the middle.
The big struggle right now is that everyone has recognized that there's clearly some kind of problem. We're seeing unprecedented levels of misinformation, and frankly a weaponization of social media both for profit and for international politics. I don't know that anyone has a good solution for how to address that problem, but the pendulum seems to be swinging towards a more censorship-focused response.
> other extreme is the absolute censorship of someplace like China, where only permitted thoughts and expressions can be posted. The US and most of the rest of the world tends to fall somewhere in the middle.
It's like other countries only exist as rhetorical devices for most of HN. If you actually used the fediverse you'd see that there are plenty of Chinese users on it criticizing the state. It's the Western fediverse users being censored for wrongthink this time. Even the creator of Mastodon straight up doesn't believe in free speech with respect to certain far-right beliefs.
> Of course, everyone agrees that hate speech is bad
As an extreme example: Do you really think the KKK believe hate speech is bad? Even if they do agree, do you think their definition looks anything like your own?
I find it hard to believe that someone who has lived through the last four years can say with a straight face that everyone agrees hate speech is bad. One would think the last US election cycle would have gone differently if that premise were true.
Google, regardless of its size, is still private, and should be allowed to host whom it wants or doesn't, the same as you shouldn't be forced to host visitors you don't want.
Free speech means there shouldn't be a law by a government punishing the expression of ideas or opinions.
Citizens and companies should be allowed to host, or not host, whomever they want.
Anyway, it's meaningless to believe hate speech is bad, because hate speech is an undefined term. It just means something someone, somewhere, would like to punish someone else for saying.
Do you consider it censorship if a huge Internet/media company removes illegal content like child pornography, explicit calls to "imminent lawless action," phishing/fraud attempts, explicit misinformation (like false claims that an election date has been moved), or content that goes against their own community guidelines (pornography, violence, etc.)? Do you consider those things censorship or opposition to free speech?
You're mixing up two different things: sites removing illegal content because they're mandated to do so, and sites removing legal content because they choose to do so.
> People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government.
We have two things to unpack here. First, hate speech. What is it? Who gets to decide what the word means and what is their procedure for deciding? Is the definition stable or fluid (or even very fluid)? Is hate speech universally wrong, or only wrong when issuing forth from certain speakers? If we all agree that it's wrong, then why are people engaging in it, even unintentionally?
Second, censorship. Is self-censorship not censorship? Why must the state be involved in order to censor? Were TV networks that for decades voluntarily forbade their programming from portraying homosexuals being censored, or not? What is unique about state authority versus corporate authority as it relates to censorship?
"Hate speech is bad" is a very abstract statement. The sentence conveys almost no actual concrete meaning. It seems like a rational or sensible statement, but it delegates almost all of the actual work to feelings and emotions, and highly subjective ones at that. I don't find "wanna grab a cup of coffee" terribly hateful, but apparently some people do.
I get the overall sentiment of your post, and I think I mostly agree. Nevertheless, the way we stop this nonsense is to say at the beginning that it is an abstraction over extremely subjective feelings and emotion, and thus has no basis other than eventual mob rule authoritarianism.
That’s a non sequitur. What do you mean “too?” What does this have to do with the claim that “it’s not censorship if it’s not mandated by the government”?
I suspect they thought you were saying "hate speech is bad" sounds the opposite of sensible, when really you were referring to the part following the 'and'.
This is what I believe is the goal of placing restrictions on people based on "mental health". It's so open-ended and not easily verifiable that it becomes a sliding scale.
If I'm understanding you right, the idea is that some harms, physical ones, are fair game for the law to cover, but other harms (mental ones) or a collection of boundary cases are less (if at all) within the purview of legislation.
I think there's a good conversation to be had as to what in particular makes physical harms so special as compared to others, and how existing law in every country (including the US) can constitutionally include some non-physical harms within its legislation (such as laws against sending threatening letters, or child pornography law, or fraud).
>Such harms can be reliably detected, with stringent enough criteria.
Mental harms, in many cases, can also be detected by competent professionals; besides that, it is entirely possible for physical harms to heal and for supporting evidence of their infliction to be used to convict. Further, many physical harms depend at least partially on the victim's characteristics or situation; a concert pianist is arguably harmed more by someone cutting off his finger than a schoolteacher would be, for instance. Many physical harms that are rightfully legislated against often require the testimony of the victim for the case to succeed. For a wide class of 'mental harms' it is accurate to say that they are indeed physiological responses - from PTSD to lethargy and insomnia. This is in contrast to the caricature that mental harms are necessarily merely 'hurt feelings'.
I'm also not convinced that the difficulty of proving mental harms, or their sometimes nebulous nature, should necessarily rule out such lawmaking. At the very least, the minimum standard for proving such harm should be set out by the legislators or judiciary, if the standard of evidence is the roadblock to legislation.
It's also worth remembering that we're talking about harms here, not mere hurts. Harms are much harder to fabricate than hurts are.
It seems as though you're invoking a slippery slope fallacy; it's possible to say exactly the same about doctors working for the state who minimize or trivialize the examination of physical harm on dissidents. The fact that expert testimony can be bought off or compelled does not preclude expert testimony from being an important consideration in general. The opioid crisis, for instance, has shown there are many incompetent doctors, but I doubt you'd refuse the testimony of a doctor to help your case when you are injured by someone else.
You can prove physical harm beyond a reasonable doubt. Mental harm is frequently concocted as a bullying tactic, e.g. the recent NY Times editorial controversy where employees said running an editorial they disagreed with made them "feel unsafe." https://www.npr.org/2020/06/08/871817721/head-of-new-york-ti...
The article you link did not mention the employees saying running the editorial made them "feel unsafe". Neither the word safe nor unsafe appears in the article. It says the article "reportedly elicited strong objections" from the staff.
I don't see this as an argument against such legislation; consider that many physical crimes are also hard (or even impossible) to prove beyond a reasonable doubt, considered case-by-case. Rape very often qualifies here, as does the mens rea of various other crimes, which may rely upon testimony. Both actus reus and mens rea are required for a conviction, and while the actus reus may be easier to prove (but again, in many cases not beyond a reasonable doubt), we do not abolish the role of intention in the justice system simply because it's hard to prove.
Accusations of physical harm can also be concocted as bullying tactics, in which the harm was either self-inflicted or inflicted by somebody else. Such cases can be thrown out due to insufficient evidence. I see no reason why the same cannot be said for a subset of mental harms, in which there are equivalent doctors available to use their expertise to judge the harm.
Some problems don't have easy, simple solutions. Any answer will have some edge cases.
If a schizophrenic parent has in the past harmed someone, should a court ignore this when determining custody? It is unfair either way. If you err on being too lenient, some people will be harmed. If you err on being too stringent, some people will be harmed.
Complex problem cannot be solved with ideology and maxims. All solutions will fail some people sometimes.
> People agree to very sensible things. Like hate speech is bad and it’s not censorship if it’s not mandated by the government.
I'm paraphrasing what was here a few days ago:
Our banking partner is uncomfortable that the realistic sex toys modeled after magical creatures have the colors that strongly represent human organs. You will either have to change the colors or we will not be able to continue providing you with our services.
To be fair, they didn't claim that people don't also agree to very stupid and malicious things, and in fact rather implied that that's the likely result of supposedly sensible starting points.
And that's why I'm incredibly cynical about politicians and activists who use amorphous political terms like "hate speech". It eventually becomes a club wielded by whoever is making the rules of today's Calvinball game.
I love that aphorism. Unintended consequences. We should teach unintended consequences in grade school, high school, and have advanced degrees in it. How to see them before they explode, how to mediate them, and how to fix them once they're running at full steam.
Lately I've been imagining it along with the slowly boiling frog story and the crab-mentality too. As in some people can't tell we're headed to hell because it's coming so slowly, and some people will actively stop others from escaping hell or trying to fix the situation.
There's a hypothesis I came up with awhile ago and it seems to hold pretty true. If we talk about problems as O(n) where n is the causation distance[0] we've solved the vast majority of O(1) and O(0) problems. It makes sense that biologically we would be primed to think in this way because they are decent approximate solutions for small groups. But the world we live in now is much more complex and many events are coupled and the low order approximations aren't good solutions. The problem I see is that people are treating O(5) problems like they are O(1). As a society we discuss things in this way instead of trying to understand the complexity, nuance, and coupling that exists in many of our modern problems. A good example of this is global warming. People treat it as "if we switch to renewables then we've solved global warming" when reality is substantially more complicated. But I don't know how to get people to realize problems are higher order problems and that the first order approximation isn't a reasonable solution.
[0] So O(0) means x causes itself. O(1) is y causes x. O(n) is n causes y causes ... causes x. This is just a simplified framework and not meant to be taken too seriously.
I agree, I've been thinking along these lines for a while too; thank you for phrasing it so clearly.
My current thinking led me to conclude that we don't have sufficiently good tools[0] for modelling O(n) problems with n > 2. Particularly when (what your simplification doesn't capture) there are feedback loops involved.
Take this O(2) problem: x causes more y, y causes more z, z causes less y but much more x. Or in a pictorial form:
You can't just think your way through that problem, you have to model it - estimate coefficients (even if qualitatively), account for assumptions, and simulate the dynamic behavior.
I argue that we lack both mental and technological tools to cope with this.
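To make "you have to model it" concrete, here is a minimal discrete-time sketch in Python of the three-variable loop described above (x feeds y, y feeds z, z suppresses y but strongly feeds x). All coefficients are invented purely for illustration; the point is that the trajectory depends on them, which is exactly why qualitative reasoning alone can't settle such questions.

```python
# Discrete-time sketch of the loop: x causes more y, y causes more z,
# z causes less y but much more x. Coefficients a, b, c, d are
# made-up illustrative values, not estimates of anything real.

def simulate(steps=20, a=0.10, b=0.10, c=0.30, d=0.50):
    x, y, z = 1.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        x, y, z = (
            x + d * z,          # z causes much more x
            y + a * x - c * z,  # x causes more y; z causes less y
            z + b * y,          # y causes more z
        )
        history.append((x, y, z))
    return history
```

Even this toy version shows the behavior hinging on the parameters: set d = 0 and the feedback into x is severed, so x stays flat, while with the defaults the y-z chain feeds back and perturbs x away from its starting value.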
Speaking of global warming, a year ago I presented this problem: https://news.ycombinator.com/item?id=20480438 - "Will increase in coal exports of Poland increase Poland's CO₂ footprint?" Yes? No? How badly?
The question is at least this complicated:
  Coal exports
       ^
       | [provides Z coal to]
       |
       |        [needs α*X = A kWh for coal]
  Mining coal <-----------------------------\
       |                                    |
       | [provides X coal to]               |
       v                                    |
  Coal power plants                         |
       |    |                               |
       |    | [γ*X = Y kWh burning coal]    |
       |    v                               |
       |   Electricity ---------------------/
       |
       | [burned coal into β*X = N kg of CO₂]
       v
  CO₂ emissions
(Presented this way it not only tells you that, ceteris paribus, it will, but roughly by how much and what are the parameters that can be tweaked to mitigate it.)
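For what it's worth, that model is small enough to run. Below is a toy Python version; the numeric coefficient values are invented placeholders (the diagram only names α, β, γ as parameters), but the structural claim survives any choice of them: the mining-electricity loop is a geometric series with a closed-form fixed point, and the footprint grows with exports.

```python
# Toy version of the coal-export loop above. Coefficient values are
# made-up placeholders, not real data:
#   alpha: kWh of electricity needed to mine one tonne of coal
#   gamma: kWh of electricity produced by burning one tonne of coal
#   beta:  kg of CO2 emitted by burning one tonne of coal

def export_footprint(Z, alpha=50.0, gamma=2000.0, beta=2400.0):
    """Extra CO2 (kg) caused by exporting Z tonnes of coal, counting
    the coal burned to power the mining of that coal (and the coal
    burned to power mining *that* coal, and so on)."""
    assert alpha < gamma, "mining must cost less energy than the coal yields"
    # Fixed point of M = Z + (alpha/gamma) * M  (total coal mined):
    mined = Z / (1.0 - alpha / gamma)
    burned_for_mining = (alpha / gamma) * mined
    return beta * burned_for_mining

def export_footprint_iterative(Z, alpha=50.0, gamma=2000.0, beta=2400.0):
    """Same number, reached by iterating the feedback loop directly."""
    mined = Z
    for _ in range(100):
        mined = Z + (alpha / gamma) * mined
    return beta * (alpha / gamma) * mined
```

The two functions agree, which is the point: the feedback in the diagram converges, so the model answers both "whether" (yes, the footprint grows with Z) and "by how much" (as a function of the tweakable parameters α and γ).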
Why aren't we talking about climate change in these terms with general public? Why aren't feedback loops taught in school?
--
[0] - Or, if they exist, they aren't sufficiently well known outside some think tanks or some random academic papers.
Your ascii art is much better than mine and I'm not going to attempt it, but I agree with everything that you've said except for
> I argue that we lack both mental and technological tools to cope with this.
I do think we have the tools to solve these issues. I do not think the mental tools are in the hands of the average person (likely not even in most above-average people, because the barrier to entry is exceedingly high, and trying to model any problem like this is mentally exhausting and thus never becomes second nature). Many of the subjects broached here aren't brought up until graduate studies in STEM fields, and even then not always. An O(aleph_n) problem is intractable, but clearly O(10) isn't. We should be arguing about what order of approximation is "good enough", but ignoring all the problems that arise is missing a lot of fundamental problem solving. Good for a first go, but you don't stop there. I think this comes down to people not understanding the iterative process: 0) Create an idea. 1) Check for validity. 2) Attack and tear it down. 3) If something remains, rebuild and goto 2, else goto 0. I find people stop at 1 on their own ideas but jump to 2 (and don't allow for 3) for others' ideas.
> Why aren't feedback loops taught in school?
I think 3 other things should be discussed as well: dynamic problems (people often reduce things to static ones and try to turn positive-sum games into zero-sum ones; we could say the TeMPOraL component), probabilistic problems, and most importantly: an optimal solution does not equate to everyone being happy (or really anyone). Or to quote Picard:
> It is possible to commit no mistakes and still lose. That is not a weakness. That is life.
The last part I think is extremely important but hard to teach.
(I should also mention that I do enjoy most of the comments you provide to HN)
> I think 3 other things should be discussed as well.
Strongly agreed with all three.
> Dynamic problems (people often reduce things to static and try to turn positive sum games into zero sum.
That's what I implicitly meant by talking (again and again) about feedback loops; problems with such loops are a subset of dynamic problems, and one very frequently seen in the world. But you've rightfully pointed out the superset. I think most people, like you say, try to turn everything into a static problem as soon as possible, so they can have a conclusive and time-invariant opinion on it. But it's not the proper way to think about the world[0]!
(I only disagree with the "try to turn positive sum games into zero sum"; zero-sum games also require perceiving the feedback loops involved. And then there are negative-sum games.)
> probabilistic problems
Yup. Basic probability is taught to schoolchildren, but as a toy (or just another math oddity) rather than a tool for perceiving the world.
(Thank you for the kind words :).)
--
[0] - Unless your problem has a fixed point that you can point out.
> (I only disagree with the "try to turn positive sum games into zero sum"; zero-sum games also require perceiving the feedback loops involved. And then there are negative-sum games.)
This is a snipe I often make at people talking about economics (I do agree on the lack of mention of negative-sum games, but they also tend to be less common, at least in what people argue about). Like, the whole point of the economic game is to create new value where it didn't previously exist (tangent).
> Yup. Basic probability is taught to schoolchildren, but as a toy (or just another math oddity) rather than a tool for perceiving the world.
I think this is where we get a lot of "I'm not good at math" and "what is it useful for" discussion. Ironically everyone hates word problems, but at the heart of it that's what it is about.
Great point. Those problems are really hard to reason about, partly because without specific knowledge of the coefficients, all you can expect a well-reasoned person to conclude is that "it can go either way". And even knowing the data, most practical problems in this category would take either computer modeling or simplifying assumptions to really draw conclusions about.
Worse, someone motivated to shape the story one way or the other can create a just-so story where they emphasize only one feedback path or the other, depending on what conclusion they want their audience to draw.
I think the best antidote, although by no means a cure, is to teach clear and specific examples early on so that everyone at least can have a mental category for this class of problem, if not the tools to work through them.
There's a danger of bad reasoning being involved, but I argue that "well-reasoned people" and just-so stories are problems either way. But I think that attaching a specific model to a problem grounds the conversation in reality.
Taking the coal exports example I pasted, the model presented structurally tells you that the carbon footprint is going to grow with exports. We can haggle about "how much", but - under this model - not about "whether". You can tweak the parameters to mitigate the impact, you can extend the model with extra components and tweak those to cancel out the impact (and that automatically generates reasonable solution candidates for you!). Or, you can flat out say that the model doesn't simplify the reality correctly, and propose an alternative one, and we can then discuss the new model.
The good thing is, at every point in the above considerations you're dealing with models and reality and somewhat strict reasoning, instead of endlessly bickering about whether A causes B or the other way around, or whether arguing A causes B is a slippery slope, or whatnot.
I strongly agree with teaching examples, both real (serious) ones and toy ones, to teach this kind of thinking.
Jevons paradox is indeed great to dig into, and I suppose it offers some sort of counterexample to what I'm talking about. The nature of the phenomenon is a feedback loop, and whether it turns out good or bad depends on the parameters (the increased use can reduce the value of the intervention, cancel it out, or even make it worse than doing nothing). But from what I hear, people sometimes pick one of the possible outcomes and use it as a thought-stopper (e.g. "we shouldn't do X because obviously Jevons paradox will make things worse!").
> [0] - Or, if they exist, they aren't sufficiently well known outside some think tanks or some random academic papers.
Are you familiar with Judea Pearl's work on graphical analysis of causal problems? If not, he'd probably interest you. While he mostly falls in the category of "random academic papers" (and academic books), he has also co-authored a very readable (and enjoyable) popular science book. A review of that book is here: http://bostonreview.net/science-nature/tim-maudlin-why-world. And a more technical overview of his graphical approach is here: https://www.timlrx.com/2018/08/09/applications-of-dags-in-ca....
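To give a flavor of the graphical approach: a causal diagram is just a directed acyclic graph, and a question like "through which routes does A affect B?" becomes path enumeration. Below is a toy Python sketch (my own naming, not Pearl's notation) encoding the coal example from upthread; the feedback edge is unrolled one step in time, since causal DAGs must by definition be acyclic.

```python
# Hypothetical encoding of the coal example as a causal DAG.
# The feedback edge (electricity -> mining) is unrolled into a
# second time step, because causal DAGs may not contain cycles.
COAL_DAG = {
    "exports":        ["mining_t0"],
    "mining_t0":      ["plants_t0"],
    "plants_t0":      ["electricity_t0", "co2"],
    "electricity_t0": ["mining_t1"],   # the unrolled feedback edge
    "mining_t1":      ["plants_t1"],
    "plants_t1":      ["co2"],
    "co2":            [],
}

def causal_paths(graph, cause, effect):
    """All directed paths from `cause` to `effect` (graph must be acyclic)."""
    if cause == effect:
        return [[effect]]
    return [[cause] + tail
            for nxt in graph[cause]
            for tail in causal_paths(graph, nxt, effect)]
```

Here causal_paths(COAL_DAG, "exports", "co2") finds two routes: the direct one through the power plants, and the longer one through the mining-electricity feedback. Enumerating them is what makes "A causes B, but through which mechanism and how strongly?" a tractable question rather than endless bickering.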
Any issue about which there's a cultural movement going on can serve as a handy pretext for measures that consolidate one's own power. It kind of seems obvious to point out, but nonetheless let's continue to be open to the possibility that such consolidation is not mere coincidence, or more to the point, that the measure is not even well-intentioned and in fact has nothing do to with the ostensible issue/reason (in this case, hate speech). The most cynical of power grabs are usually cloaked in the most noble of pretenses. That's how you make the unpalatable, palatable.
I recently watched an excellent speech concerning freedom of speech and freedom of protest by Rowan Atkinson, and feel it's very appropriate to share here: https://www.youtube.com/watch?v=BiqDZlAZygU
Censorship always starts with unpopular speech. Historically it's been raunchy porn, religious blasphemy, and direct opposition to the state. Hate speech seems to be a new one that works well to sell censorship to liberals (who should know better).
I don't watch Fox News, but even I know that Fox invites Democrats on to talk about their opinions. Here is an interview where a BLM leader rants for 5 minutes: https://youtu.be/FTjBJiXalHU?t=59
This is like having a bulletin board in your store and not being able to take anything down from it.
Yes, Google and Apple are big. You can say well, it’s different because in this world there’s only two boards for the entire country, that’s true! But it’s not a censorship problem, it’s an antitrust problem.
You hit the nail on the head: if they were just "more alphabets in the soup", people wouldn't have much of a leg to stand on; but because the internet has no analog to an actual PUBLIC square, when you have 1-3 companies that control access to what 95% of the internet sees, you have a problem - there's no alternative and no public square on which to register your complaint.
the real problem here is that we allow google, apple or facebook to control public discourse. we let companies decide what we are allowed to talk about.
regardless of whether we agree with what's being removed or not, this can't be healthy.
i am not american, so my interpretation may be off, but here is how i understand the problem:
many people would like hatespeech to go away. yet the US constitution prohibits government censorship, so the government can't do much about it. instead they rely on companies like google and facebook to do the work for them.
the companies are also compelled by public pressure to do what the government can't.
contrast that to germany, where hatespeech like the promotion of nazi ideas is outright illegal.
while i haven't verified this, it puts less pressure on companies to censor anything that isn't mandated by law.
public demands for the control of speech can also more easily be translated into law, so that the public doesn't need to resort to pressuring companies. on the contrary, they expect the government to protect them from companies that act in bad faith.
it is hard to say which system is better. if there were many small companies each making different decisions about public discourse, then things would be fine.
the problem is not so much the removal of outright hatespeech, but the more subtle influence in for example what is allowed to be posted about the covid epidemic, or other sensitive topics like political opinions, fact checking and all that.
as it stands, i prefer that decisions about what speech is allowed are made by law, so that we can use legal means to combat abuse.
many places have cultures, and also laws, that have for decades worked perfectly fine reining in the very worst forms of hate speech (say holocaust denial in my country) while not descending into the sort of activism that starts to get silly.
There's no automatic mechanism that turns sensible rules into senseless ones, and it need not happen with sensible hate speech rules either.
With cases like Google's Play Store the issue seems more concrete. On the one hand it's the overwhelming power and lack of due process that large firms have over software. Decentralise this, put authority into the hands of people who know their networks, and the situation will improve. Secondly, it also seems to be a very activist employee base at companies like Google that's gone somewhat overboard. Again, an accountability issue. If these things were decided publicly, they would moderate to reasonable levels.
Historical revisionism as a subgroup of hate speech is a prime example of the slippery slope and of shifting definitions. I would put "fake news" in the same bucket as hate speech.
The problem is the political spectrum of journalists is wildly biased compared to the average citizen. This also apparently shows up with censorship of phone apps.
sorry I have no idea what you're trying to say. Holocaust denial is generally considered to be both historical revisionism and hate speech, the former being a tool for the latter. Is this just semantics?
Yes, it is semantics, all right. Without taking the debate to the Holocaust: hate speech is no longer just incitement to hatred (quite broad) or incitement to crime (quite specific) against a group. In that sense it is like "fake news", a term Trump comically turned against its creators.
I feel that when it comes to Google it's not about whether it is hate speech or not, but who controls it. I.e. Zuckerberg is fine although there are multiple long-lasting Facebook groups that have been used to incite crimes, but Aaron Swartz would not be (today). It is quite amusing that Facebook is not shut down in Europe even though many European countries would shut down any local company as lax and arbitrary with moderation as Facebook.
What are you doing? Are you trying to ban speech you don't like? What body determines what is "fake news" and "hate speech"? It can't be done, which is why the only sane policy is free speech.
We have laws against violence, and it's a very clear line.
And even that is destructive. The slippery slope of "speech I don't like" has a tendency to ever expand, not completely unlike, say, government. It is a very human tendency. This is the main reason even small encroachments should be pointed out.
I think we are in agreement on Google's case in particular being a little more straightforward.
Your suggestion sounds nice in principle, but how would you propose to create mandated democratic control of a corporate entity?
The only mechanism I can think of would be to nationalize the corporate entity and have the folks controlling it hold elected positions. That seems like a pretty extreme response, though, to a corporate entity becoming successful and growing enough that it influences the zeitgeist.
That does not sound extreme at all to me. It would not need to be controlled by people in elected positions - it could just be mandated that employees have to follow a specific charter and be as neutral as possible. A bit like many national news services, like the BBC. Considering how much influence Alphabet's products (especially Search) have gained over everyone's lives, I think something like that is much overdue and the only reasonable solution. That, or extreme regulation. At the very least, the search algorithm should be made fully public.
Section 230 was about child pornography and became used as a safe harbor for anything.
I don't usually agree with the Trump admin, but they do have a point there.
In general our thinking about freedom of speech is itself idiosyncratic in the same way. Human FREEDOM means doing what you want. It’s not the same as a right to a megaphone maintained by thousands of employees and the infrastructure of large corporations, giving you a platform to say anything unfiltered to 5 million people at once. I would argue that such interpretations of the First Amendment have been detrimental to society. Speech on giant platforms should be vetted like on Wikipedia’s Talk Page, where mutually distrusting people engage in responsible fact checking BEFORE the crowd sees the main page with these claims.
But hey, I also argue similarly that the Supreme Court’s Heller decision reduced the Well Regulated Militia clause to irrelevancy, so now anyone can have a gun whether or not they are part of any well regulated organization. No checks on individual action that can affect others.
Now we reap what we sow as a society. Yes, FREEDOM of speech is important, but what we call freedom today has greatly expanded, even to unlimited political donations by super PACs and so on. Again a Supreme Court decision, Citizens United, where expanding freedoms harms democracy. A win for ideological purity I guess, but is society better off?
PS: before someone objects with “who will be the factcheckers/watchers?” I will say it will be self selected and self policed like on Wikipedia; as long as there is a healthy mix of views, it’s better than one wacko with a megaphone. Who does this celebrity culture help? It further divides us. And that’s why we can’t have nice things!
The CDA as a whole was an attempt to regulate indecency and obscenity on the internet. Think "pornography that might be seen by minors"[0]. (Remember, this was the 90s.) Most of it (with the exception of section 230) was struck down in court for obvious first-amendment reasons. Section 230 was added later during the process by the House, after the bill had passed through the Senate, and was more about defamation than anything else[1].
Life, liberty and the pursuit of happiness was chosen for a reason in contrast to the term Locke used (estate): individual freedom was supposed to be balanced with the interests of the society that allowed them to exist and be pursued in the first place, which are what courts consider fundamental rights.
That's the "slippery slope" fallacy. There's a knee-jerk reaction in America that any censorship is bad and will somehow always lead to more censorship. But there are places where censorship has been implemented in small ways and it hasn't led to some sort of free speech apocalypse. In Germany or Israel, for instance, it's illegal to deny the Holocaust. That's a pretty sensible limit considering their history. All these years later, they're still free and functioning democracies.
Kindly take this as a devil's advocate post before you reflexively downvote.
It looks like letting people say anything they want on major social media platforms is only having one major positive effect: a few advertising companies are becoming very rich.
The negative effects include:
- incited violence (gang-oriented gun crime in Chicago is often fanned by social media posts for example)
- bad medical decisions (vaccine/COVID misinformation)
- cancel culture/political manipulation (people taking other people's posts as facts when they are not)
I would like to uphold the principle of free speech, and to force social media providers to be free speech agents even though they are private companies, but it's starting to get hard to defend. I am losing faith that strict adherence to free speech is going to result in a smarter, happier humanity. It might be better if fewer people spoke their minds.
Which powerful, privileged people should get to decide what we are allowed to hear about? When the power is inevitably abused, how can we address that abuse when we may not be allowed to know about it?
Is it even necessary for free speech to directly result in a happier humanity? What if it simply preserves the conditions that we need for progress, or merely keeps us from sliding backwards? Would that be enough to make it worthwhile for you?
> Which powerful, privileged people should get to decide what we are allowed to hear about?
Journalists and news media, bound by the respect and principles of their profession, fulfilled this function in the past. There was a time when the division between reporting and editorials was clearer. We've destroyed the institution of news media without a good replacement; now people treat editorials (people's social media posts) as the equivalent of news.
> Is it even necessary for free speech to directly result in a happier humanity?
If not then what is it worth?
> What if it simply preserves the conditions that we need for progress
I'm simply not seeing how social media after a good 10 years of it is progressing anything other than the profits of its owners.
The respect and principles of the profession and a bucket of horse piss will get you a bucket of horse piss. They aren't worth anything, and certainly not as a check on power.
It is nonsensical to claim social media destroyed the news media. It was already dying before the internet, let alone social media. Blaming social media is a blatant lie from the losers of the old era, who got regularly dunked on by bloggers and forum posters over basic fact checking.
Even early Wikipedia, supposedly "not suitable as a source", clearly did a better job. The news media didn't catch up with the internet until Twitter made it basic enough for them to follow.
Everyone is constantly complaining about private censorship being a slippery slope...
And yet today, despite decades of "censorship" by Facebook and Google, you can see whatever porn you want, snuff films, terrorist propaganda, hate speech, libel/slander spread by instigators like Glenn Beck and Alex Jones. Just not on Google or Facebook.
Different private entities and people have different levels of tolerance. If you want filth, use Gab or 4chan/8chan. If you want forums that are partially moderated, use Facebook/Google/Reddit. If you want forums that are fully moderated, join a private or niche board like HN.
It might be Apple forcing them to do so. For example you can't publish app with porn content. Even if your app is some kind of forum, you're obliged at least to filter out explicit content in the app.
Yes, with the standard settings in place you have to navigate directly to nsfw subreddits, and even then there is an age restriction in place. There are no autofill suggestions or search results, and there is no nsfw content on the /r/all page inside the app.
I'm not deeply versed in Gab's history... But let's hypothetically say that Gab was not created for that purpose, but for precisely the purpose it claims it was created for:
To be an open platform for free speech, no censorship.
Wouldn't it have ended up in the exact same state it is now? Any service that guarantees no censorship is going to have the majority of its userbase be the runoff from other major websites. When voat was created, I 100% believed that they were not attempting to create extremist havens, but their userbase was all the people expelled from reddit for targeted harassment campaigns.
I hate this dynamic. We need a way to break this cycle, because right now it's actively killing competitors to existing social networks.
I agree with your point - it is hard to tell the difference generally, and it is an important point to remember, but the behaviour of the company itself shows that it is not an issue this time.
Gab bans people openly and credulously discussing marxism. I have experimented with and experienced this directly. So, it fails my litmus test for "an uncensored platform."
And it's a bit comical, because Gab as a community experience is much smaller (from my perspective) than even weird sites like minds or funky social blockchain plays. Why they felt the need to ban discussions of marxism or a general strike is beyond me.
Fair. By my own admission, I don't know much about gab.
I think the first time I ever heard about it was when Firefox banned Dissenter from their addons. Dissenter to me was a genius idea that has an ugly userbase. I'd love to have a version of Dissenter that isn't populated entirely by bigots.
I think the idea of Dissenter really has some value, you walk along the web for all sorts of reasons, and then up in the corner in your toolbar you see "oh, someone from my community has said something about this". Rather than the social network taking you to a site, the site takes you to the network.
That by itself implies that every URL you visit has to be looked up to see if there's a related discussion.
No way I'd trust any add-on/startup/mega corp to do that. I barely trust Mozilla to keep my history on their servers, and that's only because they only keep the last few months and purge older data.
Because you'd still know everyone who went to a specific site, because they'd be sending you a unique hash. Even if you ignore that, you'd know what clusters of people all use the same sites, how often and when.
That was the intent of the bloom filter. Configured properly, it would eliminate most of the need to endlessly send the server requests like "Hey, have anything for this hash?"
However I suppose that if the site does have comments, then you do have to make requests to the server to get them...
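A client-side Bloom filter like the one described might look roughly like this (a toy sketch; the filter size, hash count, and URLs are all invented for illustration). The server ships the filter to clients, and the client only sends a lookup request when the filter says a page *might* have comments:

```python
import hashlib


class BloomFilter:
    """Toy Bloom filter: lets a client check locally whether a URL
    *might* have comments before revealing interest via a server query."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive num_hashes independent bit positions from sha256.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # False -> definitely no comments: skip the server request entirely.
        # True  -> maybe (small false-positive rate): only now do we query.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))


# Server side: build the filter over pages that have comments.
bf = BloomFilter()
bf.add("https://example.com/article")

# Client side: most URLs you browse never trigger a request at all.
print(bf.might_contain("https://example.com/article"))
print(bf.might_contain("https://example.com/other"))
```

The trade-off is exactly the one raised above: the filter suppresses requests for pages with no comments, but any page that does have comments (or hits a false positive) still costs a server round trip that reveals the hash.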
Still, I believe you're being overly pessimistic here. I think there may be some solutions to this. Maybe not perfect, but better. Let's say our social network "Ascenter" has become corrupt and is looking to blackmail its participants. What about this?:
The design currently requires you to send a sha(URL+salt) to the server to look up comments. This prevents Ascenter from directly knowing what site the comments refer to, but the comments themselves will be a big clue. What if to look at the comments you have to decrypt them using sha(URL+salt2)? Ascenter will have no means to derive this key; it will only be able to determine how many comments have been placed and how large they are. That improves things a bit...
But Ascenter might be able to crawl the web to discover conversations. Particularly for salacious sites if it's looking for blackmail. So... What if the salts are the answer to that? If you had your own set of salts you could use them to create your own private groups, Ascenter would have no way to access that conversation. Or even figure out that they have occurred.
With the presence of public and private salts, what if the browser plugin itself could be configured with a blacklist of sites not to send requests for? You could still have private channel comments, but not public ones. I could see the community pre-generating a blacklist...
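The lookup/decryption split sketched above might look something like this (a toy sketch: the salt names are invented, and the XOR keystream is a stand-in for a real authenticated cipher like ChaCha20-Poly1305). The server only ever sees sha(URL+salt) and opaque ciphertext; it cannot compute the content key without already knowing the URL:

```python
import hashlib

SALT_LOOKUP = b"public-lookup-salt-v1"    # invented; shared by all clients
SALT_CONTENT = b"public-content-salt-v1"  # invented; never sent to the server


def lookup_key(url: str) -> str:
    """What the client sends to 'Ascenter': sha(URL+salt). The server
    learns only this hash, not the URL itself."""
    return hashlib.sha256(SALT_LOOKUP + url.encode()).hexdigest()


def content_key(url: str) -> bytes:
    """sha(URL+salt2), derived client-side. Ascenter cannot derive it
    from the lookup key alone."""
    return hashlib.sha256(SALT_CONTENT + url.encode()).digest()


def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream (sha256 in counter mode), used for both encrypt and
    # decrypt. A real design would use an authenticated cipher instead.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))


url = "https://example.com/some-page"
ciphertext = xor_stream(content_key(url), b"great article, see also ...")
# Server stores (lookup_key(url), ciphertext): it sees counts and sizes only.
plaintext = xor_stream(content_key(url), ciphertext)
print(plaintext)  # b'great article, see also ...'
```

Note this matches the limitation conceded in the thread: Ascenter can still crawl the public web, compute lookup_key for pages it finds, and join conversations that use the public salts. Only private salts keep a group's conversations undiscoverable.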
One last note I'd end on: the level of trust we are expecting here is a far lower bar than the level of trust we give general social networks.
Consider what HackerNews could do if it went rogue like Ascenter. To blackmail you, all they would need to do is go to one of your old comments, rewrite it as something salacious, then blackmail you with it. Comments on HackerNews aren't signed, and aren't encrypted. We're quite vulnerable to them.
edit: sorry for the long post. This was a bit of a stream of consciousness.
TL;DR: The bloom filter limits the risk, and I think there are cryptographic solutions that reduce the level of user exposure to below that of current social networks.
The bloom filter wouldn't stop you from confirming folks gathered around any specific page that has content, and would have a fixed probability of leaking data even if there was no data there.
And that ignores the problem that you're not going to be able to sync the bloom filter in real time, so now you're going to need to have a very merge-friendly design for these annotations.
No, I'm not being pessimistic. This is just a candid analysis of the difficulties of doing this competently. If you'd like to do it incompetently, feel free.
Likely because the intended audience is not people who see it as a viable alternative or something worth even entertaining. The irony of the situation is that someone took the time to make their own platform for like-minded individuals, and people in other spaces who've been known to toe the "you're free to make your own platform" line get extraordinarily bent out of shape when people actually go and do just that.
The hilarious part being that by having the censorship in the first place, and not just letting folks work it out amongst themselves, you just increase the echo chamber factor, which at some point you have to come to terms with, since these people exist in the real world. The very act of technically enforced societal marginalization is in and of itself an "extremism" amplifier and polarization catalyst. What confuses me is why I feel like I'm the only one who regularly brings this up. It's not that hard a realization to reach from first principles, especially if you've spent any time in your life as a social misfit.
Well, you don't see me make fun of Parler as much because they're honest about their intentions. Gab originally claimed they were about "freedom" so it seems pretty fair to me to call them out on moderation that is clearly political.
"Any service that guarantees no censorship is going to have the majority of its userbase be the runoff from other major websites"
Exactly, which is why you are wrong here:
"When voat was created, I 100% believed that they were not attempting to create extremist havens"
It is not as if the major websites are quick to censor their users. If anything they are too cautious and have only banned terrorists after widespread pressure and threats of advertiser boycotts. If you set up a platform that is open to the few people who were so extreme that even Reddit or Twitter banned them, then you are creating an extremist haven, no matter what language you use to describe your intentions.
One could argue that intentions don't really matter, either. "The Purpose Of A System Is What It Does" [1]. If your system ends up being a forum for a certain type of posts, then that's kind of what it is, regardless of what you originally wanted it to be.
This is a useless hypothetical. We know what Gab was and why it was taken down.
If you want to know how a healthy alternative would have turned out, go looking for one. It almost certainly exists, there have been at least a dozen twitter competitors in the past and I know a few people who tried them out. (I can't remember their names, though.)
If they didn't intend it, they were being incredibly naive. If your defining characteristic is that you don't censor things that are banned on Reddit, then your site is only attractive to people who want to do things banned on Reddit.
Yes, any large no-censorship platform for humans will be swamped by Nazis.
You could avoid being large — small, high-trust groups work perfectly fine without censorship.
You could add more moderation (aka censorship), both platform-wide and within communities. Reddit seems to be heading in that direction.
You could avoid being a platform. Some sites are inherently platforms, but does every site need comments?
Or you could genetically engineer humanity into a kinder, better species. This would also be the way to make anarcho-communism work — the economic system with the greatest freedoms, but also the most susceptible to bad actors. The Culture series shows you a glimpse of what this future could be.
I've got to finish going through The Culture series. I thought it was a bold stroke to write a book series about a future where humans are domesticated by their own AI.
I think the "fully automated gay space luxury communism" depicted in the Culture series is the best future we can hope for. I wouldn't mind welcoming new robot overlords if it makes that possible.
>To be an open platform for free speech, no censorship.
Unfortunately this kind of rhetoric is frequently coded language meaning "Hey Nazis, we won't kick you off of our platform for threatening to shoot up a synagogue". I think people should still legally be able to create unmoderated platforms, but the majority of people won't participate in them because they quickly become cesspools of hatred and harassment. Gab is the newer, shinier tech startup version of this, but it's existed before in the forms of 4chan, 8chan, and probably other platforms that I'm not familiar with.
Maybe that's a sign you should pause. Sometimes it is okay to admit you don't know about a subject. It is okay to sit and listen instead of voicing an opinion.
Well... I mean really I was not trying to say Gab was innocent. I really didn't know until others provided some helpful context. I was speaking more to the original topic of this HN submission.
I never know how to handle topic shifts in the conversation trees in Reddit, Hacker News, and others.
Do you speak only to the comment you're responding to? Do you speak as if that comment is in the context of the submission? Do you keep the context as on topic as possible? Do you indicate which one of the three you're doing when you start your comment off? I feel like this is an internet rule I have not sussed out on my own. And I can see it's caused some trouble for others. Sorry.
The line of thinking seems to be: "Gab's target demographic is users that have been banned from the mainstream platforms for hate speech and conspiracies. Therefore by targeting those users, it's 'purpose built' for hate speech and conspiracies.". My question is, can't you apply this to basically anything that's "free speech"? The mechanism seems to be that most people are content with the mainstream platforms, and therefore the only people who go for the "free speech" platforms end up being the people who were banned from the mainstream platforms. By that logic, is tor "purpose built" for criminals, since basically only people with stuff to hide use it? How is it different than gab?
If your site has been overrun by ultra-far-right types for basically its entire existence and you do nothing to mitigate this then you're very clearly complicit.
But you're right, most of the "no moderation, anything goes" online communities tend to be overrun by extremists but it's easy to see why: you only need a minority of very dedicated trolls posting outrageous content 24/7 to ruin a community. It takes time and effort to post insightful content and analysis, meanwhile you can throw shit at the walls at a large scale very easily.
That's why it's pretty obvious to me that if you want to actually have interesting discussions online and a plurality of opinions you need moderation, otherwise the low effort bullies take over.
>If your site has been overrun by ultra-far-right types for basically its entire existence and you do nothing to mitigate this then you're very clearly complicit.
How do you mitigate without running counter to the original idea of "free speech"?
It's not at all clear what "the original idea of 'free speech'" even was. In the US, the wording of the First Amendment is quite vague.
I think people have a mental model that social media sites and apps are like a communication medium. They are a neutral carrier that transmits an idea X from person A to B. The site itself is not "tainted" by the content of X or get involved in the choices of A and B.
But a more accurate model is that they are amplifiers and selectors. The algorithms and ML models at the heart of every social media app often determine who B is. A is casting X out into the aether and the site itself uses its own code to select the set of Bs that will receive it—both who they are and how large that set is. From that perspective, I think it is fair that apps take greater responsibility for the content they host.
Here's an analogy that might help:
Consider a typical print shop. You show up with your pamphlet, pay them some money, and they hand you back a stack of copies. Then you go out and distribute them. The print shop doesn't care what your pamphlet says and I think is free from much moral obligation to care.
Now consider a different print shop. You drop off your pamphlet and give them some money. Lots of other people do. Then the print shop itself decides how many copies to make for each pamphlet. Then it also decides itself which street corners to leave which pamphlets on. That sounds an awful lot to me like they have a lot of responsibility over the content of those pamphlets.
The latter is much closer to how most social media apps behave today.
but does gab use "algorithms and ML" to determine what gets shown? Doesn't it use an upvote/downvote model like hn or reddit? Is a site that uses an "order by upvotes" ranking system closer to the first print shop or the second? What about bulletin boards that rank by last post?
> Doesn't it use an upvote/downvote model like hn or reddit?
Is that really any different? If your print shop counts the user-submitted tallies on a chalkboard to decide which pamphlets to print, the print shop is still choosing to use that rule to decide what to print.
Because then you can't make any sort of public facing site with UGC without being burdened with the responsibility of what's being posted. Come to think of it, the distinction is entirely arbitrary. Run a bulletin board that sorts by last reply? You are responsible for the user content. Run a mailing list that forwards every message to the end user, and the end-user implements the same sort by default? You're off the hook, even though the end result is the same.
> Because then you can't make any sort of public facing site with UGC without being burdened with the responsibility of what's being posted.
Now you've got it.
> Run a bulletin board that sorts by last reply?
There is maybe an argument that your level of responsibility somewhat depends on the complexity of the algorithm you use to decide how much amplification to apply to any given piece of content.
I don't think responsibility is black and white.
> Run a mailing list that forwards every message to the end user, and the end-user implements the same sort by default? You're off the hook, even though the end result is the same.
The end result is the same but the agency is not. The end-user chose to apply that sorting, so they have accepted some of the responsibility for what they consume.
If I shoot someone with a gun, I'm totally responsible. If I give you the gun and you shoot them, you are responsible. Maybe I still bear some responsibility for giving you the gun. But you certainly have taken on more responsibility than you would have if I shot them.
Here's maybe another way to think about it. If you're choosing to run a bulletin board, presumably you're doing so to get something out of it for yourself. Is it fair for you to receive that benefit while taking no responsibility for anything that happens on it?
Is that a net benefit for society? The last thing I want is for google (or other tech giants) to be even more trigger-happy about banning people because they view you as a high risk user. "decentralizing the web" isn't a good excuse, as most people don't have to know how to set up their own hosting, and only shifts the liability from the host to the search engine (because you have to find the content somehow).
>The end-user chose to apply that sorting, so they have accepted some of the responsibility for what they consume.
Don't we already have that? On reddit you can sort by "hot", "new", "rising", "controversial", and "top". On gab you can sort by "hot" and "top". I'm not sure how that would change things, other than forcing yet another modal that users have to click through.
>If you're choosing to run a bulletin board, presumably you're doing so to get something out of it for yourself. Is it fair for you to receive that benefit while taking no responsibility for anything that happens on it?
Not every website has to be a for-profit venture. Many (small) forums run essentially on donations, or are low maintenance side projects attached to a bigger project.
By having more than a middle school understanding of what "free speech" is about. There is no "original idea" of free speech, there never has been, it is a concept that is used to refer to a wide variety of legal frameworks across different times and places. In Germany a person's free speech rights do not include holocaust denial. For most of the history of the United States free speech has been more limited than it is today; it was not all that long ago that we had the "equal time" rule that required media outlets to host both liberal and conservative commentary. You generally do not have a right to organize an insurrection against any government and whining about free speech will not convince anyone otherwise.
>it was not all that long ago that we had the "equal time" rule that required media outlets to host both liberal and conservative commentary
That only ever applied to broadcast media (and maybe only to prime-time TV). Publishers of the written word have never been required by the US government to grant equal time.
>For most of the history of the United States free speech has been more limited than it is today
I don't know what you could mean by that unless you are referring to the fact that before the internet became mainstream, you had to own a printing press or something like that to reach a mass audience.
A century ago in the United States the phrase "shouting fire in a crowded theater" was used in a Supreme Court ruling upholding the censorship of anti-draft activists during World War I, and within living memory the United States had various laws censoring pornographic photos and videos. There was even a time when it was illegal to have the Post Office carry written information about contraception:
In case anyone tries to claim that the founders intended for the most expansive possible understanding of freedom of speech, the fact is that one of the earliest laws passed in the United States was a law that censored criticisms of the Federal government (in an attempt to crack down on foreign misinformation campaigns):
>In case anyone tries to claim that the founders intended for the most expansive possible understanding of freedom of speech, the fact is that one of the earliest laws passed in the United States was a law that censored criticisms of the Federal government (in an attempt to crack down on foreign misinformation campaigns):
I'm not sure whether that proves your point. The wikipedia article says that it was controversial, caused the federalist party to lose the following election, and ultimately expired after 4 years.
The fact that the law was passed by the same men who ratified the constitution says a lot about their concept of freedom of speech, even if it was controversial and short lived. If the founders really meant for free speech to be as expansive as it is today it is hard to see how such a law could have been passed in the first place.
>The fact that the law was passed by the same men who ratified the constitution says a lot about their concept of freedom of speech
You can also argue that it was defeated by the same men who ratified the constitution, and that the "free speech" side ultimately prevailed, therefore they really did mean free speech to be that expansive.
OK, but the topic is hate speech in particular, and there has never been a time when hate speech modulo calls to violent action (and possibly calls on landlords or employers to discriminate) has been unlawful in the US.
>Hate speech in the United States is not regulated due to the robust right to free speech found in the American Constitution. The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment.
Hate speech in the United States is not regulated by any government. Google is considered by the courts to be non-governmental, so the courts will not prevent Google from regulating hate speech on the platforms it owns as Google sees fit.
Just decide for yourself. Literally the first post I get at the moment comes from "QAnon and the Great Awakening" and is in support of that right-wing vigilante shooter in Kenosha, calling the district attorney who's prosecuting him "evil".
The next few posts I see mention either "arresting the dems", some anti-vax stuff and so many crypto-fascist dogwhistles that I'm getting tinnitus.
If you need more proof I'll let you dig into this.
From my perspective we still need other information, do they also ban other apps that are purpose-built for hate speech? Was it just this app because of the controversy over banned Twitter users going to it?
and why does that matter? They later switched to ActivityPub/Mastodon for a bit. They still run a Mastodon fork, but defederated back in May.
Gab was also banned explicitly by URL by many app makers and in many ActivityPub libraries (you can find checks where they hash the instance URL and compare it against known Gab URLs).
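A minimal shell sketch of the hash-based check described above (the domain names and function name here are hypothetical, not taken from any particular library):

```shell
# Hypothetical blocklist-by-hash: ship only hashes of banned domains,
# hash the instance domain at connect time, and refuse on a match.
banned_hash=$(printf '%s' "banned.example" | sha256sum | awk '{print $1}')

check_instance() {
    h=$(printf '%s' "$1" | sha256sum | awk '{print $1}')
    if [ "$h" = "$banned_hash" ]; then
        echo "blocked"
    else
        echo "allowed"
    fi
}

check_instance "banned.example"   # prints "blocked"
check_instance "mastodon.social"  # prints "allowed"
```

Storing only the hash keeps the banned URLs out of the source in plain text, though anyone can still recover which domains are banned by hashing candidates.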
> a purpose-built platform for conspiracies and hate speech
We're in a bizarre world right now where you can label any opinion you don't agree with as hate speech, dehumanize police and call every conservative a literal Nazi and that's all okay now for some reason.
At some point we have to remember that historically, a lot of people who thought they were right, about slavery, homosexuality, war, abortion, polygamy, and other controversial topics, eventually came down on the wrong side of history.
The attack on speech and ideas has never been more profound. If you don't like an idea, you don't have to listen, but people are going to continue to go to the fringes whenever their voices are silenced. That will create more extreme platforms and more extremism, not less.
Ah, the delicious irony of attacking certain types of speech and ideas as "attacks on speech and ideas". What they are doing and what you are doing are no different; it's all just politics. There's not even anything new about what's happening today. How do you think people's minds were changed on all those "controversial" topics you cite? A whole lot of social pressure. You don't get to make lofty statements about free speech, and then turn around and grumble that other people's speech somehow isn't "playing fair".
There is Holocaust denial, there are calls to kill black people, and there are rape fantasies about female politicians and movie stars. There are literal calls for the rise of the white race to eradicate everyone else on the front page of Gab daily. If you want to defend shit like this on the merits of free speech, the probability that you are in fact a white nationalist is not far off.
And history is plastered with censorship; the world has never been more free than it is now. You are just repeating non-facts without doing even a hint of research.
Even if an app is banned from the play store, the developer can distribute it through f-droid.
Even if an app is banned from the F-droid main repository, its developers can distribute it through their own F-droid repository
Even if the app is banned from their own f-droid repository, the dev can distribute his source code and users can build the app with Android Studio.
Even if Google bans your Android Studio account, you can use another compiler.
Even if the internet bans you, you can mail the code via a handwritten letter to your users, who will copy the code line by line and build it using their own compiler.
Alternative repositories are a first-class feature of F-Droid. Alternative app stores aren't really a first-class feature of Android (though they are obviously possible, in contrast to iOS).
> bans your Android Studio account
Don't give them ideas. (there's thankfully no such thing as an Android Studio account)
> there's thankfully no such thing as an Android Studio account
I meant a Google Account. Isn't a Google account necessary for anything on developer.android.com? Sorry if I'm wrong. Any URL on that domain used to redirect me to a Google login page.
A Google account is not required for using the Android SDK.
You have to agree to the TOS if you download the prebuilt SDK from Google, yes. Building the SDK from source is unfortunately quite hard, but Debian has made some progress with this.
That article was written with such blatantly bad faith language it had the opposite of the intended effect and made me start distrusting whatever Gab is (I had no idea before this thread)
> F-Droid won’t tolerate oppression or harassment against marginalized groups. Because of this, it won’t package nor distribute apps that promote any of these things
If this accurately describes Gab, I don't have sympathy for Yet Another Racist Community pretending that they're defending freeze peach.
Gab is a platform, ergo the judgement is about what the users publish on that platform. It's the exact same point Google seems to be making: that platform hosts content we wouldn't host, ergo the app must not be on our app store.
Anyone who wants to, including the Gab maintainers, can create an F-Droid repository containing any apps they want, including Gab, with or without the permission of the app maintainers. I run a few F-Droid repositories myself for apps that aren't on the Play Store or in the main F-Droid repository. The F-Droid server tool is open source. You can host your repository on GitHub or GitLab, and update it using their CI/actions/etc. It doesn't require much technical knowledge.
pip3 install --user fdroidserver # or install with package manager.
fdroid init # first run only. Now edit URLs and descriptions in config.py
cd fdroid/repo
wget gab-latest.apk
cd ..
fdroid update # on first run, add --create-metadata and then edit the files in the metadata folder and run update again.
git add repo archive
git commit -m update
If the Gab devs were somewhat competent they'd create their own F-Droid repo and distribute their app through that; F-Droid allows this, and cannot stop it.
Hopefully it will. Although Google engages in anticompetitive behavior by hindering third-party app stores' ability to auto-update, install in the background, or batch-install apps.
Those intentional limitations mean that F-Droid will never have feature parity with the Play Store.
Hopefully Epic Games with their antitrust lawsuits against Google and Apple will succeed and we'll see some positive changes.
For now, the best everyone can do is to promote F-Droid. Many users have never heard of YouTube without ads, so getting NewPipe might be good motivation to try F-Droid.
This will work for the people reading this here or who would otherwise already do this. I think decentralization is something we should aspire to promote more broadly than within the community using alternative app repositories.
There were some people pushing for the mentioned apps to be removed from F-Droid, but in the end the F-Droid maintainers decided not to remove them.
https://en.wikipedia.org/wiki/Fediverse
"The Fediverse (a portmanteau of "federation" and "universe") is the ensemble of federated (i.e. interconnected) servers that are used for web publishing (i.e. social networking, microblogging, blogging, or websites) and file hosting, but which, while independently hosted, can communicate with each other. On different servers (instances), users can create so-called identities. These identities are able to communicate over the boundaries of the instances because the software running on the servers supports one or more communication protocols which follow an open standard. As an identity on the fediverse, users are able to post text and other media, or to follow posts by other identities. In some cases, users can even show or share data (video, audio, text, and other files) publicly or to a selected group of identities and allow other identities to edit other users' data (such as a calendar or an address book)." (wiki)
First, don't assume that just because you've never heard a word, that means no one has heard of it. There are quite a lot of us who know what the fediverse is.
Second, and this is a little pedantic, fediverse is a portmanteau, not an acronym.
Fediverse isn't an acronym, nor is it a term "no one's heard of". You haven't heard it before, and it's great to explain it, but don't project that either.
The rationale they gave is that hate speech appears on these apps, because some of the microblogging sites that can be accessed via Fediverse have this kind of content. Based on this rationale, I look forward to Google Play removing Chrome, Firefox, and all other web browsers from the store as well.
This is not the case. Don't assume goodwill from Google.
> The rationale they gave is that hate speech appears on these apps,
So does hate speech in countless apps on the store, and not just in third-party content but pretty much in the apps themselves, and Google doesn't touch them.
A much more rational assumption is that Google sees the Fediverse as something that could come to steal their cattle (eyeballs), and that they have a plan to supplant it.
You will very soon see them becoming even more picky, eventually removing even censored Fediverse apps.
They are repeating the trick they pulled with uBlock. First, they do a purge, with an option for the most resistant to "play along," and a few months later they pull the rug again. This way, they evade an immediate backlash.
Well, we asked for this. We demanded Twitter and FB censor their content, we applauded Cloudflare* for deplatforming websites. Now those monopolies can use the precedent to control more of the web.
Asking a platform to moderate the content on its website isn't the same as asking an app distributor to ban apps which could conceivably be used to communicate objectionable ideas. Those are completely different.
Well, we didn't. Did we say that apps that give ubiquitous access to generally positive content should be removed? No, not really. Did we say that apps that contain user content should consider a bit what behaviour they enable and promote? Yes, and they have done so for a long while, remember, there's illegal content other than what people would call "hate speech".
The only way they can define and enforce "hate speech" policies is by torturing the language, the definitions, and hoping no one looks closely at the resulting policies.
In the US, beyond explicit calls for illegal behavior (usually illegal) and kiddie porn (always illegal), we don't have many restrictions on, or even a legal definition of, "hate speech." But even those laws aren't consistently applied. For the most part, it's up to the judgement of the viewer or based on the "reasonable person" rule, and "reasonable people" seem to be getting rarer and rarer.
>>The rationale [Google] gave is that hate speech appears on these apps
>The only way they can define and enforce "hate speech" policies is by torturing the language, the definitions, and hoping no one looks closely at the resulting policies
The rest of your comment is about the First Amendment to the US Constitution, which constrains governments (US federal and state governments), not private parties such as Google.
Google would be well within its rights in the US to ban all hate speech from all the platforms it owns.
> The rationale they gave is that hate speech appears on these apps, because some of the microblogging sites that can be accessed via Fediverse have this kind of content.
Do the apps connect by default to a server that allows hate speech? If that's the case, is this an instance of bad defaults? Maybe the apps could work around this by just connecting to a server that's moderated to be acceptable to Google, but leave it configurable.
> No. You need to explicitly connect in most fediverse apps. They're like browsers
It's pretty egregious to ban an app like that.
Still, I wonder if an innocuous default server (or set of them) might smooth things over with Google. That would increase the friction of accessing the content they don't like.
Android's also relatively open, so if they only have issues with specific servers, maybe the app could just drop a text-file blocklist on the filesystem that a user could technically edit to change it (but just not via the app).
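A tiny sketch of that user-editable blocklist idea (the filenames and server names here are hypothetical):

```shell
# The app ships a suggested-server list; a plain-text blocklist on the
# filesystem filters it, and the user can edit that file outside the app.
printf 'banned.example\n' > blocklist.txt
printf 'mastodon.social\nbanned.example\nfosstodon.org\n' > servers.txt

# Show only servers not on the blocklist (exact whole-line matches).
grep -vxF -f blocklist.txt servers.txt
# prints:
#   mastodon.social
#   fosstodon.org
```

Because the blocklist lives as a plain file rather than being compiled into the app, the distributed binary stays the same while the effective server list remains user-controllable.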
Similar things happened to some android reddit clients a few years ago and it was resolved by removing certain subreddits from menu of pre-filled subreddits. So maybe these apps just need to remove any problematic pre-filled servers.
There was a thread on twitter recently about how app stores would have rejected the first web browser, asking what future innovations might not happen due to them. Guess federated social media is a part of the answer.
These threads make it obvious that most people would be okay with malls being the only place where shopping is allowed to happen.
"Our local mall is great, I don't see a need for any other malls or standalone stores. Malls provide an air conditioned space, on-site site security, and even refreshments while you shop."
I think it stems from a lack of empathy for others. I don't buy dildos so I see no reason to allow dildo stores to exist.
I have never seen a shopping mall with a sex shop. Maybe lingerie or some novelty items at a Spencer’s, but I doubt mall operators would want to deal with people complaining about their kids walking by a store selling pornography and sex toys.
They usually end up in undesirable locations or in rundown strip malls.
I go past a mall with a dildo shop inside it on my current walking commute to work.
I saw another one of the same chain in the main train station of… I think it was Hannover? I was travelling a lot by InterRail and the places are blurring together.
Zürich has (or had) a sex shop with the wares clearly visible from the outside in the expensive bit of the city centre.
Cambridge (UK) has an Ann Summers in its main shopping centre. Thinking of the UK, I’ve seen vibrating cock rings openly stocked in Tesco, which is the largest supermarket chain in the UK.
Attitudes to such topics are surprisingly flexible.
I think most of the comments here are US-centric, which also happens to be where the app stores are mostly implemented, so they come with the same source values.
Why do people find a corner-case of an analogy, and somehow think it negates the analogy?
It's just an analogy.
Of course 'some malls have sex shops' - but obviously, most of them don't.
The vast majority of corps don't want their brands anywhere near porn, sex, guns, politics, hate/contentious speech etc..
Most decent malls are actually selective of who they want in there, and the 'other residents' of the mall have a say as well.
There are an infinite number of places porn/guns/politics can be sold, so that's not a problem, it's just not going to be in a system wherein the other players are wary of it.
It's actually good reason why we need a lot of alternative points of distribution.
I wonder if there should even be some legislation around that, requiring that preloaded app stores be more 'open' and that alternative options be provided very easily.
Thinking about them in terms of a mall or a shopping plaza is good. However, you mistakenly compared the app store with the Gap.
The app store is the mall space: numerous apps abound, as do shops. The Gap might not sell porn, but Victoria's Secret, Spencer's, and Hollywood (nights?? I can't recall) exist in the mall and might not be porn, but... Some retail plazas might have an adult section. Best Buy or Fry's at one point carried adult magazines and videos.
Some retail plazas even have other things progressives and democrats find offensive, like guns and old classic literature. Occasionally you can also find a pawn shop in a retail plaza.
There was a point where you would rent out your mall or retail space to anyone with the money to cover the cost plus a small profit. The decadence, wealth inequality, and unrealistic valuations from government interference have led to a world where it's fairly easy for some to cherry pick who they want renting from them based upon their ideological, political, or racial spectrum or beliefs. The same now occurs in our app stores.
High end malls never had any of those things, anywhere I’ve been in the US. Victoria’s Secret and Spencer’s are pretty far from sex shops or pornography in my opinion.
Not that I agree the policies of app stores to restrict items solely because they are pornography or sex related. But high end mall operators might reject sex shops for many of the same reasons.
For anyone else concerned with the recent Big Tech censorship, people are actively building alternative social media platforms and they are taking off. The RedditAlternatives subreddit [0] provides a fairly good overview of them. Some are rather extreme, others are more reasonable, but there are alternatives, and it's up to us, the reasonable people, to choose where we want to spend our time and money.
Well, I can tell you what's bugging me personally about this. There's a growing divide in our society between the "equal opportunity" and "equal outcome" camps. And while the "equal opportunity" people mostly have the "just let me grill" attitude, "equal outcomers" are pushing increasingly harder, while doing their best to silence any opposing voices. It's now getting to a point where raising one's kids to be proud of their achievements, seek self-improvement, and pick friends based on shared values, is considered sinful and is being pushed back against.
As a person who was born in a country that tried implementing equal outcomes for 70 years, and ended up with extreme corruption, poverty and social distrust, I don't want to see another round of this happen here. So I am hoping that if enough reasonable people acknowledge the problem, the society will reach some sort of a compromise before the lives of several generations are completely wasted, like they were in the USSR. And having platforms where this sort of discussion is not considered "cancerous" is a very important step IMHO.
Any evidence at all that self-improvement is under attack, or are these all euphemisms for something else? Maybe "pick friends based on shared values" means "I wouldn't let my kids play with kids from the other side of the tracks", which should indeed get some pushback.
Here's one for you [0]. That's an infographic from the Smithsonian museum that attributed Enlightenment-age values, like individualism, family structure, and work ethic to "whiteness" and implied that it should be opposed. They removed it after pushback (search archive.org for more).
That's the tip of the iceberg though. There's a whole industry of wrapping this narrative into struggle session-like training while charging 7 figures to budgets at various levels. You can find many examples like [1] if you search for Chris Rufo. Except adults sort of understand that it's kickback-driven nonsense and don't take it seriously. So now they are taking Critical Race Theory, with very similar postulates, into schools.
It's not about helping minorities learn from the more successful, and reach for the stars. That's purely about making kids hate what their parents did for them, rather than building on top of that.
I don't see anything that indicates these values should be opposed. I've seen controversy over this before and to be honest it mostly seems to be people projecting their own idea of what the left thinks - that "white values" implies they are not "black values" and that all whiteness = bad. Then they conclude from a decent summary of dominant values in America something like /r/conservative's take: "#DefundSmithsonian ... We are paying for white genocide with our tax dollars."
Maybe I should just let this comment stand for its own inanity, but dammit, someone is wrong on the internet.
That poster was at the National Museum of African-American History and Culture, where you might expect to learn the differences between African-American culture and the dominant White-Anglo-American Culture. Context is important. In no way did it imply that "Enlightenment-age values ... should be opposed." Also, it's kind of ironic to be cheering for the takedown of speech you didn't like on a thread where you complain about the silencing of the speech you do like. If you don't like what the poster said or how it said it, maybe oppose it with more speech.
As for the whole industry of "training while charging 7 figures", isn't that just how the pendulum swings right now? Before it was prayer meetings, and then survivalist tactics for team-building. Plus, most employment in the US is at-will, so I suppose if employees don't like it, they are free to find other employers. Freedom cuts both ways too. Also, what's wrong with 7 figures? I thought the free market and entrepreneurs charging for the value they bring to willing buyers were all good things.
When you conclude it's "purely about making kids hate what their parents did for them", well, that's the kind of hyperbole that's hard to take seriously. It is totally free of argument or evidence, and it sounds so much like the conservatives of the 50's and 60's about how the peace-activists, civil rights leaders, and hippies were going to turn their sons and daughters against their parents.
The Soviet Union was never about equal outcomes. It was proclaimed, but worked around in every way possible. Jews were limited in access to education, ex-nobility was limited in rights, party members were given all kinds of preferential treatment, and the nomenklatura lived in relative luxury while the peasants were starving. Hey, even city dwellers were privileged compared to peasants, who were tied to the land and required visas for internal travel. And access to Moscow and its opportunities was tightly controlled.
>It was proclaimed, but worked around in every way possible.
It always does. Each time someone proclaims equity, they carve out some sort of exception for themselves and their family. Like the mayor of Chicago that mysteriously had heavy police presence around her home, while ordering them to stand down everywhere else.
The distribution of outcome in one generation is, rather explicitly, the distribution of opportunity granted to the next one. Policies like progressive taxation and universal welfare programs that can be cast as equal outcome initiatives are, in the minds of their proponents, often about checking this runaway feedback loop that might otherwise leave all the opportunity (wealth, by another name) piling up with fewer and fewer people.
I'm confused about this response. A powerful authority is how you enforce those "equal outcomes." You can have authoritarian left (communism) or authoritarian right (fascism), but both are authoritarian. Let me put it this way: Fascism is authoritarianism but authoritarianism isn't fascism.
Edit: Can someone explain the downvotes? Is the argument that there isn't an authoritarian left? I'm referring to this basic model [0]
Once you have a strong central authority, you are not equal by definition. The stronger and more centralized the power is, the easier it is for the ruling class to pass the status to their kids. So ambitious people won't dream of starting companies and changing the world. Their only route to success will be to join the party ladder and play political games. USSR in a nutshell.
As part of a minority, I heavily disagree. The new push for diversity creates self-doubt: am I really achieving on merit, or is this just the result of this new social justice movement? I tend to think the people of the left are politicians. They haven't done much to help me as a student, business owner, or parent. Rather, they just want me to be a victim, someone they can rely upon to keep themselves employed.
Yep. You know the trick to surviving 10+ years in a corporate managerial position? You pick your subordinates from slightly underqualified mediocrities. Not only will they be completely helpless without you (let alone able to challenge your position), but they will also be infinitely loyal to you, as they won't get another chance under anyone else.
Well, that kind of person now wants to run our entire society this way. They want you to be that loyal mediocrity, infinitely thankful for the handout and supporting their cause without question.
Believe me, it's a road to nowhere. Corporate mediocrities end up taking antidepressants for life, or killing themselves after the boss gets fired and they have zero shot at paying that mortgage. The life of self-improvement, hard work and achievement is 1000x more rewarding and I'm glad more people are starting to realize that.
We need more people like you speaking up though. Because virtually every criticism from outside the minorities is now almost unconditionally labeled racist.
>mostly just want people to have the same access to schools that whites have had for decades.
I would disagree with that. Kids from families with inherited wealth have always enjoyed their hilltop mansions, gated communities, private schools, and Ivy League admissions by donation. And they continue doing that; they're above the rules. Nobody is going to put low-barrier social housing in their backyard, or force their kids to network with us, the plebs.
It's the middle class that made its own wealth that is under attack now. Doctors, engineers, small business founders. Apparently, we can't have a community with no used needles near the playgrounds, because poor homeless. Apparently, I can't spend time teaching my pre-school kids to read and write so that once they go to school they can focus on more advanced stuff with other like-minded kids. For the sake of social justice, the class has to be diluted by a few troublemakers whose parents didn't care. But, of course, not the class where the Governor's kids go.
Mind you, nobody will mind if you go and help the affected people directly. Go offer free counseling to homeless. Go teach Python to kids from poor families. Go do lemonade stand projects with underrepresented minorities. But do it in your own free time and at your own expense. I am pretty certain, many reasonable people will follow, since our society generally values being generous and positive. Except that's not what the activists are doing. They don't want to solve the problem in their free time, they want others to somehow find time and solve the problem for them.
>See, e.g. Glen Beck and Tucker Carlson urging their followers to doxx and threaten their critics.
I've seen the piece from Tucker Carlson. NYT journalists threatened to publish his home address. He stated that it had already happened before, leading to threats to his family, and said that if they did it again, he would retaliate. He didn't mention any personal details there, so the NYT pulled back and they've reached a stalemate.
>See, for example, Hollywood, the NBA, the music industry, etc.
Hollywood has deteriorated to releasing heavily engineered comic book films. The art aspect is gone, the creativity is gone, and many genres like parody and comedy are gone. Mainstream music isn't at its peak either. It's pretty consistent with left-leaning big tech, which focuses on making people replaceable drones following procedures, because they have much less bargaining power that way. I am not into sports, so no clue on the NBA/NFL.
That's a noxiously inaccurate caricature of what people in each "camp" believe (to the extent there even are two camps). Practically nobody believes in strict communist-style equal outcomes. The most any significant number want is more equal outcomes, because the distribution of outcomes has clearly diverged from the distribution of any real merit. What the majority want is equal opportunity, just like they say. That's something we don't have, and it's disgusting to appropriate that phrase for those whose beliefs are more accurately described as discriminatory and/or segregationist. It's equally disingenuous to imply that pride in achievement or desire for self-improvement are either distinguishing characteristics of or unique to that group. The vast majority of those you deride as "equal outcome" believers also have those characteristics. If anything, it's the "OK for heritage to determine outcome" crowd who don't believe in achievement and improvement.
People call that kind of twisting of facts and words "cancerous" because it is. You could argue reasonably that attempts to address current inequity are misguided or have gone too far, but not by misrepresenting what people believe.
Reddit recently dramatically revised their moderation policy and has generally been banning a lot of subs. Reddit very much wants to be a profitable business, and has no problem eliminating non-profitable subreddits.
"When your content is too cancerous for church, maybe the issue isn't the church".
Feel free to replace church with whatever is appropriate to your situation, but there have always been censors and there will always be people who resist the censors. The censors have acted against homosexuals, trans folk, hippies, anti-war demonstrators and many others who we no longer see fit to censor today and I will continue to be in the camp of people who work to thwart the censors.
This site lists Parler and Voat, which have rampant, unchecked antisemitism and racism. I would love to see a non-censorious site that takes hate speech seriously, but they don't seem to exist for some reason...
The thing is, any service that offers a "censorship-free" alternative will instantly attract mostly those whose content is garbage for everyone else: usually it's pedophiles, until the operators get a couple of not-so-nice letters or a police raid and at least introduce modest moderation, and then come Nazis, antisemites and conspiracy myth peddlers.
And said hate peddlers then complain that they don't have the reach they enjoyed on Twitter, YouTube and Facebook... well, d'oh. Turns out no one who is not a Nazi, a journalist reporting on them, or an antifa activist collecting information wants to spend their free time on a platform dominated by Nazis.
Last time I checked, Voat was a bunch of people letting the n-word rip in every comment like they're 10yo and discovered a bad word for the first time.
There was very little sign of intelligence. I had no urge to go back.
Hitler salutes, calls for gassing Jews, or "out with migrants" have been illegal in Germany since 1945, for obvious reasons, one might add. That is illegal content under German law, but "muh free speech!!!11" for Americans.
This is a tricky problem even to define, because there's "Reddit" which is the owners and paid moderators, then there is "reddit" which is the universe of its subreddits and its unpaid, unaccountable moderators.
For example, it's been well known for a while that if you subscribe to any of the men's rights subs, you are auto-banned from several of the feminism ones (including TwoXChromosomes). That kind of thing certainly isn't lenient, but it also isn't under Reddit's control.
Just a slight correction - nobody knows what subs you subscribe to on Reddit, it all works by looking at you commenting in these "wrong" subs.
It does not matter what the content of the comment was, of course. What matters is the fact you left the bubble and interacted with a different one, you traitorous scum.
This sort of sloganistic un-logic that constantly gets up-voted here is a great reminder that most tech people aren't mature enough to make decisions about how the rest of the society communicates.
Yes, if the only defense of your content is that it isn't literally illegal to produce and distribute, maybe choose something better to do with your time.
In the context of a lot of Reddit alternatives, the argument of 'not literally illegal' isn't always what is being made. Sometimes it's more along the lines of 'stop interfering with our law-breaking' or 'let us openly promote real-world violence'. Sometimes people have migrated to sites where the administration is either legally insulated by strong local laws (e.g. Voat in Switzerland) or otherwise ambivalent about actually following regulations (chan sites, self-hosted forums). The incel communities, and many other violent communities, come to mind right away.
A good rule of thumb, but it's worth noting that content banned from reddit isn't necessarily 'too cancerous', it can also be 'the wrong kind of cancerous'.
I know you're just expressing a common curiosity. A friendly reminder that snuff film production is a thing and if a community creates a big audience/market for this stuff then the consequences are horrific.
Isn't this the justification for every kind of moral panic?
If we let kids watch sex scenes they'll become prostitutes. If we let kids play grand theft auto they'll shoot up their school.
Everything has the potential for negative externalities. I don't expect anyone to "think of the poor morbidly curious people", because it's strongly taboo in our culture, but suggesting a direct cause and effect (that gore consumption means more murders for snuff) is a stretch.
You got it the wrong way around. If a lot of people consume a type of content, there are going to be people wanting to produce it for <money, fame, kicks, ...>.
To push the analogy to the extreme: consuming illegal pornography doesn't directly harm anyone, the production does; however, the consumption drives production, therefore the consumption does harm.
Yeah but we're not talking about "the knockout challenge" or destroying milk jugs in the supermarket. Most people don't murder for the lulz and they also enjoy not being in prison for decades. Despite the FUD pushed by popular media there aren't going to be that many people producing it for money/fame/kicks. The vast majority of gore on the internet is a product of security cameras or otherwise hidden cameras.
There are some non cancerous communities that aren't welcome on reddit.
Lots of legal gray area ones that revolve around data hoarding / archiving are constantly threatened and sometimes taken down for piracy. The subreddits try to police the most blatant piracy but due to the nature of why people archive/hoard data it can be difficult.
The highest profile one is /r/piracy. The illegal sharing part of the subreddit spawned a forum for good general piracy discussion. It was at serious risk of being deleted and had to go to extreme lengths to preserve the community that formed around the actual illegal sharing part.
/r/datahoarders and /r/opendirectories are a couple of others where I've personally seen similar things happen. In my opinion they're a couple of the greatest subreddits out there, so the fact that they could be banned out of nowhere on a technicality is a little concerning.
Reddit is plenty cancerous, depending on how you define that. Certainly in the sense that people aren't arguing in good faith, aren't actively engaging, but are just firing talking points or insults at one another... the sort of thing good moderation used to help limit so that the quality of discussion remained high.
I'd way rather read a good-faith, well written exploration into a controversial topic, than someone who toes the mainstream line of acceptable opinions entirely while being rude and belligerent.
They recently banned every popular Marxist/Leninist subreddit. They claim it was for advocating violence, but I know for a fact it didn't happen with any more regularity than in any other political subreddit, including /r/politics.
We live in a time of extreme devaluation of terms. The Democratic Republic of Congo is a hardcore dictatorship. The People's Republic of China is run by a close circle of elites. Likewise, "Hate Speech" is used far too often to shut down fairly reasonable criticism against the "equal outcome, not equal opportunity" policies bundled together with original sin and Orwellian struggle sessions.
OK, but to be fair, it's hard to suggest a meaningful alternative to "hate speech" to describe the conversations happening on Voat and other such platforms.
Places like voat consistently have holocaust denial and calls to actually murder jewish and black people at the very tippy top. There are few places that can be more accurately called "filled with hate speech" than voat.
to be clear, I believe they will be defederating from instances that push content violating the flagship instance's ToS; those instances will still be able to run the Lemmy software.
I think that's a natural selection bias. Most people are either fine with reddit and therefore don't care about finding alternatives, or they aren't interested in what Reddit has to offer, in which case they ignore it. Even if they'd be open to an alternative that's better, spending a lot of time on a subreddit focused on alternatives, shopping through curated lists of them to try them all out is a pretty high investment of time and energy.
The kinds of people who are going to spend a lot of time doing that are likely to be people who like how reddit works a lot, but have an ax to grind against some aspect of it, and usually that aspect is moderation. Most normal people who interact inoffensively have very few interactions with moderators. It's only going to be controversial people who gravitate to that sort of thing, and any platform that's full of people who frequently court controversy is bound to fill up rather quickly with insane freaks.
Well, because I don't want to join communities where hate speech is common, much less the prevailing type of content. I don't think that speech should be outlawed, but that doesn't mean I want to be where it is. So if reddit helps remove hate speech and it funnels to these alternatives, that only makes reddit more attractive to me.
Plainly: I want reddit to censor hate speech on their site, but I don't want hosting companies to prevent those alternative sites from existing, nor do I want Chrome to prevent any user from visiting any of those alternatives in their browser.
Ok. But when you start denying basic infrastructure to those with viewpoints you dislike --- domain names, DDOS protection, app store distribution --- you're no longer trying to just maintain some particular community's culture, but instead eradicate certain speech from public life.
It should be illegal to deny fundamental infrastructure to someone on the basis of his philosophy or point of view. That's what a free society means.
You're arguing a point that the parent poster never made.
The parent poster simply said that they want to join communities free of (or, mostly free of) hate speech and that being free/mostly free of hate speech is an attractive quality for them.
Not once did they advocate for denying infrastructure or any of the other things your entire comment is based on. In fact, they stated the opposite.
Engage in whatever wordplay you want: reddit is a website built on community content and interaction. That there are user-created communities within the larger community does not turn it into some sort of "infrastructure" of internet expression. The infrastructure to build one's own site, and communities that go with it, are and should continue to be made available to all regardless of viewpoint. Given Voat's existence and the fact that people do use it, I do not see a problem.
> We don't have to subsidize their free use of internet resources.
We subsidize the FAANG's use of internet resources; the Internet was largely originally created with public money (military, academia, etc).
> They are welcome to buy their own servers or even rent them.
Are they? What if they get banned from there? What if they can't collect money from their community, because all the donation sites ban them? What if their donation sites are killed because all the payment processors drop them?
Your argument leads to the logical end of "they're welcome to make their own internet".
What happens when something you like to discuss falls into the category of hate speech? You think it couldn't happen, but plenty of topics that were totally reasonable subjects of discussion, plenty of totally reasonable publicly-held opinions from 20 years ago are now in this basket. It's totally plausible to imagine some newer topic you feel strongly about eventually becoming so, and you ending up on the side of wanting to have honest discussion about it and being locked out.
Can't imagine it happening to you? Well, maybe you don't like pedophilia, or the huge push of incest porn that seems to be everywhere. Maybe you have legitimate, non-racially-oriented concerns about the riots around the country? Maybe you're worried about some particular changes coming to your children's education, or you're worried about the impact of ideas like UBI? Well, your opinions that are acceptable to post in polite places today might not be in 2030.
The only way anyone should be okay with this continual incursion into free speech is if they have truly tied themselves to the idea of being 100% on board with whatever restriction is coming down the pipe next. Such a person has no principles, and it's hard to imagine defending them.
> Are they? What if they get banned from there? What if they can't collect money from their community, because all the donation sites ban them? What if their donation sites are killed because all the payment processors drop them? Your argument leads to the logical end of "they're welcome to make their own internet".
This is slippery-slope nonsense, which is just where this conversation always ends up. You can "what if..." your way to us banning these individuals from driving cars if you want, but it's not what's being discussed.
I am very, very, very in favor of reddit disallowing hate speech on their community as that is an active behavior on the site itself for a community they own. I am very, very, very against any organization barring access to things like domain name registration, hosting, etc. on the basis of their expressed ideas up to the point of them directly enabling clearly criminal behavior (e.g., directly organizing violent attacks, sharing revenge porn, etc.).
Private citizens kicking someone out of a restaurant for being rude is not the same thing as government actors barring them from ever owning a restaurant, or going somewhere else where their rudeness is welcome.
> The only way anyone should be okay with this continual incursion into free speech is if they have truly tied themselves to the idea of being 100% on board with whatever restriction is coming down the pipe next. Such a person has no principles, and it's hard to imagine defending them.
This is a false dichotomy built on the aforementioned slippery slope fallacy.
You can't claim slippery slope if it already happened. Payment processors, domain name registrars, hosting providers, and anti-DDoS services already ban people they dislike. If we slide any further down this slope, the deplatformed will have to lay their own optic fibre.
It's not possible to do that. Nodes can't know what's in a chunk of data that passes through, or whom it comes from. But Freenet is meant to host website-like sites called freesites. You can't really spam anyone with sites: you can create as many as you want, but if no one visits them they will eventually just "fall out" of the decentralized storage.
Technically you could probably bring the network down by simulating a massive number of users who only access spam sites, but that would probably become rather expensive, because you would be attacking your own nodes and your own nodes would try to help the network. The attack would also need to persist forever, or else everything would simply go back to normal shortly after it stopped.
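The "falling out" described above can be sketched as least-recently-requested eviction from a node's fixed-capacity datastore. This is a toy simplification of Freenet's actual caching behavior, with illustrative names, not its real implementation:

```python
from collections import OrderedDict

class NodeStore:
    """Toy fixed-capacity datastore: unrequested chunks eventually fall out."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.chunks = OrderedDict()  # key -> data, least recently requested first

    def insert(self, key, data):
        self.chunks[key] = data
        self.chunks.move_to_end(key)
        while len(self.chunks) > self.capacity:
            # Evict the chunk nobody has asked for in the longest time.
            self.chunks.popitem(last=False)

    def request(self, key):
        if key in self.chunks:
            self.chunks.move_to_end(key)  # popularity keeps a chunk alive
            return self.chunks[key]
        return None
```

Under this model, a freesite nobody visits is exactly the chunk that gets evicted first once the store fills up with more popular content.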
The danger here is viewpoint based discrimination. It should be illegal for a big platform to shut down certain points of view when these points of view are legal and expressed in a normal manner. You can shut down spam without harming the principle of free expression because you can express any idea in a way that isn't spam.
The law gives you a legitimate and stable line that you can draw between allowed and disallowed speech. It's fine for a platform to censor what's illegal, because in a sense, we all agree on the law and have a say in its content. But we have no say in big tech content policy, and that's what makes it illegitimate.
My basic point is that it's infuriating and wrong for big tech to impose its values on the public. A public space that's privately owned is still a public space. A free society is one where you don't get barred from public spaces because of your opinions.
> It should be illegal for a big platform to shut down certain points of view when these points of view are legal and expressed in a normal manner.
Platforms aren't public venues, they're private businesses offering access to a service under terms that serve their interests and business needs first and foremost... terms that everyone agreed to before being able to use the platform.
>The law gives you a legitimate and stable line that you can draw between allowed and disallowed speech. It's fine for a platform to censor what's illegal, because in a sense, we all agree on the law and have a say in its content. But we have no say in big tech content policy, and that's what makes it illegitimate.
The law also allows for private ownership of businesses, contracts and freedom of association. The law says Google's platform is Google's property and Google can do whatever it darn well likes with it.
>You can shut down spam without harming the principle of free expression because you can express any idea in a way that isn't spam.
Who gets to define what spam is? Free expression isn't free if someone has the ability to define any arbitrary speech they don't like as "spam" and censor it. All of the slippery slope arguments applied to censorship of any other form of speech also apply to spam.
Given that they are not operating as neutral public venues then they should also not be afforded protection under Section 230. If they actively moderate they should be fully liable.
The very point of section 230 was to protect companies even if they did moderate. What you are suggesting is that section 230 should be repealed, and let the courts sort it all out.
> [An Act] To regulate interstate commerce by imposing limitations and penalties on the trans-mission of unsolicited commercial electronic mail via the Internet.
Censorship not related to government always strikes me as just complaining. Private entities, whether they be individuals or businesses, should be free to determine what is an acceptable use of their services and what is not.
Do you allow any form of speech in your house or is your house a platform that censors speech?
I think comparisons between trillion-dollar companies, countries, and houses make for weak arguments.
Countries can imprison or kill you for speaking incorrectly.
Companies? Only have the power to cause me trouble when they are oligopolies, e.g. big tech right now. It can be quite severe trouble, even if it isn’t literal imprisonment.
A house? I am aware that poverty (and being a minor) forces some people to live with abusive persons who can give them a choice of silence or homelessness. That kind of evil doesn’t scale up to affect everyone (perhaps if it did we might try harder to fix it?)
A home is not a platform at all, let alone a public one open to everyone, anonymously and without needing an invitation. It's a personal and intimate place.
Most service platforms are not public, either, requiring, at minimum, a way to identify the content creator, even if only to the admins. While consuming the content may be anonymous, production is not. The platform itself is only acting as a publisher. In this specific instance, Google is in the wrong.
4chan, being the obvious exception, since content can be created anonymously.
There’s a huge difference. The government has to let you speak. The corresponding duty on your part is that you won’t do harm (e.g., shouting "Fire!" in a crowded theater).
That right, and duty, does not exist on a private platform that you don’t own. You are limited by the TOS and license that you, like most, willfully skip over. A private, non-government platform can tell you to STFU and Get Out and they don’t even need a reason, because it’s their platform.
You can easily set up your own free speech platform, but nobody is required to listen. That’s the main attraction of social media, a more-or-less guaranteed audience. At least until someone tells you to GTFO if you’re not hosting.
There’s a difference between “has some of it” and “is full of it”.
As SSC says, even if witch hunts are genuinely bad, if you found a town with the founding principle of having no witch hunts, you will end up with a town with 5 genuine civil libertarians, and 100 witches.
I think an interesting strategy would be to found a site with an initially high level of censorship, but a public commitment to reduce the amount/degree of censorship on the site over time with a particular timetable.
Unless there's a limitless supply of witches somewhere, the more witch-hunt-free towns there are, the more the witches will be distributed among those towns, because witches aren't a homogeneous group either.
So in theory, voat & co should either shrink or get less radical the more reddit-alternatives there are. That is, unless reddit is turning normal people into witches that will then populate the new alternative platforms.
Normal as in "average" or normal as in some kind of healthy & moral?
The average person puts their pitchfork back down and enjoys the bonding experience that a successful hunt brings to the community. The out-group isn't just persecuted because they are different; the whole process also helps confirm the unity of the in-group.
This is an interesting point!
However, I also note that there is a limited number of “genuine civil libertarians” (or, of people in general I guess), so I am not sure that having more of these would necessarily improve the ratios between the two within any particular town/site ?
Also if people are spread sufficiently thin, some of the sites will die from “no one is using this”, I think.
I don’t know how this all balances out, but what you brought up does seem like an important force/mechanism to take into account.
So, I’ll say “Further work is needed in order to understand the behavior in this area.”, haha.
I'd guess that the need for genuine civil libertarians would go down with the witch density. The supermajority of people are probably fine with seeing one witch per day, but they really don't want to live anywhere where they feel surrounded by witches.
The dynamics involved in city districts rising and falling and rising again might be similar. Online communities lack the rent-dynamic however. A "difficult" part of town gets more attractive because the rent is much cheaper. I don't know whether there's anything equivalent in online communities that would "gentrify" a toxic community.
I think you just described TikTok. There is an extreme level of censorship due to how many children use the app. They've achieved a level of family-friendliness that YouTube is trying very hard to reach. Regarding politics, TikTok has often said that they just aren't a place for politics, and now they are being pressured to let up on political censorship because of issues about China and the coming US presidential election.
I was imagining an organization which from the outset planned to be very permissive in what they eventually allowed, with initial restrictions just being in order to cultivate a desirable community to start with, but now that you mention it, perhaps the motivation behind the policies don’t matter so much as the policies themselves. Good point.
That being said, I don’t imagine that TikTok will become quite as permissive in the end as I was imagining?
No, there's no difference between "has some of it" and "is full of it". Where could the line possibly be drawn between those two? Why should your values in particular control other people's speech? If you don't want to participate on a discussion because it's full of something you dislike, that's your call. But big tech companies have no business shutting down other people's legal conversations.
I think you guys agree. You are free to not participate. The point of contention is: are you free to deny others ability to participate if they want? (if you had that ability)
That's where the idea of public platforms as infrastructure (and actual infrastructure like DNS and carriers) is coming from.
This reminds me of people wanting to ban or filter torrent protocol itself, instead of illegal content on it.
Pretty much all the subs like the_donald, republicans, conservative, and asktrumpsupporters that are against big tech censorship will censor (ban) you just for showing their hypocrisy.
I clicked on a few links on r/RedditAlternatives, and they look like Facebook pages from rural Ohio.
> Pretty much all the subs like the_donald, republicans, conservative, and asktrumpsupporters that are against big tech censorship will censor (ban) you just for showing their hypocrisy.
This is like saying:
> Pretty much all the discussion groups that are against government censorship will censor you (throw you out) just for showing their hypocrisy.
It's not hypocritical to moderate one forum, on a platform where everyone can create his own forum, while also demanding that the platform itself doesn't censor discussions, any more than it's hypocritical to moderate a discussion group while also demanding that the government doesn't censor your discussions.
Moderation happens at the lowest level possible. Any content removal at a higher level is censorship.
This problem is common to nearly every subreddit, on every side of the political spectrum. Unaccountable moderators will ban you in a heartbeat for arbitrary reasons, such as disagreeing with the sub's majority opinion.
I was banned from my (very liberal) U.S. state's subreddit for quoting Martin Luther King. I quoted a passage from one of his speeches that denounced political violence, such as the arson and vandalism we've seen in Kenosha recently. The subreddit moderator banned me within minutes, called me a Nazi, and immediately denied the appeal.
They're not unique in this behavior. Basically every political subreddit that exists, and even some that aren't innately political but get taken over, do the same thing.
I remember there were a bunch of posts in /r/zerowaste about someone explicitly advocating to be made a mod so that they could moderate the subreddit as an anti-capitalist. I really had to credit that person for the bravado of saying: "I'll ban people I disagree with, voooote for meeeeee".
In fact, the ONLY political subreddit I'm aware of that does not engage in this behavior is r/Libertarian. Credit to them for putting their money where their mouth is.
/r/libertarian went through a brief period not too long ago where a mod (or mods) took over and started banning a ton of people for their beliefs (mostly socialists).
To be fair the creator of the sub came back and fixed it.
I was an early adopter of Voat during the first Reddit purge but it turned into a racist cesspool. I'm wondering how many of these alternatives will follow that trend.
They typically get addicted to advertising dollars and clean their sites up
Opening an opportunity for someone else to build a community by selling the same "my fictional private-sector free-speech assurances, which I imagined were part of the constitution, are being upheld here" story.
I was in the same boat. I ended up bouncing because it was so bad I couldn't stand it. But to me, that should be the solution- if you don't like it, don't participate in it.
We live in an age where people seem to think in very black and white terms and if your beliefs don't align you shouldn't have them or be able to voice them.
Aside from inciting violence, your beliefs are yours. If they aren't breaking the law, you do you. I don't have to agree with them or support them, essentially 'don't tread on me'.
But people need to remember when you deplatform these racist, ugly thoughts and words, you push those people together, galvanizing them and their beliefs. As others in this thread have stated, I believe that's why voat got so bad so quickly.
As they say, sunlight is the best disinfectant.
I don't know what the end all solution is here, but I know I deal with racism on an almost daily basis in real life, and I'm Hispanic, so I can't imagine how bad it must be for some others. Pretending it doesn't exist won't fix it. Pushing it to the furthest corners of the internet won't make it go away in real life.
Look, I'm not trying to cut down alternative platforms by any means. However, these alternative platforms are failures waiting to happen. The reason is that although they start low-cost, eventually, if they grow, they need some form of income in order to continue. At that point, they need some form of advertising which, because of their content, they cannot get, and so they close up.
Honestly, such a brazen step might be good in the long run. People are all too happy to be boiled frogs, and an overreach like that might jolt some off their ecosystem.
> Holy crap, google is apparently taking down all/most fediverse apps from google play on the grounds that that some servers in the fediverse engage in hate speech.
Good thing you can't find any hate speech on Play Store–promoted corporate behemoths Twitter and Facebook!
This is a bad faith argument. Twitter and Facebook do a lot to take down hate speech from their platforms (with exceptions made for "public interest" cases like high level politicians).
Some servers on the fediverse specifically allow unfettered hate speech.
No, it's bad faith to pretend that some random Fediverse server that allows unfettered hate speech has the same societal impact as the mountains of misinformation and bigotry that flourish on Twitter and Facebook, despite their half-hearted efforts to look like they're fighting it.
Have any of the Fediverse apps that were banned from the app store done anything to prevent the issues that you say Facebook and Twitter have? I'll take the simple banning of a harmful server in their app as enough to change my mind on this issue.
I'm not defending these hypothetical Fediverse apps that don't moderate. I'm saying that compared to Fediverse apps exposing a few thousand people to hate speech, Facebook and Twitter actively promoting bigotry, propaganda and conspiracy theories to tens or hundreds of millions of people is far more harmful — even if they make a nominal effort to prevent it from happening.
And, more to the point, I'm alleging that the reason Google is taking down the Fediverse apps and not Facebook or Twitter has nothing to do with their moderation policies, and everything to do with money.
They have done exactly that multiple times. For example, the app Tusky has a hardcoded list of hate-speech instances for which login is blocked, to give you just one example.
Other than that, the instances themselves simply choose not to federate with questionable instances. This happens in an almost organic way: if an instance refuses to block federation with a questionable instance, other instances will then also refuse to federate with it.
The result is a network where hate-speech and undesirable content has been almost organically filtered out.
But keep in mind that the (client) apps themselves are more like web browsers. Anyone can host their own website and access its content through one of these apps. This also means that instances which are blocked by an app can very easily circumvent that as well.
(Sorry for my bad english and repeatedly using the word "instances")
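The client-side blocking described above (a hardcoded list consulted at login, as in Tusky) can be sketched roughly as a denylist of instance domains. The domains and function name below are illustrative placeholders, not Tusky's actual code:

```python
from urllib.parse import urlparse

# Hypothetical denylist of instance domains; Tusky's real list differs.
BLOCKED_INSTANCES = {"hate.example", "unmoderated.example"}

def login_allowed(instance_url: str) -> bool:
    """Refuse login to any denylisted instance, including its subdomains."""
    host = urlparse(instance_url).hostname or ""
    return not any(
        host == domain or host.endswith("." + domain)
        for domain in BLOCKED_INSTANCES
    )
```

As the commenter notes, a check like this is trivial to circumvent: a blocked instance can simply move to a new domain, so it raises friction rather than providing a hard guarantee.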
I asked about the apps that were banned from the play store. Tusky is still available on play store. Maybe their hardcoded list of hate-speech instances were enough for Google to keep them on the play store?
But the authors of these apps have no influence over what servers you connect with. It doesn't feel much different from visiting a hate speech website using your web browser, and they (luckily) do not get banned either.
> Like hate speech is bad and it’s not censorship if it’s not mandated by the government.
To be fully accurate, this is absolutely censorship, but it's not a violation of anyone's First Amendment rights. People often conflate the two.
We can argue about whether or not Google should ban certain opinions on their platform, or where the line should be drawn, but it is inarguable that they have the legal right to do so. And the Federal Government, just as inarguably, does not have this right.
Constitutions are squishy things, ultimately. They evolve over the years.
I take your point about legality. But, I'd make the point that we don't have a clear moral or legal concept of free speech that relates to the current world.
"We can argue about whether or not Google should ban certain opinions on their platform, or where the line should be drawn, but it is inarguable that they have the legal right to do so."
At some point, the distinction between legal and moral breaks down.
In theory, Google could legally do quite a lot. Say Google decides to censor all mentions of the Tiananmen Square uprising. They could remove it from Search, YouTube, Android phones, and Chrome. Gmail could spam-filter emails mentioning it. They could exert influence outside of the companies/services that they own directly. None of that is illegal (at least not unconstitutional). In practice, this is very close to what China does with the Great Firewall.
It also wouldn't stand. Something like this would be too contradictory to the moral concept of free speech.
Constitutions are interpreted, and the supreme court is not the only interpreter. It's a cultural construct as well as a legal one.
We are effectively at this point now. Google, Twitter, Facebook, etc... These aren't platforms in the way newspapers were. They're not platforms at all. They're the level ground, in terms of speech, press, the right to petition the government, or practice religion. The reasons that amendment was written run through Google.
> Constitutions are squishy things, ultimately. They evolve over the years.
They're really not -- or shouldn't be. Certainly not when you have a textualist/originalist majority on the court charged with deciding what is and isn't constitutional.
I get your broader point that societies and norms change. But the last thing any of us should want is a constitution that is lightly referenced and broadly interpreted because history shows such easily interpreted and changed documents benefit the oppressor far more than the oppressed.
Your final point is a strong one: Substantial parts (and, for some forms of speech, nearly all parts) of our freedom of speech run through a handful of large social-media companies. There's little reason that can't be addressed with appropriate federal legislation but, if that's not enough, then let's get on with the heavy lift of actually amending the Constitution, rather than hoping our better angels prevail in interpreting it.
Agree. To extend your argument: over the years, the application of the constitution has to be understood in new contexts. It doesn't evolve, but how it is understood to apply does, and what it covers is discussed anew. That's why textualism/originalism is important: if we understand what someone said and why they said it, we can better represent the intention of the rule here, several hundred years later.
> It also wouldn't stand. Something like this would be too contradictory to the moral concept of free speech.
I don't know. If Google decided to censor "hate speech" in GMail, could anyone stop them? Some people would applaud. Machine learning is good enough now that misspellings and euphemisms can be caught. So topic-based censorship could really work. Especially for Google, which has so much history on each sender, and so much experience with spam filtering.
Hey! I'm fiddling around with this right now for a submission. What I've found so far makes it look like hate speech is not easy to catch:
1) Slang evolves.
2) Combining two languages, especially under-resourced languages, means that a chunk of words falls outside the range of a hate-speech lexicon.
So maybe in languages with huge resources devoted to them (English), it's easy to figure out, but I don't know how well covered the evolving slang edge and the code-mixed (mixed-language) edges are.
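To make the lexicon-gap point concrete, here's a toy sketch (the lexicon and the example posts are all invented for illustration, not real data): exact-match filtering catches only the canonical spelling, while leetspeak, fresh slang, and code-mixed equivalents slip through.

```python
# Minimal illustration of why a static hate-speech lexicon is brittle.
# The lexicon and the example posts are invented placeholders.

HATE_LEXICON = {"bigot", "vermin"}  # hypothetical flagged terms

def lexicon_flags(post: str) -> bool:
    """Flag a post only when a lexicon term appears as an exact token."""
    tokens = post.lower().split()
    return any(tok.strip(".,!?") in HATE_LEXICON for tok in tokens)

posts = [
    "you are vermin",    # caught: exact lexicon match
    "you are v3rm1n",    # missed: leetspeak/misspelling
    "you are sus trash", # missed: evolved slang not in the lexicon
    "eres una plaga",    # missed: equivalent insult in another language
]
print([lexicon_flags(p) for p in posts])  # [True, False, False, False]
```

Keeping a filter like this accurate means continuously re-mining slang and building lexicons per language (and per language *pair*, for code-mixed text), which is exactly where under-resourced languages lose out.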
Facebook is working on this.[1] There's even a $100,000 "Hateful Memes" competition.[2]
> In order for AI to become a more effective tool for detecting hate speech, it must be able to understand content the way people do: holistically. When viewing a meme, for example, we don’t think about the words and photo independently of each other; we understand the combined meaning. This is extremely challenging for machines, however, because it means they can’t analyze the text and the image separately. They must combine these different modalities and understand how the meaning changes when they are presented together.
Facebook is going for the really hard case, where non-hate images and non-hate text combine to induce hate. They already have a text-only system.
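The "combine the modalities" idea can be sketched in the abstract. A late-fusion baseline scores the text and image features separately and averages the scores; an early-fusion model operates on the concatenated features, which is the setup that lets deeper layers of a real network learn cross-modal interactions. The feature sizes and random weights below are invented for illustration; this is not Facebook's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-computed features, e.g. from a text encoder and an
# image encoder run over the same meme.
text_feat = rng.normal(size=64)
image_feat = rng.normal(size=128)

# Late fusion: score each modality independently, then combine the scores.
# A benign caption plus a benign image can never score high here.
w_text = rng.normal(size=64)
w_image = rng.normal(size=128)
late_score = 0.5 * (text_feat @ w_text) + 0.5 * (image_feat @ w_image)

# Early fusion: classify the concatenated features jointly. With a linear
# layer this is still additive, but in a real network the deeper nonlinear
# layers over this joint input are what can model text-x-image interactions.
fused = np.concatenate([text_feat, image_feat])
w_fused = rng.normal(size=64 + 128)
early_score = fused @ w_fused

print(late_score, early_score)
```

The hateful-memes case is hard precisely because each modality looks innocent on its own, so any model that scores them separately is blind to the combined meaning.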
If the CCP can do it, and they do it to catch all the circumventions Chinese internet users employ to try to avoid censorship, which they do successfully, then we know it's possible, and if it's possible the big players are doing it too.
People can be as hateful as they want and the government isn't coming after them. What people who use hateful verbiage are really arguing for is consequence-free speech, and that just doesn't exist. They want to be allowed to be hateful and hurt Google's reputation while Google just shuts up; i.e., these folks don't really value freedom of speech once people start using theirs to counter them.
So for some of my volunteer moderation work, this was a question I tried to answer, so that we could have a justification for censorship, instead of "We need to do this, or the forum will continue to be a dumpster fire of hate".
The argument was a closer look at the "market place of ideas" analogy.
There are actors in the market selling bad content: content designed to addict, poison, or overturn the fair functioning of the marketplace.
Earlier things were far slower, so this was not as pertinent a threat.
With social media, and virality, this is a clear and present danger for the functioning of the digital meeting places we enjoy.
Therefore, there is a need for action to prevent these market perverting actions.
(This ignores the class of locutionary acts that cause direct harm, such as hate speech, but the same argument can be made for them, precisely because of the clear harm they cause.)
Nathan Matias at Cornell is someone who wrote/writes about this, while also working to help citizens run experiments to figure out what works for content moderation.
I don't think it's a question of the medium but a question of the actor doing the censoring. If the feds mandate the censoring on a private medium, I don't think it makes it legal. I think one difference is the enforcement, Google can't censor an individual universally, just on their platform, whereas the government can enforce it universally with force/jail/etc.
Unfortunately, it becomes virtually universal when a small set of massive companies (with similar censorship ideas) control 99% of all our communications and social media.
I think this evades the spirit of the legal protections here, at least.
Note: I'm not disagreeing with you, and don't really have a solution here. Just pointing out how the current situation feels like dangerous territory.
> If the feds mandate the censoring on a private medium, I don't think it makes it legal
FCC regulation of TV broadcast comes close, but apparently obscenity isn't protected under the First Amendment (perhaps you can tell I'm not a lawyer, or for that matter an American).
Yeah, there is this vague idea that the First Amendment doesn’t apply to certain categories of speech (yelling “Fire” in a crowded theater is the classic example) that get stretched to fit this sort of thing.
It's a problem with the drafting – the original authors always meant for there to be exceptions, but decided not to specify explicitly in the text what those exceptions were going to be – thus leaving it up to the courts to decide in practice what exceptions are allowed and what are not.
The obscenity exception was largely non-controversial until the 20th century, because there was a broad societal consensus, among both popular and elite opinion, that obscenity and pornography did not deserve First Amendment protection. It was only in the 20th century that societal consensus broke down, and it was in that context the US Supreme Court decided to reduce the scope of that exception. (It still exists, and is still occasionally enforced.) The original authors and ratifiers of the First Amendment supported laws against obscenity, and didn't believe the First Amendment prohibited them.
Ultimately the courts have to decide what laws mean, even constitutional laws – but they could always have given them more guidance, by being more explicit in the text about which exceptions are valid and which are not.
A super-interesting question that is also an invite for people to flog their own personal theories. People will tell you lots of reasons: increasing incomes, the development of ubiquitous media, increasing diversity, weakened religious control mechanisms, etc.
There is probably at least a little truth to each of those, although I think many of them are also effects of central causes (e.g., religious control over common people's lives declined because of increasing incomes, which increased due in part to advances in communications tech).
I assume the reason for that is that the FCC grants a government-protected monopoly on wireless spectrum to a single entity. In the granting of a monopoly, they also demand extra "protections", in much the same way that there are regulations on other monopolies.
Not really. It's just saying "this is mine, it's not public property or a public service, and thus I get to manage it how I want." I think MOST people would agree that that's a reasonable approach.
Just because a lot of people rely on the Google Play Store doesn't mean it's a public service in the legal sense. It's a very private piece of software that is NOT open source and is very obviously owned and managed by a single entity.
Just like you get to choose who you let into your house, they get to choose how their software is used.
> Just like you get to choose who you let into your house, they get to choose how their software is used.
The first instance is property rights. The second is copyright, a privilege granted by the government at the expense of others' property rights. When a private party leverages copyright to conduct censorship it's ultimately the government that is responsible for violating the victim's freedom of speech. Google certainly has the right to grant or deny access to their services as they please, but that is not the same as having a natural right to decide how the software they develop is used after it has already been released to the public.
“just because a lot of people rely on the google play store doesn't mean it's a public service in the legal sense”
I think that’s precisely what’s starting to be discussed now at national levels, with investigations into Apple, Facebook and Google in the EU and the USA.
In many countries, utilities are commercial entities, but they can’t refuse to serve customers because of what they say. I can see a future where we think the same of the big players on the web: commercial, but still public utilities.
Problem of course is that many countries also fear a completely open internet. Providers already have to filter pornography, hate speech, etc. So, would we end up with commercial entities that cannot filter the content published on their platform to suit their norms, but must filter it to suit the norms of the government? If so, would that apply to all sites, including, say, Hacker News, or pro- or anti-abortion sites, or just to large ones? If so, what’s ‘large’?
I agree with you, but the flip side is that they are not liable for the speech on their property due to an exception in Section 230 of the Communications Decency Act.
If they have shown the ability to control speech on their platforms, Section 230 should be repealed and Google et al. should be responsible for the content on their property like any other publisher.
Section 230 exists because it is operationally impractical for websites to affirmatively approve all, most, or even a significant portion of content before it is published by users.
If there's anything that is an indisputable fact, it's that no high-volume platform with user content can proactively police their platform 100%. I think that's a silly rationale to say that they should be prohibited from manually policing content that is brought to their attention afterwards.
I never said they should be prohibited from policing content, merely that if they are capable of policing content they should be held liable for content on their platform like every other publisher.
There is no reason to give these censorious companies extra legal protections no other publisher has if they are censoring society. They have been protected by a regulation that is now causing intense centralizing of power in the hand of a few technocrats and it is actively harmful to the rest of society to so empower them over everyone else.
It is operationally impractical that society should be subject to Google's whims but Google not liable for Google's network content.
>User content on most websites are published by automation, and are not reviewed by humans, like this comment.
So what? They choose to publish it. They can choose not to. Their own internal business practices don't require society to give them loopholes with which they get out of all liability and abuse the rights of others.
It's a good reason for the publisher; it's not a good reason for the rest of society.
>It is not reasonable to make people guilty by proxy. Someone either did the crime, or they didn’t.
If there is no crime in publishing the content then overturning special protections in section 230 will have no effect.
>If you own a building, and someone writes a bomb threat on a bathroom stall, are you guilty of a bomb threat?
If you have a magazine, and publish letters to the editor that contain bomb threats, yes. A building isn't a publishing business, and these companies are in the business of publishing, but it turned into publishing on computers and suddenly they get a special exemption.
They choose what goes on their platform, who can go on it, what is said. They've demonstrated amply their ability to control speech and enforce policy. Let them have their platforms, let them have the liability like everyone else.
It's the side effect of allowing a small number of largely unregulated companies to control so much of our communications. TV and radio, using public rights-of-way like radio bands, are much more tightly regulated to ensure "equal time". That's not the case for social media or mobile platforms, and I suspect any attempt to regulate those would be met with a great deal of resistance. Not the least complaint would be that regulation has a history of keeping small players out, potentially further cementing the monopoly of a few companies. I don't know the answer to any of this, but I think it's something that will need to have an answer if our democracy is to survive.
>> The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses to both present controversial issues of public importance and to do so in a manner that was—in the FCC's view—honest, equitable, and balanced. The FCC eliminated the policy in 1987 and removed the rule that implemented the policy from the Federal Register in August 2011.
And for good reason. It doesn't make sense to mandate equal time for mainstream and fringe positions, rational proposals and ones riddled with contradictions. Either some government censor is responsible for deciding which positions are "serious" enough to warrant equal time or the media eventually gets overwhelmed with nonsense and conspiracy theories. If you want to see where the "fairness doctrine" leads, just have a look at some of the less discriminating social media sites.
Evidence suggests that the fairness doctrine worked really well until we ended it. Until its elimination, mass media news in the US was pretty middle of the road.
Seems to me around the time that it was ended there were several other things going on.
The advent of cable news networks which gave a massive incentive to sensationalism and strong partisan ties as multiple players joined the space with a need to create a sustainable viewership.
Satellite feeds became common, ensuring a single message instead of having a layer of abstraction in the form of a local or regional newscaster; instead of relaying facts, they could relay a highly opinionated version.
Local and independent news stations were being purchased and consolidated into national telecom companies with their own partisan editorial bents, a la Nexstar and Sinclair.
I have to believe that all of the above had a much greater influence on news discourse in the past few decades than the elimination of the fairness doctrine. Furthermore, if you give government the power to regulate anything, always expect the current party in power to use that regulation as a weapon. Can you imagine what our leaders would do given even more power to control and manipulate the media narrative? Ending this was a good decision.
I feel there should be some laws for when someone calls themselves news or journalism. So no monopoly on news, but when one does call themselves this, there should be some ethical and truth-finding considerations attached.
>there should be some ethical and truth-finding considerations attached.
But who decides what's true? And why should we let them? Majority consensus is an easy answer, but we'd need something else if we were to regulate truth at a level we could enforce on journalists.
I'm personally more worried about that question spiraling out of control than I am about offering equal air time.
Truth might not be the best word, but the intention is about factual and empirical observations. So news/journalism is "X happened at Y", backed up with as many sources as the journalist can muster.
A much better reason to get rid of equal time policies is because on the Internet, spectrum is effectively unlimited.
On TV or radio, you can only have so many stations. But with the internet, people can make a new web site and publish there; they're not limited by the available spectrum. Therefore, the kind of regulation that was needed in a constrained environment (broadcast TV/radio) does not really make sense when those constraints are lifted.
I am very anti-social-media-regulation. Partly for the reasons you mention and partly because I see greater regulation balkanizing the internet and driving us increasingly farther from the promise of an egalitarian open internet.
As for alternatives, I think we just need people to collectively decide that some other platform (ideally a decentralized one) is better than the incumbent. Facebook depends on its inertia. Suppose every Facebook user went cold turkey and switched to something else instead (let's say Mastodon for the sake of argument). In a year, nobody would be talking about Facebook's monopoly.
Where I think things get sticky right now, though, and I'll even say -the- reason we haven't seen innovation in social media, is that incumbents on the scale of Facebook have the capital sufficient to either buy or sue any plausible competition into the ground before the competition has a chance at taking their market share. Imagine a world where Facebook had been blocked from burying Instagram and WhatsApp with money!
I think I would be in favor of greater regulation against these winner-takes-all tactics on a more economic level, although exactly how that regulation would work in a way that was both fair and non-trivial to evade I don't know.
It seems like the obvious answer is modernized competitive-market laws that prevent companies from leaving competition mode and entering castle-building mode.
That's the problem of the all-private internet. There is no virtual street corner at which to protest. There is no internet post office to handle your mail. There is no internet water utility who isn't allowed to shut off your service no matter how many people complain about you.
There is only profit. The moment you become unprofitable for whatever reason you will lose everything. If tomorrow 51% of the world decided they hated left-handed people they would all find their accounts disabled, their website registrations suspended, their entire online presence forced into secrecy.
So far that's only happened, to my knowledge, to terrorists and white supremacists, but there is absolutely no legal reason why it can't happen to anyone else.
Yep, and this is the core problem that I think much of the debate around social networks and online services in general is missing - the debate typically centers around these entities' legal rights, and completely forgets the fact that the online scenario actually has very little equivalent in the real world.
The real world contains public spaces. It contains within it the recognition that some part of all of this around us, belongs to everyone.
And while that has been the center of much of the rhetoric about the internet since its inception, that rhetoric has never actually been true IN FACT. It's a mishmash of private entities controlling their piece of the puzzle.
I think, as another poster mentioned, if democracy is to survive, the concept of "some part of the internet and its services are a public good" must take hold.
Now, that's a scary-ass thing to say because unlike a piece of land, or drinking water, these things don't just "exist". They exist only as long as some entity pays for them, which means that such a statement implies things about who pays (government? subsidies? you pay but it isn't yours? special kinds of taxes?).
And yet I think avoiding dystopia requires going that way. I have no idea what it would look like.
Of course, there's an alternative.
Google/Twitter/FB/etc. can agree that they don't censor anyone unless that person breaks the law. That puts the discussion right back where it should have been in the first place: In the public, political sphere, where The People have the ability to influence the outcome.
But then, why would Google etc. do that? Too enticing, all that power.
It's worth noting that there's also a side that wants the opposite of that, to censor bad opinions, but not be able to refuse making a gay wedding cake.
Hypocrites are not limited to specific groups, they're universal.
No one is forcing Google or their employees to write hate speech on a cake or an app though, that's a pretty massive difference in analogies. Nor are people asking for a free-for-all where Google can't delete any speech on their platforms.
People were fine when they were deleting spam and had limited content-restriction policies against things like directly promoting violence or posting gore/CP and other obviously over-the-line stuff.
I haven't heard many people pushing for governments to force Google et al to not be able to delete things from their platforms either - outside of some tiny fringes who don't understand how the internet works.
Which is therefore still consistently pro-freedom. Likewise, compelled speech plus censorship of an arbitrary and ever-expanding list of wrongthink is consistently authoritarian.
I really don't see the contradiction in either of these worldviews.
It's the classic centralized top-down puppet-mastery of individuals choices vs embracing the chaos of freedom of individual choice (within some limited boundaries). This battle has been waged for as long as society has been around and is a natural side-effect of power structures.
Both situations are really a matter of freedom of speech. Forget Google, does a person with a personal website that allows comments get to control what comments they want displayed on their site or not? Does that person get to choose what work they want to accept from a potential customer?
Both situations boil down to freedom of speech. Both have extra, specific laws that deal with their situations. Without a well-thought-out justification, a mismatch between the positions on those is likely hypocritical, no matter which you are for or against. A well-thought-out position may not be, but I don't think most people actually have a well-thought-out opinion on the intricacies of how these intersect and what it means; instead they fall back on what they would like to be able to do in that situation, or on their impression based on the way it was presented to them (I think contextual presentation is far more likely to blame for some of this than actual reasoning). It does little good to point out the hypocrisy of some group on a specific issue when that form of hypocrisy is widespread and rampant. We should also point out the cause of the hypocrisy itself.
There's a subtle distinction though: one is censoring based on the speech, and the other is censoring based on the speaker.
The cake example specifically is more subtle (legally), since there's an argument that the cake is custom. I think this gets very tricky legally, but on the broader point, it isn't hypocritical to say censorship based on concept is okay, but based on speaker is not.
> I think this gets very tricky legally, but on the broader point, it isn't hypocritical to say censorship based on concept is okay, but based on speaker is not.
I don't know. I think that depends on how acceptable you think it is to censor based on the Islamic religion, or the idea of homosexuality, even if you think censoring Muslims and homosexuals is not. At what point does censoring discussion about homosexuality become censoring homosexuals? I'm sure some people would say immediately, and to them, there's no difference between censorship based on who they are and censorship based on what they feel or believe.
That's why I say it requires a very well-thought-out argument. I can be convinced that it isn't hypocritical to distinguish these (I'm exploring my thoughts on this subject; I don't have strongly held opinions on it, other than that it's complicated), but nobody has made that case to my satisfaction yet.
> I think that depends on how acceptable you think it is to censor based on the Islamic religion
Let me give you an example: It is acceptable to enforce the rule that laws cannot favor Islam. It is not acceptable to enforce that Muslim individuals cannot hold positions in government.
The first is discrimination based on content, the second is discrimination based on, let's call it character.
That's not really answering the question, which I think gets to the point of the distinction you made. Is it okay to censor discussion of the Islamic religion? That's a concept. We can agree it's not okay to censor based on an individual, but you distinguished the types of censorship based on concepts and individuals. Why is it okay to censor Islam but not Muslims (or is it not okay, in which case your prior delineation of circumstances doesn't hold)?
> It is acceptable to enforce the rule that laws cannot favor Islam.
That's not even about censorship, so I'm not sure how it applies.
Again, this is why I think it's important to have a well thought out argument, otherwise it may be hypocritical. I'm not even pushing a different side, I'm just trying to get you to articulate specifically why these two things are different, and pointing to examples doesn't do that at all. It's just a list of value judgements that you assume someone else will agree with without providing the rational behind those judgements (presumably believing it's self evident).
If someone cannot distinguish why two separate situations are different but states as fact that they are, then they are being hypocritical, whether those situation are different or not. Nobody should be stating things as fact that they can't explain. Being hypocritical has nothing to do with the truth, it has to do with knowledge, actions and beliefs.
> That's not even about censorship, so I'm not sure how it applies.
I'd argue that political action is a form of speech, so laws that prevent certain kinds of political action are speech. How do you draw the distinction between a law, which you're arguing isn't speech, and political donations which are, at least under the law today, a form of speech. I'm claiming these are all the same thing, because you can't draw a non-arbitrary line between a congressperson addressing congress and a congressperson drafting a bill and me asking a congressperson to draft a bill. They're all speech. You're free to disagree with that framing, but that is how I view speech. It's for this reason that I also don't agree with the common-in-the US excuse of "it's just speech so it can't hurt anyone", or similar. The only difference between your "just speech" and a law is who is willing to listen to the speaker.
Now, you can claim that we're talking about government representatives, so things are different, which may or may not be true, but I'll accept that. Perhaps people who have more authority should be given less freedom, an interesting tradeoff. But let's ignore the government entirely and just discuss what makes the two situations distinct.
So, it clearly isn't okay to censor Muslims, because that rule cannot be fair/equally applied. It will, by definition, be discriminatory against individuals, which seems to be the bad thing we want to avoid (or maybe I'm wrong and you're okay with discriminating against individuals?). However, censoring certain topics may be done without discriminating against individuals. It applies equally, for example, to a Muslim person wanting to extoll Islam and to a Christian wanting to demonize it.
If we bring this full circle: preventing Muslims from speaking ensures that they will not be represented [in the discourse, in the government, whatever]. This can, realistically, only be harmful to them. Banning discussion of Islam certainly has the capacity to be harmful to them, but also has the capacity to not be harmful to them (for example if the discourse is full of demonization of Islam, perhaps banning discussion of Islam is a net-gain for individual Muslim people).
Obviously this requires, like, an actual fair moderator, and it raises a bunch of tricky questions (like is the meta-discussion of whether it should be acceptable to discuss Islam, itself an acceptable discussion?), but it isn't, call it, implicitly harmful.
tl;dr: Removing people from discourse cannot be beneficial to those people. Removing concepts from discourse depends.
>>> It is acceptable to enforce the rule that laws cannot favor Islam.
> How do you draw the distinction between a law, which you're arguing isn't speech, and political donations which are, at least under the law today, a form of speech.
No, it's not about censorship because it's not preventing speech, it's about ensuring equality. It's fundamentally different in the same way a law that gives a right is different than a law that prevents an action. There's a fundamental difference between reduction of something (or the increase of other things to match) and the elimination of something.
> discriminatory against individuals, which seems to be the bad thing we want to avoid (or maybe I'm wrong and you're okay with discriminating against individuals?).
Well, I did just say in the prior comment "We can agree it's not okay to censor based on an individual", so I'm not sure why it needs to be a question...
> However, censoring certain topics may be done without discriminating against individuals.
And censoring the types of cake you make may not affect individuals either, if it's over something like your dislike for dogs. Let's assume both cases affect individuals. If you think it fundamentally changes what the argument is to restrict the web services side to items that affect individuals, then that points towards some ambiguity in the phrasing of the things being compared on one or both sides. That's progress.
> for example if the discourse is full of demonization of Islam, perhaps banning discussion of Islam is a net-gain for individual Muslim people
I think we're on shakier ground if we're justifying specific actions based on perceived public opinion and actions, because that's extremely subjective. What one person views as fair and rational discourse on a topic another views as lies and slander (dog whistles are a thing, as is being accused of dog whistling when honestly requesting information).
> Obviously this requires, like, an actual fair moderator
That doesn't exist. By the nature of the items we're discussing, I think there's a good chance it might be impossible to exist. The fair moderator of today is the one seen as blatantly biased in the discussions of tomorrow. Social mores change.
> tl;dr: Removing people from discourse cannot be beneficial to those people. Removing concepts from discourse depends.
Does it? What's a concept that being removed from discourse is beneficial? I think this is a very subjective point, and can't be taken at face value. There are some people that believe no concept should be entirely taboo. Those people actually wrote our Bill of Rights.
While I agree that removing a subgroup of people from discourse is generally not beneficial to those people, I will note it's generally agreed that it's sometimes beneficial for society. That's what we do with criminals. It may be that it being okay for some groups and not others means the way you group (the attributes that make the protected classes) matters most here.
> Forget Google, does a person with a personal website that allows comments get to control what comments they want displayed on their site or not? Does that person get to choose what work they want to accept from a potential customer?
Why would you ever want to ignore Google in this situation? Unless you don't want to earnestly debate the real topic?
Their size, their control, and the fact that they run massive platforms for publishers to post content, whose consumers can often post their own content, with centrally controlled levers over hundreds of millions of people at any moment, make it pointless to ignore such important context and semantics.
Debating the role of these major gatekeepers at a cultural, rational, and moral level, in the context of the position they actually hold in society, is entirely valid, and essential to any useful argument.
Just because there are some weak analogies to governmental constitutions, or to a small business's, independent publisher's, or individual's freedom to choose what people post directly on their site/app, doesn't mean those are useful positions from which to judge what self-imposed limitations should be employed by major monopolistic companies who run entire portions of the internet.
Great power requires great responsibility.
The difference here is basic common sense.
The fact people keep resorting to these reductionist analogies, ignoring critical context and arguing from positions of fundamentally different levels of power/responsibility, in order to push some pro-censorship and expansive moderation of speech, shows the weakness in these arguments.
If you need to ignore significant amounts of context to make your points then you're being deceitful, whether intentionally or not.
I'd go even further to argue this intentional ignorance of context and semantics is a fundamental tactic used by pro-censorship activists - despite the fact context and intention are fundamental to almost all human communication, verbal or otherwise.
At a more local level it's commonplace for these activists to ignore the obvious intention and context of the speaker, and critically the tastes and tolerances of the specific audience itself, who are entirely voluntary consumers, making whole words, phrases, and concepts totally off limits regardless of whether they were used in intentionally derogatory or offensive ways.
It's basically intentionally missing the forest for the trees. All for some vague greater good. No matter how many false-positives, unintended side-effects, misunderstandings, or wasteful side-shows it creates.
> Why would you ever want to ignore Google in this situation? Unless you don't want to earnestly debate the real topic?
Because presumably Google isn't special and doesn't have laws and ethics that apply only to them? I'm trying to distill the issue to its essence.
> The fact people keep resorting to these reductionist analogies, ignoring critical context and arguing from positions of fundamentally different levels of power/responsibility, in order to push some pro-censorship and expansive moderation of speech, shows the weakness in these arguments.
If you think I'm doing that then you're making a lot of assumptions about what I was trying to communicate and why I commented, and you're way off base.
I'll make it clear, just because I pointed out a fault I see in a position doesn't mean I subscribe to the traditionally opposed opinion, it just means I'm willing to point out faults as I see them.
Additionally, hypocrisy doesn't care how correct one argument is over another. It's entirely possible for someone to be a hypocrite and correct. We have a situation where some people believe A is okay and B is not, and some people believe A is not okay but B is okay. Unless they can actually distinguish why A and B are different, both groups of people are hypocrites, because they're either regurgitating someone else's opinion, or they're speaking before they understand what they're talking about. Whether one side or the other is "right" doesn't matter; hypocrisy isn't about correctness, it's about whether the same knowledge and convictions lead to the same conclusions given similar facts. Without the knowledge to distinguish between situations, how can someone be so sure that they differ?
And that's before we even go to the fact that there's actually four possible combinations of those attributes, not the two that have been presented so far (both negatively), and that the "right" way to view this is one of those.
I'm not saying all people making such arguments are making consistent arguments, but there's a consistent argument to be had that being forced to say (or write) something (say, on a cake) and being forced to not say (or write) something are both compulsions in communication, and that one can be opposed to all compulsions in communication (positive or negative). I've heard several people argue that a cake shop should be compelled to sell a cake to a gay couple, but shouldn't be compelled to write two same-gendered names on it or compelled to craft a plastic figurine of two grooms or two brides for the top. I've heard it argued that if the gay couple wants "Susan and Jeff" and a little figurine of a bride and groom, or any other artistic expression the shop would do for a straight couple, that should be compellable, but the government shouldn't be able to compel artistic expression or writing.
Now, I'd boycott the hell out of such an establishment, but as an abstract argument, I think compelled expression is a bad idea. It's really not that huge a step from compelled expression to re-education camps.
Twitter's service isn't that they write 140 character prose for you, and YouTube's service isn't that they create videos to your specification. It would be hypocritical to demand that YouTube be forced to create a custom video to your specifications (or a ghost writer forced to write a book for someone with whom they disagree) and yet the cake shop shouldn't be forced to write two same-gendered names on a cake. These people arguing against forced cake lettering aren't arguing for forced book creation or forced video creation.
Once again, refusing to make a gay wedding cake makes you a jerk and worthy of boycotting, but there is a consistent argument to be made simultaneously against forced expression and against forced silence.
The difference is in the legal definition of protected class. It is illegal to discriminate against someone based on their membership of a protected class -- ethnicity or disability for instance.
Removing an opinion or banning a user based on violation of an agreed upon term of service is not the same thing. Having an opinion does not make you a member of a protected class, and a private corporation is free to allow you or disallow you from use of their services to broadcast that opinion. Newspapers have been doing this since the dawn of print. Google could not, for example, ban someone for being Jewish.
You can argue about whether sexual orientation deserves status as a protected class, but it is disingenuous to claim that the two are the same thing under the law. It is a false equivalency.
a) As I stated in the argument about the difference between discrimination against protected classes versus hosting content on a private server, I'd say that legally, these are not equivalent.
b) Ethically is an interesting question. Since the ethics of denying someone service based on their sexual orientation is largely viewed as reprehensible, maybe a better question would be whether or not (freed from questions of protected class) the baker would decorate a Nazi themed cake versus allowing federated apps that are largely used for the dissemination of white supremacist ideology to be hosted?
c) In principle, I'd say they are not equivalent for the following reason: selling and decorating a cake is a business transaction between two entities. The cake (decorated or not) ownership moves from producer to consumer. The consumer is purchasing a physical cake. If the cake is ever made public, it is at the behest of the purchasers of the cake, and any consequences of that public display will be suffered by the purchaser. Essentially, the baker's name is not on the cake, and no one needs to know.
Hosting apps or other contents affects the reputation of the hosting company, and damages to their business reputation fall on it. Think about Facebook being recognized as a conduit for foreign interference in U.S. elections, or whether or not the New York Times will accept ad content from an adult video company. The name on the masthead is the entity that suffers the damage first.
You could make the argument that the "socially liberal" side that wants to censor hate speech, but protect gay people from discrimination is logically consistent.
A social liberal could argue on the point of protecting the rights of a marginalized minority. By censoring (for example) calls for violence, social liberals are protecting the safety of the targeted group. By requiring a cake shop to serve gay couples (or interracial couples, to throw in another example), social liberals are protecting a marginalized minority's access to services.
None of the reporting I saw on the wedding cake cases actually described the cakes.
Are we talking normal wedding cakes, that you can buy from nearly any baker, with some ordinary decorations, that just have two men's names instead of one men's name and one woman's name after the "Congratulations", and have two mass-produced little plastic men on top instead of one little plastic man and one little plastic woman?
Or are we talking something you'd get from a baker like Duff Goldman, which is a custom designed and made unique work of art specifically for you that captures the artist's interpretation of your wedding, and inherently is an act of speech on the part of the artist?
Size (importance) of the company. Can your electricity company disconnect you because they don't like what you're saying online using that electricity?
An electricity company is a utility however. They're a natural monopoly in a way that a "website" can't be
If [large social media platform] doesn't want that kind of content, it's not unreasonable to simply make one to soak up that "ignored" market segment. Reddit can't shut you down for hosting your own internet forum for instance
it's a terrible precedent, and borderline corrupt. mayor garcetti has proven to be a weak orator and leader, and weak leaders resort to force rather than reason and persuasion. he's solely focused on winning political points with wealthy backers here and nationally, because of his ambitions (and ranking behind gov. newsom), the people and precedent be damned.
he's ineffective with the power he has, can't get the homeless off the streets, can't build housing or make it affordable, can't improve educational outcomes, can't reduce unemployment and underemployment, has no real empathy for regular people (despite his emotion-laden language, and spanish!), and yet he wants to reach into our private lives and coerce behavior at the margin (saying this despite wholly agreeing that house parties are a terrible idea right now, but let's persuade, not force, and have a dialogue).
the irony is that angelenos have been an exceedingly compliant group to his orders, adhering to both lockdown and _outdoor_ mask mandates at upwards of 95%. even if some of that is social signaling, that's startlingly high, enough to make any dictator proud. and yet he wants more.
>but I don’t think it’s the right precedent to set.
I've thought about this statement for about a half hour now. What does it mean? That punishing individuals for ignoring public health mandates is a precedent we don't want? Or is it just the 'utility' being used that is problematic for you?
I don't know how I feel about cutting off the water service, but "cruel and unusual" seems like a reach.
It would be cruel and unusual to deprive a prisoner of water because they have no other means of attaining it when you withhold it. Turning off city water service to a property is different. The property owner has other options they can take to get the water they need to stay alive.
In Russia the right to access clean water is enshrined in the constitution. If a village relies on a certain lake/river and loses that access, e.g. to pollution, the government is compelled by law to fix the issue at no expense to the residents. It is not good enough to say "they could hire a water truck".
This also strikes me as abuse of power; one must not suffer arbitrary and random punishments. What's next, will we start cutting internet access to everyone who swears on the street, or disabling electricity to anyone who protests?
> If [large social media platform] doesn't want that kind of content, it's not unreasonable to simply make one to soak up that "ignored" market segment. Reddit can't shut you down for hosting your own internet forum for instance
This is true enough for Reddit. It's far less true of the Play Store, because the platform (controlled by the same people as the store) throws up scary warnings if you try to install any other store so that almost nobody uses them, and on the only other major phone platform third party stores are prohibited outright. Which means to get your users to follow you, you don't just have to get them to visit a different website, you have to get them to replace their phone with one from a different hardware vendor, switch operating systems, and replace all of their other apps -- if that's even possible for them.
And what when the only two platforms both do the same thing? It's obviously not feasible for an individual app developer to create their own phone platform and hardware and get everyone to switch to it.
Google removing apps with spam/malware is one thing, & not something people would actually complain about. That also doesn't apply to removing the fediverse apps. They don't contain malware; they're simply alternative social networks. If they're going to remove them for having content they consider inappropriate or whatever excuse they're using, they need to remove other social media apps like facebook & twitter, because they certainly both have plenty of that too. Otherwise it just looks like an attempt to remove competitors to facebook/twitter. I also don't see what this has to do with a business refusing to make a cake. Businesses do have a right to refuse service. In this case, if you're going to remove apps claiming they violate a specific policy, but don't remove other apps which will inevitably violate it in the same way, it's reasonable for people to question it.
I have to say I generally agree with you that when someone points out a contradiction in some common political stance, the reverse of that contradiction exists in the opposite stance. It seems pretty common, though it generally results from distilling a more complex view into a simplified statement (which may edge into the territory of creating men of straw).
>generally results from distilling a more complex view into a simplified statement
couldn't agree more. it's so frustrating seeing this everywhere online. this isn't twitter, you can write as much as you want. I read comments online to try and understand other viewpoints, and I can't do that without substance.
The first hypocrisy is the defense of the right of a business to make arbitrary decisions w.r.t. service (not bake the cake), while simultaneously demanding that the business not have the power to refuse service (condemning private censorship).
The reverse position is not hypocritical in the same way, because condemning discrimination against customers on LGBT grounds is not at odds with censoring discriminatory speech - in fact, the two positions are aligned.
You could try to argue that private censorship is itself a form of discrimination, but most people who hold the second position would not concede that the people who practice hate speech are a minority worthy of protection - so for them, no discrimination is occurring.
Comparing "allowing/disallowing the use of a tool" to spread YOUR message, to the "demanding that an artist/artisan create a message of YOUR liking" is very disingenuous. To be fair, I'm a libertarian and in a perfect world, you can do whatever and allow whoever you want to use/not use YOUR business. But this analogy of the gay wedding cake is simply not a good one.
It was the first one that came to mind, forgive me.
The basic thesis of my analogy was "you want 'free speech' forced upon private companies, but also want to allow them the freedom to dictate what content they allow under their 'brand'"
Being gay is not a choice, the same as being black, a minority, etc. Being an asshole, on the other hand, is a choice, the same as being radical left or radical right. The former is protected, the latter is not.
It's not a deliberate trick on anyone's part, but rather complexity induced contradiction. Similar destruction has happened to other rights - jury trial via plea bargains, equal representation via forced arbitration clauses, "papers please" via driving and flying, unreasonable search and seizure via web services, double jeopardy via overlapping jurisdictions, and of course federated government via pervasive commerce.
Or another trick, just place it in another country and then you don't have to worry about the 1st amendment at all, and no I'm not talking about China. UK or Australia will do just fine.
This is not true at all, and neither is the comment you responded to.
The constitution doesn't apply to a location or a medium, it applies to an actor: the US government (and state/local subdivisions). The US government has to follow it everywhere, and nobody else has to follow it anywhere.
My comment was a bit tongue-in-cheek, as I think the parent was as well.
Americans do tend to run around quoting their first amendment rights like the whole world has them. As you say, it's strictly a US government thing.
While Australia for the most part enjoys free speech, it is not enshrined in the (AU) constitution. The government will occasionally order censorship[1], usually around whistle-blowing, investigations and court cases.
There was a protest in the major Australian papers last year about the erosion of press freedoms[2].
Actually, this doesn't work. There's a specific court doctrine called the State Actors Rule. If a private entity is working on behalf of the government, then all of the constitutional protections applied to the government also apply to that private entity within the scope of them being a state actor. For example, this is why it is unconstitutional for Donald Trump to block you on Twitter, or for the Air Force's esports team to block you from their Twitch streams. This also extends to physical venues and company towns.
That's because the airwaves are public, so the FCC could regulate / require equal time in the public interest. A TV or radio station isn't operating entirely on its own dime - it's using a public resource, so it should be done in the public interest.
There was no fairness doctrine applied to, say, newspapers.
Right, I meant they're not bound by the first amendment. There are plenty of other regulations that apply to specific outlets, but they're not rules to protect free speech
The first amendment, like less encompassing provisions in other liberal countries, was arguably intended to forbid prior censorship, whereby opinions had to be vetted by some authority before publication. It was expanded in its interpretation to other kinds of censorship, through the idea that a chilling effect on speech was comparable to a priori censorship.
Religion and morality are distinct, and the rationale for the separation of church and state is the power that religion wields by virtue of its structure. Look back to the religious conflicts that were raging and provided the background for these decisions.
As much as morality can share certain qualities with religion, there is a fairly clear distinction that's worth maintaining.
The First Amendment's guarantee of free speech is worded as an absolute, there are no exceptions in the text.
However, no functioning society could allow unlimited free speech. There are many exceptions to the First Amendment – fraud, perjury, defamation, death threats, "shouting fire in a crowded theatre", speech in violation of privacy or duties of confidentiality, etc.
I don't think the original authors of the First Amendment meant it to be unlimited. They didn't intend it to legalise fraud or perjury or defamation.
But, given they didn't leave any guidance in the text as to which exceptions are valid and which are not, it is basically up to SCOTUS to decide. And which exceptions SCOTUS accepts as valid change as the moods of its majority changes – and will likely continue to change in the future.
The equivalent provision in the European Convention on Human Rights is Article 10, which says: "The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary".
I think that's better than the First Amendment in that it acknowledges in the text the reality that exceptions are necessary, and makes some attempt to outline what the exceptions are. However, there is still a lot of room for interpretation by the European Court of Human Rights as to the proper scope of all those exceptions, especially regarding what is "necessary in a democratic society" and what isn't. Few would claim the Court always gets it right. But, at least, the European Court of Human Rights is arguably a far less politicised institution than the US Supreme Court.
> Most (if not all) EU countries protect the freedom of speech in their constitution.
Not particularly well, as it seems when compared to the US.
You can just look at some of the speech laws they have on their books to see that there are definitely more restrictions on what you can say, in comparison to what is restricted in the US.
You object to one of the specific exceptions in ECHR Article 10, not to the idea of making the exceptions explicit. The point I was defending was that the exceptions should be explicit, not that the particular list of exceptions in the ECHR is the right list.
This would imply there are interpretations where it isn't allowed even in the case of the worst, most extreme content. Does any organization/group advocate such an interpretation? As far as I'm aware, all groups either support an interpretation that allows the government to censor some speech, or support considering some speech as not counting as speech so that it can be censored without "censoring speech" (using sophistry to hide censorship of speech).
There's nothing intrinsically superior to either SCOTUS or any group or organization's interpretation.
I'm pretty sure you are able to interpret it as not allowing even in the worst and most extreme cases, you just don't want to. There's no need for an argument from authority.
I'm not trying to claim any one group is authoritative. I'm saying that I'm not aware of any group, regardless of their level of authority, who uses an interpretation that includes the worst material. Everyone (that I'm aware of) makes an exception for at least one form of material, even the ACLU or similar organizations. Even the Libertarian party, one of the groups most in favor of limiting government, is not against censorship of the worst sort of material.
> This would imply there are interpretations where it isn't allowed even in the cases of the worst most extreme content.
That would be the interpretation where someone actually reads the text of the Constitution instead of making up exceptions out of whole cloth.
> Does any organization/group advocate such an interpretation?
Yes, obviously. The Libertarian Party is one example:
>> … we oppose all attempts by government to abridge the freedom of speech and press, as well as government censorship in any form …[1]
>> We support full freedom of expression and oppose government censorship, regulation, or control of communications media and technology.[1]
Direct threats of harm are still actionable, of course. In that situation you aren't punishing the speaker for what they said but rather defending yourself in response to a reasonable expectation of imminent and irreversible harm. The speech is merely evidence of intent.
The issue is what gets classified as free speech. Perjury, for example, might not seem like a free speech issue, but under the most extreme interpretations it must be allowed.
As such, it’s common for groups to carve out what they want limited as simply not qualifying as speech. Aka we can ban spam because we are banning the medium and not the message. This then gets into issues like whether flag burning should be allowed, which blurs the line between message and medium. Thus simply saying you support free speech is a rather meaningless statement. People need to look at the specifics of what each group considers speech etc.
> Perjury for example might not seem like a free speech issue, but under the most extreme versions it must be allowed.
If your perjury gets someone else punished when they shouldn't have been, you are an accessory to that unjust punishment. Conversely, if your perjury shields someone from just punishment then you are an accessory to the harm they caused their victim, and potentially future victims who would have otherwise been protected. Once again it is not the speech which is the problem but rather the actual harm which you helped to cause.
> This then gets into issues like should flag burning be allowed which blur the line between message and medium.
That isn't even a question of speech, just private property. If it's your flag then you should be able to burn it, for any reason or no reason. Banning flag-burning on the basis of the message the act is attempting to communicate would obviously infringe on the freedom of speech under even the narrowest of definitions. If you're burning someone else's flag then that can be punished as theft and destruction of others' property independent of any speech which may or may not have been intended. Freedom of speech is not a "get out of jail free" card. If in the course of making your speech you also take actions which harm others then you can be punished for those actions; the fact that they were accompanied by speech is no excuse.
> Thus simply saying you support free speech is a rather meaningless statement.
I kind of agree with you here. Freedom of speech only takes on concrete form when combined with consistent support for private property rights, in which case you get freedom of speech as a natural by-product. The question becomes not "Is this speech?" but rather "Does this action cause injury or otherwise infringe on anyone else's property rights?"—something that speech per se can never do.
> Once again it is not the speech which is the problem but rather the actual harm which you helped to cause.
Unfortunately that doesn’t work, as outlawing harm from speech ends up outlawing political speech, because a speech can cause someone not to get elected. Similarly, advertising your product is detrimental to your competition.
Effectively you need to define what kinds of harm speech is allowed to produce. Anti Vaccination videos for example actually get people killed, and we accept that as a valid trade off in the US.
Again, it’s really tempting to define something as outside of free speech but selecting what is and isn’t allowed is kind of the point. So, you can’t simply punt this issue.
This definition of censorship is bad and must be changed.
Large general-purpose platforms should be forced to distribute all content which is not otherwise illegal, and only perform removals through the standard legal system.
I'm not on Google's side here, but... As nice as your sentiment is, there are plenty of real problems which have to be addressed.
Who defines, and what are the definitions of "Large" and "General Purpose" in this context?
Am I allowed to moderate content as fits with my ToS at X number of users, but at X+1 users I become forced to publish all content which is "not otherwise illegal"?
In regards to what is "not otherwise illegal", which laws from which country apply?
How does one combat things such as excessive spam? If I am forced to allow any content which is "otherwise not illegal", and my service becomes literally unusable because of someone posting 10,000 cat pictures per second (cats aren't illegal, I'm forced to distribute the cats), what is my recourse? Am I still allowed to rate limit? Because that, boiled down, is censoring a users ability to post "otherwise not illegal" content.
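For what it's worth, rate limiting can be implemented in a content-neutral way, limiting only volume rather than message. A minimal sketch of a token-bucket limiter (all names and parameters here are illustrative, not drawn from any real platform's implementation):

```python
import time

class TokenBucket:
    """Content-neutral rate limiter: allows `rate` posts per second,
    with bursts of up to `capacity` posts. Illustrative only."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens proportional to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(10)]
# The initial burst of 5 is allowed; subsequent posts are denied
# until tokens refill (1 per second here).
```

Whether this counts as "censoring" legal content under such a rule is exactly the line-drawing problem raised above: the limiter never inspects the message, only its volume.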
One big part of the argument of the judges was based on contracts. When a user makes a facebook account, it creates a contract between the user and facebook. The user agrees to share their data for ads and stuff, and facebook agrees to publish their posts. Thus they have to do that, they cannot just decide to not publish some posts, that would be a contract violation.
Just like when you order a pizza with 10 toppings, the pizza service cannot deliver a pizza with 8 toppings and still expect you to pay the full price
And as second part, the court has ruled that a clause "we choose which posts to publish" would be invalid in the contract, because a contract must be fair to all involved parties and not violate basic rights like freedom of speech.
My post was meant to be more rhetorical in nature, to demonstrate that overzealous censorship is not an issue which can be solved by simply stating "force all non-illegal content to be/remain published".
It's easy to hand-wave it to the courts, but there are many issues with that approach as well (e.g. non-technically competent people making decisions about technology, government not immune to corruption nor censorship, courts are slow by nature).
Your answer also conveniently side-steps the nitty-gritty implementation issues. How does a platform deal with spam posts if they are unable to delete content that is legal? If you don't like a website, simply post a few thousand Viagra advertisements to their front page for a few weeks and they either have to advertise Viagra or shutdown.
Speaking of advertising, why bother pay for ad-space on a website like Reddit, when I can just spam my product on every subreddit over and over again and they have no choice but to keep it up?
Will these rules also be applicable to other forms of media which also have a viewership above the arbitrary "large" line? Newspaper, books, TV? If not, why not?
These are just a few off-the-cuff issues that come to mind. I'm sure people smarter than me who take some time to seriously consider this approach will be able to find hundreds of such examples of why "forced to publish all non-illegal content" sounds great but is simply not a feasible solution to the problem.
As an aside, I find it odd that the answer to censorship by private companies is to offload everything to the largest centralized system with the longest history of censorship: governments.
They should have to choose between neutral publishing and liability on their networks. If they want to censor then section 230 of the communications decency act should be repealed and let them take ownership of their decisions.
As it stands now they get zero liability for speech they can clearly police. I see hate groups on Facebook that Facebook doesn't close when reported. Let them own it.
It’s hard to make a free speech site whose content guidelines follow the First Amendment, because the credit card networks will ban you. Gab, for example, can only accept checks and crypto.
It's quite disputable that Google even has the right to a platform of anywhere near their current power and influence (waiting for antitrust law to be enforced), and the distinction in any case seems quite artificial when Google buys politicians, receives significant public funding, and so on.
Google does not receive public funding. By your logic, a $2-off coupon on a $10 item is me receiving $2 from the store. Strange how I don't have a new $2 in my wallet - in fact I seem to be missing $8.
"but it's not a violation of anyone's First Amendment rights."
True. But while technically correct, it is becoming a more and more academic distinction.
Facebook-Apple-Google-Twitter, and maybe a few other tech behemoths, probably have greater ability to censor people and content than most governments throughout history. It's also true, in theory, that democratically elected governments could rein them in. But it's also true they have unprecedented power to manipulate public opinion, if they chose to do so, to get people elected or legislation passed or blocked.
We don't even have to argue about it. Folks agree to a legally binding contract that can likely be altered at any time for any reason, and I'm willing to bet verbiage about yanking apps for any reason is well defined too.
Decided to look, pretty clear.
"Google may make changes to this Agreement at any time with notice to Developer"
and
"Google may terminate this Agreement with You immediately upon written notice"
Corporate monopolies can easily force one-sided contracts on users, but that's just a reflection of their market power. In a perfect world judges would rule them unenforceable, in our world corporations fund the people who appoint judges.
I don't see anyone being forced to engage with Google, and it's not as if the play store doesn't provide value back to developers. It's enforceable because it isn't one-sided. In a perfect world we'd hold the legislators accountable for this mess, since they're the ones writing these laws.
> To be fully accurate, this is absolutely censorship, but it's not a violation of anyone's First Amendment rights. People often conflate the two.
You're technically right, but in practice it really doesn't matter that the "government" can't censor you when the modern "public square" is held almost entirely by private companies. The government doesn't really hold any power in shaping modern public discourse anymore, tech companies do! Now the fact that said private company's ideals line up closely with one certain political party is just a coincidence, I'm sure.
You could look at this like private citizens trying to block access to public lands (beaches) in order to make them de facto private.
Federated apps are trying to establish a commons, right? If Google can block access, then there is no commons. If the government can force easements on private land owners, I suspect the EFF could make a case for the same for App stores.
Nobody would be making Google host the commons, just provide access to it. This is a much simpler case, IMO, than forcing Twitter or Facebook to carry messages they don't want.
I am sure the local water or electricity company can't refuse to provide you service because they don't like your politics. And then there's the gay wedding cake controversy. Platforms that operate as a quasi-monopoly (search, social networks, mobile services, internet service providers) could very well be treated like utility companies, or something intermediate, with a similar obligation not to use their dominant position to control speech in the country.
Historically, the position of the censor has always been as the governmental authority. Censorship is a threat from a governmental perspective. The assertion that moderation of hate speech on private platforms is censorship is a form of moral panic. As the owner of private property if someone were to graffiti my walls without permission it would be absolutely absurd to suggest that the person who vandalized my goods without consent has more rights than I do to display the building in a manner I don't agree with. Tell me how that is any different from moderating a site to remove content that I consider contrary to my interests.
> Historically, the position of the censor has always been as the governmental authority
This is just plain false. The dictionary definition [1] disagrees with you, and Wikipedia explicitly mentions private organisations [2]. Religious institutions have censored a lot of things in the past [3]. If what you said were true, "government censorship" would not be a phrase people used, as it would be a pleonasm.
> As the owner of private property if someone were to graffiti my walls without permission
This is a false equivalence. You're not allowing anyone to paint on your walls, so they're not a place where expression is allowed. Censorship is selectively suppressing expression.
If you opened your walls for public painting and then removed a few paintings you didn't like, you would indeed be censoring those artists. That doesn't mean it's wrong or illegal; it's just the definition of censorship.
>> Historically, the position of the censor has always been as the governmental authority
> This is just plain false. The dictionary definition [1] disagrees with you, and Wikipedia explicitly mentions private organisations [2]. Religious institutions have censored a lot of things in the past [3]. If what you said were true, "government censorship" would not be a phrase people used, as it would be a pleonasm.
Nice strawman, but that wasn't my argument. My statement was quite clearly not about the definition but about the historical position of the censor. Let's first take the etymology of the word and examine it in context, shall we? The Roman censor was a magistrate who, in addition to running the census at regular intervals, was also responsible for morality laws, known as the regimen morum. Athens shows the same pattern: it was a state prosecution for corrupting the youth that led to Socrates' sentence of drinking hemlock and, ultimately, death. We can look at early China and see similar themes, in that the position of the censor was typically a governmental magistrate with high authority, in place to regulate moral behavior.
When the Enlightenment philosophers started espousing free speech, they were fighting against monarchical tyrants who were trying to quell dissent spread by a relatively new invention, the press. Sadly, during this era the Catholic Church was also engaged in the Spanish Inquisition and the persecution of Jews and of non-conformist, counter-establishment churches. Freedom of conscience, the ability to choose one's own morals, religion, and method of governance, and the denouncement of the absolute authority of monarchs were all part of the boiling pot that made free speech thinkers, and ultimately those who wrote the US Constitution, encode the right of all people to speak freely without consequence from governmental authority.
However, that is not to say that what you say is without consequences: a spoken threat can still be grounds for civil or criminal prosecution, and likewise libel or slander can be grounds for a lawsuit. Furthermore, while the Enlightenment philosophers who espoused free speech are often credited for the freedoms we currently enjoy, not many of them endorsed the concept of universal toleration. You see, there is a difference between freedom of conscience, speech, expression, and religion on the one hand, and universal toleration on the other. Universal toleration is the idea that hate speech, the suppression of minority opinion, community persecution of morality, and pornography of all forms regardless of the level of exploitation involved (including but not limited to child porn and snuff films) must be tolerated without let or limit.
There exists a boundary, a line of demarcation, which can be used to separate what is tolerable from what is not, and that line is consent. Can the parties involved in an activity consent to that activity? Two gay people aren't hurting each other, but threatening to beat them up certainly is. What is the age of consent vs. the age of maximal maturity? Are sex workers a blight on society, or are they conducting business on their own terms? Do people doing drugs deserve prison? These are intellectual and philosophical quandaries, and on some of them I fall to one side or the other. Personally I would like to see the ages for voting, drinking, drug use, consent, enlistment, etc. all be raised beyond even 21. But the one thing I think we can all agree on is that if you do something with the intent and purpose of harming another, then that is intolerable.
So where does that leave me on the corporate censorship debate? Well, there are different kinds of corporate censorship: the kind that I abhor and the kind that I endorse. I abhor the idea of a corporation censoring press coverage that shows it in a negative light despite uncomfortable truths. But if I am running a platform and people are spreading content that is contrary to my intent, that I could consider harmful to my business, specifically hate speech intended to suppress a group or ideal, then that to me is not even censorship; it is moderation of content and editorial discretion, and I fully endorse that.
Great. Google should become liable for all the illegal content that slips through, all the way to the officers and directors of the company.
I think we as a society agree that child porn and child sexual abuse are criminally punishable offenses. A platform in possession of it is clearly in possession, and engaged in distribution, of child pornography, hence officers and directors should be charged under those statutes.
Isn't that why we have safe harbor laws to encourage platforms to self regulate and collaborate with law enforcement in exchange for not being sued for those violations ?
Without those provisions no website could allow user created content, as they would instantly be sued to oblivion.
If Google wants to be viewed as a safe harbor, then Google should behave as one and stop deciding what is "hateful". What you/Google/the government consider hateful may be what I or the people in Belarus consider the truth. This trend of "censorship for your own good" needs to stop.
I believe all platforms do moderation of some sort. It sounds like what you are proposing is that nobody be allowed to do any moderation. How do you think that would work?
It would work as the internet is supposed to work, like it worked until about 2015. Slander and libel are punishable under law; that is the way to punish lies and excesses. For the rest, let's just talk, freely, and let the best ideas win.
> Isn't that why we have safe harbor laws to encourage platforms to self regulate and collaborate with law enforcement in exchange for not being sued for those violations ?
> Without those provisions no website could allow user created content, as they would instantly be sued to oblivion.
I want to make it clear that I do not have a problem with a safe harbor whatsoever.
If Google wants to prevent its users who ask it from being able to access some portion of internet, Google absolutely can do that. That portion of the internet should not be able to go after Google for acting on behalf of the users.
What Google is doing is preventing users who did not ask for that from being able to access that portion of the internet. It is akin to Verizon deciding "that content is bad, so we are going to block it in our pipe". If Google wants to do that, then fine: it is providing a "clean internet experience", and it should absolutely be punished when it fails at doing that (e.g. hosting child porn).
These apps are essentially web browsers for a subset of sites, and other web browsers on the Play Store already allow access to these sites in exactly the same manner as the banned apps, using the SAME PROTOCOL.
In respect to this move by Google, there are no differences between web browsers and Mastodon browsers. The only difference is that if Google applies their ban equally and fairly, they lose tremendous amounts of money by banning all web browsers.
We shall see whether Google follows its own policy on this matter or whether it is discriminating based on what lines its pockets.
Shouldn't this policy apply to all communication apps? Web, IRC, SMS, and even phone calls can connect you with a hate speech provider if you know the right address to dial.
This sets a dangerous precedent and highlights why we should continue removing Google/etc as dependencies in our lives.
Richard Stallman was right. The computer (smartphone) for millions of people has become a prison. The smartphone has been a tragedy for the human species.
"Oh it doesn't matter they went after people I disagree with"
Okay wise guy, what happens if the other side takes power? Think that's never happened historically? Then they'll come after you and I. How about we just not give this kind of power to anyone and let people make their own choices.
It has been shown time and time again that giving unrestricted, uncensored access to billions will lead to the worst ideas proliferating. Look at QAnon and conspiracy theories running wild. Look at anti-vaxxers and how America, one of the most advanced countries in the world, is now having measles outbreaks after decades of nearly zero cases. Look at how something as simple as wearing a mask has become a controversial topic.
Yes, everyone would obviously prefer a fully open and uncensored platform, but the reality is that those are very easy for bad actors to take advantage of. So many things on the web are disappearing thanks to these bad actors: public APIs are getting locked down, captchas are everywhere, passwords and 2FA are getting increasingly more complicated, and so on.
If you really think every platform should be 100% open, you live in an idealistic universe that is not this one. The whole idea of "the solution to bad speech is more speech" simply does not work. It just doesn't.
It's easy to blame all the flaws of human character on their most available and proximate expression, which is speech. There is violence, anger, hatred, envy, all kinds of evils in the world; the world isn't perfect, man isn't perfect, and trying to control others' thoughts isn't going to change that. That would actually be regressing several hundred years, back to the Dark Ages. If your assumption is that banning free speech would improve things, I would say that is the idealistic fantasy. There's no proof that it would work, or that it wouldn't have the same unintended consequences as before. The more these platforms push back against their users, the more they turn the public against them, and the more opportunities they create to be disrupted. In a recent poll, 3 out of 4 Americans said they were willing to die for free speech.
> trying to control others' thoughts isn't going to change that
Controlling the spread of misinformation is not "controlling others' thoughts".
> your assumption is that banning free speech would improve things
Banning specific content such as anti-vaxx material is not "banning free speech". If anything, letting it run rampant on your platform actually helps promote it. Private platforms have no such obligation. It's as if I came to your house and spread lies about you to your family, and when you threw me out, I claimed that you were trying to ban my free speech.
This is the first I (as well as others here, apparently) had ever heard of the "fediverse". Wikipedia [1] helps me out with what it is technically... but can anyone who knows describe what its content is actually like?
In terms of hate speech or illegal content... does that make up the vast majority of fediverse content, in the way that pirated media makes up the vast majority of torrents? (Even though torrents can also still be used for 100% legitimate and legal purposes.)
Or is it like the dark web, which from what I understand is mostly legal content, but still hosts a sizeable proportion of content for illegal services and content?
Or is it more like what Reddit used to be, where it's 99% all for good and fun, but with a tiny minority of super-hateful communities? (These days all those super-hateful communities have been banned for a while, which is why I mean the "old Reddit".)
Just trying to get a basic context here. Not looking for speculation, but the impressions from people who actually use it...
I use Mastodon to get away from the negativity and hate speech on Twitter and Reddit.
In my experience, there has been a very minimal amount of hate speech on my timeline.
The nature of the decentralized, federated system does allow hate groups to easily gain a platform. However, it's just as easy, if not easier, to prevent their instance from communicating with yours.
It's very much NOT like the torrent analogy you made and a lot more in line with your Reddit analogy.
I had some experience with Mastodon, which is a fediverse alternative to Twitter. So basically, think like this:
Anyone could start their own 'Twitter' server, and doesn't matter which server you are in, you can send messages/follow anyone in any server, unless the person or server blocks you/your server.
In Mastodon it's mostly legal content, but with different servers focusing on different subjects, or offering different levels of privacy and/or free speech. Some servers focus on being a safe space for transgender people; others, an uncensored place for alt-right people.
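The cross-server addressing described above can be sketched briefly. Fediverse clients typically locate a user on another server via the WebFinger endpoint that ActivityPub servers expose; the URL pattern below follows that convention, while the helper name and example handle are my own illustration:

```python
# Sketch of how a client resolves a fediverse handle like
# '@alice@example.social' into a lookup URL on that user's home server.
# The '/.well-known/webfinger?resource=acct:...' pattern is the standard
# WebFinger convention used by ActivityPub servers.

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a '@user@host' handle."""
    user, _, host = handle.lstrip("@").partition("@")
    return f"https://{host}/.well-known/webfinger?resource=acct:{user}@{host}"

# A client would GET this URL, read the returned JSON, and follow the
# ActivityPub actor link it contains in order to fetch or deliver posts.
# Blocking works server-side: the remote server can simply refuse to
# serve or accept activities for blocked users or instances.
```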
If anything the instances I have visited have been cleaner than twitter and Facebook. On Twitter and in Facebook comments I've seen some serious hate speech.
I haven't used Mastodon much, but in my experience, the Fediverse is mostly being created by people who are politically left (socialists, anti-capitalists in general), so the content I've seen leans that way and they mostly take a hard stance on hate speech. Check out dev.lemmy.ml (federated reddit) for an example of what I mean.
However, anyone can create an instance in the Fediverse, like when Gab created their own Mastodon instance. Basically every other instance chose not to federate with them, though.
That's what is nice about the Fediverse, you can pick what community with what rules is best for you.
There's racist stuff all over Google's biggest properties. Racist results on Google, videos on youtube, sites hosted on GCP. Give me a break with your fake trashy virtue signalling Google.
This is nothing more than a monopoly stomping out a platform it doesn't like. Probably because federation is a threat to Google's position as the central hub of the internet.
I really doubt google cares about federated applications at all. I suspect they're just trying to avoid a potential future shitstorm where they're accused of facilitating something nasty because they "approved" apps to be on the play store.
No, as others have said, you'd have to apply the same reasoning to a web browser. Google dominates web search, so there's no need to prevent other browsers, because they all drive Google's revenue anyway. Any app that allows you to break out of the garden is banned.
Viewed through the lens of "what will cause the most bad publicity": a browser is understood by 90% of the population; the "fediverse" isn't. It can be spun to cause damage in a way a browser cannot.
Google don't care about the federated stuff because it's not big enough to bother them, in fact it probably siphons all the crap they'd rather not index off to a place they no longer have to worry about.
They care about bad publicity and being seen to implicitly support "bad" things via their app store.
Seems the opposite to me: most of the Fediverse is web accessible, where Google Search can crawl it. Google's nemesis are closed platforms owned by other companies (like Facebook).
> This is nothing more than a monopoly stomping out a platform it doesn't like. Probably because federation is a threat to Google's position as the central hub of the internet.
Never attribute to malice that which is adequately explained by stupidity.
The real reason is that Google employees live in an ideological echo chamber where censoring anything that doesn't align with their ideologies is a completely normal thing to do. They think they're protecting the world from bad ideas, without noticing how they themselves promote other bad ideas.
Basically: the moralistic version of the Dunning-Kruger effect.
That's fine and reasonable on an individual level, but not only does it seem like a clear violation of the duty to shareholders to make otherwise unsound business decisions on the basis of "it's moral", it's also just not an accurate depiction of the business process.
Google is a business, and at the level these decisions are made, the decision is made overwhelmingly through the lens of business. A more fitting phrase in the context of a business would more accurately be "never attribute to goodwill that which is adequately explained by sound business thinking".
>They think they're protecting the world from bad ideas, without noticing how they themselves promote other bad ideas.
They, being Google executives, are only thinking of the company's bottom line when they make these decisions.
Don't underestimate the impact of a few thousand (or more) Rose Twitter users spamming a company's marketing page. That company goes to Google and says "make this stop, I don't care how", then Google execs decide whom they want to piss off, which is generally the people paying them the least amount of money.
On a related topic: I was trying to search for videos of the Wisconsin shooting by the 17-year-old who shot and killed two people and injured a third. Every video I clicked on had already been taken down due to offensive content. I found it strange, given that YouTube is full of videos of shooting incidents.
Now, I do understand the families of the victims may want the videos taken down. But it seems to me the mob of people flagging these videos have a different motive. This shows that trusting the community to flag offensive content has its flaws (although at YouTube's scale there's no real alternative).
That's actually very disturbing considering the amount of misinformation over this incident, and that they're charging that kid for murder!
Were you able to find the videos? They tell a pretty different story than the media narrative. Every person he shoots attacked him or had a gun. He doesn't shoot the kid behind them who had his hands up, or any other bystanders. He then walks with his hands up and turns himself in to police.
It's still up on several alternative sites (BitChute and PeerTube instances), along with some I can't mention here. I always use youtube-dl to download YouTube videos, Tweets, and Reddit videos of anything that's going on right now, especially shootings. They get taken down pretty fast, and it has a chilling effect because people end up seeing only the CNN/MSNBC/FOX versions, which are HEAVILY edited.
Earlier footage indicates the first man shot was attempting to start a fight with the militiamen, but there's a gap in the timeline where nobody was filming before the fight broke out.
The NYT has a very good article on the subject (and I'm a man who's very critical of the NYT)
OK I found a video showing the first fatal shooting [1]. It's not as clear as the video showing the mob chase (where it indeed looked like self defense) but from what I can tell the person who died got shot while running away from the shooter (it's hard to tell where the shooter is exactly during this time). Doesn't look like self defense to me. Hopefully there are better recordings of the whole incident.
Update: I take back what I said. I found another video [2] that clears up the picture. The guy wasn't running away from the shooter; he was chasing the shooter and throwing something at him. He then kept running after him, and the shooter turned around and shot him. The confusing part is that the shooter is shown coming from behind after the shooting happened, but that's because he goes around the car after the shooting and comes from the other side. I now believe all the shooting he did that night was self defense. He wasn't supposed to be there playing police, but that's a different story.
Watch the actual video. The dude pulls the gun and is bringing it around to shoot the kid. This is after the kid gets kicked in the head. The guy is also a convicted felon.
I do believe what he did was straight up self defense.
> The dude pulls the gun and is bringing it around to shoot the kid.
But the kid had already pointed his own gun and actually already shot someone.
If seeing someone pulling a gun and starting to point it at you is justification to shoot in self defense, then the guy who got shot was acting in self-defense, according to you. I guess the only thing he did wrong was not shoot quicker than the kid.
How can your self-defense logic only apply to the guy doing the shooting but not to the guy not shooting?
This is the issue. There is ZERO video of this (only the aftermath). We don't know why that first guy got shot in the head, or whether it was this kid that did it (forensics will show), and that guy he shot had a criminal record.
It could have been a self defense shot too. His ability to control himself after the two he shot on video reinforces that idea.
The kid had zero record. It's a bad situation for sure, but I don't think there is enough evidence to say the kid didn't act in self defense.
Self defense is an affirmative defense, so the onus is on him to prove that he was acting in self defense, not the other way around.
Anyway, it's a lot more complicated than this. I expect the prosecution to make the argument that he deliberately travelled to the event with an illegal weapon to provoke a situation where he could justify attacking someone in "self defense". That's still murder, even if he was in a genuinely threatening situation. And while Wisconsin has no duty to retreat laws, juries can consider opportunities to retreat when deciding if an act of self defense was actually necessary.
This is also clearly not community policing, since he obviously travelled from Illinois to aggressively confront a community he is not a member of.
I don’t think shooting three people can be considered a demonstration of self control. The standard is to shoot zero people, which I guess everyone else managed that night?
> The kid had zero record.
That can't really be used as proof of innocence. You wouldn't be able to convict anyone, since everyone starts off with zero record. He's only 17, so he hasn't had long to avoid serious trouble.
The guy in this thread being downvoted clearly has a political bias, but the part of his argument advocating access to the raw, full-length video is reasonable.
If YouTube wants to disable comments, throw up warnings, or blur out gore, then fine, but suppressing media from a highly politically charged situation is a mistake.
> I want the truth. That's something all US media is failing to give us right now.
What truth is this you are so deprived of? I presume you have some "real" source of truth that's more reliable than media that fact-checks itself and issues corrections?
> that's more reliable than media that fact-checks itself and issues corrections
oh boy. I'm not even going to start in on this one. If you actually believe any of the "fact checks" done by your favourite news outlet, instead of going out and doing a lot of research from a bunch of different sources and viewing the full actual video of events in context, you're not getting the right picture; not even remotely the right picture.
We've never had a media that's more blatantly biased and unreliable than the one we have right now.
I would highly recommend you watch the 2019 documentary Hoaxed. You're immediately going to dismiss it when you look it up because of the people in it, which goes to show how bad current media bias is.
I don't think everything from the documentary should be taken at face value, but it's still incredibly valuable in learning how the narrative has been so insanely skewed today.
I gave it a watch. Honestly, it's lazy and dumb. The siloing of modern social media justifies the behavior of Cernovich and his ilk? There is genuine propaganda being actively promoted by malign actors in our media ecosystem, but Cernovich tries to use this as an excuse for his own malign promotion of propaganda and misinformation.
That movie is by and for Mike Cernovich's existing audience. It was crowdfunded by his audience, produced by his production company, and promoted on far-right websites where he contributes. The producers and directors appear to be mostly Infowars-style far-right propagandists. Even the glowing IMDb reviews describe it as "preaching to the choir".
The project of Hoaxed is to create a false equivalency between the misdeeds of the larger media industry and the specific business model of promoting far-right misinformation that people like Cernovich engage in.
edit: I broadly agree with this guy's take on the movie [1], which he expresses at greater length and with more clarity than I am presently able.
Right wing propagandists flooded twitter and reddit with deceptive edits, frame grabs, and narratives. I expect there are now multiple retaliatory take down campaigns by multiple groups and bot networks to try to preserve only videos that support their preferred narrative.
I mean, the NYT has the videos and very clearly shows that he acted defensively each time. I was surprised by how clearly they were willing to show that, considering their bias. I didn't really read the content around the video, but that alone was interesting. Of course, the title at the time made it sound like he was guilty, so...
Followup reporting indicates that he was not asked to guard the business that he was "protecting", that the business asked him and others not to get involved and they did anyway, and that he instigated the altercation with the men chasing him in the video.
Here's an interview with a reporter who was on scene and rendered aid to the first victim. He gives a minute by minute recount of what happened from his perspective.
The instigation that led to the second set of altercations isn't entirely clear.
Can't the same offending toots be viewed in Google Chrome? Just screenshot their own browser showing the same content they are reporting, and ask when they will turn the lens inward. Or find similar content in Twitter, Facebook, etc. This sets a dangerous precedent.
This happens all the time--recently, a podcast app was removed from the Play Store because it could be used to listen to content which didn't meet Play Store guidelines. The only way to fix it is to post about it and generate enough outrage that Google hears about it and can undo the ban.
When you say that's the only way to fix it, you are literally correct. There is no real ticket or support mechanism, no appeals process, nothing. The fastest way to raise an issue with Google is to email a journalist or hit up your Twitter followers.
App rejections were a talking point in the antitrust investigation of Apple. I don't think Google is quite so famous for its rejections, but it was part of that investigation for other abuses; hopefully any changes that come about will apply to Google too.
Perhaps the FTC should have an office dedicated to overseeing policy enforcement for the largest store platforms. There could be a mechanism for saying "hi, I was banned by store X under policy Y but I believe my app was unfairly targeted because Z circumstance and Q other apps should all be treated equally here, including the first party app they're trying to protect..."
A better fix is barring, by law, tech companies that control important platforms from using those platforms to censor legal speech. It simply should be illegal for Google to take down an app because it contains legal words that blaspheme against Google's California values.
yes. husky in particular is a fork of another app called tusky, that internally implemented a login blacklist of explicitly-nazi instances and instances with lolicon content, after the author decided they didn't want those users running their software. husky's explicit sole purpose is to be tusky without that blacklist. tusky has not been removed from the play store.
i'm not familiar with the other apps on the list but i expect it might be some issue like promoting such instances in their registration screen.
also interestingly, the instance OP links to, qoto.org, is known within the fediverse for being full of creepers, because it implements a partial defederation and block circumvention. if you have an account on qoto.org, you can follow users who've blocked you, on instances that have blocked you, because it will recognize the block, pull a list of posts via RSS instead of via activitypub, and fake an activitypub actor internally to generate posts for your feed. in their defense, they have said that the posts are public anyway and that the user could just browse the public feed with a web browser, but it's clearly a bit different when posts from a person who has tried to block you appear in your feed normally, as an item you can interact with. it's certainly against the spirit of consent.
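A rough sketch of the technique described above (hypothetical code, not qoto's actual implementation; Mastodon does expose a public per-user RSS feed at `/@user.rss`, but the function names and post fields here are my own illustration):

```python
# Sketch of block circumvention via RSS: instead of fetching posts over
# ActivityPub (where the remote server enforces the block), pull the
# user's public RSS feed and convert its items into local "posts".
import xml.etree.ElementTree as ET

def rss_url(handle: str) -> str:
    """Map a handle like '@user@instance.example' to its public RSS feed."""
    user, _, host = handle.lstrip("@").partition("@")
    return f"https://{host}/@{user}.rss"

def items_to_posts(rss_xml: str) -> list:
    """Parse RSS <item> elements into minimal post records that a
    server could feed into a fake internal ActivityPub actor."""
    root = ET.fromstring(rss_xml)
    posts = []
    for item in root.iter("item"):
        posts.append({
            "url": item.findtext("link"),
            "content": item.findtext("description"),
            "published": item.findtext("pubDate"),
        })
    return posts
```

The point of the criticism in the comment above is that the feed is public either way, but routing it back into a normal-looking, interactable timeline item defeats the blocking user's expressed intent.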
8 years ago, "Reddit Is Fun" was removed from the Play Store because it included "sexually explicit" material, which was related to the inclusion of NSFW and hate subreddits in the app's default subreddit list. Google was fine with people adding that stuff on their own; they just didn't want the app promoting or pushing it. The app was adjusted, updated, and reinstated.
Big 4chan apps like Clover got banned off the play store long ago. The reason given was nsfw content, but in the app you had to manually add nsfw boards, much like Reddit's nsfw communities. Picking and choosing which social media platforms to ban has already been a thing.
I used to use chanu years ago, and I vaguely remember it installing with no imageboards registered to it. You had to go into the settings and tell it which imageboards you wanted it to access.
The thing that isn't being mentioned is that Google allows apps that make "reasonable" attempts to block content that violates their anti-hate-speech policy. Reddit has shown that it's willing to ban the very worst content. No idea about 4chan.
In the case of the Fediverse apps, they can't block anything, because firstly there are no resources to police it, and secondly it's kind of the whole point of federation to let the user see what they want without getting in the way.
It has been a while since I went to 4chan, but on /b/, with very few exceptions like child porn, the rule was "no rules", and it applied to the mods too.
So they can allow the worst kind of hate speech, porn, and gore, but ban a harmless meme because the mods find it annoying. The only rationale for banning images from "Cuties" may well be "because the mods don't like it".
I always found the "normalizing" argument to be odd. Society has never been more against pedophilia and the likelihood of it becoming "legal" or "normal" is so low as to be laughable. I don't feel threatened by fringe groups with no power, only by censors willing to exploit these feelings for their own gain.
It has never been normal and the idea it could ever be is a delusion. It is a tactic to censor content moral guardians don't like. Look at the bar of perceived "pedophilia" which keeps rising every year.
Using this rationale the Facebook and Twitter apps should be removed from the Play Store as well: an abundance of hate speech can be found in/with those apps.
I think the problem is that two things are getting mixed up.
First, there are content providers like Facebook or YouTube. Those platforms store their users' content, and since that content gets published, the content providers have to apply their rules to it.
Second, there are software providers like the Play Store in its original function, Apple's App Store, or Amazon's App Store. Those platforms should not care about content, just about software. If the software is malicious, ban it. But they should not ban software because of the content you can reach with it. That is the job of the content providers distributing that content.
That said, I am not particularly in favor of censoring content at all. I just accept that some content providers have their own rules about which content they accept and which content they don't want to support.
"Content provider" usually refers to people making that content (e.g. individual users of YouTube or Facebook). Maybe "platform" or "service provider" is more appropriate?
This does highlight how important the relative freedom of Android is. This is unfortunate but it does not stop the ability for people to load the APKs or get them off of F-Droid.
That being said, imagine if this happened on the Apple App Store...
For the most part, yes, I believe the blocking is for malware sites. The current list I'm aware of: malware sites, deceptive sites, suspicious sites, sites with possibly harmful programs, sites that load scripts from unauthenticated sources, and sites that Chrome believes were typed incorrectly.
I didn't intend to imply that Chrome currently, intentionally blocks sites on other criteria. However, the framework is already there.
History shows that the ego of people always takes over if left unchecked. When it is possible to restrict something, that will eventually get restricted. This is especially true when discussing large companies, such as Google.
I was agreeing with you and not correcting you :-)
My favorite app is Tusky, and it is still available from F-Droid [0]. I think Tusky will be fine, since it blocks Gab and so won't be affected by this ban.
The worst thing about censorship is that the censors can always find new targets once they've dealt with the old ones. They go one by one; by the time you find out that every word you say is illegal, it is too late.
Been looking for a good client. Can you disable the block on Gab? I know the place is a cesspool, but I don't like the idea of missing content because an app developer doesn't like an instance.
As much as I hate Gab, I hate the possible extension of "unwanted content" even more, especially considering Google's possible interests in China. I would not be surprised if the next apps to be removed are those that allow access to instances that freely discuss the Uighurs or the Tiananmen massacre (天安门大屠杀).
I've never used this service, but let's play along for a moment.
Let's say that 100% of the servers were bastions of hatespeech. It seems like the folks in this thread would be against banning it in that case. But I think that's wrong. If the app was 100% just hatespeech sites, then it should probably be banned.
Now we get to reality: what is the real mixture? 80%? 50%? 20%? I don't have any background knowledge to make a guess, but I think it would not be difficult to show that whatever percentage it is, it is far greater than, say, what you can find on the average internet site.
Why do I say that? Because instead of going on Facebook, which is so easy that over a billion people use it... you have to pick up an obscure piece of software and run a server, or something. So the only people who are going to pick this up are likely technophiles, and people who have been banned from mainstream sites.
I think this is a good example of "you can't solve human problems with cute tech workarounds", because I think Google is actually in the right here (given you think their ban on hate speech in general is legitimate). The technical details of how an app works, or the fact that it's federated or distributed, shouldn't matter at all. All that really matters to the policy is what a user sees when they download and open the app.
And on that front I completely understand where Google is coming from. Because the fediverse has a laissez faire attitude towards moderation (i.e. instance owners don't really block much) much of what you see on the fediverse is stuff that would get you banned on mainstream social networks [1]. And that content is the default experience unlike a web browser/chat programs/communities where you have to seek out that kind of content.
I expect that an app tied to a specific instance that put effort into moderating their content and put up some safeguards against seeing content on other instances by default with some popup like "content on other servers is not vetted, are you sure?" would be allowed to stay.
[1] Which makes sense since these are some of the only havens for people who can't make it on Twitter/FB/Reddit.
The technical details do matter quite a bit though, because the same logic could be used to ban (for instance) web browsers. Because the web is an open protocol that can be used to communicate with any website, even objectionable ones, every browser is an app that gives access to hatespeech content. Google may ban the 8chan app, but they don't ban browsers despite browsers being able to access the 8chan website.
Of course, Google won't ban the web, but only because it doesn't suit them to. What we wind up with is Google having arbitrary control over which types of communication protocols are allowed on Android with no need to justify their decisions, because any protocol could carry hatespeech, and that's pretty concerning.
We desperately need a mobile general purpose computing platform that doesn’t make side loading so onerous. Apple and Google are going to continue pretending they aren’t selling general purpose computers and they may well convince the regulators.
Consumers need to be able to choose an operating system that gives users full control. I don’t want to be confined to a desktop or a laptop.
Taking it outside of the walled garden should always be Step 1 for apps like this. PWAs exist, and most users these days have seen the "Add to Home Screen" prompt on websites. There is no reason for this to be a native app.
> This is particularly worrisome because for most people Google Play is the only way they understand to install apps at all.
I disagree. Users dependent on installing apps are the least likely to use fediverse apps to begin with. I use F-Droid or download APKs from XDA, but I have no idea how I would even get started with Mastodon or these other decentralized apps. I frequently hear the term "start an instance of ___".
What does that mean? Can't I just create an account? When I hear "start an instance", I think of launching a Linux droplet on Digital Ocean and setting up an app there. I can't imagine what an average user that wants a Twitter alternative would think.
You can of course run your own instance, but yes, you can also just make an account. You need to decide which instance you're going to make it on, but after that it's 100% the same as any other website.
I frequently visit a Chinese Dota 2 esports site: a small, independent site with a 10+ year history and a vibrant community of mostly young Dota 2 gamers.
Naturally, a lot of these visitors often engage in lively discussions of international relations and domestic issues. Nothing too sensitive from my perspective, and most of the time people contribute personal experiences that are quite valuable.
The site is frequently shut down for a few weeks, probably once every 2-3 months. Sometimes the shutdown is a long one, as long as a few months.
The reason, of course, is that the site engaged in "inappropriate discussion".
I want this to be a cautionary tale: despite different motivations, these powerful entities end up exerting influence that is almost identical in behavior.
The good thing is that there is a process in the US to correct such behavior, whereas there is none in China. And we have to stand up to protect that freedom.
I'm sure all the people who were for taking down certain Fediverse apps for hate speech from the F-droid app store won't have a problem with being taken down from the Play Store on the same grounds, right? After all, one can still download their apps somewhere else..
It needs to be regulated first by lawmakers, to prevent companies from applying arbitrary rules and discriminating against clients based on subjective, personal preferences.
Countries like Austria and Germany have a law called Wiederbetätigungsgesetz which prevents public display of neonationalistic symbolism or speech. I think such a law would be a great starting point for a new law preventing hate speech and discrimination based on ethnicity, religion, gender, or sexual orientation.
We can see a shift to the alt-right in the USA at the moment, where reasonable people and certain ideas like universal healthcare are being labeled as extreme left.
Apparently the far-right https://en.wikipedia.org/wiki/Alternative_for_Germany became the third largest political party in 2017, winning 94 seats in the Bundestag. So doesn't look like those hate speech laws are very effective.
As an Austrian, while I have absolutely no sympathy for our local neo-Nazi groups and far-right parties, I would personally like to see the Wiederbetätigungsgesetz repealed as soon as possible. While it might be true that most people who have been affected by this law were actually far-right bigots, it is still scary to me that my government is able and willing to prosecute based on opinion and speech.
Maybe a tolerable law could be modelled using some concept of truth (e.g. "I think the Nazis were great" vs. "The Holocaust did not happen"), but I am unsure how that could reasonably be applied when talking about contemporary affairs.
Some Fediverse apps have ban lists within them for certain instances. This has been hugely controversial in Fediverse community, to the point where some apps that fork apps with ban lists and republish them without those ban lists, sometimes get removed from F-Droid!
You can't just keep banning Fediverse instances. It's like banning websites. So what is this going to mean?
Approved instances. Here's a list of 200 instances ... get on our approved list to be a part of the app. That list might grow to 3000, but if you're not on it, your instance is not accessible.
> to the point where some apps that fork apps with ban lists and republish them without those ban lists, sometimes get removed from F-Droid!
Do you have links? The only example I could think of was Freetusky, which was unmaintained while upstream Tusky itself got regular updates. That's what I'd guess as reason for its quite recent archival: https://gitlab.com/fdroid/fdroiddata/-/commit/f9b7a9540f368f...
Latest version is 8.0.7 while upstream Tusky's latest version is 12.1.
I'm surprised they're taking it down. Why would they do that? Is the fediverse a threat to Google? It's a super niche thing for a small minority of people who are not Google's target users anyway.
Because they don't consider people who disagree with their political views legitimate, and wish to wipe them out via any means possible. The fact that the fediverse is small is neither here nor there: indeed, starting by picking off smaller players is a good way to establish a precedent and desensitise employees, so the next round of shutting down bigger players doesn't seem so bad. Boiling the frog, right in front of us.
As for why they do it, this essay may prove enlightening:
If we're going to play 6 degrees of Kevin Bacon with "hate speech" (a term that's loosely defined, of course) then there simply isn't going to be any internet left.
I really wish people would just ignore hate speech online like we've been doing for forever. I'm so tired of Google & Twitter trying to be the "internet police". It's so infuriating. The internet feels too "safe" now.
Obviously a "KKK Members" app in the app store is absolutely unacceptable, or something overt like that, but to dig into the content of an otherwise harmless app? Cmon, man. We're adults. Let us see what we want to see.
>Obviously a "KKK Members" app in the app store is absolutely unacceptable, or something overt like that
Why? if you want the internet to feel less "safe" and for people to simply ignore hate speech, why would this be unacceptable? It would certainly make the internet feel less safe for a lot of people, but maybe you're not one of those people, and those people can just ignore it as you said.
Wait? Isn't the Fediverse full of nazis, freeze-peachers and non-moderated unsafe instances? Oh, yes!
There we are. When framapiaf.org, mamot.fr, and the other "free speech" and "anti-censorship FLOSS activists" start applying proper moderation and clearly blocking fascist instances and users, maybe Google will reconsider publishing the Android clients on Google Play.
Those spaces are totally out of control and this is a part of the problem.
I use an Apple device and it’s decisions like this that make me glad of that, however, it’s just the lesser of two evils right now. Both mobile platforms are making scary decisions lately and my faith in them decreases daily.
I’m going to start supporting pine and any other open alternatives more. Pine is the only mobile platform I’m even vaguely optimistic about.
>I use an Apple device and it’s decisions like this that make me glad of that, however, it’s just the lesser of two evils right now.
Be aware that while it's possible for Android users to download Fediverse apps from an alternative app store like F-Droid instead, the same would not be possible on an Apple device if Apple were to do the same.
When I see how many people register on the Fediverse with Gmail accounts, even though it is supposed to be an escape from the tech giants (Twitter here, but still), and then complain and cry about a "Google Play ban", I laugh very, very hard.
Without getting into the details of this - yeah I disagree with it - but, Android allows side loading. Isn’t that what everyone wanted from Apple?
On a higher level, what’s the problem and what should be changed? Google has a policy that people don’t like, Android can side load, isn’t that exactly what people want?
Sorry to go off-topic, but the entire existence of the word 'side-loading' is some 1984-level language manipulation. As if installing applications without getting the permission of some abstract corporate owner is the weird thing. No, having to get permission is the weird way to install things.
as a computer moves ever more towards a white-label appliance with a single purpose, the act of "installing" becomes more and more alien. It sucks, but a majority of the computer-using population cares not for it, and only want it to work. Think washing machine - have you ever seen people want to install apps into their washing machine?
If the singular fucking purpose of the washing machine was to run apps, I sure as fuck would expect to see people wanting to install apps on their washing machine.
You can call it whatever you wish. But how did loading from any untrusted source work out for the average consumer over the last 30 years? Viruses, malware, ransomware, etc.
Yes outside of the little HN/geek bubble, it’s way too easy for the average consumer to install malware on their computer.
HN users have been whining about not being able to side load on iOS devices for years. What they really seem to want is to force the app stores to carry anything.
You must get really tired after destroying straw men all day.
1. It worked out great by fostering tons of invention.
2. Untrusted code with sandboxing works well for the web/javascript ecosystem.
3. Yes "HN users" want the freedom to run software of their choice. Discussing a single aspect where one system is better is not some total endorsement of the whole system.
Regarding #3, Android gives you that choice. The open source community didn't just complain about proprietary software; they created something. Linux wasn't easy to install alongside Windows on PCs, but they made a product that some people wanted and advocated for their position until Linux became the most widely used OS on phones and on servers, including Azure.
Google should be prohibited from pressuring device creators into not including other app stores and apps. While they do allow side loading, they still maintain an effective monopoly by making it difficult to use any other app store.
(Or at least that's the most reasonable argument/step that I've seen from an anti-trust perspective)
If you read HN you might know what sideloading is and how to do it. If you're anyone else and you even get as far as trying it (which isn't likely) you get a big scary warning about how obviously everything sideloaded is a virus and Google won't be able to scan it and you should use the play store (I don't actually know what it says, but the point is that it's a big scary warning).
The point is that sideloading presents several barriers to entry, and at every extra step you'll lose a few more users. Very few people will make it all the way through enabling sideloading, figuring out how to actually do it with ADB or something, and ignoring the scary prompts, to actually installing and running an app. This is a nice workaround for a few people, not a solution to Google aggressively applying their policies to remove apps they don't like that clearly aren't themselves dedicated to hate speech.
> If you're anyone else and you even get as far as trying it (which isn't likely) you get a big scary warning about how obviously everything sideloaded is a virus and Google won't be able to scan it and you should use the play store (I don't actually know what it says, but the point is that it's a big scary warning).
So now you want both side loading and you don’t want warnings about the possible risks of doing so?
So on one hand you’re arguing that users are too ignorant to figure out side loading but they are smart enough to not download apps that may be malware?
It’s not like any mainstream vendors have had side loading and introduced a security vulnerability....
I did not say we should get rid of the warning, I said it will scare some users off. That's the point of the warning, and it's a good thing, but it means sideloading isn't a viable solution.
So what do you propose? You want the government to come in and enforce fairness? The HN talking point has been that users should be able to install whatever they want - with Android they can.
The main issue here, IMHO, is that a) the mere ability to access content is what counts against you, b) this is penalized despite the system shipping with apps that can do the same thing, and c) the policy isn't consistently applied.
Roughly: web browsers, podcast players, and Fediverse clients (and probably a bunch of other apps; those are just obvious examples) all fetch content from a user-supplied source over HTTP(S) and display it. As long as they are not specifically promoting "banned" sites, they should all be treated the same in that regard.
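To illustrate how similar those three kinds of apps are at the protocol level, here is a quick sketch. The URLs are placeholders; `application/activity+json` is the media type ActivityPub actually specifies, and the others are the usual types for web pages and podcast feeds:

```python
from urllib.request import Request

# Each "fetch and display" app type differs mainly in what it asks the
# server for and how it renders the response.
ACCEPT = {
    "browser": "text/html",
    "podcast": "application/rss+xml",
    "fediverse": "application/activity+json",
}

def build_request(url, kind):
    """Build an HTTP request for a user-supplied URL.

    The request shape is identical across app types; only the
    Accept header (and later, the renderer) differs.
    """
    return Request(url, headers={"Accept": ACCEPT[kind]})
```

Under this framing, a policy that hinges on which of the three Accept headers an app sends is hard to justify on technical grounds alone.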
So what do you propose? Not only have most HN users been advocating that the platform vendors allow alternate means of installing apps and being forced to do so by the government, now do you also want the government to make rules about what is allowed in the App Store?
I propose that Google apply their guidelines consistently and fairly, ideally on their own. If it takes a shitstorm every time, that's very annoying (though it helped last time they tried to play this game, with podcast apps), but at least it fixes the problem for a bit. Maybe there are grounds for a lawsuit if they apply the guidelines massively unfairly; I'm not a legal expert on that.
Looking back at our last discussion, you have a serious problem of equating everything with a call for government intervention.
It is said that sideloading in Android will essentially be disabled in an upcoming release. It will still be possible for that small handful of nerds who know how to enable developer mode and install an .apk via ADB, but for the vast majority of people, the Google Play store is only going to become further entrenched as their sole source of apps.
I think this is really interesting. You hit the nail on the head. A lot of the discussion nowadays about app stores is about having your cake and eating it too. The app store providers are damned if they do and damned if they don't. What I hear is: "I like your platform for reach, but I would like to use my own policy."
> This is particularly worrisome because for most people Google Play is the only way they understand to install apps at all.
So this hurts the discoverability and credibility ("if it's not bad software, why isn't it allowed in the Google store like all the other apps?") of fediverse apps and networks.
I don't like this type of content. At all. But having Google or similar decide for me is frightening. More so are the people who buy into the idea of Google & Co. being benevolent dictators.
Marginalizing ideas that breed on marginalization is, to me, a fool's errand. Light is the ultimate sanitizer.
I could see that backfiring pretty badly. Just think of all the politicians who want to regulate (or even try to ban citizens from using Bitcoin) being able to say it's chock full of 'Hate Speech'.
It's hard to ban at this point - they could limit cashing out options, but the blockchain itself will live on as long as there are some places in the world where you can cash out.
I take the stance that the government can't actually ban or regulate Bitcoin - In the sense of changing the protocol or shutting it down. However the government can absolutely regulate my use of it as a citizen.
Wouldn't this be the same as banning web browsers because you can go to hate sites? Banning IRC clients because you can connect to evil IRC servers? Banning email clients because you can email hate domains? It's up to the user to configure and connect to a server in an open client.
I don't like hate speech. But the only thing that worries me more than the fact that there are so many people filled with so much hate is not allowing that to be out in the open where society can address it.
Is it hard for Google to blacklist these questionable domains and block them at the Android level instead, if the claimed issue is really a concern? That way browsers would also benefit from it.
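Mechanically, such an OS-level block could be little more than a shared blocklist lookup that every network-facing app consults before connecting, roughly analogous to what Safe Browsing does for browsers today. A hypothetical sketch (function name and blocklist entries are made up for illustration):

```python
def is_blocked(host, blocklist):
    """Return True if host equals, or is a subdomain of, a blocklisted domain."""
    host = host.lower().rstrip(".")  # normalize case and trailing dot
    return any(host == bad or host.endswith("." + bad) for bad in blocklist)

# Illustrative blocklist; a real one would be distributed and updated by the OS.
BLOCKLIST = {"bad.example"}
```

Note the subdomain check uses a leading dot, so `notbad.example` is not caught by an entry for `bad.example` while `media.bad.example` is.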
This doesn't seem like a complete story or any kind of rationale for taking down apps, if you have a problem with hate speech you should probably get off the internet.
> if you have a problem with hate speech you should probably get off the internet.
The internet, yes, as one cohesive whole. There aren't safe places, adult content can't be hidden from kids so let's not let kids use google until they're at least 18, maybe 21, and hate speech is something we see continuously everywhere...
I'm not sure what the end-game here is for Google, or for Apple, both of whom have recently been pretty openly flexing their monopoly power. There seem to be too many of these changes to be a coincidence, although that's obviously a possibility.
They seem rushed to establish some kind of precedent. Is there someone on Biden's team who is known to be a strong anti-monopolist, and is this in preparation for an administration change? I don't think Biden himself has ever had strong feelings here.
Or maybe someone at the Trump admin has pretty much given them a green light?
Rank speculation, all of what I wrote, but there seems to be a behavior pattern emerging recently among some of the most powerful tech companies.
>I'm not sure what the end-game here is for Google, or for Apple, both of whom have recently been pretty openly flexing their monopoly power. There seem to be too many of these changes to be a coincidence, although that's obviously a possibility.
For Google, seems like it's because there's an election in a few months and Google's doing whatever it can to stop Trump winning again.
I think censorship of hate speech is good, but doing so through the play store is a terrible idea. This will just promote more side loading which is not good for the security of android as a platform.
The play store needs to be fairly permissive in terms of content, so that people keep their phones secure. The primary purpose of app review needs to be security and quality, not censorship and rent-seeking.
When I opened up YT Music today, almost the entire screen was taken up by a link to a political playlist. I don't want to participate in politics; I just want to listen to some background music while I work. Why is Google silencing one bunch of creators and promoting another? How is this good for Google, the country, or the world?
Big tech is directly influencing people today in the way so many screamed "The Russians" were influencing us in 2016 (and back during McCarthy).
I have a 500GB microSD card on my phone with all my music. I've been collecting it for decades, many from artists I meet in bars and pubs. I buy off Bandcamp when I have to, which only charges 15%, compared to Google/Amazon/Apple which charge 30%+, or streaming services that pay artists pennies.
Take control of your music. Download it, buy it DRM free, save it, back it up.
Take control of your video feed. Subscribe to YouTube channels using an RSS reader. Use 3rd party frontends like Invidious.
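On the RSS point: YouTube still publishes a per-channel Atom feed at a well-known URL that any reader can subscribe to, no Google account required. A tiny helper (the channel ID below is a placeholder):

```python
def youtube_channel_feed(channel_id):
    """Return the Atom feed URL YouTube publishes for a given channel ID.

    Channel IDs are the "UC..." strings visible in channel URLs.
    """
    return "https://www.youtube.com/feeds/videos.xml?channel_id=" + channel_id
```

Paste the resulting URL into any RSS reader and new uploads show up like any other feed, with no recommendations or promoted playlists in between.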
If you're in tech and know how to do this, then do it, and help write posts showing how others can too.
Unfortunately, phones with SD card slots are hard to come by these days. I think that was intentional, to help prop up streaming, because each half of the duopoly has heavy investments in streaming media.
Time to start mass-reporting Google Chrome on the store because it allows access to child porn and Nazi forums, I guess?
And maybe also revise all those antitrust laws into something actually useful: they're using their (kinda, mostly) monopoly to hinder competition by selectively applying their rules to whomever they want, and never to themselves.
Disgusting censorship from such a position of power. Corporations like Facebook, Google, and Apple shape society; there is no getting around it. It should no longer be acceptable to use the concepts of 'capitalism' and 'private ownership' as an excuse for censorship on platforms that have become as good as public utilities.
The open web has never been more important. What next, will Google's Chrome browser start blocking fediverse web domains? That's their 'platform', too.
I'm glad I degoogled. This kind of thing is why.
It wasn't even hard after I got email switched over. If they didn't have a video monopoly, I would never have to use their stuff.
Not sure why this comment was dead, but I vouched for it. I've been slowly separating from Google for a while now for slightly different reasons, namely that I've grown increasingly distrustful of surveillance capitalism on the whole.
Any other mail service is already better than Google's. You don't have to host your own to get off Google, if that's what you're aiming for! There are a lot of other trustworthy providers. As someone who hosts their own email, I'd thank you for diversifying.
While, on the one hand, I don't have a lot of trouble with delivery even from a residential IP address and I'd recommend self-hosting, I understand anyone who's hesitant. My mail lands in Google spamboxes much more often than it should (I don't send any automated mail these days, not even website notifications; everything is hand-written or at least triggered by the person receiving the email, and my sending IP has been stable for a decade). Another downside of Google is that they hide the existence of the spam folder, so many people will simply never see it and never get the chance to update its filters by replying to me or marking my mail as not-spam. Heck, some Google-for-corporate mail service even blocks your email at the SMTP level, and there is no recourse. By diversifying receiving servers, at least Google doesn't get to set the one standard: if your mail doesn't arrive in a Google inbox, it's currently extremely hard to argue that it's Google's fault (when it totally is). As things stand, clearly I as an individual have an issue, and big Google doesn't.
I hate how terribly locked down all modern operating systems are. I recently started using a PinePhone. It's up to people in tech to show that regular/ordinary people can get away from these mega corps that seek to control everything about the tech we use.
This will suffer from a backfire effect where the result is you just get more technically savvy radicals. Google's decision here exacerbates polarization. However, I actually welcome it because I would totally join a fediverse with a higher bar to entry, and where ideas had a longer period and participants were committed to building alternatives. It's the real punk. Everyone else can entertain and outrage themselves to death on public platforms where they compete for the reflected approval of a hive mind.
I realize the sort of people who make decisions like this think they are doing this to "win," as the only thing on anyone's mind right now is influencing November, and making sure it "never happens again," but Googlers and tech people like this are creating a self-isolating minority of themselves.
Exiting what has become the Karen-net is probably one of the most interesting problems to solve right now.
Censorship, deplatforming, and cancel culture are some of the most dangerous developments that have been normalized by progressives. I am not surprised to see Google do this given their internal culture has been weaponized by the progressive left.
Does anyone know the US case law on the 1st amendment being applied to corporate actors?
I see the argument often that things like this are a free speech violation, but the 1st amendment says "Congress shall make no law...". It doesn't apply to actors other than the state.
I'm not defending Google's actions here, but I also don't think this is technically a free speech violation, at least as the amendment is written, so I'm wondering if there are any cases addressing this sort of censorship w.r.t. the 1st amendment.
There's the first amendment and there's the concept of free speech. This absolutely violates free speech but does not violate the first amendment.
The only current framework to protect freedom of speech from private companies is to designate them as common carriers, e.g. phone companies legally cannot police what is said over their wires.
I don't think it's necessarily anti-free-speech. The idea of free speech is that everyone should be able to express their opinions without being prosecuted for it, not that everyone is entitled to a platform for those opinions. You should be able to write an article about absolutely anything, but you're not entitled to put it in my newspaper.
I of course agree that the mastodon ban is a bad move because it sets a double standard where browsers can access arbitrary web content but other apps can't, and because it grossly limits the things that the average user can do with their device. However, these are consumer rights issues, not free speech issues.
I think what a lot of commenters are saying is that, within their conception of the bigger idea of free speech, they do consider themselves entitled to have their thoughts distributed by established platforms. Or, put more mildly, platforms like Google ought not to ban people for their content, because it's the right thing to do. But I don't see many people advocating for government intervention, not with specifics anyway.
I think that's kind of a strange take. Like, if you are entitled to put an article in my newspaper no matter what the article says, doesn't that unduly restrict my speech by forcing me to endorse your ideas by using my platform to distribute them?
If Google doesn't want to distribute your ideas, I think it's odd to say that that's wrong of them. I do think that it's a problem that they get to act as a gatekeeper like that in the first place. I think the solution is to break up larger platforms and create a diverse ecosystem of smaller options, not to forbid platforms from ever moderating their content.
While true for early adopters, this won't hold true as the network grows.
Moreover, if you can access the very same content from Google Chrome, should Google remove their own browser too? The same thing happened to Podcast Addict: police all the podcasts in the world, or we will remove your app.
The concern is that Mastodon app developers are often small one-man shops, less likely to be able to bring reason to bear and get Google to review these warnings. These app devs cannot possibly be held responsible for every post on the entire Mastodon network.
I know about them. But I would have to root my phone to install them. And that means trusting some binary blob from someone on the internet. So I would not do it.
On iOS maybe, but there's no problem installing 3rd party app stores on Android without rooting. See for example F-Droid.
Recent versions of Android have also improved the permissions model... you can specify which apps (e.g. F-Droid) are allowed to install other apps. You don't need to grant a blanket allow forever.
Whether or not you agree with the action taken here, Gab has provoked this behavior for years, by calling itself uncensorable and asking its users to fork and resubmit fediverse apps with minimal changes to explicitly circumvent the Play Store Guidelines (even apps that did not implement a block of Gab to begin with), so it is hard for me to feel bad for Gab.
That a few app developers have now been put in a position where they must implement the block is unfortunate. I always thought whether an app blocked Gab was a good indicator of the developer's morals, but not much more.