Yes, and I think that's one of the first complaints that many think of here, e.g. how many users on Twitter post worse things than Trump, how many thousands of websites worse than Parler does AWS host, and so on.
One might object "Sure, enforcement is not perfect and some bad actors will get by. But at least we are removing some things instead of nothing!", and while they are right about this as a first-order effect, they are missing what happens afterwards: people will, rightly, identify that policies are not evenly enforced, and this can have many further effects down the line.
You can still say whatever you want, you just can't compel others (including businesses) to assist you in conveying your speech. This is really basic stuff; it is surprising how much difficulty many otherwise smart people are having with it.
To be clear, you're referring to "the right, led by the President, organizes and executes a violent attack on the Capitol, so companies refuse to continue providing services to those people" as a "tit for tat".
There is a difference between cases where a business needs to spend effort to assist you and cases where it needs to spend effort to stop you. Here, preventing is actually harder: it makes no difference to AWS/Twitter which combination of numbers it stores, and it has to go out of its way to read those numbers and block your speech specifically.
The dominant cost here isn't the cost of the employee who has to update their database, it is the cost of lost advertisers or of customers angry that your company is providing services to terrorists.
Certainly a relevant comparison, but if sexual orientation is, like race and gender, a protected characteristic, then it may make sense from a balancing perspective to not allow that to be a valid reason, even if "affiliation with a terrorist group" remains a valid reason to choose to deny service.
What happened was an attempted political assassination: QAnon supporters stormed the Capitol with guns, zip ties, and pipe bombs, chanting "hang Mike Pence." I imagine we make "I will kill you" exemptions for free speech.
This is wildly incorrect. 'QAnon supporters storming the Capitol with guns, zip ties, and pipe bombs, chanting "hang Mike Pence"' is a dangerous media fantasy. A huge number of people demonstrated, a tiny number broke the law.
We are entering very dangerous, 1930s Reichstag-like 'problem, reaction, solution' authoritarianism with these free speech crackdowns. A Patriot Act 2.0 is ready to go against 'domestic terrorists'.
Quite the contrary. If anything, the coverage did not show the full story. The majority of these people were out for blood. They erected hanging stations. They fought with the police. This was no demonstration. These people were there to overturn the election and hurt or kill anyone who got in their way. Free speech decidedly does not protect this.
Everything said by the person you replied to is the truth, not hyperbole. Just because the mob and their fatal invasion of the nation's Capitol was not successful does not mean that their motives were not anti-democratic.
Why are you apologizing for the insurrectionists' actions?
MSNBC are an absolute disgrace IMO. 'The Proud Boys and QAnon supporters are the ones behind this.'
These are tiny bogeyman segments of society that cable TV bobbleheads use to terrify everyone and help justify removing people's basic rights. The incoming Biden crew have a Patriot Act 2.0 ready to go to combat 'domestic terrorists', which could include just about anyone who doesn't comply with the party line.
"Freedom of religion is not freedom from consequences, you can believe in X god but don't be surprised when we do not allow you into our shops and universities"
Do you see a substantial difference between your statement and the one above?
The question is not about what is defined as a protected class, but about why we needed to start protecting religious freedom in the first place. Allowing people to escalate disagreements on one issue into sabotage of each other leads to hostility and all-out war.
It's not as cut-and-dried as you suggest. Individuals--but not broad classes of people--can be excluded based on their behavior. The Nazis likely could be excluded as long as they insisted on wearing Nazi pins/garb, but not merely on suspicion of their beliefs. A spineless out-of-court settlement by an insurer doesn't tell us that much about how the case might have fared in court, only that the insurer chose to avoid further risk.
I think the more interesting part is that the ACLU was committed so strongly to non-discrimination in 1986.
And I agree the settlement doesn’t prove the claim was valid. But why would protection matter less when you’re wearing the garb of the political view you’re trying to express?
If the law is trying to protect unpopular political views from being excluded in public services, it matters the most precisely when they’re wearing MAGA hats or Newsom 2022 pins.
Yes, it is. It requires some impressive mental gymnastics to claim that on the one hand you have free speech, and on the other you have no realistic ability to exercise it. A right - especially a natural right - is only a right when it can be meaningfully exercised.
In particular, free speech only applies to speech you don't like, because nobody is attempting to shut down memes of cute kittens.
If that's how you define it, nobody will ever have free speech. There are plenty of things people can say that would get them kicked out of my house, de-friended, or fired if they were my employee. I bet you could even think of some scenarios yourself. There will always be the possibility of consequences.
The same thing said in one society can result in people kicking each other, killing each other, and then feuding over blood for centuries, while in another society it results merely in a lawsuit.
For an economy to work, large numbers of people who dislike each other need to cooperate, so it is generally a good strategy for a society to arrange consequences in such a way that they do not cause an escalating chain of feuds.
But violence is being planned on Parler, today. It is an incubator for the escalation you’re talking about. We’ve left the realm of the hypothetical.
If there’s a bar where the owner lets a gang hang out, recruit members, and plan crimes, the thing to do is shut down that bar. Not let it go unchecked because the gang will be mad about it.
'Violence' is probably being planned on Facebook, Signal, text messages, etc., by all sorts of 'evildoers', to quote GW Bush.
This doesn't mean you ban the medium. Section 230 defines Big Tech as platforms, not publishers. You don't close down AT&T because someone organized a punch-up via text messages.
All reports of activity on Parler seem to indicate it’s basically a breeding ground for right wing extremism, so I think it’s incumbent on you to show why it’s more like text messaging than the aforementioned gang bar.
That said, Parler is not being shut down.
Also, Section 230 has nothing to do with AWS’s relationship with Parler.
No mob has gathered to shut down Parler, so I’m not really sure where that fits in (ironic, though, since people are using Parler to plan literal mob attacks).
Parler’s business partners are abandoning it, sure, but that’s by no means a “mob”. It’s much more akin to the bar’s suppliers refusing to sell it ingredients for drinks, if we want to stick with this analogy.
But is perfect enforcement even possible?
If it isn't, does that mean companies shouldn't bother enforcing at all?
I agree, it's a very complicated issue, and there will be repercussions, but that shouldn't stop a company from taking down this sort of thing. No reasonable company wants to be associated with this. Especially if people on Parler are planning something worse in the future.
They banned the app because the moderation policies allowed for said insurrection attempt to occur. There are tons of posts explicitly calling for violence that violate Parler's Terms of Service, yet have not been removed from Parler, meaning that there's a failure to moderate the platform. If Parler actually moderated its content to prevent users from plotting insurrection, the app would probably not have been removed. But the whole draw of Parler is that it never would have removed that content. Since high-profile Parler users were promoting Stop the Steal and potential violence, the app's userbase would see removal as a violation of "free speech social media" and move on from Parler to the next thing that allows them to talk about it.
It's not punishing all Parler users for the actions of a small group on Parler, it's punishing Parler itself for failing to keep that small group from breaking its own rules, along with Apple's, Google's, Amazon's, etc. I don't see anything wrong with that.