I'll preface this by stating that I hate the idea of censorship and I always have.
However, were I to happen across someone motionless on the floor with a needle in their arm, I would give them NARCAN instead of a ride to a rehab center.
I'm from the United States. Social media is currently poisoning our country to such a degree that I think it may be too late to win the fight for free speech using free speech alone.
The idea is that we are supposed to be equally free to say whatever we want, regardless of our status or wealth. This has been corrupted.
The ability for one person to amplify their voice or ideas via hundreds or thousands of bots, paid assholes, and gullible people who lack the capacity for critical thought is a problem. It's a big problem.
Truth, facts, and the hope that people can apply logic to find their way to them aren't enough anymore, and the people exploiting this advantage are getting better at it.
I agree that what YouTube is doing here is a slippery slope and it's scary. However, I personally think that this is now a war for democracy, and things are going to need to be sacrificed.
I do not like the idea that hard, easily proven facts can be overwhelmed by voluminous bullshit. If this is allowed to continue, our country is going to die. Banning obvious bullshit, albeit akin to censorship, is NARCAN. The underlying problem exists and is dangerous, but this will keep us going long enough to hopefully find a more suitable solution.
I say this for emphasis, not to personally attack you: whenever I read or hear this, it makes me extremely suspicious, because it often doesn't come from those who would be making the sacrifice themselves.
Censorship and other forms of regulating free speech are simply never a good solution. Perpetrators of lies will even hide behind this, and create a false image of rebellious heroism, instead of facing public discourse and the consequences thereof.
YouTube is a private entity, but they correctly assess that they have a significant responsibility here. Curating social media content is not unheard of; Wikipedia manages it surprisingly well. Instead they could, for example, certify and emphasize content that is grounded in science and verify content creators of such. This is from the top of my head but there are certainly other people who have smarter ideas that don't involve straight up censorship.
Talk is talk. Actions should have consequences, which is why we have rules and agreements in the real world. I also very much doubt that censorship achieves what it is supposed to. People don't suddenly become enlightened (depending on context it might be the other way around) when you ban that stuff. It can easily become worse.
> Perpetrators of lies will even hide behind this, and create a false image of rebellious heroism, instead of facing public discourse and the consequences thereof.
Unfortunately, I think it's a bit of a classical liberal ideal that "the truth will come through and those who were wrong will see the error of their ways".
To some, the truth is a tool. If it's not a useful tool, it will be discarded. Probably a more useful tool, like hyperbole, decontextualization, gross mis-interpretation, or just adding "new facts" will be used.
I think it's naive to say that laissez faire speech is going to solve all issues, just like it'd be naive to say that peaceful protests would be a viable path to democracy in North Korea.
(Personally, I'd like to see bias recognition, propaganda recognition, and the ability to shift perspectives taught. A dumb, noncritical society is going to do and say dumb things)
If you say it is naive to claim that free speech alone will solve all issues, that is quite reasonable, but I would counter that it is just as naive to think that a carefully administered censorship is an effective remedy, either.
The only possible remedy in the end is to build our societies, our law, and our culture to be resilient against those cases when people are wrong, because they often will be.
There are countless examples in history where the majority believed medical facts that were utter nonsense, such as bloodletting, lobotomies, electric shock therapy, radioactive dinnerware, etc. It seems naive to think that somehow human nature has changed and the propensity of mobs to correctly determine what is factual or not is different in the 21st century.
There are countless examples in history where speech was suppressed by the general society as the main hindrance to enlightenment. And it was these times that brought such practices about.
Do you think we should filter out astroturfers and Russian bots? What about human written Russian propaganda crafted to harm our society? What about a US citizen who crafts propaganda to spread intentional misinformation and harm society but benefit themselves?
We don't let non-citizens vote. We don't allow free political speech or the right to assemble at polling places. We have independent election observers. We have to ensure that the voting process is not polluted by actors who would wish to harm it.
Similarly, there are actors who want to harm the free and good faith discourse of our public forums, and acting to preserve those open forums is moral, though it will always be fuzzy.
The Russian bot angle has just recently been used as an election campaign strategy, so it would be difficult to ban people on that suspicion if you think about it a bit. Worse, it has been used by unaccountable intelligence agencies to frame innocent people.
> Do you think we should filter out astroturfers and Russian bots?
I don't think that anyone is complaining about the filtering of deliberate fraud from outside influences. What people are complaining about are the obvious biases exhibited by the large social media companies. Take this new Hunter Biden example. Twitter claims that they don't want to promote hacking and releasing of private information. Hunter Biden's laptop was abandoned, not hacked. Besides, where was that privacy stance a week or two ago when Trump's taxes were leaked? Facebook claimed unreliable sourcing for the story. Where were such concerns when anonymous sources were claiming that Trump had said that our troops were "suckers and losers"? Or when anonymous sources were claiming that Trump had been peed on by Russian hookers?
> We have independent election observers.
Yeah and we balance things out by letting both major parties watch for funny business. Who watches Facebook and Twitter? Those companies are loaded with ex-Democrat operatives in senior positions. They've run out anyone who even smells halfway conservative. Facebook and Twitter have demonstrated time and again that they aren't just getting rid of fake news and bots. They're weighing in on complex political and statistical issues by silencing information sources that they don't like.
> there are actors who want to harm the free and good faith discourse of our public forums
Yeah and evidence shows that some of those people are controlling information from within the social media giants.
This ideal served us pretty well in the last century. Trust in objectivity grew in proportion to our investment in free speech and liberalism. Incidentally, we see a breakdown in this trust at the very moment that society is becoming more illiberal (Trumpism on the right and woke progressivism on the left). Correlation isn’t causation, but this seems to be as strong a causative signal as we get for these kinds of abstract social problems.
> Censorship and other forms of regulating free speech are simply never a good solution.
The almost free speech on the other side of the ocean seems to be working pretty well. You get to say anything that isn’t actively harmful to others, but you won’t be able to advertise bleach as a way to get Corona out of your system.
Sure, arresting people for quoting Churchill seems like a potential definition of "working". [1] There's no way that arresting people for a twitter post [2] could be abused. Nobody needs free speech for reposting rap lyrics. [3]
In the first case, about Paul Weston, the local Police Commissioner clarified the reasons for his arrest:
"It has been wrongly suggested that Mr Weston was arrested for reciting passages written by Winston Churchill. I understand he was not welcome outside the Winchester Guildhall, the Police were called and he was asked to move on. I also understand that he was not prepared to move on and was arrested for this reason."
> you won’t be able to advertise bleach as a way to get Corona out of your system.
You can't do this in the US either if you're actually selling bleach. Are you saying that in Europe ordinary people aren't allowed to publicly speculate about cures for the virus?
That is misinformation. The Mesa Arizona police aren't investigating the death of Gary Lenius as a homicide. His wife Wanda isn't in prison. The poison was chloroquine phosphate, not bleach.
You heard or remembered the story wrong. There wasn't a bleach couple. That was a form of chloroquine as an ingredient in fish tank cleaner. In some media, it was equated to (hydroxy)chloroquine as a medicinal drug even though this pharmaceutical drug has a high safety rating. Also, I don't think anyone went to prison, even though it's assumed the engineer husband would not have knowingly drunk poison. The wife also ingested some, but she survived.
In any case, the media can't pretend to be arbiters of truth, because there is bias all around. Trump's statements on disinfectants were not as incendiary as some people made them out to be. Disinfectants can sometimes be used in the lungs or body, but this is a question for the medical field. The media speculated about poison control calls surging after Trump's statements, but any small uptick seemed to already be present, since people have resorted to rigorous use of disinfectants in the home going all the way back to March. Finally, with the focus on lockdowns and isolation, we've actually somewhat missed the point of the DHS briefing on disinfectants and UV decontamination. We were supposed to be looking for simple innovations that made going to places like the grocery store safer and more manageable, but I haven't seen much progress in this area.
Nonsense. Almost free speech is censorship by another name.
Working "pretty well?" Sure, unless one tries to talk about things that really matter and strays outside the dotted lines of permissibility.
Some topics are verboten.
When you say "other side of the ocean," I assume you're talking about Europe.
Try to discuss the Muslim rape gangs in the UK.
Try to discuss the surge in grenade attacks and many-fold increase in rape in Sweden.
Try to discuss the homeless migrants living on the streets and the Algerian street wars in France.
Try to discuss Falun Gong in China. Or Tiananmen Square.
Try to discuss political assassinations in Russia, or the bribes the wife of Moscow's mayor hands out.
Try to discuss alternative political parties in Ukraine.
I think there are scores of political prisoners serving time for wrongthink who might disagree with your assertion that almost free speech is working pretty well.
If you want to discover who your masters are, learn who you are not allowed to criticize.
If existing reality seems to "work", is it so bad to exclude non-establishment innovations, while the larger world/markets continue to compete and innovate?
Obviously. It’s just that I believe the people that would be/are locked up under the current laws (and the laws in my 30ish years of life) deserve what they get.
Conversely, I believe there are a lot of people in the US walking around freely that deserve to be locked up.
You're getting free speech wrong, at least by european standards. In Sweden or in France, you can certainly discuss the topics you proposed without risking prison or worse. But free speech applies to other people as well, who are equally free to refute your opinion. Free speech is not censorship merely because it stops at other people's basic rights. That's merely finding a balance between fundamental rights that may come in conflict.
Russia and China do not have free speech. If you say the wrong thing there, you go to prison (or worse). That's censorship.
Conversations about Muslim integration are sensitive enough that you wouldn't mention them in your workplace for fear of being labelled racist and facing possible repercussions. Not quite illegal, but it's going in that direction.
"If I say shitty racist things that others correctly perceive as shitty and racist people won't want to associate with me" isn't even remotely comparable to the state putting you in jail and it requires an incredibly easily bruised sense of self to suggest it.
Why do you instantly assume that anything said is "shitty and racist"? You seem to be doing the kind of thing that I am referring to - nothing in my comment was racist, but you are doing your best to imply that it is.
And potentially losing your job is quite a big deal for most people. That's what happened to James Damore for bringing up the wrong subject.
Indeed, but you might note that happened in the US, and not Europe.
Not saying you think Muslims are dangerous is at the same courtesy level as not telling your coworker his hair looks like shit every day. Nobody is going to (immediately) fire you for it, but people will definitely start avoiding you if you do it to an obnoxious degree.
I think it’s interesting that people treat Islam as closer to an ethnicity than a religion. Nobody on the left is clamoring to fight against “Christophobia”, in fact most of them actively mock religion, unless it’s Islam. I honestly don’t think organized religion should be protected from criticism, because to be fair you have to treat all religions the same. Even Scientology.
It's almost as if what you describe as mockery of Christianity is largely punching up at a secure and itself mildly- to significantly-onerous institution in the United States and that defense of Islamic people is an example of punching-down xenophobia aimed at individuals that's dressed thinly as "criticism of the religion".
If Christians were targeted by violence on American streets after 9/11 (as they were--along with, you know, completely unrelated Sikhs), "the left" would say that's wrong, too. And the reason that the status quo with regards to anti-Islam sentiment is what it is is because Americans in a whole lot of places genuinely haven't progressed much beyond that mindset.
> I think there are scores of political prisoners serving time for wrongthink who might disagree with your assertion that almost free speech is working pretty well.
So you're claiming that Sweden has put people into prison because they discussed the increase in rape? Citation please.
Your whole comment is a prime example of unfounded FUD. [1]
[1] With the exception of Russia, China, and probably Ukraine, but in what world do they belong in one bucket together with Sweden or France in terms of democracy?
The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum—even encourage the more critical and dissident views. That gives people the sense that there's free thinking going on, while all the time the presuppositions of the system are being reinforced by the limits put on the range of the debate.
Depends on how you enforce your acceptable opinion I think. You need to find a way that makes it socially unacceptable without literally locking people up for whatever they say.
>> The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum—even encourage the more critical and dissident views. That gives people the sense that there's free thinking going on, while all the time the presuppositions of the system are being reinforced by the limits put on the range of the debate.
That sounds bad, but I wonder how Chomsky would feel if that wasn't the case and the "spectrum of acceptable opinion" included stuff like literal Nazism.
Like front page newspaper stories, senior politicians and even a dramatisation on the desperate-to-be-inoffensive BBC, you mean? Sure, not everybody approves of people whose take on the cases obsesses about the ethnicities of the perpetrators above all else, but free speech doesn't mean a point of view has to be agreed with.
People responding to someone with "that's racist" isn't censorship. It's others exercising their free speech to strongly disagree with the points raised.
That just means it's a low value conversation. The end effect is very different. In my country there are anti-racism laws that /can/ put you in jail for saying something considered racist. Btw, I'm not totally against those laws. To say free speech has to be completely, 100% free, say /anything/ and any message you can think of, seems flawed to me.
How hyperbolic. I'm struggling to imagine what you may have experienced, exactly, to put discussing such issues in the UK alongside discussing Tiananmen Square in China.
I had to google the UK and France stuff. They seem to be, as bad as they are, well-covered individual cases (one group of rapists in the UK and a couple of riots in Dijon). So what exactly are you not "allowed" to discuss?
Get a grip on reality; jumping from a non-Brit doing a 5-second Google search and maybe 10 seconds of headline reading to the conclusion that some kind of cover-up exists and works is really sad...
You flatter yourself thinking that you had that much influence over my opinions. The government have done research on this and keep flip flopping on whether this will be released to the public (currently not). Doesn't that sound like a cover up to you?
No, simply because they are publicly talking about it. Not much of a cover-up, it seems. Especially since the government is headed by Johnson, who I would expect to be the first one to publish such things.
The report still hasn't been published after months.
Or how about MPs telling victims to keep their mouths shut for the sake of diversity. It certainly gives me the opinion that they are trying to cover it up.
That is the same behaviour Trump shows when he retweets racist stuff. Both are unacceptable. The difference being, the MP you mentioned is part of the minority, and she deleted the retweet and excused herself. The article also describes the original tweet as coming from "a parody account" by a journalist.
So you just confused the opposition (it is a Labour MP, after all) with the government. And you totally ignored the origin of the retweet and the fact that the MP in question retweeted it and did not write the tweet herself. Nice job, really.
So you do admit that you have no idea what the current government party is in the UK and that you did not read or understand the article you linked. Fair enough I guess.
OK, so assuming that free speech must remain inviolate, which I don't agree with, as free speech always has limits, do you see the current state of constant misinformation working as intended? Is the United States as the founding fathers intended it?
I'm pretty sure parent meant western Europe, not China and Russia. Who would assume someone means China and Russia when talking about countries with free speech?
> The almost free speech on the other side of the ocean seems to be working pretty well. You get to say anything that isn’t actively harmful to others, but you won’t be able to advertise bleach as a way to get Corona out of your system.
Alternatively, you don't go to jail for suggesting that anybody actually implied injection of bleach as a treatment for coronavirus.
Speech that is controversial is the only speech that needs protecting. You don't need free speech to protect your cat pictures or tweets about a salad you ate.
Absurd. Who are the arbiters of truth and what can and cannot be said? There is no such thing as almost free speech in a free society except perhaps for clear cut physical threats.
Also big tech is going to end democracy as we know it unless we get serious. They can decide who gets elected into office and we are on a path to replacing elected representatives with kings.
The legal system provides the mechanism to address that specific category of speech.
“Defamation is an area of law that provides a civil remedy when someone's words end up causing harm to your reputation or your livelihood. Libel is a written or published defamatory statement, while slander is defamation that is spoken by the defendant.”
If you want free speech, you need to learn to live with free speech. That means not believing everything you see, thinking critically, and actually vetting the sources yourself. It's a societal process that plays out over time. The answer is not to have content oligarchs that decide what is true and what can be seen. Statements like this eventually are just seen as noise and ignored as people look for verifiable truth via trusted sources within platforms. In short, it works itself out.
Firstly, thanks for taking my comment in the manner it was intended. I was slightly concerned it might have gone down badly.
I take your point but I think you are brave to rely on the common sense of the entire world to protect you from false or misleading claims against you. Having seen a friend being roasted in the "mainstream media" with a story which was heavily biased against her, and seeing the venomous messages sent to her in the aftermath, I think I'm ok with some limits on free speech.
I really hate this hand-wringing over supposed censorship.
The Internet has had to deal with trolls and cheaters from day one. If someone sends a packet that indicates they got a headshot in a multiplayer FPS, they could be cheating, so we invented anti-cheat. If someone spams an email account claiming to be a Nigerian prince, it doesn't mean that's true, so we invented spam detection. If someone makes a blog post claiming product X changed their life, it could be an ad, so we have ad detectors.
Year after year, people find new ways to manipulate online communities. The people who run them have an obligation to prevent them from becoming toxic.
Now we have powerful actors manipulating political discourse and suddenly the rules are different? It’s complicated but that’s what happens when you serve billion(s) of users.
I'm not saying that nothing should be done to counter blatant lies and cheating. I just think it is much more effective, free and sustainable to use emphasis and counterarguments.
> Censorship and other forms of regulating free speech are simply never a good solution. Perpetrators of lies will even hide behind this, and create a false image of rebellious heroism, instead of facing public discourse and the consequences thereof.
> YouTube is a private entity but they correctly assess that they have a significant responsibility here. But curating social media content is not unheard of, Wikipedia manages it surprisingly well. Instead they could for example certify and emphasize content that is grounded in science and verify content creators of such. This is from the top of my head but there are certainly other people who have smarter ideas that don't involve straight up censorship.
I'm 100% certain that some crank and their followers are complaining about censorship on Wikipedia right now, because that community decided to not present their tendentious opinions or lies as the truth or to not present them at all.
The word "censorship" has been ruined: it's devolved into an epithet to describe any barrier, no matter how justified, between a speaker and his desired audience. It's not censorship for YouTube to ban this content, just like it's not censorship for me to ban anti-vaxxers and white supremacists from putting signs up on my lawn. The mass dissemination of truth (or our best approximation) has always depended on decisions to not disseminate. You can't expect the common man to sort through a bag of 99 compelling lies and one truth to find the truth. That's too much work for an amateur, and too many will settle for a lie. Everyone relies on others to improve the truth-to-lie ratio to some manageable level that an individual can handle.
It's also worth noting that one of the main reasons Wikipedia does better than YouTube is that its process is 100% manual. Google's bias towards totally automated processes driven by some quantity of shallow data points greatly reduces how effective they can actually be.
That last point is really intriguing. My comparison was unfair. Also the products are very different. With Wikipedia you get collectively curated and moderated content. With YouTube it’s many competing part-products that try to get attention. Apples and oranges.
> This is from the top of my head but there are certainly other people who have smarter ideas that don't involve straight up censorship.
So what can you conclude about the private intent of key decision makers?
I find that imputing malign intent often improves my ability to forecast outcomes. Whether there is actual bad faith almost doesn't matter if the assumption improves forecasts.
Care to make any predictions, then? I'm always highly skeptical of confirmation bias, and I find that when I ask people to make predictions they immediately get a bit nervous, which hopefully is enough to help them recognize that they're only willing to claim their methodology works and are not quite so eager to prove it.
I agree. The thing that makes me suspicious is an absolute demand that things have to be taken at, and discussed at, face value.
The existence of malign intent is hardly mysterious. It's like the Firesign Theater skit with a political guy declaring, "And you can believe me. Because I never lie. And I'm always right." Part of the gag is that the guy picked to read the line is capable of delivering it really, really convincingly. You go, 'wow! He really means that… hey waitaminnit…'
I think it's interesting to see these tech giants react to a repeating pattern of using direct manipulative lying as a weapon. It fits with some of my 'singularity' theories: corporations are like aggregate people, and AI is a still higher level than that, and it's neither easy nor desirable to use these newer forms of intelligence to replicate the worst follies of the individual human.
At some point, tolerating the wilder excesses of the individual 'cell' will kill the 'host' organism, which if it's a corporation depends on living humans, if it's an AI depends on living human societies. If you've got an AI that isn't really an AI and is just a large informational weapon, or a corporation that isn't really an aggregate person and is just the intentions of one human expressed through power, there's no constraint on the actions of the larger more powerful entity. But if you have the larger organism (whether it's a corporation or a proper AI with a broad enough awareness) it's going to eventually resist self-destructive actions even if they're expressed in a complex way. I see this as YouTube as a larger organism recognizing it is being used as a weapon and trying to protect itself against future punishment or diminishment by preventing itself from being used in a particular way.
> Talk is talk. Actions should have consequences, which is why we have rules and agreements in the real world.
But threats are talk, and threats often motivate action without undertaking it. So talk is, in a real sense, action. Many forms of speech can have profound effects.
For example, I say, "Pay me $400 more a month or I will evict you." If you believe I can evict you (even if I probably can't) then my talk has changed behavior. Can I then say, "I took no action?"
The way to preserve free speech (really "free promotion" or "free publishing" in this case, as there are other avenues for these videos) is exactly what you say — consequences. Newspapers have free speech, but are subject to libel laws, for instance.
There's no real mechanism to hold YouTube to account for promoting and spreading this stuff, and they are naturally incentivised for that mechanism never to come about. To try and ensure it never does, they're obviously settling for this.
Want "free speech" back? Give it meaningful consequences.
youtubers are also subject to libel (or slander?) laws. the ones with a large audience also make enough money to be worth pursuing in court. newspapers are no more liable than youtubers for merely being incorrect.
well consider the laws mandating equality of coverage in broadcasting in some jurisdictions if you like — those apply directly to the publishers, not the content creators.
The core point is the same: the platforms want to avoid enforced regulation, so they're self-regulating in ways that free-speech proponents don't like
Funny how "consequences" always means bad ones: "people who say something I don't like get the stick!" and then later "I never thought I would get the stick," sobs the person brutalized with the stick.
Where do you get the idea that free speech means "freedom from consequences"? You're allowed to say offensive things, but you're not immune from (eg) getting fired for it
> certify and emphasize content that is grounded in science and verify content creators of such
Respectfully, how does this help? My impression is that the majority of the people who believe that e.g. vaccines are detrimental think that credentialing by traditional sources that don't agree with them (e.g. mainstream media) is simply evidence that the party has been compromised.
You'd be creating a subset of YouTube that is easier to ignore for the people who wish to.
Ultimately, there's no YouTube policy that could force people to believe the truth if they're dedicated to believing otherwise. But there are a lot of people who are what we might call vaccine denier adjacent. They don't think vaccines are bad, but they see the anti-vax position as something that's being reasonably and honestly debated, so they're often sympathetic to concerns like "maybe there should be fewer vaccines" or "this one vaccine might be dangerous because public figures I dislike are pushing for it".
The benefit of the tagging would be to make it clear to that set of people that, no, anyone who understands what they're talking about thinks modern vaccines are very safe.
The problem with “hate speech” is that increasingly more things are being added to that umbrella. It’s kind of similar to the way that “think of the children” is often abused to cover all sorts of things.
A hate crime is defined as 'Any criminal offence which is perceived by the victim or any other person, to be motivated by hostility...'
A hate incident is any incident which the victim, or anyone else, thinks is based on someone’s prejudice towards them because of their race, religion, sexual orientation, disability or because they are transgender.
Evidence of the hate element is not a requirement. You do not need to personally perceive the incident to be hate related. It would be enough if another person, a witness or even a police officer thought that the incident was hate related.
That has nothing to do with expanded hate speech laws, and in fact the provincial human rights council in British Columbia ruled against the complainant in the article and ordered her to pay restitution to the salons.
I assume the GGP is referring to the C-16 bill introduced by the federal government of Canada in 2016, which added gender expression and identity to existing human rights laws on discrimination.
I have repeatedly seen a perhaps willful misinterpretation, popularly stemming from Jordan Peterson, that this bill criminalizes misgendering people when that's not the case.
>"After Bill C-16 amended the Criminal Code, Canadian law prohibited hate propaganda against groups that can be identified based on gender identity or gender expression. The bill also allowed for more severe sentencing if it is proved that a particular offense was motivated by a bias or prejudice against a person's gender identity or gender expression.
>
>However, experts say misusing a pronoun would not constitute hate propaganda, nor can it be used as sole evidence of discrimination.
>
>"If it's just the pronoun, not much is going to happen," explained Cheryl Milne, director of the Asper Centre for Constitutional Rights at the University of Toronto, told AFP."
You’re right, I was confusing the two issues. But this article from the CBC, with commentary from two legal experts, isn’t completely reassuring:
> Does the bill legislate the use of certain language? And could someone go to jail for using the wrong pronoun?
>In the Criminal Code, which does not reference pronouns, Cossman says misusing pronouns alone would not constitute a criminal act.
>“The misuse of gender pronouns, without more, cannot rise to the level of a crime,” she says. “It cannot rise to the level of advocating genocide, inciting hatred, hate speech or hate crimes … (it) simply cannot meet the threshold.”
>The Canadian Human Rights Act does not mention pronouns either. The act protects certain groups from discrimination.
>“Would it cover the accidental misuse of a pronoun? I would say it’s very unlikely,” Cossman says. “Would it cover a situation where an individual repeatedly, consistently refuses to use a person’s chosen pronoun? It might.”
>If someone refused to use a preferred pronoun — and it was determined to constitute discrimination or harassment — could that potentially result in jail time?
>It is possible, Brown says, through a process that would start with a complaint and progress to a proceeding before a human rights tribunal. If the tribunal rules that harassment or discrimination took place, there would typically be an order for monetary and non-monetary remedies. A non-monetary remedy may include sensitivity training, issuing an apology, or even a publication ban, he says.
>If the person refused to comply with the tribunal's order, this would result in a contempt proceeding being sent to the Divisional or Federal Court, Brown says. The court could then potentially send a person to jail “until they purge the contempt,” he says.
>“It could happen,” Brown says. “Is it likely to happen? I don’t think so. But, my opinion on whether or not that's likely has a lot to do with the particular case that you're looking at.”
>“The path to prison is not straightforward. It’s not easy. But, it’s there. It’s been used before in breach of tribunal orders.”
Hey thanks for educating me about this. I dunno why my inquiry was downvoted (was it not additive to ask for more sourcing?) but I did want to say genuinely I appreciate actual law I can point to and track over time. Has it been used to prosecute anyone that you're aware of, or how has it been applied in courts? Thanks again!
Sources certainly can add some factual, concrete evidence to someone's argument, so it's reasonable to ask for a source or two.
That being said, I think some just near-reflexively reply 'Source? Source?' as a sort of low-effort 'rebuttal' by implying it's 'just, like, your opinion, man' when they haven't necessarily come up with a well-thought-out argument.
I won't disagree that well-sourced arguments add much to a discussion, especially if you're not well-informed about the topic at hand.
--
But don't worry about the down-votes. It happens, who can know for sure why, they don't really matter, and if you're earnest in your comments, you're almost always going to end up in the black. :)
And since we don't use anything resembling the scientific method with our laws, there's no way of knowing if hate speech laws are effective. Being a law doesn't make it good, e.g. the war on drugs.
I hate it as well, and if I could come up with some new way of advancing our species aside from free speech I would. I cannot and cannot see how such a thing would ever be possible.
Here's a bit of sunshine: the human species has not changed in thousands of years. We're still the same hairless monkeys we were back when people were thinking up all kinds of cool shit. Knowing this, it follows that 10% or so of the population are, well, nuts. Another 10% or so are willing to do anything to become famous. It has always been this way. (Of course, I'm just guessing the exact numbers)
What's happened is that these folks are using this wonderful new instant worldwide publishing tool we've made to continue their pathologies. And companies are happy to make money letting them do so as long as nobody gets too upset. We are now reaching the point where lots of people are getting upset and those companies are running around like a single fireman in a city on fire trying to figure out how to make their business keep going.
The good news is that we're still the same species as before, and we've solved over and over again the problem of what it takes to have a peaceful and progressive society. The problem we have is that social media should not exist. It shouldn't be regulated, broken up, or any of that. It simply should not exist. The incentives here are not aligned with the survival of our species (There's a huge discussion of what to make instead. That's for another day)
I think the problem is not easy to solve, but it's certainly easy to explain. It's not the end of civilization as we know it, at least it doesn't have to be.
> it follows that 10% or so of the population are, well, nuts. Another 10% or so are willing to do anything to become famous. It has always been this way.
Not sure that's true, though. Here's a cloud to cover your sunshine: what if the very concept that beliefs have to correspond to observable reality is a relatively new one, or a relatively unpopular one (with a 10% of the population subscribing to it)? What if it's an unchanging constant that people believe whatever works for them in the immediate, observable way? What if it's natural for people to believe in lies, if these lies don't seem to cause obvious, instant hurt to them, and especially if these lies benefits them socially (going along better with their tribe, finding their in-group, raising status)?
I worry that for most of history, we could get away with believing in bullshit. And now we've scaled up and advanced our societies to the point where they need accurate beliefs to function. Problem is, holding inaccurate beliefs doesn't usually hurt you in immediately obvious ways, so our monkey brains don't connect the failures of our institutions with the bullshit individuals spread to each other.
Thank you for saying this. It takes the discussion in a different, but I think more useful, direction. Suffering is caused when you believe something that isn't true, and the rational idealists don't want to accept "it's natural for people to believe in lies", as you say. (There is a juicy irony here, which I will leave as an exercise for the reader.)
The key innovation we require, as a society, are better feedback mechanisms to correct inaccurate beliefs. This means more frequent, but smaller, pain in response to mistakes. To those who object, the alternative isn't no pain, but rather, the catastrophic pain of collapse. Society is really a collection of institutions, and somehow in all institutions, loyalty has usurped principle, and accountability fails because the accuser is shunned, hated, feared. So, institution by institution, we weaken.
The solution, for example in academia, is to shun (fire, blacklist, etc) those "researchers" who cheat, lie, and distort their findings. In the justice system, lawyers, police, and judges who break the rules should be punished more, not less, severely than the rest of us. Again, these are not changes born of malice, or a desire to see more suffering, but rather a price we must accept to maintain healthy institutions with high standards, and so, prevent societal collapse.
In some ways I think "cancel culture" is a reaction to this lack of accountability, as an attempt to replace it, to ape it, almost like a cargo cult. Of course, there are huge problems with this, not the least of which there is not an institution behind it, but rather just mob rule that answers to no-one, not even itself.
I definitely agree on the cancel culture aspect. I'm just out of college, and for people my age and younger, there is such a desperate and helpless anger towards the people in power.
There is absolutely nothing concrete that I can do on a day-to-day basis to protect my world, my friends, my family from people that I watch lie, cheat, and steal on national television. It's absolutely sickening.
I vote, and I canvass, and I donate. But man, when I see someone on twitter talking about how they think gay people should be shot like animals or something equally insane, it feels like there's one small right thing I could do, which is to make that person feel like shit. I don't, mostly -- it's just a toxic cycle for my personal wellbeing -- but I see the attraction. Yelling at people online (and to a further extent, excluding them from your world) feels like the only thing you can do sometimes.
I'm 20 years out of college, and I feel the same way. At my best, I remember to scale my energy according to proximity: things nearer to me get more energy. Part of the problem is that we're presented with a parade of information about stuff that isn't happening here, around me. That, coupled with the aching feeling that you're a bad person if you don't pay attention to what's happening in the world, even if it's far away. That is really sick, because it's guilting you into paying attention to stuff you cannot, by definition, control.
And then we, the techno-futurists, don't want to deal with what's in front of us because it's too small, doesn't scale, or because this problem will go away once we reach the economic Singularity or have flying cars or whatever utopian dream is in fashion.
Personally, I think we need better systems to leverage people's real, concrete pain into actionable change. Like, you should be able to donate money or time to fixing the things that really hurt you. Maybe it's the DMV. Or maybe it's traffic court. Or maybe it's a late fee on a credit card. Or maybe it's finding out that a fact you believed was made up. Right now, everything is so siloed. Imagine if everyone hurt by the DMV could pledge money to fix the problem.
> The key innovation we require, as a society, are better feedback mechanisms to correct inaccurate beliefs.
I have come to the conclusion that managing our collective delusions is more important than correcting them. Reality is unbearable to the human psyche. Most collective delusions are benign or even beneficial to society.
I’ve been wondering for some time if all cognitive biases are necessary for us to function; some socially, some because evidence is extremely scarce compared to scientific standards, some because of the need to avoid negative effects of exploration, etc.
This is my line of thinking. Which is why I see professions that deal with the real world (engineers, doctors, lawyers to some extent) remaining more grounded in their thinking when compared to those that focus on people's opinions (politicians, media, academics, professional moralisers). The former are regularly conditioned to think in terms of unavoidable realities, while the latter are repeatedly conditioned to drift further and further into their collective delusions.
I don’t actually know if this is true. Do you have studies to back this claim up? I don’t trust that the medical system is more grounded in its thinking: the Public Health Service purposefully withheld medical treatment for syphilis from hundreds of black men for over 40 years. They willfully decided to target black men with this disease and then refused to treat them when a cure became available, going so far as to prevent the military from informing or treating these men during wartime. Even at the time the experiment began, the ethical guidelines recommended syphilis be treated with what they had at the time, and the doctors knowingly ignored this ethical recommendation, continuing on as the study passed through the hands of multiple doctors working on preventing healthcare from being issued to black men for over 40 years. The medical community promoted people participating in this act, and it was only after repeated whistleblower attempts that the extent of this horrific behavior was uncovered. Indeed, it was only when the media started to cover it that the full extent of the inhumane, unjustifiable behavior at the hands of doctors came to light.
To this day we know that the medical field is systematically unable to accurately treat black and female patients equivalently to white and male patients. (For example, it failed for decades to recognize that heart attacks present differently in female patients.)
I don’t buy, without a formal study, that the average deviation from basic opinions of reality is greater in certain fields of study than in others. Claiming it without any evidence is itself a form of deviating from observed reality.
> Which is why I see professions that deal with the real world (engineers, doctors, lawyers to some extent) remaining more grounded in their thinking when compared to those that focus on people's opinions (politicians, media, academics, professional moralisers).
I very much agree. However, a largely unrealized ~fact is that people from these professions (well, ones who hold the "right" ideas) are regularly portrayed by politicians and the media as flawless, omniscient, beyond reproach (which is increasingly being physically enforced on the internet, with a net that grows ever larger). The downstream consequences of this likely well-intended behavior are extremely complex - sure, censorship will provide a setback to conspiracy theorists' ability to speak their ideas freely, but how much fuel does this provide to their resolve, and how many mainstream people will see what is clearly happening and dip their toes into the pool, and have their eyes opened even further when they discover the vast amount of actually truthful information that doesn't make it into the "trustworthy mainstream news" (even further reinforcing the idea in their mind that the public is lied to, and illustrating that this is done regularly)?
A meme war approach to governance seems like not the best approach for the sole (for now) superpower on the planet, and they have a very delicate balancing act to truth-proof the internet without the mainstream masses noticing and starting to ask questions.
We've been warned about this behavior many times throughout history by people wiser than ourselves; perhaps, as a species, it might be an opportune time to pause for a moment and consider where the path we are currently heading down at top speed eventually leads.
"A liar should have a good memory." - Quintilian
"If you tell the truth, you don’t have to remember anything." - Mark Twain
"The least initial deviation from the truth is multiplied later thousandfold." - Aristotle
> we've solved over and over again the problem of what it takes to have a peaceful and progressive society
... have we? Much of the 20th century was spent at war, with a death total somewhere in the region of a hundred million. As were most of the previous centuries. There have been isolated pockets where one might have a lifetime of stability over a wide area, but few of those were also "free".
> The problem we have is that social media should not exist
The challenge is to thread the needle between freedom degenerating into chaos and murder (Rohingya, etc) and total clampdown that also escalates into disappearances and murder.
Actually, if we could snap our fingers and wish social media out of existence tomorrow nothing really bad would happen, besides some influencers becoming poorer.
The opposite of social media today (election manipulation, divided societies, ethnic cleansing) is the state of affairs before social media, which wasn't bad at all. I'm struggling to think of large-scale benefits social media has brought us, because the individual ones like talking more to grandma and getting more dates are clear enough. But come on, people should just pick up that phone and go out more.
It is interesting that people were horrible or selfish or manipulative before the Internet, before the Web, and are horrible or selfish or manipulative AFTER widespread adoption of the Web and these "platforms". It's almost like arguing about the platforms is a side-point to the root cause of the problem - people's personalities and selfishness, and some people's gullibility without investigating "facts" presented to them.
It's obvious that the Internet and social media wouldn't solve these issues with people, or hide these flaws in humanity. I am baffled why people think it would. It's just another form of something governments or institutions cannot handle/control, which has been occurring since written history began. The only difference is the sheer quantity and ease of having a voice these days.
(For reference regarding my use of these "platforms": I use YouTube periodically to mainly watch the C++ conference videos to further highlight how out of touch I am with the modern iterations of the language, but have nothing to do with Facebook or Twitter since I found it to be a stream of people talking about themselves; by definition that is not a conversation, so is not social and should be renamed "Selfish Media" IMHO).
Your analysis seems on point to me even from someone who doesn't really use the services. YouTube definitely doesn't promote conversation of any kind. It's a platform for consumption with essentially no standards applied to what's on it (contrasted with TV which has at least some bar). I can't speak on Facebook, but Twitter is also lacking in conversationality. Replies are limited in length and the way they're displayed means that you have to go out of your way to see replies to replies. So, that's why both YouTube and Twitter are closer to "selfish media"
> Actually, if we could snap our fingers and wish social media out of existence tomorrow nothing really bad would happen, besides some influencers becoming poorer.
I disagree. I would lose the ability to contact many good friends.
Why are you dependent on Facebook for these things? If you suddenly lost your account, would you lose those friends, or would you find another way to communicate?
believe me, I would really prefer not to rely on social media for this kind of thing. unfortunately a simple list of names and numbers tends to decay over time. plus, some people in my life just prefer to be contacted over facebook messenger.
a surprising number of people don't realize that a) carriers are obligated to port their number and b) it is possible to transfer contacts when they buy a new phone. instead of fixing the original problem, they post "hey new number (xxx)-xxx-xxxx, add me!" or "new phone, send your numbers!" to facebook or instagram. I can't explain how this is supposed to work to every single person I want to stay in contact with, and I don't check social media much so I tend to miss these posts.
if I lost my facebook account, it wouldn't be a huge issue. I'd be able to track everyone down on other platforms. if all social media were shut down, there are at least a couple people I'd never be able to contact again.
No, I shouldn't just pick up the phone more. I dislike talking on the phone. I'm happy we have instant text communication. It doesn't require me (in general) to stop what I'm doing and talk to someone, and I can have a private conversation - well, at least private to anyone sitting around me. I truly don't want to give it up.
Obliterating email and whatsapp would probably put a lot of people out of business permanently, far more so than the pandemic.
"But I didn't mean those!"
OK, fine, now we need to define social media. Clearly everybody wants to count Facebook, Instagram, and Twitter. But some of the major incidents come from memetic circulation of fake news and racial hatred propaganda etc on Whatsapp. The Christchurch shooter (and others) were radicalised by 4/8chan. Both he and Anders Breivik also cited racial hatred material produced by "traditional", "legitimate" sources such as major newspapers.
(I wonder if anyone's ever done an "impact factor of evil" analysis on the citation graphs of mass shooter manifestoes? Going all the way back to Kaczynski, T and beyond)
> people should just pick up that phone and go out more
You've missed all the "millenials NEVER answer the phone" memes, then?
> The opposite of social media today (election manipulation, divided societies, ethnic cleansing) is the state of affairs before social media, which wasn't bad at all
My point is that none of those are new, it's just that people struggle to recognise the warning patterns. Countries that were badly hit by SARS-COV were more attuned to the risks of SARS-COV-2. Countries that were badly hit by 20th century fascism are more attuned to the risks of 21st century fascism. Hence all the endless (somewhat fruitless) discussion about who counts as a Nazi and when it might be OK to punch them.
It's not just the medium, we have to look at the messages. Can we ban certain messages without throwing out the medium?
Here's my definition of the elements of "toxic social media":
1) the automated surfacing of messages from random people because they are "popular" (i.e. highly engaged), mixed in with personal messages. This is the amplifying effect that makes things hot (a toy sketch of this amplification effect follows at the end of this comment).
2) the immediacy - a message can be spread insanely rapidly, faster than any individual humans can understand or respond to it. This is the fuel.
It's the fact that it happens at an inhuman scale that I think is the biggest part. Email at a human-to-human level is like letter writing. Information can travel through it, but it can't amplify by bypassing the human connections in the way that Twitter trending topics can. Likewise 1:1 or limited-sized groups on Whatsapp.
I'd even put Facebook groups into that bucket, so long as they're full of humans who know each other. It's once you get to groups with millions of people who don't know each other directly that things get out of hand.
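To make point 1 concrete, here's a minimal toy sketch in Python. It is not any platform's real ranking code; the Post fields and the engagement weights are made-up assumptions purely for illustration. It just shows how ordering a mixed feed by raw engagement lets a stranger's viral post outrank messages from people you actually know.

    # Toy illustration only: an invented feed ranker, not any real platform's algorithm.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        text: str
        likes: int
        shares: int
        from_friend: bool  # True if the viewer actually follows the author

    def engagement_score(post: Post) -> int:
        # Arbitrary assumption: a share counts five times as much as a like.
        return post.likes + 5 * post.shares

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Pure popularity ranking: no distinction between friends and strangers,
        # and no notion of whether the content is true.
        return sorted(posts, key=engagement_score, reverse=True)

    feed = [
        Post("friend_a", "family news", likes=12, shares=0, from_friend=True),
        Post("stranger_b", "outrage bait", likes=40000, shares=9000, from_friend=False),
        Post("friend_c", "local event", likes=30, shares=2, from_friend=True),
    ]

    for post in rank_feed(feed):
        print(post.author, engagement_score(post))
    # stranger_b (score 85000) lands on top of every feed it touches,
    # which is the amplifying effect described in point 1.

The exact weights don't matter; any ordering driven purely by engagement behaves this way.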
You raise a good point. Is the problem "social media", or is it simply democratized publishing? Would a "fediverse" of distributed blogs really be more resistant to misinformation, or would it just be harder to censor? Is this simply a problem inherent to the internet?
In the old days, the number of available channels for mass distribution of opinions was low, and people were in general restricted to face-to-face communication with a crowd, or possibly calling a radio show or sending a letter to a newspaper. This made it easy to control the spread when someone voiced opinions or information that were disruptive to society. Today, many people around the globe have considerably stronger protections of their free speech than 50 or 100 years ago, and this is combined with the fact that channels for mass distribution of opinions are now a commodity, freely available to billions of people.
Seen from that perspective, social media could be considered a part of the problem simply for being a way for people to voice their opinions, but as you wrote, a "fediverse" would not be more resistant to disruptive opinions. It will be a problem as long as people have a variety of ways to communicate with lots of people.
Why not? In the era of the attention economy, an algorithm generating engagement keeps people more engaged.
I think it's naive to assume that a fediverse wouldn't eventually include somebody developing such an algorithm that becomes popular among the average user base.
It is even more naive because everything was already optimized for engagement by selective pressure, without algorithms. Headlines were the original clickbait.
Blaming algorithms is essentially the "big lie" technique of repeating something enough that people eventually believe it is true. I noticed it before and dismissed it as the bullshit it was, but even computer-literate people are falling for the hysteria.
> ... have we? Much of the 20th century was spent at war, with a death total somewhere in the region of a hundred million. As were most of the previous centuries.
In fact, casualties of war, as a percentage of population, have been more or less constant for hundreds of years.
You suggest that not removing wrong YouTube comments "degenerates" to the situation in Myanmar. This depiction was presented by publishers that want to get a handle on YouTube because of changing business models, and is not substantiated by facts. Even Putin would do better, although that story was a lie and an election strategy.
Are you honestly claiming that banning paid lies on YouTube leads to Rohingya-style genocide?
It's increasingly difficult to have rational conversations with Americans these days, and this idea is certainly a big part of it.
It just astonishes me that Americans see little difference between a private company forbidding liars from using their free service, and people being rounded up at gunpoint and being killed.
I am not claiming that. In fact, it's probably necessary to prevent that kind of disaster, and I wish YouTube would ban a bit more aggressively - but also with more thought and better review.
(I seem to have been misunderstood quite badly, a sibling comment to yours makes exactly the opposite allegation! Also, I'm a Brit)
The risk of any ban system is that true-but-inconvenient stuff ends up getting banned as well. The very real risk that video evidence of murders by police might be banned for "promoting violence", for example. There's already the controversy over removing evidence of war crimes in Syria - kind of understandable, given that those must be pretty traumatic videos.
Nah, if you wanted to prevent the genocide you would need to kill the perpetrators until they stop. Blaming the channel for genocide is as facile as blaming encryption for crime, since in both cases the same people could talk to each other to plan crimes. Genocide occurred when humans were hunter-gatherers; stop blaming the tech for it.
That's not prevention, though, that's punishment after the fact. Prevention requires doing something about it in the very early stages.
(yes, this is a pre-crime argument, but crimes against humanity committed by and on behalf of states or warring ethnic factions are qualitatively different from ordinary crime.)
The idea that a loony subset are now on the digital-loose reminds me of how someone once described the reputation of Chinese group tourists. They explained that it wasn't indicative of all Chinese, but more likely a recently upwardly-mobile lower-middle class with new money to travel. It was implied that these were the equivalent of Australian bogans or American rednecks (I guess). After all, Americans have had a particular reputation as travellers also, so perhaps went through the same phase?
> The problem we have is that social media should not exist.
So you are saying we should just get rid of the internet? Social media is not just a few popular sites. It is the entirety of the internet. You get rid of the "social media" sites and people will still connect. Crap will still flow. You'll obviously see a temporary drop until people adjust and reconnect. The internet is about connecting people and making the exchange of information easier. You cannot get rid of "social media" without getting rid of the internet.
We aren't the same at all. In the last 10'000 years, we developed lactose tolerance, enabling us to herd cows for more than just meat. Only in the last 200 years or so have the first people with three arteries instead of two in their forearms started showing up, and recent studies seem to indicate that wisdom teeth are evolving away.
The evolution of a species never stops, even if it gets a bit slow. And our brains may have changed in the last 10'000 years too. Possibly even in the last 200.
You are right. Hardware hardly changed. However, one can argue that the software we install into the new copies of the hardware changed quite a bit. I think education and developing critical thinking skills should be the key to fixing those problems. Alas, the current trend is to get an education with as little effort as possible, which is destined to fail.
I really do think the hardware/software analogy here works quite well. In that sense, one could argue that the software has also become more complex, and perhaps more fragile, in recent times in order to better assist individuals in navigating this world.
To your other point, I think that "education" and "critical thinking skills" are often conflated as one and the same, or that perhaps one necessarily leads to the other, but this is certainly not the case. While I think being educated to some degree is important in order to share a "common language" with others and to become a contributing member of society, the ability to think critically, and to act on it, is really necessary if we are to succeed and not fail catastrophically along the way.
To go back to the analogy, having the ability to execute the software faithfully is one thing, but understanding when it might make sense to modify the software and how to do so responsibly given the fixed hardware we have available is really the critical piece.
Happy to engage you in another forum, which would probably be more productive. Ping me if you'd like to video chat.
But it's a good question and I'll try to reply in a brief and somewhat hand-wavy way (due to the medium, space, etc.). Everything I'm going to say has caveats, and I can't cover them all. Meh.
If you believe that the species as a whole has never been this unstable before, then the real question is this: what has changed? Whatever it is, it's obviously happened in the last 20 years, maybe even in the last 7 or so. The thing to do would be to identify those changes and come up with a testable theory that can be tested using the data. In general, however, I am persuaded that the overall problem is that we've incentivized consumer technology based on a battle for attention span. This isn't new; it's been going on forever. But the network effect has shot it through the roof, and even simple AI is proving quite capable of taking that business model even further. If it weren't, there wouldn't be so much money in it.
If you accept that premise, then that's how the conversation should generally go. If you don't accept that premise, then we need to back up a bit and talk about the effects of technology in general.
Here's a more detailed explanation and defense of my underlying thesis I published twelve years ago. This just didn't appear overnight; a lot of folks were trying to ring the warning bells earlier. (BTW, when I published it on HN, it got a lot of votes, but it also got a lot of "Well, sure, but 1) there'll be a new startup that will replace big tech soon, 2) we'll just invent some new app that will solve the problems we have from the current apps, etc") It was a sad demonstration of how all of us tech folk have a tendency to do the best we can to deny and ignore any implications of what we're doing aside from wondering whether our server is up. (I exaggerate for effect)
> If you believe that the species as a whole has never been this unstable before
What does this actually mean? The world is a very big place, covered by lots of political, economic, and power systems.
The West - primarily, but not exclusively, the English-speaking parts - has seen a big change in the way politics is conducted, with less dignity and more verbal viciousness, but this hasn't really been matched by physical or "kinetic" politics. The sibling comment that there are fewer bombs going off is correct. Even Islamic fundamentalism has got less dangerous in the West - while the great Middle Eastern war that started in 2001 rages on, expanding upwards to Nagorno-Karabakh.
If we want to talk stability, perhaps we should return to control systems theory. The feedback loops of the world have got shorter and faster. This moves the poles of the system around - and, without damping, makes it less stable.
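To make that control-theory framing concrete, here is a minimal sketch - my own toy model, not anything from the comment above. It simulates a discrete-time loop that reacts, after a fixed delay, to the gap between the system's state and a target. The gain, the delay, and all the numbers are assumptions chosen purely for illustration; the point is only that with a fixed reaction delay, a loop that reacts harder and faster (higher gain, no damping) goes from settling, to ringing, to diverging - the poles move out of the stable region.

```python
# Toy sketch, not from the original comment: a discrete-time feedback loop
#   x[t+1] = x[t] + gain * (target - x[t - delay])
# All numbers here are illustrative assumptions. With the delay held fixed,
# increasing the gain (a "faster, stronger" loop) makes the response go from
# settling, to oscillating, to diverging.

def simulate(gain: float, delay: int, steps: int = 60) -> list[float]:
    target = 1.0
    history = [0.0] * (delay + 1)   # oldest state first, newest last
    trace = []
    for _ in range(steps):
        current, delayed = history[-1], history[0]   # the loop reacts to stale information
        nxt = current + gain * (target - delayed)
        history = history[1:] + [nxt]
        trace.append(nxt)
    return trace

if __name__ == "__main__":
    for gain in (0.2, 0.7, 1.2):
        trace = simulate(gain=gain, delay=2)
        peak = max(abs(v) for v in trace)
        print(f"gain={gain}: final={trace[-1]:+.2f}, peak |x|={peak:.2f}")
```

With gain 0.2 the state settles near the target; at 0.7 it rings; at 1.2 it blows up, even though nothing changed except how aggressively the loop responds.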
Imagine being so privileged and blind-sighted that thinking that increased political animosity caused by social media in America is equivalent to the worst instability of the species ever.
How does this sort of comment help? To be constructive, please:
(1) Describe /why/ this is a privileged position, and
(2) Describe /why/ this is "blind-sighted".
As far as I can tell, the OP is right; it's political instability that can instantly touch the globe, along with WMDs, and a head of state who seems unwilling to adhere to a peaceful transfer of power (see: "Will he go?")
I'm really tired of the word "privileged", so much so that I've become a snowflake and get /instantly/ triggered by its usage. It seems to be the "commie" of our era.
I liked your blog post, this part stuck out to me:
> Intelligence is going down as fewer and fewer books are being read (news flash: the printed book industry is on the way out unless this trend stops), and social organizations like churches and civic clubs see fewer and fewer members attend their meetings. The skills that are increasing? Reflex time. Ability to solve abstract, short-timespan problems. Basically the skills we need to interact with our entertainment. More and more, Indians and Chinese – people coming from cultures who have been shut out of the technical world until recently – are writing software for hardcore western appetites to consume.
"Ability to solve abstract, short-timespan problems"
I wonder about this. I speculatively propose that there also seems to be a curious inability (or unwillingness) among people who happen to be blessed with the skill of abstract thinking to apply that skill to the greater world (the complex system we have built for ourselves to live within). I would say that abstract thinking, done right, allows the one wielding it to step out of the ~axiomatic frame in which they live, and to use it to evaluate the world with a relatively clean mind - to set their individual political beliefs and preferences aside and consider things from a disinterested third-party perspective. At least to some degree. And I think mass social media, including forums like this one, likely contribute in significant ways that we do not currently understand. Whether a change in our aggregate ability has taken place over time, I have no opinion, but an ambitious person could likely pick up on it if it had by reviewing conversations from a decade ago or so.
> If you believe that the species as a whole has never been this unstable before, then the real question is this: what has changed? Whatever it is, it's obviously happened in the last 20 years, maybe even in the last 7 or so. The thing to do would be to identify those changes and come up with a testable theory that can be tested using the data.
This seems like the kind of abstract thinking (disconnection from the specific object level characteristics of the problem) that we need more of. Do you happen to have any theories on what steps we should be considering to get us out of this mess, that could plausibly be applied at national or planetary scale (or, know of any formal or informal organizations that are working on such ideas)?
Your thesis is based on the notion that “society is unstable” more than ever before. How are you measuring that?
Despite tons of noise on social media, real life has been incredibly stable for the last couple of decades in the West. The least amount of violence, lowest extreme poverty, highest disposable income, etc. etc.
In the 70s (before social media), there were multiple bombings every year in the US. That is completely unfathomable today. The headlines and discussions on social media today heavily surround parsing Trump’s words and arguing about how he’s racist and doing things wrong.
Social media is just giving you the impression that things are unstable because people like to complain when by any standard long-term measure it’s just as stable as ever (barring recent economic instability caused by COVID).
Some questions about the US:
- when was the last time a prominent politician was assassinated?
- when was the last time a government building was bombed?
- when was the last time a prominent politician was assaulted and left in the hospital?
> real life has been incredibly stable for the last couple of decades in the west
This isn't universally true in the US. Millions lost their homes in the great recession and never fully recovered, it's just easy to forget that the American economy doesn't work for everyone when it does work for you. Now millions more face eviction and starvation amidst a major public health crisis.
> In the 70s (before social media), there were multiple bombings every year in the US. That is completely unfathomable today.
True, but mass shootings were unfathomable in the 70's and happen at an alarming rate today. To your point, we've largely become desensitized to those stories because they just happen so often now. I think that the patterns of violence have shifted, not the presence of violence.
> Social media is just giving you the impression that things are unstable because people like to complain when by any standard long-term measure it’s just as stable as ever
I disagree with this too. Wealth inequality is skyrocketing and has been for many years. Average life expectancy has been declining in the US. Our current president has discussed running for a 3rd term and preemptively declined to accept the results of a Presidential election overseen by his own government. Has that ever happened before?
> when was the last time a prominent politician was assassinated? - when was the last time a government building was bombed? - when was the last time a prominent politician was assaulted and left in the hospital?
These questions say more about the government's ability to protect its members than providing insight into the stability of society. Congressman Steve Scalise was shot in 2017 during a practice session for a congressional baseball game, but survived. Gabby Giffords was shot in 2011 and survived. There are numerous attempts at these things that we'll surely never know about.
It's also worth pointing out that a group plotted to kidnap the sitting governor of Michigan and try her for treason. Law enforcement officials interviewed about that case largely said they'd never seen anything like that before.
> This isn't universally true in the US. Millions lost their homes in the great recession and never fully recovered, it's just easy to forget that the American economy doesn't work for everyone when it does work for you. Now millions more face eviction and starvation amidst a major public health crisis.
On a per capita basis, home ownership was at an all-time high during the 2000s before the crisis and crashed down to 90s levels after the crisis and has returned now to a level not ever seen during the 1900s. In the worst of the Great Recession more people still owned homes than during the 70s-80s.
Starvation is not a realistic outcome for anyone in the US who chooses to use social safety nets. Both government and private programs provide meals to people who need them and are available all across the country. What you might be thinking of is “food insecure” households, which are people that would need such programs.
WRT wealth inequality, that is mostly a meaningless statistic for actual lifestyle of the lower and middle class. All it tells you is that the wealth growth was faster at the top than at the bottom. Not that the bottom got worse or even stayed stagnant for that matter.
From the perspective of the data on violence, purchasing power, disposable income, home ownership, etc, society appears to be quite stable. From the perspective of shared stories on social media, I can see why you might be led to think otherwise.
The US life expectancy is falling largely due to suicide, drug overdoses, and obesity.
The richest city on earth has a sizable, entrenched homeless population that openly does drugs on the street. Are we not in the midst of the opioid epidemic? Can what happens in Chicago be characterized as a war?
There is deep dysfunction and societal numbness in not seeing these realities; despite the riches accrued and the definitive technological advances, these problems apparently cannot be solved.
Hold on. The richest city has that because the richest city allows it. Policies in these rich cities say it's inhumane to lock them up. Now you're responding: ah hah, see how messed up we are as a society that this exists; it's proof of my thesis. But you think it was better in the 80s? The answer is that it was worse in the 80s, but the policy response was different.
You'll need a different gauge than that. I'll give you life expectancy. But I'll raise you obesity and other self-induced problems that have gotten worse statistically.
> Your thesis is based on the notion that “society is unstable” more than ever before
It certainly isn't. But it is more polarized, and increasing content control in the last 10 years has made everything 10x worse. We continue on this path...
If people could just run their mouths more freely, in the most vulgar form, I think the situation would improve quickly. Some people watch sports for that, but I think really telling your opinion to someone on the internet can also be helpful. I don't think the world is ending from that.
>In the 70s (before social media), there were multiple bombings every year in the US. That is completely unfathomable today.
We get quite a lot of Islamic terrorist attacks in Europe still, but they do seem to be getting less effective over time - A man with a knife rather than a man with a truck.
The fact that rich and important people have very good security - why is this at all relevant?
Before the USSR collapsed, when was the last time a prominent Soviet politician was assassinated, a Soviet government building bombed, a prominent Soviet politician assaulted and left in the hospital?
At least 200,000 Americans and probably a lot more have died in the last eight months, most of them completely unnecessarily. Waving this away as "economic instability" shows a lack of perspective.
And over all hangs the grim spectre of the climate emergency, which we as a species are doing nothing to fix.
> and we've solved over and over again the problem of what it takes to have a peaceful and progressive society
When was America ever "peaceful" or "progressive"? I lived there for thirty years and my ideas, boring and mainstream by European standards, were constantly considered "laughable", "naïve", "ignorant." Now I live in Europe again, where I again get all the things that Americans thought were laughable and impossible.
In particular, America's been continuously at war for my whole life, and I'm no longer a young man. America's spent more money on weapons than the next ten countries put together. Calling America "peaceful" is totally false to the fact - it's obscene considering you've killed hundreds of thousands of completely innocent people in the last twenty years.
> ...what YouTube is doing here is a slippery slope and it's scary...
BTW, "slippery slope" was intended to be an example of fallacious thinking. Before it became a popular term, it was intended as a criticism of arguments of the form, "A will lead to B will lead to C, so A is bad."
I'm not bringing this up (just) to be pedantic; the overwhelming evidence to date is that it is extraordinarily difficult to get any of these modern media giants to make any kind of move on any subject at all. You can say very nearly anything you want on YouTube's network, or Twitter's, or Facebook's, and in all cases, the most common forms of censorship are from the DMCA, not politics.
>Before it became a popular term, it was intended as a criticism of arguments of the form, "A will lead to B will lead to C, so A is bad."
Sorry, this is backwards. Before it became a criticism of arguments "A will lead to B and so on" it was a term used to signify a "slippery slope", that is, something that has the potential to snowball to something much worse.
The name as a "criticism of the fallacy" resulted from the term being used as an argument -- not the other way around.
No, it's always a fallacy. If A leads to B, B leads to C, and C is bad, argue that people shouldn't do B, not A. Especially if A also leads to D, which is good.
If jumping out of a window usually leads to uncontrolled descent and flight into terrain, which then leads to experiencing a very large and sudden velocity change, which is followed by death from the injuries caused by said ∆v/∆t, I'm definitely going to argue against jumping out of a window. Sure, for the above A->B->C->D(eath) chain, there are many ways to prevent it from reaching the last node, but the default behavior is still A->B->C->D, so it's fair to argue against doing A without evident mitigations of steps B and/or C.
But if you're arguing not to jump out a window, and I have base jumping experience and am wearing a parachute, the argument is silly. Especially if the building is on fire.
Slippery slope arguments assume A leads to D and discard any positive effects A could have out of hand. That's why they're a fallacy.
No, a slippery slope is just that: A slippery slope.
A slippery slope is an inclined surface, a place in space that, once you are on it, makes you transition very fast to a worse state, like your head hitting the floor at great velocity.
That is, you can start well, in a good state, and transition to a much worse state over time. Your definition is also valid, but it is not the only way to interpret it.
It is an expression that has been used in literature for hundreds of years, not just in English. It is a metaphor.
Do you make youtube videos? I do, and I am very conscious of what I say on my videos because I can not say whatever I want.
I cannot say words like "Covid" (extremely forbidden), or "negro" referring to someone's skin color, or anything the machine will mistake for me talking about people when I am really not.
And it is also getting worse and worse. In the old days you could say whatever you wanted on YouTube; you could even post illegal things like films (posters could abuse the system). Now you cannot, it is going to become worse in the future, and it is YouTube who abuses.
The Trump and brexit events were extremely important. People did things that the status quo did not want. Before that you had people like Hillary Clinton or Bezos that believed that because they controlled most media, they controlled people as puppets.
They felt very confident that they could advance their globalist program without serious resistance by the people.
After that it became clear that they had to control Internet too. At least the mass media on the Internet that affects most people.
I don't say that as a Trump supporter, I am not even American, but I have eyes. I see media reaction when someone gets killed by police and the media silence when a Trump supporter gets killed by a democrat radical and it is just disgusting how sectarian they are.
Yeah, it turns out when something becomes mainstream, it becomes beholden to mainstream sensibilities. You could post anything you wanted on old YouTube because nobody cared. Now people care.
I understand the position but let me argue for the other side a little bit. The problem with companies like Youtube, Google, Amazon, and Facebook is that they do everything algorithmically, since anything else wouldn't scale. Meanwhile, they have no incentives to care about false positives and the power relation between them and their customers is extremely asymmetric - basically, it's a system of oppression. To the false positive victims of algorithmic censorship, this can become a Dystopian nightmare of bureaucracy. Like in the movie "Brazil" or a Kafka story, only much worse because everything is automated and biased against them.
You can see that with YouTube content policing, Facebook and Google ads accounts, etc. Anyone whose livelihood depends on their services can lose their income at any time for no reason whatsoever. Even spreading the risk will not help much; it still means you might lose 30% or 40% of your income overnight at any time thanks to a change in the algorithms. None of this is transparent and the companies will not explain anything, so you will not even know why or how to avoid it. You might claim it's anyone's own fault to depend on such companies, but there are many types of businesses where the dependence is unavoidable. These companies have quasi-monopolies in some areas.
AFAIK, the only remedy is to regulate algorithmic control of people and their content by laws more tightly. At least the appeals process needs to be more transparent and there need to be way more humans in the loop.
In this case it's the same, by the way. I guarantee you that many videos will be flagged falsely and many reputable Youtube content creators will be demonetized once again for nothing. This might be tolerable here and there, but the algorithmic oppression of people accumulates across all domains and also has a strong chilling effect.
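As a rough back-of-the-envelope illustration of why false positives matter at this scale (all numbers below are hypothetical, chosen only to show the arithmetic, not actual YouTube figures):

```python
# Hypothetical numbers for illustration only - none of these are real platform figures.
daily_uploads = 500_000          # assumed videos uploaded per day
flag_rate = 0.02                 # assumed share of uploads the classifier flags
false_positive_share = 0.05      # assumed share of those flags that are wrong

flagged_per_day = daily_uploads * flag_rate
wrong_per_day = flagged_per_day * false_positive_share

print(f"flagged per day:          {flagged_per_day:,.0f}")
print(f"wrongly flagged per day:  {wrong_per_day:,.0f}")
print(f"wrongly flagged per year: {wrong_per_day * 365:,.0f}")
```

Even with a classifier that is right 95% of the time on the videos it does flag, that is hundreds of wrongly flagged videos per day under these assumptions, which is why the appeals process and the humans in the loop matter so much.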
In the long run, algorithmic oppression of people by a faceless company bureaucracy may turn out to be a bigger problem for society than what these moderation efforts are trying to solve. It creates strong and effective power structures that govern our lives, work in parallel to democratic power structures, and are not democratically legitimized at all. What's worse, the people who create them sometimes don't even seem to understand what they are creating, which is basically a recipe for creating systemic evil.
> AFAIK, the only remedy is to regulate algorithmic control of people and their content by laws more tightly. At least the appeals process needs to be more transparent and there need to be way more humans in the loop.
This sounds like a very good (discrete) idea to me, well worthy of a dedicated HN thread of its own, something like: "What are the plausible societal pros and cons of forcing(!) social media companies to expose the implementations of their filtering and recommendation algorithms? What bad and good may(!) come of such a policy change?"
I think it could be argued that the way we as a community go about discussing world events is suboptimal. As it is, a commonly recurring pattern (here and elsewhere) is that some singular event occurs, a news story gets posted, and then we have the same general discussions/arguments that have been had many times before (here and elsewhere, many of them decades/centuries old), just in a somewhat differing context. Within these discussions, a large number of very important philosophical perspectives on the particular incidents appear in the comments, but are rarely discussed in depth. And then, time goes on, we all go to bed after having "had our say", and wake up the next day to do largely the same thing again. Rinse, repeat, day after month after year after decade.
I wonder...if we A/B tested a new, experimental approach, where rather than the genesis or focal point of the discussion being a news article about a specific event, instead we chose an abstract, philosophical topic as the primary topic of discussion, and then people posted instances of relevant events, and discussed those events in the context of the main philosophical idea...might these conversations bear more fruit, or fruit of a different kind?
Going further with this idea, might it also be plausibly beneficial to have some sort of a modified version of the HN Guidelines in place for these types of discussions, in order to encourage certain types of thinking (open minded, non-axiomatic, exploratory, minimized criticality and dismissiveness), and discourage others (excessive certainty and materialism, etc)?
Does this sound like a generally good idea or a bad idea?
I don't really see why YouTube choosing not to display stuff its algorithms (rightly or wrongly) identify as bullshit about vaccines containing microchips is more objectionable than YouTube choosing not to display stuff its algorithms (rightly or wrongly) identify as sexual content though, as it has done since the very beginning.
Yes, algorithmic flagging gets things wrong, but there's nothing new in this announcement unless one believes that antivax sentiment, unlike nudity, is something YouTube ought to be obliged to broadcast
I think it’s risky to believe something is a war. It’s too often this line of thinking that is used to remove civil liberties.
I take a different view. We are actively having an open debate and that is a good thing.
YouTube is not part of our government; they are like a magazine or TV station of the past, and they can - and IMO have the right to - choose what content they allow or disallow. This is totally fine and has nothing to do with free speech (in the legal sense).
Now if we passed a law that stated I cannot speak my mind about covid, that would be a law that IMO violates the 1st Amendment.
But what is happening today is not bad, it’s just one company making a decision on its own.
A separate question for the legal system to probably decide is whether YouTube, as part of Google, is acting as a monopoly and abusing its power. But I do not think anything about YouTube’s decision as a business not to allow certain content to be published is in any way a matter of our 1st Amendment rights... hopefully my perspective isn’t too poorly received - but thankfully you can downvote me, or upvote me, and I don’t have to worry about being thrown in jail for posting - happy hacking
That’s an overly simplistic and reductive view of free speech rights. Can my internet service provider shut off my access if I spend too much time reading “misinformation”? Can they give the government a list of all the people reading “misinformation”? Can they require I attend a reeducation program before restoring my access?
The solution is decentralized, verifiable forms of authority. Not unelected, unaccountable corporations that selectively (and with repeated obvious bias) choose what is “true.” Especially not YouTube, of all places, who has about as much authority as McDonalds, as far as I’m concerned.
That’s it. No attempt to have a single source of authoritative information is going to work post-Internet. The cat is already out of the bag. This model of the world is outdated and needs to be put out to pasture.
Is this solution possible with misinformation campaigns being waged against it from its inception?
I'm not disagreeing with you on an ideological level, but this solution is pie in the sky.
How do you propose this solution be enacted while the supporters it would need to come to fruition are actively being lobbied by the problem it's supposed to solve?
I think it will occur naturally over the next few decades. Each misstep that Big Tech / Big Media makes chips off a tiny sliver of their edifice. Over time this will splinter into a thousand pieces, and some attempt to unify information across them will be invented and adopted.
It’s easy to forget that the modern Internet itself is only about thirty years old. We’re still at the very, very early stages.
I was on the early Internet. I miss it greatly, but also, it was the wild west, which was awesome when I was essentially living in the woods secluded from the outside world. The freedom was intoxicating. It ended up shaping my entire life.
I've watched the Internet grow and evolve into what it is and spent the majority of my free time participating in it from 1993 until today and probably until I die.
The thing is, the problem has always existed, but it's now being scaled up. There were kids pretending to be adults and puffing up their chests at the expense of decency. There was child porn. There were racists. There were pockets of criminals. There weren't many flat-out idiots, because using the Internet required at least a modicum of intellect to participate.
Yes, it's still early for the Internet, but America is just getting dumber while the Internet is getting easier and easier to use. How does it get better before it gets much worse?
You might have grown older and more fearful. I think kids today are fine navigating misinformation that another, even older, generation thinks will destroy societies.
I think it’s just a natural part of the technology adoption process.
Only early adopters use it > widespread usage > simplified, centralized solutions eat the usage market (FB, etc.) > the normal people learn how to be technical users, at least enough to go independent > decentralization takes hold. We’re now in between the last two steps.
Compare it to something like reading and writing or paper. Roughly the same process happened.
> How does it get better before it gets much worse?
This is the million dollar question. For me, I came to see how dogmatic ideas like the Intellectual Monopoly system are, and how economic theories fall apart in the face of the gift of digital technology and its zero marginal cost of reproduction.
I am deeply inspired by the writing and work of Arthur Brock and Eric Harris-Braun.
This has to be one of the most frightening proposals I’ve seen.
How would this body be formed? What body currently exists that is completely unbiased?
There have been numerous examples of Western gov’ts outright lying to their citizens, often for decades. That would be the official truth, with any dissenting opinions deemed “illegal”.
This literally sounds like 1984 and the Ministry of Truth.
Edit: replied to subcomment, not parent comment, very regrettable.
This. YouTube should be free to allow or block any content they want on their privately owned and permissioned platform.
At the same time, no company should have the power to decide what content can be shared and spread.
It’s all of our responsibility to take back the power we always had and give it back to individuals. The way there is through open protocols and decentralized / federated permissionless infrastructure.
Start with yourself. Prefer better alternatives whenever possible.
If you have the means: Start hosting. Contribute (code, tutorials, docs). Spread the word. Contribute funds to projects. Answer support questions/GH issues/forum posts from people who are having issues.
When it comes to people you know and interact with, start with the “low-hanging fruit” (individuals who have skills/interest in technology or are already agreeing about the problem).
I used to be outwardly and loudly critical of the system and people supporting it, but that got me nowhere, just isolated.
When I stopped all criticism and just started aligning my life how I would like others to live, I finally started to get somewhere.
How is your comment substantively different from someone saying
"I love freedom but..." and then proceeding to sing the virtues of how ignoring the 4th amendment will save us from terrorists?
We've already gone down a very similar road once and we don't like where it leads. Turns out ignoring the 4th amendment didn't help us stop terrorists, and now, 20 years later, the government is still ignoring the 4th amendment to our detriment, despite the whole terrorism thing having mostly run its course and reached a steady state. Why will it be different if we take a similar approach to a different right?
The problem with censorship is that it "proves" that there is a "conspiracy", and it is like pouring gas on a fire. I know this from people around me who believe in that stuff. If something is taken offline, then that "damn mainstream media is trying to hide the truth".
EDIT: IMHO, it would be much better option to flag/warn that video as misleading and provide link to resources with correct information.
I would argue that the real gas on the conspiracy fire is the combination of the ad/attention based internet economy and the power of algorithms. They plug in data and tell algorithms to increase viewing time and clicks, however you can. Well, it turns out that of all the topics people are interested in, outrage is one of the best ways to keep people engaged and to encourage them to engage others. So when person X clicks on one news item questioning vaccine safety, the algorithm starts sprinkling in more inflammatory articles and a few videos. So they click on a couple. Which leads them to groups of other people that reinforce the belief and leads to more clicks and more viewership. Very quickly, most of their online experience shows them that vaccines never worked, that it is a government hoax, and there are a plethora of “experts” to back up and continually reinforce this idea. We seem to have accidentally automated conspiracy theory propagation.
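A minimal sketch of that feedback loop (entirely my own toy model - the categories, watch-time numbers, and the simulated "user" are invented for illustration): a greedy recommender that only optimizes observed watch time will, with a little random exploration, discover the most inflammatory category and then keep serving it.

```python
# Toy engagement-maximizing recommender - all categories and numbers are invented.
import random

# hypothetical average watch time (minutes) this simulated user gives each category
user_watch_time = {"news": 2.0, "science": 3.0, "outrage": 8.0}

def recommend(history: dict[str, list[float]]) -> str:
    """Greedy choice by observed average watch time, exploring 10% of the time."""
    if random.random() < 0.1 or not any(history.values()):
        return random.choice(list(user_watch_time))
    return max(history, key=lambda c: sum(history[c]) / max(len(history[c]), 1))

random.seed(0)
history = {c: [] for c in user_watch_time}
served = []
for _ in range(200):
    category = recommend(history)
    watched = random.gauss(user_watch_time[category], 1.0)  # simulated engagement signal
    history[category].append(watched)
    served.append(category)

print({c: served.count(c) for c in user_watch_time})  # the feed drifts toward "outrage"
```

Nothing in the loop "wants" outrage; it just measures what keeps the simulated user watching and serves more of it, which is the accidental conspiracy-propagation machine described above.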
Actually, I would say the problem with censorship is that it rarely stops at a good place. It might start from good intentions, but eventually we're shushing people that go against the herd without actually knowing they're wrong.
In this case, it seems pretty obvious that there aren't microchips in vaccines. I'm not even sure what those chips would do, or how they'd work.
But what if some day the government really did do that, or something else that seems ridiculous? We'll be censoring anyone who tries to tell us otherwise.
And 1 step worse: The government could order YouTube/etc to do that censoring, and it'll look like it's just their normal thing.
The problem with censorship is that legit and valuable free speech will eventually be restricted.
Don't be fooled by the pro-censorship narrative (although that is of course the point of censorship).
Claiming people believe there are microchips physically inside vaccines is a major distortion of what these people think. In fact even the Reuters article doesn't say that. It's the age-old tactic of making up something ludicrous that sounds vaguely like a group's concerns and then claiming they all believe it so they shouldn't be listened to.
What they are concerned about is the creation of a system that enables rapid marking of people to identify if they've had certain vaccines or not. Sometimes this surfaces as talk about "quantum dots" or "dot tattoos" which is a reference to research into a way to imprint codes underneath people's skin. The microchip idea comes from the fact that this is already done for animals (rfid tags).
This provokes concern in many people for similar reasons to the (supposed) Chinese notion of 'social credit': the point of tagging the population with a physical marker of compliance is to enable very efficient stripping of their rights if they haven't complied. Sort of a step short of imprisonment. For instance, some politicians already talk of blocking air travel for anyone who wasn't vaccinated (against many kinds of things).
Now if you trust vaccines, no problem. But scientists are busy setting fire to people's trust in them in all sorts of ways, so the population of people who don't trust them will only increase:
1. Rapid "moonshot" vaccine development programmes that are skipping the long testing process usually involved.
2. Rising awareness of how politicised academia has become. See the other story about Nature magazine.
3. A Swine Flu vaccine that caused neurological damage in a small number of cases (but Swine Flu hurt a tiny number of people too, so the cure was worse than the disease in this case).
4. Insistence on a waiver of liability for vaccine manufacturers.
5. Many examples of low standards or making contradictory statements about viruses and COVID.
6. Demanding that anyone criticising them is silenced.
And so on. The list could go on all day. Point is, it's entirely rational to be lowering trust in scientists at the moment, so any effective system of tracking and enforcement around their decisions is going to be legitimately controversial. You don't have to be a tinfoil hat wearer to observe that standards have been considerably loosened around vaccines this year.
> Banning obvious bullshit, albeit akin to censorship, is NARCAN.
This is the crucial comparison.
YouTube, Twitter, and Facebook do not promulgate free speech. They are driven by algorithms that prioritize certain information over other information. Such algorithmic prioritization is not a platform for free speech, and removing harmful content from these algorithmically generated feeds is not the same as censoring speech in the domain of free discourse.
Given YouTube's algorithmically-generated feed, removing certain harmful content is more accurately identified as curation than censorship.
Wrong - the 1st Amendment was already ruled to cover algorithms in the cryptography export cases. You're trying to rationalize what you already feel is the truth.
The definition of harmful content is already a tautology which carries horrifying implications, because it puts agency for the reaction solely upon the "speech". It would mean that in Apartheid contexts an interracial handshake would be responsible for a riot.
> Social media is currently poisoning our country to a degree that I think it may be too late to try and draw out the fight for free speech by using free speech alone.
Why do people think this? A few selected anecdotes and some tiny numbers (1,000 people liking a post claiming 5G causes covid? Tiny numbers)?
What is the end-game that is so feared?
If the people as a whole can't be trusted to make sensible decisions and judgements then abolish democracy and copy the Chinese system.
Or, if the problem is uninformed people, then adopt a representative democracy where a random sample of 1000 voters are chosen by sortition and metaphorically locked in a room with the candidates for the governing council for a month (as if they were a jury) after which they elect a handful to government.
I agree with the overall message, but I think we need to discuss this in the context of current reality, not pure theory.
First, neither copying the Chinese system nor voting by sortition is currently a realistic alternative.
Second, it's not like free speech exists in purely theoretical space. Free speech in an empty room is not interesting. Free speech before the press is not like free speech after. Free speech now is not like it was in the 90s. Youtube and fb genuinely created a lot more freedom of speech in practice. They also monopolize it.
Lets not pretend that there aren't costs, regardless of what way this happens.
That said, a world where the public, advertisers, and politicians lobby and beseech YouTube to restrict speech is a nasty one. This is going to be ugly.
People think that because it was their honest hope/belief that technology would work to amplify ONLY the kind of information they like: vote liberal, open the borders, affirmative action everywhere, include tech elites in decision making, heavier taxes for everyone above my income bracket, condemn every country that is not a western liberal democracy. When they realized that technology is amplifying EVERYBODY's voices, from the town's loony to ideological enemies, they want to rein the system back in. They don't want actual freedom; they want finely tuned control masked as a virtue.
Good points. I also hoped that the internet would make everyone smarter by allowing anyone to read physics papers and watch videos on how to build a pc. But we are social creatures, so here I am posting on a message board instead.
That Internet existed, until 1996 or so. It still exists somehow, but what % of the population at any given time do you expect will be reading Wittgenstein or Braudel, or looking forward to Witten's next preprint? 10%, 5%, 1%? The scary thing is that we now have Fields Medal winners claiming that 2+2 could be 5, only so as not to appear an asshole before the digital Robespierres.
The end game is hard radicalization towards fascism by movements like qanon, which structurally encourages its followers to take matters into their own hands and commit acts of violence based on modern Blood Libel.
I don't think the issue is uninformed people, or people in general not making sensible decisions.
The larger issue is the euphemisms we use and our collective inability to call a spade a spade. Misinformation about covid does not happen randomly because uneducated Johnny stumbled into something. It happens because well-educated people intentionally spread it as a way to acquire or keep power. It also happens because YouTube's algorithms are easy for a smart, educated actor to use to promote whatever that person wants to promote. And the people who put time and effort into this tend to be overwhelmingly bad actors.
I watched "triumph of will" once and then I was getting holocaust denial videos suggested for months. I was not getting history or lectures tho. After a while that went down. Then, I watched some gaming videos and started to get such materials again.
That is to point out that for whatever reason, youtube itself is promoting these things. It is not "people in general", I sure as hell was not seeking these materials. And maybe if we have been able to admit out loud that it is not just some teenagers cosplaying white supremacists or stupid hippy moms promoting anti-vaccins, but instead well organized political groups pressing for own agenda, we would be better off.
Because current discussion about speech is absurdely naive.
The end game that is feared is the fall of civilization mirroring the collapse of the western roman empire, and it’s quite possible in a thousand years people will peg the collapse of our civilization as having started around 1950 or so.
With global climate disasters looming, large chunks of the population prone to tribal superstition, nearly everyone being spoon-fed technology they don’t begin to understand much less have any ability to replicate, and the government of the global hegemon inches away from collapse into autocracy, people are afraid of dark ages, but most of them can’t quite articulate it.
Rome fell, and a big part of Europe and the Mediterranean lost the ability to maintain cities, many of which dumped 90% of their populations to the countryside. Specialization was gone; most people were just doing subsistence agriculture. Literacy plummeted to almost no one outside a very, very few people in monasteries.
This whole religious adherence to party aligned mythology about everyday things (say, vaccines) is pervasive in the western world, and it is not some surface defect, it’s a rot that goes through to the core.
But things have been pretty nice as long as almost everybody has been alive so the belief that everything will continue to be fine is about all that holds things together.
People don’t appreciate how precarious civilization is.
That is a fair concern, and I share it, but one can hardly blame that on social media. The Romans didn't have anything like our modern conception of social media, and they managed to collapse just fine without it. The problem as I see it is that fighting other Romans became more profitable for the Roman decision makers than fighting their neighbours.
Social media is an effective tool which can be used as a weapon, but the problem isn't that Americans have effective weapons; it's that they're incentivised to use them against each other. The solution to me seems pretty obviously a return to federalism. It's the fact that there is only one throne that makes everyone fight for it with whatever is at hand rather than build things up locally.
> the collapse of our civilization as having started around 1950 or so
The pessimist (or realist) in me tends to agree, that a fundamental shift occurred after the second world war. What exactly is the nature of this change is hard to say - something about the dominant ideology of the ruling class, the relationship of wealth, business, politics and governance - maybe.
At the same time, this period has been one of the greatest maturing and flowering of our collective global civilization. Not to go all Pinker ¹, but the optimist side of me wants to believe that the cultural and technological advances we've made have enough momentum to overcome the negative tendencies that are possibly bringing us to the brink of collapse.
Personally, I am afraid of very large media companies making AIs that have utility functions intended to change the behavior of humans without their consent.
But that's not AIs. That's 'the algorithm', and has long since already been made. Hell, it was made in Germany in the 1930s: it's called 'radio'. Radio is STILL used to change the behavior of humans.
I'm not nearly as afraid of the very large media companies doing this as I am of HUMANS taking the controls of such a machine and using it for that very purpose. Very large companies, AIs, nations are aggregates and don't inherently take wild radical positions.
Individual humans do. We're the random factor, the chaos feeding the genetic algorithm, we're supposed to get up to some crazy extreme stuff that will sink or float on the larger scale depending on how it affects society. But when we are able to amplify our individual whims to the scale of companies or countries, which is ALWAYS through using these utility functions intended to change the behavior of humans whether that's exploiting YouTube or radio, there's trouble.
I'm not nearly as worried about AI or aggregate entities like corporations. I'm worried about the existence of these utility functions, and before radio it was 'the broadsheets' and yellow journalism, and before then maybe it was nefarious clay tablets or papyrus.
Power corrupts, and individual whims aren't good models for healthy societies. We keep rediscovering this over and over, and sometimes the civilization collapses, and sometimes it doesn't. Here's hoping if we do get AI, it has some insights into this and why its own existence hinges upon not blowing up the underlying levels (of corporations, of individual people, of cells and the well-being of the individual people).
I feel like the 'thine arm/subgroup/demographic is unclean, cast it out and burn it cos it's eeeevil' is far more often an individual human perspective, amplified by these utility functions because somebody LET that human do that.
Wildly misleading statement, especially given the tiny sample size versus hundreds of millions of adults in the US.
From that same link (demonstrating Americans are overwhelmingly in favor of vaccines; which is obvious given the high vaccination rates in the US):
"While the vast majority (82%) chose in favor of vaccines, 8% selected responses expressing serious doubt. An additional 9% said they were unsure."
Also:
"In a survey of 1,025 randomly sampled adults, 84% of respondents said they believed that it is extremely or very important that parents vaccinate their children"
"Additionally, 86% believed vaccines are not more dangerous than the diseases they prevent, whereas 89% said they are aware of the advantages and disadvantages of vaccines."
FWIW I think there is a major breakdown of The West underway that begun when China effectively ripped Western ideology apart after the end of the cold war. The Western ideology after WW2 was that liberal markets (eventually) implied (rise of) a democracy with civil liberties. China showed that this ideology is false by creating liberal-enough markets while retaining absolute state control (and the Chinese state has vastly concentrated its power, and the power within the Chinese state has been vastly concentrated as well, to the point of it effectively being a Fuhrerstate). Meanwhile The East has shown that so-called illiberal democracies can be widely accepted by the people. And the US are currently testing the hypothesis whether free speech is survivable for a democratic society in the presence of so-called social media. I strongly suspect these are going to be the sucker-punch for The West in combination with Covid-related economic crises. It is likely the liberal era is going to end within this decade.
Essentially, the international scope of the communications tools online has allowed for free-speech absolutism to be hacked by the propaganda methods refined via the Cold War.
The NYT isn't highly likely to publish things that source to Russian propaganda ops, because they employ fact-checkers, care about the pedigree of information, and change their editorial process in light of system failures. They get tripped up from time to time, but the same trick doesn't usually work twice.
My relatives on Facebook? Not so much. Nor did Facebook (until recently) care to put an editorial eye to advertising as long as the money was green.
It is not a guarantee that a system structured like this is healthy for democracy, nor is it a guarantee that a democracy doesn't destabilize under it.
Yes, those people. As I said, they get tripped up from time to time, but they modify their process when it happens.
In that case, they had multiple intelligence agency sources feeding them the same bad data, so their cross-checks failed. They've down-sampled the reliability of those sources in subsequent years.
The NYTimes role in society is packaging official narratives for mass consumption. Their MO is mindlessly repeating whatever the IC tells them. This is not a new concept [1].
How are alternatives working out? Because we're watching people worry so much about an Orwellian future that they're running headlong into a Stephensonian future.
The difference between Orwell and Stephenson is that Orwell spent time as a police officer in Burma and a soldier in Spain. Stephenson does not understand the corrupting influence of power the way Orwell does.
And the alternatives are fantastic, for those who chose to use them intelligently.
If we're going to require intelligence for the whole system to succeed, we're setting ourselves up for failure. Half of everyone is below average.
> The difference between Orwell and Stephenson is that Orwell spent time as a police officer in Burma, and a soldier in Spain. Stephenson does not understand the corrupting influence of power...
So you're saying Orwell has practical experience, such that one should weigh his musings on the subject more highly, while Stephenson is some rando expositing from a place he doesn't understand?
Fascinating. What happens if a bunch of people just disagree with your argumentum ab auctoritate and accept Stephenson's musings anyway? That would result in a bad outcome?
Should we have a mechanism in place to tilt people towards Orwell's interpretation over Stephenson (such as, hypothetically, putting one and not the other on the required reading curricula of schools)? If not, what happens if the Stephensonian interpretation of risks wins out?
That massive (and growing) unaccountable bureaucracy with secret law that types up the talking points the NYT helpfully launders into national security "journalism".
A war for democracy would be fighting to give everyone a voice, even to those who you disagree with. Remember:
- Those who sacrifice freedom for security, deserve neither.
We shouldn't sacrifice freedom by allowing platforms to censor people.
Annoys me, the bullshit on YouTube at all levels, but I don't want any platform removing the right of those people to say their ideas. People can have bots, but they will never be as strong as normal users. For example, if 1 million bots give likes to a pro-Nazism video, it will trend and 1 million real users will downvote it :) Considering the platform that YouTube is, 1 million real users is almost nothing.
Also, someone disagreeing with you doesn't mean that this person is stupid/dumb/lacks critical thinking. It is quite common nowadays to consider everyone who thinks differently to be wrong and dumb, instead of recognizing that most of these things break down to social, personal, and moral values, which differ between people, cultures, and so on...
So let real democracy happen and downvote what you don't like or don't approve of, upvote what you endorse, etc...
Democracy functions with an educated electorate. If you're saying that YouTube has to act as a gatekeeper and ban non-libellous, legal misinformation, you're passing a grim judgment on your fellow citizens, one which implies that the battle for the country is already lost.
Our fellow citizens elected a reality TV star as president of a country with nuclear weapons.
People are passing judgment for, I think, fully rational reasons.
The Atlantic has had several good articles on how fundamental societal trust is dissociating in America. The election of Trump isn't the only aspect of it, but it did scare about half the country into thinking the other half isn't rational in the same way they are. And the feeling is mutual.
The size of the electing body is a little irrelevant. People trusted the system to disallow someone as unqualified as Trump, and it did not. Disillusionment with that system runs high.
If anything, it probably runs higher than it would have if he had been elected by a majority. People feel let down not only by their fellow citizens, but by the laws that govern who is president.
If a review is found to state injurious lies or false statements about a business, that would constitute removable libel or fraud. YouTube should similarly remove review videos from people whom they have strong reason to believe never used the product or service.
Well yeah, that just about sums it up for me. There’s a subtle myth in absolutist free speech circles (to which I used to belong, and which I still think envision a society worth working towards) - that people are rational. In a world of rational actors, the solution to bad speech is good speech.
Humans, on the other hand, have proven themselves largely unprepared for the modern media environment. We have consumer protection laws about what can be put in our food, and I think it’s worth exploring what that would look like for media.
Personally, I come down on a power/responsibility compromise. Individual people should have the broadest possible freedoms, but:
1. Wealthy and powerful people have an unchecked ability to amplify their voices, and we should think carefully about how to manage that. Returning to a sane world of campaign finance law would be one example.
2. Media corporations have a lot of power, and many have abdicated the corresponding responsibility. I think we should work towards editorial standards, mandatory complaint resolution and correction at least for those masquerading as news.
3. Platforms like FB and YT have shown themselves capable of algorithmically maximizing engagement, and it seems that for a subset of their users the effect is radicalization and conspiratorial nonsense (Qanon, anti-vax, climate deniers, holocaust deniers etc.) I don’t have the solution, but a bit more gatekeeping against misinformation would probably help.
I've not been an absolutist free-speech person but what you've arrived at works equally well for me.
It seems obvious to me that humans… and I am one!… are not able to handle things that well. Often we have to look at outcomes to populations to really get a sense of what's going on, as we view things in this blindered way and are deeply persuadable in known, exploitable ways. And that becomes a meta-game: become immune to a particular crude argument, fall to a more multilayered argument, and so on.
A key factor here is this (in my opinion): conspiratorial nonsense is not nonsense. Even if it seems bonkers and useless, there's something about it that is serving a purpose, but it's very likely not the purpose that it 'says on the tin'. For instance, there is a great deal of 'nonsense' that exists for the purpose of goading humans on to acts of random terrorism, and for grooming humans to be an extension of an army from somewhere else. Building an army to destroy a country is a VERY high-value activity, but it's not done by showing up and going, HI! I represent the army of Kazakhstan! Please acquire a weapon and shoot your countrymen for me for the glory of country that is not you!
So it's done, by quite a few state and non-state actors, in different ways. And when you've got a global pandemic and the ability to propagandize your country's enemy and get 'em to normalize licking each others' nostrils… well, you do what you can.
Hence 'conspiratorial nonsense' toward nonsensical or nonexistent ends, that happens to fulfill a purpose other than what's argued by the conspiratorial nonsense. The key concept here is that it's possible to misinform on purpose towards a defined goal that's orthogonal to the apparent misinformation.
One VERY IMPORTANT aspect of this, as Facebook and Google have learned, is that if you're doing that and you have access to big data to inspect how your misinformation is affecting POPULATIONS where you can't directly get response data back from the targets, you can use the big data to ask questions like
"Which of these two arguments for hunting lizard aliens that are also pedophiles, most inclines the target towards murdering black people? Which accusation towards this political figure most inclines the target towards inciting civil war and overthrowing the government of the target's host country?"
And that's how it's done. Given sophisticated enough ability to interpret data, you can direct 'nonsense' VERY effectively towards explicit other goals which aren't hard to work out if you're not the target. You need the data, though, otherwise you're fumbling in the dark. Google and Facebook are complicit in this to the extent that they go, 'you gots the money, we gots the honey' and place themselves at the disposal of anybody wishing these services.
This is a great example of how rights aren't free, they are counterbalanced by responsibilities.
The right to freedom of speech seems to bear the cost of education. An educated person should be able to assess the things they hear and make their own choice on what to believe, uneducated people seem unable to do the same.
I don't believe that censorship, viewed from a 10,000-foot view, is a binary state. You're thinking that I support redaction, I suspect; probably because it's hard to represent my beliefs in a few lines of text.
My view is that attempting to convert facts -- delivered by the scientific method -- to fiction by way of purposeful misinformation is a more corrosive form of censorship than showing the door to a comparatively minuscule number of who-knows-whos seeking to corrupt our agreed-upon truths for their own benefit.
> My view is that attempting to convert facts -- delivered by the scientific method -- to fiction by way of purposeful misinformation is a more corrosive form of censorship
Then you simply don't know what the word censorship means.
Disallowing certain information from being shared -- excepting libel/slander/fraud/privacy invasion/classified data and other private data -- is censorship. The onus is on you to prove the non-binary nature of a very ordinary concept.
You happen across a skyscraper with a politically charged message on its façade that you disagree with.
You can either:
A) Paint over it.
B) Surround the entire space around the message with hundreds of other messages -- lies -- in the same style and color, designed to sway people to an opposing ideology.
The same effect has been achieved in both scenarios. Which is more insidious?
I agree with you on the harms that you have identified (overwhelming bullshit).
I think it’s worth thinking about this problem by comparing and contrasting social media to traditional public spaces (like a soapbox in a town square).
By comparing these two kinds of public spaces (ignoring for now the fact that social media is privately owned), one can start to identify the mechanisms in social media platforms that are enabling overwhelming bullshit and work to mitigate, modify, or eliminate those mechanisms.
For what it’s worth, my opinion is that the amplification and discovery mechanisms in any media platform (from print to TV/radio to social media) play a key role in the spreading of bullshit. For that reason, I like twitter’s moves to introduce friction into retweeting. Additionally I support efforts of the social media companies to defeat bot farms.
The real problem is that people are looking for solid medical advice on YouTube, Facebook and Twitter. No amount of censorship is going to change that.
> The real problem is that people are looking for solid medical advice on YouTube
The thing is... solid medical advice does exist on YouTube, so it's not entirely unreasonable to go looking for it there!
"Just go to your doctor"
Yes well, a) that's not free in America while YouTube is, and b) your local doctor could have been bottom of his class, and you have no way of knowing. It's like saying don't watch world-class programmers on youtube, hire a local programmer instead.
The realities of censorship are, and historically have been, large net losses. That was one of the central premises of the founding of the United States, among many other ideals.
So in the interest of humanity and historical evidence, censorship is and will always be a bad idea. The best ideas will eventually win, as history has also shown us.
Source: US constitution, Venezuelan history, Romanian history, British history, Korean history, Vietnam history, Chinese history, Cuban history, Cambodian history, Soviet Union history, German history. That’s just a start.
I'm pretty sure current China would disagree with you given the amount of people they've raised from essentially peasantry to a modern middle class in the past generation.
Let me preface this by saying I used to be a free speech absolutist, and maybe I will be again some day. I still don't think the government should be banning speech. But I think companies moderating their spaces is necessary.
Good moderation is a large part of why this forum is so good in the first place. The history of the internet has shown us that unmoderated spaces turn into dumpster fires.
German history seems to be the classic example of an unchecked conspiracy theory doing massive damage. That is why holocaust denial was made illegal in much of Europe.
I am ignorant of most of the other histories. Would love to hear your take.
(Also as an aside, I upvoted you since I think you make good points and provoke an interesting discussion, even if we don't agree on everything).
Thanks for the upvote. To your point about companies censoring speech because they own the platform: I'd agree they claim to be a platform, but I don't think they're acting like one. Based on their actions they’re acting as a publisher, because they’re editing and curating speech like a newspaper does with opinion pieces.
Platform has a legal definition, which was litigated in the early 90s. The result was that if you’re a platform you’re not legally liable for the content of your website. The legislation calls out that speech must be user generated and that the flow of ideas cannot be restricted unless it is already illegal (e.g. how to make a bomb).
Publisher also has a legal definition, which social media companies certainly don’t want, because then they’re liable for all content and can be regulated further. Actually these companies don’t want any regulation. Their business models will eventually come under attack due to overreaching privacy policies and general CEO autonomy.
I’d also like to point out there’s some incestuous situations going on between the staff of Facebook, Twitter and Democratic campaigns including Biden’s. If we’re honest with ourselves we know it’s highly likely collusion and back channels exist between the two. I can provide more information if you’d like.
No no no, this is such bullshit. People wouldn't believe in these conspiracy theories if they weren't in a state of downward mobility. Look at the suicide rate in the US, especially for those without more than a high school education. Look at the increasing opiate addiction and overdoses. Is it any wonder malevolent conspiratorial thinking is increasing? It is unfortunate people are unable to correctly identify the actual cause of their suffering, but we shouldn't silence these voices.
What I mean to say is conspiracy theories are a symptom and not the disease, silencing conspiracy theories does nothing to address the actual issue. The anger and discontentment of the working class caused by declining wages is what needs to be fixed.
Interestingly, I'm skeptical of Alphabet for exactly the same reasons :)
I feel like, when it comes to tech giants handling a relatively unprecedented expansion of social media and broadcasting, it wasn't up to you whether politics got involved. As soon as third parties started working out exploits to your conceptual system, you got involved.
'The algorithm' has a lot to do with this, but at heart it's an exploit and not the algorithm itself. Until the algorithm is actually intelligent and can think about itself and what it's doing, you and the company you work for are exploitable. As such, you may feel democratization of media and the ability for individuals to broadcast and information-search is vital to societies and humanity as a whole, and you'd be right, but that's not what you're doing. So long as your work is exploitable in this way, you become partisan, you are just blind to WHO you are being partisan for. And we've seen some pretty wild consequences out of the range of possible outcomes this produces.
> The ability for one person to amplify their voice or ideas
This is nothing new; look at who owns the media that most people read and trust every day.
It is naïve to think disinformation is something new, and oh, that disinformation certainly doesn't apply to "ME".
Banning obvious bullshit? Like flat earth? Do you feel the need to ban that? I don't.
Whether people are wise enough to discern truth or not, you won't make anything better by tasking someone else with deciding what is true and what is not for others.
Sacrifice some liberty for security, and you will lose both.
I don’t think the problem is so much social media, but the polarization of our epistemological institutions, which erodes trust in them and rightfully so—we now have to take our academic information with a particularly large grain of salt, accounting for the aligned interests of the media and certain departments in the academy. However, many go too far, moving from skepticism to positions which can’t be supported by the evidence. They throw the baby out with the bath water. The only solution in my mind is to restore heterodoxy to these epistemological institutions, so people of all stripes can trust them. I know this kind of comment draws out bad faith arguments like a moth to flame (“...biased against reality”), so I hope we can dispense with that and focus on solutions.
You're exactly right that we don't have a single important principle we need to protect, we have a complex tapestry of many that we need to balance.
Total censorship with total security isn't the answer. Total freedom with the total loss of truth is also not the answer. As anyone should expect, the ideal is not at either extreme.
Is it messy trying to sort it out, will mistakes get made, will some decisions make some people unhappy? Yes. But it's where we're at. It's the price we pay for living in a society that's nurtured fraud and money-driven nihilism for so long.
> I personally think that this is now a war for democracy
In this war for democracy, which side of the war do you think wants everyone to be able to access any information and which side do you think wants to control the flow of information?
How can we value each and every voter's input if we don't value their ability to filter for themselves what is true?
When you order a product on Amazon that immediately breaks and then find out that of its 10000 reviews, 9750 were fake, I commend your ability to put your personal feelings aside for the sake of the greater fight for free speech.
That's a poor analogy. We're not talking about a service that is trying to help users achieve clearly definable goals like eliminating fraud with their customers' consent. No one is claiming that Amazon customers object to having Amazon's help to prevent being defrauded.
We're talking about communication platforms where politically motivated decisions are being made to filter out information that the decision-makers don't like, and the users are saying, "We don't want you to make those decisions for us."
But also think through my original point. Why should we value democracy, the notion that each person has valuable and necessary input in building a society, when we hand control of information over to the very few? Shouldn't we cut out the middlemen and just have our information overlords be in charge of creating and implementing all our policies? Why trust people to vote when we don't trust them to choose and evaluate information?
If social media is poisonous and the powerful are using it to control ideas, the solution isn't to let the billionaire class social media giants be the arbiter of truth, methinks.
"misinformation" is indistinguishable from "thinks I think are false" which in turn is hard to distinguish from "things I disagree with".
> "misinformation" is indistinguishable from "thinks I think are false" which in turn is hard to distinguish from "things I disagree with".
Only if you believe the post-modern claim that there's no objectively verifiable truth and that all narratives are equally valid.
There is a huge difference between "vaccines contain microchips" and "water retains the essence of homeopathic ingredients" on the one hand and "COVID spreads through droplets" on the other.
EDIT: I do agree billionaires shouldn't be the final arbiter, though.
If the billionaire controllers of the tech giants (advertising based services!) aren't doing it, they are simply delegating this to other billionaires who are paying the bills. Like it's often said, 'if you're not paying for it you are the product'. Social media consumers are the target, not the ones directing the narrative.
In the absence of the tech giants choosing what's permissible, you get different other billionaires who can define a benefit to pushing a 'you should drink bleach and overthrow the government' narrative, because THEY are not the 'you' referred to, plus it ain't their government.
Acting like that's an organic social narrative is malpractice.
b. it's not a good thing for billionaires to determine what is censored
To me, there isn't an inherent conflict.
I think we (the collective) keep dancing around the issue. Essentially, perhaps it is time for advertising agencies and corporate users of social media to be regulated like news publishers. At least there, there's some expectation that they don't misrepresent facts. One could argue news regulation is also on the decline.
The next step is asking ourselves whether social media made everyone a "journalist" of sorts, and how that should be regulated. After all, taking on corporate entities alone wouldn't have stopped QAnon.
> I agree that what YouTube is doing here is a slippery slope and it's scary. However, I personally think that this is now a war for democracy, and things are going to need to be sacrificed.
YouTube choosing what you can see is as far from democracy as it can get. Not sure how you get from that to saving "democracy".
Youtube is part of a publicly traded, non-state-controlled entity. As such, it has close to nothing to do with democracy as a form of government. It does have a social impact, which then influences democracy. As such, Youtube, and FB and Twitter and all the others, do have a responsibility to all of us. And they have to be held accountable for their actions; up to now they just raked in the profits and didn't give a shit about anything else. Them now changing that is a good thing.
What I don't see in your response is an actual argument for how YouTube doing something which is not democratic is good for democracy.
> up to now they just raked in the profits and didn't give a shit about anything else. Them now changing that is a good thing.
Censoring unpopular and controversial views on their platform makes it more attractive to advertisers, which is good for YouTube's bottom line. This action in no way indicates a renunciation of the profit motive.
I think the idea is to censor untrue and dangerous ideas rather than unpopular ideas. Yes, determining that is challenging. But denying COVID, for example, is very dangerous to all of society. Thousands of people are dying because of COVID deniers.
To me, this is theoretically similar as if a massive video campaign was messaging you to drink bleach. I’m sure that sort of video is against YouTube’s terms of service — at least, I would hope that it is!
The difference is really that some lunatic could stand in Times Square and scream that we should all drink bleach. But in that public forum, no one would take them seriously, and plenty would deride them. But with a great production value, paid influencers, just the right wording and messaging, you can manipulate social networks into spreading very dangerous information and actually making people believe things that can kill them. I think that problem, gone unchecked, leads to serious consequences and it would be in the best interest of society as a whole to fix it.
> But with a great production value, paid influencers, just the right wording and messaging, you can manipulate social networks into spreading very dangerous information and actually making people believe things that can kill them.
Clearly, you’d need an arbiter that is immune to such influences. Someone who knows what’s really true and can censor the misinformation despite its high production values. I think the interesting question is where you’d find such an oracle, and how you’d convince it to work for YouTube.
The point being, nothing Youtube does is un-democratic to begin with.
And up to now, Youtube and FB gained from controversial content, not the other way round.
You are entitled to your opinion and to say it in public. You are not entitled to be protected from the social consequences. And if what you say is true, that this move drives ad revenue, that is only because people want to distance themselves from these views. Which would tell us something about public views. I just don't think that is the case.
Holding them accountable by giving them more power sounds like a losing strategy. Because that is exactly what the "pressure on big tech" has done in favor of classical publishers that thank them by favoring their censorious actions in articles.
There was an understanding that they wouldn't remove content they disagree with, and that they would get criticism in return if they did. Now people want them to do exactly that. That is far worse in my opinion, especially for those without power.
I agree, but it's not selflessness or altruism. In the EU they are one law away from being effectively shut down.
The EU commission has been eyeing the behavior of social media networks for a while now, and the increasing misinformation and radicalization of groups like white supremacists or QAnon pose a very real threat to democracy and society.
If the social media giants like Facebook, Twitter, Google, etc. don't do anything against this, the EU will make a law that will make the social networks check each and every post for unlawful content.
Oh, I would never go as far as thinking they did this for some altruistic reason. Rather that they finally reached a point where the benefit-risk equation tipped to the other side.
YouTube choosing who gets to manipulate what you are going to see is another reframing of this, that might make more sense.
Since we've seen a considerable amount of what you get when somebody is able to pay YouTube, and pay Facebook, to specify what you're going to see, on a very granular and Big Data level, in a targeted way.
In a real and practical sense YouTube having access to extremely effective social media tools and just hiring them out to any bozo offering money, with no accountability for who's trying to accomplish what, is the thing that's as far from democracy as you can get.
I feel that the people most hot to argue for a 'unmediated' world, and most freaked out if this is challenged, are the people who are in fact putting a lot of time, money and effort into mediating and manipulating this 'unmediated' world to be unrepresentative of reality in a way that benefits them and their intentions.
Then if you do challenge them, they simply lie to your face and say it's all about freedom versus censorship… because to win, they need to retain the ability to mediate the world to their liking, and even to acknowledge that this is possible suggests the possibility that they are already doing it as hard as they can, and know it.
But this is how it's always been, before the web people got their news from a different set of media companies NBC/ABC/CBS. 20th century media conglomerates didn't have the problem of having to censor viral user generated content but they were still censors, and much stricter censors than the social media companies because their channels had more limited information bandwidth so they had to be much more selective about what to broadcast.
Slavery was how it's always been until it stopped being how it's always been. Saying something is how it is is not a reason why it should be the way it is.
If you want to support it, you actually have to make an argument for it.
I don't know how this ever became an issue of "free speech" in the first place. This isn't a matter of free speech and never has been.
If you call up Rush Limbaugh or Howard Stern in the middle of afternoon drive hour and he doesn't like what you have to say, he's going to hang up the phone on you (assuming you could get through in the first place). This is no different - google has absolutely no obligation to host your garbage on their site.
If you want to post conspiracy theories, go buy a server from dell, pay for a business line from your ISP, and setup your own site, and host your own content. If the content you're hosting violates laws, it'll probably get taken down.
You're free to say whatever you want, nobody else is REQUIRED to listen to you, host you, or advertise you and never has been. About the only way out of that box is if you can prove they were discriminating against you because you were a member of a protected class (in the US) and not because of the content of your data.
Youtube and facebook refusing to host something is not a slippery slope to something being completely censored. Anyone can rent a server in pretty much any country. Saying the biggest companies in the world not hosting you is the same as being completely censored is like saying you will die of thirst if you aren't allowed to drink coke.
> like saying you will die of thirst if you aren't allowed to drink coke.
I will die of thirst though if every source of water has a megacorp in the supply chain that can cut me off from it.
Imagine a world where your tap water is ultimately sourced from Dasani. And imagine Dasani can cut off my water if I say bad things they don't like. That is the world we live in: s/tap water/information/g, s/Dasani/tech megacorps/g
To reach a significant audience on the internet, you must go through a tech megacorp at some layer in the stack. And that tech megacorp can decide to cut you off from the internet at their layer in the stack if they don't like what you are saying or if Twitter mobs put pressure on them to do so.
Internet service providers should be common carriers. If they were, your whole argument falls apart. And that is what we should be fighting for. It would be the easiest to do, with the least odds of bad side effects.
No it isn't. Again, you can rent a VPS for under $10 in almost any country in the world.
> To reach a significant audience on the internet
This is not necessarily true, but it isn't really the point. You can put stuff up and people can access it.
You are not entitled to maximum easy discovery. You are not entitled to be advertised and promoted to a significant audience. The internet is a pull medium not push. If people want to hear what you have to say they can.
> Banning obvious bullshit, albeit akin to censorship, is NARCAN.
I'll agree with your plan for banning "obvious bullshit" so long as I get to decide what constitutes "obvious bullshit". Don't like that idea? Then why should I like it when you get to decide? Or when some Google policy goon gets to decide? I never voted for you or for that policy goon.
Whenever you give anyone power to censor "obvious bullshit", it takes a few milliseconds for "obvious bullshit" to become "political positions I dislike". That's how people work. The disruption to the free flow of information is more harmful than any amount of bullshit. Your "war for democracy" is in fact a war against democracy, since democracy is about building a distributed consensus about what constitutes truth, not about letting unelected censors control the public.
Anyone who tries to control what strangers say to each other is someone who wants to control your thoughts and rule over you. No thanks.
Social media is not some kind of existential problem. What you're claiming is a massive civilization-scale problem is what people claimed about the printing press. Oh no! Anyone can make a pamphlet! This terrible danger must be dealt with! We can't allow the bible to be published in lay people's language! They might misinterpret it if they can read it without the help of priests! Smash the presses! -- this is what people actually thought. Now we think this attitude is ridiculous, because it is.
The real danger we have right now is a few people using their influence over social media platforms to control public conversation and subvert democracy by manipulating the conversation. I am completely opposed to all SV attempts to decide for themselves what is true and what is false and censor eight billion people based on the result. No. They do not get to have that power.
You don't have free speech on a platform like YouTube. It is Google's walled garden, with Google's set of rules. Free speech and censorship are matters for and by the government, and free speech was never absolute to begin with; it has always been nuanced.
I think those fears are entirely understandable - but this is the system working as intended. The problem is not that social media is allowed to do what it does (i.e. give everyone a platform); the problem is that capital has warped the public realm such that a handful of gigantic social media corporations have captured a substantial proportion of the world's attention.
The solution is, imo, to break those corporations up - to ensure competition and to break open these walled gardens which allow nonsense to thrive.
I would seriously caution against any attempt to restrict individual freedoms regarding free speech. The US, for all its problems, is significantly better than many countries on earth in terms of the ability of any random citizen to say what they like without fear of retribution. Granting politicians and mega-corporations the right and ability to effectively negate entire sets of views and opinions will only ultimately benefit the worst people on earth.
It's a tricky problem but to my mind the issue is one of forcing capitalism to fulfil its promise; competition is the key to preventing tyranny in a capitalist system.
>>I hate the idea of censorship and I always have. However...
The first spade is free speech itself. Neither the custom nor the law applies to YouTube. In theory, it's a private site that chooses what videos it shows. In practice, youtube is a newly powerful monopoly. Both YouTube and Alphabet. There's a tension here.
In 2020 reality, political freedoms such as speech, association and even religion are realized on Google, facebook and co. When a politician or activist speaks to voters, it's on youtube. Whether or not breastfeeding is obscene is a Youtube decision. This isn't absolute, but it is to a large and increasing extent true. Your freedom of speech is violated in all but the most technical sense when fb, twitter, and/or youtube shut your mouth.
Social media is not a market, or a marketplace of ideas. At best it's a strip mall where two jovial consortiums own all the stores. This is the place where people decide to vote for X, that the virus is a hoax, or that climate change is real. It's at the heart of freedom of speech.
The second spade you called out well. These sites are corruptible and corrupted. Malicious misinformation spreads well here. This is already causing societal harm. They did this.
Another spade is reality regardless. I'm pretty confident that more, less powerful players, open protocols and such would be serving us better. This wouldn't have fixed everything though. Shared narratives have to be enforced somehow: culture, religion, media. If everyone is on a different stream, the shared narrative is broken. This would probably be true regardless of monopoly, though I think the consequences would be less severe.
Shared narratives are not necessarily true. Truth is not necessarily known, or necessarily in Google's best interest at any given time. In fact, truth is expensive. Formal truth seeking systems are the legal system, scientific method, drug approval process...
Going down this path means that Youtube (also fb, twitter) will be deciding and implementing the shared narrative. Religion? How about cults? Fringe activists? How about fringey elected politicians?
The final spade is inevitabilities.
These aren't could-bes. The really hot current issues (eg the pandemic) are also hot political and even religious issues. This isn't nudity, which works as a canary for freedom of speech. This goes to the heart of political life on day 1.
Gotta make the Star Wars warning: the Galactic Republic fell to raucous applause. Chaos, and the need to suppress it.
> Whether or not an exposed nipple while breastfeeding is obscene is a decision made in their head offices.
This has been the case for the last 100 years, friend.
> “The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. ...We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society. ...In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons...who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.”
...
> “Universal literacy was supposed to educate the common man to control his environment. Once he could read and write he would have a mind fit to rule. So ran the democratic doctrine. But instead of a mind, universal literacy has given him rubber stamps, rubber stamps inked with advertising slogans, with editorials, with published scientific data, with the trivialities of the tabloids and the platitudes of history, but quite innocent of original thought. Each man's rubber stamps are the duplicates of millions of others, so that when those millions are exposed to the same stimuli, all receive identical imprints. It may seem an exaggeration to say that the American public gets most of its ideas in this wholesale fashion. The mechanism by which ideas are disseminated on a large scale is propaganda, in the broad sense of an organized effort to spread a particular belief or doctrine.”
Does this really come close to diagnosing the root problem? I am frustrated when I see this argument.
”Intellectual property is the oil of the 21 century. Look at the richest men a hundred years ago; they all made their money extracting natural resources or moving them around. All today's richest men have made their money out of intellectual property.”
— Mark Getty
The root problem is that the Intellectual Monopoly system destroys the potential for democratic use and understanding of protocols - as well as everything else. Intellectual Monopoly means we are kept stupid because there are gatekeepers - usually people belonging to families with inherited wealth. The ‘Intellectual Property system’ is the newest means of production, whose function is to enclose and monopolize ideas which should in fact belong to all children of the world - especially when digital goods break the rules of economic theory with their near-zero marginal cost of reproduction. So the modern Bougie both 1) parasitically extracts economic rents for something that has been made artificially scarce in the first place, and 2) unorganically stems the rate of innovation, setting a limit on deeper cooperation, coherence/alignment, and harmony. But this reality is obscured by the cultural myths of Magical Voluntarism, Meritocracy and more. We are gaslighting people by telling them they have ‘mental health issues’, when in fact the system has evolved to put a limit on their learning and growth, and it sucks them dry.
We need to get rid of closed corporate supply chains and move to Sensorica-type Open Value Networks that use the Resource Event Agent (REA) accounting system. There’s people working on implementing this using Holochain’s distributed data-integrity engine.
In using such a system (compared to an internal corporate network), we transparently measure the actual material scarcity of the wonderful resources our Mother Earth provides. And then we can both 1) democratically decide who gets to use which resources (and where, when, how and why, etc.), and 2) we don’t have to use the Intellectual Monopoly system to pretend that having a monopoly is what is needed for successful innovation.
What this leads to is a symphony of scalar gift economies.
While I agree that social media is not the root problem, I cannot grasp why a disquisition about Intellectual Property could help us to understand what is going on.
It wasn't bait. It's just that I cannot connect Intellectual Property issues with social media issues.
> Social media is currently poisoning our country
What does IP have to do with a kid joining extreme right (or left) groups, higher rates of teenage suicide, and people believing anything put in front of them?
All of these happen, at least in part, because of social media, and I do believe there is another kind of problem at the root. But I cannot associate it with IP. That's it.
I wanted to say there is an underlying problem. I don't know what it is, but I cannot see why it has to be IP. I didn't say that I disagree.
I did say that I cannot understand the relation between social media problems and intellectual property. This was a chance for you to explain it better (if you were willing to). But... never mind.
"According to Bauwens and Franco Iacomella, late capitalism is beset by the twin structural irrationalities of artificial abundance and artificial scarcity:
1. The current political economy is based on a false idea of material abundance. We call it pseudo-abundance. It is based on a commitment to permanent growth, the infinite accumulation of capital and debt-driven dynamics through compound interest. This is unsustainable, of course, because infinite growth is logically and physically impossible in any physically constrained, finite system.
2. The current political economy is based on a false idea of “immaterial scarcity.” It believes that an exaggerated set of intellectual property monopolies – for copyrights, trademarks and patents – should restrain the sharing of scientific, social and economic innovations. Hence the system discourages human cooperation, excludes many people from benefiting from innovation and slows the collective learning of humanity. In an age of grave global challenges, the political economy keeps many practical alternatives sequestered behind private firewalls or unfunded if they cannot generate adequate profits."
The argument you're making is common and always boils down to this belief: "most people are too stupid to discern the truth"
History shows this belief to be incorrect, and instead shows that any central power holding this belief will eventually commit terrible crimes against humanity.
A small group of "intelligent" people can not possibly out-compute billions of humans' brain processing power.
I don't think people are too stupid to discern the truth.
I think a lot of people are too overwhelmed by the minutia of navigating their own stressful lives to handle a deluge of lies designed by expert propagandists to be easier and more appealing for them to consume than the truth.
The difference is that these aren't leaflets being dropped from the sky on a village of a few hundred people. These lies span our entire country, and they are coming from people who have access to analytical data which allows them to observe how their lie is being received so that they can refine their messaging immediately.
There has never been a propaganda delivery system like this before. History isn't going to be useful here.
What makes you so sure your mind hasn't been pwned in an analogous way targeted towards smarter/more sophisticated demographics?
A priori, I would expect more educated people to be more easily controlled/brainwashed; that's what the original point of education was - to inculcate a certain ideology/worldview.
The people on facebook get weak waves of brainwashing targeted via algorithm; people in universities get a more powerful version of the same.
I hear people say free speech is "a threat to our democracy" often these days. I've never understood, can you clarify what you mean?
Is it an actual threat to your right to vote? Or is it just a threat to people voting the way you like? In which case isn't the answer just to put out equally salacious information in the opposite direction? That seems like the democratic option to me.
...is it even proven that "social media is poisoning our country?"
To be blunt, the only time I noticed people complaining about it was when Trump got elected. Barack Obama, if I remember correctly, was actually praised for his usage of social media and targeting. A lot of this anti-social-media sentiment is more a reaction against him than anything else, and the left is trying to lock things down to prevent a Trump from ever happening again.
What a comically cliche and stereotypical error that people keep making. 20 years ago it was Bush and Cheney saying the same thing. Now it's the liberals saying that the "poor, ignorant masses" need to be saved from themselves. For the record, I'm also a liberal, but an old-school liberal, not the contemporary liberal that eschews giving the same rights to those they disagree with.
Who gets to decide what gets sacrificed? You? What happens when The Other Side gets control? The laws that protect their free speech are the exact same laws that will protect our rights when the opposition gets control.
Never, ever, ever mess with the ideals of Free Speech. No one is smart enough or uncorrupt enough to navigate that properly. The only thing that will save us as a society is completely unadulterated free speech. What we do as a society is educate as best as we can and let the chips fall where they may. Which is hard, because we have the worst levels of education among developed countries, so we are reaping what we sowed over the last few decades. But basically it's herd immunity for information.
OTOH, free speech is not a right in itself, but a tool for the betterment of society. If speech is used for a purpose opposed to that - to bring on fascism or to risk our extinction - then it's natural, and necessary, to react against it.
> OTOH, free speech is not a right in itself, but a tool for the betterment of society.
This is only true for censors. There is no such thing as a "right in itself," there are things we agree will be rights, and that we choose to guarantee. You choose to not guarantee freedom of speech, and rationalize it as some sort of natural law.
I do worry specifically about what defines “misinformation”—sure, let’s ban “the vaccine has a microchip programmed to kill you” or something, but would we ban a vaccine recipient reporting adverse reactions they experienced? This will probably be the most widely and rapidly deployed vaccine in the history of the world, and I think it’s important that any and all necessary information becomes widely available, and I don’t necessarily trust the proper and official channels to always report all of that information.
Adverse reactions are not something to be taken lightly. A proper investigation is required to determine the root cause. One person making a youtube video would be extremely misleading. You have to look at what medications they're currently on, whether they're fully healthy, etc. Information is neutral, but people can misapply what they think they know and project their ignorance in a way that will be amplified. I work in biotech (vaccines) and I see this kind of faulty thinking applied to vaccines quite often, so maybe I'm more sensitive than most.
As a very staunch advocate for free speech, I understand where you are coming from, but I have some major issues with this kind of position.
For one, while the scale of information is at a new level than in the pre-internet era, I don't think the fundamental lessons from the Renaissance or the Enlightenment change because of that. This "I hate it but it's justified because of dire circumstances" position is not new, and has, as we learned from the above, been the primary tool of abuse by totalitarians and tyrants. Forgetting those lessons and falling for the same trap again is naive at best.
Second, any discussion on this topic inevitably brings out everyone shitting on the most obvious surface issues as brought to mass consciousness by the mass media, but almost never do I see discussion about nation states, corporations, militaries, and other non-individual actors and their actions in this realm, which means to me that people are either ignorant of this or purposefully omitting it. Sockpuppetry and other fine-tuned military psyop techniques are at play, and if they get ignored, what will happen if the pro-censorship people get their way is that they will silence individual voices but end up doing almost nothing to stop the more sophisticated and much more dangerous things going on. I'll know I've shown up for a good conversation when it isn't me who is bringing up Ivy Lee and Edward Bernays, Operation Mockingbird, the Church Committee, sockpuppetry, OCA (online covert action), media conglomerate ownership, etc.
Third, I think the claims that "this is a fight for democracy" are misguided, and likely to be abused in the search for justification of the censorship push. To me it's just another "we have to stop the terrorists!", a lie which I experienced the real consequences of in the form of war, one the country has never recovered from in the realm of freedoms and rights. Lies we now know were about control and not about protection.
I'm telling you now, so please listen, on this topic, this is a patriot act moment (decade). Please, please don't fall for it, and have the courage to allow people not just to speak their minds, but as RMS talks about, read what they wilt, as it's just as much about the freedom to read as the freedom to speak.
As for the particular topic du jour, if you want some credible reporting that slightly bucks the mainstream narrative, I suggest Whitney Webb[1][2] as a good journalist doing this work. I've been consistently impressed not just by her scoops and insights, but by her willingness to say "I'm not going to speculate, let's stick to the facts" when being interviewed.
Indeed. One of the reasons why Germany didn't renazify after WWII is because the German government criminalized Nazi ideology almost immediately.
Our common, absolutist interpretation of the First Amendment dates only to the Brandenburg v. Ohio ruling in 1969. If we didn't have conservative presidents unbalancing the Supreme Court with ultra-conservative justices, we might have a Supreme Court willing to re-evaluate the First Amendment's applicability in light of widespread hate speech. But alas, this is Trump's America...
Along with what I suspect is a large number of HN people, I also feel the same way. The high ideal is for people to say what they want, and we'll all think and decide what we think the facts are, and what decisions should be made.
Now, it seems to not work. It is patently the case that a lot of people simply cannot think.
One issue is that there's too much noise, as you mention. Money comes into it as well.
But underlying that is a lack of education. Western European constitutions tended to be written just about during the early industrial age, with updates up to about the mid 20th century.
That world is gone, but it left a lot of baggage for the world we live in today. In the old world, you had one chance to be educated, and it tended to be for a short period. Let's not forget that this was also a highly polarized world, where kids needed to be told which side of the great struggle between ideologies they were meant to support. So it's not as if we had a world of free thinking before the current abyss was reached. But importantly this world, especially in the West, also pretended that everyone was free to speak as they liked. We did educate a few people in how to think critically, but most people just got told what to think. If you ever heard your gran spewing out some mnemonic like ROYGBIV or the times tables, you know what I mean. As soon as you ask a question that is slightly inquisitive, there's no answer.
We had authorities back then too, and the system kept them on a pedestal, to make it easy for everyone to think as they were supposed to. The authorities got all the air/paper time, but also they made sure that the Overton window was kept a certain size.
Now there is so much airtime that everyone can have a say, and it turns out a large number of people simply can't think for themselves, because school is still pretty parochial. "Learn your times tables, here's your new opinion about helping the planet". (I don't hate the planet, and I agree with some of the things they tell the kids, but the way I see schools talk about it... it isn't thinking, it's indoctrination.) For an example, when I'm on FB looking at the US system, some older person will inevitably say "You kids don't know anything, we have a Republic not a Democracy, each state gets 2 senators". Which completely misses the point of discussing whether that's a good idea.
School never taught people how to think critically. The closest we came to it (I did the IB Diploma) was a bit of math and science, and only the fortunate ones figured out that it's not about learning a bunch of facts, it's about how humanity came to learn those facts, and how we could potentially learn more. Nobody examines you on "how would you find this out?", because it's hard to fit into the industrial school system. I'm good friends with a statistician, and he says even PhD folks have problems in this area.
What I see online is this world where a lot of people are simply trying their way at thinking critically. A lot of stuff reminds me of high schoolers at debate club. There's this void into which you put conspiracy theories, and people try their way at arguing one way or the other. Only thing is, they are adults with the right to vote, and with little guidance. In a way it's funny, like that flat earth documentary where the people actually buy a gyroscope and actually try to measure the flatness of a canal.
So what should we do? There's no satisfying answer. Democracy was thought up by some men-of-letters who didn't think most people should have a say. I'm not an expert, but it sounds like they only thought smart people should have a say. It would be sad if the only way to move forward is to beat into people what they are supposed to think, or who the authorities are, because ultimately we want reason to be the authority, not specific people.
I'd be interested to know what your thoughts are on the UK and German models of "free speech" where no one has an absolute right to say what they want - certain subjects such as Holocaust denial or promotion of terrorism are banned. You could argue that we have been on a slippery slope for a long time but you could equally argue the slippery slope is a fallacy.
I think it's important to think about what the actual purpose is of the rights that we value. The right to free speech is simply a shortcut to a more fundamental right to allow people to live freely without undue influence from others. It may be that the absolute right to free speech might not be the best way to achieve that.
>"The right to free speech is simply a shortcut to a more fundamental right to allow people to live freely without undue influence from others."
You are asserting a series of very controversial points here:
1) Some rights are more important than others.
2) Positive rights can override negative rights.
3) 'Living freely without undue influence from others' is more important than 'the right to free speech'.
Which philosophical or moral framework are you using? This may be compatible with utilitarianism and Rawlsianism (though you'd have to do the numbers on the long-term impacts, not just your short-term goals), but I'm not sure how other ethics would support it.
Sorry, maybe I didn't express it correctly, but my thought was that nobody believes the right to make sounds from our lungs or put words on a page is fundamental - what is more important to us is that our views and opinions are heard by others and that no authority can choose for us what we hear. Living in this way gives us more freedom to choose how we live and allows us to decide who to be influenced by.
Now take the example of promotion of terrorism - it's possible to argue that by allowing someone to post certain types of messages to a group on Facebook, they are creating an undue influence on the people in that group. For example by asking his followers to reject the views of outsiders (a common tactic in many religions, cults and conspiracy groups) a terrorist could be argued to have taken the freedom to hear other viewpoints away from his followers.
Now you can of course say that any terrorist or religious or cult leader is acting within a system of free speech - no one has technically had their freedom taken from them. That is true, but the reality of how humans choose who and what they listen to create the same effect as if they had lost those faculties.
You're now arguing that you also have a right to be heard? That's quite a right!
I am not sure how you would define your 'undue influence' test, as it seems internally inconsistent:
>"For example by asking his followers to reject the views of outsiders (a common tactic in many religions, cults and conspiracy groups) a terrorist could be argued to have taken the freedom to hear other viewpoints away from his followers."
Do you see the inconsistency here? You are looking to forcibly silence one group, so they can listen to 'outsiders'. From the other side, it would look like you're the one advocating in favor of an even more extreme 'undue influence'.
> Do you see the inconsistency here?... From the other side, it would look like you're the one advocating in favor of an even more extreme 'undue influence'.
I can see why you think there is an inconsistency here but I don't think there is.
Most people would say that the right to free speech includes two parts. Firstly we have the right to say what we want, to express those ideas to others and to be able to reach others without being censored. Secondly we have the right to hear the ideas of others without them being censored by the government. The first part of this right allows us to change and hopefully improve the world around us. The second part of this right allows us to use the best information available in order to lead the best life that we can. Free speech is the best tool that we know of to achieve these outcomes but it is still simply a means to an end.
When I give the example of a terrorist leader influencing his followers, I am trying to give an example of someone who uses the first part of the freedom of speech to improve the world around him (in his opinion) but who takes away the second part of the freedom of speech from his followers. That of course depends strongly on whether or not those followers would be able to lead better lives outside of his influence. It's a very fair argument to make that we as a society cannot know for certain whose ideas are right or wrong and therefore there is no kind of censorship of ideas can be justified. One man's terrorist is another's freedom fighter etc.
As another example, imagine a scientist who is interested in the genetic differences between races. He may decide based on years of research to conclude that one race has a genetic advantage over another and present the evidence that formed his conclusion. However, being a scientist, he would be open to different interpretations of that evidence and accept new evidence that may alter his conclusion.
Alternatively, a racist keen to promote his race as superior to others may present the same conclusion, but present it in a way that discourages critical thinking or alternative interpretations. He may encourage those who listen to reject the views of others as biased. His followers would become closed to new evidence. This doctrine could be considered a violation of his followers' free-speech right to use the best information available in order to improve their lives.
As with other freedoms we often decide to restrict them as soon as they limit the freedoms of those around us - for example, you are perfectly free to sing loudly in your own house during the day, but as soon as you start to disrupt the sleep of your neighbours when you practice your songs at 3am then you should have that freedom limited.
In the same way, it could be argued that we should allow people the freedom of speech only in so far as it doesn't limit the freedom of others to benefit from that same freedom. In a world where people can be deceived or oppressed purely by speech alone, you have to consider when and how it should be limited. In an absolutely ideal world, we would make no topic of speech off limits - we would simply rely on people's education and critical thinking abilities to allow them to make the correct choices. However, in the real world, many individuals such as children cannot be trusted to think critically, or not to be deceived by misinformation.
To bring this back to the original query - what are your thoughts on the UK and German government approaches to free speech - my view is that our governments have absolutely no way to control how ideas are presented - they cannot ensure that ideas such as differences between races are considered in purely scientific context. They cannot control how someone may use that information to cause harm. Their ultimate goal is to prevent people from being harmed by these bad ideas but since they have no ability to control how someone presents their ideas, they fall back to censoring the ideas themselves. That is not to say I necessarily agree with this policy - but if you consider how simply limiting certain ideas can increase the freedom of those who might hear them, it's possible to see some kind of justification.
I think that trying to develop your own ethical/moral reasoning on a subject this complex is quite haphazard; you seem to be trying to carve-out/describe a nuanced position, but there are a lot of problems with your reasoning. To be clear, this is a tough subject, and I am not trying to impugn your motives in any way.
My view is that government authority is at its perigee when it comes to free speech, and that any regulation is both problematic and suspect. This stems from Huemer's convincing arguments in "The Problem of Political Authority".
If you've been found not guilty of a crime in the US and admit to it afterward, you cannot be prosecuted again by protection of Double Jeopardy, correct?
Is that not the same thing as saying some rights are more important than others?
I don't see how your Double Jeopardy example demonstrates 'ranking' of rights. There's no right to punishment of criminals. The US Bill of Rights is a list of negative rights.
I would say that (2) and (3) go further than (1). You could say that they are subsets of (1).
> I don't see how your Double Jeopardy example demonstrates 'ranking' of rights.
I was thinking, for example, of one person's right to freedom from violent crime over another person's right to not be put on trial again for the same crime.
> I would say that (2) and (3) go further than one. You could say that they are subsets of (1).
Well, I think the non-aggression principle applies without reference to the 'justice system'. That is to say, whether or not someone has committed a crime, for which they may or may not have been convicted, they have a moral obligation not to perpetrate violence against you. Post-hoc detention or punishment does not 'correct' or remediate aggression, and I am not aware of any right to prevent hypothetical aggression.
I guess we're going off on a tangent here, but then I'm not sure what a person's right to freedom from violent crime actually provides them, if not a systematic process by which they can get justice.
I don't think there is a real "right to freedom from violent crime", at least not in the enforceable sense.
I would say there is a moral duty not to commit violence against others, and there is a moral right to defend one's self, but I don't see how you really guarantee "right to freedom from violent crime", so it really seems more like a dubious promise. This may be a consequence of my general skepticism of positive rights.
"I'd be interested to know what your thoughts are on the UK and German models of "free speech" where no one has an absolute right to say what they want"
The US version of freedom of speech was created based on personal experience with the British model. The American version is broad and almost absolute. The British model was a document that said you had it and a crown that would execute you or take your property for your words (usually when those words were against the crown).
Freedom of speech is fundamental to freedom of conscience, which is different than "live freely without undue influence from others". I'm very happy with American style freedom of speech, where the cure to speech you don't like is more speech instead of jail and violence.
The German model is not substantially different from the US model; US freedom of speech is mostly lip service, and there are plenty of related laws that can and will be applied against free speech in the US, such as laws against terrorism and laws against obscenity. One major difference is that Nazis are more protected in the US than in many other countries. This is purely for historical reasons, though. The mainland US was never occupied by Nazis, so consequently it also was never "de-nazified".
However, if you're a communist or an ISIS sympathizer or want to publicly display some nipples you won't necessarily have more free speech in the US than in Germany.
The idea of an arbiter of what is and isn't free speech is deeply repulsive to me. So is cough medicine. So is chemotherapy. I can't see every cell of the cancer but collectively, there's enough of it visible to shake me. Do you kill the poison with a lesser poison? Depends on your level of concern, I suppose.
Philosophically, I'm in a weird place right now to be honest. If you told me that our country was going to round up all propagandists, science deniers, and white supremacists and throw them in jail based on hard evidence of purposeful misinformation or credible threat -- regardless of how those things are defined and by whom -- I think I would respond, "k"
I acknowledge that I would be resolute in lazily ruining people's lives based on a perceived threat to the greater good out of some longing to be bored again. That's scary to me.
I'm a nerd, wholly driven by logic (to this point), and I can feel the boundaries of its apogee being prodded. It's not just the country that's sick -- my ideologies are also under the weather.
The tl;dr answer would be that personally, I'd be fine with banning Holocaust denial or the promotion of terrorism because at the end of the day, I'm just an everyday liberal software engineer. Neither of those bans would ever apply to me.
That is, unless someone decided to attack me with false or shaky accusations. Therein lies the problem, eh? But what's the alternative? There's no winning.
>>The tl;dr answer would be that personally, I'd be fine with banning Holocaust denial or the promotion of terrorism because at the end of the day, I'm just an everyday liberal software engineer. Neither of those bans would ever apply to me.
"Neither of those bans would ever apply to me" is a pretty revealing statement. In fact, it's almost so perfectly comical that I would give way better than even odds that your post was a parody if someone just read it to me out loud randomly.
The term "science denier" itself is bunk. Part of science--science being imperfect and reliant on experimentation--is arriving at contradictory results. It is a natural part of the scientific process and is codified in the scientific method.
What is the lesser evil here? Are some scientists so beyond reproach that any incongruent findings are illegal? What if they're right? What if they're wrong but there was no intent to mislead? What if the authority that anoints untouchable scientists is compromised or otherwise acting in bad faith?
Nothing good is going to come from giving credence to a ridiculous notion like "science denier."
Free speech isn't about all voices being equal though, it is about the freedom to speak out against the government. It's an important distinction because the latter is needed for democracy to work, whereas the former doesn't work in practice because some voices should carry more weight (like subject matter experts).
Ironically the former seems more prevalent than ever with every nutjob on social media voicing their opinions, while Trump has been systematically eroding the latter with his "fake news" campaigns and reporter biases.
You're posting this on HN, a platform that has moderators and doesn't allow certain things to be said. So you implicitly agree that there is a trade-off between free speech and ability to live in a community.
Unlike the people in this comic, I'm actually making a point, which you seem to have missed: community, by necessity, doesn't work with absolute freedom. What matters is what we agree to compromise over and why we do so.
No, not really. I'll put up with draconian and biased moderation to skim curated tech articles in the morning before I start my day. Don't confuse that with how I exist in society.
I support some extremely limited baseline censorship. Pretty much just "don't incite violence or get people harmed/killed". But I guess you need a World War ravaging your continent first to get there.
Social media is a disease, regardless of the messages dispersed over it.
However, a war for democracy - democracy is about what the people want, not what the self appointed overseers want. Unfortunately if the majority want to indulge in corrupt thinking then that's democracy. That's how you end up with Trump.
If you want to fight for democracy then you have to let people express themselves. Free speech is the cornerstone of democracy so a fight for democracy is a fight for free speech.
What you seem to be asking for in terms of sacrifice is an authoritarian intervention where we appoint people to determine what is truth and ban alternative thinking. This obviously is just as bad as what is going on now. Once you have the truth committee in place you can predict where that goes once "misinformation" is under control. Start with enforcing scientific truths, and then leaders will abuse this power to further their own truths. Imagine corporations lobbying the Department of Truth.
We are fked for now. I imagine a lot more pain and difficult change is ahead of us. I lament for pre-internet times. I used to champion the idea of the internet giving equal platform for all but I never foresaw the internet being such a huge part of everyday life, and with that comes the good and the bad of humanity.
The 2020 US elections are a storm coming at us, and no one knows just how bad the west will be ravaged after it's done.
So YouTube becomes "peer reviewed", accepting only proven research. BFD. They are just going to lose out on all the ad revenue for videos that will go elsewhere.
I disagree. Youtube being the authority on information is as far removed from democracy as you could be.
The reaction is based on unsubstantiated axioms. Does COVID-misinformation do significant damage? I don't think so. Are there concrete examples?
Health information gets censored here, while simultaneously political information gets censored on Twitter because it implicates a presidential candidate; the stated reason is that it is information obtained by a hack. Is that good for democracy, a form of government where voters should hold representatives accountable? This is not a Trump endorsement, but I feel driven to state the fact.
> Truth, facts, and hoping that people can apply logic to find their way to them isn't enough anymore and the people exploiting this advantage are getting better at it.
Again, I don't think so. Are there concrete examples where this happened? And if so, is the answer an advertising company that has some easy ways to distinguish itself by removing unpopular opinions (yes, they actually are unpopular)? It doesn't really make sense to me in any way.
Who are these manipulators? Putin? Trump? Cambridge Analytica? Communists? Capitalists?
Where do we actually see dangerous misinformation? The worst decisions are made from fear and I believe your point of view adds to panic reactions.
> say whatever we want, regardless of our status or wealth. This has been corrupted.
Who do you think would be able to speak freely if you put constraints here? It won't be low status or low income people.
Disinformation on the internet is not a real threat. Not even compared to Nigerian prince scammers. Not having an authoritative voice might look scary, but it is actually a good development. Otherwise I would suggest church or something with more rigid belief systems.
The point is that youtube is not an absolute authority and truth reference. So when they decide what is misinformation they position themselves above public scrutiny.
It's not that the masses are dumb or smart. The few deciders from Youtube are not truth dispensers that we should all trust above all else.
And the few people on the editorial boards of selected news papers or wherever are?
Do I need to mention the "kicking babies out of incubators", or the "Iraq has chemical weapons" bits?
Do you honestly prefer to give the manufacture of consent machinery to the exact same people that had it beforehand? The very reason why people don't trust the media is because the old "guardians of the news" thoroughly failed at their supposed mission.
That is a serious problem. Traditional media, in print and on TV and so on, has failings. The problem with all the "fake news" crying from certain people is that it becomes hard to call these failings out. And the fake news crowd has some truth to point to, which makes propaganda so much more powerful; the most powerful lies contain a grain of truth.
No, more like reducing risks to their brand. Covid is historic and will soil the public image of anyone who doesn't treat it as the public health hazard that it is.
YouTube is simply minimizing brand risks, which could very easily escalate into costly legislative fights; in other words, they are maximizing profits in an uncertain time.
They decided this move is less damaging to them than not doing it.
If democracy requires free speech to be sacrificed, why do you think keeping democracy is a worthwhile endeavor? Perhaps you should be questioning the validity of universal suffrage.
If your mode of government can't withstand fake twitter bots, you should probably just toss it and start over with something else.
Fake Twitter bots are also a form of censorship. Censorship does not work just by imposing silence, but also by overwhelming with noise. Free speech is valuable because it leads to a contest of ideas. If it degenerates into a contest of who has the resources to scream louder the battle for free speech has been lost already.
there seems to be a rather simple fix for this: a bloom filter for ideas.
place this within the flow of information or at the intake of the government, take your pick. but it's obvious to me that democracy/universal suffrage gives incentive to actors (both foreign and domestic) to wage a mind-control war against the population.
if people are legitimately willing to question something as foundational as free speech, they should really consider the other parts of the foundation too.
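(for what it's worth, a literal bloom filter is just a probabilistic set: it can answer "definitely not seen" or "probably seen", never more. a toy sketch in python, purely illustrative of the data structure and not of any actual filtering system:)

```python
# toy Bloom filter: answers "definitely not seen" or "probably seen".
# false positives are possible, false negatives are not.
# sizes and hash counts are arbitrary illustrative choices.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, item: str):
        # derive k bit positions from salted SHA-256 digests of the item
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = True

    def probably_contains(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

ideas = BloomFilter()
ideas.add("some banned idea")
print(ideas.probably_contains("some banned idea"))   # True
print(ideas.probably_contains("an unrelated idea"))  # almost certainly False
```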
You are suggesting that we curtail 1st Amendment rights in order to save social media platforms. I would argue instead that social media platforms have been facing the abyss for some time and cannot be saved. New censors beget new methods of exploitation and platforms literally adding new continents of users will only add to the noise. Let’s not fall into the trap of throwing away our guaranteed rights for short term convenience. Instead let’s consider that social media is not a source of truth going forward and adjust accordingly.
Once you give people the power to censor things, you can't assume that the people with that power will act in good faith forever, even if they do currently. Would you want to give that power to your political opponents? If not, it's better that nobody have that power.
Someone always has and always will have that power though. I would rather the owner of the platform than the government have that power.
Think about this: would you rather have the power to decide what speech you allow in your house, or would you rather the government decide for you? Because those are really your choices.
Can facts actually be overwhelmed by voluminous bullshit? That in itself is bullshit. If I had a billion to spend on insisting the sky had polka dots, very few would believe it, and most who said "the sky is polka-dotted" would be taking the piss. Confusingly, the zeitgeist itself is bullshit that unwittingly describes itself as the menace.
Really, the example chosen was harmless - what makes the bullshit believed isn't plausibility but the fact that /they want to believe it/, and you cannot censor inner desires, fears, and self-justifications.
But the facts don't stop being true even if they believe the bullshit. That is what makes successful rationalizations so dangerous: reality comes home to roost while they insist it is safe to play in the street because cars always swerved out of the way before.
"The video platform said it would now ban any content with claims about [planetary motion] that contradict consensus from local [priests] or the [Catholic Church]."
__Any__ qualified individual/institution that deviates from the "authority" falls under this guidance. That means no dissent is tolerated, regardless of qualification of the source.
Authoritarianism is authoritarianism.
> The ability for one person to amplify their voice or ideas via hundreds or thousands of bots, paid assholes, and gullible people who lack the capacity for critical thought is a problem. It's a big problem.
The problem is, actually, "gullible people who lack the capacity for critical thought". It doesn't really matter if it is "one person", or a tightly knit group, or even one's own government.
> The ability for one person to amplify their voice or ideas
A wonderful ability, and a cherished Human Right.
> Truth, facts, and hoping that people can apply logic to find their way to them isn't enough anymore and the people exploiting this advantage are getting better at it.
That is your opinion, and imo highly likely not even an original opinion. Where did you first read that "truth, facts, and [logic] isn't enough anymore"? It's a very 'interesting' meme, don't you agree?
>The problem is, actually, "gullible people who lack the capacity for critical thought". It doesn't really matter if it is "one person", or a tightly knit group, or even one's own government.
Since you can't wave a magic wand and expect people to get smarter in matters of public health, you have to come up with a solution that works for today's populace. IMHO the only solution is to artificially amplify the voice of experts. Yes, some may call this "gatekeeping", but we don't want to give equal time to crystals and homeopathy as we do to established, accepted science.
What are you talking about?! Don't blame social media; the biggest and loudest source of COVID-related misinformation has been institutions like the WHO.
There is very little science (we don’t yet have enough data, and not enough time has passed), so all we have to go on is “broscience” (smartly connecting anecdotal facts and whatever little data we have, and drawing sensible conclusions). By designating one opinion as “officially correct science”, YouTube is seriously damaging humanity.
COVID-19 is a novel virus. All information is misinformation until it isn't. I could say that the reason that some people are asymptomatic is that the virus is lying dormant in them like herpes. Can anyone credibly refute that?
Logic would dictate that a novel virus -- novel in that we don't know everything about it -- is a threat to the survival of our species as a whole until proven otherwise, and that we should collectively do everything that we can to minimize its spread.
What I know to be true is that to curb the infection rate, we should stop spitting in each other's mouths. Yes, there is science at the base of that assertion, but mostly it's just common fucking sense.
If you can tell me how basic common sense is being subverted on a massive scale without the use of a platform of social manipulation, I'm willing to hear you out.
Why do you want to curb the infection rate? What’s the end goal?
I’m strongly on the Swedish/Trump side here. We know that the virus is practically harmless for youth and young adults. If immunity persists, we should let them get infected ASAP while protecting the old/vulnerable. If immunity fades, then waiting for the vaccine is pointless anyways. As a young person, I won’t be taking the vaccine anytime soon anyways, not because I have anything against vaccines, but because I have many doubts about barely tested drugs in general (I’ll take it after it’s been tested for a few years).
How do we know it's harmless for youth and young adults? You know what you've seen. Again, this is a novel virus. It's new. It could be setting up shop in your brain, heart, kidney, liver, semen -- wherever.
It could be invisibly weakening your lungs just enough that a flu can come through and kill you.
I would suggest a deep dive on mercury poisoning, radiation poisoning, lead poisoning, asbestos, and whatever other reminders you may find throughout history that assuming you know what nature is up to without the benefit of lengthy scientific review never turns out well for the cocksure.
Well, we won't know what its long-term effects are for the next 70 years... hell, we're even now figuring out that the herpes virus might have something to do with dementia...
And say we get a vaccine next year. Don’t you worry that if we relax lockdowns, we’re just gonna trigger the next devastating virus?
Let’s face it, life is a risk, and we accept a certain amount of elevated risk in exchange for comfort/utility (e.g. unprotected sex, industry-produced food, driving). This virus seems in line (at least for young adults). So the only real risk here, is government repression.
I don't know about "practically harmless for youth and young adults" - but as they say, I believe in your right to express your opinion.
Seeing the downvotes, I suppose the majority here disagree with the statement. But at least the comment is still readable/visible. I think it's healthier for public discourse to have differing sides and points of view expressed, to argue in a good sense of the term, by logic and evidence, to hopefully arrive at some common understanding.
"Fact checking" and judgements of "misinformation" are necessary, I can see, but a question remains - who checks the fact-checkers, and judges the censors?
I don't trust Google to be a neutral party in such decisions - maybe in this particular case they're right, but we shouldn't depend on a for-profit corporation to be a trustworthy judge of anything in general.
Do they have the right to ban whatever they see unfit on their platforms? Yes, I think so.
Completely false. The WHO claimed there was no evidence of human to human transmission after doctors in Wuhan were already getting sick, failed to share warnings from Taiwanese researchers with the rest of the world, argued against travel restrictions from China and also claimed mask use wasn't helpful.
They did a horrible job that has cost many, many lives. I've been in Taiwan and was gritting my teeth while diffing what they were saying vs local reporting from December through March.
China and the WHO confirmed human-to-human transmission on Jan 20, before any doctors died and before China locked down Wuhan. Anyone who says or implies otherwise is lying or misinformed.
Their actual message is exactly what you should want and expect from a science based organisation:
Preliminary investigations conducted by the Chinese authorities have found no clear evidence of human-to-human transmission of the novel #coronavirus (2019-nCoV) identified in #Wuhan, #China🇨🇳.
There is nothing false or misleading, and they updated the public as new information came to light.
I can't emphasise enough how correct this behaviour is for a scientific organisation.
And for the sceptics who will claim China was covering something up: why didn't they shut down Wuhan until 23 Jan?
The Taiwanese CDC shared evidence in December that strongly suggested human-to-human transmission[1]. The WHO chose not to share this communication and went further by publishing the above tweet which suggested the exact opposite.
Their co-lead on COVID-19 even went so far as to hang up on a journalist[2] who asked about Taiwan.
If this is your idea of "correct behaviour" for a scientific org, I cringe to imagine your vision of a poorly run one.
I'm not defending WHOs treatment of Taiwan. I think that incident was atrocious.
Your link says (about the Dec 31 email): "Public health professionals could discern from this wording that there was a real possibility of human-to-human transmission of the disease. However, because at the time there were as yet no cases of the disease in Taiwan, we could not state directly and conclusively that there had been human-to-human transmission."
That's no different to what China or the WHO were saying in this timeframe. The constant "no confirmed human transmission" were because everyone was watching for it.
Again, there is no evidence China hid anything here, and Taiwan doesn't say anything different.
I apologize if this is presumptuous of me, but I get the impression that you don't live here, haven't followed this topic in much detail and probably haven't read anything at all in local media from that time frame.
Here's more background:
Dr. Li Wenliang's email warning colleagues about the outbreak was sent on December 30th. Multiple Taiwanese doctors were working in Wuhan at that time. Taiwan's very early and very small outbreak was from people evacuated from Wuhan. The CDC warnings weren't about what was seen in the Covid positive evacuees in Taiwan. The warnings were based on direct experience in Wuhan and they did suggest human-to-human transmission.
There was no hard proof, but there was a great deal of circumstantial evidence. Yes, it's possible Taiwan's CDC was overly suspicious due to memories of how SARS was covered up in 2003. However, given the way epidemics spread, the wiser course is to take circumstantial evidence of transmission very seriously.
> There was no hard proof, but there was a great deal of circumstantial evidence. Yes, it's possible Taiwan's CDC was overly suspicious due to memories of how SARS was covered up in 2003. However, given the way epidemics spread, the wiser course is to take circumstantial evidence of transmission very seriously.
I agree with this entirely. But I don't think there is any evidence that the WHO - or China - thought otherwise.
I'd note that the WHO bulletins from both Jan 5[1] and Jan 12[2] mentioned "Based on the preliminary information from the Chinese investigation team, no evidence of significant human-to-human transmission and no health care worker infections have been reported." (Jan 12, Jan 5 was similar).
Now this turned out to be wrong, but that doesn't indicate a cover-up. As your link from Taiwan said, no one definitively knew.
I'm very aware of the messages (not emails) shared by the doctor. I know he was forced to retract them by the regional government, and later exonerated by the central government. I think the dates here are important - as you mentioned it was Dec 30 he sent them, and by Jan 20 the central government had taken over. I do think the Wuhan regional government did try to minimise news of the outbreak, and I think criticism of that is fair - but needs to be tempered by acknowledgement that the central government acted relatively quickly and didn't try to hide things.
I think it's likely that external pressure from Taiwan, Japan, Korea and elsewhere made it clear to the central government that they needed to intervene.
I hope we can agree that arguing about spilled milk doesn't help anyone, especially when this spilled milk is now being used to basically discredit very relevant organizations in the midst of a pandemic. But maybe you can share some recent fuck ups from the side of the WHO - real fuck ups, not opinions that changed over time as they learned more and more about COVID-19.
Refusing to deal with Taiwan's CDC in a reasonable manner is a massive and continuing fuck up.
The WHO is a politically captured organization lead by a man who covered up three separate cholera outbreaks in his own country. I have little respect for or interest in them.
No, January 2020, as a reply to the comment above about the WHO revising their opinion on human-to-human transmission. I read the situation reports a lot as well; I guess I'll start doing so again.
Regarding the GP: he attacks the WHO for changing their minds on human-to-human transmission because he found a statement dating from before the one you shared. By that logic you could also attack the WHO for not talking about COVID-19 before it struck - same logic, and just as wrong.
No, they also protested the banning of flights (e.g. when Trump banned flights from China and later from the EU), praised China on their pandemic response, ... and most recently, they came out against lockdowns only just now, even though others (e.g. Trump, Sweden, ...) had been promoting that policy for months already.
China, or rather the local authorities, screwed up in the early days. Then they got this thing under control quite fast, with harsh measures.
It is still not possible to tell whether the Swedish way was the best way or not. Lockdowns worked in the early days; the goal was to avoid overloading hospitals. Stuff like SARS and MERS didn't spread by plane to a large extent, did they? So not banning flights kind of made sense, until people learned more and their opinion changed.
Lockdowns helped halt the spread when the virus wasn't already everywhere, at least much better than they would now, when the virus has spread across the globe, more or less.
And just yesterday, Germany and France reinstituted lockdowns. Things change, and I'd rather listen to people who change their minds with changed facts instead of broken clocks that are right twice a day.
The YouTube suggestion algorithm is the problem! Create a new account, watch any political video, and then click through the suggestions. Keep clicking through the suggestions and in an hour you will be on some extremist nonsense.
YouTube is the source of this problem, and their solution is censorship? It's ridiculous. Show people the videos they have subscribed or searched for, and that's it. The majority of people don't search out extremist content, they find it through suggestions and related videos.
Twitter, Facebook, and YouTube (Google) are the ones who have created these problems. They profit from this crap when people mindlessly flick through the suggestions. They profit when tweets, FB posts, or videos generate discussion and clicks. They profit when bots post content that actual humans click on and discuss. They have created these problems to pad their pockets. And now their "solution" to it is mass censorship.
What an outrage. These companies need to be absolutely shattered into tiny little pieces.
You don't even have to watch a political video. I have crap popping up all the time, "Watch this professor get destroyed by <screamer>!"
I know youtube is a shithole and yet it still surprises me. My son's favorite kid's show is only on YouTube so I created a new account, subscribed just to their channel and still inappropriate content shows up. How hard is this?
I also can't block it for my daughter because she has school assignments that involve watching YouTube videos. Sooner or later that's going to bite the school in the ass.
For curating videos for one's own children, if the catalog isn't changing too often, you might be able to `youtube-dl` the videos, and put the files on kids' devices or your home media device/appliance.
I've been using `youtube-dl` successfully for lecture videos on my TV appliance (supposedly to watch while using the stationary bike). An example command line is in a script that grabs the videos from a conference: https://www.neilvandyke.org/racket/download-racketcon-2019-v...
I suppose one could automate the process, with a simple UI to approve or submit a URL for viewing, and then automatically youtube-dl it and make it available to the kids' devices.
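For what it's worth, here is a minimal sketch of the download step, assuming the `youtube_dl` Python package; the URL list, output path, and format choice are just hypothetical placeholders, and the approval UI is left out:

```python
# Minimal sketch: download parent-approved YouTube URLs into a folder that a
# kid's device or a home media appliance can read. Assumes the youtube_dl
# package is installed (pip install youtube-dl); the approval UI is out of scope.
import youtube_dl

APPROVED_URLS = [
    "https://www.youtube.com/watch?v=VIDEO_ID",  # placeholder, not a real video
]

ydl_opts = {
    "outtmpl": "/srv/kids-videos/%(title)s.%(ext)s",  # adjust the path to taste
    "format": "mp4",                                  # keep it simple for most players
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(APPROVED_URLS)
```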
That doesn't work. My kids are literally being assigned to watch specific YouTube videos on their Chromebooks and then answer questions as part of the ridiculous "distance learning" nonsense that the school district came up with. I can't review and approve each video. And downloaded videos won't be available on Chromebooks.
If you are up for it, you could host the videos on a computer on your network and serve them with Plex. Plex has a decent web-app for viewing your catalog. Chromebooks should have no problem with that setup provided they can get on the same network as the server.
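If Plex feels heavyweight, a rough alternative is to serve the folder of downloaded files over the LAN with Python's built-in http.server; this is only a sketch, the directory and port below are placeholders, and it assumes the Chromebook can reach the server's IP on the local network:

```python
# Sketch: serve a folder of video files over the local network so a browser on
# the same Wi-Fi (e.g. a Chromebook) can open http://<server-ip>:8000/ and play
# the mp4 files directly. Directory and port are illustrative placeholders.
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

VIDEO_DIR = "/srv/kids-videos"   # e.g. the folder youtube-dl writes into
PORT = 8000

handler = functools.partial(SimpleHTTPRequestHandler, directory=VIDEO_DIR)
HTTPServer(("0.0.0.0", PORT), handler).serve_forever()
```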
There's also the added issue that the Chromebooks are probably running through a VPN for district filtering and monitoring. That is at least the way my children's district runs things.
There's a button, "not interested". Click it. My youtube is mostly pretty nice. I get recommendations for cooking videos because I subscribe to and watch a couple of channels. I get recommendations for GDC videos and various programming videos. I mark videos as not interested often. And if there is something I think might give it the wrong idea, I often right click and view it in an incognito window. I have had very few issues keeping my youtube recommendations relatively clean, and I really enjoy some channels: Souped Up Recipes, Chinese Cooking Demystified, Coding Adventure, Veritasium, Kurzgesagt.
I think they were just offering advice to that particular person. I even forgot there was a 'not interested' option. I just keep good company on YT I guess.
Also, I agree you don't even need to watch a political video to see political garbage on youtube. Last night they showed me some bizarre political conspiracy ad saying Pelosi and Harris are going to invoke the 25th amendment on Biden to make Harris the president. YouTube was promoting this nonsense at the top of the feed.
I'm overall sympathetic to the schools and how much work they've had to do to get everyone set up for virtual learning. That said, if I'd been in charge these laptops would be locked down. I'd whitelist only trusted educational sites, and I'd tell the teachers that they'd have to find a way to work with those.
I regularly see shady ads as well. This is the actual outrage, IMO. Before going off and creating a corporate censorship network, they should at least apply some "standards" to what they let through on the ad side.
But I guess it is all too late. The Circle proves to be a pretty prophetic book.
Yes, I totally agree. In this case I was using the iOS app so ad blockers do not apply, and I was also using YT without being logged in. I need to find a better solution for this.
Using this side thread to promote YouTube Premium is pretty much beside the point. I wasn't trying to emphasize my particular dislike for ads. In fact, if the ad is halfway decent, or even interesting, I don't mind getting the occasional ad, especially since I see it as a way of supporting the actual channel owners. What I object to, in particular in the context of this posting, is that I regularly see ads which would never make it through on more local media. Some of these are outright scams. And I wonder: what is the difference between someone spreading "misinformation" on their channel, and someone else paying YouTube to spread "information" which will ultimately be used to scam the user?
Also, YouTube has channels with upwards of 1 million subscribers which consist only of pirated content. I truly wonder how these slip through the cracks.
Your kid will be corrupted by society anyway. What's this whole thing about protecting children from the reality of the real world anyway?
A bit off topic but a serious question: what's your logical reasoning for blocking the content now? It's just slowing down what he will be able to see (by deliberately going around you) within a couple years. I never understood this with parents.
> Your kid will be corrupted by society anyway. What's this whole thing about protecting children from the reality of the real world anyway?
> A bit off topic but a serious question: what's your logical reasoning for blocking the content now? It's just slowing down what he will be able to see (by deliberately going around you) within a couple years. I never understood this with parents.
Children are not little adults. They're still developing and learning things that you and I would consider basic. It makes sense to delay their exposure to certain things until they have the maturity and knowledge to process them properly.
You've never understood why parents protect children from things that they will eventually be exposed to as adults? If your goal is to give your children unfiltered access to "reality," parents are entirely unnecessary after birth.
>You've never understood why parents protect children from things that they will eventually be exposed to as adults? If your goal is to give your children unfiltered access to "reality," parents are entirely unnecessary after birth.
Hey I have a different opinion than you. I think you should respect that rather than say parents are unnecessary after birth. That's rude.
First off, think about it logically. Why indeed do parents have to protect children from things they will not only be exposed to as adults, but will inevitably seek out and successfully expose themselves to BEFORE they become adults?
Your job as a parent is to protect your kid from actual harm, and to teach, feed, and guide your kid. Your optional side job is to protect your kid from "reality"; that's your own personal choice, and there are actual scientific observations of the result.
The podcast below actually illustrates a scientific study on the subject:
A part of it actually follows two blind kids with different parents and different parenting choices.
One parent chose to protect his blind kid from reality.
The other parent gave the blind kid a bike for a birthday present two weeks after the crazy blind kid lost his two front teeth from running into a wall.
Do you want to know the end result of sheltering a blind kid versus encouraging one to explore reality? Watch the podcast to find out. Very interesting if you're a parent. Let's just say one blind kid (now an adult) walks around the world as if he can see and literally uses echolocation to navigate; the other walks the world as if he's a crippled blind man.
The podcast also talks about kids more generally: how kids today only wander around their homes, while kids in the 60s and 70s wandered around the whole town by themselves. It attributes this to different parenting styles and the fear of exposing kids to the real world.
Parents in the 70s were much less fearful about protecting kids from reality. Parents today are scared of everything. But the illogical difference here is that from the 70s to now, crime has actually gone down. Kids today are safer than ever before, yet parents are more restrictive and scared than they ever were.
A handy rule of thumb I employ is that any hot take that includes the phrase "kids today" can be immediately ignored. Any generalization of that scope is laughable.
( Edited after being fairly criticized for not contributing to the conversation )
It's problematic to say that scientists have 'proved' anything. Scientists' conclusions are always based on the available data, and at any point new data may come along that disproves their conclusion. Scientists know this of course, which is why reputable ones are never going to step forward and announce "I have proved..."
Scientists can't prove anything. They can only make statistical correlations and establish causation to a degree.
When I use the word "prove" I don't mean proving things as in math. Nobody means that; it's obvious to everyone what is meant. "Prove" in this context means well established - for example, if I say scientists have proved the theory of gravity.
What's going on here is extreme pedantry. Prove has meaning in different contexts, you have chosen the most technical pedantic context to suit some misguided purpose rather than the most obvious context. There's no point in taking this further.
I am not trying to be pedantic. There are many contexts in which 'prove' might be a reasonable term colloquially.
However you are referring to a social science when you talk about differences between kids of different generations. I think using 'proved' in that context puts you on pretty shaky ground.
Whatever works is what I believe. I feel lots of parents just don't have the balls to not coddle their children, and they justify this fear with rude comments like this.
FYI, the two blind kids actually knew each other when they were kids, and I "kid" you not, the kid raised the "libertarian" way bullied and beat up the other one because he was thinking "wtf, why is this other kid helpless?"
Listen to both podcasts. They're worth it for any parent, as they give a truer, more "scientific" perspective on the coddling going on with kids today.
I'll point up to the original topic. Do you think it's a good idea to let a young child watch anti-vaccine videos? What about political videos which claim that 'the other' side supports killing babies? Young kids don't yet have the context to understand lying and propaganda, and it's not clear when they'd have the mental maturity to understand it.
More specifically, I was raised in a generation where adults didn't worry particularly about children seeing violent or upsetting content and it brought me nothing but nightmares and a lifetime of insomnia.
I would explain the content to them. Explain all the different sides and my personal views on it as well as direct him to content with opposing views. Then I would let him form his own opinion.
My job as a parent is to help him understand. Not to build a bubble.
As for violent content I would tell my kid to just watch out for that and that the content is unpleasant. It's his choice whether he wants to see it. Usually at that age adult content or violent content is uninteresting or unpleasant so kids tend to avoid it anyway.
I will happily concede that your three year old is a more advanced thinker than mine is.
As for violent content, with YouTube specifically it is easy to watch it without being forewarned. I don't know if you are aware of Elsagate but the whole goal is to get kids to watch horribly disturbing content without warning.
A baby can crawl in random directions. Of course I'll protect him from crawling off a ledge. But once he has the capacity to understand and judge, I coddle as little as possible and expose him to as much as possible. Five or six is around the right age.
At 3 I don't let him run across the street. At 5 I tell him to look both ways and I tell him what happens to someone who doesn't, then I let him play outside.
Youtube, fb & such have monopolized online media. Then these problems "crop up," many as side effects of their own antipatterns.
Now they have the public and politicians lobbying them, beseeching them to please do something... They're being handed the power to censor us, to deal with problems that they themselves created.
There is a big difference between facebook and a Standard Oil or a Bell. Those companies were necessary. Post breakup, we still needed shards to produce oil, phone lines and such.
If facebook was crushed to dust, there would be no shortage of social media.
I remember the day I learned that Youtube comes preinstalled on iOS devices -- because the Youtube app somehow (despite never being used, let alone being logged into Google) managed to put a push notification on my home screen -- a push notification for a new upload from a fringe alt-right channel with just a few thousand views. Naturally, every single recommended video on that video was alt-right and far right wing with some Neonazis sprinkled in.
Social media drives antisocial behavior and social division intentionally, because these bring huge numbers of clicks and thus ad money.
Fair enough, according to my Apple ID history I did in fact install it a couple years ago. But the point still stands - the app wasn't used, not connected to any account, yet one day it still pushed extremist content to my home screen.
An app cannot send you push notifications unless you have opened the app and allowed push notifications (or gone into settings and explicitly enabled them, which seems unlikely). So it is incorrect to say "the app wasn't used". It was obviously used, just maybe not in a long time.
Sorry, I'm not trying to nitpick. But it seems like all you've said so far is "An app which I downloaded and enabled push notifications for sent me a push notification recommending a video I didn't agree with politically". The term "extremist content" can mean basically anything depending on who is speaking.
The YouTube suggestion algorithm is also amazing. As a hobbyist I find so much cool stuff completely unrelated to politics. I hope they can fix the bad without ruining the good, is what I'm saying.
Agree. YouTube has some great content on it. Channels that I regularly follow include Technology Connections[1] (despite not being that much into electronics), Summoning Salt[2] (despite not playing video games), PBS Eons[3]. There is also some good political content there (including Lindsey Ellis[4] and vlogbrothers[5]).
Despite this, YouTube really needs to wake up to the fact that some of its content is really toxic and potentially dangerous, and that it potentially suggests this toxic content to an unsuspecting audience. Optimally it wouldn't do that, while still suggesting some of the great content there.
I actually appreciate the suggestions, it's how I have found most of the content I enjoy, and it's an important way to let new YouTube channels get established.
But yeah, I dread when someone sends me a politically charged video and suddenly my feed is full of crazy people.
We cannot expect any actor (human or otherwise) to act against their own best interest. Google/Youtube, an entity with incentives, will act to maximize its share of your time spent with them - it cannot function any other way, lest a competitor do that job better and they are destroyed/bought. We can wish for the underlying dynamics to be different, but they are what they are. All of society is currently lost in hopeless battles, fighting the "tides" - in this case, underlying incentive structures.
You can wish for legislation to save you from your outrage - to shatter these companies into little pieces. But legislation doesn't change underlying incentives. In fact, legislation nearly universally distorts incentives further from what you wish them to be. For example, in this case, fear of legislative action against "Big Tech" having too much power/influence/misinformation-potential, is likely the underlying cause for their inclination toward censorship.
The classic Walmart South Park episode is a nice visual syllogism for this idea.
I offer no solutions at the level of "society", but at the individual level I suggest you consider that "outrage" is a waste of your time and your emotion. Easier said than done, I know.
All else being the same, an actor will also minimize its cost and effort. Youtube can captivate viewers even if its suggestion system doesn't select videos for quality, so why bother implementing that?
How is this different from the outrage factories of the tabloid news?
Is YouTube’s approach to extremist content different from a tabloid in any way other than efficiency/reach?
Are we really complaining about a basic aspect of human psychology rather than a business innovation from these companies?
If we smash YouTube, what about the tabloid press? How do we actually regulate this stuff in a consistent way? (And how do we do it inside the confines of the first amendment?)
I won't accidentally end up on some outrage tabloid news site if I start at AP News and start clicking articles. There is nothing wrong with outrage tabloid news existing, but it shouldn't be something that I accidentally happen upon while browsing normal news. YouTube will just send you to extremist crap if you keep watching suggested videos long enough.
Fair. Is this qualitatively or just quantitatively different from discovery in previous generations; flicking through the channels you see Fox when you normally choose MSNBC. Or flicking through the newspapers on the rack you see USA Today on your way to picking up NYTimes.
I think part of it at least is that you simply get recommended way more things now than then; the shocking recommendations are the only ones we remember.
I do also agree that there is some sort of feedback loop going on here that wasn’t happening as strongly before, and I have a vague hunch that bots / astroturfers may be amplifying these feedback loops in ways that didn’t apply to previous generations.
> Create a new account, watch any political video, and then click through the suggestions. Keep clicking through the suggestions and in hour you will be on some extremist nonsense.
Could that simply be the inevitable effect of Youtube being filled with garbage? It seems that suggestions linked to the current video by some keyword match (or whatever) but otherwise chosen at random would have this same problem.
E.g. if you pick a good political YT video on some topic, and then randomly select half a dozen other YT videos on the same topic, they are probably garbage.
So while the selection process isn't blameless (it could be improved to promote quality videos) much of the fault lies with the quality of the available material.
How would the selection process promote quality? How does the machine know that? You can't tell from the likes and dislikes; such populist metrics would promote garbage all the same. It seems that it would require a large team of workers to curate videos.
The algorithm makes sense in certain cases, such as music or tv shows, but it's not a one size fits all solution.
Another issue is the use of autoplay (which is enabled by default in chrome on my phone), where they keep shoveling more content at you even if you didn't intend to watch more.
> Show people the videos they have subscribed or searched for, and that's it.
I dunno, there's value in suggesting related content. If I subscribe to a woodworking channel, and Youtube suggests other woodworking content... that doesn't seem like a problem.
Related content, sure, but the problem with extremist videos in Youtube (at least for the GP, but also for me and my friends) is that they pop up out of nowhere.
I think the suggestions are insidious and lead to binge viewing by design. However, adults are adults, and they can make their own decisions. I am okay with suggestions in general but maybe they should be completely removed for news or political content. And they should be removed for kids accounts.
It should not be possible to search for a Biden or Trump speech and half an hour later, through suggestion surfing alone, you are viewing alt-right or antifa extremist content.
No, the problem is that extremist content is a nonsense concept.
Specifically, there's no reliable way of determining whether content is "extremist content", because a lot of information that the political and religious extremes focus on is actually true information. It is often these extreme views that provide unique insights and the most valuable information. So how - and moreover why - should Youtube try to sift through a physics lecture on metallurgy and attempt to differentiate it from another physics lecture on metallurgy?
There's often no way of knowing whether the lecture is being given by a college physics professor or a 9/11 truther, except by examining the contents - and both might be telling the truth. Heck the college physics professor could be a 9/11 truther and the truther could be a college professor... that doesn't mean that this lecture on metallurgy should be removed from the college's account.
The fact that some people can't tell the difference between an Einstein and a wannabe Einstein doesn't mean that we should censor them both.
> There's often no way of knowing whether the lecture is being given by a college physics professor or a 9/11 truther, except by examining the contents
Exactly this.
Your points about extremist content stand as well. We've countless examples of NGOs that fight extremism, that get their content taken down due to algorithmic monitoring.
AI isn't capable of this sort of nuanced decision making, and there is no evidence that it's going to be capable in the near future.
The only method to achieve consistent, quality content monitoring involves human eyes backed by qualified human judgment.
A free flow of information is a net good. Restricting information is a net bad. Western society learned this 400 years ago.
These are the same arguments which were used against the printing press, now being used against the internet, fighting against a new enlightenment the same way that the church fought to maintain their monopoly on knowledge.
All combating "extremism" does is protect entrenched stakeholders from the truth.
A free flow of information is a net good - by default.
From this point, publicly visible information that consistently shows to be counterproductive could be reasonably restricted from the highest volume platforms.
1 zillion loopholes ought to be enough for anyone.
No, that would infringe the 1st Amendment. What we really need is a conglomerate of mega-corporations to fill that role and decide what is real and what is misinformation, that way we achieve the same result without the letter of the law being infringed. /s
Banning everyone who disagrees with the current state of the science is also not how science works; and yet YouTube is going to attempt (and fail) to do just that.
YouTube is almost certainly going to be banning literal doctors with actual PhDs from expressing scientific opinions which later turn out to be substantiated.
The article says this ban includes material that "explicitly disputes health authorities' guidance on self-isolation or social distancing". There are some rather eminent doctors who do exactly that. Social distancing policy is a political, not a scientific, question of trade-offs.
> YouTube is almost certainly going to be banning literal doctors with actual PhDs from expressing scientific opinions which later turn out to be substantiated.
To be fair, Youtube shouldn't be the platform where doctors express their scientific opinion or try to win over support.
Why not? It's pretty accessible, and a nicely done video does more for the education of the general public than some PDF on a server behind a Springer paywall.
How are they going to get the research done if they can't present the hypothesis and supporting observations? How will they get the results peer reviewed if they can't present them because they're not peer-reviewed yet?
Researchers typically present findings in journals (or conferences in computer science). These YouTube videos aren't research; they're some mix of unscrupulous money grab and deliberate misinformation from Republican-affiliated groups trying to mitigate the political damage of the administration's abdication of responsibility for stopping the pandemic.
Generally speaking by attempting to publish the result in a journal or conference proceedings. There’s a process in place where your unreleased manuscript is sent to other researchers in your field for review. After making corrections based on their feedback, you can be approved for publication, at which point the general public gets a chance to evaluate it.
Having done it, I don’t necessarily love the process and would love for the process to be more open and accessible, but it generally works ok.
because a 'nicely done video' (in terms of popularity) can very seldom convey the information needed to make a valid point. heck, even scientific writers/writings struggle with packing the right type and amount of information into their works without sacrificing readability/citations.
i am in no way against scientific material on yt but very much against a world in which a professionally edited and promoted youtube video is the minimal standard for scientific progress.
Banning all practitioners of science who disagree with the current state of science is not how science works.
The jury is a little out on the utility of allowing unpedigreed randos to pontificate on the nature of the science with the same amount of volume and podium-authority as qualified, practicing scientists with years of experience. Academia certainly doesn't consider them the same; we don't allow any random undergraduate to walk up to the podium and kick the lecturing professor off the stage so that they can have 5 minutes to explain how etheric currents are actually what makes lightning work.
We aren't, practically speaking, seeing a lot of evidence that the public can tell the difference right now. This is a problem.
> YouTube is almost certainly going to be banning literal doctors with actual PhDs from expressing scientific opinions which later turn out to be substantiated.
How are they going to deal with situations where health authorities in different countries have different positions? Like right now, Germany recommends masks and Sweden doesn't. Are they going to censor anti-mask videos in Germany and pro-mask videos in Sweden?
The issue is not that they got it wrong the first time (or lied on purpose to defend the economy, or to prevent panic and hoarding).
The issue is that people were marked as conspiracy theorist nutjobs for holding some views that were the official recommendation a week ago (or became the recommendation a week later). If officials can get it wrong, why would you censor your "drunk uncle" from getting it wrong?
The problem is that govts around the world didn't do this in a scientific fashion. They had mask shortages and wanted to keep the masks for health workers and also not make people worry. Now they have masks and want everyone to wear one.
Very hard to have any credibility after a few similar interactions. This is not about speech anywhere in the world (except maybe China); this is about the political elites losing leadership.
Sure it does. But lying about the reasons and saying masks don't work is misinformation. And that is what the government health officials said.
They intentionally engaged in a misinformation campaign. And now they are considered the authority against which any other opinion will be judged misinformation.
You're right. My issue is why US health officials said masks were not effective against respiratory infections, when health officials from Asian countries had deemed them effective.
Why didn't we have the science? Respiratory infections aren't new. I remember debating people early in the outbreak about the use of masks, and they were pointing out that there was little scientific evidence on the use of masks, especially non-N95.
For a while I assumed they lied as well. Now, looking back I'm not so sure. There really wasn't much scientific evidence of masks being effective for respiratory diseases.
Dr Fauci has said their initial stance on masks was to ensure that masks were available for health care workers[0]. He did not say their stance changed due to new evidence.
No, they pretty much lied. The CDC was for example recommending n95 usage for medical personnel and surgical masks for infected people during the first SARS outbreak.
Fauci admitted they lied because they knew there was a mask shortage, and they wanted to make sure medical personnel could get masks before the general population bought them all.
This is probably the biggest problem. Trust in government is eroding rapidly due to bad decisions and outright lies. Worldwide. In a few months, citizens' willingness to cooperate will be almost zero. Rightly so. If I catch someone lying for their own benefit, or not telling the truth because they think I am too stupid to understand, why should I ever listen to anything from the same person/organization again?
Part of the problem is that the media keeps on misleading people about how other countries are doing in order to push the narrative that their country's government is inexplicably failing at dealing with Covid-19. For example, here in the UK (and I think the US is similar), the media and opposition have been pushing the idea that a competent government would be able to set up a working test and trace program to control the virus, like South Korea, and the fact that the UK was forced to resort to closing and restricting pubs and restaurants and limiting social interactions is proof ours is incompetent. The BBC has also been pushing the narrative that there's no evidence those kind of local lockdown measures work. In reality, of course, South Korea has been heavily reliant on precisely those kind of measures - every time they lift them and try to rely on test and trace, masks, and weaker social distancing measures, cases start increasing and they have to reimpose them.
For example, take the South Korea section in this article about how Europe is failing: https://www.bbc.co.uk/news/world-54482905 It starts off by attributing their success to "mass testing and strict contact tracing". In reality, South Korea is testing at about a tenth the rate of a typical European country (10-20 thousand a day), doesn't routinely test people with mild potential symptoms, and has no way of reliably detecting cases that don't have links to ones they know about - which have been appearing in increasingly alarming numbers. Technically, "tests have always been easily available" as the article says, but they successfully discourage most people from getting one by charging for them under most circumstances. They don't even appear to be trying to improve this either; their testing capacity seems to be at the same level it was in like March.
As for the strict contact tracing, this sentence hides a multitude of things: "the majority of people have accepted it and prioritised controlling the pandemic over their own privacy". I'm sure the people who didn't have the entire internet speculating that they were cheating on their partner, and aren't being criminalized for refusing to out themselves as gay to their homophobic employer in a country with no legal protections, and don't oppose the government, have no problem with it. Something tells me that the BBC would be infinitely less willing to put a positive spin on screwing over a minority of people like that in the name of the greater good if the UK or US government did it though.
The article goes on to downplay the other restrictions. "When infections rise, stricter measures are put in place", the BBC says, but makes it sound like a limited, mostly brief and long-ago thing: "At one point there was a 21:00 curfew on restaurants in the capital, Seoul. But now, social distancing guidelines are back to their lowest level." In reality, they'd literally only just dropped social distancing measures to that level and allowed bars in Seoul to reopen right before the article was published: https://uk.reuters.com/article/uk-health-coronavirus-southko... It's still too early to even see the effect of this on cases. That bar closure was also imposed only two weeks after the previous 9pm curfew and complete ban on indoor dining/drinking in Seoul establishments was lifted. If the UK did this, there'd be endless BBC headlines about U-turns, confusing, seemingly arbitrary measures, and speculation as to whether businesses would survive - not this kind of attempt to minimize the extent of the measures. Especially the part about night clubs reopening; South Korea has tried this at least twice and had to do almost immediate U-turns both times, and there's no reason to expect they can manage it now.
I suspect the BBC would also have plenty to say about (for example) the flu vaccine screw-up if it happened in the UK: http://www.koreaherald.com/view.php?ud=20201015001058 They've certainly put enough of a negative spin on the fact that the UK doesn't have enough flu vaccines immediately to deal with unprecedented demand and the implications for the fight against Covid-19, even though there's nothing the government could really do given the lead times. But because it happened in the supposedly more competent success story country, most people will never hear about it and the illusion that our government is just incompetent compared to them will remain unchallenged.
If you search Google news for "masks" between February and March you can see where the surgeon general and others were saying that masks may increase the likelihood of infection because of touching your face.
Did you even watch the video? 1. He says there is no need to wear masks in public "at this time". 2. He says there is no evidence that masks protect you against infections (which is of course something that can and did change over time) 3. He mentioned that touching your face can be dangerous, not that wearing masks increases the risk. I don't think anyone disputed this then or now.
That is different from people now saying there is no evidence that masks help, because now we have evidence.
FWIW: Early news about COVID-19 and criticism is much easier to find on DuckDuckGo than Google.
> In an interview on Fox News' "Fox & Friends" on Monday, Adams said wearing face masks could actually increase a person's risk of contracting the coronavirus.
> "You can increase your risk of getting it by wearing a mask if you are not a health care provider," Adams said. "Folks who don't know how to wear them properly tend to touch their faces a lot and actually can increase the spread of coronavirus," he added.
> Adams' comments Monday reiterate his blaring tweet from the weekend, urging people to "STOP BUYING MASKS." He said that they were "NOT effective" to the general public and noted that the increased demand in masks puts medical professionals at risk.
> The World Health Organization joined TikTok last week to provide accurate information about COVID-19. In one of two videos posted, it explained most people should not wear masks and provided instructions for how to properly wear one.
> The meta-analysis itself avoided drawing any conclusions at all, and would not even admit that N95 respirators worked.
I mean, either you believe the science or you don't. You don't get to complain that it didn't support your preconception.
> According to intention-to-treat, the studies unanimously found masks to be useless. But there were a lot of signs that intention-to-treat wasn’t the right choice here.
Not a great sign.
> In other words, respirators are better than masks are better than nothing. It would be wrong to genuinely conclude this, because it’s not statistically significant. But it would also be wrong to conclude the studies show masks don’t work, because they mostly show respirators don’t work, and we (hopefully) know they do.
My issue is why US health officials said masks were not effective against respiratory infections, when health officials from Asian countries had deemed them effective.
Because Asian countries were basing their advice on science and the US was basing the advice on what would poll well with Republican voters who believe individual freedom is more important than community welfare. The US absolutely did have access to all the science, and chose to (temporarily) ignore it in order to try to gain votes in the presidential election.
If Asian countries had been going into an election and courting votes from a strongly conservative base, their advice would have been different.
I have to disagree with your assessment that our science community based the advice on polling of Republican voters.
I was in favor of masks (it might have helped that I lived in Asia previously), and was debating people early on about their usage. I don't know about now, but there was a lack of published evidence on non-N95 masks, and even the use of N95 masks was debatable because of fit tests.
As someone else pointed out, this is what Sweden says to this day.
"The scientific evidence around the effectiveness of face masks in combatting the spread of infection is weak, which is why different countries have arrived at different recommendations."
I doubt the Swedes are trying to sway the US election.
French officials had the very same discourse regarding masks (highly discouraged in February and March, mandatory in a lot of places by July). The current government is much closer to the US Democrats than the Republicans, to say the least.
Probably not a right vs left decision.
> Because Asian countries were basing their advice on science
Actually, for the most part this isn't true; our government was also telling us not to wear masks, but because of our previous experience with SARS everyone wore them anyway. Now of course it is illegal not to wear one.
> Because Asian countries were basing their advice on science and the US was basing the advice on what would poll well with Republican voters who believe individual freedom is more important than community welfare.
It was the WHO who repeatedly stated that healthy people shouldn't wear masks, or close borders for that matter, and most politicians across the world didn't question that. And why would they, it's the job of the WHO to understand the science for us.
(If anything, Trump was initially harder on COVID than most by pushing for travel restrictions. At that time it was the Democratic NYC establishment that put individual freedom over safety [1]; the two positions only flipped later on.)
The initial response, especially by the WHO, was an absolute disaster. Asian countries did better than the US and the West in general because they _didn't_ listen to the 'experts'.
Of note, all the stuff about border closures making things worse was 100% politically motivated bullshit with no foundation in science - even the New York Times ended up admitting it: https://www.nytimes.com/2020/09/30/world/europe/ski-party-pa... (What they don't admit to, but was really obvious at the time, is that in the US it was driven in part by anti-Trump politics - the press quite openly associated it with all his other border policies they hated, there was a piece of pre-pandemic fiction about Trump frantically closing all the borders to stop a disease that was already in the US that circulated all over social media as a supposedly prophetic warning, and I think Biden's campaign even pushed the idea that Trump's travel restrictions made things worse after that NYT piece came out. Of course, in reality it was actually the whole of Europe who ended up frantically closing all their borders far too late, whilst the US stuck to targeted and relatively timely travel restrictions - if I recall correctly, pretty much the exact opposite of the "prophetic" anti-Trump fiction.)
Indeed. Something thought to be right was actually wrong. Happens all the time. And of course something thought to be wrong was actually right. This is why disallowing "misinformation" is utterly stupid.
This is not an argument to allow any fearmongering and active spreading of a harmful message. You could use this argument to say that nothing should ever be forbidden or punished to say, ever.
> This is not an argument to allow any fearmongering
So let's ban information that claims corona isn't that bad then, because it's DANGEROUS and you should BE SCARED and PEOPLE ARE DYING and if you don't wear the mask YOU COULD KILL PEOPLE.
Ok, so everyone should be allowed to broadcast, promote and advertise: "all white people are inherently evil and criminal and should all be exterminated, go out and shoot and kill every white person you see!"?
With speech, obviously there’s going to be a line after which you descend into madness and that line is legally speaking usually drawn at directly inciting violence. That’s what is considered “spreading a harmful message”. But by censoring COVID, Youtube is moving this line quite a bit, to a point where we’re talking nowhere near as harmful messages. The point is that unlike with someone saying “kill people”, here it’s not that obvious whether an idea is “wrong” or “right”.
There are more things banned. I cannot put an ad on TV saying that all gay people or Christians should be fired and shunned from society, even though that's not inciting violence.
My point was more that there's limits to free speech when it's harmful as a response to someone who said that everything should always be allowed everywhere.
You can debate whether that should include Covid misinformation and to what extent, but I would say unsubstantiated claims like vaccines will make you infertile are harmful to public health and are definitely candidates.
> I cannot put an ad on TV saying that all gay people or Christians should be fired and shunned from society
You absolutely can, both legally and practically, at least for the first and probably for the second. Heck, the former has been part of the main editorial content on some TV outlets, not just advertising.
But do you censor the people saying there is a better decision? And do you then censor the people who advocated for the former "better decision" at the time when it was 'current'?
There wasn't any new information, it was just policy decisions hiding the information.
1st stage: "We don't have enough masks to go around, we need them for doctors and such -- ok, let's tell citizens "don't wear masks, as they do no good"
2nd stage: "We have ample masks now, imported. OK, let's tell citizens they have to wear masks."
Dr Fauci publicly claimed masks were not effective in order to protect the supply of masks for healthcare workers; he knew his claims were false at the time he made them.
So no, it wasn't "science", it was deliberately misleading public statements.
"We do not currently recommend face masks in public settings since the scientific evidence around the effectiveness of face masks in combatting the spread of infection is unclear. However, there may be situations where face masks can be useful despite the uncertain state of knowledge about the effects.
Face masks must always be seen as complementary to other recommendations: stay at home when you have symptoms, wash your hands regularly and keep at a distance from others."
Regardless of what the science says, Fauci believed masks worked, and he engaged in a deliberate misinformation campaign with the goal of preserving PPE for medical staff.
People who have admitted publicly to engaging in a misinformation campaign are now considered to be the authorities against which dissenting views will be judged misinformation.
As I was looking at this back in February / March there were plenty of studies dating back years or decades, looking at N95 and surgical masks and their role in preventing the spread of respiratory infections. The WHO and all western countries were claiming contrary to those studies that masks do not work and should not be worn. Then when things got more critical, EU countries did a 180 and started making masks mandatory or recommending them. Finally the WHO commissioned a meta-analysis and lo and behold in June it turned out that masks do work.
The Swedish public health agency's quite lonely in the opinion it holds now. But even so, I don't see why Sweden's or its health agency's opinion should count for anything in all of this. If anything I'd rather listen to what Taiwan, the KCDC or CCDC have to say, since they actually have some results to show besides empty words and opinions.
This was so disappointing to watch happen. It was obvious (even at the time) that it was to keep people from hoarding masks so they were available for healthcare workers. Fauci, the Surgeon General, and everyone else should have refused to lie.
Now the damage is done, and more people distrust Science.
And all of that was caused directly by the UN? Or was it more complicated than that, with the UN unable to do much because the veto powers blocked or delayed things? I'd say it is rather the latter.
There were UN troops in Srebrenica, and their failure is a well covered topic. There are multiple resolutions against Israel, the first from the Suez crisis. Most just get ignored, not least due to US vetoes. The Iraq invasion was a purely US thing that had nothing to do with the UN; some NATO members supported it but not all.
There is a difference between causing disaster, which the UN didn't, and failing to prevent it, which the UN does. Expecting perfect results from one side and no results at all, or close to none, from the other is just disingenuous.
The UN is made up of its members. Things like US vetoes are part of the process and must be included.
The WHO has a global outlook and you have a personal story. Those two conflict more often than not. Telling you masks don't work and then telling you they always worked and you must wear one is a big messaging error that has cost many lives. And the reasons behind it are political. My trust has been broken.
If it's important I'll include a link with detailed context for each and throw in a few more examples for you.
When one compares the advice and recommendations related to SARS with those for SARS-CoV-2 it becomes obvious that the latter could only be explained by dishonesty or stupidity.
This is how the politics of science can work. The fear is that masks may run out, so don't be honest; tell everyone they don't work. Once masks are restocked, force everyone to wear one.
There is nothing scientific about the politics of science, then. With that strategy you will have fewer and fewer people listening to you, because a subset will always find out you have lied.
A lot of scientific dialogue now happens through mediums like Youtube. Scientists use it as a research tool, and a way to disseminate information to other scientists. Censoring discourse is extremely dangerous, and its detrimental effect on scientific advancement is just one of the reasons why.
And youtube would censor everybody saying that masks don't help healthy people.
There's a mile-wide chasm between posting an opinion that differs from the official government line and posting something that's deliberate misinformation designed to harm people. YouTube are only removing the latter. The article actually quotes a YouTube spokesman saying as much: "A YouTube spokesman told Reuters that general discussions in videos about “broad concerns” over the vaccine would remain on the platform."
>There's a mile-wide chasm between posting an opinion that differs from the official government line and posting something that's deliberate misinformation designed to harm people.
This is true, but almost no content would fall under your definition of "deliberate misinformation designed to harm people". That phrasing might be said to be the standard, but it wouldn't be applied that way.
Also, the quote from the YouTube spokesperson pretty explicitly says that if something is wrong with the vaccine, YouTube will not allow you to say so. That would be something specific, rather than broad. I doubt that they would actually silence someone who points out a specific flaw in the vaccine, but that's what their spokesperson said (read: their words are unreliable).
This is how it works with modern information spread. Of course you can have concerns, and you should be able to state them openly. Any doctor who claims to have no concerns in the absence of a long-term clinical study is lying.
This is what makes the mask situation so illustrative. You have the same guy who is deemed the ultimate authority saying two opposing things, so one of them must have been deliberate misinformation to harm people under these rules. You can't get on TV and say "masks are useless" and the next minute, in the face of no new data, get on TV and say "masks are the law" and somehow walk away with intention preserved and reputation intact.
None of this finicky language is ever going to amount to anything, frankly. The powers that be ultimately control what is broadcast to the masses, even if those powers change their minds.
Don't you think it makes sense that as we learned about how the virus spreads, and as the number of people infected exploded, that the mask recommendation might change?
I think you're trying to mock scientific experts, but what kind of expert would NOT be willing to change their recommendation based on new, strong, contrary information?
Of course experts would prefer not to make a recommendation until all the facts are known, to avoid being wrong. But that simply was not (still and is not) possible with the pandemic.
We have to decide how to react with the information we have and it's just rational to be willing to change our decision based on strong new information.
This is not what happened, not in the US anyway. Fauci, after the recommendations changed, admitted that masks were discouraged to prioritize them for first responders[1].
So we have government health experts publicly saying they engaged in misinformation. But they are the experts against which every other opinion is to be judged as misinformation.
I am not in favor or against Fauci or his comments (I am not even american), and I am not a linguist either so perhaps you can help me:
Is "not recommended" == "discouraged"?
"He also acknowledged that masks were initially not recommended to the general public so that first responders wouldn’t feel the strain of a shortage of PPE."
As far as I understand, 'discouraging' is saying publicly "Do not buy masks", and 'not recommending' might be just that, as in, not saying "Go and buy a mask or you'll spread the disease".
He actively discouraged mask use. Fauci and the Surgeon General said wearing a mask was more likely to get you sick because you would touch your face more.
"You can increase your risk of getting it by wearing a mask if you are not a health care provider," Adams said. "Folks who don't know how to wear them properly tend to touch their faces a lot and actually can increase the spread of coronavirus," he added.
Thanks. I don't know the whole story, but I cannot find Fauci's name (Ctrl+F) in the last article you linked. It mentions a Surgeon General and a Vice President.
Surgeon General Jerome Adams and Vice President Mike Pence have urged people against buying and wearing masks.
The point is, if every platform pulls a YouTube, there will never be new, strong contrary information. It’s a tough problem. But scientific advancement depends on the ability to challenge broadly held opinion.
I think part of the solution to the misinformation problem is to A) have a much better education system, and B) have much better media presentation around the facts.
If you don’t have A, all is lost ultimately. There’s not much hope for democracy if the majority of the citizens are misinformed and incapable of critical thought.
Now think about all the conspiracy theories that actually turned out to be true. MK ULTRA, Dark Alliance, the Tuskegee Syphilis Study, COINTELPRO, PRISM, you name it. Would these be banned ("fact checked") from YouTube, Facebook or Twitter under these new rules?
On the other hand, there is clear evidence that authoritarian regimes (Russia) are weaponizing social media, using troll farms, puppet accounts and fake news, to disrupt democracies by sowing discord, uncertainty and doubt, and to radicalise any and all parts of the opinion spectrum. I would be surprised if the Russian trolls did not actively work to promote and decry Covid conspiracies at the same time.
What is the correct solution here, that I don't know. No approach is flawless and there will always be collateral damage.
> there is clear evidence that authoritarian regimes
United States, say it with me, United States
My country suffered a literal coup as a consequence of US meddling in our internal affairs and hijacking our media, newspapers, radio, against the national interest and democratic values of the nation spreading literal fake news
I'll let you guess which country I am talking about, because heaven knows there's not only one
Sure nobody denies this but "you deserve it therefore stop complaining" is not really an argument, more like shutting down the point entirely. I agree with you that the sanctimonious outrage over the attempted, but laughable election interference is quite tone deaf.
That already happened, basically. Anyone who talks about the CIA in anything but a positive light is a "conspiracy theorist" (their words).
It's a criminal organization that has perpetrated an endless list of crimes against humanity, sponsored by the American taxpayer and undocumented money from terror campaigns. Now they have a happy little venture capital firm that invests in new tech, like google and facebook. What do you suppose the ROI is for those investments?
We're too concerned with edge cases to justify the absolute nightmare we are in with regards to information. For every "true" conspiracy theory there are millions of junk ones - of course some will be partially correct.
We are so afraid of losing "one correct idea" that we allow orders of magnitude more bad ones to flourish and drown out good ideas anyway.
Dealing with foreign powers abusing the system would be easy if the current administration didn't benefit from it.
The large amount of misinformation also threatens debate, does it not? Most misinformation and conspiracy theories are not looking for proper debate and often resort to logical fallacies and falsehoods to prove their point. Thus, proper debate is drowned out.
By and large I don't think debate is threatened. We have to draw a pragmatic line somewhere, otherwise we will waste time debating whether the earth is flat and endless other absurd ideas. Ultimately, we've already debated many of these ideas, if not explicitly.
I don't know if conspiracies do that much harm to debate. They're quite the fringe topic. Misinformation, yes, but classifying misinformation (in its not-so-obvious forms) is a hard problem. So while I agree there needs to be a line, I also think that in enforcing a ban on misinformation we will necessarily end up limiting debate. The line should stay at something like inciting violence.
How much time do you have? There are endless amounts of conspiracy theories, often discussed by people who don't have any clue. Society NEEDS a way to cut through the chaff and noise, or eventually it'll collapse in on itself and someone wins (typically the one with better information).
> The line should stay at something like inciting violence.
The government line should definitely stay there. But private corporations should be free to push any agenda that they like.
You forget that not all conspiracy theories are equal. A democracy could survive hundreds of bullshit conspiracy theories for decades, but suppress the next COINTELPRO and your democracy may be dead in five years.
Don't forget Epstein! "US media and political elites are engaging in a child sex trafficking ring" was (still is?) a conspiracy theory for years, if not decades.
> What is the correct solution here, that I don't know.
I know, ban what is fundamentally rooted in bullshit. Homeopathy, "all and every vaccine is bad", "these essential oils will cure your cancer", "5G causes corona", "earth is flat".
Done; there's no collateral damage, but a helluva lot less lying.
Clowncore published a humorous music video titled "Earth is Flat" but renamed it to "Earth" because the algorithm can't detect humor: YouTube had un-recommended the original video and put a big info box next to it insisting that the earth is not flat.
This is the slippery slope of censorship on display for everyone to see. What panel of certified physicians is YouTube using to identify what information is correct vs. incorrect? I'm pretty sure they don't have any doctors doing this. The ban is a political act and not a scientific one.
Agree on this one - authoritative bodies can be wrong / misled as well, and shouldn't be the only arbiters of truth.
Ex: The WHO made many mistakes during COVID, including faulty recommendations and (alleged) influence by a governing body to limit bad press. [1]
(Before the ad hominem attacks come in, I'm obviously very pro vaccine...but against censorship and generally distrust authoritative bodies of most kinds.)
Youtube's content moderation team also has a terrible track record.
One of my favorite science channels, Cody's Lab, gets banned all the time because he has educational videos on making explosives. The guy is so family friendly he doesn't even swear in his videos.
He got raided / investigated by a government body as well at some point (it's in one of his videos) where they went looking for nuclear materials. He had some, but it was traces and stored securely (he made his own lead/concrete container).
Anyway, Youtube - and by extension, the US government - are a bit cagey about instructions on how to make explosives (and thermite). If amateurs try to emulate it, they can injure themselves badly. I mean you only have to look at some of the "fail" compilations on youtube to see how stupid some people are.
Showing people how to make explosives at home is the opposite of family friendly. Mixing this stuff is dangerous, both for those mixing it and for those nearby. Nor is it educational, or true science. Imagine he made the same videos about drugs instead of explosives; would they still be family friendly? Both things are basic chemistry, after all.
> authoritative bodies can be wrong / misled as well, and shouldn't be the only arbiters of truth
I agree, but with a very large caveat - the human brain is not good at probabilities, it basically only recognises “outcome A is guaranteed” and “outcomes A and B are equally likely”. So how much credit should we give to an authority who is correct 90% of the time?
90% isn’t great when it comes to life-or-death matters, so it’s dangerous to recommend “always trust them”. But recommending “don’t always trust them” results in people thinking that the authorites and the crackpots are 50/50 correct, which results in even more people dying :(
Yes, but this has been politicised so heavily that even the truth is normally garnished with a partisan message. It's hard to believe YouTube will be even remotely competent when it comes to filtering fact from fiction in this case.
There's a difference between being wrong about "put a band aid on a cut" and "aids is transmitted by having sex with Puerto Ricans". Health authorities are good at the first, and very bad at the second.
I think there is merit to the argument that if you cannot guarantee 100% correctness, that you cannot strip people of their ability to form their opinion themselves. And that requires access to different perspectives.
You assume that the WHO and others have to be right all the time, every time. And that they changed their minds regarding COVID-19 because of "public" pressure and not because of new insights. Both assumptions are incorrect, IMHO.
The CDC did bow to political authority on COVID-19 measures, though. Contrary to science.
> You assume that the WHO and others have to be right all the time, every time.
If information is going to be censored based on what they say, they do have to be right all the time. And the point is, they aren't right all the time.
I have a hard time believing they would recommend removing content they aren't sure about.
By the way, Youtube has been banning pornography basically since day one. No outcry about that. Counterpoint, would you accept removing content about abortion and gay marriage rights? If the answer is yes, it indicates that it is not free speech you are worried about.
The ban on pornography isn't specific though. A comparable move would be to ban all videos covering medical information - strange but not "we're deciding what the truth is".
> What panel of certified physicians is YouTube using to identify what information is correct vs. incorrect?
The cheapest content moderators possible, of course. Outsourced through contract agencies in foreign countries to keep costs down. I doubt anybody with a medical degree would take the job. Here's an article about the life of YouTube, Facebook and Twitter content moderators in Manila:
"At the end of a shift, my mind is so exhausted that I can’t even think,” said a Twitter moderator in Manila. He said he occasionally dreamed about being the victim of a suicide bombing or a car accident, his brain recycling images that he reviewed during his shift. “To do this job, you have to be a strong person and know yourself very well." The moderators worked for Facebook, Facebook-owned Instagram, Google-owned YouTube, Twitter and the Twitter-owned video-streaming platform Periscope, as well as other such apps, all through intermediaries such as Accenture and Cognizant.
If these platforms don't do anything, then everyone talks about how misinformation is rampant on these platforms, how dangerous it is, and how nothing's being done about it. If they do, then everyone worries about censorship or accuses them of being politically motivated.
There could be simple legislation that defines what is and what isn't to be censored; then these platforms could adhere to that, check a compliance box, and leave it at that.
Perhaps a better response would be: what do you think these platforms should do, really? Maybe we can crowdsource a good idea. And, while at it, also think about how to thread the needle of keeping these platforms as un-scammy as possible, so that people won't stop using them.
I don’t think it’s this simple, because the “don’t do anything” case is actually “use engagement-maximizing algorithms to actively spread misinformation as fast as possible.”
> And, while at it, also think about how to thread the needle of keeping these platforms as un-scammy as possible, so that people won't stop using them.
Why should anyone care about these platforms' engagement metrics? What about these platforms necessitates their survival?
The assumption that Youtube is an unalloyed good for the world is a very controversial one.
Misinformation coming from some random youtubers is not dangerous as nobody really cares what they say.
The problem starts when you have a Yale-educated doctor publicly telling you that eggs (or butter, or fat) are unhealthy, and then a Harvard-educated doctor who tells you the opposite.
Same with wearing masks. We heard from official sources (the WHO, governments) that wearing a mask does not help healthy people, then 3 months later we heard the opposite, and then again we hear they don't help.
Censorship on YT will not solve the problem that people stopped believing in science and that happened because many scientists were working hard to lose public trust.
Except literally millions of people see and consume similar content, and then spread it to millions of other people in both its original form and other media.
This is an actual thing that we can see happen in real time. It very, very obviously causes damage - and claiming it’s just “some random youtubers” is a deliberate attempt to misrepresent that.
If you disagree that this approach is wise, or effective, or morally correct, that's a totally fine argument to make. You don’t need to also lie about the impact in order to make it.
> There could be simple legislation that defines what is and what isn't to be censored
A lot of countries used to have censors a few decades ago. Usually they regulated radio, TV broadcasts, and cinema, and the popularity of such government regulation was at times pretty high, especially among parents and conservatives. In my nation it stopped dangerous information that taught children to kill other children, and also information that could harm national interests.
They were however removed on free speech principles (not the US constitution-specific version). It also turned out that the number of children who killed other children did not actually rise, so there was clearly a lack of any scientific method for determining how dangerous a piece of information really was.
I think they should ban the least amount of content possible. Most problems stem from DMCA, although partisan issues are even more dangerous.
> everyone talks about how misinformation is rampant on these platforms
There is no tangible evidence for that; supporters of this strategy believe in conspiracies themselves. Same strategy as with Nazis on the net: ignore them.
That really works, and it did for years before everyone became panicked.
Something has to be done. Obvious bullshit is fooling huge chunks of the population and causing huge damage. We have always selected acceptable content and it has usually been a positive. Schools have always picked which info to teach and what to not and that's largely a positive thing.
A Samuel Adams quote posted by Naval on Twitter today that I’ve been thinking about:
> If you love wealth better than liberty, the tranquility of servitude than the animated contest of freedom — go home from us in peace. We ask not your counsels or arms. Crouch down and lick the hands which feed you. May your chains sit lightly upon you, and may posterity forget that you were our countrymen!
Feeling hungry for change, what will our generation trade in exchange for our inheritance?
> Obvious bullshit is fooling huge chunks of the population
By definition, 'Obvious bullshit' cannot fool huge chunks of the population. If some bullshit can fool a huge part of the population then it is not obvious to them.
When people say things like "Obvious bullshit is fooling huge chunks of the population", they *mean* "views that I find disagreeable are becoming popular". Tough luck: that's how democracy works.
Anyone who wants to get in the way of strangers talking to each other is someone who thinks he's your king.
Obvious bullshit isn't a problem unless you consider all the world's religions a problem. But if you do, then that's by far the biggest and most serious problem and YouTube really must stamp it with priority over the piddly flash-in-the-pan crackpot science and conspiracy theories.
> Obvious bullshit is fooling huge chunks of the population and causing huge damage.
“Obvious” is debatable. Hardly anyone watching scientific content of any kind is an expert in the subject matter. What the public has always relied on is some sort of authority figure lending credibility to some idea or news story. For instance, saying masks don’t work is “obvious” bullshit. But remember back in March? Our glorious leaders were urging us not to buy masks because they don’t work, and besides, doctors need them. Even though they don’t work. And to have argued differently at the time was “misinformation.”
Now we’re trusting Facebook, Twitter, and Google to be the authority from which all truth flows.
> But remember back in March? Our glorious leaders were urging us not to buy masks because they don’t work, and besides, doctors need them.
I think this is a bit disingenuous. First of all, you're looking back with the benefit of hindsight and the knowledge of the virus that we have today. Additionally, weren't most of the recommendations around mask wearing that people not gobble up PPE because there was such a shortage of it around the country for first responders and hospital staff? Isn't that one of the primary reasons why people across the country started making homemade masks out of t-shirts and other spare cloth?
I'd also note that we just learned of this Virus at the end of December. We were and are still learning new things about how the virus spreads and how we can effectively fight its spread. The fact that some of the early guidance has been reversed is evidence that scientists are learning more about the virus, not that we shouldn't listen to them or that their advice is 'obvious bullshit'.
You’re completely misinterpreting the mask situation.
Of course they say there’s no evidence that a mask will protect YOU. Masks aren’t supposed to protect YOU. They prevent YOU transmitting the virus to OTHERS. So of course it doesn’t reduce YOUR risk of respiratory infection. That conclusion has not changed.
The guidance was pretty clear: they didn’t want people wearing masks because there was a PPE shortage. Now there isn’t a shortage, so they want people wearing masks.
Are you sure that masks don't protect you? You've made that statement three times in four sentences.
It's false that the guidance clearly gave the shortage as the reason for recommending people not wear masks. US health officials stated that masks were ineffective and might increase the likelihood of infection because of touching your face while wearing a mask.
As someone pointed out Sweden to this day has this to say about mask usage:
"The scientific evidence around the effectiveness of face masks in combatting the spread of infection is weak, which is why different countries have arrived at different recommendations."
The benefit of a free exchange of ideas is that it allows people to originate and develop the ideas that we later recognize, in hindsight, were correct. The eventual move towards masks wasn't inevitable; there are some countries even today where masks aren't broadly recommended, so if Internet platforms had banned mask advocacy in March there would probably be a lot more of them.
(I'm not sure this is on point to Youtube's policy here, though - they say they aren't banning all vaccine skepticism, just specific categories of silly disinformation like microchips.)
That is one big leap, assuming the WHO and other public health organizations would not have changed their minds if it wasn't for "experts" on the internet. I highly doubt that.
Censoring content like this is pure bullshit that will have little effect on the positive side, but complaining about a measure that may save lives is also not really enlightened in any way.
My country made masks for protests mandatory. Now people are complaining too. I would say that you should enjoy this new freedom. In normal times you meet the water cannon if you show up with a mask to a protest.
The problem is that many outlets lied directly in their readers faces with a reference to intelligence agencies. Turns out it was all a bunch of bullshit too.
Just as eating 'fads' lollies as a kid is not a slippery slope to drug addiction, banning clear misinformation* is not a slippery slope to censorship.
Just like drug addiction, each step along the path to censorship is a clear decisive act. I am way more concerned about the fast train to banning encryption than the baby step of banning misinformation.
And by the way, let's also not lose sight of the fact that presenting misinformation is a bastard political act itself. Fuck those people.
This is assuming that the YouTube management (or the software behind) is authoritative of what information is correct. In reality, YouTube has not just once banned stuff "by mistake" and will continue to do so.
And it is not even management who makes the final censoring decisions. Does being a content moderator require a degree in any related area, or is it just a group of trained low-wage hires reviewing user- or AI-flagged content and deciding on ban/leave? Honestly, I would like to try shadowing such a job myself.
I make no such assumption. I merely suggest that the line between mistake and "mistake" is a deliberate step. It is a choice the management decided to take and were rightly ashamed of.
Making an argument for 'misinformation cesspit' over 'moderation' is just never going to be an argument you can win. Not with the business, and not with the public. This is one of those 'perfect is the enemy of good' things. If you don't give them a good business option to take, you will get handed the easy one (and it sure as shit is not going to be perfect).
The problem is that YouTube is so large, and while it is not governmental, it is one of the few video sharing platforms that people remotely care about. Coming in behind are places like Vimeo. Then super far behind are various smaller video sites (such as ones that use PeerTube).
While presenting misinformation is a political act, it is a more 'neutral' one: since many people are able to put forward content, it doesn't stop ideas from either side. That doesn't mean the truth will automatically come from that mix, but it allows for the potential of truth to be found. When you have a massive company controlling so much of the content, it has a large ability to influence the people using the platform.
As well, I think banning misinformation is a legitimate slippery slope. It increases people's willingness to have others censored for their incorrect beliefs, which, once you have gotten far enough, will be the new normal.
Getting rid of the algorithms that actively distribute certain types of misinformation as widely as possible would probably mostly solve the issue while maintaining the broadest definition of “free speech.”
But since that’s their entire business model, it’s unlikely to happen.
But then again you run into the problem of determining what is misinformation, vs e.g. counterpoints / arguments / critical thinking / devil's advocate videos.
I for one am glad I'm not responsible for setting policies like that.
More freedom of speech is the answer to misinformation because then it can be challenged. Censorship allows convenient mistruths to spread unchallenged.
> More freedom of speech is the answer to misinformation because then it can be challenged
Do we have any evidence that it's true? Anecdotally, it seems that the more the internet has expanded our ability to communicate, the more widespread misinformation has gotten.
I see the moral and anti-authoritarian arguments against censorship, but only in the context of accepting the costs. This "have your cake and eat it" argument feels hollow.
>Anecdotally, it seems that the more the internet has expanded our ability to communicate, the more widespread misinformation has gotten.
I don't think this is the right conclusion. After thinking about it more, it's not like the internet is patient zero for conspiracies and misinformation. It's just that the internet has exposed to the rest of us how common it is for people to hold such beliefs.
I think it's more than just us noticing such beliefs for the first time. More people than before seem to be acting on misinformation, which would be apparent even without knowing their private beliefs.
Is/was there less misinformation among populations living under strong censorship? We have several examples to choose from, at many points in history, and I'd wager there was more misinformation there than in freer societies.
> Anecdotally, it seems that the more the internet has expanded our ability to communicate, the more widespread misinformation has gotten.
I completely agree, but why should we assume falsehoods and truth travel at the same rate? History would suggest otherwise.
No one here is advocating strong censorship though, and there is a massive chasm between banning specific topics during a pandemic and having generally strong censorship.
You're declaring the opposite, but for a proponent of the slippery slope argument this would be a perfect example of going down the slippery slope. First came banning hate speech, now comes COVID, what's next?
I wasn't making a slippery slope argument, I'm answering to whether there is more or less misinformation under a freer or a less free society. The easiest and best place to start is always going to be at the more extreme ends of something.
There's literally no reason to believe that this is correct, or that it's anything other than dogma.
There's quite a lot of psychological evidence suggesting that first impressions stick stronger than corrections (which everyone knows who has ever accidentally believed something just because they heard it quickly), that only people who are predisposed to corrections actually believe them, and that in others corrections even augment belief in the originally wrong claim ("see, the mainstream media feels threatened by this revelation!")
There's also a good book by Bartels and Achen[1] suggesting that prevalence of wrong beliefs can actually increase with education (climate change denial is more prominent among highly educated Republicans, highly engaged people on both sides of the political spectrum are more likely to make wrong assumptions about the demographics of the other), because they have more capacity to rationalise false priors.
So there's actually significant reason to believe that more speech, by increasing engagement increases partisanship, which in turn leads to ample adoption of misinformation. Which I think makes a little bit more sense when one looks at the state of discourse in the last few years.
> There's literally no reason to believe that this is correct, or that it's anything other than dogma.
Christianity, for example, was supported by blasphemy laws. Once those laws were undermined or removed its doctrines and practices were able to be challenged, and now we see greater numbers of non-believers in formerly Christian countries as a result.
There's certainly dogma in that example but not the sort you were positing.
As I've pointed out elsewhere, you've time limited the results for no apparent reason. That a lie can get halfway round the world before truth has a chance to get its trousers on does not obviate the power of truth and the power to speak truth.
>Once those laws were undermined or removed its doctrines and practices were able to be challenged
This is alt history. Religious rule in the Western World came to an end through revolution. The French revolution in particular. Religious power was ousted out of systems of governance by force, civic codes were established and that was that. In the communist world, the other large sphere that is now irreligious, force was used even more explicitly. Nothing had anything to do with peasants having a discussion about religion over a cup of coffee after blasphemy laws were lifted. Secularism as well as religion were imposed by elites and institutions, blasphemy laws were lifted afterwards.
I know this is anecdotal, but in someone like my dad removing the video has the same effect. I'm pretty sure he's really bought into a few COVID videos that he watched drunk on Youtube because they were removed by the time he tried to go back and revisit them sober.
Fact checking is hard and time-consuming, but making up bullshit is easy. People love sharing contrary-to-conventional-wisdom news and ideas that they come across, because it makes them feel smart.
Given these two facts, it's far easier for lies to spread quicker than truth.
How do you solve for this in your spherical cow model of free speech?
Real-world events such as elections, legislative votes, military ops, referenda, protests and gatherings are time-limited. The lie may have already had its desired effect by the time the truth comes out, if it ever does. Lies about, for example, the stab-in-the-back, tobacco and WMDs in Iraq got the liars what they wanted at the cost of millions of lives.
The truth may never become as widely known as the lie due to the frictions it encounters in spreading. People like to see news that confirms their own biases (me too), they like spreading news that's counter-intuitive or unusual, and they hate reading boring stuff (like fact-checks with a long list of citations). An interesting lie will spread quicker and wider, and be remembered for far longer, than the correction that comes just a few days later.
Overall this whole "free speech kills lies" idea sounds like religious dogma to me. It may be true in some cases, but it's taken as an immutable, universal article of faith and its proponents never feel the need to cite any actual evidence or address the horrific consequences of the edge cases. It's an idealistic image of how the world works, similar to a socialist utopia.
Free speech is crucial to liberty and progress, and the government must not regulate speech. But that should not apply to private platforms.
> The lie may have already had its desired effect by the time the truth comes out, if it ever does
Yes, that's unfortunate. There is no way to consistently detect lies, which is also unfortunate for anyone arguing for censorship as then you cannot ascertain which claims should be censored and which should not.
> Lies about, for example, the stab-in-the-back, tobacco and WMDs in Iraq got the liars what they wanted at the cost of millions of lives.
I don't know what "stab-in-the-back" is but tobacco and WMDs are examples of lack of speech. If we'd all had access to secret documents then the lies about WMDs wouldn't have worked quite as well, I reckon.
> Overall this whole "free speech kills lies" idea sounds like religious dogma to me
Which is ironic considering you've stated it again, and again without reason to back it up.
> its proponents never feel the need to cite any actual evidence or address the horrific consequences of the edge cases
There are no edge cases because it's not promising utopia, it's promising freedom, and it does that because the alternatives are worse. I would rather choose free speech for all than your idea of censoring people based on... which method? The ability of the high priests to divine the truth, perhaps?
> There are no edge cases because it's not promising utopia, it's promising freedom,
The edge cases are where millions die because of lies perpetuated unchecked. "Freedom" is the ability to say whatever you want without the government getting involved, nothing else. If a private party doesn't like what you say, tough shit get off their property. Otherwise you're impinging on their freedom to keep anyone the hell they want off their property.
Speaking of high priests, it would be pretty funny if discussion boards required declarations of faith to join and being kicked off was akin to excommunication. In such an environment, moderation is just another name for "suppression of heresy within the religion". Surely the government wouldn't dare intervene in the free practice of religion, would they?
Overall, I find it hilarious in this whole discussion that conservatives appear to be arguing for more government interference in private property rights and freedom of expression.
> The edge cases are where millions die because of lies perpetuated unchecked
When did this happen? The Holocaust? If there weren't blasphemy laws then the Bible could've been challenged and you wouldn't have had rampant anti-semitism waiting to be pushed into something far more abhorrent than nasty speech. The same goes for the cause of the pogroms: free speech would've been more likely to end them before they had begun.
There are plenty of other socialist movements that ended in genocide of many more millions - would you say they existed in an environment of free speech?
> "Freedom" is the ability to say whatever you want without the government getting involved, nothing else
Freedom of speech is the ability, without interference by government or society, to say what you want, to whom you want, at the time of your choosing; and to listen to whom you want, when you want. The US government is restrained in interference by the constitution but that does not mean other entities don't interfere in others' speech. Was there no racist action before racist action was defined in law?
> If a private party doesn't like what you say, tough shit get off their property. Otherwise you're impinging on their freedom to keep anyone the hell they want off their property.
There are exceptions to every rule - arrest warrants for certain classes of crime are one example; you cannot simply tell the police to go away if you're wanted for murder. When several entities' rights butt up against one another, one right may have more merit than another, which is why we have courts to decide such novel cases, or legislators to define the sensible limits of rights.
Since you're really talking about social media though and not physical property, I would agree except in the case of monopolies. If HN wants to kick someone off, no problem. FB or Twitter or Google are different propositions.
> Overall, I find it hilarious in this whole discussion that conservatives appear to be arguing for more government interference in private property rights and freedom of expression.
I'm not a conservative, nor an anarchist. Regardless, government is supposed to interfere where rights are being impinged in order to protect those rights. Hence, you've got the wrong end of the stick, we "conservatives" are arguing for government interference against censorship.
Because time matters. Look at elections (where there are huge incentives to exploit the fixed timescale), or look at fraudulent operations like pyramid schemes - eventually they get exposed, but if the originators are sufficiently disciplined to cash out and bail before that they may reap handsome profits. The incentives to spread false information are a function of the profits that can be obtained from doing so before the next decision cycle.
As a parallel example, look at the way the clock influences strategies in professional sports, with teams using timeouts and deliberate fouls to game the outcome of a match. At that point they're not playing ball, but rather a meta-game about time management. It's one reason I like baseball; the game runs for a defined number of plays (9 innings, each half of which runs until 3 outs are made), so while games can run very long, that sort of metagame doesn't really exist.
It's not that I like censorship, in fact I'm adamantly opposed to it. But saying 'boo censorship, yay free speech' is ducking the real and measurable problem of false content spread by a mix of ignorance and dishonesty.
I support calls for more education and critical thinking, but that takes years to pay off; the answer to high levels of fraud or corruption is not as simple as 'let's just create a culture of honesty', which substitutes the desired outcome for the solution.
Consider also specific examples of disinformation campaigns. It's easy to create the appearance of controversy by attacking the formation of social consensus, and then to point to that appearance of controversy as pseudo-evidence that the prevailing public consensus is a deception carried out by The Powers That Be.
That's an interesting paper and I hope to give it more than the cursory read I've given it so far. I've requested the dataset (there's a waiting period of 4 weeks, unfortunately) because the outcomes obviously rely upon what is considered false news. Their reliance upon "independent" fact-checkers is not good enough in my eyes, because of what I've learnt about those doing the fact checking (just one example[1], there are many).
I agree, false content is a real and somewhat measurable problem and that it is spread through ignorance and dishonesty. However, we know from experience that censorship does not lead to a culture of honesty - far more often it is used to support the opposite - and would you contend that it encourages critical thinking? Those cultures certainly support education as long as it's approved. Hence, I'm not sure why any of those should count against having more free speech.
I'm not against being pragmatic, I simply think that there are other more effective and largely unheralded ways these problems could be mitigated. For instance, you shared a paper on trolling (I've yet to read it, my apologies). Instead of widespread censorship in response, why not give users the powers that social media companies have? I've seen Twitter claim that they remove bots and score users based on behaviour, so why can't you or I see those scores or know which accounts are bots? Why can't I be given power over the algorithms and filters that dictate my feed? Surely that would be a better way forward than diktat by very powerful entities from above?
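To make that concrete, here is a minimal sketch, in Python, of what reader-controlled filtering might look like if platforms exposed their per-account scores. The bot_score field and the threshold are hypothetical and do not describe any real platform API; the only point is that the reader, not the platform, chooses the cut-off.

    from dataclasses import dataclass

    # Hedged sketch: "bot_score" is a hypothetical field a platform could expose.
    @dataclass
    class Post:
        author: str
        bot_score: float  # 0.0 = almost certainly human, 1.0 = almost certainly a bot
        text: str

    def my_feed(posts, max_bot_score=0.3):
        # The threshold is chosen by the reader, not by the platform.
        return [p for p in posts if p.bot_score <= max_bot_score]

    posts = [Post("alice", 0.05, "..."), Post("spambot", 0.92, "...")]
    print([p.author for p in my_feed(posts)])  # -> ['alice']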
Free speech implies the freedom to choose who to listen to, and censorship also removes that.
Youtube already had plenty of free speech, where everyone could say anything unhindered and was rewarded if they generated attention.
The ultimate free speech hasn't led anywhere good on the internet, because quacks who were previously relegated to soap boxes in their villages, back when free speech was first conceived of in the US, are now given the same platform and power of voice as field experts who have studied their domain for years and know the dangers.
> Youtube already had plenty of free speech, where everyone could say anything unhindered and was rewarded if they generated attention.
Hard to separate that out from the algorithms that promote videos. Would it be as bad, in your eyes, if discovery was "flattened", so to speak? i.e. no recommends, no shaping of the feed, no shadowbans or hidden videos
No, because bullshit is easier to make than truth. There are dozens of bot channels that have nothing better to do at the moment than post videos that look like news channel broadcasts, with a semi-convincing TTS reading various news items about conspiracy theories (e.g., "Bill Gates about to be arrested over 5G-Corona").
They post videos at an almost hourly pace, distributed over several channels, and each video has a unique image to it (e.g., different logos, different TTS settings). If normal, well researched content had to compete with that, it would lose by being buried under the sheer mass of bot-generated bullshit.
And that isn't even counting the number of videos produced by actual humans that are just as bad.
A lot of conspiracy theorists aren't above threatening violence and law suits (usually libel) against those exposing them or simply posting the truth. I've heard of multiple instances of that occurring, at least one in person.
This isn't two sides arguing peacefully about one position or the other, it's one side being violent and disrespectful to the other. "Flattening" the discovery of other content is basically asking the other side to be non-violent and stand back so that the violent side can have their say.
Sometimes violence comes from both sides of the debate too (ie, US politics) but that doesn't make the amount of disrespect by people purposefully lying or spreading lies any better.
Free speech, as I see it, means that you can express your opinion freely without repercussions unless it's advocating for violence against people or groups of people. If you're talking about facts, those should not fall under free speech under the simple premise that facts are either provable or you should say that you have no proof or you should expose them as the opinions they are.
> If normal, well researched content had to compete with that, it would lose by being buried under the sheer mass of bot-generated bullshit.
How do you stay above it all? It's not censored now but you seem able to detect it and protect yourself.
> A lot of conspiracy theorists aren't above threatening violence
That is censorship, I'm not sure how that supports an argument for more free speech? We have laws against assault already.
> and law suits (usually libel) against those exposing them or simply posting the truth
How could they win? Defamation laws, at least in the US, value truth above other considerations.
> it's one side being violent
Again, I'm not sure what this has to do with a free speech argument?
> and disrespectful to the other
I'm also not sure why this is relevant - it's not nice but so what?
> Sometimes violence comes from both sides of the debate too (ie, US politics) but that doesn't make the amount of disrespect by people purposefully lying or spreading lies any better.
Violence and disrespect really are completely different ball games, I don't see how conflating them clarifies things.
> Free speech, as I see it, means that you can express your opinion freely without repercussions unless it's advocating for violence against people or groups of people.
I agree, that's a very good starting point.
> If you're talking about facts, those should not fall under free speech under the simple premise that facts are either provable or you should say that you have no proof or you should expose them as the opinions they are.
If you could find someone who can judge such things infallibly then perhaps I would agree.
I doubt that, and even if it were the case, lies can do a lot of damage until then.
>How do you stay above it all? It's not censored now but you seem able to detect it and protect yourself.
I don't. Sometimes I get fed bullshit and believe it. I'm certain there are things I believe right now that are lies. Sometimes I find out a more proper explanation, sometimes I never get any truth.
Getting behind what is a lie and what isn't requires education, something the US, for example, has over time left for only the rich to enjoy.
>That is censorship, I'm not sure how that supports an argument for more free speech? We have laws against assault already.
Censorship is enacted by government, not private individuals. Besides, they are smart enough to never call for violence directly. They say "I wish someone would call SWAT on this guy" or "this kinda behaviour ends you with a broken nose wink wink".
>How could they win? Defamation laws, at least in the US, value truth above other considerations.
In the US, a lawsuit isn't about who is right but who has enough money to keep the lawsuit going for longer.
>Again, I'm not sure what this has to do with a free speech argument?
Free speech is valuable in respectful and calm debate. Disrespectful and violent speech is not free speech, it's violence.
>Violence and disrespect really are completely different ball games, I don't see how conflating them clarifies things
Reread, I don't conflate them.
>If you could find someone who can judge such things infallibly then perhaps I would agree.
It doesn't have to be infallible, just reasonably provable or not disprovable. If you're lying on purpose then this entire point is moot anyway, no need to prove anything. Then you're just out for harm.
> I doubt that, and if that was even the case, until then lies can do a lot of damage.
Strangely, it's only the proponents of censorship that claim it will lead to utopia.
> Censorship is enacted by government, not private individuals.
This is not true at all. Free speech is not a synonym for the US 1st amendment. That part of the constitution only protects government interference, which implies that there can be other types, and there's more than one country in the world. Are you really going to claim that social media companies cannot censor?
> Besides, they are smart enough to never call for violence directly. They say "I wish someone would call SWAT on this guy" or "this kinda behaviour ends you with a broken nose wink wink".
This sounds to me like conspiracy theory talk, as does the bit about rich people. I'm sure both happen but to paint everything or everyone that way lacks nuance at best and sounds like something you'd hear from a quack in a tinfoil hat at worst.
> In the US, a lawsuit isn't about who is right but who has enough money to keep the lawsuit going for longer.
This is partly true, but it is no doubt true that the law - explicit as it is - states the opposite. Again, who was claiming that the system was perfect?
> Disrespectful and violent speech is not free speech, it's violence.
> Reread, I don't conflate them.
If you didn't then you just have. I suspect you, either knowingly or unknowingly, wish to tie disrespect to violence because it's convenient for arguing against disrespectful speech being free. Well, SCOTUS has ruled that even some forms of "violent speech" are covered by the 1st amendment (speech containing threats must be credible and imminent to lose coverage), and disrespect is definitely covered.
Can I blaspheme or is that too disrespectful? Is satire allowed or is that too disrespectful? Would I be committing violent acts by dismissing God? I'd expect that kind of talk from religious fundamentalists.
> It doesn't have to be infallible, just reasonably provable or not disprovable
You're circling back to a court system with judges, but…
> If you're lying on purpose then this entire point is moot anyway, no need to prove anything. Then you're just out for harm.
…you appear to have discarded presumption of innocence along the way. Terrifying.
>…you appear to have discarded presumption of innocence along the way. Terrifying.
No, a court still has to prove that but it's a lot easier in most cases where harm is intended.
>Can I blaspheme or is that too disrespectful? Is satire allowed or is that too disrespectful? Would I be committing violent acts by dismissing God? I'd expect that kind of talk from religious fundamentalists.
The disrespect is meant in terms of a reasonable discussion. Blasphemy usually lies in the realm of religion, which is in the realm of opinion, unless you don't believe in freedom of religion. Satire is rarely calling for violence last I checked and most reputable satirists I know check their facts before the show to make sure they don't get things wrong. Plenty of people like to dress up violent speech as satire but it rarely works well when they're not punching up.
>If you didn't then you just have. I suspect you, either knowingly or unknowingly, wish to tie disrespect to violence because it's convenient for arguing against disrespectful speech being free. Well, SCOTUS has ruled that even some forms of "violent speech" are covered by the 1st amendment (speech containing threats must be credible and imminent to lose coverage), and disrespect is definitely covered.
As you so beautifully stated above, free speech isn't a synonym for the US paper scrolls. In Germany, violent speech is far outside the realm of protected speech. If you threaten to punch someone in Germany, the courts can put you in jail for up to 30 days, depending on situation, history and such, usually it's a fine or 1-2 weeks probation/public work. The exception is that the speech must clearly be unactionable, i.e. you jokingly tell your friend you're gonna punch them if they win against you in Mario Kart or something like that.
>This is partly true, but it is no doubt true that the law - explicit as it is - states the opposite. Again, who was claiming that the system was perfect?
If it's not actionable then it doesn't matter what the law says, since it's not enforced.
>This sounds to me like conspiracy theory talk, as does the bit about rich people. I'm sure both happen but to paint everything or everyone that way lacks nuance at best and sounds like something you'd hear from a quack in a tinfoil hat at worst.
There have been multiple instances of this happening over the years, most famously KeemStar, if I recall the name correctly, had a private stream for dedicated followers where he would repeatedly wish for someone to be SWATted, which can be a death sentence in the US.
>This is not true at all. Free speech is not a synonym for the US 1st amendment. That part of the constitution only protects government interference, which implies that there can be other types, and there's more than one country in the world. Are you really going to claim that social media companies cannot censor?
The German constitution may not know free speech, but it knows freedom of opinion, which is a tighter subset. Censorship in Germany is still defined as being done by the government.
In colloquial language you may refer to some actions that SMC do as censorship but it is hardly correct. What SMC are doing is enabling or suppressing people's ability to share their speech.
>Strangely, it's only the proponents of censorship that claim it will lead to utopia.
Germany has clauses that make you criminally accountable for lies you tell. It's called "Volksverhetzung" and generally covers any type of speech that is capable of or suitable for arousing hatred against a person, a group of people, or an otherwise defined group. It has well defined limits (set by our constitutional court) and it has worked well in the past to root out fascist groups and bring them to justice before they do anything to actually harm people.
Let's put aside the irony of using Germany as an example, somewhere that had hate speech laws that didn't prevent National Socialism at all, and rampant anti-semitism that was left unchallenged because of blasphemy laws, and which now has to limit freedom of speech in fear of Nazism rising again. Notice how nothing remotely similar has ever taken hold in the US even though they allow Nazis to parade and speak freely. A culture of negative liberty and free speech is a more effective safeguard. Germany even had lese-majeste laws up until recently, I wouldn't pick it as my example.
> The disrespect is meant in terms of a reasonable discussion.
Which begs the question. Again we are led back to the question of what is (un)reasonable - something you repeatedly fail to define beyond persistent conflation with violence. I'm yet to see the necessary link between unreasonableness and harm. I think you're being unreasonable and likely you feel likewise - where's the actual harm?
> Blasphemy usually lies in the realm of religion, which is in the realm of opinion, unless you don't believe in freedom of religion.
“The realm of religion” is a curious phrase - do religious people not speak? I wonder if you're making such a nebulous distinction because you'd rather not have to ban several religious books (and hence religions - hooray for freedom of religion), because they're quite clearly more hateful and more of an incitement, in theory and in practice, than nearly everything ever written on Twitter and Facebook that you would hold "unreasonable".
I'd allow them their freedom to speak, and I would allow people to speak against them. Freedom of religion (and from religion) is predicated on freedom of conscience, which is only fulfilled by having freedom of speech.
> Satire is rarely calling for violence last I checked and most reputable satirists I know check their facts before the show to make sure they don't get things wrong.
Forgive me, but this is utter nonsense. Why would satirists need to check facts? Is it because satirists in Germany are liable to be put in danger and legal hazard?[1]
> Plenty of people like to dress up violent speech as satire but it rarely works well when they're not punching up.
The world is not a black and white place filled with victims and oppressors. You might note how all oppressors are against freedom of speech.
> As you so beautifully stated above, free speech isn't a synonym for the US paper scrolls. In Germany, violent speech is far outside the realm of protected speech. If you threaten to punch someone in Germany, the courts can put you in jail for up to 30 days, depending on situation, history and such, usually it's a fine or 1-2 weeks probation/public work.
It's good to hear that you have an assault law. Are fewer or more people attacked in Germany than in the US for expressing a view? By all accounts Germany still has rampant anti-semitism[2] and regular attacks on citizens for their speech that are incited by hateful speech. German law would restrict criticism of those incitements because it's “the realm of religion” and "hateful" to do so, no doubt.
> >This is partly true, but it is no doubt true that the law - explicit as it is - states the opposite. Again, who was claiming that the system was perfect?
> If it's not actionable then it doesn't matter what the law says, since it's not enforced.
I'll have to send you to court for telling lies ;-) You can search for "us defamation win" and see a whole host of times courts decided before either side ran out of money. Usually, running out of money leads to a settlement if the plaintiff is not the underdog, which makes the settlement by the SPLC of Maajid Nawaz's case[3] perplexing… if we assume your statement to be true.
> There have been multiple instances of this happening over the years, most famously KeemStar, if I recall the name correctly, had a private stream for dedicated followers where he would repeatedly wish for someone to be SWATted, which can be a death sentence in the US.
Are you arguing that private individuals do or don't enact censorship? Hard to tell, you seem to have started out on one side and are now on the other.
> The German constitution may not know free speech, but it knows freedom of opinion, which is a tighter subset. Censorship in Germany is still defined as being done by the government.
That is censorship as defined in German law. It's quite possible for censorship to exist and not be covered by the law. If the law changes tomorrow will your argument change too? (or again)
> In colloquial language you may refer to some actions that SMC do as censorship but it is hardly correct. What SMC are doing is enabling or suppressing people's ability to share their speech.
In English that is known as censorship, since at least the publication of On Liberty, but it's also a good example of German paternalism and application of positive liberty, the form of "liberty" that always leads to bad things.
> > Strangely, it's only the proponents of censorship that claim it will lead to utopia.
> Germany has clauses that make you criminally accountable for lies you tell. It's called "Volksverhetzung"
…
> and it has worked well in the past to root out fascist groups and bring them to justice before they do anything to actually harm people.
Pre-crime as a category is tyrannical. Those who cannot remember - or understand - the past are condemned to repeat it.
>Notice how nothing remotely similar has ever taken hold in the US even though they allow Nazis to parade and speak freely.
Ah, I think I see what side you stand on, if you can't see all the far-right stuff going down, with Nazis parading through US politics for almost 4 years.
>In English that is known as censorship, since at least the publication of On Liberty, but it's also a good example of German paternalism and application of positive liberty, the form of "liberty" that always leads to bad things.
I disagree; positive liberty ensures that as much of your liberty as possible remains preserved without infringing on the liberties of other people. Your freedom ends where other people's freedom begins.
>Pre-crime as a category is tyrannical. Those who cannot remember - or understand - the past are condemned to repeat it.
Is it pre-crime if you've been calling for the death of a minority group? That is a crime in my eyes, regardless of how actionable it was, as it hurts those minorities regardless due to the sentiment expressed.
>Are you arguing that private individuals do or don't enact censorship? Hard to tell, you seem to have started out on one side and are now on the other.
Private individuals can suppress speech by their own speech, that isn't censorship but it is violent in many cases.
>Forgive me, but this is utter nonsense. Why would satirists need to check facts? Is it because satirists in Germany are liable to be put in danger and legal hazard?[1]
If they are lying about things, yes. A satirist can't just start insulting people or spew lies without repercussions from the people injured as a result, physically or otherwise.
>I'd allow them their freedom to speak, and I would allow people to speak against them. Freedom of religion (and from religion) is predicated on freedom of conscience, which is only fulfilled by having freedom of speech.
I don't think that freedom of religion requires an American freedom of speech but feel free to prove me wrong. While Germany has problems with anti-semitism, people are working against it, the government is working against it.
The US in recent years has started to have problems with anti-semitism; however, since your government is not doing anything in that regard, one has to rely on civil organizations to even attempt to track its rise.
The US has many problems at the moment and a lot of them stem from the fact that the US is unwilling to stamp out those people who will only take the liberties granted to them and use them to stamp those very same liberties out. In the early 20th century, Germany had much experience in this regard: you can't use speech to suppress Nazis, they will use guns and violence to silence you if the government does not stop it.
But feel free to wallow in your imagined freedoms until the steel boots come.
> Ah, I think I see what side you stand on, if you can't see all the far-right stuff going down, with Nazis parading through US politics for almost 4 years.
Ah, the promised rise of fascism in the US government! It's been almost 4 years now, when will it materialise? There are, however, fascists in the streets burning, looting and rioting, and they've been implicated in the deaths of many people. Do wake me when the government fascism begins though ;-)
> I disagree; positive liberty ensures that as much of your liberty as possible remains preserved without infringing on the liberties of other people. Your freedom ends where other people's freedom begins.
That's not positive liberty, that's negative liberty[1].
> >Pre-crime as a category is tyrannical. Those who cannot remember - or understand - the past are condemned to repeat it.
> Is it pre-crime if you've been calling for the death of a minority group?
No. Pre-crime is the judgement of a crime that has not occurred.
> That is a crime in my eyes, regardless of how actionable it was, as it hurts those minorities regardless due to the sentiment expressed.
Words do not cause objective harm, that's why there's a distinction easily made between words and other actions like using your fists. Feeling hurt by words is something we would all wish to avoid but incredibly dangerous as the basis for criminal statute.
> Private individuals can suppress speech by their own speech, that isn't censorship but it is violent in many cases.
Speech is not violence, violence involves the application of physical force. If speech causes someone to anticipate imminent physical force then I would criminalise that. Luckily, that was done a thousand years ago in English common law under "assault" so I don't need to bother.
>Forgive me, but this is utter nonsense. Why would satirists need to check facts? Is it because satirists in Germany are liable to be put in danger and legal hazard?[1]
> If they are lying about things, yes. A satirist can't just start insulting people or spew lies without repercussions from the people injured as a result, physically or otherwise.
Luckily (again) there are laws in place for malicious lying that defames, in civil law. It is only a crime in countries that are not au fait with freedom, like Germany.
> I don't think that freedom of religion requires an American freedom of speech but feel free to prove me wrong.
If the government can interfere in your speech then you may only practise your religion - and I'm including private practice - as much as they allow. That's not freedom, that's an allowance, like daddy might give a child at the weekend.
> While Germany has problems with anti-semitism, people are working against it, the government is working against it.
Hard to believe you could write that if you'd read the article. I wouldn't point the finger at anyone else if you can put out an apologetic like that, and yet you do…
> The US in recent years has started to have problems with anti-semitism; however, since your government is not doing anything in that regard, one has to rely on civil organizations to even attempt to track its rise.
I'm not an American, and you'll find that anti-semitism in the US is rising for reasons that will be uncomfortable to you politically[2], though it should hardly be a surprise that socialist movements are anti-semitic, we had at least a whole century of it to learn from.
> The US has many problems at the moment and a lot of them stem from the fact that the US is unwilling to stamp out those people…
Did you write that and then this with a straight face?
> But feel free to wallow in your imagined freedoms until the steel boots come.
I was worried about satire in Germany but I see it is safe in your hands.
> In the early 20th century, Germany had much experience in this regard: you can't use speech to suppress Nazis, they will use guns and violence to silence you if the government does not stop it.
Actually, Germany instituted speech laws against Nazis. From[3]:
> In Weimar Germany, the Nazis and their ideas were censored – regularly, in fact. Leading Nazis including Joseph Goebbels, Theodor Fritsch and Julius Streicher were all prosecuted for hate speech before they rose to power – and Streicher was imprisoned twice. The Nazi publication Der Stürmer was regularly confiscated and its editors were taken to court on at least 36 occasions. Anti-Semitic speech was explicitly prohibited by law, leading to more than 200 prosecutions in the 15 years before Hitler came to power. ‘As subsequent history so painfully testifies’, writes civil-liberties campaigner Alan Borovoy in When Freedoms Collide, ‘this type of legislation proved ineffectual on the one occasion when there was a real argument for it’.
By contrast, Nazism did not arise in the US despite there being widespread racism, and the use of free speech actually helped the civil rights movement bring equality to all. In the words of Frederick Douglass from his speech, "A Plea For Freedom of Speech in Boston"[4]:
> Liberty is meaningless where the right to utter one’s thoughts and opinions has ceased to exist. That, of all rights, is the dread of tyrants. It is the right which they first of all strike down. They know its power. Thrones, dominions, principalities, and powers, founded in injustice and wrong, are sure to tremble, if men are allowed to reason of righteousness, temperance, and of a judgment to come in their presence. Slavery cannot tolerate free speech
The issues of fake news and misinformation have come about because of the freedom of speech at scale that the internet and sites like Youtube enable.
What we're experiencing is the limit of freedom of speech.
It used to be that at scale things were controlled by the media and governments because they had the monopoly on the infrastructure required, which was costly.
You can say that this enabled a level of censorship, but it also enabled filtering out a lot of noise.
> What we're experiencing is the limit of freedom of speech.
What we're experiencing is what happened with the printing press and every communication technology since - the democratisation of communication. That always leads to unrest because it moves power towards the masses.
> You can say that this enabled a level of censorship, but it also enabled filtering out a lot of noise.
I would say that, because it did. But on the noise point: I cannot listen to or read more than I did then, as I only have 24 hours in a day. I can choose from more sources but the volume has not gone up. I am the filter now - should I not be?
Are you going to tell me what I should read and listen to? I have to tell you that I think I can do a better job than you at that, thanks all the same.
I'd say there was a stronger relationship between governments that censor and those that oppress. Which oppressive governments are using free speech to conceal the truth?
This assumes rational, reasonable and good-faith arguments and behaviour though; if there's a thousand voices (pushed by interest groups, foreign governments, advertisers, etc) pushing one narrative, what's a single challenging voice going to do?
I mean I can invoke Godwin's Law, but we don't have to go that far back; everybody knew Trump was a terrible person and he should not have been eligible for the presidency, but because of tolerance he was given a shot anyway, and a minority of the population (~19%, ~63 million votes) managed to make him president.
Secondly, Popper is not calling for speech to be suppressed in any way, except for those who choose violence instead of speech. The original[1] is better than a Wikipedia article that cuts off the important part of his words:
> In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion…
Hence he is supportive of the right to discussion for all. He contrasts this with those who won't support speech and sets the limits:
> for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols
They've given up on speech so he gives up on tolerance, which is wise. Free speech is there to reduce violence and supplant it and nothing Popper wrote in the "paradox of tolerance" contradicts that.
> as long as we can counter them by rational argument and keep them in check by public opinion
Which we cannot do the way Youtube worked until now. The way YouTube's algorithms worked, rational argument got drowned out and public opinion could not keep anyone in check.
Demonetize and demote these videos, without taking them down entirely. Make them accessible through search but do not recommend them to people who are not specifically looking for the videos.
What "these videos" though? How do you determine which ones are misinformation, when WHO was demonstrably wrong at least 3 times in this pandemic? What's your source of infallible truth?
And how often did the WHO insist that their stance, in any of these three cases which you didn't name, was correct after it turned out not to be?
Neither did they continue with these claims. Stop moving the goal posts. If we keep applying unreasonable, near-perfect standards to one side and far more lax standards to the other, we don't do either side any good.
You're the one moving the goal posts. If a reputable organization thinks their statement was incorrect, they admit to it and issue a correction. WHO did not do that.
This "paradox" is resolved by https://en.wikipedia.org/wiki/Streitbare_Demokratie - self-defending democracy. Before this idea, a democracy would allow itself to be subverted by allowing a dictator to be democratically elected, and consequences be damned. Not a good idea as we find out.
By extension, a system that wants the idea of free speech to stay viable needs to protect itself from ideas that bring harm to it. Maybe it can be argued that from mathematical sense there is some kind of paradox (a set of all sets that are not blah blah), but this is no contradiction for us humans. There is no mathematical beauty in submitting to a wave of harmful, malicious misinformation.
I think I disagree with that statement. Censorship would prevent the information from being accessible, make it illegal, come with repercussions for those who publish it, or other such things.
Platforms like YouTube, Google Search, Twitter, Facebook and all have a problem of amplification through algorithms that don't promote well proven, high quality, trusted content. You could almost see these platforms themselves as a kind of censorship through obscuring and creating noise around quality information.
I think because of the way content is amplified on such platforms, that's why they need policies like this: to limit the possible damage and liability they might face if they promoted something that caused a lot of damage.
This is very different from someone choosing to host their own content which is equally accessible to all other content. But as soon as you rank content, as soon as you have an algorithm pick and choose which content to show you, and in what order, you actually have a propaganda problem, because you become a propaganda machine, and that's really what these platforms have become.
Yes, if they didn't recommend the video, if it could only be found through search, there wouldn't be any issue with it.
Though even with search, if the search algorithms are biased, and prioritize things in a way that could be argued to be recommendation, like personalized results based on what it thinks you'll like for the given keywords, you could say that's also a problem, which is why I mentioned Google Search.
To put it another way, you'd expect that the easiest content to find and discover, and thus be shown and recommended, would also be the most commonly held opinion and current scientific consensus. But you'd also want representation for less commonly held opinions and scientific hypotheses and claims. Just that, if 1 scientist is pushing some wildly different idea, and 99 others are pushing the same idea, you'd expect that the idea pushed by the 99 scientists would be the easiest to come across, and that it be clear that it is weighted to 99%, while the other would be a little harder to come across, and it be clear that it's weighted to 1%.
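A minimal sketch of that kind of proportional exposure, assuming the weights come from some measure of expert support like the 99-vs-1 split above; this is only an illustration, not a description of anything any platform actually does:

    import random

    # Hypothetical support counts, e.g. 99 researchers vs 1 backing each position.
    positions = {"consensus_view": 99, "outlier_view": 1}

    def recommend(positions, n=1000):
        # Sample recommendations in proportion to support, so the minority view
        # stays discoverable without being amplified beyond its actual weight.
        names = list(positions)
        weights = [positions[name] for name in names]
        return random.choices(names, weights=weights, k=n)

    sample = recommend(positions)
    print(sample.count("consensus_view"), sample.count("outlier_view"))  # roughly 990 vs 10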
The issue is that those platforms fail heavily to represent things proportionally like that. Often, the outlying wild claim will be recommended more heavily, and be promoted over others. Even without algorithms, the people pushing wild claims generally employ amplification tactics, spamming and posting multiple times over and over, etc., to falsely boost their representation.
So that's the problem I see here. Banning the more problematic instances of this is their poor solution to keep their platforms operating in a way that doesn't offer proper representation (because it makes more money), but avoid the extreme cases that could put them in hot water.
Don't jump to conclusions, please, because you seem to assume the WHO didn't change their minds based on research but on public opinion, and that they would "censor" any other information. There is no proof for either.
Also, only governments can censor. Youtube is a private entity, so whatever they do cannot be censorship to begin with.
Censorship =/= free speech. Free speech is a right guaranteed by the state; what Youtube is doing more closely resembles a newspaper refusing to publish certain readers' letters. Freedom of speech is not supposed to guarantee you the maximum possible reach or to protect you from the consequences of what you said, social and professional.
And well - given that washing hands after autopsies and before surgery to prevent infection needed 30 years to be accepted by the community (1850-1880, I think) - even panels of certified physicians can be wrong.
What is not a political act these days? If youtube bans spam content, that involves some subjectiveness. Would you say banning spam is political? How about if Youtube's algorithms downrank this kind of content instead of outright ban? Is that not political? Does every algorithm need government approval then?
> The president's medical report got banned, probably because it was good news. It was dry, factual, scientific... and now banned as coronavirus misinformation. Evidently his health is disputed by our social media overlords. They must know more than the doctor.
Do you have a source for that frankly outrageous claim?
Fully agree, and I'm not sure why you were modded down. I guess the answer is that the censorship of unpopular views is okay with most people. This could be an innocent consequence of our click-bait culture, or it could be something more sinister.
>You can always 'showdead' on your HN preferences.
I haven't been around here very long and recently figured out 'showdead' because I wanted to be able to see what folks were talking about -- especially when there were replies to such comments.
In fact, even if I'm skimming comments, I always read flagged/dead comments -- to make sure I hear all the voices. Even the ones I don't appreciate or even dislike (although that's not always the 'dead' ones)
> The president's medical report got banned, probably because it was good news. It was dry, factual, scientific.
Which services banned Trump's recent medical report? I would have assumed "Twitter blocks Trump medical record release" had been in the news and cannot find a single story about it. There was this tweet from Trump (not his doctor) that got flagged as containing COVID-19 misinformation, but was not deleted[1]:
> A total and complete sign off from White House Doctors yesterday. That means I can’t get it (immune), and can’t give it. Very nice to know!!!
There's a good reason Twitter flagged this: Trump is full of crap and idiotic tweets like this will literally get people killed. COVID-19 recovery definitely does not provide immunity to reinfection.
Second of all, this is not how vaccines work. Recovery from an illness does not mean that your immune system was maximally effective: a weak immune response coupled with anti-viral medication and hospital treatment could mean that your immune system is not equipped to fight the virus a second time. This happens often with flus and colds.
This isn't a particularly great argument (and is scientifically dubious).
There are different strains of flus, colds, and recently, a few different strains of coronavirus.
You can't generally be reinfected by the same strain. What can happen is that new strains can infect you. There are tons of cold strains around, and the flu mutates quickly enough that each year we get a new one, so vaccines have to be redeveloped for each new round of mutations.
Coronavirus is the same way, except that as far as I know, it mutates much more slowly than cold and flu viruses, meaning that vaccines (and having gotten it once) will be equally and more effective than flu or cold vaccines.
> You can't generally be reinfected by the same strain.
Unless the word "generally" is doing a lot of work in this sentence - YES YOU CAN be reinfected by the same strain. It's called "homologous reinfection": the person's natural immune response is not always up to the task of actually providing immunity to a specific viral strain, and that exact same viral strain can reinfect the patient even immediately after recovery.
So you are agreeing with the statement, and describing a rare outlier exception. I believe the number of documented Covid-19 re-infections is very small, and that one (1) person has died after re-infection. Statistically speaking, we're still up in the six sigma range of people who have not been re-infected. This (statistics) is the basis of public health policy.
That is not a reasonable way to evaluate the situation. Sure, some people have broken immune systems, and there can be a new and different virus that evolves from the old one but doesn't earn a distinct name. That doesn't count.
He is immune. Similarly, we say that cars can cross the Golden Gate Bridge, despite the fact that the bridge could be destroyed (by bombing for example) or that one could call the NASA crawler-transporter a "car" that is 90 feet wide. Absurd, exotic, and rare possibilities don't change the basic truth.
Vaccines work like getting the illness, or somewhat less well. The illness is nearly certain to provide long-term immunity, but a vaccine is not quite so likely to. Those who can't generate immunity will survive the vaccine but not the illness.
The reinfections were likely from a genetically different strain. So was Trump more right or more wrong with his simplified message? I would say that if he tested negative, he would be unable to transmit infection so that part would be true. I would say that he could not be infected (with the same strain) so that would be also true. I don't believe any sensible person believes that they become immune to all strains after recovering from one, and I do not believe that Trump was saying this. Do you?
Not really, given that the genetic strains are pretty well tracked and regionally locked.
It's more likely that the reinfections came from different immune responses, where the first infection was beat with a cytotoxic T-cell response (viral defense) but not an antibody response (viral and bacterial defense), the latter of which increases immunity with antibodies.
Could you please post a source for SARS-CoV-2 genetic strain tracking? Are all of the testing facilities doing gene sequencing on their positive results?
The re-infection cases that I read about were shown to have been from different strains. Perhaps this is not the common case, but without more information, I am reluctant to accept your statement at face value.
> I would say that if he tested negative, he would be unable to transmit infection so that part would be true.
This has not, to my knowledge, happened yet.
> I don't believe any sensible person believes that they become immune to all strains after recovering from one, and I do not believe that Trump was saying this. Do you?
I certainly do believe that the message trump intends to convey is that he is immune to the coronavirus and can no longer be re-infected period.
>Which services banned Trump's recent medical report? I would have assumed "Twitter blocks Trump medical record release" had been in the news and cannot find a single story about it.
Because that's not how censorship works. You don't block it and then draw attention to the thing you blocked. That would be counter productive. Here, have a look at this,
Twitter also regularly bans and blocks things temporarily while they're trending to stop the spread. Once attention has moved to another issue, they sometimes lift the restrictions. In effect, deleting their censorship actions.
> The ban is a political act and not a scientific one.
It is a public health act and a moral act, and therefore both scientific and political. These considerations don't take place in a vacuum of Free Speech Patriots versus Social Media Authoritarians. YouTube's decision has to be considered alongside the widespread consensus among doctors and public health officials that social media disinformation about COVID-19 has killed tens of thousands of people worldwide, in a pandemic that has killed over 1,000,000 people in less than a year.
And let's be absolutely clear: the reason why YouTube's actions are so politically controversial is that unscrupulous politicians like Trump and Bolsonaro have deliberately politicized misinformation about COVID-19. It is not YouTube's fault that "should we listen to scientists about COVID-19?" has become a partisan issue.
Also: "Section 230" concerns are irrelevant here (and irrelevant in most cases of social media "censorship"). Section 230 specifically allows content providers a great deal of latitude in blocking material they deem "objectionable," including constitutionally-protected speech. YouTube is well within their rights to make reasonable determinations about what constitutes COVID-19 misinformation, and to ban such materials.
YouTube blocks pornography with nary a peep from free speech activists. This is because a "free speech" argument is transparently absurd: YouTube is a business and not the government, and is allowed to regulate the data it keeps on its servers for business reasons. Yet the idea that YouTube might regulate for reasons of ethics or public health in the middle of the worst pandemic in 100 years is apparently the slippery slope of censorship.
Much of the information being banned has been published by credible, knowledgeable doctors and scientists who are working "in the trenches".
Epidemiologists who have specialized in this area are being censored because political bodies (such as the WHO) have released contrary information, but the WHO (and others) have contradicted themselves multiple times throughout the SARS-CoV-2 pandemic.
To answer your question directly, it seems to me that objective science becomes political when an unpopular politician cites a scientific publication or opinion. Attacking the messenger then becomes important to those politically opposed, and the original scientific message gets lost in the political controversy.
Enter this search on Google and see what you get:
"Is hydroxychloroquine effective in treating COVID-19?"
Why should there be such a controversy about this issue? It only became political when Trump mentioned it.
Sorry for not being clear. I am not the best communicator. What you should notice upon entering that search term are the wildly conflicting results. Google will present a "No" answer near the top, but if you scroll down through the thousands of results, you will see some saying "Yes", some saying "No", and some saying "We don't know". Google (and all of the other big social media giants) was quick to post a definitive answer, but is it the correct answer? The volume of uncertainty would seem to indicate that it is not.
You shouldn't make any conclusions based on skimming search results though. Ask critical questions like:
- Who is answering the question?
- Who paid for that answer?
- What platform is it posted on, and where lie their loyalties?
The problem with search results for questions like that is that you end up on a ton of trash media sites, none of which are neutral; they are all paid for, operated by or supported by parties with political goals and ambitions.
I did this a few days ago. You need to limit the date range to before he mentioned it to find anything suggesting positive results. I did find a recent story mentioning possible positive outcomes buried near the bottom of one story.
He didn't say objective science is political, but banning is. Science promotes the free exchange of ideas with the hope that the truth will be established and falsehood will be disproved.
If misinformation leading to loss of life can be traced back to YT, wouldn't YT be far more at risk for across-the-board negative response? Science and politics have a massive intersection in the arena of public communication; e.g., fracking, nuclear energy, climate change, vaccines...
IMO, any nation which cannot establish a harmonious relationship with their own institutions of science is in a shaky spot.
Yes it does in some cases. Take Alzheimer's disease for example. What is the mechanism for the disease? We don't know. But the consensus is it's not amyloid plaques, based on the most recent data. Some scientists think it still is, but the consensus is it's not.
Scientific truth is not black and white but rather shades of gray. Yes, some things we know the absolute truth (or rather there is so much evidence that it's unlikely it's wrong), but there is plenty that we're just "pretty sure" and that's typically based on consensus.
>Yes it does in some cases. Take Alzheimer's disease for example. What is the mechanism for the disease? We don't know. But the consensus is it's not amyloid plaques, based on the most recent data. Some scientists think it still is, but the consensus is it's not.
The scientific justification we have (in the situation you describe) for our belief comes from the data, not the consensus.
>Scientific truth is not black and white but rather shades of gray. Yes, some things we know the absolute truth (or rather there is so much evidence that it's unlikely it's wrong), but there is plenty that we're just "pretty sure" and that's typically based on consensus.
On the one hand you speak of scientific truth, on the other hand you speak of what we say we know. It's important to consider these things separately. While what we say we know is often based on what we perceive as a consensus view, scientific truth is not. Yes, a person might be pretty sure that the speed of light is three hundred million meters per second because that is what he perceives to be the consensus among physicists. But the "scientific truth", to whatever extent it is an actual thing, is determined by other factors.
Of course, consensus plays a part in the scientific process. When doing research scientists take into consideration scientific consensus. But when it comes to the strength of a scientific theory, consensus is irrelevant. That is determined by things like how much empirical evidence is there that the theory is correct? Are there simpler explanations? etc.
The scientific justification we have (in the situation you describe) for our belief comes from the data, not the consensus.
But the data is not always unambiguous. There can be, and often are, two competing theories. It is often consensus that determines which one is accepted as mainstream.
I go back to my Alzheimer’s example. It’s the general consensus that the amyloid hypothesis is wrong. However, there are those who hold onto it, and it’s not because they don’t grasp the data. There are still too many unknowns to firmly put it to rest.
The bacterial hypothesis is another good example. The data supported that stress and diet was the cause of stomach ulcers. That was the “scientific truth”. It was wrong.
Consensus is nothing more than “what is the best interpretation of the data based on our current understanding”. Current understanding being defined by “what do most scientists think?”.
Your definition of consensus was worded in a way that was confusing to me, so I have reworded it: "Consensus is what most scientists think is the best interpretation of the data".
Sure. I don't have any significant disagreement with that definition.
>The bacterial hypothesis is another good example. The data supported that stress and diet was the cause of stomach ulcers. That was the “scientific truth”.
Not scientific truth. Perhaps the strongest theory given the evidence. The point is, consensus is an irrelevant non-factor for the scientific strength of a theory.
For policy makers to earn the respect and attention of the average Joe, to impart important and scientifically sound information. If they're so bad at the job that they can't refute the ideas of Covid flat-earthers, maybe nobody should be listening to what they have to say anyway.
The real issue is that the media and many authorities have squandered their trust and political capital. And now instead of building that up they want to substitute authoritarianism and censorship.
>they can't refute the ideas of Covid flat-earthers
What is a covid flat-earther? If you are trying to lump flat earthers in with a group of people who are lockdown sceptics you're doing yourself a disservice.
How about let me read what I want and mind your own business? What makes you think you're smarter than the average joe and in position to dictate anything to him at all?
Perhaps to remember that every average Joe is equipped with their own brain and ability to think for themselves.
The role of science is to study topics empirically. It is not to establish sacred truths that need to be protected from scrutiny.
A few years ago if you said the big tech firms were each working on establishing their own ministry of truth, most people would have told you to take your tinfoil hat off. But this is exactly what we are seeing now. Independent thought is not a problem that needs to be solved. The only reason we are in this situation to begin with is because these institutions have done so much to tarnish their own credibility, that they now think the best solution is to limit people’s ability to think for themselves. It’s horribly dystopic.
That is true, but it increases the probability of a theory being valid. The exceptions to that are the most notable cases, but I think it is nevertheless the rule.
Only if repeated experimentation affirms it. Simply saying "I'm a scientist and I like this" is not the same thing, but that is what youtube and other major players are trying to legitimize.
These seem totally reasonable given the circumstances. I wish people would read these specific examples instead of jumping immediately to accusations of widespread censorship and accusing Youtube of being arbiters of disputed science.
>Content that disputes the efficacy of local health authorities’ or WHO's guidance on physical distancing or self-isolation measures to reduce transmission of COVID-19
How about this? Can reasonable people disagree about distancing and isolation measures?
You know, I'm sort of thankful for the increase in private/public company censorship, because they are doing it so poorly and with such bad timing that I think it's starting to create a kickback effect where more people are waking up to the issues... so I am hoping it causes a pendulum swing back the other direction. (though that's hope, I'm not sure how likely it is).
It's interesting to see how servile so many hn'ers are about censorship though.
The Chief Scientist of the WHO believed as recently as August that the mortality rate for this virus was still at 1% [1], even though Sweden had published its first data at the start of that month saying that the mortality rate in their case was around 0.4-0.6%. I've seen people mentioning a more recent CDC number of 0.35% mortality.
So, when even the Chief Scientist of the WHO doesn't know what she's talking about, on what basis will the YT censors carry out the censorship?
Mortality rates change, and they are also dependent on environmental factors including population genetics, diet, prevalence of pre-existing conditions, and the quality and availability of medical care.
The statistics around the case mortality rate and the overall mortality rate also change as new and more reliable data is gathered and analyzed; the WHO estimate was quite likely “correct” for the time frame and data they had available.
The fact that Sweden published a data set that contradicts it doesn’t make their estimate false, or mean that “they don’t know what they are talking about”.
I don't buy mortality numbers around 0.3 ~ 0.5%; South Korea's data shows a 1.76% mortality rate[1], and South Korea has probably collected one of the most comprehensive COVID-19 case databases in the world, as well as having good medical infrastructure.
Youtube, the company that can't even be trusted to keep the scariest shit possible off Youtube Kids repeatedly to the point where I won't even let my kids watch it anymore, being in charge of filtering and categorizing videos? Nope.
The article mentions specifically conspiracy theories saying covid vaccines will kill you or be used to implant tracking microchips. I don't think it takes a scientist to call that misinformation.
The article also says that they will remove videos that contradict the people they've decided are trusted experts.
> YouTube says it already removes content that disputes the existence or transmission of COVID-19, promotes medically unsubstantiated methods of treatment, discourages people from seeking medical care or explicitly disputes health authorities’ guidance on self-isolation or social distancing.
I guess you haven't looked into the ideas behind implanting trackable devices in vaccine doses (because that's crazy!), but here's something I pulled up in 3 seconds https://news.rice.edu/2019/12/18/quantum-dot-tattoos-hold-va... . I'm not sure that "misinformation" is the word I would use other than to characterize the categorical denial of people's concerns.
And who is to decide what is information and what is misinformation? Google? The US government? Sergej? Google shareholders? Youtube users by voting? The Pope? Churches? Some poor employee tasked with tagging content? An AI algorithm? A man pretending to be an AI?
As much as I hate misinformation, I hate censorship even more because it's easy to misuse. No one should have the power to decide on the one and only ultimate source of truth.
I sort of feel like a dunce reading this thread, as I can't get my head around the fact that Youtube removing a video is being called censorship. Doesn't Hacker News let readers downvote unpopular opinions until they're effectively erased? Why isn't anyone calling HN on its censorship? Is it just the size of the viewership that matters, as if, after a big enough user threshold you have to act like a public service? I'm being honest here, why should Youtube have to host any uploaded video it doesn't want to? Question to free speech absolutists: should Youtube allow pornographic videos on its site? What about extremely violent videos, like car crash fatalities, etc? What about child pornography? Why does Youtube /have/ to host garbage?
Isn't the issue here something else: that a lot of people now have bandwidth to express what they want, yet the biggest channels for this expression are just like every other important commercial segment, in that a few very big companies control most of the bandwidth? When were network TV stations ever accused of censorship for declining to air a KKK ad, or for declining to interview one of its leaders just because he wanted to air his opinions on race?
> contradict consensus from local health authorities or the World Health Organization.
That implies that local health authorities are in consensus with the WHO. What happens if local authorities disagree with the WHO? Can either viewpoint then be blocked?
I question what people here consider censorship. Is all censorship bad? Is moderation bad? Should Hacker News in particular not have rules and remove content that does not follow them? Censorship has been and will always be part of our lives. The problem is the application of it.
I applaud YouTube, Twitter and FB primarily for trying to not facilitate distribution of bullshit all because free speech. Free speech has limits in everyone's house.
You tried to twist my words. I called the content bullshit and not one of those services say it is. Anything they ban is considered to be violation of their ToS. If you have a dispute with the determination, you can file a lawsuit against them and have a judge determine if it was a violation of their ToS.
I did not try to twist your words. If I did, it was entirely unintentional.
> I called the content bullshit and not one of those services say it is.
You said:
> I applaud YouTube, Twitter and FB primarily for trying to not facilitate distribution of bullshit all because free speech.
I may have misunderstood what you were trying to say, but it sounded to me like you were happy these services were trying to filter out bullshit.
> Anything they ban is considered to be violation of their ToS. If you have a dispute with the determination, you can file a lawsuit against them and have a judge determine if it was a violation of their ToS.
I'm not claiming anything about the legality of what they're doing. Only that I disagree with filtering content based on an arbitrary definition of "bullshit". If that is not what you were arguing, then I apologize for my misunderstanding.
I've been thinking about that a lot recently.
Here's an idea - instead of censoring the false data, we could invest in educating people, developing critical thinking.
Sure, that's more difficult than banhammering posts in social media, but it's more useful in the long-term.
Absolutely terrible, terrible idea. Deeply complex issues, like a global pandemic caused by a new virus of which there is little understanding, require liberal treatment of new ideas, and a free flow of information, to enable society to collectively tackle them.
People having the liberty to publish information that may be wrong is how information that may be right has a chance to get exposure.
We do not know, before dissemination and public review, what is and isn't correct information.
The process of critiquing a flawed analysis, or critiquing a flawed criticism of a correct analysis, is also extremely valuable.
@dang can I make a suggestion? On threads with high comment counts, put numerical pagination links at the very top of the comments. Otherwise to see the next pages of comments I have to scroll to the bottom, click "more", scroll to the bottom again, click "more" again... If I could just click a little "1", "2", "3", etc. to jump straight to second and third page top level comments, I think it would help improve visibility of those other top level comments.
Problem is it seems that the order of posts frequently changes within a discussion tree. I don't think HN was really built with these large discussions in mind.
There was a point in time when the CDC was saying that masks are ineffective at controlling the spread. I assume that was to mitigate hoarding in an effort to keep medical professionals supplied.
If I had made a YouTube video explaining the effectiveness of masks during that time, whose video would have been removed, mine or the CDC's? Who would've made that decision and what method would they have used to reach their conclusion?
It's a patch at best, and I wager many people here already see the side effects on the horizon: faulty algorithms tagging actual information as misinformation, news organizations continuing to skid by while spewing misleading claims, videos and channels deleted without recourse.
All of this is painful. Youtube's ineptitude specifically has been painful to content creators in the past. Censorship of a "public" forum is always immensely painful, and heavy handed attempts by Mega Corporations at controlling public discourse always feel dystopian.
The most painful aspect of it all is that I have come to personally agree with these measures, purely as an immediate firefighting response.
But this is not a solution. As a society, our mental immune system is weak against disinformation and even the most obvious unscientific propaganda. These traits are even welcomed by the powerful in other contexts, that's why it's such a big deal that the people who started those fires are so closely related to those putting them out right now.
Given the time of day right now, Silicon Valley hasn't woken up yet. It's currently mostly Europeans here, who have a deeper connection to authoritarian history.
Once the Americans get here, expect to see more support or at the very least the tired "actually this is a company therefore it's not censorship"
What I find offensive is the idea that groups of individuals decide what information is good or bad for everyone. That implies we can't decide for ourselves, that experts should filter information before it gets to us. How naive is that? Is there such a lack of civic education and historical awareness that people don't see the dangers posed here?
While I understand what YouTube is trying to do, even if it raises issues with free speech, I feel the bigger issue is how they plan to determine whether a video is in line with their policies.
Due to the sheer volume of videos YouTube needs to handle, this will inevitably end up as some sort of ML/natural-language-processing algorithm. Needless to say, the problem of determining whether the contents of a video are in line with currently established scientific knowledge is WAY beyond the ability of current AI.
In practice, it seems that YouTube struggles with basic stuff, like flagging an anti-mask debunking video because it contains a clip of the video it's debunking.
I fear that, in practice, this will mean that if your video contains the 'C-word' in any context, it has a good chance of being deleted, and an even better chance of being demonetized.
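To make that failure mode concrete, here is a minimal sketch, purely hypothetical and obviously not YouTube's real pipeline, of the kind of naive phrase matching that produces exactly the false positive described above: a debunking video that quotes the claim it refutes gets flagged right alongside the original, because the matcher has no notion of stance.

    # Hypothetical sketch of naive phrase-based flagging; not YouTube's actual system.
    BANNED_PHRASES = [
        "masks don't work",
        "the vaccine will kill you",
        "microchips in the vaccine",
    ]

    def flag_transcript(transcript: str) -> bool:
        """Flag a video whose transcript contains any banned phrase."""
        text = transcript.lower()
        return any(phrase in text for phrase in BANNED_PHRASES)

    anti_mask_video = "Listen folks, masks don't work, it's all a hoax."
    debunk_video = 'They keep saying "masks don\'t work", but here is the evidence for why that claim is wrong.'

    print(flag_transcript(anti_mask_video))  # True
    print(flag_transcript(debunk_video))     # True: false positive on the debunking video

Telling the two apart requires stance detection, a much harder problem than phrase matching, which is presumably why debunking channels keep getting caught in the net.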
This type of censorship is what authoritarian countries would view as a half-assed attempt at censorship. You are trying to censor some things, but you have to actually decide each case, and every decision is scrutinized by everyone, including the media. This is definitely a slippery slope toward censorship of even scrutiny itself.
One nice feature on many social media platforms is that one can unfollow or block people who seem to generate more heat than light in their posts. I am willing to read and interact with someone with whom I disagree as long as the discussion remains civil and productive.
Trolls have been around from the early days of the internet. I remember one student from Dartmouth that posted on many of the science newsgroups under the name Archimedes Plutonium. He continually posted that the Plutonium atom was God. This started many unproductive discussions and spammed many newsgroups. I'm sure you can imagine the flame wars... I am thankful that current social media lets me block/ignore trolls.
But you're counting on your intelligence to act as a filter. It has become very clear in the last many years that a not-insignificant percentage of the populace doesn't possess the knowledge or level of rational thought to properly evaluate what they are being fed en masse. It's increasingly easy to exploit some people's herd mentality and lack of education to raise movements that can have significant impact. I often talk to people who suddenly spout off some random conspiracy theory because they read about it 10 times on Facebook, and they lack the ability to evaluate it properly. I wholly agree that I'm not a fan of censorship, but likewise I don't think private platforms have a responsibility to equally showcase what is known to be false.
It's all going to be a mess no matter how you handle it... but permitting an endemic lack of education over the last generation has brought the concept of democracy to this point. I wish there was a clearer solution.
Google has been taking an aggressive approach to Covid content since the start of the epidemic. I tried searching Google for previous articles and commentary about it, but they're now quite hard to find.
We had a news aggregator kicked out of Google Play because it "promoted misinformation" but it was simply the same headlines (and sources) Google News uses! It has since been reinstated. I'd seen other mentions of apps being removed for the same reasons, which I assume was done in an automated fashion.
There's also been commentary on the same kind of thing happening on YouTube for months on the general topic of Covid-19.
Oh no, so most big media channels need to be closed down. This will be a slaughterfest :) I rarely find a news channel with accurate COVID-19 information. The UK Telegraph being one of the rare good cases.
* Most of them, for example, present the latest positive PCR test numbers per day as “confirmed new infections”, whilst they are not “new” and not “infections”. They are just “confirmed positives”.
They are not "new", because the statistics don't give away if they are old or new, they don't store old PCR or antibody tests results.
And they are certainly not “infections”, just “positives”, as people with old infections who are now immune will still test positive with PCR. Given the 7-month timeframe, the very large number of undetected asymptomatic cases, and the upcoming cold season, when many people will suddenly become symptomatic (from catching the common cold) and therefore finally get tested, the real new-infection rate could be 20% less than the reported one. Those people did not get tested before, will be tested now, and will be positive but not newly infected.
* Previously they reported the mortality rate for months as the CFR, not the IFR. The CFR is only relevant for healthcare providers (“how many people die in hospitals”), not for policies (“how many people die after being infected”). The CFR, being 10-30x higher (citing Ioannidis's latest stats here), just made for much more dramatic numbers, so the policies looked much better justified (see the small worked example of the difference below). https://doi.org/10.1111/eci.13423
* They continue to mix up HCQ prevention with treatment, for pure political gain.
And many more such lies and inaccuracies, mostly by intent, not by mistake.
Getting rid of all the misinformation would be a godsend, but we certainly don't need even more censorship, as already done under the conspiracy label.
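As a small worked example of the CFR/IFR distinction mentioned in the comment above, with made-up numbers chosen only for illustration (not real data for any country):

    # Illustrative numbers only; not real data for any country.
    deaths = 500
    confirmed_cases = 25_000        # people who tested positive
    estimated_infections = 250_000  # including undetected and asymptomatic infections

    cfr = deaths / confirmed_cases       # case fatality rate      = 0.02  -> 2.0%
    ifr = deaths / estimated_infections  # infection fatality rate = 0.002 -> 0.2%

    print(f"CFR: {cfr:.1%}, IFR: {ifr:.1%}")  # CFR: 2.0%, IFR: 0.2%

With ten undetected infections for every confirmed case, the CFR is ten times the IFR, so quoting the CFR where the IFR is meant makes the disease look an order of magnitude deadlier. The same arithmetic is also why mortality estimates tend to fall as testing widens and the denominator grows.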
People have stopped recognizing evil just because it suits their current myopic political agendas. Talk about lacking the power of critical thought: the problem for democracies is not the fringe loonies who believe in whatever the idiotic theory of the week is, but the majority that trades freedoms for momentary comfort. The argument for censorship always goes "I'm not really for censorship, but now we really have to thought-police people who disagree with me, who am smart and correct, or the world will end", and it is invalid, evil and stupid in any context.
Some of my views will be controversial, and that's absolutely fine - I am quite happy that my views and opinions won't put me in jail, thanks to our constitution.
That said, with this freedom comes also the freedom of websites like youtube, HN, etc to NOT offer me a platform, if my views are opposed to said platform's morals.
There are plenty of alternative platforms that won't deplatform this stuff. Those platforms are free to ban whatever they want as well. Or do you think the conservative platforms should allow anything to be posted as well?
We should realise by now that Google, youtube, facebook and apple really DO have the truth! They studied truth hard, and know it! Why others don't know the truth is their problem.
It's definitely not the case that the FAANG companies are espousing a self-serving opinion, nor that they are driving a politically divisive agenda for their benefit. It's just not that!
So sorry - FAANG have the Truth, and those who get banned deserved it! And if you know what's good for you, you'd better learn the truth too and quick!
I am betting that all of the recent censorship by Google, Facebook, and Twitter is a thin attempt to ward off the anti-trust actions that the US wants to impose on the big tech companies. There is false information out there - that is nothing new - but we're in the territory where we are censoring opinions, viewpoints, and non-predominant theory. Is light a particle or wave? Ban everyone who believes X, they are fools and they shouldn't be allowed to infect others with their heresy.
It's going to be interesting to see how YouTube polices information from Govt sources that are arguably lacking in scientific rigour.
For example, India has a whole ministry devoted to 'alternative' medicine called AYUSH (which is an acronym for Ayurveda, Yoga and Naturopathy, Unani, Siddha, Homoeopathy) which puts out videos like this:
Isn't YouTube banning certain content exactly what free speech is about? YouTube, a corporate entity with its own goals and beliefs, exercises its right to say what it thinks is the truth. It's similar to most newspapers: they're not entirely objective.
If content publishers want to post Corona related information that YouTube thinks is false, they'll just have to find/create another platform for it.
Now donate the ad money you generated off videos that contained misinfo. Otherwise this doesn't mean anything.
Social media companies continue to profit off conspiracy and extremist content. Then, when their position becomes politically untenable, they just say 'Oooops, our bad. Can't post that anymore.' But they still get to keep the bags of money that content generated.
Sincere question for the folks who posit that any regulation of broadcast of information is pragmatically bad: do you think there's no problem with the scale of the disinformation campaigns in popular social media? Or that it's a problem, but same as it ever was? Or that it's an unsolvable problem? Or that there's some other solution?
Again, a simple solution is to slow the spread of information by tying it to a geographic radius of influence. Most trolls should have an audience of 1, and it should take a year of positive community behavior to ever have a video shown to over 50 people.
That will dramatically affect revenue, but it will vastly improve quality.
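For what it's worth, here is a minimal sketch of what that reach-throttling idea might look like. The one-year and 50-viewer thresholds come from the comment above; everything else (the tier values, the field names, and the omission of the geographic-radius part, which would need location data) is made up purely for illustration.

    from dataclasses import dataclass

    # Sketch of the reach-throttling proposal above; thresholds from the comment,
    # tiers and field names invented for illustration.

    @dataclass
    class Uploader:
        days_in_good_standing: int  # days of positive community behavior
        strikes: int                # moderation strikes on record

    def max_initial_audience(uploader: Uploader) -> int:
        """Cap how many people a newly uploaded video is initially shown to."""
        if uploader.strikes > 0:
            return 1        # known bad actors effectively talk to themselves
        if uploader.days_in_good_standing < 365:
            return 50       # under a year of good standing: small reach only
        return 10_000       # established accounts earn wider, still bounded, reach

    print(max_initial_audience(Uploader(days_in_good_standing=30, strikes=2)))   # 1
    print(max_initial_audience(Uploader(days_in_good_standing=200, strikes=0)))  # 50
    print(max_initial_audience(Uploader(days_in_good_standing=400, strikes=0)))  # 10000

Whether the revenue hit would be acceptable is a separate question; the point of the sketch is only that the mechanism itself is trivial to express.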
I’m not a fan of moves like this, but this is not the same thing as having your right to free speech infringed upon. You don’t have a right to appear on a public company’s video service. You have a right to say what you please (within limits) without fear of being punished by the government.
Hyperreality - Hyperreality is used in semiotics and postmodern philosophy to describe an inability of consciousness to distinguish reality from a simulation of reality, especially in technologically advanced post-modern societies. Hyperreality is a way of characterizing what our consciousness defines as "real" in a world where a multitude of media can radically shape and filter an original event or experience. Hyperreality is seen as a condition in which what is real and what is fiction are seamlessly blended together so that there is no clear distinction between where one ends and the other begins. Hyperreality is a hypothetical communications infrastructure made possible by information technology. It allows the commingling of physical reality with virtual reality and human intelligence with artificial intelligence. Individuals may find themselves for different reasons, more in tune or involved with the hyperreal world and less with the physical real world.
Hyperreality is what I thought of when I saw the comments here. I think these are 20-somethings and lower-30s techies who, despite living in an incredibly privileged time compared to decades ago, believe that they're living in the worst of times.
Yes, there's meanness on Twitter, lies and manipulations in the media. I think if you live in that world, it's never been so "unstable" - a word used repeatedly in this thread's comments.
Statistically, there are fewer, not more, deaths from things like terrorism; less, not more, racism and sexism; less, not more, suffering from disease; and so forth. I'm talking about statistics here, not perceptions.
But the hyperreality of the Twitterverse paints a very different picture.
So now we have grand essays defending censorship on the grounds that society is more unstable than it ever was. Than it has been in a decade, yes. Than it has been in 30, 50, 80 years? No.
BigTech has created something that's never existed before. Perhaps the best phrase is it is an "Epistemological Monopoly"
If you run a newspaper, you publish lots of letters from nuts and cranks. You run stories on some of the things they do. People love hearing about other colorful people in their community (as long as they're peaceful, of course) When you run news stories, the general idea is that you have an editorial staff which is either balanced politically and temperamentally or is able to fake being so, the goal being to cut off and prevent any appearance of the publication accidentally slanting news when there's a much better and generally-accepted way of telling it. If it's passionate, stick that shit in the editorial section where it belongs.
But there's no longer that distinction between news, quirky folks, and slice-of-life stories. In fact, the more it's all muddled up, the more eyeballs you can reach. It's even become its own anti-joke. I doubt many of the flat-earthers are actually serious, but it's a way to be wacky that the internet overlords overlook.
And there lies the rub. If you control the gateway to information for billions of people, you effectively control where they go online. And since you control where they go online, you control what they know. These things logically follow one another. If they didn't, so many billions of dollars wouldn't be invested in internet advertising every week. We control what you know and when you know it. Driving through town? Hey, did you know there's a great pizza restaurant a block away? Computers are our brain, and internet companies are making huge bank on knowing stuff for us and telling us what would be fun to do.
So if they don't like an idea, think it's dangerous, not only does it leave their platform, it ceases to exist for most people. In the physical world it may exist, but in the minds of the world it does not.
That is immensely fucked up. The only thing worse would be subtle algorithms that do the same thing but are black boxes and difficult to prove exist or how they work. At least in this instance we're all having an honest conversation about it.
I could make a long, long list of information everybody knew was dangerous that we would have happily shut down if we could: germ theory, heliocentric solar system, bacteria causing ulcers, and so on. I could make an equally-long list of information everybody knew was true and good for folks that we later abandoned: phrenology, Social Darwinism, Eugenics, homosexuality being deviance, etc. Hell, just look at how many times everybody that's serious have flipped around on issues during the pandemic. That's how it's supposed to work. At one point, we had ER physicians secretly going behind the backs of their hospitals to share information. This was both breaking their contracts and could have caused great harm to themselves, their patients, and the institutions they worked for.
Now these problems are solved. Ideas contrary to leading opinions just cease to exist, and ideas supported by leading opinion, mixed with a lot of heavy-handed emotion and hyperbole, get stuck at the top of your feed.
We don't fight you over wrongthink anymore. We don't stick you in the editorial section or write articles praising you for your creativity even if most of us disagree. You and your ideas just cease to exist as meaningful concepts; no mentioning, no chatting, no links, sometimes even no association with people who feel that way. We have certainly done a great job of fixing that pesky "How do we keep the most people using our service" problem. Congrats guys.
A private party setting standards for acceptable content is NOT censorship. It’s their right, and it’s also a good idea. Anything other than that WOULD be censorship.
Who cares? Stop pretending like YouTube is an authoritative source on anything, it's an entertainment platform optimized for selling ads, not free-speech zone.
I think we need to be careful about the free speech debate in this case. Free speech applies to civic life, not your relationship with a business. Near me we have a bar that doesn't tolerate swearing. On your first offense, the bartender gives a warning. On the second offense, you have to leave. The bar owner has the right to make a rule that makes his business a more pleasant environment. As a result, the bar does quite well. Google has the right to do the same in order to make the experience of YT better.
It is important to remember in these censorship discussions that it has never in human history been so easy and cheap to create content seen by millions of people than it is right now. Before the web if you wanted to spread medical misinformation widely you probably needed millions of dollars to promote and distribute your ideas yourself. In that sense you were much more censored in the past than you are today even if all the social media companies ban you.
Typical BS title. The modern "news" really can't help themselves. Youtube banned information that its censors disagree with. That's it. The rest is just labels that belie the fundamental truth: the MSM is a perception management platform; they see themselves not as providers but as deciders. Their consumers are really subjects, and they treat them as stupid at every opportunity.
It says "claims that contradict consensus" and then gives the example of content saying the that COVID19 vaccines will cause death or infertility - is there a consensus that a vaccine will be 100% non-fatal and not have infertility as a side-effect? I have not heard of this.
I understand the obvious point that there are people screaming at the camera saying it's part of the globalist trans-dimensional vampire plot to reduce the world population, led by the regular cast of rich characters - but what's wrong with saying that? Too many people believe it? In my country people perform idol worship, why isn't that banned? Poor people donate their paltry savings to temples, mosques, churches, etc. whose corrupt trustees often enrich themselves. It's not harmless.
I also understand that to successfully beat COVID-19 we need everyone to be on board - but has anyone gamed out what happens when you censor the people who have these views? Do you think it becomes easier to convince them to vaccinate or wear a mask? I am not impressed with the competence of the decision makers at YouTube.
The "marketplace of ideas" is, like "meritocracy", one of those dangerous libertarian myths that at one time were held as a given in internet culture.
The truth is, ideas are less like market goods than they are like viruses. And dangerous ideas, like viruses, need to be suppressed before they can spread.
> YouTube doesn't allow content that spreads medical misinformation that contradicts local health authorities’ or the World Health Organization’s (WHO) medical information about COVID-19
"Most locations probably have an infection fatality rate less than 0.20% and with appropriate, precise non-pharmacological measures that selectively try to protect high-risk vulnerable populations and settings, the infection fatality rate may be brought even lower. "
That's the point though: if a video had, in the intervening time, asserted this more accurate IFR, it'd be against the letter-of-the-law per these rules.
The WHO hasn't been the infallible source of truth that Google is trying to portray - nobody has.
This is hardly the only thing or topic that youtube bans and will continue to ban. Youtube is extremely censorious, and it's not just things that are 'bad'. This fits entirely within their existing behavior and is nothing novel. It isn't going to be the straw that breaks the camel's back. It's just turning up the temp on the frog in the pot.
Yes, people do need to stop centralizing in a single corporate entity to communicate. But is banning vaccine misinformation the reason to do it? No.
> "The video platform said it would now ban any content with claims about COVID-19 vaccines that contradict consensus from local health authorities or the World Health Organization."
The local health authorities (in the US) and the WHO also told us in February and March that masks were useless for normal people. Even though people in Asian countries were broadly wearing masks based on prior experience from SARS. It turns out that the US public health authorities and the WHO were both wrong.
Is your argument that because US public health authorities and WHO might occasionally be wrong, we therefore need to make sure we also get the opinions of people who are almost always wrong?
I would be curious whether they will ban claims that the vaccine is dangerous if it is released before the election but not if it is released after (a Democratic talking point, unfortunately).
The "slippery slope" is always a lazy argument w. little substance. It's easy to get induction wrong even in simple math problems let alone trying to pigeonhole humanity into tidy formula. The idea that the market will regulate itself because (... because what?) is also lazy. The bottom line is it really makes a great deal of sense to say,"hey, don't drink bleach," "hey, no Nazis on this platform please!", "hey, it's highly unlikely Hillary Clinton is a pedophile" - This is serious - this is real life and in good conscience we should all be supportive of this kind of "censorship" for the good of humanity.
People always say there's a right and a wrong or that there are two equally valid sides to an argument but that's just plain wrong. The Earth isn't flat, the government isn't run by lizard people, and masks are effective at curbing spread of a airborne disease.
You make it sound like both sides are perpetuating hoaxes but YouTube is only censoring the hoaxes from one side.
There are always different perspective on everything. You can determine the value of each for yourself and try to convince others of its merits. That is your freedom and your limit.
> You make it sound like both sides are perpetuating hoaxes
The only thing that's lazy is imagining that all situations have obvious choices of what's right and wrong, and that people who don't agree with you are crazy, stupid, or bad people. What's much more likely is that you are highly ignorant of all the facts. Your "good conscience" would lead humanity into a hellish reality. Or was Nazi Germany an impossible "slippery slope"?
> The video platform said it would now ban any content with claims about COVID-19 vaccines that contradict consensus from local health authorities or the World Health Organization.
Has this hypothetical scenario ever happened historically:
- Current scientific consensus is wrong
- Scientist with dissenting theory is right
- Consensus eventually changes to align with dissenting theory as evidence mounts for it
I'm inclined to think it has happened many times in history, and will happen many times in the future. And if so, doesn't that mean we could get into a situation where step 3 (consensus realigning) doesn't happen because the truth is actively suppressed by every major speech platform?
"YouTube says it already removes content that disputes the existence or transmission of COVID-19, promotes medically unsubstantiated methods of treatment, discourages people from seeking medical care or explicitly disputes health authorities’ guidance on self-isolation or social distancing."
Fortunately, there has been worldwide expert consensus on all of these things since day 1 of the pandemic, so YouTube does not find itself in the difficult position of acting as the arbiter of truth.
The problem with this is that it's only selectively applied. What happens when Pharma companies get caught lying about the effects of drugs? Are they going to be de-platformed? No, because they pay for ads and so they can spread lies all day.
Look at Study 329 by GSK, which said that Paxil was effective in children when in fact the data showed that it doesn't work and causes suicidal behavior in children (https://en.wikipedia.org/wiki/Study_329). That published study was a lie, and it was used to put millions of children on drugs. Proven actual harm done to innocent kids.
Yet, GSK and other Pharma companies (which have all been caught doing the same thing) are still allowed to buy ads, to put things on TV, use social media (by the way often with fake grass roots organizations - Astroturfing - which also presumably is against social media policy).
I have no problem with the concept of it, but let's be real, this is not done in a fair way. This is done for specific political reasons. Look what is going on with Dr. Mercola as a real life case study of this.
Aha. Very trustworthy. 70% of the ads I see on youtube are borderline scammy. Shady investment thingies, sometimes outright scam. But YouTube is filtering bad content now, aha. Double standard at its best. Puritanism I guess. Maybe it is time to boycott this nonsense.
So what constitutes misinformation? Science Youtubers like thunderf00t have been making regular updates on the subject; would that count as fake news? What about videos discussing the effects of things like masks and so on?
Who is to say what is misinformation, the ministry of truth or what? This is getting ridiculous. You can barely comment on youtube anymore if what you have to say is even slightly un-pc or controversial.
I hate this.