The problem is the poorly considered but extremely widespread epistemology that goes something like "whether a claim is true depends on how convinced certain people are of that claim."
> Once you create a fact-checking entity, it will decide for itself what the objective truths are.
No, it will just make claims, which may be true or false. There's nothing mysterious or troubling about this. It's not "deciding" what's true.
That is exactly what they are doing; you are playing with semantics. The public, as well as private companies, treat those claims as validation of truth.
The troubling aspect is that the claims come with some level of authority behind them. There are consequences for not aligning with the positions they take.
It very well may be an explanation of the truth, and there's nothing wrong with that. If that's what you mean by "validation" then that's great! But if you mean that whether a claim is "valid" depends on any one person or source's position on the claim, then you're back to that bad epistemology I described.
> But if you mean that whether a claim is "valid" depends on any one person or source's position on the claim, then you're back to that bad epistemology I described.
No, I'm not asserting that view. However, what you are describing is, I think, the viewpoint of most people: they will accept the claim as valid.
> However, what you are describing is, I think, the viewpoint of most people.
Do you think that most people think that whether a claim is true depends on Facebook's stance on that claim? That would be extremely shocking to me. My impression is the opposite: that there is mainstream repulsion to Facebook (and Twitter, etc.) being so bold as to take any stance on factual issues.
People do leverage Facebook's claims when those claims align with their own. You are probably right that it doesn't directly change the opinion of anyone who disagrees.
However, it does reinforce confirmation bias. Those who align with Facebook's viewpoint will be less likely to listen to other users' opposing viewpoints, as they feel emboldened that their viewpoint has some official support.
So I think it has an indirect effect in that it weakens user-to-user influence.
> Once you create a fact-checking entity, it will decide for itself what the objective truths are
Okay, so what? What do you think a court does? Or the guy who reads the meter at the water sanitation plant, or someone who hands out parking tickets?
A fact checker is nothing but an institution with the authority to adjudicate certain questions relevant to the maintenance of the platform, no different from any other authority that manages public conduct. Where is the problem?
And of course fact-checkers are not unaccountable. As in this case, their behavior is itself a topic of public discourse.
The view that "if we just implement it right, it will be fine" is a fallacy.
These organizations represent immense power through influence. By their very existence arises a conflict between power and truth: if you can influence the fact checkers, then you have influence over truth. Who would you trust to form such an organization? Where is the oversight? Who monitors the monitors?
We are seeing what "reasonable censorship" actually looks like. It is an oxymoron.
>The view that "if we just implement it right, it will be fine" is a fallacy.
Good thing I didn't say that, then. The original comment I was challenging said there was no such thing as fact checking. "Fact checking is hard" is not an argument that fact checking doesn't exist as a concept.
>We are seeing what "reasonable censorship" actually looks like. It is an oxymoron.
What is your definition of censorship? Because there is plenty of speech that I think is reasonable to censor. Obviously there is illegal speech: should Facebook be forced to host threats, defamation, copyright infringement, child porn, etc.? What about speech that is not illegal but is objectionable in some way? Should Facebook be forced to include hardcore porn in people's feeds if someone posts it? Once we establish that not all speech is appropriate in all contexts, censorship sure starts to seem reasonable. We just don't call it censorship because of the negative connotation; we call "reasonable censorship" moderation.
Yes, what is prohibited by law is not optional. However, in most cases the law should handle it, meaning the post remains up unless some legal action is taken, since the host often can't know the legal status except in the more obvious cases like child porn.
It is censorship when it is out of the control of users. It is filtering when the user has a choice.
Yes, users want to see or not see some categories of content. That certainly is not a problem when it is presented to the user as a choice.
There are platforms that operate mostly in this way, like MeWe.
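To make the distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical (`Post`, the category names, and both functions are illustrative, not any platform's real API): the mechanics of the two operations are identical, and what differs is who controls the exclusion set.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    category: str  # hypothetical labels, e.g. "politics", "adult"
    text: str

def platform_censor(feed: list[Post], banned: set[str]) -> list[Post]:
    # Censorship: `banned` is set by the platform; the user never sees
    # these posts and has no switch to turn the suppression off.
    return [p for p in feed if p.category not in banned]

def user_filter(feed: list[Post], hidden: set[str]) -> list[Post]:
    # Filtering: same operation, but `hidden` comes from the user's own
    # settings and can be changed (or emptied) at any time.
    return [p for p in feed if p.category not in hidden]

feed = [Post("a", "politics", "..."), Post("b", "adult", "...")]
print(user_filter(feed, hidden={"adult"}))  # the user opted out of one category
```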
Within a fact-checking entity you essentially have the same problem as regulatory capture.