> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
Strong disagree. This is a very naive understanding of the situation. "Fact-checking" by users is just more of the kind of shouting back and forth that these social networks are already full of. That's why third-party fact checks are important.
I have a complicated history with this viewpoint. I remember back when Wikipedia launched in 2001, I thought: there is no way this will work... it will just end up as a cesspool. Boy, was I wrong. I think I was wrong because Wikipedia has a very well-defined and well-enforced moderation model, for example its focus on no original research and a neutral point of view.
How can this be replicated with topics that are by definition controversial, and happening in real time? I don't know. But I don't think Meta/X have any sort of vested interest in seeing sober, fact-based conversations. In fact, their incentives work entirely in the opposite direction: the more divisive and anger-inducing the content, the more traffic and engagement it drives [1]. Whereas with Wikipedia, I would argue the opposite is true: Wikipedia would never have gained the dominance it has if it were full of emotionally charged content with dubious or no sourcing.
So I guess my conclusion is that I doubt any community-sourced "fact checking" effort run by the social media platforms themselves will be successful, because the incentives are misaligned for the platform. Why invest any effort into something that will drive down engagement on your platform?
> ... we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. ...
True, but that doesn't discount that it's a win for Meta.
1) Shouting matches create more ad impressions, as people interact more with the platform. The shouting matches also get more attention from other viewers than any calm factual statement.
2) Less legal responsibility / costs / overhead
3) Less potential flak from being officially involved in fact-checking in a way that displeases the current political group in power
Users lose, but are people who still use FB today going to use FB less because the official fact checkers are gone? Almost certainly not in any significant numbers.
But "fact-checking" by people in authority is OK? Isn't that like, authoritarian?
"Fact-checking" completely removes the ability to debate and is therefore antithetical to a functional democracy. Pushing back against authority, because authorities are often dead wrong, is foundational to a free society. It's hard to imagine anything more authoritarian than "No, I don't have to debate; I'm a fact-checker, and by that measure alone you're wrong and I'm right." Very Orwellian indeed!
Additionally, the number of times that I've observed "fact-checkers" lying through their teeth for obvious political reasons is absurd.
They are given the title of fact checker, which ends debate; this is the authoritarian part. It does not matter who employs them. If fact checkers were angels, we wouldn't have this problem. But fact checkers are subject to human nature just like the rest of us: prone to being biased, wrong, etc. Do you think these fact checkers don't have their own opinions? That they don't vote? Don't lie?
You are assuming that people on social media are a representative cross-section of society, but you will quickly notice that this is not the case; just look at echo chambers.
If I try to debate the same fact on a far-right and a far-left post, each side will predictably converge on its own foregone conclusion - let's not lie to ourselves.
So for your claim to have any validity, you would need a fair, unbiased group of people on every post. (And that is only the first problem; consider the loud voices versus the people who no longer bother to comment because discussion seems impossible.) That is de facto not the case, and it is exactly why fact-checking is indeed helpful.
Without some sort of controls in place, fact-checking becomes useless because it's subject to being gamed by those with the most time on their hands and/or malicious tools, e.g. bots and sock puppets.
You should look into the implementation, at least the one that X has published. It's not just users shouting back and forth at each other. It's actually a pretty impressive system.
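The core idea X has published is "bridging-based" ranking: a note is only scored as helpful if users who normally disagree both rate it helpfully. Here is a minimal pure-Python sketch of that idea on toy data (the rating values, dimensions, and hyperparameters here are illustrative, not the production ones): each rating is modeled as a global mean plus user and note intercepts plus a user-factor times note-factor term, with intercepts regularized harder than factors, so a partisan note's support gets absorbed by the factor term while a cross-viewpoint note earns a high intercept.

```python
import random

# Sketch of bridging-based matrix factorization (toy version, not X's code).
# Model: rating ≈ mu + b_user + b_note + f_user * f_note.
# Cross-viewpoint agreement cannot be explained by a single note factor,
# so it must flow into the note's intercept b_note ("helpfulness").

random.seed(0)

# Users 0-1 lean one way, users 2-3 the other (labels are illustrative).
# Note 0 ("bridge") is rated helpful (1.0) by everyone;
# note 1 ("partisan") is rated helpful only by one side.
ratings = [(u, 0, 1.0) for u in range(4)]
ratings += [(0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0)]

mu = 0.0
b_user = [0.0] * 4
b_note = [0.0] * 2
f_user = [random.uniform(-0.1, 0.1) for _ in range(4)]
f_note = [random.uniform(-0.1, 0.1) for _ in range(2)]

LR, LAM_I, LAM_F = 0.05, 0.15, 0.03  # intercepts regularized harder than factors

for _ in range(3000):
    for u, n, r in ratings:
        pred = mu + b_user[u] + b_note[n] + f_user[u] * f_note[n]
        err = pred - r
        mu        -= LR * (err + LAM_I * mu)
        b_user[u] -= LR * (err + LAM_I * b_user[u])
        b_note[n] -= LR * (err + LAM_I * b_note[n])
        fu, fn = f_user[u], f_note[n]
        f_user[u] -= LR * (err * fn + LAM_F * fu)
        f_note[n] -= LR * (err * fu + LAM_F * fn)

# The note liked across viewpoints earns the higher helpfulness intercept.
print(b_note[0] > b_note[1])
```

In the real system a note is shown only when its fitted intercept clears a threshold, which is why a pile-on from one side alone cannot get a note published.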
It's more naive to think a fact-checking unit susceptible to government pressure is likely to be better.
There will always be government pressure, in one form or another, to censor content it doesn't like. And we've obviously seen how this works with the Dems for the last 4 years.