In addition to identifying the network, the authors also calculate that taking down as few as ten influential accounts can halve the retweets of misinformation. I'm not convinced that it's quite that easy, because the network would respond and adapt, but it's still an eye-opening demonstration of the power of social media companies.
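A back-of-the-envelope sketch of that kind of estimate (not the paper's actual method; the account names and retweet log below are invented): rank accounts by how often their misinformation tweets are retweeted, then compute what fraction of those retweets would disappear if the top k accounts were banned. This is a static estimate, which is exactly why the "network would adapt" caveat matters.

```python
from collections import Counter

# Hypothetical retweet log: (retweeter, original_account) pairs where the
# original tweet linked to a low-credibility source.
retweets = [
    ("u1", "influencerA"), ("u2", "influencerA"), ("u3", "influencerA"),
    ("u4", "influencerB"), ("u5", "influencerB"),
    ("u6", "fringeC"),
]

def misinfo_reduction(retweets, k):
    """Fraction of misinformation retweets removed by banning the k
    accounts whose tweets are retweeted most often. Static estimate:
    assumes the remaining network does not adapt."""
    counts = Counter(src for _, src in retweets)
    top = {acct for acct, _ in counts.most_common(k)}
    removed = sum(1 for _, src in retweets if src in top)
    return removed / len(retweets)

print(misinfo_reduction(retweets, 1))  # influencerA alone accounts for 3/6 = 0.5
```

In this toy log a single account accounts for half the retweets, which is the same shape of result the paper reports at the scale of ten accounts.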
The paper also finds that many misinformation accounts are not bots. Currently, misinformation is not a bannable offense for most social networks. Twitter, Facebook, and YouTube leaders have said, verbatim, that they don't want to be "arbiters of truth". Of course, that leaves open the question of what responsibility they have and what actions they should take.
There's also the question of what misinformation is. You have a continuum from differences of opinion, to people being mistaken, to deliberate misinformation. Where do you (reliably) draw the line? I can see why they wouldn't want to be "arbiters of truth" -- I'm not sure I would want them to be either.
I'm not sure there is a question about what it is. Misinformation is what the word says: information that is wrong, false information. People's opinions cannot be misinformation; they are opinions. I do agree there is a gray area between people being mistaken and deliberate misinformation, but there is no need to draw a line there; both should be addressed. People should be corrected, and misinformation should not be spread further.
There is nothing more infuriating than watching educated left-leaning people twist themselves into knots trying to side-step calling lies lies, because they want to appear impartial.
There are lots of people who will "question" endlessly the meaning of "misinformation", trying to make exceptions for that .01% of the time a "gray area" can exist, while ignoring the real harm to society caused by the 99.99% "deliberate misinformation".
I guarantee you, the responses to your comment will be exactly that.
I would compare it to science-denial. Like people who refuse vaccines because of the tiny risks involved, while ignoring the outsized benefits.
People who continuously debate the nuance of what "misinformation" is, while ignoring the harm it does, are no better.
And while one singular bit of misinformation may seem harmless, goofy, and opinionated (e.g. "Obama is a Muslim", crowd sizes, "Uranium One"), a constant stream of them practically creates an alternate reality for the people who believe them, and that is extremely dangerous.
> Trying to make exceptions for that .01% of the time a "gray area" can exist, while ignoring the real harm to society caused by the 99.99% "deliberate misinformation".
The .01% is enough of a basis for a lawsuit, and for grandstanding by the right claiming the company is biased (never mind that almost all "misinformation" is right-leaning, so the .01% false positives are likely also right-leaning). I would absolutely not want to take on such risks from a business perspective. If it's societal good we're aiming for, let society (elected officials) figure out how best to achieve that. Do you really trust for-profit organizations not to have their "editors" swayed by customers?
If that infuriates you, it may be a sign that it's a necessity. There is a reason why social networks do not want to certify information as true or false, and it is certainly not some "twisted side-stepping".
Evidence can change, and something regarded as false yesterday can be true tomorrow. Congratulations, you committed censorship.
Take your example of vaccinations. Huge topic, even in my country, which is at a 96% protection rate. Misinformation here would evidently be the claim that anti-vaxxers are a significant problem that should be talked about, and that we seriously need legislation to enforce vaccination.
Blatantly false; the numbers show it. Having any other opinion is pure science denial.
That said, it can be a problem in other parts of the world. So what should be the reference here?
You aren't even able to develop a theory about the damage of misinformation that would withstand scientific scrutiny. Maybe only because of lacking data, rather than there being no correlation, but still.
So in the end I would conclude you only want to censor things you do not like.
If you are more certain about your point of view and are able to articulate it, maybe more people will believe you.
> Evidence can change and something regarded as false yesterday, can be true tomorrow.
Earlier today I saw a thread of tweets criticizing a media figure for promoting the claim that Chelsea Clinton is married to a nephew of George Soros.
This is a relatively easy claim to check; the identity of Chelsea Clinton's husband is not a secret, and Soros' family tree is also known. Her husband is not, in fact, a nephew of George Soros.
Do you believe it is correct, then, to identify a tweet claiming "Chelsea Clinton is married to George Soros' nephew" as "misinformation"? Or would you, in a sense of charitable benevolence to the shifting nature of truth, want to hold off, just in case tomorrow she wakes up, decides she's unhappy, and files for divorce so that she can actually go marry a Soros nephew?
I love this. Taking a politically charged topic and trying to prove something irrefutably false while trying to make a completely different point. Yes, of course that is misinformation.
Let's try something more neutral. "Water is an aggregate state of carbon dioxide." Is this misinformation? Yes. Do I want social networks to be chemistry teachers? No.
> Take your example of vaccinations. Huge topic, even in my country which is at a 96% protected rate. Misinformation would evidently be that anti-vaxxers are a significant problem that should be talked about and that we seriously need legislation to enforce vaccination.
> Blatantly false, the numbers show it. Having any other opinion is pure science denial.
For measles, the herd immunity threshold is 93-95% [1].
The problem is that there's no such thing as a positive truth. Truth always refers to a certain framework which is shared and acknowledged by a given group, which also distinguishes groups one from another. (So we may say, truth is necessarily a question of ideology.)
The interesting part is the mutual reinforcement of targeted content and what we may call "idiosyncratic truth," since both refer to distinctive group boundaries and the respective framework of shared world view. Social media will always reinforce such bias, it's part of the setup.
Lastly, if we acknowledge that there's no way to identify an unbiased (positive) truth, it may be also hard to identify bias at all. It may be possible to identify misinformation which is spread arbitrarily as part of a broader strategy (then better called disinformation), but even then it may be hard to reach consensus. (The elephant in the room may be the UN presentation preceding the last Gulf War.)
Much longer explanation that there is, in fact, truth: http://yudkowsky.net/rational/the-simple-truth/. And it's exactly that naive, obvious, intuitive thing you think of before trying to sophisticate or politicize the concept.
> The problem is that there's no such thing as a positive truth. Truth always refers to a certain framework which is shared and acknowledged by a given group, which also distinguishes groups one from another. (So we may say, truth is necessarily a question of ideology.)
Aristotle defined truth essentially correctly: "To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true."
Are you saying that reality is also ideological (quite a radical claim) or simply that perceptions of reality (and therefore what people think is true) are ideological (which is a rather tamer claim)?
However, we do know, at least since Kant, that perception is not a simple matter of apperception. Reality is beyond reach and we have to resort to internal combinational capabilities to assert reality, thus considered a construct. Constructed reality always adheres to broader frameworks, retroactive interpretation, etc. Moreover, when it comes to abstract concepts, perception isn't really a means of asserting reality anymore. How could we check a proposition like, "The US is ruled by the deep state" by simple perception? In fact, when we are confronted with a proposition like this, it is much more for the overall credibility that we allow to be attributed to the proposition and/or its source and how it fits to our own broader world view, whether we tend to acknowledge the proposition or to reject it. (Fact checking may be here seen as a sophisticated approach to rejecting the proposition at first.) So, yes, ideology is a thing and is to be considered as part of what we use to acknowledge as truth.
>> The problem is that there's no such thing as a positive truth. Truth always refers to a certain framework which is shared and acknowledged by a given group, which also distinguishes groups one from another. (So we may say, truth is necessarily a question of ideology.)
I think that is a great example of something which is obviously untrue.
If we may not connect directly to reality, which has been a well established idea for the last 250 years, how could we deny that what is reality to us is essentially a construct?
That's not a counterpoint. That strip is countering a statement based on the flawed assumption that aliens would count using the same base-10 system most of us here use.
Which really is the argument that falsehoods are [some portion of the time] rooted in wrong assumptions and premises.
I would argue that this only strengthens the statement that facts exist.
That some people (here on Earth) want to claim that 2+2=5 because they have invented their own number 2 (which they conveniently withhold), is precisely the kind of misinformation being discussed in the thread.
I agree that it's a difficult social and political problem. Interestingly, companies are fine banning hate speech, for which the same can be said.
There might be better solutions. One idea is to pay nonpartisan organizations to fact check viral posts. Another is to reduce the use of engagement as a metric, which tends to promote emotionally charged rather than factual content. For example, in January Facebook committed to prioritizing "meaningful social interactions".
And if what’s generally taken to be true actually fits the definition of misinformation, and other misinformation networks are created with the intent to combat that, is this really any different from how politics has always been? It seems we are exchanging a lot more bullshit now, but at least it can be more easily observed when it’s on the internet vs a physical medium. Right now I think the internet has generally made things worse, but it seems possible that we can raise the standard as time goes on. At least now I think a subset of highly interested people are able to more accurately determine what’s factual than anyone would have been able to just a couple decades ago.
The key part is probably the distinction between deliberate and accidental misinformation. Yes, the latter can be problematic as well, but it would seem ridiculous to ban someone for simply making mistakes. Everyone does so, from professional journalists to average Joes on the street.
Unfortunately, even if those pushing deliberate misinformation are banned, the problem will keep going, since others will spread the information because they mistakenly believe it's true.
That's in a way a similar problem - how do you distinguish between deliberate and accidental (especially if you start banning deliberate misinformation, and deliberate misinformers start trying to make it look accidental)? You might be able to catch the more blatant examples (such as it being the only thing they tweet about and it being consistently wrong), but in general I'm not sure how you can look at a false message and decide if it was deliberate or a mistake.
If you then decide to ban all misinformation (because false information is a problem even if it is a mistake), that goes back to the difficulty of deciding what is misinformation - people can cherry-pick true information in a way that gives a misleading impression, or if someone makes a statement that might be true, but there is no evidence for it, do you have to assume that it is false?
I agree that there is a problem here, but I don't know that banning misinformation is the right way to solve it - there are too many gray areas, and it grants _a lot_ of power to social media companies to let them determine what is true.
One problem with making misinformation a bannable offense on Twitter is that they'd have to ban so many mainstream journalists. (Seriously, it's obnoxious how many totally bogus viral claims they start and spread on Twitter; some combination of wide reach, trust, and never bothering to verify anything they post on social media.)
Cite an example of a mainstream journalist propagating a viral claim that any reasonable person should have known was false? That's what misinformation networks do.
There's a lot of what we might call "uncritical claim reporting" going on, where a journalist accurately reports that someone said something without any kind of regard for how implausible it is. There was a lot of this around Brexit, for example: the "£350m for the NHS" bus claim rested on at least three ludicrous assumptions.
Edit: let's have a look at https://twitter.com/afneil/status/1001158920663109632 - Andrew Neil is Chairman of the Spectator (important "serious" rightwing politics magazine) and a BBC presenter. He's accusing the NYT of "Stalinism". One of these has to be wrong.
There's that too, of course, but I mean that they can literally - for instance - get tens of thousands of retweets by posting a photoshopped photo on their verified Twitter account tied to their employer that happens to go viral because it confirms people's existing beliefs, and that doesn't hurt their career in the slightest: https://archive.is/vx4z1 (I've even caught at least one journalist just outright making stuff up on Twitter, though I can't remember who and when off-hand.)
Oh, at least ten thousand of those retweets were post-correction, as is usual on Twitter.
The fundamental problem to all of this is those who believe they are righteous (on any side of politics) will use whatever methods they can to promote their "truth". This includes infiltrating "credible" sources. So it's impossible to have an entire organisation that can be universally considered to be an arbiter of truth, because you can be fairly certain at least some of its members are bad actors. That's not to say some organisations aren't more accurate than others – of course some are. But if you can't entirely remove the need for critical analysis on the part of the reader, you would be better off teaching the reader to be critical than attempt to "fix" a system which fundamentally can't be fixed.
Deciding who we should and shouldn't silence is not the solution. Whatever one chooses, some propagandists are going to win. Giving people the tools to determine the veracity of a statement themselves is the solution (usually this means simple common sense, the ability to reason, some knowledge of rhetoric, and an understanding that personal bias is prevalent in themselves and others, i.e. that it's human nature). Throw in a bit of knowledge about politics (both domestic and international), and armed with these tools I'm confident the majority would be able to come to accurate conclusions on their own.
The problem with a lot of large companies (especially social media ones) is that they're stuck trying to solve non technical problems with tech because actually solving them any other way isn't scalable.
The solution to misinformation in a community would likely be community management/moderation, but that can't scale to the size of something like Facebook or Twitter. Hence the awkward tech 'solutions' that don't work.
What aspect of the study makes you believe they are relying on popularity? They are mostly relying on a very limited set of approved sources (which could be criticized), but their approach seems pretty straightforward. You've gotta have some baseline to compare to.
I'm glad someone is making a public/academic attempt to measure this, because it's hard to imagine just how easy it is to spread disinformation.
So, they're talking about how to disrupt the propagation of information deemed deleterious, given that actors are operating networks of amplifiers. They target the network to disrupt, and then strategize methods of dismantling the network.
But the qualifier "bad" is ensconced in lofty vocabulary. The reality is this is likely a generalizable analysis of how to attack information networks regardless of alignment. All of the high-level adjectives ascribed to these networks are window dressing, related to a separate discrete task: target selection.
After that, it's just take down operations.
The inverse activity is conceptualizing "take-down resistant" information networks, at which point, it becomes evident on closer inspection that all of this is just the throes of an arms race. So let's just cut to the chase, and stop thinking in terms of cat and mouse games.
This is all simply reduced to harasser/harassed relationships. Calculate motives, look at what the objectives of the belligerents are, and instead of feeding into the meat grinder, aim for greener pastures. Go where everyone else isn't.
Once a take-down resistant network is assembled, it can be filled with any sort of content.
Given that the content may be arbitrary, looking at the content alone is not enough. Analysis of at-risk "swing voting" audiences would be the qualifier for an offending orchestrated botnet.
The intent of these botnets in the last election was to "Milgram" people into voting Trump (influence opinion by surrounding and immersing easily swindled individuals in curated content, with many sources creating a perception of magnitude for illusory peer pressure) [0]; so, the content is nothing if it isn't seen.
Whatever the case, this is last year's tactic. It's possible that it may never be used again, in anticipation of adaptation to defend against it. That's if you buy into the hypothesis that this is why people voted the way they did in the 2016 election, and that external technical influence was truly responsible, and even capable of such a manner of attack.
To be honest, it also sounds like a kissing cousin to the hand-wavey premise of flashing suggestive hypnotic single-frame advertisements (BUY MORE POPCORN fnord) in movie theaters to provoke subliminal brainwashing so that people buy more popcorn at movie theaters.
This looks at the election days. What worries me most is that it's still ongoing today, more obvious than ever, and completely taking over social networks. Every presidential tweet is followed by a horde of bots praising some religious aspect of belief in the greater work achieved here. None of it makes sense.
Twitter's not fixing these issues because they're not issues.
Their stock is up 100% from the 52-week low. Nothing's broken.
How could disinformation running rampant on your platform be bad, if your stock is up 100%?
Shareholders are happy, the market's happy, who cares if your pipe is carrying a cultural toxin, as long as that pipe is also delivering revenue and shareholder value?
(Of course, I disagree with what I wrote above. But it lays bare why this hasn't been addressed. Also applies to other social networks, owners of digital, print, and broadcast media outlets.)
I agree with your reading of the current situation. The spread of bots, propaganda and disinformation campaigns is of course a huge threat to the longer-term viability of their platform, but I'm not really convinced anyone is prioritizing that right now, they're all too drunk on sailing into worldwide prominence.
Either they'll fix it, or at some point, consumers will jump ship to a competing platform that includes appropriate protections against these issues.
Cambridge Analytica showed that they figured out that people who are highly religious can be manipulated easily. They called them something like "low-information" voters, meaning they vote more on emotion than facts. The fact that Trump got the religious vote astounds me, as he seems to hold no Christian morality at all.
It's all about people having multiple priorities and views, really. If Trump's philosophy is closer to yours on various social views, despite his religious beliefs being very different, it's possible you may vote for him anyway, since you believe the alternative would be worse.
Religious groups seem to have completely embraced the idea of 'the ends justify the means'. The president could probably reveal himself to be a literal demon and they would still support him.
It is surprising that Trump got the religious vote, since he had shown support for abortion previously, had been divorced multiple times, and engaged in recreational sex with women he was not married to.
Maybe religious people aren't the one-dimensional voters that you imagined them to be. But if you were to admit that, how would you slam both Trump and non-atheists in the same post?
"Despite being a good god-fearing American, I'll vote for Satan in 2020, because of his fiscal conservatism and his views that immigrant children should be separated from their parents"
Your point would be valid if religion was a low priority hobby.
It isn't, so (a subset of) the religious voting bloc are just religious when it suits them, i.e. hypocrites.
Ugh. Is what happened so very difficult to understand? Christians were faced with voting for someone they were generally not comfortable with vs. someone they believed to be truly evil. Her husband vetoed a bill to ban so-called "partial-birth abortion", and Hillary expressed support for late-term abortions.
If you could get over your contempt for people who see things differently than you do and actually talk with some of them you might find that a lot of people were voting against Hillary instead of for Trump.
> Asked by a reporter today what kind of language he could support in a revived bill, Mr. Clinton said he had originally planned to sign the earlier one.
> ''The problem is,'' he said, ''there are a few hundred women every year who have personally agonizing situations where their children are born or are about to be born with terrible deformities, which will cause them to die either just before, during or just after childbirth. And these women, among other things, cannot preserve the ability to have further children unless the enormity -- the enormous size of the baby's head is reduced before being extracted from their bodies.
> ''You know, Hillary and I, we only had one child. And I just cannot look at a woman who's in a situation where the baby she is bearing, against all her wishes and prayers, is going to die anyway, and tell her that I'm signing a law which will prevent her from ever having another child.''
Clearly a monster of pure evil. And his wife too.
Whereas her opponent supported torture, dehumanized immigrants every chance he got, has 5 children with 3 different women, bragged about cheating and sexual assault, and lied as easily as breathing. Admittedly after the election, but he also supported a pedophile, although he had joked about dating his own daughter years before.
I fully stand by exactly what I originally posted. As a raised pentecostal Christian, if that should matter at all.
They weren't voting between Trump and no one. The alternative was Hillary Clinton. For a conservative christian you'll have a hard time finding someone they'd be more scared of having as president.
Why not use detected bots as an automatic false-news alerter, like an ad blocker or spam filter, but for false news?
I currently use a few accounts that spread false news as indicators, with great efficiency. A bit of automation and a browser plugin from a trusted source (like Bellingcat) could help fight the problem.
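The core of that plugin logic is tiny; here's a minimal sketch of the spam-filter analogy (the account names are made up, and in practice the indicator list would come from a trusted curator, as suggested above):

```python
# Hypothetical accounts observed to reliably spread false stories
# (the "indicator" accounts described above).
INDICATOR_ACCOUNTS = {"bot_news_247", "patriot_eagle_88"}

def flag_story(url, sharers, threshold=1):
    """Flag a URL as suspect if at least `threshold` known indicator
    accounts have shared it -- the same logic a spam filter applies
    to known-bad senders."""
    hits = sum(1 for acct in sharers if acct in INDICATOR_ACCOUNTS)
    return hits >= threshold

print(flag_story("example.com/story", ["alice", "bot_news_247"]))  # True
```

The hard part, of course, is not this check but maintaining the indicator list and keeping it trustworthy.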
That's a reverse tragedy of the commons: the happiness of the one. Twitter's (not Tweeter's) shareholders can jump off a cliff as far as I'm concerned; neither you nor they can place their own interests over those of the world at large. They're a communications medium, and one with serious problems; user engagement metrics should only come into play when what the users engage with is not outright propaganda.
Fake news isn't there to drive user engagement, it is there to derail societies and institutions that underpin those societies.
Honestly, is fake news there to derail societies (funded by anarchist ideologues?) or is it to make advertising revenue for the publishers? As far as I've worked out, it's the latter. What's the evidence that it's done with a political aim of derailing a society?
I could be wrong, but I don't think the poster above you was implying the aim was to take down all of society; I think they were implying that the goal of some of these networks is to bring down particular targeted societies.
I tend to think you’re both correct. I think it’s a combination, some people are into driving traffic through clickbait and some networks are amplifying divisive subjects in an attempt to make The Others appear to be unreasonable.
There are certainly many other networks taking part as well, including networks of unorganized regular citizens who are intentionally pushing misinformation in order to make their ideological rivals appear completely irrational. Whether these citizen groups are a consequence of the former groups I'm not sure, but knowingly pushing misinformation is definitely not isolated to governments and clickbait sites. We seem to be overrun with unapologetic, ideology-driven liars.
This is somewhat of a tangent but still related: we can see an active attack on science and higher education right now, and it's pretty clear why. Many of these ideologically driven people instinctively know they'll never return to their former glory when science and an educated populace stand in their way. I think we're going to discover that many of the root causes of the misinformation campaigns stem from the same battleground: many groups and individuals who instinctively know that science, and the closest thing to truth we have, stands in their way.
People talking about politics always try to make their opponents appear unreasonable. You surely can't be complaining about normal people grouping together and promoting the ideas they believe in, can you? The alternative would be censorship of all political dissent, and suppression of citizens talking about opposition political parties. It would be non-democracy.
Your last paragraph about science could be about all sorts of groups but it sounds like you have your personal political enemy in mind. You could be describing Muslims who are opposed to science's revelation that women are as smart as men. Or it could be about liberals who are opposed to science's revelation that the smartest men are smarter than the smartest women (male IQ has a higher standard deviation than female). Or is about feminists who are trying to suppress science that shows that women aren't as hard-working as men? Or is it about transgender activists who are trying to suppress science showing that women are not men? I wouldn't want to censor any of these people. Who am I to say what the one true truth really is for all time? Even science doesn't know that.
What I really think the GP meant, but didn't say because when you actually write it down, it sounds ridiculous, was a conspiracy theory like "Putin wants to destroy America so he supported fake news to get Trump elected and since Trump is obviously bad, he'll destroy America.". As far as I can tell, that's what the opposition to fake news is all about - Trump winning the election. That's certainly when it started anyway.
In time, we'll look back on this argument and ask ourselves "Why the fuck didn't we all delete our accounts as soon as that became obvious?" (As the world burns around us...)
Interesting content, but I get really worried when studies and research show a tendency to be influenced by trends; it's often a sign of poorly aligned incentives, and sometimes even produces hasty or bad results in favour of career maximization.
"... If falsehood had, like truth, but one face only, we should be upon better terms; for we should then take for certain the contrary to what the liar says: but the reverse of truth has a hundred thousand forms, and a field indefinite, without bound or limit. The Pythagoreans make good to be certain and finite, and evil, infinite and uncertain. There are a thousand ways to miss the white, there is only one to hit it. ..." -- Montaigne
https://ebooks.adelaide.edu.au/m/montaigne/michel/essays/boo...
I wish there was a way of making misinformation a punishable offense. (Cue people moaning about misuse, who decides what misinformation is, etc., which is why I said "wish".)
At the moment people can publicly and on-the-record spout the most outrageous bullshit and even when they get proven wrong there is no repercussion. none.
> At the moment people can publicly and on-the-record spout the most outrageous bullshit and even when they get proven wrong there is no repercussion. none.
On a personal level, just cut those people out of your life. Refusing to interact with people who knowingly lie and knowingly spread misinformation is the only rational choice. Time is our most valuable resource, spend your time on projects and people who will carry us forward rather than wasting time on people who are just worthless timesinks.
I've had a look through their list of "low credibility websites" and they seem to be either humour sites (like The Onion and ClickHole) or right-wing sites (I admit there may be some left-wing ones on there that I'm not familiar with).
Is the claim that there are only the right-wing sources for these misinformation networks, or is this just one network they examined?
The article specifically mentions snopes and politifact as sources of truth. They can be, but have also been shown to be less than perfect, and do tend to lean left.
Yesterday, twitter was full of pictures of the immigrant kids that Trump put in cages. This morning, we find out that the pictures were taken in 2014. It's a difficult problem because the retractions are never broadcast/retweeted as prevalently as the original fake news.
This is a difficult area, and the stakes are so high that the arms race will doubtless continue. Because the stakes are so high, you are competing with organizations from allied countries as well as non-allied countries, and from multinational corporations and industry groups, all throwing tens of millions of dollars each on think-tanks and lobbying groups, whose main activities are to publish source material that can be used as supporting "facts" for various news stories.
I think that this is probably a dangerous game to get into; calling for citations is, I think, only likely to be helpful when there is a mutually recognised authority to cite (and, besides, when the claim to be resolved is a factual one—'biased' is not a characterisation that can be proven or disproven without some really careful definitions, on which two disputing parties are likely to disagree). Here's the first hit in a search for "Snopes biased". I've intentionally mangled the first component of the URL to avoid linking to whatever kind of site this is.
I don't know Daily Caller, but the title alone makes me fairly confident that it's a biased source; but there it is, a citation to back up the claim that Snopes is biased.
> I think that this is probably a dangerous game to get into; calling for citations is, I think, only likely to be helpful when there is a mutually recognised authority
> biased' is not a characterisation that can be proven or disproven without some really careful definitions, on which two disputing parties are likely to disagree)
That happens to be the modus operandi of the propagandists, the liars: to say there is no truth, there's no way to determine it, there's no source more accurate than another. They say 'it's all propaganda', trying to put themselves on equal footing with honest people. It's liars who say, 'everyone lies'.
In fact, humanity has developed many ways of distinguishing truth from lie, fact from superstition, critical thought from ideology, physics from alchemy. That is, in a significant way, the project of the Enlightenment, which led to science, the rule of law, liberty, democracy, and all the things that have been born of them (including the Internet). Not coincidentally, I think, the propagandists are generally enemies of democracy and seek to suppress its effectiveness and the power of the public, in part by undermining its foundation of informed citizenry and open debate by interfering with it and saying it's impossible.
> That happens to be the modus operandi of the propagandists, the liars: to say there is no truth, there's no way to determine it, there's no source more accurate than another. They say 'it's all propaganda', trying to put themselves on equal footing with honest people. It's liars who say, 'everyone lies'.
Yes, this was my point. If you (not you personally) are arguing against a propagandist who says that all your sources are biased, then I don't know what is an effective rhetorical strategy—it seems that no-one does, if there even is one—but I'm almost certain that asking for a citation for the claim of bias isn't it, since all it does is shift the question from "who is biased?" to "who can be trusted to judge bias?".
> If you (not you personally) are arguing against a propagandist who says that all your sources are biased, then I don't know what is an effective rhetorical strategy—it seems that no-one does, if there even is one—but I'm almost certain that asking for a citation for the claim of bias isn't it, since all it does is shift the question from "who is biased?" to "who can be trusted to judge bias?".
I see your point, but I think the propagandist is only effective if we buy their premise, that there is no way to distinguish between biased and unbiased sources (and information). My point is that there are many ways to respond, that it's not at all a new problem, and that the solution is reason, in the Enlightenment sense. The propagandist says it's all arbitrary faith, but reason always has been their downfall (which is why propaganda often appeals to hatred, to crowd out the application of reason).
(On one hand, this comment and my GP comment are written so loosely and broadly that it's a bit embarrassing; on the other, I think my point gets across and I don't have time to write a treatise.)
Daily Caller is a news/opinion blog owned by Fox News talk show host Tucker Carlson.
In my experience they tend to be biased against truth as it were (and so would happily choose Snopes as a target)— but that’s just the humble opinion of one observer.
But surely anyone who believes Snopes is biased must have some real evidence they could point to.
The above "Citation needed." is an opportunity to point out the concrete reasons (if any) why bias is supposed. If the only evidence is from sources that are widely questioned then that itself is something to note.
Are you seriously trying to claim Snopes isn't biased? I would say that's the more outrageous claim here. If Snopes is absent of any bias, what is their methodology by which they achieve that? Here is someone [1] who tried to find out and could not.
And as far as bias, a Quora post [2] gives the impression that Snopes heavily favored Hillary Clinton throughout the campaign.
So I'd like to turn this around: could those that believe Snopes is not biased give some kind of citation?
> there it is, a citation to back up the claim that Snopes is biased
Disinformation is built on shared assumptions with weak evidence.
When "everybody knows" the media is biased, all the effort by reporters, fact checkers and editors is instantly discredited and disinformation gets equal standing.
"Citation needed" is a challenge to examine the evidence. Sure, you can find citations to disinformation that "substantiates" disinformation.
It's not about the citation, it's about examining the credibility of the evidence.
No, it's not perfect and it never will be. But letting "everybody knows" statements go unchallenged is how we get to a world where "nobody knows" and evidence doesn't matter.
So you want a Snopes or Politifact link? Even if people provided citations, wouldn't you reject them because they aren't from Snopes or Politifact (the approved "fact" finders)?
The fact that much of the liberal media and its institutions have rallied around Snopes should indicate that Snopes has a left bias. The fact that you (likely a left-leaning individual working in the media) support it probably indicates it as well.
> And if you know of a fact-checking resource that has less bias, please share.
Everyone is biased. Especially when politics, money and propaganda is involved.
The issue with news media isn't facts. It's the spin on the facts.
Look at Trump backing out of the NK meeting last week, which is a fact. But look at how CNN and Fox News spun it.
CNN: "Trump is backing out of the NK meeting because he is a bumbling fool failing at NK negotiations."
Fox News: "Trump is backing out of the NK meeting because he's a deal-making genius who won't let NK screw over the US again."
Same "fact", but wildly different spin. Fact checking is never going to solve this issue because "facts" are only a small part of the news industry. The core of the news industry is to push propaganda and to influence the people to think a certain way.
Also, propaganda isn't necessarily a bad thing. Every country needs propaganda to exist.
If you are interested, here is a former TIME magazine editor speaking on the topic of propaganda, history and news.
There's a huge difference between institutions that have made mistakes and those that spread lies with abandon; there is no equivalency between the two.
Can you specify a bureaucratic regime for distinguishing the two that you'd trust an opposition government to operate? If not, they are equivalent for public policy purposes.
To Trump supporters, The New York Times is an institution that spreads lies with abandon, and they get to operate enforcement agencies at the moment. The belief that it's seeking the truth in good faith is essentially just liberalism. Same for Fox and friends.
The best line of defense against this nonsense is education, free and of a very high quality. Absent that people will be pretty easy to manipulate. So whenever you're faced with a problem like this I would argue for better education rather than for some kind of control over the media, the liars should die out because nobody believes them anymore.
Given how incredibly entrenched some power structures based on lies are, it looks as if my plan is mostly failing. Education budgets are being cut left, right, and center, and the quality of education has been steadily trending down since ~1950 or so.
Education is locally controlled and political polarization is geographical. Right and left wing communities could both operate stellar schools and their graduates could be even less able to agree on truth.
Some are so sure of their world view that they just assume their conventional wisdom is correct about specific subjects that they clearly (to someone who has) have not studied. I know I was, back in '04, when a friend mentioned something to me that I thought was absurd but that is now common knowledge for anyone who understands F=ma and has checked that specific fact (which I'm deliberately not mentioning). For example, look up the art collection Skippy's brother keeps, or that shop owner's (now hidden but thoroughly archived) Instagram account.
"Fact checking" sites are about as unscientific as it gets. They have two obvious flaws, first they generally only "check" facts they can "debunk", second, they often frame what they are checking in a way that lets them use a corner case.
The right way would be to just list data, things others can independently check and allow the reader to draw their own conclusions.
> Some are so sure of their world view that they just assume their conventional wisdom is correct about specific subjects that they clearly (to someone who has) have not studied. I know I was, back in '04, when a friend mentioned something to me that I thought was absurd but that is now common knowledge for anyone who understands F=ma and has checked that specific fact (which I'm deliberately not mentioning). For example, look up the art collection Skippy's brother keeps, or that shop owner's (now hidden but thoroughly archived) Instagram account.
This paragraph is almost incomprehensible to me. Is it meant to be read with a lot of background knowledge, or am I just being dumb (or both)?
Ya, it requires significant background. The trouble is, if I just say exactly what it is about, people will automatically assume things about stuff they haven't checked themselves, but have been strongly conditioned to believe.
Consider suggesting to a parishioner that a priest they deeply trust is up to no good. How similar is your reaction? Both of you hold beliefs so strongly that you don't even need to check what was suggested. I can't imagine you would make that comment if you really looked at the "art" collection I alluded to.
> Both of you hold beliefs so strongly that you don't even need to check what was suggested.
Aside from documenting to what 'Skippy' refers, you don't seem actually to have suggested anything in particular—at least, not clearly enough that I can tell what it is. (It's quite possible that I would be defensive if I understood what you were suggesting, but I can't now because I have no idea against what I'm trying to defend.)
I believe people will avoid looking at things that challenge their assumptions rather than spend a few seconds checking them. I have no doubt you can find that art collection in a few seconds, I also have no doubt you are not going to.
What's so hard about checking out Skippy's bro's art collection? It's a highly specific, easy-to-check thing. It's so common to try to deflect from specific stuff with something else; I strongly suspect you wouldn't be commenting like this if you had checked.
If anyone here gives a crap, he believes that John Podesta's art collection means he is part of an international pedophile ring, seemingly run out of a pizza place.
Not sure why he is too scared to just say this. I guess in the cold light of day it sounds pretty stupid. Perhaps if you wrap it up in code words it sounds a bit better, or at least makes him feel special.
The Russians really did a number on the US didn't they.
You are so sure of your beliefs that you are not going to look at the art collection (don't even think about the Instagram account!). By all means, swear, blame the Russians, and make basic mistakes that prove you can't bring yourself to click. Think about it: you are spending all this time refusing to make a simple search. Why? Your reaction is a textbook emotional reaction... denial at all costs.
I believe people will go to extreme lengths to avoid specific things that challenge their world view. This is an excellent example: you are not going to look at the art collection, but you need a way to deflect from that fact, so instead you demand I talk about something else. Maybe make some personal attacks and keep trying to change the subject?
It's funny that you say that after having repeatedly avoided stating your world view for fear that it will be challenged, and constantly deflecting (the comment I am replying to is an excellent example of a deflection). I'm positive the irony is completely lost on you though.
So, one last time, share your views and maybe a link or two. No deflections, no comments on your irrelevant theories about people's worldviews (obviously borne of reactions to behavior like yours in this thread), and no childish code words.
I made a suggestion: that people, on their own, check out an art collection and an (archived) Instagram account. You have more than enough information to find both. If you don't want to, then don't. Asking that I instead talk about something else, or elaborate on what I suggested you look at, is fine; ask all you want. It might be obvious, if you took a few seconds to search for "podesta art", why I am not going to link to it.