Critical skepticism should be a core competence as well. Even well-known and highly respected outlets like Nature or the NYT likely have biases at the editor/peer-review level. They should be read with at least one eye toward skepticism that [money/prestige/desire to get published/etc.] has influenced the results of the paper/article.
I've read that scientists often have to "massage" their papers if publishing on a divisive/politicized topic like climate change or transgenderism in order to not have their paper dismissed out of hand (for example, one I saw recently: https://patricktbrown.org/2023/09/05/the-not-so-secret-formu...).
It’s a fair point, and it’s a shame that something we took for granted growing up in the 90s - a culture of being reflexively critical of all media, whether state or private - now finds itself awkwardly shoehorned into the hellscape of 21st century American politics.
I will say also though that I wouldn’t _personally_ equate the sort of biases you allude to in your comment - extremely troubling though they are - with content produced by actual malicious actors.
My litmus test for whether to respond to a piece of media with healthy skepticism or outright rejection often comes down to a simple gut-check: “Is this coming - ultimately - from a place of hate?”. I’ve found that to be surprisingly effective.
I also feel like that was a political lesson we were supposed to have learned from 20th century history - maybe the most significant and enduring lesson. But here we are again, and it’s true what they say: every generation has to fight for it.
> My litmus test for whether to respond to a piece of media with healthy skepticism or outright rejection often comes down to a simple gut-check: “Is this coming - ultimately - from a place of hate?”. I’ve found that to be surprisingly effective.
How exactly are you confirming this "surprising effectiveness"? Going with what your gut thinks is literally the default for everyone; it's the opposite of critical thinking.
I think what you are saying ultimately reduces to the question 'Do you like this person?', which is a fine position to have, but don't lie to yourself that it's anything other than that.
Claims made in circumstances where there’s evidence of hatred or malice - either as motive or intent - warrant scepticism and a higher burden of proof relative to other claims since, as we know from recent and not-so-recent history, the weaponisation of hate is both a very effective and frequently-deployed political strategy. That formulation okay with you?
It’s just a heuristic. Apparently one that makes no sense to you. Have a great weekend.
That heuristic causes you to miss all the self-serving lies that are everywhere. It might make your social life flourish, since you trust all of those lies and people like it when you trust them, but it will fill your head with lies and falsehoods.
For example, a fat person might say that being fat has nothing to do with how much they eat; I have heard that many times. That doesn't come from a place of hate, but you shouldn't trust it anyway. It is, however, self-serving, and that is the main thing to look out for.
I sometimes feel like modern education could really use a "defense against the Dark Arts"-style information savviness class. In theory, it's one of the things research projects are supposed to teach you (critical thinking), but I think it could stand to be even more focused: embrace and emphasize the notion that people state falsehoods as truth, whether it be due to lack of attention to detail, conflicting incentives, or overt malicious intent.
Comprehension is a balance. Believe too much and you've left the path of reason. But we've also seen skepticism come right round to conspiracy theory... Believe too little, and someone ends up standing in a pizza parlor brandishing a firearm, demanding to see the nonexistent basement.
Reading that article, all I can think of is the bald-faced lies the public was told during the pandemic because they were convenient for steering behavior. My trust in “science” was seriously damaged by that episode.
1. Public health agencies originally stated that N95 respirators don’t work and require special fit training. This guidance vanished the moment respirators stopped being in short supply.
2. The public was repeatedly told the virus was not thought to be transmissible by airborne particles, even though there was evidence that it was. Similarly, well after fomite transmission had been ruled out, health agencies continued to preach hand-washing and surface disinfection.
3. Concerns that the vaccines might have serious side effects were blasted as anti-science lunacy. Later, the vaccines were proved to cause serious side effects in some people.
4. Any discussion of the lab leak theory was treated as radioactive.
I could go on.
What’s bothersome about these examples isn’t that the science was imperfect or evolving. It’s that the public was lied to, and was also told it should not dare question Science (with a capital S). When I read that scientific journals censor entire avenues of inquiry because the implications are undesirable, I wonder if real science even exists in politically sensitive fields.
Off the top of my head? The vaccines were sterilizing. Nope, not at all; they mildly reduced the chance of transmission while sick. But Pfizer was “moving at the speed of science”. Another is that natural immunity was worse than getting the vaccine. I’m sure there’s more, but the comment will be flagged, so why bother?
Just so you know, it's a bit creepy when someone makes a post on a contentious topic to immediately want to take the discussion offline and to request their PII.
The more paranoid in the audience will take this type of behavior as evidence that lists are being kept of 'unbelievers' and that they are 'being silenced' etc etc etc. You know the type of thing.
While this can come about perfectly naturally when an intellectually lazy internal ruling establishment only wants agreement instead of critical thinking, it should also be noted that such a result is also a deliberate end goal of social attacks by external hostile actors, such as the Russian government and its "firehose of falsehood" method of propaganda.
This can result in paralysis and/or radicalization of the affected population, depending on how it's used.
Unfortunately we kind of have both the internal and external problems in effect right now.
This is a lot more common, but I wouldn't call it proper skepticism. It's usually just people being cynical and biased in their own way. They've subscribed to a political tribe and social media echo chamber that happens to be opposed to something in Nature, and that's why they doubt whatever is written in Nature, with some made-up justification that sounds plausible only within that echo chamber but is transparently wrong to people not caught up in that particular groupthink. That said, actual skepticism should be applied everywhere, including towards Nature.
Do you think cynical mistrust (so-called "scepticism") truly is more common than lack of healthy scepticism towards information provided by trusted sources? Or is it rather more obnoxiously salient? Someone who rejects Nature claiming that the Earth is an oblate spheroid is much more likely to make their voice heard than someone who would happily accept whatever is published in a scientific journal with no question whatsoever.
Either way, at face value, I do agree that cynical mistrust (especially when partisan) is more immediately damaging to society than uncritical absolute trust, regardless of their relative frequencies. However, I do believe that these phenomena reinforce each other in a vicious cycle. The contrarian denialist feels vindicated when he sees the blind acceptance placed on certain sources by the naïve truster on the other end of the political spectrum. The naïve truster, on the other hand, feels that any criticism or questioning directed towards trusted sources on his side is to behave like the contrarian denialist would. Hence the mutual polarisation.
There's plenty of stuff in the NYT that should be eyed with extreme skepticism. It's got an agenda and is bending its content around that agenda like every other mainstream commercial news outlet.
I would relax "plenty of stuff" to "everything" - the New York Times has adopted a policy of "advocacy over reporting" for several years now[1] and is untrustworthy as a journalistic entity as a result. They're no longer a news organization; they're an activism organization that uses news as its political vehicle.
This is dependent on your social circle. My experience agrees with GP and disagrees with yours.
Both over-skepticism and under-skepticism are bad and should be recognized and fought. If you only oppose one of them, then it's likely because your preferred party is, too.
Strongly agree. I realized this in 2016 when I introduced my mother-in-law to YouTube for cooking videos, and within a few months she was excitedly describing how she found videos giving her “secret truths” about the world, then showed me a website explaining Hillary and Obama's place in a satanist cabal. She was a person who literally did not consume any politics prior.
It’s hard to understand for those of us delving into social media daily. How do you know what to trust and distrust? It’s not actually so obvious in a world of user-submitted content, without existing bias to guard yourself with. We build this bias over years spent in online discourse. So what if you’ve never been part of online discourse before?
My schooling taught us media literacy and how to judge sources. Usually, if a source is saying something incompatible with reality, that's a bad source. These nuggets of wisdom usually came in English class or the library - you know, the things goobers always say they'll "never use" in their lives and so ignore.
It's an unfortunate reality that many schools in the US are under the governance of people pushing the exact screeds the schools should be teaching kids to criticize. Is it any wonder we have an education crisis in this country?
That can help with misinformation, but it's less effective against disinformation. No one is smart enough to be able to pick out the flaws in every bit of active, malicious disinformation in subjects they're not experts in.
This is the same woman who probably taught your SO not to take candy from strangers. I don't think you need to be engaged in social media from a young age to tune your BS detector.
Also on that note, I do think education (especially humanities and social science) plays a role in being able to tell the difference between something being discussed in good faith and complete hokum, so it shouldn't come as a surprise that conservatives are trying to lower the quality of education Americans get as well: https://newrepublic.com/article/167375/republican-plan-devas...
This is part of a cycle that keeps Americans undereducated to make it easier to scare them into voting R.
> This is the same woman who probably taught your SO not to take candy from strangers. I don't think you need to be engaged in social media from a young age to tune your BS detector.
Personally, I don’t feel that repeating specific advice not to take candy from strangers indicates any critical reasoning.
Maybe it's just something people in power converge towards, as a function of preserving their power, regardless where they are on the political spectrum.
1) People are actually a bit resistant to "Low-quality and misleading information". We can see people generally turning to higher quality sources than have previously been available which is why everything is getting fragmented. It is becoming impossible to maintain the fake illusion of consensus by the corporate media; the opposing voices are just too persistent and - in an uncomfortable number of cases - too reasonable no matter how much muck gets slung at them.
2) Special award for having a freely available 5,000 word pure-text article and being unable to display it without Javascript and 300kB of downloads. Cookie to the product manager, keeping devs employed. Probably have some wicked Kubernetes admin skills in their technical organisation to serve this small quantity of text.
The problem with "articles" such as the one under discussion is that information veracity is inherently subjective, assuming no true perfect framing of the information (such as is the case with history), and therefore the truth will be under the type of constant debate that breathes life into democratic societies.
To my point and yours, such articles are unlikely to be referencing falsehoods presented by corporate media but instead contrary views.
Which means that the hypothetical "ignoring" curriculum that the authors suggest would devolve into, if not start as, lessons in how to ignore most else but the corporate media view.
Especially given the dogma - that the media is the primary and honest arbiter of truth - which ostensibly serves to reinforce democracy in ideal conditions but has obviously instead enforced oppression in every other environment.
> People are actually a bit resistant to "Low-quality and misleading information". We can see people generally turning to higher quality sources than have previously been available which is why everything is getting fragmented.
Do you have anything to back this up or is this just your opinion on the situation?
A lot of times when I see people online complaining about the mainstream media and talking about how alternative sources are better they end up linking to something that is way less trustworthy, but it aligns with their existing beliefs. Maybe it's just my experience, but I rarely see people going "CNN is shit, so here's a ProPublica article". It's usually "CNN is shit, so here's an article from truth-and-freedom.ru."
I agree that the mainstream media isn't as high quality as they make it out to be, but, on average, I don't feel like alternative sources are any better.
I don’t think it makes sense to trust any single organization, because there is usually some motive other than finding the truth (profit, reputation/fear, power).
Rather find individuals/journalists who are searching for the truth and over the years don’t lie to your face.
Social media censorship does not help with this problem, and if we want to live in a functioning democracy more real debate is needed.
It's not an option for all situations, but I've found that photos and videos from events, taken with cell phones and shared online by relatively average people who happen to be in the area, often offer a far more realistic and accurate portrayal of what's going on than the so-called "mainstream" media's coverage.
For larger events or situations, it's often possible to get footage from numerous different sources, including from different physical angles, helping to corroborate and clarify what's actually going on.
One example of this are the protests that happened in Canada during early 2022. The Canadian "mainstream" media, which is quite tightly controlled and close to the government, seemed to be portraying the protesters in a very negative light.
On the other hand, there were also many photos and ample footage online, taken by various people walking through the protests, which showed very different views of the situation. The "mainstream" media's portrayal often didn't correspond well to what I was seeing from all of these other independent sources.
I didn't particularly trust the Canadian "mainstream" media before then, but that situation really made me distrust them and their "news" coverage.
I've seen such crowdsourced or alternative footage abused, often by mislabeling it.
In one case, my YouTube addiction of watching Things That Go Boom paid off when I saw a video showing a huge refinery explosion reported as being in the Gaza Strip and recognised it instead as coming from a Pemex plant in Mexico: <https://yewtu.be/watch?v=0LD82y4ulds>
I posted evidence of the true nature and challenged the poster over the disinformation. They never issued a correction or apology, though the post disappeared some time later (months, as I recall).
This is fairly common: actual incident footage from one context is portrayed in another.
There's a classic cartoon illustrating this: two men running, one with a raised knife, though in the cropped framing of the television camera it appears that the fleeing man is threatening the attacker:
<https://archive.org/details/The-Media-Shows-You-What-They-Wa...>
Another classic example of framing shows two soldiers around a prisoner, one holding a rifle to the prisoner's head, the other offering water from a canteen. Depending on how that image is cropped, very different narratives are created.
My own preference is to attempt to source, view, and read information from multiple perspectives, whether that's video, images, or narratives/accounts. One lesson from our own senses is that we create the most accurate views of our world when relying on multiple sensory inputs. Our eyes, ears, and sense of balance can each fool us individually, it's much harder for them to be fooled collectively, or at least, the contradictions between inputs become apparent. This is a lesson well-understood by pilots, where the inner ear and actual or artificial horizon signals are often at odds (the "death spiral" in instrument conditions such as clouds, fog, or night flight is one tragic outcome, see JFK Jr.'s death).
And of course there are artisans who do manipulate these senses, which we find in cinema and stage magic, though those also rely strongly on a single and highly-crafted perspective on the action.
Did the self-selected amateur news sources that seemed more accurate than mainstream news happen, by any chance, to align with your preferred narrative?
The top one that springs to mind is the Joe Rogan Experience. Wildly popular, and it reveals a massive market for long-form, in-depth discussion. If I look at a recent episode, he just had Tulsi Gabbard on - US House of Reps, endorsed Joe Biden, now a Republican [0].
It isn't academic fare, but Tulsi Gabbard for 2.5 hours is a much more high-information look into US politics than anything that could be considered mainstream in the early 2000s.
[0] My mistake, she's technically an independent. Should spend some time listening to Joe Rogan I suppose.
No, Joe Rogan (and most alt media) responds to the NYT, while the NYT doesn't respond to Joe Rogan except to smear him. That is the test of whether a media source is mainstream, or at least respectable - does the NYT cite it seriously or ignore it except to denigrate it?
I'd agree a lot of people have good skills for knowing what to ignore. Like knowing to go through a person's post history to get context on the kinds of beliefs held that correlate with saying:
>It is becoming impossible to maintain the fake illusion of consensus by the corporate media
Literally your last comment included a "At its worst its propoganda to convince people its ok to keep buying things and living as they currently do as the rich abscond with the profits." so it seems we're on a similar page about there being propaganda out there.
Articulate your complaint here. I have, as advised, glanced at your comment history. I'm not sure what you don't like.
How would you know if the alternate sources are higher quality? How would you know if the new consensus isn't being manufactured by powerful interests or whatever gets the most clicks on social media?
> Low-quality and misleading information online can hijack people’s attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content.
Step 1: disable all notifications and background app refresh on your phone for all social media apps. Your mental health and battery life will improve dramatically.
Step 2: give them your attention on your own terms and schedule, not theirs.
Step 3: SponsorBlock. It uses crowdsourced info to skip in-band advertising on YouTube.
(If you’re on safari, I’d also recommend Vinegar. It replaces the YouTube player with a regular HTML5 player, which removes all ads and lets you use PiP)
Aside from YouTube and Gmail, I’ve long since (probably 4+ years ago) gotten rid of social media and couldn’t be happier. My attention span skyrocketed. I noticed I can comfortably sit and read heavy technical books for a while before getting distracted, which I couldn’t do before.
So, how much effort is it to set up and maintain PiHole? Conceptually it sounds great, but if I set it up properly it's going to impact not just me but everything in the house, which means my wife.
And fiddly, "f-with it" stuff just is not attractive. There's enough internet rage when everything is (supposedly) working as it should, much less with something like PiHole potentially mucking things up.
On maintenance: there is an included admin console from which you can whitelist or blacklist domains. The external adlists are updated automatically.
The only time I've been frustrated with my PiHole is when I forget it exists and try to use analytics tools (Logrocket, FB analytics) for work. Otherwise haven't noticed any adverse effects on my web experience.
I have been running a pihole on a raspberry for several years and I cannot recall a single time it gave us problems.
Depending on how comfortable you are with Linux and networking, setup could take from an hour to perhaps a day? Also do note that you can start testing it yourself, and only switch the entire household to it when you are confident it works.
pihole is very stable - deploy it on a cheap pi/nuc/etc that lives near the router and forget about it forever.
as a failsafe you can set your router's main DNS server to the pihole and set the secondary DNS to something like google or cloudflare. this way if pihole ever goes down you just see ads instead of the whole network being knocked offline.
This article is literally titled "Critical Ignoring as a Core Competence". The first strategy recommended in the abstract is "self-nudging, in which one ignores temptations by removing them from one’s digital environments."
You'd be hard pressed to find something more on the nose than disabling push notifications from low quality sources.
That's a bomb where a scalpel is called for. On sites like Twitter and YouTube, your notifications are almost entirely a function of who you follow/subscribe to.
Agreed, disabling entirely is a total cudgel and not a great long-term solution, as it sets us up for the equivalent of crash diets or cold-turkey substance use followed by almost inevitable relapses.
I think the root cause is that our digital environments do not provide adequate tools to manage and audit how we spend our attention. Tech favors 'offense' right now, not 'defense'.
The example of social sites and following is one that just about everyone faces. We see someone we like in the moment, we follow, and over time it's death by a thousand papercuts. Could be substacks, podcasts, youtubers, whoever. Social sites are not incentivized to help us with this problem, as reengagement drives revenue. It's laborious to go back and unsubscribe because it's a one-by-one process, and often we want to keep these ideas somewhere in orbit.
I think the solution is invest in 'attention defense'. We need something on our side at the OS level that can audit what we're signed up for, let us get a big picture view of these, aggregate and filter them, make suggestions for what to continue to follow and to what degree, all packaged in a way that lets us retain our initiative. I think the enabling technology for this is language models, because they can sufficiently understand the content to compress and parse out the preferable.
My newest hack: I have a note on my phone called “Things I should do instead of checking twitter”. I have also hidden the app. So whenever I want to start it up, I have to search for it and now also the note shows up…
I usually first check the note and get reminded of all the productive, cool, fun things I could do instead. It works about 50% of the time.
On my phone, I have DnD mode always turned on. I have network wide suppression of ads with pfblockerng-devel. I use a separate browser for social networks that I rarely open. These three things already helped a lot.
Yes, I was suggested this (use multiple profiles), too. But it does not work as effectively: Opening up a different browser requires more attention than just opening up a different colored tab (i.e. profile).
I think this is what the article suggest: Observe yourself and be proactive and skeptical when noticing any external incentives that try to trigger certain behavior from you. It is not possible to prevent these things from happening, but you can modify the context to reduce chances of it happening again.
> Opening up a different browser requires more attention than just opening up a different colored tab (i.e. profile).
In both Firefox and Chromium, I can't open a new profile on a new tab. (This may be a difference in terms.)
I create different aliases/ scripts/ desktop icons for each profile, so typing or clicking "Facebook" opens the browser profile for that, while "Reddit" opens a different one, etc.
Firefox has "containers" which allow cookie separation on different tabs, but they use the same add-ons/ extensions, so I use that feature less frequently.
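For what it's worth, the aliases themselves are trivial; here's a minimal sketch, where the profile names "social" and "forums" are hypothetical (create your own first via Firefox's profile manager, `firefox -P`):

```shell
# Each alias launches one site in its own isolated Firefox profile.
# Profile names below are placeholders; create them with `firefox -P` first.
alias facebook='firefox -P social https://www.facebook.com'
alias reddit='firefox -P forums https://www.reddit.com'
```

Drop these in your shell rc file, and typing the site's name at the prompt opens it in the matching profile.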
What I meant was Multi-Account Containers [1]. It is a feature to isolate browsing on Firefox into different categories, comparable to profiles (and often suggested as the better alternative).
Apart from the isolation feature of using different browsers, it is also important for me to reduce the lock-in effect of being dependent on a single browser. If I use different browsers all the time, it will be easier to switch or dump a browser in the future.
Even better: just uninstall the apps and only use the sites via mobile web. The benefits: you'll never see a single notification from apps you don't have, they'll never be able to eat your battery, openings for tracking are much reduced, and the slightly less awesome UI of the web version will somewhat reduce the dopamine hit you get from interacting with them.
Yep, and carefully read the terms and conditions checkbox when you sign up for stuff to make sure it doesn't also include the ability to send marketing emails.
This is a great example of a situation where modifying your own behavior is a superior solution when compared to relying on a technical solution for the situation. Technology solutions can only get us so far. Relying on notifications puts you at the mercy of the platform. Training yourself to check your email every so often is much simpler, and requires no added complexity at all.
> Relying on notifications puts you at the mercy of the platform.
Is this really an issue? You're still relying on the platform to host and serve your actual emails. If you don't trust them to notify you properly, why would you trust them to hold your entire email history?
This also means you're going to have moments where you check your emails and it was wasted time because there are no new emails since the last time you checked.
Agree, to have agency in the digital world requires us having control over our initiative. There's both behavioral and technical ways to do this today:
1. Look only at certain times each day (see habit chaining, e.g. 'look at email after I get lunch')
2. Set a focus mode that allows notifications from email at certain times
3. Allow email notifications only from specific high-importance contacts
4. Just set a timer, calendar event, or recurring reminder
Sure, that’s great and all, but some people have disabilities like ADHD that do not always allow modifying behavior to fit a particular neurotypical pattern that easily, and technology is often enormously useful in helping correct that.
Maybe set a once or twice daily alarm to check your email. As a person with ADHD, my life has improved a lot after I started using alarms and calendar events for everything. Much better than hoping future me will magically develop better memory skills.
Gmail isn’t social media. My doorbell gives me notifications. Facebook, X, Instagram, Snapchat, TikTok, etc are not allowed to give me notifications no matter how much they protest about it.
What I do is in Fastmail I have a rule that snoozes all messages to around the time I usually take lunch. If I'm expecting a transactional I will just go fish it up when I need it.
I also use an app called Buzzkill that temporarily mutes serial noisemakers and spams any text message that contains a politician's name.
Finally, I wear an analog watch so I do not check my phone for the time.
Pretty much the only live push notifications I get are from direct messages, chats I am @tagged in, and credit card transactions.
The problem is that while sometimes the thumbnails are crap, some of the best and highest quality content is marketed that way.
I don't blame them: it works, and part of the process of making a great video is also making sure as many people see it as possible, to maximize return on investment from developing and producing it.
You have to learn to discern the relative quality of the specific producers/brands/channels. It's a lot of work and it takes years. I use yt-dlp in a cronjob to make local mirrors of all the videos from a set of channels I consume.
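The job itself is basically a one-liner; a sketch of such a cron-driven mirror script, where the channel URL, paths, and schedule are all placeholders to adjust:

```shell
#!/bin/sh
# ~/bin/mirror-channels.sh, run nightly from cron:
#   0 3 * * * $HOME/bin/mirror-channels.sh
# --download-archive records already-fetched video IDs, so each run
# only downloads uploads that are new since the last run.
yt-dlp \
  --download-archive "$HOME/videos/archive.txt" \
  --output "$HOME/videos/%(channel)s/%(title)s.%(ext)s" \
  "https://www.youtube.com/@SomeChannel/videos"
```

Repeat the last line per channel, or keep the channel URLs in a text file and pass it with `--batch-file`.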
The specific issue is that youtube is a competitive market, where competition is enforced. Youtube stopped showing people their own subscriptions, and now only shows users whatever the algorithm wants. If the content you make doesn't achieve a high enough "conversion" rate, i.e. people clicking the thumbnail and watching the video, youtube doesn't just stop promoting it to new viewers, it also doesn't show it to people who have explicitly subscribed to you! This means that you HAVE to play the dumb clickbait games or your channel simply dies, no matter how good your content is. It doesn't matter that there is very little overlap between your viewers and Logan Paul's viewers; you are still competing with their conversion rate.
Every single channel that considers themselves "above" clickbait has tried avoiding it and seen how hard youtube punishes them for daring to be different.
Clickbait isn't about getting slightly more revenue, clickbait is about attempting to stay afloat in youtube's absurd machine.
To YouTube's credit, they still offer a reverse-chronological view of the videos from the channels to which you subscribe. It's the only reason I'm still a user.
It's the third or fourth tab down on the tvOS app.
Since they moved subscriptions to a separate tab rather than the default feed years ago, most people just scroll the homepage instead of going to the subscriptions tab.
Honestly, I prefer it that way. I don't always want to only watch a creator's most recent video. If they have some good videos from years before I clicked subscribe, I'd like youtube to surface those for me.
Ignoring irrelevant stuff was always necessary when interacting with computers.
My mum had this problem that she was trying to read the screen like a page of a book, starting in the top left.
Not the best strategy when the screen contains multiple windows and the most recent message is usually centered in its own little window that has its own title and sometimes even a menubar.
I think people's heuristics naturally change. You could spend entire evenings watching television a couple of decades ago. Now too, but now there are competitors. (Unfortunately you can also scroll while in bed and really most places outside your living room.)
There is probably at least a million times more informational content now compared to three decades ago. But is that information filtering/overload a problem which is proportionally bigger? Not really. The major “problem” is that people have changed their heuristics from trusting the Mainstream Media to a wide variety of more and less legitimate sources. So before people might think “I don't trust this because a journalist didn't say it” while now they might think “I actively distrust it because a journalist said it”. (Of course that's just one example of many heuristics now that we are all so “fragmented” (Oh no! Filter bubbles!).)
On another note: these academic/straight-laced approaches to the problem (or supposed problem) feel just like being back at school. You are given homework which is ostensibly about making you an independent “citizen”, but the coursework and framing is so narrow and top-down that it's more of an exercise in being instructed by credentialed experts. Maybe school really is (and ever was post-Prussian Empire) a factory for conformity.
I use the same strategy. When I see an attention-grabbing headline I note that "X may be true", but unless it affects my life or it's a subject I'm interested in, I'm perfectly fine with keeping it uncertain.
Your finely tuned critical ignoring can be gamed by attention-getting specialists.
They are beavering away analyzing how people think and what causes their eyes to stop, or not stop, on something.
The losers end up being the people who have something genuine to say, but don't have those "AGO" (attention getting optimization) specialists working for them.
Their three methods are all things I've employed for a long time and find to be fairly obvious, but I realize a lot of people don't employ even one of them.
I use lateral reading extensively, even when the source seems reputable. Having multiple reputable sources before you spread that knowledge to others is important to your own credibility. Even with that understanding, I've had multiple people tell me that they don't have time for that and prefer to just believe whatever their preferred source tells them. I think that's where a lot of the problems with corporate media started: they knew their viewers/readers would believe anything they told them, and abused that.
I fully support critical ignoring, and would just note that it has to be done programmatically. That is, you have to block things before they appear on the screen, through many of the tools mentioned in this thread. (For those who have to interact with social media professionally, but want to limit its lure, New Feed Eradicator is a godsend. https://chrome.google.com/webstore/detail/news-feed-eradicat...)
The reason you have to "ignore" them before you even see them is because of the "clickbait asymmetry principle", a corollary to Brandolini's bullshit asymmetry principle. Don't spend time vetting the stuff that wants to behaviorally addict you.
I'm often amazed how old people who lived the majority of their lives under a communist regime (in my country) that exposed them to propaganda all the time can fall so easily for various "youtube revelations". I feel that if anyone, it is the younger generation that should be more susceptible to such misinformation. Everyone who lived during the communist times knew the media was worthless. It's reflected in the jokes of the era, for example: "- Have you seen today's newspaper? - No. I'm a non-smoker" (the explanation being that no one would ever buy a newspaper for news; only smokers bought it for rolling cigarettes).
The problem is you're ignoring the propaganda they did fall for. In the US we have the "America is the best and most free country in the world" propaganda that tons of us fall for. Also the "police are great and are only out for your good" propaganda. Then people, especially the older generation, are completely surprised every time a hidden-cam video surfaces of cops doing some bad crap, and they say "well, that's just cops now, it wasn't like that back then".
We all have biases and propaganda we fall for. The issue with the internet versus past media is we now have unlimited sources of propaganda 24/7, rather than it just being select articles in the paper or on the news. Even if you only fall for something 1% of the time, if you see 1000 of it a day, you still fall for crap all the time.
I'm interested in the response from the commenter you replied to. I suspect their premise is missing the point you rightly highlight. To most people, propaganda is only the things they don't believe. But if they believe something, it's reality. Culture and ideology can be difficult to explain to someone who's enveloped in them.
I will list a few things that an American (like me) may not realize is part propaganda/cultural influence.
- Governments can regulate Food to be healthy and not overly processed with refined sugars.
- You can experience freedom without owning a car.
- The Metric system can easily be adopted.
- Supporting paid-sick leave / vacation is not communist. ( I've genuinely heard this a lot)
>To most people Propaganda is only things they don't believe. But if they believe something, its reality.
Interesting.
For me propaganda is simply any broadcast that has a hidden agenda. Propaganda can lie, it can tell the truth, or it can be more nuanced, but if it's made with the idea of furthering a certain cause, it is propaganda, and one should treat it all with a grain of salt regardless of whether it is propaganda aligned with my worldview. There is hostile and friendly propaganda too.
Perhaps this "definition" is unique to my generation or area. I remember seeing on Polish TV a brief exchange with some Ukrainian official when a (Polish) reporter made some comment about how effective "your propaganda" is. One could tell there was some offense taken about these words. The reporter was definitely using my definition of propaganda, but was misunderstood.
I believe this definition of propaganda is not unique to my country either. A number of countries in the past had "ministries of propaganda". Surely if they considered all propaganda false they would not use that name?
Another commenter said that the difference between truth and propaganda is what you believe the truth is.
- Would I like food to generally be healthier and contain less processed sugars? Yes. Do I think that the government should regulate food to be that way? Not necessarily. I won't claim something ridiculous like "regulating food is a violation of the nth amendment!", but I'm generally skeptical of government regulation without a clear and compelling case for why it's necessary. While the obesity epidemic is a serious issue, I'm not convinced that regulating food is the way to solve it. I guess my skepticism towards regulation comes at least partially from cultural influence, but that doesn't mean that it's wrong.
- You can absolutely experience freedom without a car, but the ability a car gives you to just drive anywhere (where there's a road) is unmatched.
- Customary/metric interop is the real issue, but the thing is most Americans don't interact with the metric system at all. Thus, for most people switching to metric is a lot of work (you gotta replace rulers and tape measures and learn to think in km and km/h) for zero gain. We really should have just gone all in on metric in the 1800s; at this point we're stuck with customary units :(
- I've never heard someone describe paid sick leave as communist, but it doesn't surprise me that people do. Totally agree on this one
Interesting observation, though there's an explanation which suggests itself.
Under Soviet Bloc regimes, propaganda was an instrument of the state, and was emitted through state organs such as Pravda and Izvestia, Russian for "truth" and "news", leading to the Soviet-era ironic quip: there is no truth in Pravda and no news in Izvestia.
That is, the principal mechanism for avoiding propaganda was to ignore the "mainstream press" under such regimes. Which would suggest putting more faith in alternative sources and the rumour / gossip channel.
Whilst there remain issues with mainstream media, even in non-authoritarian states, it's far more often the alternative media and (largely online) gossip channels which now serve propaganda. This is a case where previously-useful adaptations now prove poor guides.
"[...] false information and conspiracy theories, whose prevalence may lead people to doubt the very existence of 'truth' or a shared reality."
This is an insane modern take that I have yet to understand. Can anyone shed some light on how this notion of subjective truth came to be? There have always been conspiracy theorists and rumor spreaders and communities within which they flourished, and discerning the reputation of a source of information has always been a required skill in order to not fall victim to actors like these. What changed?
The former legal counsel for FOX argued internally that the First Amendment protected the network in spreading lies about election fraud, even though the employees privately thought there was no election fraud.
In other words, there are little repercussions for the blatant political lies and as a result people don’t know what to believe. I have a member of family who used horse dewormer on himself when he got Covid. He also was not vaccinated because he believed it would alter his DNA. I used to make fun of dumb hicks and now I realize I am part of a family of dumb hicks.
I've gotten pretty good at ignoring/blocking trolls over the years.
FB has become a favorite place for old geezers to crap on each other.
About 5 years ago I started whittling down my "Friends List" to fewer than 100, because so many started ignoring/stopped caring about truth and facts and got so belligerent that keeping them around became a sort of self-abuse.
I don't regret that at all, because things haven't gotten better since.
I think the core problems are some people love bullshit and others are willing to sell it to them. That's become a booming business nowadays.
Those that love and buy bullshit being sold don't care that it's bullshit. I have literally been told "I don't care if it's a lie, it's still the truth."
I'm glad they're putting it specifically under the "psychological science" box, too – systematically understanding things beyond "turn your notifications off! don't read the news!" is going to be very useful once the next startup figures out another way to "optimize engagement" with a different type of content.
re: "Low-quality and misleading information online can hijack people’s attention,"
Well, yeah. But frankly, it's the least of my worries (personal). My BS detector is sensitive enough that I don't get sucked into these type of pointless distractions.
My problem is consuming things that are generally higher-quality, not misleading, are relatively interesting, etc. but are *for me* - at least for the foreseeable future - completely useless. I have to catch myself with, "Mark, you're not prepping for Jeopardy" or "Cocktail party fodder is cool, but you're not going to any cocktail parties anytime soon" and then try to move on to something useful. More and more that seems to be silence and my own thoughts.
It's not easy. Often it's simpler and easier to turn on some media source and be a passive consumer. "Don't make me think," but to a fault.
Anybody with even a cursory understanding of Adversarial Networks and critical thinking skills can conclude that this is a horrible idea. The end result will be better forged information, indistinguishable by anyone (in the limit). We probably need rubes as a buffer.
After reading this entire article, I can't help but conclude that the authors are promoting conformist in-the-box thinking above all else - with the caveat that the section on lateral approaches is good - meaning not relying on single sources, not even the ones you tend to trust, but instead conducting alternative searches that rely on completely different sources. While that's good advice, the overall conclusion isn't one I like very much:
> "As long as students are led to believe that critical thinking requires above all the effortful processing of text, they will continue to fall prey to informational traps and manipulated signals of epistemic quality. At the same time that students learn critical thinking, they should learn the core competence of thoughtfully and strategically allocating their attentional resources online. This will often entail selecting a few valuable pieces of information and deliberately ignoring others."
Given that more propaganda resources are directed at college-attending students than perhaps any other demographic, with one goal being recruiting them into faith in institutional pronouncements, teaching them not to question the official 'trusted sources' is a recipe for the rise of totalitarian thinking. Systematic institutional corruption is a fact of life - when pharmaceutical and fossil fuel and financial and weapons and tech corporations are deeply integrated into governmental and academic institutions, what else can you expect?
There are dozens of examples, and certainly the more historical ones are less likely to cause immediate blowback - but let's take the issue of modern viral biotechnology research, which is now entirely capable of taking just about any mammalian virus and converting it into a human pathogen via direct manipulation of its cell-surface receptor binding motifs such that they specifically target human cells. After the outbreak of Covid, a vast governmental and corporate disinformation campaign was instituted to hide this fact (and its highly probable role in the Covid outbreak) from the public - and this disinformation campaign was run via the 'trusted sources.' See also: the existence of WMDs in Iraq, the necessity for Wall Street bailouts in 2008-2009, why the public should approve of warrantless domestic mass surveillance, the notion that all cancers are due to personal dietary choices and parental genetics (rather than industrial carcinogens), etc.
Most of the alternative views on the above topics appeared in what the authors would probably call fringe sources, and they appear to be saying that such sources should be categorically ignored. Brainwashing 101?
- This infodemic is nontrivial because exposure to misinformation has been shown to reduce people’s intention to be vaccinated against COVID-19 (Loomba et al., 2021).
This article about ignoring draws conclusions that are nowadays undermined by the ineffectiveness, or even harmful results, of medical interventions like a poorly tested vaccine against Covid.
A lot of people simply ignored the common narrative about effectiveness, because proof takes time.
Basically, the article uses the very kind of language it is arguing against, which is hypocritical.
> For instance, reddit.com’s AskHistorians subreddit, one of the largest history forums online, removes questions that use the JAQing technique to deny the basic facts of the Holocaust
I feel like this « reason » is also widely used to shut down anyone asking annoying questions that don't fit the narrative.
> I feel like this « reason » is also widely used to shut down anyone asking annoying questions that don't fit the narrative.
It absolutely is. It's a classic tactic of social justice warriors (in the party-agnostic sense) in particular, and authoritarians in general - "you're not arguing/asking questions in good faith, you don't get to talk any more."
I read up on "JAQing" in particular, and it's very clearly just a way to shut down opposing thoughts without having to discuss why.
That informational abundance or disinformational abundance creates a scarcity of attention is hardly novel.
Unfortunately most advice tends toward conscious and deliberate rejection of such sources, which continue that assault on attention.
As anyone who's implemented an adblocker or similar online annoyance filters is aware, automating the process of distraction rejection is a vastly more effective and less-attention-costly approach. Rather than individually reject cookies, or cookie notices, or trackers, or ads, or various interstitials / "recommendations" / nags / pop-ups / fly-overs, and the like, I've applied and created sets of tools which remove those without my further conscious awareness. Users of PiHole may occasionally check the dashboard blocking statistics and be amazed at how much not only useless but actively counterproductive crud has been avoided.
Information overload requires cheap, fast, regret-free rejection tools.[1] I've come to suspect that worldviews and models specifically function in this manner, identifying key information which we should focus on, and costlessly discarding the rest.[2] The article does nod to this briefly, particularly in the note referencing Gigerenzer & Gaissmaier, 2011, but on balance misplaces its emphasis, most especially in its suggestions.
Another surprisingly effective tool is randomness, whether in sampling or in decisionmaking directly. Random sampling is a statistical tool for choosing a very small set of observations of interest, where it is the absolute sample size that matters rather than the sampled fraction. That is, a sample size of 30, or 300, or 3,000, is equally effective whether the total population size is ten thousand, ten million, or ten billion.[3]
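To make that concrete, here's a minimal sketch (my own illustration, not from the comment): the standard error of a sampled proportion depends only on the sample size n, not on the population size, which is why a few hundred random observations suffice whether the population is ten thousand or ten billion.

```python
import math

def standard_error(p, n):
    """Standard error of a sampled proportion p with sample size n.
    Note the population size does not appear in the formula at all."""
    return math.sqrt(p * (1 - p) / n)

# A 50% proportion sampled with n = 1,000 carries the same margin of
# error for any (large) population size.
se = standard_error(0.5, 1000)
print(round(1.96 * se, 3))  # ~0.031, i.e. roughly a +/-3% margin at 95% confidence
```

(This ignores the finite-population correction, which only matters when the sample is a sizable fraction of the population; for the huge populations discussed here it's negligible.)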
Within a media environment, in which there are publishers, channels, and distributors, the challenges shift somewhat.
One exceedingly useful tool is noise rejection, or as I put it on Mastodon, "block fuckwits".[4] By blocking those who are misleading, annoying, toxic, or simply cannot or will not engage productively, noise levels fall tremendously. One key feature of the Fediverse is a panoply of filtering, blocking, and limiting tools, from keyword filters to lists to muting (temporarily or permanently) to blocking (by or against individuals, or entire instances, gradated as silencing or complete exclusion). As a result antisocial behaviour carries sharp consequences.
Timeboxing, and other mechanisms of setting a media budget, can also help. Most of us hugely overestimate our message-processing capabilities. Even highly prolific individuals or groups typically have an upper bound of a few hundred incoming messages per day, and perhaps 50 outbound (most of which are exceedingly short --- acknowledgements or yes/no decisions). See Walt Mossberg and Stephen Wolfram, both averaging about 150--300 inbound daily emails. Content moderators seem to have an upper capacity of about 700--800 items/day, which affords about 35--40 seconds of attention to each item, if no other tasks are included at all in an eight-hour workday.[5] Scheduling email and media breaks at specific times of day, perhaps gradated between high-priority (boss, clients) and lower (the rest of the world) contacts, is also hugely useful. My email schedule is largely now a few times per year....
A feature lacking from many mainstream tools and platforms is precisely this ability, and especially at the aggregate level. Within, say, YouTube, application stores from Apple and Android, or marketplaces such as Amazon, there is no ability to permit or deny presentation of offerings from a specific channel. Nor is there any way for groups to aggregate their own blocking behaviours. Put another way, providers, merchants, and channels carry no high-cost risk of low-value offerings. If the cost of putting out a crap app, video, book, or piece of merchandise were immediate and permanent blocking by individuals or whole groups, behaviours might improve.[6] Contrast alternatives such as the Debian GNU/Linux package repository, where specific sets of policies and behavioural requirements eliminate entire classes of end-user-hostile behaviours. Debian's organisational orientation and premise is focused on its users.[7] Documents aren't enough; the project also has institutional and social conventions for achieving those goals.
Removing apps and disabling notifications is a start. Blocking sites entirely is even more preferable. A fairly recent development has been the emergence of alternate front-ends to major commercial social media platforms which permit directed access whilst avoiding much of the social manipulation, and incidentally, privacy-abusing surveillance, of them. Threadreader and Nitter for Twitter; Invidious, Piped, yt-dlp, and freestanding media apps (mpv, xine, mplayer, etc.) for YouTube (and a host of other video and audio platforms); Teddit for Reddit; and more.[8]
PiHole or similar adblockers, mentioned above, can be used not only to block advertising, but entire sites and domains exhibiting toxic practices. These blocklists are often collaboratively generated and distributed, which extends their reach and effectiveness; they're de facto means for citizens of the Internet to reject the terms and practices imposed by Internet monopolies and publishers. Power-law distributions mean that even relatively short lists are surprisingly effective.
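The power-law point can be sketched numerically. Assuming (my assumption, not the comment's) that domain popularity follows a Zipf distribution, blocking even a tiny fraction of the most popular offending domains covers a disproportionate share of total exposure:

```python
def zipf_coverage(top_k, n_domains, s=1.0):
    """Fraction of total 'traffic' covered by blocking the top_k most
    popular of n_domains, under an assumed Zipf(s) popularity law,
    where the rank-r domain gets weight 1/r**s."""
    weights = [1 / (rank ** s) for rank in range((1), n_domains + 1)]
    return sum(weights[:top_k]) / sum(weights)

# Blocking just 1,000 of a million domains (0.1% of them) covers
# roughly half of all exposure under this assumption:
print(round(zipf_coverage(1_000, 1_000_000), 2))  # ~0.52
```

The exact fraction depends on the exponent s, but the qualitative conclusion is robust: short, well-chosen blocklists punch far above their weight.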
Hacker News itself incorporates a similar mechanism through a list of 67k+ banned sites.[9]
Legislation, regulation, class-action lawsuits, and other forms of collective action are the underrepresented remedies lacking from most discussion of this topic. Much as neither Soviet Communism nor Western corporate capitalism finally addressed labour exploitation or environmental concerns, but rather collective unionisation, organising, advocacy, protest, lawsuits, legislation, and regulation did, I think we're facing a similar situation here. Market failures aren't resolved by marketing-it-harder solutions. Boycotts and wallet advocacy have some utility, but are ultimately limited especially where monopolies are absolutely pervasive. That's a slow, long, and expensive battle, but most battles worth fighting are.
3. This concept is central to modern statistics, but hugely misunderstood by the general public. The sample size is sensitive to the size of the effect you're trying to measure, such that if only a very small portion of the entire population is behaving in some manner of interest, you'll need to scale your sampling such that a representative number of that subgroup are included. Still, small samples can be highly effective. I'd seen this directly when assessing how many Google+ profiles had ever been active, or were recently active. Those numbers turned out to be about 9% and 0.6% of the totals. I'd had a reasonable estimate of the first value after checking only 100 randomly-selected profiles, and within 1,000 or so, of the second. I ultimately checked about 50,000 profiles (using Google's own sitemaps). A subsequent study increased that sampling tenfold (and ran some more robust checks and processes to ensure effective random sampling), but largely confirmed my own results. See: <https://web.archive.org/web/20150130125653/https://ello.co/d...> and <https://blogs.perficient.com/2015/04/14/real-numbers-for-the...>.
4. <https://toot.cat/@dredmorbius/104371585950783019> I'll note that blocking can be made absolutely without prejudice, as a recognition that another party may be harmed by, or react poorly to, your own posting. Or with varying degrees of prejudice, where appropriate. It's also entirely at the discretion of the blocking party, and grousing about being blocked is virtually always exceedingly poor form, and a validation of the action.
6. Why this is so hasn't been sufficiently studied, in my view, though Cory Doctorow's concepts of enshittification and chokepoint capitalism demonstrate how forcing low-relevance search results on both customers and vendors can, at least in the short term, maximise the profits of the monopolist.
8. LibRedirect is a browser extension that automatically ... redirects ... to alternatives for these and other services: <https://libredirect.github.io/>
Not that that needs adding to, but I can't help myself ...
The methods mentioned in TFA are not entirely useless, and can be helpful.
"Self nudging" includes in part the mechanisms I'd described as automated cheap rejection. The main difference is that my emphasis is on developing such tools in a shared and collective manner rather than relying on individuals to discover, develop, and deploy them on their own. A similar concept from the perspective of diet is that it's far easier to resist temptation by not buying junk food in the first place than to attempt to ignore the bowl of crisps on the table or the bag of biscuits in the larder.
"Don't feed the trolls", or as one early-aughts formulation put it, a denial of attention attack, can also be useful. I generally limit my direct engagement with disinformation, or try to make that as short and direct as possible (e.g., link directly to refuting evidence). What I'll often do on HN or elsewhere is "be the discussion I'd like to see", and post a take on a topic that I'd like to see emphasized, whether that supports or rebuts a source article. Where an argument seems reasonably novel I'll occasionally try to draw out specifics, I'm most interested in where those come from, and find often that the person posting either doesn't know or won't admit this.[1]
Curiously, the "provide an alternative narrative" bears some semblance to the "firehose of falsehood" propaganda method. The foundations and goals differ, but both seek to dilute a particular countermessage.
________________________________
Notes:
1. These often trace to political or politicised tropes. Common examples include:
- The mischaracterisation of Adam Smith's "invisible hand" description, which traces to Jacob Viner at the University of Chicago, and is much debunked, particularly well by the late Gavin Kennedy of Adam Smith's Lost Legacy.
- The "monopoly on violence" mischaracterisation of Max Weber's definition of government, "the claim to the legitimate use of physical force", where both claim and legitimate use are key. The more common misformulation traces to Robert Nozick and Murray Rothbard, in the 1970s and 1960s respectively.
- "Free market of ideas" which actually seems to have derived from free-market ideological propaganda and propagandists, does not have roots in J.S. Mill, and on examination has little foundation in the actual formulation and dissemination of ideas. See Jill Gordon's "John Stuart Mill and the 'Marketplace of Ideas'" (1997) <https://philpapers.org/rec/GORJSM> and Stanley Ingber, "The Marketplace of Ideas: A legitimizing myth" <https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article...> (PDF) particularly.
A recent instance of broad hand-wavey claims resulting in no citations offered despite multiple responses on the thread: <https://news.ycombinator.com/item?id=37386379>. Sometimes simply stating that the emperor is naked is the best one can do.
Here are the things that I think most discussions of Digital Citizenship/Information Literacy/Disinformation Mitigation etc. miss:
* People's time and especially their mental bandwidth are limited and there exist large swaths of the citizenry who are not students. This paper touches on this when it suggests that lateral research into a source is a less onerous task than reading the text itself deeply and acquiring the necessary background context to evaluate content on its own. But that still requires knowledge, skills, and expertise that a lot of the population does not have. Acquiring it is likely feasible for students, but it is not for a single mom of 3 kids working 2 jobs, many retirees, etc. A lot of America's population (the country I am most familiar with) is running with 95%+ of their bandwidth taken up at all times, especially post COVID when so many places are understaffed and it's more cost effective to force existing employees to do 2+ jobs. Tired people do not have the energy to change their habits, learn new tools, etc. There's also much less tech literacy than we would expect. As an example, my current boss who has run a successful local business for over a decade can't sort a Google Sheet. Expecting her to sit down and learn these skills when she's running a business and parenting 2 children under five is futile.
* Information Literacy skills are not stagnant as the falsehood/clickbait/etc. problem is an arms race with several well-funded and well-researched actors having incentives to adapt to whatever tools and habits people do adopt. We can't just teach one set of skills and expect them to work 30, 40+ years on. As an example, lateral researching works well if your other sources stay stable and reliable but that is not what happens. Enshittification, buy-outs, etc. mean that constant re-evaluation of sources is needed. It is not a one and done, and there are great economic incentives to fight people doing this.
* Almost all work addressing the problem has focused on people attacking misinformation and falsehoods from an individual standpoint. Most people evaluate information in some sort of community context. One reason I think that I do alright is that I have a family that is all tech-literate, spans 5 generations and multiple political views, and none of us have turned into conspiracy theorists. If my cousins and I roast each other for using poor information sources, we can trust that we a.) have an idea of what we're talking about and b.) have each other's best interests in mind despite our disagreements. This makes it far easier to listen to critique from the 'other' side and to have our filter bubbles pierced.
My POV: I'm a librarian from a tech family who has been online for over 30 years. I've also done professional fact-checking and worked in politics/civics communication.
At the end of the day even this will fail because of human limits.
The 'firehose of falsehood' floods the field with bullshit purposefully, to get you to ignore as much as possible. The bullshit asymmetry principle says that even the act of ignoring requires an expenditure of energy, of which you as an individual have a limited amount. After that, your default state is one of disbelief, a default-ignore state. Your political enemies can then abuse this to ensure you ignore critical information that is important to your future well-being.
We humans are screwed. We are not designed to pick important information at a worldwide scale where events thousands of miles away can affect your life nearly instantly. This will only get worse as agent based intelligence becomes more common and able to work at scales far beyond any individual.