I have been feeling depressed for a few days, pondering the general credibility crisis that prevails over human communications of late.
The spread of low-quality content or plain disinformation is just one aspect of the problem. There are those who stand to gain, politically or financially, from influencing the global conversation to their advantage, and we've seen an endless parade of examples that have elevated astroturfing and manipulative deceit from a curiosity into an exact science and art form: Cambridge Analytica, Facebook and their PR firm, Bannon and Breitbart, the Russian troll farms, Twitter bots and fake profiles, the inexplicable fact that FB knows and tells advertisers exactly how many times their ad was shown and bills them accordingly, yet needs to develop new technology to figure out exactly how these very same features were used to meddle in an election, sow discord and amplify animosity.
OTOH I can read and hear all sorts of opinions from all kinds of experts in every field whom I no longer trust, since I don't know whether they are using their audience for profit and ever so subtly changing the perception of those who listen: it puzzles me to no end that nowadays Microsoft is considered open source's best new friend, the same Microsoft that no more than 15 years ago was covertly funding SCO's lawsuits against Linux.
I feel like advertising has seeped into the very fabric of human communications, not to better humanity but for the short-term gain and selfish goals of a few who can afford such services.
The internet used to be an Electronic Frontier where everybody could be who they really were and speak their minds. Now it's a poisoned cesspit where everybody lies about everything and those who do it best get to sell you stuff. </3
That, and the fact that various parts of the 'expert class' have largely discredited themselves; from the replication crisis to the death of physics as a field... we don't need to blame bots; the trust level is low.
It's always been that way. Seriously, the most valuable skill the internet has helped me foster is my bullshit-o-meter. The actual problem I see first-hand is that most people haven't exercised that skill at all. They've been living in groups small enough to sustain high levels of trust between members. That trust has been projected onto the rest of the internet: they are far too trusting, and they do zero research before they start repeating what they've read or heard. And we're not punishing that behavior; your reputation for information quality should go down if you repeat clearly false information.
Absolutely tons of false positives, but the bar is set very, very high for me to actively voice my support or share information. That might even be the more important skill, given the sheer volume of information being generated and shared across the world. Very rarely do I voice my disagreement with stories or ideas; not supporting or sharing them seems to be good enough. Very few things are strictly true or false; most knowledge drifts around on an infinite scale between [0, 1.0].
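That [0, 1.0] scale of belief can be sketched as simple Bayesian updating; a toy illustration where the prior and likelihood numbers are entirely made up:

```python
def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revise a belief in [0, 1] after seeing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start skeptical of a claim, then see one moderately reliable report supporting it.
belief = 0.2
belief = update_belief(belief, p_evidence_if_true=0.7, p_evidence_if_false=0.3)
print(round(belief, 3))  # a single report nudges the belief, but does not settle it
```

The point of the sketch is the shape of the process: evidence moves the needle gradually, and a belief only approaches 0 or 1 after many consistent observations.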
Yes. It's a nice balance: being open to new ideas, researching controversial ones, and taking my time before registering myself as supporting or disagreeing with an idea. It's critical thinking with healthy skepticism, but being unafraid to entertain my imagination concerning very controversial hypotheses. But the real butter might just be how slowly I let ideas sink into agree/disagree, or perhaps that my middle ground is far larger: I entertain a wide variety of conflicting ideas while only sharing and voicing my support for a small set of ideas that I'd truly stake my reputation on.
During the English Revolution, censorship of the press was suspended and people could publish anything. And they did. And inaccuracies spread rapidly.
It became common to argue that England had once enjoyed a rough democracy during the Anglo-Saxon days, even though there is no evidence of that.
It became common to argue that studying the Bible was unimportant, compared to the importance of being moved to speak by the Holy Spirit.
A number of establishment figures thought they could stop the spread of error simply by writing books pointing out the errors -- which seems very similar to what is happening now.
After the King was killed, and the official Church limited in its legitimacy, a problem arose that no one had the legitimate authority to determine if someone was the Second Coming of Jesus Christ, so more and more people began to claim that they were, in fact, the Second Coming of Jesus Christ. And the competition among these so-called Second Christs somewhat resembles fights among modern day influencers on YouTube.
In the end, the public became exhausted with the way nothing seemed to have any legitimacy, at which point it became nostalgic about having a King. And this made the eventual restoration of the monarchy inevitable.
Ditto Neal Stephenson's "Anathem" which posits an alternate-Earth history with a future of widespread info war (quoted at https://news.ycombinator.com/item?id=14554592 ) and the evolution of a special quasi-priest class of people whose unique ability was to filter out the crap and find true things on the Internet.
In Anathem, the advanced pre-collapse society dealt with spam by intentionally building machines that sent out an intended message along with millions of tweaked variants, making everything untrustworthy at first glance.
I personally think this state of affairs is the solution for the misinformation regime we find ourselves in. We need to combat bots with more bots that tweak and churn the messages promoted by everyone else to forcibly lower the superficial credibility of all information.
My hope is that by doing this, we can supplant the misinformation regime with a white noise regime that is no better or worse than pre-Internet communications for superficial (aka unsigned) traffic.
FWIW, this strategy is commonly used to counter traffic analysis of communications channels (e.g. encrypted military links), so it's not a new idea and it does work.
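As a toy sketch of the chaff idea, here is a tiny generator that pads one message with randomized variants; the template and substitution lists are invented purely for illustration:

```python
import random

# Hypothetical substitution table: each slot can be filled several ways.
SLOTS = {
    "{person}": ["Obama", "the former president", "the senator"],
    "{event}":  ["visited a hospital", "attended a rally", "was seen downtown"],
    "{when}":   ["3 weeks ago", "last month", "recently"],
}

def variants(template, slots, n, seed=0):
    """Produce n distinct randomized fillings of the template -- 'chaff'
    messages that make any single version look unremarkable."""
    rng = random.Random(seed)
    out = set()
    while len(out) < n:
        msg = template
        for slot, options in slots.items():
            msg = msg.replace(slot, rng.choice(options))
        out.add(msg)
    return sorted(out)

for v in variants("{person} {event} {when}.", SLOTS, 5):
    print(v)
```

This mirrors the traffic-analysis countermeasure mentioned above: an observer who sees all the variants can no longer tell which one, if any, was the "real" message.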
Ha that is interesting, but does that really apply to our fake news problem nowadays? Let's say a fake news site creates an article "Obama died in a hospital visit 3 weeks ago and was replaced by a robot."
Should we now make 150 different websites that spread 150 slightly different versions of this, who would gain from that?
I'm just thinking out loud.
Oh I think your point is that one of those 150 links would have to contain the truth and the rest of the 150 would slowly edge towards it. Kind of like this:
"Obama was injured and then replaced by a robot"
"Obama was injured and then was given robotic implants to heal"
"Obama was injured and given a pacemaker"
"Obama visited the hospital for a routine checkup, minor cold revealed"
"Obama did not visit the hospital 3 weeks ago, he was at a campaign rally"
Would you really say that you have helped the internet/humanity if you did that to every fake news link?
Even if this were so sophisticated that it auto-generated new domains and new content... people would just revert to following CNN/Foxnews/[standard outlet]. Then the people who read these fake news links will have an even harder time figuring out whom to trust. Or is the goal maybe to push people towards mainstream news outlets? I can only imagine that as a result of such an approach.
I think the idea is that when it is obvious that all information from unverified sources is false, people will start to rely on the (cryptographically) signed, accredited sources when they want the 'real deal', and not let themselves be misled when those signatures are missing. Some level of white noise is needed (enough to encounter multiple versions of the fake news) for people to recognize the value of checking the signature.
But maybe the white noise mechanism is not needed. There may currently be enough erosion of trust by 'black' noise to give platform builders the incentive to add the authenticity methods to their products and see widespread adoption of their use.
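A minimal sketch of the signature-checking idea, using Python's stdlib `hmac` as a stand-in for a real signature scheme (a real system would use public-key signatures such as Ed25519, so readers wouldn't need the outlet's secret; the key and article below are hypothetical):

```python
import hashlib
import hmac

# Hypothetical outlet key. With HMAC both signer and verifier share the secret;
# a public-key scheme would let anyone verify without being able to sign.
OUTLET_KEY = b"outlet-secret-key"

def sign(article: bytes) -> str:
    """Attach an authentication tag to an article."""
    return hmac.new(OUTLET_KEY, article, hashlib.sha256).hexdigest()

def verify(article: bytes, tag: str) -> bool:
    """Check that the article really carries the outlet's tag, unmodified."""
    return hmac.compare_digest(sign(article), tag)

article = b"Obama was at a campaign rally 3 weeks ago."
tag = sign(article)
print(verify(article, tag))                 # True: untampered
print(verify(article + b" (edited)", tag))  # False: any tweak breaks the tag
```

The design point is that authenticity becomes a cheap mechanical check, so white noise only devalues the unsigned copies.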
Foxnews proving to me that their latest article is actually from them only helps people who already trust that source.
The reason why Assange etc. post signatures is that they don't have control over Twitter. Ownership of the domain is already a form of authentication/signature that is more than sufficient for just about everybody, and source authentication is definitely not the main problem with fake news. Verifying that the author is who you think it is is probably the smallest, most insignificant part of fake news. Much more central is whether the content is false.
How do we prove that something is false? We usually can't, so at best we can try to find flaws in the reasoning, or quotes that are wrong, and say 'probably false'. That's what fact-checker sites are doing; they give out grades. In my opinion the approach of fact-checker sites is the best we can do so far; the problem, however, is now identical to that of mainstream news: corruption.
These fact-checkers inevitably mess up, or slant their grading to serve the goals of their ideology or purse, which has arguably already happened, and now we don't trust fact-checkers anymore either.
Maybe this is an uncomfortable thing to say, but this entire fake news escapade may just be a natural cycle that happens when corruption becomes too great and competition emerges. If we accept that reasoning, then fake news is just one ugly side effect, but there are also good side effects, like new news sites emerging which may use outrageous new content, or superior ethics, as their selling point. Hopefully the latter prevails, but the cycle of gaining and losing trust will continue for as long as human beings are fallible.
> That's what fact checker sites are doing, they give out grades.
Hah! So in Anathem there are also other machines that do this. The design the author wrote into the story involves two species of machines that work at full speed with 100% uptime: one to revise and tweak the facts of a story and, separately, one to assign grades. Basically a world-wide generative adversarial network.
From the attacker's point-of-view, in order to deliver a false message they are forced to try to fight through a gauntlet of independent machinery that will first generate a bunch of alternatives and then will look at any particular story and assign a grade with knowledge that it's probably being attacked. That could be a very tough filter to consistently navigate, especially if our attacker is trying to conduct a broad campaign of misinformation.
From the victim's point-of-view, every piece of information they read now is associated with a score provided by their fact checking filters -- and there is no reason not to have multiple layers of grading filters.
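A toy sketch of layering grading filters: each grader returns an independent credibility score and the reader's client combines them. The graders here are invented stand-ins, not real classifiers:

```python
def source_grader(story):
    # Stand-in heuristic: trust cryptographically signed sources more.
    return 0.9 if story.get("signed") else 0.3

def style_grader(story):
    # Stand-in heuristic: penalize shouty, clickbait-ish headlines.
    return 0.2 if story["headline"].isupper() else 0.7

def combine(story, graders):
    """Average the independent grades; a real client could weight or chain them."""
    scores = [g(story) for g in graders]
    return sum(scores) / len(scores)

story = {"headline": "OBAMA REPLACED BY ROBOT", "signed": False}
print(round(combine(story, [source_grader, style_grader]), 2))  # 0.25
```

Because the graders are independent, an attacker has to fool every layer at once, which is the "tough filter to consistently navigate" described above.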
"The world is going to crack wide open. There is something on the horizon. A massive connectivity. The barriers between us will disappear, and we’re not ready. We’ll hurt each other in new ways, we’ll sell and be sold, we’ll expose our most tender selves only to be mocked and destroyed.
We’ll be so vulnerable, and we’ll pay the price. We won’t be able to pretend that we can protect ourselves anymore. It’s a huge danger, a gigantic risk, but it’s worth it, if only we can learn to take care of each other, then this awesome destructive new connection won’t isolate us. It won’t leave us in the end so totally alone." - https://medium.com/@chrstphrmllr/you-are-not-safe-bebb0538e1...
The problem was that they got attention. I still dispute any relevant impact beyond providing talking points for political campaigns, whose dishonesty rivals and probably exceeds that of some trolls.
> it puzzles me to no end that nowadays Microsoft is considered open source's best new friend
Nobody should see it as anything other than an attempt to regain the lost developers who went to greener pastures. Microsoft wanted to be Apple, and in doing so removed any advantages their platforms offered. And the quality of Win10 is abysmal.
> The problem was that they got attention. I still dispute any relevant impact besides the talking points for political campaigns. Which rivals and probably exceeds the dishonesty of some trolls.
Can't judge the impact, but there is no question of their ubiquity. The bulk of these "laura freedom", "vets for trump", "deplorable sandy" and similar accounts with hundreds of thousands of tweets are Russian operators. Often you'd look at their likes, and they're full of the Cyrillic tweets they had to amplify for the Motherland due to some minute home-front need. Sometimes I'd tweet them a humiliating comment in Russian that Bing would never be able to translate, and get blocked within seconds.
Twitter today likely has close to 50% of accounts being bots, and 80% of content generated or circulated by bots. They also have everything they need to shut the bots down, but doing so would halve their user base and likely reduce engagement metrics to a tenth.
>The internet used to be an Electronic Frontier where everybody could be who they really were
Come on: "On the Internet, nobody knows you're a dog" is quite an old cartoon at this point. It may have been a place where everybody COULD be who they really were, but it was also, from the first, a place where people could pretend to be who they weren't.
I think this is just the cycle of deep "disruption" - naive optimism/hype spurring development without regard for how the general public behaves, followed by speculative investments that lead to mass adoption, diluting ethics in the sell out, which attracts unsustainable exploitation... adoption continues to increase as trust falls... then, eventually, enough people burn out that cynical realism reaches the critical mass required for cultural/legislative regulations. Finally, we end up with a mediocre sustainable system - a stable playing field for the next disruption.
Things look bleak to the early adopters that saw dreams morph into nightmares... but this too shall pass.
Re: Microsoft, one of the early tests of a Russian fake news source was to fake that Windows 10 sends highly personal information like webcam video, keystrokes and mic audio to the mothership.
It got a bunch of traction on here and took off big time on Reddit, and people would quote it as proof in comments for months.
1) third party fact-checkers (eg. snopes) are credible and reliable.
2) the third party "Botometer" (machine-learning model) is valid and was trained on an accurately classified corpus.
This type of study always relies on the elevation of someone's subjective assessment (eg. "this account is a bot/human", "this fact-checker is honest") to the status of objective truth.
It is a serious weakness when studying subjects loaded with political consequence, wherein well-resourced stakeholders are constantly attempting to shape competing narratives by any means available to them.
EDIT: just to be clear, I think it's important to note that Nature and these authors likely have a bias towards what they consider more truthful news and that will be reflected in their choices of material and tools for this study.
>2) the third party "Botometer" (machine-learning model) is valid and was trained on an accurately classified corpus.
For anyone curious enough to follow up on how effective their "botometer" tool was, the methods section [0] details what they used and links to said tools. The language used is simple enough for most people to follow. I am not great at following their statistics, but the botometer seems to be at least decently accurate. Certainly, figure 2d seems pretty convincing to me for validating the botometer.
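For intuition on how validating such a tool works: given a hand-labeled sample, one can compute precision and recall of the bot/human predictions. This is a generic sketch with made-up data, not Botometer's actual evaluation:

```python
def precision_recall(predictions, labels):
    """Precision and recall for binary bot/human predictions (1 = bot)."""
    tp = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(predictions, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(predictions, labels))
    return tp / (tp + fp), tp / (tp + fn)

# Made-up hand-labeled sample of eight accounts.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 0, 1, 1, 0]
p, r = precision_recall(preds, truth)
print(p, r)  # 0.75 0.75
```

The hard part, as the parent points out, is not this arithmetic but whether the labeled corpus itself was classified accurately in the first place.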
>1) third party fact-checkers (eg. snopes) are credible and reliable.
The methods section talks about how their model handles Onion articles. The great thing about The Onion is that it is a perfect positive control, i.e. it is always false. The authors state the following in the methods section:
>Many low-credibility sources label their content as satirical, and viral satire is sometimes mistaken for real news. For these reasons, satire sites are not excluded from the list of low-credibility sources. However, our findings are robust with respect to this choice. The Onion is the satirical source with the highest total volume of shares. We repeated our analyses of most viral articles (e.g., Fig. 3a) with articles from theonion.com excluded and the results were not affected.
I think the massively successful clickbait junk/fake news purveyors like Taboola and Outbrain are equally damaging, because they openly publish garbage on the sites people visit to read 'news' and form opinions.
Wiping out the bots would absolutely destroy the user counts of most social networks, so it's just not going to happen.
On the other end, there are a lot of formerly credible sources of information that have stepped in it badly and repeatedly, and tarnished their aura of authority. The fourth estate is looking more and more bought and paid for, and the academy has their own issues there, not to mention a mother of a reproducibility problem.
With Twitter and Instagram, it feels like there are more bots than real people. Facebook, I'm less sure of, but with as many random friend requests as I get from unknown "people" with half-baked profiles and no connections, it's happening there too.
I believe "curbing" is simply a nod to the fact that the people/groups controlling these bots have no interest in turning them off, and are unlikely to change their opinion. The harms this article shows are, after all, their purpose.
The actors who may be motivated to mitigate these harms, i.e. the social networks and possibly governments, would face problems very similar to those with (other) spam, namely identification. That said, the history of spam filters makes me optimistic that it's feasible to put a rather significant dent in the problem with even just modest efforts.
Summary: It's more of a semantic difference, arguably more precise because there is no off switch as such.
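As a sketch of why the spam-filter analogy is hopeful: even a naive-Bayes-style token scorer separates bot-like from human-like messages once you have labeled examples. The tiny corpora below are invented; a real filter would train on millions of messages:

```python
import math
from collections import Counter

# Made-up training corpora for illustration only.
bot_msgs = ["buy followers now", "click this link now", "free crypto now"]
human_msgs = ["see you at lunch", "great article thanks", "meeting moved to noon"]

def token_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

bot_counts, human_counts = token_counts(bot_msgs), token_counts(human_msgs)

def bot_log_odds(message):
    """Naive-Bayes-style log odds that a message is bot-like (Laplace smoothing)."""
    score = 0.0
    for w in message.split():
        p_bot = (bot_counts[w] + 1) / (sum(bot_counts.values()) + 2)
        p_human = (human_counts[w] + 1) / (sum(human_counts.values()) + 2)
        score += math.log(p_bot / p_human)
    return score

print(bot_log_odds("click now") > bot_log_odds("see you at lunch"))  # True
```

The same "modest effort, significant dent" dynamic that tamed email spam could plausibly apply here, with the usual arms race that follows.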
These people/companies are actually few and far between, and they can decide to eliminate such mechanisms. The bots have exploited the lazy nature of humans to create a shortcut to economic advantage. This has uncontrollably spawned the culture of link baiting and the downfall of truthful and investigative journalism. Remember that humans built the social sharing features before bots showed up. Eliminate the bots and you're left with only human social sharing, on equal footing.
Good luck with that. It's buried in the supplemental materials, but mainstream media may be more reliant on social media bots to spread than the low-credibility stuff based on their data.
Eliminating all the internet bots would be the mundane real life first step of the Butlerian Jihad of the Dune universe. In the stories, they eliminate all machine computers in order to prevent AIs destroying society, and go back to human computers.
The idea that AIs (broadly defined to include simple bots) are net more trouble than they are worth has been kicked around in literature forever. It’s fun to live in the age where we get to wrestle with it in practice.
Haha, you really believe this? Everybody lost because of the misuse. The effectiveness of real email is way down from 15 years ago. Obviously there are many factors, like the multitude of alternative services, but wading through tons of spam opened the door to those new options. The same fate probably awaits today's popular services. We will move on; they will die.
Yes. I really believe this. I still use email daily as do most (all?) people I know. It's a primary means of communication, especially when you need a digital "paper trail" of the communication.
I remember email before Gmail. I used to delete hundreds of spam messages a day. It was torture. Not anymore. Now my inbox is all legit mail with vanishingly few spam messages. And my junk mail folder almost never has legit mail in it. For all intents and purposes the email spam problem is solved.
TBH this is not my experience with Gmail. Some mails randomly get classed as spam for unknown reasons, while many are misclassified ("Promotions" when it was a direct message, and "Primary" for a lot of stupid promotional stuff).
And legitimate, important mail ending up in spam (mostly billing) is a huge problem for many companies I know. On the other side, you can spam without issue and land in the primary inbox if you switch servers and providers frequently enough and target a small number of people (under 100,000).
It almost feels like the only solution (for one aspect of the problem) is for the "good guys" to outmeme the "bad guys". But of course, that's much, much, much easier said than done.
This is a very good short-term option that I hope others have been working on. My personal approach has been to encourage my family members to stop using social media, point them to top-quality content producers like Joe Rogan and RedLetterMedia, and, most importantly, help them calm down: the information available is exhausting, but the world around us isn't changing that fast, and I remind them to use that as a reference point. Talk to people in person, get involved in local politics, take a nature walk; life is short.
Is Joe Rogan truly the highest-quality content producer? I've seen/heard some of his content, and it's quite diffuse with respect to information density. Do I really want to listen to a 2.5-hour podcast to get what I can get in 30 minutes from other podcasters? I also heard him asking Neil deGrasse Tyson whether the moon landing was faked, with a lot of skepticism in his voice...
I think he's a quality content source at the very least. You can usually find an abridged or best-of cut for the most interesting podcasts, and you can reliably fall back to the complete unedited interview to get a more balanced view of his or his guests' opinions. He sticks out to me as one of the most interesting, and he shares similar levels of openness and drive for the truth compared to the other independent producers I am aware of or frequent. There are other quality producers that do better jobs at interviewing different niches.
The truth is always changing; entertaining controversial hypotheses either reinforces or exposes the truth. As my awareness, research, observational and social skills have expanded, so too has my worldview. If we represent a worldview as a polygon, then each time you accept a new story into your worldview, you add a new vertex to your shape. What happens as you validate and accept more points outside your shape? Your total perimeter likely increases in length, ever increasing the number of potentially mysterious stories available to you. The larger your worldview, the larger the set of "what ifs" available to you; "what ifs" produce larger impacts, requiring larger and larger changes to your worldview. The less you entertain and discuss potentially impactful "what ifs", the more cataclysmic the eventual change you'll have to make.
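Taking the polygon metaphor literally, adding a vertex outside the current shape does lengthen the perimeter, which can be checked in a few lines (a toy illustration of the metaphor, nothing more):

```python
import math

def perimeter(points):
    """Perimeter of a closed polygon given as a list of (x, y) vertices."""
    return sum(
        math.dist(points[i], points[(i + 1) % len(points)])
        for i in range(len(points))
    )

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# Accepting a point outside the current shape stretches the boundary.
expanded = [(0, 0), (1, 0), (2, 2), (1, 1), (0, 1)]
print(perimeter(square) < perimeter(expanded))  # True
```

A longer boundary means more surface in contact with the unknown, which is the "more what ifs" effect described above.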
He likely asked in jest, but I feel it was probably as healthy for Neil to retell the facts for himself as it was for Joe to entertain the unlikely idea, given how incredibly impactful the story would be if it were not the whole truth.
It strikes me as somewhat ironic that Nature would publish this, given that they recently tweeted:
> Editorial: The US Department of Health and Human Services proposes to establish a legal definition of whether someone is male or female based on the genitals they are born with. This proposal has no foundation in science and should be abandoned. [1]
That last sentence just absolutely boggles the mind and I'm not entirely sure how this journal retains any semblance of reputation when its editorial board is so clearly willing to put ideology over science.
It doesn't 'boggle' my mind. It would be helpful, I suppose, if you could explain how the last sentence shows they are 'so clearly willing to put ideology over science' - nothing could be less clear to me, although I'm far from expert. Otherwise it will seem like that's precisely what you're doing yourself, which I suspect is why you've been heavily downvoted.
To play devil's advocate: I am against establishing such a definition, but I dislike it when organizations that are supposed to be inherently unbiased and unopinionated state an opinion, even one that I agree with. I feel like doing so gives potential detractors of said organization (like GP) more ground to criticize them on.
If the purpose of proposed legislation is to eliminate all consideration of exceptions, surely science demonstrating that exceptions exist is the pertinent factor, and not science demonstrating that exceptions are rare.
The last sentence shows that they, Nature, are seriously suggesting that there is something wrong with defining sex based on observable sexual characteristics. This shows they are willing to put ideology over science: the only reason to suggest such a thing is ideology.
> The last sentence shows that they, Nature, are seriously suggesting that there is something wrong with defining sex based on observable sexual characteristics
No. They are suggesting that there is something wrong with defining it based on just one particular observable characteristic.
There are numerous characteristics in humans that come in two forms or varieties, one generally associated with males and one generally with females. Many people end up with some characteristics of the form generally associated with males and with some characteristics generally associated with females.
For example, there is a response in the hypothalamus to the pheromone androstadienone that is different in males and females, and can be recognized on an MRI scan.
Another example is how "male" and "female" brains perform visual and spatial memory tasks, such as imagining how a shape would look when rotated. Males generally are better at this than females, and brain scans show that when doing this task males are using different parts of the brain than females use--male brains approach this task differently than do female brains.
Some people with male genitals have female androstadienone response and female visual and spatial processing, and some people with female genitals have male androstadienone response and male visual and spatial processing. In particular, studies have found that transgender people are often this way, with their brain having the responses of the gender they identify as rather than the gender their genitals suggest.
Are you aware of intersex people? This is why the editors of Nature are correct in saying the proposal has no foundation in science: https://en.wikipedia.org/wiki/Intersex How do people with ambiguous traits at birth get categorized according to such a proposal?
"For 99 percent of us, our sex and anatomy dictate our gender; they are essentially the same thing. But for the one percent of the population who are transgender or intersex, their sex and gender don’t align. What has complicated this issue is that some outlets have replaced the word sex with gender when reporting on the memo, particularly when making any references to anatomy."
"But intersex people possess both female and male anatomy, which leads to having a gender identity that may be different from the way they appear to the outside world. To suggest that this group proves that gender is completely unrelated to anatomy, or that a person’s sense of gender in the brain somehow operates in a way that is distinct from the rest of their body, is foolish and erroneous."
Where would you put someone born with male genitals, but who identifies as female, and for whom brain scans show a brain that has female hormone responses and female processing patterns, who undergoes male to female sex reassignment surgery?
“Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.”
No, really?
Sorry - go ahead and vote me down. I couldn’t resist. Sometimes science uncovers facts that seem obvious.
Stuff like this is scary. Check out this video about the caravan and migrants climbing the border wall. Strangely edited. Presented as breaking news. Music as though it's a suspenseful movie. Yet looking at the comments everyone seems to be buying into it.