Disinformation for hire: how a new breed of PR firms is selling lies online (buzzfeednews.com)
479 points by edward on Jan 7, 2020 | 269 comments



A bunch of companies took hold of a distributed system of information and centralized it under their control.

Now those companies are claiming they cannot control disinformation, adversarial content and whatnot because they have too much content. At the same time, they are saying they would be best placed to curate and control it because of market dynamics.

I'm not advocating for a total ban of content unless it's "government" approved.

But what's the difference between a government setting the rules and a corporation deciding the rules? We can vote out a government, but we can't vote out a corporation.

This isn't new and we've dealt with this in the past. The technology is different but the tactics are the same.


Exactly, there was a dude in the 18th century who came up with distributing power into three monoliths (writing code, executing code and catching exceptions[0]), and at a similar time the bones of the modern education and corporate system were created[1]. Since then we've been refining these structures without really changing the underlying assumptions.

Blabla..

The way I see forward is to build a social network that is local first (kind of like these ideas with having a "revolution of the personal server" a la urbit). We need to realize that "voting" and "money" and "shares" or "securities" etc. are all the same thing. Compositional game theory and interesting models of MPC like statebox are steps on the road towards solving "fake news" and other trust-topology based problems but I think even if we don't solve these things theoretically we can still get closer to the ideal by just relying on some heuristics (like maidsafe and parity are doing).

Anyway the key idea is: if we can create a "democratic" form of governance - for corporations (or startups) - that outcompetes traditional forms of organizing such organizations then that form of organization will take over everything. The key idea for this to happen is for the organizational scheme to be compositional so that organizations can be assembled ad-hoc to compete with a google or an amazon, then we'll have something more flexible and that thing will win in the long run.

[0]: https://en.wikipedia.org/wiki/Separation_of_powers#Typical_b... [1]: https://en.wikipedia.org/wiki/Royal_Prussian_Army_of_the_Nap...


I've reached pretty much the same conclusions, and you make novel points that I think make tremendous sense.

> We need to realize that "voting" and "money" and "shares" or "securities" etc. are all the same thing

This blew my mind, said like that. I can now see how this is all but 'materialistic vectors' in a 'phase space' taking us from now to "change" (however small/big). Conceptually you may reduce it to a particular instantiation of information theory, sort of a "memetic" signal (a "meme" is a "cultural gene" in anthropology, unrelated to the internet-thingy) that spreads onto and influences civilization(s), big groups of individual vectors. It's an interesting model, "tensors" of civilization so to speak, which we are increasingly able to estimate with actual data: whether numbers for votes, money, shares; whether principles or sentiment reduced to political/investment choices. I mean, there's a way to model that thing, framed like that.

Great food for thought, thank you so much.

About a "social network that is local first": I think eventually we need to get together with like-minded people (from all fields) and design something, a spec. Solutions are aplenty; what's lacking, imho, is some common standards to iterate deeper, more focus.

About a "democratic" form of governance: that's next step, I see the network as a pre-requisite of sorts (not necessarily in theory, but in practice very much so, to "warm up minds" to the paradigm shift in coherent ways, to "teach" by example). However I'd caution against forcing a type of "regime" so to speak in the protocol, the spec itself: let some be dictatorial or in anarchy if they so wish, let all forms of governance emerge from first-principle-inspired bricks. (I guess this would speak to the "expressivity" of the spec, the topologies allowed/defined by it).

Fascinating topics which have occupied my mind for decades now (only 37 though...), that I think will define civilization throughout this century and beyond. It's our time to innovate politically in ways that go back 250 years, and in a deeper slower move some 2500 years. Grand cycles, eh!


You as an individual can stop using a corporation's products way more easily than you as an individual can vote out a government. My personal success with voting out governments is zero, but my success in boycotting corporations is pretty high. Occasionally others have stopped using them as well and they've stopped having a large effect on my life.


>> But what's the difference between a government setting the rules and a corporation deciding the rules? We can vote out a government, but we can't vote out a corporation.

> You as an individual can stop using a corporation's products way more easily than you as an individual can vote out a government. My personal success with voting out governments is zero, but my success in boycotting corporations is pretty high.

That presumes your main interaction with a corporation is by the consumption of its products as a customer. However, that's so often not the case that it might as well be assumed to be false. For instance, you can't boycott Equifax or an industrial chemicals factory that's polluting the air in your city because you're not their customer, but you're still affected by their decisions and rules.

The main difference between a government and a corporation setting the rules is that, in democracies, the government is theoretically accountable to the interests of all citizens while the corporation is only accountable to the interests of its shareholders and (depending on market conditions) its customers.


For example, Facebook creates accounts for people who don't even use the site.


I don't think a consumer can successfully opt out the way you claim. Maybe for an individual company, but how would one opt out of high-fructose corn syrup being on the market? You can drive yourself nuts trying to find only products without it, but the second you eat out or go to a friend's house you will be consuming it. On the other hand, as a citizen, I can vote in politicians who will heavily tax or ban HFCS.

In this (and many other) cases, government is a far more effective lever for change.


I have several friends who successfully don't consume HFCS, even when traveling or at friends' houses.

On the other hand, your vote for those politicians has near-zero effect. And if it does have an effect and they ban HFCS, then you've just imposed your will on people who wanted cheap sweeteners more than they wanted to avoid the health consequences of HFCS.


> I have several friends who successfully don't consume HFCS, even when traveling or at friends' houses.

If you say so. Are they able to avoid plastic waste, palm oil, and soy-based products? How many hoops should consumers have to jump through before we are allowed to have a healthy market?

> On the other hand, your vote for those politicians has near-zero effect.

Says you. I know a bunch of people who have healthcare because we voted for a certain President and a Congress who could get it done. It had a near-100% effect for all of those people.

> And if it does have an effect and they ban HFCS, then you've just imposed your will on people who wanted cheap sweeteners more than they wanted to avoid the health consequences of HFCS.

I'm not a Republican; I don't care about corporate profits over human well-being. Removing HFCS and similar products from the market is a net good. We already decided that tobacco products should be handled this way, and HFCS specifically is just as unhealthy.

Furthermore, the product choices available to ME are severely distorted by the presence of subsidized, unhealthy corn syrup. By removing it from the market, the American people will get better choices, with minimal medium or long term disruption to the number of choices available.


The issue is the Internet has provided bad actors (including rent-seeking entrenched interests) with the ability to anonymously, cheaply and pervasively spread lies and propaganda. They can precisely manipulate emotionally vulnerable segments, bury informed discussion, manipulate Overton windows, and subvert law and policy making for their own purposes.

Individual choice is of almost no help in managing the problem.


If that's your critique of the private sector, what about governments? I agree that those things are problems, but we're not comparing to an ideal state -- we're comparing to another deeply flawed thing. At least individual choice gets me something in the private sector.


For me “bad actors” includes all bad actors—-whether commercial, governmental or otherwise. They are all potential snake pits and they need to be scrutinized. :-)

But they are what we have to work with. To me it is irrational to refuse to use part of the toolkit (laws and regulation) because ideology.

Mandatory radical transparency is probably a part of any effective mitigation strategy.

We have to be realistic. The “system” cannot be perfected, and trying will cause horrible unintended consequences. It will always be gamed. There will always be charismatic sociopaths and selfish assholes. There will always be injustice and suffering.

But we can try to use the available tools judiciously to keep their negative impact down to an acceptable level.


But how many of the companies that you’ve boycotted are conducting business as usual? Isn’t that the real analog of voting out a government? A single person’s boycott is a drop in the bucket in the same way that a single person’s vote is.


A single person's vote has zero real impact whatsoever unless it's the one that changes the outcome, while each additional boycotter increases the impact on the company.


> My personal success with voting out governments is zero, but my success in boycotting corporations is pretty high. Occasionally others have stopped using them as well and they've stopped having a large effect on my life.

This isn't even close to an apples-to-apples comparison. You interact with many more corporations than governments. Most of the corporations you interact with have some competition; most of the governments you interact with don't. And voting out a government is hardly the analogue of boycotting a corporation: what's your success rate with voting at all vs your success rate in getting a corporation's board replaced?


The word all your respondents are looking for to explain the flaw in your reasoning is "externality".


Governments impose negative externalities as well, and in a way that has just as little accountability (often on behalf of corporations).


It is extremely difficult to opt out of Facebook and Google. I have no beef with Microsoft, but many workers can't boycott them because they're required to use them for work.


Being forced to use a product at work is one thing, but is it really that difficult to opt out of Facebook and Google in one's personal life? Unless one needs to advertise a personal business or something, the former is as easy as deleting your account, and you can drop most of the latter's services in minutes (just use DDG/protonmail/openstreetmap/etc instead). Obviously if you have an Android phone that's another step.

All I'm saying is it can be done, and it needn't be all or nothing: one can proceed in whichever increments make the most sense.


Both companies track you unless you're extreme with ad and tracking blockers.


Yeah, they do. But it can still be meaningful to boycott the services.


> But what's the difference between a government setting the rules and a corporation deciding the rules? We can vote out a government, but we can't vote out a corporation.

Governments can take away your freedom and life, corporations can not.


> Governments can take away your freedom and life, corporations can not.

I guess that depends on how you define freedom. If it merely means to exist outside of a prison, then sure. On the other hand, if freedom means the freedom to live, to gain autonomy and mastery over one's self, etc., then corporations are affecting your freedom and life every day through manipulation, pollution, lobbying, and anti-competitive behavior. The government is the only bulwark we have to effectively check social manipulation.


I meant the first thing, physically holding someone in prison.


Sure, but even then I wouldn't agree with you. Ke$ha was a hostage of Sony Records and her rapist, enforced by the US court system. Corporations will do whatever amount of damage they are allowed by law, including stripping a person of her economic and social freedom.


I see your point, but you're reaching.


Here's some good reading in case anybody believes this kind of stuff is new. They had it down to a science nearly 100 years ago.

https://en.wikipedia.org/wiki/Edward_Bernays


It is important that people know about Bernays, but the nuance of how you describe this is like saying "online commerce isn't new, people have been shopping for centuries." I'm sure Bernays dreamed of this kind of automation, or maybe it would even have scared him?


I meant that using media to manipulate public opinion is nothing new, and it's happening at a scale that people are barely aware of, though the mechanisms have evolved...


Oh totally, I just wanted to clarify for anyone reading that my incorrect interpretation was the one I first came up with. I also have a lot of trouble talking with others who think the scale changing isn't that big of a deal, when that's the biggest deal, and makes it all something entirely new, fundamentally.


A very good BBC documentary based on Bernays' work is "The Century of the Self":

The Century of the Self - Part 1: "Happiness Machines" https://www.youtube.com/watch?v=DnPmg0R1M04

TL;DR: The story of the relationship between Sigmund Freud and his American nephew, Edward Bernays. Bernays invented the public relations profession in the 1920s and was the first person to take Freud's ideas to manipulate the masses. He showed American corporations how they could make people want things they didn't need by systematically linking mass-produced goods to their unconscious desires. Bernays was one of the main architects of the modern techniques of mass-consumer persuasion, using every trick in the book, from celebrity endorsement and outrageous PR stunts, to eroticising the motorcar. His most notorious coup was breaking the taboo on women smoking by persuading them that cigarettes were a symbol of independence and freedom. But Bernays was convinced that this was more than just a way of selling consumer goods. It was a new political idea of how to control the masses. By satisfying the inner irrational desires that his uncle had identified, people could be made happy and thus docile. It was the start of the all-consuming self which has come to dominate today's world.

---

The Century of the Self - Part 2: "The Engineering of Consent" https://www.youtube.com/watch?v=fEsPOt8MG7E

TL;DR: This episode explores how those in power in post-war America used Freud's ideas about the unconscious mind to try and control the masses. Politicians and planners came to believe Freud's underlying premise - that deep within all human beings were dangerous and irrational desires and fears. They were convinced that it was the unleashing of these instincts that had led to the barbarism of Nazi Germany. To stop it ever happening again they set out to find ways to control this hidden enemy within the human mind. Sigmund Freud's daughter, Anna, and his nephew, Edward Bernays, provided the centrepiece philosophy. The US government, big business, and the CIA used their ideas to develop techniques to manage and control the minds of the American people. But this was not a cynical exercise in manipulation. Those in power believed that the only way to make democracy work and create a stable society was to repress the savage barbarism that lurked just under the surface of normal American life.

---

The Century of the Self - Part 3: "There is a Policeman Inside All Our Heads; He Must Be Destroyed." https://www.youtube.com/watch?v=ub2LB2MaGoM

TL;DR: In the 1960s, a radical group of psychotherapists challenged the influence of Freudian ideas in America. They were inspired by the ideas of Wilhelm Reich, a pupil of Freud's who had turned against him and was hated by the Freud family. He believed that the inner self did not need to be repressed and controlled. It should be encouraged to express itself. Out of this came a political movement that sought to create new beings free of the psychological conformity that had been implanted in people's minds by business and politics. This programme shows how this rapidly developed in America through self-help movements like Werner Erhard's Erhard Seminars Training - into the irresistible rise of the expressive self: the Me Generation. But the American corporations soon realised that this new self was not a threat but their greatest opportunity. It was in their interest to encourage people to feel they were unique individuals and then sell them ways to express that individuality. To do this they turned to techniques developed by Freudian psychoanalysts to read the inner desires of the new self.

---

The Century of the Self - Part 4: "Eight People Sipping Wine in Kettering" https://www.youtube.com/watch?v=VouaAz5mQAs

TL;DR: This episode explains how politicians on the left, in both Britain and America, turned to the techniques developed by business to read and fulfil the inner desires of the self. Both New Labour, under Tony Blair, and the Democrats, led by Bill Clinton, used the focus group, which had been invented by psychoanalysts, in order to regain power. They set out to mould their policies to people's inner desires and feelings, just as capitalism had learnt to do with products. Out of this grew a new culture of public relations and marketing in politics, business and journalism. One of its stars in Britain was Matthew Freud who followed in the footsteps of his relation, Edward Bernays, the inventor of public relations in the 1920s. The politicians believed they were creating a new and better form of democracy, one that truly responded to the inner feelings of individuals. But what they didn't realise was that the aim of those who had originally created these techniques had not been to liberate the people but to develop a new way of controlling them.

---

edit: copy/pasta from youtube description


In re: Part 2, Holy "Forbidden Planet" Batman!

https://en.wikipedia.org/wiki/Forbidden_Planet

I don't want to spoil it, it's on the Netflix.


That's so algorithmic, it's scary. Thanks for the links.


Saving, thanks!


> The politicians believed they were creating a new and better form of democracy

That's extraordinary nonsense.

> and the CIA used their ideas to develop techniques to manage and control the minds of the American people. But this was not a cynical exercise in manipulation

The CIA intentionally tortured Ted Kaczynski (inspiring him to become the Unabomber) when he was 17 years old, as an experiment in mind control. They were not trying to help anyone.


> That's extraordinary nonsense.

What are you basing this on?


Common sense? Does it seem likely that politicians concluded being able to manipulate people better = improvement to democracy?

Hmm, actually, you may have a point.


Wearing seatbelts?

Drinking less alcohol?

Eating well?

Not beating your wife?

Voting?

Finishing school?

Recycling?

Most of the ways our government wanted us to change in the last century were fairly positive and actually barely controversial.

It's the controversial ones we talk about.


I am not entirely convinced myself (I doubt there was a conspiracy so much as shared incentives) but the documentary is quite compelling, I recommend it.


If you're interested in learning more about media manipulation, I highly recommend reading the book Trust Me, I'm Lying [1]. It's shocking how easy it is to sway public opinion or plant fake stories that could have a national impact.

[1]: https://en.wikipedia.org/wiki/Trust_Me,_I%27m_Lying


Also see Manufacturing Consent by Chomsky:

https://en.wikipedia.org/wiki/Manufacturing_Consent


> It's shocking how easy it is to sway public opinion or plant fake stories that could have a national impact.

If you want a specific example, just read about Nayirah's testimony: how the Bush government, Kuwaiti diplomats, and a well-placed PR firm, with collusion by the news media, used an outright lie to sway the public into supporting war. Nobody was prosecuted for this; nobody went to jail.

If you want a second example, go read about the yellowcake propaganda based on forged documents: once again, a Bush government and well-placed PR firms, with collusion by the news media, using another outright lie to sway the public into supporting war. Nobody was prosecuted for this; nobody went to jail.

As George likes to say "Fool me once..."

"The first casualty in war is the truth".



Influence by Robert Cialdini also comes to mind.


> "I developed this for manipulating public opinion,” Peng told the Reporter, an investigative news site in Taipei, which partnered with BuzzFeed News for this article. He added that automation and artificial intelligence “can quickly generate traffic and publicity much faster than people.”

It's funny that Buzzfeed is reporting this because this is exactly the model that they created and helped disseminate all over the internet. It's a semi-automated system for creating clickbait to manipulate public opinion. I would love to see outlets like Buzzfeed, Huffpo... examine their own role in creating the world of clickbait, ragebait and misinformation that they so often complain about. That will never happen though.


I was thinking the same thing, is BuzzFeed reporting on this because they don't like competition?

I hate the new "hate clicks drive ad revenue" model of the internet and cable TV. It is so damaging to society, more so than any "fake news" thing posted on a random blog with a few wackos believing it. We're talking 1000x the exposure, and people believing it's "mainstream." Producers on cable TV purposefully tell people to play to an extreme; publications purposefully run pieces that get hate clicks and hate engagement because it boosts their numbers. It's all so horrible.


Everything that's old is new again.

Soon everyone will be so completely saturated with this type of news/entertainment, that I bet they'll crave things that are actually real.

I think on HN we tend to be ahead of the curve. Almost everyone on here has completely given up or is giving up on a lot of social media and news feeds.


>Almost everyone on here has completely given up or is giving up on a lot of social media and news feeds.

In fairness, HN itself is actually a social media news feed, and is also filled with its share of clickbait outrage porn.

The reality is that no user sourced digital medium is going to be free of these kinds of issues. If you could figure out a way to actually create a system that is free of these issues, you could probably make a billion dollars.


There is an easy solution but you are not going to like it... require real names. And verify them. And keep track of their reputations.

Edit: Proposed this as a solution for disinformation agents. Guess I picked the wrong sub-thread to place that. Sorry for the confusion.


> require real names. And verify them.

Facebook does this and even requires some users to upload a photo ID to prove their identity[1], and that site is a cesspool.

A side effect is that people like me, who have no desire to create a public index of everything they've ever said or done on the internet, will not use those products in any capacity.

[1] https://www.facebook.com/help/contact/183000765122339


In reality, Facebook has been quite unsuccessful at verifying accounts. [1]

[1]: https://www.cbsnews.com/news/facebooks-fake-accounts-doubled...


Hah, I deleted my account months ago and made a new account with a made-up name that literally has the word "Fake" in it, just to keep tabs on one or two groups whose events I'm interested in. Didn't even know that there was a risk they'd try to verify it.


Counteranecdote: I made a fake account for work purposes (needed to look at certain things on Facebook that you can only see while logged in, and didn’t want to use my personal account) and it got deleted pretty quickly.


My wife tried to create a new FB account a month ago after many years of not having one. Entirely genuine, real name, photo and cell phone number. Rejected as deemed to be fake.


YouTube implemented real names and that was an absolute shitshow. Not only did it fail to meaningfully reduce toxic content, by removing the protection of anonymity it enabled cyberbullying based on demographics: "Oh, now I know who you are I can more effectively harass you based on your sexuality/gender/ethnicity/etc." They rolled it back for many reasons, but that was a big one.


Yeah, now if you state an opinion that's too controversial or against the grain you get blacklisted by anyone with enough time to feed your name into a database.

Real names: no, no, no, no, no


Even better: as real names are stolen and traded, you can find out via identity theft that you're a real jerk online.


Have you seen the lowest-common-denominator Facebook feed? Plenty of people will post plenty of garbage of their own volition, even if you ignore the sharing of clickbait from other sources, and that's under their own name and face most of the time.

Reputation tracking is most useful for detecting bad-faith actors (i.e. the person "just asking questions" who regularly participates in extremist forums), but doesn't seem to be enough to promote any sort of self-moderation.


Even for disinformation that wouldn't work very well. Reputations are made to be gamed even when they aren't a concrete thing, proxies exist, and reputation itself is something exploitable for misinformation.

There is no easy epistemological shortcut to the truth (barring say mathematics and other verifiables), let alone the grey area of deliberately misleading truths.


This is exactly right. Folks could likely get by anyway, but it would be dramatically reduced.

Until there are consequences for lying online, and as long as there's money to be made by lying, enough people will be assholes and do it.


Politicians lie in real life and suffer few consequences.


Or at least a nobel peace prize


I think you're dead wrong, and you have only to look at the history of yellow journalism (in the US) or the daily cesspool of tabloid newspapers (in the UK) to see that there has always been a large audience for trash.


My problem isn't that tabloids have and always have had a large audience; it's that formerly serious newspapers like the FT are becoming propaganda outlets. Even news agencies which historically had very strong standards seem to think they should put their thumb on the scale, like Reuters embargoing a story on O'Rourke to give him a chance to beat Ted Cruz. There isn't any neutral, trustworthy news anymore. All news is now about pushing a narrative.


I think it's more like a fire that is burning down a system we might have called 'civic discourse.' Our societal communication is sustaining damage; it looks like Fury Road on the other side.

The theory of moral sentiments, the ideas of The Republic, are an architecture built over centuries of human civilization. The forces of anarchy and disorder are hard at work pulling it down.


And behind that disorder and anarchy are corporations and totalitarian regimes looking to weaken democracies for their own self-serving ends.


Hearst and Pulitzer newspapers pushed public opinion for the Spanish American War based on a lie. So the cycle is alive and well.



The Two Minutes Hate. Truly Orwellian.


Upvoted for reminding me about this. I'm probably due a re-read of 1984 as I'd completely forgotten about the Two Minutes Hate. Orwell really was a visionary.


The Two Minutes Hate is not something new; the difference is just how accessible creating and disseminating such sentiments has become.


And once "the internet" (some nasties behind it) found the way to make these "two minute" increments compound (e.g. YT showing a more extreme version than the one just watched), combined with profiling etc, then boom, CA and FB win elections ;)


This just once again demonstrates what Orwell really meant and America never heard: everything he painted in his dystopia was just as possible under authoritarian capitalism as under authoritarian communism. The American education system loves making kids read Animal Farm and 1984, but for some reason they never seem to mention that Orwell was an anarcho-socialist.


> We're talking 1000x the exposure and people believing it's "mainstream."

Twitter is now famous for this.

You'll see an account with two or three followers post some news story that pushes some agenda. Within minutes the tweet has over 8,000 retweets and likes. Less than an hour later it's trending and snowballs from there. Suddenly, the mainstream media picks it up and reports it as fact, even though it's blatantly obvious the story has been propagated and driven in a totally false manner.

It's become the blueprint for pushing propaganda from social media into the mainstream media, making public opinion remarkably easy to manipulate.


It has the ability to go even further than that. Once something has reached some ill-defined level of media coverage, it's eligible for inclusion on Wikipedia, which then populates Google SERPs and all manner of other services. (And in Wiki's case, even if someone figures out the BS, they won't be allowed to do anything about it, because the non-media coverage counts as "original research.")

Granted, this generally won't happen unless an actual shill for whatever cause is editing the site, but still. Misinformation has never been able to move faster.


The book "Anti-Social" by Andrew Marantz introduced me to the concept of "High Arousal Emotions", which is exactly what clickbait, advertising and 'social networks' take advantage of to get clicks.

Personally, I don't think it runs up against the first amendment to limit the use of such a blatant emotional/mental hack.


>I don't think it runs up against the first amendment to limit the use of such a blatant emotional/mental hack.

The Snowden leaks resulted in some fairly high-arousal emotions, so I would be careful with where you go with that.


I don't believe it should be limited, I am generally an absolutist on free speech. But I do think media outlets need an internal reform, or we need to start looking at alternative methods of revenue that don't rely on societal damage. I honestly don't even think some of these people mean what they say, it is very much an Ann Coulter methodology.


Of course, the massive irony is that, if it were accepted as true, the rationale is itself a "High Arousal Emotion" used to manipulate people into accepting limitations and seeing nothing wrong with them. That doesn't technically mean it's wrong (fallacy-fallacy style), but I find it reason to be suspicious.

Right or wrong with the First Amendment, it isn't exactly precedented in the best of ways, as the closest doctrines were abandoned for good reason: fighting words, and vague definitions of "inciting a riot" which mean "the mob really doesn't like you" instead of "calling for them to murder someone or burn something down." There is false advertising law, but that framework is far more limited by design. Even if considered a good idea, it would call for its own constitutional amendment.


I keep saying this: we don't need to be attacking the symptom, we need to be attacking the disease. Regulating what people post online will never get rid of the real problem; there is simply too massive an incentive to manipulate people for financial gain. Until that's fixed we're just going to keep putting out new fires every time someone discovers the next big unethical business practice that happens to work.


So what, pray tell, is the disease in your analogy? Human nature?


Honestly I'm surprised more people aren't burnt out on this. I can only be angry and get triggered by the same bullshit for so long before it just doesn't work.


Copied from where I've said this elsewhere:

Almost 10 years ago, I conducted an experiment. I watched an hour of CNN every night but it was never that night's coverage. It was from exactly two weeks ago.

It was amazing how much "breaking news!" was irrelevant or just outright wrong, how many large trend predictions were wrong, and how many "[person] will do X" were wrong. While the predictions could have been portrayed as opinions, they were presented as facts and the obvious next steps or conclusions.

I realized pretty quickly that avoiding CNN kept out the blatantly wrong information so even if I didn't replace it with anything, I was net ahead.

A few years ago, I discovered this article and realized that some portion of it was probably on purpose:

https://aeon.co/essays/how-the-internet-flips-elections-and-...


I gave up on TV news before I was even out of high school because I was fed up with the "milling" of incessantly covering a few topics to death with endless speculation, and often dropping them before they resolved. That, and youth-as-villain moral panics which were trivially bullshit, like rainbow parties (even the coolest of kids claiming to have gotten blown by seven girls with different color lipsticks would get cries of bullshit from those who actually believe that someone has a "girlfriend in Canada") and "pharm parties", which supposedly consisted of filling mixing bowls with random cabinets full of pills.

Even the dumbasses who stick paper clips in electrical outlets because they think it would be funny, or who are looking to get high, wouldn't do that, because it is not only obviously dangerous but unlikely to get you high. Smoking random literal area weeds would be more fun and less stupid.


> I watched an hour of CNN every night but it was never that night's coverage. It was from exactly two weeks ago.

This is an amazing thought experiment. I wonder if using this method, but extrapolating to multiple competing news sources, could remove bias in both recency and consistency.


I think the main conclusion of that thought experiment was that you're better off not watching the news in the first place. No point in trying to see if you could average out bias by watching multiple news sources; you've failed the moment you've started to watch any of them.


An old quote I heard once: “The news doesn’t tell you what to think; the news tells you what to think about.”

Doesn’t matter if the emotional jerking doesn’t work anymore. Your mind is still hijacked.


But isn't that a sinister side of it too? Burn out and disengage those of us who are rational and reasonable.

Only 150 more years till we can apply to Starfleet...


Being disengaged from mainstream news reporting and clickbait articles isn’t the same as being disengaged from news, reading articles, and being informed, though. It just means that instead of following the outrage treadmill and blindly trusting the first source you encounter, you get more emotionally disengaged and have to do more work to filter your sources and read multiple different coverages of the same piece of info, to get a fuller and more objective picture that isn’t clouded by the lens and emotional coloring through which a source could present it.

It does take more of your willpower and effort to do all that, but that’s the price we have to pay to get a more objective view of something. Back in the old days, the access to information in the first place was what people had to work hard for. In the age of information abundance, you still have to work to get it, but now you have to do more filtering and less figuring out how to access it in the first place.


I agree. About a year ago I made the conscious decision to remove the vast majority of "news", particularly political news, from my life. On Reddit, I blocked all political-related subreddits, removed CNN et al from my bookmarks, unsubscribed from news-related podcasts, removed all news apps from my phone, and I only browse sites /subreddits that pertain to specific interests of mine (HN, subreddits about programming, technology, fitness, financial news about specific companies I am following, etc). I will open a "news site" very rarely when I catch wind of a significant event happening (impeachment, bombing Iran), but again this is very rare. When it comes time to vote, I'll spend some time doing active research on current topics so I can make informed decisions, and the active research helps significantly to avoid clickbait/ragebait that pops up when passively browsing the internet.

I can say without a doubt it has made a significant improvement to my general mood and demeanor. I no longer get sucked into a trap reading infuriating news about the government or inane comments on social media sites. Now when I do happen to come across a clickbait/ragebait headline, my brain seems to just ignore it and carry on with life. Sometimes my friends will bring up the latest "omg Trump, did you hear?" news while we hang out and it will devolve into a bitchfest where they get visibly angry as they talk about it, meanwhile I just sit back and say "I have no idea what you're talking about". Ignorance is bliss, and I say that completely unironically.

It is a little sad because I previously loved being "in the know" and always kept up with the news and wanted to be involved in politics. I miss that aspect a little bit, and I'm certainly wary of the greater effect if everyone in society just disengaged from public debate, but for the most part the improvements have far outweighed that negative.

If you find yourself spending more than even a couple minutes a day being angry/stressed at current events, I strongly recommend limiting, if not totally cutting out, that type of news. It really is great.


One thing you can actually add: local news.

The local news people don't put every fire or robbery through some ideological/woke lens, generally, it's just facts.

Moreover, that it happens near to you gives it some extra empathetic relevance.

When it's 'people you kinda might know' you don't think of it as an abstraction.

I'm in Montreal and I watch PBS Vermont often. Burlington/Montpelier local news. It's so provincial it's almost funny.

It's really refreshing to see regular people, and to know that even if the events are 'local' - it's these kinds of events that are actually most relevant to most people's lives. The political stuff is weirdly not that important.


Yes, I deleted reddit altogether and rarely read/watch the news. Yes, maybe this is a "privileged" position. But it's not as if I can affect much other than local happenings in my community. The thing about most political issues is that they are all more nuanced than we pretend, and unless we're experts, we're probably wrong and/or underinformed, so it's mostly a waste of time anyway. I stick to a few personal axioms and leave the rest.


Sounds like you're a few axioms short of nihilism.


Another approach is trying to take a disciplined, abstract view of the situation.

For the news, pay less attention to the content of the news, but the style in which it is delivered, paying particular attention to word choice, chosen perspective, suspiciously excluded details, double standards, epistemic soundness (how would one actually know the "fact" that is being reported), etc.

For internet conversations, try to remain undecided on the particular issue being argued, but closely observe the nature of the conversation, using the same techniques as above.

I think if you can manage to do this skillfully, what would normally be an exercise in frustration and stress can transform into a pleasurable study of the nature of human beings, if you're into that sort of thing.


I'm with you, though it's hard to talk about what you observe from this point of view with people who are neck deep in a given narrative. It's isolating at the same time as enlightening.


Completely agree. On one hand, this seems like little more than plain old common sense, little more than observing the peculiarities of human psychological quirks in action. But then on the other hand, I can't escape this feeling that there's something actually quite interesting going on here...more specifically, that relatively more intelligent people tend to be aware of these psychological phenomena, and are able to discuss them when the topic is the phenomena themselves, but when the topic is something else, this ability/knowledge "seems" [0] to ~vanish. And it seems it's not only that a strong psychological resistance to the phenomena arises, but that perhaps something occurs in the mind that makes prior knowledge ~literally inaccessible.

It seems fairly unlikely that this is a novel idea, but I've yet to come across any literature that discusses it directly. I imagine part of the problem is that studying such a thing would be incredibly difficult.

[0] I say "seems" because I am running purely on heuristics derived from aggregate patterns of aggregate behavior, comments, and voting - to be more certain, one would require the ability to somehow monitor individuals to see if this theory can actually be observed at the individual level.


They aren't exactly two distinct things, in the sense that if you can drive hate clicks with fake news you just put yourself in business.


BuzzFeed has an investigative news group[0] that is separate from the clickbait they got famous for. They publish articles closer to propublica than top 10 lists.

[0] https://www.buzzfeednews.com/investigations


And yet they have been caught publishing some of the most overt lies of the last couple of years, while doubling down on them. I keep seeing people try to defend them here, but they have already ruined any credibility they had, both by their associations and their behavior. It's not worth it; find a better institution to defend.


Genuinely curious what exactly you're referring to? I remember them publishing the Steele dossier, which had both truth and lies in it, but they added a strong caveat that they weren't able to verify the contents.

How is that different from what WikiLeaks did (with regards to NSA spying and the Manning leaks) that HN praises them for?


Mostly recalling some of the claims and allegations they pushed during the Kavanaugh debacle, which were very quickly destroyed even by other publications carrying more serious allegations.


They published a report that alleged Trump directed Michael Cohen to lie to congress. Mueller had to break silence to issue a denial about this.

https://www.nytimes.com/2019/01/19/business/media/buzzfeed-n...


The Mueller team (under inappropriate explicit pressure from the White House / DOJ) made a denial presumably because they wanted to be as careful as possible to preserve Michael Cohen’s and their own credibility for possible trial/etc., to avoid any impression they hadn’t dotted every i, and perhaps in part to appease the White House.

Any “inaccuracy” in Buzzfeed’s reporting was more or less based on a semantic dispute. Under a narrow definition, Trump didn’t “direct” Cohen to lie to Congress, he just suggested it using mob-boss-type language and coordinated the lying testimony through his other lawyers, and his longtime fixer Cohen knew how to read between the lines.

Immediately after Cohen’s lying testimony, one of Trump’s lawyers then called Cohen to congratulate him and tell him Trump was happy with his performance.

Buzzfeed stood (and still stands) behind their story, and the Mueller report and Cohen trial materials largely corroborate their reporting.

However, the Mueller team concluded that there is not enough direct evidence to e.g. indict Trump for suborning perjury in this case.

* * *

The people still selling the story that Buzzfeed completely screwed up and had their facts wrong are (hopefully unwittingly) peddling the same kind of disinformation that the article currently under discussion is talking about. There are some wealthy and powerful people trying to push this message down to further their own antisocial agendas for personal benefit.


"made a denial presumably because they wanted to be as careful as possible to preserve Michael Cohen’s and their own credibility"

I'm exceedingly doubtful that Mueller et al. would make statements that were misrepresentative or lacking in credibility.


Huh? I said their goal was to maintain their own credibility: they didn’t want the general public to misconstrue the Buzzfeed News article’s use of the word “direct” to indicate that the president explicitly said words like “Mr. Cohen please go lie to Congress” or some similar completely clear statement suborning perjury, which they didn’t find evidence of. The President’s desire was conveyed via implication rather than as a direct order, and coordination about the finer details was done through his other lawyers rather than personally communicated.

But when Buzzfeed asked several outside legal experts (not Mueller’s team), they supported the article’s use of the word “direct” to describe the President’s communications with Cohen. The way Buzzfeed’s critics have attacked them for this story is largely disingenuous, especially after the first few months, when additional evidence came to light largely corroborating Buzzfeed’s reporting.

As I said, this is a semantic dispute about the meaning of the word “direct”. There is plenty of available evidence that the President wanted Cohen to go make lying statements to Congress, and successfully (using his typical mob-boss-style language) communicated that desire to Cohen, and then followed up with congratulations about a job well done afterward.

But the Special Counsel’s office presumably wanted to avoid any possible confusion about the precise nature of available evidence that might undermine their credibility if taken up by e.g. right-wing media pundits.


You misread what I wrote.

I didn't say they were protecting their credibility or not.

I said they wouldn't say anything that lacked credibility.

I'm saying the Mueller team is not going to lie, whatever they are saying. They are careful and deliberate.


> I said they wouldn't say anything that lacked credibility.

Yes that was precisely my point: the Mueller team made a correction out of an abundance of “careful and deliberate” caution.

They did not want the public to misconstrue the Buzzfeed News article’s language that Trump had “directed” Cohen to lie to mean that the Special Counsel’s office had uncovered an explicit statement to that effect. The main thrust of Buzzfeed’s reporting is clearly correct, but Mueller’s office wanted no room for misinterpretation based on differing understandings of the word “direct”.

I don’t understand why you would say you were “exceedingly doubtful” only to repeat my same argument.


I was actually agreeing with you by making a more general point, i.e. irrespective of the specifics, Mueller et al. are super credible, so I think we should take what they say as effectively the truth.

Re-reading it, I see how someone might think I was disagreeing.


Uh hello? https://www.cnn.com/2019/01/18/politics/mueller-statement-bu...

This is a titanic error, any new scoop from them must be verified from other sources now for them to be taken seriously.


The original BuzzFeed News article[1] says that Cohen was telling their reporters what he told prosecutors. They did not report it as undisputed fact.

Is it not newsworthy if a major witness is repeating his testimony to a reporter? How would you prefer they reported it? If news orgs never published the comments of known liars, we'd have very little political news.

BuzzFeed News also backs up Cohen's claims with the transcript of his House testimony[2].

Mueller's office (per your link) was pretty unspecific about what part of Cohen's statements they thought were misleading.

1. https://www.buzzfeednews.com/article/anthonycormier/cohen-tr...

2. https://www.documentcloud.org/documents/6021026-Michael-Cohe...


That's false. Cohen was not the source of the fraudulent claims, rather he was the subject:

"President Donald Trump directed his longtime attorney Michael Cohen to lie to Congress about negotiations to build a Trump Tower in Moscow, according to two federal law enforcement officials involved in an investigation of the matter."


I think the comment you are replying to is awkwardly worded, or misunderstanding the sequence of events or something.

Nevertheless, the article they linked to adds weight to the idea that the Buzzfeed reporting was definitely not "fraudulent" (as you characterised it).

Notably this exchange:

“So we’ve identified two crimes that you say you believe Donald Trump in some way directed you to take the actions for which you have pled guilty?” asked Rep. John Ratcliffe, a Republican from Texas.

“No sir,” Cohen said. “Three.”

“Ok. What is the third?”

“The third one is the misstatement to Congress. Two for campaign finance violations and one for misrepresentation — well, for lying to Congress.”

Now it's true that Muller didn't find enough evidence to support that. But nevertheless, Cohen certainly believed it, and claimed it to congress, and what he claims mirrors what Buzzfeed reported.

If we are discussing the reliability of Buzzfeed - well they reported something that ended up being confirmed by the person they were reporting about. I think that makes them at least somewhat credible.


How is that bad? Cohen claimed and still claims that. He even did so under oath. Something to the effect of, “he says it without saying it but I knew what he meant.” Not really any other reason for Cohen to lie to Congress than to help Trump.


This was the highest-profile fuck up by a news organization in 2019. They didn't merely publish propaganda or disinformation: they published fraudulent news of the highest consequence, so bad that the Special Counsel had to issue an emergency statement to prevent all hell from breaking loose.


Buzzfeed later wrote an explanation of their reporting[1].

It's interesting - Buzzfeed's claim is:

The facts of Cohen’s lies and his interactions with Trump are, largely, now settled. Our sources — federal law enforcement officials — interpreted the evidence Cohen presented as meaning that the president “directed” Cohen to lie. We now know that Mueller did not.

The Mueller denial of the Buzzfeed reporting is very limited:

"BuzzFeed's description of specific statements to the Special Counsel's Office, and characterization of documents and testimony obtained by this office, regarding Michael Cohen's Congressional testimony are not accurate,” Robert Mueller’s spokesman, Peter Carr, said.

This was prior the Mueller report being released.

We now know the Mueller report says: "While working on the congressional statement, Cohen had extensive discussions with the President's personal counsel, who, according to Cohen, said that Cohen should not contradict the President" and "Cohen also discussed pardons with the President's personal counsel and believed that if he stayed on message, he would get a pardon or the President would do "something else" to make the investigation end."[2]

The dispute seems mostly around the term "directed", and if it was directly by Trump or by his legal team. Both Mueller and Buzzfeed's sources agree that Trump and his legal team knew in advance that Cohen's congressional testimony contained lies.

The Buzzfeed reporting was based on an official's notes that said “he was asked to lie by DJT/DJT Jr., lawyers.”

PolitiFact agrees it is open to interpretation.[3]

In any case, it seems calling it "fraudulent" is going too far. It seems like there is broad agreement that Trump didn't use the words "Please lie", but he did imply that is what he wanted and that Cohen would be rewarded if he did, and Trump's legal team approved the statements that they knew included lies.

I'm going to do an HN taboo here and talk about voting. I realise this is an emotive subject and people have their predefined views. I'd ask people not to just vote based on whether they like Buzzfeed or not, and whether they support Trump or not, and instead consider whether anything here is information they didn't know before. I believe that the answer to misinformation is information, and I've tried to gather as much relevant information as possible here and present both sides as clearly as possible.

[1] https://www.buzzfeednews.com/article/bensmith/how-we-charact...

[2] Mueller Report, part 3, page 134 https://www.documentcloud.org/documents/5955118-The-Mueller-... (Note that because of the weird pagination in this document you need to go to page 346 of this link to read part 3 page 134)

[3] https://www.politifact.com/punditfact/article/2019/feb/28/di...


What organization is really even credible anymore? In Germany, I liked to read the SPIEGEL magazine (not Spiegel Online, that's half a step short of tabloid) until the Claas Relotius case happened [1]. They tried to save face and apologized etc. etc., but who says that just fixes it?

The Internet that has developed since the first ad-monetization almost demands, and at least incentivizes, shady behavior if you want to economically survive as an information processor - news outlet or others.

Many of us on HN would gladly pay for a premium version of something we enjoy, but I fear the vast majority of people want everything for "free".

[1] https://en.wikipedia.org/wiki/Claas_Relotius#Fabrication_of_...


> What organization is really even credible anymore

Most of the major newspapers in the US are fairly credible, despite having issues like 'both-sidesism' and not doing well with asymmetry.

'Credible' doesn't mean "no mistakes". It means they own them and apologize for them. The Economist is still occasionally apologizing for buying into the WMD narrative during the run up to the war in Iraq.

The danger in "no one is credible" is that it elevates flat out propagandists to the same level as institutions that mostly get things right.

It's sort of like science as a process: what we think we know now may need correcting in the future, but real scientists have a process that allows for (sometimes difficult) course correction.


Almost all American outlets are biased.

There are many with 'high integrity' (i.e. fact check, in depth, write well) but there's so much editorialisation, that they are biased.

You almost have to read the news off the wire, or watch local news to get straight news. Almost everything on CNN, Fox, NYT, WSJ etc. is editorialised in some way, even the non-opinion pieces.

Edit: to anyone that doubts this, consider spending a month reading outlets that you might suspect 'have a bias' (i.e. 'the evil other side'). It becomes very clear, very quickly. Some of the most prominent forms of editorialising, even in the more straight news items comes from what they decide is newsworthy, how the headlines are worded, the facts they decided to leave in vs. what they leave out. The kinds of guests, the form of questions. The main evening broadcast news in the US is decent, but almost everything on cable or in print has bias, even when it's 'high quality'. I should add that this is not an American phenom, there are hardly any large 'straight news' agencies in the world; maybe the BBC.

It's also helpful to read/watch the news from a different country, where you don't have a 'stake in the game' so to speak, and it becomes evident. If you use Google Translate on Die Welt, Der Spiegel, and Die Zeit - you can see how the same news is reported differently.


I don’t believe the BBC is any better than NYT, WSJ or WaPo. I have seen that the BBC’s coverage of India can at times be highly slanted and its headlines editorialised, even when the facts are correct, to the extent that you walk away with a different impression than what actually happened on the ground. If they can be slanted about one topic, it would be irrational to assume that they won’t be slanted about something else. Just that you’ll never be able to figure out whether they were in fact biased or not, because of the Gell-Mann amnesia effect.


The BBC is definitely better than WSJ, WaPo or NYT for straight news.

First - they are substantially bigger and have much wider operations than any of them - by far. They have global correspondents etc. - a much bigger news room.

Second - they are neutral. NYT, WSJ and WaPo are not. Their editors would admit that, clearly. NYT is a left wing American commentary. WSJ is an economically liberal entity. The BBC actually has oversight and scrutiny because it's a public institution.

It's easy to demonstrate: take any contentious news item of the day, and then see the coverage by those outlets. WSJ won't even cover social issues. The BBC generally runs straight news, while the NYT will have a lot of editorial coverage and op-eds on it.

As for 'India' - you're confusing short (or wrong) coverage with bias.

I'm from Canada - and I see this all the time: US outlets constantly misrepresent Canadian political issues. This is not because they're bad, it's that different nations are hugely different contexts - it's often very difficult to communicate nuance without spending an hour going over the issues. And sometimes they just get it wrong. Indian political affairs are complicated - it's hard to narrow anything down to a few sound bites without getting some things wrong. I'll also bet $100 that none of the NYT, WaPo or WSJ even touched on whatever Indian subject the BBC was covering; their readers don't care, and they don't have the budget or correspondents. I should point out that the BBC obviously has a national bias - most outlets do.


If only American newspapers could get over their bothsideism, then they could report exclusively on the One True Side. Although that might be dangerous since journalists aren't experts in everything and so aren't qualified to choose sides (even if one side seems stupid to experts.)


There are plenty of examples where there is absolutely a side that is correct and one that is not.

Do you think the earth is flat or that people who say so should be given equal time?

There are plenty of other instances where one side is demonstrably correct.


As always, thinking about flat Earthism is a terrible exercise for your brain. The fact that a few people exist who are utterly and obviously wrong about something, and that you’re not among them, should not encourage you to think that you’re probably right about any other issue.

I like Scott Alexander’s essay on this subject, The Cowpox of Doubt: https://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/

Basically, I think an irrational tendency towards “both sides have a point” is a lot better than an irrational tendency towards “my side is right”, and that humans tend to err in the latter way about ten thousand times more often than they err in the former.


>There are plenty of other instances where one side is demonstrably correct.

To whom, an expert or a journalist? Flat Earthers can beat many people in arguments about the earth being flat, because general science knowledge is not very widespread. I could find plenty of journalists that don't know about, for example, the shadow length thing. The idea that newspapers should only quote truth-speakers does not address the reality of the limited knowledge of the journalists themselves. For them, flat earth theory is a choice between either ignoring everything but the mainstream consensus, or sometimes reporting on fringe groups. Clearly the second option is the right policy, especially because reporting on someone's claims does not imply that the newspaper thinks they are true. It may not be a fact that the earth is flat, but it is a fact that flat-earthers think the earth is flat. It may even be newsworthy.


Good journalists are bright enough to get enough information from enough people, assimilate it, and write it in a clear way for the rest of us.

They might do a 'human interest' story on crackpots like flat earthers, but good ones wouldn't "both sides" that issue, just as they shouldn't with other issues where there is strong scientific consensus, or verifiable facts demonstrating the veracity of one side's claims, and none on the other.

This isn't a new problem, and good journalists are capable of handling it.

Will they get it right 100% of the time? No. But the important thing is that there's a process and they're trying, and they'll admit it if they get it wrong. People get fired for getting things egregiously wrong.

None of that is true for propaganda outlets.


Good journalists will do the right thing no matter the standard policy, it's the bad journalists that need culture impressed upon them. Bad journalists are not good at telling who is right, so that makes bothsideism a good standard policy.


Agreed. Plus, print is also harder to quietly modify; silent post-hoc 'corrections' to online articles that wildly change the article's conclusions are the norm, with no rigorous path to present those changes to those who consumed the article previously.

A dedicated print subscription forces the printer to weigh the consequences of correcting an error more heavily, and a regular reader of a subscribed publication can be presented with corrected errors on the front page of each day's edition.

(Not that any of this would happen in print, either; and not that this is impossible with internet media; but rather that the natural consequences of the print medium pushes heavily towards a different set of behaviours. I doubt anyone would subscribe to an RSS feed of errors.)


I think der Spiegel is still pretty reasonable, and die Zeit remains a weekly go-to for me.

What shocks and saddens me is the complete decay of the French language news, quite sad for such an intensely literate culture.


Always amusing to see the vote suddenly plummet in a very narrow window of time. Don't let anyone tell you HN doesn't have Reddit-style vote brigading. People get angry when you touch their institutions.


Buzzfeed has never really enjoyed a reputation of being a trusted news source[1], so this notion that anyone countering your outright dismissal of the publication is defending it is just silly. It's also just disproportionate considering outlets which are more widely trusted are given the benefit of the doubt despite egregious errors or lies. (such as NYTimes)

[1] - https://www.journalism.org/2014/10/21/appendix-c-trust-and-d...


[flagged]


Yet there are still people that can't seem to separate the two.


Maybe they should ditch the baggage and drop the Buzzfeed name then? I'm sure Theranos had an ace marketing department, but you wouldn't use them following the scandal.


That probably would be wise. But how long until "But they're owned by BuzzFeed!"


The purpose is to muddy truth and inflame emotions for clicks. They prefer the now known name. They aren't in it for reputation - theirs is as garbage as their publication.


This. Buzzfeed apologetics is so tiring.


Your local town newspaper has the last eight to thirteen pages of it devoted to paid classifieds and syndicated cartoons, but I think that most readers are more than capable of separating the two. I certainly haven't seen too many people rant about how the local paper is awful, while citing those two sections as Exhibit A and Exhibit B. [1]

For some reason, though, people seem utterly incapable of doing the same for Buzzfeed.

[1] There are plenty of reasons for why a local town paper may only be good as toilet paper, but the presence of classifieds, obituaries, and Dilbert cartoons is rarely the cause of it - or even a correlating signal.


Are the cartoons and classifieds even remotely comparable to news stories in how they're formatted? Like, maybe I'm missing something here, but usually the latter is clearly done in an 'ad' format, and the former is literally a different medium.

Both types of Buzzfeed stories are formatted as news articles with text and images - more images for the non-news ones, but still a much more similar style.


And is the clickbait of 'Top 8 reasons for why _____, reason 3 will surprise you' remotely comparable to... Actual journalism?

We don't have to speculate. We can compare the two. Can you guess which is which?

[1] 19 Tweets That Will Surprise You Then Crack You The Fuck Up

[2] A Chicago Cop Is Accused Of Framing 51 People For Murder. Now, The Fight For Justice.

It's been a while since I've been one - but I think a literate six-year-old could tell the difference between the two.

If we want to reduce the distinction to absurdity, though, I'll point out that both the classifieds, and the investigative piece on page 4 are just black, inky marks made on newsprint.

[1] https://www.buzzfeed.com/pedrofequiere/im-weaaaaak

[2] https://www.buzzfeednews.com/article/melissasegura/detective...


> to manipulate public opinion

Was Buzzfeed trying to manipulate people or just get clicks?


> just get clicks

Just because you have one goal (in this case traffic) doesn't mean your actions don't have other effects.

Generate clickbait that, for example, leverages the distrust the public has in a group, or is based on the desire to see someone/some group get their comeuppance, or the desire to hear tales about how your group is being exploited, etc, and you start manipulating public opinion about those things.

Fox News, for example, started conservative but far less radical... I don't know if they were radicalized by their own success at this sort of manipulation, but it is certainly an option.

Once you have success at manipulation, even if that wasn't actually your intent, a financial incentive appears to MAKE it your intention.


> Fox News, for example, started conservative but far less radical.

Fox News was created to be a GOP propaganda vehicle [0]. They've gotten more radical as the GOP has because that's why Fox News exists. There are interesting questions around whether the tail is wagging the dog, but I think the simplest, most accurate perspective is to treat Fox and the Republican Party as a single unit whose goal is always to maximize GOP power.

[0]: https://www.businessinsider.com/roger-ailes-blueprint-fox-ne...


I think intent is pretty important to the concept of "manipulation": you don't often hear the sun being accused of manipulating the weather, despite the overwhelming influence it has over it.


Surely, to cause people to be angry and misinformed is bad whether it's part of the end goal or just for money through clicks.

I'd say it was worse a couple of years ago, though. It's as if there were a generation of young men and women in journalism who had grown up on Something Awful and 4chan, and whose main marketable skill was farming negative attention online. The craze for hiring that sort as "social media managers" has died off a little.


That's because manipulate implies active control, which the sun doesn't have. As a different example, you can manipulate the levers of a machine and, if you are not skilled or paying attention, get a result you did not plan for.


That is simply poor manipulation: intent was there but understanding was lacking, and so the outcome was not as desired.


An intent was there, but not the intent to create what became the eventual outcome. Buzzfeed has an intent - to create clicks. What GP was saying was that this creates an unintended effect, that of political and social manipulation, which unfortunately ends up as an incentivized loop for Buzzfeed.


>Fox News, for example, started conservative but far less radical.

I think you're really stretching the definition of radical. That the views may be far from your own does not make them radical.


It is absolutely true that views far from my own aren't automatically radical.

But the overdramatic "they are coming for YOU" rhetoric (just as an example, I recall a lead-in to a discussion of food stamps that showed a fist punching through a map of the U.S.), the messianic treatment of Trump, the extreme yet hypocritical positions: these all lead me to conclude they've shifted not just "far from me", but into the radical.

I try to get news from multiple sources to reality-check my own biases, and I've regularly seen Fox report "facts" that no one else is, while avoiding big news that they don't like. I've seen them parrot lines from Breitbart and other sources that are widely considered unreliable and extreme. I've seen plenty of respectable conservative news outlets distance themselves from Fox News reporting more than once.

(There are studies that show that Fox viewers are less informed on issues than average, but those studies haven't done a good job on determining causation, and less informed does not equal radical, so I'm not basing my opinion on those.)


Isn't getting clicks manipulating people into clicking?

We might want to say it is categorically different than manipulating people in other ways such as getting them to buy certain products or vote certain ways, but if we go down that path then I think we can begin saying that about most forms of manipulating people and thus we would need to spend a bit more time working on a standard of how acceptable different forms of manipulation are.

There is also the question of how you draw the line between manipulating someone, tricking someone, educating someone, and convincing someone. If scientists are trying to warn the public about the dangers of climate change, are they trying to manipulate public opinion, educate the public, or convince the public?


> If scientists are trying to warn the public about the dangers of climate change are they trying to manipulate public opinion, educate the public, or convince the public?

Depending on the person, the methods used, and the level of integrity maintained, some combination of all three.

If you suppress legitimate criticism and intentionally distort facts, you are engaging in trickery.

If you correct misinformation and do your best to present an accurate representation of your understanding, you are educating.

Generally, scientists tend to do a pretty good job of focusing on education, but the dynamics of the discussion around the information they share tends to cloud that distinction.

The problem is that many groups have decided that trickery is more convincing than education, and that this justifies compromising ethics and integrity. (While other groups seem to have had no integrity to start with.) As a result, discussion of the distinction between education and trickery, and accusations of trickery, often drown out the actual attempts at education.


We might have a problem because scientists spend too much time just giving the information and not enough working on the 'manipulation' side of it. For example, take any news site dedicated to scientific news and look at how much even they will twist the facts to make it easier to digest and more interesting.

Scientist: Chemical XYZ seen to reduce growth rate of cancer ABC cultured in a petri dish compared to control group. Around 10% reduction average, p < .01, see table 4. Not statistically significantly better than chemical MNO which was also being tested. Further research needed.

Science News: Chemical XYZ helps fight cancer ABC.

Normal News: Does <something that contains chemical XYZ> cure cancer?

If scientists were better at manipulating education to be engaging to the public they wouldn't lose out as often to those pushing fake (or at least far more questionable) information.


And then we get science reporting like this: https://www.smbc-comics.com/comic/science-journalism


> If you correct misinformation and do your best to present an accurate representation of your understanding, you are educating.

That is, if you treat the other side as a person and not as an ignorant ape. There are very well-known, renowned scientists in my country who, despite being right on all fronts (in this specific case, vaccinations), harm themselves spectacularly by being complete pricks and treating critics (no matter how feeble the arguments against vaccines are) as sub-humans.

I had the opportunity to participate in a course about scientific communication a couple of years ago. A key point we were told is that you have people on the other side, not blank slates waiting to be written on. In other words, when communicating science, the best you can do is present all the facts, correct misinformation and so on, but leave the final decision to whoever is listening. You give them all the elements for a proper judgment, but you leave the judgment to whoever you are speaking to.

Perhaps people won't be convinced. Perhaps they'll believe you only partially. But IME you get far more interest from them that way (I once participated in a "meet the scientist" event, answering questions from the general public).


>> if you suppress legitimate criticism

Climate-change alarmists refuse to acknowledge that any criticism could possibly be legitimate. Every questioning of the narrative, even just a bit, is dismissed -- funded by Big Oil/Koch/right-wingers, not a "real" scientist, too stupid to understand why "adjustments" were necessary, "the science is settled!" -- etc. In many cases, editors insist critical articles be completely deleted/removed, rather than available to even be seen or discussed.

Example -- recent (2 days ago) post offers an alternative interpretation of respected climate scientists' own published data. Flagged almost immediately: https://news.ycombinator.com/item?id=21961462

Disagree? Please point to criticism of climate science widely deemed to be "legitimate."


Such criticism has become the 'mortal sin' of modern science, in a similar way to criticisms of evolutionary theory. If a theory has no criticisms, it means no one is really thinking hard about it.


Alarmists, or extremists on both sides, are never really the best basis for such arguments.

> funded by Big Oil/Koch/right-wingers

I mean, after all, we all know that George Soros is financing climate change activism and protests, right? /s

Better is finding the calm voice. Looking for alarmists and squawking is always going to turn up polarizing, and often indefensible, positions.


Yes. Manipulate people to click. Once you get that down, you can point the manipulation in different directions.


If your goal is just to monetize your website (via ads or perhaps sponsored content), manipulation doesn't really seem like it's in your wheelhouse. I suppose it's not utterly implausible that a political party or other major player could attempt to pay off an outlet's owners to use them for manipulation, but that seems a lot more expensive than just astroturfing on twitter and facebook.

One notable counter-example to my POV though would be that the Koch family has a tendency to hand big chunks of money to think-tanks, websites and universities for promoting their agenda. Not really sure what the best take-away from that is, though.


I would say yes, but I think BuzzFeed and BuzzFeed News should be separated here, because there is indeed some good investigative work.

Also, relying on the tabloid ("Boulevard") press as a news source is probably the fault of the reader.


> Was Buzzfeed trying to manipulate people or just get clicks?

Publishing is an act of manipulation, including getting people to care about what's happening around them (which is typically considered a good thing).


What's the difference, practically speaking?


Getting clicks and trying to change public opinion I think are different.


Given how much of this is driven by upper management and ownership - just like Pivot To Video and other historically disastrous moves - I'm not really sure how much they can self-interrogate here. It'd basically end up being a takedown of their bosses, which always goes over well. If you're told to publish multiple articles a day (or even one piece per day) this limits your ability to do deep research or heavily-edited writing, and policies like that are common.

FWIW some unionized web news outlets HAVE been writing about this lately, but it's usually in response to ownership laying off chunks of their team and telling them to stick to clickbait. It happened recently at a couple of outlets that were (years back) originally owned by Gawker and changed owners multiple times.


Yeah, the main complaint of the ex-Gawker outlets seemed to be that management weren't letting them use their clickbait to manipulate public opinion in the political arena anymore and were making them stick to topics that were actually related to what the site was ostensibly about.


What is a site "ostensibly about"?


There is absolutely a distinction in getting website traffic and paying to manipulate public opinion.


> this is exactly the model that they created

My thought exactly. It's also one of the reasons I straight-up blacklist BuzzFeed in my NextDNS account.


The marketing departments at modern news companies are already pitching 'rebranded' forms of this model to management under the premise of "KPIs" and "user engagement" to hit their quarterly quotas.


I know right, how can we give any credence to this thoroughly researched report about how technology is being used to promote state sponsored disinformation campaigns, when Buzzfeed is out there AB testing headlines?


Buzzfeed also does solid reporting. When they announced that they would start doing so nobody believed them.

I don’t know the relationship between the two sides of the business.


BuzzFeed News is an internal division inside BuzzFeed dedicated to actual fact-checked journalism. I think they've shown up on HN a few times before.


Buzzfeed News ≠ Buzzfeed

It’s an important distinction :)


I don't think this is a useful comment. This comment is an example of "whataboutism", which is a logical fallacy [1]. It discredits the article because the article contradicts Buzzfeed's past actions, without actually making any statements about the merits of the article's claims.

I think it's dangerous to commit this sort of logical fallacy, especially when discussing online trolling and disinformation campaigns, because logical fallacies like these are the lifeblood of disinformation campaigns. For example, whataboutism was and is commonly used in Soviet and Russian propaganda.

[1] https://en.wikipedia.org/wiki/Whataboutism


What are some examples of BuzzFeed promoting rage and misinformation?


You can find many examples here:

https://www.buzzfeed.com


> It's a semi-automated system for creating clickbait to manipulate public opinion.

What differentiates this from any other media outlet?


I find Buzzfeed News to be consistently factual and well-written. If anyone has evidence to the contrary I'd rather they discussed that than simply maligning the outlet


Vernor Vinge's https://en.wikipedia.org/wiki/Rainbows_End was quite prescient on this point, describing a world where the Internet largely becomes useless due to infinite disinformation crowding out the real.


I think eventually (hopefully?) most folks realize that the only solution is smaller, highly-moderated communities (like this one), and even then it's important to be aware of the implicit underlying biases of those communities. Some really great smaller subreddits also come to mind.


The downside is that small communities become echo chambers and cause people to further polarize themselves. It is vitally important to engage with a diversity of opinion if you don't want to become radicalized, especially with viewpoints that you don't agree with.


Yeah, or things like Mastodon instances, moving from twitter to Mastodon has been the best media consumption decision I've made in the past couple of years.

I didn't notice how bad twitter actually was until I was away from it for a while. The constant noise of advertisements/manipulation feels to me like the hum of an AC, your brain filters it out after a while and you convince yourself it doesn't bother you, but only after it's turned off, do you realize how bad it actually was.


Key word here is "smaller". Anything works on a small-enough scale. In general, subreddits are very balkanized, with a strong echo chamber effect (downvotes for dissenting opinions) and terrible moderation. For any reason at all, dissenters can be put on a list to have their posts quietly hidden from anyone else.


Yes, the book had people form "belief circles" of that nature.


I'm not sure, but that doesn't sound all that great. It's kind of the problem now, people in belief circles with little basis in reality.


The problem is we kind of have that with Facebook groups, but when they’re filled with people with low online media literacy they become semi-private conspiracy breeding grounds


Seriously, why am I being downvoted? It’s well established that FB groups are being used as niche channels for spreading misinformation. The only difference between one of these and HN is the quality of moderation.


I get downvoted for saying unpopular but true things here, and so are you. Which undercuts the quality of moderation argument you are making.


I thought that’s Putin’s playbook. Since you can’t control the situation anyway, just push out a lot of disinformation so nobody believes anything anymore. You could argue that the current US president plays the same game by shouting “fake news” all the time.


But isn't the whole "Russian disinformation" thing in turn a way of discrediting information? Orwell was writing about this 80 years ago, it's never seemed more relevant.


That is how disinformation can be protected from exposure. If you fill the channels with enough disinformation, then even true information exposing disinformation is treated as disinformation. It’s very corrosive of society.


But it's also how perfectly valid discussions can be completely shut down - certainly very corrosive.


>whole "Russian disinformation"

That depends on how you define "whole." There must be examples of people either unknowingly and wrongly invoking "Russian disinformation" or intentionally invoking it for whatever purposes but there is definitely actual "Russian disinformation" occurring. Disinformation from all sorts of sources. It's imperative to be discerning now, and to be discerning about how to be discerning.


Very valid point.


The suggestion is that Vladislav Surkov [1] is behind this playbook.

[1] https://en.wikipedia.org/wiki/Vladislav_Surkov


Interesting guy. It just shows that we shouldn’t correlate high intelligence and talent with virtue.


It's funny considering "fake news" was invented by liberals in the US to shut down conservative media.


More surprising to me is that it seems to have virtually wiped out legitimate journalism at the same time.


Our unwillingness to pay for journalism is what wiped it out.

People, in aggregate, appear to want to read "news stories" with little discrimination about the quality or accuracy of those stories. When consumers want a product without care for quality, the market optimizes for that. So you get free news that's worth what you paid for it.


Journalism as passionate advocacy for things actually believed in is far older than journalism as a product. Paying more for a journalism "product" doesn't magically make it not propaganda. I'd say the opposite happens - journalism as product incentivizes outlets to pander to the audience to increase circulation, without regard to the truthfulness of the content. High-"quality" commercial journalism is clickbait, because what's rewarded is circulation, not accuracy.

The problem is that journalism product crowds out (and delegitimizes, intentionally, as competition) honest advocacy.


No it isn't. Lots of people are willing to pay for journalism, but it's a major pain to do it. That's the problem syndication was supposed to solve, but large media companies (apparently) made the decision to kill off RSS.


It's not any harder to pay for journalism than it is any other product online. I have subscriptions to a couple of newspapers I respect and getting those set up was no harder than buying a hoodie from a web store.


Perhaps. I should clarify that by "wiped out", I mean that many formerly reputable and workmanlike publications abandoned the basic principles of journalism (e.g., the five W's, separation of church and state, etc.).


Right. The reason they abandoned them was economics. They were no longer making enough money to pay for fact checkers, copy-editors, and investigative journalists. They were forced to focus on clickbait articles because those were the only ones that generated enough traffic to pay for the ads to keep the business afloat.


Can't really argue with that. Guess I should pry open my meager wallet and support a few of the last good ones.


I guess Cambridge Analytica's collapse with no real consequence for people working there showed both that there's a huge market for this, and high reward with low risk.


The consequences are so low that I would seriously question calling it a collapse. It's just been reshuffled as Emerdata under the same parent company, with really no consequences beyond filing some paperwork and rebranding.

https://medium.com/@wsiegelman/cambridge-analytica-executive...


CA was also doing all of this pretty openly.

It'll be much harder catching the smarter bad actors.


I wonder whether this'll create FOMO with any and all political actors left, right and center to jump on this kind of targeted manipulation bandwagon. Can't afford not to be a player in this shady game :(


It's not cheating if everyone is doing it!


Taiwan's elections are this weekend and there have been a lot of articles about disinformation campaigns there ( https://www.nytimes.com/2020/01/06/technology/taiwan-electio... ), and it's interesting that the guy being interviewed is from there and speaking so openly about doing it.


I believe peddlers of disinformation can afford to be completely brazen about it, because nothing will ever be done about it. Oh sure, people love to complain about specific instances of disinformation (outright false or even just somewhat misleading information), but there is no similar love for even considering the idea that humanity, all of it down to a person, largely runs on half-truths and semi-delusional thinking.


I'll probably get downvoted for this, but here goes...

I listened to a podcast with a guy who runs a ring of disinformation sites in the US.

I don't think it's as big of a problem as most people think.

People don't get hit with some disinformation article and then suddenly start supporting another political party. Fake news is mostly just something to make people feel better about their views -- which are already completely set in stone.

Yes, it's fueling a bit of radicalization, which isn't great.

But I think more people are focused on this because they think it's the reason one party is getting more votes than the other. It's just not how fake news works.


My intuition tells me you're right.

My intuition also tells me that a big part of the reason people are so focused on it is that it gives them a reason to hate on their out-group. I feel this because, when you encounter someone engaging in "group <x> is bad because <y>" behavior and ask them how they know(!) that <y> is true, they rarely have an answer, and they get even angrier.


> I don't think it's as big of a problem as most people think. Yes, it's fueling a bit of radicalization, which isn't great.

What is the output of this radicalization? Just how big of a problem is it compared to what people are making it out to be? We know someone opened fire at a pizza place as a direct result of this. Do you think the rise in hate crime (especially violent crime) is in some way related, too? Perps have left behind manifestos, like at El Paso and Christchurch, with the same language as disinfo campaigns.

People are focused on this because it's the reason that people are being murdered because of their religion or where they were born.


Over what time period are you claiming there is a rise in hate crime? And how are you defining hate crime?

I've seen several kinds of fallacious arguments related to allegations of a rise in hate crimes. One is based on allowing people to simply claim that a hate crime occurred, without any evidence, verification, or objective definition of hate crime. Many of these claims were later shown to be false, though they continued to be cited as part of the evidence of a 'rise in hate crimes'. In these studies, the actual truth is that there is a rise in unsupported allegations of hate crime, wherein the reporter also decides for themselves what constitutes a 'hate crime.' This doesn't tell us very much about changes in the frequency or nature of real world crime.


> We know someone opened fire a pizza place as a direct result of this.

Pizzagate resulted in someone firing a single shot at the lock on a closet door to open it. In a discussion of disinformation, I think your characterization is misleading and exemplary of disinformation as well - in a manner quite similar to the disinformation that led that individual to decide to do a vigilante raid on the pizza parlor.


If we are going to be pedantic about it (in the interests of clearing up misinformation), it was 3 shots[1], and he pointed the gun at employees[2] before shooting at the closet.

[1] https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory#Cr...

[2] https://www.starnewsonline.com/news/20161205/dc-pizza-place-...


Ah, thanks for clarifying. Previous coverage I'd read hadn't specified how many times he shot at the lock (ex. "shot through the lock of a closet") and I guess my brain interpreted that as 1.

Was able to find multiple corroborating sources for the 3 shots number.


No problem.

I find it amusing that I'm receiving downvotes for this clarification - I'd love to understand the reasons behind them. I assume they are ideological, but what ideology finds the difference between 1 and 3 gunshots significant?


Misinformation on HN, unfortunately, is real. Those companies discussed in the article also have accounts here.


"Opened fire" and "firing a single shot" are the same thing. I'm not sure what point you're trying to make.


The difference is that "opened fire" is weasel wording in this instance - it is misleading, as it linguistically leaves open the interpretation that multiple people were injured or killed.


Until I read cwkoss' comment right now I had always assumed someone came to the pizza joint and started firing rounds at people. I admit I'm not American so I never cared enough to look up what "opened fire" meant here, but I think it's evident what people will think of when they read that expression. Hint: not someone shooting a closet's lock once to open it.


It was multiple shots, and he pointed the gun at employees first. See my other comment for references.


Thank you for clarifying this. Based on the coverage I've seen, I had always been under the impression that a shooter shot multiple rounds at a group of people. I had no idea it was a single round directed at a lock in a door. It's amazing how dishonest the headlines and soundbite coverage of this event were.


People have always done things like this. I haven't seen any stats on how much more common it is now, what the trend was before disinformation, etc.


> I don't think it's as big of a problem as most people think.

Misinformation spread on Facebook led to the genocide in Myanmar[1].

> Yes, it's fueling a bit of radicalization, which isn't great.

Radicalization is the real problem because it leads to rejection of democracy as a method for solving disagreements.

[1] https://www.reuters.com/article/us-myanmar-rohingya-facebook...


Not a single person died due to Facebook. It's funny you believe this.


What do you mean "due to Facebook"?

It's true that the Facebook app or company didn't suddenly kill them. But my claim was very specific ("Misinformation spread on Facebook led to the genocide in Myanmar") and well backed up by the evidence.

It's the same as the role Radio Télévision Libre des Mille Collines (RTLM) played during the Rwandan genocide.

http://www.genocidearchiverwanda.org.rw/index.php/Radio_T%C3...

http://news.bbc.co.uk/2/hi/africa/3257748.stm

https://en.wikipedia.org/wiki/Radio_T%C3%A9l%C3%A9vision_Lib...


Facebook itself acknowledges its role:

> The report concludes that, prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.[1]

and

> In a surprising concession before the Senate intelligence committee in September 2018, Chief Operating Officer Sheryl Sandberg even accepted that Facebook may have a legal obligation to take down accounts that incentivize violence in countries like Myanmar. Sandberg called the situation “devastating” and acknowledged that the company needed to do more, but highlighted that Facebook had put increased resources behind being able to review content in Burmese. Shortly before the hearing, Facebook announced that it had taken the unusual step of removing a number of pages and accounts linked to the Myanmar military for “coordinated inauthentic behaviour” and in order to prevent them from “further inflam[ing] ethnic and religious tensions.”[2]

The report Facebook itself commissioned and posted at [1] says:

> the consequences for the victim are severe, with lives and bodily integrity placed at risk from incitement to violence. and there is a high likelihood of these risks occurring in practice (they have occurred in the past and are happening today) (page 35).

I think it's a pretty reasonable thing to believe if Facebook themselves say it too.

[1] https://about.fb.com/news/2018/11/myanmar-hria/

[2] https://www.lawfareblog.com/facebooks-role-genocide-myanmar-...


Your theory is that babies are born with political views, or what?

And it's important to spend money making them feel better about their wrong beliefs, even though it doesn't matter?


> Your theory is that babies are born with political views, or what?

Actually, yes.

https://www.goodreads.com/book/show/17380040-our-political-n...


Most people are intelligent enough to form views outside of disinformation. For the people that read disinformation, their minds are -- for the most part -- completely made up already.

People don't go from being Democrats or swing voters to reading an article about how Hillary Clinton feasts on aborted babies in a whore house she runs in Russia, to suddenly voting for Trump.

I mean, sure, some people do. But the vast majority of disinformation is reinforcing.


Digression here: This reminds me of Isaac Asimov's Foundation series, centered on a genius who predicts events and cultural paths using mathematical formulas for aggregated human psychology. It led me to imagine a world where societies' actions can not only be influenced but predicted, and where a system emerges that capitalizes on alphas in this environment, much like the stock market. This would almost act as a solution to the problem we currently have: if many agents were predicting and acting on these alphas, the alphas that do exist would be very small. After all, most of the problems we currently experience are due to this potential (alphas, in my analogy) being so large and profitable.


Wow, you're prompting me to recall what little I know of the book "The I Inside" by Alan Dean Foster, about an artificial intelligence named the Colligatarch that is capable of, let's say, aggregation far beyond what Cambridge Analytica was capable of. It ends up detecting a very minuscule (an individual, on a global scale) intrusion, but that's beside the point.

World peace, if achieved, has nothing to do with the eradication of free will. The Colligatarch's recipe for success in the book exemplifies this: it does not conquer the world step by step in any way, but it predicts accurately enough what the future will bring, and the various peoples who adhere to its action plans learn that they fare better by following them. If you imagine that the beginning of this kind of world-spanning awareness would start out very messy, at this point it's safe to assume we might even be past that initial mess, which came into being through the two world wars at the beginning of the 20th century. Disclaimer: I'm treading far outside my field of expertise here, so if you know more than me, chime in and let's discuss the topic.


I've heard that Behavioral economics is inspired by "Psychohistory", and these Seshat folks are also pretty serious about it.

Any attempt that wants to be predictive will have to contend with "strange loops" of propaganda/PR efforts, as well as the side effects of its own work (since its practitioners aren't hiding.)

> Behavioral economics studies the effects of psychological, cognitive, emotional, cultural and social factors on the economic decisions of individuals and institutions and how those decisions vary from those implied by classical theory.

https://en.wikipedia.org/wiki/Behavioral_economics

> TerminusDB powers Seshat, the Global History Databank, which is like the Foundation in that Seshat is dedicated to bringing together the most current and comprehensive body of knowledge about human history in one place. The massive collection of historical information allows researchers to rigorously test different hypotheses about the rise and fall of large-scale societies and human history. The core TerminusDB team are active in the Cliodynamics community. Cliodynamics treats history as science — practitioners develop theories that explain such dynamic historical processes, translate these theories into mathematical models and then test predictions against data.

https://medium.com/terminusdb/terminusdb-whats-in-a-name-27b...

> Our unique Databank systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time. This massive collection of historical information allows us and others to rigorously test different hypotheses about the rise and fall of large-scale societies across the globe and human history.

http://seshatdatabank.info/

> Cliodynamics (/ˌkliːoʊdaɪˈnæmɪks/) is a transdisciplinary area of research integrating cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée, and the construction and analysis of historical databases.[1] Cliodynamics treats history as science. Its practitioners develop theories that explain such dynamical processes as the rise and fall of empires, population booms and busts, spread and disappearance of religions.[2][3] These theories are translated into mathematical models. Finally, model predictions are tested against data. Thus, building and analyzing massive databases of historical and archaeological information is one of the most important goals of cliodynamics.[4]

https://en.wikipedia.org/wiki/Cliodynamics


Buzzfeed has a really funny idea about what cookies are "required": just about everything related to tracking and ad personalization, in fact.

Couldn’t agree to that so I couldn’t read their article.


Unless it's a matter of principle to you, the article loads fine without cookies if you disable 3rd-party and/or 1st-party Javascript. The cookie consent banner doesn't even load for me; I assume that with 3rd-party JS enabled you get some kind of redirect or pop-up? I didn't get asked to consent to or enable anything.

I have no idea why more people on HN don't use UMatrix with at least 3rd-party JS turned off by default. It's trivial to re-enable for the sites that need it, and a nontrivial portion of news sites I visit work just fine (if not better) with JS disabled. Most popups just flat-out disappear.


It's the principle.

I do have uBlock Origin etc, but I'd rather not give them the eyeballs.


Agree. And anything that needs it goes into a container.


This represents the democratization of propaganda. Before, only large state actors could do this (Pravda, BBC Empire Service, Voice of America, Radio France International, etc.).

Now it’s available retail.

Spam fighting has not caught up.


"Democratization" implies "equally available to all people". That's what the "demos" is about. But when you have to pay for something, what that really means is that it's available only to those with the money.

This is really the capitalization of propaganda — those with capital get to control the narrative.


Those with the capital have always controlled the narrative (newspapers, and before that, hiring people to just go out and talk to the people, e.g. to be elected tribune).

The cost of these propaganda efforts has plummeted, and increased automation will simply lower the cost further. Trump pays nothing to use Twitter and it netted him a presidency (though, per your comment, he already had a TV show).

I expect that within the next four or five years there will be middle schoolers using this tech to run for student council.


This article was disappointingly light and hand-wavey on the technical details of the system. I would be very interested in a deep dive on the specifics.

Like, scraping and spinning web content onto a Wordpress blog that then syndicates it onto social media was state of the art 10 years ago. Is it just exciting now because, politics?


This is where the slippery slope of advertising leads, and I'm sort of amazed people can morally decide to start doing this - I imagine once you're in it's easy though as you'd get a hell of a grandiose power trip out of knowing you just controlled who was elected in Azerbaijan.


"We do it because there is demand"

That should not be a valid reason. Child pornography, slavery, drugs and harvested organs all have high demand.

I am sure these guys won't have any qualms about, say, defending and promoting taking homeopathic remedies instead of chemo for people with cancer. Or helping anti-vaxxers get children killed, if they decide to approach them and have deep enough pockets.

In addition to these "immediate" problems, there's also a long-term, perhaps nastier one.

As a species, what we know is what we are. Given enough time, every one of us is dead and gone, our physical and virtual wealth will dissipate. One of the few things which will perdure, and will always help our descendants, is knowledge.

Every bit of truth that we can collectively find is a hard-fought treasure for the future. Be it via scientific rigorousness or journalistic professionalism. Finding truth is one of the single most important things we can do for humanity, present and future.

What these people are doing to benefit themselves, I see as a crime against humanity as a whole.


One mid-future prediction I've been making for a while: eventually we will make lying in the public sphere either a crime or a civilly actionable tort. Libel and slander already can be, but I am suggesting something more broad: the criminalization of the promulgation of fiction, opinion, or propaganda not labeled as such.

We've always had tabloids and dumb mass media but smart microtargeted mass media powered by machine learning is to these what the machine gun is to the flintlock. Add deep fakes and machine generated text crafted to appeal to each reader and it's a tactical nuclear weapon.

Many religions believe in intelligent literal demons assigned to individuals to deceive them. Seems to me we are busy inventing a real version of that.

Seems to me that lying at industrial scale with AI and big data and social media is just not something society will be able to tolerate.


> Many religions believe in intelligent literal demons assigned to individuals to deceive them. Seems to me we are busy inventing a real version of that.

The Daemon-haunted World.

(In case it's not clear, I'm playing off the title of Carl Sagan's book "The Demon-Haunted World". We haven't fully exorcised "real" demons from our world yet here we are developing artificial ones, eh?)

https://en.wikipedia.org/wiki/The_Demon-Haunted_World

> The Demon-Haunted World: Science as a Candle in the Dark is a 1995 book by astrophysicist Carl Sagan, in which the author aims to explain the scientific method to laypeople, and to encourage people to learn critical and skeptical thinking. He explains methods to help distinguish between ideas that are considered valid science and those that can be considered pseudoscience. Sagan states that when new ideas are offered for consideration, they should be tested by means of skeptical thinking and should stand up to rigorous questioning.


What you're talking about isn't really possible, however establishing trust is. And relying on 3rd parties for trust when you don't know someone is also possible.


My 4th prediction is coming along nicely: https://news.ycombinator.com/item?id=21943361

> "4. Reality as a Service will allow users to choose the facts of their reality."


How does one theoretically get around the various anti-bot measures that the social media platforms have instituted?

When I try to create a human account it asks for a phone number and Twitter seems to have a bit of an overzealous system that blocks new accounts for "suspicious" activity. YouTube has similar measures in place for registration. Most of them have captcha as well.

Do these folks use humans to complete the registration and then have bots use the accounts?


There are services that sell proxies straight from their users' machines, so the traffic looks completely normal. You can also buy a lot of phone numbers cheaply if needed, though the phone requirement is almost always circumventable if nothing else is suspicious. As for captchas, breaking them is trivial: there are services that do it, but you can also beat most/all captchas with mturk for gathering data plus an off-the-shelf CV model trained on that data.

A social media bot army is a project that a single developer with relevant experience/time can complete on their own, provided they are willing to spend a little bit.


Without reading the article yet: Ronan Farrow's Catch and Kill is an incredible in-depth account of efforts by The Weinstein Company, NBC, and The National Enquirer to use espionage and intimidation for suppression of reports of sexual misconduct.

The Wikipedia pages of some NBC executives were completely scrubbed of any hint of this suppression, for a while.


Reminds me of "The Doubt Factory" by Paolo Bacigalupi (fiction)

https://en.wikipedia.org/wiki/The_Doubt_Factory


"use every tool and take every advantage available in order to change reality according to our client's wishes"

We all need a healthy dose of cynicism to survive.


This is exactly why I stay away from things like buzzfeed.


One could argue that Buzzfeed is as much of a mouthpiece for propaganda-for-pay as the PR firms described in their article.


Is machine learning at all effective at spotting this sort of content?


It is ironic that the article is on buzzfeed.


Ironic story, coming from Buzzfeed.


Has the author never seen Mad Men or Thank You for Smoking?

It's not a new breed. It's just an evolution.


For anyone who upvoted this and works for Facebook, YouTube, Twitter, or Google:

How do you justify your role in this? As Sacha Baron Cohen said in his ADL speech, Facebook takes money from what most people consider to be repugnant sources (e.g. white supremacists). Part of that goes into your paycheck.

The unfortunate reality is that most evil is done by organizations composed of well-intentioned people who don't see themselves as doing any harm (or having control over the harm).


Why only mention fake news and trolls that supposedly helped Trump and his supporters? It's as if there's none of this on the other side.


Isn't this just Buzzfeed?


This really isn't new, alex jones with his "infowars" has been doing the same for years.


This being "reported" by BuzzFeed is sadly ironic. They paved to way for manipulating influence.


I'm about to use the word blockchain, so stick with me here, I'm sorry.

But a web3 built on the idea that using a site (to post, or even to read) costs even trivial thousandths of a cent would be unlikely to harm users, but would at least dramatically increase the costs of this sort of thing. And it moves the net to rely less on an advertising funding model.


You're describing proof of work, not blockchain. It's easy to implement without a blockchain: make MTAs require that the hash of the envelope of the mail (plus a nonce outside the envelope) start with a certain number of 0 bits.

And make the MTAs return a parseable message when there aren't enough zeros, so the sending MTA can retry with other nonces until there are.

And the envelope already contains a Message-ID, which is supposed to be unique (as it contains the hostname of the sender) and unpredictable, so there shouldn't be issues with replay attacks.
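For the curious, this is essentially the hashcash idea. A minimal sketch in Python of the mint/verify loop described above; the envelope bytes, the SHA-256 choice, and the difficulty value are all arbitrary illustrations, not a proposed standard:

```python
import hashlib
import itertools


def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()  # zeros at the top of this byte
            break
    return bits


def mint_stamp(envelope: bytes, difficulty: int = 20) -> int:
    """Sender side: try nonces until SHA-256(envelope + nonce)
    starts with `difficulty` zero bits; return the winning nonce."""
    for nonce in itertools.count():
        digest = hashlib.sha256(envelope + str(nonce).encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce


def verify_stamp(envelope: bytes, nonce: int, difficulty: int = 20) -> bool:
    """Receiver side: one hash to check, versus ~2**difficulty to mint."""
    digest = hashlib.sha256(envelope + str(nonce).encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

The asymmetry is the point: minting costs the sender around 2^difficulty hash attempts on average, while the receiving MTA verifies with a single hash, so bulk sending gets expensive but legitimate one-off mail stays cheap.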


Blockchain would also mostly solve the corrupt ICANN issues being discussed on another thread, but people here don't really want to hear it for some reason.



