Hacker News

It’s not just “other people” though; it’s all of us. I think the crux is that there is abundant research showing humans are not particularly rational, but your take seems to assume they are. How do you reconcile those two diverging viewpoints?



Humans aren't necessarily rational, but they are good at social stuff - humans are wired for living in social situations. You only need the latter to give people credit for having a "BS radar."


Do we lump social media into this "social stuff"? Because I think there's mounting evidence that we aren't particularly good at managing it rationally. As I read it, that's really what the article and the law are about: how social media can be exploited by adversarial actors who leverage our irrational/unhealthy relationship with it.

Or, alternatively, maybe that's how the veiled power grab is being framed. But we'd probably still need to undermine that claim about irrationality to offer an alternative.


I question whether a study can accurately capture rationality in a way that is comparable and aggregatable across multiple people. Measuring the process used rather than the end result sounds quite challenging.

I expect that what those studies really measure is how well social media users align and agree with the researchers.


The alternative theory is almost certainly correct. If you actually read the bill, there is nothing in it about quality of information or misinformation. At the same time, the government has been warning about "malinformation" recently, which they define roughly as "true facts that do not fit the broader narrative." If that definition sounds like something from the Soviet Union, that's because it is.

The talk about controlling the flow of bad information is a naked power grab, and always has been ever since the first governments tried to do it in the ancient era. What history has taught us is that the truth always catches up eventually, and that it is better to provide people with more information than less. I honestly don't think that the social media era has changed that at all, and it would require some very strong evidence that it did (which is completely lacking).

Remember that before Facebook, news/gossip junkies were subscribing to papers and tabloids of varying quality. We have just eliminated the dead trees. The content was just as bad, emotionally-charged, and false, and there were so many of these papers that you could create a Facebook-style information bubble.


I’ll be more blunt: how do you then undermine the claim that such a bill is necessary due to the irrationality of humans which can be more easily leveraged in the age of social media for bad outcomes?

It seems like you either need to make the case that:

1) humans aren’t irrational or

2) the risk of such irrationality at scale is not really a major risk or

3) social media isn’t uniquely poised to capitalize on that risk or

4) we already have tools capable of mitigating that risk

It sounds like you’re debating 3) but there seems to be some evidence that social media tends to spread “bad” information faster and farther than “good” information by hijacking innate human psychological traits more easily.


I'm suggesting primarily that 3 is true, and we (as individuals in a society which is and has always been full of misinformation) have developed pretty good BS detectors to compensate. Bad information has always been faster to spread than good information. The only difference today is that all information is faster.

Also, to some degree, 1 is fairly true. You see this in economics, for example, where individuals are generally irrational about their choices, but modeling populations as a whole by treating them as 100% rational individuals gives pretty similar results to real life.


>only difference today is that all information is faster.

This is exactly the point being made (although I’m not sure I am completely convinced). The analogy could be “people have always killed each other. Nuclear weapons just make it faster to kill lots of people. Hence there’s no need to ban nuclear weapons.” The idea being that there is a tipping point where the scalability of technology outpaces our biologically evolved ability to control it.

I agree with #1 but only in some domains. You can see it with guessing the weight of a bull at a fair or the number of jelly beans in a jar. But there are other examples where it breaks down. In economics we have bubbles that are destructive when corrected. I’m not sure we want that short-term extreme volatility in something like governance.
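The bull-weight/jelly-bean effect can be sketched in a few lines: if many independent guesses scatter around the true value, averaging cancels the noise, so the crowd's mean beats a typical individual. (A minimal simulation; the weight and noise range are illustrative assumptions, not data from any real fair.)

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198   # illustrative "actual" bull weight in pounds
N_GUESSERS = 1000

# Each individual guess is noisy: off by up to ~30% in either direction.
guesses = [TRUE_WEIGHT * random.uniform(0.7, 1.3) for _ in range(N_GUESSERS)]

# Typical individual error vs. error of the crowd's average.
individual_err = statistics.median(abs(g - TRUE_WEIGHT) for g in guesses)
crowd_err = abs(statistics.mean(guesses) - TRUE_WEIGHT)

print(f"median individual error: {individual_err:.0f} lb")
print(f"crowd-average error:     {crowd_err:.0f} lb")
```

Note this only works when errors are independent; the "breaks down" cases above (bubbles) are exactly the ones where guesses become correlated and the cancellation fails.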


The alternative to that short-term volatility is long-term tyranny. No thank you. I'll take the volatility.


I think that’s a false dichotomy and not particularly helpful. The question is more about what level of volatility can be tolerated while still maintaining a stable society.



