
> One of the most self-perpetuating ones is the myth that "misinformation" is caused by not banning "bad" people, when it's actually caused by not exposing everyone to contrary viewpoints so that they lose the opportunity to have their mistakes corrected.

I'd argue that it's really two sides of the same coin. Take a social graph where nodes are people. It's not that eliminating bad nodes is a good solution, it's just that when you eliminate the dangerously influential bad nodes (the bad ones with a disproportionate amount of connections), it can have the effect of redistributing interactions more evenly in the graph, which means any given node is more likely to be exposed to a greater variety of nodes.

Ideally, we wouldn't have to eliminate any bad nodes, but that would require a stable graph where bad nodes don't attract more and more connections. Which I believe is an alternative formulation of your comment about the need to regulate algorithms.
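The redistribution effect described above can be illustrated with a toy simulation. This is a hedged sketch, not anyone's actual model: the preferential-attachment builder, the variance metric, and the rewiring rule are all simplifying assumptions chosen to make the point concrete (real platforms are directed, weighted, and algorithmically mediated).

```python
import random
from collections import defaultdict

random.seed(42)

def preferential_attachment(n, m=2):
    """Toy 'influencer' graph: each new node links to m targets
    chosen with probability proportional to their degree."""
    edges = {(0, 1)}
    degree_pool = [0, 1]  # node ids repeated once per degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(degree_pool))
        for t in chosen:
            edges.add((min(new, t), max(new, t)))
            degree_pool += [new, t]
    return edges

def degrees(edges):
    d = defaultdict(int)
    for a, b in edges:
        d[a] += 1
        d[b] += 1
    return d

def degree_variance(deg):
    vals = list(deg.values())
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

edges = preferential_attachment(200)
deg = degrees(edges)
hub = max(deg, key=deg.get)  # the "dangerously influential" node

# Remove the hub and rewire each former neighbour to a random peer,
# modelling interactions spreading back out across the graph.
neighbours = [a if b == hub else b for (a, b) in edges if hub in (a, b)]
kept = {e for e in edges if hub not in e}
others = [n for n in deg if n != hub]
for n in neighbours:
    t = random.choice([x for x in others if x != n])
    kept.add((min(n, t), max(n, t)))

before = degree_variance(deg)
after = degree_variance(degrees(kept))
print(before > after)  # degree spread shrinks once the hub is gone
```

Because the hub's degree is an extreme outlier in a preferential-attachment graph, deleting it and scattering its edges uniformly leaves connections distributed far more evenly, which is the "greater variety of nodes" effect in miniature.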




> It's not that eliminating bad nodes is a good solution, it's just that when you eliminate the dangerously influential bad nodes (the bad ones with a disproportionate amount of connections), it can have the effect of redistributing interactions more evenly in the graph, which means any given node is more likely to be exposed to a greater variety of nodes.

There are at least two problems with this.

The first is that as long as you have an algorithm that isolates people into silos and froths them up for "engagement," removing one influential bad node will just create another one.

The second is that the banned person doesn't actually disappear from the world, and many people will follow them wherever they go, where they're even more isolated and have less opportunity to break out of the bubble. The general trend is also causing alternate social media platforms to spring up that split the population along party lines. Having more platforms is good; having the platforms be implicitly partisan is terrible. And having everyone on the right move to other platforms would leave the existing platforms with only people on the left, which will melt their brains just as much. You need the opposition around to keep yourself honest.

> Which I believe is an alternative formulation of your comment about the need to regulate algorithms.

Regulating algorithms can't work. Never mind the obvious First Amendment problems; think about the incentives. To fix the problem would be to break the business model: stop driving engagement. Billions of dollars are at stake. So the political incentive is instead to let them keep radicalizing people, as long as they're radicalizing people for whoever is currently in power. If this isn't sufficiently alarming to you because of who is currently in power, imagine the public official they have to appease is Trump.

What we need are social media companies with a different business model. Ideally in some kind of federated system with competing discovery algorithms so the harm from a bad one has less scope, and people can choose not to use the ones known to be malignant without having to opt out of the networks with the largest network effect.


You have to interpret “regulate” charitably. I wasn’t advocating for any particular type of regulation, and not necessarily in reference to government intervention. Substitute with “tweak” if that’s less bothersome.


Fair enough.

It seems like it's an incentives problem. Something needs to change so that making money stops being aligned with doing the wrong thing.

Whatever fixes it is going to have a different business model. Maybe the best thing the government could do is some trust-busting: make it easier to enter the market, increase the number of competitors, and we get more chances to find the answer.



