
> As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.

IMO the concerning part is hidden at the bottom. They want to go back to shoveling politics in front of users. They say it is based on viewing habits, but just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks. I just can't look away. FB makes their actions sound noble or correct, but this is self-serving engagement optimization.

Social media sites should give users an explicit lever to see political content or not. Maybe I'll turn it on for election season and off the rest of the year. Some political junkies will always have it set to "maximum". IMO that is better than FB always making that decision for me.




>Social media sites should give users an explicit lever to see political content or not

Facebook does sorta have this, under Settings & Privacy > Content Preferences > Manage defaults. Note that the only options for "Political content" are "Show more" and "Default". The other categories listed also include "Show less". There is no "off" option for any of the categories.


IIRC, Political Content is restricted by default on Threads. But if someone you follow engages with or posts content that is political in nature, fb doesn't hide that from you.


Just fyi, this is no longer the case on Threads based on this statement from yesterday:

https://www.threads.net/@mosseri/post/DEk65TJTXcQ?xmt=AQGzmI...


They will just relabel what is political. Union organizing? A bill on internet censorship? Anything mildly inconvenient to Meta or its shareholders? That's politics, you said you don't want to see any politics, didn't you? The culture war? Well, that's just pop culture, so that gets a pass.


Everything important is politics though. Celeb talks about her experiences - politics. Earth is getting warmer - politics.

Our lives ARE political.

Hell, right now researchers on misinformation are being harassed by senators trying to bankrupt them and make examples of them, to deter others from reducing the reach of manipulative content.

We already had the entire free speech fight at the dawn of content moderation. We collectively ran millions of experiments, and realized that if you don't moderate community spaces, the best ideas DON'T rise to the top; the most viral and emotional ones do.

If you want to see what no moderation looks like, look at 4chan.

By nature, taking a stand on being factual is automatically political, because there are people who are disadvantaged by facts. Exxon and other oil producers spread FUD over global warming because it was problematic for their profits.

Stopping their FUD is censorship via moderation. How is a regular Joe going to combat a campaign designed to prevent people from reaching consensus?

Anyway, this is going to be fun.


I really do wish that one of the major platforms would offer strict white- and blacklists. "Doomscrolling" would be so much nicer if one could set strict filters like "Don't ever show me pranks, fake useless DIY, kids being exploited, anything gym related" and "I really like snowboarding, WW2 history and pinball machines." Of course, the algorithm is still gonna "do its thing", but with a few hard guides.

Sure, initially the platform's view time would decrease, but then maybe people would actually like that platform.
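The "hard guides" idea is simple enough to sketch. Assuming some upstream classifier already tags each post with topics (the tags, function names, and data shapes here are illustrative, not any platform's real API), a hard deny/allow filter in front of the ranker could look like:

```python
# Hypothetical sketch: a hard deny/allow filter applied before ranking.
# Topic tags per post are assumed to come from an upstream classifier.

def filter_feed(posts, blocklist, allowlist):
    """Drop any post tagged with a blocked topic; float allowed topics up.

    posts: list of (post_id, topics) tuples, where topics is a set of strings.
    Returns the posts that survive the hard filter, allowed topics first;
    a real ranker would then order the remainder however it likes.
    """
    survivors = [p for p in posts if not (p[1] & blocklist)]
    # sorted() is stable, so within each group the ranker's order is kept.
    return sorted(survivors, key=lambda p: not (p[1] & allowlist))

feed = [
    ("a", {"pranks"}),
    ("b", {"ww2-history"}),
    ("c", {"gym", "diy"}),
    ("d", {"cooking"}),
]
result = filter_feed(
    feed,
    blocklist={"pranks", "gym"},
    allowlist={"ww2-history", "pinball"},
)
# "a" and "c" are dropped outright; "b" outranks "d"
```

The point is that the blocklist is absolute (a filtered post never appears, no matter how "engaging"), while the allowlist only biases ordering, leaving the algorithm room to "do its thing" in between.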


Meta has failed (abysmally) at identifying and categorizing content where you’ve said “show me less of this.”

Bluesky’s not my favorite website but Xblock is proof that the app can go “this is a twitter screenshot and she doesn’t want to see those” at scale.

AI could identify, label, and hide all of these things.

On bluesky it already does: "this is rude" or "this content promotes self harm". I wish both websites could suppress, snooze, or completely nuke "viral" or political content, be it left or right. In bluesky's case it's not that I disagree with them. It's just that I've had this shit that I more or less agree with shoved down my throat from every angle for a decade, and I'm exhausted and don't want to see or engage with it anymore. People who have nothing else to say 24/7 every single day of their life and mine just need to go away, and I wish the AI on bluesky would just let me filter people whose content is primarily political temper tantrums, because I don't have the time or will to mute or block them all, so I just don't use the product.

In fact, for moderation purposes, Facebook is already doing that on their back end. (A few years ago you could see automatically generated alt text like "a woman holding a baby", though I don't use Meta at the present time and don't know if it's still doing this.)

AI is already analyzing the memes and purging ones with themes they don't like on FB, though. Unlike bluesky moderation, it's not presented as something I can leverage or access to make my experience more enjoyable on Facebook.

But that's not how they're leveraging AI right now. They won't let it prevent me from seeing meme posts and content with themes **I** don't like.


Reddit already has this feature, although it might be underused. Set up a multireddit. Everything you want and nothing you don't. They are also not bottomless (well, more so if you stick to smaller subs), so if you don't put too many subs in your multi you can also hard-limit your feed time. They're great.


In some way this already works - if you have the skill to actually not watch the stuff and flag it as “don’t show me again”.

If the platform’s view time increases only when it shows you “snowboarding, WW2 history and pinball machines", then you and the platform are aligned.


You talk about this like it's a service for the users.


> We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.

Great, so more filter bubbles? They don't learn, or more likely, don't care.


Filter bubbles are in. Bluesky and Mastodon show that people want to self-segregate. Even people remaining on Twitter are happy with the exodus.

Facebook is explicitly pro filter bubble. The community notes will come from your ingroup.

One irony is that diversity in online spaces leads to division. People no matter their politics and interests prefer people similar to them.

One way to look at this is by geography. Think of how a group of non-English-speaking Africans would talk together.

The other irony is that groups of people view the other groups as not similar to them and want to change them. It's always the outgroup that needs its filter bubble bursting. It's always the other that is brainwashed.

So the downsides of filter bubbles remain: more division, more separation between different people.


For me the major breaking change on social media is the forcing of non-linear timelines. They're required to increase engagement and promote content, but that's the crux of the issue.

I liked the way early twitter worked. I had my bubble, the people I follow, and I could see glimpses of the outside from the trending topics and what came in as retweets, news, etc. Being able to see a thread without being logged in. Seeing analyses built from the firehose that showed different ways to view conversations and the bubbles.

I miss the fact that old tweets died; things had to be relevant to humans to be rekindled, meaning someone had to retweet to keep a tweet alive, instead of an algorithm deciding what's important for me based on how outrageous it is.

Bubbles are unavoidable; bubbles decided by algorithms are the worst of all alternatives.


Isn't there a difference between self-segregation and filter bubbles and how they're perceived?

If I go to a woodworking class, I won't be surprised to see people who like woodworking. If I go to the supermarket and everyone is talking about and liking woodworking, I start thinking that everyone likes woodworking.

A user explicitly signing up for specific topics is opting into a discussion. Filter bubbles are implicit.


Doubling-down on idiocracy and civilizational decline because there's money in it.


> They don't learn, or more likely, don't care.

Of course not. Enraged, uninformed people "engage", and that sells ads like hotcakes.

I don't know where people get this idea that Zuckerberg had any principles or gave a shit about anyone but himself. He's spineless, and his primary goal in life has always been to acquire as much wealth as possible by whatever means necessary.


> just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks

I guess FB will be the judge. They might even stop showing train wrecks to a person if they notice metrics dropping. Some of these metrics might even track the user's well-being, although most will focus on the well-being of shareholders.

We lost the levers a long time ago, replaced by opaque algorithms; are there any signs of this changing?


The way I read that — we tried hiding political content, but in the end lost user engagement to our competitors, so we decided to roll it back.

People say they don’t want political content, but they’re also more likely to engage with it if they see it.


> just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks

Maybe they need to be optimising for unregretted user seconds /s



