I'm only minimally familiar with BlueSky. Is it a fair analogy, for understanding what this is, to say it's like:
- If someone replies to this HN comment I just wrote, and I don't like it, I can delete their comment (because there is a thread ownership concept on BlueSky, and the earliest comment is the owner);
- If someone links to this HN comment I just wrote, and I don't like it, I can make that hyperlink disappear, or invalidate it in some way (because BlueSky is a locked API garden and hyperlinks are not plain text, but magic cloud API tokens)
Is this much correct?
> - If someone replies to this HN comment I just wrote, and I don't like it, I can delete their comment (because there is a thread ownership concept on BlueSky, and the earliest comment is the owner);
You can hide it - which I agree is functionally equivalent to deleting it, as people are very unlikely to go dig for hidden comments. FWIW this is kinda similar to "flagging" a comment on HN, which (as far as I understand) will cause it to be hidden for everyone - at least until someone vouches it back.
> - If someone links to this HN comment I just wrote, and I don't like it, I can make that hyperlink disappear, or invalidate it in some way (because BlueSky is a locked API garden and hyperlinks are not plain text, but magic cloud API tokens)
You can make the hyperlink disappear only for people using the default BlueSky frontend. But from my understanding, the hyperlink will still be accessible through the API.
To clarify how flagging works on HN: flagging only hides a comment once "enough" users flag it, and even then it remains visible to users with the showdead option set in their profile.
I think. I know N>1 because I sometimes flag something and it is not suppressed right away. Often, though, I flag something and then it is suppressed a few minutes later, so I believe N=2 or N=3 or something small like that.
For that matter a lot of posts get suppressed right away, particularly from people who visit HN and immediately start posting stories from the same blog over and over again.
I think you can estimate the threshold by just counting the ratio of times you flag something and it immediately dies. You would wait a while to see which items die or do not die, and discard the subset which never die, and consider only the subset of items that transitioned to [flagged][dead].
If the [flagged] logic is simply "flags >= N", then of the subset of samples that are flag-killed with your involvement, you will be the final flag in 1 of N of those.
The null hypothesis is "this experiment converges to a reciprocal integer".
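For intuition, here's a toy simulation of that estimator (my own sketch, assuming a simple "post dies at its N-th flag" rule and independent flaggers with uniformly random arrival order; HN's real logic is unknown):

    import random

    def estimate_threshold(true_n=3, trials=100_000):
        # Toy model: a post is killed the instant it collects true_n flags,
        # and your position among its would-be flaggers is uniformly random.
        participated = 0  # flag-kills you were part of
        final_flag = 0    # kills where you delivered the N-th (final) flag
        for _ in range(trials):
            would_be_flaggers = random.randint(true_n, 10)
            your_position = random.randint(1, would_be_flaggers)
            if your_position <= true_n:      # you flagged before it died
                participated += 1
                if your_position == true_n:  # you were the final flag
                    final_flag += 1
        # Within a kill your position is uniform over 1..N, so you are the
        # final flag in 1/N of the kills you participate in; invert that.
        return participated / final_flag

    print(estimate_threshold())  # converges near the true N of 3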
I feel like this requires an assumption that any post you flag is equally likely to be flagged by other users, and at a similar time as you; in other words, that your flagging behavior is very similar to that of others who use this feature.
I guess this might give you a pretty nice upper estimate though. Unless N is very high / very few users use the feature, and you are frequently much later to the scene than the average user of the feature. Then N may be underestimated.
Hacker News is biased so that the threshold for flagging is very low, in order to maximize the effective separation of signal from noise as much as possible (for as broad a definition of "noise" as possible.) If I had to guess it's probably 3, or it's tied to user karma (flags from higher karma accounts count more.)
And certain topics are likely to be flagged by multiple people; this can be assumed based on the topics that people complain aren't "HN worthy." Anything that can be considered politics, for instance, is probably going to get flagged because of the number of people who don't believe political stories of any kind have a place here.
That's probably a decent approximation, but it assumes the distribution of latencies for flagging is the same for you and the aggregate "everyone else."
I think so. It depends however on how fast I think I am compared to the other flaggers. If I was really, really fast I could always be the first flagger.
> You can hide it - which I agree is functionally equivalent to deleting it as people are very unlikely to go dig for hidden comments.
My understanding is that hidden replies are just one extra click away. From TFA: “All hidden replies will be placed behind a Hidden replies screen — so they’re still accessible, but much less visible.” Probably not much different from a collapsed subthread on Reddit.
The difference is that on actual HN, it takes more than one flag vote to hide a comment. (I suspect it's in the range 2-5, but I don't actually know.) So one person can't make a post they don't like disappear.
> The difference is that on actual HN, it takes more than one flag vote to hide a comment. (I suspect it's in the range 2-5, but I don't actually know.) So one person can't make a post they don't like disappear.
I don't know, but I would suspect that if the user who wrote the parent comment flags a reply, it would carry less weight than if another random user flagged it.
Pretty much, though the first one is more like: you can hide a comment that you don't like, and if someone wants to see it they can click "see hidden replies".
My reading of "hiding replies" was that people who follow the replier would still see the reply as part of their normal feed. So you are not silencing them for their audience. Which is fair.
The removal of links to yourself is also very welcome.
Both tools have cons, and there are ways to work around them / prevent them from taking effect. But they address the default, trivial ways in which people can pollute other users' experience, so in practice the value for users should prove high.
A) Make it so YOUR NAME and YOUR POST doesn't appear attached to a post that somebody else makes. Their post will still be visible to their followers.
B) Make it so that YOUR FOLLOWERS don't see somebody else's reply unless they click a "hidden replies" button. Their post will still be visible to their followers or someone looking at their timeline.
In no case can you prevent somebody's posts being visible to their followers.
The former doesn't make much sense in a distributed system where we expect to have multiple clients--and, preferably, no dominant one!--as there is no reason for the people you are restricting to opt in to that restriction.
> B) Make it so that YOUR FOLLOWERS don't see somebody else's reply unless they click a "hidden replies" button. Their post will still be visible to their followers or someone looking at their timeline.
Like, for this latter use case, if I am following you, I am buying into your frame and am often going to agree with your moderation choices on the people who reply to you, so it actually makes a lot of sense to me that I would accept your ability to hide replies I "shouldn't" / "wouldn't want to" see.
Meanwhile, the only recourse to be heard for the person whose reply was hidden is to do so to their own audience; this not only seems fair, but strips them of the power they were trying to "steal" from you when they replied to your post in the hope of gaining access to your audience.
> A) Make it so YOUR NAME and YOUR POST doesn't appear attached to a post that somebody else makes. Their post will still be visible to their followers.
However, this former use case does not have the same structure: if I am following someone else and they talk about something you said, but now I can't see what/who they are talking about, I'm just going to get annoyed, as the goals of this feature don't align with the direct user of the client anymore.
At best, this just leads to people using workarounds, as the recourse is too powerful: anyone--not merely the quoter--can add a reply with a screenshot of the post and a permalink through an external link shortener. At worst, it leads us down the dishonorable path of people demanding client DRM :/.
It sucks, because I was kind of excited about BlueSky actually caring about distributed incentive issues in a world where they were not the only client; but, this is the kind of mistake that people get goaded into making when they build a distributed system and start to assume centralization.
...and like, by the way: this feature is also inherently dishonest, as, even if it is documented how it works, a lot of users aren't going to understand it, and so they are going to assume they have a safety net in place that doesn't really exist in the protocol.
This honesty issue is similar to the way Snapchat claims people won't be able to (at least secretly, though I remember the original claim to be stronger) save the photos they send. Of course, people often can secretly screenshot your photos--such as using jailbroken devices or even merely a second camera via the good ol' analog loophole--but people send riskier photos because they trust the feature to protect them.
And yes, I totally understand that Snapchat's feature sort of works and maybe is better than nothing?... but, it only works due to continual effort Snapchat invests--from both their engineers and lawyers--to enforce the feature by embracing DRM, implementing user behavior profiling / banning, and sending legal cease and desist notices to alternative clients.
I hope (but no longer can assume) that BlueSky won't (or can't) ever go to such lengths; and so, in some sense, the feature is even less honest, right? :/ Even if the feature "sort of works", the power only comes from "pretty much there is only one client everyone has", and so the incentives are broken.
If I were building this, I would have done the exact opposite: if someone "quotes" you, there should be a copy of that content stored on the quoter's end, so that it is still visible even if the original is deleted... but like, that copy would be trivially editable, so no one ever treats previews as truth.
You have to implement it like that, as it has to be functionally equivalent to the analog loophole version of the feature -- the one where the quoter just attaches a potentially-forged screenshot of the post along with a link through an external shortener -- in order to align everyone's incentives.
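For concreteness, a minimal sketch of what such a quote-by-copy record could look like (every name here is hypothetical; this is an illustration of the idea, not Bluesky's actual lexicon):

    from dataclasses import dataclass

    @dataclass
    class QuoteByCopy:
        # The quoter's own commentary.
        text: str
        # Snapshot of the quoted post, stored in the QUOTER's repository.
        # The original author can't retract it -- but because the quoter
        # controls it, it's trivially editable, so no reader should ever
        # treat the preview as proof of what was actually said.
        quoted_author: str
        quoted_text: str
        # Optional pointer back to the original; may stop resolving if the
        # original is deleted, while the snapshot above survives regardless.
        quoted_uri: str | None = None

    post = QuoteByCopy(
        text="this take aged poorly",
        quoted_author="@someone.example.com",
        quoted_text="nothing can possibly go wrong here",
        quoted_uri="at://did:plc:placeholder/app.bsky.feed.post/abc123",
    )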
Since I'm someone who believes in consent, if someone else doesn't want me to reshare their post with my audience, then I respect their wishes. Particularly since that's the default.
It's possible to override this with a screenshot, but it's a clear escalation. The question is, how often do people do that? Does it become routine, or an exception? If you do a screenshot, maybe it would be a good idea to blur the username, if it doesn't really matter?
When people stop caring about mutual consent, things get messy, but I think tools that assume it are a good default that avoids unintentional conflict. Maybe it's a default that most Bluesky clients will follow?
Compare with robots.txt: if you don't want your website crawled, okay then, says the well-behaved crawler.
(I might want a client that keeps a snapshot regardless, just in case, but doesn't show or publish it.)
I like the robots.txt comparison; at root, it is about respecting someone else's wishes.
I'm trying to decide how I feel about this quote-unlinking feature. Something about it strikes me as undesirable, kind of insulting, like the way I'm treated at airport security: presumed to be a bad actor.
Of course, social media is rife with bad actors and harassment campaigns, so bsky is trying to create a world with tools to prevent that, but it just seems very reactive - here's one way harassment happens, so we're building a button into every post that prevents that particular tactic. Makes me think bsky is not forward-thinking or creative, but just patching the old ship.
Last time I used Twitter for a couple months, I feel I became enculturated very quickly into the popularity contest of picking a gang on some divisive issue and then making arguments, or insulting arguments, for internet points. (Twitter is a very metric-forward interface; it's hard to use it without adjusting your behavior to try and make the numbers go up, and you make the numbers go up by jumping into a contentious drama and shooting your shot, aiming for reshares, requoting an opponent to dunk on them / make an example out of them.)
I noticed this behavior change in myself - looking forward to getting into internet arguments - and deleted my account. It was a waste of time for me, but the people inside the matrix seem to enjoy it.
The point I'm attempting to arrive at is that the design of Twitter encouraged this kind of behavior where you're ganging up on each other, so I find it foolhardy to duplicate the general design of the tool and patch it up in places, hoping that people use it differently than the original.
They want to be a Twitter clone without the bits that made Twitter interesting and addictive.
(I ran in anti-e/acc and x-risk circles and also argued with people about Israel a lot, your cultural bubble may vary)
UI design comment: In their first example of detachment, two ways that "Removed by author" seems ambiguous:
* which author is meant (the quoter, or the quotee)
* whether the quoted post was removed, or merely the link to it
This ambiguity not only requires learning by all users of that platform, but will also be confusing to non-users, such as when screenshotted on another venue.
Something like "quote unlinked" is more terse/cryptic/technical, but at least it's something people will have to look up the meaning of, rather than misinterpret by default.
>The Bluesky app won’t list all the quote post removals directly on your post, but developers with knowledge of the Bluesky API will be able to access this data.
Seems like they have some good ideas for how to deal with quote posts/tweets. But I don't understand how decisions like this are compatible with their hopes to eventually make the network decentralized. Surely you can't trust individual servers to obey this anti harassment feature? Are they no longer planning to make it decentralized?
The way decentralization works on Bluesky is similar to how the web itself works.
The part that's decentralized is users' storage of data — anyone can host it themselves (we provide both a Docker image of a server that can do that, and the source code for the server, and a spec in case you want to write your own).
However, applications (such as Bluesky) are centralized in the sense that there's just one instance of each application and it can make choices about how to aggregate the decentralized data from the network. A useful analogy is how Google crawls the web into its index but can present stuff from its index according to its own rules.
In this case, quote post removals don't remove anything from the user repositories (you can't remove data from someone else's repository, just like you can't remove data from someone else's website). Rather, when you detach someone's quote, you're putting a record into your repository (declaring the intent to detach it). Our app (both server and client) respects that intent and displays it accordingly.
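Roughly, a detach ends up as a small record like the following (a sketch -- field names approximate the published "postgate" lexicon, but treat them as illustrative rather than a spec):

    # Detaching a quote writes a gate record into YOUR repository; the
    # quoter's post itself is untouched and stays in THEIR repository.
    detach_record = {
        "$type": "app.bsky.feed.postgate",
        "post": "at://did:plc:you/app.bsky.feed.post/abc123",  # your post
        "detachedEmbeddingUris": [
            "at://did:plc:them/app.bsky.feed.post/xyz789",  # their quote post
        ],
        "createdAt": "2024-08-28T00:00:00.000Z",
    }

    # A well-behaved client resolving the embed checks for such a record
    # and renders a "removed by author" placeholder when it matches.
    def should_hide_embed(quote_uri: str, gate: dict | None) -> bool:
        return bool(gate) and quote_uri in gate.get("detachedEmbeddingUris", [])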
Indeed, other clients could choose to not respect that, but the default behavior for most users matters a lot — and in general good faith clients try to be well-behaved and align with users' expectations.
Breaking out feeds and moderation are also interesting design decisions that allow an app like Bluesky to give users more control and to select algorithms from outside the centralized application.
Best explanation of the network topology yet. If I'm reading this correctly, this removes the problem of not being able to participate in the global network just because the application you chose to interact with it through is considered an "evil server" and banned from participation, like on Mastodon. In other words, user accounts aren't associated with a home server that determines who they can interact with. Is that right?
How does an application decide whose repos to index?
Same logic as a search engine might apply to the Web - some kind of crawling rules. That said, the network tends to use Relay services to optimize this. The Relays are essentially dumb caches that provide a firehose from multiple servers. We introduced them to make it easier for new applications to pull from the network, but they're optional; you can set up your own relay or crawl/subscribe directly with servers.
EDIT: to answer your question - this means the relay tends to decide what gets indexed.
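To make that concrete, here is a minimal sketch of tailing a relay firehose (the host and endpoint are assumptions based on the public relay's com.atproto.sync.subscribeRepos stream; frames are DAG-CBOR, which a real indexer would decode with an SDK rather than just counting bytes):

    import asyncio
    import websockets  # pip install websockets

    RELAY = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"

    async def tail_firehose(limit: int = 10) -> None:
        async with websockets.connect(RELAY) as ws:
            for i in range(limit):
                frame = await ws.recv()  # one repo event, DAG-CBOR encoded
                print(f"event {i}: {len(frame)} bytes")

    asyncio.run(tail_firehose())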
It is. Everything in bluesky that runs on the AT protocol (like 99% of it) is currently public.
So if you as the OP detach your post from the QRT, all you are doing is posting a certificate (akin to a revocation cert) that declares the intent that you don't want their QRT to be connected to your post.
An app can still show it. Or it can show it behind a warning prompt. Or it can hide it entirely.
It's an intent, and even if it were enforced entirely, a determined third party could still work around it, so there's not much point in trying to stop them. Instead you operate on "Don't be a dick" principles.
But would any other client bother to implement this feature? Helping me not see toxic bullshit helps me as a user and I might opt in to various content curation schemes; but, if I am already looking at something, removing the context for what they are talking about does not help me. It is like, imagine if I was permalinked on HN to a flagged comment, because someone wanted me to see it, but the "parent" button didn't work.
Most people will be using the official app or the web site and if you're really dedicated one of the web archives will probably have the post and the quoted post archived.
It's not about being perfect, it's about cutting off wide masses from abusive behavior.
If BlueSky truly believes that there will even be an "official app (or the web site)" for their decentralized protocol--the one they claim they only even developed a client for at all in order to promote the usage of--then they have already fallen off mission.
The AT protocol is agnostic of Bluesky or Bluesky-specific content.
Different applications using the AT Protocol can publish records that have no relation to Bluesky posts, Bluesky follows, or other Bluesky concepts. For example, https://smokesignal.events/ is an AT protocol app that produces and aggregates its own record types ("events" and "RSVP"s).
So yes, there can't be any meaningful "official protocol client" (because the protocol isn't tied to a specific app).
However, realistically for each app (such as Bluesky or Smoke Signal) there'll usually be the most popular client (and the one we're developing is "official" in the sense that it's one we put on the app store under the Bluesky brand).
People can build other clients for Bluesky, but more importantly, they can build other apps on the protocol which have no relation to Bluesky (but can still ingest Bluesky data if they want to).
> People can build other clients for Bluesky, but more importantly, they can build other apps on the protocol which have no relation to Bluesky (but can still ingest Bluesky data if they want to).
Additionally, these apps can benefit from the distribution, moderation, and data hosting portability. ATProto allows for shared infrastructure across apps.
> If BlueSky truly believes that there will even be an "official app (or the web site)" for their decentralized protocol
The utter majority of people aren't nerds. They go on the App Store, type in "bluesky" and install the first hit, and that assumes they've heard of it in the first place.
Reddit, even before the Great API Crackdown, was just the same. The utter majority used the official client/website no matter how horrible it was to use. I'm honestly surprised old.reddit.com (the one with barely any JS) is still alive and kicking; new.reddit.com (the in-between) for me keeps alternating between "it works" and "it redirects or shows the new UI that fails all the time with GraphQL errors"...
These release notes are about the official apps and website. Yes, other apps could do whatever they like, in theory. But they're concentrating on UI improvements to the official apps, and I don't know of any others.
Nobody builds a centralized app and then successfully decentralizes it. Nobody. If you want a decentralized network, you build a decentralized network - and then centralize it to the minimum degree necessary for it to actually work and scale.
That's exactly what they've done. Bluesky is a decentralised service. It was designed to be decentralised. It's currently centralised in a limited capacity because they didn't open up federation right away.
First they allowed DID federation (i.e. you can host your own DID by tying it to your domain name). Then feed curation. Then data storage. Then moderation features. And so on. There's really only a few things left and those are systems that are designed in a decentralised manner but for the sake of a smooth transition have yet to be handed over fully to the community.
Would you say activity on the web is getting more or less centralized? I wonder what the general public even perceives as “on the web” versus eg “a post on instagram”
Email is distributed, but either you pay Google/Microsoft to host it for you, or you host it somewhere else... that carefully and proactively follows their rules.
Anyone have a good analysis on the various strategies different sites have tried over the years? Feels like it would have to include threaded versus non-threaded conversations, as well. I don't typically think of that as a way to control toxic crowds, but it does help stay aware of conversations without having to follow down rabbit holes.
The only strategy that works is a tight-knit community that is hostile to bad actors, combined with good moderation. Trying to automate away human negativity will never work.
I'm not necessarily asking about how to automate negativity away. I'm honestly not clear that you want to eliminate all negativity. Indeed, I am generally wary of policies that kick out members. It can help some things, sure; but not having a path to good is a problem for members, too. Similarly, not allowing any bad behavior is the "purity spiral" that has killed a fair number of communities, too. You don't want it to flare out in such a way that it causes trouble, though.
I'm also curious about the tight-knit community. I recognize some names here, sure. I think I would be stretching definitions to say I was part of this community, though? Specifically, I'm guessing there is, at most, a handful that would ever recognize my username?
The Somethingawful forums are still one of the best places on the internet; the $10 price tag on accounts helps reduce the bad actors (or at least puts a price tag on bad behavior) and thus makes good moderation easier. Sadly the free internet suffers from the tragedy of the commons; of course an entirely pay-to-access internet would be its own nightmare.
That is fundamentally wrong. Pushing content moderation onto every person who posts a message on social media is a chilling effect on people posting. Pushing content moderation onto every person who reads a message on social media is a chilling effect on people reading.
People are fine reading stuff they disagree with; it’s fun that way. They just downvote if they disagree. It’s not fun sitting around in a moderated hugbox where everyone is saying the same thing.
You're talking about a system of good actors that just disagree on what's being said; if you spend 10 minutes running your own site, you'll quickly learn there is a metric shitton of bad actors.
Run something unmoderated long enough and, depending on where you are, you'll have law enforcement knocking on your door.
>Downvotes are a perfect way to ensure you end up with a hugbox where people are afraid to disagree.
Hacker News has downvotes and people here disagree all the time. It's subjective, but it even feels like downvoting and disagreement have increased over time, so the presence of downvotes at best doesn't seem to have an effect and at worst seems to encourage disagreement.
In fact, I actually can't think of a platform for which your claim is actually true.
I welcome censorship on my personal site because I control the censorship.
Likewise I'd like the option of controlling the conversation for responses to my posts. You're welcome to say whatever you want about me - under your own threads.
You're not going to get much sympathy if your stance is censorship = bad
> Upvote / downvotes are good enough on their own
Not sure I've seen this work. It's not good enough for Reddit or HN, both of which have heavy moderation.
If I couldn't run my own site with social features and reserve the right to kick anybody out "just because", I probably wouldn't run a site with social features.
For a small site the argument is that there are many other small sites you could go to if you got kicked out of one or the other.
For a site like Facebook or YouTube, however, it starts looking more and more like an essential public utility. I think of a case in my town where there is a person who is being a real jerk to their neighbors who run a "BiPOC garden". They've filed a federal discrimination case which probably won't accomplish anything because discrimination law applies to landlords, employers and similar gatekeepers -- like the stork said in this book
Having seen many, many comments in my lifetime that have ultimately been removed due to moderation: yes, we do seem to need someone babysitting us. Without moderation, bad actors end up ruling all spaces.
Bad actors are especially egregious in spaces dedicated toward a niche topic. Want to talk about hiking in a hiking forum? Nope! Ends up in culture war talking points and racial epithets.
That might be the type of communication platform you'd like, but the vast majority of people disagree. Good moderation is a godsend, and dismissing it as "censorship" is a thought-terminating cliché.
> Hacker News only works because it is heavily censored.
...by the community, which is not "censored" by most definitions of the word, which almost always refer to a single powerful entity doing the censoring.
Those distinctions have absolutely no basis in reality.
Censorship is merely the act of limiting access to information. It doesn't matter who is doing this. Yes, governments often do it, but so can private entities. An individual can even do it to himself, in the form of self-censorship.
Moderation is the application of censorship in order to modify a discussion. Once again, anyone could potentially do this, including governments. An individual can do this to himself, too, by selectively choosing which parts of a discussion to consume.
I've been timed out by dang for inciting flamewars; I took my time out like an adult and learned to improve my communication.
If you don't like the moderation policies of HN, there are plenty of other places you can make comments. Continuing on your "censorship" opinion is likely to earn you more downvotes and flags from other users
Shadowban everyone and everything. Use moderators to instil a regime of discourse that puts certain concepts on an unjust pedestal. Use people who agree with the message as missionaries and AI/fake posts as fuel.
Sell this as a functional system of discourse and attach ads to the product so people have no need to login.
The recently introduced Starter Packs let people collect groups of accounts on a topic for new users to quickly find and follow.
They seem to be holding off on indexing these, as well as the block lists, to prevent abuse. I saw they wrote that somewhere, but cannot find it at the moment.
Well I was considering creating a Bluesky account, but now I definitely won't. I don't want my comments getting purged just because I argued with the author of a post.
This isn't "Anti-Toxicity" in as much it as "affirmation bubble creating". Your followers become your yes men and anyone arguing with you gets their comments purged from anyone seeing them.
> This helps you maintain control over a thread you started, ideally limiting dog-piling and other forms of harassment. On the other hand, quote posts are often used to correct misinformation too. To address this, we’re leaning into labeling services and hoping to integrate a Community Notes-like feature in the future.
I'm confused. It seems like they're acknowledging why this is a bad idea yet are proceeding with it?
If you don't want people to be able to reply critically to a post, why not have a private profile or not post it in the first place?
>If you don't want people to be able to reply critically to a post, why not have a private profile or not post it in the first place?
Exactly. Someone will post something abhorrent, critical replies will be removed, supportive replies will stay. It will make echo chambers worse than they ever have been. Then again, that may be exactly what the users want.
Reddit is a good example of how insane that can make discourse look in some subreddits, with one large one in particular showing how you truly can just engineer a faux consensus by deleting and banning every single comment and commenter that differs even slightly.
If we're going all in on how they've ruined their site, their change of the default sort to use a variety of engagement metrics rather than upvotes is why the frontpage is mostly ragebait and gossip posts, as those drive engagement metrics like comments, replies, and time spent reading comments.
There's also the fact that nearly 100% of frontpage posts are pruned by moderators within 12 hours now. Tracked by a subreddit (RedditMinusMods) that reddit banned without reason, but you can still find a link to a graph of that happening on HN of all places: https://news.ycombinator.com/item?id=36040282
Reddit enforces echo chambers mainly on two levels, the user level and the mod level.
Nobody respects etiquette anymore; rather than upvoting valuable content, they upvote whatever they like. As an extreme example, you can make the best, most honest, rational argument on a political issue, and if the users don't agree with your stance it's going to be downvoted and hidden. Similarly, they will tolerate content that might be against the rules or the spirit of the subreddit if it's something they agree with.
On top of that, the vast majority of the time moderators enforce echo chambers themselves through bias, with a few of them going as far as banning every single user that posts in communities they disagree with, even if they have never engaged with the community they themselves moderate.
The corners of Reddit I inhabit do not suffer from these issues.
I spent a lot of time on usenet in the 1990s discussing politics there (mostly talk.politics.theory which was riven with libertarians). Given that I've lived another 40 years since then, I would simply not bother to do this anymore. Mass discussion of political issues is, in my eyes, mostly a dead end.
By contrast, locale-based subreddits, equipment-based subreddits, how-to-based subreddits remain, in my experience, relative gold mines.
Locale-based subreddits are some of the worst sub-reddits available.
You mean places like /r/korea, /r/australia, /r/melbourne etc. right?
They are the bottom of the barrel and the biggest wasted spaces due to moderator power trips and propaganda. Seriously, the moderators at these places are absolute shut-ins that subscribe to very extreme ideas and ban anything slightly away from what they believe in.
For example, Australia day is a day that celebrates Australia the country. You can be banned on Australian sub-reddits for saying "Happy Australia day" (something most Australians do). This is due to some insane, extremist ideas about Australia day.
Another example: /r/korea will silence anyone who does not agree with the US's constant overwriting of Korean culture and Korean social mores. To speak on topics such as whether Korea should legalise drugs (it would be a disaster to do so, but Americans gonna America) is a bannable offence.
Love how he used locale-based smaller subreddits as some gotcha when those are some of the most heavily propagandized and censored subreddits. He comes off as one of those "I'm too smart to be propagandized" types, who won't believe the narrative has been shaped in his small subreddits even when proof is staring him right in the face.
I don't know what you read, but it wasn't anything that I wrote.
I've written only about my specific experience. I am not too smart to be propagandized.
Big subreddits clearly have major issues, and I acknowledged that. Presumably some small ones too. The ones I tend to hang out in ... I don't see any of the issues discussed here. There are no overarching mods, there is no groupspeak (to speak of), there is little to no banning/blocking.
So sure, maybe you have experience of smaller locale based subreddit that does. That's fine, no argument from me.
It seems that a common thread is that Reddit (and Redditlikes) fail when the topic is too big.
I generally use Reddit for small topics. My home town is 80,000 people embedded in a county with a total of 150,000. Our locale-based subreddit works fine (admittedly with a lot of predictable and repeated whining from certain demographics). Our moderators are rarely in sight or even detectable.
it's sorta like reading the breitbart comment section but it's all left wing. there's some good stuff, but there's also good stuff on mass email chains if you read enough of them.
> Exactly. Someone will post something abhorrent, critical replies will be removed, supportive replies will stay.
Precisely and this is the misguided approach to moderation I see everywhere - it basically turbo charges cry-bullies, trolls, and people devoted to spreading misinformation.
It is confusing language but basically they're saying "we're giving you this ability, but will be adding a new feature to fix this unfortunate side effect".
I think it will cause a shift to "screenshots of posts" as is done on Twitter/X when the retweeter does not want to @ / mention the original author.
That's perfectly fine. Screenshots as posts deter 90% (my guess) of people from going through the trouble of finding the original post/author, but still allow dedicated people to do that. It reduces the ability to trivially post mindless shit to someone on a whim. I totally believe that this approach can significantly increase the signal/noise ratio, without allowing anyone to silence anyone else arbitrarily.
This feature doesn't change that. Whoever wanted to doctor a screenshot will still do so, and whoever wanted to post a real screenshot can still do so.
Now, if someone wanted to quote-reply with the legitimacy of a link but now has to make do with a screenshot that their readers may not trust, well, sounds like they haven't earned a good reputation among their readers, and that's entirely their problem.
They're acknowledging that it has some downsides, that they think they can resolve through other features that they'll develop later.
> If you don't want people to be able to reply critically to a post, why not have a private profile or not post it in the first place?
The problem isn't people replying critically, it's people being assholes on the internet, leveraging their follower base to harass someone. This feature gives tool to the bluesky posters to protect themselves against this form of harassment.
Hence why they're going with it despite the downsides: it's a necessary feature for people to protect themselves against harassment.
> The problem isn't people replying critically, it's people being assholes on the internet... it's a necessary feature for people to protect themselves against harassment.
The problem is that "harassmemt" is a nebulous term. Many people inappropriately (in my opinion) claim any sort of dissenting voices are "harassment".
I'd dare say: social media promotes the idea that any sort of dissenting voice is harassment. It's not a spontaneous belief on the part of the people who claim that.
Dissenting voices from a ton of anonymous people looks a lot like harassment, especially when their expectations for what normal debate and politeness look like don’t match yours.
Of course. But there are also very real cases of people quote-posting someone with an opinion they don't agree with, leveraging their follower-base to start dogpiling. Even without the harassment problems, this is already not a very good way to have a conversation, as you'll end up with a ton of people with the same viewpoint replying. But then things can go even further with people starting to send death-threats and other niceties of this kind to the OP - which is definitely harassment territory.
Currently, there just aren't many good tools to protect yourself against this kind of dogpiling.
---
Personally, I think both are necessary: A tool to protect against using quote-posts for dogpiling, and another to correct misinformation.
Yeah, it's of course legitimate to argue against some specific tool or strategy to combat harassment, like the changes proposed here.
But your post sounded as if the whole concept of "harassment" was problematic, due to the risk of people abusing the anti-harassment tools for censorship. I agree the risk is there and needs to be addressed, but situations of real harassment and "mob dynamics", in which an individual user suffers real harm but can't do anything about it themselves, also happen frequently. I think it's dangerous to ignore those by throwing the whole concept under the bus.
On Reddit I mostly see this feature used for political manipulation. Especially by people who to me are obviously propagandists.
It's a very common pattern for me: I encounter something, write a reply I think is superb, see it highly upvoted, and then see the person I replied to delete their comment.
It's incredibly effective-- you get your propaganda out, and when it gets questioned it just disappears-- the lies will have affected many and the rebuttal seen by few.
I think they’re intending to address something like THE X GROUP DID Y THING TO THE Z GROUP (which we will assume to be false) by allowing for factual corrections via “community notes” and “labels” that I assume the T&S team plans to control. Depending on how the specifics of this future plan manifest, I think this could be useful for quelling legitimately adversarial or genuinely uninformed and harmful misinformation. I still worry about the bias and transparency of whoever controls these labels, and the general potential of these features to enable the creation of echo chambers, but at least this will all be transparent and observable via the developer API, so factual investigation and reporting can be done on how this actually plays out in practice. I hate censorship and the general cowardice of people who would abuse these features to stifle legitimate dissent, but I think these features are a good compromise between that and the opposite, real problem of malicious/harmful misinformation.
bluesky currently doesn't support private profiles, so that's a no-go.
while i fundamentally agree with what you're saying, the cases where a quote-skeet contains a polite or well-intentioned correction vs. is explicitly designed to enrage the quoter's follower-base and dunk on that person are way out of balance.
there are many people who will, rather than attempt to correct a misunderstanding (or simply ignore an opinion they don't agree with,) immediately jump to quote-tweeting and dunking on someone, trying to drag their followers into a dogpile. it's often just blatantly unnecessary and it is frequently a miserable experience to be on the receiving end of that.
i worry i'm coming across as lamenting cancel culture or something - i'm not - but i've absolutely had cases where someone with a different flavor of an opinion i agree with will see something i said, take my slightly different view on it, and quote skeet it with some nasty comment tearing into me. their friends, with no context of the discussion i was initially having, will just dive in and tear into me because they are clearly having fun doing it, not because it's actually correcting a problem or resolving a misunderstanding. i frequently have this happen from completely random passers-by to whom i wasn't speaking and who really would have no reason to see my post in the first place. i'm really not a controversial poster on bluesky, either. i really don't think i'm saying anything particularly offensive or earth-shattering. it's just a dumping ground for shower thoughts, mostly.
great example of this is some guy who randomly found a post i made about the fact that i was disappointed in joe biden's debate performance, which i'd say is a pretty uncontroversial opinion. he said some nasty stuff, i made the mistake of responding trying to explain myself, and then he just started replying to me exclusively in quote-skeets, berating me and mocking me, and the gaggle of idiots following him just dove on in and also decided to tear in to me for reasons that still don't make a whole lot of sense.
all this to say, if you've had any meaningful interaction with folks on these platforms, i feel like you understand the idealized version of a quote post and how far it is from how it's most frequently used.
Another real world example: A famous comedian once grossly misunderstood something I said, quote tweeted it with a framing based on that, and sent tens of thousands of views, hundreds of replies, and thousands of notifications my way over the course of a few minutes. All extremely hostile.
It can go well: SwiftOnSecurity once QTd me and their comment led to lots of good conversation on the quote and in my own notifications. But they cultivate a good community of followers and understand the responsibility they hold with a 6 digit follower count.
The thing about quote posting is that the very act completely collapses context, and puts all the power and responsibility on the quoter. An innocent mistake can be as harmful here as malice because, as I experienced, no amount of trying to address the mis-representation to people coming at me helped because I was just some nobody and someone they trusted told them what's what.
> The thing about quote posting is that the very act completely collapses context, and puts all the power and responsibility on the quoter.
There have been plenty of circumstances online where I was, in fact, the fool. I've had many interactions where I benefitted from being corrected. Almost universally, these corrections came in the form of a reply or private discussion designed to engage me specifically and not in a performative manner, and I found them very helpful.
I see less value to someone having the right (or perceived duty) to amplify my meager reach while correcting me in front of people who don't know me and who lack any empathy for me. I don't see the difference between this and someone, during a conversation in a bar, standing up on the bartop and yelling HEY EVERYONE, CHECK OUT WHAT THIS GUY JUST SAID! and then loudly correcting me while egging everyone else on.
I have questions about the impact of this feature on more famous figures - for example, I'm not sure I'd want a politician to employ this functionality against me. There is some balance that likely needs to be struck that takes into account the reach and impact of the original message and thus the potential for damage. They seem to be touching on that with the future community notes feature. I can see why that's a more appealing approach. That allows the opportunity for correction/response while not actively promoting the thing that's being responded to - just providing context when people come across the post organically.
I've had two pull-aside moments exactly like this on social media! They changed me for the better. The pull out and up of a quote never had that effect even if that's the stated intent of its most ardent defenders.
Sounds like they are making a tradeoff of creating community bubbles or silos in order to limit unwanted interactions.
Seems like safe spaces online, where people can be happy. Perhaps they will be sharing opinions and information that are incorrect, but they won't be made unhappy by people they disagree with providing the facts.
This is a terrible idea. They even note it by saying "we’re leaning into labeling services and hoping to integrate a Community Notes-like feature in the future".
Why push an update that is undoubtedly a net negative when the company itself suggests that it's not the best solution?
What makes you think this is a terrible feature? This is a federated service, and the company is actively attempting to get the service to a state where it can exist independently of them, preferably without the service fracturing into a bunch of islands, mostly separate from each other (cough mastodon cough).
What solution would you propose that's better than this one and how is this solution a net negative (personally I think labelling services are a great approach to the problem)?
Community notes have been very effective in retaining public trust while combating misinformation on Twitter/X.
This is the polar opposite of that. Delete all opinions you disagree with, no matter the reasons.
Echo chambers are not productive. Full stop. It will lead to worse outcomes.
I would have a different opinion if they weren't advertising this as a general-purpose way to shut people up. This could work if it were just a method to remove "rule-breaking" posts where some authority would still evaluate the reason for removal.
Community Notes, and their abuse, is one of the reasons I left Xitter. I was a CN editor and saw that it just became another battleground for ideology
Bluesky, with ATProto, is putting more control into the author and user hands. You may not like this design change, but many people do. The nice thing about ATProto is that you can write a different UI that doesn't hide or allow for this, and still belong to the social network fabric at large. You don't have to rebuild a new, separate social network from scratch
They already stated they are planning on adding community notes along with further extending their labelling services.
Community Notes doesn't address dog piling though. And unlike twitter, Bluesky & ATProto have no concept of private accounts so when you get dog piled you can't just go priv like you can on twitter until things cool off.
This is specifically addressing that problem. i.e. Addressing toxicity. It's not about trying to correct misinformation.
The post mentions misinformation in this section only in the context that QRTs are a common way of addressing misinformation, and that allowing you to detach could reduce QRTs' ability to do that.
So while Community Notes should be a preferred method of addressing misinfo, it doesn't do anything to allow users to disengage or avoid toxicity (which is the point of this blog post).
If your goal is to create a space where people can only say nice things and only agree with you, then yeah this a great method to do it. It's a great way to create a tiny bubble of like-minded people and allow you to spiral ever deeper into the bubble as outside debate and challenging of assumptions becomes disallowed.
"Harassment" and "disagreement" are two separate things, and the fact that you're conflating them here tells me you're not a member of a marginalized group
I don't think it's especially polite to assume someone's degree of marginalization, marginalized people can also disagree on what moderation tools are desirable
This tool cannot prevent harassment and allow disagreement. The author can unilaterally choose what to remove, no matter the reason. When you give someone this capability, you will remove discourse and guarantee echo chambers.
TBH I think part of this is for very large accounts, where there is literally just too much engagement to deal with. You kind of need to be able to filter and create your echo chamber to preserve your sanity and use the application the way it is intended to be used (not as some kind of updates-only account). Personally I highly doubt I will ever have a problem where I either need to force engagement with someone (seems unhinged) or need to click extra buttons to disengage from someone, unless I'm being pushed to do so by e.g. an excessively rude person or someone who is behaving unhinged at me.
Like these are low stakes. If someone wants to hide polite disagreement, that's cool, they're managing what it means to be their follower and if I don't like it I can stop following them. It has nothing to do with me personally and they're not obligated to receive disagreement because I want them to, that's a weird position to take.
I think "microblogging" format, such as Bluesky, Twitter, Mastodon etc, just seems to bring out the worst in people, I don't think it's possible to combat it properly when the format itself encourages it.
Glad it made you laugh, but in my experience as a moderator, these types of efforts result in the complete opposite and only empower trolls.
Like this is 10000% going to happen:
Someone makes an inflammatory tweet. Bunch of replies, back and forth. The original poster will get someone to say something absolutely horrible when taken out of the context of the argument, hide all other replies except the horrible one, and act like a victim (while continuing to post inflammatory rage bait).
Trolls spreading misinformation will use this to hide any criticism. Serial scammers will do the exact same thing.
Not one single thing about this will reduce inflammatory and combative behavior on the internet, let alone bluesky.
No, the comments aren't deleted, just hidden for you and your own followers. And it will hide you from the hidden comment owner's community too.
Think about it though: how do harassment campaigns start on Twitter? Do you have someone actively pushing his followers, like "you should reply to all his comments and spam his DMs!", or is it more like someone responding strongly to your tweets, then his followers starting, without coordination, to spam you?
> No, the comments aren't deleted, just hidden for you and your own followers.
I understood this, and nowhere in my post did I say I thought they were deleted. "Hiding" is barely better, and the effect is essentially the same since people aren't often going to go trawling through hidden replies.
Nothing about these changes prevent or mitigate the situations you are describing.
I do think the second situation is the most prevalent by far, and I think that hiding the original tweet from bloodlusty followers would add enough friction to severely limit harassment, don't you think? Find the tweet your edgelord hero is responding to before writing an insulting DM?
In the first situation, I agree you can't do anything, but isn't the second one well mitigated? And if you don't think so, why?
Sorry, I'm kind of unsure I am following the scenario you are describing here - are you expecting the edgelord hero to have the goodwill to hide the comment so his followers won't harass? Or are you saying that the person potentially being harassed can hide comments to avoid being harassed (not clear this would even work)?
All of these measures, however they are expected to work, can also be done with blocking.
The person being harassed hiding the edgelord's comment would also hide their post from the edgelord's followers, making them harder to find. That's the feature I think is extremely useful. The rest, meh.
But all it takes is the edgelord or one of his followers screenshotting the comment and retweeting it before it’s hidden and this is effectively nullified. Their followers know the username, what’s actually being stopped here? this type of thing does nothing but hide information and empower trolls.
The plus side is that it empowers content creators to control the conversation, which I truly believe is a good thing, however in practice I find it completely abused especially at scale by bad actors. I do truly hope they reverse course on this decision.
People who would do this are a minority (I hope): that would be active harassment, and my experience is that in most cases, the community doing the harassing doesn't really mean to; it's an outrage side effect.
No messages are deleted; it's basically just removing the link between two posts to limit heat in the discourse.
For me, Twitter/Mastodon/BlueSky are not really discussion boards and more announcement places, so the removal of causal links does not disturb me, but I guess for you it seems more discussion-oriented, which I did not understand in my first message.
Hidden, by all accounts, is a deletion on a platform like this.
US free speech allows someone to protest on public property (within reason, of course). If they could only protest in their home, would that be free speech? By forcing them to "hide" the protest from public view, you've essentially removed their capability of protesting.
I wasn't clear, but to me that thing you're focusing on isn't important, the 'remove your tweet from the hidden person timeline' is.
Let's say you posted pro-Y content (I wanted to say pro-X, but Elon kinda ruined X as an undefined variable in public discourse :/) that a pro-Z influencer with 10M followers dislikes and heavily engages with. You _will_ receive at least thousands of DMs from this guy's followers (this freaking happens even on non-political subjects, btw). You would then have to change your usage of Bluesky. Unless you use this new functionality on the influencer's tweet. Then his followers would have trouble finding you, and unless the harassment is organized (it isn't 99% of the time imho), you will be able to continue your life as if nothing happened.
It's really nothing to do with "controlling the behavior of others".
What I'm seeing on Mastodon is a desire by quite a few people to use these open, public messaging systems as something more akin to private mailing lists. They have no desire for anyone-can-read/anyone-can-reply, they just want a nice little circle of people who all agree to be a part of the circle and to have the others there too.
I believe that many of the people who want this are too young to have ever been on old listserv-style mailing lists, and for them, this sort of messaging technology is obviously how you would do this sort of thing.
I don't think they are right, but I also see how they would think this, and they're not clearly wrong.
Twitterlikes seem to straddle the space between a public forum, with open discussion on particular topics, and a private blog, where the author moderates the comment section on their posts.
That's correct, but what are the alternatives? Most people aren't going to do it via email, so Listserv won't work. And they want convenient features like links and images.
I'm not a social media guy so I truly don't know if alternatives exist. Google Plus is the only service I know that seriously tried to solve the problem. At the moment Mastodon seems like the simplest approach. If they could define "circles" (that applies across servers), then it would be ideal.
I'm on server A and you are on server B. I define a circle (and own it), and I add you to it. The system ensures that only people within the circle can see the posts, and you cannot reshare outside the circle. If someone in the circle is misbehaving, I can kick them out. Perhaps add some moderation capability (within the circle).
You can argue that this is just a "Mastodon network within a Mastodon network", and you'd be correct. It still acts mostly like a "private" list.
Lots of other subtleties to figure out, but this alone would be a great start!
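A toy sketch of the core check (all names hypothetical; nothing like this exists in Mastodon/ActivityPub today):

    from dataclasses import dataclass, field

    @dataclass
    class Circle:
        owner: str  # e.g. "alice@server-a.example"; members can live anywhere
        members: set[str] = field(default_factory=set)

        def add(self, handle: str) -> None:
            self.members.add(handle)

        def kick(self, handle: str) -> None:
            # Owner-only in a real system; per-circle moderation hooks here.
            self.members.discard(handle)

        def can_view(self, viewer: str) -> bool:
            # Every server hosting a circle-scoped post would run this check
            # before serving the post, and would refuse to federate it (or
            # reshares of it) outside the circle.
            return viewer == self.owner or viewer in self.members

    circle = Circle(owner="alice@server-a.example")
    circle.add("bob@server-b.example")
    assert circle.can_view("bob@server-b.example")
    assert not circle.can_view("mallory@server-c.example")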
That's a way of adapting the fundamentally public design of the Twitterlikes to be like a listserv.
I'm not convinced that this is where we should be starting if creating a 21st century listserv-alike is the goal. You're taking a technology that was fundamentally conceived of in a "push-to-public" mindset and trying to fit it into a "push-to-the-group" mindset.
> You're taking a technology that was fundamentally conceived of in a "push-to-public" mindset and trying to fit it into a "push-to-the-group" mindset.
The benefit is you get to create one account that you can use in many groups/circles, while still posting things to the public and interacting with random users who are not in any of your groups.
It's like an inverted WhatsApp - the difference being that WhatsApp is private by default, and doesn't allow you to post to the "public". It also lacks features (e.g. threading).
> The benefit is you get to create one account that you can use in many groups/circles
Like an email address!
More seriously, fair point about the comparison to Whatsapp. But that gets at why I think this is the wrong direction to start from.
My gut instinct is that if you want what I think people want, then neither Twitterlikes nor Whatsapp/Telegramlikes are where to start (they do, however, serve as a series of lessons in UX and much more).
Except I cannot send an email to the whole world, as I angrily learned after switching to email from BBS's ;-)
Also, there is a reason every explanation of Mastodon says something to the effect of "your handle is like an email address - it's tied to a provider" :-)
Now that Twitter doesn't let anyone see anything without being logged in, the concept of "anyone can read, a permissioned few can interact" is growing on me. I still find useful information from listservs archived on Google Groups; if we can keep something like that going for the next century, we'll be better off than the timeline where everything happens in a Discord or Slack, and when it's gone, it's gone.