Hacker News

No, they send 'envoys' to other social media sites to argue with and harass anyone who speaks up against them anywhere on the Internet. Think "flash mob" crossed with "sealioning^", and look to Gamergate as the first to cross that particular line. For example, there are a lot of green accounts posting comments here about where to find their new community.

^ Sealioning: https://wondermark.com/1k62/




That's honestly one of the Big Questions™ about stuff like Mastodon: how resilient is it against that sort of behaviour?

On a lot of levels, it seems to be designed to let multiple groups share blacklists, so that if a given user is a bad actor anywhere, then as long as other groups trust the group that user initially burned, word gets around and the door closes in their face before they even say a word somewhere else.
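That trust-based sharing can be sketched as a toy model: each instance keeps its own ban list plus a set of peers whose bans it honours. All of the names and structure here are illustrative assumptions, not Mastodon's actual federation code or ActivityPub behaviour.

```python
# Toy model of shared blocklists between federated instances.
# Purely illustrative; not how Mastodon actually implements this.

class Instance:
    def __init__(self, name):
        self.name = name
        self.blocklist = set()       # actors this instance has banned directly
        self.trusted_peers = set()   # instances whose bans we also honour

    def ban(self, actor):
        self.blocklist.add(actor)

    def trust(self, peer):
        self.trusted_peers.add(peer)

    def is_blocked(self, actor):
        # Blocked here, or blocked by any peer we trust: the door
        # closes before the actor says a word on this instance.
        if actor in self.blocklist:
            return True
        return any(actor in peer.blocklist for peer in self.trusted_peers)

a = Instance("a.example")
b = Instance("b.example")
b.trust(a)                   # b honours a's bans
a.ban("troll@c.example")     # banned once, on a only
print(b.is_blocked("troll@c.example"))  # True: refused on b as well
```

The key property is that the ban only has to happen once, on the instance that was burned; every instance that trusts it inherits the refusal for free.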

That's really a sign of the times when it comes to internet communities - almost all of them take an "innocent until proven guilty" stance toward random strangers. Having been a moderator on at least one internet community, it's almost like being a police detective "building a case". You don't just get a whiff of "gee, this person is probably acting in bad faith, let's ban them" - you have to gradually build up a strong dossier proving that they're genuinely bad, otherwise (at least in the old-school communities I grew up in) it reflects extremely poorly on you.

But because of this, they're susceptible to all sorts of rudimentary social-engineering vectors, some almost as silly as "the same guy coming back five minutes later wearing a Groucho Marx mask and a slightly altered name - and fooling everyone".

-----------

I feel like that's really the novel thing about Mastodon et al. - being able to ditch those social norms lets people unload a whole raft of scorched-earth tools, like pre-emptive IP bans that follow people around and are in place before they even arrive.

They're dangerous tools - we could easily make ourselves susceptible to attacks where trolls get people kicked out of their own communities by impersonating them elsewhere - but it feels like the pendulum has swung way too far in the "gentle and understanding" direction for decades now.


How is Mastodon able to defend against abusive single-message throwaway accounts registered by a distributed swarm of malicious human beings?

My stalker of 20 years has registered a new account every time they contact me, specifically to abuse the "your first message is trusted and delivered" approach of every social platform they've stalked me on. If my stalker has known how to do this for decades, then clearly these forum folks know how to as well.

By my read, Mastodon is vulnerable to this distressing and threatening behavior as long as each account is treated as a throwaway, even if it’s banned as soon as it’s caught. This allows the flash mob of abusers to sign up for a mass of Mastodon accounts, send a single threatening abusive message from each to one recipient, and then throw away the accounts and start over. They would succeed in their targeted harassment goal, while Mastodon would - as it defaults to “allow untrusted third parties to contact anyone” - continue acting as a delivery platform for abuse that cannot be stopped.

If I’ve misunderstood and there is some aspect of Mastodon that protects against anonymous users being treated as innocent long enough to deliver an abusive message, that would be invaluable to know.
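One policy that would close this hole is quarantining first contact: messages from any sender the recipient has never approved are held for review instead of being delivered. The sketch below is a hypothetical illustration of that policy, not a feature Mastodon is known to ship; all names here are invented.

```python
# Hypothetical "quarantine first contact" inbox policy. Illustrative
# only; not Mastodon's actual behaviour.

class Inbox:
    def __init__(self):
        self.known_senders = set()  # accounts the recipient has approved
        self.delivered = []
        self.quarantined = []

    def approve(self, sender):
        self.known_senders.add(sender)

    def receive(self, sender, message):
        if sender in self.known_senders:
            self.delivered.append((sender, message))
        else:
            # Throwaway accounts never reach the recipient directly;
            # each one dies in quarantine, no matter how many are made.
            self.quarantined.append((sender, message))

inbox = Inbox()
inbox.approve("friend@a.example")
inbox.receive("friend@a.example", "hello")
inbox.receive("throwaway1@b.example", "abuse")
inbox.receive("throwaway2@c.example", "abuse")
print(len(inbox.delivered), len(inbox.quarantined))  # 1 2
```

Under this model, registering a fresh account per message buys the attacker nothing: the untrusted first message is exactly the one that never gets through.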



