I... don't understand? You were an unpopular speech platform. I worked in porn for 20 years. I know what it's like to get kicked out of AWS, and so does John Young. Why weren't they dockerized and prepared to move containers and backups at the drop of a hat? Why didn't they have hot spares elsewhere? There are literally thousands of hosting companies, running the gamut from penny-ante outfits to larger platforms like Rackspace, operating in a hundred countries, many of which don't know and don't wanna know what you're doing. The same is true, to a lesser extent, of registrars. And Apple/Google kicking you out of their stores? You can still bookmark a mobile-friendly web app onto the home screen of an Apple device. You're not going to lose Qspiracy fans and nazis just because they have to take an extra step to get to your site.
Nevermind that maybe, just maybe, if your platform users advocate for violent revolution, you're going to have a harder go of things. Prior to approximately October, 2001, there was a taleban.com too. I didn't read any hand-wringing when they got the axe. Newsflash: At least Tor can't remove you.
The simplest explanation is that they saw this coming a mile away (who honestly didn't?) and this is engineered martyrdom.
"The simplest explanation is that they saw this coming a mile away (who honestly didn't?) and this is engineered martyrdom."
I think it's more boring than that. Their platform was built by amateurs who didn't know what they were doing: things like soft-deleting posts and using auto-incrementing post IDs. This is why it was so easily crawled in a day before the shutdown.
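For context, soft deletion usually looks something like the sketch below (a minimal Python illustration; the Post fields and function names are my own, not Parler's actual schema): the row is never removed, a timestamp just marks it as deleted, so if any endpoint forgets to filter on that flag, the "deleted" content is still retrievable by ID.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    # Illustrative only -- not Parler's actual schema.
    @dataclass
    class Post:
        id: int                                 # auto-incrementing primary key
        author_id: int
        body: str
        deleted_at: Optional[datetime] = None   # None means "not deleted"

    def delete_post(post: Post) -> None:
        # Soft delete: flip a flag instead of removing the data.
        post.deleted_at = datetime.now(timezone.utc)

    def fetch_post(posts: dict[int, Post], post_id: int) -> Optional[Post]:
        # If an endpoint forgets to filter on deleted_at, "deleted" posts
        # remain retrievable by anyone who knows (or guesses) the ID.
        return posts.get(post_id)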
This is the real answer. It was very clear they didn't have the competence to build an app at this scale. It would have collapsed under its own weight before long.
The most common criticism I'm aware of is that it leaks information about your system and its users. For example, you can gauge growth by how fast IDs grow, and the relative age of accounts. The former is potentially financially valuable. The latter is often exposed deliberately through the UI, but that can be a choice, rather than something exposed by an implementation detail. There are likely other criticisms as well.
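As a toy illustration of that leak (the numbers below are made up, not real figures for Parler or anyone else): with sequential IDs, sampling the newest ID at two points in time gives you the creation rate directly.

    from datetime import datetime

    # Hypothetical observations: the highest user ID seen on two dates.
    id_on_jan_1 = 8_200_000
    id_on_feb_1 = 9_700_000
    days = (datetime(2021, 2, 1) - datetime(2021, 1, 1)).days

    # Sequential IDs turn growth estimation into simple subtraction.
    new_accounts_per_day = (id_on_feb_1 - id_on_jan_1) / days
    print(f"~{new_accounts_per_day:,.0f} new accounts per day")  # ~48,387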
It makes it easy to crawl. If you're trying to get all of the user profiles for a given website and all of the URLs look like foo.com/user/123 it's easy to download all of them, whereas foo.com/user/1234-abcdef1234-1234 is much harder to guess.
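A rough sketch of why that matters (foo.com, the ID range, and the delay are placeholders; a real crawl would also have to deal with auth and rate limits, as noted further down the thread): with sequential IDs the whole keyspace can be walked with a for loop, while random 128-bit IDs leave nothing to enumerate.

    import time
    import requests  # third-party: pip install requests

    # Hypothetical target and ID range, purely for illustration.
    for user_id in range(1, 1_000_000):
        resp = requests.get(f"https://foo.com/user/{user_id}", timeout=10)
        if resp.status_code == 200:
            with open(f"user_{user_id}.json", "wb") as f:
                f.write(resp.content)
        time.sleep(0.1)  # naive politeness / rate-limit dodge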
There's nothing particularly wrong about it. Twitter does something like this with a bit of timing thrown in the mix, which is why some smart-asses have made self-referencing tweets:
I believe they also had a public API for a long time^, and I'm pretty sure they soft-delete posts (bonus that they could revive the tweet down the track if it was a 'mistake'). I don't think the Twitter team are amateurs though.
Having monotonically increasing IDs for non-distributed systems removes the possibility of duplicate IDs. In distributed systems ID conflicts are hard to avoid without coordination; you might have fewer conflicts if you generate random IDs, and random IDs are also much harder for an attacker to guess.
^ likely it would have been rate-limited, which isn't a problem if you've got a large pool of users you can authenticate with
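On the distributed-ID point above: a rough sketch of the timestamp-based, Snowflake-style scheme Twitter popularized, which avoids cross-machine coordination without resorting to purely random IDs. The 41/10/12 bit split and epoch below are the commonly described values; treat the details as an approximation, not Twitter's actual implementation.

    import threading
    import time

    class SnowflakeGenerator:
        # ~41 bits of milliseconds since a custom epoch, 10 bits of machine
        # ID, 12 bits of per-millisecond sequence. Approximation only.
        EPOCH_MS = 1_288_834_974_657  # commonly cited Twitter epoch (assumption)

        def __init__(self, machine_id: int):
            assert 0 <= machine_id < 1024
            self.machine_id = machine_id
            self.sequence = 0
            self.last_ms = -1
            self.lock = threading.Lock()

        def next_id(self) -> int:
            with self.lock:
                now_ms = int(time.time() * 1000)
                if now_ms == self.last_ms:
                    self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit counter
                    if self.sequence == 0:  # exhausted this millisecond, wait
                        while now_ms <= self.last_ms:
                            now_ms = int(time.time() * 1000)
                else:
                    self.sequence = 0
                self.last_ms = now_ms
                return (
                    ((now_ms - self.EPOCH_MS) << 22)
                    | (self.machine_id << 12)
                    | self.sequence
                )

    gen = SnowflakeGenerator(machine_id=1)
    print(gen.next_id())  # time-sortable, but not trivially enumerable

IDs from different machines can't collide because the machine ID is baked into the low bits.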
> Having monotonically increasing IDs for non-distributed systems removes the possibility of duplicate IDs.
In Parler's case, having a monotonically increasing ID typed as a signed 32-bit integer _guaranteed_ duplicate IDs once they reached about 2.1 billion unique items. This is a well-known failure mode, and yet it took them by surprise.
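To make that failure mode concrete (Python's own ints don't overflow, so the two's-complement wraparound is computed by hand here):

    INT32_MAX = 2**31 - 1          # 2,147,483,647 -- the cap for a signed 32-bit ID
    print(f"{INT32_MAX:,}")

    # What an int32 field would hold if you tried to store the next ID:
    next_id = INT32_MAX + 1
    wrapped = (next_id + 2**31) % 2**32 - 2**31
    print(wrapped)                 # -2147483648: wraps negative instead of
                                   # yielding a fresh unique ID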
Okay.. I concede, 32-bit is a rookie mistake. I didn't know that. Wow. Did they not account for millions of daily active users in a Twitter competitor...
Exactly, these people don't want to put in the work. They just want to take shortcuts, but just as you're not entitled to an account on Twitter, you're also not entitled to a cloud service provider.
Parler hasn't seen 1/10th the deplatforming that The Pirate Bay has, for example, yet that site managed to survive through the years and is still up now. The reality is that whoever was running Parler had no clue what they were doing.
It's more that nothing they do can overcome the fact that everyone has blacklisted them. You can't run a site on the internet without having ISPs and DNS registrars on your side, at least. Even if they rehost somewhere else they will just be hit with the next level of blocks and will have wasted the money rehosting.
Who says it is the end of the story for Parler? They came on the scene many years after Twitter and Facebook. They have a lot of catching up to do, with a lot less money at their disposal. In the early days, Twitter ran on Ruby on Rails, hardly a scalable decision, either.
what a ridiculous comment. it was a crowd of some of the dumbest people alive - some attempting to kidnap, injure, and kill lawmakers, some thinking that by entering the building they'd find an OVERTHROW GOVT button. Idiots.
anyone that tries to both sides this shit deserves eternal shame
I personally just want a social network where I don't have to worry about being banned every minute. (Not sure if Parler qualifies, but I suppose that is their selling point.)
Depends on how much time it takes them to get back up again: if it takes too long, most of their potential users will have moved on, for instance to Gab.
What makes you think they can't attract talent? There are plenty of libertarians in tech that would be willing to work with them, including this SRE right here. I think they had other issues. Perhaps they were just out of their league and didn't have the leadership capable of fighting fires while simultaneously recruiting, scaling, policing, etc. Typical successful startup skills that many people don't have.
I’m responding to a person who made this assertion, but designing your site in a way that non-users can archive all media uploaded to it seems like a good sign!
apparently it's harder for them to get back online than they thought.
I guess they are now facing the equivalent of the Hollywood blacklist. Dockerized setups will not help if you can't get an alternative host.
"Matze said that Parler had faced trouble in finding a new service provider, ... He also said that others had refused to work with Parler: "Every vendor, from text message services to email providers to our lawyers, all ditched us, too, on the same day."
https://en.wikipedia.org/wiki/Parler#Attempts_to_return_onli...
It is probably easier to find an alternative hosting platform when you are booted from AWS because of porn. In the case of Parler, they also might need to migrate more data.
Their lawyers bailing gave me pause. Suggests things are substantially more serious than Amazon, Apple, Google, and Twilio suddenly finding a paying customer distasteful.
I don't think it's unreasonable for Parler to attempt to create the same kind of mainstream platform that these other services enjoy. Their service is not criminal in nature. Regardless of what some may claim, their service is simply a social media platform. If you're a technical user you can probably figure out the 'Darkweb'. You can't blame them for not wanting to restrict their userbase to only technical users. They're trying to be a viable alternative to the Silicon Valley giants. That's a reasonable thing to do.
They provided ~90 examples out of millions of posts and comments. I signed up to see what all the fuss was about and didn’t see any examples of violence whatsoever, other than some nutty conspiracy stuff and some vaguely violent language that wouldn’t be out of place on Vox or Mother Jones.
If a platform has ANY illegal content what standard is then acceptable? If you moderate 99.999% of your content is it reasonable for you to be blacklisted for the 0.001%? If someone found 90 examples of calls to violence on Twitter would that make it OK to deplatform them too?
> If you moderate 99.999% of your content is it reasonable for you to be blacklisted for the 0.001%?
This sounds like you're trying to get us to play along with the pretense that Parler had a moderation policy that was in any way effective, equally applied, or not just pandering to the crowd it was trying to entice.
> They provided ~90 examples out of millions of posts and comments
This argument is problematic. In any criminal endeavor, what most of the criminals do isn’t illegal or even unusual.
When HSBC was laundering cartel money, they also served regular customers. They processed credit cards, gave loans and so on. 99.9999% legal activity, and yet they were fined billions of dollars for facilitating the money laundering operations of a few murderous warlords.
If that’s your logic, then clearly Facebook and Twitter are 1,000x as guilty as Parler, considering that they have hosted extremely violent speech. The genocide in Myanmar, for instance.
This is about money and power, nothing else. Facebook is “too big to fail” so clearly it will never face any consequences, while Parler is a random little site that’s easy to pick on.
I do not object to the claim that FB/Twitter etc. are no better. I am simply pointing out that "arithmetically most of the things they did are harmless" is not a good argument. Most of the things Bin Laden did were benign: studying, eating, partying, watching TV, taking a dump, playing ball, etc. probably account for 99.999% of his actions, and being a horrible terrorist is just 0.001% of who he was, if we are looking at a benign/illegal ratio.
IMHO social media needs to be regulated: companies should not be able to simply cut you off from the rest of the public, but you shouldn't be unaccountable for your actions either, and there must be ways for harm reduction. (If you tell/share something, then maybe people affected by it should have the right to equal exposure to challenge it, i.e. their POV could be attached to your post and everyone who interacted with your post would receive a notification that the post was challenged.)
Wouldn't the platform be protected by section 230? I imagine a large part of the reason Parler even exists is to abuse section 230, or get section 230 removed so they can then go after big tech, or call people hypocrites when only their service gets removed. But as it stands, isn't the service protected?
> The law allows for companies to engage in “good Samaritan” moderation of “objectionable” material without being treated like a publisher or speaker under the law.
My understanding is that the law simply says that doing _some_ moderation doesn't make you the publisher.