
There were some Lemmy-related discussions going on this past weekend. I think this is the one tidbit everyone around Hacker News wants to know:

> The current VPS couldn't be resized that much anymore, and load was going up with all the new users. So I bought the same server at Hetzner: a 32-core/64 thread 128GB RAM dedicated server. (For Mastodon, I doubled the RAM. For Lemmy I don't think it's needed yet.) I migrated the Lemmy software and database there, and moved over. This took 4 minutes of downtime.

So 32-core/64-threads with 128GB of RAM is running Lemmy.world, which is supporting ~36.7k users at the moment.

---------

A brief look at Hetzner comes up with: https://www.hetzner.com/dedicated-rootserver/ax161

A 32-core/64 thread + 128GB RAM server in Germany for 142 EUR / month.




32 cores and 128GB RAM for how many users, ~36.7k? It seems like Lemmy requires a fair bit of resources per user for a primarily text-based API, more than I'd expected. I wonder where the majority of the resource burden lies: uncached local database queries? Too many simultaneous database connections?


Federation takes quite a toll.

The way I read the article is that they didn't upgrade every time the load was (almost) too high, but instead opted immediately for the beefy dedicated server (considering the price it makes sense – it's on par with beefy VMs that have less power). They probably have resources to spare right now, but no need to go through the hassle of an upgrade soon (although 4 mins downtime sounds like they have things under control).

Having said that, I'm sure there's plenty to improve in Lemmy, given it's used at a bigger scale now.


Keep in mind a lot of instances are preparing for what they anticipate to be the largest migratory wave yet when third party apps shut down at the end of the month.


With some basic extrapolation, at how many users will they hit the extra-expensive Hetzner servers? Or, in other words, at which point will they need to improve their architecture?


I honestly wish the answer were "when we get so big that a single machine cannot handle it, we close registrations".

Can we please drop the "number must go up" mentality? The whole point of federated systems is to avoid concentration of power in a handful of servers. I'm sure the people running things there have good intentions, but why can't we just let things stay a little bit dispersed?


Sure, until you google some error, or some other random thing, find some thread somewhere, want to comment/ask/contribute, and you can't, since the registrations are locked.


I'm on kbin.social and participate in communities across ~20ish other instances regardless of whether they've turned off registrations temporarily or permanently. Disabling registrations only prevents new accounts on that instance, it doesn't stop people from posting and commenting from other instances.


The parent comment is a perfect example of how used we've become to dealing with shitty, user-hostile systems. We've been dealing with walled gardens for almost a generation now; it's like people don't even understand that it's possible to interact with a remote system without having everything in one centralized database.


Gmail isn't a walled garden, but it doesn't have a 20k-user limit.


It would if they weren't profiting off all your data.

Besides, that's not the point. The point is that you don't need to have a Gmail account to reach and be reached by Gmail users.


People have forgotten how email works.


I think what the parent is alluding to is the case where you Google something and end up on a Lemmy instance that is not your local one. You cannot, in fact, comment on those instances directly; you must access the community through your local instance (e.g. lemmy.world/c/community@remote.com).
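That `/c/name@host` convention is just string rewriting on the URL, so a minimal sketch of "view this remote community through my home instance" might look like this (the function name and error handling are my own, not part of Lemmy's API):

```python
from urllib.parse import urlparse

def localize_community_url(remote_url: str, home_instance: str) -> str:
    """Rewrite a remote Lemmy community URL into the federated view
    on your own instance, using the /c/name@host URL convention."""
    parsed = urlparse(remote_url)
    path = parsed.path.rstrip("/")
    if not path.startswith("/c/"):
        raise ValueError("not a Lemmy community URL")
    community = path[len("/c/"):]
    if "@" in community:  # already a federated reference, just re-home it
        return f"https://{home_instance}/c/{community}"
    return f"https://{home_instance}/c/{community}@{parsed.netloc}"
```

So `https://remote.com/c/community` viewed from lemmy.world becomes `https://lemmy.world/c/community@remote.com`, which is exactly the workaround described above.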


It won't be long until someone makes an extension that can detect AP servers and lets you interact with remote instances using your own.
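The detection part is fairly mechanical: ActivityPub servers answer content negotiation for `application/activity+json`. A rough sketch of how such an extension's backend logic might probe a page (assuming the standard media types from the ActivityPub spec; the function names are hypothetical):

```python
import urllib.request

# Accept header an ActivityPub-aware client sends (per the AP spec).
AP_ACCEPT = ('application/activity+json, application/ld+json; '
             'profile="https://www.w3.org/ns/activitystreams"')

def looks_like_activitypub(content_type: str) -> bool:
    # AP servers respond with one of these media types.
    base = content_type.split(";")[0].strip().lower()
    return base in ("application/activity+json", "application/ld+json")

def probe(url: str) -> bool:
    # Ask the server for the ActivityPub representation of the page;
    # an HTML-only server will just return text/html.
    req = urllib.request.Request(url, headers={"Accept": AP_ACCEPT})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return looks_like_activitypub(resp.headers.get("Content-Type", ""))
```

If the probe succeeds, the extension knows it can rewrite the page into the user's home-instance view instead of the remote one.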


A browser extension? While I agree the client (browser) probably has to know about ActivityPub, or store some state that sites can read (without third-party cookies), it's not fair to expect users to install a browser extension for what would be basic functionality in their eyes.

Thankfully, there are proposals to add ActivityPub as a web API: https://github.com/webap-api/webap-browser-extension


I still prefer a handful of instances compared to the current state of Reddit, where it's all or nothing.


It'll be interesting to see whether, at that point, federation becomes the way of scaling or not. If it were seamless, it wouldn't matter where you signed up, and they could just host multiple Lemmy instances on different servers. So far, though, I've had rather spotty experiences, with content sometimes making it to federated servers and sometimes not.


User scaling seems easy enough.

It's "community" (on Lemmy), or "magazine" (on kbin), scaling that seems hard.

But since each server has a local copy of the community it's serving out, maybe the hardest part has already been solved by the federation model. Each federated instance is effectively a proxy / front-end for the users on that instance.

---------

I guess Mastodon is way larger than Lemmy though and they haven't had issues yet.


1TB RAM Hetzner servers are available, so there's at least 8x more scaling before that's a problem.

2TB RAM is common in commodity servers, albeit expensive ones (~$1000/month). Somewhere between 4TB and 20TB RAM is the pragmatic limit (where costs for vertical scaling start to get far worse).
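The headroom math here is a naive linear extrapolation, assuming the thread's speculation that RAM is the bottleneck and scales proportionally with user count:

```python
CURRENT_RAM_GB = 128     # lemmy.world's dedicated server
CURRENT_USERS = 36_700   # user count quoted in this thread

def users_at(ram_gb: int) -> int:
    # Naive linear scaling: assumes RAM is the bottleneck and grows
    # proportionally with users (speculation, not a measured fact).
    return CURRENT_USERS * ram_gb // CURRENT_RAM_GB

print(users_at(1024))  # 1TB server: 8x headroom, ~293k users
print(users_at(2048))  # 2TB server: 16x, ~587k users
```

In practice, caching, connection pools, and federation fan-out would all bend this curve long before the raw arithmetic does.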


Interestingly, it seems maximum memory has been going down, or at least not increasing, in commodity x86 servers. Vendors were advertising 24 TB servers enabled by lots of (192?) DIMM sockets in 2018, or maybe even earlier.


Is RAM still the limiting factor at that scale? Are you assuming 256 CPU cores?


The current theory is that the Lemmy software is mostly RAM-limited.

No one knows for sure until we reach those caps.



