Very small team and too many design docs, not good.
Huge team, multiple timezones, multiple squads, and few design docs, not good either.
And then you balance with all the values in between depending on your team size & culture.
Even within the same company, your approach will (and should) change as it grows. There's a critical point where a "move fast and break things" approach ends up producing too many outages, production bugs, an unpolished/confusing product, and, last but not least, eye-watering FTC fines.
I was there when lifehack blogs would continuously talk about whatever note-taking app had just been released (Wunderlist, Evernote, OneNote, etc.).
After trying several apps, including Evernote, I felt they had too many features, which made them too complex for my simple use case.
After trying several methods, my stable workflow is:
- Dropbox markdown file for year-long notes.
- Google Keep for (shared) checkbox lists & multimedia notes.
- Unread emails + snoozing for TODO tasks.
For this tool's db schema definition, where does the source of truth live?
I'm trying to think through what happens when a column gets deleted or added in the prod, CI, or dev db tier. Ideally those schema changes would happen everywhere at the same time, but real life doesn't work like that.
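A common answer (and this is just a guess at a reasonable setup, not a claim about how this tool actually does it) is that the source of truth is a set of versioned migration files checked into the repo, with each tier recording which ones it has already applied. A minimal sketch, with invented file names and sqlite3 used only so it runs standalone:

```python
# Minimal sketch: versioned migration files as the schema's source of truth.
# Assumes a migrations/ directory of ordered SQL files (001_init.sql, 002_add_col.sql, ...)
# and a schema_migrations table recording what each environment has applied.
# sqlite3 is used only so the example is self-contained; the idea is the same for Postgres.
import os
import sqlite3

def apply_pending_migrations(db_path: str, migrations_dir: str) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

    for filename in sorted(os.listdir(migrations_dir)):  # lexicographic order = migration order
        if not filename.endswith(".sql") or filename in applied:
            continue
        with open(os.path.join(migrations_dir, filename)) as f:
            conn.executescript(f.read())  # run the migration
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (filename,))
        conn.commit()

    conn.close()

# Dev, CI, and prod each run this against their own database, so every tier
# converges on the same schema even if they migrate at different times.
```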
With some basic extrapolation, at how many users will they hit the extra-expensive Hetzner servers? Or, in other words, at what point will they need to improve their architecture?
I honestly wish the answer were "when we get so big that a single machine cannot handle it, we close registrations".
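For a rough sense of what that extrapolation looks like, here's a back-of-envelope sketch. Every number in it is made up (the thread doesn't give the instance's actual user count or memory footprint), and it assumes RAM grows roughly linearly with active users:

```python
# Back-of-envelope sketch with made-up numbers: none of these figures come from
# the actual instance; they only show the shape of the extrapolation.
active_users = 40_000    # hypothetical current active users
ram_in_use_gb = 128      # hypothetical current RAM footprint
ram_ceiling_gb = 1024    # the 1 TB box mentioned downthread

gb_per_user = ram_in_use_gb / active_users
users_at_ceiling = ram_ceiling_gb / gb_per_user
print(f"~{users_at_ceiling:,.0f} users before RAM becomes the bottleneck")
# ~320,000 users before RAM becomes the bottleneck
```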
Can we please drop the "number must go up" mentality? The whole point of federated systems is to avoid concentrating power in a handful of servers. I'm sure the people doing this have good intentions, but why can't we just let things stay a little bit dispersed?
Sure, until you google some error or some other random thing, find a thread somewhere, want to comment/ask/contribute, and you can't, because registrations are locked.
I'm on kbin.social and participate in communities across ~20 other instances, regardless of whether they've turned off registrations temporarily or permanently. Disabling registrations only prevents new accounts on that instance; it doesn't stop people from posting and commenting from other instances.
The parent comment is a perfect example of how used we've become to dealing with shitty, user-hostile systems. We've been dealing with walled gardens for almost a generation now; it's like people don't even realize it's possible to interact with a remote system without having everything in one centralized database.
I think what the parent is alluding to is Googling something and ending up on a Lemmy instance that is not your local one. You cannot, in fact, comment on those instances directly; you have to access them through your local instance (e.g. lemmy.world/c/community@remote.com).
A browser extension? While I agree the client (browser) probably has to know about ActivityPub, or store some state that sites can read (without third-party cookies), it's not fair to expect users to install a browser extension for what would be basic functionality in their eyes.
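For what it's worth, the redirect logic such an extension would need is tiny. This is only a sketch of the idea, not any existing extension: the home-instance value and function name are made up, and it leans on the Lemmy convention of viewing a remote community as /c/name@remote-host on your own instance:

```python
# Sketch of the hypothetical extension's redirect logic: rewrite a remote Lemmy
# community URL into the equivalent view on the user's home instance, where they
# can actually comment. HOME_INSTANCE and the function name are made up.
from urllib.parse import urlparse

HOME_INSTANCE = "lemmy.world"  # the instance the user registered on

def rewrite_to_home(url: str) -> str:
    parsed = urlparse(url)
    parts = parsed.path.strip("/").split("/")
    # Remote community pages look like https://remote.example/c/community
    if len(parts) >= 2 and parts[0] == "c" and parsed.hostname != HOME_INSTANCE:
        return f"https://{HOME_INSTANCE}/c/{parts[1]}@{parsed.hostname}"
    return url

print(rewrite_to_home("https://programming.dev/c/rust"))
# https://lemmy.world/c/rust@programming.dev
```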
It'll be interesting to see whether, at that point, federation becomes the way to scale. If it were seamless, it wouldn't matter where you signed up, and they could just host multiple Lemmy instances on different servers. So far, though, my experience has been rather spotty, with content sometimes making it to federated servers and sometimes not.
It's "community" (on Lemmy) or "magazine" (on kbin) scaling that seems hard.
But since each server keeps a local copy of the communities it's serving out, maybe the hardest part has already been solved by the federation model. Each federated instance is effectively a proxy/front-end for the users on that instance.
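To make the proxy/front-end point concrete, here's a toy sketch of that local-copy idea. The class and method names are purely illustrative (this is not Lemmy's actual data model): the remote server pushes activities in, and local reads are served from the cache without touching the remote server per request.

```python
# Toy sketch of the "local copy" idea: the instance keeps its own cache of each
# remote community it follows and serves reads from that cache. Names are
# illustrative only, not Lemmy's real data model.
from dataclasses import dataclass, field

@dataclass
class CommunityCache:
    name: str                                  # e.g. "rust@programming.dev"
    posts: list = field(default_factory=list)  # locally stored copies of remote posts

class LocalInstance:
    def __init__(self):
        self.cache = {}  # community name -> CommunityCache

    def receive_activity(self, community: str, post: dict) -> None:
        # Called when the remote server pushes a new post via federation.
        self.cache.setdefault(community, CommunityCache(community)).posts.append(post)

    def read_community(self, community: str) -> list:
        # Local users read from the cache; no per-request traffic to the remote server.
        return self.cache.get(community, CommunityCache(community)).posts
```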
---------
I guess Mastodon is way larger than Lemmy, though, and they haven't had issues yet.
1 TB RAM Hetzner servers are available, so there's at least 8x more scaling headroom before that's a problem.
2 TB RAM is common in commodity servers, albeit expensive ones (~$1000/month). Somewhere between 4 TB and 20 TB RAM is the pragmatic limit (where costs for vertical scaling start to get far worse).
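Putting those figures together (the ~128 GB current footprint is only what the "8x" comment above implies, so treat it as an assumption rather than a measured number):

```python
# Rough headroom math from the figures in this subthread; the current footprint
# is an assumption inferred from the "8x" claim, not a measured value.
current_ram_gb = 128
tiers_gb = [1024, 2048, 4096, 20 * 1024]  # 1 TB, 2 TB, and the 4-20 TB pragmatic limit
for ceiling in tiers_gb:
    print(f"{ceiling / 1024:.0f} TB box -> ~{ceiling / current_ram_gb:.0f}x today's footprint")
# 1 TB box -> ~8x today's footprint
# 2 TB box -> ~16x today's footprint
# 4 TB box -> ~32x today's footprint
# 20 TB box -> ~160x today's footprint
```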
Interestingly, it seems maximum memory capacities have been going down, or at least not increasing, in commodity x86 servers. Vendors were advertising 24 TB servers, enabled by lots of DIMM sockets (192?), back in 2018 or maybe even earlier.