Yep, there's a premium on making your architecture more cloudy. However, the strongest case for Use One Big Server is not necessarily running your big monolithic API server; it's your database.
Use One Big Database.
Seriously. If you are a backend engineer, nothing is worse than breaking up your data into self-contained service databases where everything is passed over REST/RPC. Your product asks will consistently want to combine these data sources (the people asking don't know how your distributed databases look, and oftentimes they really do not care).
It is so much easier to do these joins efficiently in a single database than to fan out RPC calls to multiple different databases, not to mention dealing with inconsistencies, lack of atomicity, etc. Spin up a dedicated reader of that database if there need to be OLAP queries, or use a message bus. But keep your OLTP data within one database for as long as possible.
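To make that concrete, here is the kind of cross-domain "product ask" that is one query against one database, but becomes a fan-out of service calls plus an in-application join once the data is split (hypothetical tables, Postgres-flavored syntax, just a sketch):

    SELECT u.email,
           o.id     AS order_id,
           p.status AS payment_status
    FROM users u
    JOIN orders o   ON o.user_id  = u.id
    JOIN payments p ON p.order_id = o.id
    WHERE o.created_at > now() - interval '7 days';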
You can break apart a stateless microservice, but there are few things as stagnant in the world of software as data. Keeping it together will keep you nimble for new product features. The boxes that cloud vendors offer today for managed databases are giant!
> Seriously. If you are a backend engineer, nothing is worse than breaking up your data into self-contained service databases where everything is passed over REST/RPC. Your product asks will consistently want to combine these data sources (the people asking don't know how your distributed databases look, and oftentimes they really do not care).
This works until it doesn't, and then you land in the position my company finds itself in, where our databases can't handle the load we generate. We can't get bigger or faster hardware because we are already using the biggest and fastest hardware you can buy.
Distributed systems suck, sure, and they make querying across systems a nightmare. However, by giving those things up, what you gain is the ability to add new services, features, etc. without running into Scotty yelling "She can't take much more of it!"
Once you get to that point, it becomes SUPER hard to start splitting things out. All of a sudden you have 10000 "just a one off" queries against several domains that get broken by trying to carve out a domain into a single owner.
I don't know the complexity of your project, but more often than not the feeling of doom that comes from hitting that wall is bigger than the actual effort it takes to solve it.
People often feel they should have anticipated and avoided the scaling issues altogether, but moving from a single DB to a master/replica model, and/or shards or other solutions, is fairly doable, and it doesn't come with worse tradeoffs than if you had sharded/split services from the start. It always feels fragile and bolted on compared to the elegance of the single DB, but you'd also have needed many dirty hacks to make a multi-DB setup work properly.
Also, you do that from a position where you usually have money, resources and good knowledge of your core parts, which is not true when you're still growing at full speed.
I can't speak for cogman10, but in my experience, when you start to encounter issues from hitting the limit of "one big database" you are generally dealing with some really complicated shit, and refactoring to dedicated read instances, shards, and other DB hacks is just a short-term solution to buy time.
The long term solutions end up being difficult to implement and can be high risk because now you have real customers (maybe not so happy because now slow db) and probably not much in house experience for dealing with such large scale data; and an absolute lack of ability to hire existing talent as the few people that really can solve for it are up to their ears in job offers.
The other side of this is that once you actually can't scale a single DB, the project has proved its value and you have a solid idea of what you actually want.
Designing, let alone building, something scalable, on the other hand, is a great way to waste extreme effort up front when it's completely superfluous. That's vastly more likely to actually kill a project than some growing pains, especially when most projects never scale past a single reasonably optimized database.
You're not wrong. Probably more than 95% of applications will never outgrow one large relational database. I just think that this leads to an unfortunate, but mostly inevitable issue of complexity for the few that do hit such a level of success and scale.
Alex DeBrie (author of 'The DynamoDB Book') says his approach is to essentially start all new projects with DynamoDB.
Now I don't really agree with him, yet I can't fully say he's wrong either. While we won't need it most of the time, reaching for a tool like this before we need it provides more time to really understand it when/if we reach that point.
@ithrow, yeah I know he is clearly biased which is why I don't really agree with him. I do however think it would have helped me to start using/learning before I needed it since the paradigm is so foreign to the relational model that is now second nature.
DynamoDB (and Mongo) is nice, right up until you need those relations. I haven’t found a document oriented database that gives me the consistency guarantees of a RDBMS yet.
You must not have looked at MongoDB. We have been delivering fully consistent ACID transactions since 4.0, which shipped several years ago. Yes, Jepsen did find some issues with the initial release of ACID transactions, and yes, we fixed those problems pretty rapidly.
> Jepsen evaluated MongoDB version 4.2.6, and found that even at the strongest levels of read and write concern, it failed to preserve snapshot isolation. Instead, Jepsen observed read skew, cyclic information flow, duplicate writes, and internal consistency violations.
Updates:
2020-05-26: MongoDB identified a bug in the transaction retry mechanism which they believe was responsible for the anomalies observed in this report; a patch is scheduled for 4.2.8.
Your initial claim was that these issues were addressed in 4.0.
Jepsen's report refutes your claim, and demonstrates MongoDB had serious reliability problems even in 4.2.6.
Frankly, your insistence on pulling the wool over everyone's eyes, especially on a topic that's easily verified, does not help build up trust in MongoDB.
I can see the source of confusion. Apologies. I mentioned that ACID transactions were released in 4.0, but did not explicitly mention when the problems arose, which of course was in 4.2, released a year later. The version numbers are clearly referenced in the Jepsen article.
This is the core culture of MongoDB: cutting corners to optimise things a little more and cater to a NoSQL crowd. Its entire mindset is fundamentally different from what you'd get in a proper relational database, and ignoring those things isn't going to do any software you write any favours.
It's been a long time since I've used Mongo, so I don't know if it only supports eventual consistency, but DynamoDB does support transactions and traditional consistency; it just comes at the cost of reduced read throughput.
DynamoDB also supports relations, but they aren't called relations because they don't resemble anything like relations in traditional relational databases.
You may already know this, but just to clarify, DynamoDB isn't really a document-oriented database. It's both a key/value database and a columnar database, so in that sense it's closer to Redis and Cassandra than to Mongo, but there's definitely a lot of misinformation on this front.
> The long term solutions end up being difficult to implement and can be high risk because now you have real customers (maybe not so happy because now slow db) and probably not much in house experience for dealing with such large scale data; and an absolute lack of ability to hire existing talent as the few people that really can solve for it are up to their ears in job offers.
This is a problem of having succeeded beyond your expectations, which is a problem only unicorns have.
At that point you have all this income from having fully saturated the One Big Server (which, TBH, has unimaginably large capacity when everything is local with no network requests), so you can use that money to expand your capacity.
Any reason why the following won't work:
Step 1: Move the DB onto its own DBOneBigServer[1]. Warn your customers of the downtime in advance. Keep the monolith as-is on the current OriginalOneBigServer.
Step 2: OriginalOneBigServer still saturated? Put copies of the monolith on separate machines behind a load-balancer.
Step 3: DBOneBigServer is still saturated, in spite of being the biggest Oxide rack there is? Okay, now go ahead and make RO instances, shards, etc. Monolith needs to connect to RO instances for RO operations, and business as usual for everything else.
Okay, so Step 3 is not as easy as you'd like, but until you get to the point that your DBOneBigServer cannot handle the loads, there's no point in spending the dev effort on sharding. Replication doesn't usually require a team of engineers f/time, like a distributed DB would.
If, after Step 3, you're still saturated, then it might be time to hire the f/time team of engineers to break up everything into microservices. While they get up to speed you're making more money than god.
Competitors who went the distributed route from day one have long since gone out of business because while they were still bugfixing in month 6, and solving operational issues for half of each workday (all at a higher salary) in month 12, and blowing their runway cash on AWS for the first 24 months, you had already deployed in month 2, spending less than they did.
I guess the TLDR is "don't architect your system as if you're gonna be a unicorn". It's the equivalent of you, personally, setting your two-year budget to include the revenue from winning a significant lottery.
You don't plan your personal life "just in case I win the lottery", so why do it with a company?
^ This. Not so long ago, I worked in the finance department of a $350M company as one of the five IT guys, and we had just begun implementing Step 2 after OriginalOneBigServer had shown its limits. DBOneBigServer was really big though, 256 GB RAM and 128 cores if I remember correctly. So big, in fact, that I implemented some of my ETL tasks as stored SQL procedures to be run directly on the server. The result? A task that would easily take a big fraction of OriginalOneBigServer's memory and 15 hours (expected to increase in step with revenue) now runs in 30 minutes.
It's worth noting that when I left, we were still nowhere close to saturating DBOneBigServer.
Maybe unicorn is not the right word? If your app has millions of DAUs choking your DB, you should at least be tackling your next big investment round or some other success milestone.
Otherwise, your product is on its way to failure, so good thing you used One Big DB...
These services didn’t need additional rounds of funding and aren't the kind of thing that would scale like a unicorn.
Some services might only be transient (like services based around a particular sports league or TV series) or regional (like government sites or, again, sports leagues).
Not every service out there has aspirations to “change the world”. Some exist to fill a niche. But sometimes that “niche” still covers millions of people.
> I don't know the complexity of your project, but more often than not the feeling of doom that comes from hitting that wall is bigger than the actual effort it takes to solve it.
We've run, and failed at, multiple multi-year projects to "solve" the problem. I'm sure there are simpler problems that are easier to disentangle. But not in our case.
I can share some. Had a similar experience as the parent comment. I do support "one big database" but it requires a dedicated db admin team to solve the tragedy of the commons problem.
Say you have one big database. You have 300 engineers and 30-50 product managers shipping new features every day, accountable to the C-suite. They are all writing queries to retrieve the data they want. One more join, one more N+1 query. Tons of indexes to support all the different queries, to the point where your indexes exceed the size of your tables in many cases. Database maintenance is always someone else's problem, because hey, it's one big shared database. You keep scaling up the instance size because "hardware is cheap". Eventually you hit the m6g.16xlarge. You add read replicas. Congratulations, now you have an eventually consistent system. You have to start figuring out which queries can hit the replica and which ones always need fresh data. You start getting long replication lag, but it varies and you don't know why. If you decide to try to optimize a single table, you find dozens or 100+ queries that access it. You didn't write them. The engineers who did don't work here anymore....
I could go on, and all these problems are certainly solvable and could have been avoided with a little foresight, but you don't always have good engineers at a startup doing the "right thing" before you show up.
I think this hits the nail right on the head, and it's the same criticism I have of the article itself: the framing is that you split up a database or use small VMs or containers for performance reasons, but that's not the primary reason these things are useful; they are useful for people scaling first and foremost, and for technical scaling only secondarily.
The tragedy of the commons with one big shared database is real and paralyzing. Teams not having the flexibility to evolve their own schemas because they have no idea who depends on them in the giant shared schema is paralyzing. Defining service boundaries and APIs with clarity around backwards compatibility is a good solution. Sometimes this is taken too far, into services that are too small, but the service boundaries and explicit APIs are nonetheless good, mostly for people scaling.
> Defining service boundaries and APIs with clarity around backwards compatibility is a good solution.
Can't you do that with one big database? Every application gets an account that only gives it access to what it needs. Treat database tables as APIs: if you want access to someone else's, you have to negotiate to get it, so it's known who uses what. You don't have to have one account with access to everything that everyone shares. You could even enforce those boundaries with the database's own roles and grants.
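A minimal sketch of that idea in Postgres, with hypothetical role and table names; the point is just that the grants make the dependency explicit and queryable:

    -- One role per application; no shared superuser account.
    CREATE ROLE billing_service LOGIN PASSWORD 'change-me';
    CREATE ROLE orders_service  LOGIN PASSWORD 'change-me';

    -- Each team owns its schema and hands out access deliberately.
    GRANT USAGE  ON SCHEMA orders TO billing_service;
    GRANT SELECT ON orders.orders TO billing_service;  -- negotiated, and now visible

    -- Later you can answer "who depends on this table?" from the catalog:
    SELECT grantee, privilege_type
    FROM information_schema.role_table_grants
    WHERE table_schema = 'orders' AND table_name = 'orders';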
It would be easier to create different databases to achieve the same thing. Those could be on the same database server, but clear boundaries are the key.
Indeed! And functions with security definers can be useful here too. With those one can define a very strict and narrow API, with functions that write to or query tables that users don't have any direct access to.
Look at it as an API written in DB functions, rather than in HTTP request handlers. One can even have neat API versioning through, indeed, the schema, and give different users (or application accounts) access to different (combinations of) APIs.
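For what it's worth, a minimal Postgres sketch of that function-as-API idea, with hypothetical schema, table, and role names; SECURITY DEFINER runs the function with the owner's rights, so callers never need direct table access:

    CREATE SCHEMA billing;          -- private tables
    CREATE SCHEMA billing_api_v1;   -- the "published", versioned API

    CREATE TABLE billing.invoices (
        id           bigserial PRIMARY KEY,
        customer_id  bigint    NOT NULL,
        amount_cents bigint    NOT NULL
    );
    REVOKE ALL ON billing.invoices FROM PUBLIC;

    CREATE FUNCTION billing_api_v1.create_invoice(p_customer_id bigint, p_amount_cents bigint)
    RETURNS bigint
    LANGUAGE sql
    SECURITY DEFINER
    AS $$
        INSERT INTO billing.invoices (customer_id, amount_cents)
        VALUES (p_customer_id, p_amount_cents)
        RETURNING id;
    $$;

    -- Functions are executable by PUBLIC by default, so lock that down,
    -- then expose only the API schema to the application account.
    REVOKE ALL ON FUNCTION billing_api_v1.create_invoice(bigint, bigint) FROM PUBLIC;
    GRANT USAGE   ON SCHEMA billing_api_v1 TO orders_service;
    GRANT EXECUTE ON FUNCTION billing_api_v1.create_invoice(bigint, bigint) TO orders_service;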
The rest is "just" a matter of organizational discipline, and a matter of teams to internalize externalities so that it doesn't devolve into a tragedy of the commons — a phenomenon that occurs in many shapes, not exclusively in shared databases; we can picture how it can happen for unfettered access to cloud resources just as easily.
But here's the common difference: with the cloud, there's clear accounting per IOP, per TB, per CPU hour, so the incentive to use resources efficiently can be applied on a per-team basis — often through budgeting. "Explain to me why your team uses 100x more resources than this other team" / "Explain to me why your team's usage has increased 10-fold in three months".
Yet there's no reason to think that you can only get that accounting for cloud stuff. You could have usage accounting on your shared DB. Does anyone here have experience with any kind of usage accounting system for, say, PostgreSQL?
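One low-effort version I can sketch (an assumption-laden sketch, not a full chargeback system): if each team or application connects with its own role, pg_stat_statements can be rolled up per role.

    -- Requires pg_stat_statements in shared_preload_libraries, then:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Rough per-team resource accounting, assuming one role per team/app.
    -- (total_exec_time is the PG 13+ column name; older versions call it total_time.)
    SELECT r.rolname,
           sum(s.calls)                                AS calls,
           round(sum(s.total_exec_time)::numeric, 0)   AS total_ms,
           sum(s.shared_blks_hit + s.shared_blks_read) AS blocks_touched
    FROM pg_stat_statements s
    JOIN pg_roles r ON r.oid = s.userid
    GROUP BY r.rolname
    ORDER BY total_ms DESC;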
I think we're getting hung up on database server vs. database as conceptual entity. I think separation between the entities is more important (organizationally) and don't think it matters as much whether or not the server is shared.
These are real problems, but there can also be mitigations, particularly when it comes to people scaling. In many orgs, engineering teams are divided by feature mandate, and management calls it good-enough. In the beginning, the teams are empowered and feel productive by their focused mandates - it feels good to focus on your own work and largely ignore other teams. Before long, the Tragedy of the Commons effect develops.
I've had better success when feature-focused teams have tech-domain-focused "guilds" overlaid. Guilds aren't teams per se, but they provide a level of coordination and, more importantly, permanency to communication among technical stakeholders. Teams don't make important decisions within their own bubble, and everything notable is written down. It's important for management to be bought in and to value participation in these non-team activities when it comes to career advancement (not just pushing features).
In the end, you pick your poison, but I have certainly felt more empowered and productive in an org where there was effective collaboration on a smaller set of shared applications than the typical application soup that develops with full team ownership.
In uni we learnt about federated databases, i.e. multiple autonomous, distributed, possibly heterogeneous databases joined together by some middleware to service user queries. I wonder how that would work in this situation, in place of one single large database.
Federated databases are hardly ever mentioned in these kinds of discussions involving 'web scale'. Maybe because of latency? I don't know.
Sure. My point is that the organization problems are more difficult and interesting than the technical problems being discussed in the article and in most of the threads.
Introducing an enormous amount of overhead because training your software engineers to use acceptable amounts of resources is harder than letting them just accidentally crash a node and not care is a little ridiculous.
For whatever reason I've been thrown into a lot of companies at that exact moment when "hardware is cheap" and "not my problem" approaches couldn't cut it anymore...
So yes, it's super painful, and it requires a lot of change in processes and mindsets, and it's hard to get everyone to understand that things will get slower from there.
On the other end, micro-services and/or multi-DB is also super hard to get right. One of the surprises I had was all the "caches" that each service started silently adding on their little island when they realized the performance penalty of fetching data from half a dozen services on the more complicated operations. Or, the same way DB abuse from one group could slow down everyone, service abuse on the core parts (e.g. the "user" service) would impact most of the other services. More than a step forward, it felt a lot like a step sideways: continuing to do the same stuff, just in a different way.
My take from it was that teams that are good at split architectures are usually also good at monoliths, and vice-versa. I feel for the parent who got stuck in the transition.
Sure, you'll get to the m6g.16xlarge; but how many companies actually have OLTP requirements that exceed the limits of single servers on AWS, e.g. u-12tb1.112xlarge or u-24tb1.metal (that's 12-24 TB of memory)?
I think these days the issues with high availability, cost/autoscaling/commitment, "tragedy of the commons", bureaucracy, and inter-team boundaries are much more likely to be the drawback than lack of raw power.
You do not need that many database developers; it's a myth. Facebook has 2 dedicated database engineers managing it. I work at the United Nations, and there is only 1 dedicated database developer in a 1000+ person team.
If you have a well designed database system, you do not need that many database engineers.
I do not disagree at all that what you are describing can happen. What I'm not understanding is why they're failing at multi year attempts to fix this.
Even in your scenario you could identify schemas and tables that can be separated and moved into a different database or at maturity into a more scalable NoSQL variety.
Generally, once you get to the point that is being described, that means you have a very strong sense of the queries you are making. Once you have that, it's not strictly necessary to even use an RDBMS, or at the very least, a single database server.
> Even in your scenario you could identify schemas and tables that can be separated and moved into a different database or at maturity into a more scalable NoSQL variety.
How? There's nothing tracking or reporting that (unless database management instrumentation has improved a lot recently); SQL queries aren't versioned or typechecked. Usually what happens is you move a table out and it seems fine, and then at the end of the month it turns out the billing job script was joining on that table and now your invoices aren't getting sent out.
> Generally, once you get to the point that is being described, that means you have a very strong sense of the queries you are making.
No, just the opposite; you have zillions of queries being run from all over the place and no idea what they all are, because you've taught everyone that everything's in this one big database and they can just query for whatever it is they need.
It's resource intensive - but so is being in a giant tarpit/morass. Adding client query logging is cheaper and can be distributed. I just double checked, and neither Oracle nor Postgres warn 'never use it in production'
And if you have logs, you can see what actually gets queried, and by whom, and what doesn't get queried, and by whom.
That will also potentially let you start constructing views and moving actual underlying tables out of the way to where you can control them.
Which can let you untangle the giant spaghetti mess you're in.
But then, that's just me having actually done that a few times. You're welcome to complain about how it's actually unsolvable and will never get better, of course.
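For what it's worth, on Postgres the server-side version of this is just a couple of settings (a sketch; logging every statement on an already-overloaded box has a real I/O cost, so many people start with a duration threshold instead of 0):

    -- Log statement durations (0 = everything; a threshold in ms captures only the slow ones).
    ALTER SYSTEM SET log_min_duration_statement = 0;
    -- Include who/where in every log line: user, database, application_name.
    ALTER SYSTEM SET log_line_prefix = '%m [%p] user=%u db=%d app=%a ';
    SELECT pg_reload_conf();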
> It's resource intensive - but so is being in a giant tarpit/morass.
Agreed, but it means it's not really a viable option for digging yourself out of that hole if you're already in it. Most of the time if you're desperately trying to split up your database it's because you're already hitting performance issues.
> Adding client query logging is cheaper and can be distributed.
Right, but that only works if you've got a good handle on what all your clients are. If you've got a random critical script that you don't know about, client logging isn't going to catch that one's queries.
> But then, that's just me having actually done that a few times. You're welcome to complain about how it's actually unsolvable and will never get better, of course.
I've done it a few times too, it's always been a shitshow. Query logging is a useful tool to have in some cases but it's often not an option, and even when it is not a quick or easy fix. You're far better off not getting into that situation in the first place, by enforcing proper datastore ownership and scalable data models from the start, or at least from well before you start hitting the performance limits of your datastores.
If you are in the hole where you really cannot add load to your database server but want to log the queries, there is a technique called zero impact monitoring where you literally mirror the network traffic going to your database server, and use a separate server to reconstruct it into query logs. These logs identify the queries that are being run, and critically, who/what is running them.
I've seen this too. I guess 50% of query load were jobs that got deprecated in the next quarterly baseline.
It felt like a system was needed to allocate query resources to teams, some kind of scarce tradeable tokens maybe, to incentivise more care and consciousness of the resource from the many users.
What we did was have a few levels of priority managed by a central org. It resulted in a lot of churn and hectares of indiscriminately killed query jobs every week, many that had business importance mixed in with the zombies.
Do you think it would make it better to have the tables hidden behind an API of views and stored procedures? Perhaps a small team of engineers maintaining that API would be able to communicate effectively enough to avoid this "tragedy of the commons" and balance the performance (and security!) needs of the various clients?
This is so painfully, painfully true. I've seen it borne out personally at three different companies so far. Premature splitting up is bad too, but I think the "just use one Postgres for everything" crowd really underestimates how bad it gets in practice at scale.
Maybe it’s all a matter of perspective? I’ve seen the ‘split things everywhere’ thing go wrong a lot more times than the ‘one big database’ thing. So I prefer the latter, but I imagine that may be different for other people.
Ultimately I think it’s mostly up to the quality of the team, not the technical choice.
I’ve seen splitting things go bad too. But less often and to a lesser degree of pain than mono dbs - a bad split is much easier to undo than monodb spaghetti.
However I think it’s “thou shall” rules like this blog post that force useless arguments. The reality is it depends, and you should be using your judgement, use the simplest thing (monodb) until it doesn’t work for you, then pursue splitting (or whatever). Just be aware of your problem domain, your likely max scale, and design for splitting the db sooner than you think before you’re stuck in mud.
And if you’re building something new in an already-at-scale company you should perhaps be starting with something like dynamo if it fits your usecase.
We have over 200 monolith applications, each accessing overlapping schemas of data with their own sets of stored procedures, views, and direct queries. To migrate a portion of that data out into its own database generally requires refactoring a large subset of the 200 monolith apps to no longer get all the data in one query, but rather a portion of the data with the query and the rest of the data from a new service.
Sharding the data is equally difficult because even tracing who is writing the data is spread from one side of the system to the other. We've tried to do that through an elaborate system of views, but as you can imagine, those are too slow and cover too much data for some critical applications, so they end up breaking the shard. That, in and of itself, introduces additional complexity into the evolution of the products.
Couple that with the fact that, even with these solutions, a large portion of the organization is not on board (why can't we JUST buy more hardware? JUST get bigger databases?), and these efforts end up being sabotaged from the beginning because not everyone thinks they're a good idea. (And if you think you are different, I suggest just looking at the rest of the comments here on HN that provide 20 different solutions to the problem, some of which are "why can't you just buy more hardware?")
But, to add to all of this, we also just have organizational deficiencies that have really harmed these efforts. Including things like a bunch of random scripts checked into who knows where that are apparently mission critical and reading/writing across the entire database. Generally for things like "the application isn't doing the right thing, so this cron job that runs every Wednesday will go in and fix things up". Quite literally 1000s of those scripts have been written.
This isn't to say we've been 100% unsuccessful at splitting some of the data out onto its own server. But it's a long and hard slog.
>Including things like a bunch of random scripts checked into who knows where that are apparently mission critical and reading/writing across the entire database.
This hits pretty hard right now, after reading this whole discussion.
When there is a galaxy with countless star systems of data, it's good to have locality owners of the data who, as domain leaders, publish it for others to use, and to build a system that makes subscriptions and access grants frictionless.
100% agreed, and that's what I've been trying to promote within the company. It's simply hard to get the momentum up to really effect this change. Nobody likes the idea that things have to get a little slower (because you add a new layer between you and the data) before they can get faster.
fwiw hacking hundreds of apps literally making them worse by fragmenting their source of record doesn't sound like a good plan. it's no surprise you have saboteurs, your company probably wants to survive and your plan is to shatter its brain.
outside view: you should be trying to debottleneck your sql server if that's the plan the whole org can get behind. when they all want you to succeed you'll find a way.
> fwiw hacking hundreds of apps literally making them worse by fragmenting their source of record doesn't sound like a good plan. it's no surprise you have saboteurs, your company probably wants to survive and your plan is to shatter its brain.
The brain is already shattered. This wouldn't "literally make them worse", instead it would say that "now instead of everyone in the world hitting the users table directly and adding or removing data from that table, we have one service in charge of managing users".
Far too often we have queries like
    SELECT b.*, u.username FROM Bar b
    JOIN users u ON b.userId = u.id
And why is this query doing that? To get a human readable username that isn't needed but at one point years ago made it nicer to debug the application.
> you should be trying to debottleneck your sql server if that's the plan the whole org can get behind.
Did you read my post? We absolutely HAVE been working, for years now, on "debottlenecking our sql server". We have a fairly large team of DBAs (about 30) whose whole job is "debottlenecking our sql server". What I'm saying is that we are, and have been, at the edge (and more often than not over the edge) of tipping over. We CAN'T buy our way out of this with new hardware because we already have the best available hardware. We already have read-only replicas. We already have tried (and failed at) sharding the data.
The problem is data doesn't have stewards. As a result, we've spent years developing application code where nobody got in the way to say "Maybe you shouldn't join these two domains together? Maybe there's another way to do this?"
assuming some beastly server with terabytes of ram, hundreds of fast cores, and an exotic io subsystem capable of ridiculous amounts of low latency iops, I'd guess the perf issue with that example is not sql server struggling with load but rather lock contention from the users table being heavily updated. unless that beast of a server is sitting pegged with a hardware bottleneck it can probably be debottlenecked by vertically partitioning the users table. ie: split the table into two (or more) to isolate the columns that change frequently from the ones that don't, replace the table with a view that joins it back together w/instead-of triggers conditionally updating the appropriate tables, etc. etc. then when this happens:
    SELECT b.*, u.username FROM Bar b JOIN users u ON b.userId = u.id
sql server sees that you're only selecting username from the users view and eliminates the joins for the more contentious tables and breathes easy peasy
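For anyone following along, a rough T-SQL sketch of that vertical-partitioning idea, with hypothetical column names; which constraints actually let the optimizer skip the unused join varies, so treat this as the shape of the trick rather than a drop-in fix:

    -- Rarely-changing columns:
    CREATE TABLE dbo.users_static (
        id       bigint        NOT NULL PRIMARY KEY,
        username nvarchar(64)  NOT NULL
    );
    -- Frequently-updated columns:
    CREATE TABLE dbo.users_hot (
        id         bigint        NOT NULL PRIMARY KEY
                   REFERENCES dbo.users_static(id),
        last_login datetime2     NULL,
        balance    decimal(18,2) NOT NULL DEFAULT 0
    );
    GO
    -- The old table name becomes a view that stitches the halves back together.
    CREATE VIEW dbo.users AS
    SELECT s.id, s.username, h.last_login, h.balance
    FROM dbo.users_static s
    JOIN dbo.users_hot    h ON h.id = s.id;
    GO
    -- INSTEAD OF trigger so existing UPDATEs keep working, touching only the
    -- underlying table(s) whose columns were actually referenced.
    CREATE TRIGGER dbo.users_update ON dbo.users
    INSTEAD OF UPDATE
    AS
    BEGIN
        IF UPDATE(username)
            UPDATE s SET username = i.username
            FROM dbo.users_static s JOIN inserted i ON i.id = s.id;
        IF UPDATE(last_login) OR UPDATE(balance)
            UPDATE h SET last_login = i.last_login, balance = i.balance
            FROM dbo.users_hot h JOIN inserted i ON i.id = h.id;
    END;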
> And why is this query doing that? To get a human readable username that isn't needed but at one point years ago made it nicer to debug the application.
imo users should be able to do this and whatever else they want, and it's not even unreasonable to want usernames for debugging purposes forever. I'd expect the db team to support the requirements of the app teams and wouldn't want to have to get data from different sources.
> assuming some beastly server with terabytes of ram, hundreds of fast cores, and an exotic io subsystem capable of ridiculous amounts of low latency iops, I'd guess the perf issue with that example is not sql server struggling with load but rather lock contention from the users table being heavily updated.
You'd guess wrong. The example above is not the only query our server runs. It's an example of some of the queries that can be run. We have a VERY complex relationship graph, far more than what you'll typically find. This is finance, after all.
I used the user example for something relatable without getting into the weeds of the domain.
We are particularly read heavy and write light. The issue is quite literally that we have too many applications doing too many reads. We are literally running into problems where our tempDb can't keep up with the requests because there are too many of them doing too complex work.
You are assuming we can just partition a table here or there and everything will work swimmingly; that's simply not the case. Our tables do not partition so easily. (Perhaps our users table would, but again, that was for illustrative purposes and is by no means the most complex example.)
Do you think that such a simple solution hasn't been explored by a team of 50 DBAs? Or that this sort of obvious problem wouldn't have been immediately fixed?
> Do you think that such a simple solution hasn't been explored by a team of 50 DBAs? Or that this sort of obvious problem wouldn't have been immediately fixed?
based on what you've shared, yeah. I also wouldn't expect a million DBAs to replace a single DBE
One nice compromise is to migrate to using read-only database connections for read tasks from the moment you upgrade from medium-sized DB hardware to big hardware. Keep talking to the one big DB with both connections.
Then, when you are looking at the cost of upgrading from big DB hardware to huge DB hardware, you've got another option available to compare cost-wise: a RW main instance and one or more read-only replicas, where your monolith talks to both: read/write to the master and read-only to the replicas via a load balancer.
I've basically been building CRUD backends for websites and later apps since about 1996.
I've fortunately/unfortunately never yet been involved in a project that we couldn't comfortably host using one big write master and a handful of read slaves.
Maybe one day a project I'm involved with will approach "FAANG scale" where that stops working, but you can 100% run 10s of millions of dollars a month in revenue with that setup, at least in a bunch of typical web/app business models.
Early on I did hit the "OMG, we're cooking our database" stage where we needed to add read caching. When I first did that, memcached was still written in Perl. So that joined my toolbox very early on (sometime in the late 90s).
Once read caching started to not keep up, it was easy enough to make the read cache/memcached layer understand and distribute reads across read slaves. I remember talking to Monty Widenius at The Open Source Conference, I think in San Jose around 2001 or so, about getting MySQL replication to use SSL so I could safely replicate to read slaves in Sydney and London from our write master in PAIX.
I have twice committed the sin of premature optimisation and sharded databases "because this one was _for sure_ going to get too big for our usual database setup". It only ever brought unneeded grief and never actually proved necessary.
Many databases can be distributed horizontally if you put in the extra work; would that not solve the problems you're describing? MariaDB supports at least two forms of replication (one master/replica and one multi-master), for example, and if you're willing to shell out for a MaxScale license it's a breeze to load balance it and have automatic failover.
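If it helps anyone, the classic primary/replica setup in MariaDB is roughly this (a sketch; it assumes binary logging, server_id, and GTIDs are already configured in the server config, and the host/password are placeholders):

    -- On the primary: an account the replica will connect as.
    CREATE USER 'repl'@'%' IDENTIFIED BY 'change-me';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- On the replica: point it at the primary and start replicating.
    CHANGE MASTER TO
        MASTER_HOST = 'db-primary.example.internal',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'change-me',
        MASTER_USE_GTID = slave_pos;
    START SLAVE;
    SHOW SLAVE STATUS;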
I worked at a mobile game company for years and years, and our #1 biggest scaling concern was DB write throughput. We used Percona's MySQL fork/patch/whatever, we tuned as best we could, but when it comes down to it, gaming is a write-heavy application rather than the read-heavy applications I'm used to from ecommerce etc.
Sharding things out and replicating worked for us, but only because we were microservices-y and we were able to split our schemas up between different services. Still, there was one service that required the most disk space, the most write throughput, the most everything.
(IIRC it was the 'property' service, which recorded everything anyone owned in our games and was updated every time someone gained, lost, or used any item, building, ally, etc).
We did have two read replicas and the service didn't do reads from the primary so that it could focus on writes, but it was still a heavy load that was only solved by adding hardware, improving disks, adding RAM, and so on.
Not without big compromises and a lot of extra work. If you want a truly horizontally scaling database, and not just multi-master for the purpose of availability, a good example solution is Spanner. You have to lay your data out differently, you're very restricted in what kinds of queries you can make, etc.
Clarification: you can make unoptimized queries on Spanner with a great degree of freedom when you're doing offline analysis, but even then it's easy to hit something that's too slow to work at all, whereas in Postgres I'd know it wouldn't be a problem.
For what it's worth, I think distributing horizontally is also much easier if you've already limited your database to specific concerns by splitting it up in different ways. Sharding a very large database with lots of deeply linked data sounds like much more of a pain than sharding something with a limited scope that isn't too deeply linked with data, because that other data is already in other databases.
To some degree, sharding brings in a lot of the same complexities as different microservices with their own data store, in that you sometimes have to query across multiple sources and combine in the client.
Shouldn't your company have started to split things out and plan for hitting the limit of hardware a couple of box sizes back? I feel there is a happy middle ground between "spend months making everything a service for our 10 users" and "welp, looks like we can't upsize the DB anymore, guess we should split things off now?"
That is, one huge table keyed by (for instance) the alphabet, and when the load gets too big you split it into a-m and n-z tables, each on either their own disk or their own machine.
Then just keep splitting it like that. All of your application logic stays the same … everything stays very flat and simple … you just point different queries to different shards.
I like this because the shards can evolve from their own disk IO to their own machines… and later you can reassemble them if you acquire faster hardware, etc.
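A single-node sketch of that a-m / n-z idea using Postgres declarative partitioning (hypothetical table; splitting partitions across machines would need foreign tables or application-level routing on top of the same layout):

    CREATE TABLE accounts (
        username text NOT NULL,
        data     jsonb,
        PRIMARY KEY (username)
    ) PARTITION BY RANGE (username);

    CREATE TABLE accounts_a_m PARTITION OF accounts
        FOR VALUES FROM ('a') TO ('n');
    CREATE TABLE accounts_n_z PARTITION OF accounts
        FOR VALUES FROM ('n') TO (MAXVALUE);
    -- Catch-all for usernames that sort before 'a' (digits, capitals, etc.).
    CREATE TABLE accounts_other PARTITION OF accounts DEFAULT;

    -- Application logic doesn't change: queries still target "accounts",
    -- and the planner routes them to the right partition.
    SELECT data FROM accounts WHERE username = 'some_user';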
> Once you get to that point, it becomes SUPER hard to start splitting things out.
Maybe, but if you split it from the start you die by a thousand cuts, and likely pay the cost up front, even if you’d never get to the volumes that’d require a split.
> Once you get to that point, it becomes SUPER hard to start splitting things out. All of a sudden you have 10000 "just a one off" queries against several domains that get broken by trying to carve out a domain into a single owner.
But that's survivorship bias, and looking back at things from the perspective of your current problems.
You know what the least future-proof and scalable project is? The one that gets canceled because it failed to deliver any value in a reasonable time in the early phase. Once you get to "huge project status" you can afford a glacial pace. Most of the time you can't afford that early on; so even if by some miracle you knew what scaling issues you were going to have long term and invested in fixing them early on, it's rarely been a good tradeoff in my experience.
I've seen more projects fail because they tangle themselves up in unnecessary complexity early on and fail to execute on the core value proposition than I've seen fail from being unable to manage the tech debt 10 years in. Developers like to complain about the second kind, but they get fired over the first kind. Unfortunately, in today's job market they just resume-pad their failures as "relevant experience" and move on to the next project, so there is no correcting feedback.
I'd be curious to know what your company does that generates this volume of data (if you can disclose it), what database you are using, and how you are planning to solve this issue.
There are multiple plans on how to fix this problem but they all end up boiling down to carving out domains and their owners and trying to pull apart the data from the database.
What's been keeping the lights on is "Always On" and read-only replicas. New projects aren't adding load to the DB, and it's simply been slow going getting stuff split apart.
What we've tried (and failed at) is sharding the data. The main issue we have is a bunch of systems reading directly from the DB for common records rather than hitting other services. That means any change in structure requires a bunch of system-wide updates.
You can get a machine with multiple terabytes of ram and hundreds of CPU cores easily. If you can afford that, you can afford a live replica to switch to during maintenance.
FastComments runs on one big DB in each region, with a hot backup... no issues yet.
Before you go to microservices you can also shard, as others have mentioned.
This is absolutely true - when I was at Bitbucket (ages ago at this point) and we were having issues with our DB server (mostly due to scaling), almost everyone we talked to said "buy a bigger box until you can't any more" because of how complex (and indirectly expensive) the alternatives are - sharding and microservices both have a ton more failure points than a single large box.
I'm sure they eventually moved off that single primary box, but for many years Bitbucket was run off 1 primary in each datacenter (with a failover), and a few read-only copies. If you're getting to the point where one database isn't enough, you're either doing something pretty weird, are working on a specific problem which needs a more complicated setup, or have grown to the point where investing in a microservice architecture starts to make sense.
One issue I've seen with this is that if you have a single, very large database, it can take a very, very long time to restore from backups. Or for that matter just taking backups.
I'd be interested to know if anyone has a good solution for that.
- you rsync or zfs send the database files from machine A to machine B. You would like the database to be off during this process, which will make it consistent. The big advantage of ZFS is that you can stop PG, snapshot the filesystem, and turn PG on again immediately, then send the snapshot. Machine B is now a cold backup replica of A. Your loss potential is limited to the time between backups.
- after the previous step is completed, you arrange for machine A to send WAL files to machine B. It's well documented. You could use rsync or scp here. It happens automatically and frequently. Machine B is now a warm replica of A -- if you need to turn it on in an emergency, you will only have lost one WAL file's worth of changes.
- after that step is completed, you give machine B credentials to login to A for live replication. Machine B is now a live, very slightly delayed read-only replica of A. Anything that A processes will be updated on B as soon as it is received.
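To make the WAL-shipping and replication-check steps above concrete, a small sketch (the archive path and host are placeholders; archive_mode needs a restart to take effect):

    -- On machine A: ship completed WAL segments to machine B.
    ALTER SYSTEM SET archive_mode = on;
    ALTER SYSTEM SET archive_command =
        'rsync -a %p backup-host:/var/lib/postgresql/wal_archive/%f';
    SELECT pg_reload_conf();

    -- Once streaming replication is up, check it from machine A:
    SELECT client_addr, state, replay_lag FROM pg_stat_replication;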
You can go further and arrange to load balance requests between read-only replicas, while sending the write requests to the primary; you can look at Citus (now open source) to add multi-primary clustering.
This isn't really a backup, it's redundancy, which is a good thing but not the same as a backup solution. You can't get out of a drop-table-production type event this way.
It was first released around 2010 and gained robustness with every release, hence not everyone is aware of it.
For instance, I don't think it's really required anymore to shut down the database to do the initial sync if you use the proper tooling (pg_basebackup, if I remember correctly).
Going back 20 years with Oracle DB it was common to use "triple mirror" on storage to make a block level copy of the database. Lock the DB for changes, flush the logs, break the mirror. You now have a point in time copy of the database that could be mounted by a second system to create a tape backup, or as a recovery point to restore.
It takes exactly the time that it takes, bottlenecked by:
* your disk read speed on one end and write speed on the other, modulo compression
* the network bandwidth between points A and B, modulo compression
* the size of the data you are sending
So, if you have a 10GB database that you send over a 10Gb/s link to the other side of the datacenter, it might be as little as 10 seconds. If you have a 10TB database that you send over a nominally 1GB/s link but actually there's a lot of congestion from other users, to a datacenter on the other side of the world, that might take a hundred hours or so.
rsync can help a lot here, or the ZFS differential snapshot send.
So say the disk fails on your main DB, or for some reason a customer needs data from 6 months ago, which is no longer in your local snapshots. In order to restore the data, you have to transfer the data for the full database back over.
With multiple databases, you only have to transfer a single database, not all of your data.
Do you even have to stop Postgres if using ZFS snapshots? ZFS snapshots are atomic, so I’d expect that to be fine. If it wasn’t fine, that would also mean Postgres couldn’t handle power failure or other sudden failures.
* use pg_dump. Perfect consistency at the cost of a longer transaction. Gain portability for major version upgrades.
* Don't shut down PG: here's what the manual says:
However, a backup created in this way saves the database files in a state as if the database server was not properly shut down; therefore, when you start the database server on the backed-up data, it will think the previous server instance crashed and will replay the WAL log. This is not a problem; just be aware of it (and be sure to include the WAL files in your backup). You can perform a CHECKPOINT before taking the snapshot to reduce recovery time.
* Midway: use SELECT pg_start_backup('label', false, false); and SELECT * FROM pg_stop_backup(false, true); to generate WAL files while you are running the backup, and add those to your backup.
Presumably it doesn't matter if you break your DB up into smaller DBs; you still have the same amount of data to back up no matter what. However, now you also have the problem of snapshot consistency to worry about.
If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
> you still have the same amount of data to back up no matter what
But you can restore/back up the databases in parallel.
> If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
I'm not aware of a good way to restore just a few tables from a full DB backup, at least not one that doesn't require copying over all the data (because the backup is stored over the network, not on a local disk). And that may be desirable to recover from, say, a bug corrupting or deleting a customer's data.
Try out pg_probackup. It works on database files directly. Restore is as fast as you can write on your ssd.
I've set up a pgsql server with timescaledb recently. Continuous backup based on WAL takes seconds each hour, and a complete restore takes 15 minutes for almost 300 GB of data because the 1 GBit connection to the backup server is the bottleneck.
On MariaDB you can tell the replica to enter a snapshottable state[1] and take a simple LVM snapshot, tell the database it's over, back up your snapshot somewhere else, and finally delete the snapshot.
That's fair - I added "are working on a specific problem which needs a more complicated setup" to my original comment as a nicer way of referring to edge cases like search engines. I still believe that 99% of applications would function perfectly fine with a single primary DB.
Depends what you mean by a database I guess. I take it to mean an RDBMS.
RDBMSs provide guarantees that web searching doesn't need. For web stuff you can afford to lose a piece of data or provide not-quite-perfect results; that's just wrong for an RDBMS.
What if you are using the database as a system of record to index into a real search engine like Elasticsearch? For a product where you have tons of data to search over (i.e. text from web pages).
In regards to Elasticsearch, you basically opt-in to which behavior you want/need. You end up in the same place: potentially losing some data points or introducing some "fuzziness" to the results in exchange for speed. When you ask Elasticsearch to behave in a guaranteed atomic manner across all records, performing locks on data, you end up with similar constraints as in a RDBMS.
Elasticsearch is for search.
If you're asking about "what if you use an RDBMS as a pointer to Elasticsearch" then I guess I would ask: why would you do this? Elasticsearch can be used as a system of record. You could use an RDBMS over top of Elasticsearch without configuring Elasticsearch as a system of record, but then you would be lying when you refer to your RDBMS as a "system of record." It's not a "system of record" for your actual data, just a record of where pointers to actual data were at one point in time.
I feel like I must be missing what you're suggesting here.
Having just an Elasticsearch index without also having the data in a primary store like an RDBMS is an anti-pattern and not recommended by almost all experts. Whether you want to call it a "system of record", I won't argue semantics. But the point is, it's recommended to have your data in a primary store from which you index into Elasticsearch.
This is not typically going to be stored in an ACID-compliant RDBMS, which is where the most common scaling problem occurs. Search engines, document stores, adtech, eventing, etc. are likely going to have a different storage mechanism where consistency isn't as important.
I'm glad this is becoming conventional wisdom. I used to argue this in these pages a few years ago and would get downvoted below the posts telling people to split everything into microservices separated by queues (although I suppose it's making me lose my competitive advantage when everyone else is building lean and mean infrastructure too).
But it is also about pushing the limits of what is physically possible in computing. As Admiral Grace Hopper would point out (https://www.youtube.com/watch?v=9eyFDBPk4Yw), covering distance over network wires involves hard latency constraints, not to mention dealing with congestion on those wires.
Physical efficiency is about keeping data close to where it's processed. Monoliths can make much better use of the L1, L2, L3, and RAM caches than distributed systems, for speedups often on the order of 100X to 1000X.
Sure it's easier to throw more hardware at the problem with distributed systems but the downsides are significant so be sure you really need it.
Now there is a corollary to using monoliths. Since you only have one DB, that DB should be treated as somewhat sacred; you want to avoid wasting resources inside it. This means being a bit more careful about how you are storing things: using the smallest data structures, normalizing when you can, etc. This is not to save disk, disk is cheap. This is to make efficient use of L1, L2, L3 and RAM.
I've seen boolean true or false values saved as large JSON documents: {"usersetting1": true, "usersetting2": false, "setting1name": "name", ...}, with 10 bits of data ending up as a 1 KB JSON document. Avoid this! Storing documents means the keys, i.e. the full table schema, are repeated in every row. It has its uses, but if you can predefine your schema and use the smallest types needed, you gain a lot of performance, mostly through much higher cache efficiency.
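As a tiny illustration of the difference (hypothetical settings table):

    -- Document-style: every row re-stores the key names.
    CREATE TABLE user_settings_doc (
        user_id  bigint PRIMARY KEY,
        settings jsonb   -- {"usersetting1": true, "usersetting2": false, ...}
    );

    -- Declared-once schema: each row is a handful of bytes, so far more rows
    -- fit in the caches mentioned above.
    CREATE TABLE user_settings (
        user_id      bigint  PRIMARY KEY,
        usersetting1 boolean NOT NULL DEFAULT false,
        usersetting2 boolean NOT NULL DEFAULT false
    );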
It's not though. You're just seeing the most popular opinion on HN.
In reality it is nuanced like most real-world tech decisions are. Some use cases necessitate a distributed or sharded database, some work better with a single server and some are simply going to outsource the problem to some vendor.
My hunch is that computers caught up. Back in the early 2000's horizontal scaling was the only way. You simply couldn't handle even reasonably mediocre loads on a single machine.
As computing becomes cheaper, horizontal scaling is starting to look more and more like unnecessary complexity for even surprisingly large/popular apps.
I mean you can buy a consumer off-the-shelf machine with 1.5TB of memory these days. 20 years ago, when microservices started gaining popularity, 1.5TB RAM in a single machine was basically unimaginable.
Honestly from my perspective it feels like microservices arose strongly in popularity precisely when it was becoming less necessary. In particular the mass adoption of SSD storage massively changed the nature of the game, but awareness of that among regular developers seemed not as pervasive as it should have been.
'over the wire' is less obvious than it used to be.
If you're in a k8s pod, those calls are really kernel calls. Sure, you're serializing and process switching where you could be just making a method call, but we had to do something.
I'm seeing less 'balls of mud' with microservices. That's not zero balls of mud. But it's not a given for almost every code base I wander into.
To clarify, I think stateless microservices are good. It's when you have too many DBs (and sometimes too many queues) that you run into problems.
A single instance of PostgreSQL is, in most situations, almost miraculously effective at coordinating concurrent and parallel state mutations. To me that's one of the most important characteristics of an RDBMS. Storing data is a simpler, secondary problem. Managing concurrency is the hard problem that I need the most help with from my DB, and having a monolithic DB enables the coordination of everything else, including stateless peripheral services, without resulting in race conditions, conflicts or data corruption.
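The bread-and-butter version of that coordination, as a sketch with hypothetical tables: let the database serialize the conflicting writers instead of doing it in application code.

    BEGIN;
    -- Concurrent transactions touching the same row queue up here instead of racing.
    SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
    UPDATE accounts SET balance = balance - 100 WHERE id = 42;
    INSERT INTO ledger (account_id, delta) VALUES (42, -100);
    COMMIT;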
SQL is the most popular mostly-functional language. This might be because managing persistent state, and keeping data organized and low-entropy, is where you get the most benefit from a functional approach that doesn't add more state. This adds to the effectiveness of using a single transactional DB.
I must admit that even distributed DBs, like Cockroach and Yugabyte, have recognized this and use the PostgreSQL syntax and protocol. This is good, though: it means that if you really need to scale beyond PostgreSQL, you have PostgreSQL-compatible options.
> I'm seeing less 'balls of mud' with microservices.
The parallel to "balls of mud" with microservices is tiny services that seem almost devoid of any business logic and all the actual business logic is encapsulated in the calls between different services, lambda functions, and so on.
That's quite nightmarish from a maintenance perspective too, because now it's almost impossible to look at the system from the outside and understand what it's doing. It also means that conventional tooling can't help you anymore: you don't get compiler errors if your lambda function calls an endpoint that doesn't exist anymore.
Big balls of mud are horrible (I'm currently working with a big-ball-of-mud monolith, I know what I'm talking about), but you can create a different kind of mess with microservices too. Then there are all the other problems, such as operational complexity, or "I now need to update log4j across 30 services".
In the end, a well-engineered system needs discipline and architectural skills, as well as a healthy engineering culture where tech debt can be paid off, regardless of whether it's a monolith, a microservice architecture or something in between.
>"I'm glad this is becoming conventional wisdom. "
Yup, this is what I've always done and it works wonders. Since I do not have bosses, just clients, I do not give a flying fuck about the latest fashion and do what actually makes sense for me and said clients.
I've never understood this logic for webapps. If you're building a web application, congratulations, you're building a distributed system, you don't get a choice. You can't actually use transactional integrity or ACID compliance because you've got to send everything to and from your users via HTTP request/response. So you end up paying all the performance, scalability, flexibility, and especially reliability costs of an RDBMS, being careful about how much data you're storing, and getting zilch for it, because you end up building a system that's still last-write-wins and still loses user data whenever two users do anything at the same time (or you build your own transactional logic to solve that - exactly the same way as you would if you were using a distributed datastore).
Distributed systems can also make efficient use of cache; in fact they can cache more, simply because more nodes means more cache capacity. If you get your dataflow right then you'll have performance that's as good as a monolith on a tiny dataset but keep that performance as you scale up. Not only that, but you can perform a lot better than an ACID system ever could, because you can do things like asynchronously updating secondary indices after the data is committed. But most importantly you have easy failover from day 1, you have easy scaling from day 1, and you can just not worry about that and focus on your actual business problem.
Relational databases are largely a solution in search of a problem, at least for web systems. (They make sense as a reporting datastore to support ad-hoc exploratory queries, but there's never a good reason to use them for your live/"OLTP" data).
I really don't understand how anything of what you wrote follows from the fact that you're building a web-app. Why do you lose user data when two users do anything at the same time? That has never happened to me with any RDBMS.
And why would HTTP requests prevent me from using transactional logic? If a user issues a command such as "copy this data (a forum thread, or a Confluence page, or whatever) to a different place" and that copy operation might actually involve a number of different tables, I can use a transaction and make sure that the action either succeeds fully or is rolled back in case of an error; no extra logic required.
I couldn't disagree more with your conclusion even if I wanted to. Relational databases are great. We should use more of them.
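A minimal sketch of the "copy this data" example above as a single transaction, assuming hypothetical `threads` and `posts` tables and psycopg2; either both inserts commit together or neither does.

```python
import psycopg2

conn = psycopg2.connect("dbname=forum user=app")  # illustrative DSN

def copy_thread(thread_id: int, target_forum_id: int) -> int:
    # Either both INSERTs succeed and are committed together,
    # or an error rolls the whole thing back -- no partial copies.
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO threads (forum_id, title) "
            "SELECT %s, title FROM threads WHERE id = %s RETURNING id",
            (target_forum_id, thread_id),
        )
        (new_thread_id,) = cur.fetchone()
        cur.execute(
            "INSERT INTO posts (thread_id, author_id, body) "
            "SELECT %s, author_id, body FROM posts WHERE thread_id = %s",
            (new_thread_id, thread_id),
        )
        return new_thread_id
```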
> I really don't understand how anything of what you wrote follows from the fact that you're building a web-app. Why do you lose user data when two users do anything at the same time? That has never happened to me with any RDBMS.
> And why would HTTP requests prevent me from using transactional logic? If a user issues a command such as "copy this data (a forum thread, or a Confluence page, or whatever) to a different place" and that copy operation might actually involve a number of different tables, I can use a transaction and make sure that the action either succeeds fully or is rolled back in case of an error; no extra logic required.
Sure, if you can represent what the user wants to do as a "command" like that, that doesn't rely on a particular state of the world, then you're fine. Note that this is also exactly the case that an eventually consistent event-sourcing style system will handle fine.
The case where transactions would actually be useful is the case where a user wants to read something and modify something based on what they read. But you can't possibly do that over the web, because they read the data in one request and write it in another request that may never come. If two people try to edit the same wiki page at the same time, either one of them loses their data, or you implement some kind of "userspace" reconciliation logic - but database transactions can't help you with that. If one user tries to make a new post in a forum thread at the same time as another user deletes that thread, probably they get an error that throws away all their data, because storing it would break referential integrity.
> Sure, if you can represent what the user wants to do as a "command" like that, that doesn't rely on a particular state of the world, then you're fine. Note that this is also exactly the case that an eventually consistent event-sourcing style system will handle fine.
Yes, but the event-sourcing system (or similar variants, such as CRDTs) is much more complex. It's true that it buys you some things (like the ability to roll back to specific versions), but you have to ask yourself whether you really need that for a specific piece of data.
(And even if you use event sourcing, if you have many events, you probably won't want to replay all of them, so you'll maybe want to store the result in a database, in which case you can choose a relational one.)
> If two people try to edit the same wiki page at the same time, either one of them loses their data, or you implement some kind of "userspace" reconciliation logic - but database transactions can't help you with that.
Yes, but
a) that's simply not a problem in all situations. People will generally not update their user profile concurrently with other users, for example. So it only applies to situations where data is truly shared across multiple users, and it doesn't make sense to build a complex system only for these use cases,
b) the problem of users overwriting other users' data is inherent to the problem domain; you will, in the end, have to decide which version is the most recent regardless of which technology you use. The one thing that events etc. buy you is a version history (which btw can also be implemented with an RDBMS), but if you want to expose that in the UI so the user can go back, you have to do additional work anyway - it doesn't come for free.
c) Meanwhile, the RDBMS will at least guarantee that the data is always in a consistent state. Users overwriting other users' data is unfortunate, but corrupted data is worse.
d) You can solve the "concurrent modification" issue in a variety of ways, depending on the frequency of the problem, without having to implement a complex event-sourced system. For example, a lock mechanism is fairly easy to implement and useful in many cases. You could also, for example, hash the contents of what the user is seeing and reject the change if there is a mismatch with the current state (I've never tried it, but it should work in theory; a sketch of the idea follows below).
I don't wish to claim that a relational database solves all transactionality (and consistency) problems, but they certainly solve some of them - so throwing them out because of that is a bit like "tests don't find all bugs, so we don't write them anymore".
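A minimal sketch of the reject-on-mismatch idea from point d), using a version column rather than a content hash (the same compare-and-swap shape works with a hash). Table and column names are made up for illustration.

```python
import psycopg2

conn = psycopg2.connect("dbname=wiki user=app")  # illustrative

def save_page(page_id: int, seen_version: int, new_text: str) -> bool:
    """Optimistic concurrency: the write only succeeds if nobody has
    bumped the version since the user loaded the page."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE pages SET body = %s, version = version + 1 "
            "WHERE id = %s AND version = %s",
            (new_text, page_id, seen_version),
        )
        # rowcount == 0 means a concurrent edit won; show a merge/conflict
        # dialog instead of silently overwriting.
        return cur.rowcount == 1
```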
> Yes, but the event-sourcing system (or similar variants, such as CRDTs) is much more complex.
It's really not. An RDBMS usually contains all of the same stuff underneath the hood (MVCC etc.), it just tries to paper over it and present the illusion of a single consistent state of the world, and unfortunately that ends up being leaky.
> a) that's simply not a problem in all situations. People will generally not update their user profile concurrently with other users, for example. So it only applies to situations where data is truly shared across multiple users,
Sure - but those situations are ipso facto situations where you have no need for transactions.
> b) the problem of users overwriting other users' data is inherent to the problem domain; you will, in the end, have to decide which version is the most recent regardless of which technology you use. The one thing that events etc. buy you is a version history (which btw can also be implemented with an RDBMS), but if you want to expose that in the UI so the user can go back, you have to do additional work anyway - it doesn't come for free.
True, but what does come for free is thinking about it when you're designing your dataflow. Using an event sourcing style forces you to confront the idea that you're going to have concurrent updates going on, early enough in the process that you naturally design your data model to handle it, rather than imagining that you can always see "the" current state of the world.
> c) Meanwhile, the RDBMS will at least guarantee that the data is always in a consistent state. Users overwriting other users' data is unfortunate, but corrupted data is worse.
I'm not convinced, because the way it accomplishes that is by dropping "corrupt" data on the floor. If user A tries to save new post B in thread C, but at the same time user D has deleted that thread, then in a RDBMS where you're using a foreign key the only thing you can do is error and never save the content of post B. In an event sourcing system you still have to deal with the fact that the post belongs in a nonexistent thread eventually, but you don't start by losing the user's data, and it's very natural to do something like mark it as an orphaned post that the user can still see in their own post history, which is probably what you want. (Of course you can achieve that in the RDBMS approach, but it tends to involve more complex logic, giving up on foreign keys and accepting that you have to solve the same data integrity problems as a non-ACID system, or both).
> d) You can solve the "concurrent modification" issue in a variety of ways, depending on the frequency of the problem, without having to implement a complex event-sourced system. For example, a lock mechanism is fairly easy to implement and useful in many cases. You could also, for example, hash the contents of what the user is seeing and reject the change if there is a mismatch with the current state (I've never tried it, but it should work in theory).
That sounds a whole lot more complex than just sticking it in an event sourcing system. Especially when the problem is rare, it's much better to find a solution where the correct behaviour naturally arises in that case than to implement some kind of ad-hoc special-case workaround that will never be tested as rigorously as your "happy path" case.
> It's really not. An RDBMS usually contains all of the same stuff underneath the hood (MVCC etc.), it just tries to paper over it and present the illusion of a single consistent state of the world, and unfortunately that ends up being leaky.
There's nothing leaky about it. Relational algebra is a well-understood mathematical abstraction. Meanwhile, I can just set up postgres and an ORM (or something more lightweight, if I prefer) and I'm good to go - there's thousands of examples of how to do that. Event-sourced architectures have decidedly more pitfalls. If my event handling isn't commutative, associative and idempotent I'm either losing out on concurrency benefits (because I'm asking my queue to synchronise messages) or I'll get undefined behaviour.
There's really probably no scenario in which implementing a CRUD app with a relational database isn't going to take significantly less time than some event sourced architecture.
> Sure - but those situations are ipso facto situations where you have no need for transactions.
> Using an event sourcing style forces you to confront the idea that you're going to have concurrent updates going on
There are tons of examples like backoffice tools (where people might work in shifts or on different data sets), delivery services, language learning apps, flashcard apps, government forms, todo list and note taking apps, price comparison services, fitness trackers, banking apps, and so on, where some or even most of the data is not usually concurrently edited by multiple users, but where you will probably still want consistency guarantees across multiple tables.
Yes, if you're building Twitter, by all means use event sourcing or CRDTs or something. But we're not all building Twitter.
> If user A tries to save new post B in thread C, but at the same time user D has deleted that thread, then in a RDBMS where you're using a foreign key the only thing you can do is error and never save the content of post B.
I don't think I've ever seen a forum app that doesn't just "throw away" the user comment in such a case, in the sense that it will not be stored in the database. Sure, you might have some event somewhere, but how is that going to help the user? Should they write a nice email and hope that some engineer with too much time is going to find that event somewhere buried deep in the production infrastructure and then ... do what exactly with it?
This is a solution in search of a problem. Instead, you should design your UI such that the comment field is not cleared upon a failed submission, like any reasonable forum software. Then the user who really wants to save their ramblings can still do so, without the need of any complicated event-sourcing mechanism. And in most forums, threads are rarely deleted, only locked (unless it's outright spam/illegal content/etc.)
(Also, there are a lot of different ways how things can be designed when you're using an RDBMS. You can also implement soft deletes (which many applications do) and then you won't get any foreign key errors. In that way, you can still display "orphaned" comments that belong to deleted threads, if you so wish (have never seen a forum do that, though). Recovering a soft deleted thread is probably also an order of magnitude easier than trying to replay it from some events. Yes, soft deletes involve other tradeoffs - but so does every architecture choice.)
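For what it's worth, the soft-delete variant mentioned above is essentially a one-column affair. A rough sketch, assuming a `deleted_at` column on a hypothetical `threads` table:

```python
import psycopg2

conn = psycopg2.connect("dbname=forum user=app")  # illustrative

def soft_delete_thread(thread_id: int) -> None:
    # Nothing is physically removed, so no foreign keys break and child
    # posts survive; "deleted" is just a flag that queries filter on.
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE threads SET deleted_at = now() "
            "WHERE id = %s AND deleted_at IS NULL",
            (thread_id,),
        )

def visible_threads(cur):
    # Normal reads simply exclude soft-deleted rows.
    cur.execute("SELECT id, title FROM threads WHERE deleted_at IS NULL")
    return cur.fetchall()
```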
> That sounds a whole lot more complex than just sticking it in an event sourcing system. Especially when the problem is rare, it's much better to find a solution where the correct behaviour naturally arises in that case.
I really disagree that a locking mechanism is more difficult than an event sourced system. The mechanism doesn't have to be perfect. If a user loses the lock because they haven't done anything in half an hour, then in many cases that's completely acceptable. Such a system is not hard to implement (I could just use a redis store with expiring entries) and it will also be much easier to understand, since you now don't have to track the flow of your business logic across multiple services.
I also don't know why you think that your event-sourced system will be better tested. Are you going to test for the network being unreliable, messages getting lost or being delivered out of order, and so on? If so, you can also afford to properly test a locking mechanism (which can be readily done in a monolith, maybe with an additional redis dependency, and is therefore more easily testable than some event-based logic that spans multiple services).
And in engineering, there are rarely "natural" solutions to problems. There are specific problems and they require specific solutions. Distributed systems, event sourcing etc. are great where they're called for. In many cases, they're simply not.
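A rough sketch of the expiring-lock idea mentioned a few paragraphs up, using redis-py; key names and the 30-minute TTL are illustrative, and a production version would release the lock atomically (e.g. via a small Lua script).

```python
import uuid
import redis

r = redis.Redis()  # assumed local Redis; illustrative only
LOCK_TTL = 1800    # seconds -- an abandoned lock silently expires after 30 minutes

def acquire_edit_lock(page_id: int) -> str | None:
    """Try to take the edit lock for a page. Returns a token on success,
    None if someone else holds it. The TTL means stale locks clean themselves up."""
    token = uuid.uuid4().hex
    if r.set(f"edit-lock:{page_id}", token, nx=True, ex=LOCK_TTL):
        return token
    return None

def release_edit_lock(page_id: int, token: str) -> None:
    # Only release if we still own the lock (it may have expired and been re-taken).
    # Note: get-then-delete is not atomic; a Lua script closes that small race.
    key = f"edit-lock:{page_id}"
    if r.get(key) == token.encode():
        r.delete(key)
```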
Http requests work great with relational DBs. This is not UDP: if the TCP connection is broken, an operation will either have finished or been stopped and rolled back atomically, and unless you've placed unneeded queues in there, you should know of success immediately.
When you get the HTTP response, you know the data is fully committed; data that uses it can be refreshed immediately and is accessible to all other systems immediately, so you can perform the next steps relying on those hard guarantees. Behind the HTTP request, a transaction can be opened to do a bunch of stuff, including API calls to other systems if needed, and commit the results as an atomic transaction. There are tons of benefits to using it with HTTP.
But you can't do interaction between the two ends of an HTTP request. The caller makes an inert request; whatever processing happens downstream of that might as well be offline, because it's not and can never be interactive within a single transaction.
Now you're shifting the goalposts. You started out by claiming that web apps can't be transactional, now you've switched to saying they can't be transactional if they're "interactive" (by which you presumably mean transactions that span multiple HTTP requests).
Of course, that's a very particular demand, one that doesn't necessarily apply to many applications.
And even then, depending on the use case, there are relatively straightforward ways of implementing that too: For example, if you build up all the data on the client (potentially by querying the server, with some of the partial data, for the next form page, or whatever) and submit it all in one single final request.
>As Admiral Grace Hopper would point out (https://www.youtube.com/watch?v=9eyFDBPk4Yw ) doing distance over network wires involves hard latency constraints, not to mention dealing with congestions over these wires.
Even accounting for CDNs, a distributed system is inherently more capable of bringing data closer to geographically distributed end users, thus lowering latency.
I think a strong test a lot of "let's use Google scale architecture for our MVP" advocates fail is: can your architecture support a performant paginated list with dynamic sort, filter and search where eventual consistency isn't acceptable?
Pretty much every CRUD app needs this at some point and if every join needs a network call your app is going to suck to use and suck to develop.
I’ve found the following resource invaluable for designing and creating “cloud native” APIs where I can tackle that kind of thing from the very start without a huge amount of hassle https://google.aip.dev/general
I don't believe you. Eventual consistency is how the real world works, what possible use case is there where it wouldn't be acceptable? Even if you somehow made the display widget part of the database, you can't make the reader's eyeballs ACID-compliant.
Yeah, I can attest that even banks really use best-effort eventual consistency. However, I think it is very difficult to reason about systems that try to use eventual consistency as an abstraction. It's a lot easier to think about explicitly when you have one data source/event that propagates outwards through systems with stronger individual guarantees than eventual consistency.
IMO having event streams as first class is the best way to think about things. Then you don't need particularly strong guarantees downstream - think something like Kafka where the only guarantee is that events for the same key will always be processed in order, and it turns out that that's enough to build a system with clear, reliable behaviour that you can reason about quite easily.
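As a concrete illustration of "per-key ordering is enough", here is a sketch using the kafka-python client; the topic, broker address, and event shapes are assumptions for the example.

```python
import json
from kafka import KafkaProducer

# Illustrative broker/topic names; assumes the kafka-python package.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=str.encode,
    value_serializer=lambda v: json.dumps(v).encode(),
)

# All events for account 42 share the key "42", so they land on the same
# partition and are consumed in the order they were produced. That per-key
# ordering is the only guarantee the downstream consumer needs.
producer.send("account-events", key="42", value={"type": "deposit", "amount": 100})
producer.send("account-events", key="42", value={"type": "withdraw", "amount": 30})
producer.flush()
```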
> if every join needs a network call your app is going to suck to use and suck to develop.
And yet developers do this every single day without any issue.
It is bad practice to have your authentication database be the same as your app database. Or you have data coming from SaaS products, third party APIs or a cloud service. Or even simply another service in your stack. And with complex schemas often it's far easier to do that join in your application layer.
> It is bad practice to have your authentication database be the same as your app database.
No, this is resume-driven-development, Google-scale-wannabe FUD. Understand your requirements. Multiple databases is non-trivial overhead. The only reason to add multiple databases is if you need scale that can't be handled via simple caching.
Of course it's hard to anticipate what level of scale you'll have later, but I can tell you this: for every tiny startup that successfully anticipated its scaling requirements and built a brilliant microservices architecture that proactively paved the way to its success, there are 100 burnt-out husks of companies that never found product-market fit because the engineering team was too busy fantasizing about "web-scale" and padding their resumes by overengineering every tiny and unused feature they built.
If you want to get a job at FAANG and suckle at the teat of megacorporations whose trajectory was all based on work done in the early 2000s, by all means study up on "best practices" to recite at your system design interview. On the other hand, if you want to build the next great startup, you need to lose the big-co mentality and start thinking critically from first principles about power-to-weight ratio and YAGNI.
Most of our cloud hosted request/responses are within the realm of 1-10ms, and that's with the actual request being processed on the other side. Unless there's a poorly performing O(N) stinker in the works, most requests can be served with most latency being recorded user->datacenter, not machine to machine overhead. This article is a lot bonkers.
I've seen this evolve into tightly coupled microservices that could be deployed independently in theory, but required exquisite coordination to work.
If you want them to be on a single server, that's fine, but having multiple databases or schemas will help enforce separation.
And, if you need one single place for analytics, push changes to that space asynchronously.
Having said that, I've seen silly optimizations being employed that make sense when you are Twitter, and to nobody else. Slice services up to the point they still do something meaningful in terms of the solution and avoid going any further.
I have done both models. My previous job we had a monolith on top of a 1200 table database. Now I work in an ecosystem of 400 microservices, most with their own database.
What it fundamentally boils down to is that your org chart determines your architecture. We had a single team in charge of the monolith, and it was ok, and then we wanted to add teams and it broke down. On the microservices architecture, we have many teams, which can work independently quite well, until there is a big project that needs coordinated changes, and then the fun starts.
Like always there is no advice that is absolutely right. Monoliths, microservices, function stores. One big server vs kubernetes. Any of those things become the right answer in the right context.
Although I’m still in favor of starting with a modular monolith and splitting off services when it becomes apparent they need to change at a different pace from the main body. That is right in most contexts I think.
> splitting off services when it becomes apparent they need to change at a different pace from the main body
yes - this seems to get lost, but the microservice argument is no different to the bigger picture software design in general. When things change independently, separate and decouple them. It works in code and so there is no reason it shouldn't apply at the infrastructure layer.
If I am responsible for the FooBar and need to update it once a week and know I am not going to break the FroggleBot or the Bazlibee which are run by separate teams who don't care about my needs and update their code once a year, hell yeah I want to develop and deploy it as a separate service.
To clarify the advice, at least how I believe it should be done…
Use One Big Database Server…
… and on it, use one software database per application.
For example, one Postgres server can host many databases that are mostly* independent from each other. Each application or service should have its own database and be unaware of the others, communicating with them via the services if necessary. This makes splitting up into multiple database servers fairly straightforward if needed later. In reality most businesses will have a long tail of tiny databases that can all be on the same server, with only bigger databases needing dedicated resources.
*you can have interdependencies when you’re using deep features sometimes, but in an application-first development model I’d advise against this.
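A sketch of what "one big database server, one database per application" looks like in practice with PostgreSQL and psycopg2; the database and role names are invented for the example.

```python
import psycopg2

# One physical Postgres server; each application gets its own logical database.
admin = psycopg2.connect("dbname=postgres user=postgres")
admin.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
with admin.cursor() as cur:
    for app_db in ("billing", "auth", "catalog"):
        # Identifiers can't be bound as parameters; names here come from a
        # trusted, hard-coded list.
        cur.execute(f"CREATE DATABASE {app_db}")

# Each service then connects only to its own database (role assumed to exist)...
billing_conn = psycopg2.connect("dbname=billing user=billing_svc")
# ...which keeps a later "move billing to its own server" migration down to a
# connection-string change plus a dump/restore.
```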
Not suggesting it, but for the sake of knowledge: you can join tables living in different databases, as long as they are on the same server. MySQL and SQL Server support it directly; in PostgreSQL you need something like postgres_fdw or dblink, so it doesn't necessarily come for free.
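For illustration, a cross-database join in MySQL (where databases on one server are effectively schemas), via mysql-connector-python; the `shop`/`auth` databases, credentials, and table layout are assumptions.

```python
import mysql.connector

# In MySQL, "databases" on the same server are really schemas, so a query can
# qualify tables with their database name and join across them directly.
conn = mysql.connector.connect(host="localhost", user="app", password="secret")
cur = conn.cursor()
cur.execute(
    """
    SELECT o.id, o.total, u.email
    FROM shop.orders AS o
    JOIN auth.users AS u ON u.id = o.user_id
    WHERE o.created_at > NOW() - INTERVAL 7 DAY
    """
)
for order_id, total, email in cur.fetchall():
    print(order_id, total, email)
```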
I’d start with a monolith, that’s a single app, single database, single point of ownership of the data model, and a ton of joins.
Then as services are added after the monolith they can still use the main database for ease of infra development, simpler backups and replication, etc. but those wouldn’t be able to be joined because they’re cross-service.
There's no need for "microservices" in the first place then. That's just logical groupings of functionality that can be separate as classes, namespaces or other modules without being entirely separate processes with a network boundary.
I have to say I disagree with this ... you can only separate them if they are really, truly independent. Trying to separate things that are actually coupled will quickly take you on a path to hell.
The problem here is that most of the microservice architecture divisions are going to be driven by Conway's law, not what makes any technical sense. So if you insist on separate databases per microservice, you're at high risk of ending up with massive amounts of duplicated and incoherent state models and half the work of the team devoted to synchronizing between them.
I quite like an architecture where services are split except the database, which is considered a service of its own.
Well, I stand by what I said. And you are also correct: you can only separate them if they are really, truly independent. Both statements are true at the same time.
Microservices do more than encapsulation and workspace segmentation. They also distribute data locality and coherence. If you have an organizational need to break something up, but it doesn't fall along independent parts, it's better to use some abstraction that preserves the data properties.
(In other words, microservices are almost never the answer. There are plenty of ways to organize your code; default to those other ones. And in the few cases where microservices are the answer, rest assured that you won't fail to notice it.)
>> If you are creating microservices, you must segment them all the way through.
> I have to say I disagree with this ... you can only separate them if they are really, truly independent. Trying to separate things that are actually coupled will quickly take you on a path to hell.
I could be misinterpreting both you and GP, but sounds like you agree with GP - if you can't segment them all the way through, maybe they shouldn't be microservices?
Perhaps - but I think they are underestimating the organisational reasons to separate services from each other. If you are really going to say "we can't separate any two things that have any shared persistent data" then you may just end up with a monolith and all the problems that come from that (gridlock because every team needs to agree before it can be updated / released etc).
Breaking apart a stateless microservice and then basing it around a giant single monolithic database is pretty pointless - at that stage you might as well just build a monolith and get on with it as every microservice is tightly coupled to the db.
Note that quite a bit of the performance problems come from writes. You can get away with A LOT if you accept that 1. the current service doesn't do (much) writing and 2. it can live with slightly old data. Which I think covers 90% of use cases.
So you can end up with those services living on separate machines and connecting to read only db replicas, for virtually limitless scalability. And when it realizes it needs to do an update, it either switches the db connection to a master, or it forwards the whole request to another instance connected to a master db.
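A bare-bones sketch of that read/write split, assuming one primary and a couple of read replicas at invented hostnames; real code would pool connections and handle failover, but the routing decision itself is this small.

```python
import random
import psycopg2

# Illustrative DSNs: one primary for writes, read-only replicas for everything else.
PRIMARY_DSN = "host=db-primary dbname=app user=app"
REPLICA_DSNS = [
    "host=db-replica-1 dbname=app user=app",
    "host=db-replica-2 dbname=app user=app",
]

def get_conn(readonly: bool):
    """Route read-only work to a replica (slightly stale data is acceptable)
    and anything that writes to the primary."""
    dsn = random.choice(REPLICA_DSNS) if readonly else PRIMARY_DSN
    return psycopg2.connect(dsn)

# Read path: tolerant of replication lag.
with get_conn(readonly=True) as conn, conn.cursor() as cur:
    cur.execute("SELECT id, title FROM products ORDER BY created_at DESC LIMIT 20")
    listings = cur.fetchall()

# Write path: goes to the primary.
with get_conn(readonly=False) as conn, conn.cursor() as cur:
    cur.execute("UPDATE products SET stock = stock - 1 WHERE id = %s", (123,))
```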
(1) Different programming languages, e.g. you've written your app in Java but now you need to do something for which the perfect Python library is available.
(2) Different parts of your software need different types of hardware. Maybe one part needs a huge amount of RAM for a cache, but other parts are just a web server. It'd be a shame to have to buy huge amounts of RAM for every server. Splitting the software up and deploying the different parts on different machines can be a win here.
I reckon the average startup doesn't need any of that, not suggesting that monoliths aren't the way to go 90% of the time. But if you do need these things, you can still go the microservices route, but it still makes sense to stick to a single database if at all possible, for consistency and easier JOINs for ad-hoc queries, etc.
These are both true - but neither requires service-oriented-architecture.
You can split up your application into chunks that are deployed on separate hardware, and use different languages, without composing your whole architecture into microservices.
A monolith can still have a separate database server and a web server, or even many different functions split across different servers which are horizontally scalable, and be written in both Java and Python.
Monoliths have had separate database servers since the 80s (and probably before that!). In fact, one of these applications' defining characteristics at the enterprise level is that they often shared one big central database, as they were often composed of lots of small applications that would all make changes to the central database, which would often end up in a right mess of software that was incredibly hard to unpick! (And all the software writing to that database would, as you described, be written in lots of different languages.) People would then come along and cake these central databases full of stored procedures to make magic changes, to implement functionality that wasn't available in the legacy applications they couldn't change because of the risk, and then you have even more of a mess!
Agree. Nothing worse than having different programs changing data in the same database. The database should not be an integration point between services.
In this example, it's the job of the "database access layer service" to manage those processes and prevent issues.
But, terrible service name aside, this is a big reason why two services accessing the same database is a capital-H Huge anti-pattern, and really screams "using this project to learn how to do microservices."
I guess I just don't see the value in having a monolith made up of microservices - you might as well just build a monolith if you are going down that route.
And if your application fits the microservices pattern better, then you might as well go down the microservices pattern properly and not give them a big central DB.
The one advantage of microservice on a single database model is that it lets you test the independent components much more easily while avoiding the complexity of database sharding.
Where I work we are looking at it because we are starting to exceed the capabilities of one big database. Several tables are reaching the billions of rows mark and just plain inserts are starting to become too much.
Yeah, at the billions-of-rows mark it definitely makes sense to start looking at splitting things up. On the other hand, the company I worked for split things up from the start, and when I joined - 4 years down the line - their biggest table had something like 50k rows, but their query performance was awful (tens of seconds in some cases) because the data was so spread out.
I disagree. Suppose you have an enormous DB that's mainly written to by workers inside a company, but has to be widely read by the public outside. You want your internal services on machines with extra layers of security, perhaps only accessible by VPN. Your external facing microservices have other things like e.g. user authentication (which may be tied to a different monolithic database), and you want to put them closer to users, spread out in various data centers or on the edge. Even if they're all bound to one database, there's a lot to recommend keeping them on separate, light cheap servers that are built for http traffic and occasional DB reads. And even more so if those services do a lot of processing on the data that's accessed, such as building up reports, etc.
You've not really built microservices then, in the purest sense - i.e. the microservices aren't all independently deployable components.
I'm not saying what you are proposing isn't a perfectly valid architectural approach - it's just usually considered an anti-pattern with microservices (because if all the services depend on a single monolith, and a change to a microservice functionality also mandates a change to the shared monolith which then can impact/break the other services, we have lost the 'independence' benefit that microservices supposedly gives us where changes to one microservice does not impact another).
Monoliths can still have layers to support business logic that are separate from the database anyway.
yah, this is something i learned when designing my first server stack (using sun machines) for a real business back during the dot-com boom/bust era. our single database server was the beefiest machine by far in the stack, 5U in the rack (we also had a hot backup), while the other servers were 1U or 2U in size. most of that girth was for memory and disk space, with decent but not the fastest processors.
one big db server with a hot backup was our best tradeoff for price, performance, and reliability. part of the mitigation was that the other servers could be scaled horizontally to compensate for a decent amount of growth without needing to scale the db horizontally.
Definitely use a big database, until you can't. My advice to anyone starting with a relational data store is to use a proxy from day 1 (or some point before adding something like that becomes scary).
When you need to start sharding your database, having a proxy is like having a super power.
We see both use cases: single large database vs multiple small, decoupled ones. I agree with the sentiment that a large database offers simplicity, until access patterns change.
We focus on distributing database data to the edge using caching. Typically this eliminates read-replicas and a lot of the headache that goes with app logic rewrites or scaling "One Big Database".
Yep, with a passive replica or online (log) backup.
Keeping things centralized can reduce your hardware requirement by multiple orders of magnitude. The one huge exception is a traditional web service, those scale very well, so you may not even want to get big servers for them (until you need them).
If you do this then you'll have the hardest possible migration when the time comes to split it up. It will take you literally years, perhaps even a decade.
Shard your datastore from day 1, get your dataflow right so that you don't need atomicity, and it'll be painless and scale effortlessly. More importantly, you won't be able to paper over crappy dataflow. It's like using proper types in your code: yes, it takes a bit more effort up-front compared to just YOLOing everything, but it pays dividends pretty quickly.
This is true IFF you get to the point where you have to split up.
I know we're all hot and bothered about getting our apps to scale up to be the next unicorn, but most apps never need to scale past the limit of a single very high-performance database. For most people, this single huge DB is sufficient.
Also, for many (maybe even most) applications, designated outages for maintenance are not only acceptable, but industry standard. Banks have had, and continue to have designated outages all the time, usually on weekends when the impact is reduced.
Sure, what I just wrote is bad advice for mega-scale SaaS offerings with millions of concurrent users, but most of us aren't building those, as much as we would like to pretend that we are.
I will say that TWO of those servers, with some form of synchronous replication, and point in time snapshots, are probably a better choice, but that's hair-splitting.
(and I am a dyed in the wool microservices, scale-out Amazon WS fanboi).
> I know we're all hot and bothered about getting our apps to scale up to be the next unicorn, but most apps never need to scale past the limit of a single very high-performance database. For most people, this single huge DB is sufficient.
True if the reliability is good enough. I agree that many organisations will never get to the scale where they need it as a performance/data size measure, but you often will grow past the reliability level that's possible to achieve on a single node. And it's worth saying that the various things that people do to mitigate these problems - read replicas, WAL shipping, and all that - can have a pretty high operational cost. Whereas if you just slap in a horizontal autoscaling datastore with true master-master HA from day 1, you bypass all of that trouble and just never worry about it.
> Also, for many (maybe even most) applications, designated outages for maintenance are not only acceptable, but industry standard. Banks have had, and continue to have designated outages all the time, usually on weekends when the impact is reduced.
IME those are a minority of applications. Anything consumer-facing, you absolutely do lose out (and even if it's not a serious issue in itself, it makes you look bush-league) if someone can't log into your system at 5AM on Sunday. Even if you're B2B, if your clients are serving customers then they want you to be online whenever their customers are.
> I agree that many organisations will never get to the scale where they need it as a performance/data size measure, but you often will grow past the reliability level that's possible to achieve on a single node.
Many organisations have had, for decades, exceptionally good reliability numbers using a backed-up/failed-over OneBigServer. Great reliability numbers did not suddenly appear only after 2012 when cloudiness took off.
I think you may be underestimating the reliability of OneBigServer.
> If you do this then you'll have the hardest possible migration when the time comes to split it up. It will take you literally years, perhaps even a decade.
At which point a new OneBigServer will be 100x as powerful, and all your upfront work will be for nothing.
I don't know the characteristics of bikesheddb's upstream in detail (if there's ever a production-quality release of bikesheddb I'll take another look), but in general using something that can scale horizontally (like Cassandra or Riak, or even - for all its downsides - MongoDB) is a great approach - I guess it's a question of terminology whether you call that "sharding" or not. Personally I prefer that kind of datastore over an SQL database.
It’s never one big database. Inevitably there are backups, replicas, testing environments, staging, development. In an ideal unchanging world where nothing ever fails and workload is predictable, the one big database is also ideal.
What happens in the real world is that the one big database becomes such a roadblock to change and growth that organisations often throw away the whole thing and start from scratch.
> It’s never one big database. Inevitably there are backups, replicas, testing environments, staging, development. In an ideal unchanging world where nothing ever fails and workload is predictable, the one big database is also ideal.
But if you have many small databases, you need
> backups, replicas, testing environments, staging, development
all times `n`. Which doesn't sound like an improvement.
> What happens in the real world is that the one big database becomes such a roadblock to change and growth that organisations often throw away the whole thing and start from scratch.
Bad engineering orgs will snatch defeat from the jaws of victory no matter what the early architectural decisions were. The one-vs-many databases/services question is almost entirely moot.
Just FYI, you can have one big database, without running it on one big server. As an example, databases like Cassandra are designed to be scaled horizontally (i.e. scale out, instead of scale up).
There are trade-offs when you scale horizontally even if a database is designed for it. For example, DataStax's Storage Attached Indexes or Cassandra's hidden-table secondary indexing allow for indexing on columns that aren't part of the clustering/partitioning, but when you're reading you're going to have to ask all the nodes to look for something if you aren't including a clustering/partitioning criteria to narrow it down.
You've now scaled out, but you now have to ask each node when searching by secondary index. If you're asking every node for your queries, you haven't really scaled horizontally. You've just increased complexity.
Now, maybe 95% of your queries can be handled with a clustering key and you just need secondary indexes to handle 5% of your stuff. In that case, Cassandra does offer an easy way to handle that last 5%. However, it can be problematic if people take shortcuts too much and you end up putting too much load on the cluster. You're also putting your latency for reads at the highest latency of all the machines in your cluster. For example, if you have 100 machines in your cluster with a mean response time of 2ms and a 99th percentile response time of 150ms, you're potentially going to be providing a bad experience to users waiting on that last box on secondary index queries.
This isn't to say that Cassandra isn't useful - Cassandra has been making some good decisions to balance the problems engineers face. However, it does come with trade-offs when you distribute the data. When you have a well-defined problem, it's a lot easier to design your data for efficient querying and partitioning. When you're trying to figure things out, the flexibility of a single machine and much cheaper secondary index queries can be important - and if you hit a massive scale, you figure out how you want to partition it then.
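To illustrate the difference being described, a sketch with the DataStax Python driver: the first query is routed by partition key, the second relies on an assumed secondary index on `status` and fans out across the cluster. The keyspace and table names are invented.

```python
from cassandra.cluster import Cluster

# Illustrative contact point and keyspace; assumes the cassandra-driver package.
session = Cluster(["127.0.0.1"]).connect("shop")

# Partition-key query: routed only to the replicas that own partition user_id=42.
fast = session.execute(
    "SELECT order_id, total FROM orders_by_user WHERE user_id = %s", (42,)
)

# Secondary-index query (index on `status` assumed to exist): no partition key,
# so the coordinator has to fan out to many/all nodes, and latency is set by
# the slowest one -- fine for the rare 5%, risky as a default access path.
slow = session.execute(
    "SELECT order_id, total FROM orders_by_user WHERE status = %s", ("pending",)
)
```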
Cassandra was just an example, but most databases can be scaled either vertically or horizontally via sharding. You are right that, if misconfigured, performance can be hindered, but this is also true for a database that is being scaled vertically. Generally speaking, with a large dataset you will get better performance by growing horizontally than by growing vertically.
Cassandra may be great when you have to scale a database that you are no longer developing significantly. The problem with this DB system is that you have to know all the queries before you can define the schema.
A relative worked for a hedge fund that used this idea. They were a C#/MSSQL shop, so they just bought whatever was the biggest MSSQL server at the time, updating frequently. They said it was a huge advantage, where the limit in scale was more than offset by productivity.
I think it's an underrated idea. There's a lot of people out there building a lot of complexity for datasets that in the end are less than 100 TB.
But it also has limits. Infamously Twitter delayed going to a sharded architecture a bit too long, making it more of an ugly migration.
I do, it is running on the same big (relatively) server as my native C++ backend talking to the database. The performance smokes your standard cloudy setup big time: serving a thousand requests per second on 16 cores without breaking a sweat. I am all for monoliths running on real, non-cloudy hardware. As long as the business scale is reasonable and does not approach FAANG (true for 90% of businesses), this solution is superior to everything else in terms of money, maintenance, and development time.
I agree with this sentiment, but it is often misunderstood as a mandate to force everything into a single database schema. More people need to learn about logically separating schemas within their database servers!
Another area for consolidation is auth. Use one giant keycloak, with individual realms for every one of the individual apps you are running. Your keycloak is back ended by your one giant database.
I agree that 1BDB is a good idea, but having one ginormous schema has its own costs. So I still think data should be logically partitioned between applications/microservices - in PG terms, one “cluster” but multiple “databases”.
We solved the problem of collecting data from the various databases for end users by having a GraphQL layer which could integrate all the data sources. This turned out to be absolutely awesome. You could also do something similar using FDW. The effort was not significant relative to the size of the application.
The benefits of this architecture were manifold but one of the main ones is that it reduces the complexity of each individual database, which dramatically improved performance, and we knew that if we needed more performance we could pull those individual databases out into their own machine.
I'd say, one big database per service. Often times there are natural places to separate concerns and end up with multiple databases. If you ever want to join things for offline analysis, it's not hard to make a mapreduce pipeline of some kind that reads from all of them and gives you that boundless flexibility.
Then if/when it comes time for sharding, you probably only have to worry about one of those databases first, and you possibly shard it in a higher-level logical way that works for that kind of service (e.g. one smaller database per physical region of customers) instead of something at a lower level with a distributed database. Horizontally scaling DBs sound a lot nicer than they really are.
>>(they don't know how your distributed databases look, and oftentimes they really do not care)
Nor should they; it's the engineer's/team's job to provide the database layer to them with a high level of service, without them having to know the details.
I'm pretty happy to pay a cloud provider to deal with managing databases and hosts. It doesn't seem to cause me much grief, and maybe I could do it better but my time is worth more than our RDS bill. I can always come back and Do It Myself if I run out of more valuable things to work on.
Similarly, paying for EKS or GKE or the higher-level container offerings seems like a much better place to spend my resources than figuring out how to run infrastructure on bare VMs.
Every time I've seen a normal-sized firm running on VMs, they have one team who is responsible for managing the VMs, and either that team is expecting a Docker image artifact or they're expecting to manage the environment in which the application runs (making sure all of the application dependencies are installed in the environment, etc) which typically implies a lot of coordination between the ops team and the application teams (especially regarding deployment). I've never seen that work as smoothly as deploying to ECS/EKS/whatever and letting the ops team work on automating things at a higher level of abstraction (automatic certificate rotation, automatic DNS, etc).
That said, I've never tried the "one big server" approach, although I wouldn't want to run fewer than 3 replicas, and I would want reproducibility so I know I can stand up the exact same thing if one of the replicas go down as well as for higher-fidelity testing in lower environments. And since we have that kind of reproducibility, there's no significant difference in operational work between running fewer larger servers and more smaller servers.
"Your product asks will consistently want to combine these data sources (they don't know how your distributed databases look, and oftentimes they really do not care)."
This isn't a problem if state is properly divided along the proper business domains and the people who need to access the data have access to it. In fact many use cases require such a division - publicly traded companies can't let anyone in the organization access financial info, and healthcare companies can't let anyone access patient data. And of course there are performance concerns as well if anyone in the organization can arbitrarily execute queries on any of the organization's data.
I would say YAGNI applies to data segregation as well and separations shouldn't be introduced until they are necessary.
"combine these data sources" doesn't necessarily mean data analytics. Just as an example, it could be something like "show a badge if it's the user's birthday", which if you had a separate microservice for birthdays would be much harder than joining a new table.
Replace "people" with "features" and my comment still holds. As software, features, and organizations become more complex the core feature data becomes a smaller and smaller proportion of the overall state and that's when microservices and separate data stores become necessary.
At my current job we have four different databases, so I concur with this assessment. I think it's okay to have some data in different DBs if they're significantly different - say, user login data could be in its own database. But anything we do that combines e-commerce and testing/certification should be in one big database, so I can run reasonable queries for the information we need. This doesn't include two other databases we have on-prem, one of which is a Salesforce setup and the other an internal application system that essentially marries Salesforce to the rest. It's a weird, wild environment to navigate when adding features.
> Your product asks will consistently want to combine these data sources (they don't know how your distributed databases look, and oftentimes they really do not care).
I'm not sure how to parse this. What should "asks" be?
Mostly agree, but you have to be very strict with the DB architecture. Have very reasonable schema. Punish long running queries. If some dev group starts hammering the DB cut them off early on, don't let them get away with it and then refuse to fix their query design.
The biggest nemesis of the big-DB approach is dev teams who don't care about the impact of their queries.
Also move all the read-only stuff that can be a few minutes behind to a separate (smaller) server with custom views updated in batches (e.g. product listings). And run analytics out of peak hours and if possible in a separate server.
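One way to implement "read-only stuff that can be a few minutes behind" without a separate server is a materialized view refreshed on a schedule; a sketch with assumed `products`/`categories`/`reviews` tables.

```python
import psycopg2

conn = psycopg2.connect("dbname=shop user=app")  # illustrative

# One-time setup: a denormalized product listing that reads never have to join for.
SETUP = """
CREATE MATERIALIZED VIEW IF NOT EXISTS product_listing AS
    SELECT p.id, p.title, p.price, c.name AS category, count(r.id) AS review_count
    FROM products p
    JOIN categories c ON c.id = p.category_id
    LEFT JOIN reviews r ON r.product_id = p.id
    GROUP BY p.id, p.title, p.price, c.name;
CREATE UNIQUE INDEX IF NOT EXISTS product_listing_id ON product_listing (id);
"""

def setup():
    with conn, conn.cursor() as cur:
        cur.execute(SETUP)

def refresh_listing():
    # Run from a cron job / scheduler every few minutes; CONCURRENTLY keeps the
    # view readable while it is rebuilt (it requires the unique index above).
    with conn, conn.cursor() as cur:
        cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY product_listing")
```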
The rule is: Keep related data together. Exceptions are: Different customers (usually don't require each others data) can be isolated. And if the database become the bottleneck you can separate unrelated services.
Surely having separate DBs all sit on the One Big Server is preferable in many cases. For cases where you really need to extract large amounts of data derived from multiple DBs, there's no real harm in having some cross-DB joins defined in views somewhere. If there are sensible logical ways to break a monolithic service into component stand-alone services, and good business reasons to do so (or it's already been designed that way), then having each talk to its own DB on a shared server should be able to scale pretty well.
If you get your services right, there is little or no communication between the services, since a microservice should have all the data it needs in its own store.
Hardware engineers are pushing the absolute physical limits of getting state (memory/storage) as close as possible to compute. A monumental accomplishment as impactful as the invention of agriculture and the industrial revolution.
Software engineers: let's completely undo all that engineering by moving everything apart as far as possible. Hmmm, still too fast. Let's next add virtualization and software stacks with shitty abstractions.
Fast and powerful browser? Let's completely ignore 20 years of performance engineering and reinvent...rendering. Hmm, sucks a bit. Let's add back server rendering. Wait, now we have to render twice. Ah well, let's just call it a "best practice".
The mouse that I'm using right now (an expensive one) has a 2GB desktop Electron app that seems to want to update itself twice a week.
The state of us, the absolute garbage that we put out, and the creative ways in which we try to justify it. It's like a mind virus.
Actually, those who push for these cloudy solutions do so in part to bring data closer to you. I am talking mostly about CDNs; I don't think YouTube and Netflix would have been possible without them.
Google is a US company, but you don't want people in Australia to connect to the other side of the globe every time they need to access Google services, it would be an awful waste of intercontinental bandwidth. Instead, Google has data centers in Australia to serve people in Australia, and they only hit US servers when absolutely needed. And that's when you need to abstract things out. If something becomes relevant in Australia, move it in there, and move it out when it no longer matters. When something big happens, copy it everywhere, and replace the copies by something else as interest wanes.
Big companies need to split everything, they can't centralize because the world isn't centralized. The problem is when small businesses try to do the same because "if Google is so successful doing that, it must be right". Scale matters.
Agreed and I think it's easier to compare tech to the movie industry. Just look at all the crappy movies they produce with IMDB ratings below 5 out of 10, that is movies that nobody's going to even watch; then there are the shitty blockbusters with expensive marketing and greatly simplified stories optimized for mindless blockbuster movie goers; then there are rare gems, true works of art that get recognized at festivals at best but usually not by the masses. The state of the movie industry is overall pathetic, and I see parallels with the tech here.
> Software engineers: let's completely undo all that engineering by moving everything apart as far as possible. Hmmm, still too fast. Let's next add virtualization and software stacks with shitty abstractions.
That's because the concept which is even more impactful than agriculture and the computer, and makes them and everything else in our lives, is abstraction. It makes it possible to reason about large and difficult problems, to specialize, to have multiple people working on them.
Computer hardware is as full of abstraction and separation and specialization as software is. The person designing the logic for a multiplier unit has no more need to know how transistors are etched into silicon than a javascript programmer does.
Billions of people are on the internet now, vs 20 years ago. I dare say millions of lives have been saved (due to various things) in the past 20 years, due to the things built and deployed on the web.
We may have failed at some abstract notion of craftsmanship or performance efficiency. But we as an industry shipped. We shipped a lot, actually. A lot of it also sucked. But not enough to say the whole industry was a failure, IMHO.
What are you having difficulty understanding? I'll be happy to try help.
> The web is slower than ever.
No it isn't.
> Desktop apps 20 years ago were faster than today's garbage.
Some are, some aren't. For the same thing, they clearly aren't. A typewriter makes your PC of 20 years ago look like glacial garbage, if that's your standard.
> We failed.
Speak for yourself. Computers are used far more often, for more things, and by more people than they were 20 years ago, and nothing they used to be used for has been replaced by something else. You'll always have the get off my lawn types, but you did in the 2000s from the curmudgeons stuck in the 80s too.
Your assessment has no impact. Nobody disagrees with the notion that programmers trade performance for reduced complexity or better productivity. This isn't some astounding discovery; it's a tired old gripe that doesn't add anything.
Heh, there's a mention here to Andy and Bill's Law, "What Andy giveth, Bill taketh away," which is a reference to Andy Grove (Intel) and Bill Gates (Microsoft).
Since I have a long history with Sun Microsystems, upon seeing "Andy and Bill's Law" I immediately thought this was a reference to Andy Bechtolsheim (Sun hardware guy) and Bill Joy (Sun software guy). Sun had its own history of software bloat, with the latest software releases not fitting into contemporary hardware.
> The mouse that I'm using right now (an expensive one) has a 2GB desktop Electron app that seems to want to update itself twice a week.
I'm using a Logitech MX Master 3, and it comes with the "Logi Options+" to configure the mouse. I'm super frustrated with the cranky and slow app. It updates every other day and crashes often.
The experience is much better when I can configure the mouse with an open-source driver [^0] while using Linux.
I use Logi Options too, but while it's stable for me, it still uses a bafflingly high amount of CPU. But if I don't run Logi Options, then mouse buttons 3+4 stop working :-/
It's been like that for years.
Logitech's hardware is great, so I don't know why they think it's OK to push out such shite software.
Let me add fuel to the fire. When I started my career, users were happy to select among a handful of 8x8 bitmap fonts. Nowadays, users expect to see a scalable male-doctor-skin-tone-1 emoji. The former can be implemented by blitting 8 bytes from ROM; the latter requires an SVG engine - just to render one character.
While bloatware cannot be excluded, let's not forget that user expectations have tremendously increased.
We're not a very serious industry, despite it, uhm, pretty much running the world. We're a joke. Sometimes I feel it doesn't even earn the term "engineering" at all, and rather than improving, it seems to get ever worse.
Which really is a stunning accomplishment in a backdrop of spectacular hardware advances, ever more educated people, and other favorable ingredients.
We're much more like artisans than engineers, in my opinion (maybe with the exception of extremely deep-in-the-stack things like compiler engineering).
The problem seems to be that because there's no "right way", only wrong ways, discussions end up being circular. I'm not a civil engineer, but I imagine there is a "best way" to build a bridge in any landscape, where any decisions and tradeoffs have well defined parameters, gained through trial and error and regulation over literally thousands of years of building bridges.
Us "Software Artisans" spend almost as much time arguing as lawyers do because, like law, it's all made up. Information, and human-to-human communication via CPU instructions abstracted to the point of absurdity.
I also get the vibe that greybeards like Uncle Bob and Martin Fowler understand this very intuitively.
I get what you're saying but I reject the notion that some of these tech choices are 100% subjective and that there's no "right way" at all.
If hardware has increased in speed/capacity by a factor of 10-100 in a decade and our "accomplishment" is to actually make software increasingly slow, shitty and bloated with no new added value to the user, you'll have an idea of the absurd waste and inefficiency of our stacks.
When you add lanes to a highway, it generally does not improve congestion or travel times. Drivers adjust and fill up the new lanes, until travel times are roughly the same as before (but with slightly more throughput now).
So it is with hardware and software. I don't see any reason to correlate faster/better hardware with an expectation that software must also get better. It would be economically irrational for the software industry (whatever that means) to spend resources/energy on improving efficiency when the "gains" from hardware are essentially a free lunch to eat... Who would pay for lunch or spend time making their own, when hardware guys are giving you bigger portions for free?
That doesn't mean you have to like the outcome, but at least it should be perfectly predictable, given what we know about economics and game theory and incentives.
Software engineers don't want to be managing physical hardware and often need to run highly available services. When a team lacks the skill, geographic presence or bandwidth to manage physical servers but needs to deliver a highly-available service, I think the cloud offers legitimate improvements in operations with downsides such as increased cost and decreased performance per unit of cost.
> However, cloud providers have often had global outages in the past, and there is no reason to assume that cloud datacenters will be down any less often than your individual servers.
A nice thing about being in a big provider is when they go down a massive portion of the internet goes down, and it makes news headlines. Users are much less likely to complain about your service being down when it's clear you're just caught up in the global outage that's affecting 10 other things they use.
This is a huge one -- value in outsourcing blame. If you're down because of a major provider outage in the news, you're viewed more as a victim of a natural disaster rather than someone to be blamed.
I hear this repeated so many times at my workplace, and it's so totally and completely uninformed.
Customers who have invested millions of dollars into making their stack multi-region, multi-cloud, or multi-datacenter aren't going to calmly accept the excuse that "AWS Went Down" when you can't deliver the services you contractually agreed to deliver. There are industries out there where having your service casually go down a few times a year is totally unacceptable (Healthcare, Government, Finance, etc). I worked adjacent to a department that did online retail a while ago and even an hour of outage would lose us $1M+ in business.
I wonder if the aggregate outage time from misconfigured and over-architected high availability services is greater than the average AWS outage per year.
Similar to security, the last few 9s of availability come at a heavily increasing (log) complexity / price. The cutoff will vary case by case, and I’m sure the decision on how many 9s you need is often irrational (CEO says it can never go down! People need their pet food delivered on time!).
> I hear this repeated so many times at my workplace, and it's so totally and completely uninformed.
> Customers who have invested millions of dollars into making their stack multi-region, multi-cloud, or multi-datacenter...
It sounds like the idea may be bad for your workplace, but that doesn't make it uninformed here. For the average B2C or business-to-small-business application, the customer doesn't even know what a region or datacenter is, all they know is that "the internet" isn't working and your service went down with it. These customers also don't have an SLA with guaranteed uptimes. The only thing they agreed to were the Terms and Conditions that explicitly say "no warranty, express or implied".
If you're selling to large enterprises, yeah, "AWS went down" won't cut it. But in most other cases it will.
I'm going to say that an hour a year is wildly optimistic. But even then, that puts you at 4 nines (99.99%), which is comparatively awful, considering that an old-fashioned telephone using technology from the 1970s will achieve, on average, 5 nines of reliability, or 5.26 minutes of downtime per year, and that most IT shops operating their own infrastructure contractually expect 5 nines from even fairly average datacenters and transit providers.
I was amused when I joined my current company to find that our contracts only stipulate one 9 of reliability (98%). So ~30 mins a day or ~14 hours a month is permissible.
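For anyone sanity-checking these numbers, here's a quick back-of-the-envelope script (a throwaway sketch; the 30-day month is an assumption) that turns an availability target into allowed downtime:

    # Allowed downtime for a given availability target (sketch; 30-day month assumed).
    def allowed_downtime_minutes(availability: float, period_minutes: float) -> float:
        return (1.0 - availability) * period_minutes

    YEAR = 365 * 24 * 60   # minutes in a year
    MONTH = 30 * 24 * 60   # minutes in a 30-day month

    for target in (0.98, 0.99, 0.999, 0.9999, 0.99999):
        print(f"{target:.3%}: {allowed_downtime_minutes(target, YEAR):9.1f} min/year, "
              f"{allowed_downtime_minutes(target, MONTH):7.1f} min/month")

At 98% that works out to roughly 14.4 hours a month, which lines up with the contract above, and 99.999% is about 5.3 minutes a year.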
I think it's more of a shield against upper management. AWS going down is treated like an act of god rendering everyone blameless. But if it's your one big server that goes down then it's your fault.
>> AWS going down is treated like an act of god rendering everyone blameless.
Someone decided to use AWS, so there is blame to go around. I'm not saying if that blame is warranted or not, just that it sounds like a valid thing to say for people who want to blame someone.
“Nobody gets fired for using AWS” is pretty big nowadays. We use GCP, but if they have an issue and it bubbles down to me, nobody bats an eye when I say the magical cloud man made an uh-oh whoopsie and it wasn’t me.
I doubt anyone has ever been fired for choosing AWS. I know for a fact that people have been fired after deciding to do it on bare metal and then it didn't work very well.
Agreed. Recently I was discussing the same point with a non-technical friend who was explaining that his CTO had decided to move from Digital Ocean to AWS, after DO experienced some outage. Apparently the CEO is furious at him and has assumed that DO are the worst service provider because their services were down for almost an entire business day. The CTO probably knows that AWS could also fail in a similar fashion, but by moving to AWS it becomes more or less an Act of God type of situation and he can wash his hands of it.
I find this entire attitude disappointing. Engineering has moved from "provide the best reliability" to "provide the reliability we won't get blamed for the failure of". Folks who have this attitude missed out on the dang ethics course their college was teaching.
If rolling your own is faster, cheaper, and more reliable (it is), then the only justification for cloud is assigning blame. But you know what you also don't get? Accolades.
I throw a little party of one here when Office 365 or Azure or AWS or whatever Google calls its cloud products this week is down but all our staff are able to work without issue. =)
If you work in B2B you can put the blame on Amazon and your customers will ask "understandable, take the necessary steps to make sure it doesn't happen again". AWS going down isn't an act of God, it's something you should've planned for, especially if it happened before.
I don't really have much to do with contracts - but my company states that we have uptime of 99.xx%.
In terms of the contract, customers don't care if I have Azure/AWS or keep my server in a box under the stairs. Yes, they do due diligence and would not buy my services if I kept it in a shoe box.
But then if they lose business they come to me. I can go after Azure/AWS, but I am so small they will throw some free credits at me and tell me to go away.
Maybe if you are in the B2C area then yeah - your customers will probably shrug and say it was M$ or Amazon if you write a sad blog post with excuses.
It's going to depend on the penalties for being unavailable. Small B2B customers are very different from enterprise B2B customers too, so you ultimately have to build for your context.
If you have to give service credits to customers then with "one box" you have to give 100% of customers a credit. If your services are partitioned across two "shards" then one of those shards can go down, but your credits are only paid out at 50%.
Getting to this place doesn't prevent a 100% outage and it imposes complexity. This kind of design can be planned for enterprise B2B apps when the team are experienced with enterprise clients. Many B2B SaaS are tech folk with zero enterprise experience, so they have no idea of relatively simple things that can be done to enable a shift to this architecture.
Enterprise customers do care where things are hosted. They very likely have some users in the EU, or other locations, which care more about data protection and sovereignty than the average US organization. Since they are used to hosting on-prem and doing their own due diligence they will often have preferences over hosting. In industries like healthcare, you can find out what the hosting preferences are, as well as understand how the public clouds are addressing them. While not viewed as applicable by many on HN due to the focus on B2C and smaller B2B here, this is the kind of thing that can put a worse product ahead in the enterprise scenario.
It really varies a lot. I have seen very large lazy sites suddenly pick up a client that wanted an RCA for each bad transaction, and suddenly get religion (well, as quickly as a large org can). Those are precious clients because they force investment into useful directions of availability instead of just new features.
Because you have a vendor/customer relationship. The big thing for AWS is employer/employee relationships. If you were a larger company, and AWS goes down, who blames you? Who blames anyone in the company? At the C-level, does the CEO expect more uptime than Amazon? Of course not. And so it goes.
Whereas if you do something other than the industry standard of AWS (or Azure/GCP) and it goes down, clearly it's your fault.
Users are much more sympathetic to outages when they're widespread. But, if there's a contractual SLA then their sympathy doesn't matter. You have to meet your SLA. That usually isn't a big problem as SLAs tend to account for some amount of downtime, but it's important to keep the SLA in mind.
There is also the consideration that this isn't even an argument of "other things are down too!" or "outsourcing blame" as much as, depending on what your service is of course, you are unlikely to be operating in a bubble. You likely have some form of external dependencies, or you are an external dependency, or have correlated/cross-dependency usage with another service.
Guaranteeing isolation between all of these different moving parts is very difficult. Even if you're not directly affected by a large cloud outage, it's becoming less and less common that you, or your customers, are truly isolated.
As well, if your AWS-hosted service mostly exists to service AWS-hosted customers, and AWS is down, it doesn't matter if you are down. None of your customers are operational anyways. Is this a 100% acceptable solution? Of course not. But for 95% of services/SaaS out there, it really doesn't matter.
Depends on how technical your customer base is. Even as a developer I would tend not to ascribe too much signal to that message. All it tells me is that you don't use AWS.
"We stayed online when GCP, AWS, and Azure go down" is a different story. On the other hand, if those three go down simultaneously, I suspect the state of the world will be such that I'm not worried about the internet.
> "We stayed online when GCP, AWS, and Azure go down" is a different story. On the other hand, if those three go down simultaneously, I suspect the state of the world will be such that I'm not worried about the internet.
If nothing else, with those three all down, so will most news sources be -- so even if you're up, your customers won't get to hear about it.
You also have to factor in the complexity of running thousands of servers vs running just one server. If you run just one server, it's unlikely to go down even once in its lifetime. Meanwhile, cloud providers are guaranteed to have outages due to the sheer complexity of managing thousands of servers.
The AWS people now are just like the IBM people in the 80s - mastering a complex and not standards based array of products and optional product add-ons. The internet solutions were open and free for a few decades and now it’s AWS SNADS I mean AWS load balancers and edge networks.
AWS services are usually based on standards anyway. If you use an architecturally sound approach to AWS you could learn to develop for GCP or Azure pretty easily.
When migrating from [no-name CRM] to [big-name CRM] at a recent job, the manager pointed out that when [big-name CRM] goes down, it's in the Wall Street Journal, and when [no-name] goes down, it's hard to get their own Support Team to care!
No. Your users have no idea that you rely on AWS (they don't even know what it is), and they don't think of it as a valid or reasonable excuse as to why your service is down.
If you are not maxing out or even getting above 50% utilization of 128 physical cores (256 threads), 512 GB of memory, and 50 Gbps of bandwidth for $1,318/month, I really like the approach of multiple low-end consumable computers as servers. I have been using arrays of Intel NUCs at some customer sites for years with considerable cost savings over cloud offerings. Keep an extra redundant one in the array ready to swap out a failure.
Another often overlooked option is that in several fly-over states it is quite easy and cheap to register as a public telecommunication utility. This allows you to place a powered pedestal in the public right-of-way, where you can get situated adjacent to an optical meet point and get considerable savings on installation costs of optical Internet, even from a tier 1 provider. If your server bandwidth is peak utilized during business hours and there is an apartment complex nearby you can use that utility designation and competitively provide residential Internet service to offset costs.
> competitively provide residential Internet service to offset costs.
I uh. Providing residential Internet for an apartment complex feels like an entire business in and of itself and wildly out of scope for a small business? That's a whole extra competency and a major customer support commitment. Is there something I'm missing here?
It depends on the scale - it does not have to be a major undertaking. You are right, it is a whole extra competency and a major customer support commitment, but for a lot of the entrepreneurial folk on HN quite a rewarding and accessible learning experience.
The first time I did anything like this was in late 1984 in a small town in Iowa where GTE was the local telecommunication utility. Absolutely abysmal Internet service, nothing broadband from them at the time or from the MSO (Mediacom). I found out there was a statewide optical provider with cable going through the town. I incorporated an LLC, became a utility and built out less than 2 miles of single mode fiber to interconnect some of my original software business customers at first. Our internal motto was "how hard can it be?" (more as a rebuke to GTE). We found out. The whole 24x7 public utility thing was very difficult for just a couple of guys. But it grew from there. I left after about 20 years and today it is a thriving provider.
Technology has made the whole process so much easier today. I am amazed more people do not do it. You can get a small rack-mount sheet metal pedestal with an AC power meter and an HVAC unit for under $2k. Being a utility will allow you to place that on a concrete pad or vault in the utility corridor (often without any monthly fee from the city or county). You place a few bollards around it so no one drives into it. You want to get quotes from some tier 1 providers [0]. They will help you identify the best locations to engineer an optical meet and those are the locations you run by the city/county/state utilities board or commission.
For a network engineer wanting to implement a fault tolerant network, you can place multiple pedestals at different locations on your provider's/peer's network to create a route diversified protected network.
After all, when you are buying expensive cloud based services that literally is all your cloud provider is doing ... just on a completely more massive scale. The barrier to entry is not as high as you might think. You have technology offerings like OpenStack [1], where multiple competitive vendors will also help you engineer a solution. The government also provides (financial) support [2].
The best perk is the number of parking spaces the requisite orange utility traffic cone opens up for you.
> You can get a small rack-mount sheet metal pedestal with an AC power meter and an HVAC unit for under $2k.
Things have changed a lot and the dominant carriers are no longer willing to interconnect with small guys.
The anti-small bias now extends to the Department of Transportation in most states (which "owns" the right of way). In Washington, WSDOT has an entire set of rules for "financially small" (their term) telecoms, basically designed to prevent them from existing. They claim this is to prevent "financially small" providers from defaulting on damage they cause to the roadway ("not able to abate or correct their environmental damage").
You're missing "apartment complex" - you as the service provider contract with the apartment management company to basically cover your costs, and they handle the day-to-day along with running the apartment building.
Done right, it'll be cheaper for them (they can advertise "high speed internet included!" or whatever) and you won't have much to do assuming everything on your end just works.
The days where small ISPs provided things like email, web hosting, etc, are long gone; you're just providing a DHCP IP and potentially not even that if you roll out carrier-grade NAT.
I feel like this would open up the company to too much liability. Too many of your apartment users torrenting/streaming, too many DMCA filings to deal with, when my main business is "hypothetically" being a top-3 nationwide payroll provider.
I have only done a few midwestern states. Call them and ask [0] - (919) 733-7328. You may want to first call your proposed county commissioner's office or city hall (if you are not rural), and ask them who to talk with about a new local business providing Internet service. If you can show the Utilities Commission that you are working with someone at the local level I have found they will treat you more seriously. In certain rural counties, you can even qualify for funding from the Rural Utilities Service of the USDA.
EDIT: typos + also most states distinguish between facilities-based ISP's (ie with physical plant in the regulated public right-of-way) and other ISPs. Tell them you are looking to become a facilities-based ISP.
The benefit that is obvious to the regulators is that you can charge money for services. So for example, offering telephone services requires being a LEC (local exchange carrier) or CLEC (competitive local exchange carrier). But even telephone services have become considerably unregulated through VoIP. It's just that at some point, the VoIP has to terminate/interface with a (C)LEC offering real dial tone and telephone numbering. You can put in your own Asterisk server [0] and provide VoIP service on your burgeoning optical utilities network, together with other bundled services including television, movies, gaming, metering etc.. All of these offerings can be resold from wholesale services, where all you need is an Internet feed.
Other benefits to being a "public telecommunication utility" include the competitive right to place your own facilities on telephone/power poles or underground in public right-of-way under the Telecommunications Act of 1996. You will need to enter into and pay for a pole attachment agreement. Of course local governments can reserve the right to tariff your facilities, which has its own ugliness.
One potentially valuable thing a utility can do is place empty conduit in public right of way that can be used/resold in the future at a (considerable) gain. For example, before highways, roadways, airports and other infrastructure is built, it is orders of magnitude cheaper just to plow conduit under bare ground before the improvements are placed.
> the competitive right to place your own facilities on telephone/power poles or underground in public right-of-way under the Telecommunications Act of 1996
This is not true. The FCC doesn't regulate the first pole attachment by a given attacher to poles owned by a given owner. The pole owners are basically free to use all sorts of lame reasons for refusing your first pole attachment request.
The FCC only gets involved when a company already has some (even just one) attachments and is getting rejected or stonewalled on making more attachments.
If you think about it, this is typical captured regulator behavior. The phone companies already have attachments to the electric utility's poles wherever they operate. So this lets the phone companies call in the FCC on any pole dispute. But it provides zero assistance to any new market entrants who want to compete with the existing phone company.
It also makes the decision to allow the first attachment a much harder decision for the pole owner, with the result being that the electrical utilities are incentivized to exclude new telecoms from competing with the phone company. But of course these new telecoms aren't trying to provide electrical services, so it doesn't look anticompetitive to a superficial analysis.
> Other benefits to being a "public telecommunication utility" include the competitive right to place your own facilities on telephone/power poles or underground in public right-of-way under the Telecommunications Act of 1996. You will need to enter into and pay for a pole attachment agreement. Of course local governments can reserve the right to tariff your facilities, which has its own ugliness.
Note that in many parts of the country, the telcos/cablecos themselves own the poles. Google had a ton of trouble with AT&T in my state thanks to this. They lost to AT&T in court and gave up.
While VOIP is mostly unregulated, be acutely aware of e-911 laws and requirements.
This isn't the Wild West shitshow it was in 2003 when I was doing similar things :)
We have a different take on running "one big database." At ScyllaDB we prefer vertical scaling because you get better utilization of all your vCPUs, but we still will keep a replication factor of 3 to ensure that you can maintain [at least] quorum reads and writes.
So we would likely recommend running 3x big servers. For those who want to plan for failure, though, they might prefer to have 6x medium servers, because then the loss of any one means you don't take as much of a "torpedo hit" when any one server goes offline.
So it's a balance. You want to be big, but you don't want to be monolithic. You want an HA architecture so that no one node kills your entire business.
I also suggest that people planning systems create their own "torpedo test." We often benchmark to tell maximal optimum performance, presuming that everything is going to go right.
But people who are concerned about real-world outage planning may want to "torpedo" a node to see how a 2-out-of-3-nodes-up cluster operates, versus a 5-out-of-6-nodes-up cluster.
This is like engine-out planning for jets: checking whether you can keep flying with 2 of 3 engines, or 1 of 2.
Obviously, if you have 1 engine, there is nothing you can do if you lose that single point of failure. At that point, you are updating your resume, and checking on the quality of your parachute.
I think this is the right approach, and I really admire the work you do at ScyllaDB. For something truly critical, you really do want to have multiple nodes available (at least 2, and probably 3 is better). However, you really should want to have backup copies in multiple datacenters, not just the one.
Today, if I were running something that absolutely needed to be up 24/7, I would run a 2x2 or 2x3 configuration with async replication between primary and backup sites.
Exactly. Regional distribution can be vital. Our customer Kiwi.com had a datacenter fire. 10 of their 30 nodes were turned to a slag heap of ash and metal. But 20 of 30 nodes in their cluster were in completely different datacenters so they lost zero data and kept running non-stop. This is a rare story, but you do NOT want to be one of the thousands of others that only had one datacenter, and their backups were also stored there and burned up with their main servers. Oof!
Well said. Caring about vertical scale doesn't mean you have to throw out a lot of the lessons learned about still being horizontally scalable or high availability.
Some comments wrongly equate bare-metal with on-premise. Bare-metal servers can be rented out, collocated, or installed on-premise.
Also, when renting, the company takes care of hardware failures. Furthermore, as hard disk failures are the most common issue, you can have hot spares and opt to let damaged disks rot, instead of replacing them.
For example, in ZFS, you can mirror disks 1 and 2, while having 3 and 4 as hot spares, with the following command:
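Assuming a pool named "tank" and four disks sda through sdd (names are placeholders), that's roughly:

    # Mirror the first two disks; keep the other two attached as hot spares.
    zpool create tank mirror /dev/sda /dev/sdb spare /dev/sdc /dev/sdd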
Disregarding the security risks of multi-tenant cloud instances, bare-metal is more cost-effective once your cloud bill exceeds $3,000 per year, which is the cost of renting two bare-metal servers.
---
Here's how you can create a two-server infrastructure:
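As a minimal sketch (hostnames, pool, and snapshot names are placeholders, and ZFS replication is just one way to do it): run everything on the primary and keep the second box as a warm standby that periodically receives snapshots.

    # On the primary: take a snapshot and ship it incrementally to the standby.
    zfs snapshot tank/data@2024-01-02
    zfs send -i tank/data@2024-01-01 tank/data@2024-01-02 | ssh standby zfs recv -F tank/data
    # Point DNS (short TTL) at the primary; on failure, promote the standby and flip the record.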
IMO microservices primarily solve organizational problems, not technical problems.
They allow a team to release independently of other teams that have or want to make different risk/velocity tradeoffs. Also smaller units being released means fewer changes and likely fewer failed releases.
Yeah, not to mention all the extra operational issues and failure modes that come with RPCs vs function calls. Integration testing and release coordination both become more difficult as well.
But hundreds of people contributing to a single binary is probably not realistic; at some point you'll need to factor it into pieces that can have somewhat independent operations.
I’m not sure one binary is the scaling problem you’d hit first though - makes me wonder just how many people can work on something that gets compiled together into a monolith
I have been doing this for two decades. Let me tell you about bare metal.
Back in the day we had 1,000 physical servers to run a large scale web app. 90% of that capacity was used only for two months. So we had to buy 900 servers just to make most of our money over two events in two seasons.
We also had to have 900 servers because even one beefy machine has bandwidth and latency limits. Your network switch simply can't pump more than a set amount of traffic through its backplane or your NICs, and the OS may have piss-poor packet performance too. Lots of smaller machines allow easier scaling of network load.
But you can't just buy 900 servers. You always need more capacity, so you have to predict what your peak load will be, and buy for that. And you have to do it well in advance because it takes a long time to build and ship 900 servers and then assemble them, run burn-in, replace the duds, and prep the OS, firmware, software. And you have to do this every 3 years (minimum) because old hardware gets obsolete and slow, hardware dies, disks die, support contracts expire. But not all at once, because who knows what logistics problems you'd run into and possibly not get all the machines in time to make your projected peak load.
If back then you told me I could turn on 900 servers for 1 month and then turn them off, no planning, no 3 year capital outlay, no assembly, burn in, software configuration, hardware repair, etc etc, I'd call you crazy. Hosting providers existed but nobody could just give you 900 servers in an hour, nobody had that capacity.
And by the way: cloud prices are retail prices. Get on a savings plan or reserve some instances and the cost can be half. Spot instances are a quarter or less the price. Serverless is pennies on the dollar with no management overhead.
If you don't want to learn new things, buy one big server. I just pray it doesn't go down for you, as it can take up to several days for some cloud vendors to get some hardware classes in some regions. And I pray you were doing daily disk snapshots, and can get your dead disks replaced quickly.
The thing that confuses me is, isn't every publicly accessible service bursty on a long timescale? Everything looks seasonal and predictable until you hit the front page of Reddit, and you don't know what day that will be. You don't decide how much traffic you get, the world does.
Hitting the front page of reddit is insignificant, it's not like you'll get anywhere near thousands upon thousands of requests each second. If you have a somewhat normal website and you're not doing something weird then it's easily handled with a single low-end server.
If I get so much traffic that scaling becomes a problem then I'll be happy as I would make a ton of money. No need to build to be able to handle the whole world at the same time, that's just a waste of money in nearly all situations.
"Hitting the front page of Reddit" is a metanym for, "today you suddenly have many multiples of the previous day's traffic banging down your door, for reasons entirely outside of your control or ability to foresee." I agree this is a huge revenue opportunity - but if you can't stay up, you may not be able to capitalize on it.
This & sibling comments seem to imagine that every application is some special case of static web hosting, and if that is the domain you're working in I can see how you may be able to cheaply over provision to the point where you don't really worry about downtime. If you don't need distributed computing, definitely don't apply distributed computing to your problem and you'll have a cheaper and better time. I'm not some kind of cloud maximalist; if you're telling me you've done some diligence for your application and it's better off at Hetzner, sure, I believe it.
I'm pretty skeptical that this is most applications, however. Consider a browser based MMORPG where each interaction in the game will fire off one or many requests, and each player interacts several times a second. If hitting the front page results in hundreds of new players, it's easy to imagine having thousands of QPS.
What makes you think people are talking about static pages?
Even in the MMORPG case it should be no problem at all if you get a few thousand new players (that would be some insane conversion ratio on reddit/HN traffic). Even cheap servers are fast.
With great power comes great responsibility. When I start to learn a new cloud service, I definitely start with the billing, limits, and quotas. Concern here is definitely warranted. It's a bit like programming in C, there are some seatbelts but they're not absolutely guaranteed to work, and it's ultimately on you to do it right. I'd love to see this get much safer.
However, the corollary of this is; without this responsibility, you don't have access to the great power.
If you can't handle traffic from reddit or a larger site, you configured static pages and caching incorrectly, or you run your site on a Raspberry Pi, I guess.
Pi behind CloudFront/Cloudflare and your caching sorted (as you say) and you'll handle pretty much anything. (Well, maybe make that a Pi 4 8GB if you can get one.)
Having your caching set up incorrectly is verrryyy easy to do. There are lots of things you can miss, and you don't realise until a whole lot of traffic hits you.
> I have been doing this for two decades. Let me tell you about bare metal.
> Back in the day we had 1,000 physical servers to run a large scale web app. 90% of that capacity was used only for two months. So we had to buy 900 servers just to make most of our money over two events in two seasons.
> We also had to have 900 servers because even one beefy machine has bandwidth and latency limits. Your network switch simply can't pump more than a set amount of traffic through its backplane or your NICs, and the OS may have piss-poor packet performance too. Lots of smaller machines allow easier scaling of network load.
I started working with real (bare metal) servers on real internet loads in 2004 and retired in 2019. While there's truth here, there's also missing information. In 2004, all my servers had 100M ethernet, but in 2019, all my new servers had 4x10G ethernet (2x public, 2x private), actually some of them had 6x, but with 2x unconnected, I dunno why. In the meantime, CPUs, NICs, and operating systems have improved such that if you're not getting line rate for full MTU packets, it's probably because your application uses a lot of CPU, or you've hit a pathological case in the OS (which happens, but if you're running 1000 servers, you've probably got someone to debug that).
If you still need 1000 beefy 10G servers, you've got a pretty formidable load, but splitting it up into many more smaller servers is asking for problems of different kinds. Otoh, if your load really scales to 10x for a month, and you're at that scale, cloud economics are going to work for you.
My seasonal loads were maybe 50% more than normal, but usage trends (and development trends) meant that the seasonal peak would become the new normal soon enough; cloud managing the peaks would help a bit, but buying for the peak and keeping it running for the growth was fine. Daily peaks were maybe 2-3x the off-peak usage, 5 or 6 days a week; a tightly managed cloud provisioning could reduce costs here, but probably not enough to compete with having bare metal for the full day.
Let me take you back to March, 2020. When millions of Americans woke up to find out there was a pandemic and they would be working from home now. Not a problem, I'll just call up our cloud provider and request more cloud compute. You join a queue of a thousand other customers calling in that morning for the exact same thing. A few hours on hold and the CSR tells you they aren't provisioning any more compute resources. east-us is tapped out, central-europe tapped out hours ago, California got a clue and they already called to reserve so you can't have that either.
I use cloud all the time but there are also blackswan events where your IaaS can't do anymore for you.
I never had this problem on AWS though I did see some startups struggle with some more specialized instances. Are midsize companies actually running into issues with non-specialized compute on AWS?
Our problem was we had less than 24 hours to transition to work from home. Someone came down with COVID symptoms and spread it to the office, and no one wanted to come in. We didn't have enough laptops for 250+ employees. Developer-equivalent 16-core, 32 GB RAM, GPU instances are radically different from general-compute web front ends. And we couldn't get enough of them. We had to tell some staff to hang tight while checking AWS+Azure daily.
These weren't the typical cheap scale out, general compute but virtualized workstations to replace physical, in office equivalents.
That's a good point about cloud services being retail. My company gets a very large discount from one of the most well-known cloud providers. This is available to everybody - typically if you commit to 12 months of a minimum usage then you can get substantial discounts. What I know is so far everything we've migrated to the cloud has resulted in significantly reduced total costs, increased reliability, improved scalability, and is easier to enhance and remediate. Faster, cheaper, better - that's been a huge win for us!
The entire point of the article is that your dated example no longer applies: you can fit the vast majority of common loads on a single server now, they are this powerful.
Redundancy concerns are also addressed in the article.
> If you don't want to learn new things, buy one big server. I just pray it doesn't go down for you
You are taking this a bit too literally. The article itself says one server (and backups).
So "one" here just means a small number not literally no fallback/backup etc. (obviously... even people you disagree with are usually not morons)
> If you don't want to learn new things, buy one big server. I just pray it doesn't go down for you
There's intermediate ground here. Rent one big server, reserved instance. Cloudy in the sense that you get the benefits of the cloud provider's infrastructure skills and experience, and uptime, plus easy backup provisioning; non-cloudy in that you can just treat that one server instance like your own hardware, running (more or less) your own preferred OS/distro, with "traditional" services running on it (e.g. in our case: nginx, gitea, discourse, mantis, ssh)
I handled an 8x increase in traffic to my website from a YouTuber reviewing our game by increasing the cache timer and fixing the wiki creating session table entries for logged-out users, on a wiki that required accounts to edit it.
We already got multiple millions of page hits a month before this happened.
This server had 8 cores, but 5 of them were reserved for the game servers (10 TB a month in bandwidth) running on the same machine.
If you needed 1,000 physical computers to run your webapp, you fucked up somewhere along the line.
I didn't want to write a top-level comment and I'm sure few people will see this, but I scrolled down very far in this thread and didn't see this point made anywhere:
The article focuses almost entirely on technical questions, but the technical considerations are secondary; the reason so many organizations prefer cloud services, VMs, and containers is to manage the challenges of scaling organizationally, not technically.
Giving every team the tools necessary to spin up small or experimental services greases the skids of a large or quickly growing organization. It's possible to set this up on rented servers, but it's an up front cost in time.
The article makes perfect sense for a mature public facing service with a lot of predictable usage, but the sweet spot for cloud services is sprawling organizations with lots of different teams doing lots of different mostly-internally facing things.
I agree with almost everything you said; except that the article offers extremely valuable advice for small startups going the cloud / rented VM route: Yearly payments, or approaching a salesperson, can lead to much lower costs.
(I should point out that yesterday, in Azure, I added a VM in a matter of seconds and it took all of 15 minutes to boot up and start running our code. My employer is far too small to have dedicated ops; the cost of cloud VMs is much cheaper than hiring another ops / devops / whatever.)
Yep. To be clear, I thought it was a great article with lots of great advice, just too focused on the technical aspects of cloud benefits, whereas I think the real value is organizational.
Interesting write-up that acknowledges the benefits of cloud computing while starkly demonstrating the value proposition of just one powerful, on-prem server. If it's accurate, I think a lot of people are underestimating the mark-up cloud providers charge for their services.
I think one of the major issues I have with moving to the cloud is a loss of sysadmin knowledge. The more locked in you become to the cloud, the more that knowledge atrophies within your organization. Which might be worth it to be nimble, but it's a vulnerability.
I like One Big (virtual) Server until you come to software updates. At a current project we have one server running the website in production. It runs an old version of Centos, the web server, MySQL and Elasticsearch all on the one machine.
No network RTTs when doing too many MySQL queries on each page - great! But when you want to upgrade one part of that stack... we end up cloning the server, upgrading it, testing everything, and then repeating the upgrade in-place on the production server.
I don't like that. I'd far rather have separate web, DB and Elasticsearch servers where each can be upgraded without fear of impacting the other services.
You could just run system containers (eg. lxd) for each component, but still on one server. That gets you multiple "servers" for the purposes of upgrades, but without the rest of the paradigm shift that Docker requires.
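A minimal sketch of what that could look like with LXD (the image and container names here are arbitrary):

    # One system container per component, all on the same physical box.
    lxc launch ubuntu:22.04 web
    lxc launch ubuntu:22.04 db
    lxc launch ubuntu:22.04 search
    # Upgrade packages inside "db" without touching the others.
    lxc exec db -- apt-get upgrade -y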
Which is great until there's a security vuln in an end-of-life piece of core software (the distro, the kernel, lxc, etc) and you need to upgrade the whole thing, and then it's a 4+ week slog of building a new server, testing the new software, fixing bugs, moving the apps, finding out you missed some stuff and moving that stuff, shutting down the old one. Better to occasionally upgrade/reinstall the whole thing with a script and get used to not making one-off changes on servers.
If I were to buy one big server, it would be as a hypervisor. Run Xen or something and that way I can spin up and down VMs as I choose, LVM+XFS for snapshots, logical disk management, RAID, etc. But at that point you're just becoming a personal cloud provider; might as well buy smaller VMs from the cloud with a savings plan, never have to deal with hardware, make complex changes with a single API call. Resizing an instance is one (maybe two?) API call. Or snapshot, create new instance, delete old instance: 3 API calls. Frickin' magic.
"the EC2 Instance Savings Plans offer up to 72% savings compared to On-Demand pricing on your Amazon EC2 Instances" - https://aws.amazon.com/savingsplans/
Huh? Using lxd would be identical to what you suggest (VMs on Xen) from a security upgrade and management perspective. Architecturally and operationally they're basically the equivalent, except that VMs need memory slicing up but lxd containers don't. There are security isolation differences but you're not talking about that here?
I would want the memory slicing + isolation, plus a hypervisor like Xen doesn't need an entire host OS so there's less complexity, vulns, overhead, etc, and I'm not aware if LXD does the kind of isolation that e.g. allows for IKE IPsec tunnels? Non-hypervisors don't allow for it IIRC. Would rather use Docker for containers because the whole container ecosystem is built around it.
Fine, but then that's your reason. "until there's a security vuln in an end-of-life piece of core software...and then it's a 4+ week slog of building a new server" isn't a difference in the context of comparing Xen VMs and lxd containers. As an aside, lxd does support cgroup memory slicing. It has the advantage that it's not mandatory like it is in VMs, but you can do it if you want it.
> Would rather use Docker for containers because the whole container ecosystem is built around it.
This makes no sense. You're hearing the word "container" and inferring an equivalence that does not exist. The "whole container ecosystem" is something that exists for Docker-style containers, and is entirely irrelevant for lxd containers.
lxd containers are equivalent to full systems, and exist in the "Use one big server" ecosystem. If you're familiar with running a full system into a VM, then you're familiar with the inside of a lxd container. They're the same. In userspace, there's no significant difference.
I use LXC a lot for our relatively small production setup. And yes, I'm treating the servers like pets, not cattle.
What's nice is that I can snapshot a container and move it to another physical machine. Handy for (manual) load balancing and upgrades to the physical infrastructure. It is also easy to run a snapshot of the entire server and then run an upgrade, then if the upgrade fails, you roll back to the old snapshot.
Doesn't the container help with versioning the software inside it, but you're still tied to the host computer's operating system, and so when you upgrade that you have to test every single container to see if anything broke?
Whereas if running a VM you have a lot more OS upgrades to do, but you can do them individually and they have no other impact?
This is the bit I've never understood with containers...
In the paper on Twitter’s “Who to Follow” service they mention that they designed the service around storing the entire twitter graph in the memory of a single node:
> An interesting design decision we made early in the Wtf project was to assume in-memory processing on a single server. At first, this may seem like an odd choice, running counter to the prevailing wisdom of "scaling out" on cheap, commodity clusters instead of "scaling up" with more cores and more memory. This decision was driven by two rationales: first, because the alternative (a partitioned, distributed graph processing engine) is significantly more complex and difficult to build, and, second, because we could! We elaborate on these two arguments below.
> Requiring the Twitter graph to reside completely in memory is in line with the design of other high-performance web services that have high-throughput, low-latency requirements. For example, it is well-known that Google's web indexes are served from memory; database-backed services such as Twitter and Facebook require prodigious amounts of cache servers to operate smoothly, routinely achieving cache hit rates well above 99% and thus only occasionally require disk access to perform common operations. However, the additional limitation that the graph fits in memory on a single machine might seem excessively restrictive.
I always wondered if they still do this and if this influenced any other architectures at other companies.
Yeah I think single machine has its place, and I once sped up a program by 10000x by just converting it to Cython and having it all fit in the CPU cache, but the cloud still does have a place! Even for non-bursty loads. Even for loads that theoretically could fit in a single big server.
Uptime.
Or are you going to go down as all your workers finish? Long connections? Etc.
It is way easier to gradually handover across multiple API servers as you do an upgrade than it is to figure out what to do with a single beefy machine.
I'm not saying it is always worth it, but I don't even think about the API servers when a deploy happens anymore.
Furthermore, if you build your whole stack this way, you end up with code that is non-distributed by default. Easy to transition for some things, hell for others. Some access patterns or algorithms are fine when everything is in a CPU cache or memory, but would fall over completely across multiple machines. Part of the nice part about starting cloud-first is that it is generally easier to scale to billions of people afterwards.
That said, I think the original article makes a nuanced case with several great points and I think your highlighting of the Twitter example is a good showcase for where single machine makes sense.
I have gone well beyond this figure by doing clever tricks in software and batching multiple transactions into IO blocks where feasible. If your average transaction is substantially smaller than the IO block size, then you are probably leaving a lot of throughput on the table.
The point I am trying to make is that even if you think "One Big Server" might have issues down the road, there are always some optimizations that can be made. Have some faith in the vertical.
This path has worked out really well for us over the last ~decade. New employees can pick things up much more quickly when you don't have to show them the equivalent of a nuclear reactor CAD drawing to get started.
> batching multiple transactions into IO blocks where feasible. If your average transaction is substantially smaller than the IO block size, then you are probably leaving a lot of throughput on the table.
Could you expand on this? A quick Google search didn't help. Link to an article or a brief explanation would be nice!
Sure. If you are using some micro-batched event processing abstraction, such as the LMAX Disruptor, you have an opportunity to take small batches of transactions and process them as a single unit to disk.
For event sourcing applications, multiple transactions can be coalesced into a single IO block & operation without much drama using this technique.
Surprisingly, this technique also lowers the amount of latency that any given user should experience, despite the fact that you are "blocking" multiple users to take advantage of small batching effects.
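To make that concrete, here is a toy Python sketch of the coalescing step (just the batching idea, not the Disruptor API; the 4 KB block size and names are assumptions):

    import os

    BLOCK_SIZE = 4096  # assumed IO block size

    def flush_batch(fd: int, events: list) -> None:
        # Coalesce a small batch of serialized events (bytes) into a single
        # block-aligned write: one write and one fsync for N transactions,
        # instead of one pair per transaction.
        payload = b"".join(events)
        padding = (-len(payload)) % BLOCK_SIZE   # pad up to the next block boundary
        os.write(fd, payload + b"\x00" * padding)
        os.fsync(fd)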
As per usual, don't copy Google if you don't have the same requirements. Google Search never goes down. HN goes down from time and nobody minds. Google serves tens (hundreds?) of thousands of queries per second. HN serves ten. HN is fine with one server because it's small. How big is your service going to be? Do that boring math :)
Correct. I like to ask "how much money do we lose if the site goes down for 1hr? a day?" etc.. and plan around that. If you are losing 1m an hour, or 50m if it goes down for a day, hell yeah you should spend a few million on making sure your site stays online!
But, it is amazing how often c-levels cannot answer this question!
I think Elixir/Erlang is uniquely positioned to get more traction in the inevitable microservice/kubernetes backlash and the return to single server deploys (with a hot backup). Not only does it usually sip server resources but it also scales naturally as more cores/threads are available on a server.
Going from an Erlang "monolith" to a Java/k8s cluster, I was amazed at how much more work it takes to build a "modern" microservice. Erlang still feels like the future to me.
While individual Node.js processes are single-threaded, Node.js includes a standard API that distributes its load across multiple processes, and therefore cores.
Don't be scared of 'one big server' for reliability. I'd bet that if you hired a big server today in a datacenter, the hardware will have more uptime than something cloud-native with az-failover hosted on AWS.
Just make sure you have a tested 30 minute restoration plan in case of permanent hardware failure. You'll probably only use it once every 50 years on average, but it will be an expensive event when it happens.
The way I code now after 10 years: Use one big file.
No executable I'm capable of writing on my own is complex enough to need 50 files spread across a 3-layers-deep directory tree. Doesn't matter if it's a backend, a UI, or what. There's no way your React or whatever tutorial example code needs that either. And you don't gain any meaningful organization splitting into files when there are already namespaces, classes, structs, comments, etc. I don't want to waste time reorganizing it, dealing with imports, or jumping around different files while I code.
Oh, there's some custom lib I want to share between executables, like a Postgres client? Fine, it gets its own new file. Maybe I end up with 4 files in the end.
This is sorta how our team does things, and so far it hasn't presented issues. Each service has the vast majority of its real logic in a single file. Worst case, one day this stops working, and someone takes 10 minutes to split things into a separate file.
On the other side, I've seen people spend hours preemptively deciding on a file structure. It often stops making sense a month later, and every code review has a back and forth argument about what to name a new file.
Reminds me of a company I used to work at which took a similar approach. We used one file per person policy, each developer had their own file that contained functionality developed by them, named like firstName_lastName.ext - everyone owned their file so we didn't have to worry about merge conflicts.
On the team at my day job, it'd be very bad for each person to strictly "own" their code like that because things get handed off all the time, but in some other situations I can see it making sense.
There are some Firebase specific annoyances to put up with, like the local emulator is not as nice and "isomorphic" as say running postgresql locally.
But the main problem (and I think this is shared by what I call loosely "distributed databases") is you have to think really hard about how the data is structured.
You can't structure it as nicely from a logical perspective compared to a relational DB. Because you can't join without pulling data from all over the place. Because the data isn't in one place. It is hard to do joins both in terms of performance and in terms of developer ergonomics.
I really miss SELECT A.X, B.Y FROM A JOIN B ON A.ID = B.AID; when using Firebase.
You have to make data storage decisions early on, and it is hard to change your mind later. It is hard to migrate (and may be expensive if you have a lot of existing data).
I picked Firebase for the wrong reason (I thought it would make MVP quicker to set up). But the conveniences it provides are outweighed by having to structure your data for distribution across servers.
Instead next time I would go relational, then when I hit a problem do that bit distributed. Most tables have 1000s of records. Maybe millions. The table with billions might need to go out to something distributed.
Market gap??:
Let me rent real servers, but expose it in a "serverless" "cloud-like" way, so I don't have to upgrade the OS and all that kind of stuff.
In my opinion the best argument for RDBMSs came, ironically, from Rick Houlihan, who was at that time devrel for DynamoDB. Paraphrasing from memory, he said "most data is relational, because relationships are what give data meaning, but relational databases don't scale."
Which, maybe if you're Amazon, RDBMSs don't scale. But for a pleb like me, I've never worked on a system even close the scaling limits of an RDBMS—Not even within an order of magnitude of what a beefy server can do.
DynamoDB, Firebase, etc. require me to denormalize data, shape it to conform to my access patterns—And pray that the access patterns don't change.
No. I think I'll take normalized data in an RDBMS, scaling be damned.
> Let me rent real servers, but expose it in a "serverless" "cloud-like" way, so I don't have to upgrade the OS and all that kind of stuff.
I think you're describing platform-as-a-service? It does exist, but it didn't eat cloud's lunch, rather the opposite I expect.
It's hard to sell a different service when most technical people in medium-big companies are at the mercy of non-technical people who just want things to be as normal as possible. I recently encountered this problem where even using Kubernetes wasn't enough, we had to use one of the big three, even though even sustained outages wouldn't be very harmful to our business model. What can I say, boss want cloud.
Yes, it's very hard to beat Postgres IMO. You can use Firebase without using its database, and you can certainly run a service with a Postgres database without having to rent out physical servers.
At various points in my career, I worked on Very Big Machines and on Swarms Of Tiny Machines (relative to the technology of their respective times). Both kind of sucked. Different reasons, but sucked nonetheless. I've come to believe that the best approach is generally somewhere in the middle - enough servers to ensure a sufficient level of protection against failure, but no more to minimize coordination costs and data movement. Even then there are exceptions. The key is don't run blindly toward the extremes. Your utility function is probably bell shaped, so you need to build at least a rudimentary model to explore the problem space and find the right balance.
1) you need to get over the hump and build multiple servers into your architecture from the get-go (the author says you need two servers minimum), so really we are talking about two big servers.
2) having multiple small servers allows us to spread our service into different availability zones
3) multiple small servers allows us to do rolling deploys without bringing down our entire service
4) once we use the multiple small servers approach it’s easy to scale up and down our compute by adding or removing machines. With one big server it’s difficult to scale up or down without buying more machines. Small servers we can add incrementally, but with the large-server approach scaling up requires downtime and buying a new server.
The line of thinking you follow is what is plaguing this industry with too much complexity and simultaneously throwing away incredible CPU and PCIe performance gains in favor of using the network.
Any technical decisions about how many instances to have and how they should be spread out need to start as a business decision and end in crisp numbers about recovery point/time objectives, and yet somehow that nearly never happens.
To answer your points:
1) Not necessarily. You can stream data backups to remote storage and recover from that on a new single server as long as that recovery fits your Recovery Time Objective (RTO).
2) What's the benefit of multiple AZs if the SLA of a single AZ is greater than your intended availability goals? (Have you checked your provider's single AZ SLA?)
3) You can absolutely do rolling deploys on a single server.
4) Using one large server doesn't mean you can't complement it with smaller servers on an as-needed basis. AWS even has a service for doing this.
Which is to say: there aren't any prescriptions when it comes to such decisions. Some businesses warrant your choices, the vast majority do not.
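To put point 2 above in numbers, here is a back-of-envelope sketch; the SLA and target figures are illustrative, not any particular provider's.

```typescript
// Rough availability arithmetic with made-up numbers.
const singleAzSla = 0.999;   // assumed single-AZ availability
const target = 0.995;        // assumed business availability goal

// Two independent AZs (optimistic: ignores shared control planes, auth, DNS...).
const twoAzAvailability = 1 - (1 - singleAzSla) ** 2;

const minutesPerYear = 365 * 24 * 60;
const downtime = (a: number) => ((1 - a) * minutesPerYear).toFixed(0);

console.log(`target:  ${downtime(target)} min/yr allowed`);
console.log(`one AZ:  ${downtime(singleAzSla)} min/yr expected`);
console.log(`two AZs: ${downtime(twoAzAvailability)} min/yr expected`);
// If one AZ is already inside the target, the second AZ is complexity,
// not a requirement.
```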
> Any technical decisions about how many instances to have and how they should be spread out need to start as a business decision and end in crisp numbers about recovery point/time objectives, and yet somehow that nearly never happens.
Nobody wants to admit that their business or their department actually has a SLA of "as soon as you can, maybe tomorrow, as long as it usually works". So everything is pretend-engineered to be fifteen nines of reliability (when in reality it sometimes explodes because of the "attempts" to make it robust).
Being honest about the actual requirements can be extremely helpful.
> Nobody wants to admit that their business or their department actually has a SLA of "as soon as you can, maybe tomorrow, as long as it usually works". So everything is pretend-engineered to be fifteen nines of reliability (when in reality it sometimes explodes because of the "attempts" to make it robust).
I have yet to see my principal technical frustrations summarized so concisely. This is at the heart of everything.
If the business and the engineers could get over their obsession with statistical outcomes and strict determinism, they would be able to arrive at a much more cost-effective, simple, and human-friendly solution.
The # of businesses that are actually sensitive to >1 minute of annual downtime are already running on top of IBM mainframes and have been for decades. No one's business is as important as the federal reserve or pentagon, but they don't want to admit it to themselves or others.
> The # of businesses that are actually sensitive to >1 minute of annual downtime are already running on top of IBM mainframes and have been for decades.
Is there any?
My bank certainly has way less than 5 9s of availability. It's not a problem at all. Credit/debit card processors seem to stay around 5 nines, and nobody is losing sleep over it. As long as your unavailability isn't all on the big Christmas promotion day, I never saw anybody lose sleep over web-store unavailability. The Fed probably doesn't have 5 9s of availability either; it's way overkill for a central bank, even one that processes online interbank transfers (which the Fed doesn't).
The organizations that need more than 5 9s are probably all in the military and science sectors. And those aren't using mainframes; they use good old redundancy of equipment with simple failure modes.
> simultaneously throwing away incredible CPU and PCIe performance gains
We really need to double down on this point. I worry that some developers believe they can defeat the laws of physics with clever protocols.
The amount of time it takes to round trip the network in the same datacenter is roughly 100,000 to 1,000,000 nanoseconds.
The amount of time it takes to round trip L1 cache is around half a nanosecond.
A trip down PCIe isn't much worse, relatively speaking. Maybe hundreds of nanoseconds.
Lots of assumptions and hand waving here, but L1 cache can be around 1,000,000x faster than going across the network. SIX orders of magnitude of performance are instantly sacrificed to the gods of basic physics the moment you decide to spread that SQLite instance across US-EAST-1. Sure, it might not wind up a million times slower on a relative basis, but you'll never get access to those zeroes again.
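A quick back-of-envelope version of that comparison, using the commonly cited rule-of-thumb figures from above (not measurements):

```typescript
// Rule-of-thumb latencies in nanoseconds.
const l1CacheNs = 0.5;
const pcieRoundTripNs = 500;       // order of hundreds of ns
const sameDcNetworkNs = 500_000;   // ~0.5 ms round trip within a datacenter

console.log(`network vs L1:   ${(sameDcNetworkNs / l1CacheNs).toExponential(1)}x`);  // ~1e6
console.log(`network vs PCIe: ${(sameDcNetworkNs / pcieRoundTripNs).toFixed(0)}x`);  // ~1000x
// Every hop you move off the box costs orders of magnitude, before you even
// add serialization, TLS, or retries on top.
```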
> 2) What's the benefit of multiple AZs if the SLA of a single AZ is greater than your intended availability goals? (Have you checked your provider's single AZ SLA?)
… my provider's single-AZ SLA is less than my company's intended availability goals.
(IMO our goals are also nuts, too, but it is what it is.)
Our provider, in the worst case (a VM using a managed hard disk), has an SLA of 95% within a month (I … think. Their SLA page uses incorrect units on the top-line items. The examples in the legalese — examples are normative, right? — use a unit of %/mo…).
You're also assuming a provider a.) typically meets their SLAs and b.) if they don't, honors them. IME, (a) is highly service dependent, with some services being just stellar at it, and (b) is usually "they will if you can prove to them with your own metrics they had an outage, and push for a credit. Also (c.) the service doesn't fail in a way that's impactful, but not covered by SLA. (E.g., I had a cloud provider once whose SLA was over "the APIs should return 2xx", and the APIs during the outage, always returned "2xx, I'm processing your request". You then polled the API and got "2xx your request is pending". Nothing was happening, because they were having an outage, but that outage could continue indefinitely without impacting the SLA! That was a fun support call…)
There's also (d) AZs are a myth; I've seen multiple global outages. E.g., when something like the global authentication service falls over and takes basically every other service with it. (Because nothing can authenticate. What's even better is the provider then listing those services as "up" / not in an outage, because technically it's not that service that's down, it is just the authentication service. Cause God forbid you'd have to give out that credit. But the provider calling a service "up" that is failing 100% of the requests sent its way is just rich, from the customer's view.)
I agree! Our "distributed cloud database" just went down last night for a couple of HOURS. Well, not entirely down. But there were connection issues for hours.
Guess what never, never had this issue? The hardware I keep in a datacenter lol!
> The line of thinking you follow is what is plaguing this industry with too much complexity and simultaneously throwing away incredible CPU and PCIe performance gains in favor of using the network.
It will die out naturally once people realize how much the times have changed and that the old solutions based on weaker hardware are no longer optimal.
"It depends" is the correct answer to the question, but the least informative.
One Big Server or multiple small servers? It depends.
It always depends. There are many workloads where one big server is the perfect size. There are many workloads where many small servers are the perfect solution.
My point is that the ideas put forward in the article are flawed for the vast majority of use cases.
I'm saying that multiple small servers are a better solution along a number of different axes.
For
1) "One Server (Plus a Backup) is Usually Plenty"
Now I need some kind of remote storage streaming system and some kind of manual recovery. Am I going to fail over to the backup (in which case it needs to be as big as my "one server"), or will I need to manually recover from my backup?
2) Yes it depends on your availability goals, but you get this as a side effect of having more than one small instance
3) Maybe I was ambiguous here. I don't just mean rolling deploys of code. I also mean changing the server code, restarting, upgrading, and swapping out the server. What happens when you migrate to a new server (when you scale up by purchasing a different box)? Now we have a manual process that doesn't get executed very often and is bound to cause downtime.
4) Now we have "Use one Big Server - and a bunch of small ones"
I'm going to add a final point on reliability. By far the biggest risk factor for reliability is me the engineer. I'm responsible for bringing down my own infra way more than any software bug or hardware issue. The probability of me messing up everything when there is one server that everything depends on is much much higher, speaking from experience.
So. Like I said, I could have just said "it depends", but instead I tried to give a response that was in some way illuminating and helpful, especially given the strong opinions expressed in the article.
I'll give a little color with the current setup for a site I run.
moustachecoffeeclub.com runs on ECS
I have 2 on-demand instances and 3 spot instances
One tiny instance running my caches (redis, memcache)
One "permanent" small instance running my web server
Two small spot instances running web server
One small spot instance running background jobs
small being about 3 GB and 1024 CPU units
And an RDS instance with backup about $67 / month
All in I'm well under $200 per month including database.
So you can do multiple small servers inexpensively.
Another aspect is that I appreciate being able to go on vacation for a couple of weeks, go camping or take a plane flight without worrying if my one server is going to fall over when I'm away and my site is going to be down for a week. In a big company maybe there is someone paid to monitor this, but with a small company I could come back to a smoking hulk of a company and that wouldn't be fun.
> you need to get over the hump and build multiple servers into your architecture from the get-go (the author says you need two servers minimum), so really we are talking about two big servers.
Managing a handful of big servers can be done manually if needed - it's not pretty but it works and people have been doing it just fine before the cloud came along. If you intentionally plan on having dozens/hundreds of small servers, manual management becomes unsustainable and now you need a control plane such as Kubernetes, and all the complexity and failure modes it brings.
> having multiple small servers allows us to spread our service into different availability zones
So will 2 big servers in different AZs (whether cloud AZs or old-school hosting providers such as OVH).
> multiple small servers allows us to do rolling deploys without bringing down our entire service
Nothing prevents you from starting multiple instances of your app on one big server, nor from doing rolling deploys with big bare metal, assuming one server can handle the peak load (take your first server out of the LB, upgrade it, put it back in the LB, then do the same for the second, and so on).
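A minimal sketch of that procedure (the LB admin endpoints, health-check path, and deploy script here are all hypothetical; the same loop works for two big servers behind an LB or for several app instances behind a local reverse proxy on one box):

```typescript
import { execSync } from "node:child_process";
import { setTimeout as sleep } from "node:timers/promises";

// Hypothetical: backends behind a reverse proxy/LB that exposes a drain API.
const instances = ["app-1:8081", "app-2:8082"];
const lbAdmin = "http://localhost:9000"; // assumed LB admin endpoint

async function healthy(instance: string): Promise<boolean> {
  try {
    const res = await fetch(`http://${instance}/healthz`); // assumed health check
    return res.ok;
  } catch {
    return false;
  }
}

async function rollingDeploy(version: string): Promise<void> {
  for (const instance of instances) {
    // 1. Stop sending traffic to this backend.
    await fetch(`${lbAdmin}/drain?backend=${instance}`, { method: "POST" });
    // 2. Upgrade it (assumed deploy script that restarts the instance).
    execSync(`./deploy.sh ${instance} ${version}`);
    // 3. Wait until it reports healthy again.
    while (!(await healthy(instance))) await sleep(2_000);
    // 4. Put it back into rotation, then move on to the next one.
    await fetch(`${lbAdmin}/enable?backend=${instance}`, { method: "POST" });
  }
}

rollingDeploy("v42").catch(console.error);
```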
> once we use the multiple-small-servers approach it's easy to scale our compute up and down by adding or removing machines. With one server, it's difficult to scale up or down without buying more machines. Small servers we can add incrementally, but with the large-server approach, scaling up requires downtime and buying a new server.
True but the cost premium of the cloud often offsets the savings of autoscaling. A bare-metal capable of handling peak load is often cheaper than your autoscaling stack at low load, therefore you can just overprovision to always meet peak load and still come out ahead.
I manage hundreds of servers, and use Ansible. It's simple and it gets the job done. I tried to install Kubernetes on a cluster and couldn't get it to work. I mean I know it works, obviously, but I could not figure it out and decided to stay with what works for me.
But it’s specific, and no-one will want to take over your job.
The upside of a standard AWS CloudFormation file is that engineers are replaceable. They're cargo-cult engineers, but they're not worried about their careers.
> But it’s specific, and no-one will want to take over your job.
It really depends what's on the table. Offer just half of the cost savings vs. an equivalent AWS setup as a bonus and I'm sure you'll find people who will happily do it (and you'll be happy to pocket the other half). For a lot of companies even half of the cost savings would be a significant sum (reminds me of an old client who spent thousands per month on an RDS cluster that not only was slower than my entry-level MacBook, but ended up crapping out, stuck in an inconsistent state for 12 hours, and requiring manual intervention from AWS to recover - so much for managed services - they ended up restoring a backup, but I wish I could've SSH'd in and recovered it in place).
As someone who uses tech as a means to an end and is more worried about the output said tech produces than the tech itself (aka I'm not looking for a job nor resume clout nor invites to AWS/Hashicorp/etc conferences, instead I bank on the business problems my tech solves), I'm personally very happy to get my hands dirty with old-school sysadmin stuff if it means I don't spend 10-20x the money on infrastructure just to make Jeff Bezos richer - my end customers don't know nor care either way while my wallet appreciates the cost savings.
On a big server, you would probably be running VMs rather than serving directly. And then it becomes easy to do most of what you're talking about - the big server is just a pool of resources from which to make small, single purpose VMs as you need them.
In theory, VMs should only be needed to run different OSes on one big box. Otherwise, what should have sufficed (speaking of what I 'prefer') is a multiuser OS that does not require additional layers to ensure security and proper isolation of users and their work environments from each other. Unfortunately, it looks like UNIX and its descendants could not deliver on this basic need. (I wonder if Multics had a better design in this regard.)
It completely depends on what you're doing. This was pointed out in the first paragraph of the article:
> By thinking about the real operational considerations of our systems, we can get some insight into whether we actually need distributed systems for most things.
I'm building an app with Cloudflare serverless and you can emulate everything locally with a single command and debug directly... It's pretty amazing.
But the way their offerings are structured means it will be quite expensive to run at scale without a multi cloud setup. You can't globally cache the results of a worker function in CDN, so any call to a semi dynamic endpoint incurs one paid invocation, and there's no mechanism to bypass this via CDN caching because the workers live in front of the CDN, not behind it.
Despite their messaging about lowering cloud costs, they have explicitly designed their products to keep people inside a cost structure similar to, but different from, egress fees. And in fact it's quite easily bypassed by using a non-Cloudflare CDN in front of Cloudflare serverless.
Anyway, I reached a similar conclusion that for my app a single large server instance works best. And actually I can fit my whole dataset in RAM, so disk/JSON storage and load on startup is even simpler than trying to use multiple systems and databases.
Further, I can run this on a laptop effectively for free, and cache everything via the CDN, rather than pay ~$100/month for a cloud instance.
When you're small, development time is going to be your biggest constraint, and I highly advocate all new projects start with a monolithic approach, though with a structure that's conducive to decoupling later.
As someone who has only dabbled with serverless (Azure functions), the difficulty in setting up a local dev environment was something I found really off-putting. There is no way I am hooking up my credit card to test something that is still in development. It just seems crazy to me. Glad to hear Cloudflare workers provides a better experience. Does it provide any support for mocking commonly used services?
Yes, you can run your entire serverless infrastructure locally with a single command and close to 0 config.
It's far superior to other cloud offerings in that respect.
You can even run it live in dev mode and remote debug the code. Check out miniflare/Wrangler v2
Just wish they had the ability to run persistent objects. Everything is still request-driven, yet I want to schedule things on sub-minute schedules. You can do it today, but it requires hacks.
Yes, but the worker is in front of the cache (have to pay for an invocation even if cached), and the worker only interacts with the closest cache edge node, not the entire CDN.
But yeah, there are a few hacky ways to work around things. You could have two different URLs and have the client check if the item is stale, if so, call the worker which updates it.
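For reference, the per-colo caching pattern inside a Worker looks roughly like this (a sketch; types assume @cloudflare/workers-types and GET requests; note that even on a cache hit the Worker still runs, i.e. it still counts as a paid invocation, and caches.default only talks to the local edge colo):

```typescript
// Cloudflare Worker sketch: cache a semi-dynamic response per edge colo.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;              // colo-local cache, not the whole CDN
    const cached = await cache.match(request);
    if (cached) return cached;                 // still a paid invocation, just a cheap one

    // Your expensive/semi-dynamic work would go here.
    const response = new Response(JSON.stringify({ ok: true, at: Date.now() }), {
      headers: {
        "Content-Type": "application/json",
        "Cache-Control": "public, max-age=60",
      },
    });
    // Populate the local cache without delaying the response.
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```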
I'm doing something similar with durable objects. I can get it to be persistent by having a cron that calls it every minute and then setting an alarm loop within the object.
It's just super awkward. It feels like a design decision to drive monetization. Cloudflare would be perfect if they let you have a persistent durable object instance that could update global CDN content
It's still the best serverless dev experience for me. Can do everything via JS while having transactional guarantees and globally distributed data right at the edge
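For anyone curious, the alarm-loop workaround mentioned above looks roughly like this (a sketch; types assume @cloudflare/workers-types, the class and helper names are made up, and the 30-second interval is arbitrary):

```typescript
// Durable Object that re-arms its own alarm to approximate a sub-minute scheduler.
export class SubMinuteScheduler {
  constructor(private state: DurableObjectState) {}

  // A cron trigger (or any request) kicks the loop off.
  async fetch(_request: Request): Promise<Response> {
    if ((await this.state.storage.getAlarm()) === null) {
      await this.state.storage.setAlarm(Date.now() + 30_000);
    }
    return new Response("scheduler armed");
  }

  async alarm(): Promise<void> {
    await doPeriodicWork();                                  // your actual job
    await this.state.storage.setAlarm(Date.now() + 30_000);  // re-arm: the "loop"
  }
}

// Placeholder for whatever the object actually does on each tick.
async function doPeriodicWork(): Promise<void> {
  // e.g. refresh a cache entry, poll an upstream, push updates to clients...
}
```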
One of the first experiences in my professional career was a situation where the "one big server" running the system that made the money failed on a Friday, and HP's warranty was something like next business day or two business days to get a replacement.
The entire situation ended up in a conference call with multiple department directors deciding which server from other systems to cannibalize (even if underpowered) to get the system going again.
Since that time I'm quite skeptical about "one", and to me this is one of the big benefits of cloud providers: most likely there is another instance available, and stockouts are rarer.
Science advances as RAM on a single machine increases.
For many years, genomics software was non-parallel and depended on having a lot of RAM - often a terabyte or more - to store data in big hash tables. Converting that to distributed computing was a major effort, and to this day many people still just get a Big Server With Lots of Cores, RAM, and SSD.
Personally, after many years of working with distributed systems, I absolutely enjoy working on a big fat server that I have all to myself.
On the other hand in science, it sure is annoying that the size of problems that fit in a single node is always increasing. PARDISO running on a single node will always be nipping at your heels if you are designing a distributed linear system solver...
As someone who's worked in cloud sales and no longer has any skin in the game, I've seen firsthand how cloud native architectures improve developer velocity, offer enhanced reliability and availability, and actually decrease lock-in over time.
Every customer I worked with who had one of these huge servers introduced coupling and state in some unpleasant way. They were locked in to persisted state, and couldn't scale out to handle variable load even if they wanted to. Beyond that, hardware utilization became contentious at any mid-enterprise scale. Everyone views the resource pool as theirs, and organizational initiatives often push people towards consuming the same types of resources.
When it came time to scale out or do international expansion, every single one of my customers who had adopted this strategy had assumptions baked into their access patterns that made sense given their single server. When it came time to store some part of the state in a way that made sense for geographically distributed consumers, it was months, not sprints, spent figuring out how to hammer this into a model that's fundamentally at odds with it.
From a reliability and availability standpoint, I'd often see customers tell me that 'we're highly available within a single data center' or 'we're split across X data centers' without considering the shared failure modes that each of these data centers had. Would a fiber outage knock out both of your DCs? Would a natural disaster likely knock something over? How about _power grids_? People often don't realize the failure modes they've already accepted.
This is obviously not true for every workload. It's tech, there are tradeoffs you're making. But I would strongly caution any company that expects large growth against sitting on a single-server model for very long.
Could confirmation bias affect your analysis at all?
How many companies went cloud-first and then ran out of money? You wouldn't necessarily know anything about them.
Were the scaling problems your single-server customers called you to solve unpleasant enough to put their core business in danger? Or was the expense just a rounding error for them?
From this and the other comment, it looks like I wasn't clear about talking about SMB/ME rather than a seed/pre-seed startup, which I understand can be confusing given that we're on HN.
I can tell you that I've never seen a company run out of money from going cloud-first (sample size of over 200 that I worked with directly). I did see multiple businesses scale down their consumption to near-zero and ride out the pandemic.
The answer to scaling problems being unpleasant enough to put the business in danger is yes, but that was also during the pandemic when companies needed to make pivots to slightly different markets. Doing this was often unaffordable from an implementation cost perspective at the time when it had to happen. I've seen acquisitions fall through due to an inability to meet technical requirements because of stateful monstrosities. I've also seen top-line revenue get severely impacted when resource contention causes outages.
The only times I've seen 'cloud-native' truly backfire were when companies didn't have the technical experience to move forward with these initiatives in-house. There are a lot of partners in the cloud implementation ecosystem who will fleece you for everything you have. One such example was a k8s microservices shop with a single contract developer managing the infra and a partner doing the heavy lifting. The partner gave them the spiel on how cloud-native provides flexibility and allows for reduced opex and the customer was very into it. They stored images in a RDBMS. Their database costs were almost 10% of the company's operating expenses by the time the customer noticed that something was wrong.
The common element in the above is scaling and reliability. While lots of startups and companies are focused on the 1% chance that they are the next Google or Shopify, the reality is that nearly all aren't, and the overengineering and redundancy-first model that cloud pushes does cost them a lot of runway.
It's even less useful for large companies; there is no world in which Kellogg is going to increase sales by 100x, or even 10x.
But most companies aren't startups. Many companies are established, growing businesses with a need to be able to easily implement new initiatives and products.
The benefits of cloud for LE are completely different. I'm happy to break down why, but I addressed the smb and mid-enterprise space here because most large enterprises already know they shouldn't run on a single rack.
This is just a complete lack of engagement with the post. Most LE’s know they shouldn’t run a two rack setup either. That is not the size or layout of any LE that I’ve interacted with. The closest is a bank in the developing world that had a few racks split across data centers in the same city and was desperately trying to move away given power instability in the country.
Wound up spawning off a separate thread from our would-be stateless web api to run recurring bulk processing jobs.
Then coupled our web api to the global singleton-esque bulk processing jobs thread in a stateful manner.
Then wrapped actors upon actors on top of everything to try to wring as much performance as possible out of the big server.
Then decided they wanted to have a failover/backup server but it was too difficult due to the coupling to the global singleton-esque bulk processing job.
[I resigned at this point.]
So yeah, color me skeptical. I know every project's needs are different, but I'm a huge fan of dumping my code into some cloud host that auto-scales horizontally, and then getting back to writing more code that provides some freeeking business value.
If you are at all cost sensitive, you should have some of your own infrastructure, some rented, and some cloud.
You should design your stuff to be relatively easily moved and scaled between these. Build with docker and kubernetes and that's pretty easy to do.
As your company grows, the infrastructure team can schedule which jobs run where, and get more computation done for less money than just running everything in AWS, and without the scaling headaches of on-site stuff.
This post raises small issues like reliability, but misses a lot of much bigger issues like testing, upgrades, reproducibility, backups, and even deployments. Also, the author is comparing on-demand pricing, which to me doesn't make sense if you could be paying for the server with reserved pricing. Still, I agree there would be a difference of 2-3x (unless your price is dominated by AWS egress fees), but for most servers with a fixed workload, even for very popular but simple sites, it could be done for around $1k/month in the cloud, less than 10% of one developer's salary. For non-fixed workloads like ML training, you would need some cloudy setup anyway.
One thing that has helped me grow over the last few years building startups is: microservices software architecture and microservice deployment are two different things.
You can logically break down your software into DDD bounded contexts and have each own its data, but that doesn't mean you need to do Kubernetes with Kafka and dozens of tiny database instances communicating via JSON/gRPC. You can have each "service" live in its own thread/process, have its own database (in the "CREATE DATABASE" sense, not the instance sense), communicate via a simple in-memory message queue, and talk through "interfaces" native to your programming language.
Of course it has its disadvantages (you need to commit to a single software stack, you still might need a distributed message queue if you want load balancing, etc.), but for the "boring business applications" I've been implementing (where DDD/logical microservices make sense) it has been very useful.
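A minimal sketch of that style in plain TypeScript (the domain names, services, and bus are all hypothetical): each "service" is just a module with its own interface and its own logical schema in the one database, and they talk over an in-memory bus instead of gRPC.

```typescript
// In-process message bus: same deployment unit, separate bounded contexts.
type OrderPlaced = { type: "OrderPlaced"; orderId: string; customerId: string };
type DomainEvent = OrderPlaced;

type Handler = (event: DomainEvent) => Promise<void>;

class InMemoryBus {
  private handlers: Handler[] = [];
  subscribe(handler: Handler): void { this.handlers.push(handler); }
  async publish(event: DomainEvent): Promise<void> {
    for (const h of this.handlers) await h(event);
  }
}

// "Orders" context: would own its own schema (e.g. `orders`) in the one database.
class OrderService {
  constructor(private bus: InMemoryBus) {}
  async placeOrder(orderId: string, customerId: string): Promise<void> {
    // ...write to the orders schema here...
    await this.bus.publish({ type: "OrderPlaced", orderId, customerId });
  }
}

// "Billing" context: reacts to events, owns the billing schema.
class BillingService {
  constructor(bus: InMemoryBus) {
    bus.subscribe(async e => {
      if (e.type === "OrderPlaced") {
        // ...create an invoice in the billing schema...
        console.log(`invoice created for order ${e.orderId}`);
      }
    });
  }
}

const bus = new InMemoryBus();
new BillingService(bus);
new OrderService(bus).placeOrder("o-1", "c-1").catch(console.error);
```

Swapping the in-memory bus for a real queue later only touches the bus implementation, not the contexts.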
I didn't see the point made that cloudy services can be easier to manage. If some team gets a capital budget to buy that one big server, they will put everything on it, no matter your architectural standards: cron jobs editing state on disk, tmux sessions shared between teams, random web servers doing who knows what, non-DBA-team Postgres installs, etc. At least in the cloud you can limit certain features and do charge-back calculations.
Not sure if that is a net win for cloud or physical, of course, but I think it is a factor
One of our projects uses 1 big server and indeed, everyone started putting everything on it (because it's powerful): the project itself, a bunch of corporate sites, a code review tool, and god knows what else. Last week we started having issues with the projects going down because something is overloading the system and they still can't find out what exactly without stopping services/moving them to a different machine (fortunately, it's internal corporate stuff, not user-facing systems). The main problem I've found with this setup is that random stuff can accumulate with time and then one tool/process/project/service going out of control can bring down the whole machine. If it's N small machines, there's greater isolation.
I believe that the "one big server" is intended for an application rather than trying to run 500 applications.
Does your application run on a single server? If yes, don't use a distributed system for its architecture or design. Simply buy bigger hardware when necessary, because the top end of servers is insanely big and fast.
It does not mean, IMHO, throw everything on a single system without suitable organization, oversight, isolation, and recovery plans.
I don't agree with EVERYTHING in the article, such as getting two big servers rather than multiple smaller ones, but that is really just a cost/requirements issue.
The biggest cost I've noticed with enterprises that go full cloud is that they are locked in for the long term. I don't mean contractually, though; basically, the way they design and implement any system or service MUST follow the provider's "way". This can be very detrimental when leaving the provider, or, God forbid, when the provider decides to sunset certain service versions, etc.
That said, for enterprise it can make a lot of sense and the article covers it well by admitting some "clouds" are beneficial.
For anything I've ever done outside of large businesses the go to has always been "if it doesn't require a SRE to maintain, just host your own".
> Why Should I Pay for Peak Load? [...] someone in that supply chain is charging you based on their peak load
Oh, it's even worse than that: this someone oversubscribes your hardware a little during your peak and a lot during your trough, padding their great margins at the expense of extra cache misses and performance degradation in your software, which most of the time you won't notice if they do their job well.
This is one of the reasons why large companies such as my employer (Netflix) are able to invest in their own compute platforms to reclaim some of these gains, so that any oversubscription and colocation gains materialize into a lower cloud bill - instead of having your spare CPU cycles funneled to a random co-tenant customer of your cloud provider, with the latter capturing the extra value.
A consequence of one-big-server is decreased security. You become discouraged from applying patches because you must reboot. Also if one part of the system is compromised, every service is now compromised.
Microservices on distinct systems offer damage control.
> In comparison, buying servers takes about 8 months to break even compared to using cloud servers, and 30 months to break even compared to renting.
Can anyone help me understand why the cloud/renting is still this expensive? I'm not familiar with this area, but it seems to me that big data centers must have some pretty big cost-saving advantages (maintenance? heat management?). And there are several major providers all competing in a thriving marketplace, so I would expect that to drive the cost down. How can it still be so much cheaper to run your own on-prem server?
- The price for on-prem conveniently omits costs for power, cooling, networking, insurance and building space, it's only the purchase price.
- The price for the cloud server includes (your share of) the costs of replacing a broken power supply or hard drive, which is not included in the list price for on-prem. You will have to make sure enough of your devs know how to do that or else hire a few sysadmin types.
- As the article already mentions, the cloud has to provision for peak usage instead of average usage. If you buy an on-prem server you always have the same amount of computing power available and can't scale up quickly if you need 5x the capacity because of a big event. That kind of flexibility costs money.
Not included in the break-even calculation was the cost of colocation, the cost of hiring someone to make sure the computer is in working order, or the reduced hassle when hardware fails.
Also, as the author mentions in the article, a modern server basically obsoletes a 10-year-old server, so you're going to have to replace your server at least every 10 years. The break-even case for renting makes more sense when you consider that the server depreciates quickly.
Renting is not very expensive. 30 months is a large share of a computer's lifetime, and you are paying for space, electricity, and internet access too.
You're paying a premium for flexibility. If you don't need that then there are far cheaper options like some managed hosting from your local datacenter.
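To make the break-even arithmetic concrete, here is a sketch; all figures are made up for illustration, so plug in your own quotes for hardware, colo, rental, and cloud.

```typescript
// Own vs. rent, very simplified: ignores staff time, failures, and resale value.
const purchasePrice = 10_000;   // assumed server price, one-off
const coloPerMonth = 300;       // assumed rack space, power, bandwidth
const rentalPerMonth = 650;     // assumed equivalent dedicated-server rental
const cloudPerMonth = 1_600;    // assumed equivalent on-demand cloud spend

// Months until buying (price + colo) catches up with a pure monthly cost.
const breakEven = (perMonth: number) =>
  Math.ceil(purchasePrice / (perMonth - coloPerMonth));

const ownCost = (months: number) => purchasePrice + coloPerMonth * months;

console.log(`break-even vs cloud:  ~${breakEven(cloudPerMonth)} months`);
console.log(`break-even vs rental: ~${breakEven(rentalPerMonth)} months`);
console.log(`own cost over 36 mo:  $${ownCost(36)}`);
```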
I didn't see the COST paper linked anywhere in this thread [0].
Excerpt from abstract:
We offer a new metric for big data platforms, COST, or the Configuration that Outperforms a Single Thread. The COST of a given platform for a given problem is the hardware configuration required before the platform outperforms a competent single-threaded implementation.
Last year I did some consulting for a client using Google cloud services such as Spanner and cloud storage. Storing and indexing mostly timeseries data with a custom index for specific types of queries. It was difficult for them to define a schema to handle the write bandwidth needed for their ingestion. In particular it required a careful hashing scheme to balance load across shards of the various tables. (It seems to be a pattern with many databases to suck at append-often, read-very-often patterns, like logs).
We designed some custom in-memory data structures in Java, but also used some of the standard high-performance concurrent data structures and some reader/writer locks, plus gRPC and some pub/sub to get updates on the order of a few hundred or thousand QPS. In the end, we ended up with JVM instances that had memory requirements in the 10GB range. Replicate that 3-4x for failover, and we could serve queries at higher rates and lower latency than hitting Spanner. The main thing cloud was good for was the storage of the underlying timeseries data (600GB maybe?) for fast server startup, so that they could load the index off disk in less than a minute. We designed a custom binary disk format to make that blazingly fast, and then just threw binary files into a cloud filesystem.
If you need to serve < 100GB of data and most of it is static...IMHO, screw the cloud, use a big server and replicate it for fail-over. Unless you got really high write rates or have seriously stringent transactional requirements, then man, a couple servers will do it.
I find disk io to be a primary reason to go with bare metal. The vm abstractions just kill io performance. In a single server you can fill up the PCI lanes with flash and hit some ridiculous throughput numbers.
The former, mostly. You don't necessarily have to use EC2, but that's easy to do. There are many other, smaller providers if you really want to get out from under the big 3. I have no experience managing hardware, so I personally wouldn't take that on myself.
Currently using two old computers as servers in my homelab: 200 GE Athlons with 35 W TDP, ~20 GB of value RAM (can't afford ECC), a few 1TB HDDs. As CI servers and test nodes for running containers, they're pretty great, as well as nodes for pulling backups from any remote servers (apart from the ECC aspect), or even something to double as a NAS (on separate drives).
I actually did some quick maths and it would appear that a similar setup on AWS would cost over $600 per month, with Azure, GCP, and others also being similarly expensive, which I just couldn't afford.
Currently running a few smaller VPSes on Time4VPS as well (though Hetzner is also great), for the stuff that needs better availability and better networking. Would I want everything on a single server? Probably not, because that would mean needing something a bit better than a homelab setup behind a residential Internet connection (even if parts of it can be exposed to the Internet through a cheap VPS as a proxy, a la Cloudflare).
One thing to keep in mind is separation. The prod environment should be completely separated from the dev ones (plural; it should be cheap/fast to spin up dev environments). Access to production data should be limited to those who need it (ideally for just the time they need it). Teams should be able to deploy their apps separately and not have to share dependencies (e.g. operating system libraries), and it should be possible to test OS upgrades (containers do not make you immune from this). It's kinda possible to sort of do this with "one big server", but then you're running your own virtualized infrastructure, which has its own costs/pains.
I definitely also don't recommend one big database, as that becomes a hairball quickly - though it's possible to have several logical databases on one physical database server.
People don't account for the CPU and wall-time cost of encode/decode. I've seen it take up 70% of CPU on a fleet. That means 700 out of 1000 servers are just doing encode/decode.
You can see that high-efficiency setups like Stack Exchange and Hacker News are orders of magnitude more efficient.
Not to be nasty, but we used to call them mainframes. A mainframe is still a perfectly good solution if you need five nines of uptime, with transparent failover of pretty much every part of the machine, the absolute fastest single-thread performance and the most transaction throughput per million dollars in the market.
I would not advise anyone to run them as a single machine, however, but to have it partitioned into smaller slices (they call them LPARs) and host lots of VMs in there (you can oversubscribe like crazy on those machines).
Managing a single box is cheaper, even if you have a thousand little goldfish servers in there (remember: cattle, not pets) and this is something the article only touches lightly.
The author missed the most important factor in why cloud is dominating the world today. It is never about the actual hardware cost; it is the cost of educating people to be able to use that big server. I can guarantee you will need to pay at least $40k a month to hire someone who can write and deploy software that actually realizes the performance he claims on that big server. And your chance of finding that person within two months is close to zero, at least in today's job market. Even if you find one, he can leave you within a year for somewhere else, and your business will be dead.
10 years ago I had a completely dynamic site written in PHP, running MySQL locally on an 8GB-of-RAM VM ($80/mo?), serving over 200K daily active users. Super fast and never went down!
Could you share how long you maintained this website?
No problem with the db (schema updates, backups, replication, etc...)?
No problem with your app updates (downtime, dependencies updates, code updates)?
Did you work alone, or with a team?
Did you setup a CI/CD?
...
I wrote down some questions, but really I just think it would be interesting to understand your setup in a bit more detail. You probably made some concessions, and it seems they worked well for you. It would be interesting to know which ones!
Yeah, I've been saying this for a long long time now, an early blog post of mine http://drupal4hu.com/node/305.html and this madness just got worse because of Kubernetes et al. Kubernetes is a Google solution. Are you sure Google-sized solutions are right for your organization?
Also, an equally pseudo-controversial viewpoint: it's almost always cheaper to be down than to engineer an HA architecture. Take a realistic look at downtime causes outside of your control -- for example, your DDoS shield provider going down, etc. -- then consider how much downtime a hardware failure adds, and now think. Maybe a manual-failover master-slave setup is enough, or perhaps even that's overkill? How much money does the business lose by being down versus how much it costs to protect from it? And can you really protect from it? Are you going to have regular drills to practice the failover -- and, absurdly, will the inevitable downtime from failing a few of those be larger than a single server's downtime? I rarely see posts about weighing these, while the general advice of avoiding single points of failure -- which is very hard -- is abundant.
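That trade-off is easy to put in rough numbers (a sketch with made-up figures; the point is the shape of the comparison, not the values):

```typescript
// Expected annual cost of downtime vs. the cost of engineering it away.
const revenuePerHour = 500;              // assumed revenue lost per hour of downtime
const expectedDowntimeHoursPerYear = 8;  // assumed: one bad hardware day plus deploys
const expectedLoss = revenuePerHour * expectedDowntimeHoursPerYear; // $4,000/yr

const haEngineeringCost = 40_000;        // assumed: infra plus the time to build/maintain HA
const haResidualDowntimeHours = 2;       // HA never gets you to zero
const haExpectedLoss = revenuePerHour * haResidualDowntimeHours;

console.log(`accept downtime: ~$${expectedLoss}/yr`);
console.log(`engineer HA:     ~$${haEngineeringCost + haExpectedLoss} in the first year`);
// For many businesses the first number is small enough that a manual failover
// (or just a well-drilled backup/restore) wins.
```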
I'm a huge advocate of cloud services, and have been since 2007 (not sure where this guy got 2010 as the start of the "cloud revolution"). That out of the way, there is something to be said for starting off with a monolith on a single beefy server. You'll definitely iterate faster.
Where you'll get into trouble is if you get popular quickly. You may run into scaling issues early on, and then have to scramble to scale. It's just a tradeoff you have to consider when starting your project -- iterate quickly early and then scramble to scale, or start off more slowly but have a better ramping up story.
One other nitpick I had is that OP complains that even in the cloud you still have to pay for peak load, but while that's strictly true, it's amortized over so many customers that you really aren't paying for it unless you're very large. The more you take advantage of auto-scaling, the less of the peak load you're paying. The customers who aren't auto-scaling are the ones who are covering most of that cost.
You can run a pretty sizable business in the free tier on AWS and let everyone else subsidize your peak (and base!) costs.
It really depends on the service, how it is used, the shape of the data generated/consumed, what type of queries are needed, etc.
I've worked for a startup that hit scaling issues with ~50 customers. And have seen services with +million users on a single machine.
And what do "quickly" and "popular" even mean? It also depends a lot on the context. We need to start discussing mental models that help developers think about scaling in a contextual way.
Sure but only if you architect it that way, which most people don't if they're using one big beefy server, because the whole reason they're doing that is to iterate quickly. It's hard to build something that can bust to the cloud while moving quickly.
Also, the biggest issue is where your data is. If you want to bust to the cloud, you'll probably need a copy of your data in the cloud. Now you aren't saving all that much money anymore and adding in architectural overhead. If you're going to bust to the cloud, you might as well just build in the cloud. :)
It was all good until NUMA came along, and now you have to carefully rethink your process or you get lots of performance issues in your (otherwise) well-threaded code. Speaking from first-hand experience: when our level editor ended up being used by artists on a server-class machine, the supposedly 4x faster machine was actually going 2x slower. Why? Lots of std::shared_ptr<> use on our side (or any atomic reference counting) caused slowdowns, as the cache (my understanding) had to be synchronized between the two physical CPUs, each having 12 threads.
But that's really not the only issue; I'm just pointing out that you can't expect everything to scale smoothly there unless it's well thought out, e.g. asking your OS to allocate your threads/memory only on one of the physical CPUs (and its threads), putting some big, disconnected part of your process(es) on the other one(s), and making sure the communication between them is minimal... which actually wants a microservices-style design again at that level.
> The big drawback of using a single big server is availability. Your server is going to need downtime, and it is going to break. Running a primary and a backup server is usually enough, keeping them in different datacenters.
What about replication? I assume the 70k postgres IOPS fall to the floor when needing to replicate the primary database to a backup server in a different region.
Great article overall with many good points worth considering. Nothing is one size fits all so I won't get into the crux of the article: "just get one big server". I recently posted a comment breaking down the math for my situation:
For the most "extreme" option of buying your own $40k server from Dell, I'm always surprised at how many people don't consider leasing. No matter what, it turns the cost into an operating expense rather than a capital one, which puts it on par with the other options in terms of accounting and doesn't require laying out $40k up front.
Adding on that, in the US we have some absolutely wild tax advantages for large "capital expenditures" that also apply to leasing:
It blows my mind people are spending $2000+ per month for a server they can get used for $4000-5000 one time only cost.
VMware + Synology Business Backup + Synology C2 backup is our way of doing business, and it has never failed us in over 7 years. Why do people spend so much money on cloud when they can host it themselves for less than 5% of the cost (assuming 2 years of usage)?
They have been around forever and their $400 deal is good, but that is for 42U, 1G, and only 15 amps. With beefier servers, you will need more capacity (both bandwidth and amperage) if you intend on filling the rack.
The number of applications I have inherited that were messes falling apart at the seams because of misguided attempts to avoid "vendor lock-in" with the cloud cannot be overstated. There is something ironic about people paying to use a platform but not using it, because they feel that using it too much will compel them to stay there. It's basically starving yourself so you don't get too familiar with eating regularly.
Kids, this PSA is for you: Auto Scaling Groups are just fine, as are all the other "cloud native" services. Most business partners will tell you a dollar of growth is worth 5x-10x the value of a dollar of savings. Building a huge, tall computer will be cheaper, but if it isn't 10x cheaper (and that is total cost of ownership, not the cost of the metal) and you are moving more slowly than you otherwise would, it's almost a certainty you are leaving money on the table.
Aggressively avoiding lock-in is something I've never quite understood. Unless your provider of choice is also your competitor (like Spotify with Amazon), it shouldn't really be a problem. I'm not saying I'm a die-hard cloud fan in all respects, but if you're going with it you may as well use it. Typically, trying to avoid vendor lock-in ends up more expensive in the long run; you start avoiding the cheaper services (Lambda for background job processing) for what may never really be a problem.
The one place I can see avoiding vendor lock-in as really useful is it often makes running things locally much easier. You're kind of screwed if you want to properly run something locally that uses SQS, DynamoDB, and Lambda. But that said, I think this is often better thought of as "keep my system simple" rather than "avoid vendor lock-in" as it focuses on the valuable side rather than the theoretical side.
The whole argument comes down to bursty vs. non-bursty workloads. What type of workloads make up the fat part of the distribution? If most use cases are bursty (which I would argue they are) then the author's argument only applies for specific applications. Therefore, most people do indeed see cost benefits from the cloud.
I really don't understand microservices for most businesses. They're great if you put the effort into it but most business don't have the scale required.
Big databases and big servers serve most businesses just fine. And past that NFS and other distributed filesystem approaches get you to the next phase by horizontally scaling your app servers without needing to decompose your business logic into microservices.
The best approach I've ever seen is a monorepo codebase with non-micro services built into it all running the same way across every app server with a big loadbalancer in front of it all.
No thanks. I have a few hobby sites, a personal vanity page, and some basic CPU expensive services that I use.
Moving to Aws server-less has saved me so much headache with system updates, certificate management, archival and backup, networking, and so much more. Not to mention with my low-but-spikey load, my breakeven is a long way off.
A big benefit is some providers will let you resize the VM bigger as you grow. The behind-the-scenes implementation is they migrate your VM to another machine with near-zero downtime. Pretty cool tech, and takes away a big disadvantage of bare metal which is growth pains.
I've started augmenting one big server with iCloud (CloudKit) storage, specifically syncing local Realm DBs to the user's own iCloud storage. Which means I can avoid taking custody of PII/problematic data, can include non-custodial privacy in product value/marketing, and means I can charge enough of a premium for the one big server to keep it affordable. I know how to scale servers in and out, so I feel the value of avoiding all that complexity. This is a business approach that leans into that, with a way to keep the business growing with domain complexity/scope/adoption (iCloud storage, probably other good APIs like this to work with along similar lines).
> Populated with specialized high-capacity DIMMs (which are generally slower than the smaller DIMMs), this server supports up to 8 TB of memory total.
At work we're building a measurement system for wind tunnel experiments, which should be able to sustain 500 MB/sec for minutes on end, preferably while simultaneously reading/writing from/to disk for data format conversion.
We bought a server with 1TB of RAM, but I wonder how much slower these high-capacity DIMMs are. Can anyone point me to information regarding latency and throughput? More RAM for disk caching might be something to look at.
I am using a semi big cloud VPS to host all my live services. It's 'just' a few thousand users per day over 10+ websites.
The combination of Postgres, Nginx, Passenger, and Cloudflare makes this an easy experience. The cloud (in this case Vultr) allows on-demand scaling and backups, and so far I've had zero downtime because of them.
In the past I've run a mixture of cloud and some dedicated servers and since migrating I have less downtime and way less work and no worse load times.
Being cloudy without being too cloudy, as per the article: I've gone with a full stack in containers under Docker Compose on one EC2 server, including the database. Services are still logically separated and have a robust CI/CD setup, but the cost is a third of what an ECS setup with load balancers and RDS for the database would have been. It's also simpler. I've scripted the server setup, with regular backups/snapshots, though I admit I would like DB replication in there.
If you're hosting on-prem then you have a cluster to configure and manage, you have multiple data centers you need to provision, you need data backups you have to manage plus the storage required for all those backups. Data centers also require power, cooling, real estate taxes, administration - and you need at least two of them to handle systemic outages. Now you have to manage and coordinate your data between those data centers. None of this is impossible of course, companies have been doing this everyday for decades now. But let's not pretend it doesn't all have a cost - and unless your business is running a data center, none of these costs are aligned with your business' core mission.
If you're running a start-up it's pretty much a no-brainer you're going to start off in the cloud.
What's the real criterion for evaluating on-prem versus the cloud? Load consistency. As the article notes, serverless cloud architectures are perfect for bursty loads. If your traffic is highly variable, then the ability to quickly scale up and then scale down will benefit you - and there's a lot of complexity you don't have to manage to boot! Generally speaking, such a solution is going to be cheaper and easier to configure and manage. That's a win-win!
If your load isn't as variable and you therefore have cloud resources always running, then it's almost always cheaper to host those applications on-prem - assuming you have on-prem hosting available to you. As I noted above, building data centers isn't cheap and it's almost always cheaper to stay in the cloud than it is to build a new data center, but if you already have data center(s) then your calculus is different.
Another thing to keep in mind at the moment is even if you decide to deploy on-prem you may not be able to get the hardware you need. A colleague of mine is working on a large project that's to be hosted on-prem. It's going to take 6-12 months to get all the required hardware. Even prior to the pandemic the backlog was 3-6 months because the major cloud providers are consuming all the hardware. Vendors would rather deal with buyers buying hardware by the tens of thousands than a shop buying a few dozen servers. You might even find your hardware delivery date getting pushed out as the "big guys" get their orders filled. It happens.
You know you can run a server in the cellar under your stairs.
You know that if you are a startup you can just keep servers in a closet and hope that no one turns on the coffee machine while the aircon is running, because it will pop the circuit breakers, which will take down your server (or maybe you at least have a UPS, so maybe not :)).
I have read horror stories about companies having such setups.
While they don't need multiple data centers, power, cooling, and redundancy sound to them like some kind of STD; getting a cheap VPS should be the default for such people. That is a win as well.
Many people will respond that "one big server" is a massive single point of failure, but in doing so they miss that it is also a single point of success. If you have a distributed system, you have to test and monitor lots of different failure scenarios. With a SPOS, you only have one thing to monitor. For a lot of cases the reliability of that SPOS is plenty.
Bonus: Just move it to the cloud, because AWS is definitely not its own SPOF and it never goes down taking half the internet with it.
"In total, this server has 128 cores with 256 simultaneous threads. With all of the cores working together, this server is capable of 4 TFLOPs of peak double precision computing performance. This server would sit at the top of the top500 supercomputer list in early 2000. It would take until 2007 for this server to leave the top500 list. Each CPU core is substantially more powerful than a single core from 10 years ago, and boasts a much wider computation pipeline."
I may be misunderstanding, but it looks like the micro-services comparison here is based on very high usage. Another use for micro-services, like lambda, is exactly the opposite. If you have very low usage, you aren't paying for cycles you don't use the way you would be if you either owned the machine, or rented it from AWS or DO and left it on all the time (which you'd have to do in order to serve that randomly-arriving one hit per day!)
If you have microservices that truly need to be separate services and have very little usage, you probably should use things like serverless computing. It scales down to 0 really well.
However, if you have a microservice with very little usage, turning that service into a library is probably a good idea.
Let's be clear here: everything you can do in a "cloudy" environment, you could do on big servers yourself - but at what engineering and human-resource cost? Because that's something many - if not most - hardware and on-prem infra focused people seem to miss. While cloud might seem expensive, most of the time humans will be even more expensive (unless you're in very niche markets like HPC).
You could also have those big servers in the cloud (I think this is what many are doing; I certainly have). That gives you a lot of the cloud services e.g. for monitoring, but you get to not have to scale horizontally or rebuild for serverless just yet. Works great for Kubernetes workloads, too – have a single super beefy node (i.e. single-node node pool) and target just your resource-heavy workload onto that node.
As far as costs are concerned, however, I've found that for medium+ sized orgs, cloud doesn't actually save money in the HR department, the HR spend just shifts to devops people, who tend to be expensive and you can't really leave those roles empty since then you'll likely get an ungovernable mess of unsecured resources that waste a huge ton of money and may expose you to GDPR fines and all sorts of nasty breaches.
If done right, you get a ton of execution speed. Engineers have a lot of flexibility in terms of the services they use (which they'd otherwise have to buy through processes that tend to be long and tedious), scale as needed when needed, shift work to the cloud provider, while the devops/governance/security people have some pretty neat tools to make sure all that's done in a safe and compliant manner. That tends to be worth it many times over for a lot of orgs, if done effectively with that aim, though it may not do much for companies with relatively stagnant or very simple products. If you want to reduce HR costs, cloud is probably not going to help much.
It seems like lots of companies start in the cloud due to low commitments, and then later, when they have more stability and demand and want to save costs, bigger cloud commitments (RIs, enterprise agreements, etc.) are a turnkey way to save money but always leave you on the lower-efficiency cloud track. Has anyone had good experiences selectively offloading workloads from the cloud to bare-metal servers nearby?
One advantage I didn't see in the article was the performance cost of network latency. If you're running everything on one server, every DB interaction, microservice interaction, etc. would not necessarily need to go over the network. I think it is safe to say that IO is generally the biggest performance bottleneck of most web applications, so the benefit of minimizing or eliminating it should not be underestimated.
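A quick back-of-the-envelope illustration of that point (the RTT and hop count below are assumptions for the sake of the example, not measurements):

    in_process_us = 1     # assumed: an in-process call is on the order of a microsecond
    lan_rtt_us = 500      # assumed: ~0.5 ms round trip between services in one datacenter
    hops = 20             # assumed: a moderately chatty request path

    print(hops * in_process_us)  # ~20 microseconds of call overhead in one process
    print(hops * lan_rtt_us)     # ~10,000 microseconds (10 ms) of pure network waiting

And that gap only widens once serialization overhead and cross-zone hops enter the picture.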
I see these debates and wish there was an approach that scaled better.
A single server (and a backup) really _is_ great. Until it's not, for whatever reason.
We need more frameworks that scale from a single box to many boxes, without starting over from scratch. There are a lot of solid approaches: Erlang/Elixir and the actor model come to mind. But that approach is not perfect, and it's far from commonplace.
> We need more frameworks that scale from a single box to many boxes, without starting over from scratch.
I'm not sure I really understand what you're saying here. I suppose most applications are some kind of CRUD app these days (not all, sure, but an awful lot). If we take that as an example, how is it difficult to go from one box to multiple?
It's not something you get for free - you need to put in time to provision any new infra (be it bare metal or some kind of cloud instance) - but the act of scaling out is pretty straightforward.
Perhaps you're talking about stateful applications?
I recommend the whitepaper "Scalability! But at what COST?"
My experience with microservices is that they are very slow due to all the IO. We kind of want the development and developer scalability of decoupled services in addition to the computational and storage scalability of a disaggregated architecture.
One big server, one big program, and one big 10x developer. Deploy WebSphere when you need isolation. The industry truly is going in a spiral. Although, I must admit, cloud providers really overplayed their hand when it comes to performance/buck and complexity.
What holds me back from doing this is how I would reduce latency for calls coming from the other side of the world when OVHcloud seemingly does not have datacenters all over the world. There is a noticeable lag when it comes to multiplayer games or even web applications.
So... I guess these folks haven't heard of latency before? Fairly sure you have to have "one big server" in every country if you do this. I feel like that would get rather costly compared to geographically distributed cloud services long term.
As opposed, to "many small servers" in every country? The vast majority of startups out there run out of a single AWS region with a CDN caching read-only content. You can apply the same CDN approach to a bare-metal server.
Yeah, but if I'm a startup and running only a small server, the cloud hosting costs are minimal. I'm not sure how you think it's cheaper to host tiny servers in lots of countries and pay someone to manage that for you. You'll need IT in every one of those locations to handle the service of your "small servers".
I run services globally for my company, there is no way we could do it. The fact that we just deploy containers to k8s all over the world works very well for us.
Before you give me the "oh k8s, well you don't know bare metal" please note that I'm an old hat that has done the legacy C# ASP.NET IIS workflows on bare metal for a long time. I have learned and migrated to k8s on AWS/GCloud and it is a huge improvement compared to what I used to deal with.
Lastly, as for your CDN discussion, we don't just host CDN's globally. We also host geo-located DB + k8s pods. Our service uses web sockets and latency is a real issue. We can't have 500 ms ping if we want to live update our client. We choose to host locally (in what is usually NOT a small server) so we get optimal ping for the live-interaction portion of our services that are used by millions of people every day.
You don't need IT in every location or even different hosting facility contracts. Most colo hosting companies have multiple regions - take the 800lb gorilla, Equinix, for example.
Between the vendor (Dell, HP, IBM, etc.) and the remote hands offered by the hosting facility, you don't ever have to have a member of your team even enter a facility. Anywhere. Depending on the warranty/support package, the vendor will dispatch someone to the facility to replace failed components with little action from you.
The vendor will be happy to ship the server directly to the facility (anywhere), and for a nominal fee the colo provider will rack it and get IPMI, iLO, IP KVM, whatever up for you to do your thing. When/if something ever "hits the fan", they have on-site, 24-hour "remote hands" that can either take basic pre-prescribed steps/instructions -or- work with your team directly and remotely.
Interestingly, at my first startup we had a facility in the nearest big metro area that not only hosted our hardware but also provided an easy, cheap, and readily available meeting space.
Disagreed. The cloud equivalent of a small server is still a few hundred bucks a month + bandwidth. Sure, it's still a relatively small cost, but you're still overpaying significantly compared to the Hetzner equivalent, which will be sub-$100.
> pay someone to manage that for you
The same guy that manages your AWS can do this. Having bare-metal servers doesn't mean renting colo space and having people on-site - you can get them from Hetzner/OVH/etc and they will manage all the hardware for you.
> The fact that we just deploy containers to k8s all over the world works very well for us.
It's great that it works well for you and I am in no way suggesting you should change, but I wouldn't say it would apply to everyone - the cloud adds significant costs with regards to bandwidth alone and makes some services outright impossible with that pricing model.
> We also host geo-located DB
That's a complex use-case that's not representative of most early/small SaaS which are just a CRUD app backed by a DB. If your business case requires distributed databases and you've already done the work, great - but a lot of services don't need that (at least not yet) and can do just fine with a single big DB server + application server and good backups, and that will be dirt-cheap on bare-metal.
In context of a "small server", I think they are equivalent. AWS gives you a lot more functionality but you're unlikely to be using any of it if you're just running a single small "pet" server.
This is one of those problems that basically no one has. RTT from Japan to Washington D.C. is 160ms. There are very few applications where that amount of additional latency matters.
It adds up surprisingly quickly when you have to do a TLS handshake, download many resources on page load, etc. Setting up a fresh HTTPS connection (TCP plus a full TLS 1.2 handshake) alone costs 3 round-trips before the first request is even sent.
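Worked out at the RTT mentioned above (TLS 1.3 and session resumption cut the handshake cost, so treat this as the worst case for a cold connection):

    rtt_ms = 160
    setup_rtts = 3        # 1 for TCP + 2 for a full TLS 1.2 handshake
    request_rtts = 1      # the first actual HTTP request/response
    print((setup_rtts + request_rtts) * rtt_ms)  # 640 ms before the first response arrives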
I once fired up an Azure instance with 4TB of RAM and hundreds of cores for a performance benchmark.
htop felt incredibly roomy, and I couldn't help thinking how my three previous projects would all fit in with room to spare (albeit lacking redundancy, of course).
The problem with "one big server" is, you really need good IT/ops/sysadmin people who can think in non-cloud terms. (If you catch them installing docker on it, throw them into a lava pit immediately).
One server is for a hobby, not a business. Maybe that's fine, but keep that in mind. Backups at that level are something that keeps you from losing all data, not something that keeps you running and gets you up in any acceptable timeframe for most businesses.
That doesn't mean you need to use the cloud, it just means one big piece of hardware with all its single points of failure is often not enough. Two servers get you so much more than one. You can make one a hot spare, or actually split services between them and have each be ready to take over specific services for the other, greatly increasing your burst handling capability and giving you time to put more resources in place to keep n+1 redundancy going if you're using more than half of a server's resources.
Do they actually say they don't have a slave to that database ready to take over? I seriously doubt Let's Encrypt has no spare.
Note I didn't say you shouldn't run one service (as in daemon) or set of services from one box, just that one box is not enough and you need that spare.
If Let's Encrypt actually has no spare for their database server and they're one hardware failure away from being down for what may be a large chunk of time (I highly doubt it), then I wouldn't want to use them even for free. Thankfully, I doubt your interpretation of what that article is saying.
> The new AMD EPYC CPUs sit at about 25%. You can see in this graph where we promoted the new database server from replica (read-only) to primary (read/write) on September 15.
That says they use a single database, as in a logical MySQL database. I don't see any claim that they use a single server. In fact, the title of the article you've linked suggests they use multiple.
https://letsencrypt.status.io/ shows a list of their servers, which look to be spread across three data centers (one "public", two "high availability").
Do we know if it shows cold spares? A cold spare is all I think is needed at a minimum to avoid the problems I'm talking about, and I doubt the status page would list one if it doesn't necessarily have a hostname.
> But if I use Cloud Architecture, I Don’t Have to Hire Sysadmins
> Yes you do. They are just now called “Cloud Ops” and are under a different manager. Also, their ability to read the arcane documentation that comes from cloud companies and keep up with the corresponding torrents of updates and deprecations makes them 5x more expensive than system administrators.
I don't believe "Cloud Ops" is more complex than system administration, having studied for the CCNA so being on the Valley of Despair slope of the Dunning Kruger effect. If keeping up with cloud companies updates is that much of a challenge to warrant a 5x price over a SysAdmin then that's telling you something about their DX...
/tg/station, the largest open source multiplayer video game on github, gets cloudheads trying to help us "modernize" the game server for the cloud all the time.
Here's how that breaks down:
The servers (sorry, I mean compute) cost the same (before bandwidth; more on that at the bottom) to host one game server as we pay, amortized per game server, to host 5 game servers on a rented dedicated server ($175/month for the rented box with 64 GB of RAM and a 10 Gbit uplink).
They run twice as slow, because high-core-count, low-clock-speed servers aren't all they're cracked up to be and our game engine is single threaded. But even if it weren't, there is an overhead to multithreading that, combined with most high-core-count servers also having slow clocks, rarely works out to an actual increase in real-world performance.
You can get the high-clock-speed units; they are two to three times as expensive. And they still run 20% slower than Windows VMs on rented bare metal, because the sad fact is that enterprise CPUs from either Intel or AMD have slower clocks and worse single-threaded performance than their gaming counterparts, and getting gaming CPUs in rented servers is piss easy, but next to impossible for cloud servers.
Each game server uses 2 TB of bandwidth to host 70-player high pops. This works with 5 servers on 1 machine because our hosting provider gives us 15 TB of bandwidth included in the price of the server.
Well now the cloud bill just got a new 0. Being 10 to 30x more expensive once you remember to price in bandwidth isn't looking too great.
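Rough numbers behind that, using an assumed ~$0.09/GB big-cloud egress rate (actual rates vary by provider and tier):

    game_servers = 5
    egress_tb_each = 2
    cloud_egress_per_gb = 0.09   # assumed typical on-demand egress pricing
    cloud_bandwidth = game_servers * egress_tb_each * 1000 * cloud_egress_per_gb
    print(cloud_bandwidth)       # ~$900/month for bandwidth alone

    dedicated_total = 175        # the rented dedicated box, 15 TB of traffic included
    print(cloud_bandwidth / dedicated_total)  # ~5x our entire current bill, before any compute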
"but it would make it cheaper for small downstreams to start out" until another youtuber mentions our tiny game, and every game server is hitting the 120 hard pop cap, and a bunch of downstreams get a surprise 4 digit bill for what would normally run 2 digits.
The takeaway from this being that even adding Docker or k8s deployment support to the game server is seen as creating the risk that some kid bankrupts themselves trying to host a server of their favorite game off their McDonald's paycheck, and we tell such tech "pros" to sod off with their trendy money wasters.
Hetzner's PX line offers 64 GB ECC RAM, a Xeon CPU, and dual 1 TB NVMe drives for < $100/month. A dedicated 10 Gbit bandwidth link (plus 10 Gbit NIC) is then an extra ~$40/month on top (includes 20 TB/month of traffic, with overage billed at $1/TB).
All your eggs in one basket? A single host, really? Curmudgeonly opinions about microservices, cloud, and containers? Nostalgia for the time before 2010? All here. All you are missing is a rant about how the web was better before JavaScript.
It’s sad to see this kind of engineering malpractice voted to the top of HN. It’s even sadder to see how many people agree with it.