I firmly believe that monolith vs microservice is much more a company organization problem than a tech problem. Organize your code boundaries similar to your team boundaries, so that individuals and teams can move fast in appropriate isolation from each other, but with understandable, agreeable contracts/boundaries where they need to interact.
Monoliths are simpler to understand, easier to run, and harder to break - up until you have so many people working on them that it becomes difficult for a team to get their work done. At that point, start splitting off "fiefdoms" that let each team move more quickly again.
This 100%. Still depends on the specific context (e.g. what type of software are you going to build and run), but for a typical case, like web-based transactional self-service business platforms, this is where I arrived after more than 25 years in the industry.
Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
This is also my experience.
It's always that projects grow large in terms of the number of people working on them, and that is when you want to create independence between teams by reducing the cooperation needed to a well-defined boundary/interface between them.
Usually you start small and grow bigger, so there are only rare exceptions where it makes sense to merge microservices back into a monolith, except maybe for cases where going for a microservice architecture was a bad decision taken without actually having the above-mentioned problem.
Kind of side steps the fundamental scaling issue though.
In a banking app there will be more requests for the account balance than there are logins, but logins will likely take longer.
Your argument is more about who is allowed to touch what and who is responsible when it breaks, but not about one of the core reasons to choose microservices.
If you don't want to scale the whole thing (why?), you can also deploy the monolith twice and route different API calls to different clusters. Slice and scale as much as you're willing, paying the price of deployment complexity and bin-packing the usage.
It's an imperfect solution for all of the obvious reasons. I think even the most junior engineer could point out the flaws. I won't insult HN readers by naming them!
But also: effective at some things for near-zero cost. It's probably more effective at "segregating your traffic so that Low Priority Service A can't starve out High Priority Service B" than at overall scaling, but sometimes that's what you need.
> paying the price of deployment complexity
For us, there was essentially no increase in deployment complexity. There was a small one-time increase in network routing complexity. Essentially requests for "www.foo.com" were routed to one group of servers and requests for "api.foo.com" were routed to another group.
The alternative was decomposing a large monolith, a goal we also pursued. But what you describe was a valuable stop-gap as we undertook that multiyear effort.
Scaling a monolith is almost always cheaper than migrating to microservices and scaling that.
When you split it into microservices you are adding a bunch of infrastructure that did not need to exist. Your app performance will likely go way down, since things that used to be function calls are now slow network API calls. Add to that container orchestration and all sorts of CNCF-approved things, and the whole thing balloons. If you deploy this thing in a cloud, networking costs alone will eat far more than your $20 (and if not, you'll likely still need more hardware anyway).
Sure, once you add all that overhead you now may have a service that can be independently scaled.
There's also nothing preventing you from splitting off just that service from the monolith, if that call is really that hot.
> Scaling a monolith is almost always cheaper than migrating to microservices and scaling that.
No - it depends on capacity needs and current architecture.
> When you split it into microservices you are adding a bunch of infrastructure that did not need to exist.
Unless you need to scale in an economical manner.
> Your app performance will likely go way down since things that used to be function calls are now slow network API calls.
Not necessarily. Caching and shared storage are real strategies.
> Add to that container orchestration and all sorts of CNCF-approved things and the whole thing balloons.
k8s is __not__ required for microservice architectures.
> If you deploy this thing in a cloud, just networking costs alone will eat far more than your $20
Again, not necessarily. All major cloud providers have tools to mitigate this.
To clarify: I'm not a microservice fanboy, but so many here like to throw around blanket statements with baked in assumptions that are simply not true. Sometimes monos are the right way to go. Sometimes micros are the right way to go. As always, the real answer is "it depends".
> Scaling a monolith is almost always cheaper than migrating to microservices and scaling that.
> No - it depends on capacity needs and current architecture.
It does depend, but decomposing a monolith of any nontrivial size is going to require hundreds if not many thousands of hours of engineer time. That's a lot of money right there. IME it often comes down to something like $250K of engineer time vs. maybe $2K/month of server capacity.
> To support that 1:10000 transaction that takes the most time and needs the most scaling?
Done in the most naive way possible (just adding more servers to one giant pool) yes, it's as ineffective as you say.
What can be effective is segregating resources so that your 1:10000 transaction is isolated so that it doesn't drag down everything else.
Imagine:
- requests to api.foo.com go to one group of servers
- requests to api.foo.com/reports/ go to another group of servers, because those are the 99th percentile requests
They're both running the same monolith code. But at least slow requests to api.foo.com/reports can't starve out api.foo.com which handles logins and stuff.
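In practice that split usually lives in the load balancer, but as a rough sketch of the idea (hostnames and pool addresses are made up for illustration), it's nothing more than picking a backend pool by path prefix:

    // edge.go - toy edge router: both pools run the exact same monolith
    // build, but slow /reports/ traffic can only starve its own pool.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "strings"
    )

    func pool(target string) *httputil.ReverseProxy {
        u, err := url.Parse(target)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(u)
    }

    func main() {
        general := pool("http://general-pool.internal:8080") // logins and everything else
        reports := pool("http://reports-pool.internal:8080") // the 99th-percentile requests

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if strings.HasPrefix(r.URL.Path, "/reports/") {
                reports.ServeHTTP(w, r)
                return
            }
            general.ServeHTTP(w, r)
        })
        log.Fatal(http.ListenAndServe(":8080", nil)) // TLS termination assumed upstream
    }

Same binary, same code; the only thing that changes is which pool a given request lands on.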
Now, this doesn't work if e.g. those calls to api.foo.com/reports are creating, say, a bunch of database contention that slows things down for api.foo.com anyway up at the app level due to deadlocks or whatever.
There are various inefficiencies here (every server instance gets the whole fat pig monolith deployed to it, even if it's using only a tiny chunk of it), but also potentially various large efficiencies. And it is generally 1000x less work than decomposing a monolith.
Importing code paths that are never executed in a service is a security risk at best, a development/management nightmare on average, or an application-crippling architectural decision at worst, especially when working with large monoliths. That is not a smart trade-off - I would not recommend this pattern unless the cost to break off is so exorbitant that this would be your only choice.
For a monolith of any nontrivial size, you're talking anywhere from weeks to months, sometimes years, of engineer work, and those are engineers who also won't be delivering new value during that time. And if other teams are still delivering new features into the monolith during that time, now that's more work for the team whose job it is to break apart the monolith.
So anywhere from tens of thousands to millions of dollars is the cost.
Whether you call that exorbitant is up to you and your department's budget, I guess.
In the context of choosing monolith vs microservices first, it's a safe bet that we're talking about startups; in the other 98% of the market you don't have a choice because something already exists.
Can you be more specific? Because if you mean scale... meh.
For the most part, scaling a system is fungible across features. In particular, in a monolithic system, if I had to add another server due to logins, I just add another server. A side benefit is if logins are down but account balance checks are up, that extra server can pull that duty too. I don't need to say "these computing resources are only for this feature."
Microservices are indeed the best approach if you want to ship your org chart. That's all there is to it. Most of the technical justifications do not make sense, outside of very niche org requirements which aren't really all that common.
Take, for example, the idea that you can scale individual services independently. Sounds amazing on paper. However, you can also deploy multiple copies of your monolith and scale them, at a much lower cost even, and at a far lower complexity. In a cloud provider, bake an image, setup an ASG or equivalent, load balancer in front. Some rules for scaling up and down. You are basically done.
'Monolith' sounds really big and bad, but consider what's happening when people start using microservices. In this day and age, this probably means containers. Now you need a container orchestrator (like K8s) as you are likely spreading a myriad of services across multiple machines (and if you aren't, wth are you building microservices for). You'll then need specialized skills, a whole bunch of CNCF projects to go with it. Once you are not able to just make API calls everywhere 1 to 1, you'll start adding things like messaging queues and dedicated infrastructure for them.
If you are trying to do this properly, you'll probably want to have dedicated data stores for different services, so now you have a bunch of databases. You may also need coordination and consensus among some of your services. Logging and monitoring becomes much more complicated (and more costly with all those API invocations flying around) and you better have good tracing capabilities to even understand what's going on. Your resource requirements will skyrocket as every single copy of a service will likely want to reserve gigabytes of memory for itself. It's a heck of a lot of machinery to replicate what, in a monolith, would be a function call. Maybe a stack.
While doing all of that you need more headcount. If you had communication issues across teams, you have more teams and more people now, and they will be exacerbated. They will just not be about function calls and APIs anymore.
There's also the claim that you can deploy microservices independently of one another. Technically true, but what's that really buying you? You still need to make sure all those services play nice with one another, even if the API has not changed. Your test environments will have to reflect the correct versions of everything and that can become a chore by itself (easier to track a single build number). Those test environments tend to grow really large. You'll need CI/CD pipelines for all that stuff too.
Even security scanning and patching becomes more complicated. You probably did have issues coordinating between teams and that pushed you to go with microservices. Those issues are still there, now you need to convince everyone to patch their stuff(there's probably a lot of duplication between codebases now).
I think it makes sense to split the monolith into large 'services' (or domains) as it grows. Just not 'microservices'.
A funny quote I read a while ago on Twitter: "Microservices are a zero interest rate phenomenon". A bit tongue-in-cheek, but I think there's some truth to it.
It seems people think "Big Ball of Mud" when they hear Monolith, but they're not equivalent. Just like I tend to think "Spaghetti Code" when I hear Microservices. But again, they're not equivalents. Both architectures are equally capable of being messy.
Yes? but that really isn't the problem. You can do both monoliths and micro-services badly.
The real thing that makes software easy to maintain is consistency with itself.
If you have one guy that formats his code with spaces and another tabs, that creates friction in the codebase. If you have one guy that always uses `const` or `final`, but another guy that doesn't, again hard to maintain.
The hard problem is where the boundaries between pieces of business logic are drawn. If the application is consistent in how it divides responsibilities, it'll be a pretty clean codebase and pretty easy to navigate and maintain. If you have two different rogue agents that disagree on coding styles and boundaries, you'll have a pretty difficult codebase to navigate.
The easiest codebases have consistent function signatures, architecture, calling conventions, formatting, style, etc and avoid "clever" code.
> Almost all the successful microservice stories have started with a monolith that got too big and was broken up
The less certain you are about a system's requirements the more you should prefer a monolith.
A well-understood system (eg an internal combustion engine) has well-defined lines and so it _can_ make sense to put barriers in between components so that they can be tweaked/replaced/fixed without affecting the rest of the system. This gives you worthwhile, but modest overall performance improvements.
But if you draw the lines wrong you end up with inefficiencies that outweigh any benefit modularity could bring.
Start with a monolith and break it up as the system's ideal form reveals itself.
My co-worker gave me the quote - "if you have more microservices than clients, you're doing it wrong". Not sure if it was original or not but makes sense to me.
I think the overall thesis here is accurate. I can fill in the blanks where Fowler is light on anecdotal evidence. I have done many architecture reviews and private equity due diligence reviews of startup systems.
Nearly all the microservice-based designs were terrible. A common theme was a one- or two-scrum-team dev cohort building out dozens of microservices and matching databases. Nearly all of them had horrible performance, throughput and latency.
The monolith based systems were as a rule an order of magnitude better.
Especially where teams have found you don’t have to deploy a monolith just one way. You can pick and choose what endpoints to expose in various processes, all based on the same monolithic code base.
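A minimal sketch of that "one codebase, several deployables" idea - the role names and handlers here are invented for illustration, not anyone's actual setup:

    // main.go - one monolithic codebase; each process mounts only the
    // endpoints selected by ROLE, so the same build can be deployed as
    // "web", "api", "reports", or all of them at once.
    package main

    import (
        "log"
        "net/http"
        "os"
        "strings"
    )

    func loginHandler(w http.ResponseWriter, r *http.Request)   { w.Write([]byte("login")) }
    func balanceHandler(w http.ResponseWriter, r *http.Request) { w.Write([]byte("balance")) }
    func reportHandler(w http.ResponseWriter, r *http.Request)  { w.Write([]byte("report")) }

    func main() {
        role := os.Getenv("ROLE") // e.g. ROLE=web,api
        if role == "" {
            role = "web,api,reports" // default: the full monolith
        }

        mux := http.NewServeMux()
        for _, r := range strings.Split(role, ",") {
            switch r {
            case "web":
                mux.HandleFunc("/login", loginHandler)
            case "api":
                mux.HandleFunc("/api/balance", balanceHandler)
            case "reports":
                mux.HandleFunc("/reports/monthly", reportHandler)
            }
        }
        log.Fatal(http.ListenAndServe(":8080", mux))
    }

One build artifact, one test suite, but the routing layer decides which process serves what.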
Someday the Internet will figure out Microservices were always a niche architecture and should generally be avoided until you prove you need it. Most of the time all you’re doing is forcing app developers to do poorly what databases and other infrastructure are optimized to do well.
A few years ago, after having worked on a microservice-oriented project for some time, I joined a team which developed a monolith. To be honest, it felt like a breath of fresh air. What previously took coordination across multiple teams and repositories, tens of API schemas, complex multi-stage releases (your average feature usually touches lots of microservices), complicated synchronization of data stored in tens of DBs, with a whole theory of microservice communication and a whole DevOps team with their own "platform" -- now it's just a bunch of commits in the same repository and a single release.

To prevent the code from becoming complex spaghetti, they simply use the modular monolith architecture (modulith). There's simply a linter which makes sure module boundaries are not violated (so they are encapsulated properly, just like in microservices). Want to support more load? Just increase the number of workers/servers. The single DB becomes too large? Just split it into several DBs and connect to different physical machines as needed.

Now I'm pretty skeptical of the whole microservice thing. To be honest, I can't name a single advantage of microservices over monoliths anymore. All problems are solvable with a monolith just fine, without all the complexity. The only time we needed to physically split a part of the monolithic codebase into a separate repository was when the infosec department asked us to store and process personal data inside isolated infrastructure to conform to some regulations.
I think microservices can be useful for nontechnical reasons: They let you take the "org chart becomes architecture" from an unseen, not really understood force into something explicit that you can observe and manage.
Instead of multiple teams working on the same codebase and stepping on each other's toes, each team can have clear ownership of "their" services. It also forces the teams to think about API boundaries and API design, simply because no other way of interaction is available. It also incentivizes building services as mostly independent applications (simply because accessing more services becomes harder to develop and test) - which in turn makes your service easier to develop against and test in (relative) isolation.
However, what's of course a bit ridiculous is to require HTTP and network boundaries for this stuff. In principle, you should get the same benefits with a well-designed "modulith" where the individual modules only communicate through well-defined APIs. But this doesn't seem to have caught on as much as microservices have. My suspicion is that network boundaries as APIs provide two things that simple class or interface definitions don't. First, stronger decoupling: microservices live in completely separate worlds, so teams can't step on each other's toes with dependency conflicts, threading, resource usage, etc. There is a lot of stuff that would be part of the API boundary in a "modulith" that you wouldn't realize is, until it starts to bite you.
Second, with monoliths, there is some temptation to violate API boundaries if it lets you get the job done quickly, at the expense of causing headaches later: just reuse a private utility method from another module, write into another module's database table, etc. With network/process boundaries, this is not possible in the first place.
It's a whole bunch of very stupid reasons, but as they say, if it's stupid and works, it ain't stupid.
Tbh, it didn't work for us: our org chart changes more frequently than the codebase's architecture (people come and go, so teams are combined, split, etc. to account for that, many devs also like rotation, because it's boring to work on the same microservices forever), so in the end basically everyone owns everything. Especially when to implement a feature, you have to touch 10 microservices -- it's easier and faster to do everything yourself, than to coordinate 10 teams.
> Second, with monoliths, there is some temptation to violate API boundaries if it lets you get the job done quickly, at the expense of causing headaches later: just reuse a private utility method from another module
This is solvable with a simple linter: it fails at build time if you try to use a private method from another module. We use one at work, and it's great.
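For what it's worth, in Go you get a version of this from the toolchain itself, no separate linter needed: anything under a module's internal/ directory can only be imported from within that module's subtree, so cross-module reach-ins fail at compile time. A sketch with made-up package names:

    billing/
        billing.go           // the exported API of the "billing" module
        internal/ledger/     // private; importable only from billing/...
    orders/
        orders.go            // importing "example.com/app/billing/internal/ledger"
                             // here fails to compile: "use of internal package ... not allowed"

The enforcement is the point, not the mechanism: a compiler rule, a linter, or a process boundary can all provide it.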
There is also room for the in-between: not a monolith, but also not "micro" services. It is actually possible to have a number of services working together, each of which is, on its own, a small monolith.
I've done multiple projects where we had fairly large services working together. Sometimes they function on their own, other times they hand off a task to another service in order to complete the entire process. Sometimes they need each other to enrich something, but if the other service isn't running, that's okay for a short time.
It is also worth remembering that if all your microservice needs to be running at the same time, you just have a distributed monolith, which is so much worse than a regular monolith.
I love Martin Fowler's one pager called Snowflake server - it aged very well and I still use it as a reference in the cloud era. And this text is also good advice IMO.
What I feel is missing from what he calls the "Microservices Premium" is a clear statement that this premium is paid in "ops" hours. That changes the game because of the scarcity of "ops" resources.
In fact, the microservices dilemma is an optimization problem related to the ops/dev ratio that is being wrongly treated as a conceptual problem.
This is the simplest analysis I could come up with:
Another one here working with microservices and almost-hating it.
Also
> By starting with microservices you get everyone used to developing in separate small teams from the beginning, and having teams separated by service boundaries makes it much easier to scale up the development effort when you need to.
Nope, not for my team at least. Most features require touching several microservices, so now you either have as many merge conflicts as there are edges (if one team is responsible for fixing what breaks on the other sides, and yes, that happens), or you need twice the meetings with twice the people to make sure each side is doing what the other expects.
The philosophy that has served me well is "do the simplest possible thing".
Sometimes problems are intrinsically complicated and the solution is required to be complex. But even in that case it's important to do the simplest thing you can get away with!
My experience is that people, myself included, almost always over-engineer unless they focus really hard on doing the simple thing. It takes concentrated effort to avoid architecture astronauts and their wildly convoluted solutions.
It's orders of magnitude easier to add complexity than to remove it. Do the simple thing!
I like to follow a pattern I call monomicro. Basically I develop separate programs for different functionality, but make them embeddable so they can be composed and run in the same memory space for small deployments. The code that runs https://lastlogin.net is a good example. For LastLogin it runs as a globally distributed cluster on fly.io, but it's also fully embeddable in any Golang program to act as an auth layer.
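A generic sketch of that shape (not LastLogin's actual code; the names are hypothetical): the feature only exposes an http.Handler, so the same package can run as its own service or be mounted inside a bigger program.

    // auth/auth.go - the embeddable "service": no main, no opinions
    // about how it is deployed, just an http.Handler.
    package auth

    import "net/http"

    func Handler() http.Handler {
        mux := http.NewServeMux()
        mux.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("login page"))
        })
        return mux
    }

    // cmd/monolith/main.go - a small deployment embeds it in-process;
    // a large one can run the same package behind its own listener.
    package main

    import (
        "log"
        "net/http"

        "example.com/app/auth"
    )

    func main() {
        mux := http.NewServeMux()
        mux.Handle("/auth/", http.StripPrefix("/auth", auth.Handler()))
        // ...other embedded components mounted here...
        log.Fatal(http.ListenAndServe(":8080", mux))
    }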
Recently was looking at a distributed microservice system at a company that thought they would need massive scale, but pivoted from d2c to enterprise b2b and then found out enterprise b2b companies want data separation. They would have been much better off going monolith first and probably actually sticking with a monolith.
Data separation in enterprise-land means customers don’t want components that touch their data to be shared with any other customers. So a shared but sharded database is no good. In practice you often run a standalone stack for each customer, including their own database. This pattern is a nightmare from an operational scaling perspective, but that’s part of why enterprises are asked to pay so much.
I understand what it means, I'd like to know where the monolith architecture stands in this context. One of our projects was originally a monolith, and each tenant had their own dedicated DB the monolith connected to. Then they switched to the microservice architecture (back when it was the peak of the hype) and rejected the idea of database sharding, because each microservice already requires its own DB, and if you also sharded by tenant, you'd end up with M x N databases (where M is the number of tenants and N is the number of microservices), which sounds like a lot of complexity to manage. I think they once had a bug where one tenant could see data from another tenant, because they no longer had strong DB boundaries.
There's the ideal path, then the actual. Most monoliths, unchecked and with enough age, turn into spaghetti due to turnover, changing priorities, entropy, etc.
But at least some microservices architectures turn into macaroni - far too many small, isolated things, with no connection between them. It can have all the disadvantages of an overly-OO design, just at a larger scale.
Monolith as the final goal is probably a good way to think about the issue.
Janky modular stuff first, full of inefficiencies, few formalities, designed to move fast and _not be in production_. Once it's up and running (in a test setup), you can "see the shape of it" and make it into a more cohesive thing (a "monolith").
This is consistent with how many other crafts older than programming have coalesced over centuries. I suspect there's a reason behind it.
You don't start with an injection-molded solid piece of metal with carefully designed edges that can be broken later (or shaven off) in different ways. You machine something that you don't understand yet using several different tools and practices, then once the piece does what you want (you got a prototype!), you move the production line to injection molding. The resulting mold is way less flexible than the machining process, but much more cohesive and easy to manage.
Of course, programming is different. The "production lines" are not the same as in making plane parts. In programming you "fly the plane" as soon as it is "airworthy" and do improvements and maintenance in mid-flight. It's often a single plane, no need to make lots of the same model.
So, with that in mind, there's an appeal for carefully planning context boundaries. It's easier to ditch the old radar system for a new one, or replace the seats, or something like that, all during flight.
If the plane breaks down mid-flight, the whole thing is disastrous. So we do stuff like unit testing on the parts, and quality assurance on a "copy of the plane" with no passengers (no real users). Wait, aren't those approaches similar to old traditional production lines? Rigs to test component parts individually, fixtures (that comes from the machining world), quality assurance seals of approval, stress tests.
So, why the hell are we flying the plane as soon as it is barely airworthy in the first place? It creates all those weird requirements.
> When you begin a new application, how sure are you that it will be useful to your users?
Well, I don't have any clue. But _almost all_ applications I ever built were under serious pressure to be useful real fast. Barely flying was often enough to stop the design process; it got passengers at that point, so no need to go back to the drawing board. What stakeholders often required was not a better production process, just making the barely flying plane bigger.
I know, I know. Borrowing experience from other industries is full of drawbacks. It's not the same thing. But I can't just explain the absurdities of this industry (which are mostly non-related to engineering) in any other way.
All of this reminds me of that "If Microsoft made cars..." joke, but it's not funny anymore.
I think there is a variant of "you don't currently need it" that people should follow more. Build towards what you think you will need, by all means. But try and only build what you need right now.
This is a good point, and lines up with the real yagni argument.
However, I do not like the readiness that people have with throwing around the yagni argument for things they don't want to support, build, or disagree with, often contorting or oversimplifying it to get their way.
The yagni argument itself is reasonable, but is often misused/abused.
Yeah. There's a healthy middle ground between boxing yourself in and boxing yourself out. Don't write things before you need them, but if you can tweak an implementation slightly with knowledge of things you're thinking about adding later, that's often a good idea.
Unless you are actually a FAANG, you don't need them. And in the extremely unlikely case you become the next FAANG you'll have plenty of money to throw at the problem.
Any arguments to support this claim? What's the difference then?
If there are three communicating services: the first has 90% of the business logic, the second has 7%, and the last one has 3%.
Should we call the first one a monolith? And if they don't communicate?
> Any arguments to support this claim? What's the difference then?
The "micro" in "microservice".
Microservices are meant to do one "micro" thing well, whether it's image hosting or credit card transactions or supplying the content of a tweet or whatever.
A monolith does all the things, or most of the things. It's not "micro".
You don't need arguments to support it, these are just the definitions of the terms. It's semantics.
A monolith can't be a single microservice because it's not micro.
In my opinion that is too literal an understanding of the concept, and there is no universal borderline in size or scope.
It is common in practice to find quite fat microservices, because their maintainers decided it isn't worth adding yet another network hop for a specific case.
The main problem here is to be able to effectively debug and maintain several communicating services: to have a distributed tracing facility like Jaeger, centralized logs collection, schema registry and so on.
It may seem complex to set up and maintain at first, but once you have some experience it does not add much toil and cognitive load.
But having the infrastructure in place to effectively work with microservices gives you the freedom of system design choices. It does not force you to make microservices very small.
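To make "does not add much toil" a bit more concrete: once the collector/Jaeger side is in place, the per-service code can stay tiny. A minimal Go sketch using OpenTelemetry (provider and exporter wiring omitted, since it depends on your setup; the function and service names are made up):

    // orders.go - the tracing code each service actually carries.
    package orders

    import (
        "context"

        "go.opentelemetry.io/otel"
    )

    func HandleOrder(ctx context.Context, orderID string) error {
        // The TracerProvider (e.g. exporting OTLP to a collector that
        // feeds Jaeger) is configured once at startup, not shown here.
        ctx, span := otel.Tracer("orders").Start(ctx, "HandleOrder")
        defer span.End()

        // ...business logic; pass ctx to downstream calls so their
        // spans nest under this one in the trace...
        _ = ctx
        return nil
    }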
> there is no universal borderline in size or scope
There doesn't need to be. It's applied in specific contexts where the distinction is clear for that context.
You said "A monolith is just one microservice." But it's not, because they're literally defined as opposites. You might as well say "the color white is just one shade of black." It's not adding anything helpful -- it's quibbling over words.
> It's not adding anything helpful -- it's quibbling over words.
That depends on the context. If I came to the paint store I would be surprised if I were offered either black OR white paint, as if I could not buy and use both, depending on my scenario. Black and white are generally not the same thing, but they are not mutually exclusive in practice.
(And by the way, comparing sizes to colors is a bad analogy; a better one would be comparing a big ship with a set of ships. So I state that I can have a fleet of ships, some of which are also big. And one big ship can also be considered a temporarily outnumbered fleet: if it has the means to cooperate, I can easily add more, smaller ships when needed, instead of building yet another deck onto the main ship).
My point is: microservices vs monolith is a false dichotomy. You can use both approaches at the same time, since each one of them has its own advantages.
If your infrastructure is ready and you know how to manage microservices, then you can just start building a monolith, unless you decide that some particular part is better separated for reasons like context isolation, separate deployment, separate scaling, etc. It will take you a day the first time. And an hour the second time and onwards.
However if your "infrastructure" is just local files with logrotate on a bare-metal machine, if you don't know how to reuse the common code base for different deployment units or if you do not know how to do backward compatible protocol schema evolution, then bad luck -- you just do not have a choice when it comes to designing a new functionality. You'll have to add the new feature to your monolith despite all the benefits you could reap by separating it.
Microservice-ready infrastructure is all about having the capability to separate parts of the logic easily when you want to. It neither dictates that you break your monolith right away nor that you build only 30-line services from now on.
I mostly use statically linked monorepos with microservices. The services use the common utils from the repo. What's the difference except for not having dead code and unused dependencies in the artifacts in my case?