
To what end? To support that 1:10000 transaction that takes the most time and needs the most scaling? Just burn a wad of $20s; that will be easier.



Scaling a monolith is almost always cheaper than migrating to microservices and scaling that.

When you split it into microservices you are adding a bunch of infrastructure that did not need to exist. Your app performance will likely go way down, since things that used to be function calls are now slow network API calls. Add to that container orchestration and all sorts of CNCF-approved things, and the whole thing balloons. If you deploy this thing in a cloud, networking costs alone will eat far more than your $20 (and if not, you'll likely still need more hardware anyway).

Sure, once you add all that overhead you now may have a service that can be independently scaled.

There's also nothing preventing you from splitting off just that service from the monolith, if that call is really that hot.


> Scaling a monolith is almost always cheaper than migrating to microservices and scaling that.

No - it depends on capacity needs and current architecture.

> When you split it into microservices you are adding a bunch of infrastructure that did not need to exist.

Unless you need to scale in an economical manner.

> Your app performance will likely go way down since things that used to be function calls are now slow network API calls.

Not necessarily. Caching and shared storage are real strategies.
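For example, a TTL cache in front of a remote lookup keeps the hot path in-process (a sketch in Python with made-up service and function names):

    import time
    import requests

    _cache = {}   # user_id -> (value, expires_at)
    TTL = 60      # seconds of staleness we can tolerate

    def get_user(user_id):
        hit = _cache.get(user_id)
        if hit and hit[1] > time.monotonic():
            return hit[0]  # served in-process, no network hop
        # only cold or expired keys pay for the network round trip
        value = requests.get(f"http://users.internal/v1/users/{user_id}",
                             timeout=2).json()
        _cache[user_id] = (value, time.monotonic() + TTL)
        return value

Whether that's acceptable depends on how stale the data is allowed to be, which is the "it depends" again.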

> Add to that container orchestration and all sorts of CNCF-approved things and the whole thing balloons.

k8s is *not* required for microservice architectures.

> If you deploy this thing in a cloud, just networking costs alone will eat far more than your $20

Again, not necessarily. All major cloud providers give you ways to mitigate this (e.g. keeping traffic in the same availability zone or on private networking).

To clarify: I'm not a microservice fanboy, but so many here like to throw around blanket statements with baked-in assumptions that are simply not true. Sometimes monos are the right way to go. Sometimes micros are the right way to go. As always, the real answer is "it depends".


    > Scaling a monolith is almost always cheaper 
    than migrating to microservices and scaling that.

    No - it depends on capacity needs and current 
    architecture.
It does depend, but splitting up a monolith of any nontrivial size is going to require hundreds if not thousands of hours of engineer time. That's a lot of money right there. IME it often comes down to something like $250K of engineer time vs. maybe $2K/month of extra server capacity.
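Rough arithmetic on those (admittedly ballpark) numbers: $2K/month is $24K/year, so $250K of engineer time buys roughly ten years of just scaling the monolith before the migration breaks even - and that's before counting the features those engineers aren't shipping in the meantime.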

Again, though: it does depend.


JohnBooty, you sound like someone I'd enjoy working with. Sounds like you have your head screwed on correctly.


A high compliment!


    To support that 1:10000 transaction that takes 
    the most time and needs the most scaling?
Done in the most naive way possible (just adding more servers to one giant pool), yes, it's as ineffective as you say.

What can be effective is segregating resources so that your 1:10000 transaction is isolated and doesn't drag down everything else.

Imagine:

- requests to api.foo.com go to one group of servers

- requests to api.foo.com/reports/ go to another group of servers, because those are the 99th percentile requests

They're both running the same monolith code. But at least slow requests to api.foo.com/reports can't starve out api.foo.com, which handles logins and stuff.
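Concretely, that split can be a couple of load balancer rules. A minimal sketch, assuming nginx in front (pool names and addresses made up):

    upstream general_pool { server 10.0.1.10:8080; server 10.0.1.11:8080; }
    upstream reports_pool { server 10.0.2.10:8080; server 10.0.2.11:8080; }

    server {
        listen 80;
        server_name api.foo.com;

        # heavy 99th-percentile report traffic gets its own servers...
        location /reports/ { proxy_pass http://reports_pool; }

        # ...everything else (logins etc.) stays on the general pool
        location / { proxy_pass http://general_pool; }
    }

You can slice further the same way (say, a pool just for webhooks) without touching the application code.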

Now, this doesn't work if e.g. those calls to api.foo.com/reports are creating, say, a bunch of database contention that slows things down for api.foo.com anyway up at the app level due to deadlocks or whatever.

There are various inefficiencies here (every server instance gets the whole fat pig monolith deployed to it, even if it's using only a tiny chunk of it) but also potentially various large efficiencies. And it is generally 1000x less work than decomposing a monolith.

Not a magic solution, just one to consider.


Importing code paths that are never executed in a service is a security risk at best, a development/management nightmare on average, or an application-crippling architectural decision at worst, especially when working with large monoliths. That is not a smart trade-off - I would not recommend this pattern unless the cost of breaking things off is so exorbitant that this would be your only choice.


For a monolith of any nontrivial size, you're talking anywhere from weeks to months, if not years, of engineer work, and those are engineers who also won't be delivering new value during that time. And if other teams are still delivering new features into the monolith during that time, that's even more work for the team whose job it is to break apart the monolith.

So anywhere from tens of thousands to millions of dollars is the cost.

Whether you call that exorbitant is up to you and your department's budget, I guess.


Yes. It's better than burning £100ks by overcomplicating things and wasting your developers' time.

Most startups fail. You need to cover as much ground as possible while you have runway, not cock about with microservices.


You're basing your entire argument on the org in question being a startup, and missing the other 98% of the market.


In the context of choosing monolith vs microservices first, it's a safe bet that we're talking about startups; in the other 98% of the market you don't have a choice because something already exists.


What? You don't think large enterprises also have to make these decisions?


Is there a large enterprise that exists in 2024 that doesn't already have existing software?


Is there a large enterprise that exists in 2024 that isn't currently building new software?



