A good rough rule of thumb that's worked for us is to fork services when: a. they're performing an action fundamentally different from the block they originate from (one service may respond to user-driven actions, another may be batch or cron driven), and b. there isn't synchronous coupling between the blocks.

If the interactions are synchronous, splitting often just adds complexity at the IPC layer, plus more boilerplate on both sides, since each side now has to assume the service it depends on may not be available.

That said, if the services are fundamentally different in their scaling needs, security demands, availability requirements, etc., splitting them is still worth considering.
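
To make the boilerplate point concrete, here's a rough Go sketch (the service name, endpoint and retry policy are all invented for illustration, not anything we actually run) of what the same call looks like in-process versus across a synchronous service boundary:

    package main

    import (
        "context"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // In-process version: just a function call, nothing can be "down".
    func quoteLocal(items int) int { return items * 100 }

    // Cross-service version of the same call: needs a timeout, a retry
    // loop and an error path, because the pricing service may be
    // unavailable at call time.
    func quoteRemote(ctx context.Context, baseURL string, items int) (int, error) {
        var lastErr error
        for attempt := 0; attempt < 3; attempt++ {
            reqCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
            req, err := http.NewRequestWithContext(reqCtx, http.MethodGet,
                fmt.Sprintf("%s/quote?items=%d", baseURL, items), nil)
            if err != nil {
                cancel()
                return 0, err
            }
            resp, err := http.DefaultClient.Do(req)
            cancel()
            if err != nil {
                lastErr = err
                time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond) // crude backoff
                continue
            }
            var out struct{ Total int }
            err = json.NewDecoder(resp.Body).Decode(&out)
            resp.Body.Close()
            if err != nil {
                lastErr = err
                continue
            }
            return out.Total, nil
        }
        return 0, fmt.Errorf("pricing service unavailable after retries: %w", lastErr)
    }

    func main() {
        fmt.Println(quoteLocal(3)) // in-process: trivial
        if _, err := quoteRemote(context.Background(), "http://pricing.internal", 3); err != nil {
            fmt.Println("remote:", err) // expected here; no real service behind the URL
        }
    }

None of that timeout/retry/error-path ceremony exists for the in-process call.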

Where do you draw the line with coroutines in that mental model? Languages like Erlang/Elixir, built on BEAM, are one approach to solving this problem of "synchronous programs calling components with varying scaling needs". It feels fundamentally wrong to me to deploy the components as isolated services simply because of scaling differences.
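
For what it's worth, the in-process version of "different scaling needs" can be as small as differently sized worker pools. A minimal Go sketch, with goroutines standing in for BEAM processes and the pool sizes invented purely to show the idea:

    package main

    import (
        "fmt"
        "sync"
    )

    // runPool starts `workers` goroutines draining `jobs`.
    func runPool(name string, workers int, jobs <-chan int, wg *sync.WaitGroup) {
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for j := range jobs {
                    fmt.Printf("%s worker %d handled job %d\n", name, id, j)
                }
            }(i)
        }
    }

    func main() {
        userJobs := make(chan int)
        batchJobs := make(chan int)
        var wg sync.WaitGroup

        runPool("user-facing", 50, userJobs, &wg) // hot path: many workers
        runPool("batch", 2, batchJobs, &wg)       // cron-ish path: a couple suffice

        for i := 0; i < 10; i++ {
            userJobs <- i
        }
        batchJobs <- 0
        close(userJobs)
        close(batchJobs)
        wg.Wait()
    }

Both "components" scale independently but ship and deploy as one thing.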

Maybe, over time, the ideas behind k8s and service meshes will start to close this complexity gap? Or is it a language problem that will create the solution, à la Erlang?

How does one manage authentication/authorization in a distributed system? Is that on the service mesh also?

Complexity is a bitch!

It is indeed bitchy!

I'd say different scaling needs alone may not be enough to justify isolating services, especially if all the other parameters I mentioned stay the same. That said, the cases where isolating purely on scaling has made sense - in my experience - were when the requirements were drastically different: completely different geographical needs, scale ceilings, etc.

We try to keep authorization within the system itself, with additional hardening on the service mesh/container/vm/network level as needed. I don't know that this is the right way to do it, but it's worked for us thus far.
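
For concreteness, "authorization within the system itself" could look like ordinary middleware in each service, with the mesh only doing transport-level hardening. A minimal Go sketch - the header name and role check are made up, and a real setup would verify a token rather than trust a bare header:

    package main

    import (
        "fmt"
        "net/http"
    )

    // requireRole keeps the authorization decision inside the service.
    func requireRole(role string, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Header.Get("X-User-Role") != role {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        reports := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "the reports")
        })
        http.Handle("/reports", requireRole("admin", reports))
        // Transport-level hardening (mTLS between services, network policy)
        // is left to the mesh/infrastructure layer.
        http.ListenAndServe(":8080", nil)
    }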

The biggest problem with isolating services is that it reduces predictable errors by giving you smaller chunks you can reason about, but it increases unpredictable, unreproducible (aren't those a bitch) errors down the line, because the overall complexity has grown.
