Actually, I'd say a microservices architecture has helped in a complex project I'm working on right now.
Consider the pieces that need to exist here:
- A main library that needs to load up a few gigs of data in memory
- A process that communicates with a queue of messages coming in
- A process that interfaces with the mobile app (port x)
- A process that interfaces with a different kind of app (port y)
The goal: every incoming message needs to pass through the main library and back to the app via the queue.
Monolith option: a single main.cc containing all of this. It takes a while to start, and it can't even queue up incoming messages until everything has started and loaded into memory, even with threads and whatnot.
Now with microservices,
- I can build a service that exposes my big-data-load library through a port. This can be loaded and restarted at will.
- Queue is running as a separate process. Messages queue when main lib is down and processed later.
- Server A and server B run separately
- A bug in one won't crash all the others
- I can manage each service independently (run them via supervisor or whatnot)
- Scaling is easy - I can deploy each service behind load balancers, on different machines in the future, without ever needing to change anything but the URLs in a config file
- Monitoring - I have latencies for each service available via HAProxy and the like
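The "manage each service independently" point can be as small as a supervisord config with one [program] section per service. A sketch - the program names, paths, and ports here are all hypothetical:

```ini
; One [program] block per service; each can be restarted on its own
; (e.g. `supervisorctl restart datalib`) without touching the others.
[program:datalib]
command=/opt/app/bin/datalib --port 5000
autostart=true
autorestart=true

[program:queue_worker]
command=/opt/app/bin/queue_worker --broker localhost:5672
autorestart=true

[program:server_a]
command=/opt/app/bin/server_a --port 8001
autorestart=true

[program:server_b]
command=/opt/app/bin/server_b --port 8002
autorestart=true
```

Restarting datalib alone bounces the data service while the queue worker keeps buffering messages, which is the whole point.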
My 2c.