
Oh, it is even worse.

The MAIN reason for microservices was that you could have multiple teams work on their services independently of each other, because coordinating the work of multiple teams on a single huge monolithic application is a very complex problem with a lot of overhead.

But in many companies the development of microservices is actually synchronised between the agile teams. They typically have a common release schedule, want to deliver larger features across a multitude of services all at the same time, etc.

Effectively, that makes the task way more complex than it would be with a monolithic application.



I've worked with thousands of other employees on a single monolithic codebase, which was delivered continuously. There was no complex overhead.

The process went something like this:

1. write code

2. get code review from my team (and/or the team whose code I was touching)

3. address feedback

4. on sign-off, merge and release code to production

5. monitor logs/alerts for increase in errors

In reality, even with thousands of developers, you don't have thousands of merges per day. It was more like 30-50 PRs merged per day, and on a multi-million-line codebase most PRs were never anywhere near each other.


Regarding monoliths... when there's an issue, everyone who made a PR is now subject to forensics to try to identify the cause. I'd rather make a separate app that is infrequently changed, resulting in fewer faults and shorter investigations. Being on the hook to figure out when someone breaks something "related" to my team's code is also a waste of developer time. There is a middle ground for optimizing developer time, but putting everything in the same app is absurd, regardless of how much money it makes.


I'm not sure how you think microservices gets around that (it doesn't!).

We didn't play a blame game though... your team was responsible for your slice of the world and that was it. Anyone could open a PR to your code and you could open a PR to anyone else's code. It was a pretty rare event unless you were working pretty deep in the stack (e.g., merging framework upgrades from open source) or needed new APIs in someone else's stuff.


> I'm not sure how you think microservices gets around that (it doesn't!).

Microservices get around potential dependency bugs because of the isolation. Now there's API orchestration between the services, and that can be a point of failure. This is why you want BDD testing for APIs, to provide higher confidence.
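
For what it's worth, here is a minimal sketch of what I mean by a BDD-style contract test for a service API. The endpoint, payload, and response shape are made up, and it assumes a Jest-like runner (describe/it/expect) with a fetch-capable environment:

    // Hypothetical BDD-style contract test for a payments microservice,
    // using a Given/When/Then structure in a Jest-like runner.
    describe("POST /payments", () => {
      it("returns a confirmation id when the charge succeeds", async () => {
        // Given a valid charge request
        const body = { amountCents: 1999, currency: "USD", cardToken: "tok_test" };

        // When the consumer calls the payments service
        const res = await fetch("https://payments.example.internal/payments", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(body),
        });

        // Then the consumer-facing contract holds
        expect(res.status).toBe(201);
        const payment = await res.json();
        expect(typeof payment.confirmationId).toBe("string");
        expect(payment.status).toBe("succeeded");
      });
    });

Consumers keep a suite like that green against the provider, so contract breaks show up in CI instead of in production.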

The tradeoff isn't complicated: slightly more work up front for less maintenance long term, though granted, this approach doesn't scale forever. There's no exact science to finding the tipping point.


> Microservices get around potential dependency bugs, because of the isolation.

How so? I'd buy that bridge if you could deliver it, but you can't. Isolation doesn't protect you from dependency bugs, and it doesn't protect your dependents from your own bugs. If you start returning "payment successful" when it isn't, lots of people are going to get mad -- whether there is isolation or not.

> Now there's an API orchestration between the services

An API is simply an interface -- whether it's over a socket or in memory, you don't need a microservice to provide an API.
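
To illustrate (a sketch with made-up names): the same interface can be backed by an in-process implementation or by a thin client that goes over the network, and callers can't tell the difference:

    // Hypothetical example: the API is the interface, not the transport.
    interface PaymentService {
      charge(amountCents: number, cardToken: string): Promise<{ confirmationId: string }>;
    }

    // In-memory implementation -- lives inside the monolith.
    class InProcessPaymentService implements PaymentService {
      async charge(amountCents: number, cardToken: string) {
        // ...talk to the database / payment gateway directly...
        return { confirmationId: "local-123" };
      }
    }

    // Remote implementation -- calls a separate service over a socket.
    class HttpPaymentService implements PaymentService {
      constructor(private baseUrl: string) {}
      async charge(amountCents: number, cardToken: string) {
        const res = await fetch(`${this.baseUrl}/payments`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ amountCents, cardToken }),
        });
        return res.json();
      }
    }

Swapping one for the other is wiring, not a contract change; callers depend on PaymentService either way.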

> This is why you want BDD testing for APIs, to provide a higher confidence.

Testing is possible in all kinds of software architectures, but we didn't need testing just to make sure an API was followed. If you broke the API contract in the monolith, it simply didn't compile. No testing required.
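
Roughly what that looks like (toy names): change a shared function's signature inside the monolith and every call site fails the build, whereas the same change behind an HTTP endpoint only surfaces at runtime:

    // Inside the monolith the contract is the function signature,
    // and the compiler enforces it for every caller.
    export function chargeCard(amountCents: number, cardToken: string): string {
      // ...charge the card...
      return "conf-123";
    }

    // If the signature later changes, e.g. to
    //   chargeCard(amount: { cents: number; currency: string }, cardToken: string)
    // this call site stops compiling, so the break is caught before release
    // rather than discovered as a failing request between two services.
    const confirmation = chargeCard(1999, "tok_test");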

> Slightly more work up front for less maintenance long term

I'm actually not sure which one you are pointing at here... I've worked with both pretty extensively on large projects, and I would say the monolith was significantly LESS maintenance for a 20-year-old project. The microservice architectures I've worked on have been a bit younger (5-10 years old) but require significantly more work just to keep the lights on, so maybe they hadn't hit that tipping point you refer to yet.


50 PRs per day with a thousand developers is definitely not a healthy situation.

It means any given developer merges their work very, very rarely (a thousand developers at 50 PRs per day works out to one merge every 20 working days, i.e. about 4 weeks, on average), and in my experience that means either low productivity (they just produce little) or huge PRs that have lots of conflicts and are a PITA to review.


Heh, PRs were actually quite small (from what I saw), and many teams worked in their own repos which were then grafted into the main repo (via subtrees and automated commits). My team worked in the main repo, mostly on framework-ish code. I also remember quite a few other automated commits as well (mostly pre-built caches for things that needed to be served in sub-millisecond time but changed very infrequently).

And yes, spending two to three weeks on getting 200 lines of code absolutely soul-crushingly perfect sounds about right for that place, but that has nothing to do with it being a monolith.



