
Once when starting a new gig I inherited a "microservices" architecture.

They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM. Of course, if you were using microservices you needed Docker as well, so they had also developed a giant Docker container holding all 12 microservices, which they deployed to a single host (all managed by supervisord). Of course, since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM, so they used a larger instance. Everything was provisioned manually, by the way, because there was no service discovery or container orchestration - just a Docker container running on a host (an upgrade from running the production processes in a tmux session). What they really had was a giant monolithic application with a complicated deployment process and insane JVM overhead.
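
If you haven't seen this pattern before, it boils down to a single image whose entrypoint is supervisord, which then fans out into one JVM per "service". A rough sketch of the shape of it - the image, service names, jar paths and heap sizes are made up for illustration, not their actual config:

    # Dockerfile - one image containing every "microservice"
    FROM eclipse-temurin:11-jre
    COPY services/*.jar /opt/services/
    COPY supervisord.conf /etc/supervisord.conf
    # supervisord runs as PID 1 and starts 12 separate JVMs
    CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]

    ; supervisord.conf - one stanza like this per service, 12 in total
    [program:orders-service]
    command=java -Xmx512m -jar /opt/services/orders-service.jar --port=8081
    autorestart=true

    [program:billing-service]
    command=java -Xmx512m -jar /opt/services/billing-service.jar --port=8082
    autorestart=true

    ; ...ten more of these. Twelve heaps plus per-JVM metaspace and thread
    ; overhead is how you end up needing a 9GiB host.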

Moving to the larger instance likely solved the performance issues. In its place they now had multiple over-provisioned instances (for "HA"), and, combined with other questionable decisions, were paying ~100k/year for a web backend that did no more than ~50 requests/minute at peak. But hey, at least they were doing real devops like Netflix.

As for me, I've become a bit more aware of cargo cult development. I can't say I'm completely immune to cargo-cult-driven development either (I once rewrote an entire Angular application in React because "Angular is dead"), so it really opened my eyes to how I could also implement "solutions" without truly understanding why they're useful.




> They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM.

I've dealt with an even worse system, with a dozen separate applications, each in its own repo, then with various repos containing shared code. But the whole thing was really one interconnected system, such that a change to one component often required changes to the shared code, which required updates to all the other services.

It was a nightmare. At least your folks had the good sense to use a single repository.


Agreed. Multiple closely dependent repos within one organization are the real nightmare.


> then with various repos containing shared code

What source control system?

Also, from the article:

> even though theoretically services can be deployed in isolation, you find that due to the inter-dependencies between services, you have to deploy sets of services as a group

This is the situation we are in, like you were.


> What source control system?

Git in our case. And the direction was not to use submodules or anything like that which might have made life manageable. It was pretty unpleasant.


You don't always have to know why, but it's somewhat frightening that so many "engineers" don't have a clue why they're doing something (other than "because Google does it"). And I'm of course guilty of it myself, jumping on the hype train or uncritically taking advice from domain experts, only to find out years later that much of it was BS. Most of the time, though, you will not reach enlightenment. I guess it's in our nature to follow authority, hype, trends and groupthink.


They're only following their incentives.

What's gonna look better on a dev's CV: 'spent a year maintaining a CRUD monolith app' or 'spent a year breaking a monolith into microservices, with shiny language X to boot'?

We can be a very fashion- and buzzword-driven industry sometimes.

EDIT: this perverse incentive goes all the way to the top, through to CTO level. Sometimes I wonder if businesses understand just how much money and effort is wasted on pointless rewrites that make life harder for everyone.


"Resume driven development" :)


> it's somewhat frightening that so many "engineers" don't have a clue why they are doing something (because Google does it).

This doesn't stop at engineering; open offices, teaser/trick-based interviewing, OKRs, ... Even GOOGLE doesn't do some of those things anymore, but the follower sheep still do.


I recently had a similar experience. Our product at work is a monolith that's not in the greatest shape, carrying technical debt we inherited, and it's usually talked about condescendingly by other teams working on different products. To our surprise, when we started testing it with cloud deployments, it was really lightweight compared to just one of the 25 Java micro-services from the other teams.

Their "microservices" suffered from the same JVM overhead and to remedy this they are joining their functionalities together (initially they had 30-40).


I'm switching to Go, partially due to the JVM. Hopefully I'll get better partitioning on a single small box as I start.


> They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM.

9 times out of 10 it's because developers don't know how to properly design and index the underlying RDBMS. I've noticed there's a severe lack of knowledge in that area among average developers.
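
The usual shape of it: a hot query filtering and sorting on unindexed columns, so every request does a full table scan, and the "fix" is more hardware or more services instead of an index. A minimal sketch (table and columns made up for illustration):

    -- Hot path: fetch a customer's most recent orders
    SELECT id, total, created_at
    FROM orders
    WHERE customer_id = ?
    ORDER BY created_at DESC
    LIMIT 20;

    -- Without an index this scans the whole table on every request.
    -- A composite index serves both the filter and the sort:
    CREATE INDEX idx_orders_customer_created
        ON orders (customer_id, created_at);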


Sounds like they don't understand why it's called a microservice to begin with. They're not supposed to be solutions for an entire piece of software, just dedicated bits - at least that's what I'd figure with a name like "micro". When we adopted microservices at my job (idk if Azure Functions count or not), we did it because we had one task we needed taken out of our main application, both for performance concerns and because we knew it would involve way more work to implement (a .NET Framework codebase being ported to .NET Core, which meant the .NET Framework dependencies no longer worked in .NET Core). We eventually turned it into a WebAPI instead due to limitations of Azure Functions for what we wanted to do (process imagery of sorts).


> idk if Azure Functions count or not

Azure Functions are technically a "serverless" product, but using them as y'all intended to is a textbook definition of a "microservice" :)


> Of course since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM so they used a larger instance.

Well, experimentally Oracle has solved that problem, somewhat: you can now use CDS (class data sharing) and *.so files for some parts of your application. It probably doesn't eliminate every problem, but it helps a bit at least. That said, it would have been easier to just use Apache Felix or similar to run all the applications in a single OSGi container - that would probably have saved something like 5-7 GiB of RAM.
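
For anyone curious, the CDS part is just a couple of flags on a recent JDK - a rough sketch, assuming JDK 13+ and a plain runnable jar (file names made up):

    # Training run: record which classes the app loads into a shared archive
    java -XX:ArchiveClassesAtExit=orders.jsa -jar orders-service.jar

    # Normal runs: map the archive instead of re-parsing and verifying those classes
    java -XX:SharedArchiveFile=orders.jsa -jar orders-service.jar

The archive is memory-mapped read-only, so JVMs on the same host can share those pages - it helps startup and trims some memory, but collapsing everything into one JVM (OSGi or otherwise) is where the big savings would be.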



