A lot of people may not remember this, but the microservices bandwagon was pushed heavily by Heroku, which (at the time) was a ubiquitous and influential platform among devs working at startups.
Why did they care? Because their particular hosting model at the time really couldn't handle complex architecture, so microservices speaking to each other over HTTP was the only way to do things.
The arguments in favor of microservices at least sounded good enough that they were adopted at a lot of orgs before anyone actually needed the supposed benefits, and it turned into a big mess.
Yep. Microservices were pushed hard by Martin Fowler who gives terrible programming advice that always yields extremely lucrative returns for cloud platforms.
I find all the programming advice espoused by all these "gurus" to be very suspect. Especially since most of them seem to come from a theory vs practice POV. For example "uncle bob" delivers a lot of his advice as gospel, but has he actually successfully delivered any software that's used at scale? Same with Martin Fowler et al. I've found advice by people like Linus Torvalds, Steve Yegge, etc. much more interesting/insightful/useful.
I've especially found anyone that is an "enterprise architectural consultant" to be especially suspect.
> For example "uncle bob" delivers a lot of his advice as gospel, but has he actually successfully delivered any software that's used at scale?
I fail to see the point of this cheap ad hominem.
Uncle Bob is arguably best known for his seminal books: Clean Code and Clean Architecture.
Clean Code talks about principles such as "you should name your variables well". Why do you believe scale is at all relevant?
Clean Architecture talks about a general principle for organizing software projects which has traits that are highly beneficial to maintenance, refactoring, and even extensibility. It's a nuanced take on the tried and true layered architecture. Why do you believe scale is remotely relevant?
I personally know Principal Engineers at a couple of FAANGs who are great at selling ideas but are outright awful at writing code. One even famously handed POCs to SDE1s to have them rewrite and refactor his code to get some quality into it. Do you really believe in this appeal-to-authority fallacy to the point where the number of customers hitting a server is the measuring stick for competence and the quality of implementations?
Cuz they espouse advice without having proven it in practice. When I say at scale, I mean they literally haven't shipped any software of note; I don't mean software that's getting hit by an arbitrary number of req/s. Why do I think this matters? They haven't demonstrated that their rather arbitrary list of Good Things helps make successful software.
Also, some of the advice is just provably nonsensical. Like "uncle" Bob says "the ideal number of arguments for a function is zero (niladic)", which is absurd. It means your function is either impure, or literally a constant.
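The commenter's objection can be made concrete with a minimal sketch (the function names here are made up for illustration): a zero-argument function has nowhere to get varying inputs, so it is either a constant or it reaches into hidden state.

```python
# Case 1: a constant. It always returns the same value, so it could
# just as well be a plain value instead of a function.
def max_retries():
    return 3

# Case 2: impure. It reads hidden state (here, an environment
# variable), which makes behavior depend on context and harder to test.
import os

def current_user():
    return os.environ.get("USER", "unknown")

# Making the dependency an explicit argument restores purity and
# testability -- at the cost of one parameter:
def greeting(user):
    return f"Hello, {user}"

print(greeting(current_user()))
```

The one-argument `greeting` is trivially testable with any input, while `current_user` can only be tested by manipulating the environment.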
Most of it is just very shallow stuff that's mostly applicable to Java/C#/OO languages. A lot of the patterns might even be considered antipatterns in other languages.
I just don't find stuff written by these guys very insightful or useful.
Yup, I worked at a company that worshipped him, and it turns out his ideas were bad: the architecture was a mess, it didn't work, and the company went under.
[Tangentially related] It’s funny, now that I’m fairly heavy into board gaming, I actually hear much more from Martin Fowler in the board gaming community than the software engineering community.
Same with the whole "serverless" thing heavily pushed by Amazon and co, which conveniently offer "serverless services". These trends are always pushed by marketing departments at SaaS companies.
> Same with the whole "serverless" thing heavily pushed by Amazon and co, which conveniently offer "serverless services". These trends are always pushed by marketing departments at SaaS companies.
This take is pure nonsense. It's highly disappointing to see this blend of ignorant and uninformed takes on HN of all places.
Amazon's serverless offerings are the ideal tool for many use cases, especially internal services. Instead of being forced to write, deploy, operate and maintain a dedicated service to handle events, just write the event handlers you need, get them to run on someone else's computer, and forget about it.
Internally, things like AWS Lambda save a lot on wasted allocated capacity. You don't need to allocate a whole box just to run a handler each time, say, someone uploads a file to S3.
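For readers who haven't used it, a handler like the one described is just a function: a minimal sketch is below. The event shape follows AWS's documented S3 notification format; the processing step is a placeholder, and the bucket/key values shown are whatever the event carries.

```python
# Sketch of an AWS Lambda handler reacting to an S3 upload event.
import urllib.parse

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the event payload.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder for real work (thumbnailing, indexing, etc.).
        print(f"New object: s3://{bucket}/{key}")
    return {"statusCode": 200}
```

There is no server to provision: AWS invokes `handler` on each event and the capacity cost when nothing is uploaded is zero.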
Microservices are an option, serverless is an option. Use the right tool for the job as much as is required, but don't overdo something that causes unnecessary excessive maintenance.
> Microservices are an option, serverless is an option. Use the right tool for the job.
Developers don't need these "tools" pushed by marketing departments in the first place. These aren't technical solutions, these are marketing considerations, solutions looking for a problem, AKA "hype driven development".
When losing an argument, some people resort to the "it's just a tool, use the right tool for the job" cop-out instead of admitting it's a bad tool and moving on. I don't think it's a good argument at all. It's a phrase you can use to defend anything.
> What is defined as complex architecture/could you elaborate?
Earlier in its lifespan, Heroku didn't have an easy way to add common components of infrastructure to an app. If you wanted to use Postgres + some Postgres plugins + Redis + some kind of queue manager, you basically had to split those out into separate apps that would run on their own "dynos" (to use the Heroku terminology).
That makes sense, right? We all do that now. We don't usually have Postgres running on our application server.
The difference is that Heroku only let those things talk to each other over HTTP at the time, so you essentially had to put a REST API in front of every component of your system.
It was, predictably, a huge mess that I honestly didn't see many companies fully adopting.
I’m fascinated that the service architecture ecosystem seems so fixated with polarity.
The right answer is pretty much always medium-sized services and medium-sized repos. I guess that isn't edgy enough to make people feel like they are doing something cool.
Engineering types often prefer to think in hard rules…either an approach is absolutely better or absolutely worse.
Saying medium-sized services are best means that the engineer needs to use their judgment to determine where that threshold is. It becomes more of an art than a science, which many engineers are deeply uncomfortable with, so they stick to their dogmas.
The issue with this lies in differing judgements. If I'm looking at a bit of code that I plan to extend, I have to understand the judgement of the coder who quit 2 years ago.
If the decision to use metric or imperial bolts on a bridge were up to the individual builder...
Picking a solid "do x" or "do y" makes for much simpler pull requests.
I currently work on a project with some GraphQL, some basic json and some json-api.
Some of the app goes through a BFF (backend-for-frontend) and other bits aren't clients of the microservices at all.
IMO the best way to split things is either by scalability, or user needs, or sometimes by domain (but this is too nuanced to fit into a comment). I suppose there might be security or certification/regulatory requirements too, but also those are probably rare.
Case in point: Kubernetes is not always useful. One can do very well without Kubernetes.
Let them come and tell me I need Kubernetes everywhere! Too many people think it's black/white and that there is no grey in between. And more and more often I realize that the grey is even larger than either of the (hard) sides.
I think he makes some good points, especially about how a lot of things really should be libraries. I'm also not sure I really understand the semantics of it all though. Because what is a color microservice? Something you call to get a color? Do organisations actually make stuff like that? That's wild.
I can only speak for the places that I have worked, but I think that what he calls an "App" is what would likely get called a "Microservice" in a lot of places.
From context, it seems to be a well known 'quintessential extreme example' of when you have obviously gone way too far into microservices. So it's probably something super trivial like coloring text maybe? Or maybe convert between color spaces?
The main problem with microservices is simply that most people don't understand how to design them correctly.
A poorly designed monolith is pretty much always going to be superior to a poorly designed microservices architecture. The floor protecting you from bad design is significantly higher.
The biggest mistake is that people tend to create services that are far too small, and too many of them. Then it gets chatty and you develop distributed spaghetti code. Or failing to divide them into coherent domains. Or overindexing on scaling individual functions/cost rather than logical boundaries.
You should almost always err towards services that are too large rather than too small. You can always subdivide further later on.
I interview tons of senior engineers and managers from reputable companies who describe designs that would produce immense technical debt.
I have found that it's more the dogma of putting up microservices for everything that spoils the pot.
My take on it is that we should have called them "distributed services", because micro implies that they have to be small.
Usually there are clear borders in your app around stuff that can be compartmentalized, but they usually should be called medium services instead.
Also - your database is a microservice. Design your data access layer accordingly: you don't need an API endpoint where a stored procedure (or the like) will do.
> My take on it is that we should have called them "distributed services", because micro implies that they have to be small.
> Usually there are clear borders in your app around stuff that can be compartmentalized, but they usually should be called medium services instead.
These are excellent points and probably loosely coincide with how many teams you might have working on different parts of the greater system as well.
> ...you don't need an API endpoint where a stored procedure (or the like) will do.
I'm not sure about this. Outside of niche cases and domains (like batch processes), stored procedures are harder to develop, harder to test, harder to debug, harder to audit (or at least less convenient to, when you want to ship your logs with something like GELF) and also have worse tooling for working with them, ensuring code style rules and so on.
Do you debug stored procedures?
47% Never
44% Rarely
9% Frequently
Do you have tests in your database?
14% Yes
70% No
15% I don't know
Do you keep your database scripts in a version control system?
54% Yes
37% No
9% I don't know
Do you write comments for the database objects?
49% No
27% Yes, for many types of objects
24% Yes, only for tables
How would you feel about code that about half of people have never debugged properly, almost three quarters of people don't bother testing, only about half use version control for, and about half don't even document? Because as far as I can tell, this matches my experience when working with the majority of databases out there: either the development culture or the tooling (or the entire architecture of those) isn't there yet, and it might never be.
While database design should be a major consideration that begs attention, I personally maintain that the majority of the functionality should go into the app, regardless of whether you use an ORM or not. An exception to this would be using plenty of database views to make querying easier and putting consideration towards how one could handle enums and such, so you don't end up with a database that's not possible to understand without reading the app source code in another window.
While I see what you mean, I think, personally, the point is moot and also applies to any scripting language.
If you have someone in your org specialized in SQL, debugging stored procedures is their job. By now there are also many adequate tools that will help you actually debug the code, and it has been a first-class citizen in Visual Studio for years now.
Also keep in mind that most APIs are CRUD, not complex business logic, and that there is a middle ground: build your own libraries instead of microservices and share those.
This of course is all heavily debatable and only based on my personal experience, but your statistics are missing the value for the number of people who are actually "good at SQL".
Also, a lot of people don't understand the value of relational data at all. So yeah, if you never learned to code, you can't program. SQL is not just a second-class citizen of your stack that "stores stuff", and of course you have to optimize for your use cases, but again this is exactly what I mean with my top comment: you sir are dogmatic too :D
SQL is AMAZING and can be directly connected to your frontend. That does not mean it's a smart thing to do.
There are some pretty good points in your comment and I'll agree that context is key here - the project, the team and even particular tech choices will have a lot of impact on what exactly you'll be dealing with and what might be the correct tool for the job.
> This of course all is heavily debatable and only based on my personal experience, but your statistics are missing the value for the amount of people who are actually "good at sql".
That said, one cannot just optimize for people who are good at a particular technology and instead one needs to look at the market averages, for the locale that they're in (unless they have that many choices of what culture to work in in the first place). In my case, where I am, we are still mostly on board with ACID and RDBMSes, however DBAs are, while not exactly a fleeting career, on the downturn (probably about 20 years away from being as obsolete as "GlassFish Server Manager", much like the DevOps trend digs into Ops engineer positions).
Sure, most people are okay with running EXPLAIN PLAN occasionally, but putting significant amounts of logic in the DB is usually a pain to deal with. I've seen first hand how much of a dumpster fire a system that has like 90% of the logic in the DB eventually gets: stored procedures for rendering forms, stored procedures for data validations, stored procedures for persisting data, with absolutely ALL of the issues that I mentioned before. Making changes in it was a mess, figuring out which of the hundred or so packages even needed the changes was a pain, the naming was nonsensical (due to length limitations for DB objects), it's like someone tried coding in a language that wasn't meant for writing proper code.
Of course, I've also seen some bad projects in the more traditionally tiered architectures, but none were as bad as the DB-centric design. That's my anecdotal experience, on which I'll base most of my future tech stack decisions. Of course, sometimes in-database processing is just the right way to go about things, especially in something like PostGIS where you don't have many options app-side for handling certain kinds of operations on the data, or need to do them without N+1 issues or too much network I/O due to latency.
Oh I agree here, my eyes also have seen terrible things.
I think one of the major points I'm trying to make has the same basis as old-school distributed communication.
Generate a Data Access Layer ( https://en.wikipedia.org/wiki/Data_access_layer ). That does not mean that you should be DB-centric, but data-centric. If your stack is written in one language, create a lib; that will be the key going forward.
Application logic as a whole can be distributed throughout thousands of services, but your DAL should be a common base, at least for things that live in one database.
Here you can use procs, views or queries depending on what you like, but once I hit a couple of million GUID entries in a DB, I can in fact sometimes shave seconds off queries by saving on round trips. Especially in ERP apps that "need to have all the data, always".
Don't convert everything to stored procs, of course, because then you are just as inflexible as having a microservice per entity.
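The shared-library DAL idea above can be sketched minimally as follows. This is an illustrative sketch only: sqlite3 stands in for the real database, and the `orders` schema and `OrderRepository` name are made up. The point is that every service imports this one module instead of each talking to the database its own way, and that aggregation happens in the database to save round trips.

```python
import sqlite3

class OrderRepository:
    """Shared data-access layer: the only code that touches the DB."""

    def __init__(self, conn):
        self._conn = conn

    def add(self, customer, total):
        # `with conn` wraps the statement in a transaction.
        with self._conn:
            self._conn.execute(
                "INSERT INTO orders (customer, total) VALUES (?, ?)",
                (customer, total),
            )

    def totals_by_customer(self):
        # One round trip: the aggregation runs in the database,
        # not in application code.
        rows = self._conn.execute(
            "SELECT customer, SUM(total) FROM orders GROUP BY customer"
        )
        return dict(rows.fetchall())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
repo = OrderRepository(conn)
repo.add("acme", 10.0)
repo.add("acme", 5.0)
print(repo.totals_by_customer())  # {'acme': 15.0}
```

Whether `totals_by_customer` is backed by a plain query, a view, or a stored procedure is then an implementation detail hidden behind the library, which is exactly the flexibility the comment is arguing for.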
All in all, I don't believe databases are ever gonna go away, and DBAs are still underrated, even though data is gold. As we have seen with machine learning, well-defined data sets are the key to better models, meaning faster and better results.
How long is the half-life of architectural best practice?
It seems to me that after ~10 years, half the cargo-culted ideas that young developers go all in on fail. Perhaps that's just long enough for them to get older and gain experience.
Perhaps the real issue here is the "thought leaders" who should know better but are perfectly happy to sell their ideas as risk free, when in fact these things are incredibly context sensitive to the team, problem and company shape.
Nobody ever seems to say "Do you believe what you're saying, or are you simply collecting a speakers fee?".
> How long is the half-life of architectural best practice?
This is a great question to ask, though I think one should also apply that same way of thinking to frameworks, languages and a lot of the other software out there (like OS distros or databases).
Sometimes there are good ideas that turn out not to be feasible, and the same can happen with technologies. But if a technology has been around for a large number of years and hasn't died yet, that's a good predictor of its continued existence: like Linux, or PostgreSQL.
> It seems to me that after ~10 years, half the cargo-culted ideas that young developers go all in on fail. Perhaps that's just long enough for them to get older and gain experience.
Another thing I've noticed is that sometimes the "spirit" of the idea remains, but the technologies to achieve it are way different. For example, we recognized that reproducible deployments are important and configuration management matters a lot, however over time many have gone from using Ansible and systemd services to shipping OCI containers.
These technologies bring their own cruft and idiosyncrasies, of course, but a lot of the time allow achieving similar outcomes (e.g. managing your configuration in an Ansible playbook, vs passing it in to a 12 Factor App through environment variables and config maps).
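The 12 Factor App point can be made concrete with a tiny sketch. The variable names below are examples, not from the original comment: the app reads everything it needs from environment variables, so the same artifact runs unchanged whether the values come from an Ansible template, a Kubernetes config map, or a local shell.

```python
import os

def load_config(env=os.environ):
    """Build app config from environment variables, with dev defaults."""
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "log_level": env.get("LOG_LEVEL", "info"),
        "workers": int(env.get("WORKERS", "4")),
    }

# Locally: defaults. In production: a ConfigMap or playbook sets the
# variables and the code doesn't change at all.
print(load_config({}))
```

The deployment tooling changed over the years, but this contract between app and environment is the part of the idea that survived.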
Of course, sometimes they bring an entirely new set of concepts (and capabilities/risks) that you need to think about, which may or may not be intended or even positive, depending on what you care about.
It would be interesting to know how long things like "making tall buildings" or "making safe bridges" took to shake out into roughly what they are today. I suspect, for example, the project planning aspect is pretty stable.
I had my own hell with Microservices during my last big-corp job. Having to deal with API migrations was a nightmare, because other teams never wanted to spend the time to switch to the newer versions, and it was hard to even figure out where in their codebase the API calls were being made from.
There is definitely still a lot of opportunity to breach the boundary between development and production to understand distributed systems
> it was hard to even figure out where in their codebase the API calls were being made from.
Could this have been worked around with https://www.jaegertracing.io/? That was one of the requirements at the organization I worked at.
We didn't go as far as to block traffic if the request didn't have a parent span (which included the name of the calling service), but if it became a problem I'm sure we could have and would have done that (we had 200+ microservices for fintech/banking across 10-15 teams), and figured out pretty quickly in the CAT/UAT/QA environment what wasn't passing a span.
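A sketch of the kind of gate described above, as framework-agnostic middleware logic: reject or flag requests that lack a trace-context header, so every caller is forced to propagate its identity. Jaeger's default propagation header is `uber-trace-id`; everything else here (the function name, the return shape) is a made-up simplification.

```python
def require_trace_context(headers):
    """Return (allowed, reason) for an incoming request's headers."""
    trace_id = headers.get("uber-trace-id")
    if not trace_id:
        # In a strict setup this would be a 4xx response; a softer
        # rollout just logs the offending caller.
        return False, "missing parent span: reject or log the caller"
    # The first colon-separated field of the header is the trace id.
    return True, f"traced request {trace_id.split(':')[0]}"

print(require_trace_context({"uber-trace-id": "abc123:def456:0:1"}))
print(require_trace_context({}))
```

Run in a pre-production environment first, a check like this surfaces exactly which services aren't propagating spans before anything is actually blocked.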
Unrelated tangent: Can I just take a moment to say how utterly exhausted I am with reading about how such and such OSS project is against the invasion of Ukraine, or for BLM, or against BDS, etc…? I really just don’t care and don’t want to hear about it anymore.
I sympathise with your feelings. Honestly, I'm tired of it too.
But the invasion hasn't stopped. Most in Ukraine are surely utterly exhausted from seeing their infrastructure, cities, housing and culture being destroyed on a daily basis. Exhausted from the daily disruption, broken families and the mental and physical impact of having to actively protect themselves or fight against it. But they have to keep living it and fighting against it, daily, right outside their own doors. They don't have a choice.
I am sure most in Russia are equally exhausted by the daily impact on their lives of sanctions, fear of repercussions for an opinion let alone any attempt at change, by being implicated in actions and conflict they don't agree with, or by being effectively forced to personally and physically go to the front lines.
It is exhausting, it is uncomfortable, and that is OK. Globally we need to keep rallying against this war on Ukraine. But maybe even more importantly, especially if you want to be selfish about it, we need to remember and be uncomfortable so that we're part of preventing the next conflict from starting.
For your personal mental wellbeing, if you're overwhelmed which I'm sure you are given the comment, my best piece of personal advice is to try and take a solid break from the news for a while. At a minimum try to avoid the hourly and daily scrolling on social media that exposes you to so much negativity and minimise the time on that to a smaller part of each day or week. It's really hard to do that, but it's the best I've got.
> But maybe even more importantly, especially if you want to be selfish about it, we need to remember and be uncomfortable so that we're part of preventing the next conflict from starting.
I agree with some of your points, but not this. Personally I don't have any need to have a background level of discomfort in order to stop the next conflict. This one was started by a deluded megalomaniac who did not consult me (or my country for that matter) before he invaded a neighbor and placed us all at risk of WW3. I'm sure the next one will likewise start without my input and against my wishes.
And whose fault is that?
What happened in the Cuban missile crisis? How is that different from what the US is doing with Ukraine?
How is Crimea different from Kosovo or Israel?
How can the US mass murder people, children in Iraq and Afghanistan and not a single soldier stand trial? How can the US have concentration camps on Guantanamo and nothing is done about it?
You’re welcome to not hear about it anymore, the door is -> any other focus of your attention. I’m personally in favor of you availing yourself of that freedom.
Judging by the downvotes I guess other like minded people want you to be free from other people caring about things by their own silence. What a pathetic sensitivity, to demand other people provide you free software without any expression of their own humanity.
As with any of the other solutions proposed, it would take a huge organization level shift for everyone to standardize under one RPC standard (including monitoring).
It is not for the lack of solutions in hindsight; it's just that constant feature creep and microservice development blindsided everyone to large-organization problems.
Can’t help but think this is what Elon Musk is facing at Twitter, where he’s looking to cut the ‘microservices bloat’ [0]. But with the complexity of a 1000+ microservice architecture, if not built properly, turning off a seemingly innocuous microservice can cause cascading issues and outages. Not to mention he fired a bunch of folks, so some of those microservices likely no longer have owners.
I don't buy any argument that claims one is better than the other. Most of the projects start as monolith, but as time goes on, the project turns into a big ball of mud. There may be a few exceptions, projects that had strong tech leads to enforce boundaries. Have you run into projects where the test suite takes hours to run? So, at some point people decide to enforce the boundaries at the service level, by splitting the monolith to services. Maybe in the future communication between services will become a bottleneck. This might be a good trade off from some perspective. You reduce the blast radius of bad decisions from engineers and make it easier to rewrite services that were done poorly.
> Most of the projects start as monolith, but as time goes on, the project turns into a big ball of mud.
Start as microservices and you get a bigger and dirtier ball of mud. Guaranteed. Because you have no notion of the boundaries at the start. A monolith at least has the luxury of automated refactorings.
The problem is that I see microservices adopted for two very different reasons:
1) We need to split this thing out because we need to isolate it
Problematic. Generally the issue is political--not technical. The service is necessary but not getting enough support--isolate it to force fixes. Some service not being responsive enough to internal customers--isolate so tickets now can be assigned and blamed on them. etc.
2) We need to split this thing out because it's a performance bottleneck and we need to scale it
A reasonable choice. The people scaling something need to be able to bound and limit the scope or they'll never make any progress.
I kinda disagree. I've scaled plenty of code by just adding another monolithic app server. It rarely matters where the load is coming from, another pile of resources evens it out.
But if you have a problematic part of your codebase that regularly shits the bed, keeping it in your monolith will take it all down. Splitting it out is the smart thing.
Couldn’t agree more. I also really like his idea of apps. Microservices introduce operational complexity and overhead. Most of the time, I see teams using them by default for what really should be a monolith.
I personally view anyone trying to design an application from the start using microservice architecture to be doing premature optimization. Microservices are meant to solve a scaling problem.
Yes, design your applications in a modular way such that it becomes readily possible to stub parts of it out when the time comes. Don't start with a microservice focus or you're just begging someone else to beat you to market.
> major things of value or value creation, possible core proposition, limited to a few apps
From his tweet thread. I think it's like a separate product except it's under the same company or domain name. Many companies do this. A product could be just a backend server providing API with its own database. All that is scoped by "value" .. or a target market.
This could imply that when microservices start to be introduced into a system, we should question the value proposition of the whole system. We may be mixing up the engineering and business aspects.
I have an inkling that Netflix wasted a lot of their potential with all this microservices mess. They had to hire very experienced engineers and pay them more than anyone else because that is the only way all those microservices could be maintained.
Hmmm, technologically they've always seemed fine. Even now many of the other streaming services struggle to do that: stream video. I've never had serious hiccups with Netflix that I can recall. CBS or HBO though? Ugh. Also their FreeBSD servers always seemed really impressive.
The headline might be a bit misleading, depending on what he intended to say. I don't read his thread as "The biggest architectural mistake at GitHub was...", more so that it was a mistake to bet 100% on microservices in general, not that they specifically did. The headline makes a close association between GitHub and this rant, but the thread doesn't really mention GitHub.
Our backend is microservice based because it's a thin layer over storage. There are functions that do processing work, but those exist so the thin layer works.
The big problem with microservices is handling failure. If you are chaining microservices together, you're screwed. That's why stuff like GraphQL exists.
That said, the reason microservices exist is because bloat and dependencies eventually choke monoliths. Also monoliths are difficult to scale in a cost-effective way.
Not my experience at all. We have zero problems maintaining large 20+ year monoliths written in C++. It’s all about how you architect the monolith and how you refactor as needed to improve it.
As a user of several internal websites written as microservices I can also say they are a mistake. You keep having to hit [Shift]+reload to get an accurate view of the current state. Each page takes ages to update. They break far more frequently than regular sites.
Of course an internal website with probably at most 1000 users should never have been written using microservices in the first place.
I think you're mistaking the front-end organisation for microservices, which are a backend thing. Where the service lies on the monolith-microservices spectrum doesn't have to be visible externally.
To cheaply support many small things, we have to place tremendous value on consistency at every level of ownership, limiting the number of skill sets and areas of knowledge that have to exist within our teams.