Monoliths aren’t very useful in many organisations where you need to build and connect 300+ systems, and they stop having a simple architecture if you try. Most architecture conferences and talks tend to focus more on the enterprise side of things, and really, why would you need full-time, software-focused architects if you’re building something like stackoverflow?
I do think things have gotten a little silly in many places, with too much “building like we’re Netflix”, because often your microservices end up being what is essentially a bunch of containerised monoliths.
I think the main issue is that your IT architecture has (or should have) very little to do with tech and everything to do with your company culture and business processes. Sometimes you have a very homogeneous focus, maybe even on a single product, in which case microservices only begin to matter when you’re Netflix. Many times your business will consist of tens to thousands of teams with very different focuses and needs, and in those cases you should never do monoliths unless you want to end up with technical debt that will hinder your business from performing well down the line.
> When writing a distributed application, conventional wisdom says to split your application into separate services that can be rolled out independently. This approach is well-intentioned, but a microservices-based architecture like this often backfires, introducing challenges that counteract the benefits the architecture tries to achieve. Fundamentally, this is because microservices conflate logical boundaries (how code is written) with physical boundaries (how code is deployed). In this paper, we propose a different programming methodology that decouples the two in order to solve these challenges. With our approach, developers write their applications as logical monoliths, offload the decisions of how to distribute and run applications to an automated runtime, and deploy applications atomically. Our prototype implementation reduces application latency by up to 15× and reduces cost by up to 9× compared to the status quo.
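To make the quoted abstract’s idea concrete, here is a minimal, hypothetical Python sketch of what decoupling logical boundaries from physical boundaries can look like. None of these names (`Inventory`, `get_inventory`, the deployment dict) come from the paper; they only illustrate the shape of the approach: application code talks to an interface, and whether the implementation behind it is in-process or remote is decided by deployment configuration rather than by the code.

```python
# Sketch of "write a logical monolith, let a runtime decide placement".
# All names here are hypothetical, not the paper's actual API.

from abc import ABC, abstractmethod


class Inventory(ABC):
    """Logical boundary: an interface the rest of the app codes against."""

    @abstractmethod
    def stock(self, sku: str) -> int: ...


class LocalInventory(Inventory):
    """Runs in the same process; calls are plain method calls."""

    def __init__(self) -> None:
        self._stock = {"widget": 3}

    def stock(self, sku: str) -> int:
        return self._stock.get(sku, 0)


class RemoteInventory(Inventory):
    """Same interface, but each call would be an RPC in a real system."""

    def __init__(self, endpoint: str) -> None:
        self.endpoint = endpoint

    def stock(self, sku: str) -> int:
        # A real runtime would serialize the call and send it to
        # self.endpoint; the caller cannot tell the difference.
        raise NotImplementedError("network call elided in this sketch")


def get_inventory(deployment: dict) -> Inventory:
    """The 'runtime': physical placement is a config decision, not a code one."""
    if deployment.get("inventory") == "local":
        return LocalInventory()
    return RemoteInventory(deployment["inventory"])


# Application code never changes; only the deployment config does.
inv = get_inventory({"inventory": "local"})
print(inv.stock("widget"))  # -> 3
```

The point, per the quoted paper, is that an automated runtime can then co-locate chatty components and roll the whole application out atomically, instead of developers hard-coding service boundaries up front.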
> Seems like the mistake is building 300+ systems instead of a handful of systems.
But that’s not what happens in enterprise organisations. 90% of those systems are bought as “finished” products, which then turn out not to be finished, and you can be quite certain that almost none of them are capable of sharing any sort of data without help.
Hell, sometimes you’ll even have three of the same system. You may think it’s silly, but it is what it is in a non-tech enterprise, where the IT department is viewed as a cost centre similar to HR, but without the charisma, and where most managers think we do magic.
Over a couple of decades I’ve never seen an organisation that wasn’t like this unless it was exclusively focused on doing software development, and even in a couple of those it’s the same old story, because they only build what they sell, not their internal systems.
One of the things I’m paid well to do is help transition startups from their messy monoliths into something they can actually maintain, often with extensive use of cheaper external developers, since the IT department is seen as pure cost (yes, it’s silly). You just can’t do that unless you isolate software to specific teams and then set up a solid architecture for how data flows between systems. Not because you theoretically can’t, but because the teams you work with often barely know their own business processes. I currently know more about specific parts of EU energy tariffs than the dedicated financial team of ten people who work with nothing else, because I re-designed some of the tools they use and because they have absolutely no process documentation and a high (I’m not sure what it’s called in English, but they change employees all the time).

Which is in all regards stupid, but it’s also the reality of sooo many places. Like, the company recently fired the only person who knows how HubSpot works for the organisation during downsizing… that’s the world you have to design systems for, and if you want it to have even a fraction of a chance of actually working for them, you need to build things as small and isolated as possible, going all in on team topologies even if the business doesn’t necessarily understand what that is. Because if you don’t, you end up with just one person who knows how the HubSpot integrations and processes work.
It’s typically the same with monoliths: they don’t have to be complicated messes that nobody knows their way around… in theory… but then they are built and maintained by a range of variously skilled people over 5 years, and suddenly you have teams hooking their Excel sheets directly into the massive mess of a DB. And whatnot.
> a high (I’m not sure what it’s called in English, but they change employees all the time).
To help you out, the word is attrition or turnover. Turnover would be more appropriate if the roles are refilled, attrition if the roles are never replaced.
Building like Netflix is better than the random, unguided architecture that results from not thinking. It might not be the best for your problem, though: if you don’t need thousands of servers, then the complexity Netflix has to put into their architecture to support that may not be worth the cost. However, if you do scale that far, you will be glad you chose an architecture proven to scale that large.
However, I doubt Netflix has actually documented their architecture in enough detail that you could use it. Even if you hire Netflix architects, they may not themselves know some important parts (they will of course know the parts they worked on).
I mostly use Netflix to mean having reached the technical point where you need to scale horizontally. Like StackOverflow, you can scale rather far without doing so, if your product isn’t streaming billions of gigabytes of video to the entire world across numerous platforms. So what I mean by it is that many of us will never reach those technical requirements. Sorry that I wasn’t clear. I don’t disagree with what you say at all, but I do think you can very easily “over-design” your IT landscape. Like, we have a few Python services which aren’t built cleverly and run in Docker containers without clever monitoring, but they’ve only failed once in 7 years, and that was due to a hardware failure on a controller that died 5 years before it should’ve.
That is how I use Netflix or stackoverflow: choosing either (despite how different they are!) is better than randomly building unstructured code with no thought for the whole system.
It’s a very bold assumption that a team that cannot manage a monolith will somehow lay the robust groundwork for a future Netflix-like architecture.
By the way - Netflix started as a monolith, and so did most other big services that are still around.
The rest faded away, crushed by the weight of complexity, trying to be "like Netflix".
There are much better options than copying someone else. Copying is better than letting anything happen, but you should do better: learn from Netflix, stackoverflow and the like - no need to repeat the mistakes they made - but your situation is different, so copying isn’t right either.