I find that architecture should serve the social structure of the engineering team, and there are limits. I work on one of these “simple architectures” at large scale… and it’s absolute hell. But then, the contributor count to this massive monorepo + “simple architecture” hell numbers in the thousands.
Wave Financial is only 350 people according to Wikipedia - I doubt that’s 350 engineers. Google and Meta are the only companies I know of that can even operate a massive monorepo, and I wouldn’t call their architecture “simple”. Even they make massive internal tooling investments - I mean, Google wrote their own version control system.
So I tend to think “keep it simple until you push past Dunbar’s number, then reorganize around that”. Once stable social relationships break down, managing change at this scale becomes a weird combination of incredible rigidity and absolute chaos.
You might make some stopgap utility and then a month later 15 other teams are using it. Or some other team wants to change something for their product and just submits a bunch of changes to your product with unforeseen breakage. Or some “cost reduction effort” halves memory and available threads, slowing down background processes.
Keeping up with all this means managing hundreds of different threads of communication. It’s just too much, and nobody can ever ask “what’s changed in the last week?” because the answer would be a novel.
This isn’t an argument for monoliths vs microservices, because I think that’s just the wrong perspective. It’s an argument to think about your social structure first, and I rarely see this discussed well. Most companies just spin up teams to make a thing and then don’t think about how those teams collaborate, and technical leadership never really questions how the architecture can support or block that collaboration until it’s a massive problem, at which point any change is incredibly expensive.
The way I tend to look at it is: solve the problem you have. Don't start with a complicated architecture because "well once we scale, we will need it". That never works; it just adds complexity and increases costs. When you have a large org and the current situation is "too simple", that's when you invest in updating the architecture to meet the current needs.
This also doesn't mean you shouldn't be forward thinking. You want the architecture to support growth that will more than likely happen, just keep the expectations in check.
> Don't start with a complicated architecture because "well once we scale, we will need it".
> You want the architecture to support growth that will more than likely happen
The problem is that even very experienced people can disagree about which forms of complexity are worth it up-front and which are not.
One might imagine that Google had a first-generation MVP of a platform that hit scaling limits, and then a second generation that scaled infinitely forever. What actually happens is that any platform that lives long enough needs a new architecture every ~5 years (give or take), so that might mean 3-5 architectures solving mostly the same problem over the years, with multi-year migration windows in between each of them.
If you're very lucky, different teams maintain the different projects in parallel, but often your team has to maintain the different projects yourselves because you're the owners and experts of the problem space. Your leadership might even actively fend off encroachment from other teams "offering" to obsolete you, even if they have a point.
Even when you know exactly where your scaling problems are today, and you already have every relevant world expert on your team, you still can't be absolutely certain what architecture will keep scaling in another 5 years. That's not only due to kinds of growth you may not anticipate from current users; it's also due to entirely new requirements with their own cost models, and new users bringing their own workloads, whether against old or new requirements.
I've eagerly learned everything I can from projects like this and I am still mentally prepared to have to replace my beautifully scaling architectures in another few years. In fact I look forward to it because it's some of the most interesting and satisfying work I ever get to do -- it's just a huge pain if it's not a drop-in replacement so you have to maintain two systems for an extended duration.