I agree with the general sentiment that simple architectures are better and monoliths are mostly fine.
But.
I've dealt with way too many teams whose shit is falling over due to synchronous IO even at laughably low volumes. Don't do that if you can avoid it.
"Subtle data-integrity bugs" are not something we should be discussing in a system of financial record. Avoiding them should have been designed in from the start.
So they can't get data-integrity constraints right with a single database? Wait until they have to do seven. Also, it sounds like proper indexes weren't even in place, so: database amateur hour.
Until one of your IO destinations develops some latency.
Or your workflow adds a few more sync IOs into each request.
Or you suddenly run outta threads.
Then even if you're only at millions per month you've probably got problems.
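To make the arithmetic concrete, here's a minimal C# sketch (the handler shape and URL are made up for illustration). With blocking IO, a pool of N threads against a downstream that takes L seconds caps you at roughly N / L requests per second no matter how cheap the actual work is; the async version hands the thread back to the pool while it waits.

    using System.Net.Http;
    using System.Threading.Tasks;

    class OrderHandler
    {
        static readonly HttpClient Client = new HttpClient();

        // Blocking: the pool thread sits idle for the whole round trip.
        // 100 threads x 500 ms of downstream latency = ~200 req/s, hard cap.
        public string HandleSync()
        {
            return Client.GetStringAsync("https://payments.example/charge").Result;
        }

        // Async: the thread returns to the pool during the wait, so a
        // latency spike downstream doesn't eat your whole thread budget.
        public async Task<string> HandleAsync()
        {
            return await Client.GetStringAsync("https://payments.example/charge");
        }
    }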
> Then even if you're only at millions per month you've probably got problems.
Not in my experience. You may be using metrics for B2C websites which make $1 for each 1 million hits.
B2B works a little differently: you're not putting everyone on the same box, for starters.
I did some contract maintenance for a business recently (they had no tech staff of their own, had contracted out their C#-based appdev to someone else decades ago, and just needed some small changes now), and a busy internal app serving about 8000 employees was running just fine off a 4GB RAM VPS.
Their spend is under $100/m to keep this up. No async anywhere. No performance problems either.
So, sure, what you say makes sense if your business plan is "make $1 off each 1 million visitors". If your business plan is "sell painkillers, not vitamins" you need maybe 10k paying users to pay yourself a full-time salary.
I had a similar thought when C# introduced async/await. "Why all this complexity? What was wrong with good old fashioned blocking calls?"
I don't know the answer for certain, but I'd like to. I think it has to do with the limit on the number of processes/threads the OS/framework can manage. Once you approach that limit, async/await lets the runtime take a thread that would otherwise sit blocked on IO and hand it back to the pool to do other work until the IO completes.
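If my understanding is right, the observable effect in C# looks something like this (a rough sketch; example.com is just a placeholder): awaiting an incomplete task returns the current pool thread, and the rest of the method runs later as a continuation, possibly on a different thread.

    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    class Demo
    {
        static readonly HttpClient Client = new HttpClient();

        static async Task Main()
        {
            Console.WriteLine($"before await: thread {Thread.CurrentThread.ManagedThreadId}");
            // While the download is in flight, no thread is blocked on it;
            // the pool thread goes off to serve other work.
            await Client.GetStringAsync("https://example.com");
            // The continuation may resume on a different pool thread.
            Console.WriteLine($"after await:  thread {Thread.CurrentThread.ManagedThreadId}");
        }
    }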
1. Async allows you to parallelise tasks. If a request from a user needs to hit three logically independent endpoints, you don't need to do that in sequence; you can do them in parallel, and the user gets a response much quicker (see the sketch after this list).
2. OS threads can be expensive in the sense that each one ties up a fair amount of memory, to the point where you can run out of threads. This is worse in some environments than in others. Apart from async, another solution for this is virtual / green threads (as in Erlang, Haskell and, much more recently, Java).
3. Some async implementations enable advanced structured concurrency patterns, such as cancellation, backpressure handling, etc.
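For point 1, a minimal C# sketch of the fan-out (the service URLs are invented): all three calls start before any is awaited, so the total wait is roughly the slowest call rather than the sum.

    using System.Net.Http;
    using System.Threading.Tasks;

    class Aggregator
    {
        static readonly HttpClient Client = new HttpClient();

        public static async Task<string[]> FetchAllAsync()
        {
            // Start all three requests before awaiting any of them.
            Task<string> a = Client.GetStringAsync("https://svc-a.example/profile");
            Task<string> b = Client.GetStringAsync("https://svc-b.example/orders");
            Task<string> c = Client.GetStringAsync("https://svc-c.example/prefs");

            // Total wait is about max(a, b, c), not a + b + c.
            return await Task.WhenAll(a, b, c);
        }
    }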
Recently I worked on a project that was using synchronous IO in an async framework. That tanked performance immediately: the application could effectively service one request at a time while subsequent requests queued up behind it.
(Agreed that synchronous IO can serve hundreds of requests per second with the right threading model)
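The shape of the bug was roughly this (illustrative, not the actual project code): a blocking call inside a handler the framework expects to yield, so the worker is pinned for the full duration and everything else queues behind it.

    using System.Threading;
    using System.Threading.Tasks;

    class Handlers
    {
        // Anti-pattern: blocking inside a handler the framework treats as async.
        // The pool thread is held hostage for the full two seconds.
        public Task<string> BadAsync()
        {
            Thread.Sleep(2000); // stands in for a blocking DB/HTTP call
            return Task.FromResult("done");
        }

        // Fix: actually yield the thread while waiting.
        public async Task<string> GoodAsync()
        {
            await Task.Delay(2000); // stands in for properly awaited async IO
            return "done";
        }
    }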