Even systems with 15 years of legacy and eons of man-hours behind them get replaced. These projects do not always succeed, but I've seen it happen.
A rewrite from scratch doesn't need full parity with the old system as long as you can discover, implement and test all the critical business use cases beforehand.
That last part is also the most common reason why large system replacement projects fail, go over budget, or are not even attempted.
Often the people controlling the money do not have the technical knowledge to see how much complexity the old system hides. The safe bet is to keep it going.
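To make the "test all the critical business use cases beforehand" part concrete: the cheapest approach I've seen is to run the rewrite side by side with the old system and replay recorded traffic against both. A minimal sketch, assuming both systems expose HTTP endpoints; the hostnames, paths and payloads are placeholders, not anyone's real API:

    // Hypothetical parity check: replay recorded requests against the legacy
    // system and the rewrite, and diff the responses. The hostnames, paths
    // and payloads below are placeholders, not anyone's real API.
    type RecordedCase = { name: string; path: string; body: unknown };

    const LEGACY_BASE = "http://legacy.internal";   // assumption: old system reachable in a test env
    const REWRITE_BASE = "http://rewrite.internal"; // assumption: rewrite runs side by side

    async function call(base: string, c: RecordedCase): Promise<string> {
      const res = await fetch(base + c.path, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(c.body),
      });
      return `${res.status} ${await res.text()}`;
    }

    async function parityReport(cases: RecordedCase[]): Promise<void> {
      for (const c of cases) {
        const [oldOut, newOut] = await Promise.all([call(LEGACY_BASE, c), call(REWRITE_BASE, c)]);
        // Any mismatch is either a bug in the rewrite or an undiscovered business rule.
        console.log(oldOut === newOut ? `OK   ${c.name}` : `DIFF ${c.name}`);
      }
    }

    // Example run with a couple of recorded cases (normally loaded from captured traffic).
    parityReport([
      { name: "create invoice", path: "/api/invoices", body: { customerId: 42, total: 100 } },
      { name: "empty order", path: "/api/orders", body: { customerId: 42, items: [] } },
    ]).catch(console.error);

Every DIFF in that report is either a bug in the rewrite or a business rule nobody wrote down, and surfacing those early is exactly the discovery work I mean.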
> Even systems with 15 years of legacy and eons of man-hours behind them get replaced. These projects do not always succeed, but I've seen it happen.
Yes. It can happen. If you can't do it with a 99.9% success rate, you cannot make it a general business practice.
> A rewrite from scratch doesn't need full parity with the old system as long as you can discover, implement and test all the critical business use cases beforehand.
Beyond my general disagreement that it shouldn't need full parity (man, our clients would be pissed off if we cut functionality "because it was easier to rebuild a system that was functioning quite well on your end", and they have guns!), I don't think this takes into full account how much can be hidden in years and years of commits and code. We have systems being maintained where the team responsible goes "look we have a million lines of SQL stored procs. We don't know what all of them do because most of them were written in the mists of time and there's no documentation, or the documentation is so obviously wrong we can ignore it entirely". This, in spite of all the handwringing about how this would never have happened if people maintained proper documentation, will happen in any legacy application. We're talking about a hundred people working side by side over decades. These things will slip through, or 'best practices' change so much that things that were a good idea then are entirely unknown now.
Even something as non-intrusive as attempting to strangler-fig this will take up a lot of time and effort, if it can even be done correctly.
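And to be clear about what strangler-fig means in practice: you put a routing facade in front of the legacy app and peel routes off one at a time. Something like the rough sketch below, where the hostnames and the list of migrated paths are invented, and even this "simple" piece needs auth, sessions, error handling and ops buy-in before it routes a single production request:

    // Rough sketch of the routing facade a strangler-fig migration needs in
    // front of the legacy app. The hostnames and the list of migrated paths
    // are invented for illustration.
    import { createServer, request } from "node:http";

    const LEGACY = { host: "legacy.internal", port: 8080 };
    const REWRITE = { host: "rewrite.internal", port: 8081 };

    // Routes already re-implemented and verified; everything else keeps
    // hitting the old system untouched.
    const MIGRATED_PREFIXES = ["/api/invoices", "/api/customers"];

    createServer((req, res) => {
      const migrated = MIGRATED_PREFIXES.some((p) => req.url?.startsWith(p));
      const target = migrated ? REWRITE : LEGACY;

      // Forward the request as-is and stream the upstream response back.
      const upstream = request(
        { ...target, path: req.url, method: req.method, headers: req.headers },
        (up) => {
          res.writeHead(up.statusCode ?? 502, up.headers);
          up.pipe(res);
        }
      );
      req.pipe(upstream);
    }).listen(3000);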
> We have systems being maintained where the team responsible goes "look we have a million lines of SQL stored procs. We don't know what all of them do because most of them were written in the mists of time and there's no documentation, or the documentation is so obviously wrong we can ignore it entirely".
Interesting - when thinking of systems that I've personally worked on which I could never imagine being replaced, the first one that came to mind was one with exactly the same problem: a massive cache of SQL stored procedures at the core that nobody seemed to fully understand. This particular company had even implemented stuff like HTTP auth token validation and all access control with SQL scripts without any centralized access control architecture. It was outright terrifying to touch the SQL.
When I started at the company, the manager advertised that they "really have no upper limit" on how much they compensate developers who stick with them for years.
I guess a big problem with implementing large parts of the application logic in SQL is that it's simply not a tool fit for the purpose. Incrementally replacing, or even reasoning about, logic splintered across assorted SQL scripts is very difficult. So nobody has the full picture of everything the monolith does.
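The only way I've seen teams make progress on that kind of splintered logic is to pin down one proc at a time with characterization tests before porting it: record what the proc actually returns for real inputs, then hold the new application-side code to those recordings. A rough sketch, where the proc name, the recorded outputs and the discount rule are all made up:

    // Characterization ("golden master") test for one stored proc before
    // porting it. The proc name, recorded outputs and discount rule are all
    // made up; the point is the shape of the test, not the numbers.
    import { strictEqual } from "node:assert";
    import { test } from "node:test";

    // Outputs recorded by running the real usp_CalcDiscount once against
    // production-like data.
    const GOLDEN = [
      { customerId: 42, total: 100.0, discount: 5.0 },
      { customerId: 42, total: 0.0, discount: 0.0 },
      { customerId: 7, total: 80000.0, discount: 2000.0 }, // edge cases ideally come from real traffic
    ];

    // The new in-app implementation meant to replace the proc.
    function calculateDiscount(customerId: number, total: number): number {
      const rate = customerId === 7 ? 0.025 : 0.05;
      return Math.round(total * rate * 100) / 100;
    }

    test("ported discount logic matches recorded usp_CalcDiscount output", () => {
      for (const g of GOLDEN) {
        strictEqual(calculateDiscount(g.customerId, g.total), g.discount);
      }
    });

Slow going, but it's the only way I know to rebuild that full picture, one proc at a time.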