I wonder if Scott Hanselman ever had to maintain a large system while having limited resources to do so. Churn is bad for most, apart from companies that must sell new things to keep going, like software companies. Moving fast and breaking things is great when you don't have to consider the cost of churn. Calling the anti-churn crowd "dark matter developers" seems at least a bit derogatory.
I've worked on a team before where every dev said "we need to upgrade X, we're Y versions behind" and management always responded with "no budget, this is not a priority". This continued until our hand was forced by a company-wide initiative to modernize some aspects of our architecture, and all of a sudden our team was frowned upon for being so far behind. Then we had to spend many months on an "upgrade project". I'm sure this same story has played out many times over in many other companies. In my experience, lack of tech currency and accumulation of tech debt is often a management/organization problem. Managers won't get rewarded for upgrading things until their boss makes it a priority and there is some highly visible "upgrade project", so why bother doing it now?
So, I think there are two antipatterns and a pattern here.
Antipattern one is the one where you just get on the treadmill and churn, baby, churn. Constantly replacing things, spending way too many resources on tech and not enough on meeting customer needs. This is sort of the "new sports car every two years" model, or, as a certain person who hates this site enough to make linking to him pathological calls it, the Cascade Of Attention Deficit Teenagers model. Nothing stays around long enough to enter long-term maintenance, because you keep chasing after the hot shiny thing instead of having the discipline to maintain a mature product even though maintenance isn't fun.
Antipattern two is where you don't have the resources to do maintenance or to chase after the shiny thing. This is basically treating the code as a monument, or as a museum exhibition -- every time a new feature is done, you hang it in the museum, and then you go work on the next feature. The problem with this approach is that Someday The Bill Comes Due. A dependency that you have hits EOL and nobody's made a new version in three years. You spend your time writing code to duplicate features that your language's standard library added four years ago but that you don't have access to because you haven't updated your stack in eight years. New features get harder and harder to add because it gets harder to reason about what any one piece of code in the program actually does, or everything is bottlenecked by the one guy who understands the codebase. Your bus factor is one if you're lucky and zero if the bus came.
The pattern is treating your code like a house. It needs upkeep. If you leave a leaky bit of plumbing long enough you're going to have a flooded basement, so you bite the bullet and fix the plumbing when something goes wrong. But you don't trade in for a new house every two years either. You have a project list and you tackle it as best you can, and sometimes you take a few days off work for a big project like painting or what have you. And you build equity that lasts.
Sure, there's an extreme on both ends. The problem you're describing is that the suggested upgrades were probably not frictionless; managers were burnt before by needless upgrades that took too many resources for little value. If tech companies took much more care to make upgrades smooth and not break stuff, then users of the tech would not see upgrades as needlessly risky.
An example of this is .NET Core vs .NET Framework. .NET Core broke many things, did not offer full compatibility from day one, introduced netstandard (which, as of its latest version, is not implemented by .NET Framework), introduced a new toolset (dotnet) alongside msbuild (bear in mind that VS still uses MSBuild), and so on.
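To make the netstandard point concrete, here's a rough sketch of what library authors end up doing (MyLib.csproj is a hypothetical SDK-style project; specifics will vary): .NET Framework only implements up to .NET Standard 2.0, so a library that wants newer APIs has to multi-target and keep a 2.0 build around for netfx consumers.

    <!-- Hypothetical MyLib.csproj: a minimal multi-targeting sketch. -->
    <!-- netstandard2.0 is the highest standard .NET Framework implements; -->
    <!-- netstandard2.1 APIs are only reachable from .NET Core 3.x / modern .NET. -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFrameworks>netstandard2.0;netstandard2.1</TargetFrameworks>
      </PropertyGroup>
    </Project>

You then build it with dotnet build and fence 2.1-only code behind #if NETSTANDARD2_1, while Visual Studio itself still drives the whole thing through MSBuild.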
This is all a matter of incentives, both for tech companies and for their workers - they are often rewarded for short-term impact that is easy for managers to see, not for bugfixing, stability, or long-term support. It was not always like this - in the 90s MS would be picked because they took care not to break stuff. Migrating .NET 1.1 to 2.0 was super easy.
There is always going to be major flux as new architectures evolve until they are stable. If this had happened behind closed doors, there would have been little input from outside Microsoft.
While there have been legitimate problems with this process - for example, the explicit "go live" given by Microsoft before things were truly stable - the choice was made to develop in public, and the net result (sorry) seems positive to me.
Depends on the success metric. If you look at the cost of the tech churn to the businesses that have to spend much more than before to achieve similar results, then it is not a positive result for anyone but the tech providers and the (young) developers needed to do the redevelopment. I wonder how long businesses will tolerate this if the promised efficiency gains do not materialize for most of them.
> I wonder how long businesses will tolerate this if the promised efficiency gains do not materialize for most of them.
I think you've stumbled on the core issue here. For an unchanging company whose digital business does not need to evolve very much, having a basic IT department may just work. But as we digitize more and make technology a core competency and critical differentiator, a legacy, inflexible tech stack becomes a burden. It doesn't let you add the features that your customers may want, features which might be critical to a continuing and successful business relationship. Your customers then face similar demands from _their_ customers... all the way back to the primary sector.
I know Scott meant nothing derogatory by it. "Dark matter developers" refers to the developers you don't see. If someone is offended by that, I think it's a sign that they desperately wish they could be at the leading edge but aren't allowed to be. There are also plenty of developers in the industry who simply have a different set of values.