>I was thinking more about _why_ people want that.
I think there are two primary reasons.
1.) A developer wants to develop/test against an x.y release that only changes minimally (major bug and security fixes) for an extended period of time.
2.) The point release model, where you decide when/if to install upgrades, is just "how we've always done things", and a lot of people aren't comfortable with changing that (even though they've effectively already given that up for anything running in a public cloud or as SaaS).
Re: point 2, I don't know how different that is for stable distributions — e.g. if you're running Debian stable you're in control of upgrades and you can go years without installing anything other than security updates if you want.
Re: point 1, I'm definitely aware of that need, but the only cases where I see it are commercial settings where people have contractual obligations, either for software they're shipping or for supported software they've licensed. In those cases, I question whether saving the equivalent of one billable hour per year is worth not being able to say “We test on exactly the same OS it runs on”.
I largely agree with your other points.