
I can't imagine you leaned into any one of those releases, then. That sequence involves major changes to the kernel, the init system, the configuration management tools, the core libraries, Apache, Python, Perl, etc. Any one of those alone could (and did, in my experience) trigger a major rewrite of configuration and/or code.

I'm glad it was painless for you. In my experience, it was not, and most of the reasons were beyond my control.


What does "lean into" mean here? A lot of software from 20 years ago compiles (if needed) and runs fine on the latest versions.

Every major release of every major distribution makes choices: what software to include in the first place, which versions of that software to pin (especially for LTS releases), what default configuration to provide, which approaches to recommend for certain problems, and so on. These choices are made based on the experience and opinions of the distribution maintainers. However, those maintainers are (usually) not major contributors to the software they're distributing. This means distros can make "bad" choices, for example focusing on software that eventually dies out, or recommending configurations that eventually get deprecated or removed. Sometimes these choices even exclude what will become the winning alternative, leaving no migration path except complete and total overhaul.

If all Linux is to you is a place to run some application software, these choices are mostly irrelevant. As long as the software you care about continues to run, the other things are just picayune details. If this comes off as derisive, I apologize, because I actually broadly endorse that view of things, as far as it can be achieved. But if you start really taking advantage of the things which the distribution provides out of the box and recommends, especially around large-scale multi-system operation, you end up buying into the distribution's choices. When a large organization you're a part of does it too, the sunk costs really start to mount. As the Linux ecosystem continues to evolve, especially in directions other than the ones the distribution chose at the time, the cost of migrating to later releases grows. To me, this is all a good reason not to marry oneself so tightly to those particular choices, but that isn't always feasible with deadlines and compliance requirements and so on bearing down on the sysadmin.

There's also an even bigger problem that can arise: the distribution can simply end, as happened with CentOS, leaving lots of people hanging. In that case, I know some who started to pay Red Hat for RHEL, but most seem to have moved on to other distros, like Ubuntu. That kind of migration has a lot of the same issues too, once again leading me to recommend not leaning into the particulars too much.


I use Debian, bash, and perl for setup and config. There has been almost no work involved over the past decades; everything still works fine. I do not like busy work; doing things the hard way does not make me money or bring me happiness. Running SaaS products (on cheap non-cloud hardware that never dies) does, and that's what I have done for the past 25 years.

There is no need to adapt things that already work as they are.


> But if you start really taking advantage of the things which the distribution provides out of the box and recommends, especially around large-scale multi-system operation, you end up buying into the distribution's choices.

You mean management interfaces and repo mirroring stuff provided by the OS vendor, like cockpitd and Satellite and whatever?


Sure, that's part of it, if those tools are used. Daemons like the particular flavor of syslog and cron are also part of it. Patched kernels used to be more common, too. I listed a bunch of things that actually broke for me before in a sibling thread; sometimes it was down to e.g. the Python packages that were in EPEL vs. the Python packages that were actually being maintained by their original authors in PyPI, or various security tools configured around paths that changed, etc. There were usually workarounds or alternatives, but they were more difficult to set up than doing things the "native" way.
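To give a flavor of what one of those workarounds looks like, here's a rough sketch (the install path and package pins are placeholders, not anything from a real system): give the app its own virtualenv and pin its dependencies straight from PyPI, so it no longer matters which Python packages EPEL carries.

    # isolated environment; the app no longer depends on EPEL's python-* packages
    python3 -m venv /opt/myapp/venv
    /opt/myapp/venv/bin/pip install --upgrade pip
    # pin exact versions from PyPI; "requests" and "lxml" stand in for the real dependencies
    /opt/myapp/venv/bin/pip install 'requests==2.31.0' 'lxml==4.9.3'

The cost is that you now own the update cadence for those pins yourself, which is exactly the "more difficult to set up than the native way" part.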

I see! Thanks for referring to your sibling post; that made what you're talking about much clearer.

And yeah, if you package stuff against, e.g., the Python libs included in the distro (or EPEL), you essentially need to maintain your own repo downstream of the distro, then rebuild the whole thing against whatever subsequent release becomes the new upstream when it's time to upgrade. That kind of thing is doable, but it's substantial integration work, and if it's something you do once a decade, nobody is ever going to be fluent in it when it's time to do it.

I think I'd rather just maintain two repos, one against the latest stable release and one against the upstream rolling release (Fedora Rawhide, Debian Unstable, openSUSE Factory or Tumbleweed, etc.), and upgrade every 6 months or whatever, than leap the wider chasms between LTS releases.
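In case it helps picture it, something like this is what I have in mind; a sketch only, with the mock chroot names, SRPM paths, and repo directories as placeholders:

    # rebuild the same SRPMs against the current stable chroot and the rolling one
    for chroot in fedora-40-x86_64 fedora-rawhide-x86_64; do
        mock -r "$chroot" --resultdir "repo/$chroot" --rebuild srpms/*.src.rpm
        createrepo_c "repo/$chroot"
    done

The rolling-release repo exists purely so breakage shows up a little at a time, every 6 months, instead of all at once a decade later.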

And yeah, the Python interpreter and Python libs shipped in a distro are generally there for the distro's own integration purposes, which may involve different goals and constraints than app developers usually have. Building against whatever a distro ships is not always the best way, as your painful migrations demonstrated.


> There's also an even bigger problem that can arise: the distribution can simply end, as happened with CentOS

If you are doing something serious, you probably want to choose suppliers in such a way that you can demonstrate you have security and business continuity under control. That means you probably want to use RHEL, SUSE, or Ubuntu: distributions for which commercial support exists.

(Ubuntu is particularly interesting because you can start with an LTS release for free and activate commercial support if business goes well, without changing your processes.)
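For what it's worth, the activation step on an existing LTS machine is roughly this (a sketch; the token comes from your Ubuntu Pro account, and which services you enable depends on the subscription):

    # attach the machine to an Ubuntu Pro subscription, then turn on extended security maintenance
    sudo pro attach <YOUR_TOKEN>
    sudo pro enable esm-infra
    sudo pro status   # confirm what is now covered

The installed packages and your existing automation stay exactly as they were; only the update sources gain the extra coverage.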

You can think about this beforehand, or wait until customers require some kind of certification and the auditors ask you for your supplier list plus the business continuity plan, among other things. You will face this if you deliver to a regulated market, or if your customers are large enough to self-regulate this kind of thing.

LTS not good enough? Well, cloud-native offerings make no LTS commitment, and PyPI does not provide security fixes separate from logical changes.

Try to keep your Terraform code stable for two years on AWS, or try to understand the lifecycle of AWS Glue versions from the docs. Or trust that Google will not discontinue their offerings :-)

I mean, maintaining software is never easy or effortless, but I respect the effort the LTS Linux providers put in: they sell stability and security for a fraction of what you pay for cloud native.


apache -> nginx. Python versions. postgres. All fine.



