Hacker News

Right, but the price one pays is outdated packages.

CentOS 8 was released a few days ago with kernel 4.18, which is not even an LTS release, and is older than the current Debian stable kernel(!).

If you need to install anything beyond the base distro you need elrepo, EPEL, etc., which I'm not sure can be counted as part of the support.




RHEL kernel versions are basically incomparable with vanilla kernel versions. They have hardware support and occasionally entire new features that have been backported from newer kernels in addition to the standard security & stability patches.

This means that RHEL 7, using a "kernel version" from 2014, will still work fine with modern hardware for which drivers didn't even exist in 2014.
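You can see this for yourself on a RHEL/CentOS box: the kernel reports its old base version, while the package changelog records the backported work. A rough illustration (the version string shown is just an example):

```shell
# The kernel reports its old base version...
uname -r    # e.g. 3.10.0-1160.el7.x86_64 on RHEL 7

# ...but the RPM changelog documents the stream of backported fixes:
rpm -q --changelog kernel | grep -ci backport
```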


That is not a good thing. RH frankenkernels can contain subtle breakage. E.g. the Go and Rust standard libraries needed to add workarounds because certain RHEL versions implemented copy_file_range in a way that returned error codes inconsistent with the documented API: the patches had been backported for some filesystems but not for others. These issues never occurred on mainline.
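The workaround pattern both stdlibs ended up with amounts to "try the kernel-side copy, and on errors suggesting missing or partial support, fall back to a userspace loop". A minimal Python sketch of that pattern (the function name and the exact errno set are illustrative, not the actual Go/Rust code):

```python
import errno
import os

# Errors that suggest the kernel lacks (full) support, rather than a
# genuine I/O failure. Illustrative set, not exhaustive.
FALLBACK_ERRNOS = {errno.ENOSYS, errno.EXDEV, errno.EOPNOTSUPP}

def copy_data(src_fd: int, dst_fd: int, count: int) -> int:
    """Copy up to `count` bytes from src_fd to dst_fd.

    Tries the kernel-side copy_file_range() first; on errors that
    indicate missing/partial support, falls back to read/write.
    """
    if hasattr(os, "copy_file_range"):  # Linux, Python 3.8+
        try:
            copied = 0
            while copied < count:
                n = os.copy_file_range(src_fd, dst_fd, count - copied)
                if n == 0:
                    break
                copied += n
            return copied
        except OSError as exc:
            if exc.errno not in FALLBACK_ERRNOS:
                raise
            # else: fall through to the userspace loop below

    # Fallback: plain userspace read/write loop.
    copied = 0
    while copied < count:
        chunk = os.read(src_fd, min(65536, count - copied))
        if not chunk:
            break
        os.write(dst_fd, chunk)
        copied += len(chunk)
    return copied
```

The subtle part the comment describes is that on the affected RHEL kernels the syscall existed and returned, but with error codes the documented API didn't promise, so even careful fallback logic like the above had to be adjusted.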

And for the same reasons that the affected users chose a "stable" and "supported" distro they were also unable to upgrade to one where the issue was fixed.


True, but it is a matter of weighing risks. I can't find it now, but I remember a few years ago there was a news story about how an update to Ubuntu had caused hospitals to start rendering MRI scan results differently due to differences in the OpenGL libraries. For those sorts of use cases, stable is the only option.


I think this is a perfect use case for CentOS/RHEL as opposed to Ubuntu, when the machine has only one job and nothing shall stand in its way, i.e. when you expect everything to be bug-for-bug compatible. But I fail to understand why a vendor of an MRI machine charging tens of thousands for installation/support cannot provide a supported RHEL OS, which costs $180-350/yr in the cheapest config [1].

[1]: https://www.redhat.com/en/store/linux-platforms


No, it doesn't. I will tell you my experience.

They bought some fancy new computers at work. Our procedures say to use CentOS 7, so we tried it; it ran like shit. Then we reinstalled with CentOS 8: same. It worked, but the desktop was extremely slow. After much hair-pulling I found the solution: add the elrepo-kernel repository and update to kernel 5.x.
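For anyone hitting the same wall, the fix described above looks roughly like this (package/repo names from memory; verify against elrepo.org before running on a real system):

```shell
# Add the ELRepo repository, then install the mainline ("ml") kernel
dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
dnf --enablerepo=elrepo-kernel install kernel-ml

# Make the newly installed kernel the default boot entry, then reboot
grub2-set-default 0
reboot
```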

No amount of backporting magic will make an old kernel work like a new kernel.


That's not a price, that's a feature.

If your use case doesn't consider avoiding noncritical behaviour changes for a decade to be a feature, you have other options.


If you need EPEL or quicker life cycles, then CentOS Stream should be just fine for you as well.

People that run CentOS in prod are normally running ERP systems, databases, LoB apps, etc., and the only thing we need is the base distro and the vendor binaries for whatever service/app needs to be installed, and probably an old-ass version of the JDK...

We need every bit of that 10-year life cycle, and we're glad that we will probably only have to rebuild these systems 2 or 3 times in our careers before we pass the torch to the next unlucky SOB who has to support an application that was written before we were born...

that is CentOS Administration ;)


The patch delta for security fixes must get larger over time as these packages age further and further away from top of tree.

I always wonder how many major vulnerabilities are introduced into these super old distros due to backporting bugs.


It's the opposite. Plenty of subsystems in the RHEL 8.3 kernel are basically on par with upstream 5.5 or so, as almost all the patches are backported. The source code is really the same to a large extent, and therefore security fixes apply straightforwardly.


So, why is RHEL not using the upstream kernel? It would allow them to avoid those issues with Rust and Go (and probably other software): https://news.ycombinator.com/item?id=25447752


RHEL maintains a stable ABI for drivers.

Plus, there are changes (especially around memory management or scheduling) that are fiendishly hard to do regression testing on, so they are backported more selectively.


Security audit / certification would be my guess.


That's great but what about all the other packages?


The upstreams for most other packages generally move much more slowly than the kernel. The fast ones (e.g. X11, systemd, QEMU) are typically rebased every other update or so (meaning, roughly once a year).

It also helps that Red Hat employs a lot of core developers for those fast moving packages. :)


Documented cases don't seem to be common, but two that come to mind are the Debian "weak keys" scandal (2008) and the VLC libebml vulnerability (2019) [1].

[1]: https://old.reddit.com/r/netsec/comments/ch86o6/vlc_security...


OpenSSL upstream was almost abandoned during those days.

Software is always gonna have bugs; it's written by humans, after all. The important thing is to acknowledge them and work towards an ideal outcome.


"Weak keys" didn't have anything to do with backporting fixes to older versions. It was introduced into the version in sid at the time.


The kind of people that are up in arms that CentOS 8 isn't going to be supported through to 2029 are using it because it has outdated packages.


Agreed, the packages in CentOS / RHEL are all super old. The RHEL license structure changes all the time, and depending on which one you get, it may or may not include the extended repos.





