I think the name and the logo are good. Don't listen to the people who just keep talking and not helping! Good to see that a CentOS co-founder picked this up and has now become a founder of Rocky Linux. This shows dedication and a rock-solid background. Rocky Linux will be a project to follow, help, and use in production environments. Thank you for all your hard work!
> I think the name and the logo are good. Don't listen to the people who just keep talking and not helping!
Agreed. It's funny that people here are complaining that "Rocky Linux" isn't a professional name and that they won't be able to convince corporate clients to use it. Yet there exists a billion-dollar-revenue company named "Red Hat", which is clearly a "professional" name.
"Thinking back to early CentOS days... My cofounder was Rocky McGaugh. He is no longer with us, so as a H/T to him, who never got to see the success that CentOS came to be, I introduce to you...Rocky Linux"
— Gregory Kurtzer, Founder
I am not even sure what is going on.
1. The name has a meaning, as shown in the quote above. And it is a tribute / honour to a founder of a previously well-known project (CentOS).
2. That meaning and the linkage are in themselves extremely marketable, especially to the target audience, which is already using CentOS.
3. The message explaining its meaning has been there since day 1, according to the GitHub history.
4. But more than half of the comments (150) are pissing on its name.
5. Which suggests that either a) they didn't actually click on the link and read anything, or b) they don't like the name for whatever reason.
6. I will be judgemental, and I am willing to bet that those who are complaining about the name have never done any professional marketing or sales for any decent period of time.
Having said all that, they are still entitled to their opinion. But it also shows why product development and marketing based on surveys doesn't really work.
It's widely accepted nowadays, but it was pretty weird in 2004. I got my fair share of jokes in 2005-2007 from friends when talking about that "ubuntu" thing, and I'll spare you the examples.
The notorious local news story about a woman who flunked out of college because her laptop came with Ubuntu comes to mind. I think that was in the mid 00s.
Same here. Things that come to mind: Rocky, the champion boxer who can "go the distance" and never gives up... the Rocky Mountains, which are, well, rocky and strong... Rocky Road ice cream, which tastes good.
A zillion times better than trying to explain what a "Suse", an "Ubuntu", or a "Manjaro" is, and that's before talking about the various types of hat-based distros.
Reminds me of how people don't even give CockroachDB a chance because of its name. Every time it's mentioned on HN, people can't help but bring up its name.
I think Rocky is good. CockroachDB, on the other hand, is an awful name. Cockroaches are associated only with filth and are revolting to most people. Plus, you've got that "cock" in the name, which isn't helping matters. You might think it's obnoxious that people always point out the name without considering the product, but tone-deaf branding is a misstep, and it raises the question: what else are they screwing up if they could get the brand name so utterly wrong?
Many native English speakers don’t realize that ‘git’ is a slur somewhere between idiot and wanker. It’s somewhat comical how mundane the word has become in its new context.
A certain company, a few years ago, came very close to having a Global Information Technology Services department. Fortunately a few Brits made sure their voice was heard.
Ah yes, he was pissed off at the guy who typed "help" after telnetting to a BitKeeper repository, then used the docs it put out to make a clone of the client. Since that guy had a connection to the Linux Foundation, it caused McVoy to revoke the BitKeeper license for the Linux community.
Let's face it, the idea of using the word "gimp" probably came before they figured out what it stands for. Even now, when everyone understands what that word means and what consequences come with using it, there is still a lot of debate around the name. Instead of changing it to something more marketable, they are stubbornly sticking with it. They even allow bigger market fragmentation (the Glimpse fork of GIMP) just to avoid changing the name; that's childish.
I think it is bad. You just cannot go to your manager and PR and tell them you are using CockroachDB. And even if you have a manager who would understand this, your manager still cannot go to his manager with that name.
Maybe it's a clever tactic to reduce usage at companies that are controlled by non-technical people who are more worried about the name than the technology. It probably has a notable effect on the number of demanding/misinformed issues they have to deal with ;)
Yeah, it sounds like an awkward conversation, but I know there are some huge companies using it. I guess they named it that way because cockroaches are believed to have survived since the dinosaurs. I guess it would be the same if you named it BedBugDB; those are hard to get rid of too.
But I think some people just dismiss it based on the name. If I were going to build a startup, it'd be at the top of the list for database choices. The only things I really wish it had are full-text search and CITEXT, but I guess those will be added some day. It's a neat piece of engineering so far, though!
Not sure if there is solid evidence to prove this, but cockroaches are believed to be capable of surviving large amounts of radiation (like from a nuclear bomb), hence they thought it would be apt for a geographically distributed DB.
I love it... Cockroaches are survivors. When I was in grad school my lab was in a subbasement in a 100 year old building. We had huge cockroaches...
One of my colleagues caught some and put them in one of our lab freezers and forgot about them. Months later we remembered the jar... took it out of the freezer. Cockroaches thawed out and seemed to be fine... very active, almost like nothing happened.
Other experiments were performed as well... anyhow, cockroaches are hard to kill.
Diatomaceous earth is very effective at killing them. It's fascinating that they're so robust, but just a little inert powder can kill them. Though to be fair, at their scale, it may as well be a pile of razor blades.
One drop of dichloromethane does them in super fast. Their bodies metabolize it into carbon monoxide. A bit of a wake-up call for those of us using DCM (our bodies do too, just less quickly).
What's missing is an analysis of why CentOS failed. I think Rocky Linux needs to put out a plan for how they will make themselves financially viable, as we've had three high-profile RHEL respins go down in the last 10 years.
CentOS failed twice: it ran out of money in 2014 and was rescued back then by Red Hat sponsorship, and it failed again in 2020. Another widely used RHEL respin was Scientific Linux, which was mothballed when RHEL 8 was released.
There seems to be lots of potential users but not lots of potential money for a RHEL respin.
1. Red Hat Inc. does not want people to build and/or distribute gratis RHEL8 or clones. It would be trivial to just put the actual RHEL8 ISO as an unsupported download on their ftp/www server and sell the support separately, like Oracle or Canonical do. Instead, they kept this ridiculous make-work project called CentOS around, which involved non-trivial manual labor meticulously rebranding RHEL into the Red Hat-owned CentOS-the-laggard-RHEL-clone, whose users apparently mainly value it for being as close to RHEL as possible without paying. To a distant observer, the whole setup looks quite absurd. I think it's actually good that they finally put an end to it. But they should have either not started CentOS 8 at all or ridden it out till the end. Pulling the plug at 20% of its lifetime is plainly a shitty move.
2. It appears that building a legal RHEL8+ clone is quite a task, and I'm guessing the amount of work involved is largely controlled by Red Hat Inc. I.e., they can make running/maintaining a project like Rocky or others more and more expensive if they choose to. I believe they're going to test the limits of how difficult they can make building from their sources within the boundaries of the GPL. If you think I'm wrong, just put up a mirror of the RHEL8 (not CentOS8) SRPMs and see how long it stays up. Clearly they're not acting in the spirit of the GPL, even if they are in the letter.
3. Given the previous points, I believe any project like Rocky is a losing proposition. If the "community" really values enterprise stability so much, better to put the effort into an extra-stable, extra-LTS fork of Debian or even Ubuntu, and prepare for a transition away from RHEL. Or just pay up, if you really believe the RHEL stability is so valuable. Clone projects are by necessity extremely dependent on the actions of Red Hat Inc., which has largely opposite interests. I don't know why people would volunteer for that.
> If you think I'm wrong, just put up a mirror of the RHEL8 (not CentOS8) SRPMs and see how long it stays up. Clearly they're not acting in the spirit of the GPL, even if they are in the letter.
The GPL doesn't require you to make the sources public to everybody, just available to the people to whom you are distributing your software.
But Red Hat does provide sources, to everybody. They go above and beyond what the GPL actually requires.
The ftp site does not contain sources for RHEL 8 (it's just sources for a bunch of add-on packages). The RHEL 8 sources are in git.centos.org, though not in SRPM form.
I know that the GPL only requires Red Hat to provide sources on request to those receiving their binaries. But in principle the receiver can then legally redistribute those sources. Now, AFAICT, nobody's actually doing that redistributing. I don't know whether that's due to lack of interest, or due to some Red Hat shenanigans that make redistribution illegal/unattractive.
Red Hat has to permit the redistribution per the GPL. But there is nothing saying that your support contract can't be cancelled if you do it. (I don't have any first-hand knowledge of this; it's just a guess.)
Grsecurity also uses this 'loophole.' Seeing this scheme go mainstream is really disheartening; I feel that it really undermines the intent and social value of the GPL.
See Bruce Perens's explanation: <https://perens.com/2017/06/28/warning-grsecurity-potential-c...>. The short story: adding a penalty to an action that the GPL allows is a restriction of that action, and the GPL does not allow setting additional restrictions. This has not been tested in court, as far as I know.
My understanding is that Red Hat does allow redistribution so long as you do not infringe on its trademarks. Given that no version of the GPL ever granted trademark rights, this is not an additional restriction, so this is fine.
Also, RHEL is packaging pieces of software that are not under GPL/LGPL. With permissive licenses, they could probably heavily restrict source redistribution and availability.
If half the userland is not available as source (patch and packaging included), a CentOS-like project would not be possible.
Effectively, if Red Hat/IBM wanted to, there are a lot of dick moves that could kill Rocky.
>But Red Hat does provide sources, to everybody. They go above and beyond what the GPL actually requires.
If you had been around in the late 90s, their lip service was that they were far more than just "providing sources to everybody" (as if that's some feat), and far more about the spirit of the GPL and the triumph of FOSS than what the GPL's letter requires...
Of course that was lip service, like Google's "Don't be Evil", but don't go around pretending Red Hat is some benevolent "above and beyond GPL" entity...
> better put the effort into an extra-stable-extra-LTS fork of Debian or even Ubuntu
I've always thought of Debian LTS as basically the be-all and end-all of Linux server OS stability (let's ignore the BSDs for this exercise). Is the main draw of CentOS over that just the longer LTS period, or is it also meaningfully more stable?
From what I understand, it's not just a matter of longer support, but also training. A lot of companies have/had mixed RHEL/CentOS environments, with RHEL on the machines they really want RH support for and CentOS on everything else to save money. Having a mixed RHEL/Debian environment would probably be a pain in the ass for all your sysadmins.
On top of this, I also believe that companies prefer RHEL's opinionated implementation of a Linux system over Debian's.
Red Hat is a company that sells to other companies. Therefore it has to implement its Linux and make decisions in a way that works well with how other enterprises think. Big companies aren't comfortable depending on "the community" to do the right thing for them.
Not really. I have seen CentOS-based infrastructure having more problems (xfs breaking Docker, a frankenkernel 3.10 in 2020?), whereas with Debian things just work and are not heavily patched to the point of breaking the ABI.
In a vacuum, absolutely. I think GP's point was that if you already have to run the frankenkernel on some machines (because of support), then it makes management easier if you at least run the same broken frankenkernel on all of them.
The way you describe it reminds me a lot of the iOS jailbreak community. There were some people that thought that it could overtake vanilla iOS but they failed to realize they are perpetually fighting directly with a large corporation and will probably burn out and lose. It sucks but that's reality
> 1. Red Hat Inc. does not want people to build and/or distribute gratis RHEL8 or clones. It would be trivial to just put the actual RHEL8 ISO as an unsupported download on their ftp/www server and sell the support separately, like Oracle or Canonical do. Instead, they kept this ridiculous make-work project called CentOS around, which involved non-trivial manual labor meticulously rebranding RHEL into the Red Hat-owned CentOS, whose users apparently mainly value it for being as close to RHEL as possible without paying. To a distant observer, the whole setup looks quite absurd. I think it's actually good that they finally put an end to it. But they should have either not started CentOS 8 at all or ridden it out till the end. Pulling the plug at 20% of its lifetime is plainly a shitty move.
But [they do](https://developers.redhat.com/products/rhel/download). A subscription buys you updates and support; RHEL itself is free. In addition, considering that almost all of Red Hat's products are "upstream first", branding is generally a single additional RPM which changes some colors. It is _not_ hard to rebrand RHEL.
> 2. It appears that building a legal RHEL8+ clone is quite a task, and I'm guessing the amount of work involved is largely controlled by Red Hat Inc. I.e., they can make running/maintaining a project like Rocky or others more and more expensive if they choose to. I believe they're going to test the limits of how difficult they can make building from their sources within the boundaries of the GPL. If you think I'm wrong, just put up a mirror of the RHEL8 (not CentOS8) SRPMs and see how long it stays up. Clearly they're not acting in the spirit of the GPL, if they are in the letter.
The only case in which this has happened is changing the packaging of the kernel-sources SRPM so Oracle could not pick and choose particular patches, and instead had to deal with a single massive diff to make stuff like kpatch work. The "expensive" part of building an EL8 clone is standing up a bunch of Koji builders. That isn't necessary either, strictly. It just makes "turn this SRPM into an RPM and combine it with a bunch of others into a temporary repo we can dogfood/release" easier. As always, it comes down to cost.
> 3. Given the previous point, I believe a project like Rocky is a losing proposition. If the "community" really values enterprise stability so much, better to put the effort into an extra-LTS fork of Debian or even Ubuntu, and prepare for a transition away from RHEL. Or just pay up, if you really believe the RHEL stability is so valuable. Clone projects are by necessity extremely dependent on the actions of Red Hat Inc., which has largely opposite interests. I don't know why people would volunteer for that.
The problem with this, broadly, is that the "community" is comprised of a lot of Red Hat-employed engineers. This idea that Linux is a bunch of "community" people that RH/IBM/whomever siphon off is completely fallacious. Sure, they exist, but the vast majority of Linux development is commercial. Some bug is reported in a downstream product that a customer pays for, or a feature is requested by a major customer (or the technical debt involved in maintaining something gets too high, or whatever), and professional engineers who are being paid to work on it write the patches upstream. Once they're merged, they get picked downstream into some kind of productized version.
Given that most of that happens in Fedora (or OKD, or oVirt, or whatever upstream is for a given product), CentOS being run by RH was pretty much the same embrace+extend+extinguish philosophy. The vast majority of RH engineers run Fedora, or an EL clone, and that isn't gonna change. What's different is that RHEL is no longer a growing revenue stream, and CentOS users didn't convert to RHEL at a large rate (or they reported CentOS bugs against RHEL, which is strictly against Red Hat's support policy).
What customers value is indemnity and the ability to point their finger at someone external, plus normal stuff like a security response team and responsible+timely disclosure during major CVEs. What the community value(d|s) is the ability to run commercial software that the vendor supports on RHEL without actually _paying_ for RHEL. It's not that they value 'enterprise stability'. It's that they value being able to run SAS or whatever without hacking the hell out of the installer. That isn't offered by an extra-LTS fork of Debian or Ubuntu, and no amount of complaining on Hacker News is gonna change that.
It's a single package with relatively few additions, mostly color schemes in CSS.
Anaconda documents this in /usr/share/branding
RHEL puts a lot into subpackages of redhat-release
But it isn't really the point. There are thousands of packages. Only a handful actually deal with branding. I was an engineer at Red Hat for 7.5 years, and I left last June. I'm intimately familiar with this, and while it's more complex than sed, it's much simpler than it sounds, and the CentOS people are definitely familiar with it. Branding is NOT their hurdle. It's the build system, as they repeatedly mention.
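To make the "only a handful of branding packages" point concrete, here is a rough, hedged sketch of how one can inspect them on an EL-family system. These are standard `rpm` query commands; the package names are examples and differ between RHEL, CentOS, and the clones:

```
# Which package owns the distro identity file?
rpm -qf /etc/redhat-release      # e.g. a centos-release package on CentOS

# List what the release package actually installs: mostly identity
# files, repo definitions, and GPG keys -- not thousands of files.
rpm -ql centos-release

# Package metadata (vendor, description, etc.):
rpm -qi centos-release
```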
You're blasting Red Hat for making it seem like they're somehow opposed to clones. That's fatuous. A major engineering company with complex workflows uses their own build system. News at 11. They also make this free and document the hell out of it. Anyone who has ever built a package for Fedora is familiar with it. Anyone who has ever dealt with release management/engineering for a product inside Red Hat is extremely familiar with it, and that includes many of the core CentOS team, plus the Fedora RelEng SIG is easy to join if you want to learn. It's complicated, but it's not black magic or even hard information to get. CentOS ALREADY USES IT.
It really doesn't matter whether the RHEL image is "for development use only" (you didn't want support anyway). Stop moving the goalposts.
Agreed. The reason is more "we are running supported software in a supported combination of hardware, OS and application". That's what auditors are looking for. I've yet to see a real benefit beyond the paperwork advantage.
That's true in a literal-but-useless sense. Developer-RHEL might be bit-identical to real RHEL on some dates, but as a product it's of course very, very different due to the lack of interim updates. And that's aside from the smaller speed bumps of registration and having to involve Legal if you want to run it in a company.
> In addition, considering that almost all of Red Hat's products are "upstream first", branding is generally a single additional RPM which changes some colors. It is _not_ hard to rebrand RHEL. [...] The "expensive" part of building an EL8 clone is standing up a bunch of Koji builders. That isn't necessary either, strictly. It just makes "turn this SRPM into an RPM and combine it with a bunch of others into a temporary repo we can dogfood/release" easier. As always, it comes down to cost.
I don't know how true that is; the CentOS wiki makes it sound like the debranding part is non-trivial/manual-labour-intensive. So according to you the main bottleneck is hardware/build servers? If that's true, one wonders why even the minor CentOS 8.x releases lag RHEL 8.x by 4 to 6 weeks.
I guess we'll get to see it soon, with Rocky Linux development.
> The problem with this, broadly, is that the "community" is comprised of a lot of Red Hat-employed engineers. This idea that Linux is a bunch of "community" people that RH/IBM/whomever siphon off is completely fallacious. [...]
Sure, I largely believe the long-running narrative that a lot of core Linux development is paid for/done by Red Hat, and that most other distros, including Debian/Ubuntu, are free-riding to some extent. That's why I'm not really behind the cheering for Rocky, and think it's fairer in the end to either pay up, or roll up the sleeves and do a real _community_ enterprise OS based on Debian instead of cloning RHEL.
> What customers value is indemnity and the ability to point their finger at someone external, plus normal stuff like a security response team and responsible+timely disclosure during major CVEs.
The first part might be true of RHEL customers, but I don't think it's true for CentOS customers. The customer base is of course diverse, but my guess is that for a very large part of the (CentOS, not RHEL) users, objections to moving to Debian/Ubuntu are practical (legacy/switching costs, followed maybe by proprietary software support), much more than principled/legal.
>>roll up the sleeves and do a real _community_ enterprise OS based on Debian instead of cloning RHEL<<
I would love to see this too, but backporting security patches is not the kind of “sexy” programming we can motivate programmers to do “for fun and for free”, especially when we tell them to do it for X number of years without having a paycheck to motivate them.
> That's true in a literal-but-useless sense. Developer-RHEL might be bit-identical to real RHEL on some dates, but as a product it's of course very, very different due to the lack of interim updates. And that's aside from the smaller speed bumps of registration and having to involve Legal if you want to run it in a company.
Again, stop moving goalposts. Your comment was "why can't they just put it on FTP/WWW without support". That's exactly what they do. You aren't required to register it in any way, including downloading. You want a free "product". That isn't their business model. At least they make the sources for everything available, which took Canonical years for Landscape.
> I don't know how true that is; the CentOS wiki makes it sound like the debranding part is non-trivial/manual-labour-intensive. So according to you the main bottleneck is hardware/build servers? If that's true, one wonders why even the minor CentOS 8.x releases lag RHEL 8.x by 4 to 6 weeks.
For a base release, a mock root needs to be bootstrapped. This is more or less by hand, and it definitely requires incremental builds of the base system. I guarantee the CentOS releng team has scripts that do this, but I don't know where they'd keep them (I never worked directly with that team). Once they're in a koji buildroot, that buildroot can be used to bootstrap its way to a 'release' buildroot, with successive tags. Again, this is something that any build engineer should be familiar with.
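For readers who haven't used the tooling named above, a hedged sketch of what rebuilding a single package looks like; the mock config, koji target, and SRPM names below are illustrative, not the actual CentOS release-engineering setup:

```
# Rebuild one SRPM in a clean chroot with mock
# (real config names live in /etc/mock/; this one is an example):
mock -r centos-stream-8-x86_64 --rebuild bash-4.4.20-1.el8.src.rpm

# The koji equivalent, once a buildroot/tag hierarchy exists
# (requires a configured koji profile; the target name is made up):
koji build el8-rebuild-candidate bash-4.4.20-1.el8.src.rpm
```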
The real wrench with 8 is modularity, which is also present in Fedora. If you really want to help CentOS, go help them fix it:
Issues like "special version of RPM in the buildroot" are weird non-issues specifically related to how Koji works, but Modularity does have real problems.
> Sure, I largely believe the long-running narrative that a lot of core Linux development is paid for/done by Red Hat, and that most other distros, including Debian/Ubuntu, are free-riding to some extent. That's why I'm not really behind the cheering for Rocky, and think it's fairer in the end to either pay up, or roll up the sleeves and do a real _community_ enterprise OS based on Debian instead of cloning RHEL.
This is a no true Scotsman argument. I don't know what isn't "real" or "community" about CentOS other than the fact that there was/is overlap between CentOS maintainers and RH employees. Just because they have day jobs doesn't mean they can't be part of the community, too. Hand wringing about what's fair defeats the purpose of using a distro like a tool to accomplish goals.
> The first part might be true of RHEL customers, but I don't think it's true for CentOS customers. The customer base is of course diverse, but my guess is that for a very large part of the (CentOS, not RHEL) users, objections to moving to Debian/Ubuntu are practical (legacy/switching costs, followed maybe by proprietary software support), much more than principled/legal.
You inverted this argument and you're asking the wrong questions. The question isn't "why aren't people moving off CentOS?", it's "why did they use CentOS in the first place?"
It's because they were already familiar with RHEL from previous jobs, and wanted familiar tooling (apt may be nicer than yum was, but dpkg is a dumpster fire for package maintainers compared to RPM, and RPM's tooling is much more cohesive than digging around in 10 different manpages for apt-cache || apt-file || dpkg -L || whatever to get information). Kickstart is nicer in many ways than preseed. Sure, the costs of swapping all of that are non-trivial both in the time investment for administrators to rewrite tooling and for the marginal loss in productivity until they re-learn tooling.
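A rough illustration of the "one tool vs. many" point above; the package and file names are only examples:

```
# RPM side: one tool covers the common queries.
rpm -ql httpd           # files owned by an installed package
rpm -qf /etc/passwd     # which package owns this file?
rpm -qi httpd           # package metadata

# Debian/Ubuntu side: the same answers spread across tools.
dpkg -L apache2         # files owned by an installed package
dpkg -S /etc/passwd     # which package owns this file?
apt-cache show apache2  # package metadata
apt-file search bin/ab  # file -> package, for packages not installed
```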
Going forward, container-first distros are getting the brunt of new deployments, which is exactly why RHEL isn't a growing revenue stream for Red Hat anymore, and why it doesn't make sense to keep pouring money into CentOS.
Red Hat is focused on OpenShift/OKD as the new "platform". Containers have their place, and it isn't everywhere for lowly end-users, but RH didn't drop CentOS to fuck over the community. They reduced support for the community because RHEL (and CentOS) are increasingly irrelevant to a company which has put their eggs in the CoreOS+Openshift basket as their future. That isn't specific to them.
All of the major vendors see the writing on the wall. What's better than making your systems easy to manage? Making it so they don't need management at all. Do everything in k8s and update a single system image quarterly (or whatever), and leave traditional workflows as an afterthought. Red Hat is lucky enough to already have RHEL, but if I were starting a new for-profit Linux company in 2020, I'd do exactly what CoreOS did or Rancher does.
> Again, stop moving goalposts. Your comment was "why can't they just put it on FTP/WWW without support". That's exactly what they do.
That's exactly what they don't. You're the one who innocuously altered that to "without support and updates", which you know very well makes all the difference. If CentOS had the same non-update schedule as developer-RHEL, approximately nobody would use it in production. Contrariwise, if RHEL did provide timely yum/dnf updates (and removed the murky "Development Use" clause), everyone would run that instead of CentOS. So now you're telling me to stop moving the goalposts back after you moved them to another field?
> You aren't required to register it in any way, including downloading.
Maybe I'm a bit dense, but I clicked on every download link on your page and they all led me to "Log in to your Red Hat account". Maybe you can post a direct URL to the ISOs?
> You want a free "product". That isn't their business model. At least they make the sources for everything available, which took Canonical years for Landscape.
No, I don't want a free product. I was stating from the first post that Red Hat Inc. doesn't want people to have that. Which I'm completely fine with and fully support. You seem to be in complete agreement, so I don't know why you bothered with a quasi-rebuttal in the form of a freebie Developer RHEL link, which is very different from the CentOS/Rocky proposition.
> [details about the build process]
Ok, maybe the debranding part is not a big deal, I don't know. But my point only rests on the process being labour-intensive, hence expensive, which I don't know whether you're disputing or agreeing with. If a CentOS rebuild is necessarily labour-intensive, I don't see a bright future for Rocky or similar projects. I think it is, since AFAICT almost every clone except Oracle gave up trying to keep up with RHEL 8. It's hard to prove things either way, but we'll find out soon enough if we follow the Rocky project.
> You inverted this argument and you're asking the wrong questions. The question isn't "why aren't people moving off CentOS?", it's "why did they use CentOS in the first place?"
It's because they were already familiar with RHEL from previous jobs,
So inertia / legacy / switching costs, we're exactly in agreement.
> and wanted familiar tooling (apt may be nicer than yum was, but dpkg is a dumpster fire for package maintainers compared to RPM, and RPM's tooling is much more cohesive than digging around in 10 different manpages for apt-cache || apt-file || dpkg -L || whatever to get information). Kickstart is nicer in many ways than preseed. Sure, the costs of swapping all of that are non-trivial both in the time investment for administrators to rewrite tooling and for the marginal loss in productivity until they re-learn tooling.
Now this is a completely different argument: that the RHEL/CentOS tooling is intrinsically superior to Debian/Ubuntu's. I'm pretty skeptical, since it implies that organizations who do run Debian/Ubuntu could save a lot by switching to CentOS plus a bit of retraining. But this is of course not a dispute that's going to be settled in a thread like this, so let's leave it at that.
> [something about containers, OpenShift, K8s, divining Red Hat's Grand Strategy]
This does not seem on topic, so no comment.
Maybe I just expressed myself badly, so let me try again. I predict that Rocky Linux will not be a big success, and I don't think doing a free-beer RHEL clone (CentOS as most users understood it) is a worthwhile endeavor, despite a seemingly large audience. Yes, giving good stuff away for free is popular (and I do believe RHEL is a good product). But clone projects are largely dependent on RH, who 1) appear not very enthusiastic about the idea of people running RHEL for free even without support, and 2) can largely determine how expensive running a clone project will be. That's why IMO it's better to analyze whether users really need a strict RHEL clone, or whether what they want from it (xLTS, better tooling, commercial software compatibility, whatever) could be better developed on top of Debian, whose incentives seem to clash less.
People who really, really need a RHEL clone: time to pay up or go with the Stream (it's probably really not so bad).
Now, unlike you, I have zero inside knowledge or experience, so maybe I'm just talking out of my ass. But I am willing to make a somewhat falsifiable prediction: that Rocky and similar clone projects don't have much chance of success. Maybe I'm all wrong and Rocky can, with a handful of volunteers and some clever scripts, resurrect CentOS-as-people-understood-it. Or maybe some deep-pocketed third party with a better brand than Oracle will step up. We'll find out in a year or so.
> Maybe I just expressed myself badly, so let me try again. I predict that Rocky Linux will not be a big success, and I don't think doing a free-beer RHEL clone (CentOS as most users understood it) is a worthwhile endeavor, despite a seemingly large audience. Yes, giving good stuff away for free is popular (and I do believe RHEL is a good product). But clones are largely dependent on RH, who 1) appear not very enthusiastic about the idea of people running RHEL for free even without support, and 2) can largely determine how expensive running a clone project will be. That's why IMO it's better to analyze whether users really need a strict RHEL clone, or if what they want from it (xLTS, better tooling, commercial software compatibility, whatever) could be better developed on top of Debian, whose incentives seem to clash less. People who really, really need a RHEL clone: time to pay up or go with the Stream (it's probably really not so bad).
> Now unlike you I have zero inside knowledge or experience, so maybe I'm just talking out of my ass. But I am willing to make a somewhat falsifiable prediction: Rocky and similar clone projects don't have much chance of success. Maybe I'm all wrong and Rocky can, with a handful of volunteers and some clever scripts, resurrect CentOS-as-people-understood-it. Or maybe some deep-pocketed 3rd party with a better brand than Oracle will step up. We'll find out in a year or so.
So, let's try this. I agree that Rocky will not be a big success, but purely because I don't think there's major market demand for a RHEL clone in 2020. New deployments will end up on a container-oriented Linux distro, running inside VMs or containers. (RH)EL8 is relatively new with low-ish adoption; we still had customers on EL6 six months ago when I left. Institutional customers aren't likely to move to EL8 until EL7 is nearing the end of phase 3 support, since the costs involved in reworking their applications (and their build/config-management toolchain) to work with modularity etc. are substantial. This has nothing to do with Red Hat's stance, which was (for the 7.5 years I was there, and still, from everyone I know who's still there) very overtly pro-upstream.
Users don't need a strict RHEL clone. They need a life raft until they can move to containers, and it's not worth re-training administrators on some other package ecosystem in the meantime. Rocky will fail, but for these reasons.
Unfortunately, Red Hat stopped being an open source company with open source values long ago, not to mention that a lot of good talent left long ago for FAANG companies, which are now driving most of the development.
> Whats missing is an analysis of why CentOS failed.
CentOS did not fail. CentOS was a wild success. Like the vast majority of tech success stories, CentOS was picked up by a major player and turned against its roots to protect the incumbent moneymaker.
But the terrific thing about the GPL, for all its flaws and despite its crazy founder, is that it ensures the health and longevity of a project despite well-funded attacks against it.
Whatever replaces it either needs a better business model (to pay for maintenance, RHEL, Ubuntu) or more community involvement (work for free, Debian). But when you’re effectively repackaging another distro, it’s probably hard to get other people excited enough to help.
I don't think anybody is against CentOS being an onboarding ramp to paid RHEL deployments.
CentOS is actually Red Hat's greatest advertisement
If IBM Red Hat wanted to push for RHEL upgrades, they should have changed the CentOS support window from 10 years to 3-5 years. If they had to wind back the CentOS EOL date from 2029, they should have at least moved it back to, e.g., 2024, not 2021.
I think IBM had nothing to do with this one. People think Red Hat won't do anything like this. However, there exists a part of Red Hat which is capable of doing this. That part of Red Hat usually stays behind the scenes and comes to the fore to announce that a decision has been made and the developers (hired within Red Hat as well as the general community) who are involved in the day to day running of the projects will have no say. Plus they will throw in some confusion (like the limited use license that is in the works but not yet ready for CentOS use) around the future of the project being killed just to let the community expect something good to come out of this exercise. This is not new. They did the exact same thing to the JBoss community application server[1].
I think Ubuntu has a decent business plan, charging people $225/yr to extend support from 5 to 10 years. I would happily pay that in order to avoid having to migrate as often.
CentOS just gave everything away for free and then is wondering why they're not making any money.
Paid CentOS support is called RHEL - the whole point of CentOS is to repackage GPL RHEL stuff without the licensing fee. The only way I see this being viable is them getting bankrolled by a huge cloud provider - but cloud providers already decided that having their own distros was a better option.
Without the absurdly high licensing fee. A more reasonable amount (be it a saas-like low monthly charge or one-off 3-digit fee) would probably go down fine with enough institutional users to generate a decent amount of money.
There is some up/down depending on various rebates, volume licenses, support included/excluded, etc. ("nobody pays sticker price"). But in general, Red Hat is in the same order of magnitude as Windows but a little cheaper due to no per-core pricing, no CAL shenanigans, and no weird limitations on number of users/size of company/VM/PM and such.
But of course it's true that in domains like HPC or cloud computing, the huge number of licenses and machines involved make a few hundred bucks per year just too expensive in sum.
Every year between 2009 and 2017, Canonical lost money. In 2018 after several rounds of layoffs they finally made a profit of ~10 million, but their revenue actually dropped from the year before.
Whether or not it is a good plan in theory, it clearly doesn't seem to be working that well for them in practice.
CentOS's 10-year support window was unreasonably long given that there's no benefit to upgrading to RHEL (and it's paid employees at Red Hat doing the security backporting work).
The free RedHat tier should have a shorter support window that's still long enough to attract users who want a stable platform to build long-lived appliances.
I charge more per hour for my consulting, I had a bottle of wine that cost about that last week for dinner. What revenue do they actually get out of that model?
To be fair, they do much more work than CentOS ever did (or Rocky would have to do).
In fact, Ubuntu does too much development for its own good. People jumped on it because it was “a usable Debian updated more often”, and they got all sorts of crazy UX experiments.
Yes, indeed, but I was referring to Centos. There isn't enough of a benefit for most of us to upgrade if we get security updates free for 10 years and don't need support.
And that shows how pointless the CentOS kill is. Those users won't go to RHEL but Debian/Ubuntu or if it has to be RHEL-compatible Oracle Linux and then lobby their other vendors to support Debian/Ubuntu or some other distribution as well.
I really think everyone should just say “fuck this”, abandon it, and throw the time at LTS Debian. Red Hat controls way too much of the ecosystem, so there needs to be a strong, stable, non-commercial alternative.
I know mentioning incentives and that people lie is not popular here but when the same thing happens to two different projects you have to be willfully blind to not see a pattern.
As for making it viable long term: a clear goal, community norms and quick removal of anyone not toeing the line.
Does anyone believe that I will, at some point, be able to point my CentOS 8.x configuration at the Rocky Linux repo and just upgrade to it? That would be ideal. I realize GPG keys will need to be replaced, etc.
Worst case, you might have to also manually replace a few "release" and "logos" packages (which is what's involved now for switching from RHEL to CentOS or OracleLinux)...
More likely there'll be a simple script to swap from CentOS (or RHEL) to Rocky.. Or they could have a "rocky-release" package with `Obsoletes: centos-release, redhat-release` and a `yum install https://rockylinux.org/rocky-release-8.2-1.noarch.rpm ; yum upgrade` is all that'd be required to swap...
TL;DR: should be very easy, but there's minor variations in methods that I doubt are finalized.
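For what it's worth, the swap being described could look something like the commands below. Everything here is a guess modeled on today's RHEL-to-CentOS/OracleLinux switch procedure; the package name, URL, and key path are placeholders, since nothing has actually been published yet:

```shell
# Hypothetical migration sketch - none of these artifacts exist yet.
# Modeled on the current release-package swap between EL rebuilds.
yum install https://rockylinux.org/rocky-release-8.2-1.noarch.rpm  # would carry "Obsoletes: centos-release"
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial            # trust the new signing key (name assumed)
yum clean all                                                      # drop cached CentOS repo metadata
yum upgrade                                                        # converge onto the Rocky repos
```

The `Obsoletes:` tag is what lets a single `yum install` of the new release package cleanly displace the old one without manual removal.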
Can someone explain why it’s not exceedingly simple to clone the existing Centos concept? Isn’t all the code that does the builds, artwork replacement, etc all open source?
I would think that rebranding CentOS as Rocky is a rather trivial process of replatforming all the codebase and replacing any “Centos” with “Rocky”.
Because you need a lot of engineers (so, a hefty amount of money) to clone that concept. Basically the CentOS concept is "freeze the versions of all the packages and support them for 10 years". But in the real world people need new features and new bug fixes, i.e. new versions. If you want to commit to that "never update" policy, you have to back-port everything people want.
In short, forking it is easy, but keeping it attractive is not.
I mean, if you think about it, all that's really needed is for a "s/CentOS/Rocky/g" over all of the repositories. Then, the other 99% of the project is just waiting for the packages to rebuild and get sync'd on all of the mirrors that they could just will into existence with their minds.
Really, though, let's be honest here: If they weren't spending so much time writing up press releases and commenting on issues on GitHub, they probably could've already basically been done, the new package repositories could've been published and mirrored, and half of the CentOS 8 boxes out there could've already been migrated over.
<sigh>
--
EDIT: To be clear, I am not serious. I thought the question I was replying to was completely f##king absurd but chose to respond with sarcasm (it seemed less likely to result in a warning from @dang than my initial reply).
While I enjoyed your sarcasm, it doesn't help answer the question. CentOS didn't do their own backporting of fixes, nor development. They take SRPMs from RHEL, do "s/RHEL/CentOS/" on them, and then build the SRPMs into RPMs which get published.
Not saying all of the above is trivial but I'd think the code to do it literally exists and is itself open source.
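The debranding step itself is, at its core, textual: swap trademarked strings in the spec files and swap out the artwork packages. A toy illustration of that idea (the spec content below is invented, and real rebuild tooling applies vetted per-package patches rather than a blind global substitution):

```shell
# Fake, minimal spec fragment standing in for a real RHEL SRPM spec.
cat > demo.spec <<'EOF'
Name: redhat-logos
Summary: RHEL brand artwork
EOF

# The debrand pass: replace trademarked strings before rebuilding.
sed -i 's/RHEL/CentOS/g; s/redhat/centos/g' demo.spec
grep 'Summary' demo.spec   # now reads: Summary: CentOS brand artwork
```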
But seriously, can somebody ELI5 this project? If CentOS is just 1:1 RHEL with removed branding, then what new bugs will show up in CentOS as a result of that rebranding that will not be fixed by RHEL devs? Is there code in RHEL that is also copyrighted, which has to be replaced and maintained by CentOS devs? What am I missing?
I'm not fully versed on all the things that would be needed, but at the bare minimum it would seem like you would need a bunch of automated processes for just the building
- Import src packages, making sure that you copy in changes/patches when RHEL does.
- Replace the RH trademarks in every package
- Build every package and run verification tests for each arch
- Build ISOs
You would also need infrastructure servers that can scale to a large number of users for Yum/RPMs, etc.
Then you also need a set of servers for issue tracking and a way to break it out per package.
I wouldn't imagine that it is anything which can't be done, it just seems like there's a lot of little pieces that you would need to set up, and infrastructure you need to run.
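The steps in that list could be sketched as a pipeline. Everything below (the replacement table, package names, arches) is invented for illustration; a real system would drive build infrastructure like mock or Koji rather than plain Python functions:

```python
# Sketch of the rebuild pipeline: debrand the specs, then fan out one
# build job per (package, arch). Names and replacements are hypothetical.
def debrand(spec_text: str) -> str:
    # Order matters: replace the longer trademarked strings first.
    for old, new in (("Red Hat Enterprise Linux", "Rocky Linux"),
                     ("Red Hat", "Rocky"),
                     ("RHEL", "Rocky Linux")):
        spec_text = spec_text.replace(old, new)
    return spec_text

def plan_builds(srpms, arches=("x86_64", "aarch64")):
    # One job per (package, arch), mirroring the "build every package
    # and run verification tests for each arch" step above.
    return [(name, arch) for name in srpms for arch in arches]

jobs = plan_builds(["bash", "glibc"])
print(len(jobs))  # 2 packages x 2 arches -> prints 4
```

The fan-out is where the real cost lives: each job means a full clean-room rebuild plus verification, which is why "just run sed" undersells the work.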
Rocky is going to be tough to maintain. With CentOS being an internal part of Red Hat, it really helped for them to be able to tap engineers for information about nasty CVEs getting fixed upstream in RHEL and numerous other headaches (build failures etc).
One thing to consider, for the folks that assume everyone on CentOS is a parasite allergic to paying for software... CentOS is heavily used in the HPC academic organizations, in part because paying licensing fees for an OS on 2k+ nodes isn’t workable in academia.
When I was at Princeton, a lot of clusters ran Springdale Linux[1], which is a Princeton/IAS version of RHEL compiled from RHEL source. I wonder why they didn't simply choose CentOS, and if there's any institution outside Princeton using Springdale.
Btw: it doesn't seem to have been ported to RHEL 8 (?).
They address that particular question even before the FAQ on their site:
"This project was started long before CentOS or other projects were available."
I bet they have calculated that their work maintaining this is rather simple and well worth it, or they would have changed over to CentOS a long time ago.
Support. Commercial software packages that you may want to run often come with a list of supported operating systems which usually only includes RedHat and maybe SuSE. Ubuntu and Debian are rarely officially supported. And although you usually can get things to work somehow, your application software support will be useless because all tickets get closed with "unsupported OS, use RedHat". With CentOS you at least might have a chance to get a non-useless answer from your application's support team.
Scientific was discontinued and the sponsoring organisations switched to CentOS 8. It looks like the work involved in rebuilding RHEL 8 was too much to deal with.
Does anyone know exactly what the legal/financial mechanics were of RedHat 'absorbing' CentOS in the first place? Is "CentOS" a trademark? Is the CentOS logo trademarked?
Who exactly owned the servers that CentOS was distributed from? Did CentOS have people on the payroll who took jobs at RedHat? (Did CentOS have a payroll in the first place?)
> The CentOS Marks are trademarks of Red Hat, Inc. (“Red Hat”).
Most servers that distribute current binary RPMs are mirrors operated by third parties. You can be reasonably certain that servers like www.centos.org, mirror.centos.org, vault.centos.org, or buildlogs.centos.org are paid for by Red Hat. Their IPs are mostly AWS.
> The CentOS project recently announced a shift in strategy for CentOS. Whereas previously CentOS existed as a downstream build of its upstream vendor (it receives patches and updates after the upstream vendor does), it will be shifting to an upstream build (testing patches and updates before inclusion in the upstream vendor).
Wow, I haven't been following this very closely - but isn't that Fedora they're describing? At least... traditionally...
Fedora was upstream, RHEL was stabilized in the middle, and CentOS was downstream - regarding patch releases and features, etc.
Fedora is desktop-focused. RHEL and CentOS are server-focused. I think there is a place for both. But who knows, maybe IBM will discontinue one, or both.
RHEL and CentOS are both server and desktop. The desktop is just not flashy or shiny or bleeding edge. The default install is/was Gnome 3 shell (For RHEL7 and CentOS 7).
I am not sure if this is still the case, but Redhat used to require 100% of its employees to use RHEL Workstation as the desktop.
Looking forward to giving this a spin at home to see if it'll be the future of non-RH based Linux servers, although it'll take a long time before people are willing to throw it in prod like they do with CentOS. No way to change that except time.
Also it's interesting that some people defined Rocky as being 'unstable' when others read it as being 'solid as a rock'.
I knew what CentOS was, but never followed it because I've never personally had use for it. Still, I appreciate what it did and was glad to have it as an option should I ever need a super stable distro in the future.
That said, I think the FAQ is missing an answer for a critical question: What ultimately drove CentOS to its regrettable fate and what will Rocky Linux do to avoid a similar misfortune?
Obvious next question: what drove the CentOS devs to sell to Redhat. From previous discussions, I understood that it came down to lack of resources / devs to maintain and support it. So OP's question is on point: what will Rocky Linux do to avoid a similar misfortune?
There isn't any evidence anything causes anything else. Cartesian dualism is irrefutable. So for all we know we live in a world where IBM destroying everything they touch is purely the will of a malevolent daemon who wants to tarnish the good name of the company that enabled the holocaust.
The most frustrating thing about this is that Redhat was making a profit before IBM bought them. They had existed for 20 some years on a business model that business people didn’t understand, and they were able to do that because they understood what open source would become and how they could play a role in that.
One of the things that YC is always talking about is that founders looking for ideas should look to identify situations where most people think it’s going to turn out one way, but most people are wrong and it’s actually going to turn out another way. In the context of the late 90s, where virtually all software was proprietary, they bet on open source software and support plans, and made a sustainable company on it. They contributed to open source so that they would have the expertise, gave all the software away for free, and then sold the expertise through support plans.
And then the business people came along and they’re showing a deep misunderstanding of why Redhat was able to sell support plans in the first place. People on RHEL are going to stay on RHEL, but people on CentOS — the market of people who are not paying customers but could theoretically become customers, are almost certainly going to go to Canonical. This will kill Redhat.
But Red Hat ended up being the exception that proved the rule that selling support for open source isn't a very lucrative business. There was room for one player in that space and Red Hat was it. Now with cloud providers selling support for open source bundled with the infrastructure to run it on, there isn't even room for one standalone player.
Sqlite devs have funded their development decades into the future by selling licenses to public domain software. It may not have made them a billion dollar multinational corporation, but must every company have such conqueror aspiration?
>CentOS 8 was never officially supported until 2029 so we did not go back on anything
The thing though is that RedHat is responsible for that impression. Every previous version of CentOS before 8 has been supported until the upstream RHEL pulled the plug. CentOS’s official page said it would be supported until 2029 ( https://archive.is/7Qmtw ).
A reasonable person would infer that CentOS (now controlled by RedHat, so, yes, RedHat) made the same promise that they made (and kept) with every previous version of CentOS: That it would be supported for 7-10 years. Not just over 2 years.
I definitely inferred a decade of support. If I had known this summer that CentOS 8 would be cut off at the end of 2021, I would not have installed it. I would have installed Ubuntu 20.04 LTS.
Indeed, replacing my CentOS 8 installs with Ubuntu 20.04 is exactly what I have been spending all last week doing.
I mean, I hate to say this, but have we considered that a big part of the reason RedHat has been profitable is because it doesn’t care about the desktop? And no, Fedora really doesn’t count.
Ubuntu’s big thing back in 2004 was that it was a well-heeled founder (and company), coming in to actually put time and money into the desktop experience on Linux in an opinionated way (obv. not everyone agrees with those opinions, but I would argue that being as opinionated as commercial/proprietary software was Ubuntu’s biggest strength in the beginning). Over the last 16 years, nearly all of the big bets on desktop development have failed. Ubuntu One (the personal cloud service)? Failed (though in retrospect it was a really good idea. Too bad users didn’t pay.). Ubuntu Software Center? Failed and discontinued. Unity? Failed and discontinued. Ubuntu Phone/Touch (and Canonical had invested massively into mobile)? Failed and given to the community. Mir? Failed, probably for good reasons, but failed.
Where has Canonical made money? Enterprise and in the cloud.
I totally understand the attraction to Linux on the desktop, but every company that has approached it in a way that is focused on end-users and not the enterprise in a way that isn’t either volunteer driven or as a very small company has failed to make it any money off of it. I imagine Canonical will continue to deemphasize the desktop even more as time goes on.
Honestly, with the way things are going, I would like them to deemphasize the desktop.
Canonical made it easy to recommend linux as a desktop, but then have made it harder as time goes on, with controversies like Snap and the Amazon fiasco. I'm glad for what they have done, and wish them luck in the server space.
There are others who are now better positioned to pick up where Canonical left off on the desktop. ElementaryOS, Pop!_OS, Zorin, all of these are amazing projects that have picked up and pushed forward from where Canonical left.
I agree with you on those projects and Mint; my only response is that they are all much smaller projects that lack the funding and size/force of will that Ubuntu was able to achieve. That isn’t to take away from them at all, but aside from System 76 (a boutique reseller who until recently has primarily just sold re-badged Taiwanese laptops (good laptops to be sure; Clevo is a solid ODM)), most of them are either largely community projects or very nascent businesses with a few full-time employees.
Again, that isn’t a criticism — I’m friends with some members of the elementary team and absolutely love what they do — truly. But none of those projects can make the type of investment that Canonical did or that the other big Linux vendors who have all but abandoned the desktop (SuSE/Novell, Red Hat) did, or even now-bankrupt/sold for pennies to PE companies did (Mandriva (née Mandrake), Corel, Linspire (remember those crooks!)) or that some promised to do, but later abandoned (Steam).
Maybe that’s OK. Maybe the number of Linux desktop users is content with work being done and sustained largely by community volunteers or very small companies. But as good as the work many of those groups do is, I do think the lack of a Canonical type of company does hurt the whole ecosystem's ability to grow, innovate, and reliably attract new users. On a personal level, I think that everyone should give up the pretense of Linux on the desktop ever evolving beyond an extremely niche thing, and be content that the Linux kernel is at least the basis for stuff like ChromeOS and Android (which while absolutely not Linux on the desktop or on mobile, are at least major desktop platforms), but that’s just me.
Deepin is interesting because it has a strong source of funding and developers/partners and has made really great moves on the UI front and in its partnerships with ZTE and Huawei (Huawei even ships Deepin on many of its machines now). My personal concern with Deepin is the security and privacy of it - and I have those same concerns for any state-sponsored version of Linux or any operating system, to be honest. Deepin is also very insular in its development (far more than even Ubuntu), and that might just be necessary to achieve the sort of polish it has, but that distinct lack of community could be a turn-off to many.
What I’m saying is, I don’t necessarily disagree with your assertion that Canonical should pull out from the desktop even more, but I think a lot of people underestimate just how big of a void that will leave in the desktop space and as good as those projects you mentioned are, I don’t think any of them individually or collectively can fill it. Especially financially.
Because there is still a chance they can extract value from it, or up-sell someone from CentOS to RHEL and a juicy support contract. And they will keep it around to keep the trademark, if for no other reason than to deny it to everyone else.
I wish RL all the best, but I'm glad I switched to debian-stable for server workloads (mostly Docker anyway). If anything, the story of CentOS (and White Box Linux before that) tells me a RedHat clone isn't a feasible project economically in the long run. So it may be better to put your money where your mouth is. Of my customers, none had used CentOS/RH as base image for Docker builds anyway.
What I missed in the announcements of CentOS news and the obvious disappointment is the talk about whether Stream is actually a path you can take forward.
I put together what we actually know about the CentOS -> Stream migration so far[0]. I personally might give stream a chance although if Rocky is released, I imagine it a no-brainer.
What is the value of having a separate RHEL derivative? It isn't as if the "community" can propose/submit any changes, since any changes will cease to make the downstream distribution a "bug for bug" compatible RHEL derivative. If I actually wanted to participate in the larger RHEL-derivative community, I would need to actually submit my changes to the CentOS stream project.
> Devil's advocate: why should I choose this yet-to-exist
Devil's response: nobody cares if you do. A lot of people know why they want it; the answer will in many cases be that it will fill the same niche and not be controlled by a shitty company. (If you think calling Oracle shitty is FUD, unprofessional or similar, that's fine: see 'Devil's response', above.)
It will stand or fall on its own, as a result of many different peoples' choices. For now, it is enough that something is growing in the niche from which Centos was uprooted.
> Devil's advocate: why should I choose this yet-to-exist distribution over something already existing, such as Oracle Linux?
Because there's a whole ecosystem (HPC and Scientific computing to be exact) which depends on CentOS (not RHEL, not Oracle, not Ubuntu, not Debian) primarily. A CentOS compatible distribution is not some FOSS pride thing.
IBM and RH really landed a sucker punch in this regard.
When you say that they depend on CentOS, are they using something CentOS-specific? CentOS is supposed to be compatible with RHEL (minus the logos/trademarks) and shouldn't have additional fixes or features ("bug for bug, feature for feature" <= CentOS wording :)). No?
CentOS doesn't have to have a specific feature to be preferred over RH. Being free in both beer and speech is important enough. People (incl. us) install 1000+ server clusters with CentOS. The absence of licensing fees allows us to buy more servers. The absence of licensing fees allows "small researchers" to have a verified platform to work with. If you don't have a verified platform, you cannot trust your results.
CentOS carries a legacy from Scientific Linux (which was RH compatible too) and has a lot of software packages developed for/on it. It might be a regular .tar.gz or RPM distribution but, they're validated and certified on CentOS. This is enough. Some middlewares used in collaborative projects (intentionally or unintentionally) search for CentOS signature. Otherwise installations fail spectacularly (or annoyingly, it depends).
I have to run my own application on every platform with a relatively simple test suite which checks results with 32 significant digit ground truth values. If these tests fail for a reason, then I can't trust my application's results for a particular problem. My code runs fast and it's relatively simple (since it's young). Some software packages' tests can run for days. It's not feasible to re-validate a software every time after compilation on a different set of libraries, etc. CentOS provides this foundation for free.
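The validation gate being described is essentially a tolerance check against stored reference values. A minimal sketch of the idea (the numbers are invented; and since 32 significant digits exceeds IEEE double precision, real suites compare against arbitrary-precision references rather than floats):

```python
import math

# Compare computed results against ground-truth reference values within a
# relative tolerance; if any check fails, this build/platform combination
# is not trusted for production runs.
def validate(computed, reference, rel_tol=1e-12):
    return len(computed) == len(reference) and all(
        math.isclose(c, r, rel_tol=rel_tol) for c, r in zip(computed, reference)
    )

reference = [1.234567890123456, 6.62607015e-34]          # invented ground truth
assert validate([1.234567890123456, 6.62607015e-34], reference)
assert not validate([1.2345679, 6.62607015e-34], reference)  # drifted result
```

This is exactly why a fixed, verified platform matters: the same binary on a different libm can shift results past the tolerance and fail the gate.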
I think I understand a little better your point of view. CentOS became so important for the HPC community that most software is now validated against it. So even if RHEL itself were to become free (as in beer), people won't switch to it (or at least be reluctant).
My all personal systems are Debian, however when I install something research related, it's always CentOS. There's no question. I even manage a couple of research servers at my former university. They're CentOS as well.
Moreover service (web, git, documentation, etc.) servers are CentOS too to keep systems uniform even if there's no requirement. So it powers the whole ecosystem, not the compute foundation. That's a big iceberg.
In 2020 why aren't you packaging your apps as containers? Yeah, it sucks that IBM killed CentOS, but depending on some single distro's version of libm or libc or whatever is not their fault, it's yours. Doing your job properly in this case means shipping your deps with your application, and the easiest way to do that these days is with containers.
Assuming based on GP that this is in a HPC environment, there is often a delineation between the people writing HPC software and the people maintaining the clusters and the software installed on them. Telling a brand-new graduate student with zero software development experience to just throw everything into a container results in running code that is not optimized for the hardware it's running on, which in turn negatively impacts the other users competing for compute time on HPC clusters.
There is a movement to incorporate technologies like Singularity into the HPC workflow but for established projects, it often looks like a lot of bikeshedding for negative results compared to just running the code on bare metal.
Because a cluster doesn't work like a normal computer.
Your users don't see the nodes. They submit jobs and wait for their turn in the cluster. A sophisticated resource planner / job scheduler tries to empty the queue while optimizing job placement so the system usage can be maximized as much as possible.
Also, users' jobs run under their own user accounts. You need to isolate them. Giving them access to Docker or any root-level container engine completely removes the UNIX user security and isolation model and runs in Windows 95 mode. This also compromises system security, since everyone is practically root at that point. Singularity is user-mode and its usage is increasing, but then comes the next point.
Performance and hardware access are critical in HPC. GPUs and special HBAs like InfiniBand require direct access from processes to run at maximum performance, or to work at all. GPU access is much more important than containerizing workloads. Docker GPU support is here because Nvidia wanted to containerize AI workloads on DGX/HGX systems. These technologies are maturing on HPC now.
On the performance front, consider the following: if the main loop of your computation loses a second due to these abstractions, considering these loops run thousands of times per core on many nodes, the lost productivity is eye-watering. My simple application computes 1.7 million integrations per second per core. So, for working on long problems, increasing this number is critical.
Last but not least, some of the applications run on these systems have been developed for 20 years now. So these applications are not simple code bases which are extremely tidy and neat. You can't know/guess how these applications will behave before running them inside a container. As I've said, you need to be able to trust what you have, too. So we scientists and HPC administrators tend to walk slowly but surely.
Doing my job properly on the HPC side means my cluster works with utmost efficiency and bulletproof user isolation so people can trust the validity of their results and integrity of their privacy. Doing my job properly on the development side means that my code builds with minimum effort and with maximum performance on systems I support. HPC software is not a single service which works like a normal container workload. We need to evolve our software to run with minimum problems with containers and containers should evolve to accommodate our workloads, workflows and meet our other needs.
Cutting-edge technology doesn't solve every problem with the same elegance. And we're not a bunch of lazy academics or sysadmins just because our systems work more traditionally.
I find it interesting that the argument that "X is FUD" is supposed to carry weight.
It's a bit like if I'm at a party, and I briskly walk up to five people and hit each of them in the face, and then it's your turn and you move away, and I say "what? the idea that I would hit you is FUD".
It's not FUD. It's a pattern of behaviour.
Avoiding overly litigious companies - where other as-good or better choices exist - is not overly cautious, it's just good sense. Where other as-good choices do not exist, it seems perfectly reasonable (depending on your risk profile) to work with others to create the better choice.
Of course, I say all this as someone who has worked in massive multinational corporations and now works in small startups. I'm now likely never going to use Rocky Linux, for exactly the reason you've hinted at: in effect, it is not a use case either of us cares about. But for those people who do need this, I'm very happy that someone has championed the cause.
I haven't seen "using Oracle Linux will result in me being sued".
What I've seen is "Oracle is evil", "don't trust Oracle", and something like "my prior history around Oracle has left such a lasting bad taste that I throw up a little in my mouth every time I touch something with Oracle in it, so I'd rather do almost anything but use something from Oracle, since using it on the daily would inevitably lead to permanent esophagus damage."
I mean... Oracle buying up MySQL was enough for MariaDB to be created and move to being the default. (well, and some of what Oracle did right afterwards).
In an earlier thread, some Oracle guy (not on the Oracle Linux team) mentioned that Oracle Linux 8 actually builds from CentOS 8, rather than RHEL 8. I was a bit skeptical, since OL 8 usually releases much earlier than CentOS 8, but couldn't verify things either way. Someone else mentioned that RH actually only releases the RHEL 8 sources through the CentOS 8 sources. Again, I don't know how to verify that, but if true, it raises a lot of new questions about Oracle Linux 8 given the recent CentOS 8 announcement.
1. On an entitled system, enable the source repos and download the packages.
2. In your account online, you can download the SRPMs for individual packages.
3. In your account online, you can download a minor-version release ISO of the SRPMs.
4. You can use https://git.centos.org to clone the actual RPM patches/spec files, and use the get_source.sh script from the centos-git-common repo to pull the package source tarballs from dist-git (useful for projects like the kernel that don’t use actual upstream as their source).
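As a rough sketch of option 4, the per-package repo and lookaside-cache URLs follow a predictable pattern. The `rpms/` repo URL matches the public git.centos.org layout; the lookaside path below is an assumption modeled on what the helper script in centos-git-common resolves, and the helper functions are hypothetical names, not part of any tool:

```python
# Sketch: build the URLs used to fetch a package's spec/patches (dist-git)
# and its source tarballs (lookaside cache) from git.centos.org.
# centos_repo_url matches the public Pagure layout; lookaside_url is an
# assumption based on how the centos-git-common helper resolves tarballs.

def centos_repo_url(package: str) -> str:
    """Dist-git repo holding the .spec file and patches."""
    return f"https://git.centos.org/rpms/{package}.git"

def lookaside_url(package: str, branch: str, sha: str) -> str:
    """Lookaside cache entry for a source tarball, keyed by its hash."""
    return f"https://git.centos.org/sources/{package}/{branch}/{sha}"

# e.g. clone the kernel dist-git, then check out the c8 branch:
#   git clone https://git.centos.org/rpms/kernel.git && cd kernel && git checkout c8
print(centos_repo_url("kernel"))
```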
With CentOS Stream (particularly C9S, which will be launching mid-2021) and the switch over to GitLab, which will happen in the future, everything will be out in the open in git form.
It's unlikely they would sabotage their only competitive advantage though, whereas Oracle has lots of reasons to maintain an enterprise linux distribution besides just succeeding CentOS.
I am really interested to know why anyone should go with this when Debian or Ubuntu LTS exist. The latter two have not changed their policies in the last decade, and they have a clear path for upgrading. CentOS was always a clear choice for device driver support, but I never understood the stability claims.
RHEL and its derivatives are the only Linux distributions that maintain binary compatibility over 10+ years while getting not only security updates but feature additions where possible.
This is something I don't think the wider community understands, nor do they understand the incredible amount of work it takes to back-port major kernel/etc. features while maintaining a stable kernel ABI as well as userspace ABI. Every single other distribution stops providing feature updates within a year or two. So "LTS" really means "old with a few security updates", while RHEL will run efficiently on your hardware (including hardware newer than the distro) with the same binary drivers and packages from third-party sources for the entire lifespan.
In other words, it's more a Windows model than a traditional Linux distro, in that it allows hardware vendors to ship binary drivers and software vendors to ship binary packages. That is a huge part of why it's the most commonly supported distro for engineering toolchains and a long list of other commercial hardware and software.
That's the value pitch for RHEL, where it's understandable — whether or not you like the enterprise IT model of avoiding upgrades as long as possible, there's a ton of money in it.
I think the gap is the question of how many people there are who want enterprise-style lifetimes but don't actually want support. If you're running servers which don't need a paid support contract, upgrading Debian every 5 years is hardly a significant burden (and balanced by not having to routinely backport packages). There's some benefit to, say, being able to develop skills an employer is looking for but that's not a huge pool of users.
I think this is the reason behind the present situation: CentOS's main appeal was to people who don't want to pay for RHEL, and not enough of those people contribute to support a community. That led to the sale to Red Hat in the first place, and it's unclear to me that anyone else could be more successful with the same pitch.
>who want enterprise-style lifetimes but don't actually want support
But lifetimes are support. Support isn't just, or even primarily, about making a phone call and saying "Help, it's broken." After all, there's nothing keeping someone from taking a snapshot of a codebase and running it unchanged for 10 years. Probably not a good idea if you're connected to the network, but certainly possible.
I was thinking more about _why_ people want that. If you're changing the system regularly, upgrading is valuable because you don't want to spend your time dealing with old software or backporting newer versions. Most of the scenarios where you do want that are long-term commercial operations where you need to deal with requirements for software which isn't provided by the distribution, and in those cases they likely do want a support contract.
I'm not sure there are enough people left who have the “don't touch it for a decade” mindset, aren't working in a business environment where they're buying RHEL/SuSE/Amazon Linux/etc. anyway, and are actually going to contribute to the community. 100% of the people I know who used it were doing so because they needed to support RHEL systems but wanted to avoid paying for licenses on every server and they weren't exactly jumping to help the upstream.
Red Hat bought CentOS in the first place because they were having trouble attracting volunteer labor and I think that any successor needs to have a good story for why the same dynamic won't repeat a second time.
>I was thinking more about _why_ people want that.
I think there are two primary reasons.
1.) A developer wants to develop/test against an x.y release that only changes minimally (major bug and security fixes) for an extended period of time.
2.) The point release model where you can decide when/if to install upgrades is just "how we've always done things" and a lot of people just aren't comfortable with changing that (even if they effectively already have with any software in public clouds or SaaS).
Re: point 2, I don't know how different that is for stable distributions — e.g. if you're running Debian stable you're in control of upgrades and you can go years without installing anything other than security updates if you want.
Re: point 1, I'm definitely aware of that need but the only cases I see it are commercial settings where people have contractual obligations for either software they're shipping or for supported software they've licensed. In those cases, I question whether saving the equivalent of one billable hour per year is worth not being able to say “We test on exactly the same OS it runs on”.
Have you worked in banking or aerospace? Ten years of needed support/stability/predictability is nothing unusual. The old "if it ain't broke, don't fix it" mindset prevails.
That said, if you're really in the position of depending on a free project for over five years of security support, you probably will be totally fine with just ignoring the fact it's out of support. Just keep running Debian 6 for a decade, whatever. The code still runs. Pretend you've patched. Sure, there are probably some vulnerabilities, but you haven't actually looked to see if the project you're actually using right now has patched all the known vulnerabilities, have you?
RHEL kernel versions are basically incomparable with vanilla kernel versions. They have hardware support and occasionally entire new features that have been backported from newer kernels in addition to the standard security & stability patches.
This means that RHEL 7 using a "kernel version" from 2014 will still work fine with modern hardware for which drivers didn't even exist in 2014.
That is not a good thing. RH frankenkernels can contain subtle breakage. E.g. the Go and Rust standard libraries needed to add workarounds because certain RHEL versions implemented copy_file_range in a manner that returns error codes inconsistent with the documented API, because patches were only backported for some filesystems but not others. These issues never occurred on mainline.
And for the same reasons that the affected users chose a "stable" and "supported" distro they were also unable to upgrade to one where the issue was fixed.
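The defensive pattern those standard libraries adopted can be sketched in Python, which exposes the same syscall as `os.copy_file_range` (Linux-only, Python 3.8+). This is a minimal sketch, not the actual Go/Rust code: try the fast path, and fall back to a plain read/write loop on any of the errno values a partial backport might surface:

```python
import errno
import os

def robust_copy(src_path: str, dst_path: str) -> None:
    """Copy a file, preferring copy_file_range but falling back to a
    plain read/write loop if the kernel's implementation is missing,
    unsupported for this filesystem pair, or misbehaving."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        if hasattr(os, "copy_file_range"):
            try:
                while True:
                    # Copies up to 1 MiB per call, advancing both offsets.
                    n = os.copy_file_range(src.fileno(), dst.fileno(), 1 << 20)
                    if n == 0:  # EOF reached, copy complete
                        return
            except OSError as e:
                # Errors a partially backported kernel might return:
                if e.errno not in (errno.EXDEV, errno.ENOSYS,
                                   errno.EOPNOTSUPP, errno.EINVAL):
                    raise
                # Rewind and discard any partial progress before retrying.
                src.seek(0)
                dst.seek(0)
                dst.truncate()
        # Portable fallback: userspace read/write loop.
        while chunk := src.read(1 << 20):
            dst.write(chunk)
```

The irony the comment above points out: users who need this kind of workaround are exactly the ones pinned to the kernel that required it.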
True, but it is a matter of weighing risks. I can't find it now, but I remember a few years ago there was a news story about how an update to Ubuntu had caused hospitals to start rendering MRI scan results differently due to differences in the OpenGL libraries. For those sorts of use cases, stable is the only option.
I think this is a perfect use case for CentOS/RHEL as opposed to Ubuntu: when the machine has only one job and nothing shall stand in its way, i.e. when you expect everything to be bug-for-bug compatible. But I fail to understand why a vendor of an MRI machine charging tens of thousands for installation/support cannot provide a supported RHEL OS, which costs $180-350/yr in the cheapest config [1].
They bought some fancy new computers at work. Our procedures say to use CentOS 7, so we tried it, and it ran like shit. Then we reinstalled with CentOS 8: same. It worked, but the desktop was extremely slow. After much hair-pulling I found the solution: add the elrepo-kernel repository and update to kernel 5.x.
No amount of backporting magic will make an old kernel work like a new kernel.
if you need epel, or quicker life cycles then CentOS Stream should be just fine for you as well
People who run CentOS in prod are normally running ERP systems, databases, LoB apps, etc., and the only thing we need is the base distro, the vendor binaries for whatever service/app needs to be installed, and probably an old-ass version of the JDK...
We need every bit of that 10-year life cycle, and we're glad that we will probably only have to rebuild these systems 2 or 3 times in our careers before we pass the torch to the next unlucky SOB who has to support an application that was written before we were born...
It's the opposite. Plenty of subsystems in the RHEL 8.3 kernel are basically on par with upstream 5.5 or so, as almost all the patches are backported. The source code is really the same to a large extent, and therefore security fixes apply straightforwardly.
Plus, there are changes (especially around memory management or scheduling) that are fiendishly hard to do regression testing on, so they are backported more selectively.
The upstreams for most other packages generally move much more slowly than the kernel's. The fast ones (e.g. X11, systemd, QEMU) are typically rebased every other update or so (meaning, roughly once a year).
It also helps that Red Hat employs a lot of core developers for those fast moving packages. :)
Documented cases don't seem to be common, but what comes to mind is the Debian "weak keys" scandal (2008) and the VLC "libebml" vulnerability (2019) [1]
Agreed, the packages in CentOS / RHEL are all super old. The RHEL license structure changes all the time, and depending on which one you get it may or may not include the extended repos.
Honestly, that support is meaningless in some areas I know. In our data center we have hit problems with old packages, and in the end you will end up with a lot of your own packages. I find Debian to be a good base, and you build the rest yourself. Even though I use Fedora on the desktop, I always have the feeling that Debian is the server choice, one I can extend further.
This is false. Debian provides LTS with a five-year timespan. [1]
And there is even commercial support for Extended LTS now [2]
Also, it's worth noting that Debian provides security backports for a significantly larger set of packages and CPU architectures than other distributions.
Do you trust Debian LTS? As much as RHEL? The documentation about Debian LTS always made me think it is not a fully fledged thing. I've always felt like Debian releases reached EOL on their EOL date, not their LTS EOL date.
> Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
Do you know something I don't? A few years back, Debian changed their LTS policy to 5 years in response to Ubuntu.
> Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
> Thus the Debian LTS team takes over security maintenance of the various releases once the Debian Security team stops its work.
Arguably, no one should be running a server that long in 2020.
I would say a better reason is that while both are Linux distributions, they are distinct dialects and ecosystems. It isn't impossible to switch, but for institutions that have complex infrastructure built around the RHEL world, it is a lot of work to convert.
It's not really about running servers for 10 years. It's about having a platform to build a product on that you can support for 10 years. RHEL software gets old over time, but it's still maintained and compatible with what you started on.
Consider an appliance that will be shipped to a literal cave for some mining operation. Do you want to build that on something that you would have to keep refreshing every year, so that every appliance you ship ends up running on a different foundation?
> Consider an appliance that will be shipped to a literal cave
This.
A decade ago I was technical co-founder of a company [0] that made interactive photo booths and I chose CentOS for the OS.
There are some out in the wild still working and powered on 24/7 and not a peep from any of them.
We only ever did a few manual updates early on - after determining that the spotty, expensive cellular wasn't worth wasting on non-security updates - so most of them are running whatever version was out ten years ago.
The "don't touch it if it's not broken" philosophy is fundamentally at odds with an internet-connected machine.
You either need to upgrade or unplug (from the internet).
There are still places out there running Windows NT, or even DOS, because they have applications which simply won't run anywhere else, or need to talk to ancient hardware that runs over a parallel port or some weird crap like that. These machines will literally run forever, but you wouldn't connect them to the internet. Your hypothetical cave device would be the same.
Upgrading your OS always carries risk. Whether it's a single yum command or copying your entire app to a new OS.
Besides, if you're on CentOS 8 then wouldn't you also be looking at Docker or something? Isn't this a solved problem?
The point is the amount of "touching". Applying security patches to RHEL is still a change, but it's significantly less risky than upgrading a faster-changing system where you might not even get security patches at all for the versions of software you're using unless you switch to a newer major version.
"don't touch it if it's not broken" is not a philosophy, it is a slogan. Some people say it, because it is preferable to them to run old unpatched vulnerable systems rather than spend resources on upgrades. That's just a reality. Some care about up-to-date, some don't. Most people don't really care about security, and some of those don't care even about CYA security theatre. If they did care about security, they wouldn't run unverified software downloaded from the Internet.
What does Docker have to do with this discussion?
I think it is mostly about running servers (standard services that don't change much) for 10 years (and more). You don't need 10 year LTS distribution for building a product. You take whatever version of OS distribution you like, secure local copy should the upstream disappear, and vendor it into your product and never deviate from it.
Of course there are use cases, but _ideally_, most workloads are staged, deployed, and backed up in such a way that it is a documented, reproducible procedure to tear down an instance of a server, rebuild, and redeploy services.
And while it may be cumbersome or cause some downtime or headaches if that isn't the case, I find the very need of doing it once every 1-3 years forces your hand to get your shit together, rather than a once per decade affair of praying you migrated all your scripts manually and that everything will work, as your OS admins threaten your life because audit is threatening theirs.
How many simultaneously running machines can you keep updating with this method? If you run non-trivial workloads for hundreds of customers, this becomes high maintenance system already with two machines. It takes ages to upgrade all applications, then validate everything works, then actually migrate with no downtime.
Honestly, 10 years is a long time for a server. I would be honestly surprised if a server lasted 10 years.
But I agree, I also get the tone of "servers should be cattle and not pets; just kill them and build a new one". Which can also be done on bare metal if you're using VMs/containers. It seems like most people forget these cloud servers need to run on bare metal.
Really? We've colocated our servers for the past 18 or so years.
We have about 40. The oldest is around 17 years old. Our newest server is 9 years old. Our average server age is probably around 13 years old.
The most common failure that completely takes them out of commission is a popped capacitor on the motherboard. Never had it happen before the 10 year mark.
Never had memory failure. Have had disk failures, but those are easy to replace. Had one power supply failure, but it was a faulty batch and happened within 2 years of the server's life.
The last time I worked with a ~8 year old server, it used to go through hard drives at a rate of 1 every 2 months. While we could replace them easily and it was RAID so there wasn't any data loss, I personally would've got fed up of replacing HDDs every couple of months.
Also, most of my experience is with rented dedicated servers and they just give me a new one completely so I never really see if they're fully scrapped.
My read on that is that you should be treating your servers as disposable and ephemeral as possible. Long uptimes mean configuration drift, general snowflakery, difficulties patching, patches getting delayed/not done, and so forth.
Ideally you'd never upgrade your software in the usual way. You'd simply deploy the new version with automated tooling and tear down the older.
I don't get this. If there are many servers, sure. But if it's something that runs on a single box without problem, why on earth should I tear it down?
Also, "running a server for ten years" does not need to mean that it has ten years of uptime. I think that wasn't meant.
If it is connected to the Internet, then I guess kernel hot-patches need to be applied to avoid security issues.
Were hot kernel patches available ten years ago? I remember some company who did this (for Linux), and it was quite a while back, so it's possible. But I doubt it was mainstream.
I recall long ago that SunOS boxes had to be rebooted for kernel patches.
"Ideally" - that's the problem. I have half a dozen long tail side projects running right now on Centos 7, and a few still on Centos 6.
Do you have any idea how much effort it is to change everything over to "treating your servers as disposable"?! It's going to eat up a third (to half) of my "fun time" budget for the foreseeable future!
Exactly, young devs here are completely out of touch with operations. Ideally, something like a standard 1 TB HDD + 32 GB RAM system would be upgraded to a newer OS and app versions by a central tool in 2 hours, but we don't have such FM technology yet.
Rocky is going to be exactly what CentOS was: a free version of RHEL. The reason you would use this vs. Debian or Ubuntu is because you've got systems that need to mirror your production, but you don't want/need enterprise support on them.
When I worked for a hardware vendor, we had customers who ran hundreds of CentOS boxes in dev/test alongside their production RHEL boxes. If there was an issue with a driver, we simply asked that they reproduce it on RHEL (which was easy to do). If they had been running Debian or Ubuntu LTS, the answer would have been: I suggest you reach out to the development mailing list and seek support there.
Whether you like it or not, most hardware vendors want/require you to have an enterprise support contract on your OS in order to help with driver issues.
Because CentOS on enterprise hardware is way more stable than Debian. I've worked for 6 years as a sysadmin for 300+ servers and we migrated everything from Debian to CentOS and our hardware related issues just went away. Overall we had much less trouble in our systems.
That's probably because lots of enterprise hardware is only ever tested and certified to work with RHEL, and in many cases only provided drivers in an RPM format that's intended to be installed in a RHEL-like environment.
Apart from the long support, RHEL based distros also give you built in selinux support. Apparmor exists, but it's not comparable in features and existing policies.
The module itself is provided, yes. The policies are not really integrated into Debian systems. You can adjust them to work, but it's way more work than using ready ones on a RHEL-like system.
> I am really interested to know why should anyone go with this when Debian or Ubuntu LTS exist.
There is a large world of proprietary enterprise software that is tested, developed, and supported solely on RHEL. CentOS (and theoretically, Rocky Linux) can run these applications because they are essentially a reskin of RHEL. Debian and Ubuntu LTS cannot (or at least not in a supported state) because they are not RHEL.
I'm not familiar with Debian; do they have the same infrastructure and documentation quality as RHEL? For example, do they have anything like Koji [1] for easy automated package building?
We used CentOS as dev environments, and RHEL as production. It gave us the best of both worlds; an unsupported but compatible and stable dev environment we could bring up and throw away as much as we wanted _Without_ licensing BS. And when the devs were happy with it, the move of a project to RHEL was easy and uneventful.
And don't even get me started on the 'free' dev version of RHEL. It's a PITA to use, we've tried. It's also why we've halted our RH purchasing for the moment. Sure, it's caused our RHEL reps no end of consternation and stress but too bad. I've been honest with them, and told them that they are probably lying through their teeth (without knowing it) when they parrot the line that RH will have some magic answer for "expanded" and/or "reduced cost" Streams usage in "1st half of 21". That trust died when RH management axed CentOS8 like they did.
For me it's always been about stability and the long term support of a 'free' distribution. That has also historically been their bread and butter which got them wide-adoption.
The branding stuff was a plus to the sys-admins and Linux die hards.
I am simply not a fan of Debian/Ubuntu's utilities, with the big one being the package manager (I like yum/dnf way better), but also other things like ufw vs firewalld.
For packages like Kubernetes or big-data packages, one should not use anyone else's builds. I have been finding problems in Cray's modules, and eventually we ended up using our own builds, which we can reproducibly support using Spack.
I would say for any piece of software, if the vendor themselves provide a package for your distro, use that, not the distro version.
In fact I’ll go a step further and say Windows and macOS got this right, in that third party developers should do the work to “package” their apps.
It would be insane for Microsoft to maintain packages for every piece of software that ships on Windows, but somehow that’s the situation we’re in with Linux.
> It would be insane for Microsoft to maintain packages for every piece of software that ships on Windows, but somehow that’s the situation we’re in with Linux.
And this is why installing e.g. FileZilla on Linux is safe and easy, and doing the same on Windows is neither.
To system administrators and people managing large fleets of servers "stability" usually means "doesn't change much" rather than "doesn't crash". In that sense, RHEL tended to be more stable than Debian / Ubuntu. Though that may change somewhat with Ubuntu's recent 10 year LTS plans.
Agreed. I've used and advocated for RHEL/CentOS at work since version 5 because it was stable and predictable. That's gone now, and many of my users would prefer Ubuntu anyway because it's what they use on their personal machines. So I'm making plans to move all our compute resources to Ubuntu LTS.
I'm wary of doing that, because in the near future Microsoft is likely to take over Canonical. You don't put all your eggs into one basket. Always plan for escape; always have a plan B. Preferably one not relying on crystal-balling the whims of a for-profit corporation. Rocky Linux, Alpine Linux, Debian, Gentoo, BSD, etc.
There have been some interesting observations on HN and elsewhere that Canonical for a long time didn't know what it was doing, starting and cancelling projects, but in recent years it has been lowering its interest in the desktop, focusing more on providing cloud server software and foisting its new products and methodologies on its users (snap). Some people see this as an indication that Canonical is positioning itself to be bought at the best price. It makes sense: Canonical has a large Linux user base but can't make money from it. Microsoft, which is making inroads into the Linux world, is the most likely buyer.
There has been some collaboration between the two; however, MS collaborates with other small companies too, and that speculation never arises. I wouldn't bet that they're uninterested in Canonical, but the desire to buy it has always been overstretched; there is a much bigger chance that they buy some other specialized distro vendor instead (like Google buying Neverware). Ubuntu is too generic in that sense.
Moving a server from Ubuntu to Debian doesn't seem a very arduous task? I've got a box in a rack that came from the factory with Ubuntu installed, but there are Debian addresses in /etc/apt/sources.list.d
Besides isn't that pretty much the exemplar of FUD?
But it is still a task, isn't it? So now you move to Ubuntu, then if things go south, you move to Debian. Or you could move to Debian or another less risky distribution now and likely save some time and energy.
On the other hand, maybe Ubuntu is providing something special that Debian can't do - then it may make sense to go with Ubuntu and maybe even swallow Microsoft's fishing hook if it comes.
Lately it seems to me Debian and Ubuntu have made some strange packaging decisions. They have morphed into a desktop-oriented build with snap packages and auto-updates enabled by default (among other strange decisions). There's a ton of stuff we always end up disabling in a new release because it's super buggy and doesn't work well (I work at a small MSP). I'm not sure who replaced Ian Jackson, but Debian seems rudderless.
Centos was the rational other free choice, not that Red Hat hasn't made other equally strange decisions.
Sometimes I think we'd be better off rolling our own, like Amazon does.
Snap is proprietary and has a fairly broken implementation. It seems impressively good at preventing machines from booting, it pollutes the filesystem namespace (who wants 100 lines in every df?), and it doesn't seem to handle versioning or garbage collection well.
The server side isn't open, and Canonical repeatedly claims wide industry support... despite not having it.
I recommend that the first step on any Ubuntu system you use be to disable snap. Use something portable like Flatpak, which at least has some support, is open source, and seems to have a healthy ecosystem.
What is the vision for Rocky Linux?
A solid, stable, and transparent alternative for production environments, developed by the community for the community.
Hence the name Rocky Linux, I suppose? Solid as a rock.
Although I'll be inclined to think of it as series of movies. Perhaps even split wood before installing Rocky 5?
"Thinking back to early CentOS days... My cofounder was Rocky McGaugh. He is no longer with us, so as a H/T to him, who never got to see the success that CentOS came to be, I introduce to you...Rocky Linux"
Hmm, to me, "rock" has connotations of "solid", but that ending "y" changes the connotations completely: "rocky" makes me think of uncertainty, risk and peril.
As others have pointed out, the name is a tribute to the late CentOS cofounder Rocky McGaugh. It would be sad if it had to be changed. We should perhaps emphasize the CentOS lineage in the name instead.
The fact that a majority of the comments are whining about the name really shows you the worst part about open source: the non-contributing but highly entitled part of the community.
If you don’t like the name, launch your own CentOS replacement. There’s no better time than now. If there’s one thing this project does not need right now, it’s armchair marketing experts.
If you do care about a viable CentOS replacement, do something. Contribute code, money or expertise. The last thing any new and vulnerable project needs is another “idea guy” or a new logo.
I share your disappointment. Out of 150+ comments so far, I believe there has not been a single technical comment about the actual work involved in building a version-pinned RHEL clone.
Without any experience myself (beyond some kernel builds maybe 10 years ago), I gathered (from https://wiki.centos.org/About/Building_8) that the majority of the work involves manually de-branding the RHEL sources. This apparently can't be automated, as it requires human judgement about which packages/files require de-branding and where it might actually break something.
Between major version jumps, e.g. from RHEL/CentOS 7->8 there's apparently also lots of work in getting the build environment up to date.
This raises numerous interesting questions, such as:
1. Where do the upstream RHEL sources live? CentOS sources are in https://vault.centos.org/8.3.2011/BaseOS/Source/SPackages/, but where do they get them upstream? I believe they're only available to RHEL subscribers, does this give RH a way to block clones?
2. Where / what are CentOS's actual build scripts / tools? Is there some howto or writeup how to make a CentOS iso (or cloud-image) on my own PC after downloading the source tree?
3. Do CentOS devs go through the entire manual de-branding exercise with every minor/major update? Presumably they are using some sort of automation/scripting/diffing somewhere. Are these processes/tools available or documented anywhere?
I hope at some point someone with some actual knowledge about this can chime in.
> 1. Where do the upstream RHEL sources live? CentOS sources are in https://vault.centos.org/8.3.2011/BaseOS/Source/SPackages/, but where do they get them upstream? I believe they're only available to RHEL subscribers, does this give RH a way to block clones?
Red Hat publishes its sources on https://git.centos.org. Those are then used to build CentOS Linux packages.
In the future they'll do their development there, and build CentOS Stream and RHEL packages from there.
> Remember, the source code at git.centos.org is basically read only, downstream code from RHEL. That’s how Red Hat complies with the GPL. Technically we go above and beyond because we are only legally required to provide code to customers, and not required to provide code for BSD/Apache/etc licensed code, only attribution.
Regarding question 2, CentOS has a somewhat custom build system for each major version afaik, for 8 this would be https://koji.mbox.centos.org/koji/
> Red Hat publishes its sources on https://git.centos.org. Those are then used to build CentOS Linux packages.
Have you checked if this repo actually works as intended? Because I was wondering if the git repo has RHEL or CentOS sources (or both). So I tried to find out myself instead of just throwing the question out there. It went roughly as follows:
- Let me check the sources of dracut (the initramfs generator) in https://git.centos.org/rpms/dracut. Files: empty, Commits: empty, Forks: empty, Branches and Releases: judging by the names they seem to be CentOS, not RHEL sources. And they're using git.centos.org just as a code dump, not for development. Fair.
I'm not a professional dev, maybe I just don't understand. But is there a way to actually see/browse/download the dracut source code from CentOS 8.3 (let alone RHEL 8.3) from git.centos.org?
The git repository stores only the packaging related items (the specfile, the custom patches, etc.). The actual source is stored as a binary artifact that is downloaded by a `get_sources.sh` script.
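Roughly, assuming the `.<pkg>.metadata` line format ("<checksum>  SOURCES/<tarball>") and a lookaside URL scheme keyed by that checksum (I haven't verified this against the live service, so treat the URL pattern as an assumption), the fetch step looks something like:

```shell
# Hedged sketch of what get_sources.sh does: read the checksum from the
# metadata line and build the lookaside-cache URL for the source blob.
# The URL scheme below is an assumption, not verified against git.centos.org.
lookaside_url() {
  pkg="$1"; branch="$2"; metadata_line="$3"
  sum=$(echo "$metadata_line" | awk '{print $1}')
  echo "https://git.centos.org/sources/${pkg}/${branch}/${sum}"
}

lookaside_url dracut c8 "abc123  SOURCES/dracut-049.tar.xz"
# -> https://git.centos.org/sources/dracut/c8/abc123
```

The actual script then just curls that URL and verifies the checksum of what it downloaded.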
That did it, thanks. From a cursory glance, it looks like it indeed fetches the RHEL (rather than CentOS) sources. The main question for the clone builders will be how RH is going to provide RHEL code drops for their point releases (and updates) in the future, since right now there are separate 8 and Stream branches, but presumably the 8 branch will be discontinued at some point?
The RPM SPEC file in that repo will have a pointer to the actual upstream sources for the package. This is a typical scenario -- they are not re-hosting all of the sources to build a Linux distro, just the build steps needed to pull, patch, and build upstream sources.
So where do I actually get this upstream (RHEL8) source of say dracut? Because I was reacting to the comment "Red Hat publishes it's sources on https://git.centos.org. Those are then used to build CentOS Linux packages."
rpmbuild takes the sources listed in the spec file (fetched beforehand, e.g. by the get_sources.sh script), applies the patches, and executes the build instructions to produce RPM and SRPM packages. The SRPM will contain the "as built" source tree.
RHEL source RPMs can be downloaded from http://ftp.redhat.com/pub/redhat/linux/enterprise. According to your link, CentOS doesn't use the source RPMs since 7 but uses git repos instead. I don't know where the git repos are located, however, or if it is still possible to build the whole OS from just the SRPMs.
And in terms of rebuild ability, RHEL is not a self-hosting distribution. There are missing dependencies and packages that need to be built in order to create a fully serviceable distribution as Red Hat does not ship those packages.
>1. Where do the upstream RHEL sources live? CentOS sources are in https://vault.centos.org/8.3.2011/BaseOS/Source/SPackages/, but where do they get them upstream? I believe they're only available to RHEL subscribers, does this give RH a way to block clones?
How did they use to do it then? I assume they just became a subscriber?
Edit: just thought about it, couldn't they get them from CentOS? Would be pretty funny.
From what I understand[1], up to RHEL6, Red Hat released their sources on public ftp, but after that they became customer/subscriber-only.
I would have thought that the GPL ensures that the customer can then freely redistribute them, but I've read that companies can still make that option very unattractive through other means (e.g. terminating the customer contract, mingling sources with proprietary stuff and making them hard to disentangle). I don't know if RH is playing such games, but the complete lack of non-gated RHEL7 and 8 source code on the web gives some strong hints.
The difficulty is that RHEL isn't a GPLed work. It's a mix of free software under various licenses and logos and other oddball things that are protected by trademark or copyright and are non-free (the non-free part is mostly icons and images identifying the system as RHEL). There isn't a lot of that, but it all has to be separated out to be legal to redistribute. If you go through all the work to do that, you wind up with the equivalent of CentOS. But RHEL's customers don't have the incentive to do that work.
>> I would have thought that the GPL ensures that the customer can then freely redistribute them
I'm actually surprised Red Hat has typically shipped SRPMs in bulk to its customers. I think it's rare that customers would use them, and the GPL allows Red Hat to be far less accommodating: it allows you to charge for each copy of the source code you convey, it doesn't need to be in such a convenient form, and it doesn't need to be available on-demand, only on-request.
Once a customer has it, yes - the GPL allows them to hand it to Rocky Linux and let them run with it. But I think the community has been fortunate so far just to get the distro sources they have in the way they've gotten them.
edit: Perhaps "fortunate" isn't the right word, since Red Hat benefits from the community and owes the community some reciprocation. I'm just saying that legally, if Red Hat wanted to be bigger douche bags, the GPL gives them some space to do so. And I'm glad they haven't fully taken advantage of that before.
The requirement to distribute source means you have to provide the exact source for any binary that you distributed, for three years. And the source is defined as "the preferred form of the work for making modifications to it". A pointer to the upstream tarball plus a poorly organized directory with 58 patches in it isn't "the preferred form of the work...". SRPMs are an easy way to make sure that you got it right, that the source really corresponds to the binary. Attempting to add speed bumps risks getting it wrong and providing a way for pissed-off developers to make trouble.
I think the alternative which the other commenter had in mind was simply providing desired SRPMs to customers reactively upon request, possibly with a charge to cover costs, rather than proactively providing them to all customers in bulk. The GPL certainly allows this, as opposed to a tarball and patches, which you're right is inadequate.
Interesting wrinkle in the GPL's written offer option: the offer must be valid for "any third party", not only direct customers. But it's certainly possible for the included written offer to reference an email address or website that is unique to each paying customer, such that a valid request coming from a third party would be traceable to the intermediate paying customer, whose support contract would then be canceled by Red Hat.
Does anyone know if Red Hat does this kind of watermarking of the GPL written offers, or if a licensee could share the offer anonymously, leak-style, and not get caught by the support contract people?
I don't disagree, but I've seen other open-source companies view the tarball as their source release and I wouldn't be the least bit surprised if you could at least make enough of an argument to get a judge to go along with it long enough to make it prohibitively expensive to reverse.
It's very true and for a basic reason - one minute spent doing is one minute taken away from talking/writing.
This is the reason why many excellent projects remain obscure while marketing-driven products become famous... and often take credit for other people's ideas.
Irony being they (Ian and Deborah) split up, Ian quit Debian and worked for Sun (arguably a competitor back then), Docker, ..., and unfortunately committed suicide.
I'm confused why everyone is complaining about the "Rocky" part, which is a nice tribute and sounds pretty decent, when the actual problem is the "Linux" part. It should really be called "Rocky GNU/Linux", or "Rocky GNU+Linux", because the Linux kernel is only one component of a complete GNU-based operating system compatible with the POSIX standard.
First, obvious troll is obvious. Second, complaining that the "GNU" part is awkward to pronounce when it's a regular English word unlike Linux amuses me. Linus Unix to Linux will forever haunt free software.
Right, infamous copypasta... forgive the pedantry, but the English word "gnu" (referring to the mammal) is pronounced "noo", while the non-English word "GNU" is pronounced "gah-noo"[0], as OP correctly pointed out.
It maybe shouldn't, but it always amazes me how much developers (assuming that is who they are) will subject other developers to the same bikeshedding and minutia-type hassles... that suck for everyone.
It's all grounded human nature I'm sure, but man it is terrible.
We all are frustrated by that behavior ourselves, why do it to other people?
It's even more fundamental than human nature. Bikeshedding is better understood as an emergent behavior of groups of people than it is a thing that individuals do.
Bikeshedding-type comments come to dominate conversations because they take less time and effort to compose and express, and therefore tend to get in ahead of and (especially in a synchronous communication medium) crowd out more substantive contributions. While the expertise and temperament of the individual conversation participants may be contributing factors, the dominant one is simply the size of the group.
I was reading some discussion where someone was writing some part of a website for a product. They chose a framework and the whole thing was spammed with 'why does it have to be x' type contributions. No help, no technical discussion, just these empty one off sentiments spewed at the person who has to actually do the thing. It was just sad.
Don't know if it has been mentioned, but the name Rocky is from Rocky McGaugh, co-founder of CentOS. He passed away and they named the new effort after him.
First impressions and names matter. As much as we don't like it, human beings are irrational and emotionally driven. There's nothing wrong with discussing a name.
Really? My first thought was the "Rocky Mountains" which seem like an apt metaphor for something slow, stable, and rock-solid. The logo even looks like a mountain peak.
My second thought was Rocky Balboa... but I figured that probably wasn't right.
No, @hnarn is right. And you don't really "get" this until you start maintaining an OSS repo (I speak from experience). For years, I was very critical of Linus Torvalds and his brusque attitude. Then I started a few moderately-popular OSS projects, and the entitled masses started pouring in.
After a while, it's hard to refrain from telling people to just screw off.
I think this is one of those cases where one might say the customer (or user, in this case) is always right. But there are always customers who will just find something to complain about, and often (like when the product is a free community service) they're simply not worth having as customers.
edit: Personally, I've been a heavy user and supporter of CentOS but I've almost never referred to it as CentOS. Because the point is to be binary compatible with RHEL, and it's then naturally almost the same thing as Oracle Linux and Scientific Linux, etc. So I would simply refer to "EL5" or "EL6" in code or other places. This probably won't be any different.
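To make the "EL5"/"EL6" habit concrete: scripts often branch on the EL major version rather than the distro name, since the clones are interchangeable at that level. A minimal sketch (the release-string format is an assumption based on typical /etc/redhat-release contents):

```shell
# Hedged sketch: derive an "elN" tag from a redhat-release style string,
# so the same script treats CentOS, RHEL, Oracle Linux, etc. identically.
el_major() {
  # e.g. "CentOS Linux release 8.3.2011 (Core)" -> "el8"
  sed -n 's/.*release \([0-9][0-9]*\).*/el\1/p' "$1"
}

echo "CentOS Linux release 8.3.2011 (Core)" > release.txt
el_major release.txt   # -> el8
```

RPM spec files do the same thing with the `%{?rhel}` macro instead of parsing the release file.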
Pffft. Imagine coming to a forum where people commonly give their opinions only to accuse them of "whining" and "entitlement" when they simply give an opinion that you don't like.
As if questioning whether a project name is any good suddenly equates to an application to be an "idea guy" or a logo designer.
If you don't like the majority opinion about something as trivial as a name, stop visiting this forum. Does that advice sound familiar? Perhaps it doesn't feel fair for someone to offer you only one of 2 extremes? Hmmmm.
The last thing this forum needs is another open source warrior patronizing everyone for participating here.
Down to death your comment goes. That’s what you get for complaining about complaining about complaining about the name. Because obviously complaining about complaining adds much to the discussion, while complaining about those complaints about complaining draws away from it⸮
- Fedora does its thing informed by but somewhat independently of RHEL.
- Red Hat chooses a Fedora release to be the base of RHEL, forks it, and starts working on it.
- This eventually becomes RHEL X.
- Red Hat then forks RHEL X to create the RHEL X.0 Beta and eventually the RHEL X.0 release. RHEL X keeps getting work done on it, which eventually leads to another fork which creates RHEL X.1 Beta and RHEL X.1.
- After each RHEL X.y is released CentOS starts the process of rebuilding it from the sources and tracking upstream changes.
The new model puts CentOS where RHEL X is and so RHEL X.y are actually forks of CentOS.
This change matters a lot to you if you care a lot about the difference between the minor releases of RHEL, because there won't be a CentOS 7.1 or a CentOS 7.3, just CentOS 7. If you just yum update on CentOS then you probably don't care, since by default it will move you up minor versions. You have to try to stay on a specific minor version.
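For those who do deliberately pin, the usual trick is to point yum at the frozen point-release tree on vault.centos.org instead of the rolling mirrors. A sketch (repo name and exact vault path are assumptions based on the vault's directory layout):

```shell
# Hedged sketch: pin a CentOS 7 box to the 7.6.1810 point release by
# adding a repo that targets the archived tree. Paths assumed from
# vault.centos.org's layout; adjust for your release.
cat > CentOS-Vault-7.6.repo <<'EOF'
[C7.6-base]
name=CentOS-7.6.1810 - Base
baseurl=http://vault.centos.org/7.6.1810/os/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
gpgcheck=1
enabled=1
EOF
```

You'd then drop this into /etc/yum.repos.d/ and disable the standard mirrorlist repos, at the cost of no longer receiving security updates for that minor version once it's archived.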
What's nice about this change is that anyone can peel off releases from CentOS the same way Red Hat will do to make RHEL and new features become available when they're ready instead of being batched.
There is a use-case for CentOS Stream, and if all Red Hat did was announce CentOS Stream and keep CentOS proper, NO ONE would have any issue with that.
There is also a use case for a production fork of RHEL as well. That's now gone. People who migrated to CentOS 8 because they thought they were getting a decade of support - that's now gone.
So what are you arguing, that the second group somehow doesn't get it?
I can’t defend cutting support for CentOS 8. That’s super shitty and I don’t really understand the move.
The part I don’t think people really get is that if your goal was to have a fork of RHEL that was as close as possible to RHEL itself in absolute value that CentOS Stream is much better than CentOS is/was. CentOS always tracked far behind RHEL and now CentOS Stream will track closely in front of RHEL.
CentOS will be useless as a replacement for RHEL. Without the guarantee of binary compatibility, any CentOS Stream update may break your locally installed applications.
And I only recall CentOS significantly trailing RHEL at the major version updates (e.g. 6 and 7). Other updates seem pretty timely, and the major version lag doesn't leave me vulnerable.
I can see this being useful for developers who are building something that needs to be compatible with the next major release of RHEL, but I'm not sure who else it will be useful for.
I replied to you in another thread but nonetheless CentOS Stream isn't going to break your binary compatibility for the same reason that RHEL 7.3 doesn't break binary compatibility with RHEL 7.2. CentOS Stream is spiritually always the next minor release of RHEL.
Unless you're the kind of person who pinned to a specific minor version of CentOS (which isn't the default and not supported for very long) you can use CentOS Stream exactly the same as you currently are and it will be a strict improvement for you. Bugfixes, security updates, and new features will come to you before they're either batched for release in the next minor version of RHEL or back-ported to the current supported releases.
>bugfixes, security updates, and new features will come to you before they're either batched for release in the next minor version of RHEL or back-ported to the current supported releases.
Security fixes are not coming to CentOS Stream first. That's been in the announcement.
They do specifically mention that some fixes may come to RHEL first.
I'm sure they'll try not to break binary compatibility, but as it appears to be somewhat experimental and targeted to developers, breaking updates may occur. Isn't that the point of this distro -- so such testing can take place before updates are rolled into RHEL?
So, fine for a developer workstation, but I don't see how it can be stable enough to use in production.
RHEL requires expensive licenses. CentOS was RHEL without the RedHat branding and without the expensive licensing.
By design, there was a nearly complete overlap between RHEL and CentOS. By "repurposing" CentOS into a "rolling release", RedHat (IBM) has broken the overlap so CentOS (free licensing) no longer competes directly against RHEL (expensive licensing).
This is so misinformed it's funny. CentOS and RHEL will now be compatible down to the compiler flags, since RHEL minor releases will now just be point-in-time forks of CentOS with security fixes and backports from, you guessed it, CentOS.
CentOS and RHEL will only be exactly the same at the moment when RHEL is a point-in-time fork of CentOS. As soon as RHEL forks from CentOS, CentOS will roll forward and will no longer be exactly the same as RHEL.
Previously, CentOS was a rebuild of RHEL. In between RHEL releases, CentOS was exactly the same as RHEL. When RHEL had a release/fix/backport, CentOS trailed until it was rebuilt from the new RHEL source.
The "old" CentOS was exactly the same nearly always (nearly perfect overlap) and the "new" CentOS is exactly the same nearly never (almost no overlap).
You act like RHEL 7.1 is a fixed artifact — it’s constantly receiving updates, security patches, and backports. And CentOS always trails behind on those updates so it’s never exactly the same as RHEL either.
This change makes CentOS so much closer to RHEL that it’s weird that people are acting like the opposite is happening.
That's the whole point -- it's continually receiving updates that never break binary compatibility with existing apps/packages. For example, it's a safe target for vendors to target with binary packages, whereas CentOS stream won't be.
That's true of all RHEL major versions. You can safely target RHEL 6 or RHEL 7 without having to worry what minor version they might be running. The same will be true of CentOS Stream which is the upstream for the next minor release of RHEL. CentOS Stream isn't going to suddenly jump major versions.
If the current RHEL release is 7.x then you can think of CentOS Stream as 7.(x+1). You don't have to worry about it suddenly being 8.0. Fedora plays the role of the future RHEL 8.0.
I got the same impression. Do you think it would be too late to change the name to "Rock Linux" or "Rock Solid Linux"?
Since branding is so important, this really seems like something that should go through focus groups or be crowd-sourced somehow? Maybe a poll on Hacker News? Unfortunately, I really think changing the name would be critical to the success of the project. Imagine you are a technical lead and you have to convince your boss to switch from RHEL to Rocky Linux...
Edit: I know the name is in tribute to Rocky McGaugh, but I still think "Rock Solid Linux", "named in tribute to CentOS co-founder Rocky McGaugh", would make the project more successful
One disadvantage of "Rock Solid Linux" is that it shortens poorly. I think "Rock Solid" or "RSL" would be ambiguous. On the other hand I think "Rocky" sounds nice and unique (at least in this space).
Also I feel naming it "Rock Solid Linux" would be a bit gaudy or arrogant.
Given we are talking of branding, did you both completely miss the logo? To me it's clear the name is in reference to mountains, as in the Rocky Mountains.
Me too, but unfortunately not all 'corporate' software runs or finds all the libs on Debian-based distros... which sucks, I know, because I too find this whole 'corporate blessed Linux' thingy really unproductive.
I work on servers used in VFX rendering pipelines and some software will simply refuse to properly run if I don't give it at least a CentOS. And if I do manage to get it running on Debian/Ubuntu, automatically I'm voiding the company's support because I'm using an 'unsupported Linux distribution'.
And yeah, even though we have the platform agnostic VFX Reference Platform, pretty much everything in this industry revolves around RHEL/CentOS. Except SideFX, they are a golden exception, and explicitly support non-Red Hat family distributions.
I doubt how successful this will turn out to be because of the following reasons:
- The old CentOS had a brand value, which Rocky Linux has to earn back all over again.
- The old Red Hat was nice to CentOS or at least wasn't particularly hostile. That does not mean IBM will be nice too.
- It may be too short a notice for current CentOS users to wait for Rocky Linux to come through. They may already move away to other alternatives like Debian or Ubuntu or Amazon Linux or whatever fits their use case.
- If because of some miracle, Rocky Linux turns out to be just as successful as CentOS, there is a chance that either Red Hat or a competitor will end up taking control of it too. Corporate sponsorship is too lucrative to decline. So, it will end up with the same fate as CentOS.
I don't see how it would be too short a notice for current CentOS users, really. At least for the majority of users running CentOS in their production systems and relying on long-term support. It's not like they just go and trash their production setup the very moment the distro's lifespan is shortened. I'd expect the opposite actually, since you're looking for something well supported.
> So, it will end up with the same fate as CentOS.
Same fate as CentOS, let me think... Can I replace a few system packages to convert it to a binary-compatible, patchable, community-supported distro, just differently branded? Okay, count me in.
Just like mysql had value, but much of the open source world has just moved on to MariaDB instead.
CentOS users by nature want to stay on their current systems as long as possible. Many would have still been on CentOS 7. I doubt any will have moved away from CentOS already.
Rocky Linux is, I would say, emphatically not a Linux distro for the desktop. This is unrelated to your rhetorical question, but I think your exasperation at "yet another OS" is a bit undeserved here.
This fills a specific need for "enterprise" customers, specifically, being very slow and very stable. It's not supposed to be a consumer OS for doing normal desktop activities on, although there's nothing stopping you from using it as such.
This isn't yet another slightly tweaked fork of Ubuntu or Arch Linux or some such, where maybe that attitude is more deserved.
In terms of DEs, more focus and hard work goes into building something "cool" than something stable and less buggy. I would have been happy with GNOME if the base OS were stable. Yes, KDE turned out to be better, but that was the least important of all cases. Lots of work hours have gone into making XFCE, LXQt, KDE, GNOME, Guix, Cinnamon, MATE, etc. The choice argument is futile if I or someone else cannot be productive in it and instead spends time in Linux forums. 100 choices will never make open source more popular. You just need 1 damn good choice that "just works". Time and energy are precious.
As much as Windows is criticised, you will rarely have any issues with it in terms of hardware compatibility.
Coming to Rocky Linux, this is a server-side offering. Plus its model is different from the general desktop Linux you use day-to-day.
Fortunately this is a server distro, and there definitely aren't enough of those around (stable trustworthy ones).
While your experiences with the desktop are unfortunate, on most common laptops (cheap and expensive ones) most Linux distros run well, even if some very specific devices have issues with the kernel drivers.
It is less a standalone distro and more a free version of Red Hat Enterprise Linux.
> Nouveau glitches.
Then why are you using it? As much work as people are putting into nouveau, they have to work more often than not against NVIDIA instead of with. If you want a system that works just use the proprietary driver.
> Why can't people just band together and create one good Linux distro for the desktop.
In this case? Commercial interest: people used CentOS in production instead of buying Red Hat Enterprise Linux. So the people in charge decided to make it useless for that. Now we have Rocky to do the same, just with people in charge who are not financially connected to Red Hat.
As many people can attest, popular Linux distributions (like Ubuntu or Fedora) work just fine on select hardware, like ThinkPad X series, or Dell XPS. But Linux on desktop is fundamentally a tinkerer's OS: great if you like tinkering and flexible when you need it but otherwise a waste of time for a lot of people :) I like tinkering and use Linux ~exclusively but don't recommend it to most of my friends.
This distro is meant for servers, or maybe long term stable desktops you’d want in a classroom or something like that. People running servers don’t need any of the stuff you mentioned.
I didn’t say you can’t use it on a desktop, just that it’s not really what it’s meant for. The issues outlined above, like sleep, Bluetooth, etc. aren’t the main things focused on.
I agree it is a problem. I think there are several reasons:
1. It's a mountain of work and loads of people involved are doing it in their limited spare time.
2. It's even more of a mountain of work because Linux developers have to write all the hardware drivers themselves too. On Windows drivers tend to be written by device manufacturers but that rarely happens on Linux because it's very difficult to write closed source drivers and they have fewer Linux users anyway.
3. A large proportion of Linux users and developers have drunk the Unix kool-aid and think that everything should stay exactly as it was in the 70s. Text based config files, services controlled by Bash scripts, etc. It's pretty much impossible to make a reliable modern system with Bluetooth, WiFi, external displays, hotplugging, etc. with that attitude.
4. Hardware makers only test on Windows so some of the bugs in stuff like suspend are probably hardware bugs that Windows happens not to trigger.
Text-based config files are one of the best features of Linux. Duplicating and managing Windows software configurations is a nightmare by comparison. Making a reliable modern Linux system is pretty easy; usually you don't need to do anything much beyond installing, but if you do, the solid reliable text config and service management files make it easier than any other OS I've used.
They're really not. They prevent you from doing things like automatically responding to config changes, or changing settings programmatically, e.g. from a GUI or installer.
1. How others prioritize their time and how limited it is is an interesting call to make. Are you familiar enough with any of the contributors to know how much "spare" time they have and how they use it?
2. See above.
3. I am locked to a Windows desktop for work, but support Centos servers for backups and such. What is wrong with text-based config files? Are GUI checkboxes a better option? They may be more discoverable, but seem to me to be less configurable. Have you read the Unix Haters Handbook? Unix's greatest flaw and greatest strength are its flexibility.
4. I can't speak much to this, but aren't some manufacturers testing on Linux?
The next CentOS will be where the core developers who are actually contributing to project will move. However, branding does matter if you want enterprise following. Rocky does sound unprofessional.
The branding here I think is a big issue. The name "Rocky Linux" sounds too homebrew and unprofessional. CentOS sounds Enterprise-ish. I find it hard to believe any corporate client would take it seriously.
"Thinking back to early CentOS days... My cofounder was Rocky McGaugh. He is no longer with us, so as a H/T to him, who never got to see the success that CentOS came to be, I introduce to you...Rocky Linux"
— Gregory Kurtzer, Founder of Rocky Linux and Co-founder of CentOS
The reputation behind a name is earned, not given. CentOS sounds fine to you because it’s been around for years and years and has built a solid reputation, it didn’t become what it was because of a “good name”. I don’t care if the distro is called Bubblegum Naruto as long as it’s stable and reliable.
I actually think CentOS sounds pretty unprofessional (although familiar), while Rocky Linux sounds unfamiliar although at least has a meaningful inspiration. I would bet it could come to be equally as esteemed as a product called "CentOS" (or even "Red Hat") eventually.
CentOS is Community Enterprise OS. Enterprise is right in the name.
I just imagine going to the big boss and saying, "We're moving to Rocky Linux" is going to be a tougher sell based on the somewhat juvenile nature of someone's first name with a Y on the end of it.
CentOS is Community Enterprise OS. Enterprise is right in the name.
"Enterprise" isn't right in the name. "ent" is.
Until a few weeks ago, I didn't know what the "Cent" part of "CentOS" meant, mostly because I didn't care. I somewhat assumed it meant the same as "penny".
Rocky Linux actually sounds good to me. Like a solid, rocky foundation on which to build your house.
True, but is that really such an advantage? Consider telling the big boss: "We're moving to CentOS, but don't worry, the 'ent' actually stands for Enterprise".
And regardless I am not really sure how I feel about the professionalism of backronyms in general.
no, the tough sell is telling your boss that you'll have to switch to paying Microsoft-range licenses for your server farm to an IBM subsidiary, whose name and logo are - literally - references to some random guy's clothing.
Lots of decisions are made for arbitrary and, frankly, obtuse reasons. Often on a whim. I would just like to give the new distro the best chance to succeed.
Clearly I seem to be in the minority, however. I guess time will tell.
I've been using CentOS for at least a decade and only learned today that it was short for Community Enterprise OS.
I think, especially around Linux and open source projects, people care much less about naming than we tend to believe. Product reputation and the endorsement of IT/developers matters far more, in my experience, than a name.
The only reason you think 'Rocky Linux' sounds unprofessional is that you are unaccustomed to hearing it in a professional context. Before I read where the name came from, I assumed it was named after the mountain range. Would that really be so strange, considering CPUs get named after lakes, or macOS releases after cats or regions of California?
It is as resilient as cockroaches? I don’t see the problem. Postgres uses an elephant as a mascot because “elephants don't forget”. What’s the difference?
We are literally on the web. Look out for spiders!
I agree, don't get me wrong. But look up every single thread on CockroachDB to hit HN, and there's an inevitable subthread of people whinging about the name.
>The branding here I think is a big issue. The name "Rocky Linux" sounds too homebrew and unprofessional. CentOS sounds Enterprise-ish. I find it hard to believe any corporate client would take it seriously.
I agree. I mean, who would take something called "Red Hat" seriously?
Googol.com was older, and in the early days of Google featured a prominent "you're probably looking for these guys, not this site" on their front page. (It was, at the time, a math site dedicated mostly to the number.)
>The branding here I think is a big issue. The name "Rocky Linux" sounds too homebrew and unprofessional. CentOS sounds Enterprise-ish. I find it hard to believe any corporate client would take it seriously.
That seems a pretty specious argument for not using a product.
Especially since the name was used to honor one of the founders of CentOS, who is no longer with us.
That seems like a pretty good reason for a name to me.
What's more, I'd be more concerned about functionality than a name. But that's just me.
I'm quite sure some users have not adopted DDG because of the name. "I'm just going to duckduck the [search term]" does not roll off the tongue as well as "I'm just going to google the [search term]". For most people, "google" does not evoke the word googolplex, but "duck" certainly refers to the bird.
CentOS sounds amateur enough to me, and besides, that's only a problem for cheap businesses that treat CentOS as RHEL with a piracy waiver. Go pay for RHEL if you are looking for something professional.
Oh, that's what you thought of! I was confused as to why people take offence at Rocky.
As a Calgarian, the first thing I thought of was the Rocky Mountains. Due to their proximity, you see the name in so many things and places here. For example, the area surrounding Calgary is called Rocky View County.
Ubuntu is a Nguni Bantu term meaning "humanity". It is often translated as "I am because we are" or "humanity towards others" (in Zulu, "umuntu ngumuntu ngabantu"; in Xhosa, "umntu ngumntu ngabantu"), but is often used in a more philosophical sense to mean "the belief in a universal bond of sharing that connects all humanity".
When they replied, what I originally had written was "Why is it bad branding?". I thought it would be obvious from the parent comment that I was talking about Ubuntu, but I think they thought I was talking about "Rocky Linux".