
I've had the incredible displeasure of having to maintain multiple massive legacy COTS systems that were once designed by promising startups and ultimately got bought by IBM. IBM turned every last one into the shittiest enterprise software trash you can imagine.

Every IBM product I've ever used is universally reviled by every person I've met who also had to use it, without exaggeration in the slightest. If anything, I'm understating it: I make a significant premium on my salary because I'm one of the few people willing to put up with it.

My only expectation here is that I'll finally start weaning myself off terraform, I guess.



> Every IBM product I've ever used is universally reviled by every person I've met who also had to use it

From my time at IBM and at other companies a decade ago, I can name examples of this:

* Lotus Notes instead of Microsoft Office.

* Lotus Sametime Connect instead of... well Microsoft's instant messengers suck (MSN, Lync, Skype, Teams)... maybe Slack is one of the few tolerable ones?

* Rational Team Concert instead of Git or even Subversion.

* Rational ClearCase instead of Git ( https://stackoverflow.com/questions/1074580/clearcase-advant... ).

* Using a green-screen terminal emulator on a Windows PC to connect to a mainframe to fill out weekly timesheets for payroll, instead of a web app or something.

I'll concede that I like the Eclipse IDE a lot for Java, which was originally developed at IBM. I don't think the IDE is good for other programming languages or non-programming things like team communication and task management.


The green screens tend to be much quicker and more responsive than the web frontends that are developed to replace them.

I've seen a lot of failed projects for data entry apps because the experienced workers tend to prefer the terminals over the web apps. Usually the requirement for the new frontend is driven by management rather than the workers.

Which is understandable to me as a programmer. If it's a task I'm familiar with, I can often work much more quickly in a terminal than I can with a GUI. The assumption that this is different for non-programmers, or that they are all scared of TUIs, is often mistaken. The green screens also tend to have fantastic tab navigation and other keyboard navigation functionality that I almost never see in web apps (I'm not sure why, as I'm not a front-end developer, but maybe somebody else could explain that).

I'll defend green screens all day long. Lots of people like them and I like them.

Everything else you listed I would agree with you about being terrible and mostly hated though.


I second the TUI argument here.

Back in ... maybe 2005 or what, in our ~60 people family business, I had the pleasure to watch an accountant use our bespoke payroll system. That was a DOS-based app, running on an old Pentium 1 system.

She was absolutely flying through the TUI. F2, type some numbers, Enter, F5 and so on and so on, at an absolutely blistering speed. Data entry took single-digit seconds.

When that was changed to a web app a few years later, the same action took 30 seconds, maybe a minute.

Bonus: a few years later, after we had to close shop and I moved on, I was onboarding a new web dev. When I told him about some development-related scripts in our codebase, he refused to touch the CLI. Said that CLIs are way too complicated and obsolete, and expecting people to learn that is out of touch. And he mostly got away with that, and I had to work around it.

I keep thinking about that. A mere 10 years before, it was within the accepted norm for an accountant to drive a TUI. Inevitable, even. And now, I couldn't even get a "programmer" to execute some scripts. Unbelievable.


Not just accountants. I remember watching fully “non-technical” insurance admin / customer service people play the green screen keyboard like they were concert pianists. People can cope with a lot when they have to.


There is a learning curve, but it's not coping. One of the great things about terminals: with experience you can type ahead; even before the form has fully opened you can type data, which is queued in the input buffer, and work efficiently. In a modern GUI application a lot of time is wasted reaching for the mouse, aiming, and waiting for the new form to render. That requires coping.


I had to interact with a Windows program that let you collect data with a digital form. We used it to digitize paper-based surveys by mapping free-form questions to a list of choices.

The best part was that it was entirely keyboard driven. If you can touch type, you can just read the paper and type away. The job was mind numbing, but the software itself was great.


Case in point: the aforementioned accountant obviously hated the new GUI-based app, exactly because of what you said. Aiming the mouse, looking for that button, etc. slows you down.


It doesn't have to. The tab order and shortcuts are there and very usable... if anyone bothers to implement them.


Not only implement, but implement them consistently and make users aware.

Consistency is a thing. Old Windows apps often followed a style guide to some degree; that was lost with the web (which is also hard, as style guides differ between systems, like Windows and Mac), and it never came close to mainframe terminal things, where function keys had global effects.


Indeed. One of the things I keep having to tell younger people is: “webapps have no HIG!”

All of the major platforms have a HIG that tells developers how to maximize the experience for users. Webapps have dozens of ways to do things like “search”. Those who never developed for a platform with a HIG do not value it and keep reinventing everything.


In a native single-threaded UI, you can type ahead too. But it doesn't work on the web unless the page effectively reimplements an input queue.
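A minimal sketch of what that reimplementation looks like, assuming a hypothetical `TypeAheadQueue` class (nothing here is a real library API): keystrokes that arrive before the form mounts are buffered, then replayed in order once a handler attaches.

```typescript
// Hypothetical sketch: what a web page would need to reimplement to get
// the type-ahead behaviour a terminal's input buffer gives you for free.
type Key = string;

class TypeAheadQueue {
  private buffer: Key[] = [];
  private target: ((k: Key) => void) | null = null;

  // Fed from a global keydown listener while no form is mounted yet.
  push(k: Key): void {
    if (this.target) this.target(k);
    else this.buffer.push(k);
  }

  // Called when the form finishes rendering: drain the backlog in order.
  attach(handler: (k: Key) => void): void {
    this.target = handler;
    for (const k of this.buffer) handler(k);
    this.buffer = [];
  }
}

// Usage: keys typed "too early" still land in the field.
const q = new TypeAheadQueue();
q.push("4");
q.push("2");                       // user types before the form exists
let field = "";
q.attach((k) => { field += k; });  // form mounts, backlog replays
q.push("!");                       // live input keeps flowing
console.log(field);                // "42!"
```

The terminal does all of this in the kernel's tty input buffer; on the web every page has to rebuild it by hand, which is why almost none do.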


I worked at Best Buy as a high school teenager just before they switched the green screens to some GUI monstrosity. Everyone in the store had to learn how to use the green screens (sales people, cashiers, techs, stockers - everyone) and after a few weeks / months you would get CRAZY fast.

A few years later in college I worked there again and by that point they'd transitioned to a much slower GUI that basically just wrapped the underlying green screen system. The learning curve was slightly better, but it wasn't nearly as fast.

Purpose-built mainframe-based TUIs were amazing. We lost a lot in pursuit of colored pixels.


I wouldn't say "cope"; the green screen stuff has predictable field input and predictable rules around selecting elements.

Despite its obvious downsides, for people who do regular form input and editing, it's often better than the flavor of the day web framework IMO

I mean, I wouldn't choose to use it, but I get it


I was at a ticket window buying concert tickets a couple weeks ago and was surprised to see the worker using the Ticketmaster TUI / Mainframe interface. She flew through the screens. The same experience on the Ticketmaster website is awful.


Things have changed back though - the CLI is hot again, at least amongst developers.


I find it ironic that we developers prefer to use CLI because it's quick, efficient, stable, etc., but what we then deliver to people as web apps is quite the opposite experience.


It's what the default is. TUIs default to fast, stable, high-information-density, so you have to do real work to make them otherwise. And I say this next part as primarily a front-end developer the past few years: web apps default to slow, brittle, too-much-whitespace "make the logo bigger" cruft, and it takes real work to make them otherwise.

At the end of the day most people are lazy and most things, including (especially?) things done for work, are low quality. So you end up with the default more often than not.


Easier to sell the initial impression for "modern" web apps (shiny, easy-to-learn, low-skill-ceiling) vs the actual performance of TUI/"desktop" apps (mundane, effortful-to-learn, higher skill ceiling).

Maybe someone has examples of web apps also made for a high skill ceiling?

I've heard Linear and Superhuman do something like that while maintaining a nice interface, but I've never used them.


in my experience, many managers tend to try to dumb products down as much as possible, to make it work for the most people. the problem is that this, together with the usual bad ui/ux, makes the product inefficient to use, especially for power users.

then, every couple of years, a startup tries to carve out a niche by making a product that caters to power users and makes efficiency a priority. those power users adopt it and start to recommend it to other regular users. this usually also tends to work quite well because even regular users are smarter than expected, especially when motivated. thus the product grows, the startup grows and voila, a tech giant buys it.

now one of the tech giants managers gets the task to improve profits and figures out, the way to do this is to increase the user base by making the product easier to use. UX enshittification ensues, the power users start looking out for the next niche product and the cycle starts anew.

rule of thumb: if the manager says "my grandma who never used a computer before in her life must be able to use it", abandon ship.


An application I used to deal with was similar, but with a somewhat quirky developer, who would deliberately flip between positive/negative confirmation questions, e.g.:

- Confirm this is correct? (Yes=F1, No=F2)

- Would you like to make any changes? (Yes=F1, No=F2)

And maybe sometimes flip the yes/no F-key assignments as well.

In theory this was done to force users to read the question and pay attention to what they were doing, in practice, users just memorized the key sequences.


Ah just randomly pick between F1 and F9 for the two questions and don't necessarily put them in order. Yes=F7, No=F3

/s


We had a Tower of Babel collapse when we switched to web UIs. We gained a million things and lost a million things. There was an era from around 1985 to the early 2000s where a large majority of applications had a (somewhat) consistent UI, based partially around MS Windows and partially around some IBM 'common UI' design guide principles. The hallmarks of it were:

* keyboard navigation was possible

* mostly consistent keyboard nav

* a common, limited set of UI controls with consistent behaviour

* for serious applications, some actual thought went into how the user was supposed to navigate through the system during operation (efficiency)

Post-web and post-9/11, where browser UI has infested everything, we are now in a Cambrian explosion of crayon-eating UI design.

It seems our priorities have been confused by important things like 'Hi George. I just noticed, that for the admin panels in our app, the background colours of various controls get the wrong shade of '#DEADBF' when loading on the newest version of Safari, can you figure out why that happens?'. 'Oh, and the new framework for making smushed shadows on drop-downs seems to have increased our app's startup time on page transitions from 3.7 seconds to 9.2 seconds, is there any way we can alleviate that, maybe by installing some more middleware and a new js framework npm module? I heard vite should be really good, if you can get rid of those parts where we rely on webpack?'


These days most web apps aren’t written to take advantage of the browser’s built-in tab navigation, and unless the dev is a keyboard user, they don’t even think to add it. This is largely the fault of React reinventing everything browsers already have built in, and treating accessibility as an afterthought. Bare metal web apps written in straight-up HTML do have decent tab navigation. They’re still not as snappy as a green terminal app, though. My first summer temp jobs during college were data entry, in the era when you might get a terminal app or a web app, and the old apps invariably had better UX.
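The navigation logic those old apps nailed isn't hard to replicate; a rough TypeScript sketch of a wrap-around tab order that skips disabled fields (the names and data are illustrative, not any framework's API):

```typescript
// Illustrative sketch: deterministic "next field" resolution, the core of
// green-screen-style tab navigation. Not a real framework API.
interface Field { id: string; disabled?: boolean; }

function nextField(fields: Field[], currentId: string, backwards = false): string {
  const enabled = fields.filter((f) => !f.disabled);
  const i = enabled.findIndex((f) => f.id === currentId);
  const step = backwards ? -1 : 1;
  const j = (i + step + enabled.length) % enabled.length; // wrap around
  return enabled[j].id;
}

const form: Field[] = [
  { id: "name" },
  { id: "amount" },
  { id: "notes", disabled: true },  // skipped, like a greyed-out field
  { id: "submit" },
];

console.log(nextField(form, "amount"));      // "submit" (notes is skipped)
console.log(nextField(form, "name", true));  // Shift+Tab wraps to "submit"
```

Browsers already do this natively via `tabindex` and focus order; the point is that it only breaks when apps re-render or trap focus without preserving it.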


>The green screens tend to be much quicker and more responsive than the web frontends that are developed to replace them.

Agree! Back in 2005, I was involved in a project to build a web front end as a replacement for the 'green screen' IBM terminal UI connecting to AS400 (IIRC). All users hated the web frontend with passion, and to this day, I do not see web tech that could compete in terms of data entry speed, responsiveness, and productivity. I still think about this a lot when building stuff these days. I'm hoping one day I'll find an excuse to try textualize.io or something like this for the next project :)


This only matters if "quick and more responsive" is the only thing that matters. Yes of course you can enter payroll timesheets on a TUI if you spend days/weeks/months gaining that muscle memory. The same way you can edit in vim much faster than vscode or Eclipse if you spend weeks/months/years gaining that muscle memory.

The fact that someone who has been doing it for years can do it faster is obvious, and pretty irrelevant.

Take someone who has never used either, and they'll enter data on the web app much faster.

You don't see keyboard nav in most web apps for similar reasons. First-time users won't know about it, there's no standard beyond what's built-in the browser (tab to next input, that kind of thing), and 90% of your users will never sit through a tutorial or onboarding flow, or read the documentation.


For a work app doing data entry, there is supposed to be training for users, because it's somebody's job to use the program consistently, all day, every day.

I would agree with you if we were talking about a customer-facing webpage or something. But an app for, say, an accountant? That should be a TUI, or as fast as a TUI. The workers are literally hired to get over the learning curve and become fast with the app, so it's not as big a concern if first use is more difficult. You aren't trying to sell them a product and drive a higher click-through percentage.

I 100% agree with you for applications for say online shopping. Those should prioritize new user experience over long time user efficiency probably.


Are these really at odds with each other? You can have keyboard and click nav for any app.


Is there a good TUI for the web?


IBM eventually figured out that these products were terrible too, even if they saved money on paper; sold the Rational/Lotus/Sametime teams to an Indian competitor, and discontinued usage internally (I think, it's a big company).


There are people even today who want Lotus Notes back and still mourn its loss.


huh isn't it funny when you dogfood but instead of food it's... nvm

But yeah some elements of that list have convinced me to steer very clear from any products from that company


Heh. I worked on the Mac version of ViaVoice. I joined as I was already an expert in AppKit and Obj-C.

We were given old Macs running Classic to run Notes, so we had two computers, the other running Mac OS X. Notes was the biggest pile of crap I've ever had to use. With one exception…

On the OSX box we were happily running svn until we were forced to use some IBM command-line system for source control. To add insult to injury, the server was in Texas and we were in Boca Raton (old PC factory as it happens). The network was slow.

It had so many command-line options that a guy wrote a Tcl wrapper for it.

Adding to that was the local IBM lan was token ring and we were Ethernet. That was fun.


Can we add DOORS to this list please?

I have no idea how/why IBM of all places developed or sold this software but it badly needs to die in a fire.

Database technology which would seem outdated in 1994 with a UI and admin management tools to match.


DOORS is/was a requirements management tool, and frankly speaking it was crap, but I have never seen another piece of software as good and comprehensive at requirements management.

I expect it is still used in aviation or military-related domains, maybe pharma.


> I don't think the IDE is good for other programming languages or non-programming things like team communication and task management.

It works great for Python and C++, honestly. If you're a solo dev, Mylyn does a great job of syncing with your in-code todo list and issue tracker, but it's not as smooth as the IDE side.

However, its Git implementation is something else. It makes Git understandable and lets that knowledge bleed back into the git CLI. This is why I've been using it for 20+ years now.


I remember using Rational ClearCase at my first job. Yeah, in that case count me in on the list of people who revile the IBM products they've had to use.


Could this be an employee retention strategy? Making people use bad tooling, so that they can be proud of knowing the bad tooling no one else in the industry uses and when those people feel like looking for something new, "no one values their knowledge" in those obscure tools, so they stay at IBM?


Eclipse was nice but WebSphere Application Developer was pretty horrible - I'm not sure how they achieved that! (WSAD was/was built on Eclipse)


If you used SameTime with Pidgin, SameTime didn't suck. But maybe that's because Pidgin is awesome, and not because of SameTime.


Yeah I was just about to say this -- I used Sametime via Pidgin (I think it may still have been called Gaim back then) on my work Linux machine and it was actually quite nice.

My favourite Sametime feature within Pidgin was, well, tabs (I can't remember if the Windows client had tabs as well..?), which was revolutionary for an IM client in 2005.

But my secret actual favourite feature was the setting which automatically opened an IM window /tab when the other person merely clicked on your name on their side (because the Sametime protocol immediately establishes a socket connection), so you could freak them out by saying hello even before they'd sent their initial message.


And what was that thing they used for email?


You mean Lotus Notes?


ClearCase. You just triggered my PTSD!


I think this is an interesting graph comparing web searches for "terraform alternative" and "opentofu". Notice the spike when the IBM rumors began, and the current spike now that the acquisition is complete?

https://trends.google.com/trends/explore?date=all&q=terrafor...


Both of those are still a rounding error compared to searches for Terraform though:

https://trends.google.com/trends/explore?date=all&q=terrafor...

That being said, it'll be interesting to see if it's still a rounding error 2 years from now.


How is Red Hat going after the acquisition by IBM? From my view, it is going well. The enterprise product (RHEL) is still excellent.


Dropping CentOS was a terrible decision. I’m not sure if that happened before or after the acquisition though.


It mostly happened afterwards but it was not driven by IBM.


CentOS Stream still exists, and it is in fact the actual upstream of RHEL.


CentOS was the downstream of RHEL, and many more people used it than Red Hat/IBM knew or wanted to admit. I'd argue that at least 90% of their users (by number of installs) didn't even need any help to configure or troubleshoot it.

But with a very IBM move, and with some tunnel vision, they got triggered by the few people who abused the Red Hat license model and rug-pulled everyone; most importantly universities, HPC/research centers and other (mostly research) datacenters, which had been able to sew their own garments without effort.

Now we have Alma, which is a clone of CentOS stream, and Rocky which tries to be bug to bug compatible with RHEL. It's not a nice state.

They severely damaged their reputation, goodwill and, most importantly, the ecosystem just to earn some more monies, because numbers and monies matter more than everything else to IBM.

Remember. When you combine any company with IBM, you get IBM.


> they got triggered by the few people who abuse Red Hat license model and rugpulled everyone

Alma is not a clone of CentOS Stream. You can use Alma just like you were using CentOS. It's really no different than before except for who's doing the work.

I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?


> Alma is not a clone of CentOS Stream.

I'll kindly disagree on this with you. Reading the blog post titled "The Future of AlmaLinux is Bright", located at [0]:

> After much discussion, the AlmaLinux OS Foundation board today has decided to drop the aim to be 1:1 with RHEL. AlmaLinux OS will instead aim to be binary compatible with RHEL.

> The most remarkable potential impact of the change is that we will no longer be held to the line of “bug-for-bug compatibility” with Red Hat, and that means that we can now accept bug fixes outside of Red Hat’s release cycle.

> We will also start asking anyone who reports bugs in AlmaLinux OS to attempt to test and replicate the problem in CentOS Stream as well, so we can focus our energy on correcting it in the right place.

So it's just an ABI-compatible derivative distro now, not bug-for-bug compatible like the old CentOS and the current Rocky Linux.

TL;DR: Alma Linux is not a RHEL clone. It's a derivative, mostly pulling from CentOS Stream.

> I agree that communication was bad. But why do you believe that Red Hat isn't able to screw up on their own?

Absorption and "Rebranding and Repositioning" of CentOS both done after IBM acquisition. RedHat is not a company anymore. It's a department under IBM.

Make no mistake. No hard feelings towards IBM and RedHat here. They are corporations. I'm angry to be rug-pulled because we have been affected directly.

Lastly, in the words of Bryan Cantrill:

> You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end.

[0]: https://almalinux.org/blog/future-of-almalinux/


> Absorption and "Rebranding and Repositioning" of CentOS both done after IBM acquisition. RedHat is not a company anymore. It's a department under IBM.

You're wrong. CentOS Stream was announced September/October 2019, too close to the IBM announcement to be an IBM decision; it had been in the works for quite some time before, and in fact this all started in 2014 when Red Hat acquihired CentOS.

From 2014 to ~2020 you were under the impression that nothing had changed, but Red Hat had never cared about CentOS-the-free-RHEL. All that Red Hat cared about was CentOS as the basis for developing their other products (e.g. OpenStack and OpenShift), and when Red Hat came up with CentOS Stream as a better way to do that, Red Hat did not need CentOS Linux anymore.

Anyhow, I've been through that and other stuff as an employee, and I'm pretty sure Red Hat is more than able to occasionally fuck up on its own, without any need for interference from IBM.


Bug for bug is a sham and always was. It's a disservice to users to only clone something.

Underneath it all, compatibility is what matters. At AlmaLinux we still target RHEL minor versions and will continue to do so. We're a clone in the sense of full compatibility but a derivative in the sense that we can do some extra things now. This is far, far better for users and also lets us actually contribute upstream and have more of a mutually beneficial relationship with RH versus just taking.


I'll say it depends.

Sometimes the hardware or the software you run requires exact versions of the packages with some specific behavior to work correctly. These include drivers' parts on both kernel and userland, some specific application which requires a very specific version of a library, so on and so forth.

I for one, can use Alma for 99% of the time instead of the old CentOS, but it's not always possible, if you're running cutting edge datacenter hardware. And when you run that hardware as a research center, this small distinction cuts a lot deeper.

Otherwise, taking the LEAPP and migrating to Alma or Rocky for that matter is a no-brainer for an experienced groups of admins. But, when computer says no, there's no arguing in that.


If you're running cutting edge datacenter hardware, CentOS is a better fit now than it ever has been before. It will be the first to get support for new hardware within a major version, ahead of RHEL and all its derivatives. It is possible that some hardware doesn't get support within the current major version of any of these related distros, and you'll have to wait until the next major version, which CentOS also gets first, before the rest.


We don't change the expected versions. We might patch/backport more to them if there are issues, but the versions remain.

Basically the goal is still to fit the exact situation you just brought up. I'm not aware of this ever not being the case; if it weren't, for some reason, then we have a problem we need to fix.

All of the extra stuff we do, patch, etc. is with exactly what you just stated in mind.


I'll be installing a set of small servers in the near future. I'll be retrying Alma in a couple of them, to give it another chance.

As I said, in some cases Rocky is a better CentOS replacement than Alma is.

But to be crystal clear, I do not discount Alma as a distribution or belittle the effort behind it. Derivative, clone or from scratch, keeping a distro alive is a tremendous amount of work. I did it, and know it.

It's just me selecting the tools depending on a suitability score, and pragmatism. Not beef, not fanaticism, nothing in that vein.


Sustainability is one of the core reasons why we are not using RHEL SRPMs to build AlmaLinux. RH doesn't want us doing that, and doing so would be unsustainable and bring into question the future of AlmaLinux as it can, and likely will, turn into a game of cat/mouse getting those SRPMs :)

Let us know if you have any issues!


Red Hat bringing CentOS in-house (well before IBM entered the picture) was IMO one of the first in a string of expedient decisions that were... unfortunate. When I was at Red Hat I loudly argued against some of the ways things were handled but I also understand why various actions were taken when they were.

I'd also argue that CentOS classic was only mostly bug-for-bug compatible, but probably close enough for most. It shared sources but used a different (complex) build system, as I understand it.


That closeness allowed CentOS to be a drop-in replacement for RHEL for thousands of installations and exotic hardware combinations. Unfortunately, we don't have this capability anymore. Rocky bears most of that load now.


Despite being a debian/ubuntu guy, I usually used CentOS for production deployments because it would be easy and seamless to upgrade to RHEL when I hit the big leagues.

Not anymore. I just use the latest ubuntu LTS and call it a day.

IBM/RedHat was soo predictably short sighted on this.


So you say "it would be easy and seamless", but did you ever actually do it and upgrade to RHEL? Because most people throw that out as a supposed sales pipeline that was lost, but the real life metrics indicate that almost never happened.


In one case, yes.


The free LTS/distro and pay for support if you feel like it never really worked financially. Maybe Canonical is profitable at this point. It's not Red Hat (or SUSE for that matter.)


There are many large organizations that pay for RHEL support. Supercomputers, for example. These organizations also benefited from being able to spin up analog installs of CentOS on local machines for testing. Not anymore. I expect RHEL's market dominance in these areas to diminish over time.


HPC was always a tough sales area for Red Hat and RHEL.

In general, while RHEL is obviously still an important revenue source, there's also a lot of focus on OpenShift going forward, which has done a pretty good job of covering (and more) the inevitable RHEL declines moving forward.


All the HPC I've used in the past was always RHEL... I wouldn't have imagined it was a tough sales area for RedHat, at least in the past.


For testing environments Red Hat will literally give them free RHEL. Problem solved.


No, they won't. I'm talking about the users of HPC centers, not the maintainers. The supercomputer cluster is at NASA or DoE and running RHEL, but the user is some grad student in Caltech or whatever. The grad student needs the analog environment to run their code before their scheduled time on the big iron.


But CentOS Stream is not CentOS.

They are completely different products just reusing branding to confuse what people are asking for.

RHEL Developer is closer, as a no-support, no-cost version of RHEL, but you still have the deal with the licence song and dance.

CentOS gave folks a free version that let you run some dev environments that mostly mirrors prod, without worrying about licences or support. CentOS stream doesn't do this out of principle. It's upstream.


To call it "completely different" is false. They are built differently, but the end result is still 90-95% the same software versions (because it has to be, within the same major version as RHEL). In fact, the way it is built differently is a massive improvement over the old process: the old CentOS was put together by 2-4 people at a time, with long delays after the corresponding RHEL releases, and with no ability to actually fix bugs or accept contributions. The new CentOS (CentOS Stream) is built by thousands of engineers, literal subject matter experts who can actually fix bugs you report to them, or, even better, merge a contribution you submit.

Also, the branding wasn't reused; the branding is for the whole CentOS Project, which still exists and is more active than ever.

Also, you can still use CentOS in your dev environments, and it works great for that because you can prepare your production workload for upcoming changes in the next RHEL minor version ahead of time. You can also get free RHEL for dev environments, for the things you need to validate with the same minor version as your production RHEL environments.


It is different; otherwise why would you then explain at length how it is better?


But for all practical purposes, that is dropping CentOS. They completely changed the identity of the product, so the fact it has the same branding isn't going to placate anyone.


People that actually care about the distro being sustainable are quite happy with the changes. Sorry you don't get it.


so? That just means that it is not necessarily compatible with the current version of rhel deployed on our servers


It's the same major version, so it's extremely compatible. Plus if you run into something that doesn't work the same, you just discovered what's going to break for your workload on your RHEL system when the next minor version is released.


so it's not the same, which means it is not equivalent to what CentOS was. It's highly, but not fully, compatible; CentOS was fully compatible. You can't use it to test against the current version of RHEL, whereas with CentOS you could. You can't say they're equivalent and then start listing the differences that make them not.


It's going basically fine. If you're in engineering you would never notice the difference.


Companies are often bought and told that nothing will change, and as long as they can pull their weight, this may be true. IBM seems a pretty diversified company, and Red Hat doing 5% of the total revenue there may not be too bad. I don't know how well Red Hat is doing commercially, but a few bad quarters could draw negative attention of the sort where upper management wants to start messing with you and seek more synergy, efficiency, alignment. Being a much smaller company within Verizon, having been left alone for a little while, we were then told that The Hug was coming. It did. We didn't grow to be their next billion-dollar business unit (no surprise to anyone in our little company), nor were we able to complement other products (ha! synergy!), and we were shuttered. At some point... engineering will notice.


RHEL has had no significant investment to keep it from becoming irrelevant in the next five years. The datacenter and Linux deployments have changed so rapidly (mostly due to the new centralization and homogeneity of infrastructure investment) that RHEL's niche is rapidly shrinking.


This is clearly someone who is not paying attention to what Red Hat is doing.

RHEL is the enterprise gold standard.

Fedora is a big part of the pipeline for it, and has itself become an incredible server and desktop platform.

All the work with OpenShift, Backstage, podman / qubelet, etc.

They're going to be fine, from my graybeard position.


Yes, also a graybeard, and I've been around long enough to tell you RHEL is and will continue to be legacy, and will continue to dwindle into obsolescence. You mentioned the cool stuff Fedora is doing; that is not RHEL. CoreOS is the future.


Red Hat developers are the ones making Fedora.

Fedora is the upstream for RHEL.

You are going to see RHEL transition to bootc: https://docs.fedoraproject.org/en-US/bootc/

Get with the times, fellow gray beard: https://github.com/redhat-cop/redhat-image-mode-demo

---

* What is RHEL Image Mode?

RHEL Image mode is a new approach for operating system deployment that enables users to create, deploy and manage Red Hat Enterprise Linux as a bootc container image.

This approach simplifies operations across the enterprise, allowing developers, operations teams and solution providers to use the same container-native tools and techniques to manage everything from applications to the underlying OS.

* How is RHEL Image Mode different?

Due to its container-oriented nature, RHEL Image Mode opens the door to unifying and standardizing OS management and deployment, allowing integration with existing CI/CD and/or GitOps workflows and reducing complexity.

RHEL Image Mode also helps increase security, as content, updates and patches are predictable and atomic, preventing manual modification of core services, packages and applications for guaranteed consistency at scale.

---
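For the curious, a minimal sketch of what an image-mode Containerfile can look like (the base image reference and the httpd package are illustrative assumptions on my part, not taken from the demo repo):

```dockerfile
# Start from a bootc base image: a full bootable OS shipped in container form.
FROM quay.io/fedora/fedora-bootc:40

# Layer packages and configuration exactly like an ordinary application image.
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd
```

You build and push it with the usual `podman build` / `podman push`, and an image-mode host can be pointed at it with `bootc switch`; from then on, OS updates arrive as new image digests and are applied atomically on reboot.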


I know all of this already and honestly I’m just amused at how long it has taken. Forget I said anything. Enjoy working on rhel for the rest of your life.


"Enjoy working on rhel for the rest of your life."

Show me on this doll where RHEL touched you.


RHEL 10 beta has some interesting stuff in it. Running the OS itself as a container caught my eye.


Image mode RHEL is a pretty significant investment.

Apart from that, in terms of keeping RHEL relevant, most of the attention is on making it easier to operate fleets at scale rather than the OS itself. Red Hat Insights, Image Builder, services in general, etc.

Those are the key things that would keep it competitive against Ubuntu, Debian, Alma, Oracle etc.


If RHEL is becoming irrelevant, what distro will replace it for enterprise users?


We don’t run anything on bare metal anymore; it’s all containers (90k-employee very large enterprise).

Of course I can’t speak for all the teams, but all new projects are going out on Kubernetes, and we don’t care about RHEL at all; typically it’s Alpine or Debian base images.


You have a hardware implementation of Docker?


When I say "we don't run anything on," I mean our involvement in the infra begins after those layers; sure, maybe someone at Google Cloud is doing RHEL stuff, but we don't care. Push button, receive kubeconfig.


So Red Hat Openshift.


Talos Linux. Replaced a fleet of RHEL OpenShift with Talos recently. We're planning on moving the rest to Talos within 1.5 years. Basically, bare metal OS is going to be an implementation detail abstracted away from the internal users and developers.


I'm the head of product at Sidero. Thanks for sharing! We love to hear people being successful with Talos.


Not sure why you were downvoted; the product you guys create is amazing.


Why leave terraform? You don’t feel OpenTofu will carry the torch well enough?


Podman is pretty good.


> Every IBM product I've ever used is universally reviled by every person I've met who also had to use it

Not a product, but a service: is Red Hat Linux a counterexample?


Everyone I know who works with IBM i (formerly System i, and AS/400 before that) absolutely adores it. Gods do they ever nickel-and-dime you tho.


Isn’t MQ pretty good?


It’s heavy and old. We have to consume some, but Kafka is typically nicer to work with (provided someone else is running it).


If Kafka is nicer to work with, then it must be horrible.



