1) Subscribe to the GitHub repo for tag/release updates.
2) When I get a notification of a new version, I run a shell function (meup-uv and meup-ruff) which grabs the latest tag via a GET request and runs an install. I don't remember the semantics off the top of my head, but it's something like:
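(Reconstructing from memory, so the exact tag lookup and cargo flags below are approximations rather than the real function:)

    # Assumes curl and jq are available; meup-ruff is the same thing
    # pointed at astral-sh/ruff.
    meup-uv() {
        local tag
        # Ask the GitHub API for the latest release tag.
        tag=$(curl -fsSL https://api.github.com/repos/astral-sh/uv/releases/latest \
              | jq -r '.tag_name') || return 1
        # Build and install that tag from source (this is the coffee-break part).
        cargo install --locked --git https://github.com/astral-sh/uv --tag "$tag" uv
    }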
Of course this implies I'm willing to wait the ~5-10 minutes for these apps to compile, along with the storage costs of the registry and source caches. Build times for ruff aren't terrible, but uv is a straight up "kick off and take a coffee break" experience on my system (it gets 6-8 threads out of my 12 total depending on my mood).
Tools like Flatpak, AppImage, Snap, Toolbox, and Distrobox can go a long way on relieving the end-user of the burden of trying to get things playing nice in those situations. Not always a silver bullet, but a useful tool to keep in the back pocket.
If it's FOSS, at least you have the option of trying to repackage it for your distribution. You're SOL if it's a proprietary application distributed in binary format, though.
Something I don't yet understand about flatpak et al: isn't using that for every app in the core OS experience going to chew through storage because shared libraries can't be shared across images? Or does the containerization solve that problem?
Modern storage is at least hundreds of times bigger than when shared libraries were commonly used to save space. Every single executable having its own copy of the GNU C Library isn't a big deal.
But they have to be mapped into RAM, right? And if the OS can't unify those distinct images because they're in different paks, isn't my RAM now being eaten up by cloned static data that I could instead be using for actual work?
I can't speak for the others, but Flatpak is a layered solution, so files are deduped and shared across the layers (runtimes, applications) that need them.
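If you want to sanity-check the sharing on your own machine, a couple of stock commands make it visible (assuming a system-wide install under /var/lib/flatpak):

    # Runtimes are installed once and shared by every app that targets them.
    flatpak list --runtime
    # The store is OSTree-backed and content-addressed, so identical files
    # across runtimes and apps are only kept on disk once.
    du -sh /var/lib/flatpak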
Ah, good point. That feels win-win then; package maintainers get a huge efficiency win by just synchronizing on their base packages, but they're not locked to them, so they can make things work without littering the user's global library space with oddball versions of dependencies.
I'm still hopeful that RHEL will find a way to integrate dnf system-upgrade in the future, but it's not a trivial undertaking. As long as the transaction can resolve cleanly, it's technically possible to do. But it doesn't mean you'll have a properly configured system when it boots up. Tools like leapp and its derivatives (ELevate) do a bit more under the hood work to ensure a complete upgrade. Fedora itself only ever supports and tests up to a two version bump for system upgrades, e.g. 40 -> 42, 41 -> 43, etc. RHEL major releases are jumping (at minimum) six Fedora releases.
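For reference, the supported Fedora jump itself boils down to something like this (release number is only an example, and the plugin packaging differs a bit between dnf4 and dnf5):

    # Fully update the current release first.
    sudo dnf upgrade --refresh
    # Classic dnf4 needs the system-upgrade plugin.
    sudo dnf install dnf-plugin-system-upgrade
    # Download everything for the target release, then reboot into the
    # offline upgrade transaction.
    sudo dnf system-upgrade download --releasever=42
    sudo dnf system-upgrade reboot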
Former Hatter here (Solution Architect Q2 '21 -> Q4 '22). Other than the discussions that took place around moving the storage/business products and teams under IBM (and the recently announced transfer of middleware), I wouldn't have expected engineering to do that much interfacing with IBM. At most, division leadership maybe (this is just personal speculation). Finance and Sales on the other hand... quite a bit more.
We had a really fun time where the classic s-word was thrown around... "s y n e r g y". Some of the folks I got to meet across the aisle had a pretty strong pre-2010 mindset. Even around opinions of the acquisition, thinking it was just another case of SOP for the business and we'd be fully integrated Soon™.
The key thing people need to remember about the Red Hat acquisition is that it was purely for expertise and personnel. Red Hat has no (or very little) IP. It's not like IBM was snatching them up to take advantage of patents or whatnot. It's in their best interest to do as little as possible to poke the bear that is RH engineering because if there was ever a large scale exodus, IBM would be holding the world's largest $34B sack of excrement we've seen. All of the value in the acquisition is the engineering talent and customer relationships Red Hat has, not the products themselves. The power of open source development!
It's heartening to hear that your experience in engineering has been positive (or neutral?) so far. Sales saw some massive churn because that's an area IBM did have a heavier impact on. There were some fairly ridiculous expectations set for year-over-year growth, completely dismissing previous results and obvious upcoming trends. Lost a lot of good reps over that...
Red Hatter since 2016, first in Consulting, now in Sales.
Oh the “synergy” rocket chat channel we had back then…
Things have been changing, for sure. So has the industry. So have our customers. By and large, Red Hatters on the ground have fought hard to preserve the culture. I have many friends across Red Hat, many that transitioned to IBM (Storage, some Middleware). Folks still love being a part of Red Hat.
On the topic of ridiculous expectations…there’s some. But Red Hatters generally figure out how to do ridiculous things like run the internet on open source software.
FWIW, the change at Red Hat has always been hard to separate between the forces of IBM and the reality of changing leadership. In a lot of ways those are intertwined because some of the new leadership came from IBM. Whatever change there was happened relatively gradually over many years.
Paul Cormier was a very different type of CEO than Jim Whitehurst, for sure. But that's not an IBM thing; he was with Red Hat for 20 years previously.
I agree with you FWIW. The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes. And COVID happened shortly after so that also throws a wrench into the comparisons.
The point is, it's hard to point to any particular decisions or changes I disliked and say "IBM did that"
I do miss having Jim Whitehurst around. Jim spent 90 minutes on the Wednesday afternoon of my New Hire Orientation week with my cohort helping to make sure all of us could log in to email and chat, answering questions, telling a couple of short stories. He literally helped build the Red Hat culture starting at New Hire. Kind of magical when the company is an 11K-person global business doing $5B in revenue.
Cormier and Hicks have their strengths. Hicks in particular seems to care about cultural shifts and also seems adept at identifying key times and places to invest in engineering efforts.
The folks we have imported from IBM are hiring folks that are attempting to make Red Hat more aggressive, efficient, innovative. Some bets are paying off. More are to be decided soon. These kinds of bets and changes haven’t been for everyone.
>The company also basically doubled in size from 2019 to 2023. It's very hard to grow like that and experience zero changes.
Longtime Red Hatter here. Most of the challenges I see at Red Hat around culture I attribute to this rapid growth. In some ways it's surprising how well so many relatively new hires seem to internalize the company's traditional values.
Yeah, when I left I think there were something like 7x the number of people than when I joined. You can't run those two companies the same way no matter who is in charge.
> The key thing people need to remember about the Red Hat acquisition is that it was purely for expertise and personnel. Red Hat has no (or very little) IP. It's not like IBM was snatching them up to take advantage of patents or whatnot. It's in their best interest to do as little as possible to poke the bear that is RH engineering because if there was ever a large scale exodus, IBM would be holding the world's largest $34B sack of excrement we've seen.
We thought the same thing at VMware until Hock moved WITH THE QUICKNESS to jack up prices and RIF a ton of engineering staff.
That said, I'm in tech sales at the Hat now, and IBM is definitely around, but it's still a cool company that tries hard to treat their people right.
They also care A LOT about being an open-source company. Most of my onboarding was dedicated to this, and sales treats it seriously.
It sounds like you may have some friction-studded history with Go. Any chance you can share your experience and perspective with using the language in your workloads?
It's mostly deadlocked networking code. Hard to investigate, hard to track down the culprit. And of course codebases without a linter for error propagation and handling. And nil receivers ("this" being null) on "methods".
Gitea 1.22 will be the last release with guaranteed migration support to Forgejo. The next Forgejo LTS release, v11, is due out around April. Migration from Gitea 1.23 will not be supported, and since it was released in December, those on the fence are now at the fork in the road.
You still have time to figure out what to do, but you'll need to choose sooner than later.
Yes, I'm aware of that. I'm on a Gitea version that will migrate easily (and have confirmed that) and will not upgrade or migrate until I have decided. (I am leaning toward migrating to Forgejo.)
Thanks for that. I hadn't followed closely because right now I don't have a real desire to go back to Gitea. I'm glad to know that deadline to commit is coming up soon.
Gitea and Forgejo support OAuth integration and AGit Flow*, which is a breath of fresh air compared to the connected "fork" and PR strategies. It's a good middle ground between the "modern" method and email collaboration. With some UX tweaks it could become very accessible for many.
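For anyone who hasn't tried AGit Flow: the whole "PR" is just a push to a special ref on the upstream repo, no fork needed. Roughly (push option names from memory of the Gitea/Forgejo docs):

    # Push the current branch as a pull request against main on the upstream
    # remote. The -o push options carry the PR metadata.
    git push origin HEAD:refs/for/main \
        -o topic=my-change \
        -o title="Short summary of the change"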
Available platforms like Codeberg provide the option to sign in/register with GitHub and GitLab auth, so needing "yet another account" has become a much weaker argument.
Interesting experience, not one I can corroborate myself.
So a few weeks ago I updated to Fedora 41. (Updating from N-2 -> N is supposed to be fully supported)
Correct, and I do believe F39 -> F41 was tested, but every system will be a little bit different.
I wasn't surprised that it broke my nvidia driver's dkms integration - I don't expect any distro to test their integration of the market leader GPU manufacturer.
Fedora actually does test this. What driver installation method are you using? Unless you are using NVIDIA GRID for vGPU or need a very specific version of the generic Linux driver, use RPM Fusion's akmod-nvidia package[0]. Plain DKMS via NVIDIA's CUDA modularity repositories or the binary installer can be fraught with issues that users don't deserve. Rather than just installing the kmods directly, an akmod package builds an RPM for the modules and installs that, so you'll know well in advance of a reboot if there's a problem, and you won't have to twiddle your thumbs during the upgrade transactions. Just give the system an extra minute after running a kernel update before rebooting (watching "ps aux" for "dnf", "kmod", and "rpm" helps). The package also carries several system configurations for module parameters and initramfs boot options.
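The short version of that setup looks something like this (exact package set may vary; see [0] for the full guide):

    # With the RPM Fusion nonfree repo enabled (see [0]):
    sudo dnf install akmod-nvidia
    # After any kernel update, let akmods finish building and installing the
    # module RPM before rebooting; watching for these processes works:
    watch 'ps aux | grep -E "dnf|akmods|kmod|rpm" | grep -v grep'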
I finally did this today (I have a GTX 1070), but configuring my kernel parameters[1] to disable NVIDIA's fbdev allows me to once again have alternate console TTYs. Despite being an experimental option, it is enabled by default in newer drivers and it conflicts with another framebuffer. For the past few Fedora releases since it was flipped on I've had to use a second device to SSH to my system after the version upgrade to gracefully reboot once the driver module package rebuilt. The akmod rebuild trigger doesn't stall the offline upgrade reboot, so the driver isn't prepared in time for first boot. My standard practice is configuring my system to the multi-user target before issuing an offline version upgrade, then switching back after.
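For reference, the relevant toggle (assuming the fbdev option on the nvidia-drm module is what's meant here) can be set for all installed kernels with grubby:

    # Turn off the (default-on in newer drivers) NVIDIA framebuffer console
    # on every installed kernel's command line.
    sudo grubby --update-kernel=ALL --args="nvidia-drm.fbdev=0"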
The system was suspended. When I woke it up, the screen was also locked. That was weird, as I have disabled all power management features, and I also have passwordless autologin on this machine. Then, looking at some of the logs, it turned out that xdg-desktop-portal had segfaulted in the middle of the night, which in turn killed all user sessions and processes and logged me out. And then it ignored my power management settings and sent the machine to sleep.
That is definitely a new one to me. In addition to your searching, did you happen to follow through with Fedora's problem reporting checklist[3]? It's not a mandatory step, but it can be incredibly helpful having these issues documented and brought to engineering's attention.
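If you still have the journal from that night, the useful bits to attach are easy to pull with systemd's tooling (generic commands, adjust the time window as needed):

    # Any recorded core dumps from the portal, plus the backtrace.
    coredumpctl list xdg-desktop-portal
    coredumpctl info xdg-desktop-portal
    # Error-priority journal entries from the relevant window.
    journalctl -p err --since "yesterday"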
[0]: https://rpmfusion.org/Howto/NVIDIA (alternatively just enable the NVIDIA driver specific repository pre-provided by Fedora: dnf config-manager --enable rpmfusion-nonfree-nvidia-driver)
The Redhat model just doesn’t work in 2024 with the sharks constantly looking for fresh meat.
I truly believe that the Red Hat model is still possible to achieve today, but the barrier to entry is much higher than before. What sets Red Hat apart from many of these VC-backed projects is what they actually offer. Red Hat doesn't primarily provide "services" or singular components like a database^, but delivers platforms.
RHEL, OpenShift, Ansible Automation Platform, OpenStack, Satellite, etc, are the aggregation of many open source projects tied together to make an offering appealing and attractive to enterprises. They produce the infrastructure and management layers of the stack that all your services and applications are deployed on. Working at this level enables a very different degree of flexibility and "safety" in comparison to single application or SaaS style offerings.
There are distinct boundaries between their products and the upstream variants as well: Fedora vs CentOS vs RHEL, OKD/SCOS vs OCP/RHCOS, RDO vs RHOSO, Ansible ecosystem vs AAP, etc. Red Hat also delivers on support, training/education, partner-driven sales, and OEM integration/certification.
^ Main exception would really be the Java middleware solutions, but the Runtimes and Integration offerings could be argued as a platform of their own. Same with RHEL/OpenShift AI.
All the things you’re saying Redhat sells are a result of being in business for 30 years and investing billions. They were not a “platform company” for the vast majority of their existence and would not have survived long enough to transition to platforms if they launched in 2023. That’s the point.
I mean, half the reason they sold to IBM is because even Redhat didn’t think they could withstand the assault from Amazon as a standalone entity…