I'm here for the upstreamed Arc GPU support, lol. I could probably have made it work with 22.04 with enough poking but I don't generally like upgrading major versions anyway. I was actually checking earlier but it hadn't released yet, lol.
Probably jumping the gun, though - it doesn't look like Intel has released the Arc dev tooling for 24.04 yet; there's no `noble` repo either.
I figured at least I could get passthrough working to a 22.04 container, but I couldn't get intel_iommu to behave; it looks like the GPU was still getting grabbed by the parent OS despite my GRUB kernel options. Might need to try HWE on the VM host too, or maybe it's just a fact of life based on how the outputs are wired up on my Serpent Canyon NUC (headless might work better, perhaps?).
I'm actually fine if the ML stuff takes a bit, I kinda just want to play with GPGPU and I definitely want to play with the AV1 encoder.
I ended up bumping into the same issue that stalled me out before, it happens in both 22.04 HWE and 24.04, the step about installing the out-of-kernel modules is busted. Installing the i915-dkms can't be done at the same time as the platform-vsec-dkms and platform-cse-dkms, apparently? But I pushed on it a little harder, and it seems like it works if you run two separate apt commands.
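In case it helps anyone else hitting the same wall, here is a sketch of the two-step install that worked for me (package names as I remember them from the guide; check Intel's current docs, since they may carry an `intel-` prefix or differ by release):

```shell
# Splitting the DKMS installs into two apt transactions avoided the
# conflict I hit when installing everything in one command.
sudo apt update
sudo apt install platform-vsec-dkms platform-cse-dkms
sudo apt install i915-dkms
```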
I've got a couple of these weird environments, I need to set up a way to virtualize and passthrough them so I don't have to maintain actual machines for it. Not sure exactly what that looks like, maybe proxmox and some qemu machines in a pseudo-ESXi configuration or something. When I get around to getting the new disks in my NAS, I can also do iscsi/iPXE I guess (and I have optane for write cache).
Sorry, I don't understand your question. By "treadmill" I mean sticking to a particular distro year after year, upgrading regularly -- it could be Ubuntu, Debian, Red Hat, CentOS, etc. Unless you're constantly switching distros, you're on a treadmill. I wasn't comparing Ubuntu to other distros.
Thanks. Yes, you're right, sometimes "treadmill" is used pejoratively to describe forced upgrades that otherwise would be unnecessary. I was referring to the fact that we are all voluntarily installing security updates and/or improved versions of the software we use, year after year.
FWIW, I did pick up the pejorative connotation at first, but then realized it probably wasn't intended that way due to the complete lack of other negativity in the post.
If you use the non-LTS versions of Ubuntu then you are expected to upgrade to each new version in short order, the previous version (23.10) is only supported until July.
Django has the same problem. You either have to upgrade all the time, or you have to jump LTS to LTS, which is also not advised from Django's perspective.
There are so many ways in which the Django upgrade process could be worse, and IMO very few ways in which it could be better. Not really sure how this is a “problem”.
Yeah, but now you have your own problems instead of pain shared by millions of other users. A lot of orgs on the same cadence is a great and underestimated thing. I'm just sayin'.
I don't get what you mean, actually. A lot of orgs upgrading simultaneously means that none of them have prior experience to look to. A staggered rollout of new software versions is often viewed as a less risky deployment method.
For one, there are more people on Ubuntu, and I believe a large part of the reason for that is the cadence. I don't exactly like Ubuntu, and I decided to depart for Debian last year; I'd had just about enough of Ubuntu's quirks and antics. But I held fast for so long because of the cadence. You can set your corporate schedule on month == April && (year % 4) == 0 and be done with it, for the LTS releases. With Debian you have to be at least a little tuned in to current events. I don't like that; I have other things going on.
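That rule really is just arithmetic; a toy shell sketch of the schedule above (every other LTS, which the standard 5-year support window comfortably allows):

```shell
# Which years call for an upgrade under "month == April && year % 4 == 0"?
# (Every other LTS release; the skipped LTS is still inside its support window.)
for year in 2020 2021 2022 2023 2024; do
  if [ $((year % 4)) -eq 0 ]; then
    echo "$year: upgrade"
  fi
done
```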
Ubuntu also has a cadence for the non-LTS releases. So if you want to study ahead of time what could break, test the stuff you currently have deployed on LTS against the latest regular release.
This is my cue to go off on a tangent and remark that easy cloud and Kubernetes (oh dear, don't get me started, but it's also useful, so what can I say) have made whatever OS runs on the servers less important. But I still care for some things.
And I don't want to run testing. I like to keep just one step behind. Of course, not everyone can be like that or there wouldn't be any experimentation and progress, but there are plenty of other people filling that niche.
Just to let you know: you are not alone in these considerations. I completely share the view that massive scale makes sense, and that a predictable cadence makes sense for planning.
They are probably referring to the Ubuntu Pro (previously Advantage I believe) notices that appear when you login via SSH or do apt updates. They can be disabled/hidden anyway, but they are still conceptually invasive.
I've been running my ML stack on Arch for the last six months and haven't had any issues so far. So perhaps not a big issue on a more stable distro like Ubuntu.
I'm still torn about whether I want to stick with Ubuntu a little longer or move onto Debian/Fedora. I've been a happy Ubuntu user for a long time, but each package that gets converted to snap makes it harder for me to stay. I'm getting really tired of having to fight the distro and look for a bunch of my applications elsewhere to get a version that doesn't suck.
I felt the same way with Ubuntu. It was so frustrating. I was also considering moving to Debian or Fedora but a few years ago I ended up cracking the shits with Ubuntu and installed Mint for my daily driver just to finally move on and I've loved Mint and never felt that friction that makes me want to try other distros. I was originally intending to use Mint as a stop gap until I could be bothered with Fedora oops lol
It's got sane defaults with great configurability, and the familiarity (and popularity) that comes with being Ubuntu-based helps, of course. It has been great for me and all the people IRL whom I have encouraged to install it.
Give Fedora a try! That is, if you already know a bit about Linux configuration and are not afraid of the terminal, as it's a tiny bit more hands-on and blank than Ubuntu.
I switched about two years ago and it's the best Linux experience I've ever had and I do regret not trying sooner. No bloat at all. DNF is awesome. Flatpak > snap. The release cycle is a nice compromise. Really, I am in fucking love!
When I was younger, hands-on felt like a good thing (we kinda had no options) and it led to learning a lot.
But for desktop use and being productive, especially nowadays, the last thing I want is to be hands-on with my system.
I kinda want something that's mostly out of the way. Heck, when looking at platforms, depending on scale, I prefer something opinionated over something that lets me shoot myself in the foot 30k different ways.
It's not that I don't want to be able to tune it. It's just that if I need to spend hours tweaking vs. using it, there's eventually a loss. I'm also not asking for something that can't be tweaked, just that if it has a set of best practices, let's start with those vs. trying to rewrite it all.
I did try it a couple times in the past, it just never quite felt like home. I don't know why, I couldn't give you an objective reason as to why I didn't like it. I probably should give it another try soon.
Once a debian...
Also, we've seen what Red Hat/IBM did with CentOS; they might pull something similar with Fedora. It's unlikely, but we've seen that such moves can be very appealing to IBM.
I don't think they could actually do that. They provide funding for Fedora's infra, but they don't make up the majority of contributors. If you read through the actual governance model (note that the higher up the group, the less power it seems to have; the Fedora Council basically only exists to settle disputes that bubble up from lower, entirely community-run and elected, groups), while Red Hat does have some influence and some positions, it has far, far less power than the community, both by sheer numbers and by who controls various things, and everything is done by consensus to boot, so Red Hat couldn't just unilaterally change how Fedora works. At best they could withdraw funding (making Fedora less well-tested) and pull their people from the governing bodies, but it wouldn't amount to much. And Fedora is upstream of RHEL and CentOS, and provides them with an utterly massive amount of labor and testing they couldn't hope to achieve on their own, which they get by virtue of it being FOSS. So it would be pure self-harm to shut Fedora down or make it closed even if they could, whereas the story was very different with CentOS and RHEL.
I've been using Fedora for the last year and a half and have been enjoying it much more than when I gave it a try 9 years ago.
Not only did it get much-needed improvements everywhere, but software availability has improved a lot. The official repos have more software available than they used to, and Flatpak helps complement them a bunch. But what sealed the deal was using distrobox to easily create containers, based on any popular distro, that integrate almost seamlessly with your user/session. There are gotchas and it's not meant for more casual users, but you can have pretty much any software available from other distros to supplement what's missing on Fedora.
Also, RPM Fusion makes it easy to skirt around the patent- and license-restricted software. It's fairly easy to install the full set of hardware-accelerated video codecs for your hardware this way.
To each their own, but I find Fedora's upgrade cycle just a bit too tight for my preference. Properly planned, you can get away with yearly upgrades, but it still feels like I'm due for a dist upgrade every few months.
I'm curious to try out Silverblue, though, where this shouldn't be an issue in the same way.
From personal experience, so far there haven't been any problems with dist-upgrades. Apart from DNF messing up bash/fish completions once, which was an easy fix.
Mind you, Fedora uses BTRFS by default, which means you could also easily do an incremental snapshot before any upgrade.
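For example, a one-off pre-upgrade snapshot might look like this (the snapshot path is illustrative; adjust to your subvolume layout):

```shell
# Read-only snapshot of / before the dist-upgrade; roll back later by
# booting the snapshot or restoring it over the root subvolume.
sudo btrfs subvolume snapshot -r / "/.snapshots/pre-upgrade-$(date +%F)"
```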
That's kind of the thing for me, though: Fedora is a very up-to-date, yet very reliable experience for me. It feels, functionally, almost as bleeding edge as Arch, but with much, much less tinkering and worries about upgrades. And again, DNF is my new favorite package manager. Incredibly powerful, but as intuitive as apt.
(Whereas pacman is constant suffering, for me.) Check out the `dnf history` command, how neat is that?
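For anyone who hasn't seen it, the transaction log is queryable and even reversible (the transaction ID 42 is just a placeholder):

```shell
dnf history                # list past transactions with IDs
dnf history info 42        # show exactly what transaction 42 changed
sudo dnf history undo 42   # roll that transaction back
```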
Tho, I love Gnome and having the newest developments available is a huge factor for me personally.
Also, Ubuntu and Debian tend to come with configuration decisions that are somewhat unique. E.g. the Arch Wiki (the best Linux documentation of any distro, IMO) seems more often directly applicable to Fedora for me, since Fedora conforms more closely to overall Linux developments and vanilla systemd. But that's mostly a feeling.
However, the whole licensing-limitations and RPM Fusion repo shitshow is why I don't recommend Fedora to absolute beginners. Some common needs are not yet addressed in a friendly GUI way and require an understanding of Linux internals. Fedora is a bit too raw for beginners, but perfect for programmers and sysadmins. Oh, and if you do updates through GNOME Software, it asks you to reboot more often than Windows. Not a good first Linux impression.
Edit: I used "for me" too much, guess I wanted to indicate, I absolutely see how it's not for everyone and your objections are totally valid.
Why wouldn't you be able to upgrade yearly? N-1 is always supported until the next release so at worst you do a double upgrade once a year. Only 2 reboots are needed.
Anyway for personal systems I don't see the issue with upgrading every 6 months. The process isn't much different than regular updates. If you are wary of issues you can always delay the dist-upgrade of a few weeks so that any quirk not detected during beta is solved after feedback from the early adopters.
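For reference, the dist-upgrade flow is only a few commands (the release number is whatever you're jumping to):

```shell
sudo dnf upgrade --refresh                          # get fully current first
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=40    # fetch the new release
sudo dnf system-upgrade reboot                      # offline upgrade + reboot
```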
I have 2 personal laptops, one on Silverblue and one on a regular release with data synced; one shared laptop that is mostly used by my daughters, also on Silverblue; and my professional laptop on regular Fedora. I usually upgrade my 2 personal laptops on release week. The shared one, however, is only updated some weeks later because it's the lowest-maintenance one, and my professional laptop is usually upgraded last; it stays on N-1 until the next release enters beta.
I moved to Fedora (Xfce spin) for that exact reason and I've been incredibly happy for the last ~2 years.
The last straw for me was the calculator app being a snap. I was frantically working on a thing, and suddenly opening the calculator app took ~15 seconds. I looked deeper into it: it had (suddenly) become a fucking snap. Ubuntu developers had decided it was a good idea to mount a 500+ megabyte layer full of GTK shit in order to run the calculator. A fucking 600 KB binary. And I was running a GTK-based desktop environment anyway (Xfce).
Nowadays I run Fedora on laptops (or systems where I prefer software abundance to stability) and Rocky Linux (basically RHEL without logos) on my home server.
I've kept myself far from Ubuntu and GNOME and stuff works and I'm happy.
I also moved from Ubuntu to Mint, and from GNOME to MATE. Been very happy.
The only time I got annoyed at Mint was when they recently changed the default mouse pointer into something that looks like a deformed marshmallow. So instead of a pixel at the end of the pointer, we get a fat finger. I don't understand the UX mentality that thought that this was a good idea.
It's easy enough to change, but every now and then, something clobbers my UI settings and I have to remember how to change it back to the mouse pointer that actually works as a pointer.
I totally agree, used to use Ubuntu Mate, presumably because I didn’t want to deal with constant regressions. But it kept deteriorating.
I personally use the DMZ White cursor theme which used to be a common choice, but is surprisingly hard to find these days. Not even packaged in Debian*. CADT strikes again.
True. I was also surprised to read that the Ubuntu upgrade route from 22.04 to 24.04 won't be supported until the 24.04.1 release in August, which goes to show that the initial release is perhaps not the most stable.
The major downside (for me) was Debian not supporting ZFS on root out of the box. :(
Now toying with the idea of using Proxmox on my main development system (R5950x desktop), as Proxmox is based upon Debian 12 and supports ZFS on root mirrored (and RAIDZ* too if desired) across multiple devices.
Would need to figure out PCI pass through for my Nvidia graphics card though. Probably do-able, but it's an unknown factor presently.
Snap taking a dump into the output of "mount" alone is reason enough for me to hate it. I want to see my drives, not that three bundles of libraries are separately mounted into "firefox snap dir" and three others into "snap base package dir" or whatever. There are other offenders, but Snap is the worst. I resort to "df" these days, but it doesn't show filesystem type and mount flags.
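Until then, a couple of ways to get a snap-free view without losing fstype and flags (a sketch; `findmnt --real` needs a reasonably recent util-linux):

```shell
# Crude: drop the squashfs loop mounts from the listing
mount | grep -v squashfs
# Nicer: ask findmnt for real (non-pseudo) filesystems only
findmnt --real
```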
No way. Flatpaks are clearly represented in the software shops of their adopted distros (I use Fedora and Pop!_OS, both of which use Flatpak).
From the software shop GUI, I can choose flatpak or dnf/apt from the dropdown. From the command-line, flatpak has its own commands (vs. apt silent under-the-hood behavior).
Flatpak is better than Snap. I mostly use Flatpak for commercial software (Discord, Steam, etc.), and it remains my choice as a user.
The point is totally different: the purpose of both is upstream-managed distribution, treating the individual distro like a container ship to be loaded, something not to care about, a commodity.
We know the arguments: distro packagers are often late to update; some projects are very complex to package and demand a gazillion resources to build; upstream devs, on the contrary, will surely keep their own package up to date; sandboxing is good for safety; etc. BUT we also know the outcome: 99% of such packages are full of outdated and vulnerable deps, and they are themselves mostly outdated, since they are not packaged by the upstream devs, who just publish the code as usual. They also have many holes punched here and there, because a browser that lets you download files but not open them in other apps is useless, as is a PDF reader that can't read a file because it's outside the right place. Besides that, you get a 30+ GB desktop deploy instead of a 10 GB one, bad performance, a polluted home directory, very scarce ability to automate, AND all of them still need a classic package manager, since they can't handle the base system.
So why do they exist? Because SOME upstreams do not want to allow third parties to distribute their binaries; they are commercial vendors. They NEED such a system to sell their products, ensuring those can work like a cancer in an open ecosystem that was not designed to be a ship for cargo, but a unique, individual desktop anyone tunes as he/she wants.
That's why they are crap.
The next step beyond classic package management is the declarative/IaC one, like NixOS or Guix System. Those who want Snap, Flatpak, AppImage, ... just want Windows, with all the bloat and issues of Windows.
Just like they wasted time and effort on Unity and shuttered it in favor of Gnome, and they wasted time and effort on Mir and shuttered it in favor of Wayland, and they wasted time and effort on Upstart and shuttered it in favor of systemd.
Canonical will fail once again, but only after jerking the community around for multiple years.
> Just like they wasted time and effort on Unity and shuttered it in favor of Gnome
Unity served well for years, it would have needed a rewrite anyway for the post x11 era, so indeed there have been wasted resources, but experiments are also important in technology, and many still love what was (is) the unity user experience.
> and they wasted time and effort on Mir and shuttered it in favor of Wayland
Mir is still there and it's used. It's now a Wayland compositor but it maintains its API, the different communication protocol doesn't change its purposes.
> and they wasted time and effort on Upstart and shuttered it in favor of systemd.
When Upstart started and was being used, systemd didn't exist and hadn't even been designed, so it served many people well for years. Not a waste.
They didn't invest much into Upstart at all, and quickly announced a switch after Debian adopted systemd.
They still maintain Mir, as a Wayland display server.
Unity was one of the most popular desktop environments and brought Ubuntu users a lot of value over the years. It even influenced the design of GNOME 3.
I didn't understand much of the unity hate, especially compared to things like gnome 3. Really wish I could have HUD, combined titlebar/top panel, and typo-resilient search back. The latter two are possible with gnome extensions but don't work quite as well.
It's not so much hating what they do but the manner in which they do it. CLAs and not contributing upstream from the get-go means Canonical's special stuff cannot go further than Canonical.
Community forks of e.g. Unity have cropped up that ditch the CLA. Open source is open source, after all.
That said I do agree that the CLA has doomed most of their projects from gaining considerable adoption in the wider Linux community. At least while Canonical is still running the project.
Yeah I'm generally critical of Canonical for these moves, but Upstart is one I actually think was good and well done, as was their decision to move to systemd. Upstart was more pre-systemd anyway. IIRC Red Hat also used Upstart for a major release as well before moving all the way to systemd.
Yeah this is exactly it. I don't really give a damn about snaps. But when I use apt to install something, and the OS silently installs a snap version instead, that's not acceptable to me. I'm not going to ever use Ubuntu again where I have a choice, personally.
So far, an update from 22.04 LTS to 24.04 LTS on an Intel-based Dell Latitude laptop without any weird hardware produced a failure: a white screen that just said to contact a system administrator. Could be a fluke. Trying to rescue this produced a non-bootable system. Could be user error.
After that, a clean install of Ubuntu 24.04 LTS on the same machine worked, but after installing the latest Chrome on top of this, it won't start. After installing the latest Slack, that won't start either. BitWarden from the snap store will start, but after logging in, I get an endless spinner instead of my vault contents. It is hard to believe that the Chrome/Slack/BitWarden issues are user error.
I'm not sure I've ever seen such a rough LTS release. I still wonder if this is just somehow me since surely they wouldn't release it without Chrome and Slack working, but I can't see what I could have done wrong just following the directions here.
Ah, that's one annoying thing I had forgotten about ubuntu, thank you for reminding me!
So I've been using Ubuntu for many years (my first release was 6.06). Updating to a new release never worked. Around 12.04 (when I started to seriously feel the need for stability) I started doing only LTS-to-LTS updates, which basically meant a full reinstall, but once every two years was more manageable.
Nowadays I run Fedora, and so far the upgrade experience has been incredibly good. I've already done three version jumps (36 to 37, 37 to 38, 38 to 39) and nothing broke.
I understand that 24.04 isn't really LTS until the 24.04.1 update comes out, but I like to start trying it on a secondary computer earlier, and I'm hoping not to wait that long since I really want the podman updates.
Well, for one thing, Ubuntu 24.04 got to be the first big Debian-based release after the 64-bit time_t migration, and they got rid of the backwards-compatible package names early. I suppose removing hundreds of libraries, including heavily depended-upon ones, to reinstall them from new (`t64`-suffixed) packages in the proper order can cause a lot of problems on non-trivial configurations.
Existing binary code should work (unless it interfaces with 32-bit code in some affected way), but external packages need to include new dependencies.
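If you're curious how much of an installed system the rename touched, the new packages are easy to spot by their suffix (a rough heuristic; a few package names may end in `t64` for unrelated reasons):

```shell
# Count and list installed packages carrying the time_t64 rename suffix
dpkg-query -W -f '${Package}\n' | grep -c 't64$'
dpkg-query -W -f '${Package}\n' | grep 't64$' | head
```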
I also had a slightly rough experience. I started my upgrade from 22.04 -> 24.04 and walked away. I came back to my system frozen on the Plymouth screen. After trying and failing to get into a TTY a few times, I booted into recovery mode, re-ran the upgrade, and it actually worked.
That was a lot rougher than my experience upgrading from 20.04 -> 22.04, which went off without a hitch.
I believe the kernel is 6.8.0, which is surprising. Fedora is a faster-moving distro, and the bugfix digit there incremented quickly, every few days, up to 6.8.7 today. I wouldn't run a .0 kernel, and I didn't allow 6.8 to install/run on my machine until it hit .5.
I'm sure they have added a few patches, but it seems like they could have shipped with fewer bugs by basing off .5+?
If you set up ozone platform using chrome://flags, it will affect you. You can start with manually specifying --ozone-platform and then in chrome://flags, set ozone platform back to default.
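Concretely, something like this for a one-off launch (the values are the standard Ozone backends):

```shell
# Force the backend explicitly instead of relying on chrome://flags
google-chrome --ozone-platform=wayland
# ...or fall back to X11/XWayland if the Wayland path is misbehaving
google-chrome --ozone-platform=x11
```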
I haven't been able to complete a fresh install on an empty drive yet. Either the installer crashes (most common), the installer throws an error, or the live USB freezes during booting.
> In combination with the apparmor package, the Ubuntu kernel now restricts the use of unprivileged user namespaces. This affects all programs on the system that are unprivileged and unconfined. A default AppArmor profile is provided that allows the use of user namespaces for unprivileged and unconfined applications but will deny the subsequent use of any capabilities within the user namespace.
Welp, that kills any chance of namespaces being widely used by anyone outside the likes of Docker and systemd. I'd been using unprivileged mount namespaces as a way to create anonymous temporary directories, but I guess they just weren't long for this world after all.
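For context, this is the kind of trick that breaks: a throwaway private tmpfs via an unprivileged user+mount namespace (a sketch; the mount inside the namespace needs exactly the in-namespace capabilities the new AppArmor profile denies):

```shell
# Everything mounted here is invisible outside and vanishes on exit.
unshare --user --map-root-user --mount sh -c '
  mount -t tmpfs none /tmp       # anonymous scratch space over /tmp
  echo scratch > /tmp/work
  cat /tmp/work
'
```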
How is it not on the front page? https://news.ycombinator.com/item?id=40154395 with 36 points over 10 hours is but this submission with 89 points over 4 hours isn't. The submitter seems to have plenty of karma too.
I know that Canonical keep making some questionable decisions that (legitimately) upset people, but I'm still happy for all that Ubuntu has brought to the table.
As an aside, I think that has to be the best Linux intro I've seen - very slick.
Looks pretty good! This is interesting from the release notes:
> Starting in Ubuntu 24.04, Canonical no longer produces Vagrant images. This is due upstream Debian questions of maintainership and Canonical dropping vagrant from the Ubuntu archives.
I am still sad that vagrant never took off. Sure it had some rough edges that could have been smoothed over with more eyes, but it really feels like a better development path.
Containers still need to run on some kind of VM. If you're a small shop and the VM is managed for you by the cloud provider - fair enough. But a lot of larger enterprises still run servers managed in-house that are based off a golden image that is pre-configured to include observability agents, security agents, stuff that reports to centralized inventory management, in-house certificate authorities, etc. If you're a developer who ships a container, and it's only exposed to the stuff in the in-house server once it gets to a staging/pre-production environment - then you're missing a big part of the actual production environment in your dev environment. Vagrant can be a good choice here, depending on how customized the production servers are.
Then there are plenty of cases where cloud servers are insufficient and your actual production environment includes on-prem hardware with more specialized setups, then the VM vs. container argument gets more murky.
Because it's not the same. Just look at the docker package for Debian for example. It's like 3 years out of date and has an unsupported version of docker compose.
I use the docker.io and docker-compose packages in the stock Debian repository, and they seem to work perfectly fine? (Notably better than the Docker snap package that I tried to use, for that matter.)
That's okay, but it's still 4 years old. And it uses docker compose v1, which isn't supported by upstream and receives no security updates since 2021.
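For anyone checking their own system, the two generations are easy to tell apart (v1 is the standalone `docker-compose` binary, v2 ships as a CLI plugin):

```shell
docker-compose version   # v1: standalone binary, end-of-life upstream
docker compose version   # v2: plugin bundled with current Docker releases
```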
Docker has gotten a lot better imo since version 20.xx. buildx and buildkit are really nice now, and there's very little downside to having an up to date version (no important breaking changes, and improved rootless mode). Afaik the issue is related to some go packaging troubles or something.
Fwiw, I have never had any issues with docker on Ubuntu. Even on azureml, where VMs are by default provisioned only with Ubuntu+docker, it just really works.
> And it uses docker compose v1, which isn't supported by upstream and receives no security updates since 2021.
Is there any actual security concern here? As far as I can tell, docker compose v1 has no outstanding CVEs, and has, in fact, never had any CVEs or needed/received any security updates.
I don't disagree and I usually don't like the security argument for tools like docker compose, that's why I highlighted the features and improvements to docker more than security itself :) .
It's just nice to have actual upstream bug fixes to an actually recent version instead of random backports and patches especially for a tool like docker. It's also nice to have the much improved tools of docker 20+.
Plus, you literally get everything that's nice from Debian too. It's like Debian but with a different package "profile". Debian is perfect for some use cases but Ubuntu is great for others.
FWIW docker.io in Debian is currently at 20.10.24.
This is a bit of a digression though, I don't actually have a problem with Ubuntu overall, I just found Debian to be better for my particular Docker-hosting needs.
For me, it's both the newer packages and the fact that it's easier to add third-party apt repos for even newer ones. Also, I like running newer kernels with HWE.
Debian release cycle is almost exactly 2 years, much like Ubuntu with some deviation.
You can install Debian, on ZFS if you want.
Now speaking of corporate customers, well that is not a great argument, as even more users use Red Hat, which is IMO an inferior distro.
What is good about Debian, however, is that there is very little chance of it disappearing one day, or dramatically switching to snap-only or some other nonsense deviation. In a way, it is even safer for corporate use than Ubuntu. This is why it is used at Google (they used to run Ubuntu on their workstations before).
It's a way to use systemd-networkd without having to write network and netdev unit files. You write YAML instead and it gets rendered to the aforementioned unit files. Some people will prefer that, and some people will hate that. :)
It is an abstraction layer over systemd-networkd (server) and NetworkManager (desktop).
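A minimal sketch of that layering (the interface name `enp3s0` is a placeholder): this YAML gets rendered into systemd-networkd unit files by `netplan apply`:

```yaml
# /etc/netplan/01-lan.yaml
network:
  version: 2
  renderer: networkd   # or NetworkManager on desktop
  ethernets:
    enp3s0:
      dhcp4: true
```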
It is a self-inflicted problem where a solution is another layer of abstraction. Red Hat ships NetworkManager on server too, so they have unified configuration without another layer.
For example, in the GNOME team we do the uploads to Debian first, then we merge them with the Ubuntu changes, if any, but we try as much as possible to use the same sources for both.
Purely anecdata, but I've lost count just how many Ubuntus (more accurately Debian and derivatives, I guess?) I've murdered in cold blood with sudo apt update and sudo apt upgrade.
That's just normal updates; I've never had issues doing that.
That said, in-place updates to a whole new release has been fraught across every distribution I've tried it on, whether Debian-derived, RedHat-derived, or even a few something elses.
(Gentoo being rolling didn't have that issue, but instead different problems.)
Sorry, but... in the Unity (WM) era, Ubuntu was the distro able to satisfy both end users and techies. By adopting GNOME Shell they effectively said "we are like the others", and then pushing CRAP like snap completed the game. Since then, Ubuntu has ceased to be interesting for anyone with a bit of competence, and has started to be annoying for generic users.
NixOS today is not in excellent shape, but it is the go-to distro for the tech-savvy; Arch has captured the old-school users locked in a distant past; Guix System tries to thrive; and Ubuntu is now on par with classic RH in crappiness and commercial conduct.
It's not a distro war; it's a paradigm. Ubuntu, Arch, Fedora, openSUSE, ... are all classic distros meant to be handled manually. Some are rolling, some are not, but they share the same '80s-style way of working.
Declarative distros offer:
- stability
- easy replication
- easy upgrades
- easy rollback
Ubuntu in the past was interesting because Unity was a nice desktop environment for both newcomers and seasoned users. It did not interfere narcissistically with the user's actions like GNOME Shell, which does its best to be at the center of everything, or modern KDE. Unity was as minimal as Fluxbox while as nice and polished for users as a modern desktop environment. And it was an Ubuntu-only show.
After that, Ubuntu is just another classic distro, good for a distro war between the Debianists, the RH-ists, etc. It's another SuSE of the old days, with NOTHING special to offer, except that it annoys its users with snap.
I understand in part what you mean, but Ubuntu is also a distro that many users rely on for its stability and reliability, not just as a hacker toy.
I still use it like that, and anyone can, but normal users are the main target.
Ubuntu is a classic distro, meaning hard to automate for deployments, with an invitation to buy services for that purpose. So it offers no advantage in stability and reliability compared to modern declarative distros like NixOS or Guix System, with their essentially read-only systems, easy custom deploys and replication, and easy rebuilds (a fresh install every time), plus easy poor man's illumos Boot Environments with their linked generations.
"Normal users", meaning non-tech-savvy ones, are Windows or macOS targets, because with Ubuntu they still need to use a terminal a bit more than with Microsoft/Apple stuff, and they still have to deploy their own systems. For slightly more "power" users, having to manually deploy an official ISO, then customize it and keep it going, polluted one update at a time, is a NIGHTMARE. Try to upgrade a normal Ubuntu for a few releases and you'll see things breaking; you fix them manually and only augment the entropy. With a declarative distro, any update, intra-release or cross-release, is a fresh install generated from your config: no forgotten fixes/hacks, no leftovers.
Those who claim Ubuntu is stable are stuck in a far past, before declarative distros existed.
Looking at the Repology stats on how many projects are in nixpkgs and how fresh they are, you are very largely wrong... Snaps/Flatpaks have very few, mostly outdated packages. They are behind ANY other package system. NixOS has the largest and freshest package set; Arch and Guix follow close behind. The others are behind.
Not only that: look at how many `default.nix` or `shell.nix` files you'll find in FLOSS projects, like the `debian` dir in the past, and you'll surely get an idea.
> TPM-backed full-disk encryption (FDE) is introduced as an experimental feature, building on years of experience with Ubuntu Core. On supported platforms, you no longer need to enter passphrases at boot manually. Instead, the TPM securely manages the decryption key, providing enhanced security against physical attacks.
Shameful to see Ubuntu fall into the trap that Microsoft created, where you don't own your computer anymore and it actively prevents you from accessing your own data with DRM.
You've misunderstood. Ubuntu's TPM-backed full-disk encryption isn't DRM of the kind used to prevent you from copying movies. It's meant to assist your own privacy and security, by preventing other people from accessing your own data.
For example if your laptop is lost, stolen, left in a shop, seized by bailiffs, sent in for a repair, etc., or if it's a server in a data center, making it more difficult for someone to read the server's data without authorization.
Preventing access to data is the whole point of encryption. TPM stores the key to the drives attached to the machine, effectively tying the keys to the device + drive combination. (Doesn't Apple do the same?) And if you're afraid of being locked out, you have backups elsewhere, right?
Key differences versus the previous LTS version:
* Distribution packages built with additional security-hardening features
* Linux Kernel 6.8
* Gnome 46
* Updated utils and apps
---
Users who train AI models on Ubuntu machines should wait until their ML stack (typically, Nvidia + PyTorch) is known to run without issues on 24.04.
See https://news.ycombinator.com/item?id=40157645
---
Source: https://discourse.ubuntu.com/t/noble-numbat-release-notes/39...