Debian 12 “Bookworm” (debian.org)
594 points by Chatting on June 10, 2023 | 230 comments



Congratulations to the Debian team!

An important change appears to be the inclusion of non-free firmware by default in the official install image for the first time, as a result of this vote: https://www.debian.org/vote/2022/vote_003

Intriguing. I feel a little torn on this. On the one hand, I appreciate being able to install Debian from an official image onto a bothersome device. On the other, I can't help but feel we're losing something when even a purist distribution like Debian is forced to concede in the fight against proprietary blobs.

Edit: dropped the word 'kernel' from 'proprietary blobs', as rightly picked up by kind commenters below.


I was involved in the discussion, and I'm also torn on that issue, but at least you can disable installation of non-free firmware and install Debian without any non-free software.

On the other hand, firmware is a convoluted issue. It was always present, but has become increasingly visible over the years. While I'm a strong Free Software supporter, firmware is one of the hardest parts to convert because of the IP it entails and the trade secrets it embodies.


That and the simple fact that without it the hardware you've invested in won't work. So this is very much an individual choice: if you don't have such hardware you are fine, but you shouldn't be voting on whether someone who does have hardware that won't work without proprietary blobs has to go out and buy new gear. There are all kinds of considerations that go into this decision (for instance: environmental impact) and I'm all for compromise when it helps out others.

At the same time, you have a good point: these decisions are difficult and need to be very carefully weighed. Debian and RedHat are the two distributions that everybody else always ends up following, so a major departure from established policy there has potentially huge impact downstream.


> the simple fact that without it the hardware you've invested in won't work

How is that different from the "if you want to run Linux, don't buy a winmodem" that we said convincingly twenty years ago? Would you have approved of Linux 2.0 adding binary blobs to the kernel in order to work correctly with the hardware you had invested in (some random winmodem)?


Winmodems had plenty of alternatives, for PCs the choice is usually limited to 'what you were given' or 'what you recycled from a dumpster'. Linux is typically not the first OS to be installed on any given piece of hardware (even if brand new plenty of people pay either the MS or the Apple tax). And because those are exactly the people that benefit from having a working PC I'm all for maximizing their chances. Choice is a luxury.


> Linux is typically not the first OS to be installed on any given piece of hardware. And because those are exactly the people that benefit from having a working PC I'm all for maximizing their chances. Choice is a luxury.

But they don't have to choose Debian.

If Debian had maintained a strict "Free" stance, people could still use their troublesome hardware out-of-the-box simply by picking another distro that did include non-Free firmware in the installer. There are plenty of them. Like, 99% of all other distros.

Not all distros have to be everything to everyone. It's OK to have a niche.


> If Debian had maintained a strict "Free" stance, people could still use their troublesome hardware out-of-the-box simply by picking another distro that did include non-Free firmware in the installer.

This isn’t Debian’s niche though. Like many distros, Debian came with the free repos configured by default, but it’s had non-free packages available forever, and it already had a non-free installation ISO hidden away. The line was crossed long ago. Now it’s just a friendlier experience.

This is a good thing.


> This isn’t Debian’s niche though.

Point 1 of the Social Contract[0] is literally "Debian will remain 100% Free"

Please explain how championing Software Freedom would not be Debian's niche.

[0] https://www.debian.org/social_contract


You are entirely free to use the narrow FOSS-interpreted distribution. And other people are just as free to use the ones that include the blobs they need to get their hardware to work where there is no alternative available. And hopefully over time all of those blobs will go the way of the Dodo, but in the meantime Debian keeps mindshare and marketshare, which, unlike absolutist stances, are just as important to the long-term survival of the distribution as the core philosophy.

Because you wouldn't be pushing those users to an alternative linux distro, you'd be pushing them to Apple and Windows and that's far more damaging to FOSS than to include some firmware blobs. The issue was debated at considerable length, I've followed the debate (because I use Debian and Debian derived distros on all of my machines, including laptops, servers and desktops) and I'm happy to see this outcome because it shows a certain level of maturity. The world isn't easily defined in terms of black and white. Debian is doing a great job and this minor concession is only going to strengthen its position as the FOSS distro of choice because more people will end up being able to use it successfully.

Note that far more people care about whether or not a distro works on their machine than whatever narrow reading of 'The Word' causes it to malfunction. Builders are few, consumers are many and I'd much rather see Debian succeed in the long term than die on the hill of FOSS purism.


I already did explain. You stopped reading after the first sentence. :)


Fortunately Debian recognized that being a 'niche' would sooner or later spell the end of Debian and that would be a loss.


When you are one of the biggest “root” distributions, and the template of countless big and small-time derivatives, you don’t have the luxury of a niche.

Considered with a clear and unbiased mind, Debian did the absolute best it could do.

Add firmware, be open about it, install only when necessary, allow people to opt out when they know what they are doing.


We’re in the laptop age; most people can’t pick and choose the components of their computers.


Usually, people first buy a computer and then hear about Linux later, not the other way around.


It's also close to impossible to choose fully Linux-compatible hardware for some of us. Whatever decent Thinkpads there are left are practically unavailable in my country, unless you're willing to buy from foreign sellers without any warranty and pay hundreds of dollars for shipping. Things like Framework are completely unavailable and probably will be for the foreseeable future. I use desktops exclusively so I can pick and choose, but I am in the minority of a minority.


As I understand it, "firmware" is essentially just the same as an EEPROM, except that using volatile memory is cheaper and easier to upgrade. No one seems to have great issues with EEPROMs (the FSF doesn't, anyway), but uploading that same code to the device when it starts is a huge problem? I never understood this, and especially given the huge practical trade-offs, the entire thing seems like tilting at windmills.

The Linux-libre people even removed the warning that the CPU is vulnerable to spectre/meltdown if you don't update the microcode. But ... your CPU already comes with that microcode out of the factory, just a different version of it. How is running an older known to be buggy microcode better?


Debian distributes firmware stored in volatile RAM because it is either required to use the hardware, or, as is the case for CPU microcode updates, highly recommended for most users.

As far as I know, Debian does not distribute proprietary EEPROM firmware updates at all, as these are generally not required to use the hardware (and, depending on the device and update in question, may or may not be recommended for most users).

In other words, the difference is practical, not ideological.


Debian distributes fwupd, which accesses the Linux Vendor Firmware Service, which distributes updates for proprietary firmware stored on devices:

https://fwupd.org/ https://wiki.debian.org/Firmware/Updates
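
In practice that looks roughly like this with fwupd's CLI (whether anything is offered depends on the vendor publishing to the LVFS):

  $ fwupdmgr refresh
  $ fwupdmgr get-updates
  $ fwupdmgr update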


Organisations like the FSF recommend that you don't use devices that require "non-free firmware" at all. They make no such recommendation for devices with non-free EEPROMs. Stallman himself has stated he has no problem with such devices (I can't find a direct quote on this right now, but I'm 100% sure I've seen Stallman write or say this at some point).

My point was that the entire position just doesn't make any sense: either you reject all non-free software no matter how it's loaded (which means there are very few computers you can actually use), or you just use your hardware with "non-free firmware" that would be baked in anyway and stop worrying about the entire thing.


While I'm glad to see proprietary firmware both included and segregated in its own repo, I'm wondering why RISC-V wasn't added as a supported architecture in Debian 12? It seems like supporting at least one open ISA would move closer to the possibility of what Debian wants to see happen... so why isn't it, as a distro, helping to make that happen?


It's simple - because the port is not ready yet.

The plan is to have it ready for Debian 13: https://lists.debian.org/debian-devel-announce/2023/06/msg00...


At the end of the day, this was because the hardware ecosystem (and the hardware support in Linux/bootloaders/etc) isn't really "ready" yet, and also there was slow and unclear communications between the porting team, the core Debian teams and one of the hosting providers. Eventually the teams provisionally approved the port, but after that the needed actions weren't done in time for the archive-wide rebuild that happens during the process of an unofficial port becoming official. I'm not on the porting team, but am on the Debian sysadmin team, and tried to speed up the process and make it more transparent. Debian contributors are mostly volunteers, so things take time.


Sorry for the lag in my reply (I just noticed your response)... I very much appreciate the info, as I found the lack of official RISC-V support in 12 surprising. I hadn't read about the blockers to being ready for the release, which makes perfect sense. Looking forward to seeing it in 13!



That doesn't answer the question. It's still 'unofficial' per https://wiki.debian.org/SupportedArchitectures

If you want to maintain the status quo, you support the hardware that has the most users; if you want to change it, you also support the hardware you want them to use.


In Debian, there's no single entity which decides on things. There are rules, and processes.

The relevant documentation is here: https://wiki.debian.org/PortsDocs/New

Also, the rules for becoming an official port are here: https://ftp-master.debian.org/archive-criteria.html

When RISC-V satisfies the relevant criteria, it can become official.

Also, maintaining a port is a tremendous amount of work. Debian users only see the tip of the iceberg. There's so much beneath that.


Who is the actor in this scenario who wants to change the status quo and therefore support the RISC-V platform in Debian? I.e. who are the people with the motivation to do that?


The Debian project has historically been a vocal proponent of open hardware/firmware/drivers etc., and here we are in 2023 with Debian somewhat controversially acknowledging that proprietary firmware may be required, at least by some users, to some degree. So I am suggesting that if they really want to get there, they might need to help facilitate the change they would like to see.

Is RISC-V the be-all, end-all? No, but it could help them get a lot closer than they're likely to get with any of the proprietary ISAs.


Debian is not a general advocacy organization. Debian produces the Debian GNU/Linux distribution, and that’s mostly it. The incentive to do the work necessary to add a new architecture to Debian must, in general, come from outside Debian itself. Debian will probably not, as a project, go around looking for new architectures to add. Debian depends on other people doing the work and presenting it to Debian, which will then distribute it if it is good enough. Debian will only have the motivation to work on including a new architecture if it is an obvious lacking feature of Debian – say, if every other Linux distribution supported an architecture, and it was widely used, but Debian did not support it, then the Debian project would have an incentive to work on it. But not before.

This is just like how Debian generally does not write new useful software to put into Debian, and instead depends on outside, “upstream” authors to write software. Some other organizations, like the FSF and its GNU project, will have advocacy roles, and do have the incentives to write new software and to help porting to new architectures. It is there that the motivations for this work must be found. Of course, individual people can be involved in more than one organization, and many Debian people happen to be enthusiastic advocates, and will in fact often do this work. But the reason for them doing this work is not, mostly, rooted in them being a part of Debian.

Therefore, and this is my point, if you want this work to be done, you can’t exhort Debian to do it. It will not work. Ask instead why someone else, whom you think should be motivated to do the work, does not do it.


The RISC-V port of Debian was driven by internal Debian contributors, not by external actors. Similarly for Debian kFreeBSD and some other ports. OTOH, for LoongArch and ARC, those ports are driven almost entirely by the companies selling those chips. So I would say it is a mix.


> The RISC-V port of Debian was driven by internal Debian contributors, not by external actors.

Yes, but my point is, they weren’t doing that work just because they were Debian contributors; they had their own additional reason for doing it. They were not assigned the work by Debian central command. Therefore, asking Debian why Debian does not do some work or other is usually the wrong way to get something done.


Yes, there isn't much of a Debian central command, apart from the technical committee, but they can't force/direct people to do things, only to say what Debian will do in specific circumstances.

That said, the community of contributors are what makes up the Debian project, so in a sense it is Debian deciding to do things when Debian does things :)


Even the open firmware that does exist has some major issues: it may need a proprietary compiler, the hardware it can run on may check for Intel signatures, it may only exist on archive.org, it may be packaged in Debian but not properly built from source code, it may only be useful for obscure/ancient hardware that only people who want libre firmware buy, it may require a forked compiler/toolchain to build, etc. Basically, open firmware is a hard problem and there aren't a lot of people with the skills and interest in doing it. I'm trying to at least document what exists on the wiki; anyone know of more projects?

https://wiki.debian.org/Firmware/Open


RISC-V doesn't close the door on proprietary extensions.


Proprietary extensions need to live in the opcode space reserved for custom extensions, where it'll overlap with other vendors' own custom extensions.

If they conflict with RISC-V's own space, the chip can't use RISC-V trademarks.

And, of course, the RISC-V Consortium will never adopt proprietary extensions, as that goes against its core values.


What matters is what is on the market, and how it is made available.

Everything else matters about as much as the FSF's point of view on whether something is free or not.


That's why there are licenses and agreements.

So that we do not have to rely on anybody's goodwill.


And since RISC-V allows for extensions, all bets are off regarding successful deployments being extension-free.

Just like many open core products that really need the full deal to be usable in a proper way.

Expecting otherwise is wishful thinking, or an academia/maker-community focus.


There already are. As long as they're in custom space, there's no issue.


> you can disable installation of non-free firmware and install Debian without any non-free software.

Including it but making it optional was an excellent decision. Those who need it will get what they want; those who don't will have the same result as with Bullseye.


Debian was never a purist distribution. If it was then there wouldn't have been a non-free section in the archive in the first place.

The sad truth is that the Linux distributions recommended by the FSF have approximately zero users.


Is it sad? Actually working Linux on real hardware is far more important than purity IMO.


Ubuntu is based on Debian and is pretty pragmatic in that respect.

It makes it easy to get working...

...and it also carefully attempts to lock you in using packages that can't¹ be disabled.

Just saying, it is a slippery slope.

[1] well with effort you can

https://www.baeldung.com/linux/snap-remove-disable

https://gist.github.com/jfeilbach/f4d0b19df82e04bea8f10cdd59...


It's sad for the people who don't share your opinion.

E.g., I intentionally got an old ath9k PCIe wifi card for my Debian 11 router because it works without proprietary firmware, unlike newer ath10k etc. cards.


>Actually working Linux on real hardware is far more important than purity IMO.

I agree, but I also think this depends heavily on who you ask. Stallman for example would rather have poorer functionality than compromise his personal (extremist) ethical principles.

There are a lot of folks who use laptops without wifi because the blobs are non-free, so they're using ancient ThinkPads plugged into Ethernet.

Much depends on your personal computing needs.


> Stallman for example would rather have poorer functionality than compromise his personal (extremist) ethical principles.

Stallman is open to pragmatism now and again. For example, on this very topic:

https://www.gnu.org/philosophy/free-hardware-designs.en.html

> We can envision a future in which our personal fabricators can make chips, and our robots can assemble and solder them together with transformers, switches, keys, displays, fans and so on. In that future we will all make our own computers (and fabricators and robots), and we will all be able to take advantage of modified designs made by those who know hardware. The arguments for rejecting nonfree software will then apply to nonfree hardware designs too.

> That future is years away, at least. In the meantime, there is no need to reject hardware with nonfree designs on principle.


>Stallman is open to pragmatism now and again.

Not a single thing I've ever read about him has indicated this to be even remotely true. We're talking about the guy who used a Lemote Loongson-based laptop in text mode for years and isn't capable of uploading HTML to his own website (he has to rely on volunteers to do this for him, per his own website).

All that in the pursuit of ideological purity.


Please note that in the paragraphs that you quoted Stallman is talking about hardware, not firmware. He tolerates firmware blobs only if they are loaded once to the device and never changed (yes, I know it's more complex than this, read the whole essay for more precise information).


I think this would be a better summary: it's okay for hardware to have code that's baked in at manufacture time and can never be changed thereafter. It's not okay if, after you buy hardware, the manufacturer can modify the code but you can't.


Sad in that I'd love to purge all non-free software from my life, but that has proven to be impossible and increasingly more so as the years go by.


at those great (FSF) heights, the light is bright but the air is thin!


While I don't like proprietary firmware, I'm not sure if the line is drawn at a useful place.

If you have firmware/software/whatever in a device, which is updateable (as opposed to mask-rom or hard logic), I'd much rather have it transparently managed by an OS I can control, than some EEPROM with often proprietary, inscrutable, I-ask-you-nicely-please-update-your-firmware update mechanism.

IMO, the difference is:

- with OS provided firmware (and preferably no writable storage), I can be sure my device is running the same SW as the rest of the world

- with dozens of EEPROMs in my device, I can never be sure what is running on it.

Firmware that is legally not redistributable is a non-trivial, though perhaps less bothersome, issue. Firmware that requires a manufacturer's signature is bothersome, but I would still prefer it over inscrutable hidden firmware.


> On the other, I can't help but feel we're losing something when even a purist distribution like Debian is forced to concede in the fight against proprietary firmware blobs.

The software needs hardware to run, and the whole point of the software is to make the hardware useful. If you can't use the hardware, what's the point of the software?

In my book, freedom is a function of usefulness. No amount of redistributable source code has any value to me if I can't run it.

Enabling the use of hardware I already own is not a compromise, it's a solution. It's what operating systems exist for. Debian is fulfilling its primary function. I'm glad that this necessity was finally recognised.


I disagree. I'm very disappointed that this "necessity" came into existence in the first place.

By being forced to install non-free blobs to be able to use "our" devices, we admit that we're not the ones who actually control "our" computers. That's admitting full defeat! You're not the owner of "your" devices.

Given that computers are now kind of "brain extensions" this means you're not in control of a substantial part of yourself.

This has quite some implications! And I'm not even thinking about such things like future computer devices connected directly to human brains…


You don't control the hardware in the first place. Can you modify the microchips on your hardware? Can you modify the printed circuits? Can you modify a ROM in hardware? In all those cases the answer is "not really", short of some spectacular reverse engineering effort and specialized hardware and skills that even most technical users don't have (you can modify anything with enough effort). All this focus on firmware seems rather misplaced.


Let's not conflate hardware with software. The first is practically impossible to modify for a completely different reason than the second (physical limitations and the need for specialized equipment that costs serious money, vs artificial limitations imposed by developers). You can at least repair it (unless you bought into the Apple ecosystem — you knew what you were getting into then), and a relative of mine makes a good living doing just that.


Firmware is inextricably tied to hardware; you could pretty much say it is hardware. Hardware has had "software" in it for decades and no one complains about it either, except when this software is loaded in a particular fashion.


Then don't buy hardware that requires non-free firmware.


I've never had a computer which would work with the official ideologically-pure installer. Always had to use the non-free one. I'm glad this vote turned out the way it did.


> I've never had a computer which would work with the official ideologically-pure installer.

I had more than one, back when I first installed Debian in the late 1990s and early 2000s, and I believe my experience wasn't unique.

Back then, not requiring any loadable firmware was common; the hardware either didn't require firmware, or came with the full firmware in a ROM chip in the device itself. And that explains the issue: Debian is an old distribution, coming from these times when not having non-free firmware (or even any firmware at all) in the distribution was viable, and often fully usable.


Same, and I used to keep a little cache of "known working without firmware issues" WLAN cards (PCMCIA). I probably still have some somewhere, though I don't think I have any PCMCIA-capable laptops anymore.


For devices which have firmware, does it matter whether the firmware is loaded by the OS rather than hardcoded inside the device? The former at least gives an opportunity to fix bugs.

And if I'm not mistaken, this isn't about kernel blobs (which run on the CPU as kernel code), only code that gets loaded on devices (including CPU microcode).


There is a little ritual we do here from time to time where someone writes a comment that starts something like "Well, I didn't expect Stallman would be right about [issue now being reported on] but he predicted this years ago".

If it can go wrong it will, and if software isn't free then its owners will do things that the users really do not like. In this case, if they can fix bugs they can reduce functionality post-hoc. That is consequential. It is better to have freedom or certainty as to what a device does.


How can you ever really be sure that there is no way to change the code running on the hardware, either unintentionally via some exploit, or intentionally via a deliberate backdoor or a debugging interface enabled in production?

As a practical example, I have never heard anyone considering the freedomness of firmware in eMMC flash memory chips. But the talk "eMMC hacking, or: how I fixed long-dead Galaxy S3 phones" from CCC reveals that actually, Samsung eMMC chips have an undocumented debug interface to read/write the RAM of the firmware running on the ARM core inside the eMMC chip.


There is a difference from a legal point of view. If the firmware is hardcoded in the device, you do not need to accept any license contract with the IP holder. You do not need to copy it, and your right to run it is implied by ownership of the device. If the firmware is an independent part bundled with the OS, then anyone who wants to run it, or even just distribute the OS, must accept the license.


Are you guessing what sounds logical to you or do you actually know the answer here?

The legal system sometimes has definitions of copying that aren't that straightforward. I've seen in a copyright context judges talk about a computer loading software into RAM being copying.

Intel microcode comes with a license: https://bugs.gentoo.org/664134


> Are you guessing what sounds logical to you or do you actually know the answer here?

IANAL, but this is a general concept of exhaustion of IP rights when the IP is sold as a part of physical medium, see (28) and article 4 of EU copyright directive 2001/29.

> The legal system sometimes has definitions of copying that aren't that straightforward. I've seen in a copyright context judges talk about a computer loading software into RAM being copying

This is handled in (33) of the directive:

"The exclusive right of reproduction should be subject to an exception to allow certain acts of temporary reproduction, which are transient or incidental reproductions, forming an integral and essential part of a technological process and carried out for the sole purpose of enabling either efficient transmission in a network between third parties by an intermediary, or a lawful use of a work or other subject-matter to be made."


The term you're looking for regarding the numbers you have in parentheses is _recital_.


Firmware is not a kernel blob. It's executed on a separate device and has nothing to do with Linux. It's about open hardware, not open software. I don't think it's worth it for Debian to pursue this direction.


> when even a purist distribution like Debian is forced to concede in the fight against proprietary blobs.

As far as I'm aware, nothing has recently changed in this regard. It's more of a reflection on the mentality of young members, those who tend to treat software as if it's in a vacuum, separate from all the social and moral concerns of the meatspace.


If any of the Debian team is here, congratulations and thank you for putting together such a solid, consistently high quality Linux distro.


One thing I really appreciate about Debian is that when a new stable release comes around, I can just upgrade and be reasonably sure nothing bad will happen.

It's not exciting, but a fair amount of the time, this is what people expect from their operating system. Support my hardware, give me the software I need, and stay out of my way otherwise. And that is what Debian does very well.


I heard when Bullseye came out that I should wait a bit while the initial bugs were found. I'm wondering if that was true then, or is still true now.


It depends. I use Debian on a few machines at home; if something were to break, it wouldn't be a huge deal.

If I had dozens of desktops and/or servers to take care of, I would probably take the time to upgrade a few non-critical machines first to see how it goes, provided I had the resources for that.

If in doubt, there's nothing wrong with waiting a month or so to see if others run into trouble. It's what I did when I worked as a Windows admin, and it saved me from major headaches more than once. (Admittedly, updates causing trouble is more common on Windows than on Linux in general and Debian specifically.)


> One thing I really appreciate about Debian is that when a new stable release comes around, I can just upgrade and be reasonably sure nothing bad will happen.

That's good feedback and I've heard it from other people. Personally, I've never been able to dist-upgrade Raspbian or Ubuntu without breaking the OS.


It's been a long time since I have used Ubuntu. In the ~2009-2013 era, my desktop ran Ubuntu, and I repeatedly upgraded it from 8.04 -> 10.04 -> 12.04. There were a few issues, but nothing that made the system unusable. But that was over ten years ago; I have no idea how Ubuntu has evolved since.

Raspbian has been problematic for me. I tried upgrading from Buster to Bullseye, and it went so badly I ended up reinstalling from scratch. (To be fair, the docs were clear that was a likely outcome.)

OTOH, my ThinkPad x220 has been running Debian since 2016, I installed Jessie back then and upgraded as new stable versions were released. The upgrade to bookworm has finished by now, and it's been entirely unexciting. :-)


It's baffling that RHEL-based distros still don't support in-place upgrades.


Strange; I would have assumed that `dnf system-upgrade` would have made it from Fedora to RHEL by now.

Actually searching turns up https://access.redhat.com/documentation/en-us/red_hat_enterp... , which... appears to use a totally different tool? I don't really know what's going on there, but it does look like they have some sort of support for in-place upgrades now.


It was treated as "avoid if possible", but it looks like that changed:

> An in-place upgrade is the recommended and supported way to upgrade your system to the next major version of RHEL.

As for RHEL clones, your only choice is to use ELevate(1), by AlmaLinux, which supports other distros too.

But overall the process isn't as simple as on Debian/Ubuntu, and on clones other than Alma you need to resort to third-party tools, with some clones like Rocky saying that in-place upgrades should be avoided.

https://almalinux.org/elevate/


Never ceases to amaze me. There are all kinds of things wrong with Debian, I am sure. But at the end of the day, what that community does is mindblowingly impressive.

Much gratitude from a Slink-and-a-half user, back in the day.


All kinds of things wrong? It’s the base for half the Linux distros. What could you possibly mean?


Half the Linux distros? I once counted on DistroWatch. If you go by the number of distros there, it's more like 80% that are Debian-based.

If you also go by how much those distros are used, it's more like 9x% of the stuff running out there that is Debian-based.

There are almost no independent distros. You have Arch, you have SUSE, you have RedHat and a few clones, you have Gentoo. But more or less everything else is Debian-based.


Do I get it right that you mean >90 percent of distro use is Debian-based? Because that can hardly be the case.


Ubuntu and its derivatives are included.


Yes. But Red Hat, Fedora, openSUSE, SLE, Arch, Alpine, Slackware, Gentoo, NixOS are not. I have a hard time believing these represent less than 10 percent altogether.


Is Ubuntu still Debian based? What's the criterion for that anyway? Just using debs?


Ubuntu releases are, effectively, snapshots of Debian sid every 6 months. That's a pretty strong basis.


I wouldn't call Ubuntu a flavor of Debian as the differences are so big, but ultimately it's still mostly Debian. You can see a numerical comparison of packages here: http://qa.ubuntuwire.com/mdt/all.html


Yes, Ubuntu is still Debian based with just a little added on top.


And Slackware!


There are of course some niche distros that aren't based on the "big ones".

But I'm not sure Slackware is at this point still relevant enough to be named in a row with the others I've mentioned.

Of course the sampling was subjective, and I didn't intend to marginalize any unmentioned distros. I just thought the others are too niche to be included in such a pick.


I want to downvote you for sentimental reasons. I won't but I want to.


Do it, and tell the world how I mistreated all kinds of interesting (but small) projects, if it makes you feel better. :-D

Here's a list of almost all the OS distros I've left out:

https://distrowatch.com/search.php?ostype=All&category=All&o...

Besides mentioning Slackware for historical reasons, I should have also mentioned NixOS, which I forgot.

The former was the first Linux distro; the latter has gained a respectable user base in recent years, I think.

Ah, and there is also Alpine, which you can see sometimes here and there. (People use it in containers for reasons I've never understood, as that's IMHO a recipe for trouble.)

But if you look at the rest of the list, most of the stuff is really obscure. (I've tried some Solaris derivatives and Exherbo in the past but hadn't even heard of all the other names.)


why is using alpine for containers a recipe for trouble?


TL;DR: It's slow and has all kinds of issues. If all you want is small containers, use Distroless.

Musl is still a kind of experiment. I would not recommend running experiments in production…

Just a few random links:

https://nickjanetakis.com/blog/benchmarking-debian-vs-alpine...

https://news.ycombinator.com/item?id=28312433

https://www.linkedin.com/pulse/musl-libc-alpines-greatest-we...

https://martinheinz.dev/blog/92

https://www.linuxquestions.org/questions/linux-software-2/mu...

https://unix.stackexchange.com/questions/729342/performance-...

https://github.com/rust-lang/rust/issues/70108

https://news.ycombinator.com/item?id=23080290

https://vector.dev/highlights/2020-07-09-add-musl-and-glibc-...

There are more. Many more of those!

Musl exists in large part just for ideological reasons. It still rides some hype in industry, as industry hates GPL software…


Thanks for the links. I'm just a hobbyist, but Alpine is often marketed as being great for containers. Interesting to see another perspective.


I think they were referring to the fact that everything has something "wrong" with it, but that those things don't invalidate the value of the greater whole.


Seems to be an extremely charitable interpretation of the parent comment. I also read it as -- "Despite the plethora of issues with Debian, it manages to surprise". Like the second-level commenter, I'm also curious to hear more about these "issues".


You poor person, reading malice where none was intended.

marpstar was correct.

I have been a Debian fanboi since Slink, since it was drilled into me by rvdm, ssmeenk, miquels, jdassen and dth.

And I have seen Debian survive and thrive through a lot of criticism: "too slow a release cycle, not Free(dom) enough, too free, systemd, too much politics, too many architectures, etc., etc., etc."

Yes, any large community will be flawed and deliver flawed solutions. And I for one celebrate those flaws and features, and appreciate the sheer magnitude and accomplishment of this enormous, complex, great project.


I see this old-package argument over and over again, and I think it is inaccurate. Considering that an estimated 95% of Ubuntu users use the LTS version, the table below demonstrates that Debian 12 (stable) packages are newer than those of Ubuntu 22.04. Both Debian 12 and Ubuntu 22.04 are LTS versions with 5 years of support.

    Ubuntu 22.04
        Kernel 5.19 (new installs only, existing installs 5.15)
        systemd 249
        KDE Plasma 5.24
        Gnome 42


    Debian 12
        Kernel 6.1
        systemd 252
        KDE Plasma 5.27
        Gnome 43


Debian Stable and Ubuntu LTS tend to alternate with respect to who has newer packages, because Debian ships on odd years and Ubuntu LTS on even years.

For most purposes, though, I find that I increasingly don't care about 1 or 2 years of difference in the base OS. Most of the toolchain is stable and well established. There are only a small handful of things I want to pin to a specific version (like node.js or Python), but these can usually be installed side by side with default packages. If not, I can always install it in a container. :)


Whilst I'm no longer an Ubuntu user due to their snap debacle, I don't think this is all that fair: they released over a year apart, and LTS is LTS for a reason :)


This is true by definition. Ubuntu releases are forked from Debian at the time of their release, so Ubuntu 22.04 is where Debian was in April 2022.


I have always used stable on my servers and testing on my laptop, but I recently switched to stable on the laptop with a kernel from backports (I have fairly recent hardware). I have never been happier :) (To be fair, testing was fairly stable too, but it still broke small stuff occasionally, and I feel I'm too old to deal with this ^^)


I realised this myself recently. I have used Ubuntu LTS for a long time; I don't use the in-between releases. They have about the same release cadence as Debian (2-ish years), so I'm usually not losing much by moving to Debian.

Ubuntu probably does the HWE kernel better than the stable backports kernel; the HWE kernel has a release schedule.

There's been more community support for Ubuntu in the form of PPAs but Flatpak has mostly solved that problem for the things I care about.

As such, I've already switched all my laptops to Debian, and will switch my desktop and work computer when I can be bothered.


> Ubuntu probably does the HWE kernel better than the stable backports kernel; the HWE kernel has a release schedule.

For a while the Debian kernel packages had -ckt suffix (Canonical Kernel Team), although it doesn't seem to be the case any more.


And the first beta of Proxmox 8.0, based on Debian 12:

https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.0_beta1


The amount of effort put into Debian is truly impressive. I have used it for decades and it has been remarkably stable. Use the stable release with unattended-upgrades and it's almost zero-maintenance.
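
For anyone who hasn't set that up: it's two commands on stable, with the second enabling the periodic upgrade timer:

  $ sudo apt install unattended-upgrades
  $ sudo dpkg-reconfigure -plow unattended-upgrades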

Also, an estimated 96.3% of packages are built reproducibly for amd64.

https://tests.reproducible-builds.org/debian/bookworm/index_...


Had to reinstall Ubuntu 3 times already since the beginning of this year, and thus switched to Debian. Hopefully I'll be able to settle in for a while.


For me Ubuntu has been the most stable distro. I probably won't move, since I want an up-to-date system that stays out of my way


Lol. Enjoy relearning the hot new way to configure DNS every release.


Netplan is garbage. I remove all that and use NetworkManager. It's good and just works.


Through the GNOME network settings, the same way it has always been. (I think GP meant desktop.)


What happened?


- Tried 23.04 and found too many bugs (I guess this doesn't count)

- One day 22.04 randomly started without a GUI and I couldn't get it back with either ubuntu-desktop or startx

- I installed a Python package without a virtual environment and it somehow interfered with the system Python; the bootloader broke

It's possible that those errors were recoverable, but I'm not a Linux expert and I couldn't repair it after ~2h of Stack Overflow


Since I disabled auto-update, most such issues have gone away, and what works stays working. I suspect the second was due to an auto-update, if you had that enabled. And don't get me started on Python versioning and the way that can impact a system.


Never go through odd versions (19.xy, 21.xy, 23.xy,…).


> installed a Python package without a virtual environment and it somehow interfered with the system Python; the bootloader broke

Shouldn't the OS python be protected by root?


> I installed a Python package without a virtual environment

Coincidentally, doing this is now disabled in Debian Bookworm.
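
(For the curious: Bookworm's Python carries the PEP 668 "externally managed environment" marker, so a bare pip install against the system interpreter is refused. A rough sketch, with a stand-in package name:

  $ pip install requests
  error: externally-managed-environment
  ...
  $ python3 -m venv ~/.venv && ~/.venv/bin/pip install requests

The venv path is just an example; pipx is another common way around it.)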


A bit offtopic... are there any distros besides Pop!_OS that come with the proprietary Nvidia drivers preinstalled? I tried to use a Debian live image on an RTX 4070 PC and nothing worked, just a black screen after GRUB. Pop!_OS works out of the box, but honestly I'd prefer something simpler like Debian.


How does Pop!_OS accomplish this without violating the licenses of the kernel and the NVIDIA drivers?


The sad truth is that the license of non-GPL Linux kernel modules is very rarely cared about or enforced. There are a ton of embedded devices that don't bother with the NVIDIA shim scheme but ship with straight proprietary .ko files on the device.


Since it is a distribution by a hardware vendor that ships NVIDIA GPUs, I'll assume that they got a license from NVIDIA to ship their operating system with the proprietary drivers.


Possibly, but what about the license of Linux? Surely nvidia.ko (being a derived work of both the GPL-licensed Linux kernel and the proprietary NVIDIA kernel object files) is non-distributable? Otherwise why does every other distribution faff around with akmods/DKMS, etc?


A lot of times you can fix boot issues like this by adding "nomodeset" to the boot command line.

I always had trouble booting Proxmox the first time, because even though it is a server OS with no graphics, the installer is graphical. I would get a black screen at boot.

I would just interrupt the boot, use 'e' to edit the command line, add 'nomodeset', and it would boot.
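
For reference, the edited line ends up looking roughly like this (kernel version and root= will differ per system; 'nomodeset' is simply appended):

  linux /boot/vmlinuz-6.1.0-9-amd64 root=UUID=... ro quiet nomodeset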


I have been thinking of switching to Debian from Pop!_OS and have a Thinkpad X1 Extreme Gen 2 with Nvidia GeForce GTX 1650 - if graphics drivers are an issue, then my wish is dead in water.


Why preinstalled? They are not even preinstalled on Pop!_OS. If you choose Nvidia via power management and the drivers aren't installed, only then will they be downloaded and installed. If you're thinking of the hassle because of offloading (Intel by default and Nvidia on demand), just purge bumblebee (at least that was the issue before Bookworm) and you have mostly the same experience.


I would play with the live CD for a while first. Even straight Ubuntu gave me problems on my Gen 1 (wifi, sleep, graphics...). I switched it back to Win10 and have a Carbon for Linux now.


Makes sense. I went with the Extreme model because I needed to drive one 4K + two HD monitors from it and was under the impression that Intel Xe graphics are not sufficient for that - happy to be corrected on this front (Nvidia has been nothing but trouble on Linux).


Testing this out in a VM. I want to move away from Ubuntu (honestly, from snaps).


I'm also considering this, but I'm a little afraid of being stuck in the slow lane when it comes to software updates. I'm aware of Backports, but I'm led to believe it has a somewhat limited selection.

Perhaps this is a good opportunity to try a combination of Debian, for general system stability, and Nix, for specific tools where I need newer releases? Has anyone tried this combination before? If so, how did you find it?


Why not use Linux Mint?

It is Ubuntu-based but without the bad parts like snap. So you get to keep access to all the Ubuntu packages but also your sanity.


I’ve been maintaining a machine deployed to my mother-in-law’s house for ~4 years with Mint.

She went from needing tech support every time I was at her house to never, overnight.

I highly recommend Mint for this scenario.


If you want to run a desktop system, I'd recommend Pop! OS. Right now, they seem to be the distro that cares the most about desktop experience.


You can run a reasonably current Debian install by just switching to the testing repos after you install a stable release. This mostly works pretty well, but occasionally[1] you'll have an issue. While you could get absolutely up-to-the-minute software (from a Debian standpoint) using the sid (i.e. unstable) repos, I wouldn't recommend it, as breakage is quite common there while they work through various packaging issues, and that repo lives up to its name.

[1] every couple of years in my experience... typically as they're getting closer to a new release and package breaking changes are needed/slip through.


Is this a desktop/laptop? You can always run unstable if you think stable is too old (I run unstable on my dev systems, and stable on servers/anything I want to set up and let run). FYI, if you use an Ubuntu LTS release, then unless you always run the latest LTS, the majority of the packages (being in universe) will actually be older than Debian stable (and will always be older than Debian unstable).


I’d suggest testing over unstable as very occasionally broken packages get pushed to unstable. Testing has a week or two delay, which usually catches such problems.


If you decide to run testing, be aware that while testing does get updates that address security issues after those updates work their way into testing from unstable, it does not explicitly get security updates.

Pinning some security-sensitive packages to stable or unstable might be worth considering. E.g., if running testing on a client, pin firefox and its extensions to stable + stable-security (note, globbing works too):

  /etc/apt/preferences.d/firefox:
  Package: firefox-esr
  Pin: release a=stable
  Pin-Priority: 999

  Package: firefox-esr
  Pin: release a=stable-security
  Pin-Priority: 999

  Package: webext-ublock-origin-firefox
  Pin: release a=stable
  Pin-Priority: 999

  Package: webext-ublock-origin-firefox
  Pin: release a=stable-security
  Pin-Priority: 999
The above priorities will not downgrade to stable from testing if the packages are already installed. To downgrade, the priority needs to be >= 1000. See 'man 5 apt_preferences'. If the priority is >= 1000, it's probably best to only do that temporarily, then lower it again to prevent setting a landmine for your future self.
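
For example, a deliberate, temporary downgrade pin might look like this (remove it or lower the priority again once the downgrade has happened):

  Package: firefox-esr
  Pin: release a=stable
  Pin-Priority: 1001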

If there are only a couple of things you want to update to newer versions that are not in backports, you can just run stable and pin those packages to versions in testing or unstable (but only if those packages pull in no, or only a few, dependencies not used by other packages). If you add testing/unstable sources to a stable system, add a catch-all pin to force those packages to a low priority by default, to prevent accidentally updating your entire system, e.g. for sid:

  /etc/apt/preferences.d/sid:
  Package: *
  Pin: release a=unstable
  Pin-Priority: 10
Pinning without thinking can result in a broken system. But I'm typing this on a box running testing (I guess stable, as of today) with packages pinned from Bullseye, Sid, and experimental, and I've never had a worse issue than an update being blocked by a dependency version conflict, which was easily worked around by pinning another package or removing/downgrading a pinned package; I run unattended-upgrades on all my boxes too. But my hard rules are: no scary deps (e.g., a different version of libc being pulled in), no deps shared with other packages that I would not want to have to pin to the same release (e.g., shared with any package with tons of deps itself), and no package that wants to pull in a lot of deps, regardless of how benign they appear.


You can also automatically pin security updates from unstable. I've been doing this for years now, doing updates 4x daily using unattended-upgrades.

https://wiki.debian.org/DebianTesting#Best_practices_for_Tes...


Personally I use the official Mozilla releases of Firefox & Thunderbird and drop them in /opt so that they get updated as soon as an update is released.

Otherwise this is good advice.


I run a mix of Debian and Arch/Manjaro.

Debian is fine right up until you have to build something and find that the dep you need is too old so you have to build that from source, and that dep has a dep that's too old so you have to... basically build hell.

Arch seems to not have these problems but is a hair buggier on occasion.


backports?


Combining Debian with Nix or Guix is a fairly excellent way to go. Stable OS base, selective bleeding-edge apps (or hell, multiple runtime versions that would otherwise conflict). Win-win.


What packages are you concerned about? Do you use flatpaks?


I tried the same move but I couldn’t find any reason to permanently move to Debian. The biggest problem is that some of the package versions are quite old. Ubuntu is far better when it comes to software updates. The snap stuff is crap though.


I see this old-package argument over and over again, and I think it is inaccurate. Considering that an estimated 95% of Ubuntu users use the LTS version, the table below demonstrates that Debian 12 (stable) packages are newer than those of Ubuntu 22.04. Both Debian 12 and Ubuntu 22.04 are LTS versions with 5 years of support.

    Ubuntu 22.04
        Kernel 5.19 (new installs only, existing installs 5.15)
        systemd 249
        KDE Plasma 5.24
        Gnome 42


    Debian 12
        Kernel 6.1
        systemd 252
        KDE Plasma 5.27
        Gnome 43


Just use Debian testing. I've essentially been running Bookworm for about a year now (that's actually the version name used in my apt conf). Ubuntu is pretty close to Debian testing version-wise.


Were you trying out Sid, or stable/testing? Stable tends to contain ancient software, but sid should be recent enough.


I suspect that a lot of people who are saying that Debian is behind don't understand Debian.

For them: whatever you download is going to be "Debian Stable." Debian Stable is as fresh and up-to-date as it's ever going to be at this moment, but it will not change significantly in the future, because its goal is stability. You throw it on something you want to run for years and not crash.

If you don't mind your system crashing every once in a while because you like new stuff, you use "Debian Testing." There is no place to download this directly from Debian, although some third parties (like Canonical with Ubuntu) distribute customized versions of it. The way you get a non-customized version is by installing Debian Stable, changing sources.list to point at Testing (which you can do as "testing" or by its nickname), then dist-upgrading. Debian Testing is being tested to be the next Stable.
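
A sketch of that switch, assuming a stock Bookworm sources.list (the substitution should also carry any bookworm-security line over to testing-security):

  $ sudo sed -i 's/bookworm/testing/g' /etc/apt/sources.list
  $ sudo apt update && sudo apt dist-upgrade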

"Debian Unstable" afaict is where individual pieces of software are being tested to go into Testing. Nobody should be running it unless they are contributing to Debian, although there are apt-get masters who know exactly what they're doing who will pull bleeding edge packages from unstable individually.

About the nicknames: Stable, Testing, and Unstable aren't releases, they're an indication of the current status of a release. Each release has its own goofy name. "Bookworm" has just moved from Testing to Stable. "Bullseye," which until just now was Stable, has now become "Oldstable."

Also important is the "[yourrelease]-backports" repo, which Stable users can add to take newer packages from Testing that are 99% certain not to mess with the stability of Stable. Stable + backports is a compromise between Stable and Testing for people who want new stuff that doesn't break things.

I'm sure most people know all of that, but 1) when it comes to things like this people are often afraid to ask because they're afraid they'll look stupid, and 2) the Debian website is very utilitarian and not marketing oriented, so there's no clear entry point for people who don't already know what they're looking for.


Just to add - I think the backports repositories don't get enough attention when this occasionally comes up.

Trying to run a mix of stable and testing packages can be a pain as occasionally a package you want to bring in from testing will try to bring with it new system libraries, which in turn often conflict with the "stable" versions of packages (so you are forced to move a lot more of your system to "testing" packages than you originally wanted to fix this).

The key advantage (at least for me) of using the "backports" repositories is that it avoids this - packages are compiled against the "[yourrelease]" system libraries.
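
Enabling them is also pleasantly boring; e.g. to pull a newer kernel onto bookworm (the kernel package here is just an example):

  $ echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
  $ sudo apt update
  $ sudo apt install -t bookworm-backports linux-image-amd64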


Just to correct the most glaring mistakes:

Ubuntu is based on Sid (Debian Unstable).

You can download Debian installers for Testing.

Usually nothing on Testing crashes, as such severe bugs are considered blockers to moving a package from Unstable to Testing. (Of course, once in a few years something slips through the testing period in Unstable. But it's then usually repaired within a few hours.)

People are running Sid. (Even I personally wouldn't recommend it.)

But one point of the parent I strongly support:

Debian's web page is a mess. I've been using this system for decades but still can't find anything on the Debian page without the help of a search engine. Also, when I need Linux-related documentation I go to the Arch Wiki (and sometimes the Gentoo docs), even as a Debian user. OTOH, you only seldom need documentation, because Debian "just works" for the most part.


I was an inveterate distro hopper, but finally settled on Debian because of its stability. It's not the most user-friendly, but when you get it up and running, "it just works". Debian really is a fantastic achievement in software.


Fantastic! Can’t wait to upgrade my servers and desktop. Debian is an absolute marvel.


But it's /not/ released though. Their own news section mentions that CD images are still being built, which seems like something that should have happened already. There's nothing but 11.7 available for download.


The bookworm apt repository is in its final state, so Bookworm is released for those with existing Debian installations, who can just do a search/replace of "bullseye" to "bookworm" in their apt sources file and run apt dist-upgrade.
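
Roughly, for a plain sources.list (the release notes also suggest adding the new non-free-firmware component if you need it):

  $ sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
  $ sudo apt update && sudo apt full-upgrade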


Sure, but the announcement mentions and links to Bookworm media downloads which are old 11.7 media. It's confusing, annoying and honestly kind of amateurish. I like Debian, but this is a screw-up. 20:00 in mainland Europe and it's still not out. My work laptop is in a sad state of semi-brokenness (self-inflicted), and I had been looking forward to giving it a fresh start over the weekend. Now I'll have to wait another week.


It is amazing how this group of volunteers creates the foundation for many more commercial Linux ventures and for use by billion-dollar companies.

A lot of end users of different distros do not even know that Debian is the foundation. I will go as far as to say that Debian has solved a lot of the hard issues and then others sprinkle on top of it. (Probably not a popular view.)

Anyways thanks to all the Debian team members. Your work ought to be better known.


The gift that keeps on giving, runs perfect on my headless 10-year old gaming-PC turned server in the basement.


I highly recommend installing proxmox and running debian VMs. It's really easy and the returns are great.


In the parent comment to yours, "runs perfect" means runs perfect. No need to insert an additional layer of stuff just to get back to "runs perfect".


Congrats! Been using it on a 2017 Thinkpad X270 with MATE. Everything works perfectly. Honestly might not be the "flashiest" distro but does the job perfectly well. And personally I always recommend it to people new to Linux.


Heck yes. Best distro of the best OS on the planet. I look forward to upgrading.


Just upgraded my Debian WSL distro and the experience couldn't have been more anticlimactic – I had to double-check lsb_release to make sure I'd actually upgraded, it was that seamless.
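
For anyone else checking, it should report something like:

  $ lsb_release -d
  Description:    Debian GNU/Linux 12 (bookworm)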


> The new systemd-resolved package will not be installed automatically on upgrades as it has been split into a separate package. If using the systemd-resolved system service, please install the new package manually after the upgrade, and note that until it has been installed, DNS resolution may no longer work as the service will not be present on the system.

Will installing over the internet work without DNS resolution?


Yes: the script adds/replaces the sources.list lines, then apt downloads the packages, and only then does it start installing them.


But you can't (easily) get the deb package for systemd-resolved after the upgrade has (possibly) broken DNS?


Yeah, that's not ideal, or possibly very bad UX, depending on how the update happens. (If the running stub resolver of resolved is not shut down, or at least an effort is made to fix resolv.conf during removal of the old package, then it might cover most users. As far as I understand, the 11 -> 12 upgrade works without manual intervention, and this is an edge case, since systemd-resolved is not enabled by default on Debian 11.)
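For anyone who does get bitten mid-upgrade, a stopgap along these lines should work (a hedged sketch: the exact resolv.conf state depends on the setup, and 9.9.9.9 is just an example public resolver):

  sudo rm /etc/resolv.conf            # often left as a dangling symlink to the stub resolver
  echo 'nameserver 9.9.9.9' | sudo tee /etc/resolv.conf
  sudo apt install systemd-resolved   # brings the stub resolver back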


For those wanting to try this out, note that the download links in this post are still serving the previous Bullseye (11.7) release at the moment.


Wow, and it even comes with the very latest KDE Plasma 5.27.5. Quite rare for stable Debian, and it makes it more up to date than Pop!_OS. Awesome.


Having used Ubuntu and then Pop!_OS, I have been thinking of going back to the roots and just sticking with Debian. But I also like getting new stuff (not bleeding edge, because I don't want things breaking all the time), and I have heard that stable is, well, old stuff (hence stable). Which makes me wonder: how unstable is "unstable" in reality? I have a very strong preference for sticking to .deb packages due to their wide availability, so I'm not really looking at cool distros like Arch.


"Unstable" is actually unstable. What you want is "Testing," or if you're a little more conservative, you want Stable + backports: https://backports.debian.org/

> You are running Debian stable, because you prefer the Debian stable tree. It runs great, there is just one problem: the software is a little bit outdated compared to other distributions. This is where backports come in.

> Backports are packages taken from the next Debian release (called "testing"), adjusted and recompiled for usage on Debian stable. Because the package is also present in the next Debian release, you can easily upgrade your stable+backports system once the next Debian release comes out. (In a few cases, usually for security updates, backports are also created from the Debian unstable distribution.)
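In practice it's one extra source line plus a target release flag; a sketch for bookworm (the package name at the end is a placeholder -- check backports.debian.org for what's actually available):

  echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
    sudo tee /etc/apt/sources.list.d/backports.list
  sudo apt update
  sudo apt install -t bookworm-backports somepackage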


"Testing" has the problem where security patches also get delayed, so it ends up less secure than unstable or stable (!). Unstable isn't that bad IMHO, stable+backports also works.


I installed -rc4 on a new work laptop and it's super nice.

I have some gripes about the installer's partitioning tool, but I suspect it would have been fine if I had been willing to reboot a couple more times. The laptop was locked down, and I needed someone to type the BIOS password every time I wanted to boot off the USB stick.

I wanted 4k blocks all the way down but got stuck with 512b sectors.
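For reference, checking (and, on NVMe drives that support it, switching) the sector size is done outside the installer anyway; a sketch assuming /dev/nvme0n1 and the nvme-cli package (reformatting destroys all data, and the right --lbaf index varies per drive):

  lsblk -o NAME,LOG-SEC,PHY-SEC                         # current logical/physical sizes
  sudo nvme id-ns /dev/nvme0n1 -H | grep 'LBA Format'   # list the supported formats
  sudo nvme format /dev/nvme0n1 --lbaf=1                # example index only -- check the list first!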


What a weird announcement. It says "To install Debian 12 bookworm ... you can choose from a variety of installation media types to Download..." and "If you simply want to try Debian 12 bookworm... you can use one of the available live images" as if it were all available, but in fact the Debian 12 images are not ready.

It feels like the blog post was written before the images were created, but they forgot to add a note saying when they'll be available for download, and published it anyway.

I understand from the comments that the apt repos have already been updated, so you could install Bullseye 11.7 and upgrade to 12 from within the OS, but that seems a convoluted way to do it. I guess I won't be trying out 12 this weekend.


The CD images should be downloadable towards the end of the day if previous releases are anything to go by.

This HN post jumped the gun a little with the Debian Wiki page: the final official release happens when https://debian.org/ gets updated to point to the new installation media.


We seem to have this conversation for every release; I wish Debian would make it very explicit when exactly a new release is "released", perhaps by sticking a banner at the top of the relevant pages that says something like "Debian 12 'Bookworm' has NOT been formally released but is expected to ship at $TIME/$DATE"


Didn't expect shiny-server to be mentioned so prominently. I might actually give Debian a shot for my personal machine. The five years of updates sounds great; the LTS option is the main reason why I went with Ubuntu so far.


Finally. I migrated from snap-cursed Ubuntu to Debian. I have been running Bookworm since last summer, and now I can stay on the stable distribution until I really need a new version of some software. That shouldn't happen too soon.


> The overall disk usage for "bookworm" is 365,016,420 kB (365 GB)

Time to flex the new SSD and install ALL the packages. Of course, I'll do it a bit later, when downloads get back to normal levels.


Congrats and major props to the Debian community! Running Debian in VMs is such a pleasure: very fast boot times, rock solid, easy install. In contrast, recent Ubuntu installs have only given me problems.


https://i.imgur.com/X9B5kFb.png <-- 365 Go (or should that read 365 Mo?)


All my VMs are Debian. Thanks a lot to everyone involved.


Nice! Debian has just worked for decades. Except when you break it yourself or update Nvidia drivers.


I'm not a Linux expert; I just want an OS that works well, and that's Debian for me. Now, how do I upgrade? Just apt upgrade? Does it matter that I have i3wm instead of whatever the default is?


The upgrade instructions are in the release notes. It looks long, but it's pretty straightforward if you haven't installed software from outside the Debian ecosystem or created a Frankenstein mix of packages from non-stable, non-backports distro versions.

https://www.debian.org/releases/stable/i386/release-notes/ch...


I've installed VS Code by downloading a .deb, not from an apt repo. Does that count as a "software Frankenstein"? Follow-up question: what's the worst that can happen? Will VS Code break, or might the OS get corrupted?


The Frankenstein stuff usually involves mixing package repos. Software distributed as a stand-alone .deb is generally pretty self-contained; I personally usually leave them alone. If you're following along with the upgrade instructions, they will have you remove "obsolete" packages, i.e. those which are not referenced from any currently configured package repo. Manually installed packages will show up there, and you can decide whether to keep them when reviewing that list.
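If you want to preview that list before upgrading, something like this should work (apt's pattern syntax needs a reasonably recent apt; aptitude is the traditional alternative):

  apt list '?obsolete'    # installed packages not available from any configured repo
  aptitude search '~o'    # same idea, if aptitude is installed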


> Now, how do I upgrade?

Read the documentation, follow it.

https://www.debian.org/releases/bookworm/amd64/release-notes...

> Just apt upgrade?

Mostly (please do RTFM):

  # update to latest point release of current system:
  sudo apt update
  sudo apt upgrade
  sudo apt full-upgrade
  sudo apt --purge autoremove

  # Update sources list
  sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

  # upgrade to bookworm:
  sudo apt update
  sudo apt upgrade --without-new-pkgs
  sudo apt full-upgrade
  sudo apt --purge autoremove

> Does it matter that I have i3wm instead of whatever the default is?

Generally no, unless you use third-party (or, to a lesser extent, "backport") software sources.


This is great news! I really like Debian as a Docker host for self-hosting since it doesn't have a lot of fluff. The old package versions do cause occasional problems, though...


Do you recommend switching from Ubuntu LTS to this one? Even for a heavy LXC and multipass user?

What has kept me on Ubuntu so far has been their out-of-the-box laptop support (i.e. WiFi drivers).


I switched away from Ubuntu when they started putting advertisements in the motd and login screens.


That didn't bother me much as it is "advertisement" about things related to my system.

What I did not like was Firefox taking 20s to start.


No, I'm not talking about useful things like last login, average CPU, etc. They were advertising their cloud products.


Ubuntu has (or at least used to have) ads as apps in the app launcher, served by Canonical and not at all related to the software you're using.


Moved back to Debian because lxc on Ubuntu required snaps. Turns out it's the same problem on Debian: snapd consumes 100% CPU. All. The. Time. Hoping Bookworm will solve it; else I'll be moving to Arch Linux, which has lxc without snapd.


I don't think lxc on Debian has ever needed snapd?

https://packages.debian.org/bullseye/lxc


lxd (not just lxc) is now also available as a deb package in Debian:

https://wiki.debian.org/LXD

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=768073


They're probably running lxd (which was previously only packaged as a snap, as far as I know); lxd is controlled by the "lxc" command (which isn't part of lxc...).


Ah. This questionable decision is why I abandoned LXC for Docker (initially) and eventually Podman/k8s.


The "lxc" command runs lxd, you need to run "lxc-'command'" for actual lxc.


Interesting - I don't observe such behavior with snap/lxd on my Ubuntu 20.04 and 22.04 boxes.

Not a heavy user though - maximum 15-20 VEs (virtual environments) per server.


Bookworm now has lxd as a natively packaged deb in the main repository. It's very nice to set up.
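First-time setup is basically just (a sketch; lxd init asks about storage and networking interactively):

  sudo apt install lxd
  sudo lxd init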


This is a Real Hacker News

Yeeei! My favorite fast Linux distro keeps going!

Congratulations


Does anyone know if Bookworm includes the Ubuntu-Mate version of the Mate desktop?


Unfortunately no, the Ubuntu panel layouts aren't there by default. However, I've written some guides that can help close the appearance & capabilities gap.

https://www.reddit.com/r/debian/comments/12gyjpg/debian_mate...

https://www.reddit.com/r/debian/comments/13ueaub/debian_12_m...


Thank you for those links. Since tomorrow I expect most of Reddit to go dark, I took the liberty of archiving them so I can be sure I can go into more depth then.

https://archive.is/iH9un

https://archive.is/R0oBe

https://archive.is/IKViC

https://archive.is/1Pyzs


I'm on testing, which just got released as Debian 12. mate-desktop is 1.26.0, but a few things (like mate-applets) are 1.26.1 and a few (like mate-panel) show up as 1.27.0. I don't use Ubuntu, so I don't know how these compare to whatever Ubuntu uses.


The dnscrypt-proxy package doesn't exist any more?


There was a build failure in one of its dependencies and that bug didn't get fixed in time for it to re-enter bookworm before the freeze.

https://tracker.debian.org/pkg/dnscrypt-proxy

https://tracker.debian.org/news/1366204/dnscrypt-proxy-remov...

https://bugs.debian.org/1017302


Have they released netinst images yet? I can only see 11.7.


Finally with init system alternatives, or do they still shove systemd down our throats? (If so, I'll stick to Devuan GNU/Linux instead.)


Finally? Debian has always shipped sysvinit and a bunch of other less-used init systems.


I cannot find install images without systemd and with sysvinit.

Am I missing something?


That's because systemd is the default. You can choose sysvinit during installation:

https://wiki.debian.org/Init#Changing_the_init_system_-_at_i...


Alright, I may not need Devuan GNU/Linux in the future, since I can choose the init system at installation time on Debian.

But it seems the effort to actually restore compatibility of some components with sysvinit comes from Devuan, not Debian. I may be wrong again.

Those are steps in the right direction, but I stay alert: I know that the sysvinit experience could actually be disastrous on Debian compared to Devuan. Not to mention that the Debian "default" is systemd; we all know how critical the choice of "default" is in the long run, which is why I may still go to Devuan.


> But it seems the efforts to actually restore the compatibility of some components with sysvinit is from devuan, not debian.

Don't blame Debian if the Devuan contributors fail to send their patches upstream.


That's not what I recall; what I remember ranged from strong resistance to outright blockade.

Well, it seems it did not last.


Or post-installation: basically, install sysvinit-core, copy the inittab into /etc, and reboot (systemd can then be removed/autoremoved/purged according to taste).
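Spelled out, that's roughly (a sketch following the Debian wiki page linked upthread; the inittab path is where sysvinit-core ships its example copy):

  sudo apt install sysvinit-core
  sudo cp /usr/share/sysvinit/inittab /etc/inittab
  sudo reboot
  # then, according to taste:
  sudo apt --purge autoremove systemd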


On one hand I am thankful that this guide exists. On the other hand, I am getting strong HHGTTG vibes [1] from it: "you can choose sysvinit as long as you follow a complex set of steps that's likely to go wrong, detailed inside a hard-to-find package".

Maybe I'm asking too much, but for me "official" support would mean that I can do it from the installer directly, no terminals and chroots required.

[1] https://www.goodreads.com/quotes/40705-but-the-plans-were-on...


You have it backwards: Devuan shoves sysvinit down your throat, whereas Debian is the one that supports alternatives


Devuan does exactly what it says on the tin. If that suits you, use it, otherwise you are free to not use it. It follows that since nobody forces anyone to use Devuan, nothing is being shoved down throats.


Yes, I was only copying the hyperbolic language of the parent commenter.


That's one of the reasons I don't like Debian. All this old cruft for compatibility with stuff nobody should be using. Just embrace systemd, jesus.


Actually, on my custom ELF/Linux distro, I have neither systemd nor sysvinit.

But one is grotesquely and absurdly bigger and kludgier than the other, so this is more about choosing the lesser evil...


systemd is the superior init system.


How did you come to that conclusion? In my experience, sysvinit comes with less bloat and hangs randomly much less than systemd. I've worked with both old Debian (sysvinit) and current CentOS (systemd) systems, and when I had a problem with init, it was always with systemd. If something related to disks or user login sessions fails, with sysvinit most of the time things proceed promptly, while with systemd you're in for a waitfest/eternal hang.

Systemd seems to be propelled by distro packagers and developers, but for admins/users it's not that great.


Because it works and writing init files with startup dependencies is cumbersome.


Sysvinit also works. How is writing init files cumbersome? Writing startup dependencies is easy: there is the Required-Start header in the ### BEGIN INIT INFO block, where you list the required services.
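For reference, a typical header looks roughly like this ("mydaemon" and the chosen $-facilities are illustrative):

  ### BEGIN INIT INFO
  # Provides:          mydaemon
  # Required-Start:    $remote_fs $network
  # Required-Stop:     $remote_fs $network
  # Default-Start:     2 3 4 5
  # Default-Stop:      0 1 6
  # Short-Description: Example daemon started via sysvinit
  ### END INIT INFO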


Because of the stuff you just mentioned: I don't know if a process has actually started (status could be wrong), and I don't see its output. Also, you end up in dependency hell with this type of info block.


Its superiority in kludge and bloat is not a matter of discussion; we all know that.



