Why Use Debian Stable on the Desktop? (wayoflinux.com)
38 points by russianhun on Jan 3, 2018 | 58 comments



This particular article seems to be an ad for consulting/support services, but its underlying point is one that I agree with:

Desktop usage (terminals, browsers, editors, word processing, spreadsheets, presentations, media listening/viewing and light creation) should not be surprising. It should not change out from underneath the user.

I've run Debian Stable for more than 15 years as my primary desktop system, almost always with XFCE. I add the Google Chrome repository and download the latest Firefox and Firefox Nightly. That's it. The whole thing runs really nicely on an i5-2500 with 16GB RAM and 2 SSDs in a ZFS mirror. I'll need new hardware when I switch from two 1920x1200 monitors to a big 4K screen; I won't need to change my OS.
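(A two-SSD ZFS mirror like that is a one-liner to create, for anyone curious. A minimal sketch; the pool name, ashift value and device paths are assumptions:)

    # pool "tank" mirroring two SSDs (hypothetical device ids)
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B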


Problem is, most software that is NOT in the official repository is made for Ubuntu TLS, not Debian stable.

So if you want to install VSCode, Telegram, Skype, Stremio, Sublime Text, Python 3.6, ClipGrab, Steam, Dukto, Tilix, etc., you know that half of them won't work out of the box on Debian stable. Each year it will be different ones, to make it spicier, for different reasons that can take a huge amount of time and energy to figure out, if you ever do.

Then if you need to do anything exotic, it's worse. E.g., I'm currently compiling a lot of cryptocurrency wallets (I run a masternode automation service). Most of them will only compile on Ubuntu. And if you don't want precisely Ubuntu 12.04 (I compile for 16.04), you'll need a little tweaking.

Don't mistake that for hatred. I donate to the Debian project financially; I adore them.

But for the desktop, it's really not making my life easy.


Leave it to Ubuntu to invent their own Transport Layer Security standard


> The whole thing runs really nicely on an i5-2500 with 16GB RAM and 2 SSDs in a ZFS mirror.

To be fair, I'd expect any OS to run nicely on that spec.


Not Windows 10.


W10 runs fine on my laptop, a ThinkPad T440 with an i5-4300U, 8GB RAM and an SSD.

W10 does a fair few things wrong, but performance is not really one of them, unless you're on a Pentium 4 or something.


A significant fraction of the time, Win10 is indexing files, running antivirus scans, running Windows Update (multiple times a day), etc. This will often pin 1-2 cores on my dual-core i7 laptop from 2016. That’s not good performance. Since I also use Ubuntu on a daily basis, I can see the benefits pretty clearly.


I converted a similar laptop from win10 to Ubuntu and now it is all cool and quiet.

Windows seems to require many, many CPU cycles just to update itself and index files (yet Cortana never seems to find anything, but that's another discussion).


Just wondering what made you think it's "an ad for consulting/support services"

Edit: I found the "offending phrase", I think. I dare you to click the link; it leads to a (free) online book. :D


As a serial distro hopper, my only complaint about using Debian Stable over the years is hardware compatibility. As the author noted, Stable isn't for the latest and greatest hardware. This was especially true for me a couple of years back when trying to run Debian Jessie on a then-new Intel Braswell based system. The video driver as shipped was unable to render at any resolution above 1024x768, and there were severe artifacts especially with certain fonts. Switching to Ubuntu cleared up the issues (newer kernel and newer X11 driver).

That said, Debian Stable is an ideal desktop OS for older hardware. I have a Mac Mini from the first generation of Intel Macs, upgraded to a Core 2 Duo CPU, that runs BunsenLabs Linux (based on Jessie and a 3.x kernel) just fine. In fact, it's one of the few Linux distros that still supports that 11 year old machine, long after even Apple left it behind.


It's literally the only distro I can run on my two My Book Live mini file servers, them being PowerPC and all. I have to run a patched kernel, though.

Couldn't see running it as my desktop distro, though, since I like to fiddle with stuff, and having an older compiler and Python isn't really all that desirable. Back when I hacked on Blender I was even having a hard time keeping up on 'bleeding-edge' Fedora, since one of the main devs liked to upgrade to the newest Python release as soon as it was out.


Debian has dropped PPC support moving forward, so you're on borrowed time. You may want to look into OpenBSD or NetBSD as a backup plan if you intend to keep that hardware long-term. I've found OpenBSD to be nearly painless to maintain and not too much of a learning curve (admittedly coming from Slackware being the Linux distro I cut my teeth on nearly 20 years ago).

That said, I don't know how difficult it would be to install a non-Linux OS on that particular device; I know it was designed for and shipped with Debian. I've had success installing the various BSDs on all kinds of obscure hardware but I've yet to get my hands on a MyBook Live.


Or in other words, "I use Debian Stable on my home computer and here's how I've justified it to myself".

If Debian Stable does everything you want out of an operating system, by all means use it; there's no reason not to. But some people may want to make use of newer features of Linux and its surrounding ecosystem that aren't available in Debian Stable yet. The idea that Debian Stable is a good choice for all users ignores those whose desired features differ from the author's.

In other words, people have different priorities. If you've found something that works for you, good for you, there's no need to justify it.


Well, your comment implies that you have not actually read the thing you're complaining about, as "[t]he idea that Debian Stable is a good choice for all users" does not appear in the text anywhere. In fact, the opposite is stated.


The more I have had to deal with the Linux ecosystem, the less I want to do with the bleeding edge of it.

Because you will find shit breaking, willfully, every other month or so, because some caffeine-addled code monkey thought he could do a decade-plus usage-tested lib better in a new language over the weekend.


> "The more i have had to deal with the Linux ecosystem, the less i want to do with the bleeding edge of it."

Sure, but that's not what I was getting at. If the bleeding edge of Linux didn't have any attractive features, hardly anyone would use the features from it in their desktop OS.

Consider a use case. Take gaming for example. Let's say you want to get the best performance for gaming on Linux, so you don't have to upgrade your gaming rig to play a game that your existing rig should be able to handle. As far as I'm aware, there have been quite a few improvements made to GPU performance in recent kernels, particularly with AMD. What distro is going to be best for Linux gamers?

You could use the newer kernel with Debian Stable, but as the age of the platform grows, there's a high chance of mismatches between libraries. In other words, if you stick with the stock distro then stable branches have benefits, but if you introduce any new features that haven't been through the level of testing given to the stock components the guarantees of stability go out of the window.
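(For reference, the usual route to a newer kernel on stable is backports; a sketch for the Debian 9 "stretch" era, assuming an amd64 machine:)

    # enable stretch-backports, then pull only the kernel from it
    echo 'deb http://deb.debian.org/debian stretch-backports main' | \
        sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    sudo apt -t stretch-backports install linux-image-amd64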


As a contrary anecdote, Arch Linux has only gotten more stable for me over the past years. I'm running it on multiple servers and have had only one incident where a downgrade was necessary (which is not particularly hard either).


>I'm running it on multiple servers

You're the only one.


Eh, I also do that, and in fact archlinux.org and all of its servers are hosted on Arch Linux. It works quite well, and there is little reason to suspect it should be any less stable than other distros.


Last I checked the Arch website was run by a Debian box, not to mention funded by Debian.


I'm fairly knowledgeable about our (Arch Linux) infrastructure because I set up a big chunk of it. I'm not sure where our donations come from, though.



As a Debian user for 15 years, I like it, but as a developer looking to contribute packages, I am displeased. I can't figure out how to update Gambit Scheme to the newest version by making a package, setting up a local repo, etc. The documentation on the subject in Debian is all over the place, spread across many sites giving conflicting answers.

Gentoo and Arch, in my limited experience, are much simpler to build packages for. Red Hat too. One build file is all that's required.
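For a flavour of what that one Arch build file (PKGBUILD) looks like, here's a hedged sketch; the package name, version, URL and build steps are made up for illustration:

    # PKGBUILD (sketch): metadata and build steps in one bash file
    pkgname=gambit-scheme
    pkgver=4.8.0
    pkgrel=1
    pkgdesc="Example package (hypothetical)"
    arch=('x86_64')
    source=("https://example.com/gambit-$pkgver.tar.gz")  # hypothetical URL
    sha256sums=('SKIP')

    build() {
        cd "gambit-$pkgver"
        ./configure --prefix=/usr
        make
    }

    package() {
        cd "gambit-$pkgver"
        make DESTDIR="$pkgdir" install
    }

Running makepkg -si in that directory builds and installs the package.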


I regularly build Arch packages. I once tried to build a Debian package, followed the tutorial, but gave up after nearly two hours. The amount of ceremony involved in Debian packaging is just absurd.


The problem with Debian is that there is no One True Way to build packages. Instead everybody builds wrapper scripts calling other wrapper scripts and it builds up into a giant mess.

I'm guilty of this myself. The script I made is heavily inspired by Arch. The resulting script for Firefox looks like this: https://github.com/qznc/simpledeb/blob/master/test/firefox.D...


Debian Maintainer here (currently in the process of becoming a Debian Developer).

I agree that the documentation could use various improvements, but I would like to add that there's a difference between building a deb package and building a deb package for Debian.

Tl;DR: Building a deb package for Debian involves lots of extra steps and caution compared to building a simple deb package, because Debian is pretty strict about what can enter the official repository. The documentation needs to be improved for both cases anyway.
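To make the distinction concrete, the "simple deb" case can be as small as this sketch (package name and contents are hypothetical, and this is nowhere near Debian Policy compliant):

    # lay out the package tree, then build it with dpkg-deb
    mkdir -p hello/DEBIAN hello/usr/bin
    cat > hello/DEBIAN/control <<'EOF'
    Package: hello
    Version: 1.0
    Architecture: all
    Maintainer: you@example.com
    Description: minimal example package
    EOF
    install -m 755 hello.sh hello/usr/bin/hello
    dpkg-deb --build hello   # produces hello.deb

Getting the same software into the official archive adds the debian/ source packaging, changelog and copyright files, lintian checks, and so on.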


I'd like to update Gambit Scheme from the 4.2 series to the 4.8 series in Debian unstable. If you have some time to help me figure it out, please email my username at Gmail.


As an Arch Linux user for years, this hurts me. I can't even imagine living with all that old crufty stuff, and then when you actually upgrade something you get library issues and so on.

For those who are too afraid to run Arch, consider openSUSE Tumbleweed. It is as up to date as Arch Linux (or more so), but they do a lot of automated testing, plus a lot of automatic snapshots so you can go back if needed (that is, if you are willing to run Btrfs).
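(The snapshot/rollback part is handled by snapper on openSUSE; a quick sketch, with a made-up snapshot number:)

    sudo snapper list           # show the automatic pre/post snapshots
    sudo snapper rollback 42    # branch a new default snapshot off #42; takes effect on reboot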


I think one of the biggest things for me about stable/LTS distros, aside from dealing with "old, crusty stuff", is that said software is upgraded all at once every two or three years. Doing 5 or 10 minutes of work a month looking into changelogs for packages and adjusting settings and whatnot is much more manageable than having everything change underneath you all at once, possibly by several major versions. It's almost a miracle that stable->stable upgrades even work.


I totally agree, have been a happy Tumbleweed user for over a year now. Installation is a breeze compared to Arch.


How dare you say that installing Arch is not easy?????


> compared to Arch

Have you ever tried to install Tumbleweed? I was comparing the two, not using absolutes.


Sorry folks, some oversensitive kid took it upon themselves to correct what they didn't agree with, and DDoS-ed the site for about an hour or so.

Should be back online now...


I use Debian 9 with gnome. I don't know what I'm missing by not using one of these other distributions and don't care enough to seek out that information because I have thus far managed to do everything I need with Debian, and have used it for years.

Damn. Maybe I need to look around.


I have been a Debian user forever, but my current go-to for "recommending Linux to friends" is Ubuntu Studio.

Latest release pretty much rocks, everything works smoothly out of the box, and the system is set up and ready for multimedia audio/video/etc. content creation.

It's still quite Debian underneath, but there is a lot to be said for having a multimedia workstation ready to go, out of the box, on the other end of the spectrum...


Yeah, it's nice to experience a low-friction setup that does a great many things. I used Ubuntu for a little while but kept breaking it. I'm a bull in an operating system china shop.


Site is down.

Google Cache: https://webcache.googleusercontent.com/search?q=cache:nHJYRo...

<obligatory-rant> I'll never understand why serving static(!) sites is so hard. Are modern blog systems still that bad? HN traffic is far below 100 req/sec (perhaps below 10 req/sec), which should be an absolute no-brainer for any modern webserver. [1] Heck, given a good internet connection, one should be able to run 10 such blogs on a Raspberry Pi and still survive HN. </obligatory-rant>

[1] According to ServerFault, challenges start at 100000 req/sec: https://serverfault.com/q/408546/175421


> I'll never understand why serving static(!) sites is so hard.

It's not hard. They're all adding stupid bloat for no good reason.

When my blog was on the frontpage, traffic peaked at 1 Mbit/s (that's megabits, not megabytes) and CPU load peaked at 5% of a single core (and only because that box runs a dozen services in parallel).

Everyone whose blog cannot withstand the HN crowd deserves to have their computer operator's license revoked.


A couple of years ago I had a couple of articles on a static blog hit HN.

Out-of-the-box nginx on Ubuntu 14.04 on the 1GB Linode (then the second tier) handled it perfectly fine, with no disruption to a TeamSpeak server that was on the same host at the time.


According to support, my site just got DDoS-ed. Makes me wonder who can get THAT pissed off with an article about Debian. Anyway, it makes me feel like I'm doing something right. :D

(BTW you're right, it's a cheap host and quite slow even when it's working. The site runs on Grav (a flat-file CMS), which is a lot faster than e.g. WP and its ilk (on the same cheap host).)


Thanks for clearing this up.

It's good to know that there is sane blog software out there, and that it is actually used.


I'm trying to make it insanely sane :D Looking for a better host already...


    challenges start at 100000 req/sec
Only if you have a well-configured cache in front of your CMS. If you don't expect any significant traffic, it's perfectly reasonable to serve directly from your CMS. And that might be super slow, because optimizing a CMS for performance does not make sense; that should be the job of a cache in front of the CMS.


The best performance can be had by just dumping some .html files into a directory and pointing nginx/Apache/whatever at it.

Static site generators are a thing now.
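Pointing nginx at such a directory is about this much work; the paths and hostname here are assumptions:

    # write a minimal vhost that serves pre-generated HTML, nothing dynamic
    sudo tee /etc/nginx/conf.d/blog.conf > /dev/null <<'EOF'
    server {
        listen 80;
        server_name blog.example.com;
        root /var/www/blog;
        index index.html;
    }
    EOF
    sudo nginx -s reload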


Who cares about performance for a random blog? As long as you don't get hit by HN, of course.


On the other hand, why bother writing a blog if you don't expect any readers?

There's this hidden assumption here that "the HN crowd" = "a huge amount of visitors". The typical HN crowd for a front-page article will be on the order of 10,000 visitors. That's really not much on the scale of the internet.


Why implement premature optimisations if you expect 100 or 1000 readers a day? There's a huuuuge difference between 10k an hour and 10k a day.


This argument doesn't make any sense to me.

It might be true for self-written blog software, but amusingly those tend to have quite good performance ... either because they are static generators, or because they render pages on the fly with very simple and hence fast templates.

However, there is no such excuse for popular CMS. Those have been developed over 10+ years, and have a wide range of users - small as well as large blogs.

Finally, I wouldn't call static HTML pages "premature optimization", but rather "the natural thing to do". Let's have a look at the data access pattern: on average, articles are written once, seldom updated, and read at least 10x as often as written. With increasing popularity, this ratio shifts even more in the "read" direction. [1] Since the datasets are small (on the order of KiB or MiB), complete regeneration is feasible. Moreover, it is much simpler and less error-prone than caching. And you can speed up site generation with classic build tools (make, tup, etc.), if you want, as sketched below.

[1] That is, with increased popularity more articles are written due to the increased motivation, but disproportionately more readers will arrive.
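To make "complete regeneration" concrete, a sketch assuming markdown sources and pandoc:

    # rebuild every page from scratch; at KiB-MiB scale this takes seconds
    mkdir -p public
    for f in posts/*.md; do
        pandoc -s "$f" -o "public/$(basename "${f%.md}").html"
    done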


Using Debian Stable on a desktop is probably the closest to hell you could ever get. Imagine never being able to get that one feature you need because it's only in a more recent version.

The best option for a stable desktop is Gentoo, where you can run old software but then unmask recent stuff if you need it. As a recent example: MTP support in stable (libmtp) was flaky for my mobile phone. With Debian I would've had to suck dicks in the bug tracker forever to get the fix backported. With Gentoo I just unmask a more recent version et voilà. And that applies to any software you could ever need: you are always going to need more recent versions of certain packages at some point. Stuff that gets fixed, a certain new feature you want, a more recent kernel...
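(Concretely, that unmask is one line; this sketch assumes amd64 and that package.accept_keywords is a file rather than a directory:)

    # accept the testing (~amd64) keyword for just this one package
    echo 'media-libs/libmtp ~amd64' | sudo tee -a /etc/portage/package.accept_keywords
    sudo emerge --ask media-libs/libmtp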

I know someone is going to mention apt pinning, but that only works in theory, since pinning a more recent version of a certain package usually means you have to update half your packages to unstable because of dependencies. And in that case, why even use stable?
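(For completeness, the pinning in question looks roughly like this, assuming unstable is already in sources.list; the priorities are the conventional values and the glob is an assumption:)

    # let libmtp* come from unstable while everything else stays on stable
    sudo tee /etc/apt/preferences.d/libmtp > /dev/null <<'EOF'
    Package: libmtp*
    Pin: release a=unstable
    Pin-Priority: 990

    Package: *
    Pin: release a=unstable
    Pin-Priority: 100
    EOF
    # a plain `apt install` now prefers unstable for libmtp* -- and, as noted,
    # routinely drags half the dependency chain along with it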


>The best option for a stable desktop is Gentoo

Get with the times, gramps. Arch is where the racing stripes are at now. Not held back by all these weird dinosaur architectures, Arch can really optimise for superior speed, and features, and modern things, leaving Slackware steamboats, Debian diesels, and Gentoo petrols in the dust, because Arch is the EV of Linux operating systems.

The world of anecdata and pointless optimisations has transcended Gentoo. You're now the Debian Stable of the next generation of Linux users, and boy do they know better!

----

with apologies to any Arch and Gentoo users out there. Your OS is fine. I just thought it was amusing to look, out of context, at these developments.


I used Arch years ago. It was a trainwreck. They had very, very bad QA and I had to reinstall every few months. The community is also full of people I wouldn't want to have around, as your own message shows.

Also I don't use Gentoo to get "pointless optimisations". Ricers are the low hanging fruit that's there to be mocked. I use Gentoo because it's the best rolling out there.


> full of people I wouldn't want to have around, as your own message shows.

That's way stronger than anything I intended to say, even if I made a bit of fun of the overly enthusiastic ones.

My joke was based on how, way back when, when Gentoo was younger and the hip distro, whenever a thread discussed Debian on the desktop or server, a newly minted Gentoo user would drop by to extol the virtues of Gentoo for every use case possible: "But I use it on servers," "My desktop is so much faster," etc.

Now in this and similar threads, I notice it's the Arch users who have taken over this function. So when you brought out Gentoo, it felt like a blast from the past, prompting me to accuse you of being "out of touch" for humorous effect. No offence (well, very minor offence) was intended, and I bear no ill will towards either Arch or Gentoo users.


>I used Arch years ago. It was a trainwreck. They had very, very bad QA and I had to reinstall every few months.

Wow, just about the only OS I've never needed to reinstall is Arch (and I am a 99% Linux user at work and at home). My longest-running install is something like 7 years on a desktop, and my laptops tend to die or get replaced on about a 3-4 year cycle.


IMHO the best option currently for a stable OS with up-to-date desktop apps is NixOS. NixOS allows one to have a stable base OS and then install packages in the user account from a different channel. Since it uses deterministic builds, the binary caches typically mean no building needs to happen locally. NixOS permits multiple versions of any application/library/dependency to be installed side by side, even libc. I run such a configuration and all my desktop packages are completely up to date, while my base OS and system components track a stable release.
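A sketch of the mechanics (the channel URL is the standard one; the package choice is arbitrary):

    # as a normal user: track unstable for apps while the system stays on stable
    nix-channel --add https://nixos.org/channels/nixpkgs-unstable unstable
    nix-channel --update
    nix-env -iA unstable.firefox   # up-to-date app in the user profile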


There is a fundamental difference between the system and applications. All common Linux distros unify them into one package management mechanism. Your comment suggests that NixOS is flexible enough to make that split.

Android and iOS clearly separate system and applications and it mostly works fine.

Ubuntu tries to split off applications via snap [0], but so far adoption seems marginal.

[0] https://www.ubuntu.com/desktop/snappy


If I understand NixOS correctly, there is no explicit distinction between system and applications, and everything is handled by the same package management mechanism. The value proposition is in having clean separation between all packages, such that each application can be presented with its own mix of system packages without any conflicts between them.


> each application can be presented with its own mix of system packages without any conflicts between them

Furthermore, everything is done, or is on the way to being done, in a functional way: system configuration, deployment, etc.

There are even efforts underway to manage dotfiles.

IMHO, the Nix way is a great leap ahead.


"Imagine to never be able to get that one feature you need because it's only in a more recent version."

Everything important has been online for a long time. The web browser refresh button works fine.



