Attracting and Retaining Debian Contributors (lwn.net)
153 points by sohkamyung 14 days ago | 126 comments



Every volunteer group eventually starts to look alike.

This is how it was explained to me over a decade ago, and I haven't needed to change much since: human nature doesn't change very fast, and this holds for every group I've belonged to.

You get a lot of young adults who have more time than money. Their contributions are energetic but often short lived. They move away, their sense of self shifts away from the cause, their lives get too complicated, or they overdo it. It’s the people who pace themselves that avoid burnout. The trap is caring so much about a cause that they hurt themselves and have to step back. You get a smaller number of generally older regulars who keep the wheels on, and you have a few old timers who remember all the way back to the beginning. What has been tried. Who we have collaborated with or gotten donations from in the past.

So attract your volunteers, identify and groom some for small leadership roles. But make sure not to overload them, and encourage them to set healthy boundaries. It’s keeping them that kills many orgs. Though some only focus on retention and become an echo chamber of greybeards. I don’t believe Linux has that problem. Yet.


Getting smart young people to contribute to Linux Distros (and BSD for that matter) to me is a tough job.

Linux kicked off in the 90s because young people at that time wanted a real OS, so Linux came out and people flocked to improve it.

Today, almost all kids use Cell Phones, which are all locked down. I think only a few young kids rely on PCs, thus the issue. I hope it can be solved at some point and good contributors can be found. Otherwise we will be left with just IBM Red Hat.


I suspect the same number of people want a real OS as before, the difference is that everyone now has an OS of a weaker kind that gives them access to the same web that we're all on. So it's a similar problem that we have on the web as a whole—it's not that there's less good content (there's actually dramatically more), it's that it's hard to find it in the deluge of bad content.

The trick that we have now is that young people with potential to be highly technical won't just come to us automatically like they did when we were 60% of the web. Instead we have to make a more conscious effort to do outreach and give people a taste of the power of real computing.


I agree with your assumption that there’s the same number of people who want a real OS as there ever was.

However, I believe there are more active distros and related highly-technical projects today than ever. So each project gets a smaller share of contributors, except of course for whatever the few hot/popular projects are (the latter is nothing new).

I do worry though. Major academic institutions are having to put freshmen in remedial computer classes to teach them what files, folders, and devices are.

The average person is getting farther and farther away from an actual PC and deeper into the “app” world. Everything is moving to “the cloud”. Even software development is being pushed to virtual desktops/thin clients (look at GitHub workspaces [or whatever it’s called] as an example). The extremely large majority of software development today is web/cloud based. More and more non-technical people are buying “computers” that are really just glorified tablets.

At some point, likely not/hopefully not in our lifetimes, it could become unprofitable for manufacturers to build PCs, or only a few will remain and the costs will become prohibitive for anyone but for-profit companies/corps.

Hopefully that’s a pessimistic outlook.


There are still a bunch of young ones on PCs because of gaming; unfortunately, the gaming scene is chiefly on Windows. Proton and Steam Deck were improvements, but the market share is still heavily leaning towards Microsoft.

Maybe if things get better we could see a steady increase in Linux users, as we are already seeing.


I’m ~young (23) and care about Linux. But I’m not going to volunteer to work on it for free. I have bills to pay. I’m here to get paid.

Something that I hope dies off in the next decade is corporations leeching off of passionate volunteers doing free work.


> Getting smart young people to contribute to Linux Distros (and BSD for that matter) to me is a tough job.

I mean, they came with enthusiasm and a lot of hard work, brought a new language with many advantages (as said by the BDFL himself), dusted off the whole GPU stack, tried to formalize the filesystem layer a bit, then got shunned by the greybeards because “muh youngsters and their religion”.


That's not really a fair representation of the argument, and the stereotyping of everyone who disagrees or expresses some caution as an old stubborn greybeard who refuses to learn anything new is exactly the thing people (rightfully) complain about.


Please feel free to develop what you perceive as being a fair representation of the situation.

> or expresses some caution

Bellowing about religions while monopolizing a talk Q&A is not “expressing some caution”.


I'm not going to spend half an hour (or more) doing a full detailed write-up in response to a lazy assertion, especially since your attitude does not resemble that of a person with genuine interest in understanding other people's viewpoints.

Stereotyping an entire viewpoint as you did in your previous comment is pretty much always wrong. I might be a bit more forgiving if it had included some context or details, but it doesn't. Your post is little more than an insult.

I will add that people have been choking each other over all sorts of technical stuff for as long as this "free software" thing has existed, because there are usually 20 ways to do anything and typically between 2 and 10 ways are reasonable. This is really nothing new and discussions surrounding Rust are really not that special.


The full quote:

> You're trying to convince everyone to switch over to the religion as promulgated by Rust, and the reality is that ain't gonna happen because we have 50+ filesystems in Linux. They will not all be instantaneously converted over to Rust. Before that happens, we will continue to refactor C code because we want to make the C code better. If it breaks the Rust bindings, at least for the foreseeable future, the Rust bindings are a second class citizen, and those filesystem that depend on the Rust bindings will break, and that is the Rust bindings problem, not the filesystem community at large problem, and that's going to be true for a long long time. And I think we just simply need to accept that, because the answer "you are not allowed to refactor the C code because it would break 5 critical filesystem that distros depend upon" is not a starter. So we'll see; I suspect the best thing to do is for you to continue maintaining your Rust bindings. Over time, there will be continued C code refactoring -- maybe we will start using kfree RCU. If that breaks Rust, we will find out whether this concepts of encoding huge amount of semantics into the type system is a good thing or a bad thing, and instead of trying to convince us to what is actually correct, let's see what happens in a year or two. And it will either work or it won't, and we will see, more likely, where does the pain get allocated. Because with most of these sort of engineering things it's almost always a pain allocation question.

> [...]

> You're not going to force all of us to learn Rust -- if I make a change, I will fix all of the C code, because that's my responsibility. Because I don't know Rust, I'm not going to fix the Rust bindings.

Some of his language is more emotionally charged than was productive, sure, but I think his underlying concern is reasonable: most kernel maintainers don't know Rust, and so when they make changes to C code, they won't be able to fix Rust clients of that C code. The Rust maintainers accept this, and agree to take responsibility for updating Rust code to match changes in C (which is contrary to the usual development workflow of the kernel, hence the concern over what happens if major distros start depending on Rust code).

Note also that Ts'o isn't objecting to including Rust in the kernel! His recommendation is to ship the Rust bindings and see "where the pain gets allocated" as the C code continues to evolve. (The idea being that either Rust's type system and borrow checker will make it easy to identify semantic breakages early, thus making Rust code lower-maintenance than C, or the Rust bindings will turn out to be fragile and high-maintenance, and that the best way to tell is by waiting to see).

I'm not a part of the Linux kernel community at all, and I certainly don't know all the context that led to Almeida leaving the project -- I imagine Ts'o's comments were a "straw that broke the camel's back" kind of thing. But the Linux kernel is a complex project and critical infrastructure, so surely concerns about organization and maintainability need to be hashed out before Rust can make it to production.

Also, Ts'o's comments struck me as some of the most reasonable out of that talk, at least from a technical perspective -- the next commenter after Ts'o objects to Rust in the kernel on the basis that its function call syntax reminds him of Java.


> (which is contrary to the usual development workflow of the kernel, hence the concern over what happens if major distros start depending on Rust code)

The issue here is that this wasn't decided in this moment: it was decided as soon as Rust was approved to merge. You're right that it's a concern, which is why it was addressed back then, and so bringing it up a few years later feels like an argument in bad faith rather than a genuine attempt to bring a concern to the table.


Just because it was discussed way back when doesn't mean everyone has to accept the resolution from that. People obviously think it wasn't addressed in a good way.

There's a lot that you can say about all of that, but calling it "bad faith" is not great, to put it mildly. I had seen the video before, but didn't realize the person speaking was Ted Ts'o (I'm a sad case and don't recognize Linux developers by voice alone). Ted has spent about 30 years working on all of this stuff, including investing a great deal of his spare time on it. He isn't some sort of random internet troll or anonymous HN user. Dismissing his views as "bad faith" does not leave me impressed.


I don't see how anyone but Linus could sign off on changing something as large as "Rust code is now required to build the kernel." I'm also not aware of anyone suggesting that should change.

I know who Ted is as well. That's why I expect better behavior. Then again, I'm not involved in any way, so my opinions don't really matter.


"He could have done better" is a very different thing to say than "it feels like a bad faith argument instead of a genuine concern".

People are people and people can often "do better" in many ways on account of being people. That doesn't mean they're acting in bad faith.


> Just because it was discussed way back when doesn't mean everyone has to accept the resolution from that. People obviously think it wasn't addressed in a good way.

That's precisely it, and why we also usually call behavior like Ted's "passive aggressive", and why Steve is right to call it out as seemingly made in bad faith. The person to whom he should address his complaints is the decider, the BDFL, Linus Torvalds, not anyone else.

Sniping, delay tactics, bureaucratic maneuvering, and endless bike shedding can poison projects. If you think Rust in the kernel is a mistake (as opposed to thinking, in good faith, that this or that implementation detail is technically incorrect), you need to plainly express that view to the person who makes the decisions, instead of whining to and about the people doing good work, endorsed by project leadership.

It's fine to be on record that "this thing will never work". It's less fine to work non-constructively to kill something before it gets an opportunity to work.


Eh, it's no longer allowed to voice discontent in public once Linus has spoken? What a bizarre thing to say. This is never how kernel dev worked, or indeed, how most projects work. Linus is not and has never been an "I have decided, now everyone heed my edict"-type of BDFL.

A long-term contributor with a great deal of investment in the project phrased things in the moment in a way that perhaps wasn't too brilliant (he was clearly emotional/nervous/whatever, as can be heard from his voice), and now he's somehow a bad faith actor trying to poison the project? Again just completely bizarre.

You come out here with some pretty big accusations based on basically nothing other than "he strongly disagrees". Well, he, ehm, he is allowed to state those views. If you want to radically change how kernel dev works and expect people who have been working on this for 3 decades to just nod and smile and voice disagreement only in the Approved Way™, then I think you will end up bitterly disappointed. That's not how it works anywhere.

You need to actually convince people. And yes, sometimes people will phrase things in a suboptimal way, but that doesn't mean they're acting in bad faith. Bizarre thing to claim with such aggression and force.

> Sniping, delay tactics, bureaucratic maneuvering, and endless bike shedding can poison projects.

So do your baseless accusations that border on character assassinations, or dismissing people's concerns as "bike shedding" that they're somehow no longer allowed to bring up.

If you want to hold people to high standards of kind, empathic, and careful communication, then start by employing those standards yourself. That is: it's okay to criticise Ted for how he handled things. It's not okay to guess at his motivations, dismiss his views outright, and things like that. One is (hopefully constructive) criticism. The other is a personal attack and exactly the sort of thing that toxifies these discussions.


> Eh, it's no longer allowed to voice discontent in public once Linus has spoken? What a bizarre thing to say.

I'm sorry you misunderstood me, because I said nothing about simply voicing one's discontent. I guess it needs to be stated explicitly:

Everyone, including Ted Ts'o, please feel free to voice your discontent.

My argument was only: When you do voice your discontent, please direct it to the right/responsible person (Linus) and not someone else. Or if it's easier for you: Don't shoot the messenger. Which I thought was clear when I said:

>> The person to whom he should address his complaints is the decider, the BDFL, Linus Torvalds, not anyone else.

> You come out here with some pretty big accusations based on basically nothing other than "he strongly disagrees".

I'm sorry, but recall you said:

>>> Just because it was discussed way back when doesn't mean everyone has to accept the resolution from that. People obviously think it wasn't addressed in a good way.

I only agreed with you about what I thought we both believed the problem to be: Ted, like some others, is frustrated by Rust being included in Linux, and is venting his frustration at the Rust for Linux maintainer, not Linus.

> So do your baseless accusations that border on character assassinations, or dismissing people's concerns as "bike shedding" that they're somehow no longer allowed to bring up.

To be very clear -- I didn't say these were examples of Ted's behavior, although I do think they are examples of bad behavior that have been directed at the Rust for Linux project. I thought I was clear in all my comments about what I believe the problem was re: Ted's behavior. I said Ted not taking his concerns directly to project leadership (Linus), and instead directing his ire at Wedson, was passive aggressive.

> It's not okay to guess at his motivations, dismiss his views outright, and things like that. One is (hopefully constructive) criticism.

Again -- I'm befuddled -- because I was responding to your own statement of the situation, quoted herein. If you read things differently now, 3 hours later, or you regret making that comment, because it was the first to guess at Ted's motivations, I'm fine to leave it here, but please refer to your own comment, before you start preaching to anyone else about good manners.


> When you do voice your discontent, please direct it to the right/responsible person (Linus)

He is not "the right person". The disagreement was on how to best integrate Rust in the filesystem code. That's up to the filesystem people. The great success from Linux comes from Linus not doing that kind of micromanagement. This is not how kernel dev works or has ever worked.

If "assume good faith and don't dismiss arguments as bad faith" is guessing at people's motivations then I guess I am shrug.


> He is not "the right person". The disagreement was on how to best integrate Rust in the filesystem code.

Linus is absolutely the right person because the subject for discussion was the inclusion of Rust for Linux in the kernel.

Despite this being very clear, you're now trying to conflate the "Filesystems in Rust" discussion with what you were actually referring to: the broader Rust for Linux question.

I mean -- you're pretty slippery, but I can certainly remind you again of what you wrote:

>>>>> Just because it was discussed way back when doesn't mean everyone has to accept the resolution from that. People obviously think it wasn't addressed in a good way.

Now, I'm sure you remember what was being discussed "way back when"? Yes, it was the argument re: inclusion of Rust for Linux in the kernel, not the "Filesystems in Rust" discussion.

> If "assume good faith and don't dismiss arguments as bad faith" is guessing at people's motivations then I guess I am shrug.

I completely agree this is ordinarily the right way to act, but there are limits to what we can accept in good faith. When someone acts as passive-aggressively as Ted Ts'o has, or as slippery and dishonest as you have been here, even if it's embarrassing to me to have to point such things out, I think it's right to say "Wow, maybe someone needs to have a talk with Ted or arp242". Despite your (bad) behavior, I still want someone to be honest and direct with you.


I don't know why you're so aggressive or insulting over a simple "let's assume good faith". I do know these sort of personal attacks you keep doing are explicitly disallowed on HN. So please just stop.

Rust on Linux will happen. One way or another. At the worst, one funeral at a time. I wouldn’t sweat it.

I remember a HN comment that I really liked and can’t find now that went something like:

You know what stopped me from programming? It wasn’t that I didn’t have a good computer. It wasn’t that I started later than all the other kids. It wasn’t that I had to work. Nothing. Nothing stopped me from programming.

There’ll always be folks like that. And I hope I’ll be like that for some thing too!


It's a shit take, but I like it. Upvote this guy.


> Linux kicked off in the 90s because young people at that time wanted a real OS

It's easy to underestimate how inferior MS-DOS was, unless you lived through it. Windows (3.x and later 9x) was slightly better, but still had severe issues. Modern Windows is, from what I've heard, a lot better, so the incentive to migrate is not as strong.

> Today, almost all kids use Cell Phones, which are all locked down.

Not just cell phones; with Secure Boot and BitLocker, computers are more locked down too. In particular, BitLocker means that it's harder to share disk space between Windows and an alternative operating system; back in the MS-DOS (and Windows 9x) days, it was not unusual to install Linux in a directory (or a disk image file) on the same partition used by the other operating system.


> Not just cell phones; with Secure Boot and BitLocker, computers are more locked down too.

You can just disable secure boot, no? I just got a brand new laptop today, and I could just disable it. I know you can get Linux to work with it, but I can't be arsed to mess about with it and the practical security benefits for me are basically zero.

Also, don't underestimate the friction that existed in the 90s or early 00s. Nothing worked on Linux or BSD. My government sent me .doc files. Tons of stuff was Windows-only desktop software.[1] Mucking about with xfree86 -configure and /etc/X11/.... was far from easy or straight-forward. Internet access was far more rare and "just Google it" wasn't really a thing, etc. etc. etc.

If I look at the amount of effort I spent to "just get Linux running" back in 2000 or so when I first tried it vs. today, then I'm fairly sure the overall experience is a lot better today. Sure, you need to deal with some stupid nonsense, but that was always the case.

[1]: For all the hate the "modern web" gets from some people here: it does replace tons of crappy desktop stuff with an abstracted isolated VM which, among other benefits, makes running "alternative" systems like Linux or BSD far more viable.


> You can just disable secure boot, no? I just got a brand new laptop today, and I could just disable it. I know you can get Linux to work with it, but I can't be arsed to mess about with it and the practical security benefits for me are basically zero.

An imagined conversation:

“Mom, I want to disable secure boot so I can try a free operating system.”

“Oh no you don’t. Security is important.”

This won’t happen to anyone who actually knows what they’re talking about, but it’s having a real impact on the pipeline for getting there. The conversation isn’t as imagined as all that.


Systems like Ubuntu, Suse, etc. should "just work" with secure boot. This has been the case for over 10 years (Ubuntu starting with 12.04, in a quick check). I just run a custom hacked-up version of Void Linux because I'm cool like that, but no one new to Linux is doing that – they start out with Ubuntu or such, just as I started out with Mandrake back in the day.

Also I actually got my laptop with Ubuntu pre-installed, a concept that barely existed in 2000.


Secure boot works great with Linux.

Some distros use the MS signed grub shim, but you can generally just go into the BIOS and whitelist whatever bootloader you installed (and optionally disable the factory-loaded MS keys, which I generally do, since I don’t dual boot).
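
If you'd rather handle it from the OS than from the firmware menu, sbctl does roughly the same thing. A minimal sketch from memory (the firmware has to be in setup mode first, and the signing path depends on your bootloader, so treat this as illustrative rather than gospel):

    sbctl create-keys                  # generate your own platform keys
    sbctl enroll-keys --microsoft      # enroll them; drop --microsoft to also remove the factory MS keys
    sbctl sign -s /boot/efi/EFI/BOOT/BOOTX64.EFI   # sign your bootloader (path varies)
    sbctl verify                       # check that everything the firmware will load is signed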


>In particular, BitLocker means that it's harder to share disk space between Windows and an alternative operating system

BitLocker is completely optional, even on Windows 11 (although in some cases encryption does kick in automatically - but you can revert it).

And if you want to both use BitLocker and share files with another OS, you can easily do it with a separate unencrypted drive or partition.


Debian losing contributions is a fairly big deal because Debian and its derivatives represent the most popular Linux distributions.

You'd think Canonical would contribute more...

I think there's also a larger conversation of what folks really need from a Linux distribution in the year 2020+. Desktop Linux is fairly needy but AppImage and flatpak have largely fixed that.


Do we want canonical that involved in our beloved Debian? Look at what they did to glorious Ubuntu, if they think Snap is a good addition to any OS then should we listen to their input?


Canonical was with Ubuntu from the beginning, so if it was ever glorious, that was partially their doing.


What percentage of Debian Developers are Canonical employees? 40%? 60%?


> that involved in our beloved Debian?

They've been there for 20 years already.


> You'd think Canonical would contribute more

Context: I have +10y exp with Python, Linux (some packaging, kernel hacking, sys admin / SRE), C, several cloud providers (AWS, Azure, Hetzner, DO, OVH), etc...

Applied to a Canonical position (Senior Python Eng + required Linux exp + nice to have cloud knowledge), got rejected automatically because I don't have a university degree.

Tweeted about it, asking for some feedback. Tons of replies, same opinion: Canonical are plain elitists.

Don't expect them to contribute.


I'd rather work fast food than Canonical, you dodged a bullet.


Isn't Canonical the one that hires based on high school performance?

> You'd think Canonical would contribute more...

How much is enough? Did you know that significant contributions to Debian are made by Canonical employees who are Debian Developers being paid to do so?

As a recent example, Debian's recent 64 bit time_t transition was driven by Canonical employees. Canonical employees continue to maintain various packages in Debian, but it's not obvious because they use their Debian "hats".


The beauty of Linux is that we can have a new popular Linux distro. It doesn't have to be Debian based. It can be Opensuse, Fedora or Arch based.


From my point of view it doesn't seem like people really appreciate the Debian-style integrated, self-contained OS model; what people want (these days) is more just a platform to run 3rd-party applications. It is easy to see why some people see Debian as more just getting in the way rather than being an asset.

Part of the problem might be that Debian tries to be something for everyone, and has cast a very wide net in terms of packages. In particular, not all packages are all that integrated into the whole, or follow Debian guidelines very closely. I don't know if it would help if Debian tightened ship, raised the bar for packages, and culled some of the less maintained ones more aggressively. Although on the flip side, one of the big advantages of Debian is its vast package archive, so it's a balancing act.

Regarding the language aspect, which is the focus of much of the article: while I think local communities working in their native languages are definitely a good thing, ultimately I believe that any potential Debian contributor needs to be somewhat fluent in English. Debian is a communal project after all, and I imagine all the official discussions happen in English; it's difficult to see someone being an effective contributor if they cannot communicate comfortably with the community.


I'm not qualified to comment on the larger Debian ecosystem, but having been forced (due to work) to create some Debian packages... the tooling is awful. Packages are two(?) layers of tarballs, there's the option of free-form shell scripts in there to do whatever as a pre/post-install step, etc. etc. There's even several ways to try to hide some of that awfulness under the carpet, and yet it's still there.

If they want to attract people who care in the least about developer experience they'll need to consolidate and fix their package system and tooling.
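
To make the complaint concrete, here is a rough sketch of the moving parts for a hypothetical package "foo" (from memory, and glossing over the several partially-overlapping tool stacks, so details may be off):

    foo_1.0.orig.tar.gz        # pristine upstream tarball (layer one)
    foo_1.0-1.debian.tar.xz    # the debian/ directory shipped as its own tarball (layer two)
    foo_1.0-1.dsc              # source package description tying the two together
    debian/
      control                  # package metadata and dependencies
      changelog                # versioning lives here, in a strict format
      rules                    # a Makefile driving the build (usually just calling dh)
      copyright                # per-file licensing information
      postinst, prerm, ...     # the free-form maintainer scripts mentioned above

    # build with something like (sbuild/pbuilder wrap this in a clean chroot):
    dpkg-buildpackage -us -uc

The helper layers (debhelper/dh, dh_make, git-buildpackage, sbuild, pbuilder, ...) each hide some of this, which is part of why it feels like several systems stacked on top of each other.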


Search HN for "Debian packaging", it's a very old issue that's been discussed at length.

Apparently the solution is to write yet another automation utility:

https://people.debian.org/~nthykier/blog/2023/a-new-debian-p...

Personally I have given up on Debian packaging. Work applications get shoved into /opt and configured via Ansible, and desktop systems run other distributions with simpler packaging formats (with internal or home stuff wrapped into proper packages).

I've thought about alternatives for the servers, but Alpine et al have their own issues. Nothing has been decided yet.


Debian doesn't do a good job in making life easier for its maintainers. https://michael.stapelberg.ch/posts/2019-03-10-debian-windin...


I can't find it now, but I read something just a few days ago saying that some prominent Debian members very much want to see the development/maintainership experience of the OS modernized and improved. They are suggesting (but not demanding) that packages be maintained by teams rather than individuals, manage source packages on Debian's own Gitlab instance, accept issues and PRs via Gitlab, better CI and testing infrastructure for packages, and so on.

I respect the fact that current Debian development practices have carried the OS this far, but lowering the barrier to entry for volunteers to help maintain the OS going forward could only be a good thing.


Lowering the bar to entry in software can absolutely be a bad thing in a wide range of ways.

The most obvious in the current context is possibly well intentioned amateurs proposing LLM generated patches for the usual developers to fritter away their time on until patience runs out.


I suppose that _could_ happen, but I don't see it as very likely. Maintainers are absolutely free to decline low-quality patches or ignore contributors who expect an unreasonable amount of hand-holding. This is completely independent of whether the code was generated by an LLM or a human.

I have seen maintainers who, after a few rounds of code review on a PR that's not making any progress, close the PR and say, "I appreciate your efforts but this is taking up too much time. Please address the remaining issues independently and submit a new PR once you believe you have addressed them all."

The other possibility is that the LLM-generated patch is actually fine. If it looks okay and passes the automated tests, then I can't see why it shouldn't be merged. (Assuming Debian doesn't or hasn't enacted a blanket-ban on LLM-generated code.)



Yes! That's the thing.

I made my own custom packages for Alpine Linux and Arch Linux for personal use. It is so easy that honestly I'm not seeing the point in doing things the way Debian does it. It just makes things unnecessarily difficult for no reason.

What does a maintainer realistically do?

He builds the software, so you expect that the package builder automatically installs the development dependencies and runs the build inside a chroot.

In case the software requires modifications, there needs to be a way to deliver patch files or additional files that aren't part of the original software or at least a standard location to store the distro specific fork that everyone agrees on.

Once the software is built, the maintainer creates a directory structure that conforms to the distribution's conventions, adds things like systemd unit files or desktop shortcuts, and makes other distro-specific changes that exist outside the software.

I would encourage everyone to at least give Alpine or Arch Linux packaging a try. It really is quite easy. Almost as easy as writing a docker file.
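
For comparison, here is a minimal PKGBUILD sketch for something like GNU hello (version, checksum, and license tag are illustrative; a real package would pin them properly):

    pkgname=hello
    pkgver=2.12
    pkgrel=1
    pkgdesc="GNU hello, the classic example program"
    arch=('x86_64')
    url="https://www.gnu.org/software/hello/"
    license=('GPL3')
    source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')    # real packages pin a checksum here

    build() {
      cd "$pkgname-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      # install into the staging dir; makepkg turns that into the package
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }

Build and install it with makepkg -si, and that's essentially the whole format.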


> What does a maintainer realistically do?

Debian maintainers have to coax upstream packages to behave like other Debian packages and use supported linked dependencies. This covers everything from log and config file locations and startup scripts, to patching dependencies to match the versions supported by the release, backporting security updates, and documenting.

I trust Debian (and in turn Debian maintainers) more than I trust the upstream developer of lib-random-4-dev. I only ever had to build a custom package from source a handful of times.


Simpler packaging formats like pkgbuild and apkbuild do all of that too. They're easier to use because they do away with "convenient" automation that makes simple things trivial, and slightly more complex things very difficult. You just use the shell with all the standard commands.


> Simpler packaging formats like pkgbuild and apkbuild do all of that too.

I don't doubt it: isn't the breadth of Linux distro flavors and philosophies an amazing thing?


I agree and add Void Linux to the list in the same vein.

Generally, ports-like package repos (all package build recipes in one repo) are really beneficial for large-scale operations or simply rollbacks. Being able to bootstrap the distro from a git repo and a well-defined set of bootstrap tools is really nice. Try that with Debian.

See Void Linux and Alpine for a simple yet refreshing approach to distro package maintenance, with low overhead.


> Almost as easy as writing a docker file.

It's easier because you get a sane shell instead of a poor reimplementation of a tiny subset of one.

I maintain many aur packages, and wrote the first one in half an hour, knowing nothing about Arch packaging at the time. Now it takes me maybe 5-10 minutes to package a small application that adheres to the standard conventions.

I spent maybe two days on my first Debian package. It was also the last one (not really, but almost so).


I'm very conflicted, because Debian's strength is the intensity of package maintainership, but its weakness is the intensity of package maintainership. Debian has a strong focus on being an in-group culture, believes in maintainers as strong operators over their fiefs, with freedom to operate in manners they like, usually with little support or help.

There have been suggestions - ideas that maybe Debian should be a little more open to outside contributors, should have CI/CD tools & accept PRs more publicly. https://salsa.debian.org/dep-team/deps/-/merge_requests/8

I do think a lot of help would show up, a lot of drive-by contributions, and some people actively converting to maintainership who wouldn't have otherwise. But that comes at the risk of a lot of people being able to help without acculturating into Debian, contributing more without ever becoming full maintainers.

My only real Debian developership attempt was a go at packaging wayvnc and its deps a long time ago, well before it was available. It seemed fairly straightforward, but I felt very unwelcome when I tried to ask about finding a maintainer or getting mentorship; it was a long time ago so I don't fully remember, but it was a pretty discouraging experience, having re-learned so much about Debian & having done some of the deeds, only to feel turned away for unclear reasons.


The problem with these large open source projects is that they are already in maintenance mode and the fun part was done a long time ago. Who wants to work for months doing menial tasks so that when they "graduate" they can start doing maintenance?


Truly. God bless the maintainers and we couldn't do it without them, but I cannot imagine myself sitting down on the weekend to do maintenance on a 20yo disk partition utility or something.


One reason I see hardly mentioned is change of tech stacks.

Most modern developers do web development with a highly abstracted stack (including yours truly).

So hardly anyone is familiar with C / Perl / OS-level APIs. The closest they get is ad-hoc shell scripting in the form of devops tools.

Since people are so much abstracted from the stack, they don't feel much need to contribute to it or improve it to scratch their own itch.

That's why new stuff like CNCF gets many contributions, old ones like Debian don't.


Genuine question(s): Who pays maintainers? Are they even paid? If they aren't paid, why do they do it?


The first generation of open source devs did it for fun and ideological reasons. Back then, before any of the big tech companies even existed (except Microsoft, who was widely hated) software was not a highly paid profession, more in line with being a CPA, so people looking to maximize their earning potential weren't in the field at all. Those people did finance back then.


I nearly down-voted this question because every time funding and open source come up together, there will be a lot of arguments about it.

But to answer your question: Debian maintainers are not paid to maintain Debian. People volunteer to work on the Debian project because it's something they use, love, and believe in. Working on Debian is where they get their passion and joy, money comes from somewhere else.

A few maintainers _might_ get paid by their employer to work on Debian packages that the employer has a vested interest in. As an example, there were some Canonical employees who were also Debian maintainers, but I don't know if this is still true today. I think there are a few companies that tend to hire established Debian developers, like Freexian.


Do you feel like Debian has it different (worse) than other distributions? Does Debian not have the same number of maintainers paid by corporations, etc.? Like in comparison to Ubuntu or Redhat?


This is the open source paradox. Fun, fame, power/control, employment... If you think about it, with how useful/critical some open source projects and distributions are, it seems bizarre to have what amounts to a "group of monks of yore" maintaining it vs government funded.


At least monks have free time and presumably sponsorship or at least survival necessities


I had the same thought. For all the talk of lowering barriers... should we really be surprised that unpaid labor is hard to find?

Maybe if I were independently wealthy I would spend my time contributing to Debian.


>should we really be surprised that unpaid labor is hard to find?

Music is mostly unpaid labor and there's no shortage of musicians.


making music is a lot more fun than making software and way more immediately, personally, and emotionally fulfilling

I say this as an accomplished programmer and an absolutely shit musician. the parts of programming that are fun and the parts that make money, also, have almost no overlap whatsoever

lucrative business problems that are also fun or interesting computer science problems are so rare that whole companies are built around a single interesting CS problem surrounded by a fleet of mundane business problems


>I say this as an accomplished programmer and an absolutely shit musician. the parts of programming that are fun and the parts that make money, also, have almost no overlap whatsoever

This hasn't been my experience. While it's certainly true that much of my time as a corporate cog has been wasted in non-fun things like department meetings and stand-ups and Jira and MS Office documents and reviewing or debugging bad code from other devs, there's also been some time working on genuinely interesting programming problems.

But you're right about music: interesting programming assignments don't come along every day, whereas I can pick up my guitar at any time and play something (however badly) and hear something nice right away.


As others have said, Debian developers aren't paid, by Debian at least.

Many of us use Debian in our day jobs. I used Windows at first because there was very little else, ran away to RedHat when it became clear it's very hard to develop something reliable on a base that is a black box, unreliable, and with abysmal support, then ran away from RedHat to Debian at about the time of the Fedora / RedHat Enterprise split because they pushed a minor update with so many incompatibilities it broke my systems.

In Debian I found a whole pile of like-minded sysadmins/system builders working collaboratively on a distro they can use in their day jobs. What do you need as a sysadmin - a rock solid base. Debian moves slowly, is tested for over a year prior to release, backports security patches to stable instead of moving to a new version, has hundreds if not thousands of rules and tools to enforce quality standards, and discussions with its users (who are also the developers) that span weeks if not months over technical changes to ensure they don't break things. The flip side of this coin is you will see people complaining about how old Debian is, or how hard Debian packaging is. All true. And it's that way because it's the only way we've found to create the distro these sysadmin/system builders can build on and trust. It's not a coincidence Debian led the world into reproducible builds.

In answer to your question "why do they do it", the answer is because we haven't found a better way. Ubuntu's constant introduction of home-grown features like Mir, their desktop, and the snap store is designed to benefit them, not their users. Proprietary systems are out of the question now in many places - hidden code with hidden bugs controlled by hidden entities and state actors isn't something we or many of our employers can tolerate. What other way is there? It seems to me that open source (which is what Debian is - an open source distro) is the only way to build these types of systems.


Well, this problem needs to be solved. Even if you get a tiny amount, that would inspire a lot of people. Having something in return helps a lot vs invisible hands patting you on the shoulder. Most people are ungrateful and take this FOSS ecosystem for granted.

My idea is to set aside 5 eur/usd from every EU/US citizen's monthly paycheck and allocate that money to the sites the user frequents and the FOSS tools they use.


Most developers are not paid.


I think having high-quality documentation in one language would be a huge win. That is, lots of effort can be spent improving, expanding, and editing existing documentation. English is a prerequisite for the wider Debian community, and it is the most widely spoken language in the world, according to Wikipedia¹:

English: 1.5 billion; Mandarin: 1.1 billion; Hindi: 0.7 billion; Spanish: 0.6 billion

I wonder how real that number is for English. In a lot of nations, people are taught English in school officially, and it is true but with very poor results. (I was taught French in school and that had catastrophically bad results)

When it comes to Mandarin Chinese, I wonder if the wider community has better mastery of it compared to English or if its the same or if it is much worse.

When it comes to the tools used for communication, having a strong standard is again beneficial. What is it, every 10 years? Every 5 years? There is a new and hip platform that "everyone" uses.

If you can't expect younger people joining to adapt somewhat to the wider community, then you will have a lot of difficulty handling bridges, which will cause all manner of problems.

Or should "adult" community members have to adopt whatever "everyone" is using every 5 years?

IRC has all sorts of different clients, in all sorts of different platforms.

Becoming a developer or maintainer for Debian will involve learning to use a lot of different tools that are needed to perform the tasks involved. Surely most of those tools don't change around every year?

¹ https://en.wikipedia.org/wiki/List_of_languages_by_total_num...


Reading stuff like this makes me think I should look into contributing. I don't have much programming experience, just some college courses and a very good understanding of the basics, but I'm certainly willing to learn if it would help a package I enjoy


Absolutely. That sounds like all the qualifications you need.

There are general channels where you can ask questions (like Debian Mentors on IRC and the mailing list), but feel free to reach out if you need a bit of extra help getting started. My time is limited and I am a relative beginner myself, but I'm happy to give a few pointers.


open source ain’t what it used to be


It's only going to get worse. There's a storm coming. The ageing open source engineers who hold up the whole house of cards cannot keep going indefinitely. It's going to turn into a tragedy of the commons very quickly.


This is one of many reasons humanity really needs to work on anti-aging and life extension therapies. If we can make these old-timer maintainers biologically immortal, we won't have to worry about this stuff collapsing one day.


"we just need to invent living forever in the next 10 years or we're cooked" doesn't sound like a plan to me

maybe a nice plan B


So instead of doing a tedious, thankless, unpaid job for 10, 20, 30 years, I can do it for 100, 200 or 300. Sounds awesome.

I'm not advocating that you do this tedious, thankless, unpaid job for 300 years: apparently, there's already people who actually want to, for whatever unfathomable reason, so I'm advocating that those people be given the means to do so indefinitely.

I know what you are 'advocating': science fiction solutions to real problems.

What's wrong with that? It's not like anyone has any more feasible solutions to these problems.

If you actually read any of this thread you'd see many feasible suggestions for how to fix the problem (better communications methods, improved tooling, adapting to how younger developers engage, etc., etc.). And if you really paid attention, you would notice absolutely none of them start with "first, invent science that doesn't exist...".

> IRC is the default chat mechanism for Debian, but younger people do not use IRC; in Brazil, they use Telegram instead.

This is I think one of the major reasons why a lot of older open source projects are absolutely starved for developers in their 20s and 30s. A lot of the mail based workflows or IRC based communication is just utterly foreign to any young person and also objectively atrocious in many ways. Having to rely on a chat service that doesn't let you see offline messages without setting up a proxy in 2024 is just not comprehensible to anyone used to modern services, for good reason.

It's very obvious that open source projects that embrace github, discord, social media are able to do much better and attract contributors much more effectively. Purism is nice and all but if most of your core maintainers are in their 50s and 60s which is a reality now for a lot of long lived projects with no replacement coming up you're looking at the whole thing dying in a generation.


> in Brazil, they use Telegram instead

Telegram is a proprietary protocol, run on servers controlled by a private entity. That is complete anathema to Debian. If IRC is replaced, it will be by something open source, open access, probably running on hardware controlled by Debian, and done in a completely transparent way.

That takes a lot of effort to get going. Nonetheless, experimental servers have been set up, and everyone invited to use them. The Debian developers voted with their feet. They kept them firmly in place, on IRC.

To be fair, IRC works. Email works. Young entrepreneurs, the jailing of CEOs by the French police, and skilful marketing will attract the young to newer platforms, and so the current flavours of the month you mentioned will become the IRCs of the next generation. When looked at in that vein, the constant churn to the new shiny platform looks like a very uninviting hamster wheel to the existing Debian Developers who just want to get shit done.

And as it happens, the youngsters seem to have no trouble adapting themselves to all sorts of new software they haven't seen before - like IRC. Compared to the effort they must go to in order to become a Debian Developer (you have to pass an exam of sorts, and there are a lot of new Debian-specific tools you have to use), using IRC is a very, very small road bump.


You've outed yourself as not knowing why IRC and email are the established media. IRC is meant to be live chat rooms -- and explicitly not persistent conversation. That is the whole point. For persistence, use email. These are the only two protocols required to cover most all needs of communication. It's the simplicity and stability that is why they are preferred, and their usage translates to higher quality development -- much higher quality than is usually ever found on GitHub or Discord.

Also, putting the keys to one's communication into proprietary platforms, like GitHub and Discord, is a horrifyingly bad idea. Maybe it helps graduates of computer science work faster, but graduates of computer science also are (usually) embarrassingly bad hacks at what they do.


> IRC is meant to be live chat rooms -- and explicitly not persistent conversation. That is the whole point. For persistence, use email.

Is there a compelling reason for those to be separate, beyond "we've always done it this way"?


Yes, because the live chat that happens on IRC is meant to be ephemeral. It's a place where you go to hang out with people, shoot the shit, or ask off-the-cuff things. Most IRC moderators have a no-logging policy because it's supposed to be a shared space that people "move into and out of".

Email is where official business happens. That's where debug logs are pasted, and people can reply on a point-by-point basis, with quotes, references, and attachments. It's meant to be high quality/detail and searchable. That's why mailing lists have archives.

When IRC and email aren't separate, you get Slack and Discord -- places that don't do either off-the-cuff chat, or searchable documentation, particularly well.


Discord in my experience is almost too good at off-the-cuff chatting.


> and their usage translates to higher quality development -- much higher quality than is usually ever found on GitHub or Discord

There are plenty of great software projects with development run using both Discord and GitHub. The advantages are that it allows one to both run development and grow a community, and more importantly it brings in massive numbers of young developers who will one day become important figures in a long-term project. Simply ignoring where young developers are today is not a good idea imo.

The tradeoffs you said are real, but the benefits I argue are real as well. And it's easy to make a discord bot to have all messages be duplicated in another application, such as irc, or record them.


[flagged]


Thanks Sam, really insightful stuff. Can you please simplify to one paragraph?


[flagged]


One person losing some privileges is not a "purge".


Having read the linked posts, there's clearly a toxic atmosphere which probably drives a lot people off. I'd much rather be a subject of an angry Linus rant than a target of political play like this.


But it has noticeable consequences when that person was the maintainer of e.g. texlive and KDE...


Projects like Debian have been losing people over drama and infighting since the moment they started. It's not really clear to me that there are "purges" going on or that it's the major reason people are running away and/or not contributing.

A single conflict with a single person over two years ago is certainly not demonstrative of that.

I'd wager good money that the ten-year long infighting, bitching, and vitriol over merging /usr/bin and /bin, the extremely long and protracted systemd fight, and things like that are far more impactful in chasing people away.


Agreed. And in particular, while there has been a lot of fighting over things like systemd and usrmerge, the vast majority of it has been tiring and sometimes unpleasant but not a CoC violation.


[flagged]


If there's one thing you can count on from large OSS projects, is that they provide a steady stream of drama.

Surely puts people off contributing more than anything.


[flagged]


"SJW" ?


"Social Justice Warrior"

Social Justice Warrior is a pejorative term and internet meme mostly used for an individual who promotes socially progressive, left-wing or liberal views

https://en.wikipedia.org/wiki/Social_justice_warrior


It's not always so much the views themselves, it's the screechy, persistent and aggressive voicing of them that ends up getting someone branded an SJW. I'm sure there's a term for the equivalent for the overly noisy on the right-hand end of politics, I'm just not aware of it.

(Posting while my above comment is greyed out from downvotes)

My comment is a direct quote from Wikipedia. If you aren't aware, downvoting my comment will not change what Wikipedia says.


Nobody stands any chance of successfully editing that page.


sigh

I guess any downvote button will be abused, no matter which way you try to say "voting means 'contributes to the conversation'"


If we went to work on Arch instead, that seems like a win for the community, right? Arch is (IIRC) known for not being as far from upstream, so whatever he generates there should be more widely applicable to the community in general.


for those (rightfully) complaining about the experience of making dot-deb packages .. yes, all true and it needs to improve, YET the Apple iPhone app developer experience has been tremendously painful and rude for a decade also.. somehow, given $INCENTIVES, the app devs for iPhone just consider it table stakes?


Apps are usually first party, which makes a big difference.

The thing that seems to be relatively rare about Debian among Linux distributions is that the packaging data is intrusive - a debian/ directory within the package itself, which must have already been downloaded. Most other distros seem to have external packaging - some sort of package specification that points to the URL of the upstream and is responsible for pulling it.

It cannot be overstated how much friction this inversion adds, whether for new packages, for updated upstream packages, or for distro-specific (hopefully temporary, but Debian has a bad record for that) patching.

(I actually tried filing this as a bug once, but it immediately got moved to the mailing list, and I promptly got un-CC'ed and stopped being able to reply, since what lunatic actually subscribes to a firehose of a mailing list? Email also considered harmful)


Money is a powerful incentive.


Except for the 99% of psych studies showing that it is a poor incentive[1].

Talking to a person doing hotel reservations today - his personal incentive was to give quality service (unfortunately the manager's incentives were financially misaligned with that!). Most job satisfaction seems to be derived from internal incentives and money is not that good in my experience. Maybe different in the USA from New Zealand.

[1] see psych reproducibility studies.


All of that is demonstration that money isn't an ideal means of incentive, but I have never worked a job without getting paid despite being asked many times to do exactly that.


No, that demonstrates that the monetary delta wasn't a good incentive. But I challenge you to find a lot of people working jobs without pay. There are some, if they have independent means, but not many. Likewise, most fiction authors expect at least the potential to make an income if enough people read their book -- etc.

There are other incentives, for sure. Evidently not enough of them attract Debian contributors.


I see little reason to use Debian on a personal computer, despite having used Debian-based distributions (Ubuntu and Debian itself) for more than 10 years. I switched to Fedora and OpenSUSE Tumbleweed and they perfectly fit my needs.


I'll concede with it being a matter of preference. For me, I used Debian for everything. I even use it to play VR games on Steam.


Same for me


Honest question. What do they do that Debian or Ubuntu don't?


For openSUSE, their Open Build Service is awesome. You can create packages in your personal repos and submit to the official repo in one click. Though zypper (the SUSE package manager) downloads packages serially and takes a lot of time during upgrades.


OBS can also build packages for many other distributions (including Debian derivatives) and automatically maintain repositories for them, and even supports using one universal RPM spec file to produce both debs and rpms (and some other formats) if you want it.

So you won't have to change distributions to take advantage of it. Really wish the service was more widely known.
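
For the curious, the day-to-day workflow with the osc command-line client looks roughly like this, using a hypothetical "hello" package in your home project (from memory, so flags may need tweaking):

    osc checkout home:yourname hello           # grab your personal package branch
    cd home:yourname/hello
    # edit the spec file, update sources, etc.
    osc build openSUSE_Tumbleweed x86_64       # local test build in a clean chroot
    osc commit                                 # push it; OBS rebuilds and republishes the repo
    osc submitrequest openSUSE:Factory hello   # the "one click" submission to the official project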


OpenSUSE Tumbleweed is a rolling-release distro, so it has up-to-date packages unlike Debian where the packages are years out-of-date. This should make stuff like flatpaks and snaps much less necessary, since these are basically work-arounds for ancient packages.

Also, KDE seems to be very well supported on OpenSUSE, whereas it's more of an afterthought in most other distros, including Ubuntu and Debian.


A major advantage of Fedora is being closer to the upstream sources, both in terms of freshness and in terms of not meddling with libs or similar. Debian patches have led to several possible exploits over the last few years.


Fedora removes elliptic curve algorithms from the source code level [1] and disables hardware acceleration for H.264 / H.265 [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=615372

[2] https://www.phoronix.com/news/Fedora-Disable-Bad-VA-API


Yes, distributing non-free, patented code that requires a license, requires a license. The same goes for Debian actually[1], including blocking requests and removing packages that were included before by mistake.

I would even dare say that this is another point for Fedora, enabling https://rpmfusion.org/ is a one-liner and feels entirely native, never a broken package.

[1] https://www.debian.org/legal/patent


RPM Fusion does not give me uncrippled crypto libraries. It’s caused by their paranoia about export restrictions, not patents.

> Debian patches have led to several possible exploits over the last few years.

Which ones? There was the OpenSSL entropy bug, of course, but that was 1. in 2006, and 2. run by upstream so feels a bit unfair.


I have to admit that I never compiled a list of this type and it seems exceedingly difficult to find useful search results. I couldn't dig up the examples I had in mind from the last 2 years, but stumbled upon others I didn't know of yet in turn, e.g. RCE via Redis, no special config required:

> This post describes how I broke the Redis sandbox, but only for Debian and Debian-derived Linux distributions. Upstream Redis is not affected. That makes it a Debian vulnerability, not a Redis one. The culprit, if you will, is dynamic linking

https://www.ubercomp.com/posts/2022-01-20_redis_on_debian_rc...


Basically they have up-to-date KDE desktop environment unlike the obsolete version in Debian. Ubuntu always lags behind too.

Moving Linux and its associated projects to GitHub would definitely help. I don't buy the arguments that GitHub isn't suitable for it, e.g. because it doesn't allow each sub-project to have its own area. Things like tags help a lot to separate issues and PRs based on which sub-project they're a part of.


Totally. Microsoft is a surely benevolent entity to host the Linux project. They've proven to be excellent stewards of open source in their copilot project.



