
I've just doubled down on "making my own Debian packages".

There are tons of examples, you're learning a durable skill, and 90% of the time (for personal stuff) I just have to ask myself: would I really ever deploy this on something that wasn't Debian?

Boom: debian-lts + my_package-0.3.20240913

...the package itself doesn't have to be "good" or "portable", just install it, do your junk, and you don't have to worry about any complexity coming from ansible or puppet or docker.

However: docker is also super nice! FROM debian:latest ; RUN dpkg -i my_package-*.deb

...it's nearly transparent management.
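Spelled out as an actual Dockerfile (each instruction on its own line; the package filename here is just my example), that shorthand is roughly:

```dockerfile
# Minimal sketch; assumes the .deb was built beforehand and sits next to the Dockerfile.
FROM debian:latest
COPY my_package-0.3.20240913_amd64.deb /tmp/
# apt-get can install a local .deb and pull in its Depends in one step
RUN apt-get update \
    && apt-get install -y /tmp/my_package-0.3.20240913_amd64.deb \
    && rm -rf /var/lib/apt/lists/*
```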






I don't mean this as a rebuttal, but rather to add to the discussion. While I like the idea of getting rid of the Docker layer, every time I try to I run into things that remind me why I use Docker:

1. Not needing to run my own PPA server (not super hard, it's just a little more friction than using Docker hub or github or whatever)

2. Figuring out how to make a deb package is almost always harder in practice for real world code than building/pushing a Docker container image

3. I really hate reading/writing/maintaining systemd units. I know most of the time you can just copy/paste boilerplate from the Internet or look up the docs in the man pages. Not the end of the world, just another pain point that doesn't exist in Docker.

4. The Docker tooling is sooooo much better than the systemd/debian ecosystem. `docker logs <container>` is so much better than `sudo journalctl --no-pager --reverse --unit <systemd-unit>.service`. It often feels like Linux tools pick silly defaults or otherwise go out of their way to have a counterintuitive UI (I have _plenty_ of criticism for Docker's UI as well, but it's still better than systemd IMHO). This is the biggest issue for me--Docker doesn't make me spend so much time reading man pages or managing bash aliases, and for me that's worth its weight in gold.
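For point 3, the boilerplate in question is usually something like this (service name, user, and paths are made up):

```ini
# /etc/systemd/system/my-app.service -- hypothetical example
[Unit]
Description=My app
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/my-app
Restart=on-failure
User=my-app

[Install]
WantedBy=multi-user.target
```

...followed by `sudo systemctl daemon-reload && sudo systemctl enable --now my-app`.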


Yuuup! I'm super-small time, so for me it's just `scp *.deb $TARGET:.` (no PPA, although I'm considering it...)

Really, my package is currently mostly: `Depends: git, jq, curl, vim, moreutils, etc...` (ie: my per-user "typically installed software"), and I'm considering splitting out: `personal-cli`, `personal-gui` (eg: Inkscape, vlc, handbrake, etc...), and am about to have to dive in to systemd stuff for `personal-server`, which will do all the caddy, https, and probably cgi-bin support (mostly little home automation scripts / services).

I'm 100% with you w.r.t. the sudo journalctl garbage, but if you poke at cockpit https://www.redhat.com/sysadmin/intro-cockpit - it provides a nice little GUI which does a bunch of the systemd "stuff". That's kind of the nice tag-along ecosystem effect of "just be a package".

I'm definitely relatively happy with docker overall, but there's useful bits in being more closely integrated with the overall package system management (apt install ; apt upgrade ; systemctl restart ; versions, etc...), and the complexity that you learn is durable and consistent across the system.


In situations at work where we use something as an alternative to Docker as a deployment target, it's Nix. That has its own problems and we can talk about them, but in the context of that alternative I think some of your points are kinda backwards.

> 1. Not needing to run my own PPA server (not super hard, it's just a little more friction than using Docker hub or github or whatever)

Docker actually has more infrastructure requirements than alternatives. For instance, we have some CI jobs at work whose environments are provided via Nix and some whose environments are provided by Docker. The Docker-based jobs all require management of some kind of repository infrastructure (usually an ECR). The Nix-based jobs just... don't. We don't run our own cache for Nix artifacts, and Nix doesn't care: what it can find in the public caches we use, it does, and it just silently and transparently builds whatever else it needs (our custom packages) from source. They get built just once on each runner and then are reused across all jobs.
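For a flavor of what that looks like: a CI job's environment can be pinned with nothing but a file checked into the repo (the packages here are placeholders), no registry required:

```nix
# shell.nix -- minimal sketch; nix-shell fetches these from the public binary cache
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [ pkgs.git pkgs.jq pkgs.curl ];
}
```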

> 2. Figuring out how to make a deb package is almost always harder in practice for real world code than building/pushing a Docker container image

Definitely depends on the codebase, but sure, packaging usually involves adhering to some kind of discipline and conventions whereas Docker lets you splat files onto a disk image via any manual hack that strikes your fancy. But if you don't care about your OCI images being shit, you might likewise not care about your DEB packages being shit. If that's the case, you can often shit out a DEB file via something like fpm with very little effort.
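A minimal sketch of the fpm route (the package name, version, and staged script are all made up; the fpm call is guarded so the sketch is harmless if fpm isn't installed):

```shell
# Stage files exactly where they should land on the target system
mkdir -p pkgroot/usr/local/bin
printf '#!/bin/sh\necho hello\n' > pkgroot/usr/local/bin/hello
chmod +x pkgroot/usr/local/bin/hello

# Build a .deb straight from the staged directory
if command -v fpm >/dev/null 2>&1; then
  fpm -s dir -t deb -n my-package -v 0.1.0 -C pkgroot .
fi
```

Swapping `-t deb` for `-t rpm` gets you the Fedora/openSUSE flavor from the same tree.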

> 3. I really hate reading/writing/maintaining systemd units. I know most of the time you can just copy/paste boilerplate from the Internet or look up the docs in the man pages. Not the end of the world, just another pain point that doesn't exist in Docker.

> 4. The Docker tooling is sooooo much better than the systemd/debian ecosystem. `docker logs <container>` is so much better than `sudo journalctl --no-pager --reverse --unit <systemd-unit>.service`. It often feels like Linux tools pick silly defaults or otherwise go out of their way to have a counterintuitive UI (I have _plenty_ of criticism for Docker's UI as well, but it's still better than systemd IMHO). This is the biggest issue for me--Docker doesn't make me spend so much time reading man pages or managing bash aliases, and for me that's worth its weight in gold.

I don't really understand this preference; I guess we just disagree here. Systemd has been around for like a decade and a half now, and ubiquitous for most of that time. The kind of usage you're talking about is extremely well documented and pretty simple. Why would I want a separate, additional interface for managing services and logs when the systemd stuff is something I already have to know to administer the system anyway? I also frequently use systemd features that Docker just doesn't have, like automatic filesystem mounts (it can do some things fstab can't), socket activation, user services, timers, dependency relations between units, describing services that should only come up after the network is up, etc. Docker's tooling really doesn't seem better to me.
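To make the timers point concrete, this is the kind of unit pair I mean (names hypothetical; the companion `backup.service` just carries an `ExecStart=`):

```ini
# backup.timer -- runs backup.service nightly; hypothetical example
[Unit]
Description=Nightly backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```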


> Docker actually has more infrastructure requirements than alternatives.

I was mostly comparing Docker to system packages, and I was specifically thinking about how trivial it is to use Docker Hub or GitHub for image hosting. Yeah, it's "infrastructure", but it's perfectly fine to click that into existence until you get to some scale. I would rather do that than operate a debian package server. Agreed that Nix works pretty well for that case, and that it has other (significant) downsides. I'm spiritually aligned with Nix, but Docker has repeatedly proven itself more practical for me.

> Definitely depends on the codebase, but sure, packaging usually involves adhering to some kind of discipline and conventions whereas Docker lets you splat files onto a disk image via any manual hack that strikes your fancy. But if you don't care about your OCI images being shit, you might likewise not care about your DEB packages being shit. If that's the case, you can often shit out a DEB file via something like fpm with very little effort.

I'm not really talking about "splatting files via manual hack", I'm talking about building clean, minimal images with a somewhat sane build tool. And to be clear, I really don't like Docker as a build tool, it's just far less bad than building system packages.

> I don't really understand this preference; I guess we just disagree here. Systemd has been around for like a decade and a half now, and ubiquitous for most of that time.

Yeah, I don't dispute that systemd has been around and been ubiquitous. I mostly think its user interface is hot garbage. Yes, it's well documented that you can get rid of the pager with `--no-pager` and you can put the logs in a sane order with `--reverse` and that you specify the unit you want to look up with `--unit`, but it's fucking stupid that you have to look that stuff up in the man pages at all, never mind type it every time (or at least maintain aliases on every system you operate), when it could just do the right thing by default. And that's just one small example; everything about systemd is a fractal of bad design, including the unit file format, the daemon-reload step, the magical naming conventions for automatic host mounts, the confusing and largely unnecessary way dependencies are expressed, etc ad infinitum.
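The aliases I end up carrying around look something like this (the names are my own invention):

```shell
# Drop into ~/.bashrc: saner defaults for reading a unit's logs
alias jlog='sudo journalctl --no-pager --reverse --unit'
alias jtail='sudo journalctl --follow --unit'
# usage: jlog my-app.service
```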

> Why would I want a separate, additional interface for managing services and logs when the systemd stuff is something I already have to know to administer the system anyway?

I mean, first of all I'm talking about my preferences, I'm not trying to convince you that you should change, so if you know and like systemd and you don't know Docker, that's fine. And moreover, I hate that I have to choose between "an additional layer" and "a sane user interface", but having tried both I've begrudgingly found the additional layer to be the much less hostile choice.

> I also frequently use systemd features that Docker just doesn't have, like automatic filesystem mounts (it can do some things fstab can't), socket activation, user services, timers

Yeah, I agree that Docker can't do those things. I'm not even sure I want it to do those things. I'm talking pretty specifically about managing my application processes. But yeah, since you mention it, fstab is another technology that has been around for a long time, is ubiquitous, and is still wildly, unnecessarily hostile to users (it can't even do obvious things like automounting a USB device when it's plugged in).

> ... dependency relations between units, describing services that should only come up after the network is up, etc. Docker's tooling really doesn't seem better to me.

Docker supports dependency relations between services pretty well, via its Compose functionality. You specify what services you want to run, how to test their health, and how they depend on each other. You can have Docker restart them if they die so it doesn't really matter if they come up before the network (but I've also never had a problem with Docker starting anything before the network comes up)--it will just retry until the network is ready.
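A sketch of what that looks like in a Compose file (the service names and images are placeholders, not anything real):

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 10
  app:
    image: my-app:latest
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
```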

Docker's tooling is better in its design, not necessarily a more expansive featureset. It has sane defaults, so if you do `docker logs <container>` you get the logs for the container without a pager and sorted properly--you don't need to remember to invoke `sudo` or anything like that, assuming you've followed the installation instructions. Similarly, the Compose file format is much nicer to work with than editing systemd units--I'm not a huge fan of YAML, but it's much better than the INI format for the kind of complex data structures required by the domain. It also doesn't scatter configs across a bunch of different files, it doesn't require a daemon-reload step, the files aren't owned by root by default, they're not buried in an /etc/systemd/system/foo/bar/baz tree by default, etc.

Like I said, I don't think Docker is perfect, and I have plenty of criticism for it, but it's far more productive than dealing with systemd in my experience.


This is the way. And truthfully if you can learn to package for Debian, you already know how to package for Ubuntu and you can easily figure out how to package for openSUSE or Fedora or Arch.

Even `alien` or `fpm` (which I think of as the ~suckless~ package manager) covers 90% of things.


