Alpine’s use of musl means only the truly insane would be leaping to its defense. The inability to support DNS over TCP was a problem for years. Outside of that, so many things presuppose glibc. It’s an endless source of weird.
It doesn’t make the news because it’s a hobby OS that was made important when we decided the size of the container mattered most.
Glibc is a nasty piece of software full of nonstandard GNUisms that basically implements a separate standard. I've fought for years with the fact that it has functions found in no other libc, with behaviours that change depending on macros, often conflicting with the POSIX or BSD variants.
The fact that the project was run for almost 20 years by a guy who managed it in a dictatorial style was a huge reason why so many alternative libcs existed (that, and the licensing). Few people here remember EGLIBC, I guess.
Glibc is also such a mess that it still does not compile with Clang, after _decades_, due to all the crazy GCC extensions it relies on. An attempt is cyclically started and then promptly aborted when some new crazy nonsense is found. For instance, last time I checked they not only used the complete folly that GCC nested functions are, but they also relied on GCC attributes so nasty that LLVM never bothered implementing them (like renaming functions at code generation). Using these extensions is not really necessary, and I have a strong suspicion it was more an attempt by the GNU authors to prevent distributions from ever considering anything but GCC as their main compiler.
Also, the way name resolution is implemented in glibc means you can't really statically link with it. If you've ever noticed, most "statically linked executables" are not in fact fully statically linked, but require ld.so just for libc. There are good reasons to disallow statically linking libc, but this is not one of them. Especially since the only stable API on Linux is the kernel interface, the only way to avoid worrying about future glibc breakage is to either link with musl or live with the risk.
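A minimal sketch of why (illustrative program, not glibc internals): glibc's `getaddrinfo()` goes through NSS, which `dlopen()`s resolver modules like `libnss_dns.so` at runtime, so building something like this with `gcc -static` typically still produces a link-time warning that the binary will need the matching shared glibc at runtime anyway.

```c
#include <stdio.h>
#include <netdb.h>
#include <sys/socket.h>

/* Resolve a hostname via getaddrinfo(); returns 0 on success.
 * With glibc this call dispatches through NSS plugins loaded with
 * dlopen(), which is what defeats true static linking. */
int resolve(const char *host) {
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    int rc = getaddrinfo(host, NULL, &hints, &res);
    if (rc == 0)
        freeaddrinfo(res);
    else
        fprintf(stderr, "%s\n", gai_strerror(rc));
    return rc;
}

int main(void) {
    return resolve("localhost") == 0 ? 0 : 1;
}
```

The same program linked against musl resolves names with code compiled into libc itself, which is why musl static binaries actually are static.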
My experience with musl has been that while it’ll most probably work, you’re severely at risk of a 30%+ perf degradation, and that means the juice is really not worth the squeeze.
glibc shouldn’t be statically compiled in because it’s LGPL, and static linking pulls your code under its terms (unless you also ship relinkable object files).
The zig linker is quite nice here because it lets you pick what glibc you want to be compatible with.
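Concretely (assuming a reasonably recent zig; the target syntax may differ across versions), `zig cc` takes the glibc version as a suffix on the target triple:

```
# build against glibc 2.28 symbols regardless of the host's glibc
zig cc -target x86_64-linux-gnu.2.28 -o hello hello.c
```

The resulting binary then runs on any distro shipping glibc 2.28 or newer, instead of requiring whatever version happened to be on the build machine.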
Yes, but people have been complaining for -years-, because it was broken. And rather than listen to reason, the author ignored it because of his own misunderstanding of an RFC...
Memory is not cheap in the cloud nor is bandwidth, so optimizing for image/container size is quite cost effective.
Also every headache I've had with musl libc is because glibc is insane, not because there is a problem with musl. It should be trivial to swap out your standard library, as that is the entire point of dynamic loading. Yet you cannot actually swap out the dynamic library that nearly every program on your system will need, because it's not actually a library.
People often forget that bandwidth is time. A significant fraction of redeploy time for me is docker images, and that's using alpine base images. It would be (was) far worse with something else.
You don't need a package manager in your production container; you need one for the dependencies to build your artifact (in previous stages), which then gets passed on, on its own, statically compiled, to an empty container.
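A minimal sketch of that pattern (image names and paths are illustrative, not from any particular project):

```dockerfile
# Build stage: the package manager and toolchain live here only.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 yields a statically linked binary with no libc dependency.
RUN CGO_ENABLED=0 go build -o /app .

# Production stage: an empty image holding just the artifact.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains one file and nothing to patch, scan, or apk-upgrade.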
And I need a toolchain for inspecting a sick container when trying to figure out why CPU jumped 25% for no apparent reason.
Also, with some programming languages you can't really populate the app's dependencies without a bunch of the toolchain's own dependencies. And the only way to split the difference is to memorize every file that gets created during the build/install process, to be sure you don't miss anything when airlifting them from one container to another.
All of which amounts to me becoming the package manager. Which gets much less fun the more containers you have running in prod.
I generally respect the curation that alpine does. Not too old, not too bleeding edge.
As a non-user, I think such variety is a good thing.
Not using the same thing everyone uses forces developers to care about standards rather than to assume compatibility just because it is how the favorite implementation works. Same reason why it is a good thing that a popular browser that is not Chromium-based exists.
Alpine and musl were originally created for embedded devices, hence the small size. It is interesting how Alpine has become so popular in Docker. https://musl.libc.org/about.html
Before other OSes offered slim variants optimized for size, Alpine was the best bet. At some point the difference was huge, and that meant huge savings and an increase in productivity.
There are other things to consider, especially data transfer and build times (apk is much faster than apt in my experience). Whether these and other benefits are worth the trouble of musl is up to each individual.
Yeah, and the devices with embedded Linux firmware also cost pennies apiece. Not every Linux installation is a banking mainframe, and not every ARM SoC even gets those NANDs.
There are FOSS projects that sound nice and desirable but are too impractical to use in the real world. I call them "libmagicponies".
On a related topic, I don't use Ubuntu, Debian, Alpine, Arch, Gentoo, Rocky, Alma, Fedora, or {Free,Open,Net}BSD. I use CentOS Stream 9 for most things: it's basically RHEL's kernel, and it powers hundreds of millions to a billion machines.
Instead of saying "too impractical to use in the real world", which is obviously false (many other people have been using at least some of those in the real world for decades, with excellent results), you should say that there are such projects you have never felt the need to learn about, because you happened to find one that covered your needs well and so never had any reason to explore alternatives. Which is perfectly fine.
That's a really funny comment, but it seems unnecessarily pejorative when there's an entire Linux distro built on this specific example of your magic pony.
Your position might be sufficiently extreme that it warrants reconsidering.
This is patently false. Tons of Enterprise orgs use Alpine for security concerns, ease of administration, and size. I guess all those prod microservices existence is "insane".
As a long time Alpine user (bare metal, laptop, pi, servers) I can praise the low memory footprint, simple and consistent configuration and up-to-date packages.
Alpine runs great on older hardware. Things that bugged me over the years are the lack of armv5 support (understandable, for a dying architecture) and that their native firewall awall is somewhat limited. While awall is a pleasure to use with iptables, as far as I understand it is not yet capable of nftables-exclusive features (e.g. CAKE).
The lack of obscure packages in Alpine's repositories is not really an issue in my scenario. Neither is compiling against musl, nor the mentioned DNS-over-TCP issue.
Using wlroots and sway is a breeze on Alpine and having moved from gentoo > arch > alpine over the years, I am glad Alpine exists. Second choice for me is Void Linux musl and their optional armv5 support via xbps-src.
I hope Alpine will stick around and won't be known only for slim Docker containers.
FWIW they fixed the DNS over TCP issue. [1] The only DNS issue I've run into is a lack of IPv4/IPv6 DNS lookup ordering preference like glibc's /etc/gai.conf. Useful in data-centers where 100% of lookups on specific nodes/roles are and always will be IPv4
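For anyone hunting for that knob, it looks like this (glibc only; musl ignores /etc/gai.conf entirely, and the prefix comes from RFC 6724's default policy table):

```
# /etc/gai.conf
# Raise the precedence of IPv4-mapped addresses so getaddrinfo
# sorts A records ahead of AAAA on this node.
precedence ::ffff:0:0/96  100
```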
I too use Alpine on just about everything. I'm happy with it. I do wish it would keep one old kernel around after upgrades but I worked around that.
We love Alpine because `/` is a tmpfs. Your packages and other modifications are all reinstalled from scratch each boot. This gives such a clean install compared to most other distributions that it's not even funny, it's just sad.
We're not particularly interested in NixOS or Guix but we really really want to find a glibc-based distribution that works this way, because Alpine's musl is terrible for desktop use, but the package management is just so amazingly simple and beautiful on a daily driver.
Have you tried Fedora Silverblue[1] and its cousins? `root` is an immutable image powered by something akin to `git` but for OS files. You can layer RPMs (although you should use flatpak or containers wherever possible), and /etc is mutable but also tied to the deployed image[2]. You can roll your system back to previous images, rebase it to a completely different system (e.g. from `silverblue/gnome` to `kinoite/plasma`), or build your own OS image for deployment instead.
It is a different desktop paradigm for sure (for one, you won't be using a standard package manager) and you do feel its rough edges sometimes, but it is my favorite Linux experience so far, where the OS doesn't get in my way unless I want it to: I turn my PC on, use it as usual, desktop apps are updated in the background with GNOME Software/flatpak, and if there is a new image available, as soon as I reboot/shutdown, next time it is up everything has been cleanly applied. Flatpak'd apps also won't spread files across the system, so everything is as self-contained as it can be.
This is actually how I run my computer: a fedora-silverblue based image, but my CLI is running Alpine images.
So I made a little kit so anyone can do this on any Linux, and try all the cool stuff in there without the downsides of running a less popular configuration on bare metal.
And since it's Alpine it's always a fast download vs. a heavier container.
Yes, we tried Fedora Silverblue, but it's a little too immutable as the Nvidia driver broke after a motherboard swap and we were completely unable to diagnose or debug anything because of the immutability.
GNOME Wayland seems to be best in class though. It's incredibly sad that it broke because we did enjoy it a lot. Better trackpad support than Windows for sure.
> pretending to be simple while shuffling complexity elsewhere
We don't particularly care if, say, `lbu` is a 10,000 line bash script, if it works. Actually, we've tinkered around with KISS Linux before, where a single bash script is responsible for being the entire package manager and rebuilding the entire operating system live. We were unable to get Nvidia working on it even before the motherboard swap though :)
> To make this new Linux variant, Lorenc said, "We hired a bunch of the original Alpine team. But, Alpine was never designed for containers. It was originally designed for routers, firmware, and that kind of thing. What made it attractive for containers was its size and security." Wolfi takes that minimal approach to an extreme for the sake of security.
I’ve run into way too many issues with musl to consider it for anything serious. For containers I use debian-slim and it’s great (of course, it’s Debian after all).
There are plenty of both. One issue is you can't create static binaries with musl and tcc. If I remember correctly, it's because of missing support for thread-local variables.
I'm not complaining, but I am a bit surprised to hear Drew's praises. Isn't Alpine, by using MIT-licensed musl instead of glibc, OpenRC instead of Linux-only systemd, and with its overall business-friendly focus on containers enabling distribution free of GPL restrictions, in conflict with Drew's FSF and GNU engagement?
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it a lot. We have to ban such accounts. We've had to ask you about this multiple times already.
I long for a news article/PSA explaining how name resolution differs with musl compared to glibc
This one characteristic has made me a hero, over and over, and I'm honestly tired of it.
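The short version, for anyone bitten by this: glibc walks the nameserver list in order with retries and falls back to TCP on truncated answers; musl historically raced all servers in parallel over UDP and took the first reply, and only gained TCP fallback in musl 1.2.4. With a config like the one below (addresses are illustrative), the two libcs can see different answers:

```
# /etc/resolv.conf
nameserver 10.0.0.2   # internal resolver, knows the private zone
nameserver 8.8.8.8    # public resolver
# glibc: tries 10.0.0.2 first, moving down the list only on failure.
# musl:  queries both in parallel and takes the first answer, so
#        8.8.8.8 can win the race and return NXDOMAIN for internal names.
```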
I'd think twice before using it. Consider what you hope to gain and how else it might be achieved. Registry caches are easy to deploy and don't get enough love.
Any sufficiently complex service will find oddities, and you might need some graybeard wizard to sort it out.
Red Hat (pre-RHEL) solved package signing around 1999 with RPM 2.0/3.0 by using PGP, later replaced with GPG. Debian solved it around 2003, also using GPG.
With truly reproducible builds, it's possible to introduce distributed caching of artifacts and selective probabilistic rebuilds from source to attest/verify integrity in a distributed manner.
1. There should be an easy-to-use API, CLI, library, and data (sqlite db or whatever) to query package metadata efficiently.
2. The mythological purity of rolling releases building against edge versions, without dependency constraints or maintained stable versioning, causes problems in the real world(tm). There are many cases where past versions are needed. Example: ffmpeg is buggy as hell and has to be managed very carefully. Another example: binutils, gcc, mpfr, mpc, and toolchain friends have to be built together with compatible versions. Further example: don't compile anything with Clang/LLVM 14+ unless you want all of your code to break because some genius decided to break the world out of ideological perfectionism. MacPorts, Homebrew, Nix, and Arch are just some who are guilty of this sin.
Packages are signed in exactly the same way Debian packages are signed, ie the package files themselves are not signed but the index file that lists them is.
Have had so many issues related to DNS Lookup failures with container images on Alpine that I honestly can’t be bothered to deal with it anymore. If such basic stuff isn’t reliable then why should I consider it for production usage?
I stick to using Ubuntu minimal container images. They’re 30MB (compressed) in size so it’s never a problem around container bloat.
Alpine DNS is so insanely weird. I had an immutable task that, after a year of smooth operation, would fail one particular DNS lookup in the JRE in one specific AWS AZ, and did fine in other AZs in the region, or if you seeded whatever host cache it uses with “dig” every so often. I couldn’t detect any difference in the DNS responses across AZs.
Those tasks then became Debian based within a couple days.
I've been migrating to nixos. I think it makes news in that everyone is obligated to hate it publicly for at least 6 months before resigning themselves to it.
Unfortunately "boring and basic and just works" is not a very accurate description of Alpine Linux. If you never want to use anything which doesn't support musl then it's a very pleasant and reliable distro, but not using glibc does make it relatively exotic for a linux distro and there is quite a bit of software that will have problems.
The first time I hit bugs trying to run a JRE on Alpine because of musl, I threw it away and never looked back. I understand there are musl builds of Java these days but I don't care, the whole thing is just an unnecessary headache.
Not denying there is a headache, but it's at least 50% arguable to conclude that the source of the headache is glibc rather than musl or any other libc.
The path of least resistance is merely the path of least resistance, not necessarily the best, especially in the long term.
At one point glibc was the weird incompatible limited headache, and the only reason you can enjoy it today instead of paying $1200 to SCO or someone, is because other people did not take the path of least resistance.
Has anyone here successfully run Alpine on a Raspberry Pi Zero 2 W? I'm asking because despite my numerous attempts I never got the WiFi working, and I haven't seen anyone doing so successfully yet.