> I especially like to follow this approach for Rust applications because the sizes of traditional (glibc) distro containers like Debian/Ubuntu can go up to 200-300MB due to bloat whereas Alpine (musl) containers can stay so minimal such as only 3 MB!
stuff like this makes me think the author hasn't really understood how containers work.
200 megabytes, so what? you're only going to download those the first time for the base layer.
all other times you'll likely be reusing the base layer and only download your own layer (depending on a few things, like the tag for the image you're using).
oh and by the way, musl libc can fail in mysterious ways. a while ago there was a post by someone who had almost gone mad because gethostbyname/getaddrinfo in musl libc would fail at resolving a (valid and existing) dns record in some circumstances, returning NXDOMAIN without much fuss (no warning or anything).
> 200 megabytes, so what? you're only going to download those the first time for the base layer.
The first time, and every time the base layer is updated, or any of the other layers under your application is updated. Sometimes you might not care, but I think that saving users a 200-300MB download on first use (and per base layer update) is still a great goal.
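And for what it's worth, a multi-stage build sidesteps most of the size debate, since the final image carries only the compiled binary. A rough sketch (the image tags and the `myapp` binary name are placeholders, not from the article):

```dockerfile
# Build stage: the official Rust image's Alpine variant targets musl,
# so the release binary is statically linked against musl libc.
FROM rust:1-alpine AS build
WORKDIR /src
COPY . .
RUN cargo build --release

# Runtime stage: only the binary is copied in; the ~200-300MB of
# build-time tooling never reaches the shipped image.
FROM alpine:3.19
COPY --from=build /src/target/release/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The same pattern works with a Debian-based build stage and a slim runtime stage if you want glibc.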
Is there really a reason to use Alpine anymore? There's RedHat's UBI-minimal/micro and Ubuntu has a minimal image. Especially with Alpine's DNS weirdness in K8s environments. I feel the whole "Debian/Ubuntu bloatedness" (from the blog) is so last decade nowadays.
We are actively working on migrating away from Ubuntu minimized images towards Alpine, because Ubuntu makes significant changes every major release that break basic expectations on the system around packaging, networking, service management and other aspects that affect lower-level system software.
If your app is run inside of an app server proxied through a web server, you probably don't care that much about the underlying OS changes. If your app is system-level, underlying OS changes are a significant maintenance cost that greatly increases friction and reduces feature velocity.
Alpine's environmental stability and tiny security footprint make it ideal for system software that is being containerized.
"sudo has been moved to community repository, which means that only latest stable release branch will get security updates in the future. Suggested replacement is doas and doas-sudo-shim." https://gitlab.alpinelinux.org/alpine/tsc/-/issues/1
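For anyone hitting this, the switch is small. A sketch of the suggested replacement (the wheel-group policy line is my assumption, not from the linked issue):

```shell
# Install doas plus the shim that keeps `sudo`-invoking scripts working
apk add doas doas-sudo-shim

# Minimal policy: let members of the wheel group run commands as root,
# caching credentials like sudo does (Alpine's default config location)
echo 'permit persist :wheel' > /etc/doas.d/doas.conf
```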
Yes. I would hope /any/ OS has changes over time to continually improve; as long as those changes are properly documented and have a valid deprecation pathway, it's not an issue for software under active maintenance.
The problem with Ubuntu is that they have made, and are accelerating, changes that provide no relevant benefits for most use cases and without good documentation or deprecation pathways, and with core OS functionality. They are making a lot of user-hostile changes that are intended to push users towards using Canonical created tooling and systems away from standardized techniques that work across distributions (e.g. the move from /etc/network/interfaces to netplan, the current push towards snaps away from apt packages).
By contrast, the changes in something like Alpine are much easier to deal with as part of our maintenance. Many of the changes are more negatively impactful for Alpine use-cases outside of containerized applications, so they don't impact us.
Would somewhat agree/disagree. Debian installer is kind of a monster, but Ubuntu's changes to the installer broke compatibility with tried-and-tested PXE install software (i.e. Cobbler).
Ubuntu's packaging of Firefox as a Snap (outside of apt) though is very much a surprise at first, and makes it easy to mistake your system/browser as being up-to-date... And that Firefox Snap container is not sandboxed like OpenBSD, and has full filesystem access.
Given the privilege escalations in sudo and the bajillion untestable features built-in, arguably the Linux community should've moved on from it many years ago, following OpenBSD's lead...
really feel you will benefit more from understanding containers (i.e. processes) correctly instead of rebuilding and evaluating different bundles of pseudo OS to wrap your programs
I am very certain that my understanding of containers is correct. I have been working deeply with containerization techniques on Linux since well before the creation of Docker, including on the (then) largest containerized production systems in the world. I've also spoken extensively at major technical conferences on the topic. I've sat on technical committees evaluating containerization techniques, both privately (inside a company) and publicly (as part of open projects).
I'm quite assured of my knowledge on the topic, but am always open to new information. If you have something specific you think you know that is related to my exact application where I am mistaken, please inform me, and I'll give it all due consideration. I suspect you don't, since you also don't know my exact application for doing this, so it seems a bit presumptuous to have the response that you did.
I still believe there are some (I agree, not many) reasons to still use Alpine Linux. The obvious one is the small size, which I think is still smaller than even minimal Ubuntu. But more importantly the security: it focuses on hardening with features such as stack-smashing protection, address space layout randomization, and a hardened kernel. And isn't performance still decent for being lightweight and fast? And of course there's the apk package manager, even if its repository is still smaller than other distributions'.
Alpine is small, stable, and built by professionals with a purpose in mind. I would not consider running anything else in production, and I run it on all of my devices -- servers, workstations, laptops, phone, SoCs, even my TV runs Alpine. It's an excellent distro and one of the few that I feel can fit entirely in my head.
Both RedHat and Ubuntu made some user-hostile decisions in the past; that's why I ended up using Alpine for my server and laptop. So far I haven't found enough reason to migrate back.
Plus, a smaller resource footprint reduces resource pressure and improves application speed. Alpine is still leading in that.
However, it is sometimes a good idea to benchmark the speed of different images, as sometimes a significant speed loss is possible.
for example: alpine and bitnami images optimized for size.
If reliability and support are important, then the official Debian-based images are the way to go. ( --> postgres:15-bullseye )
To Canonical's credit, minimal Ubuntu is pretty small; unfortunately it bloats quickly when you start installing packages for dependencies. Minimal Ubuntu clocks in around 30MB, comparable Alpine images are around 5MB, and both are significantly smaller than a typical distro installation. Packaging our software on Ubuntu takes 400MB and around 45MB on Alpine, so the base image for minimal Ubuntu is 6x larger than Alpine and the packaged image is nearly 10x larger than Alpine.
The latest UBI9-based version of ubi-micro is 24.3MB on disk. Still a bit bigger than Alpine's 7.05MB on disk. In my case, it's not enough to notice and anything under 100MB is perfectly fine.
I wish Alpine didn't exist because it's been quite a time sink for my team in terms of things that are randomly different from mainstream Linux, and things that end up not being supported at all. We often have to fork projects just to add a Debian image so it's usable in our deployment context.
I switched to debian-slim a while ago and never looked back. The base may be bigger than Alpine, but not significantly after packages and frameworks are loaded.
node:lts-slim is 75MB
node:lts-alpine is 50MB
Yes, 50% larger, but also has glibc for greater compatibility and can even be faster at runtime. I know Python containers tend to be faster on Debian than Alpine. So is 25MB really worth it? Especially when that 25MB is shared between multiple containers?
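If you want to check the trade-off yourself, the comparison is one pull away (a sketch; sizes will drift as the `lts` tags move):

```shell
# Pull both variants of the official Node image
docker pull node:lts-slim
docker pull node:lts-alpine

# Compare on-disk sizes for the node repository
docker images --format '{{.Repository}}:{{.Tag}}  {{.Size}}' node
```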
> These tags are an experiment in providing a slimmer base (removing some extra files that are normally not necessary within containers, such as man pages and documentation), and are definitely subject to change.
As cosmopolitan advances, there will be even more: think about what you could do with a set of binaries that would run anywhere.
That'll happen way sooner with a musl-based distribution than with any other libc, and there aren't many musl distributions as popular as Alpine (except Void): https://wiki.musl-libc.org/projects-using-musl.html
What could you do? Normally we deploy to a known environment with a chosen OS. Unless I'm a distributor of closed source software, why should I be interested in cosmopolitan?
```
git clone <aports repo>.git  # Only needed once obviously
cd aports
[cd <package subdir>]
[TARGET_ARCH='x86_64'] package_builder_alpine.sh [/bin/sh]
```
And that's it. The bringup-teardown of the container takes a little time, so you can launch a shell instead, and call `package_builder_alpine.sh` within the shell.
The script will generate (temporary) keys if it cannot find them in the mounted volumes.
But even this can take some time, and so after running it once, to generate the keys etc, one can then just use the standard abuild tools, to make debugging faster.
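For reference, the standard abuild flow inside that shell is roughly this (a sketch; it assumes an APKBUILD in the current directory and that your keys are already set up or generated here):

```shell
# One-time: generate a signing key pair and install it for apk
abuild-keygen -a -i

# Refresh the source checksums recorded in the APKBUILD
abuild checksum

# Build the package, building/installing missing dependencies as needed
abuild -r
```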
In a second editor, on the local host, one can use any OS as per normal, e.g. git, vi, etc.
Great name! They used to say that “naming things is the hardest problem in computer science,” but now that we’ve settled on the Llamas sketch and other woolen domesticated quadrupeds for puns… I believe we have a closed-form solution.
Very interesting - after having some zfs issues these last few days I've been living in the console to learn more things and it's an interesting experience! (though on arch not alpine)
For some specific workflows I think it's a viable daily driver.
The issue was the NVMe drive being kicked off the PCI bus, which has been blamed on the firmware. The firmware plays a role, but the problem doesn't happen on other filesystems after simple tweaks, so now I believe there are 2 distinct issues at play, with one of them specific to ZFS.
I've also seen other reports of the same issue, but for a different NVMe device.
Just want to add that you can also create a full Alpine Linux filesystem with Alpine's official "apk-static" (statically compiled version of apk) and then run it as a container, with automatic network configuration, using systemd-nspawn. There is a guide somewhere online, I believe. This would be stateful like a chroot, but with a bit more sandboxing features, which may be desirable.
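The rough shape of it, for anyone curious (a sketch, assuming you already have the apk.static binary from the apk-tools-static package, and picking v3.19/x86_64 as an example):

```shell
# Create an empty root filesystem and populate it with the base system
mkdir rootfs
./apk.static \
    -X https://dl-cdn.alpinelinux.org/alpine/v3.19/main \
    -U --allow-untrusted -p rootfs --initdb add alpine-base

# Boot it as a container; --network-veth gives it its own virtual
# ethernet pair with host-side configuration handled by systemd
systemd-nspawn -D rootfs --network-veth --boot
```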