Nix solves the package manager ejection problem (zeroindexed.com)
68 points by rraval on May 31, 2021 | 49 comments



I think the issues with traditional packages all boil down to the fact that packages are not files. Packages are some transformed instance of some code, operating over a computer's resources in some way. Sure, eventually you get down to the fundamental file abstraction, but it can take a bit: all the variations of building from a single code base. What I like about Nix is that it stops representing packages as files and has its own declarative syntax. The OP observed what really is a side effect of that change, and it's great! Sometimes, though, I wish it went a bit farther than it does - there's room for a few more useful transformations between a package's code base, a package, and a file. If the nix program itself took a bit more control over the system, I think it would end up in a really cool place. The specifics of that, though, I do not know.


You might be interested in Nomia, a project recently started by a veteran Nix hacker at a startup. The idea is to generalize the store layer from Nix's model (package management) to a more universal one (resource management).

https://discourse.nixos.org/t/announcing-nomia-a-general-res...


The author doesn't even mention the ability to git bisect your entire system setup and the power it gives you to track down the exact commit that broke something.


So, how does Guix compare to Nix in 2021 for the average user (who does not want to spend all their time on package management)?


Basic commands in guix are simpler, last I compared them. "guix install", "guix upgrade", etc. These are equivalent to the longer "guix package" commands like "guix package -i" and "guix package -u". I recall nix's package install command being odd, something like "nix-env -iA".

Package names in guix are very consistent, lowercase and hyphen-separated. I recall some nix packages having capital letters in them. It could certainly be confusing.

For declarative user packages, guix has manifests. This is a first-class feature: you make a list of packages in a .scm file and apply it, which adds or removes things as needed. With Nix you need to use Home Manager, which isn't part of the main Nix project. That's a bit of a shame, as declarative package management is a big selling point. Separating system and user packages also lets you keep a small system list for fast kernel upgrades and such, while stuff like icecat lives in the user profile, where updates may take longer (especially if there's no substitute and it has to be built).

I think more care is taken to make things simple for the user with guix. What you lose out on is the number of things packaged. However, I find guix packages generally have fewer problems. Nix's mpv package performed worse than guix's even though the guix machine was two hardware generations older (a ThinkPad X220T w/ i5 doing better than a ThinkPad T440p w/ i7). I also ran into strange bugs in some things like pcmanfm. It may be that not every package gets much attention and use.

I would say I'm an average user as far as just using my system but not contributing much aside from bug reports.

Note: In all examples I was using each package manager on its home distro, not a foreign distro.


This has been my experience, too. Granted, I have used guix longer than Nix.

Another advantage of Guix is just how readable package definitions are. Nix packages often contain embedded shell code + a "functional" DSL. Guix packages are written in a much more declarative style. It kind of looks like executable JSON (I guess that's the point of Lisp, right?).


FWIW, on NixOS, you can use buildEnv in a .nix file to have your user environment be completely declarative. I discovered this before I discovered home-manager and it's still what I use.
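
A minimal sketch of what that can look like (the package choices are just examples, not what the parent uses):

  { pkgs ? import <nixpkgs> {} }:
  pkgs.buildEnv {
    name = "my-user-env";
    # everything listed here gets linked together into one profile-shaped tree
    paths = with pkgs; [ git ripgrep mpv ];
  }

Saved as, say, my-env.nix, it can then be installed into your profile with something like "nix-env -if my-env.nix", so the whole user environment is described by one file.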


Feature-wise, there's no fundamental difference. All of it boils down to a matter of taste.

But in terms of support, I must say Nix is currently ahead of Guix. It has a package count that surpasses any other distro's[1], and (almost) all of them go through review. It also has support for non-Linux platforms, namely macOS and the BSDs. The community is large and active, and there are various avenues through which you can reach out for help.

So if you're interested in either but don't know which to choose, I'd recommend Nix. If you're feeling adventurous and also like Scheme, Guix would be the way to go.

[1]: https://repology.org/repositories/graphs


Off-topic question about Nix. I understand it works by redirecting symlinks from, say, one version of a package's files to another.

Isn't there a race condition here -- like if I invoke a program at the wrong time while it's in the process of changing symlinks, could it pick up the wrong libraries or something? Does Linux allow a "changeset" of files to be locked and altered together in a batch, or anything like that?


I don't think "redirecting symlinks" is a totally accurate way of describing how Nix works.

Each Nix package, as packaged, has a hard-coded dependency on the specific hashed versions of all its dependencies. For instance, if you're building NPM and its build scripts hash to abcd1234, and it depends on Node.js whose build scripts hash to dcba4321, then /nix/store/abcd1234-npm-1.0/bin/npm has a hard-coded reference to /nix/store/dcba4321-nodejs-1.0/bin/node.
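
A rough illustration of how such a reference gets baked in (the wrapper is hypothetical, but pkgs.writeShellScriptBin and pkgs.nodejs are real nixpkgs attributes):

  # ${pkgs.nodejs} expands at build time to the absolute store path of that
  # exact Node.js build (something like /nix/store/<hash>-nodejs-<version>),
  # so the resulting script never goes through a mutable symlink
  pkgs.writeShellScriptBin "hello-node" ''
    exec ${pkgs.nodejs}/bin/node -e 'console.log("hello")'
  ''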

Symlinks come into play with user profiles - you don't want to type in that full path to npm every time, so you put ~/.nix-profile/bin on PATH and you tell Nix to make ~/.nix-profile/bin/npm a symlink to /nix/store/abcd1234-npm-1.0/bin/npm.

But there isn't a race condition in the underlying packages, which are all immutable and more importantly co-installable. If you want to upgrade to Node.js 1.1 and its hash is fedc9876, then it gets installed to /nix/store/fedc9876-nodejs-1.1. And if you rebuild npm against that (even without changing its version), the hash of npm's build scripts changes as a result, and it ends up in, say, /nix/store/aaaa1111-npm-1.0.

If you upgrade your personal profile to the latest version of everything, then Nix will install those two new directories into /nix/store, repoint the symlink in ~/.nix-profile/bin/npm, and then (eventually) garbage-collect the two old directories.

But at no point does the execution of code within Nix rely on mutable symlinks. (As far as I know.)


The reason that there's no race condition is that you're not "repointing ~/.nix-profile/bin/npm", instead, .nix-profile itself is a symlink, so the entire PATH is changed atomically.


Nix doesn't work by redirecting symlinks, much less between different versions of the same package. So there is no race condition as you describe.

Simply, if package A depends on package B then A's files will end up mentioning the absolute path to B directly (say, /nix/store/abcdef-b-1.0/bin/some-bin-file, where "abcdef" is a hash).

If package C depends on a different version of package B, then C's files will contain the path to a version of B with another hash and possibly a different version number (say, /nix/store/zywxabc-b-1.1/bin/some-bin-file).


Thanks, let me clarify my question with a proper example. Let's say I install A-1.0 and A-1.1. Both versions depend on, say, a resource file they expect to find at /usr/share/A.res

I was thinking that nix would symlink /bin/A -> /nix/store/abcdef-A-1.0/bin/A, and also /usr/share/A.res -> /nix/store/abcdef-A-1.0/usr/share/A.res

So when I'm upgrading to A-1.1, maybe the /bin/A symlink would update a moment before the /usr/share/A.res symlink, which means invoking A at around the same time as the upgrade could pick up the wrong resource.

Do we just try to make sure that binaries know to look for resources relative to their binary-path? Or do we use chroot/containers? Sorry if this is a dumb question :)


So, in Nix (and in NixOS) there is no /bin and there is no /usr/share.

So either A.res gets installed at /nix/store/abcdef-A-1.0/share/A.res and at /nix/store/xyw123-A-1.1/share/A.res (if A.res is part of A), or, if it's considered a separate dependency, it gets installed at /nix/store/jgh456-some-other-package-2.3/share/A.res, which both A-1.0 and A-1.1 depend on.

Also, A-1.0 and A-1.1 may depend on exactly the same "some-other-package" or they may depend on different versions, which would be OK since the hash of some-other-package will be different for different versions (or different variations of the same version), so there would never be a conflict.

As other posters mentioned, there is something called a user profile (and a system profile for NixOS) which does contain symlinks to the final binaries that are supposed to be in $PATH for some user (or all users, in the case of the system profile).

But these are only symlinks to the top-level binaries. The binaries themselves (and all their dependencies, including data and other binaries) contain hardcoded paths, they don't get redirected through symlinks. So once the top-level symlink gets resolved by your shell, everything is hardcoded from then on, no package sees some half-between state of their data or their dependencies' data or binaries.

You could even have multiple versions of the same package running at the same time without any conflict, as long as their mutable data is stored in different directories. For example, you can have different versions of Firefox running at the same time, as long as they use different Firefox profiles (if you try to run the same Firefox profile with different versions of Firefox at the same time, the second Firefox will complain that it is already running, as it should -- this is not very different from trying to run the same program twice at the same time).

And also, these user profiles are themselves updated/modified atomically through a single symlink, so your effective $PATH is either the old one or the new one, but never anything in-between.

PS: to answer your question, yes, we do indeed make sure that binaries know to look for resources in the right place.

Rather than look for the files relative to their path, usually packages have a "./configure" script which allows someone to tell where to install resources such as data files, man pages, etc. Usually people install resources in /usr/share or /usr/local/share (or even $HOME/share in some cases). In Nix, we just tell the package to install their files in /nix/store/abcdef-A-1.0/share and binaries will usually know where to find these resources based on the path provided to ./configure (and if for some reason they don't, we do patch source files to make sure that they do).
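
A sketch of what that looks like in a package definition (the name, URL and hash are placeholders, assuming stdenv, fetchurl and lib are in scope as in a typical callPackage-style file):

  stdenv.mkDerivation {
    pname = "somepkg";
    version = "1.0";
    src = fetchurl {
      url = "https://example.org/somepkg-1.0.tar.gz";
      sha256 = lib.fakeSha256; # placeholder; Nix will insist on the real hash
    };
    # the default configurePhase passes --prefix=$out, so binaries land in
    # $out/bin and data files in $out/share, i.e. under this package's own
    # /nix/store/<hash>-somepkg-1.0 path instead of /usr/share
  }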


Ah thank you SO much, this is extremely helpful. I really appreciate you taking the time to explain that!!


Nix uses symlinks to create user environments, mostly. The rest all points to unique paths in `/nix/store`. Sometimes, environments are also used for applications, mostly modular stuff.

Take a Python service started by systemd. The systemd ExecStart points directly to the immutable Nix path of the script, and the script also has a shebang that points directly to the immutable Nix path of the Python interpreter. (The Python interp in turn also links to libraries via direct Nix paths, etc.)
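
Concretely, such a service might be declared like this in a NixOS configuration (the script name is a placeholder):

  { pkgs, ... }: {
    systemd.services.my-service = {
      wantedBy = [ "multi-user.target" ];
      # both ${pkgs.python3} and ${./my-script.py} resolve to absolute,
      # immutable /nix/store paths when the configuration is built
      serviceConfig.ExecStart = "${pkgs.python3}/bin/python ${./my-script.py}";
    };
  }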


After the profile is built, there is only one link that needs to be changed to activate it, so it's atomic.


not sure if I completely understand the race you describe, but I don't think there's any race. the paths and links are created at build/install time. nothing is switching around after the expression has been applied.

edit: maybe you're describing what happens if a program is run while a Nix expression is being applied; I believe what happens in that case is that the program that was being run will work fine since its environment is pointing to the previous generation, and applying the new expression creates a new generation


Isn’t that a possibility with any package manager?


How do NixOS users typically manage software that is not a Nix package, like a source code tarball where you would traditionally run configure && make && make install?


> How do NixOS users typically manage software that is not a Nix package

By writing a Nix package for it (I don't mean for this to sound flippant, tone is a bit hard to convey over text).

For example, I have this alpha-quality Rust binary that I'm developing, but I also want a stable version installed at the OS level. I write a Nix package and simply compose it into my overall NixOS configuration alongside the more official Nixpkgs: https://github.com/rraval/nix/blob/master/git-nomad.nix
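
For anyone wondering what that composition looks like, a rough sketch (with the local derivation file standing in for any custom package):

  { pkgs, ... }: {
    environment.systemPackages = [
      # callPackage supplies the package's dependencies from nixpkgs
      (pkgs.callPackage ./git-nomad.nix { })
    ];
  }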

> like a source code tarball where you would traditionally run configure && make && make install?

Nix has a bunch of defaults that make a conventional package like this straightforward.

Here's a package for a vanilla C binary + library that does the `autoreconf && ./configure && make && make install` dance: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/secu...

It's almost a little misleading because the actual steps are largely inherited from the defaults; you can read more about `stdenv.mkDerivation` here: https://nixos.org/guides/nix-pills/fundamentals-of-stdenv.ht...


As a NixOS user of 3 years, this rarely happens.

When it does, I either:

* build it via Nix, 10-15 lines, equivalent to packaging it.

* build it in a nix-shell that contains its dependencies
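
The nix-shell route can be as small as a shell.nix along these lines (the dependencies listed are just examples):

  { pkgs ? import <nixpkgs> {} }:
  pkgs.mkShell {
    # running `nix-shell` in this directory drops you into a shell with these
    # tools and libraries available, without installing them into any profile
    buildInputs = with pkgs; [ gcc gnumake pkg-config openssl ];
  }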


You write some Nix code to download said tarball and build it as you said. In fact, the ever present `mkDerivation` often Just Works for autotools and cmake projects.


You can also specify paths to local sources, of course! Instead of src = fetchgit... or whatever, you can just write

  src = ./some-source-dir;
or

  src = ./.;
or whatever


That works for the kernel. But if you have to patch, say, glibc, doesn't that mean that under Nix you can no longer use precompiled binaries and have to recompile every single package that uses libc from source?

Sure, it still works. And on the off chance that you need to make a patch that changes the ABI, recompiling the world is exactly what you want. But usually you don't need to change the ABI (at least not in a backwards-incompatible way), and recompiling the world can take a very long time.


Another way to address this concern: Nix lets you choose what you want to do in this case. You can:

1) Decide you want that patch to be applied to everything, resulting in recompiling the world.

2) Apply the patch for only the specific project you are working on/testing/etc. Thus only that thing is recompiled.

3) You can cheat in various ways if your project truly requires being updated by only changing the glibc and no recompilation is feasible. Some options: the way OpenGL and graphics is currently done in NixOS, nix-rewrite (https://github.com/timjrd/nixrewrite), and LD_PRELOAD. Breaking the abstraction in this way may have practical benefits, but you also lose the reproducibility and excellent bookkeeping of all the details that Nix provides.

4) specify a container/project/VM that uses your patched glibc, so only those things are rebuilt that matter in your use-case.

The crux of the matter is that we need to be more precise in the names of things. Most of the time "glibc" refers to whatever is on your system at the usual path. Nix has taken the other extreme, where you can't just say "glibc": you really mean "this particular source, compiled with this particular set of flags and build script". If you want something more powerful, in between those extremes, there is ongoing work on a system called Nomia that attempts to provide far richer naming semantics, but it is still experimental: https://github.com/scarf-sh/nomia It would allow something like "glibc means anything ABI-compatible with this particular thing, plus some other requirements...."
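
For the curious, option 1 expressed as a nixpkgs overlay is roughly this (the patch file is a placeholder); once applied, everything that links against glibc gets a new store path and is rebuilt:

  final: prev: {
    glibc = prev.glibc.overrideAttrs (old: {
      patches = (old.patches or []) ++ [ ./my-glibc-fix.patch ];
    });
  }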


This may change in the future with Nix moving to a content addressable store.

I lack the technical know-how behind it, but if the output of a package doesn't change when one of its inputs changes, then Nix won't need to rebuild the things that depend on it.


If you're patching glibc, most of the time, you're going to change its output. Determining whether you need to redo downstream builds based on changed outputs is great if the thing you're changing is docs or optional libraries (such that most packages don't have a dependency on the thing being changed), but it doesn't help with patching the core.

The interesting case would be if you build separate build-time and runtime interfaces - like a libc.so that just has dummy symbols for everything that it defines but no actual implementations, and a libc.so.6 with the actual implementations that can change.

While most Linux distributions have the split of filenames, it's not actually done in this way - libc.so is a symlink to libc.so.6 (or in the specific case of glibc, a linker script), so it requires the actual libc.so.6 to be around and used during the build.

It would also be a bit of a semantic change in how Nix operates, as I understand it: currently you can run ./configure scripts that detect the behavior of libraries by actually running code, and if that behavior changes, Nix guarantees a rebuild. If you remove runtime libraries from what's visible during the build, you can't run such scripts anymore (or you're running them against a version of the library that doesn't trigger rebuilds when it changes).


Nix has a facility for patching libraries without recompiling the world (the intended use case was for novel zero-days where waiting for hydra to rebuild the world was unacceptable). All I know about it is that I've heard it exists though.
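
If it's the facility I'm thinking of, it's the NixOS option system.replaceRuntimeDependencies, which rewrites references in the already-built system closure instead of rebuilding everything downstream. A rough sketch (the patch is a placeholder):

  { pkgs, ... }: {
    system.replaceRuntimeDependencies = [
      {
        original = pkgs.glibc;
        replacement = pkgs.glibc.overrideAttrs (old: {
          patches = (old.patches or []) ++ [ ./urgent-fix.patch ];
        });
      }
    ];
  }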


Guix has this, and they call it grafts. I'm not aware of Nix having anything similar just yet.


To be fair, Gentoo has /etc/portage/patches so that you can patch software from official packages instead of having to create a new one.


Thanks, I didn't know about this. It's been about a decade since I actually used Gentoo and I was writing from the hip.

I've published a correction: https://zeroindexed.com/nix-ejection-problem#on-gentoo


Great post! I'm a mildly fanatical Gentoo user myself, but NixOS has tempted me into migrating to a functional package manager. I have no hesitation about facing fiddly systems and working around limitations in the name of freedom and flexibility, so I've chosen GNU Guix. It inherits these same ideas, ostensibly from NixOS itself, but also from the Lisp-driven environment of Emacs with its advice functions (including sophisticated extensions to the advice system itself, such as el-patch @ https://github.com/raxod502/el-patch).


Ah I didn't know about boot.kernelPatches. This is so much easier than my complicated nixpkgs kernel override.
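
For reference, the option can be as small as this (the patch name and file are placeholders):

  {
    boot.kernelPatches = [
      {
        name = "my-fix";
        patch = ./my-fix.patch;
      }
    ];
  }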


this is a very Nixian kind of oversight and it makes me laugh

I don't think I've ever actually used boot.kernelPatches, either, and I could see myself overriding the whole kernel package instead


I read the NPM doc but I still don't understand what "ejection" is. What are we ejecting? Can someone explain this, please?


> What are we ejecting?

Ourselves, it seems. A Javascript framework is like a jet, and we are the human payload. You can stay in the jet, zooming over the constantly changing landscape. But if you get tired of this zooming around (or if you get scared of hitting a mountain), then you can activate the ejection seat (https://en.wikipedia.org/wiki/Ejection_seat). Of course, now you're a mile high without a plane, but the ejection-seat comes with a parachute, so the descent will be pleasant (or, at least, non-fatal - which is a style of pleasantness).

Erm, wait, I think you were soliciting a more literal answer. :)

"create-react-app" is the Javascript framework/jet. If you want to go for the ride, then you declare a single-dependency on "create-react-app", and they will decide when/what/how to upgrade components in the framework. If you don't want to ride along with "create-react-app"s framework, then you "eject". They'll give you a little bundle (the concrete list of dependencies) and send you off on your way.


I tried this and landed in a field of debris from the jet.


LOL, best answer so far!


I believe the confusing bit is that it's not an NPM thing, it's a create-react-app thing. create-react-app is a helper tool for, as the name says, creating React applications and doing a bunch of things out of the box. Occasionally you reach the point where you need to do something more complex than create-react-app can handle for you. In that case, you run the "eject" script in the generated app (using "npm run" as the runner), which removes create-react-app's automatic build dependency from your project and sets it up as if you had put together all the pieces by hand. But create-react-app still works, for the most part.

(In this context, I think what is being "ejected" is the build dependency automatically added by create-react-app.)

I don't think it's a perfect analogy, but I see what the author is getting at - you need to break the abstraction of some packaging tool, but you want the functionality it provided to still work as well as possible.


> I don't think it's a perfect analogy, but I see what the author is getting at - you need to break the abstraction of some packaging tool, but you want the functionality it provided to still work as well as possible.

You got it. I don't think it's the best analogy either but it is pithy and works if you squint.

The essence is:

1. create-react-app is a monolithic transformation of a bunch of disparate tools into a managed workflow.

2. You can update create-react-app like any other dependency and get updates to the workflow.

3. At any point, you can break the abstraction via `npm run eject`, which drops you down a level into the raw configuration for the "disparate tools" that create-react-app was acting as a veneer over. This ejection is a point-in-time transformation; you no longer get the managed workflow updates from (2).

The analogy was:

1. The Linux kernel package on $DISTRO is a monolithic transformation of Linux kernel source code to Linux kernel binaries.

2. You can get updates from the package manager.

3. At any point, you can break the abstraction by dropping down a level and forking the current packaging script to adjust the transformation (like applying a patch). This fork is a point-in-time transformation; you no longer get the package updates from (2).


The Open Build Service and its version control tool, osc, offer some ability to fork a package from an upstream repo and still track upstream changes. It's not as elegant or convenient as Nix, but it is pretty cool. OBS supports packages for a ton of distros. It's essentially patch-based, but targeting different distros (of the same package type) and different architectures can sometimes be done without changing the source packages at all.

It might be fun to compare their approaches to solving this ejection problem. Nix's is much nicer to use and requires basically no infrastructure, of course.


I think you're correct that the eject mechanism was popularized with React. It very much sucks, IMHO. At least yarn has a mechanism to override a dependency with your own version of a component. Therefore I'm not sure the article is totally correct on this.


It’s not a common package manager or NPM thing, but a create-react-app thing that a few other things have copied. A detailed description of what it involves in CRA is here: https://create-react-app.dev/docs/available-scripts/


My general understanding is that ejection means you are abandoning a state that can safely accept updates from the original source.

"You're on your own, buddy. Good luck!"


Unfortunately, Nix breaks my CUDA/NVidia support.


NixOS? I've had a good experience with Nvidia on NixOS. But my experience is a laptop w/optimus.


The problem is that I need guaranteed future support.


That sucks. I wish CUDA didn't rule GPGPU. What was the breakage that hit you?



