Nix is running face-first into the complexities of build systems and package managers. As an observer, it looks like Python's package ecosystem in particular is a giant mess. This affects Nix disproportionately because Nix actually integrates all package updates into one channel, which nobody else anywhere does (and which, I guess, package authors often don't care to fix), and it's even worse because a lot of projects use Python as a build dependency (?!), which cascades these issues even further.
I've been following this thread [1] about the issue, which has all the interested parties involved (users, package authors, and Nix package maintainers) and contains various proposals to solve or alleviate the problems.
The Python ecosystem is not a giant mess, it's just dependency hell. You need to work with what you get from upstream.
> Nix actually integrates all package updates into one channel which nobody else anywhere does
If you are using the Python system packages, then Arch, Debian, and probably more are doing the same.
> it's even worse because a lot of projects use Python as a build dependency (?!), which cascades these issues even further.
In practice this is not a problem at all. Build systems that use Python underneath usually have very few dependencies. The bigger problems we are facing are big Python (web) applications and anything that moves (very) slowly upstream, like (sadly) many AI/ML projects.
FYI: I am a very active NixOS maintainer, and one of my foci is the Python packages in NixOS.
> The Python ecosystem is not a giant mess, it's just dependency hell.
There are dozens (?) of actively used package and environment management systems with no consistency and no lockfiles, many packages ignore semver, and having multiple versions of a package installed causes weird issues. I'm not sure why "a giant mess" is an invalid descriptor; I guess it's just arguing semantics.
I have already posted multiple times in the thread under the same username. Other than adding important consumers of a dependency to its tests, the only solutions I could come up with are to move AI/ML packages into their own repository, where they can move at their own pace and do pins/overrides more freely, and/or to drastically reduce the number of Python packages in nixpkgs.
Semver doesn't really apply here, since Nix always points at a specific version of each dependency. Multiple package versions are not visible to the same app, and you can include your own lockfile-like list.
So in practice, it's not really a bigger problem for Nix than for anyone else. If you need to freeze versions for an app, you freeze them. If you don't, or if you're packaging a library, you can (usually) rely on automated PR testing to catch breakages and deal with them there.
The thing that you and the sibling commenter miss is that this only applies to the one direct dependency on a package, but not to the dependencies required by that package. For those, you have to work with whatever build and dependency resolution system the package or its author uses, and you're up a creek without a paddle if the author doesn't care to help you debug your own system.
I'm not missing this. It's not a problem the way you describe it. With Nix you generally choose between packaging with deps that are in the repo, or effectively providing your own list of vendored versions. Both cover all the deeper dependencies, and you get to choose between integrating with the project's build system or ignoring it and providing your own lockfile equivalent (the latter especially if the build does something really weird).
Even if authors are unresponsive, Nix maintainers provide the required patches or disable the broken functionality. So no, things are mostly OK.
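For illustration, a minimal sketch of what such an override can look like (the package, version, and hash here are placeholders, not anything from the thread):

    with import <nixpkgs> { };
    # Pin one Python dependency to a known-good version for a single app
    # by overriding it inside a private copy of the package set.
    let
      myPython = python3.override {
        packageOverrides = self: super: {
          requests = super.requests.overridePythonAttrs (old: rec {
            version = "2.28.1";            # the frozen version (illustrative)
            src = super.fetchPypi {
              pname = "requests";
              inherit version;
              sha256 = lib.fakeSha256;     # replace with the real hash
            };
          });
        };
      };
    in
    myPython.withPackages (ps: [ ps.requests ])

Everything inside myPython that depends on requests sees the pinned version, deeper dependencies included.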
> that this only applies to the one direct dependency on a package, but not to the dependencies required by that package
Right now in nixpkgs we manage all dependencies in all packages front to back: all dependencies of a dependency, all dependencies of a dependency's dependencies, and so on.
Uh, what? I've not used Nix with Python (so I don't know if there's something particularly bad there), but most of the complexity comes from handling the non-Python dependencies of Python packages, which different people have different needs and hence opinions on; seeking consistency there is like expecting all the Linux distros to merge. At best there are two major systems people will talk about (pip+venv and conda), and the reality is that conda isn't a Python package manager but a more general system (and every other Python system you could name is built on the pip+venv system).
> As an observer it looks like Python's package ecosystem in particular is a giant mess.
Most people would say it is. The better question is how to avoid such a mess.
My take is that maintaining backward compatibility is a core principle which needs to be strictly observed to solve that problem. And yeah, it has also become a cultural issue with Python, as the Python 2/3 breakage shows.
So, could one sum it up as: Nix magnifies unsolved compatibility issues in packaging systems? Because if there were a single core Python distribution, like, say, Anaconda, but nothing else, these issues would not exist. Of course, people can avoid the issues if they only use a handful of packages. But putting everything into a single channel makes the problem much more acute.
> My take is that maintaining backward compatibility is a core principle which needs to be strictly observed to solve that problem.
Given that software developers never guess the correct design up front, this means that you always have architecturally buggy software, and a bunch of complaining about why buggy-looking edge conditions are never fixed. There has to be some kind of release valve for software to evolve and break backwards compatibility.
I don't disagree, but it doesn't only have to go in this one direction. One of the most interesting things about Rust, for example, is how it tackles experimental implementations and has concepts in the compiler etc. that make unstable language features "first class". I'd say this definitely yields better results than "well, a couple of guys hacked around on some prototype forks of a compiler, and now we're stuck with the result".
Of course, they also make very impressive backwards compatibility guarantees for stable stuff (cf. Rust's "editions").
Rust has corporate sponsorship and a very experienced team of developers.
You won't get that level of attention to detail, commitment to getting the design right up front, and willingness to maintain old APIs in the name of backwards compatibility from a single-person open source project published into a package manager in someone's free time.
So you are probably arguing for very thick standard libraries which are maintained by the core language team, which is corporate sponsored, and a reduction in reliance on open source packages.
That also means we shouldn't tolerate "shaming" of projects for taking a long time to fix and merge features, since 95% of the work will have to be done up front in thinking about the right shape of the APIs.
I'm cool with all of that as long as the whole package comes along. The idea that a bunch of solo, unpaid open source maintainers are going to be doing good API design up front and maintaining perfect backcompat, while being incredibly responsive to PRs from the community is kind of "unicorn farts" levels of not going to happen in the real world. You sort of get what you pay for, and a bunch of unpaid solo volunteers are going to need to make breaking changes to fix their old mistakes and abandon maintaining their old tech debt. And if you paid nothing for it, really you're getting more than you deserve in that deal.
> So you are probably arguing for very thick standard libraries which are maintained by the core language team, […]
No, I'm not arguing that Nix should do anything in particular.
All I was saying is that the "we have to get it right the first time without much feedback" way obviously isn't the only one, and there's empirical evidence of other working models.
As for PRs and so on, you're really putting words in my mouth, and frankly, I don't like it. Just so you know. I have never made PRs to core Nix, but with Nixpkgs I have had only good experiences so far.
---
Edit: Or are we really talking about Python? In that case, I can comment even less on PRs. But: Python has a large stdlib (it's "batteries included", after all). And Python has often found good ways to deal with its warts.
And I hope I don't have to argue that Python3k wasn't worth the trouble, right?
And frankly, I'd argue that once a language has grown to the point Python has, you'll have to re-examine how you gather data about community interest, for example.
From the outside, the process around the walrus operator and Guido leaving the BDFL post looks like a prime example of either not having enough "wild information" early on, or of the final decision ignoring a vocal part of a huge language community.
> There has to be some kind of release valve for software to evolve and break backwards compatibility.
You kinda insinuate that breaking backwards compatibility is necessary at times.
This is not the case. Projects like
* the Linux kernel, or
* the GNU C library, or
* the Numeric -> Numpy transition around Python 2.0, or
* Common Lisp (which is much older than Python) adopting Unicode
are good examples that this is not necessary. It is not true that you have to break backward compatibility.
There are domains where breaking backwards compatibility in libraries is not acceptable at all, like vendor libraries in industrial automation. You don't throw away a 15-year old printing machine or a chemical plant just because the vendor of the automation software is tired of supporting its old interfaces.
How is it done? It starts with well-designed interfaces. And when interfaces are changed, the old interfaces are kept and become special cases of the new ones. Numeric/Numpy is a good example.
Here is a talk, brilliant as always, by Rich Hickey which explains why and how:
Python 3 could have gone the same way: keeping the interpreter compatible with Python 2 code, making the semantics depend on whether a source file has a *.py or a *.py3 extension, and so on. It would have been more work, but the transition would have been nearly painless and, I guess, much faster. Support for old stuff does not need to go on forever; for example, Linux no longer supports Intel 386 CPUs.
It boils down to whether keeping stuff backwards-compatible is a goal of the project leaders or not.
The problem with most open source software in package managers is that it is usually written by one person. It isn't started by someone with a decade of interface design experience; it is often their first large, important project. They DON'T do the well-designed interfaces because they haven't yet made the interface design mistakes they'll eventually learn from. And when it comes to backwards compatibility, it is cheap for you to say they should just support their old interfaces forever, but that has a cost and creates more friction going forward for the project. When it is one person working on open source who isn't getting paid, that is all somewhat unreasonable to expect, and you just won't ever get it. In your world, instead of compat-breaking changes, what you'll wind up with is abandoned and rotting software as maintainers give up.
You could get there by arguing that languages need very thick and well-designed standard libraries, which implies a large business supporting the library and teams of reasonably well-paid software engineers doing the design work up front for everything. You should be explicit about that, though.
I'm not surprised that you cite one of Linus's asshole rants to the LKML as well. Try screaming that at a single-person open source maintainer and watch them decide it just isn't worth it anymore and quit the project entirely.
If you want that, then don't use anything outside your language's standard library, and don't use package managers or contributed source code at all. Write everything else yourself: no dependencies, no worries about backwards compatibility breaks.
I did not say that hobbyist packages which are used by few people and are unstable experiments should be kept stable at all costs.
But you see, the linked discussion about stability in Nix is about packages like opencv, pillow, boost, pytorch, tensorflow, and kubernetes, and I would expect those to behave professionally.
And as I said, as long as too few people actually respect semver, it is pointless to suggest using it, especially if the authors of a package do not know what a breaking change is, do not know how to prevent one from happening, and do not have a documented, specified API in some form. If you don't have an API, you can't use semver.
I am not complaining. Complaining is like "Uncle Bert was totally drunk again and fell down the stairs and broke his arm, and I expect him to change in order to make me happy." Or "Frank borrowed my car again and damaged it, and I told him again that I do not like that."
What I do is observe things and draw consequences. "Sorry, Frank, you can't have my car." And: "Well, Uncle Python makes a lot of breaking changes, so I had better not use it for long-lived projects which I do not want to constantly fix. Maybe I should have a look around at which languages manage this better?"
This is not, I think, an attitude I am alone with. For example, the Python 2 / Python 3 breakage led Konrad Hinsen, an early contributor to Numpy and Scientific Python, to explore Racket as a language for scientific computing:
And more concretely, and perhaps more pragmatically: when I start a project or include a library, I am critical about the stability the environment offers. For example, in one situation I used gevent in place of newer and perhaps fancier solutions because it was stable across the Python 2/Python 3 version bump, and I did not want myself or my coworkers to have to rewrite that part.
This does not mean I will stop using Python altogether. It is still useful for many applications. For writing new library code, however, I would rather use something that is more likely to stay stable.
Well, they should clarify, because they cite the example of the 2/3 breakage, which is a major version break, and they're not complaining about packages just violating SemVer, so I don't think your interpretation makes sense.
It also certainly wasn't what I was responding to, and responding with "hurr durr major versions", as if I'd never heard of them before, is mildly insulting (and kind of insulting to the parent comment by proxy).
> they're not complaining about packages just violating SemVer, so I don't think your interpretation makes sense.
Packages regularly violate semver, to the degree that it has become a cargo cult.
It is funny that on the one hand, people say that keeping backward compatibility is too difficult for normal package authors and contributors, and on the other hand they suggest that using semver would improve this.
In order to actually use semver, one needs to know what backward compatibility is, what kinds of changes break it, and how to make sure that such breakage does not happen. That is not that difficult. But semver also requires stability against an API, so one absolutely needs a clearly documented API of some sort, because if there is nothing specific that you promise, how could one expect you to keep it?
Furthermore, major packages should actually respect semver if they claim to use it, and not make breaking changes in minor version numbers, as boost does, for example. Actually, I think if somebody uses a three-element version number and does not strictly adhere to semver, it should come up in a popup box in front of every download link, because three-part version numbers imply that the package uses semver, and in some cases (like boost) this is a false promise.
And before somebody throws in that it is not *his* package that is breaking semver, but some dependency that his package happens to use: no. If you use dependencies, you are responsible for their behavior; otherwise, one could always shift the blame somewhere else. If a dependency breaks backward compatibility and your package is a library, including it is a breaking change, because backwards (in)compatibility of dependencies with visible effects (as is the case for all Python library modules, as has been discussed) is a transitive property which travels up the dependency graph. If you include visible breaking changes, your package introduces a breaking change, and it cannot honour semver without bumping the major version number.
> they're saying that the Python 2/3 breakage debacle became a model for how packages are maintained in general.
As explained above, semver is not a solution. And it is often not really followed. For example, boost breaks backwards compatibility at times, and this causes problems, not least because boost's Python bindings are used in so many projects.
The Rich Hickey talk linked above explains why this is not a solution. It makes a difference, yes. But it is the difference between "the incompatible changes in my library are going to break your application" and "the incompatible changes in my library are going to break your application, and I am telling you this beforehand".
Is the discussion here that Nix doesn't support multiple side-by-side installations of different versions, making it difficult to install two packages that depend on the same package but on different versions of it?
I thought in Nix you can have that, and Python can have venvs for independent library versions. Is it that no one has done the work to combine the two?
Nope. All Python programs in Nix are effectively venv'd, so to speak.
The problem is that if two packages A & B each depend on the same library L but require different versions of it, any Python code that imports both as libraries will end up with two, potentially incompatible, copies of the same library in its import path. Python doesn't support this, so it uses the same version of L for all the Python code running in that process. So, depending on whether some package P which requires both A and B imports A first or B first, A will end up using B's version of L or vice versa. Sometimes this does nothing, and sometimes it causes very weird, very subtle breakage.
Thus for all of the Python libraries in Nixpkgs to be usable in any combination by any package in Nixpkgs, there can only be one version of each Python library in Nixpkgs.
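To make the import-path problem concrete (store paths invented for illustration): Python resolves imports by scanning sys.path in order, so whichever copy of L comes first wins for the entire process:

    $ PYTHONPATH=/nix/store/aaa-libL-1.0/site-packages:/nix/store/bbb-libL-2.0/site-packages \
        python -c 'import L; print(L.__version__)'
    1.0

B's code still runs, but against A's version of L.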
Once your package collection is large enough, you start actually encountering versioning conflicts like the one described above in the transitive Python dependencies of your end-user application packages. That's why Linux distros run into these integration issues. Application developers generally don't, because their applications' dependency sets are much smaller.
> I thought in Nix you can have that, and Python can have venvs for independent library versions. Is it that no one has done the work to combine the two?
A Python environment in Nix is effectively a different take on a venv. The Python package set in nixpkgs is one giant Python environment. For programs that live outside that package set but use packages from it, we are free to apply overrides however we want, so it is possible to have different versions of a package in different Python environments.
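As a minimal sketch of such an environment (the package choices are just examples):

    # shell.nix: an isolated interpreter wired to exactly these libraries,
    # roughly what a venv gives you, but assembled from the Nix store.
    with import <nixpkgs> { };
    mkShell {
      packages = [ (python3.withPackages (ps: [ ps.numpy ps.requests ])) ];
    }

Two such environments can pin different versions of the same library; they just can't both be imported into one Python process.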
Yeah. This is covering stuff you know if you've read that thread, but:
Putting all Python libs into a single channel is something Nixpkgs does because Python can't handle different versions of the same library in a single process. The Python libs that are used in actual applications in NixOS, then, need to be compatible, or they might cause weird issues when a new or existing package tries to use them at the same time. Other distros do run into this, but it might be worse in Nix. (Some (all?) C libs have this same issue, but they don't have rampant integration problems from version to version like the Python ecosystem does, so having just one copy of them is fine.)
And yeah, Python developers seem wary of the kind of vendoring that would fix this, and some Python package authors are very hostile about integration issues that Linux distros are more likely to hit than developers of individual downstream applications are. (I guess they don't feel like distro integrators are truly "using their code" the way application or library developers would be, and those integration issues can be a lot of work to figure out for what feels like some "fake user's" configuration problem.)
Python packaging has been an incredible mess for many, many years. I don't think the community's processes and institutions have the means to meaningfully fix it so that downstream consumers of Python packaging infrastructure don't have to use a big pile of hacks to successfully package Python applications.
Distros will probably see another language replace Python for internal tooling and sysadmin applications before they see Python packaging unified around something that behaves deterministically, works offline, lets programs reason about the dependencies of packages that are not present/installed/built, pins versions with cryptographic hashes by default, disallows arbitrary scripts at install time, sanely describes dependencies on native libraries, etc.
PS: This is not a knock on Python. The Python community faces some really tough institutional/governance/adoption problems because:
- the language is mature and the ecosystem has a lot of valuable code in it, already packaged in various ways
- the language was born without a modern package management story, because it's very old!
- big community-wide changes are democratically governed and there are a lot of stakeholders who are bound to have opinions
Consensus will be really hard to build and legacy packaging processes will stick around for a long time. And some changes that could really help, like in-process changes to library loading behavior, are likely to be seen (perhaps correctly!) as too radical/disruptive to be in the best interest of the majority of the existing community.
Applications can override the dependency versions they want to use. It's just not possible in the Python package set, because if two versions of the same package ever end up in one environment, very strange things happen and everything falls apart.
> The Python community faces some really tough institutional/governance/adoption problems
The Linux kernel people could have the same problems... it is genius how they have solved them, by developing git but also by establishing processes that scale.
In the Linux kernel world, Linus is the final arbiter and decision-maker: he takes input from people he trusts, but ultimately he and only he has the final commit. Python used to have this model too, with Guido at the top as the "BDFL" (benevolent dictator for life); however, when he stepped down it was replaced with a steering council: https://peps.python.org/pep-0013/ The two projects have very different governance as a result.
Maybe it would be more correct to say that "Nix integrates all package updates into one channel which is unique to system integrators (like Nix and Arch) but an uncommon workflow for regular application developers".
Python is _really_ tempting as a build (or test) dependency. It's an easy way to throw together code generators or similar glue that is likely to run on whatever platform is building your software.
Finally, needing to rewrite everything in Nix is nice for poorly written configurations or undocumented packages in general, but it seems redundant for well-maintained software. Has anyone else come up with a sane Nix strategy to avoid the overhead?
I have tried Guix (as a package manager), and it seems much better documented.
I also really like the fact that Guix uses a well-established, minimalistic, well-implemented, functional-first configuration language: Guile, the GNU implementation of Scheme, which is very much tailored to being extended with and embedded in other software, for example software written in C. In part, my love comes from having had to use the alternatives: huge configuration files written in YAML, for example, with no real documentation of what all the keywords really mean, or things such as Conan, which appear declarative and are... whatever.
I much prefer Guix UI-wise but it has some downsides:
- I've had more jank on Guix System as a desktop OS than on NixOS. Specifically, some dbus-related stuff like notifications and appindicators (when running sway + waybar) has been very unreliable for me under Guix in ways that it hasn't been on any other distro I've tried, including NixOS. Still haven't figured out why.
- Guix is slow compared to Nix. This is especially noticeable on older/weaker hardware.
- Nix's home-manager has a lot more options than Guix's equivalent; it's really nice being able to rely on it for things like sway configuration.
That said, between the two I do lean towards Guix because I do gravitate towards Scheme more than the Nix DSL. I just wish it were a bit more polished.
> - Guix is slow compared to Nix. This is especially noticeable on older/weaker hardware.
Yeah, I have noticed it is not the fastest snail on the lawn. On the other hand, I have seen so much time wasted on integration and reproducibility issues that I'd happily run nothing but a Guix install one day a week and not have any of those issues.
Which is a good thing, because most users do not want to configure and program Guix stuff all the time. Many will only use it every few months, and probably don't want to learn the syntax and semantics again and again.
Extreme terseness like APL or math notation is fine if you work with something all the time. However, for infrastructure code, and especially build systems, I think readability and robustness are much more important.
People complain about learning 'the language' when the thing they've really been trying to learn is actually:
- the language
- the stdlib (nixpkgs.lib)
- the NixOS module system
- several particular packaging ecosystems (stdenv, buildGoModule, buildPythonApplication, etc.)
- the hooks and stuff that get exposed as variables and functions in bash-based builders
all at once! (plus maybe even the derivation format)
It makes sense as shorthand, and I'm sure it describes the feeling, when people say "learning the Nix language was overwhelming at first", because using the language in the context of all of those other things (essentially libraries and applications written in Nix) is the context in which one generally tries to learn it. And that really can be a lot to take in at once.
But I think when someone says 'learning the Nix language was hard' or something similar, it does sometimes mislead others about the complexity of the Nix language itself. So there's this widespread misconception that Nix-the-language has a lot to it.
But like you say, the language itself is actually super minimal. (And really good for its intended purpose, imo.)
For somebody who knows Lisp and Scheme, however, it is more than nice to already know the language and be able to just read the interface descriptions.
To give an extreme example in the opposite direction (no, I do not want to bash Nix here), take CMake and its configuration language, which has no real definition. It is just painful to use.
I think Nix probably needs more beginner-friendly documentation as well; by beginner-friendly I mean aimed at Linux beginners. Beginners will not read through the Nix pills (assuming they can understand them) before trying Nix; they will just give up. And there are a lot of Linux users who don't have much technical knowledge about how binaries are linked, etc.
I think the ecosystem is now mature enough for beginner users to ignore the detailed packaging issues and just rely on home-manager and NixOS options for most of their setup. And I think it should be possible to create something like a GUI for home-manager to lower the entry barrier. If people are looking for a distribution rather than a build system, we shouldn't be teaching them how to use Nix as a build system; we should show them some config that just works.
Because why not? For things like installing (commonly used) packages and setting up some services, I think Nix is simpler than the likes of Debian or Arch. You don't need to understand all the nitty-gritty to install a package. The problem is the documentation: some information is old (nix-env, for example, which should be deprecated IMO), some needs better discoverability (options that you don't even know exist), and we should be able to just give users examples to modify instead of asking them to learn the whole language.
For advancing the goals of the project, I think increasing adoption will be beneficial, for example getting more software projects to provide a Nix script for building, because more of their users are using it. And perhaps if more users are using it, more companies and universities will adopt it.
> Nix needs a new porcelain interface for its CLIs.
It already has one with the 'nix' command; it just needs to be manually enabled under 'experimental-features', but once that's done there is basically no reason to ever touch any of the old commands.
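Concretely, enabling it is one line of configuration:

    # ~/.config/nix/nix.conf (or /etc/nix/nix.conf)
    experimental-features = nix-command flakes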
Yup. I started with Nix a little over a year ago on Nix 2.3, and have only ever used flakes and the new CLI. It's complete from my point of view; the main issue is just that the pills and all the official documentation still refer to the old commands.
I wish they'd flip it over to being the default and update the docs. I know that's not trivial, and it's hard when all the long-time community members have the legacy commands in their muscle memory, but IMO the current state of affairs is actively hurting the onboarding experience.
I started with a single system on NixOS around February of this year. I only recently saw an example of `nix search` instead of `nix-env -qaP`. I haven't seen any documentation for all of this new stuff. Is there any place to go, or do we just have to go read the source code?
`nix --help`, `nix search --help`, etc. will show a man page, but a look at the source can't hurt, as there are a couple of rough corners of the `nix` command that can be rather confusing.
For example `nix profile remove REGEX` will only match against the attribute part of the URL, which for Flakes is often just "defaultPackage.x86_64-linux", completely missing the name of the actual package, thus making it impossible to remove packages by name (using the index number will work).
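So in practice (given the behavior described above):

    nix profile list       # note the index in the first column
    nix profile remove 1   # removing by index works reliably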
> it just needs to be manually enabled under 'experimental-features'
Same with flakes. My impression is that Nix is either on the cusp of a major paradigm and usability change, or the status quo will remain forever in a state of having "wrong defaults."
> My impression is that Nix is either on the cusp of a major paradigm and usability change, or the status quo will remain forever in a state of having "wrong defaults."
I think probably both. The Nix community is host to very diverse and partially overlapping experimentation, and features like rollbacks and version pinning make the bleeding edge feel relatively safe, further fostering such experimentation.
The new Nix CLI will be huge for new users and for adoption once it's finalized. But there will probably always be a bunch of Nix users doing weird, cool shit that everyone kinda wishes was 'here already' for mainstream use.
FWIW, I'd say the new Nix CLI should see broader adoption soon. It was only around the beginning of 2022 that they started signaling it was mature, and since then there's been good progress on things like Home Manager compatibility with the new session format brought in with the new CLI.
I think what's mostly needed now is for a few people to step up and overhaul the docs.
I agree that critical mass with third-party Nix code, especially in important projects like Home Manager and Nix-Darwin, is crucial and it's almost there.
I don't want to rush the finalization of flakes or the new CLI, though, eager as I am to see them come out from behind the 'experimental' flag. There's clearly still a lot of work, including bugfixes, going into the flakes implementation.
I think the really crazy default is that flakes will not include git submodules: you need to pass a special flag to enable that, which is undocumented, so you have to dig through GitHub issues: https://github.com/NixOS/nix/issues/4423
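If I'm reading that issue right, the workaround is a URL query flag along these lines (hedged, since this is exactly the undocumented part):

    nix build '.?submodules=1'
    # or, spelled out as a full flake URL:
    nix build 'git+file:///path/to/repo?submodules=1'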
> Nix needs a new porcelain interface for its CLIs.
The analogy I'd use is that Nix needs what "GitHub did for Git". Meaning, git is actually unnecessarily complex, but GitHub made git easy and accessible.
For well-maintained software it's generally a few lines of a Nix definition. It gets hard precisely when something is not properly maintained: patches, insane build configuration (I'm looking at you, Intel...), binary dependencies, etc.
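For the well-maintained happy path, a hedged sketch of what "a few lines" means (name, URL, and hash are placeholders):

    with import <nixpkgs> { };
    stdenv.mkDerivation rec {
      pname = "sometool";
      version = "1.2.3";
      src = fetchurl {
        url = "https://example.org/${pname}-${version}.tar.gz";
        sha256 = lib.fakeSha256;   # replace with the real hash
      };
      # stdenv's default configure/build/install phases handle a
      # conventional autotools project without any extra code.
    }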
> Has anyone else come up with a sane Nix strategy to avoid the overhead?
I haven't needed it in a long time, because nowadays almost everything is packaged, but there is the steam-run CLI app, which runs a specified executable in a "standard-like" environment. It still doesn't support everything, but it can come in handy.
In the very early days I also just kept a Debian file system lying around and machinectl'd into it (very lightweight virtual machines).
If something is well maintained, it is usually pretty easy to make a Nix derivation for it. The strict discipline that Nix imposes usually becomes an issue exactly when something is not well maintained or has fundamental issues with how it is managed, but that would be a problem for adoption by, or a port to, any other package manager.
Man, there are a lot of negative comments here. Just to add a different experience: my company loves Nix. It makes it really easy to integrate new tools into the dev/build environment without needing to document which packages, configuration, etc. a developer needs to apply to their machine manually.
I don't think anyone is being negative, at least not in the top-level comments I read. I like Nix, but I gave it a test run at my company and no one could figure out how to make any change that wasn't a copy and paste of what I had done.
The docs don't help much unless you really go diving into them, and most people who just want the software to run don't want to spend the time learning it. I don't blame them.
This is a very valid criticism of any software. It's why things like Docker (containerization) won out even though the technology had been around for years before: someone made it easy to use, so people used it.
Same here. I have been using Nix and NixOS for years, and I prepared nix-shell and packaging files for our internal tools and committed them to our internal GitLab. No one ever touched them, even after multiple presentations from me. There's no way to find your way around, especially if you don't have a functional programming background.
I personally think that is a bit of a cop-out. Most developers should be able to do basic tasks in a system that has been set up by an expert user.
I think Bazel is an OK example of this. It's a pretty complex build system, but expert users can build macros and rules that the average developer can consume without having to know a ton about everything that is happening.
IMO the ability to do the above at some level is the sign of well-crafted software.
Relegated to a niche subgroup, I see. There's a significant difference between spending time learning a tool and sinking dozens and dozens of hours into a tool in order to perform tasks that are basic in other package managers.
I don't think tasks that are basic in other package managers require dozens and dozens of hours sunk into Nix to learn.
"Niche subgroup" is about right in its current state. With text editors, VSCode is powerful and accessible to use, but there are power users who prefer to spend time learning Emacs.
With package managers, Nix is a power tool. It's not as accessible as it could be. But the idea of "spending time learning a tool" isn't unusual in software development.
Spending time learning a tool is a standard requirement in our profession. Nobody but the laziest has a problem with it.
Programs like Nix, Emacs, Vim, Git: they require a lot of time sunk into them to get even to basic productivity.
The latter is not okay. While I think it's unavoidable for Emacs and Vim, I've seen enough Nix and Git recipes and confusing command-line aliases to conclude that Nix (and Git) could be much friendlier and have a smoother learning curve.
The ugly truth is that its community is not interested in that, and even looks down on busy programmers who want to memorize a few shorthands and move on, which is a very valid mindset to have. I'm not okay with people looking down on it.
To me it looks like Nix is firmly headed in the direction of yet another tool with a very good idea whose authors don't want to make it more usable, so it remains a niche curiosity for people with too much free time... and the occasional corporate programming team that's perfectly served by its niche benefits.
I'd hate for Nix to become that. But at the moment everything points at this being its fate.
> To me it looks like Nix is firmly headed in the direction of yet another tool with a very good idea whose authors don't want to make it more usable, so it remains a niche curiosity for people with too much free time.
What gives you the indication things are headed in the wrong way?
I think things are heading in the right direction.
The last year has seen Nix flakes released into the stable Nix version. Flakes are a big UX improvement for Nix.
The last few releases of Nix have added improved support for debugging Nix code (poor debugging UX was highlighted as a major pain point).
Efforts from major contributors acknowledge the importance of improving documentation: in the latest community survey, the steep learning curve and poor onboarding experience were noted as major pain points. Etc.
> even looks down on busy programmers who want to memorize a few shorthands and move on
Ehhh.
I don't think it's fair to say "vim is a bad tool because it requires learning to get used to it". Fortunately, developers aren't stuck between nano and vi; they've got highly accessible tools like VSCode... or, on the command line, even micro: https://github.com/zyedidia/micro
> What gives you the indication things are headed in the wrong way?
Because it started swinging in the direction of "you are not the target audience" while at the same time raving about how it's the solution to software packaging and distribution problems, which, pardon me if I'm mistaken, are very ambitious and big goals that affect VERY different groups of people.
Telling any of them "it's not made for you" is not doing their cause any favors.
One example: documentation and onboarding. A good number of guides, both official and otherwise, still use the old-ish syntax, while `nix <subcommand>` has been a thing for a while now.
...Also "flakes", "pills", really? Can we finally grow up and start using proper terminology? The cutesy jargon must go. Forever. This is not a kids game and not a hobby project anymore. You're writing software with extremely ambitious goals. Show some professionalism. I can close my eyes on that and have done so many times but I've personally known a good amount of engineering leaders that would deny usage of software on that basis alone.
Nix has reached the point in its lifetime where marketing and onboarding have to be heavily prioritized, and its community doesn't seem very keen on them. That dooms it to obscurity from where I am standing, because I am one of those programmers who visit the website and go: "What is this? Oh, that. How do we start? Like so? Cool. Oh... an error on the second command, seriously? OK, OK, let's just Google it. Huh, nothing. Yeah, frak that, bye."
The above has to be mercilessly chased and resolved at every occasion, aggressively. If not, Nix is going to be the next Snap / Flatpak.
And I really want to make it super clear, if you're still with me: I want Nix to succeed. For now, though, I view it as a nascent tool that still has a long way to go. And I really wish they had started learning from the mistakes of Git (confusing CLI, big docs that don't help one get onboarded quickly). But so far it's not looking good on these points.
Admittedly I last checked it out 7-ish months ago. I'll try checking it out every 3 months or so from now on. And I hope I am wrong.
> Nix has reached the point in its lifetime where marketing and onboarding have to be heavily prioritized, and its community doesn't seem very keen on them.
As far as I can tell, Nix is growing pretty well. The results from the last community survey indicated that most of the users started using it within the last few years.
> And I really want to make it super clear, if you're still with me: I want Nix to succeed. For now, though, I view it as a nascent tool that still has a long way to go.
Perhaps by analogy: if apt-get is like Notepad, and Nix is like Emacs/Vim, it'd be neat to have something like VSCode.
I think rough edges like "Nix isn't nice to use for <some common programming language>" would be good to sort out. But yeah, the rough documentation and the harsh onboarding were some of the big pain points identified in the community survey.
> Telling any of them "it's not made for you" is not doing their cause any favors.
Not every tool is well suited to all users.
I wouldn't recommend the Arch or Gentoo distributions to someone who doesn't want to spend time tinkering or figuring out why something broke. I'd recommend Debian instead.
I wouldn't recommend Rust to a team which can't afford the time to train developers. Whereas, Go is a much simpler language that's easier to pick up.
In its current state, Nix isn't well suited to "I just want things to work, I'm not interested in a package manager more involved than apt-get".
> As far as I can tell, Nix is growing pretty well. The results from the last community survey indicated that most of the users started using it within the last few years.
Taking a single recent sample just comes across as fanboying and wishing your desired conclusion into truth. Let's not go into that territory; it's not arguing in good faith.
One of my favorite technologies was "trending" for a bit but then plateaued. These things happen. The factors vary but usually fall within a narrow set that's well known to the "realist" type of people. Many don't like hearing that, however, hence the endless bikeshedding. No need for that here.
> Perhaps by analogy: if apt-get is like Notepad, and Nix is like Emacs/Vim, it'd be neat to have something like VSCode.
And that's exactly my point. Nix is nothing like VSCode for package management. It's more like an ancient version of Vim whose advocates swear that the months and years needed to learn it well will pay off for eternity. Sorry, I don't mean to bash you or anybody else, but I've read the forums and the GitHub issues. The Nix community's demeanor leaves things to be desired.
> Not every tool is well suited to all users.
If you want to "solve" package management, reproducibility et. al. then you should try to cater to all users.
I'll remind you that I really want Nix to succeed. I hate how one update command can change files in /etc, /var, /usr and /home. I want isolation! I want trackability! I want to issue a system-wide update command and then check logs for each updated package and exactly which files it touched. I want that put in a time-travelling database (a la ZFS snapshots) and to be able to revert whenever I wish.
These things are hugely important and extremely critical for the future.
In this context, just throwing your hands in the air and saying "it's not for everyone" is not being ambitious enough. I and many others want a replacement for e.g. pacman and apt-get: a complete, 100% replacement that does everything better.
So far Nix is not that. Until it starts closing in on that target, it will remain a niche technology for fans.
Obviously my vision is not aligning with that of the maintainers so far. I get that. But I also have plenty of experience, and I am well within my rights to use it to try and predict what traction their tool will get if they do (or don't do) certain things.
There are many things like it. For example, managed operating environments where the user doesn't need to do anything (and actually can't do anything). Or disposable environments like VMs and containers.
Sure, it's not the same as massaging a special pet operating system over and over, but most people who need to produce software hopped off that bandwagon years ago.
I get that companies that do functional programming, and Linux, and Linux on the desktop exist, but I have yet to find any company that does that at scale, at a good profit, against competition. That's not to say "therefore, Nix is bad"; it's just that the problem isn't a technical one that Nix suddenly fixes. It only seems to be a problem if you're stuck in yum/apt all day and need a fix to get out of that.
Nix is not an operating system. It can be used to build operating systems easily, though.
Also, there isn't anything "functional" about Nix. It's a nice sales pitch, but underneath it's just a thin layer over bash scripts and environment variables.
Right. You still end up mostly writing bash scripts when wrangling Nix. It's not some sort of ivory tower Haskelish hermetic ecosystem, it's just a very nice way to make bash scripting sane.
Nix uses string antiquotation, not string escaping. It's one of very few languages that has it. And yes, it is sane, very sane: the only sane solution to this problem.
Eelco Dolstra's thesis advisor was the first to create a scannerless GLR parser:
The first versions of Nix used a scannerless GLR parser, because that's the only way to prototype sophisticated features like antiquotation without going completely mad. Once the syntax was completely locked down, it was rewritten with a separate scanner and an LR(something) parser, but the two are intricately entwined. The scannerful, non-GLR parser is faster but basically frozen and extremely difficult to modify. Fortunately, Nix's syntax has been exceptionally stable for the last decade or more.
True string antiquotation is a feature that every language should have, but unfortunately, with current technology it forces you to choose between a slow parser and a fast parser that's almost impossible to modify.
Some languages have "string interpolation" which is a weaker, more fragile form of antiquotation.
> Since ${ and '' have special meaning in indented strings, you need a way to quote them. $ can be escaped by prefixing it with '' (that is, two single quotes), i.e., ''$. '' can be escaped by prefixing it with ', i.e., '''. $ removes any special meaning from the following $. Linefeed, carriage-return and tab characters can be written as ''\n, ''\r, ''\t, and ''\ escapes any other character.
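A tiny demonstration of the difference, as a nix repl session:

    nix-repl> let pkg = "hello"; in "installing ${pkg}"
    "installing hello"

    nix-repl> ''a literal ''${HOME}, not an antiquotation''
    "a literal ${HOME}, not an antiquotation"

The first splices a Nix value into the string; the second uses the ''$ escape so the ${ survives verbatim (handy when the string is a shell script).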
Though I anticipate better discussion from "nix didn't suit me" than "nix works".
Looking at Nix's community survey, there's been big growth in the community over the last year or two. I think most who try Nix like it and see it as an obviously good technology.
My pet peeve (which is present in this thread!) is when people link a Nix Discourse or GitHub discussion and say "look how ridiculous it is to do XYZ in Nix."
The community discussions are one of the best parts of Nix. People are super helpful and work together to solve novel problems all the time!
In fact, those threads are people doing something about "nix didn't work for me."
I'm not GP, but my company's experience[0] is the same, and we're definitely hiring. Seeing Nix in the tech stack was one of my reasons for applying :)
Nix seems great for build servers. This is a great introduction to the motivations behind it.
I'm not sold on using it for managing developer environments (another use case it is often put to). It "solves" the problem of developers using different versions of libraries or compilers on their machines... but it comes at the cost of having to learn a whole new programming language, a configuration language, a whole new jargon, and a workflow. It's a bit like using Docker as a development environment: it introduces a non-trivial amount of friction.
Some folks get excited about package management and configuration. Personally, I don't care for it enough to overcome such a high learning curve. And I don't particularly like the workflow it enforces.
In my experience Nix works pretty great for developer environments when you don't previously have anything more than manually configured dev servers. (In general Nix is much easier to buy into if you don't already extensively use Docker, k8s, etc.)
For smaller teams with less experienced developers, any tool is going to have a fairly high learning curve. Instead of having everyone learn Nix or Docker, it's easier to have a few more Linux-experienced devs configure servers and devshells using Nix while providing a simple TUI for developers to access the various tools.
There are some headaches: if you want to use a full NixOS environment, there are complications with tools like VSCode and NodeJS that download dynamically linked binaries, and those are terribly difficult to work around.
And, unlike nixpkgs, Hydra is pretty unfriendly to contributions.
There is one master trusted public key for nix:
6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
It is hardwired into the nix source code and every (unpatched) build of nix from the last decade or so. There is no revocation system. There is no public key infrastructure. If that key gets compromised, there is no backup plan. I love Nix, but this is batshit crazy.
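For reference, that key is the default binary-cache trust anchor that every install carries in its nix.conf settings:

    substituters = https://cache.nixos.org
    trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=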
The Hydra instance has access to the corresponding private key. So the people who merge changes to Hydra are understandably paranoid. Unfortunately this has turned the codebase into a mess.
> There is one master trusted public key .. It is hardwired into the nix source code .. There is no revocation system .. The Hydra instance has access to the private key
There are a couple of CI/CD systems being produced by startups that aim to fill that role, based on BuildKit/Docker. For now, Nix is still way better than they are in many ways, with the notable exception of Windows support.
Maybe visible interest in them can push forward Nix community developer interest in polishing Nix for the same use cases.
I loved the idea, but I did not enjoy the experience.
Maybe the problem was that I was trying to make it work on a less well supported platform (I think it was ARM32). But the packages I wanted to install either weren't available, or I kept getting incompatibility errors.
I still love the idea, but these days I feel like environment managers like Anaconda make (mutable) Python development a little more manageable, and things like Docker make (mutable) Linux development a little more manageable. Basically these both make it less painful to start over with a fresh "thing" when the system starts to get crufty.
In my view, there's a spectrum from "immutable and annoyingly rigid" to "mutable by default and annoyingly unpredictable". And the sweet spot is not at either end, but something like "immutable by default, but mutation is possible".
I use Nix on an array of devices (any combo of x86/aarch64 and Linux/Darwin machines you can imagine), and while I really do love the experience, you're right that ARM is a sticking point. It's been getting better over the past few months, but still not close to x86 packaging parity.
On the other side of that coin, I tried switching back to Arch a few weeks ago after ~4 months of NixOS. Maybe it's the sunk-cost fallacy, but Arch didn't make me feel all starry-eyed anymore. Nix feels like a really dependable piece of my workflow now, and it's difficult to imagine myself going back to Homebrew/pacman.
> the sweet spot is not at either end, but something like "immutable by default, but mutation is possible".
Flatpak tried that; you're welcome to draw your own conclusions about how that turned out. The problem is that your modifications now require build hooks on every update, and you're no longer guaranteed a comprehensive runtime. With Nix, these hooks get rewritten into derivations, which (in my experience) provides a more stable, sane alternative to Docker images and Flatpaks. Flatpak also isn't packaged hermetically, which means that not all Flatpaks will behave the same on all machines: something as subtle as different environment variables or display server implementations can cause your application not to launch.
Is there a way to get UI applications installed with Nix to show up in Spotlight Search? I remember that being my big annoyance when I tried using Nix instead of Homebrew on macOS.
Oh, I wouldn't know (my Mac is a dumb terminal for testing and nothing else). They should have appropriate .desktop files for their Linux counterparts, though.
I recently tried Guix, which is somewhat similar to Nix in theory. While I loved the idea, I did not love dealing with the cryptic problems resulting from it. For example, I ran into an issue where one R package would only work properly if I installed other packages in a specific order, and doing an update could mess up that order, breaking the package. Later updates to R broke support for certain things entirely, in ways I don't see on non-Guix setups either. Plus, getting things to work that require binaries I can't (or can't easily) compile myself is a pain.
I just can't justify fiddling with my package manager so much.
This is my experience with both Nix and Guix. They have advantages, but they are different to the point of being unusual.
I very much want the benefits of these kinds of systems, but they both produce a runtime system (shell environment, whatever) that is not typical of how most people use software. So, while most other people are helping each other out with the "usual" problems, Nix/Guix users have a different set of problems. Sure, they are reproducible problems, often shared by all other Nix/Guix users, but those communities are niche compared to what is typical.
I am not sure, but is it possible that R wants to be its own package manager, which interferes with Guix? A similar problem appears with Python, where on distributions like Debian there is a kind of struggle over which package manager (the system-defined packages, or user-defined ones like pip) has the last word.
I have used Guix a bit with Common Lisp libraries, and that works like a charm. I also found it useful to be able to use new Emacs packages like the newest version of Magit, without having to install or supersede the OS installation.
R does have its own package manager, but I was explicitly not using it because I read there could be issues; instead I installed packages only through Guix. It was mainly a problem compiling things within R: depending on the order in which gcc-toolchain and the R package were set up, it straight up wouldn't work. With later versions it just broke entirely, and I never managed to fix it.
Despite R being the most commonly packaged software on Guix, and despite it being a GNU project and a "Common Lisp for dummies" lispy language used by millions of students, researchers, and data scientists, the developers seem to have a noticeable animosity towards it (although not as much as towards the "snake people"). This makes me sad, as R users would be the first to sign up for what Guix offers if it worked well, and the RcppGuile R package could be used as a template for bridging the gap between the spartan Scheme infrastructure and the extremely rich and beginner-friendly R infrastructure. I feel like this could also help motivate the Guix developers to improve the documentation and the Guile porcelain interfaces, so those could be used directly instead of the copious bash code they all seem to relish writing.
I don't follow. Are you saying that Guix developers have animosity towards R or the other way around? In any case I haven't seen animosity on either side.
FWIW, I also wrote guix.install, which lets you install R packages through Guix (whether or not they are available in Guix) from within a running R session. I maintain R packages in Guix and would like to see more adoption of Guix among R users, so if you have any recommendations on what the pain points are and how to overcome them, I'd be happy to hear them.
This… shouldn't happen. The whole point of Nix and Guix is that you can have multiple versions of the same thing in different dependency chains; they are designed to avoid precisely this issue. If a package relies on something, it will have been built beforehand, and if and only if it's identical (well, content-addressed vs. input-addressed is a whole other thing) will the dependency be shared.
Would you mind sharing exactly what the R packages you were having problems with were?
I wonder if you were relying on globally installed versions of things instead of an R installation that had the packages wrapped into its environment. I’m more familiar with Nix, but you’ll typically see people do something like add this to a local project build input or development shell:
rWrapper.override { packages = with rPackages; [ ggplot2 data_table ]; }
Rather than
nix-env -iA nixpkgs.R nixpkgs.rPackages.ggplot2 nixpkgs.rPackages.data_table
Or the Guix equivalent
guix install r …
Globally installing things can lead to situations where you think it should work but if you really think about how the store dependencies work, they don’t.
Yeah and that's part of what made it so confusing.
Even in a pure Guix shell it only worked in a specific order. The packages were R, TMB (an R package), gcc-toolchain, gfortran-toolchain and make. You need to be able to compile C++. If R was specified before the toolchains, then nothing could be compiled with TMB. I forget the exact error. I did not have those packages globally installed, and I saw the same problems on Guix SD and on foreign distros.
But with R 4.2 I ran into a different problem that I never fixed: anything using the RcppEigen headers would not compile.
And I don't believe Guix wraps packages into an R environment the way Nix does.
I'm maintaining most of the R packages in Guix and helping hundreds of users at a research institute with their R stuff. I haven't seen any problems like that.
The order of installation does not (and cannot) matter. The most common problem I've seen is that people mix packages from Guix with those built with install.packages that have been linked with incompatible system libraries, which cannot possibly work. This problem is easily avoided --- either by using a container shell (guix shell -C) with a separate toolchain (not the system's toolchain) or by not using `install.packages` (e.g. use `guix.install` instead).
Are any of your R users TMB users? Because I ran into that in multiple setups (Guix SD and Guix on foreign distros) where only a specific ordering worked, even when using guix shell --pure. This was back in the R 4.1.2 days. In theory it should be impossible, and yet I ran into it. No, I was not mixing in packages installed via install.packages.
Right now with the latest version of Guix and R 4.2.1 TMB is not usable. Try running:
"guix shell --container r r-tmb make gcc-toolchain gfortran-toolchain"
then try running the linreg.R (with the corresponding cpp file, or any of the examples) example from https://github.com/kaskr/adcomp/tree/master/tmb_examples
and you'll run into "did you mean 'bad_array_new_length'? This is on a Guix SD system too...
And you have to specify make,gcc-toolchain and gfortran-toolchain for it to work normally. If you leave out gfortran-toolchain it compiles but you can't load it and it doesn't work without make. Previously I had it working with those 5 packages in a specific order.
Perhaps this is a TMB-only problem, but it's a giant PITA when it works fine on non-Guix setups.
I noticed that gcc-toolchain and gfortran-toolchain are mismatched: the former is at version 12 while the latter is at version 10. (Someone must have forgotten to update gfortran when they updated gcc-toolchain.)
I get no errors when using this environment:
guix shell --container r-minimal r-tmb coreutils make gcc-toolchain@10 gfortran-toolchain@10
(Feel free to send future problem reports to bug-guix@gnu.org.)
I was too frustrated by the insistence on libre purity by the main Guix channels, even if I think it is superior software, so I switched to Nix (and actually went fully NixOS after a month or so).
I had R package compilation issues on 4.2 as well, but I also had them on my Windows work machine. I'll try to test things out and see if I can figure out if it's still an issue.
That sounds like a dependency not detected by the kludge that at least Nix uses, where it essentially greps everything in the build output for path names that look like dependencies.
Said differently:
- build recipes A and B add paths to the store
- B uses paths from A, but in a way that the dependency tracker does not notice
- if the paths from A were instantiated in the system first, B works by luck
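For the curious, the scan is conceptually something like this (a rough shell sketch, not the actual implementation):

    # scan a build output for strings that look like store paths;
    # anything found is recorded as a runtime dependency of the output
    grep -rao '/nix/store/[0-9a-z]\{32\}-[a-zA-Z0-9._+?=-]*' "$out" | sort -u

If B picks up a path from A through some channel this scan can't see (say, an environment variable set at runtime), the dependency goes unrecorded.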
I would be very happy to see details on your R problems, as I have never encountered anything like that in the many years I have been packaging R things for Guix and supporting R users at the research institute where I work.
I mean, if you don't want to fiddle with it, I really don't get why you would choose Guix to test drive this way of managing your packages. It's a far less mature and less supported Nix.
Not when it comes to R. I'm terribly biased, but R packaging in Guix is --- in my opinion --- very high quality, purely from source (including bundled minified JavaScript), and the tooling is great too.
Unlike R in Nix, Guix does not just automatically wrap R packages, which would lead to build and runtime errors. I would not use R from Nix.
No, because that would be comparing the number of official packages + third party packages with duplicates in Arch against the number of official packages in Nixpkgs. That's as fair as comparing the number of official Nixpkgs packages + every unofficial Nix package on GitHub against the number of official Arch packages.
A direct comparison with AUR is meaningless though. AUR is very unusual because it allows anyone to freely upload packages without going through review. So there are many duplicates and packaging quality varies wildly.
An unfortunate huge one is any way to interact with Nvidia Optimus. I can't be the only dev that wants to eat my GPU cake and have my battery-life-optimization cake too.
Bumblebee and optimus-manager both solve this in the AUR.
I don't think the problem you encountered is due to immutability; the problem is due to the implementation, which has since improved thanks to better documentation and more package support.
The mere possibility of mutation breaks a lot of assumptions and makes program analysis a lot harder. I personally prefer no mutation at all, or mutation only when wrapped inside a cell (UnsafeCell), similar to Rust. For the latter kind, we can treat state as immutable wherever there is no cell, which helps analysis.
That's fair. I blamed immutability when I should have blamed the implementation. Thanks for helping me realize my mistake. :)
Availability of packages is what makes or breaks a distribution, though. If I can't (easily) install the software I need to do my job, I choose a distribution that can. My home Ubuntu server isn't bringing me joy, so maybe now's a good time to give Nix another shot.
In my experience the Nvidia driver support has been pretty painless (once you figure out the correct settings in configuration.nix of course), and the availability of packages is by far the best of any linux distro I've used.
I've been working on a project that uses Nix, and from an archlinux host with Nvidia hardware it has been very far from painless, with nixGL breaking every time Arch updates glibc
I think if you don't use some relatively niche packages, nix is fine. However, there are some quirks, such as setting LD_LIBRARY_PATH for CUDA, that may need some tinkering.
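For what it's worth, that tinkering usually ends up looking something like this (a sketch; cudatoolkit and linuxPackages.nvidia_x11 are nixpkgs attributes, but check what your channel provides):

    # shell.nix: expose CUDA and driver libraries to prebuilt binaries
    with import <nixpkgs> { config.allowUnfree = true; };
    mkShell {
      buildInputs = [ cudatoolkit ];
      shellHook = ''
        export LD_LIBRARY_PATH=${cudatoolkit}/lib:${linuxPackages.nvidia_x11}/lib:$LD_LIBRARY_PATH
      '';
    }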
If you want to give nix another try, I strongly recommend using nix flakes and home manager. Nix flakes let you pin dependency versions, and home manager provides ready-made configuration for a lot of commonly used packages.
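A taste of home manager (a minimal sketch; the values are illustrative):

    # home.nix: declaratively manage user packages and program config
    { pkgs, ... }: {
      home.packages = [ pkgs.ripgrep pkgs.fd ];
      programs.git = {
        enable = true;
        userName = "Jane Doe";            # illustrative
        userEmail = "jane@example.com";   # illustrative
      };
    }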
I built my router and NAS on NixOS. It was a mostly pleasant experience. Being able to sit in an IDE on my laptop and build up a server, incrementally pushing changes to it, with rollback if necessary, was great, and I wouldn't want to go back to anything else.
I wrote about the router here. It's pretty heavy on router stuff and my own thoughts, though...
Wow that is cool. I have been wanting to build both a router and a NAS and run nixos on them since I run nixos for everything else I do. Thanks for writing and sharing about your experience!
Deploy-rs is a great alternative. It works as a wrapper on top of flakes, building locally (optionally cross-building) and copying closures to the target machine, with activation.
One thing nixos-rebuild doesn't give you is a secrets transmission mechanism. I've been dabbling at building something independent of NixOS/Nix that would still do that neatly...
1. 'Nix is to `tar -xf && make && make install` as C/C++ is to assembly'. In many ways, Nix applies the same kinds of improvements that other technologies have.
2. Nix does try to create an elegant programming model of Unix systems... while the Nix programming language is pure, it interfaces with the Unix system by reading files and outputting files.
I'm mixed on to what extent articles like this get to the goal of making Nix more accessible, though. It seems like preaching to the choir to me: if you like the idea of making analogies between "software is files" and "dealing with raw pointers", you'll prob'ly love diving into Nix as is anyway.
Same. I read almost every nix-related post that I see pop up on HN. I want to be convinced. I have not yet been. It seems like `brew install` with (many) extra steps, with little meaningful gain.
I'd describe many of the benefits for developers as like "docker, without containers".
e.g. if you want to try out helix, you could run `nix run nixpkgs#helix`, and it would download + run helix without installing it. (Or you could run `nix shell nixpkgs#helix` to add helix to the PATH in the current shell, without installing helix, etc.).
One use case I'm excited about for developers is the ability to declare the dependencies needed to build the project. So rather than copy-pasting `apt-get install` commands, you'd rely on nix to fetch the required dependencies. (e.g. I love that I don't have to worry about which packages to install to work on qmk_firmware, or on repos which provide a nix shell.)
VSCode's Remote Containers supports a similar workflow to the latter... but it relies on containers.
We use Brewfiles to install binary dependencies needed by various projects. This is only for developers' machines, but it's lightweight and fast: `brew bundle` and you're done.
What happens when you have two projects that use two different versions of the same dependency?
With Nix, you can "install" many different versions of the same program side by side in the store, and then "activate" the one you need at runtime (or with direnv).
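For example (the nodejs attribute names are from nixpkgs at the time of writing; adjust to taste):

    # project A, in one terminal
    nix shell nixpkgs#nodejs-14_x
    # project B, in another terminal, with a different version active
    nix shell nixpkgs#nodejs-18_x

Each shell sees only the version it asked for; both live side by side in /nix/store.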
Has never happened. I know this is something that is given as a benefit of Nix, but I have personally never encountered the situation. For every project I have worked in professionally their tool chain was standardized enough that the situation never arose.
If parallel installations like you describe are a requirement - and I'm sure they are - then Nix looks like it could help. That's just not something I have ever found myself needing.
With node this happens everywhere, all the time, hence the popularity of tools like nvm or fnm. At my current company we have projects that absolutely require Java 8 or Java 11, and I'm sure we'll soon have Java-17-only projects, sometimes with corresponding requirements on tomcat or maven versions. It's also a common complaint with python, where the most common solution seems to be a bunch of python3.x packages from your package repository, though there have been tools like tox or pyenv for this, and others that try to combine solving this problem with virtualenv management.
That said, if you just want language generic toolchain management, asdf seems to have a much lower barrier to entry.
Personally I have been using nix as a homebrew replacement, because it allows me to sync my packages and versions between my personal Arch setup and my day job Mac OS setup with a single configuration
Nix is an abstraction around `tar -xf && make && make install`. In fact, those commands are even executed by nix, just in a sterile reproducible environment.
What sets nix apart from other package managers is that you are never running `make install` on your root filesystem, but `make` can still dynamically link to libraries (that also aren't installed on the root filesystem) without editing the Makefile directly to find them.
This way, you can't break existing packages, you can trivially roll back changes (because updates are new instances), and you can always start over fresh.
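For concreteness, a package derivation is roughly this shape (a sketch; the hash is elided):

    { stdenv, fetchurl }:
    stdenv.mkDerivation {
      pname = "hello";
      version = "2.12";
      src = fetchurl {
        url = "mirror://gnu/hello/hello-2.12.tar.gz";
        sha256 = "<hash elided>";
      };
      # stdenv's default phases perform the tar -xf, ./configure,
      # make, and make install (into $out) steps
    }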
The only problem is that you have to wrap every package in a derivation, then publish that derivation somewhere. Right now, all derivations are tracked in a single git repo (with dozens of branches), all coordinated over GitHub Issues, and referenced by nix itself by an arbitrary (versionless) name in a global namespace in this file: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...
That last bit can be avoided by using pinning and flakes, but it's still the default way to use nixpkgs, and the documentation doesn't clarify much or offer a better, consistent UX paradigm.
The article talks about the file system abstraction and that NixOS is parting ways with it:
> It is not even a new idea for Nix to propose parting ways with one of the most pervasive skeuomorphisms in computing, the file system, which naturally followed from an era where everything was a piece of paper.
What I am wondering is whether this is not extremely similar to the way plan9 handles files. As far as I understand, in plan9 there is still a file system, but there is no common root; every process can have its own view of what is, for example, in /bin.
If you haven't read the original LISA paper on nix, it's a great read. So far it's the best resource I've found that documents what nix is, why it exists, and what it's solving: https://edolstra.github.io/pubs/nspfssd-lisa2004-final.pdf
Back in college, the engineering school had a farm of Linux servers from which you could access all your documents, as well as log in to your account from any number of computer labs around campus. Back when I was first learning Linux (not that long ago, 2008), I fell in love with how many things you could customize, and really create your own experience. When I got to college, I wanted to replicate this ability (install ANY program! use ANY desktop environment!), but predictably, their machines were pretty locked down.
I went snooping around the internet, and found that there was this magical software called Nix, which would let you have a package manager without root! The Linux computers in the lab didn't have root, but they did have GCC. I started learning all about build systems (mainly that you could specify a custom --prefix and essentially create your own filesystem within your filesystem), and got to work compiling Nix (or, I think, Guix) from source. It's actually a fantastic amount of work to go from GCC all the way to a functional Guix, even with access to every source tarball on the Internet.
Eventually some admin emailed me about this project and I stopped working on making it happen shortly after, but it was such a formative experience in my tech life that I always think back fondly on it when Nix pops up.
This is also how I got into Nix! I just used a 'normal' Nix install with a fake chroot (pivot_root/proot) environment, though, so I didn't have to build from source and I got to leverage the binary caching. There were some quirks, but overall it worked well.
While nix can be very intimidating to get going with, I think it can be fairly simple for just getting developer environments spun up. I highly recommend trying to add a `flake.nix` to your projects; it makes onboarding new devs a breeze: https://medium.com/immuta-engineering/nix-and-skaffold-for-p...
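A dev-shell flake can be as small as this (a sketch pinned to a release branch; swap in whatever packages your project needs):

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.05";
      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            packages = [ pkgs.go pkgs.gopls ];  # project toolchain goes here
          };
        };
    }

New contributors then just run `nix develop` and get the same toolchain as everyone else.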
Been using nix for a few weeks and while I feel like it really has some great ideas, here are what my pain points have been:
- not great documentation, especially for newer features like flakes
- nix wants to replace rustup when rustup is already doing great
- nix doesn't seem to work that well on mac. Not sure if it's our config or if it's painful on mac in general
- the biggest issue: it doesn't work well with tools (vscode, sublime merge, etc.), as you need to launch them within a nix shell, and that doesn't work well (at least on mac). Now I'm wondering if it'd make sense to install those tools within the flake dev shell…
> the biggest issue: it doesn't work well with tools (vscode, sublime merge, etc.), as you need to launch them within a nix shell, and that doesn't work well (at least on mac).
In what sense?
In terms of "some nix shell provides some tools, and VSCode can't see those"... direnv is one way to work with this. e.g. direnv integrates with nix to load the nix shell at that path, and a direnv plugin for VSCode etc. can pick up the direnv file, so that the editor loads the nix shell appropriately.
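For reference, the direnv hookup is typically a one-line .envrc at the project root (the flake variant assumes nix-direnv is installed):

    # .envrc
    use flake   # or `use nix` for a classic shell.nix

After `direnv allow`, anything that honors direnv, including editor plugins, sees the shell's tools.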
I tried everything, from direnv (with the nix-direnv thing and various direnv extensions for vscode) to opening vscode from a flake dev shell; nothing works on my mac.
> Which other results from programming language theory and mathematics will we be able to leverage to make software build quickly, work reliably, and further tame Unix?
I think Apenwarr's redo (https://redo.readthedocs.io/en/latest/), based on an idea from D.J. Bernstein, is a very interesting development, because it also has the "purely functional" principle at its core - and this allows for much faster parallel builds.
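To illustrate redo's model: each target is built by a small shell script, and dependencies are declared while the script runs (a minimal sketch):

    # hello.do -- run `redo hello` to build the target `hello`
    redo-ifchange hello.c   # declare a dependency as we go
    cc -o "$3" hello.c      # $3 is the temporary output file redo provides

Because a .do script is (ideally) a pure function of its declared inputs, independent targets can be rebuilt in parallel.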
To add, there is another angle on immutability / purely functional definitions, which is compatibility of APIs. Rich Hickey (the creator of Clojure) gave a talk titled "Spec-ulations", in which he pointed out that certain operations on an API keep it compatible - adding functions, adding symbols (like enumeration values), loosening preconditions (for example, adding keyword arguments to a function), tightening post-conditions (for example, removing possible error exit codes), and so on - while others, such as removing functions, tightening preconditions, or widening post-conditions (such as adding error codes or exception types), break compatibility.
And then he points out that the API itself can be seen as a persistent data structure, like a dictionary where you can add new things but not remove old things, because that would break client code. I think this is a very important idea.
I have tried starting with nix several times but never got everything working. However, after being stuck in several dependency hells, both in personal projects and at work, I knew I wanted what nix was proposing.
This last time I tried, it actually clicked much better. I think flakes have not just provided the technical solutions they were created for; they have also made it much easier to understand a nix repo, and to a newbie like me it almost seems to result in cleaner code (I now much more often end up understanding what a nix file is doing). That, together with the updated nix command in general, makes things much more intuitive in most cases.
So I just wanted to say to the nix team: your focus on UI is paying off for newbies like me.
Great introduction and overview of the theoretical foundations of the nix ecosystem!
For people new to it, I am trying to provide a quick glossary of terms here, as I understand them after about 2 years of using nix.
* nix: a language for creating derivations, and the interpreter/package manager which implements said language. It currently offers two command-line interfaces: the stable one with hyphenated commands like "nix-build", "nix-shell", etc., and the newer "experimental" one, which includes support for nix flakes and so on, without hyphens: nix build, nix shell, nix run, etc.
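Side by side, the two interfaces look like this:

    nix-build '<nixpkgs>' -A hello   # stable, hyphenated CLI
    nix build nixpkgs#hello          # newer, flake-based CLI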
* nixpkgs & nixos is a huge mono-repo containing instructions how to fetch the source of tenthousands of software packages and how to build them on supported platforms. It also contains the whole nixos operating system and tooling to support all of that.
This tooling includes higher-level helpers for language-/environment-specific packaging, like "buildGoModule", "buildRustPackage" and so on, as well as e.g. tooling to run integration tests in a whole cluster of inter-connected linux VMs!
Packages which are submitted to nixpkgs must fulfill certain criteria, such as not using "IFD" (import-from-derivation; to simplify: letting nix evaluate nix code which was generated by another derivation / "nix package").
nixpkgs is alive and well, with lots of daily contributions and an everlasting effort to keep Hydra (the nix-specific CI/CD system) and the public binary caches up to date and responsive. Thanks to all maintainers & contributors!
* flakes are an approach to standardizing a way to package nix code outside of nixpkgs while still keeping it re-usable. They are still "experimental" as the details are figured out, but nevertheless used in production. There are some frameworks to keep boilerplate low, like "flake-utils", "flake-parts" and others, as well as e.g. deployment tools like "colmena" and "deploy-rs" and re-usable helpers for system configuration like e.g. https://github.com/nix-community/impermanence
There's lots of other stuff in the community; things like home-manager, direnv + flakes and devshells changed my workflow fundamentally for the better since I switched. If you've got the time and are still interested, join us on matrix or elsewhere :)
https://github.com/nix-community/awesome-nix
I'll consider flakes usable for packaging software when they support passing options. The respective issue [0] has been closed, unfortunately. Perhaps I am misunderstanding what flakes are meant to be (a more formalized, standard way to define nix packages and apps), but a lot of packages in nixpkgs have a plethora of parameters that, as of now, cannot really be mapped to any functionality in nix flakes.
It's nice that there is a workaround, but passing build options is not something that should require a library. There should be a well documented standard way to do it.
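To make the ask concrete: with classic nixpkgs you can tweak a package's build parameters via an override, e.g. (viAlias/vimAlias being parameters the nixpkgs neovim wrapper accepts, as far as I know):

    # build neovim with extra options passed to the package function
    pkgs.neovim.override { viAlias = true; vimAlias = true; }

Flake outputs, by contrast, are plain values, so there is no standard place to hang such parameters.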
I think Nix is great to add dependencies to your project without relying on your local env or separate Docker containers for that. We use it for bob[1] and so far Nixpkgs proved very valuable. It's amazing how many packages are pushed by the maintainers, there are over 80 000 packages there.
One problem would be when you don't find the package on Nixpkgs and have to write your own expressions to build a package.
Does Nix still build a lot of things from source? When I tried it a while ago everything took forever to install because it was compiling locally. Do they have the concept of repos and repo mirroring?
Yes, by default it builds everything from source. However, most packages will have a binary cache hit on https://cache.nixos.org/ so your installation or update will download them instead of building them. Also, you can set up your own binary cache (https://nixos.wiki/wiki/Binary_Cache) and build machine to make your own projects build faster.
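Adding a cache on NixOS is a couple of lines of configuration (a sketch; the second cache and its key are placeholders):

    # configuration.nix
    nix.settings = {
      substituters = [ "https://cache.nixos.org" "https://mycache.example.org" ];
      trusted-public-keys = [ "mycache.example.org-1:<key elided>" ];
    };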
> When I tried it a while ago everything took forever to install because it was compiling locally
tl;dr: This is probably due to incompleteness of the binary cache. This is pretty rare in general, but it used to be relatively easy to hit on macOS on Nixpkgs unstable before the community added some channels for use on macOS. Check out the darwin stable release channels of Nixpkgs to avoid this issue if the current defaults don't show enough improvement for you, and see below for a more complete explanation
> Do they have the concept of repos and repo mirroring?
Nix is fundamentally a source-based package manager. This means it does not use binary artifacts enriched with metadata to perform dependency calculations at install time. This, in turn, means that it doesn't have a use for binary artifact repos of the same kind as you see for DEB or RPM.
However, Nix does support caching and distributing binary artifacts in a different way. Since all Nix builds are deterministic modulo (hopefully inconsequential) indeterminism in upstream build processes, once Nix is right about to build a source package— it has figured out all of the build parameters and source tarballs to use and so on, for that package and recursively for all dependencies— it can just ask a remote server 'Hey, do you have anything for these?'. And the remote server can answer without storing or understanding any metadata about dependencies, or statefully storing a collection of packages at a particular collective repo version, or anything like that. If the remote server answers 'no', then instead of just choking, like a binary packages manager must when a repository is missing a package, Nix just chugs along like 'ok, I'll build it myself, then!'.
So with Nix, there are hosted collections of binary artifacts, but the metadata associated with them is more minimal, and they play a much less crucial role in the install process.
The 'repo mirroring' thing likewise has an equivalent: Nixpkgs' build artifacts are uploaded to S3 and then distributed via CDN. There's no syncing mirrors because there's no state to sync (multiple copies of different versions are hosted in the same place at once, since they're quasi-content addressed). And the CDN hopefully takes care of the local mirror issue for you, but you can set up your own Nix build cache as well, or add custom binary caches. If the CI/CD system you use to do this has 'substituters' (binary caching) enabled, then it will just download packages from the main CDN instead of building them, just like your local machine would! So aside from serving the binary cache publicly, 'building' Nixpkgs is the same as mirroring it.
For third-party efforts outside Nixpkgs, it's common to use the 'free tier' offered by Cachix, a proprietary, freemium SaaS binary cache for Nix builds which is free for open-source projects.
Overall, I think this is better than the old-school setup with binary package managers and their repos. But one thing that is possible here is binary cache misses, where your collection of package recipes includes some recipes that have never been publicly built and cached.
Nix uses the notion of release channels to deal with this: a Nixpkgs channel is a snapshot of Nixpkgs which only advances to a new version when every recipe in some collection has been successfully built (and cached!) by CI/CD. This lets you get the best of both worlds: binary caching for everything you could want by default, and totally transparent integration when you want to install a specific package with your own patches, customized build parameters, etc.
Generally speaking, the 'default' channels for Nixpkgs are configured based on collections succeeding on Linux/NixOS builds, so the recipes on them may not always be 'in sync' with the macOS binary caches. If you use one of the channels tested against macOS, you avoid this possible mismatch. Nowadays this is the default, and there's even a stable release channel for macOS. But this was not always so, and consequently you used to get kind of a lot of cache misses on macOS.
Overengineered? Maybe, depending on your use case. NixOS is pretty popular as a desktop OS within the community, for example, and I could see a case that its guarantees and strictures are overkill there.
But maintenance is really easy. You're basically never forced to rewrite or throw away tons of config. Doing literally years' worth of updates at once is typically pretty painless. (Adding new packages to Nixpkgs or new features to NixOS can range from trivial to very hard; it just depends on the details.)
There are some different/new tools for creating your own Python packages these days. It's still not truly solved in the sense of having a single clear winner, but one of these new package generation tools might serve you better:
The tools available to you at the time (pypi2nix and maybe python2nix, if it was a long time ago) have been abandoned in favor of the newer tools, I think chiefly poetry2nix but I'm not sure.
There's still the Nixpkgs buildPythonPackage stuff, I think, if your goal is to upstream a lib into Nixpkgs. But if you just want to build your own Python applications and vendorize the deps (e.g., for work), you might try one of the tools above, which weren't available 3+ years ago.
dream2nix is by the author of mach-nix IIRC and has the goal of establishing a unified standard and codebase for ${proglang}2nix type package generators. But mach-nix is still maintained and might be the more feature-complete choice between them.
Maybe Nix-y Python users and developers can reply with some of their experiences using those tools for real projects :)
I have used plain nixpkgs, poetry2nix and mach-nix for packaging "real" projects. My biggest take away is that python packaging is a hot mess. This isn't exactly news that no one has heard before though.
The initial work for packaging a complicated python app is dominated by sorting through a lot of confusing errors, no matter what tool you use. poetry2nix and plain nix have been my best experience so far in python packaging, though.
For my simple python packages, I'm using plain nix and flit, which has been the simplest. It's not feasible for python applications that need complicated dependency version resolution, due to python dependency pinning, though.
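The flit route is pleasantly small; roughly (a sketch using nixpkgs' buildPythonPackage; the package name is illustrative):

    # default.nix for a flit-based package
    { python3Packages }:
    python3Packages.buildPythonPackage {
      pname = "mypkg";     # illustrative
      version = "0.1.0";
      format = "pyproject";
      src = ./.;
      nativeBuildInputs = [ python3Packages.flit-core ];
    }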
[1]: https://discourse.nixos.org/t/nixpkgss-current-development-w...