GNU Stow needs a co-maintainer (savannah.gnu.org)
170 points by nequo 5 months ago | 115 comments



From https://www.gnu.org/software/stow/ :

"GNU Stow is a symlink farm manager which takes distinct packages of software and/or data located in separate directories on the filesystem, and makes them appear to be installed in the same place."

The idea is that instead of installing package foopkg directly into /usr/local, you could install it to /opt/foopkg-v1.2.3. Then you can run stow to make a bunch of symlinks like /usr/local/bin/foo -> /opt/foopkg-v1.2.3/bin/foo. Upgrade it to a new version, re-run stow, and now all the symlinks point to /opt/foopkg-v4.5.6/bin/foo and so on. It's pretty nifty.
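
In command form, that workflow is roughly this (a sketch, assuming the packages live under /opt as above; --dir, --target, and -D are stow's real flags):

  $ stow --dir=/opt --target=/usr/local foopkg-v1.2.3    # create the symlinks
  # ...later, after installing the new version...
  $ stow --dir=/opt --target=/usr/local -D foopkg-v1.2.3 # -D deletes the old links
  $ stow --dir=/opt --target=/usr/local foopkg-v4.5.6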

However, I used it more for managing dotfiles in my home directory than anything else, making links like ~/.vimrc -> ~/src/my-config-repo/.vimrc . I much prefer using chezmoi for that now.
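
For the dotfiles use case, the layout is typically one subdirectory per "package" inside the repo, stowed into $HOME. A sketch, assuming a vim/ package directory containing .vimrc:

  $ cd ~/src/my-config-repo
  $ stow --target="$HOME" vim   # makes ~/.vimrc -> ~/src/my-config-repo/vim/.vimrc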


I've found that stock git works great for managing dotfiles, without any extra tools needed. Just a few lines of gitconfig and a shell alias is enough. It's all explained here: https://www.atlassian.com/git/tutorials/dotfiles
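
The whole trick from that tutorial boils down to a bare repo plus an alias, roughly:

  $ git init --bare "$HOME/.cfg"
  $ alias config='git --git-dir=$HOME/.cfg/ --work-tree=$HOME'
  $ config config --local status.showUntrackedFiles no   # hide the rest of $HOME
  $ config add ~/.vimrc && config commit -m "track vimrc"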

Perhaps other people have more complex use cases than me.


There's nothing wrong with that setup. It falls over when you start pushing it across multiple machines with substantial differences. That's where Chezmoi's templating is so handy. For example, 99% of my config is the same between my desktop Mac and my various Linux servers, but I use different ssh_config settings on the two OSes. Chezmoi makes that very easy. Stock git doesn't. I could script something up to handle that for me automatically, and before I knew about Chezmoi, that's exactly what I did. Now I'd prefer to let someone else write and maintain all that for me so I can move on to working on other things.
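
A sketch of what that looks like in chezmoi (.chezmoi.os is a built-in template variable; the ssh options are just placeholders):

  # ~/.local/share/chezmoi/private_dot_ssh/config.tmpl
  Host *
      ServerAliveInterval 60
  {{- if eq .chezmoi.os "darwin" }}
      UseKeychain yes
  {{- end }}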


That's what branches are for. I have two personal machines and two work machines, all of which have diverging configs, which I push to two remotes (one work specific), and I merge changes between them.

This is how Git was designed to be used.


Until you want to introduce a change that affects both machines. You need to start rewriting history or cherry-pick the changes on both branches. The further the history diverges, the harder this becomes.

Using branches for this does not scale.


This is where a VCS like Pijul or Darcs would shine, since patches commute across "branches" without a new hash.


How does that work if there's a conflict? Whether or not there's a hash involved, you still have to manually apply the patch to each branch, would you not? If there's no conflict, merging each branch up in git is not hard at all, but it's still 'n - 1' extra tedious operations that you don't need with chezmoi.

For example, say I have a line "export FOO=bar" in my .bashrc on one machine and "export FOO=baz" on another. If I then indent the line on the bar branch and try to merge to the other one, something has to tell the baz branch that the right line combines both differences: " export FOO=baz". Except the conflict may not be so obvious to resolve as that! And whatever you do, you'll either have a "trellis" of 'n' branches if you merge, or 'n' parallel linear branches if you cherry-pick everything. With both of those history layouts it quickly becomes very hard (for me at least) to make sure the branches all contain everything they should and nothing they shouldn't.

Whereas with chezmoi, the bashrc file is a template that is the same on all machines and simply says "export FOO={{ .fooval }}", and chezmoi does the templating. So you can just indent the line and fast-forward/apply on other machines and that's it.
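
Concretely, the per-machine value lives in chezmoi's config file (fooval is a made-up variable name, as above; the paths are chezmoi's defaults):

  # ~/.local/share/chezmoi/dot_bashrc.tmpl
  export FOO={{ .fooval }}

  # ~/.config/chezmoi/chezmoi.toml (different on each machine)
  [data]
      fooval = "bar"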


It's not intended as a dotfile manager replacement, but a Git replacement.

As long as the patches don't conflict, it's fine and dandy; if there's a collision, you record a resolution that fixes the conflict.


Also, the conflict resolution is just another patch (Pijul patches aren't just regular diffs; they carry a lot more information), so should you decide to merge it back upstream after all, you can cherry-pick the conflict resolution along with the conflicting patch, again without changing the hash.


One of the motivations behind Pijul was to manage custom versions of Nixpkgs while still benefiting from upstream commits. One issue that's hard with Git is that when you also want to contribute multiple changes back, you have:

1. A branch pointing to the latest nixpkgs head.

2. A branch with commit A (let's say commit A introduces a new package to nixpkgs).

3. A branch with commit B (changing some config file).

4. A branch currently in use for your own machines, with the commits from branches 2 and 3 rebased on top of branch 1.

Every time you do anything, you'll have to remember the flow for getting the commits fetched/rebased. Which is fine if you have a DevOps team doing exactly that, but isn't too cool if you are anything other than a large company.

In Pijul, you would have a single channel (branch sort-of equivalent) and two patches (A and B) instead, which you can push independently from each other at any time if you want to contribute them back.
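
A rough sketch of that flow; Pijul's exact flags have shifted between versions, so treat the details as approximate:

  $ pijul record -m "A: add new package"   # record patch A
  $ pijul record -m "B: change config"     # record patch B
  # each patch can later be pushed upstream on its own,
  # without rebasing or touching the other one
  $ pijul push upstream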

Darcs does the same but wouldn't scale to Nixpkgs-sized repos.


I have no idea what you're talking about. You just merge the change.


I was bitten by merge conflicts many times with such a workflow. Not anymore.


With `Match exec ...` you can define arbitrary machine-specific sections.
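
For example (Match exec runs a command and applies the block when it exits 0; UseKeychain is a macOS-only option, hence the IgnoreUnknown guard):

  # ~/.ssh/config
  IgnoreUnknown UseKeychain
  Match exec "uname | grep -q Darwin"
      UseKeychain yes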

I had more problems with differences between versions than with differences between machines. tmux is the sort of program where the available config directives change every major release and there is no full backwards compatibility.


I evaluated Stow and tried Chezmoi for a while but settled on YADM. It’s the bare git repo idea with a little more sugar sprinkled on top. Perfect for my needs.


I switched from Yadm to Chezmoi.

The main reason was that there is invariably something slightly sensitive you don't want in a dotfile while the rest of the file is okay. Yadm relies on third-party tools for Jinja templating: the first one, envtpl, stopped being maintained, and the second one, j2cli (both Jinja2 templaters), isn't very well maintained either.

With chezmoi I just use the Go text/template templater, which I know will always be maintained. The integrated password manager functionality in chezmoi works great too.

I did initially use stow, but symlinks are just bad; you end up with all sorts of problems that I can't even remember anymore. My whole dotfiles repo is 7MB, so if a copy is made from a "source tree" to my home dir, that's okay.

Chezmoi also encouraged me to do things more deterministically based on hosts and to significantly reduce the number of "scripts" that I run, which led to fewer bugs. I use the same set of dotfiles across a number of my systems.


I use yadm with the default templating system which is based on awk...

https://yadm.io/docs/templates

I like yadm because it simply has no dependencies and can be installed literally anywhere, on any arch (which is important to me).

The yadm/awk templating system is good enough for me; it lets you do "if host, then output this, else output that" kinds of things. I never had a need for more.
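
Roughly, a yadm template looks like this: name the file something like .gitconfig##template, run `yadm alt`, and the conditionals get expanded (attribute names per the docs linked above):

  {% if yadm.hostname == "work-laptop" %}
  [user]
      email = me@work.example
  {% else %}
  [user]
      email = me@home.example
  {% endif %}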



I think I heard about it before, but haven’t tried it.

https://www.chezmoi.io/


I currently use Borg (via Vorta) to manage my dotfiles. I only have about 5, and half of them have secrets.

I could do it all with shell... but every few lines of custom scripting and config I can ditch is one less thing to worry about, and I don't really need a VCS for just a few kB that rarely change.

I've been using a lot more VSCode extensions lately though, so perhaps I'll want to do something for that.


yadm https://yadm.io/ uses git for file management, but also provides some convenience on top of that


Can recommend this setup as well, it's great.


>It's pretty nifty.

It's how Amazon's stuff used to work (though not using Stow) back several years ago. No idea if they've migrated from that approach to containers, or similar, yet.

Every application you deployed would have the necessary components deployed (or re-used if you had something else that already used them), and then the application space would be built from symlinks to those parts. Worked really well.


Amazon has an internal Nix/Guix? Probably not much public info on this...


It's not the same.

It's been a while, so I'm sure to get things wrong, but you basically had different package groups you could set up. So a service would have its group that it could update, test, and deploy with at once.

But like if I produced a package, and another team depended on it, there was no guarantee that the group I ran in CI had versions in common with the group that they deployed.

I also remember some weirdness just within one package, like maybe your PR build was based on your local group setup and not anything "official".

The coolest thing about it is that you could make a PR against multiple repos at the same time, even if one depended on the other. Like you could add a function to a library in one repo and call it from another repo in one PR.


Alternatively, Amazon is using stow (a common GNU utility whose info page refers to a version of Perl released in 1992) or something similar to it, rather than Nix or Guix, which didn't exist when Amazon started.


It used neither, but its own implementation of the concept ("symlink all the things!"), which is much older than both and has lots of other implementations too, aside from Nix and Stow.

It's how gobolinux works too, for example.


Would Amazon rely on stow when it is in this peril? Unless they really do rely on it and maintain an internal fork, which would make this situation even worse. Or they use it anyway...


Which is also the software this submission is about


Yes. From what I was told, it was already of reasonable vintage when I joined in 2013.


Still works on link farms, yep. And works pretty well!


Ah, symlink farms, how I love to hate thee! They are alive and well.


Why do you hate them?

(FWIW this is a sincere question; given the number of these things I touch, I would very much like to know if there are problems I need to know about and/or better alternatives)


What is the use case for this? Is the idea that it can automatically turn a package into a portable package, effectively? Or is it so you can install multiple versions of the same software without conflict between them?


You install libfoo 4.7 to /usr/local/stow/libfoo-4.7 (such that you have /usr/local/stow/libfoo-4.7/bin/foocfg and /usr/local/stow/libfoo-4.7/lib/libfoo-4.7.0.so and /usr/local/stow/libfoo-4.7/share/man/man1/foocfg.1.gz), and then libfoo 4.8 to /usr/local/stow/libfoo-4.8. Then from /usr/local/stow you run `stow libfoo-4.7` and all the contents mentioned above are symlinked into /usr/local appropriately. Then if you want to switch libraries you unstow that one and stow version 4.8.

It's highly configurable, so you can do a lot more with it than that, but that was the idea behind it 25 years ago. There were whole distros based on that, though it fell out of favor when containers became a bigger thing in the late oughts.


It lets you have informal package management of self-compiled binaries in parallel with your distro's package manager. With Stow you can install updated libs and applications into /usr/local and don't have to be concerned with conflicts. At worst you may need to set LD_PRELOAD to bypass system libs. Very useful with Debian stable when you need a new feature in something and don't want to wrestle with backports.


I've never had a problem installing self-compiled stuff into /usr/local without conflicts. For Stow to be useful to me, I'd have to have the problem of installing multiple versions (upgrades) and needing to be able to roll back easily, along with any config files and such.


The problem is when you want to uninstall things. With Stow you don't have to track down the installer droppings scattered everywhere. The nice part of Stow is that it builds minimal symlinks and converts them to a deeper hierarchy once a second package wants to use a common directory.


In the distant past, /usr/local/stow was an NFS mount. Each machine could maintain different symlink trees.


And this sort of thing is still common in large shared research computing clusters, where stuff gets installed in arbitrary locations and old libraries with obscure build dependencies are the norm.

For this though, "modules" is still around [0]

[0] https://modules.sourceforge.net/


Curious: How do you uninstall stuff you manually installed in /usr/local?


I’ve used this a lot on Debian systems so that I could just use apt to remove the manually-compiled version: https://en.m.wikipedia.org/wiki/CheckInstall

These days I mostly use Nix, which basically eliminates this problem.


I don’t know how I never heard about CheckInstall! This is great.

Can Nix do what CheckInstall does, or do you need to manually build a Nix package for the program version that you want to install?


I mostly use nix with direnv so I never have to install anything globally and, between overrides and a minimal understanding of how to write a custom package, it’s surprisingly easy to get the tools you need for each project.


Anything I install in /usr/local is going to be something I really need, and for which there isn't a distro package. It stays for the life of the system.

If I wanted to install something into /usr/local that would be suspected of needing removal later, I'd build it with /usr/local as a prefix, but install it in a temporary directory, then make a tarball package out of that to keep track of the file list. That could be used to remove it. I could trivially generate an uninstall script by using find in the package directory to get a list of relative paths, converting each to an rm command. The uninstall script would be put into /usr/local and run from there.
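
A sketch of that scheme (package name and paths made up for illustration):

  $ ./configure --prefix=/usr/local && make
  $ make install DESTDIR=/tmp/stage
  $ cd /tmp/stage/usr/local
  # turn the staged file list into an uninstall script
  $ find . -type f -o -type l | sed 's|^\./|rm -f /usr/local/|' > /usr/local/uninstall-foopkg.sh
  # then copy the staged tree into place
  $ cp -a . /usr/local/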


> If I wanted to install something into /usr/local that would be suspected of needing removal later, I'd build it with /usr/local as a prefix, but install it in a temporary directory, then make a tarball package out of that to keep track of the file list. That could be used to remove it. I could trivially generate an uninstall script by using find in the package directory to get a list of relative paths, converting each to an rm command. The uninstall script would be put into /usr/local and run from there.

Compared to using stow, this is two orders of magnitude more complicated :-) With stow, you simply install it anywhere, and stow will make the symlinks into /usr for you. When you want to uninstall it, stow will remove all the symlinks. This way, I would install each package into its own directory. When I want to remove it, I use stow to delete all the symlinks, and then just delete the directory.

No need for "make uninstall", etc.


It's only theoretical; I've never uninstalled anything out of a /usr/local.

What Stow is doing, by the way, is better achieved with overlayfs, which didn't exist when Stow was first introduced.

With overlayfs you can specify multiple directories that are merged together. Multiple packages that are rooted at /usr/local can be mapped there with overlayfs.
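
For example, a merged view of two package trees (overlayfs with only lowerdirs and no upperdir is read-only; unlike symlinks, the mount has to be redone or put in fstab after a reboot):

  $ mount -t overlay overlay \
      -o lowerdir=/opt/foopkg-1.0:/opt/barpkg-2.0:/usr/local \
      /usr/local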


And so your development environment artifacts can be linked in to the environment root just like any other package (really awesome).


Sorry I’m not super clear on how this works. Could you explain what it means to be linked in to the environment root?


Is there an overlap of functionality between GNU Stow and OSTree?


I've been using Stow for about two years now to manage my dotfiles. To improve the UX, I've wrapped it in a small utility which allows me to define packages (directories of dotfiles that should be managed) such as zsh (containing .zshrc, .zshenv, .zlogin, etc.) and their respective locations [1]. There are a few other niceties such as modular Zsh file sourcing, allowing encapsulation by OS, automatic git-crypt support, and dependency resolution across Mac and Debian.

I've been tempted to try NixOS due to its emphasis on config-as-code, but this isn't something I can feasibly do, as it would fragment my dotfiles across different ways of thinking, which conflicts with my desire to have one repo for every device. I've achieved that pretty successfully: these dotfiles exist on my personal/work Macs, WSL, work/personal Linux workstations, and a few colleagues' devices. All of it works out of the box with a single $ git clone and an invocation of a bootstrap.zsh file which installs and sources everything you'd need.

The real magic behind all of this is Stow, so I'll always be eternally grateful to its maintainers. If I wasn't a complete stranger to the GNU ecosystem, I'd step up and offer my help.

For anyone who's curious here's my dotfiles: https://github.com/o-y/dotfiles

1: https://github.com/o-y/dotfiles/blob/main/bootstrap.zsh#L38-...


The GNU Stow documentation has a curious blind spot; it doesn't mention the DESTDIR convention for installing in a separate directory:

  $ ./configure --prefix=/usr/local
  $ make
  $ make install DESTDIR=/usr/local/stow/whatever
Using "make prefix=..." is not the main mechanism for overriding the install location; DESTDIR is. DESTDIR is widely supported, and documented in the GNU Coding Standards, in the Makefile Conventions section of the Release Process chapter:

https://www.gnu.org/prep/standards/html_node/DESTDIR.html


(Referring to https://www.gnu.org/software/stow/manual/stow.html#Other-FSF...)

They serve two different purposes. DESTDIR places files in a staging directory, e.g. to be packaged into a tarball.

In your example, you will end up with the program at /usr/local/stow/whatever/usr/local/bin, which I'm guessing Stow is trying to avoid, because it's ugly and the extra directories are unnecessary. Not wrong, though. With their approach, it ends up at /usr/local/stow/whatever/bin.


Right. So if the package supports install time prefix with no hassle, it could be done with make DESTDIR=/usr/local/stow/whatever prefix=/ to get rid of the usr/local components. If the prefix override causes a problem, then just live with the extra components. I'm guessing that in that case you tell stow that your package root is at /usr/local/stow/whatever/usr/local. Stow doesn't care about the extra components; your package can be anywhere you like, right?


That's not right either. The --prefix needs to be /usr/local or /usr/local/stow/package-1.0, otherwise many packages won't find their own files. The prefix path will get compiled into the binary or configuration for a lot of packages, it's not just an install time thing.

Using --prefix=/usr/local/stow/package-1.0 is problematic whenever you have a package with plugins, themes or other stuff, as those go to /usr/local/share/package/... while the app is looking in /usr/local/stow/package-1.0/share/package/

Using DESTDIR and manually removing the usr/local prefix from the directory tree is what I would consider the correct way, even if it's a bit annoying.
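
i.e., something along these lines (package name made up):

  $ ./configure --prefix=/usr/local
  $ make && make install DESTDIR=/tmp/stage
  # drop the usr/local/ part of the staged tree when moving it into the stow dir
  $ mkdir -p /usr/local/stow/package-1.0
  $ cp -a /tmp/stage/usr/local/. /usr/local/stow/package-1.0/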

Either way, these days I would just recommend to use Nix instead, which is a much more complete solution for what stow tries to do.


I understand the compile-time prefix. But in some projects, you can override the configured prefix variable during "make install" without changing anything in the package; the install steps will just accept those paths, as a hack for shortening the paths, in packages where that works.

The Stow documentation mentions this also.


--prefix= exists to configure the install location, and you shouldn't use any other mechanism for that.

DESTDIR exists to add another prefix on top of what --prefix= specifies, for the purpose of temporarily copying the program into, for example for packaging. A lot of programs will not run from their DESTDIR location, and they must be copied out of DESTDIR to run.

For example, --prefix=/foo DESTDIR=/bar will install into /bar/foo, but running /bar/foo/bin/prog will not function properly, until you "mv /bar/foo /foo" and run it as /foo/bin/prog. There's a very limited set of programs that will try to figure out their prefix at runtime, by checking the location of the binary, but this is hard to do properly and comes with caveats, and the programs that support this are few and far between.


Poor stow. We used it in the stone age about 20 years ago for unpackaged, shared software management on academic research clusters (Think "/usr/{{other hierarchy}}" over NFS). The problem with it is it depends entirely on symlinks and some programs get confused or just don't like them.

Nix, hab/habitat, containers, or overlay filesystems (such as with flatpak, etc.) are options that might work better and get around this problem.


I used to use it quite a lot actually. Nowadays maybe twice a year, if I need to install a messy source dependency cleanly. For such rare usage, symlink handling is quite a non-issue. Back when I used it intensively, I actually mostly used xstow, which can automatically resolve common conflicting symlinks on directories; unfortunately that's not maintained anymore.

(Sure, there's Nix and containers but stow is way faster)


Apples (stow) and oranges (containers and exo-package builders and management).

When you combine commands that conflict with the installed base with things that should run unmodified, it gets tricky to run everything sufficiently isolated and predictably; you usually end up with shim binaries or improper PATH manipulation.

It's cheap enough, and it reduces the risk of leaking dependencies, to create a chroot/jail/cgroup environment that includes just enough of a standard environment and the specific dependencies, rather than allowing unfettered access to all the things at all times.

It depends on what you're doing whether some things can be shoveled in or need stronger isolation guarantees.


I used Stow (some years ago) until I discovered the XDG directory spec. It can be a bit more painful on macOS, but enough software respects it that it makes more sense to me to use it by default and carve out workarounds for what doesn't than to use tools like Stow to essentially put everything in the latter category.

That said, I don't imagine it was built for 'dotfiles' as everyone is and will discuss it being used for, so perhaps it does deserve to live on.


Site is getting hugged to death. Maybe we can update the URL?

They posted the same notice to GitHub: https://github.com/aspiers/stow/issues/104


Stow never got enough love. Gobo Linux was based on the same idea, and it never got enough love either.

Now we are paying for it with heavyweight solutions (containers, snaps, flatpak, etc.) instead of evolving a higher-order form of Stow.


Nix and Guix do pretty much that, at the core they are just some symlinks and environment variables. But unlike stow, they build all the rest of the package management and build infrastructure as well.

The big issue with stow is that you have to manually get the packages into DESTDIR, and quite a lot of packages don't directly support that or do it in their own non-standard way, so there is far too much manual work involved in getting anything installed from source. With Nix and Guix you can write a package definition once and then everybody can reuse it instead of reinventing the wheel.


A few years ago I read a blog post [0] on using GNU Stow to manage your dotfiles. I loved the idea, and it inspired me to create xdot, a minimalist dotfiles manager [1] which I have been using ever since.

[0]: https://brandon.invergo.net/news/2012-05-26-using-gnu-stow-t...

[1]: https://github.com/malobre/xdot


stow was a really useful tool for me once at work. I had a "local" usr/* in my home directory for custom packages I'd install. Occasionally I'd need to swap different versions of the same library, etc. stow made the process a lot more manageable.


I've found it useful occasionally for when I've needed to install something via source that didn't include an "uninstall" target in their build configuration. Being able to "unstow" all of the symlinks will clean up all of the system directories, at which point you can just delete the entire folder where the actual installation occurred if you don't want to keep it around at all.


Exactly!


> I had a "local" usr/* in my home directory

I always just used `--prefix="$HOME"` so that everything went into `~/bin`, `~/lib`, `~/man`, etc...

I did look into stow a couple of times, and would have been fine with it dropping symlinks going into those dirs if I'd have used it.

(A few years after XDG started being commonly used I moved everything in my `~/etc` into `~/.config`, and `~/etc` is now a symlink to it. I occasionally wonder if doing it the other way around and setting up XDG_CONFIG_DIR would be more old-skool, before catching my reflection in my monitor and realising how daft that thought is.)


> I always just used `--prefix="$HOME"` so that everything went into `~/bin`, `~/lib`, `~/man`, etc...

Forgive my ignorance, but how do you uninstall stuff a year or two later?


`make uninstall`

(If a project's build system does not provide an "uninstall" target (coughcmakecough) then the project likely has other deficiencies and should be avoided.)


Wouldn't that require me to keep the source code for that particular version lying around?


Well, yes.

I mean, you could probably get away with just keeping the `Makefile` around. But for the stuff I installed from source, I was often interested in keeping the source code around anyway, for curiosity's sake. And hard drives are big, while source code trees generally aren't - comparatively speaking.


Fair enough. In that job I was working on a remote Linux system that was quite outdated. So everything I wanted to install (newer version of Emacs, etc) required me to build so many libraries, as the system ones were too old (Emacs alone required 50-100). I didn't want the hassle of keeping all the source code around.


`rm`, everything’s contained within your homedir.


With the described setup:

> I always just used `--prefix="$HOME"` so that everything went into `~/bin`, `~/lib`, `~/man`, etc...

You can't just use rm as a blunt instrument. ~/bin will contain lots of binaries from lots of packages. You want to uninstall only one package. How do you know which files correspond to that package?


Don't forget the `-fr *` parameters.


I saw a German after-midnight guy set up PostgreSQL like this once, on Linux... rather quickly ;-) All the significant system parts were in an alternate LD_LIBRARY_PATH, plus the postgresql libs.


Seems like a zombie project at this point. Maybe a sign that it needs to be put down?

- written in perl

- 1 maintainer

- not much activity on GH or mailing lists

- pivot from initial purpose as “symlink farm” to dot file management

It's had a nice run, but it seems much better alternatives exist now (as mentioned in the comments).


It's a simple tool with no dependencies besides perl (which will be around forever) and no cybersecurity footprint. Even if it were to be abandoned for a decade it would still work fine and serve a purpose. There also exists no better alternative for symlink-farm style management.

Where do you people keep coming from?


Perl is not web scale


Perl is like the language of a sophisticated lost civilisation, its arcane incantations frightening to the current generation. But the achievements of that civilisation are unmistakable.


What does this even mean??? It's a command-line utility, why does it matter?



No, it's bigger than that https://xkcd.com/224/


>Where do you people keep coming from?

The future.


I used to do quite a bit of Perl, and occasionally have need to run scripts which are 10, 15 years old -- I can't remember a case where one of those didn't work. A two-month old Python script, that's 50/50


At least Perl encourages a culture of documentation.

Getting flashbacks to my job developing Python where I had to learn data structures by literally pausing them in the debugger after starting a big sync job.

No docs, and no pointers as to what is in the dictionary or why... but I'd better close those damn tickets or it's awkward Zoom call time!


> written in perl

Perl scripts are surprisingly resilient. I have seen Python modules only a handful of years old turning to garbage, and unmaintained Perl modules still working fine after more than 20 years.


Resilient, but also infamous for being hard to modify. Our security team has a massive set of automation written in Perl, and it's slowly being replaced, not because it fails, but because the original author left; and when we need to modify it, it's generally easier to rewrite in another language than figure out what it's doing.

Also, you need to manually install all the dependencies on your system before you can use it (whereas "go build" will just go get everything for you); and there are no test cases (whereas "go test" makes it natural to write unit tests as you're developing it, and keep those after whatever feature you're working on is complete). Rust is the same of course. All that adds up to, "More useful to rewrite than to modify".


> also infamous for being hard to modify

Perl doesn't force any particular style. If you don't care about maintainability, you can quickly hack up code which will solve the problem but will be hard to understand/update. If you do, you can use tools like Perl::Critic to enforce a certain style. And lack of documentation and comments (which makes maintenance harder) is IMHO not a language-specific problem at all.

> and there are no test cases

Modules on CPAN had tests back when open-source libraries in other languages didn't have any. It's not hard to write tests for Perl code but if you don't have time to write them or it is not a priority it's unfair to blame the language.


It's a bit unfair to compare Perl and Rust/Go, they are massively better languages with massively better tooling that fix an IMHO different problem.

Perl is IMHO for glue code, text processing, and that's it. It is not for large, structured programs - the threshold is IMHO ~10k lines.


Perl pioneered language-specific package management with CPAN, why can't you use that?


Is there a way to hand cpan a perl executable, and have it automatically download all the appropriate dependencies?

So far for me it's worked like "Run command -> get failure -> search on CPAN, install -> repeat 4-5x".

For golang it works like "go install <url> -> run command".

EDIT: And to compile something on my macbook and put it on a shared Linux box for other people on the team to use, it's "GOOS=linux go build -o $BNAME_linux && scp $BNAME_linux user@host:bin/$BNAME", done and dusted.

Look, perl was an amazing thing for its day. As you say, they pioneered new language ecosystem features. But that was all in the 90's and early 2000's; since then other systems have built on that and pioneered even better ecosystem features, so that moving back to perl is a big step backwards.


This is a non-issue for Stow though because it’s managed by your distro’s package manager.


1. What is your issue with Perl with regard to this project?

2. Is the project not more or less complete?

3. See above.

4. What exactly is the issue?

As for alternatives, I do not see any mentioned. I did hear about https://zolk3ri.name/cgit/zpkg/ though.


1. A fashionable language (e.g. Rust) makes it easier to attract new developers; some may contribute just to learn the language. Perl is as unfashionable as it gets for a language still in active use. But if the project doesn't need a lot of changes or new features, it should not matter.


I think Savannah is getting an HN hug <3


I use rcm: https://github.com/thoughtbot/rcm

It is very simple and easy to use, has no external dependencies, and is sufficiently flexible to handle configurations for different machines and other nifty features.

I'm aware of Nix and other solutions, but you can start using rcm in 10 minutes (it's really that easy to use). If you choose Nix, for example, you'll need to spend at least several weeks of time (guessing based on the experience of others).


> If you choose Nix, for example, you'll need to spend at least several weeks of time (guessing based on the experience of others).

If you have no prior Nix experience, probably. Home Manager isn't really hard to work with, but its docs do assume basic Nixlang and Nix module system knowledge. If you try to cargo cult your way directly into a working flake with HM, you'll probably get a little lost.

But you also don't have to jump in all at once. I used a separate dotfile manager in conjunction with Nix for years, and it worked great. (I only bothered to switch to HM because the tool I was using became unmaintained! I was perfectly happy with it.) You can definitely ease into managing your dotfiles with Nix so it doesn't impose any downtime on you.

Using rcm for dotfiles plus Nix for the packages you regularly install is a pretty good idea imo. Then you can transition to using HM for dotfiles management later (or never!).


I think you can merge multiple directories onto /usr/local using overlayfs.

That seems to be what Stow is simulating, using symlinks.

Symlinks have the advantage that they persist; you don't have to recreate them after every reboot.

Stow also has ignore lists, which isn't something overlayfs will do, or at least not nicely; overlayfs is oriented toward whole directories. In the package installation use case, you could just keep unwanted cruft out of the individual installation directories that are being combined.


I use stow as a dotfile manager. It works great! The only problem is that there are some weird bugs. For example, it would get confused when I had a `.fonts/` symlink in my home directory. Also, the "dot-" prefix really should be a built-in thing and not a fork.

But maybe stow wasn't made for this; maybe someone should reinvent stow with dotfiles in mind. And no, chezmoi doesn't count.


I don't use Stow, but in learning what it was I did see that they did a release two days ago that sounds like it may have fixed your issue:

https://lists.gnu.org/archive/html/info-stow/2024-04/msg0000...


I evaluated Stow for dotfiles, but I wanted something simpler to deploy (single binary). I built a solution myself several years ago, which shares many features with Stow, and it’s still kicking along. https://github.com/cgamesplay/dfm


Looks great! Seems like GNU Stow inspired many devs, especially around the dotfiles use case


I heard Jia Tan is available


It's a trap!


I love Stow!


Me too! I've been using it for 20 or so years, and it's one of those pieces of software where I didn't know it was still being developed or what features were added, because the feature set from 20 years ago is still enough for me. IOW: maybe it doesn't need a maintainer at all.


I want to share a comment I wrote on the 2.1.0 release announcement in 2011: https://news.ycombinator.com/item?id=3309575


I heard Jia Tan is free /s


I'll ping Jigar


  $ stow --list --whatever-options-i-dont-know-it-just-making-a-joke
  ssh -> rsh


Came here to say exactly this


Give him some time to recover first. After all the haters destroyed the fruits of his tireless labour of love to which he dedicated years of his life. /s


Too soon, bro



