That said, I'm not a nix guy, but to me, intuitively NixOS wins for this use case. It seems like you could either
A. Use declarative OS installs across deployments
B. Put your app into a container (which sometimes ships its own kernel and sometimes doesn't), push it to a third-party cloud registry (or run your own registry), and then run that container on some random Ubuntu box or cloud hosting site that you barely administer or operate; it's basically an empty vessel that exists to run your Docker container.
I get that in practice, these are basically the same, and I think that's a testament to the massive infrastructure work Docker, Inc has done. But it just doesn't make any sense to me
you can actually declare containers directly in Nix. they use the same config/services/packages machinery as you'd use to declare a system config. and then you can embed them in the parent machine's config, so they all come online and talk to each other with the right endpoints and volumes and such.
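to make that concrete, here's a minimal sketch of a declarative NixOS container embedded in a host config (the container name, addresses, and the nginx service are illustrative, not anything specific):

```nix
{
  # declares a systemd-nspawn container as part of the host's own config
  containers.webapp = {
    autoStart = true;
    privateNetwork = true;
    hostAddress = "192.168.100.1";   # host side of the veth pair
    localAddress = "192.168.100.2";  # container side
    # the container's config uses the exact same module system as the host
    config = { config, pkgs, ... }: {
      services.nginx.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
      system.stateVersion = "24.05";
    };
  };
}
```

rebuild the host and the container comes up with it, endpoint addresses and all, with no separate image build or registry push.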
or you can use `dockerTools.buildImage` / `dockerTools.buildLayeredImage` to declaratively build an OCI image with whatever inside it. I think you can mix and match the approaches.
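for example, a hedged sketch of a layered OCI image built from nixpkgs (the image name, tag, and `hello` payload are placeholders):

```nix
{ pkgs ? import <nixpkgs> { } }:

# produces a tarball loadable with `docker load`; layers are derived
# from the Nix store closure, so unchanged deps stay cached
pkgs.dockerTools.buildLayeredImage {
  name = "my-app";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

since the image is just another derivation, it's pinned and reproducible the same way the rest of your system config is.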
but yes I'm personally a big fan of Nix's solution to the "works on my machine" problem. all the reproducibility without the clunkiness of having to shell into a special dev container, particularly great for packaging custom tools or weird compilers or other finicky things that you want to use, not serve.
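the dev-environment side of that usually looks like a flake dev shell, something like this sketch (the toolchain packages here are just examples):

```nix
{
  description = "pinned dev environment, no container required";
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix develop` drops you into a shell with exactly these tools
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.go pkgs.protobuf ];
      };
    };
}
```

everyone on the team gets the same toolchain versions from the lockfile, but they're in their own shell, not SSH'd into a container.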
The end result will be the same but I can give 3 docker commands to a new hire and they will be able to set up the stack on their MacBook or Linux or Windows system in 10 minutes.
Nix is, as far as I know, not there and we would probably need weeks of training to get the same result.
Most of the time the value of a solution is not in its technical perfection but in how many people already know it, its documentation, and, more importantly, all the dumb tooling that's grown up around it!