Looks interesting. My use case for systems like this is a little different. I am often on a work machine with a 10+-year-old Linux distribution and I have to rebuild a totally separate userland in order to be able to use modern tools. This works but it is tedious.
But several attempts to use things like Nix for this have failed, because they require modern kernel APIs for things like namespaces.
I would generally not see the need, except I spent an hour helping a friend set up a Python dev environment, and it was terrible. The problem wasn't that the env itself was hard to set up, the problem was that it was very hard to figure out which Python it would use, where the packages would go, how to prevent the other four Python versions installed from interfering with it, etc.
This looks pretty great, so I'm probably going to give it a shot. Then again, a script that uses a specific/known Python version and creates a virtualenv would also work.
EDIT: Maybe I should write a script that downloads all the necessary files somewhere and creates a virtualenv with a given Python version, à la `curl https://www.whatever.com/setupvenv.py | python3`.
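Just as a sketch of what I mean (the interpreter version, venv location, and requirements file below are placeholders, nothing prescribed):

```bash
#!/usr/bin/env bash
# Hypothetical setupvenv sketch: pick an interpreter, make a venv, install pinned deps.
set -euo pipefail

PYTHON="${1:-python3.11}"                    # assumed to already be on PATH
VENV_DIR="${2:-$HOME/.venvs/myproject}"      # placeholder location

"$PYTHON" -m venv "$VENV_DIR"
"$VENV_DIR/bin/pip" install --upgrade pip
"$VENV_DIR/bin/pip" install -r requirements.txt   # or fetch a pinned list first

echo "Activate with: source $VENV_DIR/bin/activate"
```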
Honestly, I don't understand the Python community anymore w.r.t. their module system. It looked fine when I was learning to program with Python, but the more I learned about it and what is possible, the more irritated I got.
Python's module system is, in my opinion, fundamentally broken, and virtualenvs are ultimately just a hack to make it work anyway. I think it's kind of sad that the hack became the standard solution.
The problem with conda is that it's what broke the machine I was talking about. Apparently it doesn't much like it when you use a package manager other than conda, and things break in subtle ways.
pipenv felt like it took forever to complete simple tasks. I briefly used it for a small project and quickly realized it was a mistake. I now just use a conda environment.yml plus a requirements.txt; between them, those cover most of my bases.
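For anyone curious, that workflow is roughly the following (the env name here is just an example):

```bash
# conda handles the interpreter and binary deps; pip fills in anything PyPI-only.
conda env create -f environment.yml -n myproject
conda activate myproject
pip install -r requirements.txt
```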
pyenv[0] is designed to automate what you had to do. You install it plus the libs for your distro[1], type `pyenv global $VERSION` into your terminal, and don't think about it anymore. venvs are made from the current version and work exactly as intended.
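Roughly, the flow looks like this (version number picked arbitrarily; the installer also tells you to add a couple of init lines to your shell profile):

```bash
curl https://pyenv.run | bash    # pyenv-installer; then follow its shell-init instructions
pyenv install 3.12.2             # builds that CPython locally (needs your distro's build libs)
pyenv global 3.12.2              # new shells now resolve "python" to this version
python -m venv .venv             # venvs get created from whatever pyenv has selected
```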
I get the comparison to Nix and all, but Docker does a heck of a lot more than reproducible builds and dependency management. It's also a way to formalize network and other resource access, perform local DNS, automatically control the lifecycle of apps, control logging, etc. Dependency management is just one part.
I appreciate the design goals of the developer as presented here, since simpler programs also tend to perform better and have fewer bugs. The whole dependency-management issue is fascinating to read about. I just run everything in Docker on Linux and use yum or apt for native bootstrapping of the device (OS-level firewall, Docker, git, Fail2ban, and a couple of others). Do people who mostly use containers through Docker or Kubernetes find themselves needing more sophisticated package managers than those built into the OS?
Right, they're not exactly reproducible (and real reproducibility is much cooler), but images solve the main use case (having a clearly defined state) almost as well, at a generally much lower cost.
That's a valid point, and I'll consider this terminology more carefully in the future. I think a better description might be "transportable"? By that I mean that a specific commit to master builds a container once, with a corresponding tag in our registry, so that exact artifact is versioned and stored, and can then be run and rerun forevermore, just as it was built. That's how we promote releases through QA, staging, and prod and ensure we know what we're releasing. But it doesn't have much to do with reproducible builds.
Yes. Docker builds are not reproducible. Build steps that define final image layers and also serve as cache layers mean that every Dockerfile is a compromise between the two goals. Build inputs are "whatever happens to be on the filesystem" and "whatever happens to come down from the network". You can put process around this (rough sketch below), but Dockerfiles are working against you.
Docker builds are a great tool, a great improvement, and what portability! But I personally think you'll look back on Docker builds in four years and see them as extremely limited.
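To give one concrete example of the kind of process people end up bolting on (image names and tags below are placeholders): pin the network inputs yourself and force the build to ignore stale local state.

```bash
docker pull python:3.12-slim                                         # pull a specific tag, not "latest"
docker inspect --format '{{index .RepoDigests 0}}' python:3.12-slim  # record the digest, then use it in the FROM line
docker build --pull --no-cache -t myapp:$(git rev-parse --short HEAD) .
```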
Excellent. Running as non-root is a really nice feature. Depending on how this works, it could potentially be a Homebrew and possibly even Conda alternative. I'm definitely going to try this out.
FWIW, Hermès has been pretty trigger-happy with their lawyers in the past. They sent E3D C&D notices until they renamed their hotend/extruder combo (read: nothing at all to do with bags) to "Hemera".
I think there's been a thing called "Hermes" at every single company I've worked for, and I know of several more at other companies. Something about the name is just irresistible to people, it seems. Though it's more common for messaging services.
I wonder if Hermes for a digital package manager will be OK, given that Hermes is also a company offering parcel delivery and related online services.
And since they're often late and tend to lose, misdirect, or damage packages... I'm not sure it's a good choice even if there's no legal issue or the company okays it.
Hermes is also far more (?) obviously the Greek messenger of the gods, so any negative associations with an existing company (that AFAICT doesn't even deal with digital products) shouldn't be relevant.
But only one of them is a package in the literal sense; the other is just a metaphor. They are very different things. Sure, they have in common the logistics of bundling things into units and getting them to where they're going, but the same applies to kindergartens.
Hermes is the Greek god of commerce, the marketplace, and thieves. He is also the ultra-rapid messenger of the gods, and his duties include guiding the souls of the deceased into the next world.
His name and/or image pops up a lot in the context of businesses working in one of those domains: flower delivery, bicycle couriers, airlines, PayPal's external merchant flow... anything you want to be speedy. Not so much in his role as a guide of the dead that I know of, though maybe there are a lot of companies in the death industries that invoke him.
While I kinda wish Mac users would stop trying to give us Homebrew, I'm open-minded.
I'm curious how this works with the Linux package managers that already exist. Does it work like how the AUR and AUR helpers layer over pacman while maintaining compatibility with the community repos?