"Installing stuff on computers so you can run your program on them really sucks. It's easy to get wrong! It's scary when you make changes! Even if you use Puppet or Chef or something to install the stuff on the computers, it sucks."
I think a lot of people feel this way. I think that fear is born of ignorance, and we should fix that.
Let's say you are working on an application in NewPopularLanguage 2.3.1, using CoolFramework version 3.3. Your Linux distro ships NPL 2.1.7 and CF 2.8, which don't support a really nifty feature that you would like to have.
Important questions to ask: what is the distro's support record? Do they have a dedicated security team? Is there significant support for NPL and CF in the distro, or just a single package maintainer?
If the distro's security and NPL packaging team are good, you might want to use their versions even if it means giving up use of the really nifty feature until sometime in the unknowable future. Making an explicit, considered decision is worthwhile.
But if you really need the new versions, you should use a repeatable build system that generates OS packages exactly the way you want them. You should put them into a local repo so that when you install or upgrade a new machine, you get the version you specify, not whatever has just hit trunk upstream. And you may want your versions to be placed in a non-(system)-standard location, so that your application has to specify the path -- but be guaranteed that you can install several versions in parallel, and use the right one.
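To make that concrete, here's a minimal sketch (my addition, not something the parent spelled out) of what such a build step might look like. It assumes the fpm packaging tool, a hypothetical interpreter called "npl", and /opt/npl/&lt;version&gt; as the non-standard prefix; the local-repo publishing step is only hinted at in a comment.

```python
#!/usr/bin/env python3
"""Sketch: wrap a compiled tree into an OS package under a versioned prefix.

Assumes `fpm` is installed; the package name, version, and paths are hypothetical.
"""
import subprocess

NPL_VERSION = "2.3.1"                    # the interpreter version you actually want
BUILD_DIR = f"build/npl-{NPL_VERSION}"   # tree produced by your (separate) compile step
PREFIX = f"/opt/npl/{NPL_VERSION}"       # versioned prefix: parallel installs never collide

def build_package() -> str:
    pkg = f"npl-{NPL_VERSION}.deb"
    subprocess.run(
        [
            "fpm",
            "-s", "dir",                 # source: a plain directory
            "-t", "deb",                 # target: .deb (use "rpm" on RPM-based distros)
            "-n", "npl-2.3",             # versioned name so 2.3 and 2.4 can coexist
            "-v", NPL_VERSION,
            "--prefix", PREFIX,          # everything lands under the versioned prefix
            "-p", pkg,                   # output file
            "-C", BUILD_DIR,             # package the contents of the build tree
            ".",
        ],
        check=True,
    )
    return pkg

if __name__ == "__main__":
    print("built", build_package())
    # Next step (not shown): push the .deb into your local apt repo (e.g. with reprepro),
    # so new machines install exactly this artifact, not whatever upstream ships today.
```

Your application then points at /opt/npl/2.3.1/bin explicitly, which is the "specify the path" part above.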
It feels like a lot of overhead, but it can save you lots of debugging and deployment time. Once you have the infrastructure tools in place, using them is not much of a burden, and pays for itself many times over.
> But if you really need the new versions, you should use a repeatable build system that generates OS packages exactly the way you want them. You should put them into a local repo so that when you install or upgrade a new machine, you get the version you specify, not whatever has just hit trunk upstream. And you may want your versions to be placed in a non-(system)-standard location, so that your application has to specify the path -- but be guaranteed that you can install several versions in parallel, and use the right one.
Exactly. You have to be fucking careful. Or you can just use a container. That's his point.
> I think that fear is born of ignorance, and we should fix that.
Actually I think it's born from a lot of experience installing things and it being a total nightmare.
You're right that obviously we should stick to packaged versions of libraries whenever possible, but as you say, it isn't always an option.
It's easy to aim at the low-hanging fruit of specifying explicit version numbers when installing packages via Puppet, Chef, Ansible or Salt.
This should be common sense because if you build servers/containers at different points in time, it's possible to have 4-5 different versions of libxyz in use depending on when that instance was spun up.
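To illustrate (a sketch of my own, not from the parent comment): this is the drift that pinning prevents. The PINNED manifest below is hypothetical and dpkg-query assumes a Debian-family host; in practice your Puppet/Chef/Ansible/Salt code would enforce these pins rather than merely report on them.

```python
#!/usr/bin/env python3
"""Sketch: report packages that drifted from the pinned versions."""
import subprocess
from typing import Optional

# What every host *should* have, regardless of when it was spun up.
PINNED = {
    "libxyz": "1.4.2-1",    # hypothetical library from the comment above
    "nginx": "1.18.0-1",
}

def installed_version(package: str) -> Optional[str]:
    """Ask dpkg for the installed version, or None if the package is absent."""
    result = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Version}", package],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

def main() -> None:
    for package, wanted in PINNED.items():
        actual = installed_version(package)
        if actual != wanted:
            print(f"DRIFT: {package} is {actual!r}, pinned to {wanted!r}")

if __name__ == "__main__":
    main()
```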
However, if you're writing code in Ruby, Python, Node, Go, or even Java, you're using a version manager for the base interpreter (e.g. rvm, rbenv, conda, etc) because the distribution-packaged version is typically a year behind, or not present at all.
Then you're using the language's package manager (RubyGems, pip, npm, go get, mvn) to install packages.
Then a lot of library maintainers bundle the necessary native libraries with their package for consistent builds (e.g. nokogiri and libv8 on Ruby).
You're also assuming that CoolFramework uses things like autoconf/automake (which generally have a reputation nowadays for being "bloated") to enable consistent compilation across OS variants.
It's hard to maintain explicit versions in separate locations when a typical web application nowadays has at least 100 dependencies, and the typical web site has several components (the web app, a queue, a scheduler, maybe some separate workers).
This all sounds great in theory, but I feel it is very hard to maintain in practice with a fast-moving ecosystem, which almost all of the above languages have.