1) You need to keep current on your security patches.
2) You need to upgrade your OS when it reaches end-of-life.
3) You need to make backups, and more importantly, verify that those backups will actually restore. (For git, this is most critical if you have a large team and dozens of repos.)
You can ignore a server for years, but eventually you'll get compromised or lose a hard drive. (If you use RAID, you'll eventually lose an entire RAID array. Not fun at all, nor cheap.)
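Point 3 in particular is easy to get wrong: people take backups but never try restoring them. A minimal sketch of a verify-your-backup routine for a bare git repo (the paths are stand-ins created in a temp directory so the sketch is self-contained; on a real server you'd point these at /srv/git or wherever your repos live):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the repo your server actually hosts:
git init -q --bare "$tmp/srv/project.git"

# 1. Back it up: a mirror clone captures every ref, not just master.
git clone -q --mirror "$tmp/srv/project.git" "$tmp/backup/project.git"

# 2. Verify it: check object-store integrity, then do a trial restore.
git --git-dir="$tmp/backup/project.git" fsck --strict
git clone -q "$tmp/backup/project.git" "$tmp/restore" 2>/dev/null

echo "backup verified"
rm -rf "$tmp"
```

The fsck plus trial clone is the part most backup setups skip; a cron job that only ever runs the first half tells you nothing about whether the backup will restore.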
I'd say "most developers" already have a box running some kind of website where they are doing this anyway, but even if you disagree I think you have to be willing to cede "many", especially given how many people who use GitHub are currently doing Ruby on Rails or node.js work. I'd even go so far as to say that those who currently aren't /should/, as the experience of being a sysadmin is important for understanding how other sysadmins will react when they see how your software is deployed; which I guess is another topic of discussion that comes up often here (oft filed under the "Debian vs. Ruby" banner).
(Part of me is wondering if many of the more controversial discussions on this site are between people who have sysadmin experience (and considered it valuable) and people who don't.)
> Part of me is wondering if many of the more controversial discussions on this site are between people who have sysadmin experience (and considered it valuable) and people who don't.
I've done a fair bit of sysadmin work over the years, mostly in self-defense, because I want fewer crises when a critical development server eats itself.
Lots of people can figure out how to install Ubuntu, or rent a Linode. But if they don't master upgrades, patches and backups, they'll eventually end up paying a real sysadmin a lot of money at the worst possible moment.
If all you're doing on your server is git, then the first two points you make are covered by basic apt-get usage. It's super easy.
Point 3 isn't that necessary if all you're doing is git over ssh: everyone with a checked-out copy has a backup of your repo. Also, if you're only doing git over ssh, it's not hard to make the box relatively secure. ssh will be the only open port, and it won't be on the standard port.
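That lockdown amounts to a few lines of sshd configuration. A sketch, assuming OpenSSH on a Debian-flavored box (the port number and user name here are illustrative, not prescribed):

```
# /etc/ssh/sshd_config (relevant lines only; illustrative values)
Port 2222                    # move off the standard port 22
PasswordAuthentication no    # keys only
PermitRootLogin no
AllowUsers git               # only the git user may log in
```

Reload sshd afterwards (e.g. `service ssh reload` on Debian/Ubuntu), and keep your existing session open until you've confirmed you can still get in.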
Maybe for a few months. When you run a server for 2 or 3 years the distro goes out of support, software upgrades are getting behind, before you know it you are compiling patches from source, ...
I've done this several times over the last 5 years and am now finally moving everything to specialized hosting services. Just hosting a web site, photo repo, mail and svn on a machine (vps) is enough to make it a serious hassle. Moving all of those to specialized services costs the same, is less work, and is more reliable.
> Maybe for a few months. When you run a server for 2 or 3 years the distro goes out of support, software upgrades are getting behind, before you know it you are compiling patches from source, ...
You can get 5 years of security update support with Ubuntu Server LTS, and you can set up safe unattended updates[1] with email notification if anything ever requires your attention. If you set up your backups right as well, you can recover even from catastrophic hardware failures.
Remember, this is not some gigantic scale, it shouldn't be that difficult.
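For reference, the unattended-updates setup described above boils down to something like this on Ubuntu, via the unattended-upgrades package (the email address is a placeholder; this is an excerpt, not a complete config):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Mail "admin@example.com";

// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Restricting Allowed-Origins to the -security pocket is what makes this "safe": you get patched packages, not new versions.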
I'm not saying it can't be done, I'm just saying that my experience has been different. Another example: I used to have my own installation for our websites, bug tracking and time sheet management software, on a locally-hosted machine with apache and mysql and a bunch of other software. The number of hours I've spent migrating data between versions, tweaking mod_rewrite rules, setting up backups for various databases and other data, making usage statistics etc. - I don't even want to think about it.
Recently I moved to a shared hosting server. The yearly cost of that is covered by one hour of my hourly rate (and I'm not even expensive). I can set up 50 or so different applications with a few clicks in a web UI, and upgrades are handled by it, too. Backups are taken care of, I have a web ui for dns, ssl, everything. It was like a breath of fresh air.
To each his own and each situation is different, of course. But there's something to be said for division of labor.
> The number of hours I've spent migrating data between versions,
It seems (from your prev. post as well) like you were trying to stay on the cutting edge, functionality-wise. OTOH I was arguing for running a stable and maintained Linux distribution.
I'd certainly agree that the initial investment is pretty big (not only does it take some time, but you need non-trivial specific knowledge of Linux administration), and for that reason alone I'd advocate using maintained hosting. My only beef with your comment was that, in theory (and in my experience), a properly configured VPS running stable software shouldn't require anywhere near the level of maintenance you seem to be suggesting.
Sure, but I figured we were talking about small organizations. One-man shop to maybe 10 people or so, or even more, anything too small to have a dedicated sysadmin.
Sorry, but there is no such thing as "safe unattended updates" when you have anything more complex than your home desktop box. There's way too much that can and will go wrong to be that naive.
Safe unattended means no configuration files are altered, and those updates are not adding new functionality in the first place anyway. It's only package X.Y with a security patch applied.
Actually, your home desktop box should be much more difficult to upgrade than a simple generic server box with generic virtualized hardware drivers.
There are valid reasons to use service providers or maintained hosting but properly configured (and backed up) VPS running stable (& maintained) software can get you a long way.
2) When it's out of date, buy a new $20/month VPS, install git and ssh, move your repo, and turn off the old server. This might take 2 hours, every 5 years. Not a big deal.
But seriously, if you use git over ssh there is no sysadmin work other than running apt-get upgrade.
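For reference, that routine on a Debian-derived box is just (run as root or via sudo; a sketch of the habit, not a script to paste blindly):

```
apt-get update     # refresh the package lists
apt-get upgrade    # apply pending (security) updates
```

The discipline is in actually running it regularly, which is exactly what the unattended-updates approach mentioned upthread automates away.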