
Compared to on-premises, colo or managed hosting.

When moving them off AWS, it'd usually be to managed hosting on monthly contracts.

Heroku turns expensive real fast. You're paying for AWS plus Heroku's margin on top.

Managed hosting ranges from API-based provisioning not much different from AWS to ordering servers one by one.

In practice, the devops time spent dealing with the server itself is, for me at least, generally at most a matter of downloading a bootstrap script that provisions CoreOS/Flatcar, ties the machine into a VPN, and records the details. The rest of the job can be done by simple orchestration elsewhere. I have servers I haven't needed to touch in 5 years, other than recently to switch from CoreOS to Flatcar (beyond that, the OS auto-updates and everything runs in containers). Once you've done that, it's irrelevant what the server is or where it is.
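
To make that concrete, here's a rough sketch (Python) of the kind of "register the box and get out of the way" bootstrap step I mean. The endpoint, token and WireGuard choice are purely illustrative, not what any real setup of mine looks like:

    # Hypothetical post-install bootstrap: generate a VPN key, register the host
    # with an inventory service, then get out of the way and let orchestration
    # elsewhere do the rest. Endpoint, token and field names are made up, and
    # WireGuard is just an example VPN.
    import json, socket, subprocess, urllib.request

    INVENTORY_URL = "https://inventory.example.internal/hosts"  # placeholder
    API_TOKEN = "replace-me"                                     # placeholder

    def wireguard_keypair():
        # Assumes the wireguard tools are installed on the provisioned host.
        private = subprocess.run(["wg", "genkey"], capture_output=True,
                                 text=True, check=True).stdout.strip()
        public = subprocess.run(["wg", "pubkey"], input=private,
                                capture_output=True, text=True,
                                check=True).stdout.strip()
        return private, public

    def register(public_key):
        payload = json.dumps({
            "hostname": socket.gethostname(),
            "wireguard_public_key": public_key,
        }).encode()
        req = urllib.request.Request(
            INVENTORY_URL, data=payload, method="POST",
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer " + API_TOKEN})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # e.g. assigned VPN address, environment tags

    if __name__ == "__main__":
        _private_key, public_key = wireguard_keypair()
        print("registered:", register(public_key))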

For modern server hardware, if you run your own colo setup, it's a matter of setting up PXE and TFTP once in the colo; you can then use an IPMI connection to do OS installation and config remotely. So even with colocated servers, I'd typically visit the data centre once or twice a year to manage several racks of servers. The occasional dead disk would be swapped by data centre staff; everything else would typically be handled via IPMI.
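
The remote reinstall itself is mundane once the BMCs are reachable; something along these lines (placeholder address and credentials, and the usual caveat that every vendor's BMC has quirks):

    # Hypothetical helper: force a machine to PXE-boot its installer via IPMI
    # and power-cycle it. BMC address and credentials are placeholders; the
    # ipmitool subcommands (chassis bootdev, power cycle) are standard.
    import subprocess

    def ipmi(bmc_host, user, password, *args):
        subprocess.run(["ipmitool", "-I", "lanplus", "-H", bmc_host,
                        "-U", user, "-P", password, *args], check=True)

    def reinstall(bmc_host, user, password):
        # Next boot only: PXE, which hands off to the TFTP-served installer,
        # which would then run a bootstrap step like the sketch above.
        ipmi(bmc_host, user, password, "chassis", "bootdev", "pxe")
        ipmi(bmc_host, user, password, "power", "cycle")

    if __name__ == "__main__":
        reinstall("10.0.0.21", "admin", "replace-me")  # placeholder BMC details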

E.g. one of my setups involved 1k containers across New Zealand, Germany and several colo facilities in the UK. Hetzner (Germany) was the first managed hosting provider we found that could compete on total cost of ownership with leasing servers and putting them in racks in the UK. Had we been located in Germany (cheaper colo facilities than near London), they'd not have been able to compete, but putting stuff in a colo facility somewhere we didn't have people nearby would have been too much of a hassle, and the cost difference was relatively minor.

Small parts of the bootstrap scripts were the only thing that differed between deploying into KVM VMs (New Zealand), managed servers not in our own racks (Hetzner), and colocated bare metal booting via PXE on their own physical networks (UK). Once machines were tied into the VPN and the container runtime and firewall were in place, our orchestration scripts (a couple of weeks of work, long before Kubernetes etc. was a thing - we were originally deploying OpenVZ containers, so the same tool could deploy to OpenVZ, KVM and Docker over the years) would deploy VMs/containers to them, run backups and failover setups, and dynamically tie them into our frontend load balancers.
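
For flavour, a heavily simplified sketch of what a tool in that spirit can look like; every hostname and hook below is made up for illustration:

    # Skeleton of a small orchestration tool: one inventory, per-backend deploy
    # functions, and a hook to tell the frontend load balancers about new
    # instances. All names and hosts are placeholders.
    import subprocess

    INVENTORY = [
        {"host": "de1.example.internal", "backend": "docker"},  # managed server
        {"host": "uk3.example.internal", "backend": "docker"},  # colo bare metal
        {"host": "nz1.example.internal", "backend": "kvm"},     # hosted KVM VM
    ]

    def deploy_docker(host, image, name):
        # Run the workload on a remote Docker host over SSH; a real setup would
        # pin versions, mount volumes, set resource limits, etc.
        subprocess.run(["ssh", host, "docker", "run", "-d", "--restart=always",
                        "--name", name, image], check=True)

    def deploy_kvm(host, image, name):
        # The KVM path (e.g. virt-install / libvirt) is omitted in this sketch.
        raise NotImplementedError

    def register_with_lb(host, name):
        # Placeholder for updating the frontend load balancer configuration.
        print("would add %s@%s to the load balancer pool" % (name, host))

    BACKENDS = {"docker": deploy_docker, "kvm": deploy_kvm}

    def deploy(image, name, want_backend="docker"):
        # One instance per matching node; backup/failover scheduling left out.
        for node in INVENTORY:
            if node["backend"] != want_backend:
                continue
            BACKENDS[node["backend"]](node["host"], image, name)
            register_with_lb(node["host"], name)

    if __name__ == "__main__":
        deploy("registry.example.internal/app:latest", "app-1")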

We did toy with the idea of tying AWS instances into that setup too, but over many years of regularly reviewing the cost, we could never get AWS cheap enough to justify it. We kept trying because there was a constant stream of people in the business who believed - with no data - that it'd be cheaper, but the closest we got in our AWS experiments was ca. twice the cost.

For the record, in my current job we do use AWS for everything. I could cut the cost of what we're using it for by ~80-90% by moving it to Hetzner, but the cost is low enough that it's not worth investing the time in the move at this point, and it's not likely to grow much (it's used mostly for internal services for a small team). That's the kind of scenario where AWS is great: offloading developer time on setups that are cheap to run even at AWS markups.

I tend to recommend to people that it's fine to start with AWS to deploy fast and let their dev team cobble something together. But they need to keep an eye on the bill, and have some sort of plan for how to manage the costs as their system gets more complex. That means also thinking long and hard before adding complicated dependencies on AWS. E.g. try to hide AWS dependencies behind APIs they can replace.
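
As a sketch of what "hiding it behind an API" can mean in practice (placeholder names, not a prescription):

    # One way to keep an AWS dependency swappable: code against a small
    # interface you own, with S3 as just one implementation. Bucket names and
    # paths below are placeholders.
    from abc import ABC, abstractmethod
    from pathlib import Path

    class BlobStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class S3BlobStore(BlobStore):
        def __init__(self, bucket: str):
            import boto3  # only needed when S3 is actually the backend
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket,
                                       Key=key)["Body"].read()

    class LocalBlobStore(BlobStore):
        # Drop-in replacement for dev boxes or a post-migration setup.
        def __init__(self, root: str):
            self._root = Path(root)

        def put(self, key: str, data: bytes) -> None:
            path = self._root / key
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

        def get(self, key: str) -> bytes:
            return (self._root / key).read_bytes()

    # Application code only sees BlobStore; which backend gets wired up is a
    # deployment-time choice, not something scattered through the codebase.
    def save_report(store: BlobStore, report_id: str, body: bytes) -> None:
        store.put("reports/" + report_id + ".json", body)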
