That seems like a very powerful machine at an incredibly low cost. The closest equivalent at DigitalOcean is 64 GB RAM, 16 CPUs, 1.25 TB SSD, 9 TB transfer at $320/month. If I understand correctly, the DO instance will have lower performance because it's virtual, not bare metal. That amounts to about a 4-5x difference in price, which is tremendous, considering that I thought DO was one of the cheapest providers. Is this comparison correct or am I missing something?
The part you're missing is that basically all dedicated server providers are this cheap.
DO, Vultr, AWS, GCP, Azure are the odd ones out and extremely expensive.
The only reason ever not to use dedicated servers is if you're in the Bay Area and ops wages are so overinflated that you literally can't afford an ops or devops person.
For everyone else on the planet, the comparison between an ops person's wage and server costs always comes out in favor of dedicated hardware with your own ops people.
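To make that concrete, here's a back-of-the-envelope break-even in Python. The salary and the markup ratio are made-up assumptions, not data from this thread:

    # At what monthly infrastructure spend does the cloud markup you avoid
    # exceed the cost of an ops hire? All figures are illustrative assumptions.
    ops_cost_per_month = 120_000 / 12   # assumed fully loaded ops salary, USD/month
    cloud_markup = 4.5                  # assumed cloud vs. dedicated price ratio

    # Spending S on dedicated hardware would cost S * cloud_markup in the cloud,
    # so hiring pays off once S * (cloud_markup - 1) > ops_cost_per_month.
    break_even = ops_cost_per_month / (cloud_markup - 1)
    print(f"ops hire pays for itself above ${break_even:,.0f}/month of dedicated spend")
    # -> about $2,857/month under these assumptions

Anywhere wages are lower or the cloud bill is bigger, the break-even point drops even further.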
The thing I don't understand is why people consider AWS to have lower "ops" costs.
I can deploy my SaaS either to VPSes (AWS, DigitalOcean, Azure, etc.) or to physical servers. Both deployments use the same ansible automation. The only difference is that I additionally use terraform to set up the cloud instances.
In every case, if a machine fails it is my problem.
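Concretely, the whole deploy fits in a small wrapper; this is just a sketch of that shape (main.tf, site.yml, and the inventory path are hypothetical placeholders, not my actual files):

    #!/usr/bin/env python3
    # Sketch: terraform provisions cloud instances (cloud target only),
    # then the same ansible playbook configures either kind of machine.
    import subprocess
    import sys

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    target = sys.argv[1] if len(sys.argv) > 1 else "physical"

    if target == "cloud":
        # Physical servers already exist; only the cloud path needs terraform.
        run(["terraform", "init"])
        run(["terraform", "apply", "-auto-approve"])

    # Identical configuration step for both targets.
    run(["ansible-playbook", "-i", "inventory", "site.yml"])

The point is that terraform is the only cloud-specific piece; everything from the ansible step onward is identical.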
Any employee comes with overhead costs ($$ and time and risk). Lots of side projects get off the ground only because people don’t have to cover the fixed quantum of costs that hiring an employee brings.
“If this doesn’t take off, I’m on the hook for ~$100/month when it could have been under $10/month” just isn’t that compelling.
You are not missing anything. I wondered about this myself, then started using physical servers from Hetzner for my SaaS. Quite happy with the results over the last 4 years.
What did the interconnection between the dedicated servers cost?
In the past there was no VLAN support, i.e. you needed to buy the interconnection yourself, which was not cheap.
There are VLANs, but I don't use them. I found it's easier to use vpncloud (https://vpncloud.ddswd.de); that way I can use the same setup for development/staging/production, and it doesn't matter whether the specific provider supports VPCs/VLANs.
Also, with vpncloud I know that my data gets encrypted and is private — not necessarily the case with various VLAN setups.
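For reference, joining a node to the overlay boils down to one process per host; here is roughly what my supervisor script runs. The flags are from memory of the vpncloud README, and the peer address, port, key, and overlay subnet are made-up examples, so double-check against your installed version:

    # Hypothetical sketch: start a vpncloud peer from Python.
    import subprocess

    subprocess.run([
        "vpncloud",
        "--listen", "3210",                     # UDP port this node listens on
        "--connect", "peer1.example.com:3210",  # any existing peer to join the mesh
        "--password", "shared-secret",          # traffic is encrypted with this key
        "--ip", "10.0.0.2/24",                  # this node's address in the overlay
    ], check=True)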
When I started with Hetzner there was no VLAN; you needed to pay for an "extra pack" and an additional network card for the interconnection, and with more than two servers you would also need to pay for a router, plus the time it took to set everything up.
So it was not really that cheap in the past.
They're not quite comparable; the Hetzner equivalent to that would be the CCX41 plus some extra storage, for around $200/month (but you get 11 TB extra transfer). In general I think DigitalOcean VPSes are 1-3x the price of Hetzner ones.
I don't know why Hetzner dedicated servers are so cheap (they're also cheaper than their equivalent VPSes). I guess they take a bit longer to set up, but there must be more to it than that.
A dedicated server is a one-month commitment, possibly with a setup fee, and Hetzner also charges to do things like replace failed drives.
You also don't get the benefits of a virtualized instance, like live migration for host maintenance. You can run your own hypervisor, but then you'll probably want extra hardware like 10Gb switches and the appropriate cross-connects, as well as paying the one-time server move fee.
>Hetzner also charges to do things like replace failed drives
Are you sure?
If I look at the AX dedicated server page, I can read that basic support is free:
"Basic support includes the free replacement of defective hardware and the renewed loading of the basic system (in so far as a disk image system can be loaded)."
They'll replace the drive for free, but likely with another used disk with quite a few hours of runtime under its belt. If you want a new disk you need to pony up most of the time.
Could you please elaborate on why? I thought they would be priced at a premium, as they are faster, have more memory, and have better performance because of bare metal, etc.
The question should be the other way around. Dedicated servers are not cheap per se; they're just being sold at the market price.
The real question is: why is AWS so expensive on all fronts (not just the hardware but also bandwidth)? And the answer, again, is because the market lets them get away with it.
AWS is so profitable it turned Amazon from a marginal business into a tech giant. As for Google and Microsoft, they don't get out of bed in the morning for anything less than a billion-dollar opportunity.
A large part of the cost of a VM on these clouds is paying their super-high employee comp for advanced software development (e.g. CosmosDB/DynamoDB), but most of all it's contributing to the high revenue growth that drives their stock prices.
It's not paying for the actual cost of hardware. Smart people know this and don't run in the cloud unless they get a sweetheart deal. Consider that GitHub, for example, runs/ran in its own datacenter and used the cloud only for spillover capacity. Zoom did the same (although Oracle has now cut them a sweetheart deal). Netflix built its own CDN, etc.
At a minimum you need a data replication and backup strategy. You're exposed to things like drive and RAM failures on dedicated hosts, so you'd need to think about RAID at least, unless you're running a system that clusters at a higher level (but then you need multiple machines).
However, this is basically a matter of learning it or hiring/renting a sysadmin to do it for you.
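As a sketch of how small the first step can be, assuming Linux md software RAID: a check that parses /proc/mdstat and exits non-zero when an array has a failed member (hook it up to whatever alerting you already have):

    # Hedged sketch: flag degraded md arrays by parsing /proc/mdstat.
    # Healthy status lines end like "[2/2] [UU]"; an underscore, e.g.
    # "[2/1] [U_]", marks a missing/failed member.
    import re
    import sys

    def degraded_arrays(mdstat_text):
        bad, current = [], None
        for line in mdstat_text.splitlines():
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current = m.group(1)
            elif current and "_" in "".join(re.findall(r"\[([U_]+)\]", line)):
                bad.append(current)
        return bad

    if __name__ == "__main__":
        with open("/proc/mdstat") as f:
            bad = degraded_arrays(f.read())
        if bad:
            sys.exit("DEGRADED arrays: " + ", ".join(bad))
        print("all md arrays healthy")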
MTBF stats for different classes of hardware at different hosts would be valuable to have, as they can be affected by things like datacenter temperature. But I've never heard of such a dataset.