
I do things the old-school way: two bare-metal servers at different data centers, with totaluptime.com acting as a load balancer with auto-failover.

Deployment is done via SFTP by pressing the Publish button in Visual Studio, which deploys to the inactive server. I then manually trigger tests on GhostInspector (this could be automated via its API) to make sure I didn't break anything. Then I run a custom script that makes the load balancer redirect traffic to the upgraded server.
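For anyone curious what wiring those manual steps together might look like, here's a rough sketch. It assumes GhostInspector's suite-execute endpoint returns a list of test results with a "passing" flag (field names are from memory, so check their API docs), and it leaves the load-balancer flip as a placeholder since that depends entirely on Total Uptime's API and your account setup:

    # Rough automation sketch: run the GhostInspector suite against the
    # inactive server, then (only if everything passes) flip traffic over.
    # The API key, suite ID, and the promote step are placeholders.
    import requests

    GI_API_KEY = "YOUR_API_KEY"    # placeholder
    GI_SUITE_ID = "YOUR_SUITE_ID"  # suite that tests the inactive server

    def run_smoke_tests() -> bool:
        # Execute the whole suite and wait for the aggregated results.
        resp = requests.get(
            f"https://api.ghostinspector.com/v1/suites/{GI_SUITE_ID}/execute/",
            params={"apiKey": GI_API_KEY},
            timeout=900,
        )
        resp.raise_for_status()
        results = resp.json()["data"]
        return all(r.get("passing") for r in results)

    def promote_upgraded_server():
        # Placeholder: call your load balancer's API here to redirect
        # traffic to the freshly deployed server.
        raise NotImplementedError("load-balancer flip depends on your provider")

    if __name__ == "__main__":
        if run_smoke_tests():
            promote_upgraded_server()
        else:
            print("Tests failed; leaving traffic on the current server.")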

Solo founder, small bootstrapped business generating $50k/month with 1,000 paying customers. Hosting costs are under $500/month. I could double the number of clients without needing to upgrade the hardware. I looked into moving to AWS or Azure, but I can't justify paying 4x more for the same performance.




Thanks for sharing, I'm considering bare metal as well for a project. Is latency between the two data centers an issue for you, e.g. is one of your two servers running a SQL database as master?


Right, I replicate the database. Latency hasn't been a problem for our volume. During our peak hours we get 30 requests/second, so it's pretty manageable.


As someone who likes a down-to-earth approach, I find it strange that reading about doing things the old-school way actually sounds refreshing. It would be interesting to anyone who builds up from small blocks. Do you have a blog where you write about this ongoing work?


This is probably a very dumb question, but what exactly do you mean when you say you run bare metal servers at a data center?

I run a kind of similar setup sans Visual Studio, so I'm very interested in understanding your setup a little better.


Bare metal means I'm not running on VMs but on dedicated servers, like the ones you can rent from OVH and many other providers.



