Hacker News

With the price of pro fiber (redundant, with an SLA), I recently moved some apps back to our own in-house servers. This cut the price down dramatically. I would not recommend it for super-critical apps (unless you have your own state-of-the-art data center, but I am not speaking about that), but having 5-10 servers in a secure cabinet gives you enormous flexibility for little cost. We pay around $5,000/year to keep about $50k worth of hardware running. That's 10-20 times less than what we would pay on AWS. Of course this approach has limits, but it can work in some scenarios and should not be dismissed out of hand.



I like the idea, but remember that the SLA with your fiber provider doesn't mean your fiber will magically come back within a few hours when someone in the street accidentally cuts through the cable. Your clients will still experience downtime unless you have two fully redundant fiber lines entering from different directions. If you don't have that, I would recommend some kind of licensed microwave link on your roof as a failover.


We have a fallback coax connection routed a different way; it's only 500/100 Mb/s, but it does the job if the fiber is cut. We are also working on a 5G fallback just in case, as 5G bandwidth can be quite high at short range.
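As a rough sketch, this kind of primary/fallback uplink arrangement can be expressed on a Linux gateway with route metrics; the interface names and gateway addresses below are made up for illustration:

```shell
# Hypothetical Linux gateway with fiber on eth0 and coax on eth1.
# The kernel prefers the lowest-metric default route that is up, so
# traffic shifts to the coax uplink if the fiber link goes down.
ip route add default via 203.0.113.1 dev eth0 metric 100   # fiber (primary)
ip route add default via 198.51.100.1 dev eth1 metric 200  # coax (fallback)
```

Note that plain static routes only fail over when the link itself drops; detecting an upstream outage that leaves the link up usually needs a ping-based health check or a tool like mwan3 or keepalived.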


Sounds like you have a pretty decent setup then. With all the redundancies in place (and hopefully power redundancy too: batteries, a generator) you can run mission-critical software from there.


You should always have an offsite DR site; even if you are hosting on Big Cloud, that would be sensible to set up.

Your RTO and RPO needs will dictate your DR setup in both scenarios. Most apps can take a few hours' hit if the alternative is spending 2-3x.


Yes, and we also have offsite replication using ZFS send/recv, which would let us restore everything within a few hours in case of a major disaster.
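For reference, a minimal snapshot-and-replicate cycle with ZFS looks roughly like this; the pool, dataset, and host names are made up, and the actual setup would normally be driven by cron or a tool like sanoid/syncoid:

```shell
# Snapshot the dataset, then replicate offsite over SSH.
# "tank/apps" and "dr.example.com" are hypothetical names.
zfs snapshot tank/apps@2024-01-02

# First run: full send of the initial snapshot.
zfs send -R tank/apps@2024-01-01 | ssh dr.example.com zfs recv -F backup/apps

# Subsequent runs: incremental send from the previous snapshot,
# which only transfers the blocks that changed in between.
zfs send -R -i tank/apps@2024-01-01 tank/apps@2024-01-02 | \
    ssh dr.example.com zfs recv -F backup/apps
```

The snapshot interval is what sets your RPO here: daily snapshots mean up to a day of data loss in the worst case.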


My current employer has the same setup (my colleagues are mostly Linux nerds, so they're comfortable managing their own hardware), mostly because of security, though I can imagine cost is a big factor as well. We'd need a development VM for each developer, additional VMs for nightly installs, a build server farm, and systems for hosting git, project management, etc.


How do you deal with data egress costs w/r/t AWS though?


We don't use AWS.


I take it that you're indie and running web services from home? If you're willing to share, I'd love to see what kind of apps (that aren't "super critical") one can run from a cabinet at home.


"In house" doesn't necessarily mean in a residential home; it refers to "on premises" more generally.

A previous company of mine did the same thing - they converted a maintenance closet into a server closet. Even with the renovation costs to improve ventilation and upgrade the electrical capacity for the use case, it worked out substantially cheaper than cloud hosting. A few things we ran on it:

- A large data infrastructure. We had an EDI[1] side of the business, and egress bandwidth costs would have eaten the product margins and then some. A lot of traditional EDI happens over (S)FTP, and customers only periodically accessed the system (daily/weekly/monthly/quarterly, depending on the customer). Most enterprise EDI systems have retry logic built in, so minor amounts of downtime weren't relevant. If the downtime were for more than several hours, we could cut over to a cloud-based backup (which was fairly cheap to maintain, since ingress bandwidth is generally free).

- Our analytics environment. In addition to standard reporting, we also used our analytics toolset to create "data utilities", allowing power users on the business teams to access bulk datasets for their own downstream processes. The bandwidth usage would again have been cost-prohibitive to cloud-host, plus the data was co-located on-premises as well.

- Our B2B website. Traffic volumes were minimal, and it was primarily a static website. So hosting it behind Cloudflare added enough uptime guarantees for our needs.

- Dev environments. Both dev environments for all of the above, as well as something similar to LocalStack[2] (it's been a while; not sure if that was the tool used or something else) to mimic our AWS environment.
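The built-in retry logic mentioned for the EDI transfers above can be sketched as a simple exponential-backoff wrapper; this is a hypothetical illustration, not the actual tooling, and the sftp invocation in the comment is made up:

```shell
#!/bin/sh
# retry MAX CMD...: run CMD until it succeeds, doubling the wait
# between attempts and giving up after MAX tries.
retry() {
  max=$1; shift
  delay=1
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
  done
}

# Hypothetical usage: retry 5 sftp -b upload.batch user@edi.example.com
retry 3 true && echo "transfer ok"
```

With this shape, a few hours of server downtime just shows up as a handful of slow retries on the client side rather than a failed exchange.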

For all of those, less than a day of downtime had negligible financial impact. And downtime more than a day was a non-issue, as we had off-site fail-over plans to handle contingencies longer than that.

We also operated several services and applications where every single minute of downtime created a visible impact on our financials. Those were all hosted on AWS, and architected with redundancy and fault tolerance built in.

[1] https://en.wikipedia.org/wiki/Electronic_data_interchange

[2] https://localstack.cloud/


Thank you for the info. I got carried away mistaking “in house” for running a business (and hardware) from home!


To add, although not quite the same as running a full server closet, I've also repurposed an old laptop as an always-on server and run a handful of web services for both professional and personal purposes from my home connection:

- Some utility applications I maintain for consulting clients. These tend to be incredibly low volume, accessed a handful of times a month at most.

- I host some analytics infrastructure for clients. It's for reporting, rather than data capture, so latency and uptime aren't super critical.

- I run a personal zerotier[1] network, which I use both as a virtual LAN across all my devices, wherever they're hosted, and to tunnel my laptop and mobile traffic when I'm not at home. My internet gateway is hosted at home, so all my mobile and public wifi traffic routes through my home connection.

- I do a minor bit of web scraping. This is fairly low volume, and a mix of both personal and professional work. If it was higher volume I wouldn't host it at home, due purely to the risk/potential hassle of IP blocks and complaints.
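For illustration, joining the sort of ZeroTier network described above and routing all traffic through its gateway looks roughly like this with the ZeroTier CLI; `<network-id>` is a placeholder for the 16-digit network ID, and the `allowDefault` step assumes a member on the network is advertising a default route:

```shell
# Join the (hypothetical) network; the ID comes from the controller,
# e.g. my.zerotier.com for hosted networks.
sudo zerotier-cli join <network-id>

# Allow this network to install a default route, so all traffic
# tunnels through the gateway advertised on that network.
sudo zerotier-cli set <network-id> allowDefault=1

# Verify membership and the assigned virtual addresses.
sudo zerotier-cli listnetworks
```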

I have a fairly stable symmetrical 1Gbps fiber connection (plus a block of static IPs). It's a residential connection, so there's no firm SLA, but it still achieves ~500Mbps when experiencing "congestion" (in whatever form that takes for fiber to the premises) and has only been down twice: once during a total power outage, and once when an attempt to use the parental control features for one device resulted in every device being MitM'd by an invalid SSL cert.

I also have a mobile hotspot I use when traveling for work, which I have connected as a failover ISP when I'm at home. That covered both instances of downtime I've experienced. And in case it doesn't, and a client needs to access a service, I maintain external backups that I can spin up on a VPS somewhere in a pinch. Probably not enough guarantees for a primarily SaaS-based business, but it has worked without a hitch when user-facing SaaS websites/apps are only a minor secondary component of the work.

[1] https://www.zerotier.com/



