Self hosting in 2023 (grifel.dev)
566 points by michalwarda on Feb 19, 2023 | hide | past | favorite | 296 comments



Self-hosting is weird. I hear a lot of people say it's so much work, but I put like an hour a month into my server at most. I think it's mostly fine from an administrative standpoint; as mentioned elsewhere, not much has actually changed since 2003. You definitely don't need Kubernetes. If anything, I think most of what's changed since then is that there's been a flood of bad sysadmin advice making people believe they need cluster management stuff on a single computer. Cattle is plural. What you've got is a pet. Love it and it will love you back.

I host a lot of weird crap on my living room server including a publicly available Internet search engine, and server administration wise it's not a lot of work at all. The search engine requires a fair deal of babysitting, but that's on me for building shitty admin tools.

It's also great to have a beefy always-on computer to run compute jobs flooring all cores of the CPU for 200 hours on and not have to worry about bills later. Like I can't recommend this enough. There's so many cool projects gated behind taking more than a few hours to process.

What does give me a hassle is hosting stuff I didn't build myself though. Like gitea. Got a bunch of bots signing up creating spam profiles. I think as a rule I'd recommend against hosting anything that offers comments or sign ups. Cloudflare helps, but only just.


I kept a server in my room for a decade and I plan to retire it this year. An hour a month is actually a lot, as this overhead is highly non-uniform; the server may work fine for months, then need a whole day's worth of work, which I can no longer easily sustain. [1] That said, I have of course run lots of software I haven't written, so maybe that's the actual takeaway from your experience.

[1] For example, distro upgrades. Have a distro that somehow promises no breakage? Good! Now I can spend hours to migrate to that distro and learn all its quirks only to find that the promise didn't actually hold.


This is sort of true of owning a car or a home or basically anything. Of those things, my server requires by far the least amount of maintenance (even counting the day of more invasive maintenance it needs once every two years or so).

I also think it's perfectly fine if my site is down for a day or so because I'm away on travel and it's gone offline. 89.9999% has five nines as well.


> I also think it's perfectly fine if my site is down for a day or so because I'm away on travel and it's gone offline. 89.9999% has five nines as well.

I think this is the key takeaway: host at home unless you want good uptime.

I don't host from home when I want the page to be available even if:

1. There's a power outage at my home

2. I need to e.g. change a lightbulb, so I have to turn off the power (and therefore my router and server)

3. Someone accidentally switches off either the router or server


You turn off the power to change a lightbulb? It never occurred to me before that that could be a dangerous activity. You hold it by the glass and unscrew it, then do the reverse for the new one. Your hand should never go near the socket, right?


I feel like there's a joke about sysadmins and lightbulb changes hiding in here somewhere


> host at home unless you want good uptime.

It doesn't even really take much to get 99.999% uptime from a home server. I have my server, cable modem, router, and firewall plugged into a small UPS that can keep everything going for about 30 minutes if the power goes. I've never had downtime due to power interruptions.

If I lived in an area where the power was less reliable, I'd use a bigger UPS.

In order for someone to accidentally switch off a critical piece of equipment, they'd have to go to my closet and remove all of the boxes and other crap I have stacked in front of it in there. Not likely to happen by accident.


You realize that 99.999% uptime is 6 minutes of downtime a year? How is your internet even that stable? And do you never restart the machine?
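For reference, the arithmetic on availability budgets works out like this (a quick sketch; the targets shown are just examples):

```python
# Yearly downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_minutes(0.99999), 2))  # five nines: ~5.26 min/year
print(round(downtime_minutes(0.999), 1))    # three nines: ~525.6 min (~8.8 h)
```

So five nines leaves roughly five minutes a year for reboots, updates, and ISP hiccups combined.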


A UPS solves a lot of these problems. You may lose availability during a power outage, but you don't lose uptime, and those hard shut-offs are nasty for hard drive longevity.


Self-hosting shouldn't be a chore. Self-hosting should be fun! Sometimes I just do maintenance because I can and I like it, not even because it's needed. I could be making the dumbest incremental changes that don't even net a profit in the end just because I like doing it so much.

To me they are definitely pets and I love them for it.


Same here!


It's a broad church I think, it depends on what you're self-hosting and why. Email is more work than Nextcloud, and an hour or two of maintenance work feels different when it's a hobby you're doing on a quiet weekend than when you're busy with life and some essential service has gone down unexpectedly.

Security, as well, is a very deep rabbit hole that can be hard to really get your head around unless you have specific training in it. I "self" host some stuff on a public-facing VPS but there are a couple of services I have moved back to third-party hosting because I had persistent anxiety about unexpected downtime or data breaches.

> It's also great to have a beefy always-on computer to run compute jobs flooring all cores of the CPU for 200 hours on and not have to worry about bills later.

What do you mean not having to worry about bills? Presumably flooring your CPU for 200 hours would generate a hefty enough electricity bill or am I misunderstanding?


> What do you mean not having to worry about bills? Presumably flooring your CPU for 200 hours would generate a hefty enough electricity bill or am I misunderstanding?

Not as much as you'd think. Looking at my power bills, the difference between a month with a relatively idle server and a month with a very busy server is somewhere around 30 kWh. As a yearly average, the server has added about 60 kWh to my consumption.
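A rough sanity check on that, with assumed numbers (the wattage and price below are placeholders, not the parent's figures):

```python
# Back-of-the-envelope cost of a 200-hour all-cores job.
# All numbers are assumptions for illustration.
watts_under_load = 150          # extra package power vs. idle, assumed
hours = 200
price_per_kwh = 0.30            # EUR/kWh, assumed

extra_kwh = watts_under_load * hours / 1000
cost = extra_kwh * price_per_kwh
print(f"{extra_kwh:.0f} kWh extra, ~{cost:.2f} EUR")
```

At those assumptions a 200-hour run is on the order of single-digit euros, which is consistent with the 30 kWh/month figure above.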


I don't think people usually complain about it being too much work in the sense that you have to put in several hours a week. People usually even like the initial setup procedure because it's fun. When I was studying, self-hosting things was just another hobby.

For me it's now more about always having it in the back of my mind that I should do regular updates, make sure the disks are fine and everything is healthy. If something goes wrong the first thought will be "something broke" and not "oh I guess the provider has some issues, let's try again later". Or if in a few years I don't want to do it anymore I know I'll always postpone moving it to a provider because it'll be a hassle.

There's just so many other things to worry about or work on that I don't want to also add time to make sure my email server is running.


> I don't think people usually complain about it being too much work in the sense that you have to put in several hours a week. People usually even like the initial setup procedure because it's fun. When I was studying, self-hosting things was just another hobby.

I wonder if this might not cause some enthusiasts to go for far more complicated set-ups than they really need, which in turn causes a bit of a hang-over when they have less time for the "hobby".

> There's just so many other things to worry about or work on that I don't want to also add time to make sure my email server is running.

I'd argue that of all the things you could self-host, email is probably among the ones where the juice is least worthy of the squeeze. It legitimately is a constant pain in the ass from set-up to the daily operations.


> having it in the back of my mind that I should do regular updates, make sure the disks are fine and everything is healthy.

All of that can be easily automated. If my system starts getting flaky, I get an email telling me well before it actually fails. Theoretically, if an automated update fails, I'll be notified of that as well, but that hasn't happened in 15 years or so, so it's been a while since it was tested.
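As a sketch of that kind of automation (the mount list and 90% threshold are placeholders; you'd pipe the output to mail(1) from cron):

```python
#!/usr/bin/env python3
# Minimal cron-able health check: prints a warning only when a filesystem
# crosses the usage threshold, so cron's mail only arrives when it matters.
import shutil

THRESHOLD = 90      # percent used, placeholder
MOUNTS = ["/"]      # whichever mount points you care about

def full_mounts(mounts, threshold):
    """Return warning strings for mounts above the usage threshold."""
    warnings = []
    for m in mounts:
        usage = shutil.disk_usage(m)
        pct = 100 * usage.used / usage.total
        if pct >= threshold:
            warnings.append(f"{m} is {pct:.0f}% full")
    return warnings

for line in full_mounts(MOUNTS, THRESHOLD):
    print(line)
```

Add SMART checks and service pings in the same style and you get the "email me before it fails" behavior described above.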


Why not just disable signups to your Gitea? E.g. just invite people to it who you want on it, if anyone.


I self-host Gitea, but it's for my team, and you can only reach it through my self-hosted WireGuard.

It is not a lot of work at all; I update Gitea a few times a year, and it's as simple as replacing one binary file.


Yeah mine is publicly accessible, I should have probably added.


If you don't need all that cloud operations/deployment infrastructure, self hosting a website from home in 2023 is as easy as it was in 2003. Get a DDNS, do the NAT port forwarding, run some apache/nginx/whatever and push your files with sftp or similar to the server. In contrast to 2003, upstreams got better -- even Germany has average upstream speeds of 20mbit/second nowadays [1]. With 20mbit/sec you can do quite something with a page-only website and contemporary event based webservers. And thanks to SoC systems, energy consumption is a no brainer.

[1] https://www.statista.com/statistics/1338657/average-internet...
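The 2003-style recipe above boils down to a vhost like this (a minimal nginx sketch; the hostname and path are placeholders):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name home.ddns.example;   # the DDNS hostname you registered

    root /var/www/site;              # push your files here via sftp
    index index.html;
}
```

Forward external port 80 to this machine on your router and you're serving.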


Honestly a basic VPS is a lot easier to me than running a physical home server. Dirt cheap, never have to worry about power failures, dust, heat, home ISP being shitty, etc.


I do both. I run a home server with several services that are used by me when I'm not home, and by my close friends.

I also use a VPS for services intended for the general public.


Doing both makes a lot of sense for development as well... you don't want to pay for a VPS for something that isn't going to be developed to completion or run full time, save that for the real projects.


I have only a static website without too much content or traffic; I even use the free Oracle VPS that they offer. It took a while to get the hang of ingress rules and opening ports in their interface in addition to the CLI, but now it works and I'm not paying anything. Out of curiosity: where do you get your VPS?


I run VPSes and a home server. I avoid all the DNS and port forwarding issues by having a WireGuard server on OpenBSD that also runs Unbound, so I have some internal DNS entries for stuff I host at home, like calendar and contacts.


> self hosting a website from home in 2023 is as easy as it was in 2003. Get a DDNS, do the NAT port forwarding

Did they have CGNAT in 2003? Because that breaks everything.


Even from the client side, CGNAT can break a lot of things. I'm very glad that my mobile provider gives me IPv6 as well - that allows me to get to my home systems without any NAT in the way. Even hotspot clients get an IPv6.

Of course, IPv6 is its own can of worms, but (cross my fingers) it's working for me.


CGNAT is both amazing and shocking. Amazing in that it kinda works. And shocking because the average person will never know why their shit is broken.

For me it broke online gaming. Luckily my ISP simply gives a static-ish IP to anyone who has an issue with CGNAT. But some ISPs force you to pay for a static IP, and some don’t even offer static IPs. I mean, I’d pay, but soon that won’t even be an option. It’s scary.


For me 2023 is exactly like 2003 from this point of view: a dynamic DNS account with inadyn/ddclient to refresh it when the IP changes, and it's almost like having a static IP.

Even better if you have e.g. a Linode or any server with a public IP that can run Wireguard or OpenVPN. Then you can run your own VPN server, configure your DNS, and connect to anything from anywhere.

Yggdrasil (https://yggdrasil-network.github.io/) is also another interesting IPv6-based solution - I have played with it a bit, but I still prefer to use my VPN, and do nginx reverse proxy from my Linode to my network over VPN when needed.
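The ddclient half of that setup can be a handful of lines (a sketch for a dyndns2-style provider; all values below are placeholders):

```
# /etc/ddclient.conf
daemon=300                        # re-check every 5 minutes
protocol=dyndns2
use=web, web=checkip.dyndns.com   # discover the current public IP
server=members.dyndns.org
login=your-username
password=your-password
home.example.com
```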


I wish I could get both. Alas, I’m not behind a CGNAT and getting IPv6 would require giving that up :/


also tailscale could be an option.


Use 6in4.


I'm in the US so I pay over $100/mo. for 15mbps upstream, but at least no NAT.


I would argue it is much easier now. There are a ton of distributions that cater to this use case; YunoHost is probably the most well known one, but there are a lot more if you look at the awesome-selfhosted list.

I would also say that if you only want a static site, the OP's setup is almost overkill. Just install Caddy and a dyndns service. It takes care of the certificate for you and is super simple to set up.


FYI, Caddy has a DDNS plugin https://github.com/mholt/caddy-dynamicdns. Works with any DNS provider that has a Caddy DNS plugin which you'd typically use to solve ACME challenges, but instead it will set A/AAAA records.


Thanks for your work on Caddy!

Also, for those that don't know, Caddy also automates SSL certs (unlike nginx) and can render markdown files using templates [0].

[0]: https://caddyserver.com/docs/caddyfile/directives/templates


This is new to me. I was going to run a separate DDNS for my home server, but given that the domain is only used in Caddy anyway, that's one less place to configure stuff. Thanks!


Yeah, Cloudron is another distro for this use case.


My problem is that I have never been able to purchase residential internet that allowed me to open port 443 or port 80.


You can setup a wireguard network then proxy pass from a cheapie VPS if that’s helpful. Probably only worth it if you have multiple services though.
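That proxy-pass leg might look roughly like this on the VPS (an nginx sketch; the tunnel address 10.0.0.2 and hostnames are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name home.example.com;

    ssl_certificate     /etc/letsencrypt/live/home.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/home.example.com/privkey.pem;

    location / {
        # 10.0.0.2 is the home server's WireGuard address
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```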


Use Cloudflare. Or, rent the cheapest machine you can with a public IP and use something like Rathole. I like Cloudflare because it’s fast and free, but they do MITM you.


Can you set up Cloudflare as a blind pipe? Just forward the packets with rewritten return address


Cloudflared is what I believe they are referring to and it proxies HTTP/S, although maybe it can do more.


Why do ISPs do this? Is there a legitimate reason or they just feel that their clients don’t host these services so they might as well block the ports for safety?


Spam reduction from botnetted machines, and selling business lines, are the reasons I have seen given.


The major reason will always be support burden. For HTTP to work reliably you need to put clients into a DMZ anyway, and if you let computer-illiterate end users operate in a DMZ you end up supporting them regardless, because otherwise your network will be at risk of depeering/blacklisting. Supporting HTTP behind NAT is again a significant support burden that costs actual money while your competitors offer the service cheaper. It makes sense to not only not support HTTP behind NAT, but to block the ports altogether for retail plans and offer proper, liability-waived DMZ access under enterprise plans.


ISPs are media companies. They do not want regular people publishing content or having a voice unless they can get their commission. The world would not have become just 5 websites with identical content if everybody had a server at their house.


I believe this to be true because in the usa they really are media companies, aka Comcast, and I don't have this problem in other countries where I lived where ISPs were ISPs and nothing else. They delivered internet and maybe also phone.

In Taiwan I get 1gbps down, no caps, and a bunch of weird networky config stuff I don't quite understand but apparently you don't get in the usa, for like 30 usd/month. It's awesome.

Plus I'm torrenting basically 24/7 and my isp just doesn't give a fuck. When I did that in the usa Comcast sent me an email for every single torrent lol. My download is in the tens of terabytes a month and uploads are at least a terabyte a month. ISP doesn't care. Love it.


Not sure but probably some security thing - if you don't need it why leave it open to abuse.

An ISP I've used in the past hasn't allowed self hosting on a standard plan but has allowed on a plan you need to ask for specifically.


Not necessarily security. Maybe just not having to deal with reports for self hosted illegal content. They might be more than happy to let other companies deal with that. Their competitors are probably doing the same thing.


Do ISPs still do this? I've been using a business cable account for years, but when our local telco went fiber and wanted me to switch, they told me they didn't have such restrictions. I haven't made the switch because configuring PPPoE is more complicated than plugging in a network cable and asking for an IP.


You can try https://pinggy.io


Cloudflare Tunnel is great for NAT traversal.


Definitely. I'm just so used to the git pipeline flow that it would break my usual flow a bit too much :). That's why those new tools just make it so amazing! Especially combined with better and better ISP


Not _exactly_ as easy, you also need a valid https cert, and set up a cronjob to renew it. And for some other reply here, "as easy as it was in 2003" really doesn't mean "easy" :v


> Not _exactly_ as easy, you also need a valid https cert, and set up a cronjob to renew it.

Web servers like Caddy automate this for you: you just indicate that you want HTTPS for a particular site and the rest is taken care of for you (in the case of public sites and HTTP-01 challenges, at least). Link: https://caddyserver.com/docs/quick-starts/https
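For a static site, the whole Caddyfile can be as small as this (domain and path are placeholders); Caddy obtains and renews the certificate on its own:

```
example.com {
    root * /var/www/example
    file_server
}
```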

Even Apache2 has mod_md which does pretty much the same thing (sans DNS-01 provider integrations, at least out of the box), so it's going to be good enough for most cases and similarly easy to Caddy. Link: https://httpd.apache.org/docs/2.4/mod/mod_md.html#mdomain

Nginx integrates well with certbot, which does take a bit more work and configuration, but even that is passable: https://certbot.eff.org/

Things can get a bit tricky when you want to run your own CA or ensure mTLS, but in most cases neither will be necessary.

As for whether you even need HTTPS in the first place, I'd say that it won't hurt in most cases and will guard against MitM, the ISP included.


Can't do http challenges because my ISP blocks port 80 inbound.


You can configure Caddy to disable certain challenge types[0], in this case the HTTP challenge, like so:

    example.com {
        tls {
            issuer acme {
                disable_http_challenge
            }
        }
        file_server
    }
[0] https://caddyserver.com/docs/caddyfile/directives/tls#acme


> Can't do http challenges because my ISP blocks port 80 inbound.

My ISP also put me behind CGNAT, which effectively meant that all of the inbound traffic got dropped. I worked around that by getting the cheapest VPSes that I could find and then setting up WireGuard and simply forwarding the traffic to my homelab servers. So I got all of the compute that I have available locally, all of the RAM and all of the cheap HDD storage, but a static IP address.

I actually wrote about the process a few years ago: https://blog.kronis.dev/tutorials/how-to-publicly-access-you...

(note that you probably would only want to forward 80 and 443 ports in most cases, not everything; outside of testing boxes)
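On the VPS side, the forwarding itself can be a few iptables rules (a sketch; assumes the public interface is eth0, the tunnel is wg0, and the home peer is 10.0.0.2):

```shell
# Allow the kernel to route between interfaces
sysctl -w net.ipv4.ip_forward=1

# Send inbound web traffic down the tunnel to the home server
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 \
    -j DNAT --to-destination 10.0.0.2

# Rewrite the source so replies come back through the VPS
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```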

Personally, I opted for Time4VPS in the end, which I use for the rest of my hosting as well: https://www.time4vps.com/linux-vps/?affid=5294#annually (affiliate link, they do have good discounts at the moment for yearly billing, though)

Then again, something like Scaleway Stardust instances could also be a really good fit, when they are available: https://www.scaleway.com/en/stardust-instances/

For those not chasing after the savings of a few Euros, Hetzner is also going to be more than enough: https://www.hetzner.com/cloud (or DigitalOcean, or Vultr, or any other VPS provider out there)


I don't need port 80 now that DNS challenges are easily automated. Port 443 is open, so that's fine.

Also, if I have a VPS, why not just serve from the VPS?


"Not _exactly_ as easy, you also need a valid https cert, and set up a cronjob to renew it. And for some other reply here, "as easy as it was in 2003" really doesn't mean "easy" :v"

Took me 15 minutes.

https://developers.cloudflare.com/cloudflare-one/connections...


Super easy with the right DNS provider and something like go-acme/lego. Add a cron job and done. Yeah, not zero effort, but compared to early Let's Encrypt with HTTP-01 and such it's quite easy.

https://github.com/go-acme/lego
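A sketch of that flow with lego and Cloudflare-managed DNS (the token, e-mail address, and domain are placeholders):

```shell
# One-time issuance via the DNS-01 challenge -- no inbound ports needed
CLOUDFLARE_DNS_API_TOKEN=your-token \
    lego --email you@example.com --dns cloudflare --domains example.com run

# From cron, renew when fewer than 30 days of validity remain
CLOUDFLARE_DNS_API_TOKEN=your-token \
    lego --email you@example.com --dns cloudflare --domains example.com renew --days 30
```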


You don't have to have that. Expectations have just increased since 2003.


iOS won't let websockets work without https so I think you can't do things like self-host jitsi meet. Many appliance type devices won't allow adding self-signed certificates either.


To be fair, you couldn't self host jitsi meet in 2003 either


Websockets didn't exist in 2003, so that's just a case of higher expectations.


Back to plain sites and not websockets, browsers didn't go through a missing https warning process in 2003 either so there are higher barriers and not just expectations.


> even Germany has average upstream speeds of 20mbit/second nowadays [1]

Is it common for EU countries to not have symmetric gigabit fiber?


Same in Australia. I don’t believe you can get a symmetrical residential connection. Business, sure. But residential, no.


On that note: with our (Australian) National Broadband Network (NBN), residential connections don't always have the best uptime.

For example, at my last house (Melbourne), a 100/40 FTTC connection... the NBN company kept on having multi-hour outages (booked ahead of time) every 2-3 months. Note that's not my ISP, it's the NBN itself. For "upgrades" in the area and similar.

So very much "not business grade". Not even startup grade really. :(


Going from 1000/50 to 1000/1000 would cost me an 860€ premium right now (Germany) and this is a good offer, it can get more expensive without special promotion.


It got better with fiber in my area. Now 1G symmetric is available for 199€ per month.


Note: Europe, but outside EU/EEA.

Yup. I have a gigabit fibre right up to my home, but the best speed available to me as an individual is 400/40. Gets really annoying from time to time when I need to upload like a couple of gigabytes.

On the bright side it's like €30/month, which also gets me cable and some streaming service subscriptions (HBO Max, Tidal, shit like that), which from what I can tell is a pretty good deal compared to more western countries.


At least in France, it's very common. I'd say symmetric is the exception.


I use DDNS with Cloudflare, which offers free DNS. I built an open source Windows client for this (could be cross-platform, just haven't bothered yet).

https://github.com/drittich/DnsTube


What is the worst case scenario for security on your network, in the above self hosted setup ?


No idea where they got that data, but I call bullshit. Most people I know are still on DSL, which means 20mbit upstream is the usual maximum (sometimes 40), so I'm not sure where all the fibre users needed to even out that average would live.


In the US this is almost always prohibited by the residential internet terms of service, and can subject you to account cancellation.


Is it really? Which part is prohibited? Isn't that part of what you sign up for when you specify upload speed with your service contract?

I'm in the US and have never not had something or other being served on public facing ports


"servers" are the part that is prohibited per ToS, even if the ports aren't blocked.


In many places in the US 20mbit/s up is either very expensive, or not an option.


True if you ignore certs and the possibility of CG-NAT.


That is not "easy". If it was, no one would be using cloud hosted sites like carrrd or squarespace or wix or whatever


Those are not solutions to “I don’t know how to host”, those are solutions to “I don’t know or want to know anything about how websites work below the GUI layer”. The hosting (in terms of dns and serving the bits) is almost incidental to their business model of designing and managing.


I'm all for self-hosting and run a few services on my LAN, but would never allow HTTP traffic from the internet to enter my network. If you've ever seen a public nginx log, you'll know it constantly gets hit by all kinds of bots and vulnerability scanners. No matter how secure and isolated I make the web server, I just don't want to see that traffic, nor expose my home IP to further scrutiny.

It's trivial to host a static site on someone else's infrastructure for very cheap or free, and you get to benefit from CDNs if you need it. I don't see what you would gain from doing this from your home network.


Random http traffic seems about as harmless as it ever was, if you just host static pages, or your own scripts/software and nothing generic.

What's a bit iffy is the privacy aspects. If you host anything that can identify you, your identity is just a HTTP request away for anyone who sees traffic from your IP address.

But these concerns can be somewhat mitigated by making the server drop connections unless a known Host header or SNI is sent.

Wholly recommended if you host stuff on your home IP!

Easily done with nginx by ensuring all server {} blocks have a proper server_name and a separate default server {} block just having return 444. So unless someone already knows the domain name of your home server, they'll not be able to access it with just an IP address. You'll stop many bots this way, too.
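A minimal version of that nginx arrangement (the hostname is a placeholder):

```nginx
# Default server: anything arriving by bare IP or with an unknown Host
# gets the connection closed without a response (444 is nginx-specific).
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

# Real site: only reachable if the client sends the right Host header
server {
    listen 80;
    server_name myhome.example.com;
    root /var/www/site;
}
```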


> If you host anything that can identify you, your identity is just a HTTP request away for anyone who sees traffic from your IP address.

run it behind Cloudflare


Not sure why I'd let some US company see all the traffic, if the same goal can be achieved other way and the goal is privacy...

DDoS is probably mitigated at residential ISP level. At least I never had to deal with it with the current ISP I had for 14 years, and I had like 10 MBit/s upstream bandwidth most of that time.


I think an alternative you can use a cheap VPS (there are very inexpensive options for this[1] for like $15/y) and tunnel traffic from your home server. If you're only serving static content, that might be enough to host everything as well (or $20/y for a kvm slice; for anything more complicated, I think 1GB might be advised).

edit: this way, you get a free IPv4 (static) address as well, as well as security and avoiding DDoS on your home connection.

[1] https://buyvm.net/openvz-vps/


> would never allow HTTP traffic from the internet to enter my network

The security aspect is a struggle. Something like Shellshock [1] needs to be patched immediately, which really requires automation. With Shellshock, even self-hosted static hosting could be impacted. Similar vulnerabilities may be discovered in the future.

I'd want to isolate anything external facing from the rest of my home network for that reason.

[1] https://en.m.wikipedia.org/wiki/Shellshock_(software_bug)


Put it behind cloudflare if you’re worried about it. It’s free and increases performance.


You can create a second network behind your router to keep your LAN separate from the server's LAN.


Not sure why you'd self-host a public site from home when you can get a VM from https://www.hetzner.com/cloud for €4.81/mo (1 vCPU / 2GB RAM / 20 TB bandwidth) with much better internet access. Residential internet connections are more unreliable, have poor upload speeds, and require running a server 24/7 with ongoing electricity costs.

Or if it's a static rendered website you can publish on GitHub Pages for free and use a CNAME to use a custom domain which is what we do for our docs https://docs.servicestack.net created with https://vitepress.vuejs.org that uses GitHub Actions to automatically build and publish the site on commit.


You don't control a VPS, uptime doesn't matter, and even my 5 Mb/s upload is just fine. Electrical costs are marginal. But most importantly, there is nothing more convenient than $ cp photo.jpg ~/www/ and sending someone the link example.com/photo.jpg. Plus you have tons of space.

That's not to say that a VPS in addition isn't useful or fun. https://indieweb.org/POSSE and all. It just isn't at all required.


I mean is scp any more difficult?


Or git push if you are using GitHub pages?


It’s strange to compare self hosting and GitHub pages.

In most case the main factors to choose self hosting are independence, fun and learning by tinkering. That’s three items that GitHub Pages doesn’t provide.


Just recently I linked my home LAN with my VPS with WireGuard. I needed to configure queues on my home router because I got consistent 100MBps on the wire and this is a bit too much for my VPS provider (too much to pay, not to handle).


I don't think people are seriously doing self-hosting for practical reasons - it's more for the fun and pride of setting something up yourself.


I run a basic commercial site from my home because after 2 years, my co-op owned fiber ISP has never had so much as a blip of downtime. If they did, I could spin up a cloud server, pop the backup down there, and redirect via Cloudflare in a few hours at most.


Some people have symmetric gigabit (or faster) home internet these days.

Renting slices of someone else's shared HW hardly compares in terms of price to owning the full machine. Expected lifetime of some cheap SBC you'd use for hosting your personal projects is ~10 years. It can cost say EUR 50-100 to run it that long these days. 10 years of that hetzner VM is EUR 577. That's 500 EUR I can do something else with per machine.

Whenever I try calculating HW ownership vs renting in the "cloud" for personal usecases it at worst comes up to paying off in a year, and usually sooner.

There are also many other benefits of having physical access to the HW.

Just right now about 7 SBCs I run have total cost of ownership at $20 on average each and I've been running them for 5-7 years...


> if it's a static rendered website you can publish on GitHub Pages for free

Reminder that if you are going to do this, then it has to lie within the subset of uses deemed acceptable by the GitHub terms, which forbids hosting on GitHub Pages for e.g. e-commerce or any other business reasons; GitHub Pages is not a 1-to-1 substitute for e.g. Cloudflare Pages or Netlify.


I want to self-host a public site because I can be fully in control of the stack. It gives satisfaction to distribute a website to the entire world without depending on third parties other than your utility company and internet connection. It's fun and I learn a lot from it. But from a business perspective, your solution is of course better, just not as fun.


Hetzner is a good provider, but you can get some resources for free as well. https://paul.totterman.name/posts/free-clouds/


These free tier offers will kill you on the bandwidth costs.


Two alternatives: cloudflare in front or oracle (which has ridiculous free traffic)


He said it’s a static page too… don’t even need a VM. He can just dump it into Amazon S3 and put cloudfront on top of it. Unlimited scaling across the globe, 1TB free of data transfer, free SSL.


> if you want to Dockerize it, then Docker related stuff is required i.e. Kubernetes

Yeah... I'm not surprised the author gave up on it, but you absolutely do not need to use k8s if you're dockerizing something like a blog. I'm a little baffled at this sentiment.


Why k8s? docker-compose works just fine.
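For the blog-sized case, a docker-compose file really is all the orchestration needed (the image names and ports here are hypothetical):

```yaml
services:
  blog:
    image: ghcr.io/example/blog:latest   # hypothetical blog image
    restart: unless-stopped
    expose:
      - "8080"

  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data                 # persists ACME certificates

volumes:
  caddy_data:
```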


Once you get into dozens of pods with multiple services with complex configuration, need to reliably orchestrate the order in which those services come up and need to use shared nothing secrets, Kubernetes starts to pay off these days, thanks to the number of handy recipes and playbooks available after searching through all the spam and marketing fluff.


Yeah. Maybe I phrased it badly but I meant like more complicated apps than a blog :D


Ah, gotcha! Even then, I've found i've gotten a pretty decent distance with docker-compose before k8s sounds like an attractive (or least bad) proposition


> a cheap UPS to keep it running in case of power outage (which happened once in the past year, so it might be a bit overkill).

I don't think this is overkill, I think it's necessary. A UPS is pretty cheap compared to the completely-uncontrollable chance of losing power in the middle of working on something. Sure, there's autosave and most things are relatively recoverable, but this is just such a cheap investment (at least, for most people who will be reading this forum, I understand it isn't for everyone) compared to such a scary reliance on something that is not very reliable (which makes me very upset about income disparity, it's something everyone should be able to have).

My power is extremely unreliable so maybe I'm a little biased - we get brown-outs (power drops for maybe 2 seconds and comes back) maybe 3-4 times a year, which is wild imo for what should be a very well-developed area (relatively wealthy suburb of Chicago), but it's a pretty old building so not too surprising I guess. That said I will NEVER have a desktop setup without a UPS, and I encourage absolutely everyone to do the same.

I also have a wireless card so that in an emergency I could tether my pc to my phone for internet if need be, and I highly recommend this as well. It's enough to reconnect and send a couple Slack messages quickly and then disconnect again and wait out an outage, without driving up your data bill. (Just use your phone? Maybe, but I don't have everything installed, and I can't type for shit on my phone keyboard anyway)


I have the feeling this is definitely more of an American problem. I'm from a European country and the last time I had any sort of power outage was more than 10 years ago (can't even remember it, to be honest). Even with the current surge in prices, the production and transmission system is still as reliable as ever.


Here in Western NJ, ash trees have been decimated by the emerald ash borer. The big impact is that communities with lots of ash trees are now getting constant power outages from dead trees coming down on lines. The power company is supposed to trim back dead trees near the lines (ever since Hurricane Sandy), but the ash die-off has overwhelmed them.

Our little town of 6,000 or so loses power 3-4 times a year. One road with lots of ash trees loses power 10x a year.

People are beyond UPS here and into generators.


UPSes have other benefits in power outages too, once you've shut the computer down. You can use one to keep LED lighting on at night, or to charge phones and tablets. Moreover, if your power is out, you might still have 5V on your cable line -- so if you can deliver the 8-9W the modem wants (and likely similar for your router) you can still have your normal internet service.

If you have a beefy enough one (we're talking $200+ UPSs here), you can use it to power a modest refrigerator for a bit, getting temps back down to proper levels to prevent food spoilage if you end up with a long term outage that spans 48-72 hours. Most fridges only pull 120-180W when they run, and depending on the temp and time of year, only run the compressor for 15 to 20 minutes at a time. Plugging the fridge into it once or twice on the 2nd and 3rd day of an outage can save you a TON of grief and food spoilage.
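The arithmetic above is easy to sanity-check. A rough sketch in Python, where the 300Wh usable battery figure is my assumption for a mid-range consumer UPS (the fridge numbers come straight from the comment):

```python
# Back-of-envelope: how many fridge compressor cycles a UPS battery might cover.
battery_wh = 300      # usable energy of a mid-range consumer UPS (assumed)
fridge_watts = 150    # typical compressor draw, per the comment
cycle_minutes = 20    # compressor run time per cycle, per the comment

wh_per_cycle = fridge_watts * cycle_minutes / 60
cycles = battery_wh / wh_per_cycle
print(f"~{wh_per_cycle:.0f} Wh per cycle, roughly {cycles:.0f} cycles on a charge")
```

Even a handful of cycles spread over the second and third day of an outage is what keeps the food cold, which matches the comment's suggestion of plugging the fridge in intermittently.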


I used to use UPSes at home, which protected me from a handful of power outages.

However, those lead acid batteries don't last forever, and when they fail you're worse off than using a dumb regular surge protector. Now you have to replace and responsibly recycle the battery.

For me that e-waste became impossible to justify. A couple of power outages a year is completely acceptable for most home servers.


I think a UPS is a good idea for Raspberry Pis. They are rather sensitive to power transients, and anything that causes a crash and leaves the SD card in a bad state means it won't boot! (A known issue with the Pi.)

I use a UPS and log2ram (which moves the most active directories to a RAM-disk) and have found the Pis to be perfectly fine home servers:)


"I don't think this is overkill, I think it's necessary. A UPS is pretty cheap compared to the completely-uncontrollable chance of losing power in the middle of working on something."

I'm using a laptop as a server and will buy a UPS for my modem/wifi. That combo will offer many hours of uptime without power.


I'm selfhosting a few services (a small php website w/ blog & admin, a nextcloud instance, some small django websites, and my txt rants here: https://misc.l3m.in/txt/ ).

Why txt? Because I only have 600kB down & 200kB up. Even though I can serve websites with images (webp) and other files (fonts, css...), my connection is too slow to serve that kind of content to hundreds of visitors, like the hundreds HN brings to my website when I share a post and it reaches the front page (with a peak of 75k unique visits on a txt file in one day, that's nearly 1 visit per second; if 2 or 3 people visit some of my bigger pages, my upstream connection is entirely taken up by the visitors).

On top of that, robots crawling my websites are a real pain; out of the 7k requests I got yesterday on all my websites, nearly 6k were from crawlers (and 500 from bots trying to break my websites), which comes to ~75MB of data. Some of them were spamming my websites with 1 request per second for hours, so I created a new rule to ban them using fail2ban.

But even with all of this, self-hosting feels liberating; I own my data, all of it is stored here in my house, I control my server (an old refurbished Dell Optiplex FX160 with 3 gigs of RAM, an Intel Atom 220, and an old 500GB HDD), and my websites are still faster than the vast majority of the web.


Your txt site is really similar to mine: https://wiki.chungn.com/

I found it's really nice to store that text data in txt files instead of a database.


What a nice knowledge base you have! I'm kinda jealous.

The point of the txt thing (at first) was to show a friend that you can start a blog right away with only some txt files (he wanted to start a blog, but he started right away to worry about the time to setup something). In the end I used this txt thing to share posts to a broad audience without clogging my connection too much.


More people should try to self host things again. It seems people are forgetting how things work and instead use services that do it for them at some cost.

Self hosting is rarely complicated. It's also easy to achieve better performance and lower cost in many situations.


This. I have anywhere access to 5 machines, way more storage than I need, and 100% control of everything that goes on in the network.


For someone who knows nothing about self-hosting, where should I start? I'm a self-taught junior dev but haven't had a chance to dabble with any aspect of deployment yet... willing to learn though.


You can check out some selfhosting communities for ideas, help or to lurk.

https://www.reddit.com/r/selfhosted/

Setting up your own DNS (like pihole) and a personal VPN that lets you into your home network when you're outside is usually a good starting point. Or, even simpler and potentially useful, set up your own gitea instance; it's easy, just use the sqlite backend.


Set up a basic web server, then try something like WordPress/Drupal (with some database backing it)... then maybe SMTP/IMAP with spam/av filtering and a web frontend (roundcube/squirrelmail)... that will probably keep you busy for a few months :)


I self host and use a RPi 3B+. Actually I have multiple 3B+s, but only one is actually doing anything for the website. I have my database running on a RPi 4B with 4GB. I could easily consolidate to a single Pi, but early on I had different plans for each one and just keep 'em running for no good reason.

I'm fortunate in that I have a static IP. Initially I was using NOIP and it worked great, but then I noticed my IP address never changes. Also, NOIP still exposes your actual IP, which I didn't care for.

Now instead of NOIP I rent a cheap $4/month VPS with nginx to reverse proxy to my home. This does require me to open a port, which again I'm not a huge fan of, but it is what it is.

My next iteration will be where I close the port and do updates via SSH. I'm the only one who uses my website so it is more of a playground for me than anything.

Finally, I feel setting up a server - or just interacting with a remote computer, be it across town, across the country or in the other bedroom, is a good skill to have. When I got my 1st remote job that gave me access to a server I was more than comfortable to do what I needed to do.


I do similar with the VPS and reverse proxy, but use a wireguard tunnel from my home device to the VPS to allow nginx to hit my local device instead of forwarding a port.

Avoids punching a hole in your local network, avoids issues when your home IP changes, and ensures nothing ends up going over the internet in the clear.
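A sketch of what that setup can look like; all keys, addresses, and ports here are placeholders, and the home side would add a PersistentKeepalive to its peer entry so the NAT mapping stays open:

```
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```

```
# nginx on the VPS: proxy public traffic down the tunnel
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because the home server dials out to the VPS, no inbound port is opened on the home network, and a changing home IP only means the tunnel reconnects.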


aye, is also what I do - tunnel to VPS. then I hit the VPS from wherever, which is nice if I get sent out of state for work (and sometimes out of country).


Ah very nice. I may do that instead. Thanks for the tip


I'm doing that as well. I'm using an IONOS VPS since they have a DC close to my ISP's upstream, so less added latency than most VPS's.


Have you considered using something like Cloudflare Tunnels? It hides your IP and lets you apply 2-factor auth to services you expose.


how would noip work if it didn't expose your ip? This statement confuses me. It's funny how used laptops that are 10x faster than an rpi4 are about the same price. Obviously they use more power though :)


At the time I set it up I didn't know what I was doing or how it all worked, so that's where the statement about it exposing my IP came from. It has been so long that when I first set it up I may have known it was exposed but didn't care. As time went on I just didn't like knowing it was exposed. Why? I just prefer it not be. Not that it couldn't be found, I'm sure.

And you are absolutely right about the laptops and I have 2 right now that are just collecting dust. But currently I have no reason to change to a laptop. It is kind of like driving a nail in the wall with a sledge hammer, sad to say a laptop is more than I need in a server at this moment. The only benefit I get from a laptop for my current needs is a built in battery backup.


I'm planning on a similar setup and avoid exposing the network directly. Didn't have time to do that yet but I'll look into that this week!


One thing to be mindful of is security.

The last thing you want is for your setup to be compromised, used to send spam or join a botnet, and then to get perma-banned by your ISP.

I would argue it's a lot less risky just to put your static site on S3/Github etc.


If you're using a modern web server like nginx, then you have to really do something wrong to get hacked serving static files.


Agree - an up-to-date system running only nginx and fail2ban is likely to be more secure than some vendor's who-knows-what's-on-it image which exposes various "services".


That's not really fair, zero days will always exist.


To be fair, threat models should be taken into account. People seem to be conflating nation-state operations, using advanced capabilities worth at least hundreds of thousands of dollars to compromise high-value targets and infrastructure, with "my pet project may get 0-day'd", which is the exact opposite of fair. Moreover, if the argument is "zero days will always exist", you may as well stop using technology entirely.


What setup is immune to zero days?


S3, Github, Netlify etc are going to be infinitely more secure than your home setup.

And are basically free.


But no more immune to zero days


They're not immune to 0-days, though.

they just have more staff on hand to respond, run updates, and do the due diligence of checking and implementing patches.


Yeah, I would decouple it from any residential ISP, but use a $4 VPS to still self-host. That also removes the need for a DDNS setup. Given some basic Linux skills, all you need is SSH plus nginx+certbot on Debian with unattended-upgrades or similar, that you can point your domain to.


Would recommend Tailscale over SSH.

Really easy to setup/use and means you don't have to open up ports like 22 which are constantly being port scanned.


I'm not familiar with Tailscale, but you can easily use a different port than 22 for ssh if you want to; put a different port number in `/etc/ssh/sshd_config` on the server, and then set your `~/.ssh/config` to use that port on your clients. If you're going to be opening it up externally, you also can map a different port to the local one (which is necessary if you want to be able to ssh into two different servers with the same ssh port from the outside anyhow).
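Concretely, the two files involved look something like this (port number and host names are placeholders):

```
# /etc/ssh/sshd_config on the server
Port 2222                 # any otherwise-unused port

# ~/.ssh/config on the client
Host homeserver
    HostName server.example.com
    Port 2222
```

After changing the server side, restart sshd and test the new port in a second terminal before closing your existing session.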


You can move ssh to a different port.


Anyone who thinks they need to move ssh to a different port likely doesn’t have their ssh security configuration setup optimally.


I don't get it, wouldn't you use ssh keys and also just move it to a different port anyway to save the annoyance of random scriptkiddy pokes all the time?


Moving the port away from 22 changed my fail2ban logs from hundreds of lines a day to one every few weeks at most.

While you shouldn't skip on other security measures, there's no real downside to changing your SSH port and getting off the target list for a lot of botnets.

(As opposed to e.g. port knocking, which is stronger but makes it easy to lock yourself out and may create issues with some SSH clients.)


Making it explicit: try to always use keys for ssh, avoid passwords. If you have to use passwords, make it very long (20+ chars) and random. Don't use dictionary words or reuse passwords from anywhere else.


Yep, and I always additionally just disable pw authentication altogether, and set PermitRootLogin to either No or without-password.

You can also do things like firewalld off (or with hosts.allow) 22 to just an ssh bastion/jumphost src (or your house IP), but I find that’s usually not necessary (although an excellent further step if you are a bit paranoid) as long you do what was mentioned in first paragraph.
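In config form, the two directives being discussed look like this; "prohibit-password" is the current spelling in OpenSSH, with "without-password" kept as an older alias:

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin prohibit-password
```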


There are other reasons to use a different port. I move it to cut down on the failed login attempts; yes, they'll never succeed given I only allow keys, but they clutter the log so I can't tell if I'm actually being targeted or not. On an obscure port, if I do see attempts I know something serious is going on.


You don’t get perma-banned from your ISP for getting malware. That’s pure FUD.

Far more people (at least 2 orders of magnitude) end up infected with malware and become members of botnets from just clicking dumb shit. They don’t get banned from the ISP either.


I had my cable ISP call me a few times about a business account. I run my own mail server. A POP3/SMTP account got hacked and started sending out spam. They're very polite and professional: they ask you to fix the problem and move on.


I'm surprised you got a phone call, when I worked at a tier 2, we would just disconnect you and wait for you to call us instead.


Running your own mail server is an exercise in pain. Not sure if you're brave or dumb, but props for giving it a go.


Heh. I've been running my own since the mid 90's. Originally I worked for some early ISPs, and continued it as a hobby of sorts. (My main personal email is on gmail.)


If you're running an email server that's sending spam at full throughput (due to your clear negligence) from a residential ISP account, then depending on what plan you've purchased it may violate your terms of service.

I know of friends who have been banned for doing exactly this.


But you will get banned from your isp if they find out you are self-hosting a blog and photo album shared with family and friends.


Depends on your ISP. I use one of the biggest ones in the US, and they won't do any such thing. They don't care if you're running servers at all, unless you start getting a lot of incoming traffic -- then they'll ask you to upgrade to a commercial account or stop it.

I've been running servers used by family and friends from my home for about 20 years now. It's never been an issue at all.


Using a web browser with JS execution enabled is a lot more dangerous than running nginx showing .html files in directories.


I was running a couple million page views/mo on a cluster of Raspberry Pis in my parents' home a few years back. After some disruptions on my family's network I set up a Wireguard VPN with Docker Swarm and a GCP instance, essentially routing traffic from the GCP IP address to the Raspberry Pi cluster, so that my IP wouldn't get leaked and I didn't have to worry about dynamic IP addresses.

Very hard to do and finicky to set up, but at the time it was suitable for my needs


Why not use Cloudflare?


As mentioned in the post. I would probably not use this setup with my more "exposing" projects like full web apps.


Indeed. My friend had a crypto miner put on his self-hosted server.

This was merely weeks after telling him to switch to Tailscale, but it was a great learning experience.


Wanted to clarify some of the points raised about Next.js:

> I’ve been using Next.js for a while and hosting the apps built with it on AWS with custom express servers. One day I’ve noticed that my servers are getting red-hot while doing almost nothing, and response times got huge.

OP mentions they're hosting a static website now but must have previously been rendering a server-rendered page. It's not clear whether they explored the static-site generation support in Next.js, which would have avoided any regressions in server-rendering performance, making this a non-issue.

> Long story short the library introduced a huge performance downgrade that was not caught by existing tests.

> Because benchmarks were using only Vercel’s (creators of NextJS) “Edge” infrastructure. And the bug was happening everywhere but not there.

Indeed, there was a regression, but it was not due to lack of tests as a whole. The linked threads point to issues both self-hosting (serverful) as well as on Vercel (serverless). There was a week between the issue being reported and a fix being placed on a canary release. Regressions will happen – the best thing is ensuring more tests are added and things are fixed quickly.

> We need alternative independent hosting to ensure that the community does not get stuck with a single provider.

You can (and will always be able to) host Next.js, both completely static (drop files in an S3 bucket) or on a server (Docker, EC2, whatever you want).

Just wanted to clarify those things. Your new site looks great, nice work.


Wow. Didn't expect a response directly from Next.js VP!

Before anything, I wanted to say that I love Next.js and use it in my projects daily. It's definitely the best solution on the market right now for the types of projects my company and I are producing :).

In regards to the inconsistencies in the article. Yes it was a server rendered page, not this particular blog. A much more complicated project.

Also I didn't want to sound as if the regression was handled badly. Quite the opposite: as soon as I pinpointed the issue and was able to create a good issue in the tracker, the response was very swift!

And I understand that the regressions will keep on happening. I've been building apps for long enough to see waaaaay bigger problems slip into production. So no hard feelings!


I’m always looking to see how we can improve things. Appreciate your feedback! Let me know if you have any other questions in the future.


I'm still not convinced that self hosting is cheaper though.

Where I live, both the electric grid and Fibre connections are unreliable, and the ISP charges about $3/month for a static IP. Assuming 2.5W for a RPi, it takes 1.8kWh a month, costing around $0.25 for electricity. This also has the downside of using residential IPs, having to deal with CGNAT/NAT if the ISP has no static IPs, exposing your home IP, slow disk speeds in SBCs, etc.
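The electricity figure checks out; a quick sketch, assuming a rate of about $0.14/kWh (my placeholder, local rates vary):

```python
# Monthly electricity cost for an always-on 2.5W Raspberry Pi.
watts = 2.5            # idle draw assumed in the comment
hours = 24 * 30        # hours in a month
price_per_kwh = 0.14   # assumed local rate in $/kWh

kwh = watts * hours / 1000
print(f"{kwh:.1f} kWh/month, ~${kwh * price_per_kwh:.2f}")
```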

Services like Hetzner have private servers for almost the same amount, which gives you faster storage, server IP addresses (for email reputation), DDoS protection, and a whole lot more to sweeten the deal.

The only thing I self host today is a script that runs on my OpenWRT router, that uploads the probed data from my inverter and BMS to monitor my PV setup. I look forward to get rid of it too, once I get better BMS/Inverter.


Costs for hosting a static site or something with minimal computation is down to 0 at this point using any one of a dozen cloud/edge providers, so it's hard to compete with them on the basis of cost. The arguments for self hosting are really about having full control over your service stack.


The problem with a lot of these solutions is they have a nearly vertical complexity curve as soon as you want to get past a static site or minimally dynamic site.


> The arguments for self hosting are really about having full control over your service stack.

This is why I do the portion I host at home. That, and keeping the data on machines I actually own and control.


Self-hosting becomes much cheaper when you need big machines. A system with, say, 64G of RAM costs about as much to buy outright as to rent in the cloud for a couple of months. I run a few test boxes (that are used by my team) in my garage for this reason.

I deployed a GRE tunnel to a small box I rent from Hivelocity that has a /26 routed to it, to work around lack of proper connectivity from a "business" HFC connection to said garage.


I think self hosting may be cheaper for huge data volume stuff? Like I serve (legal) video, audio, audiobook, book, and photo content, it amounts to around 20TB of content. It cost me around... 1k in capital for all the components for it, most of which went to the harddrives? And according to the self hosting people on reddit I'll probably need to swap a drive every few years, around 150 bucks a pop. Beyond that though my only cost is electricity, for which I have no idea but our monthly electric bill total is like 80$/month anyway so I'm not stressed about it.

Anyway, I'm not paying in/out costs or costs for static hosting. Looking at Hetzner's offerings, it appears to be 536.58 euros/month for 10TB of storage? That's block storage on SSD; maybe they have cheaper HDD storage somewhere?

Point is, I think my server setup may have "paid for itself" (not that it makes me money) after one month. And that's for 38TB of usable RAIDZ1 space; I'm only filling 20TB of it so far. In actuality it's 60-something TB of raw disk, lol; I just wanted RAID so I could lose a drive. I've got plenty of room to grow and swap components as needed.


> Assuming 2.5W for a RPi, it takes 1.8kWh a month, costing around $0.25 for electricity.

Note that it's virtually free in winter (or cheaper if you have a heat pump or only heat during off-peak hours) as the energy used by the RPi is that much energy not consumed by heaters


And you don't need to pay for a static IP adddress. Just front it with Cloudflare and have a cron job to update your A record regularly via the Cloudflare API.
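A minimal sketch of such a cron job, assuming a Cloudflare API token with DNS edit rights; the zone/record IDs, token, hostname, and script path are all placeholders, and api.ipify.org is just one of many "what's my IP" services:

```python
# Overwrite a Cloudflare A record with the current WAN IP (stdlib only).
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def build_payload(name: str, ip: str) -> str:
    """JSON body for overwriting an A record."""
    return json.dumps({"type": "A", "name": name, "content": ip})

def current_ip() -> str:
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode()

def update_record(zone_id: str, record_id: str, token: str, name: str) -> None:
    req = urllib.request.Request(
        f"{API}/zones/{zone_id}/dns_records/{record_id}",
        data=build_payload(name, current_ip()).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

# crontab entry (every 5 minutes):
# */5 * * * * /usr/bin/python3 /home/you/ddns.py
```

With the record set to "proxied" in Cloudflare, visitors only ever see Cloudflare's IPs, which also covers the hiding-your-home-IP concern.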


As mentioned in the blog post, I probably wouldn't put a "real" production app on it so easily; you can replicate the same effect and more on a service like Hetzner. But if you have a Raspberry Pi lying around and like 2-3 hours, it's enough for at least a blog! And the amount of hardware-related things you'll learn is massive.

I've run a few web bootcamps in the past. Explaining the hardware part through cloud admin panels is extremely hard. Having a physical thing right in front of you is wonderfully simple; it's much easier to make things *click* for people.


Self hosting can allow your budget to go much further. Tiny PCs like Intel NUCs or Lenovo Thinkcentres[1] are very light on power usage. I bought one that idles at 10 watts, which is $10.32/yr for me (assuming I never power it off). I bought it on Ebay and upgraded it to 32 GB memory for a total cost of $250. The cheapest comparable server on Hetzner costs around $36/mo so I break even in less than a year.

For production purposes, a public cloud has many benefits you can't get at home. But for hobby purposes, keeping the cost low is most important for me.

1. https://forums.servethehome.com/index.php?threads/lenovo-thi...


I got a 15 watt Intel NUC for free as a business discard. It has an NVMe, 16GB DDR4, and a spinny 1TB SATA drive for backup. My ISP hasn't gone down or made me change addresses in years, but I'm waiting on that day so I can have Copilot write me a script to automatically update my DNS settings in Cloudflare (with DDOS protection) every 5 minutes via cron so I don't have to explore the world of DDNS again.

OTOH, if you have to deal with double NAT, good luck with that hot garbage.


It's cheaper for me. I do pay for a DDNS service, but that's very cheap. The machine doing the serving is one that would be running 24/7 anyway, so there's no additional electricity.


This is amazing. More devs should build stuff like this for themselves. On the extreme end of the scale there is the amazing Andreas Kling building his own operating system. Building a whole system will give you unprecedented control and understanding. That is what being a hacker is all about.


Having a server at hand and deploying things other than webpages, to automate your infra and your life, is a very underrated experience.

I don't host my pages myself, but the server(s) I have power the invisible infrastructure which accelerates my life a lot.


Infra such as? Just interested out of curiosity. You can keep it vague if you prefer…


Well, nothing to keep hidden. DNSMasq, Syncthing, a tool which I developed that sends e-mails when things go wrong, etc.

The resulting infra is hidden, because it doesn't flow through any popular services or something. You can argue that Syncthing is using public discovery servers, but you can put it on a small VPS, and you'll have a complete off-the-grid installation of it, too.

I host it on an OrangePi zero, so it's unobtrusively small. It just vanishes somewhere at home.


And I have to say that this "physical" server running and seeing it just makes the experience of web developing much more "real". At least for me :)


It is. I still get awe inspired that some markup I type and put on this disk here can be accessed across the world. Like I can physically touch the tape if I want to. It reminds me of how wondrous this whole thing felt like when I first discovered the internet in the 90s.


I'd never heard of Coolify, but it looks like what I've been searching for, i.e. an all-in-one solution for hosting stuff on a VPS (or Pi). But the docs [1] say it requires a minimum of 30 GB of disk space.

Does anyone know if that is accurate? What could possibly be taking up that 30 GB if all I want to do is host a static site and maybe Deno or Node? I'm fairly certain I set something similar up in the past on a much smaller MicroSD card...

[1] https://docs.coollabs.io/coolify/requirements


You can also check out CapRover, similar self-hosted PaaS like Coolify but simpler to set up and manage in my opinion: https://caprover.com/


If you don't need/want a UI then Dokku is another option. It is more mature with things like built-in backups for the database. I've been a happy user for many years now. Coolify seems nice as well though.

https://dokku.com/


Dokku was my Plan A :)


If you want something even lighter, have a look at https://github.com/piku


I have it running on a 32 GB SD card no problem. It currently takes like 4 GB (it can store previous docker images for speeding up builds and allowing auto rollback).


Thanks! That sounds much more reasonable. I wonder why they put 30 GB as the "Minimum required resources"...


Linux is really bloated these days. I am confident you could self host a decent website on OpenBSD with less than a GB of disk space and several GB of RAM. It is really incredible to me how in 2023 we are doing basically nothing that could not be done in 2013 but it takes an order of magnitude more computing power. Serving a medium size website is not really that complicated unless you are hosting a bunch of videos or whatever.

Stories like these just go to show me how much fat there is in Tech. The sector is in for a sharp reality check. You don't need 10s of gigs of RAM and 20 cores to host your blog or small business e-commerce site. But if you use the latest bullshit framework then maybe you do...


30GB disk space is the recommended disk space to be able to host something with it. Coolify itself is a few hundred MB (thank you, node_modules).

(developer of Coolify here)


I spent some time exploring the current self-hosting scene after I got frustrated with the pricing on different platforms and how absurdly complex certain stuff has become. I'm happy to share it with you guys!


Proxmox and some great guides on GitHub might make life a lot easier for many people. I am astounded at how simple and point-and-click it has been.


  The blog that you are currently reading has a perfect
  PageSpeed score 100 / 100.
Unfortunately, no it doesn't. The page that has this blog post scored 95 in performance for mobile in one run and 86 in the next. It scored 91 for desktop.

They'd get a higher score by adding text compression and by serving static assets (all of them?) with an efficient cache policy.


I see 95 mobile, 100 desktop. It's on the front page of HN, so presumably the server is taking a bit longer to respond to all the traffic.


You were there looking over their shoulder when they wrote "At least at the moment of writing it", and are asserting they are lying?


Weird, shows up as 100 for me... I'll look into that though! Thanks for heads up!


You're welcome! I dislike the tool for this reason - it changes too much. Though I've found that the guidelines are very good for my sites.

PS I get 100 consistently for your home page, so maybe you have an optimization there that can be replicated?


Wait how do you add text compression?


Generally, you enable mod_deflate (or a similar module) and, in .htaccess, add the mimetypes on a new AddOutputFilterByType line along with the algorithm. This is one page [1] that describes the process for apache or nginx, though there are millions of webpages that give various methods.

[1] https://blog.hubspot.com/website/gzip-compression
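For nginx the equivalent is a few directives in the http or server block; the type list below is just a common starting set (text/html is always compressed once gzip is on):

```
# nginx: compress text responses
gzip on;
gzip_types text/plain text/css application/json application/javascript image/svg+xml;
gzip_min_length 1024;
```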


Thank you. I thought somehow it was supposed to be done client side. Although tbh I hadn't heard of mod_deflate either.


I see a baby being thrown out with the bathwater here.

You can still use NextJS - just get it to prerender the entire site statically. Then you get the upsides of using NextJS, but can take your ZIP of static files to any host you like, or self host.

You then get, for example, an HTML file, with all the elements ready on page load, which can be 'rehydrated' when React kicks in for deep functionality and interactivity.


Can you explain this please ?

I have a Next.js based discussion forum in development. Is it as straightforward as issuing a command to pre-render, and then the site re-hydrates via Apollo on its own?


I have not used apollo but to explain:

Say you have a useEffect that calls an API, maybe Firebase or Parse. That useEffect only gets run on the client. For the server-side render, it runs the React code based on the initial props, takes that first rendering, and creates the static HTML from it. When that static HTML is loaded in the browser it will run React, and subsequent renders can update the DOM.


Forums are dynamic, so no, you can't fully statically deploy one.


Yes this only works when you use JS to implement anything dynamic, user specific, requiring up to date (up to the second) data and so on. You can keep the static content fairly up to date.

For HN for example, you could prerender much of the content including latest comments and get Next to keep it up to date, cache invalidating as necessary. Basically have it rebuild static pages as posts are made. Then use fetch calls or maybe sockets to get the very latest posts and keep it up to date, even live updates if you wish.

ISR is the jargon here: https://nextjs.org/docs/basic-features/data-fetching/increme...

It is a bit like a mirror of what HN does. HN will use caching so that if you are logged out you get fast renders of pages. Instead of caching, you are pregenerating and keeping it up to date.


Thanks, this was useful.


> Turns out in 21st century you can even update [Raspberry Pi] without downtime.

Hopefully I didn't just miss it in the article, but... how?


KernelCare is apparently free for the Raspberry Pi

https://tuxcare.com/patch-raspberry-pi-systems-without-a-reb...

Also I think you can use Ubuntu PRO on the pi, which includes Livepatch.


I'm only seeing a long ad posing as an article telling me to sign up for a 7-day trial.


All the Linux kernel livepatch offerings are paid services. As I understand it, live patches can't just be produced automatically; it takes a team with enough Linux kernel knowledge to make them work, and such teams usually want to get paid.

Also, I think that the base Linux kpatch tools are open source, but the infrastructure that RedHat/SUSE/Canonical/etc use to provide them is not. However, I think the Gentoo folks do have some open infra code.

https://github.com/dynup/kpatch
https://wiki.gentoo.org/wiki/Elivepatch
https://wiki.gentoo.org/wiki/Live_patching
https://github.com/gentoo/elivepatch-server
https://github.com/gentoo/elivepatch-client


> KernelCare understands that hobbyists need protection too, so we offer this benefit to Raspberry Pi enthusiasts free of cost. The currently supported chips are the BCM2711 (Pi 4) and BCM2837 (Pi 3 and later models of the Pi 2), and we offer support for Ubuntu Focal Fossa for 64-bit ARM platform, and soon support for Debian and Raspbian.


I personally prefer to self-host Proxmox with ZFS. It is ready for enterprise use and can be easily backed up via external drives. There is even a Raspberry Pi port (https://github.com/pimox/pimox7), but I would never use a Raspberry Pi to self-host again (not because it is not suitable or reliable, just because I need a little more power)

I use

  Fujitsu D3417-B1
  Xeon e3-1225v5
  32GB DDR4 ECC
  Samsung 980 Pro 2TB (with up-to-date firmware)
  Pico PSU 150w
and it is using 9.3W at idle and about 12W running my daily services. It can even run a macOS VM for experimenting / software development, USB-Passthrough, Replication, etc. etc.

It is rock solid stable, not too power hungry, can run nearly ANYTHING and I never looked back.


While I like Proxmox and have used it in a startup, I wouldn't recommend it for enterprise simply due to how flaky the Terraform integration was the last time I checked. Probably a year or two ago. Regardless, the Terraform integration was done in the typical open source fashion of a single developer, where I'd question the amount of resources put into getting the software past 80% completion. The company behind Proxmox does not seem to prioritize the enterprise market segment.

We ended up using the REST API for automated provision management and there are certainly warts. It's a far cry from being a turn-key solution, which I'd argue is preferred by a large enterprise.

It's great for click-ops if that's your use case, but that also wouldn't qualify as an enterprise use case.


Title would be better corrected as "website hosting". Self-hosting became a term for a much more ambitious scene to provide all kinds of services usually offered by FAANG/"big players", mostly in opposition to them.


Disagree, I hear people use the term self-hosting all the time referring to this same thing. I hear your scenario called de-googling (or whichever your big co of choice is).


Not to downplay the intent, but hosting a static site is much simpler on something like GitHub or Netlify. Also, in my area business internet with a fixed IP is more expensive than what I would pay for a decent enough cloud instance ($50 vs $10).


With dynamic DNS, static IPs are not as critical. Most routers support it natively. What it does is quite simple - every time the router receives a new IP, it updates your DNS entry.

Not to mention that most “dynamic” IPs change only while the router is offline and your lease is up. So as long as you don’t power it off, it may not change for long intervals of time.
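
If your router doesn’t do it natively, a cron job works just as well. A rough sketch against the Cloudflare DNS API (the token, zone/record IDs and hostname are placeholders you’d fill in):

  #!/bin/sh
  # Update a Cloudflare A record whenever the public IP changes.
  TOKEN="cf-api-token"; ZONE="zone-id"; RECORD="record-id"; NAME="home.example.com"
  IP=$(curl -s https://api.ipify.org)
  # Skip the API call when nothing changed since the last run.
  [ "$IP" = "$(cat /tmp/last_ip 2>/dev/null)" ] && exit 0
  curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE/dns_records/$RECORD" \
    -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    --data "{\"type\":\"A\",\"name\":\"$NAME\",\"content\":\"$IP\",\"ttl\":60}"
  echo "$IP" > /tmp/last_ip

Run it from cron every few minutes and the record tracks your lease.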


I have had my "dynamic" IP for two years now. I even managed to keep it when I moved to a new unit in the same apartment complex


My home network also seems to have no problem holding an ip despite being dynamic. I usually have to change the mac on my router before I'll get a new ip. In 15 years, I think I've only had it change unintentionally 2 or 3 times.


Presumably you only need a fixed IP for email delivery. Public sites like blogs could be fronted by a CDN, and you can manage private sites behind a vpn like tailscale etc.

I think a design where you rent a cheap $5-$10 vps with a static IP that forwards SMTP messages both ways through a secure tunnel to your personal mail server in your home would be a good starting point for self hosting.
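
On the VPS side that can be little more than two NAT rules pushing port 25 down a WireGuard tunnel (interface name and the home server’s tunnel address 10.0.0.2 are made up):

  # On the VPS: forward inbound SMTP through WireGuard interface wg0
  sysctl -w net.ipv4.ip_forward=1
  iptables -t nat -A PREROUTING -p tcp --dport 25 -j DNAT --to-destination 10.0.0.2:25
  # Masquerade so replies route back out through the VPS
  iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 25 -j MASQUERADE

For outbound mail you’d also relay through the VPS (e.g. as a smarthost), so the world only ever sees the static IP.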


Definitely simpler than setting up everything from the ground up. Though ATM I would argue that it would take me the exact same amount of time to get another app online :). Regarding IP, further down in the blog post I'm explaining that I also don't have a fixed IP and how you can set up DDNS. Thanks for reading!


I’ve been using NextJS for a while and hosting the apps built with it on AWS with custom express servers. One day I’ve noticed that my servers are getting red-hot while doing almost nothing, and response times got huge. Long story short the library introduced a huge performance downgrade that was not caught by existing tests. Why so? Because benchmarks were using only Vercel’s (creators of NextJS) “Edge” infrastructure.

Impossible! I'm reliably informed that frameworks are "battle-tested" and I'm being cavalier by doing something simpler that's sufficient for my use case.


Self hosting is appealing but I’m worried about exposing my home IP to the world. Is this something to be worried about


Open only the minimum ports that you need (eg. only 80 & 443 for a web server). Run simple software (eg. nginx) and keep it up to date. Minimise the impact of scanning bots (eg. fail2ban). Avoid running scripts (eg. take care with php). And your server will be more secure than your modem (touch wood)!
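
The fail2ban part is a few lines of config, e.g. something like this using the stock nginx-http-auth filter (log path may differ per distro):

  # /etc/fail2ban/jail.local -- ban IPs hammering nginx auth for an hour
  [nginx-http-auth]
  enabled  = true
  port     = http,https
  logpath  = /var/log/nginx/error.log
  maxretry = 5
  bantime  = 3600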


You expose your home IP every time you visit a website from a device inside your home.


I think they're talking about opening the router firewall to inbound traffic, as opposed to the standard outbound traffic.


But you only open it to your red zone machine, which should be a separate net from everything else. You're not opening your entire network to inbound traffic. If you get hacked, the damage is limited.


Yeah, it’s a legit concern (even if low probability).

An easy workaround is setting up a $5/mo VPS to act as a bastion host and relaying all your traffic through that.


Host your DNS on 3rd party provider like Cloudflare and proxy all http requests. I've got traffic from all foreign countries turned off from them since I'm not an international business. I imagine you can probably configure your server to only accept connections from cloudflare if you're ultra paranoid.


Can you "proxy proxy?"

My setup has nginx config files for each of the subdomains, each of which does a proxy_pass to some port for whatever the service is. Then my server box hosts like 20 different services, all of which right now I just point to from google domains using dynamic dns.
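
Each of those files is basically just this (service name, port and cert paths are illustrative):

  # /etc/nginx/sites-available/gitea.example.com
  server {
      listen 443 ssl;
      server_name gitea.example.com;
      # ssl_certificate / ssl_certificate_key lines omitted
      location / {
          proxy_pass http://127.0.0.1:3000;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }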

So instead I would have requests go to... what, an nginx I host on cloudflare?


You can setup Cloudflare tunnels to proxy straight to the internal reverse proxy. This hides your IP and you don't have to open any ports on your network.

If you don't want to rely on Cloudflare you can also rent a cheap VPS which you could use as a public reverse proxy which points to your internal reverse proxy through a vpn like a self-hosted wireguard or a service like tailscale. I just did this same exact setup and only had to add some nginx config to get the real IP address of the client instead of the public reverse proxy's.
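
The real-IP bit is nginx's realip module; on the internal proxy it's something like this (10.0.0.1 standing in for whatever tunnel address your public proxy uses):

  # Trust X-Forwarded-For only when it arrives from the VPS over the VPN
  set_real_ip_from 10.0.0.1;
  real_ip_header   X-Forwarded-For;
  real_ip_recursive on;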

Either way your own network is safe and hidden from the public.


I have an older house and cold cellars are common here. It's a room outside of the foundation in the basement and the roof is the floor of my porch. It was originally intended to keep preserved food cold passively.

So I threw a 32 core xeon server in there. It's isolated enough that the fan noise doesn't carry, and nothing I can do will make the core temps go over 28C. Free cooling during the summer, and in the winter, waste heat from the server helps keep the cold from leeching into the rest of the basement. It's really a beautiful solution.

The server itself runs proxmox, and I've got four or five services exposed to the internet behind an nginx proxy. Most of the time I don't ever have to touch it.

The -only- thing I don't host at home is my email server. Mostly because it's very old and I don't want to upset it, but also email behind a residential IP can be painful.

Everything I host is just for my personal use, and it's been a pretty worthwhile experience. It doesn't take too much of my time, I get to learn new skills, and at the end of it, I get a service I was going to use anyway, but I have absolute certainty that my data is under my control and no one else need be involved. It's nice.

I still need to figure out long term backups, but for now I periodically dump to an external drive and store it in a fireproof safe in the same room. That's as secure as I can get in this environment and I think it's good enough.


An easier way of connecting your Raspberry Pi server to a public domain / URL is to use https://pinggy.io

It's just one command.


While UPnP port forwarding works well for IPv4 (once you make sure to allow it in your local router config), I haven't found it working for IPv6 yet and had to make custom port forwarding rules for the IPv6 address. Also, while I couldn't use port 80, I was happy that Caddy has working Let's Encrypt cert generation for 443-only usage. These "small" issues are the largest hindrance in self hosting at home, from my point of view.


> a cheap UPS to keep it running in case of power outage

What are the cheap options for RPI? I've read that powerbanks are not designed as UPS and have no passthrough charging: https://goughlui.com/2021/09/03/note-potential-issues-of-usi...


> Similarly, the bare metal solution consisting of nginx with manual Let’s Encrypt cert setup was too much of a hassle.

It's not a hassle. Setup once and then forget about it. It's dead simple, easy and there's a million tutorials on it online.
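
For the record, "dead simple" here means roughly this (domains are placeholders):

  # one-time: obtain a cert and let certbot edit the nginx config itself
  certbot --nginx -d example.com -d www.example.com
  # renewals: modern packages install a cron job / systemd timer automatically
  certbot renew --dry-run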

But it's boring (this is where I'm going to old-man-ramble about things, so better stop reading now). Imo, that's why we have so much bloat, complexity and overkill use of containerization that requires orchestration in software and devops. We don't go the easiest possible route, but the one that seems interesting and most fun.

I totally get playing with new stuff and going totally overboard just to host a static html file for a personal project, with the goal to learn something new. But it's not just personal projects, it's whole companies that adopt this kind of approach, not even thinking 2 years ahead, when the "cool new software" you used is now not only not cool and new anymore, but abandoned and unmaintained.

I get that this is just a random "i did a thing" blog post, but this just annoys me :(


Sorry to annoy you ^^. I'm not a native speaker and I guess it's my wording in the post. What I meant is "setting it up for every new project I would be working on might be cumbersome".

Honestly I thought only friends and family would read it and I didn't spend enough time polishing the nuances in the writing. I didn't expect it to blow up so much :D.

But going back to the point I fully understand your frustration about the overcomplicated approach. Though for my defense I'm planning on hosting many more apps using this setup. In fact I already am but didn't want to overcomplicate the post.

I left it for part 2. Subscribe RSS for followup xD. I feel like a YouTuber after saying that.


I really wish someone could convince me self hosting is worth it but more often than not it just seems like busywork. Do you really need to self host your static website? There is no real privacy gained and probably some security lost. Then you get into hosting more complicated apps, email, etc, and making sure you can access them from anywhere at any time and it just doesn't seem worth it to me.


I had a huge paragraph in the article about decentralization and how important it is, in my mind, for the future of the internet, but I scrapped it because it felt like I was a blockchain guy even without mentioning it. So I'll make it simpler.

Honestly I just feel awesome seeing those blinking lights in my room and thinking it's sending packets to other people.


If you live in a country with strong legal protections, like the United States, they cannot make you unlock your devices to gain incriminating evidence, as that would be a Fifth Amendment violation. It depends on the jurisdiction. As long as you don't use any biometric ID and limit it to passwords, they cannot make you verbalize or write down the password to unlock the device.

Moreover, no one can seize or examine your data without committing a crime themselves. On the other hand, Google et al do this every day.

You may not care but these are real privacy gains


It can be as difficult or easy as you make it. Whether you want just nginx serving static files, or k3s with multiple services on a tiny cluster, is up to you. The benefit for me is that I can be confident that my data is mine. But mainly because it's fun and great for learning.

In my experience it's not much harder than navigating the AWS interface, where I often feel very lost. But of course YMMV.

> There is no real privacy gained

Why not? Other than I can't be sure that my device and browser doesn't spy on me I don't see your point?


What privacy is gained by self hosting a static site rather than using anything else?


One (or more) fewer service(s) to give your e-mail address to. If you want to see usage you can just check the page loads instead of having to add analytics, because github pages doesn't share any stats. The visitors get the same benefits because it's one less site behind cloudflare. The big providers can't figure out (and possibly sell) your interests based on the sites you host.

Not my reason to self host (static) but more a welcome side effect.


It removes the exposure to a service provider.


I don't find it much work at all, actually. I run a webserver, a VPN, a mailserver, and a few other odds and ends. I probably spend a couple hours a month maintaining it.


For me the Google tool (as it's often just a random score) shows 88/91 for the page. On a subsequent run it's 95/100 though.


I also selfhost a lot. For me it's mainly a hobby. I have a fiber connection and an old laptop that now functions as a Ubuntu server. Laptops are nice because it means it can easily endure brief power outages.

I run everything in Docker images with a single docker-compose.yml. For 99% of the folks more than sufficient and way less complex than Kubernetes. I use Caddy as a reverse proxy. Configuration is dead simple and it has auto SSL certificate renewal. Certificates are just there, you don't even need to think about it. I run it together with Authelia so that I can shield web services behind a 2FA SSO login portal. Now I can securely access my personal web services without the need for VPN.
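
The Caddy side really is minimal; a Caddyfile like this gets automatic HTTPS for every listed host (hostnames and upstream names are examples from my kind of setup):

  # Caddyfile -- certificates are obtained and renewed automatically
  git.example.com {
      reverse_proxy gitea:3000
  }
  cloud.example.com {
      reverse_proxy nextcloud:80
  }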

Some things I selfhost: Nextcloud, Adguard, Gitea, Drone (local CI/CD runner, love it), VPN, Samba server, Jellyfin, a few websites, etc.


Does anybody know how you’d do this if your ISP uses carrier grade NAT? (Eg mobile dongle). My limited understanding is that with CGNAT there isn’t a unique public IP that points to you, (even if only temporarily). So presumably DDNS is out?

I guess IPv6 could/would solve this one day?


I use a Cloudflare tunnel, and set up my DNS at Cloudflare to route incoming requests through the tunnel


You are correct that a CGNAT doesn't give you a unique IP address, and you have no control over port forwarding. My ISP put me behind a CGNAT right while I was working on adding letsencrypt support to ZitaFTP Server (which requires the server to be internet accessible). They would have given me a static IP address if I paid them a lot more...

The solution is a tunnel between your machine and an external server with a static IP address. The external server will forward requests to your machine.

Cloudflare tunnel is an option (thanks watchdogtimer), and there's also a service called Ngrok. I took the most complicated option: setting up a reverse ssh tunnel to a VPS that I already had.
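
The reverse ssh tunnel variant is basically a one-liner (host and ports are examples; the VPS's sshd needs GatewayPorts enabled, and root for ports below 1024):

  # Expose local port 8443 as port 443 on the VPS, and keep the tunnel alive
  autossh -M 0 -N -R 443:localhost:8443 user@vps.example.com \
    -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes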


Self hosting is really cool. However, when it comes to static hosting, apart from keeping your data at home and having fun setting up servers, I don't really see the point when there are so many free good options. I use CloudFlare Pages for my blog, their integration with GitHub is flawless and you get to host your website at the edge for free.

When I need to host dynamic content, another good option is CloudFlare Tunnels (I promise I'm not sponsored by them, their products are good is all :)), so you don't have to do shenanigans updating DNS records on the fly, and more importantly you don't have to compromise your network opening up and forwarding ports.


CloudFlare is a MITM against its clients, by design. Everyone is good when things are good, but when trouble comes ..


True. Is there a Content Delivery Network (CDN) that isn't a MITM, though? Sitting between the client and server is how they work...


If it comes, you switch to a different hosting provider in about 5 minutes; that's not a very big effort


I have lost count of how many sites Cloudflare has interfered with, insisting on proving myself human yet again, during the past three months. This is about the time I started using the LibreWolf browser (even disabling adblocker). When I open up Chromium, the problems stop.


I don't want to use Cloudflare.


100% agree that people should consider self-hosting. just be mindful of for what purposes. let me elaborate:

if all you want is a simple static blog, i find little to no reason behind going beyond github pages.

for self-hosting some cloud services, go for self-hosting BUT don't expose it to the internet willy-nilly. rather use tailscale or something and access it behind a vpn. if you really want to offer a public utility to others by running it in your household, it is often not worth the tradeoffs with making it secure enough.


I still enjoy self hosting, and part of the challenge now is "how can I do it without having a loud power hungry server?" I did it with the pi3 for a few years, but expanded into Proxmox and a small form factor Dell box with a new m2 ssd in it. It's been great so far, and does quite well on the UPS when we lose power.

Went from a PowerEdge 2650 in the spare room to a way smaller optiplex 3060. SO much quieter and friendlier to the power bill. Also, a lot faster too. :)


He did mention dynamic DNS, but that’s really only ok (in the US at least) if you’re willing to be down for a few minutes every day, and as long as you get hardly any traffic. Even with your 300 (100)mbps fiber connection, you’ll only get 5-10mbps upload, and only if you have virtually no incoming traffic. It’s (barely) ok for zoom or counterstrike, but you’ll be blocked for hosting a web server. The days of being able to self host are long gone.


> if you’re willing to to be down did a few minutes every day

I haven't had this issue with my setup. There is a couple of minutes of downtime every few months, but when it happens, it usually happens late at night when nobody's using it anyway. It's yet to cause anyone to not be able to reach the server when they've wanted to.

This certainly depends on what ISP you're using, though.

> you’ll be blocked for hosting a web server

Again, depends on your ISP. Mine won't block you for this.


Weird post.

If this whole infrastructure is built just to render a static page, I can guarantee that from the point of view of someone reaching it from Indonesia, the PageSpeed score wouldn't matter. It would be super slow.

In fact https://gtmetrix.com/reports/grifel.dev/KDVSwSmr/ shows that the render time of your page is 1.8s and TTI is 1.2s. Which is 4x what PageSpeed shows.

I host my blog on CF Pages, and my front page loads in subsecond time (around 725 ms actually). This incurs no electricity bills or any additional costs to me. And it's aggressively cached, compressed and also distributed on Edge Cache. Which is literally impossible with a RPI at home.


He mentions that he doesn't see it as a problem for his use-case. "so I’m not on the edge right next to your house, but I guess it’s not a problem."

I don't see why it's a weird post.


> I’ve bought a domain (grifel.app) on Google Domains. I use it because they have all of the extensions like app and dev, don’t do much marketing BS

They will also flush everything connected to your Google account down the drain the moment their AI decides you violated some of their terms of service. With no possibility to restore or recover.


Good stuff man. Very happy to see some folks still experimenting and trying things and then writing about it. Thanks for sharing Michal!


Question: were you forced to use a subdomain with your chosen method, or was selfhosting also possible on the main domain that you purchased?


Possible on both.


I pay a data center a hundred or so dollars a month and put a server running nginx and my nixos mailserver. No downtime. Power is guaranteed with an sla and it's faster than anything.

Why bother self hosting at home when you can just do it for real by paying the same price as an internet connection?


Because I will be paying for an internet connection regardless and my Pi hasn't had any downtime for the last two years, and its 8 years since I last had a power cut.


I live in the PNW where down trees regularly kill my power for days. If your power is reliable, and your internet connection is good, then go for it.


Looks like it even survived the Hackernews frontpage hug of death?


> a perfect PageSpeed score 100 / 100

Just used 10mins of my lunch break to go from 9x to 100 myself. Thanks for the reminder ;)


Anybody else noticed that the link for "Cloudflare Workers" goes to netlify.com?


Is there a good guide that explains all the in and outs of building a server?


Thank you, Michal, for mentioning Coolify in your blog post!

(Developer of Coolify here)


just get the cheapest VPS from lowendbox


I've been self-hosting since 2003 - starting with a repurposed Pentium 1 sitting under my bed, with Slackware, back then still installed from tons of floppy disks. And to this day I host ~30 services. My two cents:

- Mileage may vary. I see plenty of people self-hosting their SearX(NG), Miniflux, Wallabag, Grafana, PiHole instance on a RPi at home. And that works perfectly fine: those services are relatively lightweight, they don't take more than a few tens of MBs of RAM, installing them is usually as easy as a docker pull, and they aren't too bandwidth-hungry for a decent home connection. Jellyfin? Sure, but streaming high resolution content may warm up the RPi quite a bit. NextCloud? Hmm, it starts to be a bit borderline for a RPi at home - especially if you don't add some decent fast external disks. Matrix? Hmm, that starts to be a heavy beast - and if you decide to federate your server it may eat a lot of bandwidth and disk space. Mastodon, or anything involving federation on scale? Unless you're ready to add a few GB of storage every couple of weeks and can receive and serve content very fast, it may be a problem (and it also takes 1.5 GB of RAM to run a small instance on my server). Keycloak, ElasticSearch, Kafka or any fat beast running on a fat JVM? At least 1 GB of RAM goes to run each of these beasts. Bitwarden? The latest versions will freeze everything unless MSSQL can allocate at least 2 GB of RAM at startup. So do you want to run your personal website or a few small services on your RPi at home? You can probably do it. But self-hosting other popular services will require either more CPU power (and, in most of the cases, still an x86 architecture), more memory, more/faster disks, more bandwidth, or all of them.

- I've reached a balance where I run my small services (SearXNG, Miniflux, Wallabag, PiHole, Gitea, Grafana etc.) on a RPi. The medium ones (Jellyfin, NextCloud, Matrix, Keycloak etc.) on a Celeron mini-PC with SSD disk and ~10 TB of storage attached. Those that take >1.5 GB of RAM (like Bitwarden and Mastodon), or need to be accessible even if my house goes on fire (like Bitwarden itself, the main node of my VPN and the git server with all of my synchronized configurations), or eat up more bandwidth than my house could handle (like Jitsi), are running on a couple of Linode servers outside of my network. And, as the traffic for some of those services starts to increase, I'm even considering using Cloudflare.

- I agree with what other people mentioned in this topic: you don't need Kubernetes for everything. Sometimes Docker/Podman containers managed by systemd services are more than enough. I personally like to run Arch whenever I can - there's an AUR package literally for everything, and a wiki page for nearly everything. Then you don't need to waste extra disk space to host a lot of Docker images. All the configuration files are stored on an external git server connected over VPN. So bootstrapping a system or moving/copying services is usually as simple as a git clone and a couple of Ansible playbooks to automate the deployment. Sure, Kubernetes can help, but it's just one among the possible tools that you have in your box.
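
A container-under-systemd unit is short enough to write by hand (the image and service name are just an example from my stack; `podman generate systemd` can also produce these for you):

  # /etc/systemd/system/miniflux.service
  [Unit]
  Description=Miniflux feed reader (podman)
  Wants=network-online.target
  After=network-online.target
  [Service]
  # Remove any stale container from a previous run ("-" ignores failure)
  ExecStartPre=-/usr/bin/podman rm -f miniflux
  ExecStart=/usr/bin/podman run --name miniflux -p 8080:8080 \
      --env-file /etc/miniflux.env docker.io/miniflux/miniflux:latest
  ExecStop=/usr/bin/podman stop miniflux
  Restart=on-failure
  [Install]
  WantedBy=multi-user.target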


Sorry to be the "grumpy old man", but i don't get it.

In my experience, self hosting usually falls into two categories, people that "just want to host a simple static website", and "host all the things", and both are usually served better by using the cloud.

Most are in the category "i want to learn about self hosting", meaning they have (close to) zero experience, and while self hosting by itself isn't hard, maintaining a secure environment is, and that's where many people fail.

For the "simple static website", you can host it for free pretty much everywhere you like. Github pages, Azure static web apps, and countless others all offer stable, professional cloud services for free, without the risk of exposing your network to the internet.

For the "host all the things", you see people attempting to mimic an entire data center at home, complete with monitoring, CI/CD and everything, and while i appreciate the learning experience, most people are blissfully unaware of the chore it is to maintain such a thing. Most of these services are better off being in the cloud.

- email is a likely thing people will want to self host, which is also perhaps the most stupid thing to do. First of all, it is a chore to keep your server off of various block lists, and you gain nothing but pain by self hosting it. Email is insecure by design. Every email has at least 2 parties, the sender and the receiver, and with >50% of the worlds recipients running on Google/Microsoft/Yahoo/whatever, your email will get indexed. A much better alternative is letting someone who knows what they're doing host your email, and use a personal domain. That way you can move your MX records if need be, still maintain your email address if changing provider, and let someone else deal with the problems of running the service. If it's privacy you want, use something else, or use encryption. In both cases, self hosting gives you nothing additional.

- cloud storage is another contender, and the most common "excuse" is that cloud hosting is too expensive, and yes, if you plan to store 200TB in the cloud then it is, but maybe instead you need to look at which files are needed when away from home, and use the cloud for those, and leave the rest at home, accessible by VPN if need be. If you need privacy with cloud file hosting, something like Cryptomator (https://cryptomator.org/) is much easier/better than maintaining your own server. (as a side note, you can get around 20TB of cloud storage for €20/month, or roughly the price of the electricity required to run a 4 drive NAS for the same time, but not including cost of hardware).

No matter your setup at home, you will never create something as resilient as the major cloud datacenters. i.e. OneDrive (paid version) stores your files across multiple geographically separated data centers, using erasure coding, so if one data center dies, your files are still available in another center, and hastily being replicated to a third center. It uses atomic writes (like CoW filesystems, ZFS, Btrfs, APFS, etc) to ensure data written is correct, and has checksumming (inherent in the erasure coding), as well as versioning of files (OneDrive has unlimited file versioning for 30 days rolling), meaning you get at least some ransomware protection.

So in the end, most people are way better served by putting their stuff in the cloud and encrypting it, than they are exposing insecure services from home.

By all means, build the cluster at home as a learning experience, but save yourself some trouble and keep it within your LAN. If you need to access it externally, use a VPN instead. With modern VPNs like wireguard, there is very little overhead, and your data will thank you for it (as will your family as you suddenly have a lot more time to spend with them!).


Sending emails is hard because of the blocklists, but receiving (and storing) is easy. You can self-host the latter and delegate the former to a company specialized in it.

> Not matter your setup at home, you will never create something as resilient as the major cloud datacenters

That's true for hardware failures, but there are weekly stories on HN about major cloud providers randomly deleting people's accounts; so you need to set up off-site backups yourself either way.


> You can self-host the latter and delegate the former to a company specialized in it.

But what would you gain from it ? You gain no privacy, no improvement of service, no added resilience (probably less). All you gain is additional work maintaining your services.

> so you need to setup off-site backups yourself either way

You will always need backups regardless if self hosting or cloud hosting it, but by not opening up your firewall, you will have a much more secure home network, especially if you're not a security expert, which many people are not.

My personal mail backup consists of running a local dovecot instance that is synchronized every x hours using imapsync, and the dovecot mailstore is then backed up by my normal backup jobs every x hours. My provider does support an API for downloading a backup, but it's slower and more error prone than simply just synchronizing the email locally, and if need be, i can access my mailbox locally.
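
The sync itself is a single imapsync invocation, roughly (hosts and credential files are placeholders):

  # Pull everything from the provider into the local dovecot, incrementally
  imapsync --host1 imap.provider.example --user1 me@example.com --passfile1 /etc/secrets/p1 \
           --host2 localhost --user2 me --passfile2 /etc/secrets/p2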

Restoring it in case of data loss or changing provider is also "easy", as i can simply reverse the imapsync.


> You gain no privacy

You do:

1. You don't reply to many incoming emails, so they would never be seen by the MTA

2. Even when replying, you don't necessarily include the whole original email in your response (though that's only a very minor improvement)

3. MTAs normally don't store emails, and it would be expensive for them to do so as you aren't paying them for storage. This protects your past and present emails from the MTA turning evil (or being hacked) in the future.

> by not opening up your firewall, you will have a much more secure home network, especially if you're not a security expert, which many people are not.

Even consumer-grade routers support DMZs. With the right instructions, it's possible to only open the firewall to the server and keep it out of the home network.


> You don't reply to many incoming emails, so they would never be seen by the MTA

Every email has a sender and a receiver, and if either of those parties are on cloud hosted email, your email will be seen by an MTA, and you can bet your life that Google/Microsoft/whatever will snatch your recipient email and catalog it.

As I said, if you want privacy use something else, like Signal for instance, or use encryption, in which case it doesn't matter where you store your emails, as any information you expose is already exposed by the protocol itself (sender/recipient/subject/date/source IP/destination IP/etc).

> MTAs normally don't store emails, and it would be expensive for them to do so as you aren't paying them for storage. This protects your past and present emails from the MTA turning evil (or being hacked) in the future.

Backups protect against that too, but with much lower maintenance and more security/resilience, simply by being "offline".

> Even consumer-grade routers support DMZs. With the right instructions, it's possible to only open the firewall to the server and keep it out of the home network.

Ask your friends what a DMZ is. I'm certain that most people on HN will know what it is, and a large share will probably also know how to set it up, but then the issues with hairpin NAT, name resolution, and other things start cropping up, which is where some people just give up and instead expose the server from the LAN so that they can access it from home as well.

Expand the scope to also include VLANs and you have an even smaller group.

Next up is that many people will happily use the same server for exposing services to the internet as well as internal stuff, so now you're just one CVE away from having everything on your server encrypted/deleted/leaked. You can partially mitigate that by using jails/containers, but that's another layer you need to familiarize yourself with, with the risk of once again getting it wrong.

Most people would be much better off just setting up a VPN and using that to access their home network, and letting professionals worry about securing services.
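For reference, the VPN route really is only a handful of commands these days. A hedged WireGuard sketch (keys, addresses and port are placeholders):

```shell
# Generate the server keypair.
wg genkey | tee server.key | wg pubkey > server.pub

# Minimal server config; the client gets a mirror-image [Peer] section.
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# your laptop/phone
PublicKey = <client public key>
AllowedIPs = 10.8.0.2/32
EOF

wg-quick up wg0   # only UDP 51820 needs forwarding on the router
```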

Edit: I should add that I'm not against people setting up servers at home for experimenting/learning; it's only when they expose those services to the internet that it bothers me.

There is certainly value in experimenting with stuff in a homelab, but when there are so many free services available that do things better than almost any reasonable homelab can hope to, there is very little point in accepting the additional risk.

In the "static webpage" use case, you can publish it for free on GitHub, or you can choose to expose it from home, opening up your firewall as well as ports to your server. Congratulations: you're now a network and system administrator, as well as responsible for maintaining SSL/TLS certificates and ensuring uptime (assuming the webpage has value; otherwise why bother publishing it in the first place?).
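To be fair, the certificate part is mostly automated these days. A sketch using certbot's nginx plugin (domain names are placeholders):

```shell
# Obtain a certificate and wire it into the nginx config.
sudo certbot --nginx -d example.com -d www.example.com

# Renewal normally runs unattended via a systemd timer or cron job:
systemctl list-timers | grep certbot
```

The uptime and firewall parts, of course, stay on you.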

In the "free" package you get resilient infrastructure with redundancy at every level (power/internet, hardware, software, services), you get a professional staff that babysits the services, and you don't have to worry about anything except creating the content you want to publish.


Just use a $4-per-month VPS? I use a $10 VPS, as I also host lots of stuff other than a website.


Man, I feel like sometimes people forget the 1990s. I was there, running an Apache service and whatnot.

It sucked. Cost a lot of time, and really, fiddling with firewalls and Apache configuration didn't teach me much.

All hail PaaS and the like. I'm not looking back.


Meh... Even the best PaaS platforms are child's play compared to advanced declarative configuration systems like Nix.

One file and I can recreate my multi-box self-hosting setup. Why would I bother with a platform holding a bunch of state?
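In practice the "one file" workflow amounts to something like this (the flake path and hostname are made up):

```shell
# Rebuild a machine to exactly the configuration pinned in the flake.
nixos-rebuild switch --flake github:me/dotfiles#homeserver
```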


One Docker compose file and ...

Most PaaS-type offerings for web dev separate state and code. I can destroy and redeploy a whole cluster in two commands.
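Assuming all state lives in named volumes, the two commands are roughly:

```shell
docker compose down                  # tear the stack down (add -v to wipe volumes too)
docker compose up -d --pull always   # recreate everything from the compose file
```

The `--pull always` policy is a newer Compose v2 option; older setups can run `docker compose pull` first instead.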


I find it easier with Nginx. Certainly feels easier than I remember back around 1999 with Apache. Also, there's way more information out there on how to configure and set things up than there was twenty years ago.


He lost me at "I've bought a domain on Google Domains".


Astro be pretty chill if you want a no computation approach to making a website



