> Is there any point in Fail2Ban if you're using keys and have disabled passwords?
I’d say no. Back in the mid-2000s, I used to use Fail2Ban as an extra layer of defense, but users on Stack Exchange and Hacker News (like tptacek¹) convinced me that it was pointless if I’d already disabled password authentication.
To minimise noise in my logs and to have an extra layer of defense, I only allow TCP access to port 22 (with rate limiting) from my home ISP’s network block, my work IP address, and via the WireGuard network interface (in case my home or work ISP changes the IP addresses they provide to customers).
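That kind of rule set can be sketched with nftables. The source blocks, the rate limit, and the interface name `wg0` below are placeholders, not the commenter's actual values:

```
# /etc/nftables.conf fragment -- illustrative addresses only
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    # SSH only from trusted network blocks, rate-limited
    tcp dport 22 ip saddr { 203.0.113.0/24, 198.51.100.7 } ct state new limit rate 10/minute accept
    # SSH over the WireGuard interface (name "wg0" assumed)
    iifname "wg0" tcp dport 22 accept
  }
}
```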
I have considered using Fail2Ban to stop spammers consuming too many Postfix resources, but so far I’ve got away with postscreen and configuring Postfix to reject spam attempts as early as possible during the SMTP transaction. Similarly, my Apache server gets hammered with exploit attempts, but I haven’t got around to investigating how useful Fail2Ban would be for limiting the server resources spent responding to these malicious HTTP requests.
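The postscreen-plus-early-rejection approach can be sketched in Postfix's main.cf. These settings are illustrative, not the commenter's actual configuration:

```
# /etc/postfix/main.cf fragment (illustrative)
# postscreen drops many bots before they ever reach a full SMTP process
postscreen_greet_action = enforce
postscreen_dnsbl_sites = zen.spamhaus.org
postscreen_dnsbl_action = enforce

# reject as early as possible during the SMTP transaction
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_unknown_recipient_domain,
    reject_rbl_client zen.spamhaus.org
```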
I move SSH to a different port... I know it's easy enough to discover through a port scan, but it cuts down a LOT of noise. For home, I only allow the WireGuard port from the outside.
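Moving SSH to another port is a one-line change in sshd_config; port 2222 below is just an example:

```
# /etc/ssh/sshd_config fragment -- example port only
Port 2222
PasswordAuthentication no
# allow the new port through your firewall before restarting sshd
```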
I tend to start with Ubuntu Server these days, as the SSH config is pretty much where I want it out of the box and it will import my public key during setup. I also now use Caddy for reverse-proxy duties over Nginx.
Fail2Ban doesn't do much for SSH other than keeping your logs cleaner if you're using key-based auth. It's quite good for protecting other services, like Vaultwarden, for example. Of course, it's just one additional layer. The important part is to configure the services themselves to be more secure.
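A Fail2Ban jail for a service like Vaultwarden can be sketched like this. It assumes a matching filter at `/etc/fail2ban/filter.d/vaultwarden.conf` and a log path that will vary per install:

```
# /etc/fail2ban/jail.local fragment -- assumes a vaultwarden filter exists
[vaultwarden]
enabled  = true
port     = 80,443
filter   = vaultwarden
logpath  = /var/log/vaultwarden.log
maxretry = 5
bantime  = 3600
```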
There are a few other posts on HN with the same title. Some things to also consider that I had not seen mentioned:
PCI
CIS
Etc…
These include many more SSH hardening steps you can take beyond Fail2Ban, some of which are requirements for the standards above.
These posts are good but miss a lot of security practices that are considered “standard”. As always, the best security is not allowing the system to be connected to anything.
But if you have to have a system with that kind of availability, it’s always best to introduce at least the CIS foundations and whatever else you see fit for security. Just my .02..
I have received a lot of feedback regarding this. I'm waiting for Ubuntu to update their CIS docs for 24.04, and I'll update my post when they do. I keep a lot of my blog posts regularly updated; this post will be one of them.
A modern take would be to simply not open anything to the outside world - except WireGuard (Tailscale or such).
From there, everything is treated as either "localhost" or a local network.
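A firewall for that stance can be sketched as: drop everything inbound except the WireGuard port. UDP 51820 is WireGuard's default, assumed here:

```
# /etc/nftables.conf sketch: nothing open to the outside except WireGuard
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    # WireGuard (default port 51820 assumed)
    udp dport 51820 accept
  }
}
```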
You can set up one or two central boxes (an actual home lab "server" where you already have HTTP-based services, and a Raspberry Pi Zero 2 for backup) with Tailscale.
With remote devices (including phones) in the same Tailscale network, you can access anything on the home network as if you were physically home (while also having ACLs for kids/friends/etc.).
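Those per-person ACLs live in the tailnet policy file. This is a sketch with made-up groups and tags, not a real policy:

```jsonc
// Tailscale ACL sketch (tailnet policy file) -- groups/tags are hypothetical
{
  "groups": {
    "group:admins": ["you@example.com"],
    "group:guests": ["kid@example.com"]
  },
  "acls": [
    // admins can reach every device and port on the tailnet
    {"action": "accept", "src": ["group:admins"], "dst": ["*:*"]},
    // guests only reach the media box over HTTPS
    {"action": "accept", "src": ["group:guests"], "dst": ["tag:media:443"]}
  ]
}
```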
On the other (professional) end, Nginx and SSH are not even on the same network interface, and you run the Nginx LB/reverse proxy on separate boxes from the ones hosting the actual apps/websites... etc.
In the case of a "zero trust network", the answer is no, it doesn't violate that principle.
With WireGuard or Tailscale/Cloudflare/etc. you still know and verify the identity of every person/device that has access to the (virtual, and through it, real) network.