I'm mostly thinking about all the stuff that goes wrong. Corrupted data files, servers dying, monitoring, getting paged, backups, testing restores, yada yada. It works until it doesn't, and then the value you derive from self-hosting gets questioned. If you can self-insure against shit happening & actually like doing so, then it's probably worth it. Mostly just pointing this out for the potential self-hosters who haven't thought everything through, and, more likely (like myself), those who wouldn't know what to think of in the first place.
There are definitely some things to consider here, but I find that most people drastically overestimate the amount of work involved in hosting things.
They also tend to underestimate the amount of work required when using managed solutions. For example, you'll certainly want secondary backups and tested restores even for managed options.
I don't see any difference in maintenance effort. Whether the database is self-hosted or in the cloud, I need external backups (the built-in backups on the cloud service are convenient but not sufficient) and a tested way to restore the whole infrastructure from scratch.
Someone else managing that DB doesn't prevent me (or someone malicious) from destroying the data, and various infra screwups are still a thing (though perhaps less common). The only real difference is whether doing the same things through the managed DB's admin panel takes meaningfully less effort than ssh on a self-hosted box, and that seems a wash: a managed DB makes complicated features easier to use than a self-hosted one, but tends to be more complex for the simple things.
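To make "tested restore" concrete: the check can be as small as the sketch below, which dumps, restores into a throwaway database, and runs one sanity query. This is just an illustration assuming Postgres and the standard client tools on PATH; the connection string, scratch database name and the "orders" query are made-up placeholders.

    # Minimal restore-test sketch. Assumes Postgres and the standard client
    # tools (pg_dump, dropdb, createdb, pg_restore, psql). DSN, database
    # names and the sanity-check table are hypothetical.
    import subprocess

    PROD_DSN = "postgresql://readonly@db.internal/app"  # hypothetical primary
    SCRATCH_DB = "restore_check"                         # throwaway local DB

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # 1. Take a fresh logical dump from the primary.
    run("pg_dump", "--format=custom", "--file=app.dump", PROD_DSN)

    # 2. Restore it into a scratch database that can be thrown away.
    run("dropdb", "--if-exists", SCRATCH_DB)
    run("createdb", SCRATCH_DB)
    run("pg_restore", "--dbname=" + SCRATCH_DB, "app.dump")

    # 3. Sanity check: the restored copy should contain recent data.
    out = subprocess.run(
        ["psql", "-At", "-d", SCRATCH_DB, "-c",
         "SELECT count(*) FROM orders "
         "WHERE created_at > now() - interval '1 day'"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    assert int(out) > 0, "restore looks stale or empty"

Whether the dump comes from a managed service or your own box, the script is the same; that's really the point.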
If uptime is critical and you don't want this to be a full-time job, then don't do it... opt for a PaaS.
But for me uptime isn't critical; I can happily be down for a few days if needed. It hasn't happened once yet, but I could tolerate it if it did.
Plus this is how I learn. By doing it I learn how to do it and what needs to be considered. When it goes wrong I learn how to avoid that in future and what I should do to minimise the risk of it recurring.
I find the database to be the easiest part to manage; it's file storage, and the availability of it, that I outsource to AWS S3. The sheer volume of that is harder for me to solve trivially, and self-hosting Ceph seems far more intimidating. Postgres is easy though.
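For what it's worth, the S3 side is only a few lines inside the app. A rough sketch, assuming boto3 with credentials already configured in the environment; the bucket name, key and helper function are made up for illustration:

    # Sketch of the S3 offload. Assumes boto3 and credentials via the usual
    # environment variables or instance profile. Bucket/key names are made up.
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-app-user-uploads"  # hypothetical bucket

    def store_upload(local_path: str, key: str) -> str:
        """Push a user file to S3 and return a time-limited download URL."""
        s3.upload_file(local_path, BUCKET, key)
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=3600,  # link valid for an hour
        )

For me the bulky, hard-to-replicate data lives in that bucket, so the Postgres data stays small enough to back up and restore without drama.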
I've had a similar setup, only with MySQL as the database, since 2003. It started on a server in a closet in my own apartment, then moved to Leaseweb, then Hetzner. The project peaked some 14 years ago at well over a million page views per month, but I could still fit it on a single bare-metal server.
As the project started to fade out, I moved to Hetzner Cloud to reduce the cost of standby infrastructure.
It was down for 4 hours in a row several times over those 18 years, but otherwise availability was around 99.97% (under 3 hours of downtime per year).
It's surprising how rock-solid hosting providers are. Can't imagine hosting anything substantial on AWS after all these years.
In practice, both the hardware and the software used in production are ten times as reliable as they were 10 years ago. I've seen less downtime on the average Dell rack server running RHEL than AWS has in an average year.
Additionally, running a managed DB involves a lot of upkeep as well; it's just a different set of tasks.
It's only reachable from the app servers and backed up to tarsnap.
It's really not a lot to learn and hasn't been a chore to operate.
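For anyone curious what that amounts to, the whole thing can be a small nightly job along these lines. A sketch only, assuming pg_dump and an already-configured tarsnap (keyfile and cachedir set up in tarsnap.conf); the database name, dump path and archive naming are invented for the example.

    # Rough sketch of a nightly job: logical dump, then ship it to tarsnap
    # under a dated archive name. Paths and names are examples, not gospel.
    import datetime
    import pathlib
    import subprocess

    DUMP = pathlib.Path("/var/backups/app.dump")
    DUMP.parent.mkdir(parents=True, exist_ok=True)

    # Logical dump of the database (which only the app servers can reach).
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(DUMP), "app"],
        check=True,
    )

    # Create a dated tarsnap archive containing the dump.
    archive = "db-" + datetime.date.today().isoformat()
    subprocess.run(["tarsnap", "-c", "-f", archive, str(DUMP)], check=True)

Stick that in cron, check it actually ran now and then, and you've covered most of the scary scenarios people bring up.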