> A server installed with half-decent care can run uninterrupted for a long long time, given minimal maintenance and usual care (update, and reboot if you change the kernel).
For my earlier home setups, this was actually part of the problem! My servers and apps were so zero-touch that by the time I needed to do anything, I'd forgotten everything about them!
Now, I could have meticulously documented everything, but... I find that pretty boring. The thing with Docker is that, to some extent, Dockerfiles are a kind of documentation. They also mean I can run my workloads on any server - I don't need a special snowflake server that I'm scared to touch.
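To make that concrete, here's a minimal sketch of what I mean (the app, file names, and port are illustrative, not a real project of mine): every install step is written down in one place.

```dockerfile
# Illustrative only: the point is that the Dockerfile doubles as
# install documentation, because every step has to be spelled out.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are pinned in a file instead of living in my head
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# How the app is exposed and started is documented too
EXPOSE 8080
CMD ["python", "server.py"]
```

Six months later I can read that top to bottom and know exactly how the thing was set up.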
We have found that, while some applications are much easier to install with Docker, operating them becomes much harder in the long run.
NextCloud is a prime example: adding some extensions (apps) becomes almost impossible when it's installed via Docker.
We have two teams with their own NextCloud installations: one on bare metal, the other a Docker setup. The bare-metal one is much easier to update, extend with apps, diagnose, and operate in general. The Docker installation needed three days of tinkering and head-banging to enable what the other team got working in 25 seconds flat.
To prevent such problems, JS Wiki for example runs a special container dedicated to update duties.
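For anyone who hasn't seen it, the pattern looks roughly like this. This is a hypothetical Compose sketch of the idea, not the actual JS Wiki setup: the app container is never touched by hand, and a sidecar with access to the Docker socket does the pulling and swapping. Tools like Watchtower generalise the same idea.

```yaml
# Hypothetical sketch of the dedicated-updater pattern; image
# names are made up. The app container stays untouched, while a
# sidecar talks to the host Docker daemon to pull the new image
# and recreate the app container when an update is requested.
services:
  app:
    image: example/app:latest
    ports:
      - "3000:3000"

  updater:
    image: example/app-updater:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```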
To be honest, I'd rather document an installation as I go and have an easier time in the long run than bang my head against some routine update or config change.
I document my home installations the same way, too. It creates a great knowledge base in the long run.
Not every application fits the scenario I described above, but in my experience, Docker is neither a panacea nor a valid reason not to document something.
At this point I believe that if software wasn't written Docker-first, it will just be plain shit to manage as a Docker container.
More often than not, Docker containers are a crutch for apps with an absolute shitshow of an install process: the process just gets baked into a Dockerfile instead of being made more sensible in the first place.
For a home lab/home prod, time to install is not necessarily the worst thing to optimise for. For example, I’m not going to spend several hours each on 8-10 different alternative applications, just to install them so I can get a feel for which one I want to keep.
Docker is an amazing timesaver, and probably the only reason why I run anything more ambitious than classic LAMP + NFS on my home lab.
I agree some software doesn't fit into Docker-style application containers well, but I'm still a fan of Infrastructure-as-Code. I use Ansible on LXD containers and I'm mostly content. For example, my Flarum forum role installs PHP, Apache and changes system configs, but creating the MySQL database and going through the interactive setup process is documented as a manual task.
I could automate this too, but it's not worth the effort and complexity, and just documenting the first part is about as much effort as actually scripting it. I think it's a reasonable compromise.
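Roughly what that split looks like in the role (a paraphrased excerpt with illustrative names, not my actual tasks file):

```yaml
# Excerpt sketch: the repeatable parts are Ansible tasks, the
# one-off part is a documented manual step instead of more code.
- name: Install PHP and Apache
  ansible.builtin.apt:
    name:
      - php
      - php-mysql
      - apache2
    state: present

- name: Deploy Apache vhost for the forum
  ansible.builtin.template:
    src: flarum.conf.j2
    dest: /etc/apache2/sites-available/flarum.conf
  notify: reload apache

# Manual steps, documented here rather than automated:
#   1. Create the MySQL database and a dedicated user for Flarum
#   2. Run the interactive Flarum web installer once
```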
If you make a habit of performing all changes via CI (Ansible/Chef/Salt, maybe Terraform if applicable), you get this for free too. See your playbooks as "Dockerfiles".
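As a sketch, that can be as small as one CI job that is the only path to the servers (hypothetical GitLab CI syntax, with inventory and playbook names made up):

```yaml
# Hypothetical CI job: config changes only reach the servers by
# merging to main and letting the pipeline run the playbook.
deploy:
  stage: deploy
  script:
    - ansible-playbook -i inventory/production site.yml
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```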