
Docker and docker-compose also make it really easy to store all of the data in a single location, which opens up easy snapshot management with something like ZFS or Btrfs, and backups that way.

This is something I've found throws people off - you don't need to back up the OS at all, just those volumes.

  /srv/docker_data
      name_of_project/
          docker-compose.yml
          volumes/
              my_project_db1/
Just for example.

At upgrade time I can snapshot that whole project and test, and revert if it goes bad.
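Roughly like this, assuming the project directory sits on its own ZFS dataset (dataset and snapshot names here are just examples):

  # take a snapshot before the upgrade
  zfs snapshot tank/docker_data/name_of_project@pre-upgrade

  # upgrade and test; if it goes bad, stop the stack and roll back
  docker-compose down
  zfs rollback tank/docker_data/name_of_project@pre-upgrade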




Your file structure is interesting - I like that the data and the compose file live together.

I actually moved away from separate docker-compose files and use one large one (and wrote a small utility to start, stop, pull, etc.)

I will have a deeper look at that; if I manage to combine this with the caddy configurations, that would be pretty much awesome :) EDIT: caddy supports globs, this is won-der-ful. I will switch to your configuration and rewrite my tool accordingly (adding a way to bootstrap the docker-compose file and the caddy config).
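For reference, what I have in mind for the main Caddyfile is roughly this, assuming one Caddyfile fragment per project directory (the paths are just illustrative):

  # main Caddyfile: pull in one config fragment per project
  import /srv/docker_data/*/Caddyfile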

Thanks for chiming in!


Group things logically vs just doing one huge docker-compose file or 500 individual things.

If you need 7 services together for that feature to work right, write them together. If you need 3 for the next one, group those.

If at some point you want to remove one, just bring down that docker-compose. That's it. It's then up to you to back up, delete, or leave the data as is.

You can write a script to bring everything up when the machine starts, but you can also mark the services you need to autostart.
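A minimal sketch of the autostart part, per service in the compose file (the image is a placeholder):

  services:
    my_service:
      image: nginx:stable          # placeholder image
      restart: unless-stopped      # comes back up when the Docker daemon starts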


> If you need 7 services together for that feature to work right, write them together. If you need 3 for the next one, group those.

After giving it some thought, this is probably the biggest drawback of that approach: when you have services that other services depend on (say, a db), you cannot express that across compose files, because the dependent services live in their own docker-compose files and depends_on can only point to a service defined in the same file.
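To make that concrete: within a single file you can write something like the sketch below, but once the db moves into its own docker-compose file there is nothing for depends_on to point at (service and image names are made up):

  services:
    app:
      image: my_app:1.0            # hypothetical application image
      depends_on:
        - db                       # only resolves within this same compose file
    db:
      image: postgres:15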

I can live with that, though - the fact that the docker-compose file, the caddy config (for the reverse proxy), and the data are all together is fantastic and allows for easy bootstrapping.


I've never used Caddy, and I'm not sure how dynamic it is. I use Traefik for reverse proxy so that each docker-compose configures the proxy for the application in that folder using labels. docker-compose up/down adds and removes the proxy config for that application.

There's an externally defined Docker network for the reverse proxy and web applications.

Each app should run its own self-contained everything. App, DB, Redis, you name it.

I use the term "application" fairly vaguely. The number of containers each directory contains really depends on the application.

Real-world examples; these are each a single docker-compose file:

Nextcloud, has containers for application, PostgreSQL DB, Redis, Draw.io, and Collabora CODE. I use all of this exclusively from Nextcloud so it made sense to bunch them together. Nextcloud, Draw.io, and Collabora are all added to the reverse_proxy network in addition to the one docker-compose automatically creates.

Gitea has the application container and its own PostgreSQL container. Again, the Gitea application is added to the reverse_proxy network.
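For the curious, the Gitea one boils down to roughly this, with image tags, hostnames, and paths as placeholders:

  networks:
    reverse_proxy:
      external: true               # created once with: docker network create reverse_proxy
    default:

  services:
    gitea:
      image: gitea/gitea:1.21      # placeholder tag
      networks:
        - reverse_proxy
        - default
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.gitea.rule=Host(`git.example.com`)"
        - "traefik.http.services.gitea.loadbalancer.server.port=3000"
      volumes:
        - ./volumes/gitea:/data
    db:
      image: postgres:15           # placeholder tag
      environment:
        POSTGRES_PASSWORD: change_me   # placeholder
      networks:
        - default                  # the DB is not reachable from the proxy network
      volumes:
        - ./volumes/gitea_db:/var/lib/postgresql/data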

This simplifies it when I want to back up or move between machines. It also makes it possible to run different DB versions should you run into incompatibilities. It kind of sounds like you're trying to run a single DB server for everything?


> I use Traefik for reverse proxy so that each docker-compose configures the proxy for the application

I've used traefik v1 and v2 and I did not like it. This is of course a personal opinion and I know it has its strengths. The fact that the config is through labels in docker-compose put me off, as well as some other things. But I know it is good and widely used.

Caddy is a web server that works great and has a well-thought-out configuration (especially v2). It is not dynamic by default (but there are some images that bring it dynamism à la traefik, and a REST API).

> Each app should run its own self-contained everything. App, DB, Redis, you name it.

It depends on the setup. In my home environment, running several backends such as MariaDB, PostgreSQL, etc. is too much. Yes, it is the right approach (including the fact that you do not have shared dependencies), but mileage varies.

(ah, you've edited your answer to add some points so some of my comment is redundant)


Sorry for the edits. One time I'll hit reply and have it say what I wanted it to, but this was not that time.

The benefits of choice! I use Traefik because the config is through labels in docker-compose.

I'm also running it at home. Ryzen 3 2200G (4c/4t), 32GB RAM, $100 Intel NVMe. Running roughly 40 containers in my docker VM, plus a few extra VMs. It's enough for my family and a couple of friends.

You'd be amazed at how low-impact a small PostgreSQL or MariaDB instance is. I/O is the largest bottleneck. You can feel an HDD holding you back with a bunch of DBs churning simultaneously.

Of course, YMMV. If we're talking about a Raspberry Pi, disregard everything I've said. I'd run as few DB instances as possible too.


No, you are right. The curse of premature optimization.

I have an older Skylake with 25 GB RAM or so. Load is 0.51, RAM is at 5 GB or so... Plenty of space to grow, but I am rather new to docker (~8 years, as opposed to 30+ in standard Linux) so I have not done too much research.

I will go for really independent, contained blocks; it has nothing but advantages.


If you want traefik-like config from container labels, simply use: https://github.com/lucaslorentz/caddy-docker-proxy
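If I remember the README right, the labels look roughly like this; hostname and port are illustrative, and the container has to share a Docker network with the caddy container:

  services:
    whoami:
      image: traefik/whoami        # small test image listening on port 80
      labels:
        caddy: whoami.example.com
        caddy.reverse_proxy: "{{upstreams 80}}"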


This is it.

It gives me control of each application individually.

I can snapshot, back up, and even migrate each application between machines easily.

It also helps when I have an application where I need to pin versions. Using the latest tag can be fun and exciting (read: dangerous and breakage-prone), so if the tag is in the docker-compose file I immediately know what version that snapshot was running if I need to look back.
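For example (the tag itself is illustrative):

  services:
    gitea:
      # pinned: the snapshot records exactly what was running
      image: gitea/gitea:1.21.4
      # image: gitea/gitea:latest  <- fun and exciting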



