If I give someone a sample docker-compose file, they can immediately run my service regardless of OS. If I distribute manually, I need to provide instructions for setting up the proper dev environment and packages for several common OSes and distros (brew maybe, apt, rpm, etc.).
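For illustration, a minimal sketch of such a compose file (the service name, image, and port here are made up):

```yaml
# docker-compose.yml -- hypothetical service; names and ports are illustrative
services:
  myservice:
    image: example/myservice:1.4
    ports:
      - "8080:8080"                   # host:container
    volumes:
      - ./data:/var/lib/myservice    # persist state outside the container
    restart: unless-stopped
```

The recipient runs `docker compose up -d` and is done; no toolchain setup required on their end.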
Speaking personally, I know how to write a proper Dockerfile (and it's a skill that's learnable in a couple of hours). I have no idea how to distribute packages through other formats and avoid the footguns.
There's a lot more to running a service than getting it running easily. Longevity and stability through applying updates need to be engineered too.
The problem is that when using Docker, you can't tell what your app's dependencies are. Telling me I need Node or Go or PHP, and/or which database, Redis, etc., gives me an instant feel for how the deployment works, as well as how security updates need to be applied.
Docker is just a black box by comparison... at least to me. All my attempts at running Docker solutions on small VMs have ended badly (ports opened publicly, rampant disk use, poor log file management, lack of security updates...). Seriously, I wish devs would at least list the tech stacks they're using in their apps in the readme.
However, I do grok that people who've embraced the ecosystem would like it that way... but not all techs like it.
For me, the best projects are those that lay it all out, warts and all, and then have a separate repo that manages the Docker state.
> The problem is that when using Docker, you can't tell what your app's dependencies are.
That's all in the Dockerfile though... it's a simple and standard way to show how to install something. It's just like a Makefile, but one that always works whatever environment you're in, whatever dependencies you already have on your system, and in most cases whether it has been maintained or not.
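For example, a typical Dockerfile reads like an install recipe; the base image and the RUN lines are the dependency list. A sketch for a hypothetical Node app:

```dockerfile
# Hypothetical example: the stack is readable at a glance.
# Runtime: Node 20 on Debian bookworm-slim.
FROM node:20-bookworm-slim

# OS-level dependencies, spelled out like any install doc
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
# App dependencies, pinned by the lockfile
RUN npm ci --omit=dev
COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```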
> ports opened publicly
... and that wouldn't have happened if you used something else? How so? Unlike other solutions, where each application chooses how ports are configured, with Docker you actually need to be aware of the port and explicitly publish it. I mean, sure, you might not have known that by default it publishes on 0.0.0.0, but that would be true for almost any application (and that's when you're even aware of the ports; it's a basic CTF challenge to have an unknown port open).
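And if you don't want a published port on 0.0.0.0, plain compose syntax lets you bind to loopback (image name is hypothetical):

```yaml
services:
  myservice:
    image: example/myservice:1.4
    ports:
      - "127.0.0.1:8080:8080"   # reachable only from the host, not the network
```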
> rampant disk use
That's a good point. I agree that the storage can be quite annoying, but at the same time you handle it the same way you'd handle it with any other software: by knowing where the storage goes and why.
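Docker does at least ship stock commands for seeing where the space goes and reclaiming it:

```sh
docker system df       # breakdown: images, containers, volumes, build cache
docker system df -v    # per-object detail
docker system prune    # remove stopped containers, dangling images, unused networks
docker volume prune    # remove unused volumes (destructive: check first!)
```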
> poor log file management
I love how logging is handled with Docker: there's a single log output and that's it. It's the source of truth for logging, easy peasy. You want your logs pushed to another system? Well, connect it to the Docker logging system and that's it.
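One caveat worth conceding, since it ties into the disk-use complaint: the default json-file driver grows unbounded unless you cap it, e.g. per service in compose (image name hypothetical):

```yaml
services:
  myservice:
    image: example/myservice:1.4
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate after 10 MB
        max-file: "3"     # keep at most 3 rotated files
```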
> lack of security updates
Could you expand on that one? You are responsible for keeping whatever you use updated, whether it's a Docker container or an application; one or the other doesn't change that. Maybe you mean that it's so easy to make a Docker image that it's just as easy to stop updating it, thus making it less secure for whoever uses it? I mean, sure... that would make it less secure, but it's still possible for any application to be abandoned; it's your responsibility to make sure whatever you use will be maintained.
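And the mechanics of applying updates are the same few commands either way; roughly, for a compose-managed deployment:

```sh
# For images you pull: fetch newer tags and recreate the containers
docker compose pull
docker compose up -d

# For images you build yourself: rebuild against a refreshed base image
docker compose build --pull
docker compose up -d
```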
> Seriously, I wish devs would at least list the tech stacks they're using in their apps in the readme.
I haven't seen many Docker images that don't do that. I have seen a few that don't go into depth on how to set up the environment, but the tech stack is mostly a given.
The dependencies in your Dockerfile are as opaque as you wish. I've seen a lot of publicly available images built "FROM base" or similar, with the base hidden somewhere in their maybe-public CI. There is nothing about a Dockerfile that actually guarantees it shows your dependencies.
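To make that concrete, this is the (hypothetical, but common) pattern: the Dockerfile technically exists but tells you nothing.

```dockerfile
# Everything interesting lives in "base", which is built
# somewhere in CI and may not be publicly visible at all.
FROM registry.internal.example/base:latest
COPY ./dist /app
CMD ["/app/run.sh"]
```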
The same goes for log files. Some images contain everything, and the instructions are to mount volumes for their poorly written application's log output. Just because Docker-the-application captures stdout doesn't mean the processes running in a container write to stdout.
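When you control the image, the usual workaround is the symlink trick the official nginx image uses: point the app's log files at the container's stdout/stderr so `docker logs` sees them (the paths here are hypothetical):

```dockerfile
# For an app that insists on writing log files;
# same approach as the official nginx image
RUN ln -sf /dev/stdout /var/log/myapp/access.log \
    && ln -sf /dev/stderr /var/log/myapp/error.log
```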
For security updates, you need to be sure that the maintainer of the image is not only looking for updates to the application, but also to all explicit and transitive dependencies. In a classic setup with a package manager and explicit dependencies, those are the responsibility of other teams/individuals.
That said, Docker and friends have very much changed the landscape for the better, and I would have a hard time going back to maintaining lowest-common-denominator Java versions, Python virtualenvs, and tracking down incompatible shared libraries.
> If I give someone a sample docker-compose file, they can immediately run my service regardless of OS
_if they're already bought into the docker ecosystem_, this is true. if not, then they have to go read up on docker first: figure out how to install it (OS-specific), enable the docker system services (i think systemd more or less standardizes this step), configure a user that has permissions to manage docker deployments (also frequently OS-specific), etc.
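on a debian-ish system that bootstrapping looks roughly like this (package names and steps vary by distro, which is kind of the point):

```sh
sudo apt-get install docker.io        # or set up docker's own apt repo instead
sudo systemctl enable --now docker    # start the daemon now and at boot
sudo usermod -aG docker "$USER"       # manage docker without sudo; re-login after
```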
not saying docker is or isn't a worthy tradeoff in balancing distribution work between the code authors, the OS packagers, and the users. just don't blind yourself to the fact that it's yet another thing users have to learn before they can use your stuff.