> The author doesn't seem to understand linux containers.
> Docker doesn't run a "vm" image and it definitely doesn't run another kernel [...]
Yes I do understand them!
In fact, in a former life (i.e. during the long forgotten time when Linux containers were just a novelty and LXC was seen as the "future") I was a big proponent of containers.
What I was not a big supporter of are containers that, instead of being, as you well say, "closer to chrooted processes", are almost stand-alone VM images lacking only a kernel. In fact Fly.io does exactly this -- they take a Docker container, add a Linux kernel, and run everything in a Firecracker "micro" VM.
----
> that certainly won't lose any data when you purge it by mistake
> If you don't understand what you are doing, you can "purge data by mistake" doing other things as well.
Yes, butter-fingers are a thing... But with Docker it's so easy to delete a container by mistake (one that, although it shouldn't hold state, does); on the other hand, if you are about to issue `rm -R -f .` while inside `/var/lib/mysql`, you have quite a few opportunities to notice that perhaps you are doing something stupid (you have to explicitly type `rm`, add `-R`, and have deliberately changed into `/var/lib/mysql`)...
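Just to illustrate the asymmetry (the container name `db` is hypothetical, and this assumes its state lives in the writable layer rather than a volume):

    docker rm -f db    # one short, generic command: the container and its writable layer (data included) are gone

    cd /var/lib/mysql  # versus several deliberate, specific steps before host data is at risk
    rm -R -f .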
----
> The download size may be bigger than a statically linked single executable... but if you are running a lot of different processes in docker, they may share the underlying layers which may mean that you end up using less disk space overall.
OK, granted, for "server applications" (that happen to use tens of micro-services) this makes sense.
But how about a "tool application", say a static-site builder; does it now make sense to use Docker to run this tool?
I would think if you depend on data stored in a container being persistent, it's an indication you really should be mounting a volume to the container and persisting the data there. Then container restarts won't matter. Best practices are generally for containers to be "cattle" instead of "pets", and data persistence usually has a different solution.
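For example, a minimal sketch of that pattern with MySQL (the names here are only illustrative):

    docker volume create mysql-data
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8
    docker rm -f db                                                                              # the "cattle" container is disposable...
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql:8   # ...the data in the volume survives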
Regarding using a container for distributing static site generators, does anyone really do that? I think you may be building up a straw man here; I haven't seen anyone recommend this workflow. Could you elaborate on how this relates to the parent comment's mention of using less space when running many containers? What the parent comment mentions is, in fact, highly relevant to one of the main use-cases of containers: running many containers of the same service on one or many hosts. Space can be saved there with layer caching.
> Could you elaborate on how this relates to the parent comment's mention of using less space when running many containers?
Could you elaborate on how "running many containers" relates to the topic of distributing and running *a single tool* (which was the context of the article)?
> Could you elaborate on how "running many containers" relates to the topic of distributing and running a single tool (which was the context of the article)?
YOU may be distributing just a single tool, but your users most likely use their computer for more than one tool. If they use many tools under docker, it is likely that many of those tools share the same underlying layers.
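For instance, assuming two hypothetical tool images built from the same base image, docker only stores the common layers once, and you can observe it:

    docker pull tool-a       # downloads the shared base layers plus tool-a's own layer
    docker pull tool-b       # reports "Already exists" for the layers tool-a already brought in
    docker system df -v      # the SHARED SIZE column shows how much is stored only once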
> instead they are almost stand-alone VM images lacking a kernel
How are docker images even close to a VM image? A Docker image is closer to shipping your application in a zip file with all its dependencies than it is to a VM image. In a VM image, you need a "minimal OS", which has to include the kernel, drivers, required OS binaries + everything that would be in a container.
Docker processes literally run natively on the host machine, making Linux system calls just like any native executable would... except they only see the resources you want them to see. Docker processes don't need the docker daemon to be running. In contrast, if you were to run a VM, you'd have to have at least a minimal OS kernel with all the required OS processes and drivers, and the application on top of that... not to mention the hypervisor itself.
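You can see this for yourself on any host running a container (nginx is just a convenient example):

    docker run -d --name web nginx
    ps -ef | grep nginx                                              # the nginx master and workers are ordinary host processes
    sudo ls /proc/"$(docker inspect -f '{{.State.Pid}}' web)"/root   # and the "image" is just a directory tree on the host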
> But with Docker it's so easy to delete a container by mistake
You do have to write "docker rm" to remove a container. It doesn't just disappear. The only way I can imagine someone losing data is if they think they are writing to the host filesystem when inside a container (without using volumes as one should) and then delete the container thinking that their data exists on the host filesystem.
> But how about a "tool application", say a static-site builder; does it now make sense to use Docker to run this tool?
This is how you can run a "tool application":
    docker run --pull always --network none --rm -it -v "$(pwd)":/src klakegg/hugo:latest
The above will only give the hugo process access to the current working directory; it has no access to your filesystem outside of that, not to mention no access to your devices, your network or many other things a host process would have. It makes sense to run it under docker just for that. Plus, you can be sure it will never hit a library incompatibility because you updated something else, even if something as basic as glibc comes with breaking changes; it will always run the up-to-date version (if that's not what you want, drop `--pull always` or use a fixed version tag instead of "latest"); and you never ever have to worry about what programming language your tools are written in or how they are distributed. You also don't have to rely on the original author distributing docker files (in the case of hugo, for example, the docker images are maintained by a third party). As long as they have one way to build and run it, you can create docker images yourself and make them available for others too.
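And if reproducibility matters more than freshness, pin a released tag instead (the exact tag here is only illustrative; pick one that actually exists on the registry):

    docker run --rm -v "$(pwd)":/src klakegg/hugo:0.101.0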
> In a VM image, you need a "minimal OS" which has to include the kernel, drivers, required OS binaries + everything that would be in a container.
A docker image and a VM image can be the same.
For example, say you have a simple web server that just listens on port 443 and does its thing; if you compile it statically, and it doesn't need any other files, you don't even need a VM image.
You can simply create an initramfs that contains that executable (nothing else, no drivers, no configuration files, no distribution, nothing).
Just boot the Linux kernel (granted, a custom build with the drivers the hypervisor needs compiled in), use proper kernel arguments to either statically or dynamically configure the network stack, and instruct the kernel to use the server executable instead of `init`. Your server doesn't even know it's running in an "empty" VM.
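A rough sketch of that setup under QEMU (everything here is illustrative: a statically linked `server` binary in the current directory, and a kernel built with initramfs and IP autoconfiguration support):

    # pack the single static binary into an initramfs
    echo server | cpio -o -H newc | gzip > initramfs.gz

    # boot it: rdinit= makes the kernel exec /server instead of /init,
    # and ip= configures the network statically with no userspace help
    qemu-system-x86_64 -kernel bzImage -initrd initramfs.gz \
        -append "rdinit=/server ip=10.0.0.2::10.0.0.1:255.255.255.0" \
        -nographic -m 128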
(Just read about how AWS Lambda implements everything by using Firecracker.)