1) packaging: this is the feature that's easiest to see benefits from. Having a single artifact that can be run on your CI infrastructure, development machine, and production environment is a massive win.
2) scheduling: there are big cost savings to be had by packing your application processes more efficiently onto your infrastructure. This might not be a big deal if you're a startup and haven't yet hit scale.
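To make the packing claim concrete, here's a toy first-fit bin-packing sketch. The CPU numbers and the first-fit heuristic are my own illustration; real schedulers (Kubernetes included) also weigh memory, affinity, and spread.

```python
# Illustrative only: pack app processes (by CPU request) onto the
# fewest hosts using a first-fit heuristic.
def first_fit(requests, host_capacity):
    hosts = []  # each entry is the remaining capacity of one host
    for req in requests:
        for i, free in enumerate(hosts):
            if req <= free:
                hosts[i] -= req  # fits on an existing host
                break
        else:
            hosts.append(host_capacity - req)  # provision a new host
    return len(hosts)

# Ten processes needing 1.5 CPUs each, on 4-CPU hosts:
# one process per host would need 10 hosts; packing needs 5.
print(first_fit([1.5] * 10, host_capacity=4.0))  # 5
```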
3) dev environment: it's powerful to be able to run exactly what's been deployed to prod on your local machine. I've not found developing in a container to be great, though; I still use the Django local dev server for fast code reloading. (It's possible to mount your working directory into your built container; this is just personal taste.)
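For reference, the bind-mount approach looks something like this (the image name and the /app path are made up for illustration; Django's runserver reloads on file changes, so edits on the host take effect immediately):

```shell
# Run the built image, but shadow its baked-in code with the host checkout.
docker run --rm -it \
  -v "$(pwd)":/app \
  -p 8000:8000 \
  myapp:latest \
  python manage.py runserver 0.0.0.0:8000
```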
4) security: containers are not as robust a security boundary as hypervisors, so they are less suitable for multi-tenant architectures. The most common deployment is to run your containers inside a VM anyway, so this isn't necessarily a problem. As an additional defense-in-depth perimeter, containers are great.
5) networking: think of network namespaces as a completely isolated network stack for each container. You can run your containers in the host namespace using `--net=host`, but this is insecure [1]. Using host networking can be useful for development though. In general, the port-forwarding machinery allows your orchestrator to deploy multiple copies of the same container next to each other, without the deployed apps having to know about other containers' port allocations. This makes it easier to pack your containers densely. (More concretely, your app just needs to listen on port 8000, even if Kubernetes is remapping one copy of it to 33112, and another copy to 33111 on the host.)
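You can see the same remapping with plain Docker: publishing only the container port (`-p 8000`) tells Docker to pick a free ephemeral host port for each replica. A sketch (image name is illustrative, and the exact host ports will vary):

```shell
# Each replica listens on 8000 inside its own network namespace;
# Docker allocates a distinct host port for each.
docker run -d -p 8000 myapp:latest
docker run -d -p 8000 myapp:latest
docker ps --format '{{.Ports}}'
# e.g. 0.0.0.0:33111->8000/tcp and 0.0.0.0:33112->8000/tcp
```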
6) secrets: containers force you to be more rigorous with your handling of secrets, but most of the best practices have been established for some time [2]. The general paradigm is to mount your keys/secrets as files, and consume them in the container; Kubernetes makes this easy with their "Secrets" API. You can also map secret values into env variables if you prefer.
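A minimal sketch of the file-mount pattern in Python. The `/run/secrets` directory and the secret names are assumptions (a common convention, not a requirement), and the env-variable fallback is the alternative the paragraph mentions:

```python
import os
import tempfile

def read_secret(name, secrets_dir="/run/secrets"):
    """Read a secret mounted as a file (the Kubernetes Secret volume
    pattern), falling back to an environment variable of the same
    name in upper case."""
    path = os.path.join(secrets_dir, name)
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return os.environ.get(name.upper())

# Demo against a temporary directory standing in for the mount point:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "db_password"), "w") as f:
        f.write("s3cret\n")
    print(read_secret("db_password", secrets_dir=d))  # s3cret
```

The file-based form has a nice property over env variables: the orchestrator can update the mounted file in place, and the app can re-read it without a restart.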
7) container images: the Dockerfile magic is a pretty big win for building artifacts; the build process caches layers that haven't changed, which can make builds very fast when you're just updating code (leaving OS deps untouched). Having written and optimized a build pipeline that produced VMDK images, and having experienced the pain of cloning and caching those artifacts, I can attest that this is a very nice 80/20 solution out of the box.
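The caching win depends on layer ordering: copy the dependency manifest and install it before copying the code, so a code-only change invalidates only the final layers. A minimal sketch for a Django-style app (base image, paths, and filenames are assumptions):

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# These layers are only rebuilt when requirements.txt changes...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...while a pure code change invalidates only the layers below.
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```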
[1]: https://github.com/docker/docker/issues/14767
[2]: https://12factor.net/