It's some pretty amazing tech, and I love it with Vagrant for my local dev environment, and for full deploys. I'm not such a big fan of Dokku. It's a great 'Heroku replacement', but one of Docker's best strengths is that you can just upload your code to a private repo (or use a trusted build on a private repo!), then pull that down and run it to deploy. Dokku going through the buildpack process, while nice from a quick git-push angle, sacrifices one of Docker's strongest features (consistency).
This is a great point and has bugged me for the past few months - I'm sat there watching my 'npm install' whir by and thinking, 'I reckon this is not the "docker way"'.
Having said that, I'm still trying to work out the combination of source repo and Docker repo - i.e. dev 1 makes a code change, commits the code to GitHub, then pushes the Docker image to the registry.
Now, does developer number 2 'git pull' or 'docker pull'?
What I really like is how docker does not really enforce anything upon you - leaving the above question to be answered any which way.
I don't think sharing the image is necessary during development. I just add the Dockerfile to the repository and each dev builds their own image. If they are methodical they can pull the repo and rebuild the image at the beginning of their work day. Most times it will be rebuilt in a second because there have been no changes. If they don't rebuild the image, it will be the main suspect as soon as something breaks anyway, so I can't see this causing much trouble.
Personally, I git pull and I just have all the important repos [e.g. things with Dockerfiles that people want to be notified if they should rebuild] dump to chatroom notifications so people know they might want to rebuild.
quaunaut: I've never used Dokku so I'm not sure if this is helpful, but I've been experimenting with using https://github.com/progrium/buildstep to build containers using buildpacks. I'm finding that it allows for fast builds and a consistent image that I can push/pull to different environments. There is a PR (https://github.com/progrium/buildstep/pull/50) for adding buildpack caching that makes the builds even faster by caching gems and assets (I happen to be using it for a rails project).
Just a sidenote, but why do some software projects consider 0.11 higher than 0.7? I remember when jQuery did this it broke tons of plugins that used a simple numerical comparison to see if the version was high enough (which seems sensible enough to me).
0.11 is higher than 0.7 because version numbers aren't decimal numbers. The 0 is the "major" version number and the 11 or 7 is the "minor" version number. Many projects follow the Semantic Versioning convention.
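A quick sketch of why the naive numerical comparison breaks (illustrative Python; the tuple comparison mimics how semver orders major/minor/patch components, though a full semver parser also handles pre-release tags):

```python
# As floating-point numbers, 0.11 really is smaller than 0.7 - which is
# exactly the bug that bit plugins doing a numeric version check.
assert 0.11 < 0.7

def parse_version(v):
    """Split 'MAJOR.MINOR' or 'MAJOR.MINOR.PATCH' into a tuple of ints.

    Python compares tuples element by element, so (0, 11) > (0, 7),
    matching the semver ordering.
    """
    return tuple(int(part) for part in v.split("."))

assert parse_version("0.11") > parse_version("0.7")      # 11 > 7 as integers
assert parse_version("1.0.0") > parse_version("0.11.5")  # major version wins
```

Sorting with such a key also gives the expected order, e.g. `sorted(["0.11", "0.7", "0.2"], key=parse_version)` yields `["0.2", "0.7", "0.11"]`.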
I like how they're saying that Docker's strength is in letting you never learn how to package applications. Which is a bit like saying that a Google self-driving car's strength is that you never have to learn how to drive.
The useful features of Docker are resource isolation, image layering, and staged deployment. The idea that you can run two versions of PHP or move your files between distros has been a solved problem for at least 30 years.
I'd go even farther and say that the main useful feature of docker is the image layering. Even the resource isolation has been available with LXC and cgroups for a while now.
But docker certainly has a nice way of letting you use all these features in a friendly package. It's often not the most original things that are the most groundbreaking, sometimes it's when something comes along and takes all the parts you already have and puts them all together in a useful way.
You can combine it with Vagrant as well. I used to use Vagrant with VirtualBox, but with the Docker provider, development is much faster. The config files we are using are public and open source: https://github.com/czettnersandor/vagrant-docker-lamp
However, if you don't have Linux, Vagrant can start a VirtualBox image with Linux and run Docker in it, with a small modification to the Vagrantfile.
If you're looking for an Ubuntu-based Vagrant box, you can try out my Vagrant box (http://ferry.opencore.io/en/latest/install.html#os-x). It's based off of 14.04. Be warned though, my box is very large (~4GB) as it contains many pre-baked images (Hadoop, Cassandra, etc.).
Thanks. Where do I issue vagrant halt (i.e. which folder)? The workaround I found was to go to VirtualBox directly and issue a poweroff. Not clean at all :(
You can do it either way depending on what you're trying to achieve. I have a dev container set up that runs Xvnc and/or Xpra in a container so I can connect to it from anywhere.
Possibly going a bit off-topic here - but isn't Xvnc arguably a "headless" X server, in that it appears to be both an X server and a VNC server? Since X client/server terminology is round the other way from VNC (and pretty much everything else), it isn't really a "graphical" application in the sense of jmnicolas's original question...
Assuming you are arguing in favour of VMs, the benefits of Docker still stand, and you can perfectly well run Docker containers within a VM.
The feature sets/sweet spots of OS-level package managers, language-specific package managers, VMs, containers, distinct/shared hosts, ... overlap enough, and differ enough, that you need judgement to choose among them - but they are certainly not mutually exclusive.
If your use case means you prefer VMs over containers and you don't need to combine them, fine, but every situation is different.
Someone with access to the docker control socket effectively has root on your machine: e.g. `docker run -v /etc:/external_etc ubuntu visudo -f /external_etc/sudoers`.
Don't let untrusted users (or scripts) run docker.
This is a good point, but it applies to other tech that seeks some of the same ends. And if an attacker has penetrated that deep, you would be screwed anyway. What's nice from a security perspective is that Docker actually lowers the attack surface and shrinks the access that any potential outside attacker can have.