Hacker News
Docker: Lightweight Linux containers for consistent development and deployment (linuxjournal.com)
86 points by Isofarro on May 20, 2014 | 30 comments



This seems to be a fairly outdated article, despite the displayed date: it says the 'newest' version is 0.7, but we've been on 0.11 for two weeks now: http://blog.docker.io/2014/05/docker-0-11-release-candidate-...

It's some pretty amazing tech, and I love using it with Vagrant for my local dev environment, and for full deploys. I'm not such a big fan of Dokku. It's a great 'Heroku replacement', but one of Docker's best strengths is that you can just upload your code to a private repo (or use a trusted build on a private repo!), then pull that down and run it to deploy. Dokku going through the buildpack process, while nice from a quick git-push angle, sacrifices one of Docker's strongest features: consistency.
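A minimal sketch of that deploy flow (the registry host and image name here are hypothetical):

```shell
# Build and tag the image, then push it to a private registry
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

# On the deploy host: pull the exact image that was built and run it
docker pull registry.example.com/myapp:latest
docker run -d registry.example.com/myapp:latest
```

The point being that the artifact that ran in dev is byte-for-byte the artifact that runs in production, with no build step on the deploy host.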


This is a great point and has bugged me for the past few months. I'm sat there watching my 'npm install' whir by and thinking, 'I reckon this is not the "docker way"'.

Having said that, I'm still trying to work out the combination of source repo and Docker repo. I.e. dev 1 makes a code change, commits the code to GitHub, then commits the Docker image to the registry.

Now: does developer number 2 'git pull' or 'docker pull'?

What I really like is how docker does not really enforce anything upon you - leaving the above question to be answered any which way.
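The two workflows in question, sketched as shell (the image and registry names are hypothetical):

```shell
# Option 1: developer 2 pulls the source and rebuilds locally
git pull origin master
docker build -t myapp .

# Option 2: developer 2 pulls the prebuilt image developer 1 pushed
docker pull registry.example.com/myapp:latest
```

Option 1 trades registry bandwidth for local build time; option 2 guarantees both devs run the identical image.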

Docker ftw!


I don't think sharing the image is necessary during development. I just add the Dockerfile to the repository and each dev builds their own image. If they are methodical they can pull the repo and rebuild the image at the beginning of their work day. Most times it will be rebuilt in a second because there have been no changes. If they don't rebuild the image, it will be the main suspect as soon as something breaks anyway, so I can't see this causing much trouble.
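A sketch of that start-of-day routine, assuming the Dockerfile lives at the repo root (the image name is hypothetical):

```shell
# Sync the repo and rebuild the dev image. If none of the
# Dockerfile's inputs changed, every step is a cache hit and
# the build finishes almost instantly.
git pull
docker build -t myapp-dev .
```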


This is exactly what I do, just using Trusted Builds.

Right now one of my biggest curiosities is doing mass deploys and unified logs. I've not tried it yet, so it hasn't mattered, but it could be fun.


Personally, I git pull and I just have all the important repos [e.g. things with Dockerfiles that people want to be notified if they should rebuild] dump to chatroom notifications so people know they might want to rebuild.


quaunaut: I've never used Dokku so I'm not sure if this is helpful, but I've been experimenting with using https://github.com/progrium/buildstep to build containers using buildpacks. I'm finding that it allows for fast builds and a consistent image that I can push/pull to different environments. There is a PR (https://github.com/progrium/buildstep/pull/50) for adding buildpack caching that makes the builds even faster by caching gems and assets (I happen to be using it for a rails project).


Just a sidenote, but why do some software projects consider 0.11 higher than 0.7? I remember when jQuery did this it broke tons of plugins that used a simple numerical comparison to see if the version was high enough (which seems sensible enough to me).


0.11 is higher than 0.7 because version numbers aren't decimal numbers. The 0 is the "major" version number and the 11 or 7 is the "minor" version number. Many projects follow the Semantic Versioning convention.

http://semver.org/
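The point is that version components compare as integers, not as decimal fractions. GNU `sort -V` implements exactly this ordering:

```shell
# Version sort: each dot-separated component compares numerically,
# so 0.11 sorts after 0.7
printf '0.7\n0.11\n0.2\n' | sort -V
```

This prints `0.2`, `0.7`, `0.11` in that order, which is why a plain numeric or string comparison on version strings breaks.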


Because 11 is higher than 7. It's not version 0.1.1 or version "dot one one", it's version "dot eleven". Eleven is a bigger number than seven is.


it's true, 11 > 7 but 0.11 < 0.7


Version numbers are not always decimal numbers.


I like how they're saying that Docker's strength is in letting you never learn how to package applications. Which is a bit like saying that a Google self-driving car's strength is that you never have to learn how to drive.

The useful features of Docker are resource isolation, image layering, and staged deployment. The idea that you can run two versions of PHP or move your files between distros has been a solved problem for at least 30 years.


I'd go even farther and say that the main useful feature of docker is the image layering. Even the resource isolation has been available with LXC and cgroups for a while now.

But docker certainly has a nice way of letting you use all these features in a friendly package. It's often not the most original things that are the most groundbreaking, sometimes it's when something comes along and takes all the parts you already have and puts them all together in a useful way.


You can combine it with Vagrant as well. I used to use Vagrant with VirtualBox, but using the Docker provider, development is much faster. The config files we are using are public and open source: https://github.com/czettnersandor/vagrant-docker-lamp

However, if you don't have Linux, Vagrant can start a VirtualBox image with Linux and run Docker in it with a little modification in the Vagrantfile.
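For example, selecting the Docker provider is just a flag on `vagrant up` (the exact Vagrantfile setup comes from the repo above and may differ):

```shell
# Use the Docker provider instead of the default VirtualBox one.
# On non-Linux hosts, Vagrant automatically creates a small proxy
# Linux VM to host the containers.
vagrant up --provider=docker
```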


How do you turn off the VM that is hosting the Docker containers? I.e. when none are running?

Btw, I found the vagrant documentation for the docker provider to be a bit lacking. I probably wasted a day getting it to work :(


If you're using Vagrant, you'll have to manually shut off the VM via `vagrant halt`.

If you're just trying to get started with Docker, Boot2Docker (https://github.com/boot2docker/boot2docker) is probably the way to go.

If you're looking for an Ubuntu-based Vagrant box, you can try out my Vagrant box (http://ferry.opencore.io/en/latest/install.html#os-x). It's based off of 14.04. Be warned though, my box is very large (~4GB) as it contains many pre-baked images (Hadoop, Cassandra, etc.).
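A sketch of the lifecycle commands mentioned above (Vagrant commands must run from the directory containing the Vagrantfile; Boot2Docker commands per its README):

```shell
# Vagrant: stop and restart the proxy VM
vagrant halt
vagrant up

# Boot2Docker's equivalent lifecycle
boot2docker init    # create the VM (once)
boot2docker up      # start it
boot2docker down    # stop it when no containers are needed
```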


Thanks. Where do I issue vagrant halt (i.e. from which folder)? The workaround I found was to go to VirtualBox directly and issue a poweroff. Not clean at all :(


I guess not, but can you run graphical applications in Docker?


It's no problem to run an X server in the container and use VNC/X client/openssh to access it from your machine. See for example this blog post: http://blog.docker.io/2013/07/docker-desktop-your-desktop-ov...


Don't you mean run the X server locally and an X client in the container?


You can do it either way depending on what you're trying to achieve. I have a dev container set up that runs Xvnc and/or Xpra in a container so I can connect to it from anywhere.


Possibly going a bit off-topic here, but isn't Xvnc arguably a "headless" X server, in that it appears to be an X server and a VNC server? As X client/server terminology is round the other way from VNC (and pretty much everything else), it isn't really a "graphical" application in the sense of jmnicolas's original question...


I get that this is 'hacker', but in what way is this 'news'?


http://ycombinator.com/newsguidelines.html -

"On-Topic: Anything that good hackers would find interesting."


And a massive pain for security.


Assuming you are arguing in favour of VMs: the benefits of Docker stand, and you can perfectly well run Docker containers within a VM.

The feature sets and sweet spots of OS-level package managers, language-specific package managers, VMs, containers, distinct/same hosts, ... are both overlapping and different enough that you need judgement to choose which one you want, but they are certainly not exclusive.

If your use case means you prefer VMs over containers and you don't need to combine them, fine, but every situation is different.


There are valid concerns.

Someone with access to the docker control socket effectively has root on your machine: e.g. `docker run -v /etc:/external_etc ubuntu visudo -f /external_etc/sudoers`.

Don't let untrusted users (or scripts) run docker.


Which is exactly what the Docker docs say: "First of all, only trusted users should be allowed to control your Docker daemon."

http://docs.docker.io/articles/security/#docker-daemon-attac...


This is a good point, but it applies to other tech that seeks some of the same ends. And if an attacker has penetrated that deep, you'd be screwed anyway. What's nice from a security perspective is that Docker actually lowers the attack surface and shrinks the access any potential outside attack can have.


Depends on the use cases.

Your sweeping statement applies to a minority of them.



