Introducing Docker Content Trust (docker.com)
101 points by dkasper on Aug 12, 2015 | 30 comments



I don't get the point of creating yet another signing framework. Integrating GnuPG would have saved them time and money. It's also used by virtually every Linux distro's package manager (which is arguably a bit more critical than Docker)...

Perhaps I missed something though? (https://www.youtube.com/watch?v=at72dhg-SZY&feature=youtu.be...)


This integration is built on The Update Framework, which has some distinct advantages over GPG's model.

First, TUF allows you to have freshness guarantees over the content. In GPG's model a MITM or malicious mirror can serve you old, known vulnerable content that you'll accept as valid because the signatures verify. This is not possible with TUF as metadata is additionally signed with a timestamping key.
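
Roughly, the extra check TUF does looks like this (a minimal sketch in Go; the types and the verifySig callback are hypothetical stand-ins, not the real notary metadata format):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type timestampMeta struct {
	Version int       // must never decrease
	Expires time.Time // short-lived; re-signed frequently by the timestamp key
	Sig     []byte
}

func checkFreshness(m timestampMeta, lastSeen int, verifySig func([]byte) bool) error {
	if !verifySig(m.Sig) {
		return errors.New("bad signature")
	}
	// A plain GPG check stops at the line above. TUF additionally rejects
	// stale or rolled-back metadata, which defeats a mirror replaying old,
	// known-vulnerable but correctly signed content.
	if time.Now().After(m.Expires) {
		return errors.New("timestamp metadata expired: possible freeze/replay attack")
	}
	if m.Version < lastSeen {
		return errors.New("rollback: version is lower than one previously seen")
	}
	return nil
}

func main() {
	stale := timestampMeta{Version: 3, Expires: time.Now().Add(-24 * time.Hour)}
	fmt.Println(checkFreshness(stale, 5, func([]byte) bool { return true }))
}
```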

Second, TUF has a property called 'survivable key compromise', which basically means there is a hierarchy of keys involved in the system, each with a different responsibility and different security requirements. There's a root key that's kept offline, a target key responsible for signing actual content, a timestamping key for freshness, and a snapshot key to tie all the other keys together. GPG's model does allow for signing subkeys, but it is rather clunky to use, and many of the Linux package managers sadly don't support signing subkeys.
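
As a rough illustration of that role separation (names and fields here are illustrative only, not notary's actual data structures):

```go
package main

import "fmt"

type role struct {
	name    string
	offline bool   // whether the key can be kept off the online signing infrastructure
	signs   string // what this role's key vouches for
}

func main() {
	roles := []role{
		{"root", true, "the public keys of all the other roles (the trust anchor)"},
		{"targets", false, "hashes and sizes of the actual content (image manifests)"},
		{"snapshot", false, "a consistent view tying the other metadata together"},
		{"timestamp", false, "a short-lived statement that the snapshot is current"},
	}
	for _, r := range roles {
		fmt.Printf("%-9s offline=%-5v signs %s\n", r.name, r.offline, r.signs)
	}
}
```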

Finally, GPG's usability leaves something to be desired. Docker makes pushing and pulling of images extremely easy, essentially making everyone a publisher of content. GPG works when publishing software is more rare and you can take the time to use a new utility in order to get security guarantees, but we wanted to make it extremely easy so that anyone can do it.

For more background, this paper does a good survey of existing package managers and where they fall short: https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.p...


> Finally, GPG's usability leaves something to be desired. Docker makes pushing and pulling of images extremely easy, essentially making everyone a publisher of content. GPG works when publishing software is more rare and you can take the time to use a new utility in order to get security guarantees, but we wanted to make it extremely easy so that anyone can do it.

A wrapper would have done an awesome job at that... I use GnuPG daily, enter a passphrase once and boom. Mails are signed, my password manager unlocked. Where is the "unusability" in this?


You're right, wrappers can abstract away complexity. That's effectively what TUF is: a wrapper framework around low level crypto primitives that achieves a secure content distribution system. GPG alone would not have given sufficient guarantees around freshness and survivable key compromise.

TUF should be understood as a higher level concept than GPG. There are additional features of the TUF spec that we'll be implementing in later versions, such as threshold signing (k of n signatures required for verification) and secure delegation.
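
Threshold signing in a nutshell (a minimal sketch; the verify callback stands in for real signature verification, and none of this is the actual notary API):

```go
package main

import "fmt"

// meetsThreshold reports whether at least k distinct keys produced a valid
// signature over payload. With a 2-of-3 policy, for example, the loss or
// compromise of a single key is survivable.
func meetsThreshold(payload []byte, sigs map[string][]byte, k int,
	verify func(keyID string, payload, sig []byte) bool) bool {
	valid := 0
	for keyID, sig := range sigs {
		if verify(keyID, payload, sig) {
			valid++
		}
	}
	return valid >= k
}

func main() {
	sigs := map[string][]byte{"key-a": nil, "key-b": nil, "key-c": nil}
	ok := meetsThreshold([]byte("manifest"), sigs, 2,
		func(keyID string, payload, sig []byte) bool { return keyID != "key-c" })
	fmt.Println("threshold met:", ok)
}
```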

For what it's worth, TUF could be implemented on top of GPG just fine. If folks have an appetite for that, we'd welcome contributions here: https://github.com/docker/notary


> In GPG's model a MITM or malicious mirror can serve you old, known vulnerable content that you'll accept as valid because the signatures verify.

This presumes you use HTTP, have a compromised SSL cert, or have pissed off the NSA. At worst, one would be installing older packages with known vulnerabilities via a replay attack, not fresh code injected by the attacker. This is more the fault of the package manager running over HTTP than of GPG, as far as I can see.

Here's a paper covering 'survivable key compromise' by the same chaps: http://freehaven.net/~arma/tuf-ccs2010.pdf. Interesting stuff.


> This presumes you use HTTP, have a compromised SSL cert, or have pissed off the NSA.

This is not taking content mirroring into account. TUF allows you to treat all mirrors as potentially malicious, which lets anyone reliably deliver trusted content even over an untrusted network. GPG does not provide a way to detect active attacks other than signature verification.


This is still an apples to oranges argument. GPG is a way to sign data in a trusted manner, including data that is delivered by both trusted and untrusted systems. If you want to point fingers, point at APT, YUM or RPM, not GPG.


They didn't, they used http://theupdateframework.com/

Python is also looking at using this for their package management system, pip.

https://lwn.net/Articles/629426/


Signify[1] from OpenBSD is also extant, and somewhat lighter than GnuPG for this purpose.

[1] Discussion: https://news.ycombinator.com/item?id=9708120


I don't understand why we don't make the container ID a hash of the image contents, have the docker CLI verify it, and let people use conventional means to trustedly pass around the correct container ID.
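
Something like this, presumably (a minimal sketch of the scheme being suggested; the file name and hashing a flat tarball are placeholders, not how the Docker CLI actually works):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyImage recomputes the SHA-256 of the pulled image contents and
// compares it to the ID the user obtained through some trusted channel.
func verifyImage(tarball, expectedID string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expectedID {
		return fmt.Errorf("image ID mismatch: got %s, want %s", got, expectedID)
	}
	return nil
}

func main() {
	// The "correct" ID is passed around out of band (docs, chat, config management).
	if err := verifyImage("app.tar", "hypothetical-expected-digest"); err != nil {
		fmt.Println(err)
	}
}
```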

This seems like a lot of extra infrastructure and process in a space where there is already a lot of infrastructure and process.


In that case updating a container to a new version would require everyone to change the container ID they are using. This introduces friction that either causes people not to update, or to develop wrappers that do something similar to this.

Granted, this doesn't always apply to packages you build yourself.


Having signed images to allow trust does seem valuable, but easily verifiable images, without the need for crypto, seems like a more fundamental building block upon which more complex processes can be built.


You are right, they are not mutually exclusive, and we are working on both in parallel. We are working on specifying a standardized way to 1) hash a container in its runnable form, and 2) attach arbitrary signatures constructed from that hash. That will allow using your favorite existing tools (eg. gpg) to create arbitrary trust and verification systems. This is part of the OCP project, and will be implemented by RunC which we donated last month. See https://github.com/opencontainers/runc and https://github.com/opencontainers/specs .
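
As a rough sketch of what that enables (file names are hypothetical, and the canonicalization of the runnable form is hand-waved since that's exactly what the spec work is defining), verifying a gpg detached signature over the hash could look like:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"os"

	"golang.org/x/crypto/openpgp"
)

func main() {
	// Stand-in for the container in its canonical runnable form (rootfs + config).
	bundle, err := os.ReadFile("bundle.tar")
	if err != nil {
		panic(err)
	}
	digest := sha256.Sum256(bundle)

	// The publisher's armored public key, obtained however you already do gpg trust.
	pub, err := os.Open("publisher.asc")
	if err != nil {
		panic(err)
	}
	keyring, err := openpgp.ReadArmoredKeyRing(pub)
	if err != nil {
		panic(err)
	}

	// An armored detached signature over the digest, shipped next to the image.
	sig, err := os.Open("digest.sig.asc")
	if err != nil {
		panic(err)
	}
	signer, err := openpgp.CheckArmoredDetachedSignature(keyring, bytes.NewReader(digest[:]), sig)
	if err != nil {
		panic(err)
	}
	fmt.Printf("digest %x signed by key %X\n", digest, signer.PrimaryKey.KeyId)
}
```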

As a rule of thumb, end-to-end trust and naming is most useful to developers ("How do I know I'm building on the right dependency, and using the latest and most secure version?"), and low-level hashing is most useful to ops ("How do I enforce a whitelist of containers allowed on my production cluster based on home-made PKI and policies?")

Another distinction is that you can use Notary (and Docker Trusted Content) with any kind of content - for example Compose files, source artifacts, system packages used to build the container, etc.


A bit OT, but it would have been automatic if docker used a content-addressable filesystem to store the image content. Unfortunately `docker commit` is nothing like `git commit`: it creates a differential snapshot of the filesystem. That's unfortunate because the user now has to be careful not to insert a lot of data between steps in their Dockerfile, or it will be shipped with the container even if removed by subsequent steps.


Would docker make a good sandbox or can applications break out?


Unfortunately, no, Docker is not a good sandbox, because it prioritizes compatibility over security while choosing a very wide API (the Linux kernel API) as its security boundary.

Privilege escalation vulnerabilities are found in the Linux kernel fairly regularly -- like, monthly, sometimes weekly. An attacker who can run arbitrary code in your Docker container would only need to wait a couple weeks for the next vulnerability report (or poke around the kernel code and find a new one) and then hit you before you can patch. The most recent example is this batch of CVEs: http://www.openwall.com/lists/oss-security/2015/07/22/7

Part of the problem is that the Linux kernel API is gigantic with lots of obscure features that haven't been carefully vetted. One way to solve this problem is to drastically constrain the attack surface by doing things like using seccomp-bpf to block obscure system calls, not mounting /proc or /sys, etc. Unfortunately doing this will sometimes break apps. Usually the apps can be tweaked to work around the missing features.
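
For a sense of what that looks like, here's a minimal seccomp sketch using the libseccomp Go bindings; the allow-list is purely illustrative (a real process needs many more syscalls), not a complete or recommended profile:

```go
package main

import (
	"fmt"
	"syscall"

	seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Default-deny: any syscall not explicitly allowed fails with EPERM
	// instead of ever reaching the kernel's obscure code paths.
	filter, err := seccomp.NewFilter(seccomp.ActErrno.SetReturnCode(int16(syscall.EPERM)))
	if err != nil {
		panic(err)
	}
	// A tiny illustrative allow-list; a real profile would be much longer.
	for _, name := range []string{"read", "write", "exit", "exit_group", "brk", "mmap", "munmap", "futex"} {
		sc, err := seccomp.GetSyscallFromName(name)
		if err != nil {
			panic(err)
		}
		if err := filter.AddRule(sc, seccomp.ActAllow); err != nil {
			panic(err)
		}
	}
	// Install the generated BPF program for this process and its children.
	if err := filter.Load(); err != nil {
		panic(err)
	}
	fmt.Println("seccomp filter loaded; anything else (e.g. modify_ldt) now fails with EPERM")
}
```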

Docker is not meant to be a sandbox. Docker is meant to be able to run any arbitrary Linux software. So Docker comes down on the side of compatibility, and does not use attack-surface-reduction techniques (unless you manually configure them, which no one does).

In contrast, Sandstorm.io (of which I am lead developer) prioritizes security over compatibility, and makes attack surface reduction mandatory for all apps. Some docs:

https://docs.sandstorm.io/en/latest/developing/security-prac...

https://blog.sandstorm.io/news/2014-08-13-sandbox-security.h...

The second link is almost exactly a year old, but has proven true: we've seen a lot of kernel exploits in the last year that were non-events for Sandstorm. The above-mentioned CVE, for example, did not affect Sandstorm because we block the modify_ldt syscall.

Note that Google Chrome's sandbox pioneered these techniques -- they originally created seccomp-bpf.


It makes a good sandbox because it uses chroot/pivot_root and unshares the pid/net/uts/mnt/ipc namespaces, but it doesn't use user namespaces, so root in the container is root on the host, which is a bit scary.
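
For illustration, here's roughly what user namespaces buy you (a minimal Go sketch, Linux-only, run as an unprivileged user): the child sees uid 0 inside the namespace, but on the host it's just your own uid.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("id") // prints uid=0 inside the new namespace
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUSER,
		UidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getuid(), Size: 1}, // "root" maps to this unprivileged user
		},
		GidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getgid(), Size: 1},
		},
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```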


And user namespaces are really close to being merged: https://github.com/docker/docker/issues/15187


Great!


Another important point: running untrusted code in Linux containers is not considered safe in all configurations, but it is safe in some configurations. So if you know what you're doing, you can already configure Docker to run untrusted payloads. That is what platforms like Heroku, Dotcloud, Google App Engine have been doing for years. The most important aspect of such a configuration is: don't allow running it as root (you can specify uid/gid privilege drop from Docker's CLI, management API and in the image format).


Classic app engine doesn't use docker, and Managed VMs app engine is run in a Compute Engine VM in the customer's own project, partly to avoid having to worry about running untrusted code since Compute Engine handles the security there.


You are correct. GAE classic is not a good example since it uses language-specific sandboxing. Hopefully you'll agree that my overall point still stands: when configured and managed properly there is consensus that you can run certain untrusted payloads in linux containers today.


I'll have to take your word for it :) I'm not savvy enough about this topic - I just know the MVM architecture.


Yes, where that configuration involves:

- Not running as root.

- Enabling the no_new_privs prctl to prevent attacks on buggy setuid binaries (sketched below this list).

- Using an aggressive seccomp-bpf filter.

- Hiding /proc, /sys, most of /dev, etc.

- Other things I'm forgetting at the moment. :)
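
For the no_new_privs item, a minimal sketch (using golang.org/x/sys/unix, Linux-only):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// After this prctl, execve can never grant additional privileges, so a
	// buggy setuid binary inside the container stops being an escalation path.
	if err := unix.Prctl(unix.PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); err != nil {
		panic(err)
	}
	// The bit is irreversible and inherited by every child of this process.
	fmt.Println("no_new_privs set; setuid/setgid bits are now ignored on exec")
}
```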


Given the list of things you need to do to ensure security, I'm going to go out on a limb and say that "configured and managed properly" is something that does not happen in reality. Until there is something more fundamental making breaches impossible, you're never going to be sure that there isn't something that you've missed.

It's ok that Docker containers are not yet secure sandboxes, and it would be great if that changed.


Exactly. It really needs to be the platform's job to provide a reasonable sandbox configuration that users can turn on or off as a single switch. Or, better yet, one that is always on. Because while it's generally easy for apps to work around things like missing /proc, most developers aren't going to bother if it isn't mandatory that they do so.

But potentially breaking compatibility with existing apps is understandably not something Docker wants to do. (Whereas it's something sandstorm.io is happy to do, because apps already need to be tweaked in other ways for it.)


Thanks. Have there been documented breakouts, or is it more theoretical?


Even if the code can't break out, be aware of other issues of non-fully virtualized multi-tenancy:

https://www.cs.unc.edu/~reiter/papers/2014/CCS1.pdf


Not in recent versions.


http://www.openwall.com/lists/oss-security/2015/07/22/7 is three weeks old and would allow breaking out of Docker. Bugs like this (in Linux) are found regularly.



