
This initial discussion is just about the container image format and we would really like to see convergence on that front.

As container runtimes, though, Rocket and Docker have different design goals. As one example, Rocket is designed to be a standalone tool that can run without a daemon and works with existing init systems like upstart, systemd, and sysv. We needed this standalone property because on CoreOS containers are our package manager, and the daemon can get in the way of doing that correctly.

It is ok that Docker and Rocket have different design goals, and both can exist for different purposes. But I think we can share and converge on an image format that can be used by multiple implementations: one that includes cryptographically verifiable images, simple hosting on object stores, and a federated DNS-based namespace for container names.



Brandon, let me respectfully ask you 3 questions:

1) As you very well know, Docker is already working on cryptographic signatures, a federated DNS-based namespace, and simple hosting on object stores. If you "would like to see convergence", why didn't you join the effort to implement this along with the rest of the Docker community? The design discussion has been going on for a long time (the oldest trace I can find is https://github.com/docker/docker/issues/2700), and the first tech previews started appearing in 1.3. Yet I can't find a single trace of your participation, even to say that you disagree. If you would like to see convergence, why is that?

2) You decided to launch a competing format and implementation. That is your prerogative. But if you "would like to see convergence", why did you never inform me, or any other Docker maintainer, that you were working on this? It seems to me that, if your goal is convergence, it would be worth at least bringing it up and testing the waters, asking us how we felt about joining the effort. But I learned about your project in the news, like everybody else - in spite of having spent the day with you, in person, literally the day before.

3) Specifically, on the topic of your pull request (which we also received without any prior warning, conveniently on the same day as your blog post): now we have 2 incompatible formats and implementations, which do essentially the same thing. Once we finish our work on cryptographic signatures, federated DNS-based naming, etc., they will be functionally impossible to distinguish. How will it benefit Docker users to have to memorize a new command-line option, to choose between 2 incompatible formats which do exactly the same thing? I understand that this creates a narrative which benefits your company, CoreOS. But can you point to a concrete situation where a user's life will be made better by this? I can't. I think it's 100% vendor posturing. Maybe it's bad PR for me to say this. But it's the truth. Give me a concrete user story and I will reconsider.


> How will it benefit Docker users to have to memorize a new command-line option

User here. I couldn't care less about a new command-line option, but it would be worth a lot if I could run any image on any platform.

If you claim this is "all about the user" then talk more about what the user gains or loses.

Is the biggest downside really just another command-line option? Docker already has a metric fuckton[1] of command-line options, what's one more?

Impugning the motives of your competitor is at best an irrelevant distraction, and at worst an indictment of your own motives.

[1] https://docs.docker.com/reference/commandline/cli/


> but it would be worth a lot if I could run any image on any platform.

That technology exists; it's called a VM. Any platform that supports x86, for example, will run any x86-compatible image. You can use wrappers and scripts like Vagrant on top of it.

Or if you want all hosting managed as a pool of resources (storage, CPU) try something like oVirt.

http://www.ovirt.org/About_oVirt


And VMs are far more heavyweight and don't address any of the reasons people prefer containers to VMs for some types of workloads.


> and doesn't address any of the reasons why people prefer containers to VMs for some types of workloads

I was responding to one reason -- which is "running any image on any platform".

> why people prefer containers to VMs for some types of workloads

Sure, but there are no magic unicorns underneath; knowing what you get from a technology requires some understanding of how it works. Saying something like "I want it very lightweight, but I also want it to run any image" is asking for a trade-off. Or a complicated multi-host capability-based platform.


I've got a use case. ACI support would let me use containers without being coupled to Docker's registry. I really don't want to run that software, and I really, really don't want to rely on Docker Hub. ACI's use of existing standards for their "registry" implementation is a major draw for me.


Actually, I think you don't have to rely on Docker's registry:

- you can simply use Dockerfiles and build your own images,

- it seems you can host your own registry [1],

- you can even use a service run by CoreOS, i.e. Quay, to host your Docker images [2].

I'm not sure I understand what you mean by "I really don't want to run that software". Does it mean you don't want to use Docker?

[1] https://blog.docker.com/2013/07/how-to-use-your-own-registry...

[2] https://quay.io/


To clarify, I don't want to run my own registry and I don't want to rely on any third party for image hosting. I just want to pull tarballs from a dumb file server. No need to run a registry for that, and no one company has a privileged position in the namespace.

It's maddening, because I love Docker-the-concept but not Docker-the-implementation nor Docker-the-ecosystem. I honestly do understand how many would find the UX of "Docker, Inc. at the center of things" to be a refreshing convenience, but to me that notion is frustrating and repellent, as much so as if Git remotes defaulted to GitHub.


> I just want to pull tarballs from a dumb file server.

Is there something I'm missing that you couldn't just use wget? If you have the URIs, I can't imagine how pulling down an image by name would be more than a quarter-page Python script, even if you include the untarring and such.
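Something along these lines would probably do it. This is just a sketch; the server host and the `<name>-<version>.tar.gz` naming scheme are made-up assumptions, not anything Docker or appc actually specifies:

```python
import io
import tarfile
import urllib.request

# Hypothetical layout on a dumb file server: <base>/<name>-<version>.tar.gz
BASE_URL = "https://images.example.com"  # assumption, not a real host


def unpack(data, dest):
    """Unpack a gzipped image tarball (given as bytes) into dest."""
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
        tar.extractall(dest)


def pull(name, version, dest="."):
    """Fetch an image tarball by name/version from the file server and unpack it."""
    url = f"{BASE_URL}/{name}-{version}.tar.gz"
    with urllib.request.urlopen(url) as resp:
        unpack(resp.read(), dest)
```

No registry protocol involved: any static file server or object store that can serve a URL works.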


Yeah, that's about what I've been doing, but AFAICT I lose the benefits of layering when I refuse to speak the registry protocol. Docker's export command dumps the entire image tree, so I'm stuck transferring GB-sized payloads to deploy the tiniest change to my app. appc manages to do layers without a coordinating registry. (Kind of funny that CoreOS bought Quay, on that note.)
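To spell out why a "dumb" server is enough for layers: if each layer is stored under its own content address (a digest), the client can skip any layer it already has and verify what it downloads, with no coordinating registry. A rough sketch, where the `blobs/` layout and names are my own assumptions for illustration:

```python
import hashlib
import os
import urllib.request

# Hypothetical layout: each layer stored by digest at <base>/blobs/sha256-<hex>,
# so any static file server works. BASE_URL and CACHE_DIR are assumptions.
BASE_URL = "https://images.example.com"
CACHE_DIR = "layer-cache"


def matches(data, digest):
    """Check downloaded bytes against their content address."""
    return "sha256:" + hashlib.sha256(data).hexdigest() == digest


def fetch_layer(digest):
    """Download a layer only if it isn't cached locally, verifying by digest."""
    path = os.path.join(CACHE_DIR, digest.replace(":", "-"))
    if os.path.exists(path):
        return path  # content-addressed: a cache hit never needs re-downloading
    with urllib.request.urlopen(f"{BASE_URL}/blobs/{digest.replace(':', '-')}") as resp:
        data = resp.read()
    if not matches(data, digest):
        raise ValueError(f"digest mismatch for {digest}")
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Deploying a tiny change to the top layer then means fetching one small blob, not the whole GB-sized tree.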


Yes, you can run your own registry, but doing so without ever pulling anything from Docker Hub means rebuilding all images yourself, tagging all of them for your own registry, and pushing to that - or DNS / firewall hacks to redirect requests for index.docker.io (or forking Docker).

They've made it much harder than necessary.


There is nothing "respectful" about anyone's behavior on either side of this trainwreck, and calling each other out on a web forum isn't going to help anything.


shykes, I will follow up to all of this on the proposal on GitHub.


In that case I don't see how the image format ties into any of what you just said. It seems to me the image format is completely irrelevant: Docker's format could be augmented to include all the security features you want, and rkt could just use Docker containers. That's where the confusion is. The image format is orthogonal to all the other issues you mentioned.

By the way, I don't have a dog in this race and am not rooting for either side. Purely from a technical and resource-use perspective, the fragmentation is starting to feel like something mostly driven by public relations and marketing. As someone who tries to use the best tool for the job, I now have no compelling reason to choose either format and runtime, which means I'm just going to wait it out - and both sides are going to lose contributions from independent open source developers because their effort is going to be wasted.


It's not just the image format; it's about getting DNS-based federation and content-addressable images, which effectively takes away "index.docker.io"'s special status.

And that's where the problem is. I can very much understand why Docker sees holding onto that as a great advantage to them, but it's not an advantage to me as a user.
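To illustrate what DNS-based federation buys you: the image name itself carries a domain the publisher controls, so resolving a name to a download URL needs no central index at all. The template below is loosely modelled on appc's "simple discovery" idea; the actual spec may differ in details, and the name/version here are invented:

```python
# Sketch of federated, DNS-based naming: 'example.com/worker' names an image
# served by whoever controls example.com - no registry has special status.
# URL template loosely based on appc simple discovery; details may differ.

def discovery_url(name, version, os_name="linux", arch="amd64", ext="aci"):
    """Map a federated image name plus version to a plain HTTPS fetch URL."""
    return f"https://{name}-{version}-{os_name}-{arch}.{ext}"
```

Under that scheme, index.docker.io would be just one domain among many rather than the implicit default.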



