We've been using containers rather heavily in our infrastructure for a few years now (neither Rocket nor Docker), and we've developed our own toolset to handle the container images and to manage the containers.
Even though it kind of deprecates a lot of our work, I really see the value in having a standard that can be used with different container runtimes, and I'll be looking at migrating our internal format to the App Container spec. Having tools like this to handle migrations makes a lot of sense to me. We can continue developing our tools without marrying a specific backend.
Interesting move by CoreOS here to create what will likely be a false dichotomy for Docker in the public sphere (as an indicator of their openness). If you truly believe Docker is fundamentally flawed, you'd be doing your users a disservice by writing this. If it's transitional, create your own Docker fork/binary instead of making a public scene to try to force Docker's hand. Lots of fragmentation to come, which sucks because the ecosystem is so important.
Agreed, it is a bit like saying: we don't need a standard HTTP protocol. If you want to use Firefox, use Firefox; otherwise use Chrome. No need for them to display the same web pages.
> just do it in your own project and let the best project win
> Bullshit. If it's open, let the best idea win.
So are you suggesting Docker should merge and maintain support for a container spec they weren't involved with, which was created because Docker is "fundamentally flawed"?
I'm sure the pull request author knew that this would do nothing more than cause a fuss in the community. Shykes' comment reads to me like a response to a hardly legitimate PR, and much like a PR stunt.
Docker has 722 contributors on github, I'm sure the community will discuss and decide what to do with this while I watch this battle play out and work with both products.
I can't speak for panarky, but there are two separate issues here:
1. Rocket implementing the docker image format
2. Rocket PR'ing the rocket image format to docker.
It is 100% reasonable and likely a good business move to reject the PR, but the only reason to be mad about (1) is if it benefits the end user in a way that weakens Docker's market share, which it does.
I see what Shykes is getting at, though. I mean, if you want to use Rocket, then use Rocket. If you want to use Docker, then how does adding another runtime help a Docker user, when they could simply switch to Rocket? I can understand how there might be a "convenience" factor, but if you're already actively deploying with Docker images, then why bother with a separate image type?
Don't get me wrong, I think it's great the coreos guys are trying to make a bridge between the two projects, but so far, I don't see a need for this.
I'd personally like to use Rocket, but still make the things I build easily available to Docker users. (I currently use Docker. While I think CoreOS's "containers everywhere" concept is a little misguided, I don't believe there's a good reason to have a permanent daemon, and if I need one I have Mesos.)
(Docker's beefs here feel more like a company defending their turf than an open-source project, and that troubles me.)
Yeah, I think my biggest turn off from Rocket is just how PR-oriented all their moves seem to be. I'd be cooler with what they were doing if they appeared to be more open with their motives, but they couch so much of the self-interested stuff they do in terms of benefitting me and "openness". Trips my BS alarm, even if it's legit.
> At the same time as adding Docker support to Rocket, we have also opened a pull-request that enables Docker to run appc images (ACIs).
I really hope this lands and something constructive can come out of it. There is a lot more that can be gained by these communities working together and not promoting divisiveness.
Opening a PR with working code was simply a way to show that adding this feature is possible. It is OK if nothing from this implementation gets merged.
> There is a lot more that can be gained by these communities working together and not promoting divisiveness.
Gained by whom, though? These are for-profit enterprises and there are real-money gains involved in controlling the spec. For better or worse, CoreOS controls the App Container spec, and make no mistake: the primary reason they want it in Docker is that it benefits them. This of course does not exclude the possibility that users benefit.
Still, I'm of the opinion that all of this fighting behind the scenes (which, if you've been paying attention, this is) is kind of bad for everyone and a waste of resources.
Competition is benefitting users, at the cost of some duplicated effort at each company. We are benefitting because, at the end of the day, the container world will be more open.
But I think that this VC-backed model might not be beneficial for open source as a business in the mid to long term. The race to $0 is greatly accelerated.
Do you think that the VC-backed model is good for open-source as an ecosystem, even if not a business? Because, watching the growth of Docker and CoreOS, I'm starting to wonder if it isn't just a tire fire for us at every level.
I agree, I think it would be much better for everyone if the two container specs merged.
However, after reading "This is a simple functional PR..." in the blog post, I was surprised to see the PR adds over 38k lines of code. Seems like that will take a while to review.
A little bit of snark: wasn't Docker "fundamentally flawed"? If that was really the premise for launching Rocket, why bother with this humongous PR?
Don't get me wrong, I totally see how this is good for Rocket, just be honest and admit the "fundamentally flawed" argument was mainly smoke and mirrors to justify a defensive-offensive move by a VC-backed, for-profit company launched against another VC-backed, for-profit company.
Again, nothing wrong with that; it's business and in fact a good move. But in my eyes CoreOS lost quite some trust when they tried to portray Rocket as a selfless act of kindness towards a community that needed to be saved.
All of us want containers to be successful, they solve a ton of problems. But, part of that success is getting the format and the security correct. And we want to have that technical discussion and settle on those best practices for all implementations.
There are things in the App Container spec that we would like to see in Docker, this is why we put in the work to make a spec, write the code to make it work and start a technical discussion. This has been the goal since the beginning. The problems that exist in the current Docker Engine that we would like to address are technical and real:
1) We believe in having a decentralized and user-controlled signing mechanism. In the appc specification and Rocket we use a DNS-federated namespace. See the `rkt trust` subcommand and the signing section of the ACI spec.
2) We believe that image IDs should be backed by a cryptographic identity. It should be possible for a user to say: "run container `sha512-abed`" and have the result be identical on every machine because it is backed by a cryptographic hash.
Another thing we wanted to enable in Rocket is running the download/import steps without being root. For example, today you can download and import an image from disk in the native ACI format with rkt, and in the next release `rkt fetch` will be runnable by any user in the same Unix group as `/var/lib/rkt/`.
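To put point 2 in concrete terms, here is a rough sketch of what "an image ID backed by a cryptographic hash" means in practice. This is just an illustration, not rkt's actual code; whether the hash covers the compressed or uncompressed image, and how short a prefix may be, are spec details I'm glossing over:

    # Illustrative sketch only: the "name" of an image is a sha512 hash of its
    # bytes, so asking for "sha512-abed..." yields the same bits on every machine.
    import hashlib
    import sys

    def image_id(path, chunk_size=1 << 20):
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return "sha512-" + h.hexdigest()

    def verify(path, requested_id):
        actual = image_id(path)
        # Allow a truncated ID like "sha512-abed" by matching on the prefix.
        if not actual.startswith(requested_id):
            raise SystemExit("%s does not match requested %s" % (actual, requested_id))
        return actual

    if __name__ == "__main__":
        # e.g. python verify_image.py etcd.aci sha512-abed
        print(verify(sys.argv[1], sys.argv[2]))

Note that nothing here needs root, which is part of why the fetch/verify steps can run as an unprivileged user.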
> All of us want containers to be successful, they solve a ton of problems.
Not sure I want containers to be successful (unless, of course, the main business is building and marketing containers). I want my problems solved, but whether they are solved with containers, mocks, jails, VMs, and so on doesn't matter as much.
Deploying to a cleanly defined fresh state without paying any performance penalty. Documenting your dependencies by writing the deployment script (= Dockerfile) and not having to reinvent the wheel every time (image inheritance). Sandboxing Linux applications without paying any performance penalty. Creating a PaaS where your services internally always see the same standard port while externally they're linked together through Docker, thus separating the routing concerns from your application logic.
These are all great, but I get most of the same benefits from VMs, and many more:
fresh state/ no performance penalty (AMI+autoscaling)
Document dependencies, not reinvent (Packer file)
Sandboxing (same)
Always use same standard port (easier with VMs as 1:1 map)
I know most people think that containers/Docker/whatever new stack does these things better, and they may be right. The benefits, however, don't outweigh the costs of a weaker toolset and a less mature stack.
For my use cases, the biggest problem is that containers don't solve the "where does this run" question. Whenever I ask this, people loudly exclaim "anywhere!" which is the same as "I don't know" to me.
AWS AMIs run in 11 regions x N AZs around the world. This solves a much bigger technical problem for me than "it's lighter weight and easier to do incremental releases on top of" which seem to be the only things in favor of containers.
Many people, including Amazon, say "run containers on VMs!" This seems unnecessarily complex for little additional gain.
I'm really curious if the containerization folks are using Packer and if not why not.
I run containers in Amazon. Not using their service, because their service is silly, but on Mesos.
I am not locked into a 1:1 tenancy between applications and instances (though I could have it if I wanted). Multitenancy is trivial. I have the ability to spin up new instances of my applications to combat spike loads or instance failures in single-digit seconds rather than in minutes. My developers can run every container within a single boot2docker VM instead of incurring the overhead of running six virtual machines. It's easier to integration-test because my test harness doesn't have to fight with Amazon, but can rather use a portable, cross-service system in Mesos.

In addition, I don't have to autoscale with the crude primitive of an instance in production. Multitenancy means that I can scale individual applications within the cluster to its headroom, and only when the entire cluster nears capacity must I autoscale. I can better leverage economies of scale while giving more vCPU power to applications that need it (running two dozen applications on a c3.8xlarge is very unlikely, at any given moment, to give a given application less computational power than running each application on its own m3.medium would).
I could do this without containers and with only Mesos. It would be worse, but I could do it. I could not do this at all with baked AMIs and instances without spending more money, doing more work, and being frustrated by my environment. I know this because I've built the same system you describe (I preferred vagrant-aws because when something broke it was easier to debug, but we moved to Packer before I left), and I would never go back to it. It was more fragile and harder to componentize than a containerized architecture with a smart resource allocation layer. The running context of a container should be "anywhere", and it should be "I don't know", and you caring about that is a defect in your mental model.
The one thing I don't believe you on is that you don't have a performance penalty. If you compare a bare-metal machine running VMs with a bare-metal machine running containers, there will always be a performance penalty for the VMs - they are more heavyweight by definition, running an additional kernel for each VM. Even assuming CPU doesn't take a hit at all, they still incur memory and disk-space penalties. As a result, from an IaaS POV, a container can be made available more cheaply than a VM, and you can think much less about using them from a performance point of view - does it make sense logically or from a security standpoint? -> use it.
That much is correct, it's still tied to namespaces, cgroups and the standard mechanisms for implementing jailing on Linux. The point is it stands at a layer of abstraction designed for easier portability than outright depending on LXC.
Ya, the messaging is starting to get really confusing. If the container formats really are that similar then there is no point in two parallel implementations, either augment docker containers or app containers. Doing both at the same time is just silly since from the looks of it they are going to converge on the same format anyway.
This initial discussion is just about the container image format and we would really like to see convergence on that front.
As container runtimes Rocket and Docker have different design goals though. As one example, Rocket is designed to be a standalone tool that can run without a daemon and works with existing init systems like upstart, systemd and sysv. We needed this standalone property because on CoreOS containers are our package manager and the daemon can get in the way of doing that correctly.
It is OK that Docker and Rocket have different design goals, and both can exist for different purposes. But I think we can share and converge on an image format, usable by multiple implementations, that includes cryptographically verifiable images, simple hosting on object stores, and a federated DNS-based namespace for container names.
1) As you very well know, Docker is already working on cryptographic signature, federated DNS based namespace and simple hosting on object stores. If you "would like to see convergence", why didn't you join the effort to implement this along with the rest of the Docker community? The design discussion has been going on for a long time, the oldest trace I can find is at https://github.com/docker/docker/issues/2700 , and the first tech previews started appearing in 1.3. Yet I can't find a single trace of your participation, even to say that you disagree. If you would like to see convergence, why is that?
2) You decided to launch a competing format and implementation. That is your prerogative. But if you "would like to see convergence", why did you never inform me, or any other Docker maintainer, that you were working on this? It seems to me that, if your goal is convergence, it would be worth at least bringing it up and test the waters, ask us how we felt about joining the effort. But I learned about your project in the news, like everybody else - in spite of having spent the day with you, in person, literally the day before.
3) Specifically on the topic of your pull request (which we also received without any prior warning, conveniently on the same day as your blog post). So now we have 2 incompatible formats and implementations, which do essentially the same thing. Once we finish our work on cryptographic signature, federated dns based naming etc, they will be functionally impossible to distinguish. How will it benefit Docker users to have to memorize a new command-line option, to choose between 2 incompatible formats which do exactly the same thing? I understand that this creates a narrative which benefits your company, CoreOS. But can you point to a concrete situation where a user's life will be made better by this? I can't. I think it's 100% vendor posturing. Maybe it's bad PR for me to say this. But it's the truth. Give me a concrete user story and I will reconsider.
> but it would be worth a lot if I could run any image on any platform.
That technology exists; it is called a VM. Any platform that supports x86, for example, will run any x86-compatible image. You can use wrappers and scripts like Vagrant on top of it.
Or if you want all hosting managed as a pool of resources (storage, CPU) try something like oVirt.
> and doesn't address any of the reasons why people prefer containers to VMs for some types of workloads
I was responding to one reason -- which is "running any image on any platform".
> why people prefer containers to VMs for some types of workloads
Sure, but there are no magic unicorns underneath; knowing what you get from a technology requires some understanding of how it works. Saying things like "I want it very lightweight but also want it to run any image" is asking for a trade-off. Or for a complicated multi-host capability-based platform.
I've got a use case. ACI support would let me use containers without being coupled to Docker's registry. I really don't want to run that software, and I really, really don't want to rely on Docker Hub. ACI's use of existing standards for their "registry" implementation is a major draw for me.
To clarify, I don't want to run my own registry and I don't want to rely on any third party for image hosting. I just want to pull tarballs from a dumb file server. No need to run a registry for that, and no one company has a privileged position in the namespace.
It's maddening, because I love Docker-the-concept but not Docker-the-implementation nor Docker-the-ecosystem. I honestly do understand how many would find the UX of "Docker, Inc. at the center of things" to be a refreshing convenience, but to me that notion is frustrating and repellent, as much so as if Git remotes defaulted to GitHub.
> I just want to pull tarballs from a dumb file server.
Is there something I'm missing that you couldn't just use wget? If you have the URIs, I can't imagine how pulling down an image by name would be more than a quarter-page Python script, even if you include the untarring and such.
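Something like this sketch, assuming the images are just tarballs sitting behind a plain HTTP server with some naming convention (the base URL and names here are made up):

    # Roughly the quarter-page script in question: "pull an image by name" from
    # a dumb file server and unpack it. No registry involved.
    import tarfile
    import urllib.request

    BASE_URL = "https://files.example.com/images"  # hypothetical dumb file server

    def pull(name, version, dest_dir="rootfs"):
        url = "%s/%s-%s.tar.gz" % (BASE_URL, name, version)
        local = "%s-%s.tar.gz" % (name, version)
        with urllib.request.urlopen(url) as resp, open(local, "wb") as out:
            while True:
                chunk = resp.read(1 << 20)
                if not chunk:
                    break
                out.write(chunk)
        with tarfile.open(local) as tar:  # handles .tar.gz transparently
            tar.extractall(dest_dir)

    if __name__ == "__main__":
        pull("myapp", "1.2.3")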
Yeah, that's about what I've been doing, but AFAICT I lose the benefits of layering when I refuse to speak the registry protocol. Docker's export command dumps the entire image tree, so I'm stuck transferring GB-sized payloads to deploy the tiniest change to my app. appc manages to do layers without a coordinating registry. (Kind of funny that CoreOS bought Quay, on that note.)
Yes, you can run your own registry, but doing so without ever pulling anything from Docker Hub means rebuilding all images yourself, tagging all of them for your own registry, and pushing to that, or resorting to DNS/firewall hacks to redirect requests for index.docker.io (or forking Docker).
There is nothing "respectful" about anyone's behavior on either side of this trainwreck, and calling each other out on a web forum isn't going to help anything.
In that case I don't see how the image format ties into any of what you just said. Seems to me the image format is completely irrelevant. Docker's format could be augmented to include all the security features you want and rkt could just use docker containers. That's where the confusion is. It is clear that the image format is orthogonal to all the other issues you mentioned.
By the way, I don't have a dog in this race and am not rooting for either side. Purely from a technical and resource-use perspective, the fragmentation is now starting to feel like something that is mostly driven by public relations and marketing. As someone who tries to use the best tool for the job, I now have no compelling reason to choose either format and runtime, which means I'm just going to wait it out, and both sides are going to lose contributions from independent open source developers because their effort is going to be wasted.
It's not just the image format; it's about getting DNS-based federation and content-addressable images, which effectively takes away index.docker.io's special status.
And that's where the problem is. I can very much understand why Docker sees holding onto that as a great advantage to them, but it's not an advantage to me as a user.
Basically, App Container is about throwing down the gauntlet to Docker, because the changes they are asking for are/were unlikely to be accepted without the pressure of a competing project backing them up.
The federated nature of image identity that CoreOS is pushing for is a direct challenge to the special status that Docker has given index.docker.io, and that they have been strongly resisting attempts to change.
I don't care much if Rocket or Docker "wins", but I really hope the App Container federated approach does.
Right, in which case this should be the messaging, instead of "look guys, Docker can run ACI images". Why waste effort on interoperability if the end game is federated image identities? Pour all engineering resources into making that happen instead of silly interop patches, since those can always come later.