RunC – A lightweight universal runtime container, by the Open Container Project (runc.io)
259 points by tlrobinson on June 22, 2015 | 61 comments



It's not mentioned on this page, but these runc containers will also run on Windows[1], which is pretty amazing.

Not having to run the Docker daemon will also be pretty nice. Currently, when I upgrade Docker, all containers on the host have to be restarted because of the daemon dependency, so to maintain uptime with Docker containers you'd better be running your stuff clustered (e.g. via Mesos/Marathon).

Standalone containers was something I felt rkt (alternate container format from CoreOS people) got right, so nice to see it carrying over here through the collaboration.

[1]: https://twitter.com/docker/status/613039532422864896


Is it spinning VirtualBox up in the background or virtualizing syscalls? That's the interesting question.


Neither. Microsoft is building the equivalents (and more) into the Windows kernel in the next version of Windows Server and contributing the API compatibility to runC.


This would be for running a Windows container, I assume?

Microsoft implementing the Linux system call ABI, now that would be truly amazing. It's probably possible (FreeBSD does some of this), but I guess this is not that?


Yes, this is for Windows-specific workloads, not the ability to virtualize the Linux ABI on Windows. That's what VMs are for.


Joyent did that for SmartOS with their "Triton" project, which lets you run Docker containers on their (SmartOS) cloud.


Judging by the context here, I'm guessing that they're implementing a virtualization API that's capable of creating VMs which run anything an x86 processor can run.


This is by far the most important announcement of the keynote. It really shows that Alex (CoreOS) and Solomon (Docker) managed to come together and create this as part of a new open standard.

If there's ever going to be a page in some book that covers server software development, I believe it will mention this standard.


What we're seeing emerge here is a new universal binary format.

I've thought of Docker containers for a long time as gigantic statically linked binaries. This isn't necessarily a bad thing (though it does present issues). In some ways the process of installing the different moving pieces of a service and configuring them is a bit like manually "linking" something -- sub-services like MySQL, Redis, etc. are analogous to libraries.

Now what we're seeing is a runtime for this binary being ported around to different platforms. This could get interesting.


> Now what we're seeing is a runtime for this binary being ported around to different platforms. This could get interesting.

With x86 being the primary server/workstation workhorse (let's exclude mobile platforms for the moment), is all of this abstraction necessary? Doing infrastructure and DevOps, I definitely see the benefit of containerization for build reproducibility and for decreasing the friction developers face in running applications locally as they run in production.

Not everyone needs Borg/Mesos/Mesosphere. Not everyone needs containerization. When you have a hammer though, everything starts looking like a nail.


I bet they all use HTTP to communicate too... Seriously though, I tend to agree. Could this be solved by using more static binaries?


A return to a statically linkable glibc would be a good start on that. It's definitely the weakest link in the chain (as in, the only thing that a pulled-in cgo dependency tends to start complaining about).


It could, but there are many existing technologies (apps, libraries, languages) that are more easily (and more consistently) turned into a container than into a static binary.


What we're seeing emerge here is a new universal binary format.

Insofar as libcontainer can cram in all the different ways OS-level virtualization is implemented across platforms, that is.


Containers won't run across different OSes (Linux/Windows) without VM virtualization. The spec doesn't actually cover the executable binary format. In the end, a container process is just a native process of the host operating system that is isolated from other processes. There is no OS virtualization taking place.


That entirely depends on your definition of "virtualization". I for one would completely consider all of the namespace support that docker/rkt/lxc use to be virtualization. The reason being that the container has a different pid/network/mount namespace from the host, so it is a "virtualized" version of it.

Now clearly this is different than traditional full operating system virtualization ala KVM/Xen, but it is absolutely still a form of virtualization. Just because something is different doesn't make it wrong.
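
For anyone curious how thin that layer is, here's a minimal sketch (not runC's actual code, just the underlying kernel facility these runtimes build on) of asking Linux for new PID, network, and mount namespaces from Go:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in fresh PID, network, and mount namespaces.
	// Requires root and Linux; real runtimes additionally set up cgroups,
	// a new rootfs, capabilities, and so on.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWNET | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

The shell becomes PID 1 in its own namespace and its network namespace starts with only a loopback device, which is the sense in which the host is "virtualized" away even though no hypervisor is involved.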


It's effectively what Apple has been doing in OS X with .app bundles.


Sometimes, posts like this appear on hacker news that are completely impenetrable. I read the page for RunC and the only thing I could find out about it from that page is that it is a "Container", and these are its specs. A "Container" is something that is used by "Docker". "Docker" is a program that "allows you to compose your application from microservices". A "microservice" is "a software architecture style, in which complex applications are composed of small, independent processes" (we're down to wikipedia at this point). So a "microservice" is an abstraction of unix design, and finally we're on solid ground. If you were interested you might be able to work backwards through this list of projects, researching what each one does, and then you could find out what RunC actually is.


The unstated assumption of this announcement is that if you haven't heard of Docker, you may have been living under a rock. It arguably has more hype after two years than Java did in 1996-97.

The other part of the problem is that this is a rapidly evolving space with a ton of money and attention being poured into engineering and competitive battles playing out, and not so much into marketing and clear explanations. runC is in part a symbolic "bury the hatchet" moment for a public feud over standards with CoreOS and others that began in December 2014. If you haven't been following the inside baseball, it's all kind of confusing.


It's written in Go.

Another key bit of infrastructure moves to a safer language.

"This is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning."


You might be overstating the "safer language" aspect. As another commenter suggests, they are using Go as little more than glue to the C or C++ kernel and libraries that run the important bits. And these searches should instill a bit of fear in your heart when running Docker:

https://github.com/docker/docker/search?utf8=%E2%9C%93&q=uns...

https://github.com/docker/docker/search?type=Code&utf8=%E2%9...

Docker has such prodigious and repeated use of "unsafe" that it makes one wonder what, if anything, using Go has bought them in terms of safety or reliability. It seems like virtually every interface used in the Docker source code relies on unsafe casts to pointers, and I'm not sure how well the Go garbage collector meshes with casting pointers and handing them to the kernel and other libraries.

Something like Rust's native ability to ensure a pointer provided to external code lives long enough would be very useful.
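
To make the concern concrete, this is the general shape of the pattern being described (an illustrative sketch, not code lifted from Docker): any raw syscall that hands a Go buffer to the kernel goes through an unsafe.Pointer cast, at which point Go's usual type and memory guarantees stop applying.

```go
package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

func main() {
	buf := []byte("hello, kernel\n")

	// write(2) to stdout: the buffer's address is converted to a bare
	// uintptr via unsafe.Pointer, so the compiler and GC can no longer
	// reason about what the kernel does with it.
	n, _, errno := syscall.Syscall(
		syscall.SYS_WRITE,
		uintptr(1), // fd 1 = stdout
		uintptr(unsafe.Pointer(&buf[0])),
		uintptr(len(buf)),
	)
	if errno != 0 {
		fmt.Println("write failed:", errno)
		return
	}
	fmt.Println("wrote", n, "bytes")
}
```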


In this project -- as in Docker and rkt -- Go serves as little more than a glue language, replacing the role of Python (in fact, it could have possibly been written in bash) -- not C. The core functionality is part of the kernel and is written in C (or C++ in the case of Windows, I believe).


Only incrementally safer.


"https://runc.azurewebsites.net" "Copyright (c) 2015, Linux Foundation"

Interesting times …


and hosted by Azure...


Terrible. This website also promotes proprietary software right on the home page. The Linux Foundation really doesn't seem to care about the "free" part of FOSS anymore.


The Linux Foundation is just a renamed Open Source Development Labs, and it merged with the Free Standards Group if memory serves. They never have had and never will have much of anything to do with the Free Software Foundation other than the GPLv2 license.


Perhaps they are concerned with Linux and thus named themselves the LINUX Foundation rather than the Free Software Foundation.


>The Linux Foundation really doesn't seem to care about the "free" part of FOSS anymore.

Yeah, isn't it great? More pragmatism, less BS.


Speaking of containers, I've always wondered whether running a GUI app (something like a kiosk) would work in these sorts of containers. I know you can do it hackily in Docker by playing with X and Xvfb, but do any of them natively (easily) support it?


Yes, you basically just have to mount the X11 socket as a volume: https://blog.jessfraz.com/post/docker-containers-on-the-desk...


As discussed in the HN thread, this is dangerous and counterproductive to security. It should not be encouraged.

https://news.ycombinator.com/item?id=9086751


Correct me if I'm wrong, but shouldn't Wayland fix these kinds of security issues in X?


That's my understanding, but I think that's a ways off.


Over on the Windows side, there is also another container technology called Glassware, made by Sphere 3D, which focuses on productivity containers/end-user apps.

http://www.sphere3d.com/


The buzztalk is strong with them: "The V3 Optimized Desktop Allocation (ODA) technology is the first and only technology that allows for hyper-converged appliances to be delivered in a distributed fashion."

Sometimes I wonder about the effectiveness of this sort of sales behaviour, but ultimately I guess they make their own ecosystem by requiring specialized techies to translate even the sales pitch -- promoting a sort of "it has got to be super-hyper-vigilantly efficient if I can't even understand what it's good for, plus I can wash my hands of it professionally if it fails, because anyone in their right mind can see the necessity of having someone else evaluate the usefulness of a system this incomprehensible" response from certain managers.



Not yet, though doing it in HTML5 would be relatively trivial (start the container on a port, then point a browser at it). Most kiosks these days are just browsers anyway.

I would expect native graphics (including 3d graphics) are a feature we'll see down the road.


What does this mean for rkt?


From their latest blog post

https://coreos.com/blog/app-container-and-the-open-container...

> CoreOS remains committed to the rkt project and will continue to invest in its development. Today rkt is a leading implementation of appc, and we plan on it becoming a leading implementation of OCP. Open standards only work if there are multiple implementations of the specification, and we will develop rkt into a leading container runtime around the new shared container format. Our goals for rkt are unchanged: a focus on security and composability for the most demanding production environments.


"To be clear though, the point of the OCP is not to standardize Docker, but rather to standardize the baseline for containers. Polvi explained that with an open standard, there can be multiple implementations of the standard. So for CoreOS, it means the company will continue to work on its Rocket container technology, while Docker will continue to work on the Docker container technology."

http://www.eweek.com/enterprise-apps/docker-rivals-join-toge...


It's a little difficult to keep track of all the activity happening in the container area. The runC repo's history begins in June 2014? I believe rkt was announced in Dec 2014, right?


> I believe rkt was announced in Dec 2014, right?

Yes but even ACI and Rkt were being worked on months before the announcement.


It was just a week or two before the announcement. You can see the commit activity here: https://github.com/coreos/rkt/graphs/commit-activity. The earlier blips are standard license files and other boilerplate from a project template used for new repos.


You could be right; I seem to recall it was started in July. They did move repos around a few times and eventually split the ACI out of the rkt repo, so perhaps some history was dumped/rewritten at some point.


It's pretty much just libcontainer.


That's what I noticed; it looks like a trimmed version of nsinit. I guess it's just to make everyone happy while keeping libcontainer within the company.


What's the relation between RunC and the AppContainer spec?


runC is the neutral implementation of the new open container standard, which was created with the appc guys at the table. Although not stated explicitly, the open standard will most probably supersede the appc spec.


Was appc not open enough? Or unfixable? Why create yet another standard?


> Was appc not open enough? Or unfixable? Why create yet another standard?

Based on the history between Docker and ACI/appc, I'd wager Docker didn't want to fully submit to a standard they had zero input on (I should note they elected to have zero input, turned their nose up at it, and declared their own "open" standard).

It's likely this new standard was the CoreOS team compromising with Docker to include them in the circle. Ultimately it will yield portable containers and a better ecosystem. A big win for the community and users.

As has been said before, an Open Standard is far superior to an Open Implementation.


As far as I can tell, the Docker folks weren't consulted when CoreOS was developing Rocket and App Container. They certainly seemed to be surprised when it was announced.


> They certainly seemed to be surprised when it was announced.

Yes they were, but they really should not have been. Docker had started with a rough draft of an open specification, but then removed it. Every time the idea was brought up, it was shot down. Docker viewed it as a strategic move to not have an open spec that anyone could implement and use container images with.

However, with such a critical piece of technology, it was unreasonable to expect no one else to want it to have a common standard format which could be interchangeable with other container runtime implementations.

So I suspect the "surprise" was more "anger" than anything.

> As far as I can tell, the Docker folks weren't consulted when CoreOS was developing Rocket and App Container.

Right, they were not consulted prior to its public release. However, ACI was an open standard and actively requested contributions to help shape it. ACI was very early when it was released and needed implementations to help shake out the bugs. At that point, Docker actively refused to participate, and instead a week later cooked up their own "open" implementation which was nothing more than rough documentation of how Docker behaves internally (not something a person could write an implementation against).

Initially Docker staffers seemed excited to contribute, but it got shot down quickly by shykes.

See these github issues:

https://github.com/docker/docker/issues/9538

https://github.com/docker/docker/issues/10643

And there was shykes' rampage when CoreOS made the initial announcement, which began with this post:

https://news.ycombinator.com/item?id=8683705

So, it's not surprising that, fast-forwarding to today, concessions had to be made in order for Docker to save face and not appear to cave in to public demand for a standardized open format that no single for-profit organization controlled entirely. Now both parties get good PR for working together and settling their differences, and the community gets a truly open standard from which, I expect, a great many implementations will arise.


As noted in the parent comment, the Open Container Format will supersede appc. It will also adopt a lot of the ideas from appc, as noted in the CoreOS announcement [1].

[1] https://coreos.com/blog/app-container-and-the-open-container...


To allow all of the involved parties to save face? In this case, no one is acquiescing to anyone else, and AppC can be presented as a great Coming Together by the tech press. Everyone wins.


The sample manifest at https://github.com/opencontainers/runc makes it look like you just specify a big list of mounts, rather than supporting the higher-level concept of overlaid layers that both Docker and appc support.

Are layers not part of the opencontainers spec, or is the sample just missing that bit?


That's correct, layers are not part of the spec. It's the caller's responsibility to set up the container bundle (manifest + one or more rootfs directories) so that runC can load it. How that is done is not the concern of runC.

This is by design: it turns out layers, as implemented by Docker, are not the only way to download or assemble a container. The appc specification uses a different, incompatible layer system. Many people like to "docker import" tarballs which bypasses layers entirely. The next version of Docker is gravitating towards pure content-addressable storage and transfer (a-la git), possibly eschewing layers entirely.

runC is not concerned by how you got a particular runnable object. It's only concerned with its final, runnable form. The result is a nicely layered design where different tools can worry about different levels of abstraction.
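
For the curious, a bundle is just a directory with the JSON config sitting next to the rootfs it points at. A rough sketch of its shape (field names approximate; see the repo for the exact sample):

```json
{
  "process": {
    "user": { "uid": 0, "gid": 0 },
    "args": ["sh"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "mounts": [
    { "destination": "/proc", "type": "proc", "source": "proc" }
  ]
}
```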


No expert here, but I would guess the important bit is the "root": value, which in the example points to rootfs. rootfs, I suppose, would be the directory on the host system containing the entire root filesystem that is to be mounted at /. This could be created using overlayfs, or a COW filesystem like btrfs, which would be how you'd achieve layers.

In this way, the notion of layers is external to the container spec.

Again I'm no expert, so corrections welcome.
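
As a concrete (hypothetical) sketch of that division of labour: the tool that prepares the bundle might assemble rootfs from read-only layers with an overlayfs mount, entirely outside of runC. The paths below are made up, and golang.org/x/sys/unix is just one way to issue the mount(2) call; a plain `mount -t overlay` would do the same thing.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Stack two read-only layers under a writable upper directory and
	// expose the merged view at the bundle's rootfs. Requires root and
	// a kernel with overlayfs (Linux 3.18+).
	opts := "lowerdir=/var/lib/layers/base:/var/lib/layers/app," +
		"upperdir=/var/lib/layers/upper,workdir=/var/lib/layers/work"

	if err := unix.Mount("overlay", "/run/bundle/rootfs", "overlay", 0, opts); err != nil {
		fmt.Println("overlay mount failed:", err)
		return
	}
	fmt.Println("rootfs assembled at /run/bundle/rootfs")
}
```

runC would then be pointed at the bundle and never needs to know the rootfs was built from layers at all.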


Great to see that AppC is joining too. That was going to be a weird splinter otherwise.


runC is libcontainer.....


runC is a wrapper around libcontainer



