If you've ever looked at the underlying code of "LXC" it's basically a bunch of spaghetti scripts that utilize cgroups. It was awesome for its time, but it's really not something worth supporting in 2016 vs. the new container runtimes.
This is extremely misinformed. When did you last look at it? Can you post some examples of the spaghetti scripts you mention? What features do the 'new 2016 container runtimes' have that the LXC project did not provide first and still lacks? Really?
It's sad to see this level of dismissive, reductive FUD used against the LXC project so liberally and consistently, especially here on HN, to promote other container solutions.
A quick visit to linuxcontainers.org or the LXC github repo will quickly and dramatically dispel such notions.
The LXC project is written in C and has been in development since 2009, alongside cgroup and namespace support in the kernel. It's mature, robust and easy to use. It provides advanced container networking options, from veth to macvlan, and has supported aufs, overlayfs, btrfs, LVM thin and ZFS for snapshots and clones since at least 2011. It was also the first to support unprivileged containers, with its 1.0 release in early 2014.
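To make the networking and storage claims concrete, here is a minimal sketch of what that looks like in practice with LXC 1.x/2.0-era tooling; the container names and bridge name below are hypothetical examples, not anything from the thread:

```ini
# Excerpt from a container config (LXC 1.x/2.0 key names).
lxc.network.type = veth           ; or macvlan for direct host-NIC attachment
lxc.network.link = lxcbr0         ; host bridge the veth pair plugs into
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

; Snapshot/clone-capable storage is chosen at create time, e.g.:
;   lxc-create -t download -n web1 -B overlayfs
;   lxc-clone -s web1 web2        ; -s = snapshot clone
```

The same `-B` flag accepts btrfs, lvm and zfs backends where the host supports them.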
It's this work that Docker was originally based on, and it's unfortunate that, far from highlighting the benefits it received, much of the ecosystem chooses to downplay and misrepresent the LXC project, leading to comments like the one above.
LXC is now at 2.0, with improvements ranging from proper reporting of cgroup limits and container resource usage inside containers (via the lxcfs project) to initial support for live migration.
It is also significantly easier to use than single-app containers, especially for multiple processes and networking, without needing custom process managers, app-specific launch tricks, or networking workarounds. LXC runs the container OS's init inside the container; Docker and others do not. There are good reasons for both approaches. Here's hoping for better-informed discussions of Linux containers here.
Well, the big one that LXC has always lacked is a story for how to ship code. Docker has its distribution infrastructure / Hub, rkt has Quay, LXC has... nothing.
If LXC had a registry long ago, I suspect you'd see a lot more people using it directly.
Now, the big difference is that LXC is mostly (as of right now) a Canonical project, and they have a habit of trying to compete with the world. LXD looks interesting from a technology POV, somewhere between Intel's Clear Containers (which seems to be where they got the idea) and Docker. I don't see it really gaining any traction, however.
A few examples that spring to mind of them competing with the world and for the most part, failing:
* bzr - if you want a python vcs, use mercurial
* Upstart - we all know how well that worked out
* Dekko - their fork of the lightweight email client Trojita
* juju for config management. This one is arguably one of the best things Canonical has produced tech wise
* Unity/hacked up GNOME - because working with upstream is boring, and the world wants lots of Ubuntu phones
* Mir, their own display server, because they can do it better than the X.org developers who are writing Wayland with decades of experience.
LXC has always had a registry of base OS images. LXC is a normal open source project: it is not VC-funded and doesn't build the walled-garden kinds of features, like the Hub and plenty of other things, that others would embrace.
LXC's problem is near-zero marketing and the significant muddying of the waters about containers by the VC-funded Docker ecosystem, which launched off LXC. How does one position LXC or LXD as 'NIH' when others are based on it, or misrepresent it as low-level when it has far more sophisticated tools, features and maturity?
Isn't it ironic that, for all the criticism of bash scripts from the Docker ecosystem, a Dockerfile is essentially a shell script (each RUN directive is handed to `/bin/sh -c`)? There still remains way too much misinformation about containers here on HN, and it damages informed discussion and credibility. VC-funded companies should not be allowed to hijack open source projects in this manner without due credit and accurate information.
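To see why people make that comparison, here is a minimal Dockerfile sketch; the base image and package are hypothetical examples, not anything specific from the thread:

```dockerfile
FROM ubuntu:16.04
# Each RUN line below is executed by /bin/sh -c inside the build
# container, exactly as if it were a line in a shell script.
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
```

The non-shell directives (FROM, CMD, EXPOSE, and so on) are image metadata; the actual build steps are shell one-liners.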
Let's talk about overlayfs, aufs, namespaces, cgroups and all the Linux components that make up containers. Are these projects to work in anonymity, without any credit or reward, while VC-funded companies suck out the value and momentum? Is this the 'open source model' HN wants to support? Compete on value, not disinformation. This only leads to low-quality discussions.
Ubuntu tends to do their own thing, and I don't particularly see anything wrong with diversity. What I find more galling is this underhanded positioning of anything not controlled by Red Hat as NIH. Are systemd and nspawn criticised as NIH? Flatpak is not dismissed as NIH, but Snappy is. How does this kind of brazen manipulation happen? I can't imagine ordinary, unbiased users thinking this way, and this kind of politics is becoming a problem for informed discussion and wider choice in Linux. Unity is surprisingly polished for a Linux project, and I would not even have tried it if I hadn't looked beyond the prevailing prejudice. Let's not do this.
Docker stopped using LXC a while ago. Sure, Docker uses Linux namespaces and such, but that has a lot more to do with OpenVZ than LXC.
People seem to like LXC because they want to treat containers as 'lightweight virtual machines' and run full Linux OSes in them, with init systems, multiple processes and such things.
However, that isn't the direction the industry is moving, which is toward containers as simple runtime packaging where you have the absolute minimum needed to run a single application.
rkt is interesting because it doesn't depend on a separate daemon to manage containers. Instead you use the host's systemd to manage containers the same way you manage everything else on the system. That simplifies things like managing cgroups and improves reliability.
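A minimal sketch of what that daemonless model looks like in a unit file, based on rkt's documented systemd integration; the service and image names are hypothetical:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My app, run as a rkt pod

[Service]
# rkt runs the pod as a direct child of systemd -- no intermediate
# daemon -- so systemd's normal supervision, cgroup and journal
# machinery apply to the container like any other service.
ExecStart=/usr/bin/rkt run example.com/myapp
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl start myapp`, the pod's processes sit in the service's own cgroup, visible via `systemctl status` and `journalctl -u myapp` with no extra tooling.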
Some of it is powered by fans and those who bought into the hype of the commercial entities pushing their containerization. I think at some point, to make what they bought into look good, they have to put down what others have. That happens with pretty much any popular technology. Remember how JavaScript on the server was "async", and therefore better, and was going to kill all the other languages and frameworks? Kinda the same idea.