CoreOS works the same way. All containers. You can run `toolbox` to get into a systemd-nspawn'd Fedora container (any other container image can be specified; it's just Fedora by default), from which you're supposed to do all your troubleshooting/analysis (caveat: systemd-nspawn does not seem to support `auditd` well).
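For anyone who hasn't tried it, the workflow is roughly this (a sketch from memory; the image override lives in a `~/.toolboxrc` file, if I remember right):

    # drop into the stock Fedora debug container (systemd-nspawn under the hood)
    toolbox

    # inside it, install whatever the host image doesn't ship
    dnf -y install tcpdump strace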
I still strongly dislike "containers". It's not worth the complexity or instability. Two thumbs way down!
Does it though? I use CoreOS without containers (for the nice auto-updates/reboots), and it works really well with just systemd services. I'm aware the branding sells it this way (esp. the marketing rebrand as Container Linux or whatever), but does it run any containers as part of the base system? I've found CoreOS with containers not very reliable, and CoreOS without containers extremely reliable.
Since I use Go on my servers, and Go binaries have pretty much zero dependencies, what I'd really like to see is the operating system reduced to a reliable set of device drivers (apologies to Andreessen), cloud config, auto-update and a process supervisor. That's it.
Even CoreOS comes with far too much junk - lots of binary utils that I don't need - and I'd prefer a much simpler supervisor than systemd. Nothing else required, not even containers; I can isolate at the server level instead when I need multiple instances, since virtual servers are cheap.
CoreOS is the closest I've seen to this goal; the container stuff I just ignored after a few tests with Docker, because unless you are running hundreds of processes the tradeoff is not worth it IMO. Docker (the company) is not trustworthy enough to own this ecosystem, and Docker (the technology) is simply not reliable enough.
The OS for servers (and maybe even desktops) should be an essential but reliable component at the bottom of the chain, instead of constantly bundling more stuff and trying to move up the stack. Unfortunately there's no money in that.
Yeah, I agree and I think that FreeBSD jails in particular are much better (to be fair, I am not very well informed on Solaris Zones, so maybe they're the best). They are certainly much less ostentatious and do not try to redo everything for their own little subworld like Kubernetes does.
I sat down one day to try to write down what would make Linux containers/orchestration usable and good, and realized after about 20 minutes that I was describing FreeBSD jails almost to a T. The sample configuration format I theorized is very close to the real one.
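For a sense of how small the surface area is, an ad-hoc jail is basically one command (a sketch only: the name, path, hostname and IP are made up, and the path needs a populated rootfs; `jail.conf` is the persistent form of the same parameters):

    # create and start a throwaway jail with a shell inside it
    jail -c name=test path=/usr/jail/test host.hostname=test.example.org \
        ip4.addr=192.0.2.10 command=/bin/sh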
However, I think that there's good reason for actual deployments of containerized systems to remain niche, as they were until the VCs started dumping hundreds of millions into the current Docker hype-cycle, and the big non-Amazons jumped on board as a mechanism to try to get an advantage over AWS.
What people really want are true VMs nearly as lightweight and efficient as containerized systems. In fact, I think many people wrongly believe that's what containerized systems are.
Sure. Here's one example that I've dealt with in the last week.
We have a server that receives the logs from our kubernetes cluster via fluentd and parses/transforms them before shipping them out to a hosted log search backend thingy. This host has 5 Docker containers running fluentd receivers.
This works OK most of the time, but in some cases - particularly when the log volume is high and/or when a bug causes excessive writes to stdout/stderr (the container does have the appropriate log-driver size limit configured at the Docker level) - the container will cease to function. It cannot be accessed or controlled; docker-swarm will try, but it cannot manipulate it. You can force-kill the container in Docker, but then you can't bring the service/container back up because something doesn't get cleaned up right in Docker's insides. You have to restart the Docker daemon and then restart all of the containers with docker-swarm to get back to a good state. Due to https://github.com/moby/moby/issues/8795 , you also have to manually run `conntrack -F` after restarting the Docker daemon (something that took substantial debug/troubleshooting time to figure out).
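For the curious, the recovery dance boils down to something like this (commands from memory; the container ID placeholder and the stack name/file are obviously ours):

    # force-kill the wedged container; docker stop and swarm can't touch it
    docker kill <container-id>

    # the daemon itself has to be bounced before the service can be recreated
    systemctl restart docker

    # per moby/moby#8795, stale conntrack entries have to be flushed by hand
    conntrack -F

    # then redeploy the services through swarm to get back to a good state
    docker stack deploy -c logging-stack.yml logging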
We've had this happen on that server 3 times over the last month. That's ONE example. There are many more!
Containers are a VC-fueled fad. There are huge labor/complexity costs associated with them and relatively small gains. You're entering a sub-world with a bunch of layers that reimplement things for a containerized world, whereas the standard solutions have existed and worked well for many years, and the only reason not to use them is that the container platform doesn't accommodate them.
And what's the benefit? You get to run every application as a 120MB Docker image? You get to pay for space in a Docker Registry? Ostensibly you can fit a lot more applications onto a single machine (and correspondingly cut the ridiculous cloud costs that many companies pay because it's too hard to hire a couple of hardware jockeys or rent a server from a local colo), but you can also do this just fine without Docker.
Google is pushing containers hard because it's part of their strategy to challenge Amazon Cloud, not because it benefits the consumer.
I think you are suffering a bit from the over-engineering of Kubernetes and Docker and throwing the baby out with the bath water. Containers in general are great for simplifying deployment, development and testing. We use Docker currently and it works great, but we are using just Docker, and only as a way to simplify the above. We are deploying 1 application to 1 system (EC2 via ASG).
There is also nothing keeping you on Docker for containers. LXC also works great, and it has no runtime daemon, so you have none of the stability issues you can get with Docker. Though I must say Docker has improved a lot, and I think it will stabilize and _it_ won't be an issue (not as sure about Kubernetes).
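For example, with plain LXC there is nothing long-running sitting between you and the container; the whole thing is roughly this (container name and distro are just examples):

    # build a rootfs with the download template, then start and enter it
    lxc-create -n myapp -t download -- --dist ubuntu --release jammy --arch amd64
    lxc-start -n myapp
    lxc-attach -n myapp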
Sure, I agree that both Docker and k8s (at some level, k8s probably had to have a lot of that complexity to interface with Docker) are overengineered, and that there are better containerization processes/runtimes.
But I still don't think containers are what most people want. People need/want ultra-lightweight VMs with atomized state. NixOS looks promising but I haven't used it yet. It seems to give you a way to deterministically reason about your system without just shipping around blobs of the whole userland. You can also approximate this on other systems with a good scripting toolkit like Ansible.
All I want is a way to encapsulate an application and its dependencies as a single artifact that I can run, test and deploy. Right now containers are the best way to achieve this but I'll probably be happy with any solution to this problem.
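Concretely, what I want is for the whole lifecycle to collapse to something like this (registry hostname and app name are placeholders):

    # build the artifact once, push it, run the same bits everywhere
    docker build -t registry.example.com/myapp:1.2.3 .
    docker push registry.example.com/myapp:1.2.3

    # on the target host
    docker run -d registry.example.com/myapp:1.2.3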
NixOS does look interesting and I've considered playing with it for personal projects, but IMO it is still too fringe for use at work, where you need both technical usefulness and a general consensus that it is appropriate (i.e. mindshare).
It seems deploying thousands of ultra-lightweight VMs with atomized state would still require an orchestration layer. I don't follow how that would remove complexity and/or improve stability.
It removes complexity because you can reuse a lot of the stuff that already exists. Kubernetes, by contrast, has established itself as a black box.
Kubernetes has the concept of an "ingress" controller because it has established itself as the sole router for all traffic in the cluster. We already have systems to route traffic and determine "ingress" behind a single point (NAT). Kubernetes also manages all addressing internally, but we have technologies for that (DHCP et al). Kubernetes requires additional configuration to specify data persistence behavior, but we have many interfaces and technologies for that.
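To make that concrete, the same "route outside traffic to a backend" job looks like this in the two worlds (both lines are illustrative; the hostname, service name and addresses are made up):

    # the Kubernetes way: an Ingress object, interpreted by an ingress controller
    kubectl create ingress web --rule="app.example.com/app=web-svc:80"

    # the boring existing way: a NAT rule on the box that already fronts the network
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.5:80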
VMs would be able to plug into the existing infrastructure instead of demanding that everything be done the Kubernetes way. It reduces complexity because it allows you to reuse your existing infrastructure, and doesn't lock you in to a superstructure of redundant configuration.
Kube is very ostentatious software in this way, and it makes sense that they'd throw compatibility and pluggability to the wind, because the strategic value is not in giving an orchestration platform to AWS users, but rather in encouraging people to move to a platform where Kubernetes is a "native"-style experience.
As for orchestration, people were orchestrating highly-available components before Kubernetes and its ilk. Tools like Ansible were pretty successful at doing this. I have personally yet to find executing a `kubectl` command less painful than executing an Ansible playbook over an inventory file -- the only benefit would be faster response time for the individual commands, though you'd still need a scripting layer like Ansible if you wanted to chain them to be effective.
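In practice the day-to-day difference amounts to running one of these (the file and inventory names are ours):

    # the kubernetes way
    kubectl apply -f deployment.yml

    # the ansible way, against a plain inventory of hosts
    ansible-playbook -i inventory/production deploy.yml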
Right, and this orchestration layer is supposed to be designed for resilience and stability from the start, because it's an architectural problem that cannot be fixed later without a complete redesign of everything. There is one such proven architecture - supervision trees - and it addresses both problems: complexity and stability. VMs are not even necessary for this, nor are cloud platforms.
I think at this point it's accepted that supervision trees are an app or runtime concern. Operating systems and distributed platforms have to support a wider range of architectural patterns.
Disclosure: I work on a cloud platform, Cloud Foundry, on behalf of Pivotal.
Supervision trees have enough flexibility to support any architecture pattern imaginable; you've got it kind of backwards. They are like trees of arbitrary architecture patterns. The idea is to limit the scope of any possible problem so that it only affects a tiny part of the system, while at the same time reducing complexity, because each supervisor only has to deal with a small responsibility. Kind of a tiny orchestration system for each service, instead of a centralized one.
Nonsense. You can run your own private Docker registry, or, if you want to support stuff like authentication and access control, use Sonatype Nexus. Both are open source.
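Standing the basic registry up really is a one-liner with the stock `registry:2` image (TLS and auth config omitted here):

    docker run -d -p 5000:5000 --restart=always --name registry registry:2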
> but you can also do this just fine without Docker
Not as easily. You'd need to use VMs, with all their associated costs (especially if you use VMware), to provide proper isolation, and the hosting department will usually have day-long phone calls with the devs to get the software to run reproducibly - and god forbid there's an upgrade to an OS library. No problem there with Docker, as the environment for the software is complete and consistent (and, if done right, immutable).
We run a "private Docker registry" ... backed by Elastic Container Registry. I'm sure this is the case with the vast majority. You're certainly right that it's possible, but it's about as likely as using cloud platforms in a rational way to start with (elastic load scaling only).
Yeah, kind of. The complexity of the current iteration is not wholly the fault of Docker, though I'm sure some of the utilities had to increase complexity to work well with Docker (k8s just barely got support for other container platforms). This was a story about an annoyance/bug/issue with Docker, but I have annoyances with other things too.
Some people did not know how to do any server management before kubernetes became a big deal, so they think kubernetes is the only way to do it. For the rest of us, I don't think there's a lot of value brought by this ecosystem.