I love the suggestion for new users to try minikube. I got started with minikube and kubernetes recently and it was only then that I had an aha moment with containers. I get it now. I know containers have been around a while, but with kubernetes the orchestration difficulty has been lowered to the point where I can't imagine going back to the way I was getting things working before. From minikube I moved to kubernetes on GCE, and it mostly just worked. I still use minikube for my local dev environment.
Yes minikube rocks. It essentially fulfills the dream promised but poorly delivered by Docker Compose - a development environment as similar as possible to production.
In fact, IMHO kubernetes has tried to do something similar with .. but it is not engineered from the ground up for simplicity. Which is why it has MULTIPLE tools for this - minikube, kubeadm, kompose - but nothing matching the ease of use of docker and its yml files.
Kubernetes deployments are done via yml (or json) files too. They are called manifests.
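A minimal sketch of what one looks like, in case it helps (the pod name and image here are just placeholders) - you write the yml and hand it to kubectl:

# Hypothetical minimal manifest - pod name and image are placeholders.
cat <<'EOF' > pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
spec:
  containers:
  - name: web
    image: nginx:1.11
    ports:
    - containerPort: 80
EOF
kubectl apply -f pod.yml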
I think you are misunderstanding the tools listed. Minikube sets up a single-node local cluster. Kubeadm sets up a multi-node cluster. No matter how or where your cluster is set up, you still deploy with manifests.
I have deployed several clusters with kubernetes and with Swarm. I think my wording was hyphenated at the wrong place.
Swarm has a new yml file format - the compose v3 which is pretty damn awesome. Kubernetes has had yml files for a while, but the gap in simplicity/usability is massive.
Which is what my point was with minikube and kubeadm and kompose - for swarm, you use a single tool for either a single node cluster.. or a multi node cluster. What's more, kompose was invented to read from the same Docker Compose file format - because it is so intuitive.
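For anyone who hasn't seen it, roughly what that compose v3 workflow looks like end to end (service name, image and replica count are made up):

# Sketch of a compose v3 file deployed onto a swarm - names/image are made up.
cat <<'EOF' > docker-compose.yml
version: "3"
services:
  web:
    image: nginx:1.11
    ports:
      - "80:80"
    deploy:
      replicas: 3
EOF
docker stack deploy -c docker-compose.yml mystack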
I'll go one step further - kubeadm does not actually have high availability support, so you have to use kargo or kops to reasonably deploy in production.
Kubernetes introduces a lot of upfront complexity with little benefit sometimes. For example, kargo is failing with Flannel, but works with Calico (and so on and so forth). Bare metal deployments with kubernetes are a big pain because the load balancer setups have not been built for it - most kubernetes configs depend on cloud based load balancers (like ELB). In fact, the code for bare metal load balancer integration has not been fully written for kubernetes.
Now, my point is not that kubernetes sucks - I think it's a great piece of tech. My point is about why people think Docker Swarm will die.. or that it sucks. Because, relatively speaking, while kubernetes NEEDS all kinds of complicated orchestration tools (and consultants!) to set it up.. Swarm, on the other hand, is damn easy for a developer building his first stack to set up.
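To illustrate what I mean by easy, this is more or less the entire multi-node setup (the address is a placeholder):

# On the first machine (it becomes a manager); the IP is a placeholder.
docker swarm init --advertise-addr 192.168.1.10
# The command above prints a ready-made "docker swarm join --token ..." line;
# run it on each additional machine to grow the cluster.
# Then deploy a replicated service across the cluster:
docker service create --name web --replicas 3 -p 80:80 nginx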
The issue that I have with the newly imagined Docker Swarm is that it continues Docker's trend of trying to be a jack of all trades.
I have no specific examples for Docker Swarm, but using this approach in other areas has led to some pretty major deficiencies in Docker's design that they have been slow to fix, and I'm not keen on seeing that happen again.
Incidentally, the embedded DNS feature is fairly extensively leveraged by kubernetes - it takes care of the situations where you don't want to muck around with the underlying /etc/hosts (on the actual metal) and want to make your changes only in the containers.
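For example, from inside any pod you can just resolve a Service by its DNS name - no /etc/hosts edits anywhere (pod and service names below are hypothetical):

# Pod and service names are hypothetical; the cluster DNS answers the query.
kubectl exec -it some-pod -- nslookup my-service.default.svc.cluster.local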
But I'm hearing what you are saying more and more - Docker Inc is having a huge PR problem. Docker Swarm may actually be good, but people generally dislike the organization itself.
You don't see these kinds of responses with Fleet, Mesos.. even OpenStack. Docker Swarm is a genuinely sweet piece of tech, so this is rather unfortunate.
They have a PR problem because they break interfaces in minor releases and have actively pushed back on criticism for doing so as "they need to be agile" and "can't be boxed in by competitors".
I would also like to know the answer to this question. Every time I try setting up a Kubernetes cluster, it's an exercise in frustration. Docker Swarm is much easier in comparison.
Add to that the fact that Docker Swarm is adding Enterprise features (such as Secrets in 1.13) and that it has an Enterprisey version (Docker Datacenter) which supports multiple teams - why would I, an Enterprise developer and architect, look at Kubernetes over Docker Swarm?
My £0.02 is that Kubernetes has the backing of Google, who have a tremendous amount of experience with container orchestration. And while using Kubernetes it really shows: things are pretty well thought out, lots of features out of the box, etc.
With docker swarm it's taken them this long to get simple secrets integrated, and as with all of my experiences with first party docker tools: they seem ok at first, but the devil (and problems) are in the details.
I trust Google more to get this right, and I highly doubt Kubernetes is going anywhere.
Red Hat are doubling down on Kubernetes too (they're the second biggest contributor to Kubernetes), and if there is anyone who is good at taking parts of an open source ecosystem and supporting them for the enterprise, it's Red Hat.
Because Kubernetes, even though it is still young, is a lot more mature than Docker Swarm.
Kubernetes is also based on Google's years of running containers internally; it solves real problems. Allowing multiple containers to run in the same pod allows for much nicer composability than running multiprocess containers.
Have you tried setting up a k8s cluster recently? I believe they added kubeadm for much easier setup in 1.5, which was released a few weeks ago.
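The kubeadm flow is roughly this (a sketch - the exact join command, with its token, comes from kubeadm's own output):

# On the machine that will be the master:
kubeadm init
# kubeadm prints a "kubeadm join --token <token> <master-ip>" command;
# run that on each worker to join it to the cluster.
# Finally install a pod network add-on (Flannel, Calico, Weave, ...) with kubectl apply.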
The other responses to your posts are great, but I'd like to add one thing to them:
Why are you equating low barrier of entry with quality? I think MongoDB ought to have taught everybody in this field that you can have a low barrier of entry and still be a crap product.
I didn't see any mention of quality in GP's post... For me Docker swarm is a simpler implementation than Kubernetes and likely more suitable for simpler deployments.
Kubernetes, whilst a cool product, still has a lot of rough edges even now. One I encountered recently was that to upgrade a locally deployed cluster from 1.4 to 1.5, the answer appears to be "re-install from scratch", as the upgrade script is still "experimental" (https://kubernetes.io/docs/admin/cluster-management/#upgradi...)
I've tried both. Kubernetes is targeted really towards large production clusters -- even kubernetes itself requires quite a bit of resources. A single non-HA cluster initialized by kubeadm, for example, couldn't even schedule itself on an n1-standard-1 machine on GCE.
For a MVP or a small production stack that runs on one server, I would go with Docker Swarm for its simplicity and small footprint. And even if you do end up scaling across many nodes, you still won't need k8s (kubernetes).
I too salute CoreOS for doing the right thing for their customers and the ecosystem. Kubernetes was something that was hard to predict; it didn't grow organically but was suddenly released by Google.
Right now I believe Kubernetes is the project with the most accepted pull requests per day. This came up in a talk from GitHub at Git Merge 2017. It shows that k8s is on its way to becoming the default container scheduler platform. It will be interesting to see how Docker Swarm and Mesosphere will compete during 2017.
The container scheduler is becoming the next server platform. The fifth one after mainframes, minicomputers, microcomputers, and virtual machines.
While configuring GitLab to run on k8s we learned that much of the work (like Helm Charts) doesn't translate to Docker Swarm and Mesosphere. I think there might be strong network effects similar to the Windows operating system.
Completely agreed. The landscape 2-3 years ago looked incredibly different, and I picked Mesos for a rather ambitious project. After it became relatively clear that k8s was going to eclipse Mesos and not by a little bit, trying to unwind that decision basically cost me my job through some political infighting. It's a shame, but such is the price of early adoption, I guess.
Given the same information, I'm really confident that I'd still make both choices the same way.
Hi David, can you reveal some of your environment (e.g. on-prem vs cloud), were there any technical reasons for switching or was it primarily a matter of perception of velocity/popularity between the two projects?
Just to add some of my own perception as someone who works on Mesos, Mesos continues to be popular with large technology companies that don't make their technical investments lightly: Twitter, Apple, Netflix, Uber, Yelp, for example. Companies continue to choose a Mesos stack based on its technical merits. The project is still moving fast and adding powerful primitives to support the needs of production environments while distributions like DC/OS are trying to make Mesos more approachable (easy to install, administer) and comprehensive (providing solutions for load balancing, logging, metrics, etc). I hope you will take another look at the Mesos ecosystem at some point, a lot of care has gone into it :)
Not OP, but I think the perception is that Mesos requires you to roll a lot more of the solution yourself. That's fine if you're a large company who can throw hundreds of developers at your platform, less so if you've got 5 or even 50.
Might be interesting for the docker runtime in the future as well. That is, assuming k8s spends enough time on supporting alternatives like rkt as well as docker.
Do you think k8s's support for docker and rkt will become the same as the interface becomes standardized with the Open Container Initiative? https://www.opencontainers.org/about
There is currently an ongoing refactoring of the container communication stuff in kubernetes to remove all container-engine-specific code and just use the Container Runtime Interface (CRI). This allows any container engine to run with kubernetes, as long as it implements that interface.
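In practice that should mean the kubelet just gets pointed at whatever runtime speaks the interface - a rough sketch, with the socket path depending on the runtime you actually install:

# Hedged sketch: tell the kubelet to use a remote CRI runtime instead of the
# built-in Docker integration; the socket path depends on the runtime.
kubelet --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/crio/crio.sock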
> The fifth one after mainframes, minicomputers, microcomputers, and virtual machines
Interesting though that the last 3 paradigms are largely built on each other. I'm involved in a deployment at the moment which started with buying servers, implementing VMs on them, and finally laying k8s on top of that.
I know most uses of k8s won't ever really see the layers below, but they're still there...
Hmm, that's a pity, even though it shouldn't come as a surprise to anyone who's actively using/involved with fleet.
I like the simplicity and flexibility of fleet (basically distributed SystemD) a lot. Don't necessarily want to switch to a bigger scheduler like Kubernetes. Anyone have any suggestions for/experiences with an alternative simpler scheduler (like Nomad or an alternative solution like the autopilot stuff from Joyent)?
Nomad dev here. We should definitely tick the simplicity box for you. If not, let me know. :)
Nomad is a single executable for the servers, clients, and CLI. Just download[0] & unzip the binary and run:
nomad agent -dev > out &
nomad init
nomad run example.nomad
nomad status example
And you have an example redis container running locally!
Nomad supports non-Docker drivers too: rkt, lxc templates, exec, raw exec, qemu, java.[1] To use the "exec" driver, which doesn't use Docker for containerization, you'll need to run nomad as root.
Nomad user here. No k8s experience. I have been using it for more than 6 months (docker containers + short-running jobs). To name the main features I like: deployment simplicity, responsive scheduling, disaster recovery and service discovery integration.
Sorry to hear that! We've definitely focused on stateless containers until 0.5 which introduced sticky volumes and migrations. Useful in some cases but definitely doesn't cover all persistent storage needs.
Extensible volume support will be coming in the 0.6 series via plugins.
We are moving toward container-pilot and it's A+. We have been using an adapted autopilot pattern for some time now with our thick VMs and it's been great. There is no one system that solves all problems and fits all paradigms, but it seems like container-pilot / autopilot as a pattern is very successful at delivering simplicity.
BTW, we are also using Triton (formerly SmartDC) from Joyent and are absolutely loving it. It's not without its rough edges, but it is by far the best public / private cloud option we have found that supports containers and VMs.
Same here. What made me like fleet despite the many problems with it is the simplicity and that it is not a container scheduler but a systemd unit scheduler, so it is far more flexible than just a container scheduler.
I have projects where Kubernetes is probably the right choice, but I have many more where Kubernetes is massive overkill and where I also need/want the distributed systemd units.
I've been telling friends and co-workers I think kubernetes has won the orchestration war. But even as I did so I wanted something simpler for my own purposes, and so was using fleet.
Luckily for me, I'd stuck with making all my units global and driving their deployment off of metadata. I think I'll just strip off the [X-fleet] section, and start deploying them straight to systemd with ansible.
This is roughly what we're doing. Ansible to manage specific containers on hosts. It works quite well, and with some iptables shenanigans we have a lot of power over how we roll new containers out.
Ansible brings inventory management to the table. I can have an inventory with my backend and frontend instances tagged, run my playbook, and it will copy/start the appropriate systemd units.
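A rough sketch of what the playbook side looks like under that approach (group, unit and file names are made up):

# Hypothetical playbook - group/unit names are made up.
cat <<'EOF' > deploy-units.yml
- hosts: frontend
  become: yes
  tasks:
    - name: Copy the frontend systemd unit
      copy:
        src: units/frontend.service
        dest: /etc/systemd/system/frontend.service
    - name: Reload systemd and (re)start the unit
      systemd:
        name: frontend.service
        state: restarted
        enabled: yes
        daemon_reload: yes
EOF
ansible-playbook -i inventory deploy-units.yml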
As someone who's been working with containers since docker was released, I feel like this is the right decision.
CoreOS are awesome, and I hope that rkt takes off (no pun intended)
K8s has been a fun companion to travel with on the road to stability, but I think they've now got it right. I remember the confusion regarding config file formats, network architecture, persistent storage etc and I'm happy to say they've mostly got it nailed now.
Congrats to thocken and team
My next experiments are with the SmartOS docker support and Kubernetes. Hopefully I can get K8s running nicely on Solaris zones and get better container isolation happening.
Once again, I think CoreOS have made the right decision here, but that doesn't preclude major changes in K8s itself!
I think Kubernetes is a really interesting product and obviously has a lot of momentum. That said, for something that's seeing wide adoption it still has a lot of rough edges and things that need fleshing out.
One I ran across recently was the upgrade process for clusters. Per (https://kubernetes.io/docs/admin/cluster-management/#upgradi...) it seems that unless you're on GCE, the best way to upgrade a cluster is by rebuilding it from scratch, as the upgrade script is still "experimental" - which doesn't seem great.
The other area where I think Kubernetes is lagging Docker quite a bit is security documentation and tooling. There's no equivalent of the CIS guide for Docker or Docker Bench, both of which are useful for understanding the security trade-offs of various configurations and choosing one that suits a given deployment.
Building a cluster from scratch is usually not a bad idea: you create a new cluster with the upgraded version, combine both clusters through federation and start moving pods from the old to the new cluster.
Upgrading a cluster in place will come in the future.
Whilst that might make sense for major upgrades, what about cases like a high-risk security fix where upgrade speed is important? People don't want to be re-building from scratch in that kind of situation...
I fully understand your issue. Creating a new cluster means for me running a script that sets up a new cluster in ~15min. There is https://github.com/apprenda/kismatic which can help simplify your cluster setup if you run in an enterprise environment.
You can also take a look at https://coreos.com/tectonic where CoreOS provides an enterprise kubernetes distribution that supports updating a kubernetes cluster without downtime, but I personally haven't tested Tectonic.
>That said, for something that's seeing wide adoption it still has a lot of rough edges and things that need fleshing out.
Yes, I'm concerned about this not just with k8s, but Docker as well. Both are very immature products and there's a massive rush to adopt them, attributable almost entirely to social pressures and the insecurities of the people who lead these tech departments.
When things like StatefulSets and persistent storage are still iffy/under development, it should be clear that these things are nowhere near production-ready.
I don't get that move. fleet was extremely well suited to scheduling a highly available kubernetes master.
As soon as you have 3 etcd nodes and 3 fleet nodes, you could use fleet to bootstrap kubernetes in a way more stable fashion than all of the other available options.
If people remove the low-level tools for managing a cluster, it will be harder and harder to bootstrap higher-level stuff.
But well, what to expect in the container space - stuff there changes just way too often.
You can do that kind of bootstrap without fleet. Just use ignition or cloud-config with the right systemd units and a bunch of fixed IP addresses. I think the CoreOS folks have worked on a number of ways to simplify and automate bootstrapping of the Kubernetes control plane, so they see fleet as redundant now. Besides, it took a long time for fleet to get something resembling a mechanism that updates units in the cluster.
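As a sketch of the per-node half of that (heavily trimmed - a real kubelet unit also needs certs, a kubeconfig, networking flags and so on), the unit you'd drop in via cloud-config/ignition boils down to:

# Heavily trimmed sketch; a real unit needs certs, a kubeconfig, network flags, etc.
cat <<'EOF' > /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes kubelet
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now kubelet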
That said, being a lower level tool as you point out, it can be useful during e.g. troubleshooting. Imagine the case where `fleetctl list-machines` returns more nodes than `kubectl get nodes`.
With fleet you could have a single kubernetes master that would be started on another node as soon as one node went down. That won't work with just systemd units.
I think it is a brave decision which might affect current users of Fleet for a while, but will prove to be good for the community overall.
If you think from a newcomer's perspective - someone who is just getting started with container orchestration - they do a lot of research to choose a framework/tool, and providing them with a lot of suboptimal options doesn't really help (I do not mean Fleet is suboptimal, but k8s is already close to becoming a standard). It is always better to have one or two standard solutions for a particular problem. Parallels can be drawn from the javascript world, where the influx of libraries, frameworks and tooling that only do a few things differently from each other has led to a lot of confusion, especially among beginners, and instead of thinking deeply about core concepts people are often seen chasing the newest shiny framework.
I am sad to see fleet go. Fleet was quite simple to set up, but k8s is a monster. It has so much terminology and it tries to cover all the cases of cloud orchestration. I think my fallback now is Swarm (I hope it gets more stable though).
I was in the same boat of leaning toward the simplicity of Compose/Swarm/Docker Cloud, and even took a look at Rancher which supports Swarm & their own compose/Cattle scheduler. After spending months trying to get these to work effectively, and battling with their continuous changes & instability -- I eventually gave Kubernetes a shot. There's definitely a greater learning curve to understanding what all the terminology is, but for the basic uses of deploying a set of services, it turns out it's actually not as complicated as it first seems.
My shortcut was using https://github.com/kubernetes-incubator/kompose to convert my docker-compose.yml to the equivalent K8S objects. It wasn't as simple as just running it, but it let me see what it would basically take to do the same thing in Kubernetes. It ended up taking just a few days to wrap my head around it all and get it up and running. Probably even easier if you use something like GKE which manages the cluster for you. If you're investing in using containers for the long-haul, I think it's definitely worth the learning overhead.
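In case it saves someone a search, the conversion step itself is just:

# Convert an existing compose file into Kubernetes manifests you can read
# through and then apply (kompose up can also deploy directly, but reading
# the generated yml first was the whole point for me).
kompose convert -f docker-compose.yml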
There are only three key object types you need to understand to start using K8S: Deployments, Pods & Services. Feel free to msg me if you have some questions about getting started.
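To make those three concrete, here's a hedged minimal example (names, image and replica count are placeholders; on current clusters the Deployment lives under apps/v1, older ones used the beta API groups) - the Deployment manages the Pods, and the Service load-balances across them:

# Placeholder names/image; the Deployment manages the Pods, the Service fronts them.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.11
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
EOF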
They're not over. Kubernetes might be dominant in the short term, but it's very complex compared to the use cases a lot of people have been using things like fleet for, and I for one will continue evaluating other options for that reason. Most of the cluster deployments I work on have needs for which the complexity of something like Kubernetes is totally unnecessary - there will be plenty of space for alternatives for that reason.
kubeadm is not fixing the underlying complexity - it's putting a veneer on top. It's certainly helpful to simplify the deployment, but kubeadm is only needed in the first place because of how complicated kubernetes is.
To be clear, I'm not saying there aren't deployments where the complexity of something like kubernetes is justified.
But most people only run a small number of servers. I'd argue most clusters people are deploying will stay below 10 servers for their entire lifetime, running a dozen or two services that generally need basic high availability, load balancing, and 1-3 different data stores with replication/data persistence requirements. For that kind of setup, while you certainly can run kubernetes, its complexity simply isn't needed.
That's the boat I'm in - we have less than a dozen services, and all we need is to pack them neatly onto worker nodes plus some load balancing. Setting up Kubernetes for that looked like absolute overkill. I need to try Docker Swarm and Nomad again, but either of them has to support rolling deploys out of the box - last time I checked, neither of them did.
This is interesting, but has a potential problem - what do you use to schedule the control plane?
Right now, we use Fleet to schedule a highly available k8s API server and associated singleton daemons. The API server is required to get anything else scheduled in the cluster.
How are they going to solve this bootstrap problem?
As moondev pointed to, eventually bootkube will handle bootstrapping k8s clusters. At my company we just set everything up using cloud-config. A systemd unit boots the kubelet on each server, and static k8s manifests are loaded by the kubelet to run the rest of the k8s components as pods. This way, the kubelet itself is the only component that is not managed by k8s itself.
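For anyone unfamiliar with the static manifest trick: the kubelet watches a directory on disk (conventionally /etc/kubernetes/manifests, set with --pod-manifest-path) and runs whatever pod specs it finds there, no API server required. A toy example (in the setup above, the real files here would be the apiserver/controller-manager/scheduler pods):

# Toy static pod; in the bootstrap described above these files would be the
# control-plane components themselves.
cat <<'EOF' > /etc/kubernetes/manifests/example.yml
apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod
spec:
  containers:
  - name: web
    image: nginx:1.11
EOF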
> At my company we just set everything up using cloud-config. A systemd unit boots the kubelet on each server, and static k8s manifests are loaded by the kubelet to run the rest of the k8s components as pods.
This is the exact same methodology that I've been using and it's worked rather well. The current CoreOS documentation [1] on running Kubernetes follows this methodology too.
This is precisely what we do on our cluster as well for the node-level daemons - kubelet and kube-proxy.
We use fleet to schedule the HA API server. You cannot use the Kubelet to schedule this, because you need an API server to schedule cluster-wide pods.
The only solution I can see is to have a config that launches a special 'master' node that runs the API server, but this is uncompelling to me. I'd rather have every single node be identical, and get the API server to pop up somewhere in the cluster using a master election process - which is precisely what fleet does.
Fleet is still not necessary for HA control planes. There is no danger in running multiple API servers at once. For some time now, the controller manager and scheduler binaries have supported built-in leader election with the --leader-elect option.
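i.e. you can run the same control-plane binaries on every master and let them sort out who acts - a sketch, since real invocations carry many more flags:

# Run on every master; only the elected leader does work, the rest stand by.
# Real invocations need many more flags - this is just the election bit.
kube-controller-manager --leader-elect=true --kubeconfig=/etc/kubernetes/controller.kubeconfig
kube-scheduler --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfig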
Sensible move, though I hope it's not too disruptive for Fleet users. I don't think they had any option though. The list of easy ways to try K8s should include conjure-up on Ubuntu, for either laptop-scale or large cloud/VMware/bare metal deploys.