Why Kubernetes is winning the container war (infoworld.com)
269 points by bdburns on Sept 9, 2016 | 159 comments



I've used both Mesosphere and Kube (in production), and I feel I can safely comment on this.

Kube is winning for the same reason React/Redux (and now MobX) are winning, and the same reason Rails was winning in its day: community.

The community for Kube is awesome and the work people are doing in the field is being noticed all over the place.

I've seen people (myself included) move production clusters from Mesos to Kube just because of the development activity and how confident they feel in the community and the project's direction.

React and Rails (at the time) had the same sort of community pull, and that's why a lot of people on-boarded.

Golang is most likely a factor here too. I feel most people find Golang friendlier than Scala/Java. That's why Kube has many more contributors: the hurdle for contributing is lower.


>I've seen people (myself included) move production clusters from Mesos to Kube just because of the development activity and how confident they feel in the community and the project's direction.

This is a bit disappointing and disconcerting to hear! I understand that technology doesn't happen in a vacuum, but when you have two well supported/known technologies, picking the most 'popular' one rather than something based on technical merit is a huge issue for me.


The popularity is great, but Kubernetes also happens to be awesome technology. Where it intersects with the community is that the technical direction actually listens to real-world use cases that members of the community bring up. Combined with the amount of activity, Kubernetes is evolving fast.

For example, the PetSet feature of Kubernetes 1.3 involved a really long discussion with people trying to work out how and why to use it.

I remember back in the Kubernetes 1.1 days when I was trying to get AWS ELBs to talk with Kubernetes and possibly finding drivers to automatically set up proxy points. In my search, I found the Github issues where people were discussing that very thing. The person who wanted AWS integration was there talking about the specific needs and quirks of the AWS ecosystem. There were a series of patches made to beef it up, and the discussion on Github documented how the feature was conceptualized (so as to be part of a more general solution on port management).

In contrast, I don't see that with Docker. I can't say anything about Mesos, as I've never tried using it.


I agree, that's exactly the proof of a strong community.

Community goes a long way but has to co-exist with strong technology and being open to change.


To me it started with a POC and the feeling "Hey, this thing is interesting, let's try it out". The ease of use and the stability of it impressed me so much that I moved everything on top of kube. I know of at least one more company that did the exact same thing, and now all their production infra is running on top of Kube.


> I moved everything on top of kube

Could you define "everything", please?

What applications are you running? How many of them? How many resources do they use?


I don't think popularity was the #1 criterion, but it can be a tie-breaker when you have two quality solutions to choose from a technical perspective.


One more thing

Since I have set up both in production (and more than once), I can tell you for a fact: Kube is easier to set up and easier to run, with fewer problems along the way.

Since I am running essentially the same cluster on kube that I ran on mesos, this is a no-bullshit comparison.

[To be fair] I never used dc/os, I used marathon and chronos on top of mesos.


I'm in exactly the same boat as you with the caveat that I did look into and test DC/OS. It is too little too late. Also I see Mesosphere (as a company) more or less smothering the ecosystem very similarly to how Docker is doing with Swarm and the Docker ecosystem.

I went to MesosCon this year and the previous year. Last year it was a really fun, kind of scrappy conference. This year it was basically "buy DC/OS". Even the open source vendor booths all had "works with DC/OS" stickers. It reminded me of the "Dell recommends Microsoft Windows XP Professional for the best experience" crap that Microsoft used to pull with OEMs licensing Windows. I was really unimpressed. Whereas Mesosphere is adding hooks into Mesos for things like AAA (Authentication, Auditing, Authorization) and then implementing them as proprietary modules in their commercial DC/OS, it is native and open source in Kubernetes (thanks to Red Hat contributing code from OpenShift).

So on one hand I see a young ecosystem being swallowed by a single vendor who is slowly squashing the community. On the other hand I see several vendors doing everything in the open and a huge community. With the technical capabilities of both quickly converging as k8s implements a lot of the things Mesos used to have over it, Mesos is becoming less and less "sexy" and more like a liability unless you want to roll your own everything like Apple did with their Jarvis scheduler for Siri.


You should give DC/OS a try. Mesos required some time and skill to implement; DC/OS's GUI and CLI installers solve that completely.


I absolutely agree. DC/OS looks really interesting.

I am working on an open source project called "The Startup Stack" to essentially help startups bootstrap (and manage) a cluster in the cloud. link: http://docs.the-startup-stack.com/

Right now it works using Mesos/Marathon, etc. Two companies are using it in production.

In the last couple of months I've been thinking about making the move to Kube for this project. I might evaluate DC/OS as part of it.

Working on this project made me appreciate how hard it is to generalize and hit everyone's taste. There's a lot of customization around cloud infrastructure now, and I definitely feel there's more common ground we can create.

I'll give DC/OS a shot before making my final decision.

Thank you for your suggestion and your awesome comments on this thread. I was always impressed with the mesosphere team in previous discussions here on HN.


BTW, when I started the-startup-stack, DC/OS was an enterprise offering.

Coincidentally, you announced the open source version on my birthday this year. Maybe it's a sign :)


You're welcome. If you need anything, just reach out on the DC/OS Slack community channel; you can find me there.


>I've seen people (myself included) move production clusters from Mesos to Kube

Sorry, I am new to this. I saw a presentation where someone was running Kubernetes on DC/OS. Why are people doing that if they can just run pure Kubernetes?


Some people like the (beta) K8s implementation on DC/OS because they like the K8s abstraction for container orchestration, but at the same time they also like DC/OS for its distributed stateful (data) services support. While you can run everything in a container on K8s, the advanced data services require some special handling (see the comments about Postgres and GlusterFS from another commenter), so some people like to get the best of both worlds.

For example, if you restart a Kafka node on any container orchestration platform, the container is destroyed and respawned on another node, which is great, but you lose all your data and the broker has to rebuild that whole queue again. Kafka on DC/OS waits and maintains the volume, in case you are doing a temporary maintenance restart, before relocating the workload.

Another example is Cassandra: if you want to add more than one node, the traffic to resync the data across two nodes can actually degrade the performance of your cluster or even take it down.

I can go on and on. All of these issues and requirements come from running these mission-critical data apps in production, with each application having its own logic for how it handles day-2 operations. You don't know what you don't know unless you've tried running them in production.

Some people talk about the custom scheduler feature or PetSets in K8s. Those features are cool, and hats off to the K8s community, but they are still early in development, while the framework architecture has been core Mesos functionality since day one.

And finally, everyone is free to run whatever they want/like. Each company and person might have different preferences for how they run workloads and how much effort they want to put into it, so I don't think it's productive to start another vi vs. emacs debate here ;-)


I don't know why people would. I don't :)


For a DevOps fan like me, k8s has been a godsend, and what I like in particular is the three-month release schedule. There are still some hiccups, like no good documentation (or a tutorial, really) on setting up shared writable storage, or on how to handle databases and, more importantly, replication.

The k8s team is very responsive and I'm sure these will be ironed out in the near future so we can all adore each other's cattle :)


> no tutorial

Your wish is my command! https://kubernetesbootcamp.github.io/kubernetes-bootcamp/


There's also Kelsey Hightower's "Kubernetes the Hard Way" https://github.com/kelseyhightower/kubernetes-the-hard-way


Nice! Container Solutions did a great job.

Any suggestions on where I can find internals documentation or a book? Other than the obligatory 'look in the source code'.


(kubernetes co-founder here)

Kubernetes Developer docs: https://github.com/kubernetes/kubernetes/tree/master/docs/de...

Kubernetes Developer proposals: https://github.com/kubernetes/kubernetes/tree/master/docs/pr...

They're by no means perfect, but we try to be super open about our development process.

Many of the GitHub issues also have comprehensive discussions of topics; searching the repo for topics you're interested in will often surface interesting discussions.

If you want to get more deeply involved, I'd suggest the Kubernetes community meeting (Thurs, 10am Pacific)


Thanks, I'll take a look.

I learned a lot from Kelsey's "Taming microservices with CoreOS and Kubernetes" at OSCON EU 2015 https://www.safaribooksonline.com/library/view/oscon-amsterd... and https://github.com/kelseyhightower/intro-to-kubernetes-works...

It gave me invaluable in-depth understanding, though it's regretfully a little outdated by now.

EDIT: I see that his book, due soon, has the same information as in the presentation, and much more.


Are you guys still working on the documentation? I tried to set up a test Kubernetes cluster for a recent hackathon and couldn't get it to work. minikube kept cribbing that 'host only adapter we created cannot be found' (I think this error is from docker-machine), and the vSphere tutorial failed while trying to upload the vmdk file.


The singular Kelsey Hightower has written "the" book: http://shop.oreilly.com/product/0636920043874.do


It's still in "raw" status, but the release date says October 2015.

These days, with 3-6 month release cycles, I can't see how a traditional publisher could stay current unless you have a rapid iteration cycle.

Also, I wonder how platform-agnostic the book is? I know many resources out there start by having you set up a GCP account.


Oct 2015 was when the first preview went live. Not sure when the last release went out but my copy is missing 3 chapters at the end on deployments.

This book does the same re GCP, with most stuff done via the gcloud CLI, from what I've seen anyhow (I've only skimmed the book). Pretty sure Kelsey Hightower works for Google, so it makes sense that it is specific to their products.


I personally recommend client-go, the official Kubernetes Go library. The comments in its types made me understand Kubernetes concepts much quicker. It's quite small and easily understandable, even without Go experience. See my post https://kozikow.com/2016/09/02/using-go-to-autogenerate-kube...


How did you handle shared writable storage and databases?


We (actor.im) also moved from Google Cloud to our own servers + k8s. Shared persistent storage is a huge pain. We eventually stopped trying to do this; we'll try again when PetSets are in beta and able to update their images.

We tried:

* GlusterFS - a cluster can be set up in seconds, really. Just launch daemon sets and manually (but you can automate this) create a cluster. However, we hit the fact that CoreOS can't mount GlusterFS shares at all. We tried to mount NFS and then hit the next problem.

* NFS from k8s doesn't work at all, mostly because the kubelet (the k8s agent) needs to run directly on the machine and not via rkt/docker. Instead of updating all our nodes, we mounted the NFS share directly on our nodes.

* PostgreSQL we haven't fully worked out yet; if an occasional pod kill takes place, resyncing the database can become a huge issue. We ended up running pods dedicated to specific nodes and doing manual master-slave configuration. We haven't tried other solutions yet, but they also look questionable in a k8s cluster.

* RabbitMQ - the biggest nightmare of them all. It needs good DNS names for each node, and here we have huge problems on the k8s side: we don't have static host names at all. The documentation says we can, but it doesn't work. You can open the kube-dns code; there is no such code at all. For pods we only have IP-like domain names: "10-0-0-10". We ended up not clustering RabbitMQ at all. This is not a very important dataset for us and can be easily lost.

* Consul - while working around the RabbitMQ problems in k8s and fighting DNS, we found that the Consul DNS API works much better than the built-in kube-dns. So we installed it, and our cluster just went down when we killed some Consul pods, because they changed their host names and IPs. And there is no straightforward way to pin IPs or hostnames (hostnames don't work at all, only the IP-like ones that can easily change on pod deletion).

So the best way is to have some fast(!) external storage and mount it over the network to your pods. This is much, much slower than direct access to a node's SSD, but it gives you flexibility.


As long as you associate a separate service with each RabbitMQ pod, you can make it work without petsets. (Setting the hostname inside the pod is trivial, just make sure it matches.) Then you can create a "headless" service for clients to connect to, which matches against all the pods.
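
A rough sketch of that layout, assuming pods labeled app: rabbitmq plus a per-pod instance label (all names here are hypothetical):

  # One Service per RabbitMQ pod gives that pod a stable DNS name,
  # plus a headless Service that matches all of them for clients.
  apiVersion: v1
  kind: Service
  metadata:
    name: rabbitmq-0            # stable name for the first broker
  spec:
    selector:
      app: rabbitmq
      instance: "0"             # label carried by exactly one pod
    ports:
    - port: 5672
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: rabbitmq              # "headless" Service for client connections
  spec:
    clusterIP: None
    selector:
      app: rabbitmq
    ports:
    - port: 5672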

If you set it up in HA mode, then in theory you don't need persistent volumes, although RabbitMQ is of course flaky for other reasons unrelated to Kubernetes -- I wouldn't run it if I didn't have existing apps that rely on it.


> RabbitMQ is of course flaky for other reasons unrelated to Kubernetes -- I wouldn't run it if I didn't have existing apps that rely on it.

I'm surprised because I know teams which are very satisfied with running RabbitMQ at scale. Could you elaborate?


RabbitMQ doesn't have a good clustering story. The clustering was added after the fact, and it shows. I've written about it on HN several times before, e.g. [1]. Also see Aphyr's Jepsen test of RabbitMQ [2], which demonstrates the problem a bit more rigorously.

With HA mode enabled, it will behave decently during a network partition (which can be caused by non-network-related things: high CPU, for example), but there is no way to safely recover without losing messages. (Note: The frame size issue I mention in that comment has been fixed in one of the latest versions.)

We have also encountered multiple bugs where RabbitMQ will get into a bad state that requires manual recovery. For example, it will suddenly lose all the queue bindings. Or queues will go missing. In several cases the RabbitMQ authors have given me a code snippet to run in the Erlang REPL to fix some internal state table; however, even if you know Erlang, you have to know the deep internals of RabbitMQ in order to think up such a code snippet. There have been a couple of completely unrecoverable incidents where I've simply ended up taking down RabbitMQ, deleting its Mnesia database, and starting up a new cluster again. Fortunately, we use RabbitMQ in a way that allows us to do that.

The bugs have been getting fewer over the years, but they're not altogether gone. It's a shame, since RabbitMQ should have been a model showcase for Erlang's tremendous support for distribution and fault tolerance. You're lucky if you've not had any issues with it; personally, I would move away from RabbitMQ in a heartbeat if we had the resources to rewrite a whole bunch of apps. We've started using NATS for some things where persistence isn't needed, and might look at Kafka for some other applications.

[1] https://news.ycombinator.com/item?id=9448258

[2] https://aphyr.com/posts/315-jepsen-rabbitmq


Thanks a lot for elaborating. This is exactly the kind of insights I wanted to know.


What do you recommend instead of RabbitMQ on Kubernetes? I use RabbitMQ as a Celery backend. I should probably switch to Redis...


Yeah, this is tricky. I'm not a huge fan of all these distributed file systems like EBS, NFS or others - that doesn't make much sense for most DBs.

I prefer to have a DB which is "cluster-aware". In this case, you can tag your hosts/Nodes and use Node affinity to scale your DB service so that it matches the number of Nodes which are tagged. Then you can just use a hostDir directory as your volume (so the data will be stored directly on your tagged hosts/Nodes). This ensures that dead DB Pods will be respawned on the same hosts (and be able to pick up the hostDir [with all the data for the shard/replica] from the previous Pod which had died).

If your DB engine is cluster-aware, it should be able to reshard itself when you scale up or down.

I don't think it's possible for a DB not to be cluster-aware anyway - since each DB has a different strategy for scaling up and down.
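
A minimal sketch of that pattern, assuming nodes labeled db=true and a Cassandra image (the label, paths and image are placeholders; current Kubernetes calls the volume type hostPath rather than hostDir):

  apiVersion: v1
  kind: Pod
  metadata:
    name: cassandra-a           # hypothetical; in practice one RC per tagged node
    labels:
      app: cassandra
  spec:
    nodeSelector:
      db: "true"                # only schedule onto nodes tagged for the DB
    containers:
    - name: cassandra
      image: cassandra:3.7
      volumeMounts:
      - name: data
        mountPath: /var/lib/cassandra
    volumes:
    - name: data
      hostPath:
        path: /mnt/cassandra    # data stays on the node, so a respawned pod picks it up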


Yes, and for "cluster aware" DBs it is easy to write a small controller that makes it "Kubernetes cluster aware". Here is a demo we did with a WIP etcd controller: https://youtu.be/Pc9NlFEjEOc?list=PL69nYSiGNLP1pkHsbPjzAewvM...


I've found this to be a pain when setting up a container-based environment. The easiest approach is just to avoid it as much as possible - hopefully your cloud provider has some managed services (e.g. AWS RDS) that will handle most things for you.

Otherwise you need to separate your available container hosts into clusters: an Elasticsearch cluster, a Cassandra cluster, etc., and treat those differently from the machines you deploy your other apps to. Which, to be fair, they are: they're different and need to be treated differently.


GlusterFS was pretty much the only damn thing that I could get to work in a reasonable amount of time (Tried NFS, Ceph, Gluster, Flocker).

Basically the solution (until we get PetSets at least) is to:

1. manually spin up two pods (gluster-centos image) without replication controllers, because if they go down we need them to stay down so we can manually fix the issue and bring them back up.

2. each pod should have a custom gcePersistentDisk mounted under /mnt/brick1

3. from within each pod, probe the other (gluster peer probe x.x.x.x)

4. each pod should preferably be deployed to its own node via nodeSelector

5. once the pods have been paired on gluster, create and start a volume

6. make a service that selects the two pods (via some label you need to put on both)


Disclaimer: I work for Mesosphere (Champions of Apache Mesos and DC/OS)

We have total respect for K8s, but I don't think you can claim a win just based on community and stars.

OpenStack has a much larger community of developers and advocates, but it still hasn't reached its potential despite many years of incredible effort, seminars and summits.

Also, most of these next-gen infrastructure projects (DC/OS, which is powered by Mesos; K8s; Docker Swarm) are converging feature-wise, but they also have their strengths and weaknesses; some of these are temporary, some are structural by design.

Mesos (and DC/OS) was not just designed for scale, but also for extensibility and different workloads, which is why you can run Cassandra, Kafka, Spark, etc. in production. None of these workloads run as traditional containers; with DC/OS they have their own operational logic, such as honoring placement, simple installation, and upgrades, which is a core design function of the two-level scheduler.

People usually complain that standing up Mesos is hard, which is why we built DC/OS and open sourced it: to give you the power of Mesos and its awesome capabilities in managing containers and data workloads without going through all the effort of stitching everything together yourself. Check it out at DCOS.io, I am sure you guys will be blown away.


Speaking of DC/OS, not related to the article at hand, but I noticed this repo in there:

https://github.com/dcos/lashup

Just wanted to say that is some seriously good stuff -- a CRDT-based, distributed, masterless database, and what seems like a failure detector as well.


The problem I always hear with Mesos is that it doesn't scale down well, i.e. that it is not developer friendly or, to put it another way, you can't run it on your laptop.


Actually that isn't true. Minimesos runs quite well on your laptop if you wish to test that way:

https://minimesos.org/


DC/OS (A Mesos distribution) can be used on your laptop with dcos-vagrant or dcos-docker:

- https://github.com/dcos/dcos-vagrant

- https://github.com/dcos/dcos-docker

However, your point is somewhat valid in that Mesos requires you to allocate resources to everything you run, while resource allocation/enforcement is optional in Kubernetes. So you can easily run too many things and freeze your computer with Kubernetes, while Mesos is more conservative.
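
For reference, this is roughly what the optional per-container resource accounting looks like in a Kubernetes manifest (the values and names here are arbitrary); leave the resources block out entirely and the pod runs unbounded, which is how you can overcommit a laptop:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example               # hypothetical
  spec:
    containers:
    - name: app
      image: nginx:1.11
      resources:                # optional in Kubernetes, mandatory thinking in Mesos
        requests:
          cpu: 100m             # the scheduler reserves this much
          memory: 128Mi
        limits:
          cpu: 500m             # the container is throttled/killed beyond this
          memory: 256Mi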


I have been trying to understand the advantages of DC/OS over Mesos. Why should I even consider it? All I can understand is that it has a better(?) UI. I understand that even this, and the DC/OS CLI and packages, work with Mesos too? So what else is in DC/OS?


TL;DR: DC/OS provides all the components you need on top of Mesos to run in production, all pre-packaged and tested together, with an easy way to install them.

Mesos provides the core resource management and scheduling; think of it as the kernel of a (datacenter) operating system. On top of Mesos you usually need:

1- Container/workload scheduling capability, which is Marathon, now built in and integrated in DC/OS so you are up and running in minutes.

2- Management (the DC/OS UI and CLI) to handle all cluster and workload operations. It also includes the installer (for all components, including ZooKeeper, networking, etc.).

3- An app store (Universe) that allows you to easily find and deploy the latest packages such as Kafka, Cassandra and Spark with a single command. With regular Mesos you needed to find the framework that worked with a specific version of Mesos, do the troubleshooting, etc. Now it's an app-store-like experience.

4- Core infrastructure components such as very cool service discovery and load balancing (Minuteman), storage drivers, and CNI support.

If you want to run Docker containers, DC/OS also gives you the ability to use the Docker daemon or the new universal containerizer (technology preview) which we just released.

Because Mesos is very modular and very powerful, it was built to have modular implementations of all system components. Our team and our founders ran some of the largest implementations of Mesos (Twitter, Airbnb, etc.), and they compiled all these best practices and the bits and pieces into DC/OS.

Hope that explains it.


I attempted to spin up a DC/OS cluster on AWS the day that Mesosphere open sourced it. I followed the tutorial step by step and the installation failed; I was never able to log in to the admin console. It was completely opaque and I wasn't able to troubleshoot it in any meaningful way. After a couple of hours I gave up and went back to my Mesos setup. It was a very frustrating experience. I'm not sure how it is now, but my experience was disappointing.


I know it's young still, but I think Nomad is going to get a share of this market with little effort.

I played with Mesos & k8s and I picked Nomad instead. I'm not managing a huge fleet of servers I want to abstract away so much as I wanted a simple, consistent framework for allocating resources to containerized tasks across a small group of nodes. I don't think that use case is anything to sneeze at, and for a new user there just isn't anything out there as easy as Nomad, IMO.

https://www.nomadproject.io/


For a simple solution, also see Rancher. I used it about 2 years ago, and even then it was a very useful and stable tool. http://rancher.com/


I also use Rancher and am quite happy with it.

The cool thing is that you can set up an environment using Cattle (their own orchestration system), k8s, Mesos or Swarm. Up to you and your preferences.


I wanted to use nomad, but at the time their tutorials (not sure about their current state) were not really good, so I ended up rolling my own solution which serves me well since then (https://github.com/crufter/puller).


I agree about the lack of tutorials and documentation. HashiCorp documentation always feels like an internal company wiki or a reference guide for someone who is already quite familiar with the APIs. I just checked the Nomad docs and they look pretty much like they did when I looked at them 4 or 5 months ago.


Nomad is definitely shaping up to be a nice alternative if k8s and mesos are overkill for your use case and you don't want to be vendor locked to a simplified option like ECS.


It pretends to compare Kubernetes, Apache Mesos and Docker Swarm. The article says Kubernetes has a lot of stars on GitHub (it doesn't compare them to Docker's or Mesos's, it only says Kubernetes has a lot), and the same goes for Slack/Stack Overflow activity and the number of CVs mentioning the tech... I will pass on InfoWorld's opinion from now on.


I completely agree that this is an article intended only to praise Kubernetes. Stating that Google invented Linux containers is wrong as well.


I can bring up an app on Linux or Windows from bare metal in minutes by hand. But the way it's supposed to be done now is something like this, right:

  1) Use one of Chef/Puppet/Salt/Ansible to orchestrate
  2) One of those in item 1 will use Vagrant which
  3) Uses Docker or Kubernetes to
  4) Set up a container which will
  5) finally run the app
Really?


  1) Developer pushes code to repo, its tested, if pass>
  1) Rkt/Docker image made by CI/CD system and pushed to docker registry
  2) Automatically deploy new image to staging for tests and master for production (can be a manual step) 
  3) Sit back and relax because my time invested up front saves me hassle in the future
I have 30+ nodejs apps that can be updated on a whim in under 5 seconds each.

Setting up bare metal instances, even when using something like Ansible, is slow and cumbersome compared to what I'm now doing with k8s.

Doing an apply/patch to kubernetes nodes takes seconds (assuming services aren't affected).

edit: Sorry, for the unfamiliar: by services I mean load balancers/ingress; it's a k8s term. It takes 45s-1 minute to update GLBCs, so I modify them as rarely as possible.


How do you solve your #2?

We push our docker images to quay.io but pull+deploy is still manual. How does the target platform detect a new container on the registry in order to kick off a pull+deploy?


Take a look at Drone (http://readme.drone.io, not to be confused with the hosted Drone app). It allows you to execute arbitrary commands after publishing the Docker image. You can tell it to run kubectl to perform a deploy, for example.

Drone can also run CI tests and build stuff in temporary containers, which avoids the need for compilers etc. taking up space in the final image, and negates the need for squashing (which busts your cache).

Much faster than Quay. Quay starts a new VM in AWS every time it builds, which is super slow. Drone just starts containers.

(You have to self-host Drone, though. Since it requires running Docker in privileged mode it's not a good fit for running under Kubernetes, unfortunately.)

(Kubernetes on GKE always runs in privileged mode, but that doesn't mean it's a good idea!)


Quay.io has been beta testing a new build system (based on Kubernetes under the hood) that should make builds a bit faster. If you're interested in testing it out, your organization can be whitelisted. Tweet at @quayio and we can get you set up.


Disclaimer: I work for Red Hat on OpenShift v3, an open-source enterprise-ready distribution of Kubernetes.

We have solved #2 by deploying an internal Docker registry [1], pushing to it once a build succeeds, and automatically (or not) triggering deployments once an expected image lands in it [2].

In the future we will support webhooks so you can trigger deployments from external registries such as quay.io or DockerHub.

You can have a look at the all-in-one VM we have:

  git clone https://github.com/openshift/origin
  cd origin
  vagrant up
  vagrant ssh
  oc cluster up

The last command will spin up a cluster with a docker registry, a HTTP router, and a bunch of images for you to play around with.

https://asciinema.org/a/49402

[1] The same underlying registry that powers Docker Hub: https://github.com/docker/distribution

[2] A Kubernetes controller loop that watches for new images and triggers new deployments.


There are a number of ways to do this. You could have your CI system create the new object manifest (say nodeapp1.yaml) outright with the image set to nodeapp:$version, or have your CI system do a simple sed, like:

  sed -i -e 's^:latest^:'$VERSION'^' kube/myapp-deployment.yaml

Regardless of how you do that, then your CI/CD can run simple kubectl commands to deploy the new container;

  kubectl apply -f kube/nodeapp-deployment.yaml --record
One of the problems I've yet to tackle here is that if the deployment fails and you're doing rolling updates, you won't get a failing exit code for the build to fail in CI/CD. I need to call the Kubernetes API to see which revision of the container/deployment is now running and compare it to the previously running version to fully automate CI/CD. I haven't had time to set this up, but I will soon unless I discover a better method.
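
One way to approximate that today is to block the CI job on the rollout itself; a hedged sketch (the deployment name is hypothetical, and the exit/timeout behavior of kubectl rollout status varies by kubectl version, so wrap it defensively):

  kubectl apply -f kube/nodeapp-deployment.yaml --record
  # Waits until the Deployment reports the new revision as rolled out;
  # the timeout guards against hanging forever on a stuck rollout.
  timeout 300 kubectl rollout status deployment/nodeapp || exit 1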

Edit: Also I just realized my numbers are 1, 1, 2, 3. Woops.


One of the engineers at CoreOS has a WIP demo that spins up a staging instance of each GitHub PR on Kubernetes[0]. Here's an example PR [1]. I imagine the scope of this tool could be extended to a staging cluster after merge.

[0]: https://github.com/alekssaul/webhook/tree/github-support [1]: https://github.com/alekssaul/simplewebapp/pull/40


The CI/CD system should do that.


On one host? Easy. No real value.

Two to twenty hosts? Harder, especially when ensuring that the hosts are set up identically.

Twenty-one hosts on up? Very hard.

Automated creation and removal of hosts? Virtually impossible.

Steps 3 and 4 can be avoided if you like, using any number of techniques (including the good, old fashioned linux package with init files), but they do make a number of challenges disappear (such as consistency in code and deployment across hosts) when you run into those problems.

So, no, not everyone needs Kubernetes (I'd argue most people don't), but I can't see not using 1 or 2 (even if 1 is bash scripts).


The problem with "by hand" is now the dependencies or environment needed for that app to run are only in the brain of the person with that hand, or documented in a separate document that is prone to being out of date. With a build environment that includes containerization, all of your apps dependencies are guaranteed to be documented or it will not run correctly.


Your assessment is correct. It takes much longer to just run the app for the first time. The savings come in when you want to redeploy the app in multiple environments, test it, and develop new features on it. That's when all of these steps make sense - although, hopefully you'll be able to remove steps 1 & 2 when everything is a docker / Kubernetes workload.


Most of those technologies overlap. Except for some disaster development scenarios, no one is actually using all those technologies in the same environment (i.e. they're not using Chef and Vagrant and Docker in production, at the same time)

Chef/Puppet/Salt/Ansible these days are just used to provision 1 VM/container (if even that, many people have just reverted to shell scripts) and that container is then managed using a container management tool (Swarm/Kubernetes).


"(if even that, many people have just reverted to shell scripts)"

Isn't Ansible basically just a way of executing shell scripts across multiple hosts in a sensible manner? At least that's how it's been explained to me.


Yes. Most of the value of Ansible is its library of hundreds of tasks, like copying a file and replacing variables, or restarting a service if the config changed. It can do a lot of work in a few lines... but it can also be a bit too much if you are already in a single container.
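
For the unfamiliar, a tiny (hypothetical) playbook showing the kind of tasks meant here -- render a config file and restart the service only if it changed:

  - hosts: appservers
    tasks:
      - name: render app config
        template:
          src: app.conf.j2          # placeholder paths
          dest: /etc/app/app.conf
        notify: restart app         # handler runs only when the file changed
    handlers:
      - name: restart app
        service:
          name: app
          state: restarted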


'Bringing up an app' is the easiest part of ops work. Everything that happens after that is hard and immensely complicated. These tools do not exist for their own sake.


> 'Bringing up an app' is the easiest part of ops work

It's also the part that people are generally trying to address by using containers


I'd argue that while containers have been attractive for that reason, they're actually really bad at it (layered binary blobs?) -- and good at the other stuff. It's pretty unfortunate that's been one of the main propelling forces behind containerization, I think we've ended up with subpar tools because of it.


Bringing up an app isn't the use case. Bringing up an app dozens, or hundreds, of times is.

Think of it like this. If you need to build one moderately simple widget, is it faster to build it by hand, or to build an entire factory to make it? Obviously, building it by hand. Now, how about if you need to build 100, or 100,000? How about if they all need to be exactly the same? Factories start to make sense at a certain point.


What? No. You don't use Chef to run Vagrant to run Kubernetes to set up a container to run your app. That doesn't make any sense, and makes me wonder if you are lashing out simply because you don't understand what these technologies actually do.


Managing one app on one machine (physical or virtual) is a significantly different problem from needed to manage dozens or hundreds of applications/servers.

When you need to automate the entire deploy process, possibly scale up or down the number of servers/services running based on demand, or to run the same operation across hundreds of servers: that is when you need Kubernetes. Trying to do any of these tasks manually when you're dealing with a large number of machines/services/servers becomes unmanageable very fast.


I only do steps 3-5 here. I no longer find any value in Chef/Puppet/Salt/Ansible or Vagrant.


How do you get Docker and the Kubernetes client containers up? How about your host monitoring?


You can script all that yourself if you properly document it, so somebody else can come along and figure out exactly what is going on should you disappear/quit/call in sick/get hit by a bus.

Elastic's 'beats' ( https://www.elastic.co/products/beats ) can do all the monitoring/stats. Deploying snapshots to multiple servers can also be automated in Go, or you can drop a Go binary on the server for some kind of command-and-control architecture for continuous remote maintenance, starting/stopping containers, etc. (assuming all security precautions have been considered).

This worked for a deployment of ~75 Docker containers; YMMV.


So, automated, just not with the usual suspects. That seems reasonable.


You PXE boot CoreOS and use a cloud-config.


There are many options. You can use Google's GKE, which manages the k8s cluster for you. You can use kops or Hyperkube for bare metal/AWS.

For host monitoring you would use Kubernetes DaemonSets, which install whatever monitoring tool you want to use onto every single node in a cluster.


How did you turn that into 5 steps?

  1) Use one of Chef/Puppet/Salt/Ansible to bootstrap docker
  2) Use docker to build a container for the app (once)
  3) Use docker to start your container on multiple hosts


It's all about knowing how to build an open source community

This. Engineering excellence is secondary. You can get away with complete craptitude in your tech if you can build community. (I won't name examples.) Of course, it's better if you also have technical excellence. On the other hand, you can have technical excellence, but it will come to naught if you have community-destroying anti-patterns.


In my experience, I haven't been coming to k8s because I particularly like the developer experience (despite their efforts to focus heavily on it), but because it cleanly supported some things that I need.

For instance, with k8s, out of the box every running container in a clustered system is discoverable and has its own IP. If you're writing distributed applications, and you're using containers principally as a tool to make your life easier (and not as part of an internal paas or for handling data pipelines or some other use case), having that sort of discovery available out of the box is great.
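
As a concrete illustration (the service and namespace names are made up), any pod can resolve a Service through kube-dns without extra wiring:

  # from inside any pod in the cluster; the cluster suffix may differ
  nslookup api.default.svc.cluster.local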


One thing I've found extremely difficult to handle is the ZooKeeper cluster model of containers, where when an instance dies, it has to come back and be referred to as "zookeeper-1" forever. The way to do this currently is to use a service in front of a replication controller with one pod. This feels wrong all over. Supposedly they have a thing called Pet Sets [1] coming to solve this, but it's been in the works for an eternity. Also, we've started to outgrow the load balancing simplicity that the k8s load balancer gives you, and I have not seen a nice migration path to something like HAProxy in Kubernetes. All that said, we like Kubernetes a lot.

[1] To distinguish from cattle. If you have a red, a blue and a green goldfish, and your red goldfish dies, you can replace it with another red fish and not really notice, but if the new one is purple, the others won't play with it.


Re: load balancing. Out of curiosity, why not just use haproxy? Or nginx, or your load balancer of choice.

Run haproxy in host networking mode, as a daemonset, and use it as your ingress point?
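
A bare-bones sketch of that idea, assuming an haproxy image and a ConfigMap holding its config (both names are placeholders); hostNetwork lets it bind the node's ports directly:

  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: haproxy-ingress
  spec:
    template:
      metadata:
        labels:
          app: haproxy-ingress
      spec:
        hostNetwork: true          # bind 80/443 on every node
        containers:
        - name: haproxy
          image: haproxy:1.6
          volumeMounts:
          - name: config
            mountPath: /usr/local/etc/haproxy
        volumes:
        - name: config
          configMap:
            name: haproxy-config   # generated from your Services/endpoints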


So it would basically be a daemon set sitting on top of all the other services, with their own load balancing turned off? That should work. I like that.


Email's in profile. I'm happy to discuss specifics offline, as I have some experience with this. Can show some example configs of how we've employed it.


That's how we do it with Vamp. Interested in your thoughts: http://vamp.io/documentation/installation/kubernetes/


We haven't hit performance limits yet, but we are doing something similar and it works great.


For a lot of clustered software it makes sense to build small controllers and resources that bridge Kubernetes's knowledge of the system and the distributed systems knowledge. See this demo of a WIP etcd controller from the community hangout: https://youtu.be/Pc9NlFEjEOc?list=PL69nYSiGNLP1pkHsbPjzAewvM...


PetSet will indeed solve your exact use case, and the docs are "Alpha" as is the feature.

But here is a Zookeeper example: https://github.com/kubernetes/contrib/tree/master/pets/zooke...


Petsets can be used in 1.3, but there are some outstanding bugs that still need to be squashed. I believe it will reach beta status in 1.4. Beta features are usually pretty production-ready (deployments, for example, are flagged as beta in 1.3, and have been rock solid for me).


Disclaimer: PetSet co-author

PetSets will probably be beta in 1.5. The problem with rushing PetSets is that the worst thing that can happen would be that we say PetSets are beta and then it eats someone's DB for lunch.

The goal is to get the core functionality stable and predictable (with enough stress testing and partition testing that we are comfortable with it) and put it into a longish beta while we add the features that are necessary to make it predictably and stably keep running your clustered services. We want beta to mean "stable enough to trust".


That's a bit disappointing. People have been waiting for petsets for quite a while now. But thanks for the info.

FWIW, some other less-stable features (the new volume attaching logic in KCM, for example) should have been marked as beta. You guys are not quite consistent about what you feel is stable. :-)


Yeah, there's a fair amount of stuff going in and we aren't always consistent about what we turn on / off. In OpenShift we tend to be much more conservative (i.e. 1.3 OpenShift defaults controller volume attach to off, but the Ansible upgrader will turn it on by default).

We promise to do a better job of communicating alpha status - it's been a topic of a lot of discussion recently. To be fair - we've only had two (almost three!) update releases so we're still trying to be consistent in our messaging. 1.4 will try to be a lot better.


Thanks. I'm not really complaining :-)


Re HAProxy, take a look at Traefik. It's a thin, full-featured load balancer that supports Kubernetes out of the box.

https://traefik.io/

Another option is running Nginx via the Nginx Ingress Controller, which is also great.
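
Both of those controllers watch Ingress resources, so the routing itself stays declarative; a minimal example (host and service names are hypothetical):

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: myapp-ingress
  spec:
    rules:
    - host: myapp.example.com
      http:
        paths:
        - path: /
          backend:
            serviceName: myapp     # existing Service to route to
            servicePort: 80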


One thing that I think Kube (and DC/OS) are missing is what Chef is working on right now: the application definition should live within the app and be consumed by the scheduler.

Chef's product is called Habitat https://www.habitat.sh/ and it has some VERY interesting concepts that, if Kube implements them, will make it much more interesting (to a lot of people).

Right now, the deployment and configuration of an application are supposed to be separate, but I feel they need to be just a bit more coupled. The engineer who develops the application would define a few things, like the "domain" and how you connect to the application, and this would be consumed by the scheduler.

Right now, DC/OS and Mesos are really fine-grained and geared toward the DevOps people, and I feel that the first one to crack the "batteries included" approach will win the fight by a knockout.

Imagine something like Heroku on your own infrastructure: if an engineer can launch and deploy a micro-service with the same ease with which they deploy and access a Heroku application, that will be awesome.


While coupling is convenient, there is a problem when you are not the only consumer of your code. If you have an open source project that is to be deployable by anyone, you can't put Kubernetes files there (there's not really a way to write a generic Kubernetes manifest -- in addition to volumes and secrets, things like labels are environment-specific). This is also true for other things like .drone.yml for Drone.

So you have to maintain a private "release fork" that you periodically rebase into. This becomes a workflow bottleneck, and the VCS history will not look good, either. Even worse if you have multiple branches you deploy from.


This is true. It's especially true if you have services depending on one another. I think there's a middle-ground there.

Right now, 99% of the people I know use the GitHub workflow: master is always deployable (often CD), and other branches can be deployed using "hubot deploy branch-name to xxx".

I think if you take the latter and you make it easier to deploy an application on top of your cluster, you can gain a lot.

The workflow today is more cumbersome. If you could create your application, attach some definitions to it and launch it to the scheduler, it would be better.


> Right now, the deployment and configuration of an application is supposed to be separated but I feel they need to be just a bit more coupled.

You can dynamically configure Kubernetes from within the cluster. You can either use the client-go library or talk to the API server directly with REST. For languages other than Golang you may need to set up a proxy server, but that's a single kubectl command.

So all the basic functionality is there, but IMO configuration should be separate if it is possible - having it in source control is nice. I only use dynamic api for scheduling kubernetes jobs.
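
For instance (the deployment name and replica count are made up), the proxy route looks something like this:

  # expose the API server locally so scripts don't have to handle auth
  kubectl proxy --port=8001 &
  # read state
  curl http://localhost:8001/api/v1/namespaces/default/pods
  # change state, e.g. scale a Deployment with a strategic merge patch
  curl -X PATCH \
    -H 'Content-Type: application/strategic-merge-patch+json' \
    -d '{"spec":{"replicas":3}}' \
    http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments/myapp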


What I imagine is something like the Procfile for Heroku.

The app decides what it runs and how, you can set configuration like domain, whether the application is public/private etc...

Once you push the application to the scheduler it will handle the rest.


Ah. Quite a lot of things you mention are possible by dynamically generating yaml config.

I wrote a blog post about using the Go Kubernetes library to generate yaml: https://kozikow.com/2016/09/02/using-go-to-autogenerate-kube.... It's quite nice, even for static config IMO, due to type safety.

You could easily write a Golang binary that asks the user a series of questions ("what database would you like", "do you want public access", etc.) and produces yaml as output.


Yup. What I am saying is this: it should be part of the core offering, IMHO.

I think most people choose Heroku over rolling their own infrastructure simply because of the ease-of-use for getting started.


That's basically what Deis does. They use Kubernetes as the backbone and build a heroku layer on top.


From a different perspective, it's great to see Chef making a credible play in the container space.


Have you seen Deis? It seems they put just that, private-Heroku, on top of K8s. It even supports Heroku's buildpacks.

https://deis.com/


I've looked at their docs many many times. Never took the plunge.


Just as an outside observer developing on a platform, I see my fellow devOps team members working in Kubernetes and it's been a shit storm for the most part where containers disappear randomly, stuff breaks and they are on call on the weekends not having fun. I have my own clients on other projects using AWS where I just upload and click buttons like a dumb monkey and I end up looking more competent even though I'm completely not. I've consequently not been motivated to dive into these DIY deployments just yet.


I haven't dived into Kubernetes yet, but I set up Rancher for our new application and it has been nothing short of amazing so far. I can't express how happy we've been with it.

I previously tried the Mesos/Marathon route (with Mesosphere and then again with Mantl) and that was nothing but a huge waste of time due to all the maintenance that was necessary for all the required servers. With Rancher, it's just spin up a container for each host with a single command and you're done.


I've been looking at Rancher for a few weeks, but haven't put an application on it yet. How do you perform rolling updates with Rancher + Kubernetes? With Kubernetes itself, currently I just do `kubectl rolling-update <rc> --image <new-image>`.


From the subtitle:

It's all about knowing how to build an open source community -- plus experience running applications in Linux containers, which Google invented


It kinda did, right? Maybe "invented" is too bold a term; maybe there were others before. But I've heard folk tales that Google has been running stuff in containers since the early-to-mid 2000s.


Google contributed cgroups to Linux, so saying that it invented Linux containers is a fair statement. Other operating systems had containers before Linux.


Not exactly. Google contributed the namespace support to Linux and IBM contributed much of the initial control group bits to Linux. Together, they form the modern "container" that we have all come to know and love.


would you consider "chroot" a form of container?


I was, where I worked, with technology no one's heard of now. I doubt Google was back then.


The setup to get k8s running isn't great, but once it's running and you understand its config files, it makes things so much easier. We're getting ready to deploy k8s at work soon and begin moving more there as we can.

From what I understand, and this is completely absent from the article, Mesos is designed for a scale that most start-ups (and even established companies) can't afford or justify. K8s is simpler but still robust. Better than just fleet or compose, and clearly still better than Swarm (based on posts read here on HN).


We are all working to make it easier still with things like "Self-Hosted Kubernetes" to bring up and manage the clusters: https://coreos.com/blog/self-hosted-kubernetes.html

And the work that we have been going on with kubeadm. https://github.com/kubernetes/kubernetes/pull/30360 https://www.youtube.com/watch?v=Bv3JmHKlA0I


sig-cluster-lifecycle has a solution in process: kubeadm.

It will make setting up a cluster as easy as 'kubeadm init' on the master, and 'kubeadm join --token $TOKEN --api $IP' on the nodes.

Video demo: https://www.youtube.com/watch?v=8cirvYVhRoI&feature=youtu.be...

The commands to test it: https://gist.github.com/errordeveloper/bdd268e71fc8916c1d0ef...


Disclaimer: I work for Red Hat.

> The setup to get k8s running isn't great,

+1 But they are greatly improving.

For a local test environment have a look at OpenShifts 'oc cluster up' https://github.com/openshift/origin/blob/master/docs/cluster...


Someone elsewhere in this thread complained about the NFS experience with k8s. I know OpenShift contributed some or all of that code so does it improve the NFS experience as a layer on top of k8s?


Disclaimer: I work at Red Hat on OpenShift.

All of the code Red Hat contributed around Kubernetes storage plugins went into Kubernetes upstream. If you have any questions or problems, feel free to raise an issue or reach out to us in the community. Red Hat has a large number of contributors on the Kubernetes project and we are big fans of the community and the technology!

Red Hat OpenShift provides an enterprise-ready Kubernetes distribution that also builds on top of Docker, RHEL/Fedora/CentOS and includes integrated networking (OVS), routing (HAProxy), logging (ELK stack), metrics, an image registry, automated image builds & deployments, integrated CI/CD (Jenkins), and a self-service UX (Web, CLI, IDE). You can check out the free Origin community distro here - https://github.com/openshift/origin or sign up for our free hosted developer preview here: https://www.openshift.com/devpreview/. We also offer commercially supported solutions - https://www.openshift.com/container-platform/.


Alternative local cluster for OSX is kube-solo https://github.com/TheNewNormal/kube-solo-osx - it's pretty good, and runs as a native Mac app


We're also working on making it easier to deploy and scale k8s:

- https://jujucharms.com/canonical-kubernetes/

Currently in alpha, feedback and PRs accepted, etc.

Disclaimer: I work for Canonical


Rancher makes it easy to set up a k8s environment:

Running k8s in AWS with Rancher - http://rancher.com/running-kubernetes-aws-rancher/

August 2016 meetup on HA k8s Clusters - https://www.youtube.com/watch?v=ppY9cqTvBVE


Seems like "Why Kubernetes is winning the orchestration war" is a more appropriate title for this?


I use Mesos and K8s heavily, as well as contribute back to the projects, and while I do agree this article leans towards being fan-fare, there is a bit of truth to it.

Community is a big deal; people tend to underestimate this. Putting aside newer companies, when a larger enterprise ventures out into open source, they do take community as a major factor, since you have to consider that the tool you build will have to last 5 years, maybe 10, maybe more.

To delve further into the Docker side of things, I personally wish that the company would focus on its core business instead of stretching itself with extra things. I get the need to improve the UX, which they do very well considering how far we have come from LXC.

I feel Mesosphere is starting to go down the feature-creep route as well, but I wish them all the best, as I have loved Mesos since the beginning all those years ago.


Why is no one mentioning Docker's response in version 1.12, the new built-in orchestration called swarm mode (different from just Swarm): http://blog.nigelpoulton.com/docker-launches-kubernetes-kill...

Granted, the title is fanboyish, but it really seems to be a significant response to Kubernetes.


IMHO nobody is winning anything that matters right now because the current transition is a transition to an additional level of abstraction which is definitely not properly met by any of the tools available.

What we now need is tools that allow architectural fences around components, and reliability guarantees around subsystems ... versus not only technical but also business-level risks (including state-level actors)... often across borders, including for example exchange rate risk. This is based on business-level risk models, not some engineer feels X or Y type reasoning which is (often very well, but broad-picture uselessly) based on technical know-how.

I prototyped such a system pretty successfully, you can read the design @ http://stani.sh/walter/cims/ .. it's incomplete (critically hard to explain investment utility for non-tech business types) but at least infrastructure agnostic and practically informed.

NB. To be non-humble, by way of demonstration I am a guy who has been called to conceive/design/set up/manage from-scratch datacenters for extremely visible businesses with highly motivated attackers with USD$Ms at stake for penetration. Systems I personally designed run millions of USD/month and have many times that in investment. And it's obvious that both Google ("compete with amazon on cloud business... high-level desperate for non-advertising income!") and Docker ("VC! growth! growth! growth!") have their own agendas here... we should trust no-one. It's early days. Bring on the ideas, build the future. Hack, hack, hack. We the little people own the future. It's ideas.


Looking through all of these comments, we should be glad, as a community, that there are a number of players in this burgeoning orchestration space. Nobody has won and nobody should win. They each have their strengths and weaknesses, and there is no one size that fits all.


Sad to see Mesos losing steam. My understanding was that Mesos subsumes the functionality of Kubernetes thanks to its Aurora scheduler. But it also has many more customized schedulers, for different purposes, that might make it more efficient to run complicated pieces of software.

For instance, it certainly is possible to run a Cassandra cluster by having each instance run in its own Docker container. My understanding is that it would be much more efficient to run this cluster with a dedicated Cassandra scheduler instead.

Is this right? Or are the performance benefits of running a dedicated Cassandra scheduler on Mesos negligible compared to running them in containers?


It would be nice if open source projects (especially popular databases) came with k8s definition files so that you wouldn't have to write the yaml yourself.


I did that for my open source project SocketCluster. See https://github.com/SocketCluster/socketcluster/tree/master/k...

Also, if you look at the main Kubernetes repo on GitHub, you can find the yaml files for various popular open source projects in there as well. Some of them are out of date, though - things have been changing so fast.

I think a problem that still comes up is that you still need yaml files to account for the interactions between different services too. Each service doesn't live in a vacuum.

We're currently working on building a complete auto-scalable stack on top of Kubernetes and a deployment service to go with it. See https://baasil.io

Everything we do is open source except the actual website.


I use SC for my startup and it works great! Thanks. I'll have a look at the Baasil service, it sounds interesting.


I wish Kubernetes had more examples. For example, their vSphere volume driver has almost zero documentation/tutorials on how to set it up.

http://kubernetes.io/docs/user-guide/volumes/#vsphere-vmdk-e...

I believe it is inadequate.


Agree - better doc is a focus for us. We've added providers so fast the doc has not caught up.


The author is just gaga over Google. He presents no comparison, no benchmarks and no mention of alternatives except Docker Swarm.


What is the best resource to learn Kubernetes like a boss? I like ebooks, but will take anything, as long as it's up-to-date and easy to follow without being a long-term sysadmin.


You might not like this answer, but... play with it. In our environment, we're compiling and deploying Kubernetes by hand (I mean, not entirely by hand, but you get the idea). It was daunting at first but is getting easier and easier all the time, and we have a better handle on how things work as a result. What's going on under the hood, what the mass of command line parameters actually mean, and so on.

The documentation is lacking. Badly. And doesn't seem to keep pace with the features that are added in every point release, or in the alpha/beta versions. A lot of what we've learned has been by experimentation, poking at things, following the k8s user discussions, watching Github, reading code.


Can you elaborate on the information you wish was documented but isn't?

Also, have you seen the box at the bottom of the docs that says "I wish this page ..."? It goes right into their issue tracker, which increases the likelihood of something getting fixed.


To be fair, I have not seen or used that link, but I will take note of it for the future - thank you, that's very helpful.

I should keep notes on specifics, so I apologize that I can't highlight a particular thing that I've been frustrated by in a given moment. I'll certainly say that the documentation is improving.

As a general item, I think my biggest struggle has been hitting a wall with the documentation - there are some things that are left almost as exercises to the reader, especially getting into more advanced topics (how does one do rolling upgrades or canary deployments in more esoteric situations, how might one do load balancing properly in a master-slave style configuration versus something more round-robin oriented, etc.)

And I don't like to levy a complaint without acknowledging that anything open source is ripe for improvement by contribution, but my professional situation prevents me from doing so. Trust that I would love nothing more than to help and not just make an empty observation or moan about it.


So I typed out this whole answer and only realized at the end that I should have asked: have you tried their Slack channel: https://kubernetes.slack.com/messages/kubernetes-users/ and/or their mailing list https://groups.google.com/forum/#!forum/kubernetes-users for getting answers to your specific concerns?

how does one do rolling upgrades or canary deployments in more esoteric situations

So there are a couple of moving parts to that: foremost, Kubernetes is only a tool, so I doubt there is The One True Way(tm) of doing canary deployments -- and I'm taking liberties with your comment in that I presume you don't mean the literal rolling upgrade, which is ``kubectl rolling-update`` and is very smart. Having said "it's just a tool," the first thing that sprang to mind when reading "canary deployments" was the health checking built into ReplicationControllers. There are several ways to check on the "health" of a Pod, and k8s will remove from service any Pod that has declared itself ineligible to serve.

If the QoS metrics are too broad for any one given Pod to have insight into, k8s has a very robust API through which external actors can manipulate the state of the cluster, rolling back the upgrade if it turns out the new deploy is proving problematic.
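
To make the health-check part concrete, here's a rough sketch of what readiness and liveness probes look like on a ReplicationController - the image, paths, and thresholds are all made up for illustration:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: frontend
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: frontend
            image: example/frontend:v2     # hypothetical image
            ports:
            - containerPort: 8080
            readinessProbe:                # failing this pulls the Pod out of the Service
              httpGet:
                path: /healthz
                port: 8080
              periodSeconds: 5
            livenessProbe:                 # failing this repeatedly restarts the container
              httpGet:
                path: /healthz
                port: 8080
              initialDelaySeconds: 15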

I hope he doesn't throw tomatoes at me for suggesting this, but Kelsey Hightower <https://github.com/kelseyhightower> is an amazing, absolutely amazing, community resource. If you don't find inspiration from the talks he has given, then reach out on Twitter and ask for a good place to read up on your concerns. I bet if he doesn't know the answer, he will know the next contact for you to try.

load balancing properly in a master-slave style configuration versus something more round-robin oriented

Now there is something you and k8s may genuinely have a disagreement about - in its mental model (to the best of my knowledge) every Pod that matches the selector for a Service is a candidate to receive traffic. So in that way, there is no "slave" because everyone is equal. However, if I am hearing the master-slave question correctly, failover is automagic because of the aforementioned health check.
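
In config terms, the Service only cares about label selectors, so if you genuinely need to steer writes at a single "master," the common trick is an extra role label rather than anything Kubernetes-specific - a sketch with made-up names:

    apiVersion: v1
    kind: Service
    metadata:
      name: db-write               # hypothetical
    spec:
      selector:
        app: mydb
        role: master               # only Pods carrying both labels receive traffic
      ports:
      - port: 5432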


I have, yes, and there's no doubt they're both crucial in getting answers to some of those more advanced topics.

Edit: I also want to add that the Kubernetes community meeting is fantastic. I don't attend every week, but I do play catch-up with the videos released afterward.

Regarding rolling updates, I have found that looking at the kubernetes API - and discerning what kubectl is doing under the hood - is helpful in some cases. We've taken to API inspection to assist with some of the more complex update patterns we're trying to address.

Completely agreed on the value of Kelsey to the community. I hesitate about contacting any one person directly on a topic, but his guides and github repo are just outstanding.

On the load balancing/master-slave thing - I would have agreed completely with you. Master/slave configurations seem antithetical to some of the core Kubernetes concepts... but the waters are getting muddied because PetSets seem to be a response to that missing pattern. I think you're right about the mental model. Every pod is (from what I understand), or at least should be, seen as an equal candidate; the only question is "can it receive and service requests?" - if yes, Kubernetes doesn't care.

Failover by letting Kubernetes do the dirty work is, of course, an option. If the master or a slave dies - it'll come back up. Except... it's also at odds with anything that handles its own clustering model (thinking Redis Sentinel or Redis Cluster, or InfluxDB when it had clustering support). Sometimes "coming back up" in a system that has an orthogonal view to Kubernetes is challenging.

It also doesn't account for the situation where... what if Kubernetes doesn't bring it back up for some reason? Say the pod was killed for resource contention, or something along those lines. Now I have two slaves, no master, and nowhere to feed writes.

I don't complain too loudly about these things, because most often the solution is (correctly) to think about the problem differently, but there are real-world considerations that don't always fit perfectly with the existing Kubernetes model - and we may still have to accommodate them from time to time.


PetSet co-author here:

Think of PetSet as a design pattern for running a cluster on Kubernetes. It should offer a convenient pattern that someone can use to Do-The-Right-Thing.

The hard part of clustering software is configuration change management. If an admin adds a new member to the cluster by hand, they're usually correct (the human "understands" that the change is safe because no partition is currently going on). But an admin can also induce split brain. PetSets on Kube try to make configuration changes predictable by leveraging their own strong consistency (backed by the etcd quorum underneath the Kube masters) to tell each member the correct configuration at a point in time.

The PetSet goal is to allow relative experts in a particular technology to make a set of safe decisions for their particular clustered software that behaves exactly as they would predict, so that they can make a set of recommendations about how to deploy on the platform.

For instance, CoreOS is working to provide a set of recommendations for etcd on PetSets that would be correct for anyone who wants to run a cluster safely, and to encode that into tutorials / docs / actual pet definitions. They're also driving requirements. Other folks have been doing the same for Elasticsearch, ZooKeeper, Galera, Cassandra, etc. PetSets won't be "done" until it's possible to take those real-world considerations into account safely when writing one.
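
In yaml terms, the shape looks roughly like this (alpha API in 1.3, so expect details to change; the image and sizes below are placeholders):

    # A headless Service gives each pet a stable DNS identity (zk-0.zk, zk-1.zk, ...)
    apiVersion: v1
    kind: Service
    metadata:
      name: zk
    spec:
      clusterIP: None
      selector:
        app: zk
      ports:
      - port: 2181
    ---
    apiVersion: apps/v1alpha1
    kind: PetSet
    metadata:
      name: zk
    spec:
      serviceName: zk
      replicas: 3
      template:
        metadata:
          labels:
            app: zk
          annotations:
            pod.alpha.kubernetes.io/initialized: "true"
        spec:
          containers:
          - name: zk
            image: example/zookeeper:3.4       # placeholder image
            ports:
            - containerPort: 2181
            volumeMounts:
            - name: data
              mountPath: /var/lib/zookeeper
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 10Gi

Each pet keeps its ordinal name and its volume claim across rescheduling, which is the "stable and replaceable at the same time" part.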


That's really valuable insight, thank you. The biggest struggle so far has been with zookeeper and Kafka. Things that are stateful have posed a lot of difficulty - from a mental perspective more than fighting against Kubernetes specifically, just trying to think and adhere more to microservices principles.

I'm following PetSets very closely and I think they're going to help a great deal.

Is it accurate that PetSets were introduced to compensate for the type of thing I've singled out? That certain models just don't quite fit with the "everything is equal and interchangeable" notion? Or does it just feel that way? I don't want to end up missing the point of PetSets or using them for something they're not truly intended for, or leaning on them only to hit a stumbling block.


Yes. PetSets are intended to be step one of "some things need to be stable and replaceable at the same time", so that unit can then be composed into apps (i.e. ZooKeeper + Kafka might be a PetSet for ZK and multiple scale groups for Kafka). Basically we're trying to boil the problem space down to "what does cluster-aware software need in order to be safe" - things like protection from split brain and predictable ordering.

There is likely to be more needed (on top of or around PetSets) because we also need to solve problems for highly available "leaders" and active/passive setups. Think of what something like Pacemaker does to ensure you can run two Linux systems in an HA fashion, and then map the necessary parts to Kube (fencing, IP failover, cloning, failure detection). There are still a lot of hard problems to solve - PetSets are the first building block.


I agree with the parent comment, it is easiest to just play with it.

If you're particularly interested in Python webapps, I've written a tutorial on how Kubernetes can help deploy a Django app in a scalable and resilient fashion: https://harishnarayanan.org/writing/kubernetes-django/




These articles never mention the elephant in the room, AWS.

How many containers are running on Elastic Beanstalk and ECS?

I'd wager orders of magnitude more people are running containers by reading the docs and clicking around than by mastering the cutting-edge challenges of getting up and running on Mesos, Kube, or Swarm.

Another blind spot is Heroku. Every day new businesses spin up containers with a 'git push heroku master' and don't even know it.

All these providers, platforms and tools have their place.

I simply don't think the "winning the war" talk is accurate or constructive.

Disclaimer: I worked on the container systems at Heroku and now manage 100s of production ECS clusters at Convox.


What "cutting edge challenges" are there to get running on Mesos/Kube/Swarm? Are you just referring to getting the cluster booted? There are just so many options for that these days.

Just curious, I was skimming through ECS docs last night and it seems like some of the basic building blocks look very similar to their equivalent constructs in Kube/Nomad.


Automating VMs with Elastic Beanstalk and deploying to a platform like Heroku are effectively solved problems.

Container orchestration brings in new challenges like operating a consensus database, managing data volumes and load balancing.

And this is on top of VM management.

Yes, ECS, Kube, Nomad are very similar. All relatively hard to use right now compared to their simpler ancestors.

Here is more in depth info about the challenges of ECS:

https://convox.com/blog/ecs-challenges/


On Safari OS X, the article is blocked by a full-page ad that I can't dismiss.


Don't see any ads on Chrome with Adblock Plus.

It shows 32 blocked requests though (ouch).


That seems light for most pages...


About average, I'd say.

Quick check:

* cnn.com: 30-40
* bbc.com: 10-15
* washingtonpost.com: 10-25
* medium.com: 1-2
* techcrunch.com: 25-40


Innovation is what makes this industry so exciting to be a part of. I joined the tech world via Holberton School[1] just nine months ago, and already so much has changed. It makes it challenging to keep up, but that's the fun part.

The future of technology is held in the hands of open source projects.

[1] https://www.holbertonschool.com



