Five Months of Kubernetes (danielmartins.ninja)
263 points by dreampeppers99 on Sept 14, 2016 | 36 comments



We've been deploying Kubernetes for clients since, well, 1.0 (very recently) and have nothing but great things to say about it. If you want something approximating a Heroku-like experience but in your own environment (AWS, GKE, or even on-prem) K8s is a super awesome way to get there. Sure, like anything it's got some rough edges that you'll get cut on, but it improves every 3 months. :D

Big kudos to the k8s team at Goog, and all the other contributors!


Have you had to upgrade clusters for your clients yet? Is there a way to upgrade clusters to a newer version of k8s safely?


It is definitely not easy using "naked" open source Kubernetes, but it can be done. My company [1] packages Kubernetes-on-autopilot, which includes HA configuration, single-click updates to the latest k8s version, etc.

We run it for our customers on their AWS clouds and even on enterprise infrastructure. So it's just like with any OSS project: DIY vs. investing in tooling.

A huge benefit of adopting Kubernetes for your SaaS product is this: it becomes much easier to push updates to enterprise environments. Selling SaaS to private clouds is pretty sweet: the size of a deal is larger but the incremental cost of R&D and ops (!) is now very low, because of standardized deployment models like k8s (what we do) or Mesos.

[1] https://gravitational.com


What's your opinion on using plain vanilla Salt vs. kube-up vs. kube-aws vs. the new kops?

The problem with k8s is that there is way too much stuff happening now and it is hard to get started quickly.

For example, we started using Docker when it was alpha, as a one-person startup: getting up and running was very easy. On the other hand, using k8s on a 3-node cluster is... scary.


In my opinion, kube-up and other Kubernetes-specific provisioning helpers were created with the goal of letting people of varying backgrounds complete the numerous quick starts and tutorials with minimal prerequisites. They make no assumptions about the user's expertise in system administration.

For production use, you should definitely embrace the tooling you have already invested in and make Kubernetes a part of it. How else would you integrate it with your existing identity management, monitoring, storage/backups, etc.? I am no expert in Salt, but assuming it's similar to Ansible (my choice), then yes: building a solid set of Salt recipes would be my recommendation. Or just hire us: we'll install and manage hundreds of clusters for you, one for each engineer, even. ;)
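
For illustration, a minimal Ansible sketch of what folding Kubernetes into existing config management can look like (the host group, package, and template names are hypothetical placeholders, not a recommendation):

    # playbook.yml: install and enable the kubelet on a node (rough sketch)
    - hosts: k8s_nodes
      become: true
      tasks:
        - name: Install the kubelet package (hypothetical package name)
          apt:
            name: kubelet
            state: present
        - name: Render our kubelet configuration (hypothetical template)
          template:
            src: templates/kubelet.env.j2
            dest: /etc/default/kubelet
          notify: restart kubelet
      handlers:
        - name: restart kubelet
          service:
            name: kubelet
            state: restarted
            enabled: true

The same structure extends to the other components (API server, proxy, etcd), which is where the integration with your existing monitoring and backups comes in.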


> or just hire us

:)


We start from kops, but tweak it. Kops is great, except it's not integrated into the rest of your VPC etc. as 'old-gregg' suggests.

Or you could hire us, too. We do consulting/kube-as-a-service for folks in AWS/GKE :D (https://www.reactiveops.com).


We had taken this conversation to email, but this is what I wrote:

It's an ecosystem question. Docker became wildly successful because a developer could start using it in production within minutes. In early-stage startups there is NO difference between developers and devops. All of these developers who deployed their MVP apps on Docker (like me) are the ones who are starting to pay for services like Codeship, etc. We have grown with the Docker ecosystem.

The question is whether k8s gives us that flexibility. I don't see that. For example, the next easy step up for people is Docker Compose: single host, many services. And that gradually extends to Docker Swarm.

Lots of people say I "don't need k8s for a 3-node cluster", but I'm still stuck... because I need to do something.

So I have to use Docker Compose. And then I'm solidly locked into the Docker ecosystem.

That's the problem: your value prop is great when someone has 100 clusters... But I'm asking you from an ecosystem perspective: how will you make sure someone like me chooses k8s at some point in the future, when it is unusable for me right now?

Or do you believe that k8s is so vastly superior to anything else out there that, when I scale up enough, I will have no choice but to move to it (and then call you for help)?


> Kubernetes does not offer a clean solution for a number of problems you might face, such as stateful applications.

PetSet is a solution for stateful applications [1]. It's still in alpha, though. I heard rumours that it will enter beta in about 3 months.

1. http://kubernetes.io/docs/user-guide/petset/
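
For the curious, a PetSet manifest from that era looked roughly like this (a minimal sketch based on the linked doc; names, image, and sizes are placeholders): each "pet" gets a stable, ordered identity (db-0, db-1, ...) plus its own PersistentVolumeClaim stamped out from the template.

    apiVersion: apps/v1alpha1
    kind: PetSet
    metadata:
      name: db
    spec:
      serviceName: db            # headless Service that gives each pet a stable DNS name
      replicas: 2                # pets are created in order: db-0, db-1
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
          - name: db
            image: postgres:9.5              # placeholder image
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:                  # one PersistentVolumeClaim per pet
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi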


> In general, Elastic Beanstalk works fine and has a very gentle learning curve; it didn’t take long for all teams to start using it for their projects.

This is one of the most pressing issues I tend to evaluate in infrastructures. I'm curious to hear, a few months down the line, whether the dev teams embraced the Kubernetes setup independently or kept depending on a devops team to do so.


I believe the reason people tend to restrict cluster access to just a handful of people is that, at least for now, these tools lack proper ways to control access to specific resources by specific groups of people. I mean, there are ways to do that in Kubernetes today[1], but the setup is left as an exercise to cluster operators.

I don't know for sure, but this is less of a problem in some distributions, such as OpenShift[2].

Once this problem is solved, there's no reason to "shield" the cluster from devs.

[1] http://kubernetes.io/docs/admin/authorization/

[2] https://docs.openshift.com/enterprise/3.0/admin_guide/config...
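
For example, the RBAC authorization mode (still alpha at the time) can scope a group of developers to a single namespace. A rough sketch; the group and namespace names are made up, and the exact alpha field names may differ slightly from what's shown:

    # Give the "frontend-devs" group full access to resources in the "frontend" namespace.
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1alpha1
    metadata:
      namespace: frontend
      name: frontend-dev
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["*"]
      verbs: ["*"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1alpha1
    metadata:
      namespace: frontend
      name: frontend-dev-binding
    subjects:
    - kind: Group
      name: frontend-devs        # e.g. a group asserted by your auth-N plugin
    roleRef:
      kind: Role
      name: frontend-dev
      apiGroup: rbac.authorization.k8s.io

Wiring this up, and choosing an auth-N plugin that actually asserts groups, is the part that's currently left to the operator.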


Disclaimer: I work at CoreOS and on the Kubernetes auth subsystems.

Today, most of the auth-N plugins[0] (the upstream equivalent to the OpenShift doc you linked to) are relatively minimal. Most of these are aimed at providing pluggability for other apps that focus on ease of use. Google uses the webhook implementation for its GKE auth-N and we (CoreOS) continue to try to make our OpenID Connect server federate to more backends (LDAP, GitHub, etc.).[1]

With this kind of tooling, it's completely possible to map auth-Z policies to, say, a group of LDAP users. But yes, there's a lack of canonical documentation on how to go about this. We're always trying to negotiate how much of this should live in core Kubernetes and how much should be provided by third-party services (and what the upstream docs should endorse).

But today I'd still (perhaps because I'm biased) recommend giving your CEO different credentials for your prod cluster :)

[0] http://kubernetes.io/docs/admin/authentication/

[1] https://github.com/coreos/dex
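
To make that concrete, pointing the API server at an OIDC issuer such as dex comes down to a few kube-apiserver flags. A rough sketch as part of a static-pod manifest; the image tag, issuer URL, client ID, and claim names are placeholders (flag names per the upstream authentication docs):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: gcr.io/google_containers/hyperkube:v1.4.0   # placeholder image/tag
        command:
        - /hyperkube
        - apiserver
        # ...the usual etcd/service-CIDR/authorization flags elided...
        - --oidc-issuer-url=https://dex.example.com        # your dex (or other OIDC) issuer
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups   # groups from LDAP/GitHub (via dex) become usable in auth-Z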


I like dex a lot. Very cool project. I was going to use it + k8s for a side project until I realized that, for the scale I had (1 user), Auth0 + Heroku is still just fine. But dex lets you run your own Auth0, completely in your control, which is awesome. No more setting up authentication!


To add to Eric's point: the long-term goal is to flesh this stuff out and have both flexibility and multi-tenancy in Kube. OpenShift could be opinionated about security (no compromise), while for Kube we wanted to make sure we first built a system that was flexible enough to be used in many ways.

I think that was the correct choice, because most Kube deployments are still single ops team focused, and it was possible for CoreOS to build and integrate Dex without having to face a high bar.

It'll get more opinionated eventually :)


I've been using ecs-cli for my container deployments on AWS: https://github.com/kaihendry/count

I wonder if I should try Kubernetes. It seems a lot more complex, but the tooling looks better maintained.


There's no problem in sticking with your current solution if it's working for you.


I'd be very interested to see the code for the AWS Lambda functions mentioned—specifically the one about ephemeral development environments based on open PRs. We're building something similar at InQuicker and it'd be great to see how other people are approaching it.


It's not open source, but I'll try to sell the idea to our CTO. :)

Just to give you more details about its inner workings: the function is written in JavaScript, gets called when certain events come from GitHub ('pull_request', 'status', and 'push'), and uses kubectl to modify the corresponding deployments depending on the event.

Nothing fancy there, trust me.


Do you only create copies of the stateless pieces of each stack, or do you also copy databases?


We currently only run stateless apps on Kubernetes. All databases are hosted elsewhere (RDS, MongoDB Cloud, etc.).


Let me rephrase that: Do you spin up a new copy of any necessary data stores when you deploy a topic branch of an app, or do all versions of the app share the same view of data in their environment (e.g. staging/production)?


No, these 'development' environments point to other services in 'staging'.


Great article, very easy to read and tons of useful info. Love seeing fellow Brazilians giving it a go.


Thanks, man! :)


> First, we created one DNS record for each service (each initially pointing to the legacy deployment in Elastic Beanstalk) and made sure that all services referenced each other via this DNS. Then, it was just a matter of changing those DNS records to point the corresponding Kubernetes-managed load balancers.

Could anyone explain this? Does it mean the services are still accessed via a public IP, or are the Kubernetes-managed load balancers private IPs that the individual nodes know about?


By default, all ELBs created by Kubernetes are external, and they load-balance traffic across every node in the cluster on the service's node port (each service gets its own port number, which is the same on every node).


So, that's a lot of AWS traffic cost for the traffic between different services :(


You can stay within the cluster by connecting via the service name (i.e., https://mynodeapp/); the services are also exposed to one another as environment variables.

If you need to communicate with another namespace in the cluster, you can use servicename.myothernamespace.svc.cluster.local, etc.


That's true if you reference each service via its external ELB CNAME (or a Route53 record that points to it); however, Kubernetes comes with a built-in DNS server that you can use to discover the endpoint IP addresses in the pod CIDR (all pods are in the same subnet via the overlay network).


You can define services to be internal. I presume this does not use an ELB, but I have not tried it yet.


A load balancer (GLBC/ELB) is only provisioned if you declare type: LoadBalancer. ClusterIP and NodePort don't provision one, and you can always communicate over the pod IP or via the SkyDNS *.svc.cluster.local records, the former having HA implications.
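
To make the distinction concrete, a minimal Service sketch (names and ports are made up): with type: LoadBalancer Kubernetes provisions an external ELB/GLB, while dropping that line leaves you with a cluster-internal ClusterIP reachable as mynodeapp.myothernamespace.svc.cluster.local.

    kind: Service
    apiVersion: v1
    metadata:
      name: mynodeapp                # hypothetical, matching the example above
      namespace: myothernamespace
    spec:
      type: LoadBalancer             # omit for the default (internal) ClusterIP; NodePort is the middle ground
      selector:
        app: mynodeapp
      ports:
      - port: 80                     # port other services use via the cluster DNS name
        targetPort: 8080             # container port in the backing pods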


There has been some work toward a bare-metal IP allocator (you route a subnet to the cluster, and when someone adds a Service with type LoadBalancer you get handed a magic HA VIP). Not yet in Kube, just OpenShift, but I think we'll eventually work to make it something you can use if you control your network infra.


Is there any work toward running OpenShift Origin on GKE? I'm aware that I can manually configure things to do so, but I'd really prefer for it to just be a deployment/daemonset that ties into my GKE cluster. This probably affects the customer base of OpenShift, so I assume it's not likely, but I'd really love to use OpenShift on GKE.


Did you try ECS as an alternative to Elastic Beanstalk? I'd be interested in hearing your pros and cons for ECS vs. Kubernetes on EC2.


I didn't try it because, as mentioned in the footnotes, ECS isn't available in some of the regions we use, but if you look around, you can find comparisons between the two[1].

I'll probably give ECS a go when (if?) Amazon enables it globally.

[1] https://railsadventures.wordpress.com/2015/12/06/why-we-chos...


I am very close to giving "k8s" a try.

Despite a lot of work trying to figure out how to get us up on Docker for CI workloads, the ecosystem is very confusing in terms of Docker Cloud, docker-compose vs. docker-machine (on Linux vs. OS X), etc.



