
I think you are suffering a bit from the over-engineering of Kubernetes and Docker and throwing the baby out with the bathwater. Containers in general are great for simplifying deployment, development and testing. We currently use Docker and it works great, but we use just Docker, and only as a way to simplify the above. We deploy one application to one system (EC2 via ASG).
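
To give a sense of how little machinery that takes, the whole deploy step amounts to something like this (a sketch only; the image name, tag and ports are made up):

    # Hypothetical "just Docker" deploy step, e.g. run from an ASG
    # instance's launch script. Image name and ports are invented.
    import subprocess

    IMAGE = "registry.example.com/myapp:1.4.2"

    def deploy() -> None:
        # Pull the pinned image, then run it with a restart policy so
        # the Docker daemon brings it back if the process dies.
        subprocess.run(["docker", "pull", IMAGE], check=True)
        subprocess.run(
            ["docker", "run", "--detach", "--name", "myapp",
             "--restart", "unless-stopped",
             "--publish", "80:8080",
             IMAGE],
            check=True,
        )

    if __name__ == "__main__":
        deploy()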

There is also nothing forcing you to use Docker for containers. LXC also works great, and since it has no separate runtime daemon you avoid the stability issues you can get with Docker. Though I must say Docker has improved a lot, and I think it will stabilize to the point where _it_ won't be an issue (I'm not as sure about Kubernetes).




Sure, I agree that both Docker and k8s are over-engineered (at some level k8s probably had to take on a lot of that complexity just to interface with Docker), and that there are better containerization processes/runtimes.

But I still don't think containers are what most people want. People need/want ultra-lightweight VMs with atomized state. NixOS looks promising but I haven't used it yet. It seems to give you a way to deterministically reason about your system without just shipping around blobs of the whole userland. You can also approximate this on other systems with a good scripting toolkit like Ansible.


All I want is a way to encapsulate an application and its dependencies as a single artifact that I can run, test and deploy. Right now containers are the best way to achieve this but I'll probably be happy with any solution to this problem.

NixOS does look interesting and I've considered playing with it for personal projects, but IMO it is still too fringe for use at work, where you need both technical usefulness and a general consensus that it is appropriate (i.e. mindshare).


It seems deploying thousands of ultra-lightweight VMs with atomized state would still require an orchestration layer. I don't follow how that would remove complexity and/or improve stability.


It removes complexity because you can reuse a lot of stuff that already exists, whereas Kubernetes has established itself as a black box.

Kubernetes has the concept of an "ingress" controller because it has established itself as the sole router for all traffic in the cluster. We already have systems to route traffic and determine "ingress" behind a single point (NAT). Kubernetes also manages all addressing internally, but we have technologies for that (DHCP et al). Kubernetes requires additional configuration to specify data persistence behavior, but we have many interfaces and technologies for that.

VMs would be able to plug into the existing infrastructure instead of demanding that everything be done the Kubernetes way. That reduces complexity because it lets you reuse your existing infrastructure, and it doesn't lock you into a superstructure of redundant configuration.

Kube is very opinionated software in this way, and it makes sense that they'd throw compatibility and pluggability to the wind: the strategic value is not in giving an orchestration platform to AWS users, but in encouraging people to move to a platform where Kubernetes is a "native"-style experience.

As for orchestration, people were orchestrating highly-available components before Kubernetes and its ilk; tools like Ansible were pretty successful at it. I have personally yet to find executing a `kubectl` command less painful than executing an Ansible playbook over an inventory file -- the only benefit would be faster response time for individual commands, and you'd still need a scripting layer like Ansible if you wanted to chain them effectively (see the sketch below).
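
To illustrate, the scripting layer I mean is nothing more exotic than this kind of glue (a sketch; the manifest path and deployment names are made up):

    # Toy scripting layer around kubectl -- the chaining an Ansible
    # playbook would otherwise do for you. Names are hypothetical.
    import subprocess

    def kubectl(*args: str) -> str:
        # Run one kubectl command, failing loudly on non-zero exit.
        result = subprocess.run(["kubectl", *args], check=True,
                                capture_output=True, text=True)
        return result.stdout

    # Apply manifests, then block until each deployment has rolled out.
    kubectl("apply", "-f", "manifests/")
    for name in ("api", "worker"):
        kubectl("rollout", "status", f"deployment/{name}")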


Right, and this orchestration layer is supposed to be designed for resilience and stability from the start, because it's an architectural problem that cannot be fixed later without a complete redesign of everything. There is one proven architecture for this, supervision trees, and it addresses both problems: complexity and stability. VMs are not even necessary for it, nor are cloud platforms.


I think at this point it's accepted that supervision trees are an app or runtime concern. Operating systems and distributed platforms have to support a wider range of architectural patterns.

Disclosure: I work on a cloud platform, Cloud Foundry, on behalf of Pivotal.


Supervision trees are flexible enough to support any architectural pattern imaginable; you've got it kind of backwards. They are like trees of arbitrary architecture patterns. The idea is to limit the scope of any possible problem so that it only affects a tiny part of the system, while at the same time reducing complexity, because each supervisor deals with only a small responsibility. Kind of a tiny orchestration system for each service, instead of a centralized one.
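
As a rough sketch of the shape (the canonical implementation is Erlang/OTP's supervisor; this toy Python version only illustrates the one-for-one restart idea, and every name in it is made up):

    # Toy one-for-one supervisor: each dead child is restarted on its
    # own, so a crash in one worker never touches its siblings.
    import multiprocessing
    import time

    def supervise(children, max_restarts=5):
        """children: dict mapping a name to a zero-argument callable."""
        restarts = {name: 0 for name in children}
        procs = {name: multiprocessing.Process(target=fn)
                 for name, fn in children.items()}
        for proc in procs.values():
            proc.start()
        while True:
            for name in children:
                if procs[name].is_alive():
                    continue
                if restarts[name] >= max_restarts:
                    # Restart budget exhausted: escalate to the parent
                    # supervisor instead of looping forever.
                    raise RuntimeError(f"{name} keeps crashing; escalating")
                restarts[name] += 1
                # One-for-one: restart only the dead child, never siblings.
                procs[name] = multiprocessing.Process(target=children[name])
                procs[name].start()
            time.sleep(0.5)

Each subtree only knows how to restart its own children; failures that exhaust its restart budget escalate upward, which is exactly what keeps any single supervisor's responsibility small.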



