
What happens when your machine dies? You get paged, your service is down, and you have to migrate it to a new machine.

With Kubernetes, it starts a new container on a new machine, and you don't get paged.

You could write scripts to do failover yourself, but that takes work and the scripts will be buggy.

I was more questioning why a .container file would work for system services versus application services, since basically all those same problems occur for system services too.

Either way, this type of argument just comes down to "should I cluster or not", but to think out loud for a bit, that's just basic HA planning: the simplest solution is keepalived/etc. for stateful services and standard load balancing for stateless ones. Don't want load-balanced services running all the time? Socket activation (sketch below). Don't have a separate machine? Auto-restart the service; you can't cluster anyway. The only thing you'd really have to script is migrating application data over if you're not already using a shared storage solution, and I'm not sure there are any easier solutions in Kubernetes for that.
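For anyone unfamiliar, socket activation is just a pair of systemd units; a minimal sketch, with a hypothetical service name and port:

    # myapp.socket -- systemd holds the listening socket and starts the
    # service on the first incoming connection.
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # myapp.service -- receives the already-open socket via sd_listen_fds(3)
    # instead of binding the port itself.
    [Service]
    ExecStart=/usr/local/bin/myapp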


What’s the advantage of configuring, testing, and maintaining all of that instead of a 10-line Kubernetes manifest that does more?
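For scale, a minimal Deployment manifest in that spirit (the name and image are hypothetical placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp              # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels: {app: myapp}
      template:
        metadata:
          labels: {app: myapp}
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:latest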

Not having to install and manage Kubernetes? Unless you're paying someone else to run it for you (in which case this entire conversation is sort of moot, as that's way out of scope for the comparison), all that stuff is still running somewhere and you have to configure it. E.g. even in small-scale setups like k3s you have to set up shared storage or kube-vip yourself for true high availability. It's not some magic bullet for getting out of all operational planning.

Also, even as separate components it's not really "all of that". Assume an example setup where the application is a container with a volume on an NFS share for state: on a given node we'd need to install podman and keepalived, then drop in a .container file, a .volume file, and the keepalived conf (sketches below). An average keepalived conf is probably 20-ish lines (including notification settings for failover, so drop about 8 lines if you don't care or monitor externally), a similar .container file from an application I have deployed is 24 lines (including whitespace and the normal systemd unit boilerplate), and the NFS .volume file is 5 lines. So ballpark 50 lines of config, if you or others want to compare configuration complexity.
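A minimal sketch of those three files (the VIP, interface, NFS server, share path, and image are all hypothetical, and a real keepalived conf would add the notification settings mentioned above):

    # keepalived.conf -- floats a virtual IP between nodes via VRRP
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            192.168.1.100/24
        }
    }

    # myapp.container -- quadlet unit; podman generates the systemd service
    [Container]
    Image=registry.example.com/myapp:latest
    Volume=myapp.volume:/data
    PublishPort=8080:8080

    [Service]
    Restart=always

    [Install]
    WantedBy=multi-user.target

    # myapp.volume -- quadlet volume backed by the NFS share
    [Volume]
    Type=nfs
    Device=192.168.1.10:/exports/myapp
    Options=rw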

Also, fun fact: you could still use that Kubernetes manifest. Podman accepts .kube files to manage resources against itself or a Kubernetes cluster; I've been recommending it as sort of a middle ground between going all in on k8s and staying on a non-clustered deployment, since it lets you transition slowly.
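A .kube unit is basically just a pointer at the manifest; a minimal sketch, with a hypothetical path:

    # myapp.kube -- quadlet unit that runs a Kubernetes YAML via podman kube play
    [Kube]
    Yaml=/etc/containers/systemd/myapp-kube.yaml

    [Install]
    WantedBy=multi-user.target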


You can always use something like Docker Swarm or Nomad, which achieve the same end result as Kubernetes (clustered container applications) without the complexity of managing Kubernetes itself.

Just spawn another VPS with your application and connect it to the load balancer. Even better: use Fedora CoreOS with a Butane config and make that VPS immutable.
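A minimal sketch of such a Butane config (the SSH key, unit name, and image are hypothetical placeholders); butane compiles it to Ignition, which provisions the node on first boot:

    variant: fcos
    version: 1.5.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... user@example.com   # placeholder key
    systemd:
      units:
        - name: myapp.service
          enabled: true
          contents: |
            [Unit]
            Description=Run the application container
            After=network-online.target
            Wants=network-online.target

            [Service]
            ExecStart=/usr/bin/podman run --rm --name myapp registry.example.com/myapp:latest
            ExecStop=/usr/bin/podman stop myapp

            [Install]
            WantedBy=multi-user.target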


