Nobody wants to wake up at 1AM because their singly-homed service just went down. Kubernetes might be a fine tool for avoiding that, but I want to point out that there are much simpler failover solutions available. Failover is something you should definitely have, but also something you definitely don't need kubernetes for.
This feature won't even work for anything that wasn't designed for fault tolerance in the first place. I believe it can only work for things that either rely on other properly fault-tolerant services, or that are completely stateless and idempotent (so they can be retried) and eventually consistent (in which case the state still has to be handled by properly fault-tolerant systems). Either way, kubernetes itself isn't what provides the fault tolerance.
You mean a failover solution? It's hard to give a list here because it is such a large space and it completely depends on your product/application. It is more like a category of use cases than one specific use case.
Some ideas for things you could do in a web/website/internet context assuming you have a single point of presence:
One type of "HA" is network-level failover; haproxy (L7), nginx (L7) and pacemaker (usually L3) seem to be very popular options, but I think there are dozens of other alternatives. In terms of network-level failover, things get more interesting once you are running in multiple locations, have more than one physical uplink to the internet or do the routing for your IP block yourself.
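To make that concrete, here is a minimal sketch in Go of what an L7 failover proxy does, roughly the behaviour you get from marking a second server as "backup" in haproxy. The backend addresses and the /healthz path are made up, and a real load balancer runs its health checks out of band rather than per request:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// Hypothetical backends; in haproxy terms the second one would be marked "backup".
var primary, _ = url.Parse("http://10.0.0.1:8080")
var backup, _ = url.Parse("http://10.0.0.2:8080")

// healthy does a cheap active health check, similar to haproxy's "check" option.
func healthy(u *url.URL) bool {
	c := http.Client{Timeout: 2 * time.Second}
	resp, err := c.Get(u.String() + "/healthz")
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			target := primary
			if !healthy(primary) { // fail over when the primary stops answering
				target = backup
			}
			r.URL.Scheme = target.Scheme
			r.URL.Host = target.Host
		},
	}
	log.Fatal(http.ListenAndServe(":80", proxy))
}

Everything the real tools add on top of this (health-check tuning, connection draining, virtual IPs, and so on) is exactly why you would use them instead of rolling your own.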
For application-level failover and HA, one option for client/server style applications is to move all state into a database which has some form of HA/failover built in (so pretty much every DB). I think this is very common for web applications and also for some intranet applications.
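A rough sketch of that pattern (the DSN, the table, and the choice of a Postgres driver here are just placeholders): every instance of the app is stateless, so any instance can serve any request, and the hard failover problem is delegated to whatever HA setup the database has.

package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"

	_ "github.com/lib/pq" // placeholder driver; any database with an HA story works
)

func main() {
	// Point the DSN at the database's HA endpoint (a proxy, a virtual IP, or a
	// multi-host connection string), never at a single node.
	db, err := sql.Open("postgres", "postgres://app@db.internal/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// All state lives in the database, so this process can die at any moment and
	// another instance picks up where it left off.
	http.HandleFunc("/visit", func(w http.ResponseWriter, r *http.Request) {
		var count int
		err := db.QueryRow(
			"UPDATE counters SET n = n + 1 WHERE id = 'visits' RETURNING n").Scan(&count)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "visit #%d\n", count)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}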
Assuming you have a more complex application running in a datacenter, there is also a lot of very interesting stuff you can do using a lock service like Zookeeper or etcd, or really almost any database. Of course, you can also make your app highly available without using an external service; there is a mountain of failover/HA research and algorithms that you can put directly into your app (2pc, paxos, raft, etc.). All of these require some cooperation/knowledge from the application, though. For some apps it might be very hard to make them "HA" without relying on an external system, but for some apps it will be trivial.
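For the lock-service flavour, here is a sketch of leader election using etcd's client-side concurrency helpers (the endpoint and key prefix are made up; Zookeeper recipes or even a database row lock follow the same shape): one instance holds a lease and does the work, the others block until the leader dies and its lease expires.

package main

import (
	"context"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"etcd.internal:2379"}, // made-up endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The session is backed by a lease; if this process dies, the lease expires
	// and another candidate wins the election.
	sess, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	election := concurrency.NewElection(sess, "/my-service/leader")
	hostname, _ := os.Hostname()

	// Campaign blocks until this instance becomes the leader.
	if err := election.Campaign(context.Background(), hostname); err != nil {
		log.Fatal(err)
	}
	log.Printf("%s is now the active instance", hostname)
	// ... do the work only the leader should do ...
}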
Note that when you move away from a web/datacenter context to something like telecommunications or industrial automation, i.e. something that doesn't run as a process in a datacenter but is implemented as (or on) hardware in the field, failover and high availability take on an entirely different meaning and are done in a totally different way.
Most single-instance, single-zone failover scenarios can be handled with shell scripts, the AWS API, and cron.
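For the flavour of it, here is a minimal sketch of that approach, written in Go rather than shell and using made-up resource IDs: cron runs it every minute, it health-checks the primary, and if the primary is down it moves an Elastic IP to a standby instance through the EC2 API.

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// Made-up identifiers: the health check URL, the Elastic IP allocation and the standby instance.
const (
	primaryURL   = "http://primary.example.com/healthz"
	allocationID = "eipalloc-0123456789abcdef0"
	standbyID    = "i-0123456789abcdef0"
)

func main() {
	client := http.Client{Timeout: 5 * time.Second}
	if resp, err := client.Get(primaryURL); err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return // primary is healthy, nothing to do
		}
	}

	// Primary looks dead: re-point the Elastic IP at the standby instance.
	svc := ec2.New(session.Must(session.NewSession()))
	_, err := svc.AssociateAddress(&ec2.AssociateAddressInput{
		AllocationId: aws.String(allocationID),
		InstanceId:   aws.String(standbyID),
	})
	if err != nil {
		log.Fatalf("failover failed: %v", err)
	}
	log.Println("primary unhealthy, Elastic IP moved to standby")
}

It obviously leaves out failback, split-brain protection and alerting, but for a single-instance, single-zone setup it covers the 1AM page.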
But the parent's comment is missing the point. K8s is not for failover. K8s is literally just a giant monolith of microservices for running microservices. It's not intended to provide failover for non-microservices; it's intended only to run microservices, and as a side effect of needing to be able to scale them, it inherently provides some failover mechanisms.
# Build stage: compile the Go binary inside the official Go image.
FROM golang:1.8-alpine
ADD . /go/src/hello-app
RUN go install hello-app

# Runtime stage: copy only the compiled binary into a minimal image.
FROM alpine:latest
COPY --from=0 /go/bin/hello-app .
ENV PORT 8080
CMD ["./hello-app"]
The difference in simplicity is not in the interface that is presented to you as a user. The difference is that your shell script will have a couple hundred lines of code, while the docker and kubectl commands from above will pull in literally hundreds of thousands of lines of additional code (and therefore complexity) into your system.
I'm not saying that is a bad thing by itself, but there definitely is a huge amount of added complexity behind the scenes.
That's nice and all as long as it works. If there are any problems with it (network, auth, whatever), have fun even diagnosing the bottomless complexity behind your innocuous lil' kubectl command.
The error messages are quite good. There are extensive logs. There are plenty of probes in the system. Just the other day we ran `kubectl exec -it <pod_name> -- /bin/bash` to test network connectivity for a new service.
In my experience so far, Kubernetes is a well-engineered system.
The point is that automation software for admin tasks is a zero-sum game: the more a tool does, the less your devops staff will be able to leverage their know-how. The more magic your orchestration does, the less your ops will be able to fix prod problems. And to get anything done with k8s you'll need a whole staff of very expensive experts.
Look at it this way. Kubernetes is a Tesla Model X, and scripts/aws/cron is an electric scooter.
You can try to go cross-country with either of them. One was engineered to survive extreme environments, protect you from the elements, and move really fast. The other was engineered to leave you exposed, operate in a moderate climate, and go much slower.
If you have problems with the former, it's time to call the technicians. If you have problems with the latter, you might fix it yourself with a pocket screwdriver.
That was my point. I wanted to point out that while some people have only/first heard about failover in the context of kubernetes, it is not something that is specific to kubernetes or even the problem that kubernetes was built to solve.
Of course it is not designed to be a failover solution specifically and using it (exclusively) as such would be ill-advised; I was just trying to be diplomatic while pointing that out.
We're using Elastic Beanstalk currently (multi-container Docker). Not that I'm advocating it, and I'm really interested in k8s now that AWS has EKS, but Elastic Beanstalk is really simple to use for a simple setup.