Everything old is new again? If there isn't any good, easy method in Kubernetes, as q3k describes, then why should somebody take this overengineering approach?
> As someone who works in operations: this is a pretty common thing. Let's still do things the way we did them on bare metal, but inside Kubernetes.
Because HAProxy and nginx are proven technologies with limited and very well-known failure modes, which means there are exactly 5-6 well-documented, well-understood, and very well-known ways HAProxy and nginx can fail.
Experienced ops people understand that one does not optimize for blue skies: while "everything works wonderfully," all not-completely-broken technologies perform at approximately the same level. Rather, these people optimize for quick recovery from the "it is not working" state.
I believe the OP's point is that using HAProxy with a floating IP is a bit of an anti-pattern in Kubernetes. The idiomatic Kubernetes way would be to use an ingress object. Both HAProxy and Nginx have ingress controllers for Kubernetes, which use a load balancer in front of them. The article then goes on to talk about using corosync and pacemaker, which are cluster technologies in their own right. This is really bizarre. Running a cluster on a cluster would not be many people's idea of "optimizing for quick recovery."
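For reference, the idiomatic approach is roughly this kind of object — a minimal sketch, where the hostname, Service name, and ingress class are all illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: haproxy   # or "nginx", depending on the installed controller
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc     # hypothetical backing Service
            port:
              number: 80
```

The ingress controller watches objects like this and reconfigures HAProxy/nginx on the fly; the load balancer in front only needs to reach the controller pods, with no floating IPs managed by hand.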
The article, though, also suggests adding corosync and pacemaker. So that's 4 things on top of an already complex K8S. I bet someone later throws in a service mesh. Imagine troubleshooting all that.
That's exactly why, to see through this fog, you want to install a layer-7 load balancer designed with observability in mind. Seeing what's happening is critical when everything changes by itself under your feet. With a component like HAProxy you get accurate and detailed information about what's happening and about the cause of occasional failures, allowing you to address them before they become the new normal.
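To make that concrete, much of this observability comes built in; enabling HAProxy's runtime stats page is a few lines of configuration (a sketch — the port and refresh interval are arbitrary choices):

```
# haproxy.cfg fragment: expose the built-in statistics page
frontend stats
    mode http
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
```

The resulting page shows per-backend session counts, error and retry counters, and health-check state, which is exactly the kind of detail you want when diagnosing intermittent failures.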
1000 times this. Corosync and Pacemaker alone are more or less as complex as K8S itself. Well, I'm exaggerating a bit, but really, all the HA clusters done with corosync that I've seen in the past 10 years ended up failing anyway (and with fireworks!) one way or another.
Add this on top of Kubernetes? No, thanks. Life is stressful enough.
Yup. Corosync + Pacemaker can and will implode in spectacular fashion exactly when you don't want them to. And a 2-node cluster will split-brain sooner rather than later. I'd rather use keepalived if required, since it's a lot easier to understand and manage.
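For comparison, a keepalived setup for the same floating-IP job fits in a few lines (a sketch — the interface, router ID, and address are placeholders; the standby node carries the same block with `state BACKUP` and a lower priority):

```
# /etc/keepalived/keepalived.conf on the primary node
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24
    }
}
```

If the primary stops sending VRRP advertisements, the backup promotes itself and takes over the virtual address — one protocol, one config file, one failure mode to reason about.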
I agree, that's a very odd way of addressing the problem of entry-point failure:
* If one is doing it in a cloud and wants to avoid potential issues with clients behind broken DNS resolvers, then one simply pins the entry-point instances to specific IP addresses; in the event of an entry-point failure, the IP address of the failed instance is reassigned to a standby instance, resulting in a nearly immediate traffic swap.
* If one is running entry points on physical hardware, then the solution is to bind the entry points to a virtual IP address that floats between the instances using VRRP.
* Finally, if one wants to be really super-clever and not drop sessions during a controlled fail-over, one does VRRP + service MAC addresses, similar to Fastly's faild or Google's Maglev. (But really, this is chasing 99.99999% reliability when in most business cases 99.9% would do just fine and 99.99% would be amazing.)
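The first bullet maps directly onto most cloud APIs; with AWS Elastic IPs, for instance, the swap is a single call (a sketch — the allocation ID and instance ID are placeholders):

```
# Move the failed entry point's Elastic IP onto the standby instance
aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0standby00000000000 \
    --allow-reassociation
```

The `--allow-reassociation` flag lets the address move even though it is still nominally attached to the dead instance, which is what makes the swap nearly immediate.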
You wouldn't use VRRP in these cases these days: the networking world has moved on, and L3 is now generally pushed down to the ToRs, so you would use BGP to announce a /32 or /128 and configure an ECMP group on the ToR. This gives you not only redundancy but also traffic splitting. Maglev uses BGP too (although somewhat indirectly, for scale reasons).
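As a concrete sketch of that setup, announcing a /32 service address from each entry point with BIRD 2 might look like this (the AS numbers, addresses, and protocol names are all illustrative):

```
# bird.conf fragment on an entry-point host
protocol static vip {
    ipv4;
    route 203.0.113.10/32 blackhole;   # the service address this host answers for
}

protocol bgp tor {
    local as 65001;
    neighbor 192.0.2.1 as 65000;       # the ToR switch
    ipv4 {
        export where net = 203.0.113.10/32;
    };
}
```

With every entry point announcing the same /32, the ToR installs an ECMP route across them; when a host dies, its BGP session drops and the ToR stops hashing traffic to it.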
Just because a customer has access to an L2 fabric via a VLAN between the customer's hardware does not mean the customer is able to tweak the ToR switch configuration.
Using BGP on an L2 VLAN to handle a single-digit number of IP addresses allocated to the entry points is akin to using K8S to host a static HTML page.