For example - the metrics-server that AKS ships out of the box runs in a highly insecure manner. If you want to change that -> you cannot, because their automated add-on reconciliation keeps reverting your changes.
Another - constant disconnections of PVs (persistent volumes)..
Another - roughly 1 in 3 newly provisioned VMSS nodes comes up with a broken kubelet and never successfully registers with the control plane. I was literally shocked when it happened twice in a single day. The response from support was that we are supposed to monitor for that ourselves and drain the unsuccessfully provisioned nodes (we already were, and it was mentioned in the opening ticket). Makes scaling horizontally REALLY PAINFUL..
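The "monitor it yourself" workaround support suggested can be sketched as a small watchdog script. This is a minimal illustration, not what support provided: it assumes `kubectl` is configured against the cluster, and the drain flags shown are the commonly used ones for evicting workloads.

```shell
#!/bin/sh
# Hypothetical watchdog for nodes that came up broken and never registered
# properly. All names here are illustrative.
set -eu

# Core filter: from `kubectl get nodes` output, print the names of nodes
# whose STATUS column reads NotReady (skipping the header row).
find_notready() {
  awk 'NR > 1 && $2 == "NotReady" { print $1 }'
}

# Only attempt the drain when a reachable cluster is actually available.
if nodes=$(kubectl get nodes 2>/dev/null); then
  printf '%s\n' "$nodes" | find_notready | while read -r node; do
    echo "draining broken node: $node"
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  done
fi
```

In practice you would run something like this on a timer (or as a CronJob with suitable RBAC) and pair it with an alert, since a drained-but-broken node still has to be deleted from the scale set.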
Azure CNI's default IP reservation per node is 30, and it cannot be lowered - so if you have a dedicated service node and want only a few pods to run on it for HA - well, sucks to be you.
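For what it's worth, the reservation tracks the node pool's `--max-pods` value, which (as an assumption based on current Azure documentation, and contrary to my experience above) can reportedly be set lower than 30 at node pool creation time - but not changed afterwards. A hypothetical invocation, with illustrative resource names:

```shell
# Sketch only: create a small service node pool with a reduced per-node
# IP reservation. Resource group, cluster, and pool names are made up;
# --max-pods can only be chosen when the pool is created.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name svcpool \
  --node-count 3 \
  --max-pods 10
```

Even if that works for new pools, it does nothing for an existing pool - you have to create a new one and migrate workloads over.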
Kubenet not working until recently with anything - for example Application Gateway (AG), though AG is a disaster of a service by itself.
Various networking-related API failures - sometimes the control plane lost its connection to the AKS subnet for a while (it fixed itself, but still...)