This is a neat project. The documentation could be clearer, however. I interpreted the phrase "create a cluster of thousands of nodes in seconds" to mean that it offered a significant speedup in initializing real clusters. Rather, this appears to be a tool for mocking the kubelet to enable development and testing at scale. Or am I misunderstanding?
I'm not so sure. My first association was some fancy kind of "daemonless" system for controlling nodes, maybe by automatically SSHing into the node or whatever.
Looks like a neat tool for learning k8s without paying for lots of nodes. I bought a used Dell R620 so I was able to create a dozen VMs to test with, but something like this could offer a similar setup without requiring the big hardware.
One of the use cases mentioned here is testing. One should also look at kind [1]. It is lightweight and a really good candidate for setting up dev environments.

[1] https://kind.sigs.k8s.io/
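For reference, spinning up a small multi-node dev cluster with kind is roughly this (a minimal sketch; the cluster name and node counts are just illustrative):

    # kind-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker

    # create the cluster and point kubectl at it
    kind create cluster --name dev --config kind-config.yaml
    kubectl cluster-info --context kind-dev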
I wonder how big of an effort it would be for Apple to put some engineers on making macOS support kernel namespaces/cgroups. That would close the gap on all these cool container features Linux has, which currently require a virtual machine to recreate on a Mac.
Docker Desktop runs dockerd in a Linux VM using Apple's Hypervisor framework. You can also run containers in a Linux VM with the Parallels or VMware Fusion hypervisors; QEMU on macOS also uses HVF, not KVM. But you can't run VMs inside those VMs on Apple Silicon as it stands today (this works fine on Intel Macs), which means you can't experiment with KVM - one of the killer features of Linux - and things like https://kubevirt.io/ and Firecracker. Or VMs running stuff like Proxmox, TrueNAS, or ESXi (all possible on Intel Macs and every x86_64 CPU).
It seems to me that the right fix is for Docker Desktop to support M1. Docker, kubelet, the k8s control plane, and everything else have supported ARM for ages. There is no need for that extra VM and therefore no blocker on nested virtualization.
It's not an ARM problem, it's a kernel/OS one. Same as Windows, macOS simply doesn't have what it takes (namespaces, cgroups, etc.) to run Docker/Linux containers natively, therefore an intermediary Linux VM is needed.
Agreed, but considering that Kubernetes now supports joining Windows workers to run Windows containers, as well as integrated support for dockerd inside WSL2... that leaves macOS as honestly the worst platform for any kind of container-related work.
To what extent can I run Kubernetes locally with this tool? I don't need many k8s features, and existing local k8s solutions always have noticeable CPU usage.
The docs don’t clearly communicate things like this.
There are some projects that store data in k8s but can otherwise run locally. One can run those without a local k8s cluster by using kubebrain with in-memory storage: https://github.com/kubewharf/kubebrain
You can actually run the kubelet in standalone mode without etcd, the controller-manager, the scheduler, or the api-server. Simply drop static pod manifests into /etc/kubernetes/manifests and the kubelet will launch the pods for you; remove them and the kubelet will tear them down. Fun fact: this is how kubeadm bootstraps a control plane from "scratch".
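To illustrate, a minimal sketch (the manifest name and image are just examples; --pod-manifest-path is the classic flag for pointing a standalone kubelet at a manifest directory, though newer versions prefer staticPodPath in the kubelet config file, and a working container runtime such as containerd is still required):

    # /etc/kubernetes/manifests/nginx-static.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-static
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80

    # run the kubelet standalone, watching that directory
    kubelet --pod-manifest-path=/etc/kubernetes/manifests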
What kinds of things would someone evaluate with this where the limitations aren’t a factor? If I were testing load, I’d want performance and lifecycle behavior to be accurate, but I guess there are some other use cases?
One area we've been poking at is better understanding API server and scheduler decisions when a large number of nodes fail at the same time. We don't actually need to be running the workloads to get useful information.
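As a rough sketch of that kind of experiment (the type=kwok label is hypothetical - use whatever label your fake nodes actually carry):

    # knock out all fake nodes at once and watch the fallout
    kubectl delete nodes -l type=kwok

    # observe how pods pile up as Pending and how quickly the scheduler reacts
    kubectl get pods -A --field-selector status.phase=Pending -w
    kubectl get events -A --sort-by=.lastTimestamp | tail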
We're experiencing a lot of performance issues with Istio in larger clusters of 4-5k pods and several thousand services, where high churn rates cause the control plane to go ballistic.
KWOK allows us to reproduce this load without spending a penny.
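A crude way to reproduce that churn could be as simple as cycling a deployment (the deployment name and replica counts here are illustrative; the pods land on KWOK's fake nodes, so no real containers run):

    # cycle a deployment up and down to generate pod churn
    while true; do
      kubectl scale deployment churn-test --replicas=2000
      sleep 60
      kubectl scale deployment churn-test --replicas=0
      sleep 60
    done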
Slight tangent inspired by the premise, but why do EKS etc cap the number of pods and other resources on a node?
We try to enable our customers to start on 1 node and scale up at their discretion, but found we had to go through all sorts of contortions to make that work. Likewise, k8s is resource hungry; we can run the docker compose version on a much smaller laptop wrt CPU & RAM.
In my testing, it's a limitation in either the container runtime or kubelet in processing all the events that flow from pods. And since the container networking and container storage interfaces aren't part of Kubernetes, there are likely scaling issues in those pieces of software as well.
I believe you're correct, although pedantically that would only apply if one is using their vpc-cni <https://github.com/aws/amazon-vpc-cni-k8s#readme> and not a competing CNI. The kubelet offers a configurable for the number of Pods per Node <https://github.com/kubernetes/kubelet/blob/v0.26.2/config/v1...>, which defaults to 110 for what I would presume are CIDR or pid-cgroup reasons, and thus is unlikely to differ by instance size as the ENI limit you mention does (IIRC)
# Mapping is calculated from AWS EC2 API using the following formula:
# * First IP on each ENI is not used for pods
# * +2 for the pods that use host-networking (AWS CNI and kube-proxy)
#
# # of ENI * (# of IPv4 per ENI - 1) + 2
#
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI
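To make that concrete: an m5.large supports 3 ENIs with 10 IPv4 addresses each, so 3 * (10 - 1) + 2 = 29, which matches the familiar EKS max-pods value for that instance type. The kubelet-side limit mentioned above lives in the kubelet config; a minimal sketch of overriding it (assuming you control the kubelet config file, which managed node groups may not expose directly; 250 is just an illustrative value):

    # kubelet config fragment
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 250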
Kubernetes has a default (and considered best practice) limit of 110 pods per node. Are you seeing them cap it lower than that? I've never checked this personally on EKS, TBH.
Also, what do you mean by capping resources? In my experience with EKS, I haven't had any issues fully utilizing a node's resources.
(Disclaimer: I haven't looked too much into this tool other than a cursory glance)
It seems this tool is meant mostly for testing how different components behave under different scenarios and/or load. This is probably particularly helpful for custom controllers or operators. What happens to your controller if it's constantly reconciling 100k pods? What about 5k nodes? Something else? If this tool makes creating a "loaded" cluster easy, it's definitely handy. It would have saved me some time doing something similar a few months ago.
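To make the "loaded" cluster idea concrete, a rough sketch of filling a KWOK-style cluster with pods for a controller to chew on (the type=kwok node selector is hypothetical and depends on how your fake nodes are labeled; tolerations may also be needed if they are tainted):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: load-test
    spec:
      replicas: 100000           # illustrative; scale to taste
      selector:
        matchLabels:
          app: load-test
      template:
        metadata:
          labels:
            app: load-test
        spec:
          nodeSelector:
            type: kwok           # assumes fake nodes carry this label
          containers:
          - name: pause
            image: registry.k8s.io/pause:3.9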