KWOK: Kubernetes WithOut Kubelet (kubernetes.dev)
151 points by mikece on March 1, 2023 | 47 comments



This is a neat project. The documentation could be clearer, however. I interpreted the phrase "create a cluster of thousands of nodes in seconds" to mean that it offered a significant speed-up in initializing real clusters. Rather, this appears to be a tool for mocking kubelet to enable development and testing at scale. Or am I misunderstanding?


You are correct, although for what it's worth the phrase "without Kubelet" from the name is fairly unambiguous to anyone in the Kubernetes ecosystem.
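For illustration, the quick-start flow looks roughly like this (command names taken from the kwok docs; exact flags may differ by version):

  # Sketch of the kwokctl workflow: a control plane with fake nodes,
  # no real kubelets anywhere.
  kwokctl create cluster --name demo
  kwokctl scale node --replicas 1000      # 1000 fake nodes in seconds
  kubectl --context kwok-demo get nodes   # they all register as Ready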


I'm not so sure. My first association was of some fancy kind of "daemonless" system for controlling nodes, maybe by automatically SSHing into the node or whatever.


That was my first impression as well.


Same. I feel like the title set me up for confusion. Still very cool and very useful, but not what I was expecting.


Yeah I thought it was some series of hacks with ebpf and systemd...


Probably because the authors are not native English speakers.


Looks like a neat tool for learning k8s without paying for lots of nodes. I bought a used Dell R620 so was able to create a dozen VMs to test with, but something like this could be a way to do something similar without requiring the big hardware.

Also a clever name for the project.

It may also have a place in a CI stack.


One of the use cases mentioned here is testing. One should also look at kind [1]. It is lightweight and a really good candidate for setting up dev environments.

[1] https://kind.sigs.k8s.io/
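For anyone who hasn't tried it, a throwaway kind cluster is a couple of commands (standard kind usage):

  # Create, use, and tear down a disposable dev cluster
  kind create cluster --name dev
  kubectl cluster-info --context kind-dev
  kind delete cluster --name dev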


I wonder how big of an effort it would be for Apple to take some engineers and make macOS support kernel namespaces/cgroups, bridging the gap for all of these cool container features Linux has that currently require a virtual machine to recreate on a Mac.


They should enable nested virtualization on M1/M2 first. As it stands now, it's impossible to do anything with KVM without dual-booting Asahi.


Wait, really? How does Docker on macOS work on M1/M2? Or does it not?


Docker Desktop runs dockerd in a Linux VM using Apple's Hypervisor framework. You can also run containers in a Linux VM with the Parallels or VMware Fusion hypervisors; QEMU on macOS also uses HVF, not KVM. But you can't run VMs inside those VMs as it stands today (this works fine on Intel Macs), which means you can't experiment with KVM (one of the killer features of Linux) and things like https://kubevirt.io/ and Firecracker, or run VMs with stuff like Proxmox, TrueNAS, or ESXi (all possible on Intel Macs and every x86_64 CPU).


It seems to me that the right fix is for Docker Desktop to support M1. Docker, kubelet, the k8s control plane, and everything else have supported ARM for ages. There is no need for that extra VM and therefore no blocker on nested virtualization.


It's not an ARM problem, it's a kernel/OS one. Same as Windows, macOS simply doesn't have what it takes (namespaces, cgroups, etc.) to run Docker/Linux containers natively, therefore an intermediary Linux VM is needed.


Agreed, but considering that Kubernetes now supports joining Windows workers to run Windows containers, as well as integrated support for dockerd inside WSL2... that leaves macOS as honestly the worst platform for any kind of container-related work.


Doh, I forgot about that. You're right, the VM is needed anyway.


> Docker Desktop runs dockerd in a Linux VM with Apple's hypervisor framework

Which in my experience uses a minimum of 4 GB of RAM just sitting idle with no containers running.


Would this solve anything? Most containers use a Linux userspace, which requires an actual Linux kernel.

Your idea would be more similar to Flatpak (containerized GUI apps).



To what extent can I run kubernetes locally with this tool? I don’t need many k8s features and existing local k8s solutions always have a noticeable CPU usage.

The docs don’t clearly communicate things like this.

There are some projects that store data in k8s but can otherwise run locally. One can run those locally without a k8s cluster by using kubebrain with in-memory storage: https://github.com/kubewharf/kubebrain


You can actually run the kubelet in standalone mode without etcd, controller-manager, scheduler, or api-server. Simply drop any static pod manifests into /etc/kubernetes/manifests and the kubelet will launch pods for you. Remove them and the kubelet will tear the pods down. Fun fact: this is how kubeadm bootstraps a control plane from "scratch".
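A minimal sketch, assuming the kubeadm-default staticPodPath of /etc/kubernetes/manifests (yours may differ):

  # Hand the standalone kubelet a static pod by dropping a manifest in place
  cat <<'EOF' | sudo tee /etc/kubernetes/manifests/nginx.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx:1.25
  EOF
  # Deleting the file makes the kubelet tear the pod down again:
  # sudo rm /etc/kubernetes/manifests/nginx.yaml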


If you don't need full k8s, maybe k3s.io would be more suited? Although I've found kind/minikube to be fine for my usage, YMMV.


> To what extent can I run kubernetes locally with this tool?

Why do you want to use this tool specifically? Have you considered minikube? What features/use case are you looking for?

Do you want to run stuff on Kubernetes? If so, Docker Desktop on macOS and rancher desktop have built-in Kubernetes functionality.

or do you want to experiment with Kubernetes itself?


What is this actually doing to simulate a 'node'?


What kinds of things would someone evaluate with this where the limitations aren’t a factor? If I were testing load, I’d want performance and lifecycle behavior to be accurate, but I guess there are some other use cases?


One area we've been poking at is better understanding API server and scheduler decisions when large number of nodes fail at the same time. We don't actually need to be running the workloads to get useful information.
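Since the nodes are just API objects, failing a big slice of them at once is plain kubectl (a sketch; the label selector here is hypothetical, use whatever labels your fake nodes carry):

  # Delete 500 fake nodes at once and watch the scheduler/API server react
  kubectl get nodes -l type=kwok -o name | head -n 500 | xargs -n1 kubectl delete
  kubectl get events --watch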


We're experiencing a lot of performance issues with Istio in larger clusters of 4-5k pods and several thousand services, where high churn rates cause the control plane to go ballistic.

KWOK allows us to reproduce this load without spending a penny.
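In case it's useful to anyone else, the churn itself is easy to generate with a plain scale loop against the fake nodes (a sketch; the deployment name is made up):

  # Flap a deployment between replica counts to stress the Istio control plane
  while true; do
    kubectl scale deployment churn-test --replicas=5000; sleep 60
    kubectl scale deployment churn-test --replicas=100;  sleep 60
  done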


How does it compare with e.g. cluster loader using hollow nodes?


Slight tangent inspired by the premise, but why do EKS etc cap the number of pods and other resources on a node?

We try to enable our customers to start on 1 node and scale up at their discretion, but found we had to go through all sorts of contortions to make that work. Likewise, k8s is resource hungry: we can run the docker compose version on a much smaller laptop wrt CPU & RAM.


Kubernetes is only tested against 110 pods per node: https://kubernetes.io/docs/setup/best-practices/cluster-larg...

In my testing, it's a limitation in either the container runtime or kubelet in processing all the events that flow from pods. And since the container networking and container storage interfaces aren't part of Kubernetes, there are likely scaling issues in those pieces of software as well.


It's been a while since I've used EKS, but I seem to recall it's primarily based on the maximum ENIs the instance type supports.


I believe you're correct, although pedantically that would only apply if one is using their vpc-cni <https://github.com/aws/amazon-vpc-cni-k8s#readme> and not a competing CNI. The kubelet offers a configurable for the number of Pods per Node <https://github.com/kubernetes/kubelet/blob/v0.26.2/config/v1...>, which defaults to 110 for what I would presume are CIDR or pid-cgroup reasons, and thus is unlikely to differ by instance size the way the ENI limit you mention does (IIRC).
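For reference, that configurable is maxPods in the KubeletConfiguration. A minimal sketch of the relevant fragment, assuming the kubeadm-default config path of /var/lib/kubelet/config.yaml (merge this into your existing file rather than replacing it wholesale):

  # Illustrative only: raise the kubelet pod cap above the 110 default
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  maxPods: 250
  # ...then restart the kubelet, e.g.: sudo systemctl restart kubelet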


The # of pods is essentially capped by the worker node choice.

Below is an excerpt from: https://github.com/awslabs/amazon-eks-ami/blob/master/files/...

  # Mapping is calculated from AWS EC2 API using the following formula:
  # * First IP on each ENI is not used for pods
  # * +2 for the pods that use host-networking (AWS CNI and kube-proxy)
  #
  #   # of ENI * (# of IPv4 per ENI - 1) + 2
  #
  # https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI
doc on EC2 instance types ("Amazon EKS recommended maximum pods for each Amazon EC2 instance type") https://docs.aws.amazon.com/eks/latest/userguide/choosing-in...
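Worked example, if I'm reading the instance limits right: an m5.large supports 3 ENIs with 10 IPv4 addresses each, so

  3 * (10 - 1) + 2 = 29 pods

which matches the value AWS ships for that instance type in eni-max-pods.txt.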


Kubernetes has a default (and considered best practice) limit of 110 pods per node. Are you seeing them cap it lower than that? I've never checked this personally on EKS, TBH.

Also, what do you mean by capping resources? In my experience, I haven't had any issues fully utilizing a node's resources, even in EKS.


I have a problem where I want to create an extension for https://external-secrets.io/v0.7.2/ and I want to be able to test it in a CI/CD pipeline.

Would this allow me to run Pulumi (infra as code) to set up a cluster simulation and run some tests?


The nodes here are fake, they won't actually run your software. For CI/CD clusters I would recommend kind (https://github.com/kubernetes-sigs/kind).


Thanks


I was half-expecting the acronym to be a pun by developers who are fans of Dutch cartoonist Evert Kwok[0], but it seems completely unrelated.

[0] https://www.evertkwok.nl/


This is creative as hell. Testing it now, but this seems like a nice tool for lightweight CRDs that you just need to CRUD and store metadata in.


Could this be used to emulate Kubernetes on a serverless environment? Pretty cool if this is possible.


How is it different from minikube?


minikube is basically a VM. This emulates thousands of nodes on your laptop so you can test, for example, how an operator you're developing performs.


How is it different from `kind`?


(Disclaimer: I haven't looked too much into this tool other than a cursory glance)

It seems this tool is meant mostly for testing how different components behave under different scenarios and/or load. This is probably particularly helpful for custom controllers or operators. What happens to your controller if it's constantly reconciling 100k pods? What about 5k nodes? Something else? If this tool makes creating a "loaded" cluster easy, it's definitely handy. It would have saved me some time doing something similar a few months ago.


`kind` is like `minikube` but using docker containers instead of VMs, but as the page says "was primarily designed for testing Kubernetes itself".


How does this work? Is this like Cilium eBPF?



