
> Each GizmoEdge worker pod was provisioned with 3.8 vCPUs (3800 m) and 30 GiB RAM, allowing roughly 16 workers per node—meaning the test required about 63 nodes in total.

How was this node setup chosen? Specifically, why 3.8 vCPUs and 30 GiB RAM per worker? Why not just run 16 workers total, each using an entire node's 64 vCPUs and 504 GiB of memory?

Hi nodesocket - I tried to do 4 CPUs per worker, but Kubernetes reserves a small amount of CPU (about 200m) for daemon processes - so if you request 4 CPUs (4000m) x 16, you'll spill one pod over and fit only 15 per node.
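
To sanity-check the packing, here is a minimal sketch of the arithmetic, assuming a 64 vCPU node and roughly 200m reserved for system daemons (the exact allocatable amount depends on the cluster's kubelet reservations):

    # Packing sketch: 64 vCPU node, ~200m reserved for Kubernetes daemons.
    # Both figures come from this thread; real reservations vary by cluster.
    NODE_CPU_M = 64_000        # 64 vCPUs expressed in millicores
    SYSTEM_RESERVED_M = 200    # approximate kubelet/daemon reservation
    ALLOCATABLE_M = NODE_CPU_M - SYSTEM_RESERVED_M

    def pods_per_node(request_m: int) -> int:
        """How many pods with the given CPU request fit on one node."""
        return ALLOCATABLE_M // request_m

    print(pods_per_node(4_000))   # 15 -> a 4000m request spills the 16th pod
    print(pods_per_node(3_800))   # 16 -> a 3800m request packs 16 per node

In other words, 16 x 3800m = 60800m stays under the ~63800m allocatable, while 16 x 4000m = 64000m does not.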

I was out of quota in Azure, so I had to fit everything into the 63 nodes... :)


But why split a VM into so many workers instead of utilizing the entire VM as a dedicated single worker? What's the performance gain and strategy?

I'm not exactly sure yet. My goal was to keep the shards from getting so large that they become unmanageable. In theory, I could have had 63 (or 64) huge shards with 1 worker per K8s node, but I haven't tried it.

There are so many variables to try - it is a little overwhelming...


Would be interesting to test. I'm thinking there may not be a benefit to running so many workers on a VM instead of giving the entire VM's resources to a single worker. Could be wrong, but that would be a bit surprising.


