icy's comments

@icyphox.sh! I post about K8s and distributed systems.


Building a nodeless Kubernetes service: no servers or worker nodes; pods are scheduled transparently as micro VMs (planning to make this flexible, so Fly Machines, Cloud Run, Cloudflare Workers, what have you). Optionally, you can bring your own worker nodes if you want to!

Still super WIP, and the landing page hasn't been updated yet, but here's a quick little early access form! https://tally.so/r/me2Q8E


Similarly, for Templ/Tailwind/AlpineJS components, I’ve recently found https://goilerplate.com.


Nice! Maybe I should build a small adapter so Templ components can be used with gomponents.


Working on a managed Kubernetes service that allows you to bring your own worker nodes. Spin up a control plane in your region of choice and build a K8s cluster using whatever—VMs, bare metal, or heck, even your Raspberry Pis at home.

Get in on the early access waitlist here: https://kapycluster.com


The git repo is still hosted at peppe.rs.


trawk (tree awk) was one of the initial names for this. (I'm not the author, but I know him personally.)


Hey, completely unrelated, but you replied to my post re: YC's fall batch and partnering up (https://news.ycombinator.com/item?id=41178593). If you're still interested, I'd love to chat! You can write to me at anirudh@oppiliappan.com


I’ve been thinking about/slowly building a service that hosts Kubernetes control planes. Bring your own worker nodes. Users get a fully managed control plane (upgrades, HA, etc.) in their region of choice and can use whatever workers they want—be it cloud VMs, bare metal, or even your laptop.

I’ll eventually open source the single binary agent that’ll bootstrap a host into a K8s node. Just run it rootless with a join URL and voilà!
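
Roughly the shape I have in mind (the binary and flag names here are placeholders, nothing is final):

    # hypothetical invocation -- runs as an unprivileged user,
    # fetches certs via the join URL, then registers the host as a node
    $ ./kapyagent --rootless --join-url "https://kapycluster.com/join/<token>"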

Also in the pipeline is a global load balancer service (for your clusters).

What do y’all reckon? Interesting? Yay/nay?


Kubernetes is really built on the assumption that workers are in the same LAN as the control plane. Long latency between the control plane and workers affects workload reliability; heartbeats need to be configured with longer timeouts, for example. Pod-to-pod communication, where pods run in regions on opposite sides of the globe, supposedly on the same pod network CIDR, is also going to be flaky. There's a long history of projects attempting to take LAN-local designs and make them resistant to regional failure by superimposing a LAN on top of a WAN, and it never works as intended. Furthermore, various service meshes already offer ways to direct/shape traffic between clusters (i.e. between regions) once service architectures evolve to truly support multiple regions.
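
Concretely, these are the sorts of knobs you end up turning for remote workers (example values only, not a recommendation; defaults noted in comments):

    # KubeletConfiguration on each worker
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    nodeStatusUpdateFrequency: "20s"   # default 10s
    nodeLeaseDurationSeconds: 80       # default 40

    # and on the control plane side, kube-controller-manager:
    #   --node-monitor-grace-period=2m   # default 40s

Get these wrong relative to your actual WAN latency and nodes flap between Ready and NotReady, evicting pods for no good reason.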

You run the risk of building something that seems to "work" but falls apart for non-obvious, head-scratching reasons for many or most users.


> Kubernetes is really built on the assumption that workers are in the same LAN as the control plane. Long latency between the control plane and workers affects workload reliability; heartbeats need to be configured with longer timeouts, for example.

Yep, and that's why I'm designing this to provision control planes as close to the user's workloads as possible (sub ~100ms latency, which is plenty acceptable for reliability; the likes of Scaleway do something similar).

> Pod-to-pod communication, where pods run in regions on opposite sides of the globe, supposedly on the same pod network CIDR, is also going to be flaky

Certainly, but this is not something that I, as the control plane provider, need to solve -- it's a design decision the user has to account for, and I consider that freedom of choice a nice one to have.

I suppose the main sell here (and the few folks I've spoken to seem to agree) is the flexibility a decoupled control plane offers: being able to migrate your bare metal setup to "managed" K8s; running a homelab cluster without dealing with the ops (upgrades, cert rotations, etc.); or simply being able to use VMs from cheaper cloud providers.

But yeah, it's definitely a hard problem, though one that's solvable within the right constraints.


What’s the realistic likelihood of getting accepted as a solo founder? My potential cofounder dropped out, and I’d really, really like to apply for this batch.


Wanna partner up? :) I'm in a similar situation.


There are solo founders every batch, so realistic enough.


I've been running this on K3s at home (for my website and file server) and it's been very well behaved: https://git.icyphox.sh/infra/tree/master/apps/garage

I find it interesting that they chose CRDTs over Raft for distributed consensus.


From an operations point of view, I am surprised anyone likes Raft. I have yet to see any application implement Raft in a way that does not spectacularly fail in production and require manual intervention to resolve.

CRDTs do not have the same failure scenarios and favor uptime over consistency.
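
A toy example of why: a grow-only counter CRDT merges by taking the per-replica max, which is commutative, associative, and idempotent, so replicas accept writes locally and converge whenever they happen to sync -- no leader election, no quorum in the write path. A minimal sketch in Go (illustrative only, not Garage's actual implementation):

    package main

    import "fmt"

    // GCounter is a grow-only counter CRDT: one monotonic count per replica.
    type GCounter map[string]uint64

    // Inc records a local increment on replica `node` -- no coordination needed.
    func (g GCounter) Inc(node string) { g[node]++ }

    // Merge folds in another replica's state by taking the per-replica max.
    // Merging in any order, any number of times, yields the same result.
    func (g GCounter) Merge(other GCounter) {
        for node, n := range other {
            if n > g[node] {
                g[node] = n
            }
        }
    }

    // Value sums the per-replica counts.
    func (g GCounter) Value() uint64 {
        var total uint64
        for _, n := range g {
            total += n
        }
        return total
    }

    func main() {
        a, b := GCounter{}, GCounter{}
        a.Inc("a")
        a.Inc("a") // accepted even while partitioned from b
        b.Inc("b")
        a.Merge(b)
        b.Merge(a)
        fmt.Println(a.Value(), b.Value()) // 3 3 -- both replicas converged
    }

The trade-off is exactly the one above: you get eventual convergence, not Raft's single agreed-upon ordered history.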

