
> Does it mean the backhaul is private and not tunneling through the public internet?

Backhaul runs only through the encrypted tunnel. The WireGuard connection itself _can_ go over the public internet, but the data within the tunnel is encrypted and never exposed.
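
To make "runs only through the encrypted tunnel" concrete, here's the general shape of a wg-quick style config for one host. The keys, addresses, and peer endpoint are placeholders, not our actual setup; the point is that the peer's Endpoint can be a public IP while anything routed into the interface crosses the wire encrypted.

  [Interface]
  PrivateKey = <this host's private key>
  Address    = 10.10.0.1/24                 # private backhaul address (placeholder)
  ListenPort = 51820

  [Peer]
  PublicKey           = <other host's public key>
  Endpoint            = 203.0.113.7:51820   # public IP, but only encrypted UDP crosses it
  AllowedIPs          = 10.10.0.2/32        # backhaul traffic for that host goes into the tunnel
  PersistentKeepalive = 25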

> I use Cloudflare Workers and I find that at times they load-balance the traffic away from the nearest location [0][1] to some location half-way around the world adding up to 8x to the usual latency we'd rather not have. I understand the point of not running an app in all locations esp for low traffic or cold apps, but do you also "load-balance" away the traffic to data-centers with higher capacity?

This is actually a few different problems. Anycast can be confusing, and sometimes you'll see weird internet routes; we've seen people from Michigan get routed to Tokyo for some reason. This is especially bad when you have hundreds of locations announcing an IP block.
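
If it helps to picture "announcing an IP block": every edge host speaks BGP to its upstream and originates the same prefix, so plain BGP path selection (not us) decides which location a given user lands on. A rough bird2-style sketch, with a made-up prefix, ASNs, and neighbor rather than anything we actually run:

  # every edge host originates the same anycast prefix
  protocol static anycast_block {
    ipv4;
    route 203.0.113.0/24 blackhole;   # exists only to originate the prefix; real traffic terminates locally
  }

  protocol bgp upstream {
    local as 64512;                   # placeholder private ASN
    neighbor 198.51.100.1 as 64511;   # placeholder upstream router
    ipv4 {
      import none;
      export where source = RTS_STATIC;
    };
  }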

Server capacity is a slightly different issue. We put apps where we see the most "users" (based on connection volumes). If we get a spike that fills up a region and can't put your app there, we'll put it in the next nearest region, which I think is what you want!
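
In a toy Go sketch, the placement rule looks something like the following. The region names, fields, and capacity model are invented for illustration; it's not our scheduler, just the "busiest region first, nearest region with room as fallback" idea.

  package main

  import (
    "fmt"
    "sort"
  )

  // Region is a stand-in for a deployment region; the fields are invented
  // to illustrate the placement rule described above.
  type Region struct {
    Name        string
    Connections int     // recent connection volume seen at this edge
    FreeSlots   int     // remaining VM capacity
    DistanceKm  float64 // rough distance from the preferred region
  }

  // pickRegion prefers the region with the most connections; if it's full,
  // it falls back to the nearest region that still has capacity.
  func pickRegion(regions []Region) *Region {
    if len(regions) == 0 {
      return nil
    }
    sort.Slice(regions, func(i, j int) bool {
      return regions[i].Connections > regions[j].Connections
    })
    if regions[0].FreeSlots > 0 {
      return &regions[0]
    }
    var best *Region
    for i := 1; i < len(regions); i++ {
      r := &regions[i]
      if r.FreeSlots > 0 && (best == nil || r.DistanceKm < best.DistanceKm) {
        best = r
      }
    }
    return best
  }

  func main() {
    regions := []Region{
      {Name: "ord", Connections: 900, FreeSlots: 0, DistanceKm: 0},
      {Name: "yyz", Connections: 300, FreeSlots: 4, DistanceKm: 700},
      {Name: "nrt", Connections: 100, FreeSlots: 9, DistanceKm: 10000},
    }
    if r := pickRegion(regions); r != nil {
      fmt.Println("place app in:", r.Name) // ord is full, so the nearer fallback (yyz) wins
    }
  }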

CDNs are notorious for forcing traffic to their cheapest locations, which they can do because they're pretty opaque. We probably couldn't get away with that even if we wanted to.

> Frequently? Are these server-routers running in more locations than data centers that run apps?

We run routers + apps in all the regions we're in, but it's somewhat common to see apps with VMs in, say, 3 regions. This happens when they don't get enough traffic to run in every region (based on the scaling settings), or occasionally when they have _so much_ traffic in a few regions all their VMs get migrated there.

> Interesting, and if you're okay sharing more-- is it that the anycast setup and routing that took time, or figuring out networking wrt the app/containers?

Anycast was a giant pain to get right, then WireGuard + backhaul were tricky (we use a tool called autowire to maintain WireGuard settings across all the servers). The actual container networking was pretty simple since we started with IPv6. When you have more IP addresses than atoms in the universe you can be a little inefficient with them. :)
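
To give a feel for the "inefficient on purpose" part: with 64 bits of host space per prefix, you can just stamp a container ID into the low bits of an address and never think about collisions or tight packing. A toy Go sketch; the ULA prefix and the ID scheme are made up for illustration, not our actual addressing plan.

  package main

  import (
    "encoding/binary"
    "fmt"
    "net/netip"
  )

  // containerAddr keeps the upper 64 bits of a site prefix and writes the
  // container's ID into the lower 64 bits.
  func containerAddr(prefix netip.Addr, containerID uint64) netip.Addr {
    b := prefix.As16()
    binary.BigEndian.PutUint64(b[8:], containerID)
    return netip.AddrFrom16(b)
  }

  func main() {
    // fd00::/8 is the ULA range; this particular prefix is a made-up example.
    prefix := netip.MustParseAddr("fd00:1234:5678::")
    fmt.Println(containerAddr(prefix, 1))     // fd00:1234:5678::1
    fmt.Println(containerAddr(prefix, 40042)) // fd00:1234:5678::9c6a
  }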

(Also I owe you an email, I will absolutely respond to you and I'm sorry it's taken so long)




> WireGuard + backhaul were tricky (we use a tool called autowire to maintain WireGuard settings across all the servers).

I'm guessing that's this one? https://github.com/geniousphp/autowire

Looks like it uses consul - is there a separate wireguard net for consul, or does consul run over the Internet directly?


Consul runs over a separate connection with mutual TLS auth. That's the project we use!
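
For anyone curious, agent-to-agent mTLS in Consul comes down to a handful of agent options along these lines; the paths and datacenter name are placeholders, not our actual settings.

  # placeholder Consul agent settings for mutual TLS between agents
  datacenter = "example-dc"

  verify_incoming        = true   # require client certificates on incoming RPC
  verify_outgoing        = true   # use TLS for outgoing connections
  verify_server_hostname = true   # server certs must be for server.<dc>.consul

  ca_file   = "/etc/consul/ca.pem"
  cert_file = "/etc/consul/agent.pem"
  key_file  = "/etc/consul/agent-key.pem"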


Any chance you have more details on GP's question about the tech basis of the router (eBPF, DPDK)? I didn't find this component among the OSS in the superfly org.


Doh, missed that. We're not doing eBPF; it's just user-land TCP proxying right now. This will likely change. Right now it's fast enough, but as we get bigger I think we'll have more time to really tighten up some of this stuff.
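
The core of a user-land TCP proxy is pretty small; this Go sketch shows the general shape of the idea (listen address and backend are placeholders, and a real edge proxy layers TLS, routing, and limits on top). It's an illustration, not our actual proxy.

  package main

  import (
    "io"
    "log"
    "net"
  )

  func main() {
    // Accept on the edge address, dial a backend, and copy bytes both ways,
    // all in user space -- no eBPF or kernel bypass involved.
    ln, err := net.Listen("tcp", ":8080")
    if err != nil {
      log.Fatal(err)
    }
    for {
      client, err := ln.Accept()
      if err != nil {
        log.Print(err)
        continue
      }
      go func(client net.Conn) {
        defer client.Close()
        backend, err := net.Dial("tcp", "10.0.0.2:80") // placeholder backend address
        if err != nil {
          log.Print(err)
          return
        }
        defer backend.Close()
        go io.Copy(backend, client) // client -> backend
        io.Copy(client, backend)    // backend -> client
      }(client)
    }
  }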



