Tailscale is one of my favorite companies. They're clearly on to something. Here's a great post by their CTO explaining a lot of the motivation and vision behind it: https://crawshaw.io/blog/remembering-the-lan
IMO the main outstanding questions/concerns are:
* Is the VPN model really the way to go? If someone gets their hands on one of your Tailscale nodes, they can access every service on your tailnet, which are likely running with reduced security since that's a huge part of the appeal. This is exactly the situation BeyondCorp/Zero Trust was created to avoid. Tunneling services[0] are more of a Zero Trust approach, but they can't match the seamlessness of Tailscale once a node is connected to the tailnet.
* Can it expand into the layman market? I wonder if the average person will ever be willing to install a VPN app on all their devices. On the flip side, I could see TS partnering with someone like Google to integrate TS tightly with Android and set up a private network between all your Google-signed-in devices.
* The relay system - DERP is nice, but it's primarily intended for signaling/fallback. It feels like CGNAT adoption is growing faster than IPv6 adoption, and I wouldn't be surprised if fewer and fewer p2p connections succeed over time[1]. DERP forces everything over a single TCP connection (head-of-line blocking), and I'm not sure it even has any flow control.
* Use in web browsers - They got a demo of this working, but it's pretty involved. You have to compile the entire Tailscale Go library to WebAssembly, which produces a large artifact, and connectivity is DERP-only.
* Portability in general - Depending on WireGuard, as awesome as it is, is fairly limiting. You either need admin privileges to create the TUN device, or you need to run an entire TCP stack in userspace alongside your own WireGuard implementation. I'd be interested to see something like Tailscale implemented on top of WebTransport.
> * Is the VPN model really the way to go? If someone gets their hands on one of your Tailscale nodes, they can access every service on your tailnet, which are likely running with reduced security since that's a huge part of the appeal. This is exactly the situation BeyondCorp/Zero Trust was created to avoid. Tunneling services[0] are more of a Zero Trust approach, but they can't match the seamlessness of Tailscale once a node is connected to the tailnet.
At the very least there are ACLs, so you can tag devices and restrict access down to specific ports and protocols based on either user identity or device tag.
At my org we use Tailscale much like a VPN, to give users access to a few internal web apps, and with ACLs those users can only hit the web server on 443 and nothing else on that node. This way the web server itself has no ports exposed on the host (ufw denies all incoming).
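For reference, that host setup looks roughly like this (a sketch assuming the default tailscale0 interface name, not our exact config):

```
# Deny everything inbound on the physical interfaces...
sudo ufw default deny incoming
sudo ufw default allow outgoing
# ...but allow HTTPS arriving over the Tailscale interface,
# so the web server is reachable only via the tailnet.
sudo ufw allow in on tailscale0 to any port 443 proto tcp
sudo ufw enable
```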
I can't say whether the VPN model is really the way to go long term - probably not - but for our use case Tailscale has been absolutely perfect, and we decided the tradeoffs were worth it over a more "complete" zero-trust approach and the complexities that come along with it.
What Tailscale doesn't solve is access to the data that web app serves if the user's machine is compromised, as Tailscale is just determining "can the user hit the webserver on port 443?" and does nothing to evaluate the state of the user's host.
I guess that's all to say, I/we don't see Tailscale as a zero-trust solution, but rather as a more convenient VPN with easier-to-use ACLs. Cloudflare Tunnel and the like are much better suited to implementing a zero trust approach.
I think there's still value, though. A zero trust approach is the correct way for most organizations, but there's still a big niche for Tailscale, especially for small-to-medium orgs and self-hosters/homelabbers.
Tailscale is not just more convenient but also more efficient if your VPN traffic is mesh-like (not all traffic going to the same place), because nodes can establish connections directly. A traditional hub-and-spoke VPN can't do that.
This is the main reason I use a mesh VPN (though not Tailscale).
> What Tailscale doesn't solve is access to the data that web app serves if the user's machine is compromised, as Tailscale is just determining "can the user hit the webserver on port 443?" and does nothing to evaluate the state of the user's host.
Tailscale has some cybersecurity integrations to configure access depending on the device posture. For example, blocking access to a webserver if the device is out of date, or if malware is detected, or if the firewall is disabled, etc. But I don't use any of those integrations and can't speak to them.
The posture implementation is quite easy to work with. There's a growing list of integrations, and you can also roll your own with the posture API. I've used Kolide so far and will be integrating with Kandji on another tailnet. They also have Intune, Jamf, CrowdStrike, and SentinelOne.
The same posture API can be used to restrict access to devices in your inventory or to set up just-in-time access to a sensitive asset. For the latter, you can use a Slack app provided by Tailscale or integrate with an identity governance workflow to set a posture attribute with a limited TTL. Your tailscale policy just needs to condition the relevant access on the attribute.
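As a rough illustration, a posture check can be wired into the policy file like this (a sketch based on Tailscale's documented posture syntax; the group, tag, and attribute values are hypothetical):

```
{
  "postures": {
    // Hypothetical check: only reasonably up-to-date clients qualify.
    "posture:updatedClient": ["node:tsVersion >= '1.60'"]
  },
  "acls": [
    {
      "action": "accept",
      "src": ["group:employees"],
      // Access is granted only while the source device passes the check.
      "srcPosture": ["posture:updatedClient"],
      "dst": ["tag:sensitive:443"]
    }
  ]
}
```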
On your first point, I've been using Tailscale for a bit and its ACL feature addresses most of my concerns there. My laptop can ssh into any of my servers but not the other way around, and my servers can't talk to each other unless I allow it.
The ACLs might look a bit scary at first, but they're actually quite intuitive once you've written a rule or two.
It basically works by tagging machines (especially those deployed with an API key) and grouping users. Then you set up rules defining which groups and tags can communicate with each other on specific ports. Since the default rule is DENY, you only need to specify rules for communication you actually want to allow.
For instance, you would create a tag for `servers` and a group for `sre`. Then you set up an ACL rule like the sketch below to allow SREs to ssh into servers:
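(A minimal sketch in Tailscale's HuJSON policy format; the user emails are hypothetical.)

```
{
  // Hypothetical users for illustration.
  "groups": {
    "group:sre": ["alice@example.com", "bob@example.com"]
  },
  // Who may apply the tag to machines they deploy.
  "tagOwners": {
    "tag:servers": ["group:sre"]
  },
  "acls": [
    // Default is deny, so this is the only traffic permitted:
    // SREs may reach port 22 on tagged servers.
    {"action": "accept", "src": ["group:sre"], "dst": ["tag:servers:22"]}
  ]
}
```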
> If someone gets their hands on one of your Tailscale nodes, they can access every service on your tailnet, which are likely running with reduced security since that's a huge part of the appeal. This is exactly the situation BeyondCorp/Zero Trust was created to avoid.
In addition to the ACLs mentioned by the sibling, a tailnet is not quite a plain-old VPN overlay network, in that each device on a tailnet gets assigned a predictable, durable LAN IP address based on the credentials that device is logged into Tailscale with.
Which means that, for at least the "personal" devices (laptops, phones, tablets), you can configure your servers on a tailnet to do something that's less finicky than full-on credential-based auth, but still more secure in practice than no auth: namely, host-based authentication — which should be a reasonable 1:1 proxy for user authentication (assuming the constraints from the previous paragraph).
To put that in concrete terms: on a tailnet, a user's SSH credential for a given server can simply be the fact that the user is able to originate the connection from the expected LAN IP address of the user's workstation. Except that instead of that LAN + the user's workstation living in a physical building, they're both virtual, and the user's physical workstation (of the moment) must provide credentials to bind to the tailnet IP that allows it to present itself as the virtual workstation.
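A minimal sketch of that idea with stock OpenSSH (the username and tailnet IP are hypothetical; it's the stability of tailnet addresses that makes this workable):

```
# /etc/ssh/sshd_config
# Accept logins for alice only when the connection originates from
# the stable tailnet address of her workstation.
AllowUsers alice@100.64.0.10
```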
Great insights, Anders - I think you will like OpenZiti, which is included in your list, both for itself and for zrok, which we built on top of it.
Directly answering your concerns:
- Deny by default and a least-privilege model mean that getting access to a node does not give you access to all services on the overlay. This includes SDKs, so that only embedded apps are authorised; the apps have no listening ports on the underlay and cannot be attacked via conventional IP-based tooling, so all conventional network threats are immediately useless.
- Its open source nature means it's being adopted by companies to create more powerful ecosystems.
- The overlay, while it looks similar to DERP, uses per-service encryption and routing with flow control and smart routing (I know people who get much, much better performance as a result).
- Our SDK includes a 'clientless' endpoint for the browser called BrowZer - https://blog.openziti.io/introducing-openziti-browzer. All users need to do is log into their IdP, and everything else is done automatically, without involvement from the user.
- We don't build on WireGuard, which gives us much more flexibility.
I love what you're doing with OpenZiti. I've looked at it multiple times, and I always come away feeling like it's not a good fit for me, and indiehosters in general.
I think the concept of making a simple SDK for embedding tunneling in apps is unique and very compelling.
However, for me to commit to a platform like that, the most important question is: if upstream changes their license, runs out of money, or just generally takes things in a direction I don't like, what are my options?
Ideally, the platform would be so simple that I can just fork it myself or with a small team without too much effort. The best way to create a platform like this is to build around simple, open protocols. I've never gotten the feeling OpenZiti is designed this way. I've never found any documentation on the network protocol. Your platform also offers many features I don't need, which makes it even higher risk to consider forking.
Note that I'm not trying to say you're doing something wrong. I'm not aware of any tunneling platform that provides this, which is why I'm currently building one myself (a successor to boringproxy).
I get the feeling OpenZiti is rather enterprise focused. And that makes sense, it's almost certainly where all the money is. I really hope you guys are able to prove the value of app-embedded tunneling.
But I'm looking for a very simple consumer product/platform.
- Agreed. OpenZiti is not trying to focus on indie hosters. Its goal is to completely transform how networking and connectivity are done, to make secure by default and a simple user experience the de facto standard.
- Our path to doing this definitely depends on monetising enterprises rather than indie hosters. That said, you can build abstractions on OpenZiti which are much simpler and focused on indie hosters. A good example is zrok (https://zrok.io/), which makes sharing super simple (publicly, privately, and more) and is built on OpenZiti. Likewise, it's FOSS and permissively licensed under Apache 2.0 while also having a free SaaS.
- Likewise, we truly do believe in the power of app-embedding to transform networking and connectivity, but I would note that the majority of people (self-hosters and enterprises alike) today use it as a superior private connectivity platform rather than for the app-embedded SDKs. They may use the SDKs, or consider them in the future, but the main selling point is the power of the platform, making it dead simple to do private connectivity across networks while abstracting away a lot of complexity (no need for VPNs, SD-WAN, inbound ports, complex ACLs, L4 load balancers, public DNS, etc.).
> "feeling like it's not a good fit for me, and indiehosters in general."
Maintainer here so I'm gonna be biased with this hot take, but I really don't agree with this particular sentiment.
I would turn it around instead and say that most indie hosters are maybe not looking for the level of protection a zero trust overlay network provides. That is a believable reason why it might be perceived as not a good fit: if you're not looking for the sort of security OpenZiti affords the operator, it will certainly feel like less of a fit than a classic VPN-like solution. It also focuses on a different paradigm for connectivity, centered on individual services. That does mean the learning curve is absolutely steeper, because it's not "just IP": all our years of IP-based know-how are useful, but not enough to make the most of the system. And while one can use IP/L3/L4 just fine with OpenZiti, it's certainly not trying to be an IP-based VPN (like many of the other solutions are), which also might lead to feeling like it's not a great fit.
For the people who want the sort of security OpenZiti provides, however, it really is an easy-to-use (my bias showing) solution that plenty of indie hosters use already. :)
Not trying to sound too defensive here (a little is ok, right?) but I also appreciate the comments and feedback, thank you!
We use very lightweight libraries - https://openziti.io/docs/learn/core-concepts/security/connec... - including mbedTLS (from Arm) and ChaCha20-Poly1305 (the same as WireGuard) by default. We have tons of use cases in constrained environments, both in CPU and network transport. This includes embedding our software on military drones, into industrial firewalls, and more.
I will preface by saying I am not a Nebula expert, and it may have changed since I last looked.
Similarities:
- Fully open source, using CAs as strong identities (rather than relying on SSO from third parties), completely self-hosted (with 3rd party SaaS options), and providing scalable, performant overlay networking.
Differences:
- OpenZiti is focused on connecting services based on zero trust principles, whereas Nebula focuses on connecting machines – e.g., with OpenZiti you can authorize only a single port without needing to set up ACLs or firewall rules.
- OpenZiti does not require inbound ports or hole punching; it builds outbound-only connections via an overlay which looks sort of similar to DERP (but better, with app-specific encryption, routing, flow control, smart routing, etc.). This overlay also removes the need for complex FW rules, ACLs, public DNS, L4 load balancers, etc.
- As alluded to above, truly private, zero trust DNS entries with unique naming – if you want to call your service "my.secret.service", you can do that; it does not force you to use a valid top-level domain.
- OpenZiti includes SDKs (along with appliance- or host-based tunnels) to bring overlay networking and zero trust principles directly into your application.
Sounds amazing, and like it addresses my issues with Nebula. I know that Nebula/Defined Networking was/is working on better Kubernetes integration, but it seems unlikely to become generally available. Is that something you're supporting? I.e., as a pod sidecar to authenticate services, like Nebula's ACLs do.
What's your funding model? Are enterprises willing to sponsor the development?
I think Nebula has a lot of trust solely because it's made at/used by Slack. In a similar sense, why should enterprises trust OpenZiti? If services do not use e2ee (e.g., service mesh with TLS) but rely on OpenZiti, that places a lot of trust in OpenZiti. How has the code been audited? Why are you confident that its cryptographic implementation is secure?
OpenZiti is developed and maintained by NetFoundry (https://netfoundry.io/). We provide a productised version which is very easy to deploy, manage, operate, and monitor, with high SLAs, support, legal/compliance, liability, security, updates, feature requests, etc.
We are not rolling our own crypto; we use well-vetted open source standards/implementations - https://openziti.io/docs/learn/core-concepts/security/connec.... If you don't trust that, you can easily roll your own - https://github.com/openziti/tlsuv/blob/main/README.md. I know people who do that. Yes, it's been audited, and it's run by many large enterprises in security-conscious use cases - e.g., 8 of the 10 largest banks, some of the largest defence contractors, and leaders in ICS/OT automation as well as grid, etc.
Yes, we support K8s in a lot of ways, both for tunnelling and deployment - https://openziti.io/docs/reference/tunnelers/kubernetes/. There are more native options being worked on, including an Admission Controller and an Ingress Controller, but I honestly don't know the exact status of either. If they interest you, feel free to ping me at philip.griffiths@netfoundry.io and I can get more info.
Sounds great. It puzzles me that Nebula hasn't done what you're doing with OpenZiti.
In my opinion, Kubernetes networking is flawed, in that service mesh authentication with mTLS has unnecessary overhead, Cilium network policies are clumsy using labels and work poorly with non-pod workloads (i.e. CIDR-based policies), multi-cluster is hacky, and external workloads are inconvenient to set up.
So a simple plug-and-play solution that solves these problems would be great.
My guess is that's how they want to commercialise: they make that bit harder so that more people pay for their hosted solution. I have sympathy; monetisation that allows maintaining FOSS can be a challenge. We all have bills.
I agree with a lot of what you say. Tbh, this is also why we are advocates of app-embedded ZTNA. You get mTLS (and way, way more) out of the box, without the overhead, and it's super easy to run your K8s or non-K8s workloads anywhere. No need for VPNs, inbound FW ports, complex ACLs, L4 load balancers, public DNS, and more. It is thus much easier to build distributed systems that are secure by default against network attacks.
Your comment kicked off a big internal chat, which led to someone creating a document on our overlay approach vs. service meshes. I took that and wrote some extra details, a comparison, and a summary - https://docs.google.com/document/d/1ih-kuRvfiGrJODZ5zVjwFLC2....
TL;DR: we believe service meshes introduce complexity with control plane synchronization, service discovery challenges, and network overlays. A global overlay removes Kubernetes service dependencies and shifts networking to a zero trust, software-defined global overlay, which is much simpler, automated, and secure.
Couple of additional small notes (maintainer here)
> In a similar sense, why should enterprises trust OpenZiti?
You don't have to. It's open source, so you can go look at all the code and judge for yourself. But perhaps better than that (well, different anyway) is that OpenZiti allows you to use your own PKI for identities if you like. With third-party CA support, you can make your own key/cert and deploy them to identities if you desire. https://openziti.io/docs/learn/core-concepts/pki/#third-part...
> If services do not use e2ee
With OpenZiti you basically get this by default between OpenZiti clients. (Once traffic is offloaded from the OpenZiti overlay, it's up to the underlying transport protocol.)
> - OpenZiti does not require inbound ports or hole punching; it builds outbound-only connections via an overlay which looks sort of similar to DERP (but better, with app-specific encryption, routing, flow control, smart routing, etc.). This overlay also removes the need for complex FW rules, ACLs, public DNS, L4 load balancers, etc.
The routers that you deploy to make up the overlay still need inbound ports though, right? I thought that's what 10080 was doing.
Yes, but the risk posture is very different. The question I like to ask is, 'what does it take to exploit a listening port on the overlay to get to a service':
- (1) They need to bypass the mTLS requirement necessary to connect to the data plane (note: each hop uses its own mTLS with its own separate key).
- (2) They need a strong identity that authorises them to connect to the remote service in question (or to bypass the authentication layer the controller provides through exploits; note again, each app uses separate and distinct E2EE, routing, and keys).
- (3) They need to know the remote service name, so the data can target the correct service (not easy, as OpenZiti has its own private DNS that does not need to comply with TLDs).
- (4) They need to bypass whatever "application layer" security is also applied at the service (ssh, https, oauth, whatever).
- (5) They need to know how to negotiate the end-to-end encrypted tunnel to the 'far' identity.
So yes, if they can do all that, then they'd definitely be able to attack that remote service. Note, they only have access to a single service among hundreds, thousands, or potentially millions of services. Lateral movement is not possible, so the attacker would have to repeat each of the five steps for every service.
Agree that they are on to something. I gave a tech talk about them a while ago at work and said that I think they are on the cusp of providing a consumer VPN product that appeals to mainstream consumers. The Apple of VPNs, everything "just works" and is easy to understand.
Tailscale isn’t really a VPN, it’s an OSI layer 5 for the TCP/IP world. It makes connectivity as easy as 90s LAN parties were.
I use Tailscale:
- so I can do remote tech support on my 81-year-old mother's computer
- so I can remote into my desktop from anywhere with my mobile phone, iPad, Vision Pro, or Steam Deck if I need a file or need to print something
- so I can watch streaming media from my home network when I'm travelling (and avoid VPN blocks, because my home computer isn't on a known VPN network)
And the best part is that this required almost no configuration beyond (a) installing the software, (b) checking the "allow exit node" box on my home computer, and (c) sharing my mom's computer onto my tailnet.
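For reference, the CLI equivalent of (b) is roughly this (a sketch; the machine name is hypothetical, and the node may also need to be approved as an exit node in the admin console):

```
# On the home computer: offer it as an exit node.
tailscale up --advertise-exit-node

# On a travelling device: route all traffic through it.
tailscale set --exit-node=home-desktop
```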
The MagicDNS feature is super cool as well. I'm not sure exactly what the mainstream killer app would be, but I feel like Tailscale is poised to execute if/when it arises.
Perhaps the AI age makes everyone more data privacy conscious.
I've also long thought that eventually every household will have a mini server for home automation and storing personal information. The rise of the cloud kind of slowed this down, but I don't think cloud and home server are mutually exclusive.
> Is it because a lot of people are just using a VPN as a proxy replacement, watering down the original meaning of the word?
Yes. The question was about a “mainstream consumer”. While “mainstream” is always a moving target, today (in March 2025) that mainstream consumer believes that a VPN == NordVPN == ExpressVPN == what we call/know as a proxy.
NordVPN added some mesh features and you can CTRL-F this thread to find a confused person asking “how is tailscale different than Nord?”
I used to host an Arma 3 server on Kubernetes, with a scalable set of headless clients to distribute the AI load. My friends said it was the smoothest server they ever played on, despite using hundreds of AI groups. With Tailscale I wouldn't have needed host networking enabled on the Pods, come to think of it.
The CPU-controlled squads of enemy soldiers and vehicles the players shoot. Arma is a first-person shooter game. The game engine it uses is not heavily multi-threaded, but the multiplayer system has some weird quirks that you can exploit to distribute AI processing across multiple networked instances, in either a multi-core or multi-machine topology.
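On the hostNetwork point above: Tailscale documents a sidecar pattern for pods, so a headless client could join the tailnet directly instead. A sketch (the game image and secret names are hypothetical; the env vars follow Tailscale's published container docs):

```
apiVersion: v1
kind: Pod
metadata:
  name: arma3-headless
spec:
  # No hostNetwork: true needed; peers reach the pod over the tailnet.
  containers:
    - name: arma3
      image: example/arma3-headless:latest   # hypothetical image
    - name: tailscale
      image: tailscale/tailscale:latest
      env:
        - name: TS_USERSPACE          # userspace networking, no NET_ADMIN
          value: "true"
        - name: TS_AUTHKEY            # pre-auth key stored in a Secret
          valueFrom:
            secretKeyRef:
              name: tailscale-auth
              key: TS_AUTHKEY
```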
I use a VPN (usually Tailscale, though I have the Proton subscription package that includes their VPN - mainly useful if for some reason my home internet is slow or out; otherwise I would just use TS) on all public WiFi. My work's remote access blocks logins from outside the US, so if I'm out of the country, my wife and I both need a VPN to be able to log in.
Interestingly, while my work's network blocks Tailscale's initial authentication, it doesn't actually block the traffic. I can authenticate my iPad via cell phone tethering or just before I leave the house and it will work when I connect to their network. It's a personal device without any access to their internal network, and I'm using the guest network, so I'm not compromising security to actual work devices. But when I'm stuck up there and I want to stream a movie from my NAS at home, I can.
I had some of my family install Tailscale to access my tailnet. They can watch movies from my collection more easily than using Netflix, and we can share files through the client with a single click. I have other friends using it to play old-school dedicated server games without having to deal with CGNAT/hairpin NAT problems.
Maybe if there was a mainstream reason to connect home machines with their phones: personal backup, game streaming, etc. I'm not in the camp of believing it, but maybe!
I do this - I self-host my movies/TV, ebooks, comics, photos, etc., and use Tailscale to access them from anywhere. It's not really for the "mainstream", but for a "tech enthusiast" it's very useful - basically anyone who would consider buying a NAS (most consumer NAS devices can also run Docker containers these days).
Maybe it's more enthusiast than layman, and I guess it's also not much of a market, but in the video arrrchival space it's pretty widespread, with people running e.g. Jellyfin behind Tailscale.
I think you nailed it. TS is great but is in a middle ground niche with more targeted alternatives squeezing it from both sides:
1. If you actually need strong security, you are likely to go with open source zero trust or their commercial versions.
2. If you don't need strong security, you will often view a VPN as an insurance policy (TS simplifies things but is still more difficult than 'do nothing').
So you end up with a relatively narrow band of use cases like NAT traversal, semi-privacy, and access to services hosted on private IPs. Enough to sustain a venture-funded company?
> I wonder if the average person will ever be willing to install a VPN app on all their devices.
Of course the average person will be willing to install a VPN app: all it takes is a bit of internet censorship, blocking access to their favourite services, and some geofencing, where services limit access to them based on IP address.
Just ask people from China, Russia, Ukraine, Turkey, UK, Germany, etc.
But what you're referring to as a "VPN app" is something very different than what the parent poster is referring to with respect to what Tailscale is.
When you use services like NordVPN, Mullvad, Surfshark, etc., you're just installing a VPN client, and you're basically using them as a proxy to hide your IP address (presenting it as coming from another country). That is the use case you are talking about.
Tailscale is very different. It is about setting up your own VPN so that you can access devices from your home or wherever from the Internet at large in a secure manner.
I pointed someone at Tailscale who, admittedly, has issues following complex computer setup instructions; they easily set up their own tailnet and use it to access home devices all the time, to the point that they barely remember it's there.
> But what you're referring to as a "VPN app" is something very different than what the parent poster is referring to with respect to what Tailscale is.
Does that matter? It still shows willingness to install.
I think it matters a lot because the use cases are so different.
Just look at the US - tons of people now install a VPN app like Nord or Mullvad to get around state-level porn blocks. In other countries it's to get around other types of censorship. And to install those apps on something like a phone or laptop is trivially easy.
The use case for installing Tailscale (I need a home network and I need to be able to access these devices from the Internet) is, I would guess, ~5% the size of the other VPN use case. I'm a software developer, and I don't need it.
Indeed. VPNs were originally created to allow secure remote connections to and between LANs. The whole privacy thing is a by-product, and they're not that great at it.
Tor exists and is far better at providing privacy.
Tor is far too easily blocked, and given that a great number of nodes are compromised, it is likely far worse at providing privacy than some self-hosted VPN on cheap hosting.
BeyondCorp was mainly created to advocate cloud services and to minimise the (legitimate) worry of CIOs about seeing all their data in services connected to the entire internet.
I still think the VPN has a good use case. It's a great extra layer of defense and also a nice way to grant access to devices at different locations.
I don't use Tailscale as it's too commercial for me, but I use another VPN mesh service. Of course, you still need to secure your endpoints properly.
We've got a Tailscale integration that takes care of the security concerns. Set policy to decide what can talk out to the Tailscale node and what the Tailscale gateway is granted access to. This is especially important when you can't run a Tailscale client on the devices you want to connect.
For prototyping web services I've been using https://tuns.sh, which runs a managed version of sish. What's great is there's nothing to install locally, since it uses SSH port forwarding to function.
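Usage is essentially plain SSH remote forwarding; something like this exposes a local dev server (a sketch only; the exact subdomain syntax and ports depend on the service's config, so check the tuns.sh docs):

```
# Forward the tunnel host's port 80 for "myapp" to a local server on :3000.
ssh -R myapp:80:localhost:3000 tuns.sh
```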
> set up a private network between all your ~~Google-signed-in~~ devices.
I've been doing something like this as a fun side project. The idea is to have everything pass through Piholes and to have both clear and VPN exit nodes. Then I can send some Pis to people and we can create an internal network to share things like files, movies, streaming services, whatever. It can also increase security, especially making things easier for people like my parents when I need to fix their computers; I can just block malware for them, to some degree at least. It's also been very useful for debugging stuff on my home network while I'm out somewhere else. And I can access any of my devices from anywhere. I'm out traveling? Still got all my movies and stuff.
One big issue is Apple, which doesn't seem to respect DNS and VPN settings, especially for local network access... The other aspect is that it makes some ssh automation annoying, because they keep changing things, such as the ability to get the name of the current SSID (wtf?!). So I can't just make a conditional in my config to go through TS instead of the local network based on that.
Though part of my gripe is just not having this in general. I may want to connect to a certain machine directly if I'm on the internal network, but if I'm external I want to do a proxy jump. The SSID is the most obvious and consistent way to determine this, at least to me. Anyone got another idea?
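One SSID-free possibility is to probe reachability from the ssh config itself (a sketch; all names and addresses are hypothetical, and the probe assumes an nc with -z support):

```
# ~/.ssh/config
Host mybox
    HostName 192.168.1.50

# If the LAN address doesn't answer on port 22 within a second,
# assume we're outside and jump in via a tailnet host instead.
Match host mybox exec "! nc -z -w 1 192.168.1.50 22"
    ProxyJump gateway.tailnet-name.ts.net
```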
[0]: https://github.com/anderspitman/awesome-tunneling
[1]: https://tailscale.com/blog/how-nat-traversal-works